Resource-Constrained Project Scheduling: An Evaluation of Adaptive Control Schemes for Parameterized Sampling Heuristics


Manuskripte aus den Instituten für Betriebswirtschaftslehre der Universität Kiel

No. 488

Resource-Constrained Project Scheduling: An Evaluation of Adaptive Control Schemes for Parameterized Sampling Heuristics

Andreas Schirmer

Original Version September 1998
Revised Version November 1999
Second Revised Version May 2000

Please do not copy, publish or distribute without permission of the author.

Dr. Andreas Schirmer
Deutsche Lufthansa AG, Informatiksysteme Personal, HAM PI
Weg beim Jäger 193, D-22335 Hamburg

Institut für Betriebswirtschaftslehre, Christian-Albrechts-Universität zu Kiel
Olshausenstr. 40, D-24118 Kiel

Abstract: For most computationally intractable problems there exists no simple heuristic which consistently outperforms all other heuristics. One remedy is to bundle simple heuristics into composite ones in a fixed and predetermined way (Barman 1997). Adaptive control schemes take this approach one step further by dynamically combining algorithms. Several such algorithms have been proposed recently in various settings, yet an experimental investigation comparing them to other contemporary methods has been lacking. We aim to close this gap by a comprehensive computational study in the field of resource-constrained project scheduling. We also show how to improve the effectiveness of the best algorithm by means of randomized sampling. Finally, we expose several advantages of adaptive control schemes over other algorithms which facilitate the OR practitioner's task of designing good algorithms for newly arising problems.

1. Introduction

There is no doubt that trying to find 'better' heuristics for a computationally intractable problem is intellectually appealing. For most such problems, though, researchers have come to believe that there is no such thing as a 'best' heuristic. If a heuristic is hailed as best, this usually means that it does well on a majority of instances but leaves a minority where others do better. For neighborhood search algorithms, analytical proof of this effect is provided by the no-free-lunch theorem (Wolpert, Macready 1995). For the most widely used form of fast scheduling heuristics, viz. priority rule-based construction methods, experimental results suggest that instance characteristics may bear significantly upon their relative ranking (Kolisch, Drexl 1996; Schirmer 2000). As a consequence, scheduling heuristics should enlist more than just one rule, else they cannot account for the specifics of particular instances. First steps in this direction were made with the design of composite (or combinatorial) rules (cf. Barman 1997 and the literature cited therein), which combine simple priority rules in a fixed and predetermined way; we refer to such algorithms as fixed control schemes. Adaptive control schemes, which dynamically combine algorithmic components as appropriate, constitute one possible next step. In essence, these aim for a 'learning' capability, i.e. the ability to extract good algorithms from an uninformed compilation of components by gathering - for each instance anew - knowledge of which combinations work well and which do not.

Although the very name 'adaptive control scheme' has, to the best of our knowledge, not been used before, such schemes can be found in the literature. We therefore begin by evaluating two recent schemes, a randomized one (Kimms 1996) and a local search-based one (Haase 1996; Haase et al. 1998). These or similar schemes have been used in various settings such as lotsizing and scheduling or course scheduling; still, a thorough experimental investigation comparing them to contemporary methods by means of standard benchmark instances has been lacking. Our work intends to close this gap by validating the algorithms on one of the best-studied scheduling problems, the resource-constrained project scheduling problem (RCPSP). We will also examine how the effectiveness of the latter scheme can be improved by randomization.

The RCPSP can be couched as follows: A single project consists of a number J of activities to be scheduled, using a number R of renewable resources with a per-period capacity of K_r. In each period of its nonpreemptable duration d_j, activity j requests k_jr units of resource r. A precedence order on the activities stipulates that some activities must be finished before others may be started. The goal is to find an assignment of periods to activities (a schedule) that covers all activities, ensures for each renewable resource r that in no period the total usage of r by all activities performed in that period exceeds the per-period availability of r, respects the partial order, and minimizes the total project length. This problem is notoriously intractable. Although we have also evaluated the algorithms on a substantially more general

problem, viz. the RCPSP under partially renewable resources (RCPSP/Π), we refer the reader to Schirmer (1999a; 1999b) for details, as the findings and conclusions are rather similar to those derived on the RCPSP.

The remainder of this work is structured as follows. In Section 2 we present the requisite algorithmic components, viz. scheduling scheme, simple and composite priority rules, random sampling schemes, and bounding rules. Section 3 is devoted to a taxonomy of control schemes, based upon which we describe the adaptive approaches to be explored. Section 4 details the experimentation carried out and analyzes the respective results. A summary in Section 5, along with some conclusions and suggested directions for future work, wraps up this work.

2. Components of Priority Rule-Based Algorithms

Priority rule-based methods consist of at least two components. A scheduling scheme determines how a schedule is constructed, building feasible full schedules (which cover all activities) by augmenting partial schedules (which cover only a proper subset of the activities) in a stage-wise manner. On each stage, the scheme determines all activities currently eligible for scheduling. In this, priority rules serve to resolve conflicts if several activities could be scheduled. In addition, such methods may be complemented by random sampling schemes, which improve effectiveness by increasing the solution space, and by bounding rules, which improve efficiency by eliminating regions of the solution space that cannot contain better schedules.

2.1. Scheduling Scheme

We restrict our attention to the serial scheduling scheme (SSS) for two reasons. First, when priority rules are applied deterministically, the SSS discriminates better between good and bad rules than its parallel counterpart. Second, we will later feed those composite rules identified as good to a random sampling scheme. Due to the negligible computation times required (on the order of milliseconds per iteration for the instances considered here), such algorithms are commonly run for larger iteration numbers (and will be run so in the second stage). As the SSS is dominant for larger samples, we want to find those rules which perform well under the SSS.

The SSS divides the set of activities into three disjoint subsets or states: scheduled, eligible, and ineligible. An activity that is already in the partial schedule is scheduled. Otherwise, an activity is called eligible if all its predecessors are scheduled, and ineligible otherwise. The scheme proceeds in N = J stages, indexed by n. For notational purposes, we refer on stage n to the set of scheduled activities as S_n and to the set of eligible activities as decision set D_n. Let also P_j (2 ≤ j ≤ J) denote the set of all immediate predecessors of activity j w.r.t. the precedence order. D_n is determined dynamically from

D_n := { j | j ∉ S_n ∧ P_j ⊆ S_n }    (1 ≤ n ≤ N)    (1)

On each stage n, one activity j from D_n is selected - using a priority rule if more than one activity is eligible - and scheduled to begin at its earliest feasible start time. Then j is moved from D_n to S_n, which may render some ineligible activities eligible if now all their predecessors are scheduled. The scheme terminates on stage N when all activities are scheduled. The SSS has been used in numerous studies (for a survey cf. Kolisch 1996b). Note that for each feasible RCPSP-instance the SSS probes the set of active schedules (Kolisch 1996b), which always contains at least one optimal schedule.

2.2. Simple Priority Rules

We briefly introduce the priority rules to be used. Let for each activity j (1 ≤ j ≤ J) denote LST_j (LFT_j) the latest start (finish) time (Kelley 1963) and EFST_j the dynamically updated earliest feasible start time w.r.t. all constraints. Using this notation, the rules used can be defined as done in Table 1, where also a classification in terms of several straightforward criteria is given. The well-known first four of these rules were selected because in several studies on the RCPSP they were found to hold particular promise (Davis, Patterson 1975; Boctor 1990; Alvarez-Valdés, Tamarit 1989; Ulusoy, Özdamar 1989; Kolisch 1996a). The latter four were found to perform rather poorly; they were deliberately included for reasons about to unfold later.

Extremum  Measure  Definition                              Static vs. Dynamic  Local vs. Global
MIN       SLK_j    LST_j - EFST_j                          D                   L
MIN       LST_j    LFT_j - d_j                             S                   G
MIN       LFT_j    LFT_j                                   S                   L
MAX       MTS_j    |{j' | j ≺ j'}|                         S                   L
MIN       SPT_j    d_j                                     S                   L
MIN       DRD_j    Σ_r k_jr / max {k_j'r | j' ∈ D_n}       D                   G
MIN       TRD_j    Σ_r k_jr                                S                   L
MIN       TRS_j    Σ_r k_jr / K_r                          S                   L

Table 1: Simple Priority Rules - Definition and Classification
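To make the rule definitions of Table 1 concrete, the following minimal Python sketch evaluates a few of them for the activities of a decision set; the data layout (plain dictionaries keyed by activity and resource indices) is an assumption for illustration, not the paper's original Pascal implementation.

```python
# Illustrative only: LST, EFST, requests k, capacities K and the successor sets
# are assumed to be precomputed and stored in plain dictionaries.
def slk(j, LST, EFST):
    """SLK: slack LST_j - EFST_j (dynamic, local; to be minimized)."""
    return LST[j] - EFST[j]

def mts(j, successors):
    """MTS: number of successors of j (static, local; to be maximized)."""
    return len(successors[j])

def trs(j, k, K):
    """TRS: total relative resource usage, sum_r k_jr / K_r (static, local; to be minimized)."""
    return sum(k[j][r] / K[r] for r in K)

def select_by_slk(D_n, LST, EFST):
    """Deterministic selection under a single rule: the eligible activity with the
    smallest SLK value; ties are broken by the smaller activity index."""
    return min(D_n, key=lambda j: (slk(j, LST, EFST), j))
```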

2.3. Parameterized Composite Priority Rules

All above rules are simple ones (Kolisch 1995, p. 86). As we aim to minimize the project makespan, it is straightforward that certain (precedence-based) rules perform better on less capacitated instances while other (resource-based) ones do so on instances with scarce resources. Also, some rules may accord similar or even identical priority values to several candidate activities, especially so when scheduling large projects; therefore the discriminative potential of simple rules may be rather limited in certain situations.

This insight motivates the concept of composite rules, which are made up from several simple priority rules. They combine several numerical measures, each reflecting information about the desirability of selecting a specific candidate, into one priority value. Note that experimental results suggest good composite rules need not always be made up from good simple rules: Ulusoy, Özdamar (1989) report that, although the rules MAX MIS (most immediate successors) and MAX TRS perform poorly as individuals, their weighted combination WRUP does rather well. So an appropriately balanced combination of different priority rules may do better than each of its parts.

Let I denote the number of priority rules. Let also for each activity j ∈ D_n denote v_i(j) the priority accorded by rule i, and let finally for each rule i be α_i ∈ [-1, 1] an (exponential) control parameter or weight assigned to rule i. Now, we can define a composite rule in general as

v(j) := Σ_{i=1..I} v_i(j)^α_i    [extr = max]    (j ∈ D_n)    (2)

Obviously, for I = 1 this definition includes the special case of a simple rule. For each priority rule i, the corresponding control parameter α_i allows one to vary the influence of the rule as well as whether candidates with high or with low priority values are to be preferred. On one hand, α_i ∈ (0, 1] implies extr = max, and increasing values tend to pronounce the differences between the candidate activities; on the other hand, α_i ∈ [-1, 0) implies extr = min, and increasing values tend to reduce the differences between the activities. For α_i = 0, all candidates receive the same priority value. In our implementation, ties are broken by activity index j.

Note that combining rules with different domains may overemphasize the influence of some rules. Suppose for instance that one rule draws priorities from {0,...,1} while another draws from {1,...,100}; a composite rule using these with identical weights will essentially behave like the latter. This imbalance of scales can either be adjusted explicitly by normalizing or scaling the priorities or implicitly by adjusting the weights (Whitehouse, Brown 1979). Following the former approach, we counter this effect by scaling the priority values to the interval [0,1] by

v'_i(j) := (max {v_i(j') | j' ∈ D_n} - v_i(j)) / max {v_i(j') | j' ∈ D_n}   if extr_i = min
v'_i(j) := v_i(j) / max {v_i(j') | j' ∈ D_n}                               if extr_i = max
    (1 ≤ i ≤ I; j ∈ D_n)    (3)

If the denominator is zero, the priorities remain unchanged. Note that formula (3) transforms the priority values of all rules i with extr_i = min to values with extr_i = max, so it suffices to let α_i ∈ [0,1]. We thus compose the individual priority values only after having scaled them, i.e.

v(j) := Σ_{i=1..I} v'_i(j)^α_i    [extr = max]    (j ∈ D_n)    (4)

If a value of v'_i(j) is zero, then v'_i(j)^α_i is set to zero. We have also tried a multiplicative variant (cf. Eq. 5) of composing the individual priorities; yet it yielded noticeably worse results, so we use the exponential one in the sequel.

v(j) := Σ_{i=1..I} α_i · v'_i(j)    [extr = max]    (j ∈ D_n)    (5)

Note that, given a scheduling scheme and a set of I priority rules, each control parameter vector α = (α_1,...,α_I) fully specifies one scheduling algorithm. Since each algorithm can be conceived as a mapping from the problem to the solution space, it is straightforward to regard any α-vector as encoding a specific solution of the instance tackled. Hence, for any algorithm employing composite rules, the parameter space [0,1]^I can be said to represent that subspace of the solution space that is sampled by the algorithm.
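A brief Python sketch may help to illustrate the scaled composite rule; it follows the reading of Eqs. (3) and (4) reconstructed above (scaling by the maximum raw priority, then summing the α_i-th powers) and is illustrative rather than the original implementation.

```python
def scale(raw, extr):
    """Eq. (3): scale raw priorities to [0,1]; min-rules are turned into max-rules.
    If the denominator (the maximum raw value) is zero, the values are left unchanged."""
    vmax = max(raw.values())
    if vmax == 0:
        return dict(raw)
    if extr == "min":
        return {j: (vmax - v) / vmax for j, v in raw.items()}
    return {j: v / vmax for j, v in raw.items()}

def composite_priorities(D_n, rules, alpha):
    """Eq. (4): v(j) = sum_i v'_i(j)^alpha_i, to be maximized.
    'rules' is a list of (rule_function, extr) pairs; rule_function(j) returns v_i(j)."""
    scaled = [scale({j: f(j) for j in D_n}, extr) for f, extr in rules]
    return {j: sum(0.0 if s[j] == 0 else s[j] ** a for s, a in zip(scaled, alpha))
            for j in D_n}

def select_deterministic(D_n, rules, alpha):
    """Pick the candidate with the highest composite priority; ties go to the smaller index."""
    v = composite_priorities(D_n, rules, alpha)
    return max(D_n, key=lambda j: (v[j], -j))
```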

2.4. Random Sampling Schemes

When some means of randomization have been incorporated into an algorithm, repeated application of the randomized heuristic will produce a set of solutions rather than one single solution. Usually some of these solutions will be better than the one found with the deterministic version of the same method (Cooper 1976; Drexl, Grünewald 1993; Laguna et al. 1994). In scheduling algorithms, randomization is mostly fit into the procedure selecting between several alternative candidates; it relies upon measures of attractiveness, i.e. priority values, that are mapped into selection probabilities. Thus, a randomized method chooses among the available candidates according to probability values which are biased to favour apparently attractive selections. A biased random sampling scheme consists of a monotone mapping p: D_n → [0,1] assigning a probability value p(j) to each candidate in the decision set by transforming the priority value of each candidate into a probability value; w.l.o.g. we require that

Σ_{j ∈ D_n} p(j) = 1    (1 ≤ n ≤ N)    (6)

To formalize the randomized selection once the probabilities are calculated, let Pr(D_n) = (p(π(1)),...,p(π(|D_n|))) denote the sequence of probability values of all candidates in a decision set D_n; w.l.o.g. we assume Pr(D_n) to be ordered by ascending activity index. Also, let ζ ∈ [0,1] denote a random number. Then, the activity j* to be selected is determined as

j* := min { j ∈ D_n | j = π(k) ∧ ζ ≤ Σ_{i=1..k} p(π(i)) }    (7)

One scheme used here is the regret-based biased random sampling (RBRS) one (Drexl, Grünewald 1993). The regret value of a candidate j measures the worst-case consequence that might possibly result from selecting another candidate. Let V(D_n) denote the set of priority values of all candidates in a decision set D_n. Then, the regrets are computed and modified by

v'(j) := max V(D_n) - v(j)   iff extr = min
v'(j) := v(j) - min V(D_n)   iff extr = max
    (j ∈ D_n)    (8)

v''(j) := (v'(j) + ε)^µ    (j ∈ D_n)    (9)

where ε ∈ R_{>0} and µ ∈ R_{≥0}. Now the selection probabilities are derived from

p(j) := v''(j) / Σ_{j' ∈ D_n} v''(j')    (j ∈ D_n)    (10)

ε guarantees v''(j) to be nonzero; otherwise candidates with priorities of zero could never be selected, an undesirable consequence in the presence of scarce resources. µ makes it possible to diminish or amplify the differences between the modified priorities for µ < 1 or µ > 1, respectively. Note that the selection process becomes ever more random for µ → 0 and ever more deterministic for µ → ∞.

The second scheme, the modified regret-based biased random sampling one (MRBRS), is a modification of the above, in that it also computes regrets from the original priorities by (8) and modifies them by (9). However, rather than using a constant value of ε, ε is determined dynamically. Letting V'(D_n)⁺ denote the set of all positive transformed priorities of the candidates in D_n, i.e.

V'(D_n)⁺ := { v'(j) | j ∈ D_n ∧ v'(j) > 0 }    (11)

ε can be derived from

ε := min V'(D_n)⁺ / δ   iff (∃ j ∈ D_n) v'(j) > 0
ε := 1                  otherwise
    (12)

where δ is a positive integer. Selection probabilities are again derived from (10). Thus, the above actually defines a family MRBRS/δ of sampling schemes which have been found to be particularly effective in conjunction with the SSS.

2.5. Bounding Rules

While bounding rules are mostly used in conjunction with exact methods, they can also speed up heuristics. Simple sampling methods cannot utilize experience from earlier iterations, so each iteration virtually starts from scratch. In the worst case, the optimum is found in the first iteration, so all subsequent iterations are wasted. Bounds can avoid this waste by restricting the solution space searched. Here, we use two classical bounds, viz. a precedence-based optimality one and a time window one. Resource-based bounds were not included, as the additional effort exceeds the savings by far.

General precedence-based lower bound (GPLB)

This bound is applicable whenever a feasible full schedule has been found. It is based upon relaxing the resource constraints, so a lower bound on the makespan is determined as the length of a critical path in the project. Whenever

FT_J = EFT_J    (13)

holds, the whole run can be terminated. In this case, the heuristically derived upper bound FT_J equals the precedence-based lower bound EFT_J, so the schedule at hand is optimal.

Serial time window bound (STWB)

The idea is to decrease the latest start times of all unscheduled activities whenever an iteration finds an improved upper bound on the makespan. Let FT*_J denote the makespan of the incumbent best solution. Whenever a feasible solution with a shorter makespan FT_J < FT*_J has been found, all LST-values can be updated by

LST_j := LST_j - FT*_J + FT_J - 1    (1 ≤ j ≤ J)    (14)

Now, the current iteration can be terminated whenever a selected activity j* would have to start outside its updated time window, i.e.

EFST_j* > LST_j*    (15)

as the incumbent partial schedule could not be completed to a full schedule better than the best known one. Note that by construction of the STWB any feasible schedule will be better than the previous one, since its makespan will at most equal the makespan of that one, less one. We do not detail results for the algorithms presented with or without bounding rules, because the general suitability of bounding rules for scheduling heuristics has been analyzed in depth in several studies (Kolisch 1995; Schirmer, Riesenberg 1998).
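To show how the sampling components of Section 2.4 fit together, here is a minimal Python sketch of one MRBRS/δ selection; it follows Eqs. (7)-(10) and (12) as reconstructed above and is meant as an illustration, not as the original implementation.

```python
import random

def mrbrs_select(D_n, v, extr="max", mu=1.0, delta=10, rng=random):
    """One MRBRS/delta selection from decision set D_n; v maps each candidate j
    to its (composite) priority value v(j)."""
    vmax, vmin = max(v[j] for j in D_n), min(v[j] for j in D_n)
    # Regrets, Eq. (8): turn priorities into 'the larger the better' values.
    regret = {j: (vmax - v[j]) if extr == "min" else (v[j] - vmin) for j in D_n}
    # Dynamic epsilon, Eq. (12): smallest positive regret divided by delta (1 if none is positive).
    positive = [r for r in regret.values() if r > 0]
    eps = min(positive) / delta if positive else 1.0
    # Modified regrets, Eq. (9), and selection probabilities, Eq. (10).
    w = {j: (regret[j] + eps) ** mu for j in D_n}
    total = sum(w.values())
    # Cumulative selection, Eq. (7): candidates taken in ascending activity index.
    zeta, cum = rng.random(), 0.0
    for j in sorted(D_n):
        cum += w[j] / total
        if zeta <= cum:
            return j
    return max(D_n)  # numerical safeguard against rounding
```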

3. Control Schemes

Parameterized algorithms possess (one or more) control parameters which allow one to direct the way in which they proceed. While these are often understood as numerical parameters only, we adopt a broader view and allow also the choice of certain algorithmic components such as priority rules, scheduling or sampling schemes. To algorithmic schemes which govern the instantiation of the control parameters we refer as control schemes. Although this very name has, to our knowledge, not been used before, we maintain that some kind of control scheme is always present whenever parameterized methods are used. In the sequel, we propose a taxonomy of control schemes, distinguishing between classical and adaptive forms.

3.1. Classical Control Schemes

The simplest control schemes might be called fixed control schemes (FCS), as the instantiations of the control parameters are fixed in advance, regardless of the instances tackled. For example, a FCS could lay down that some numeric parameter always be set to some fixed constant or to increasingly larger values. These precepts will reflect the results of previous experimentation, having been found to produce the best results over all test instances. FCS have been used e.g. by Kolisch, Drexl (1996), Schirmer, Riesenberg (1997), and Böttcher et al. (1999).

Class-based control schemes (CCS) are motivated by the observation that, as the prescriptions of FCS apply to all instances alike, they are unable to account for the specifics of particular instances. This observation has already been made, among others, by Davis, Patterson (1975). Thus CCS instantiate the control parameters depending on some characteristics of the instances attempted: They use a partition of the problem's instances into equivalence classes and assign specific parameter instantiations to each class. Kolisch, Drexl (1996) propose a control scheme which selects the scheduling scheme to be applied according to two quantities: the number of iterations to be performed and the resource strength of the instance. Other examples of CCS can be found in Kimms (1998) and Schirmer, Riesenberg (1998). Also CCS rely upon extensive experimentation to develop sufficient insight into the effect of the parameters on the algorithms' performance.

3.2. Adaptive Control Schemes

Motivation and Concept

Even the recommendations of CCS are based upon the best average performance observed over all test instances; rarely will the resulting values be the most appropriate ones for each and every possible instance of the problem at hand. To demonstrate this, we run the above simple rules deterministically under the SSS, applying them to the KSD-instance set J30 of the RCPSP (for details we refer the reader to Section 4.1.1 below). The relevant results are given in Table

2. Note that for each instance the best heuristically found solution is used as a reference point, as we intend to expose the occurrence rather than the degree of mutual dominance. The column "Best" reports the number of instances for which a rule did find the best heuristic solution - an honor it may share with other rules - whereas the column "Best Alone" refers to those instances where the rule was the only one to do so. For comparison, we also include the numbers of instances solved to optimality. While the best rule ranked best on only 68% of the instances, even the worst rules ranked best on 7% of the instances, and on some of these they even were the only such rules. It stands to reason that this kind of result extends to problems other than the RCPSP.

Rule Instances Solved Optimally Best Best Alone SLK LST LFT MTS SPT DRD 2 24 TRD TRS

Table 2: On the Mutual Dominance of Priority Rules

We therefore advocate selecting such values by some more flexible device, capable of exploiting the outcome of previous instantiations to steer the algorithm towards better performance. As they instantiate the control parameters based on the outcome of past selections, we refer to such schemes as adaptive control schemes because, in the words of Glover, Laguna (1997), the instantiation "chosen at any iteration [...] is a function of those previously chosen. That is, the method is adaptive in the sense of updating relevant information from iteration to iteration". In contrast, we might say that class-based as well as fixed control schemes are adapted rather than adaptive in nature. By saying this we mean that the former are (hopefully) tailored to the specifics of the problem in general while the latter incorporate some capability of adapting the values of control parameters to the specifics of a given instance (cf. Table 3). Throughout the literature that we are aware of, control schemes which are actually adaptive in this sense are scarce. Bölte, Thonemann (1996) apply a genetic algorithm-based control scheme to identify good cooling schemes for a simulated annealing method. Kraay, Harker (1996) discuss a number of related approaches for repetitive combinatorial optimization problems.

Control Scheme   Fixed in Advance   Fixed for all Instances
Fixed            yes                yes
Class-Based      yes                no
Adaptive         no                 no

Table 3: Classification of Control Schemes

Randomized Cooling-Based Control

To the first approach we refer as a randomized cooling control scheme (RCCS). It is an iterative control scheme which determines the values for each parameter as a function of three quantities: the so far best-performing value of that parameter, a random value, and a measure indicating how successful the previous value was. The scheme terminates after a specified number Z of iterations. The attraction of this approach lies in the ease with which an - albeit restricted - learning capability can be added to a parameterized algorithmic scheme.

In every iteration z (2 ≤ z ≤ Z) and for each α_i, let α_i^r denote a random value from its domain and α*_{i,z-1} that value of α_i which produced the best solution in the z-1 iterations performed so far. Then the value α_{iz} of α_i in iteration z is determined as a convex combination of these two values, i.e.

α_{iz} := (1 - β_{iz}) · α*_{i,z-1} + β_{iz} · α_i^r    (1 ≤ i ≤ I; 2 ≤ z ≤ Z)    (16)

where β_{iz} ∈ [0,1]. Clearly, at the endpoints of the one-dimensional parameter space of β_{iz}, we select purely random values α_i^r if β_{iz} = 1 and locally optimal values α*_{iz} if β_{iz} = 0. Initially, of course, we set α_{i1} := α_i^r by setting β_{i1} := 1. The construction of this combination intends to eventually steer the process towards a steady state, i.e. to make it converge for Z → ∞ to a local optimum. Thus, β is kept constant as long as this continues to produce better solutions; otherwise β is decreased in order to favor the selection of values close to the so far best-performing ones. Essentially, this is done by counting the number of iterations which found a solution bettering the incumbent best one, which we denote by Ψ, and taking its reciprocal. Now, using FT*_{J,z} to denote the best project makespan constructed in all z iterations and setting β_{i1} := 1, β_{iz} can be calculated from

β_{iz} := 1 / (γ · Ψ)   iff FT*_{J,z-1} = FT*_{J,z-2} ∧ γ · Ψ ≥ 1
β_{iz} := β_{i,z-1}     otherwise
    (1 ≤ i ≤ I; 3 ≤ z ≤ Z)    (17)

where γ ∈ R_{>0}. Since the quotient in (17) decreases monotonically, the β_{iz} converge to zero, allowing the α_{iz} to monotonically move closer to some local optimum α*_z. γ allows one to adjust the rate of convergence: 0 < γ < 1 implies a slower, γ > 1 a faster convergence than the unadjusted rate 1/Ψ. The condition γ · Ψ ≥ 1 ensures that α_{iz} ∈ [0,1]. The reference to cooling in the

naming of this control scheme is motivated, of course, by its kinship to schemes used to control the performance of simulated annealing algorithms (Kirkpatrick et al. 1983). (A commonly used cooling scheme for simulated annealing algorithms is α_z := β · α_{z-1} (2 ≤ z ≤ Z) where β < 1; cp. e.g. Kirkpatrick et al. 1983; Wilhelm, Ward 1987.)

Let us clarify here that we need to distinguish between two kinds of control parameters, those of the solution method (here: the α_i of the composite rule-based scheduling approach), and those of the control scheme identifying appropriate values for the former ones (here: the γ of the RCCS). For the sake of distinction, we refer to the latter ones as control scheme parameters. While replacing one kind of control parameter by another might seem a futile exercise, we hasten to add that a meaningful control scheme will of course use fewer parameters than the actual solution method; e.g., the RCCS requires only one parameter to control the instantiation of an arbitrary number I of parameters. Good control schemes, thus, will aim to free the user from managing a multitude of parameters while allowing him some kind of high-level or aggregate control via one or two control scheme parameters, if at all.

We illustrate the scheme for I = 2 priority rules in Figure 1, after 5, 10, and 20 iterations. Each small symbol represents an evaluated α-vector and thus a solution for the instance at hand; a distinguished symbol marks the respective best vector, in whose vicinity the search for good vectors (and solutions) is intensified. A formal description of the scheme is given in the Appendix.

[Figure 1: Exemplary Application of Randomized Cooling-Based Control Scheme - three panels of the (α_1, α_2) parameter space after 5, 10, and 20 iterations.]
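A compact Python sketch of the RCCS may help to fix ideas; it follows Eqs. (16) and (17) as reconstructed above, with evaluate(α) standing in for one pass of the serial scheduling scheme under the composite rule. Function names and the exact bookkeeping are illustrative assumptions, not the original Pascal implementation.

```python
import random

def rccs(evaluate, I, Z, gamma=0.25, rng=random):
    """Randomized cooling-based control scheme, Eqs. (16)-(17).
    evaluate(alpha) builds one schedule with weight vector alpha and returns its makespan."""
    alpha = [rng.random() for _ in range(I)]   # beta_{i,1} = 1: purely random start
    best_alpha, best_ft = list(alpha), evaluate(alpha)
    beta, psi = 1.0, 1                         # psi counts iterations improving the incumbent
    for z in range(2, Z + 1):
        # Eq. (16): convex combination of the best value found so far and a random value.
        alpha = [(1 - beta) * b + beta * rng.random() for b in best_alpha]
        ft = evaluate(alpha)
        if ft < best_ft:                       # improvement: keep beta unchanged
            best_alpha, best_ft, psi = list(alpha), ft, psi + 1
        elif gamma * psi >= 1:                 # Eq. (17): no improvement, cool beta towards zero
            beta = 1.0 / (gamma * psi)
    return best_alpha, best_ft
```

With evaluate wired to the SSS and the composite rule of Eq. (4), the settings γ = 0.25 and Z = 100 correspond to those reported in Section 4.2.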

Local Search-Based Control

The second approach we consider is a local search control scheme (LSCS), which systematically applies and appraises different control parameter values (Haase 1996; Haase et al. 1998) to identify promising ones, hopefully guiding the computational effort to promising regions of the parameter and thus the solution space. Following the classification of local search procedures used in Leon, Ramamoorthy (1997), the scheme can be characterized as a steepest-descent search method. Recall from above that each parameter vector α encodes one particular solution. In each iteration, the neighborhood comprises a fixed number of such vectors. From each of these one solution is constructed, which is evaluated in terms of its objective function value. If the best so-found solution is better than the incumbent best one, the search process is intensified in the surrounding region by focussing on the neighborhood of the corresponding vector; otherwise, the procedure terminates.

The method spans an I-dimensional grid over the control parameter space by defining a set of equidistant points. In each iteration the grid, which initially embraces the entire space [0,1]^I, is refined to ever smaller subspaces. The number of gridpoints, which remains constant, is determined by an integral control parameter, the grid granularity σ, in the following way. Let for each iteration denote Lα_i and Uα_i the lower and upper bound of the parameter subspace of rule i. (As the iteration index is not necessary in this section, we shall suppress it for brevity of presentation.) Then for each iteration the so-called grid width Δ_i of rule i is defined as

Δ_i := (Uα_i - Lα_i) / σ    (1 ≤ i ≤ I)    (18)

The initial iteration of the procedure starts off with α_i = 0, Lα_i = 0, and Uα_i = 1 for each rule i and determines a solution. It then increments α_1 by its grid width Δ_1 and constructs another solution, and so forth, until eventually α_1 = Uα_1. Then α_1 is reset to Lα_1, and α_2 is incremented by Δ_2. When also α_2 has reached its upper bound, it is reset to its lower bound while α_3 is incremented by Δ_3, and so forth, until eventually all α_i equal their upper bound. Thus, the number of gridpoints considered per iteration is (σ+1)^I; for instance, σ = 3 and I = 2 yield 16 gridpoints. For each gridpoint, i.e. parameter vector, one solution is constructed. If the iteration fails to improve the incumbent best solution, the algorithm halts. (To restrict computation times, the algorithm could also terminate after some time limit; however, this approach is not pursued here, as computation times are modest for the test instances attempted.) Otherwise, let α*_i denote that weight of rule i which produced the best solution in that iteration; ties are broken by taking the first such weight. Now, new bounds are calculated from

Lα_i := max {0, α*_i - Δ_i · (σ-1) / σ}    (1 ≤ i ≤ I)    (19)

Uα_i := min {1, α*_i + Δ_i · (σ-1) / σ}    (1 ≤ i ≤ I)    (20)

the grid widths Δ_i of all rules are updated according to (18), and the next iteration starts.

We illustrate the procedure for I = 2 priority rules and a grid granularity of σ = 3. Figure 2 depicts the first three iterations. Each evaluated parameter value vector is represented by a grid intersection, so the shaded area shows the parameter subspace spanned by the grid searched in one iteration. Again, a distinguished symbol marks the best control parameter vector found in that

iteration, in whose vicinity the search for good vectors (and solutions) is intensified. Again, a formal description is given in the Appendix.

[Figure 2: Exemplary Application of Local Search-Based Control Scheme - three panels of the (α_1, α_2) parameter space showing the successively refined grids of the first three iterations.]

Randomized Local Search-Based Control

It is well known that the effectiveness of construction methods can be increased by augmenting them with a good randomization scheme. This strategy has been very successful for simple priority rule-based methods for the RCPSP, improving e.g. the effectiveness of the rule LST from 6.56% when applied deterministically to 1.89% when applied in a randomized manner (Schirmer, Riesenberg 1997). We will therefore follow up on the question of whether similar improvements can be reaped for adaptive control schemes. For this purpose, we added a randomization stage to the LSCS, using the MRBRS as sampling scheme. We refer to the so-arising scheme as RLSCS.

(The reader might wonder why this was done for the LSCS only. Two reasons were involved. First, the RCCS already holds a random component, i.e. the way in which the composite rules are determined. A sampling scheme constitutes a different approach to introducing randomization into an algorithm, as it affects not the rule composition but the way in which the composite rules are used. Clearly, there are many ways to "skin the randomization cat". We believe that employing both approaches within the same algorithm would produce just too much random noise in the results. Second, without giving away too much from the results to be presented below, let us hint that the RCCS was found to perform rather disappointingly, so adding a sampling scheme was considered not really worthwhile.)
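The grid-refinement logic of Eqs. (18)-(20) can be condensed into a short Python sketch; as before, evaluate(α) is assumed to build one schedule and return its makespan, and the sketch is an illustration of the description above rather than the original implementation.

```python
from itertools import product

def lscs(evaluate, I, sigma):
    """Local search-based control scheme: steepest-descent search on an (sigma+1)^I grid
    over the parameter space [0,1]^I, repeatedly refined around the best vector found."""
    low, up = [0.0] * I, [1.0] * I
    best_alpha, best_ft = None, float("inf")
    while True:
        width = [(up[i] - low[i]) / sigma for i in range(I)]              # Eq. (18)
        improved = False
        # Enumerate all (sigma+1)^I gridpoints of the current subspace.
        for steps in product(range(sigma + 1), repeat=I):
            alpha = [low[i] + steps[i] * width[i] for i in range(I)]
            ft = evaluate(alpha)
            if ft < best_ft:                                              # ties: first weight kept
                best_alpha, best_ft, improved = alpha, ft, True
        if not improved:                                                  # no improvement: halt
            return best_alpha, best_ft
        # Eqs. (19)-(20): shrink the subspace around the best vector of this iteration.
        low = [max(0.0, best_alpha[i] - width[i] * (sigma - 1) / sigma) for i in range(I)]
        up = [min(1.0, best_alpha[i] + width[i] * (sigma - 1) / sigma) for i in range(I)]
```

The RLSCS would then add a second stage on top of this search, drawing further schedules with the MRBRS selection of Section 2.4; the precise coupling of the two stages follows the description in the text.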

4. Experimental Analysis

4.1. Experimental Setting

4.1.1. Test Instances and Reference Solutions

As a test bed, we used the KSD-instance set J30 generated with ProGen (Kolisch, Sprecher 1997). Each instance comprises four renewable resources and 30 non-dummy activities, having a nonpreemptable duration of between one and ten periods and between one and three successors and predecessors each. Systematically varied design parameters for these instances are network complexity (NC), resource factor (RF), and resource strength (RS). NC is defined as the average number of non-redundant arcs per activity, RF determines the number of resources that are requested by each activity, and RS expresses resource scarcity measured between minimum and maximum demand. A more comprehensive characterization is given in Kolisch et al. (1995). The respective parameter levels used are NC ∈ {1.5, 1.8, 2.1}, RF ∈ {0.25, 0.5, 0.75, 1.0}, and RS ∈ {0.2, 0.5, 0.7, 1.0}. For each instance cluster defined by a combination of these parameters, J30 contains ten instances, for a total of 480 instances. Note that those 120 instances where RS = 1.0 are trivially solvable by the earliest start time schedule. The optimal reference solutions were computed with the exact algorithm of Demeulemeester, Herroelen (1992, 1997).

4.1.2. Priority Rule Sets and Scheduling Algorithm

Particular care has to be taken in defining the rule sets to be used. In order to assess the control schemes' ability to cope with sets of mostly good or bad rules as well as with large and small sets, we defined several such sets. Our classification of rules as good or bad draws on several studies (e.g. Boctor 1990; Valls et al. 1992; Kolisch 1996b), measuring effectiveness in terms of average deviation from optimum when applying them deterministically under the SSS; exemplary results taken from Schirmer, Riesenberg (1997) are shown in Table 4.

Good Rules: SLK 5.58%, LST 6.56%, LFT 7.44%, MTS 8.74%
Bad Rules:  SPT 22.80%, DRD 23.3%, TRD 23.38%, TRS 24.0%

Table 4: Effectiveness and Classification of Priority Rules

Using this classification, we defined twelve rule sets which are blended as shown in Table 5.

Rule Set   I   Characterization        Priority Rules
1          8   4 good, 4 bad rules     SLK, LST, LFT, MTS, SPT, DRD, TRD, TRS
2          6   3 good, 3 bad rules     SLK, LST, LFT, DRD, TRD, TRS
3          4   2 good, 2 bad rules     SLK, LST, TRD, TRS
4          2   1 good, 1 bad rule      SLK, TRS
5          4   3 good, 1 bad rule      SLK, LST, LFT, TRS
6          4   1 good, 3 bad rules     SLK, DRD, TRD, TRS
7          4   4 good rules            SLK, LST, LFT, MTS
8          3   3 good rules            SLK, LST, LFT
9          2   2 good rules            SLK, LST
10         4   4 bad rules             SPT, DRD, TRD, TRS
11         3   3 bad rules             DRD, TRD, TRS
12         2   2 bad rules             TRD, TRS

Table 5: Priority Rule Sets

In the remainder of this work we will, among other factors, analyze the effect of two factors, viz. the number of priority rules I and the proportion of good to bad rules, expressed in terms of the percentage Θ of good rules. We refer to these factors as rule set cardinality and rule set composition. Note that we can vary the first factor while holding the second one constant by regarding rule sets 7, 8, and 9 for the case of Θ = 100%; sets 1, 2, 3, and 4 for Θ = 50%; and sets 10, 11, and 12 for Θ = 0%. Vice versa, to vary the second factor while keeping the first one fixed we merely need to consider rule sets 7, 5, 3, 6, and 10 for the case of I = 4 and rule sets 9, 4, and 12 for the case of I = 2. As instantiation of SchedulingAlgorithm, we use the serial scheduling scheme outlined earlier.

4.1.3. Performance Measures

Specifying a set of values for each factor describes over which levels that factor is varied during an experiment, so one value for each factor determines a run of an experiment. We regard, for each instance and algorithm, the outcome of each run in terms of both effectiveness and efficiency. Effectiveness is determined for the best schedule found in a run, measuring its deviation from the optimum as a percentage. Efficiency usually captures the CPU-time required for the run, measured in terms of seconds; where appropriate, we also report the number of schedules generated. Measurements were taken using an implementation in Pascal, running on a Pentium 266 personal computer with 32 MB RAM under Windows.

4.1.4. Experimental Design

In addition to the effect of the adaptive control schemes, we examine that of the rule set cardinality I and the rule set composition Θ. Further analyses, such as on the effect of the control

scheme parameters, can be found in Schirmer (1999a). Both bounding rules described earlier were used throughout all experiments conducted.

4.2. Randomized Cooling-Based Control

To demonstrate the effect that the number of rules in a set exerts on the effectiveness, we present the average deviations of the respective rule sets in Table 6. The results were obtained from setting γ = 0.25 and Z = 100. Note that, for the sake of compactness, the results of the other algorithms are also included; we will explain and discuss these later. For most rule sets it turns out that increasing (decreasing) the number of rules while keeping the proportion of good to bad rules fixed produces better (worse) results, since the pool from which an appropriate rule can be selected for each instance is larger. The only exception to this finding is set 7; currently we have no explanation for this aberration.

Rule Set   I   Θ      RCCS     LSCS     RLSCS
7          4   100%   3.9%     5.23%    2.95%
8          3   100%   2.9%     2.35%    1.99%
9          2   100%   3.39%    2.9%     2.32%
1          8   50%    3.62%    4.25%    2.74%
2          6   50%    5.26%    3.86%    2.75%
3          4   50%    5.74%    3.40%    2.55%
4          2   50%    6.88%    6.33%    3.39%
10         4   0%     7.48%    9.53%    3.53%
11         3   0%     8.86%    10.99%   3.64%
12         2   0%     10.85%   11.23%   3.68%

Table 6: Effect of Rule Set Cardinality

Similar to the above, we arrange in Table 7 the results pertaining to the influence of the rule set composition, given a fixed set cardinality. Again, we used settings of γ = 0.25 and Z = 100. Unsurprisingly, we find that increasing (decreasing) the proportion of good rules in a constant number of rules produces better (worse) results, since the number of good rules from which an appropriate rule can be selected for each instance is larger.

Rule Set   I   Θ      RCCS     LSCS     RLSCS
7          4   100%   3.9%     5.23%    2.95%
5          4   75%    4.35%    2.68%    2.11%
3          4   50%    5.74%    3.40%    2.55%
6          4   25%    6.87%    6.50%    3.39%
10         4   0%     7.48%    9.53%    3.53%
9          2   100%   3.39%    2.9%     2.32%
4          2   50%    6.88%    6.33%    3.39%
12         2   0%     10.85%   11.23%   3.68%

Table 7: Effect of Rule Set Composition - Deviations

Computation times required by the RCCS approach in our experimentation are shown in Table 8; again the results from γ = 0.25 and Z = 100 are detailed over the different rule sets.

Rule Set   Avg CPU   |   Rule Set   Avg CPU

Table 8: Effect of Rule Sets - CPU Times (RCCS)

4.3. Local Search-Based Control

In order to demonstrate the influence of I and σ on the number of gridpoints, Table 9 summarizes some exemplary numbers. Due to the prohibitive effort required to cover large gridpoint numbers, for each rule set cardinality I, σ is set to the maximum non-shaded value in Table 9.

[Table 9: Exemplary Numbers of Gridpoints - for several grid granularities σ, the number of solutions per iteration, (σ+1)^I, for the rule set cardinalities I considered.]

For the corresponding effectiveness results refer again to Tables 6 and 7. Contrary to the findings for the RCCS, the relationship between rule set cardinality and effectiveness is not simply a linear one for the LSCS; rather, the best results are obtained when two (rule sets 9, 3) or three (rule sets 8, 2) good rules are part of the rule set. W.r.t. the composition of the rule sets, including good rules in a rule set is clearly important, as the presence of bad rules only cannot be compensated by the LSCS. Yet we find that even rule sets with Θ = 100% are not guaranteed to be the most effective ones, as again those rule sets which contain two or three good rules have the highest effectiveness. The computation times observed for the LSCS approach are reported in Table 10.

Rule Set   Avg CPU   |   Rule Set   Avg CPU

Table 10: Effect of Rule Sets - CPU Times (deterministic LSCS)

Note that due to the use of a dynamic termination criterion for the LSCS approach, instead of just setting a fixed sample size, the number of iterations performed by the LSCS algorithm may vary substantially for different instances. Also, the number of gridpoints actually visited, and thus the number of schedules generated per iteration of the LSCS, may be smaller than (σ+1)^I due to the inclusion of bounding rules in the scheduling algorithm. We therefore give the average numbers of generated schedules per rule set in Table 11.

Rule Set   Avg Schedules   |   Rule Set   Avg Schedules

Table 11: Effect of Rule Sets - Generated Schedules (deterministic LSCS)

4.4. Randomized Local Search-Based Control

For exploring the effect of the control parameters of the sampling scheme, viz. µ and δ, we first of all ran nine experiments where µ ∈ {1, 2, 3} and δ ∈ {10, 100, 1000}, values which are in line with similar experiments in the context of sampling methods using simple priority rules. The influence of µ and δ, however, turned out to be negligible: The standard deviation of the average deviations, computed over all nine combinations of µ and δ, is never larger than

0.5%, and for 11 of the 12 rule sets ranges between 0.02 and 0.04%. We thus confine ourselves in the sequel to the results from using µ = 1 and δ = 10. For the results refer again to Tables 6 and 7. Evidently, the effectiveness of all rule sets is improved by adding randomization as a second stage. As already good rule sets can be said to start into the second stage with an advantage, their improvements are less dramatic than those of the bad rule sets. Still, the ranking of the rule sets remains virtually unchanged, such that again sets comprising two or three good rules sport the highest effectiveness.

Another interesting question is how often the additional randomization stage actually improves effectiveness. Table 12 shows, again for µ = 1 and δ = 10, the number of instances for which randomization produced a better solution than the one deterministically found. The results are representative for other settings of µ and δ.

Rule Set   Instances   |   Rule Set   Instances

Table 12: Effect of Randomization - Improved Instances

The computation times measured for the randomized LSCS approach are given in Table 13.

Rule Set   Avg CPU   |   Rule Set   Avg CPU

Table 13: Effect of Rule Sets - CPU Times (randomized LSCS)

4.5. Comparison with Other Algorithms

To assess the merits of the approaches discussed in this paper, we compare them to the most effective sampling-based construction methods currently available for the RCPSP. The results for the adaptive search procedure of Kolisch, Drexl (1996), which were not reported for all 360 instances in the original paper, were derived from reimplementing the procedure. For comparison, we also include a number of simpler priority rule-based construction algorithms. For the sake of simplicity, we restrict the presentation of results for the adaptive control algorithms to rule set 8. As the first stage of the LSCS-algorithms usually performs between 100 and 500 iterations (cf. Table 11), we provide the results for both sample sizes in Table 14.

Algorithm                  Reference                      Sample Size 100   Sample Size 500
Case-based reasoning       Schirmer (2000)                1.53%             1.04%
Adaptive search            Kolisch, Drexl (1996)          1.85%             1.20%
RLSCS, rule set 8          -                              1.94%
SSS, MRBRS/10, LST         Schirmer, Riesenberg (1997)    1.95%             1.3%
SSS, MRBRS/10, LFT         Schirmer, Riesenberg (1997)    1.99%             1.33%
LSCS, rule set 8           -                              2.35%
PSS, RBRS, WCS             Kolisch (1996a)                2.36%             2.08%
PSS, RBRS, LFT             Kolisch (1996b)                2.40%             2.09%
RCCS, rule set 8           -                              2.9%              2.76%

Table 14: Comparison of Several Construction Methods - Deviations

One point deserving further mention is that we deliberately excluded from the comparison more evolved, metaheuristic algorithms which use concepts such as tabu search or population genetics. We do so for several reasons: One, by design such methods are much more complicated and thus less efficient than the algorithms considered here. Two, the benefits of metaheuristics usually come to fully bear only when larger samples of solutions are drawn; recent studies thus examine samples of 1000 and 5000 schedules (cf. Kolisch, Hartmann 1999), whereas, as we look for fast methods, we tend to use much smaller samples. Three, while comparing our algorithms only to construction methods might seem too easy a competition, the case-based reasoning algorithm listed in Table 14 outperforms all but the most recent metaheuristics, e.g. the tabu search algorithm of Baar et al. (1997) and the genetic algorithm of Leon, Ramamoorthy (1995); indeed, only one known metaheuristic, i.e. the genetic algorithm of Hartmann (1998), consistently outperforms this algorithm (cf. Schirmer 2000).

We find the RCCS approach to be the least effective of all algorithms considered, even with the best rule set found in our experimentation. The LSCS is much more effective, especially when augmented by a randomization stage; its performance is comparable to that of other good construction methods, although it is hampered by a lower efficiency, as it clearly requires more than 100 iterations with most rule sets to achieve this effectiveness (cf. Table 11). State-of-the-art sampling-based methods still outperform the algorithms considered here.

5. Summary and Conclusions

Adaptive control schemes try to dynamically compose good algorithms from an unsorted collection of simple algorithms, thus tailoring one individual algorithm to each instance. In this way, one hopes to turn out methods positioned between construction and search methods in terms of both effectiveness and efficiency. To put them to a meaningful test, we applied them to the RCPSP,

thus using a well-researched platform for experimental evaluation. We chose a collection of priority rule-based construction methods to start with; as adaptive control schemes, we adapted two simple search-based mechanisms recently proposed in the literature. We find the RCCS to be too simple to be of much value, even when furnished with the best rule set found in our experimentation. The RLSCS achieves an effectiveness comparable to that of other good construction methods but requires more iterations to do so. Our experimentation on the more general RCPSP/Π, upon which we did not report here, corroborates these findings (Schirmer 1999b).

As even the RLSCS fails to outperform state-of-the-art construction heuristics, our results seem to indicate that adaptive control schemes cannot compete with the more classical sampling heuristics using fixed control schemes. Still, discarding adaptive control schemes on the basis of effectiveness alone would be premature. Such a focus often turns the process of improving algorithms into a competitive race which concentrates on 'beating' other algorithms, rather than on insight into why some algorithms perform better than others (Hooker 1995). Recall that the analytical and algorithmic advances achieved for the RCPSP are the result of several decades of inventive and persistent research; so judging new algorithms by their effectiveness on problems such as the RCPSP clearly puts them at a disadvantage. Indeed, just a few years ago the RLSCS would have been competitive with the best sampling methods. Rather, we conclude that the rather straightforward algorithms proposed so far are too simple to unlock the full potential of adaptive control schemes.

Note also that adaptive control schemes enjoy two important advantages over more evolved algorithms, which facilitate the OR practitioner's task of designing good heuristics for new problems. First, by construction they are robust to changing the instances tackled, thus obviating the need for expertise on the characteristics of instances. Second, they are - at least moderately - robust to changing the rules employed, thus lessening the importance of expertise on the characteristics of such rules for the problem at hand. Expertise in both areas would otherwise be necessary to identify good algorithms, and usually extensive experimentation is required to acquire it. Thus, adaptive control schemes provide a commendable approach when little is known about what constitutes a good rule or what makes an instance easy or difficult to solve. This situation arises e.g. in OR consulting projects where the time-to-algorithm is at a premium and where indeed such algorithms have been successfully used (Haase et al. 1998, 1999).

Although we evaluated them on project scheduling problems only, adaptive control schemes have significance beyond pure project scheduling. As project scheduling deals with the allocation of scarce resources to individual tasks, operations, or activities over time, it covers a multitude of recurring problems in many areas of economic decision making. Its models and methods are used in a variety of production applications, in particular in low volume, small batch, and make-to-order production and assembly. Indeed, these can be considered as prime applications of project scheduling. Note also that parameterized sampling methods similar to


More information

Key Concepts: Economic Computation, Part II

Key Concepts: Economic Computation, Part II Key Concepts: Economic Computation, Part II Brent Hickman Fall, 2009 The purpose of the second section of these notes is to give you some further practice with numerical computation in MATLAB, and also

More information

Constructive meta-heuristics

Constructive meta-heuristics Constructive meta-heuristics Heuristic algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Improving constructive algorithms For many problems constructive algorithms

More information

Chapter 14 Global Search Algorithms

Chapter 14 Global Search Algorithms Chapter 14 Global Search Algorithms An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Introduction We discuss various search methods that attempts to search throughout the entire feasible set.

More information

A Randomized Algorithm for Minimizing User Disturbance Due to Changes in Cellular Technology

A Randomized Algorithm for Minimizing User Disturbance Due to Changes in Cellular Technology A Randomized Algorithm for Minimizing User Disturbance Due to Changes in Cellular Technology Carlos A. S. OLIVEIRA CAO Lab, Dept. of ISE, University of Florida Gainesville, FL 32611, USA David PAOLINI

More information

Evolutionary Computation Algorithms for Cryptanalysis: A Study

Evolutionary Computation Algorithms for Cryptanalysis: A Study Evolutionary Computation Algorithms for Cryptanalysis: A Study Poonam Garg Information Technology and Management Dept. Institute of Management Technology Ghaziabad, India pgarg@imt.edu Abstract The cryptanalysis

More information

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used.

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used. 1 4.12 Generalization In back-propagation learning, as many training examples as possible are typically used. It is hoped that the network so designed generalizes well. A network generalizes well when

More information

6. Lecture notes on matroid intersection

6. Lecture notes on matroid intersection Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans May 2, 2017 6. Lecture notes on matroid intersection One nice feature about matroids is that a simple greedy algorithm

More information

Loopback: Exploiting Collaborative Caches for Large-Scale Streaming

Loopback: Exploiting Collaborative Caches for Large-Scale Streaming Loopback: Exploiting Collaborative Caches for Large-Scale Streaming Ewa Kusmierek Yingfei Dong David Du Poznan Supercomputing and Dept. of Electrical Engineering Dept. of Computer Science Networking Center

More information

Tabu search and genetic algorithms: a comparative study between pure and hybrid agents in an A-teams approach

Tabu search and genetic algorithms: a comparative study between pure and hybrid agents in an A-teams approach Tabu search and genetic algorithms: a comparative study between pure and hybrid agents in an A-teams approach Carlos A. S. Passos (CenPRA) carlos.passos@cenpra.gov.br Daniel M. Aquino (UNICAMP, PIBIC/CNPq)

More information

5. Lecture notes on matroid intersection

5. Lecture notes on matroid intersection Massachusetts Institute of Technology Handout 14 18.433: Combinatorial Optimization April 1st, 2009 Michel X. Goemans 5. Lecture notes on matroid intersection One nice feature about matroids is that a

More information

An Efficient Heuristic Algorithm for Capacitated Lot Sizing Problem with Overtime Decisions

An Efficient Heuristic Algorithm for Capacitated Lot Sizing Problem with Overtime Decisions An Efficient Heuristic Algorithm for Capacitated Lot Sizing Problem with Overtime Decisions Cagatay Iris and Mehmet Mutlu Yenisey Department of Industrial Engineering, Istanbul Technical University, 34367,

More information

An algorithm for Performance Analysis of Single-Source Acyclic graphs

An algorithm for Performance Analysis of Single-Source Acyclic graphs An algorithm for Performance Analysis of Single-Source Acyclic graphs Gabriele Mencagli September 26, 2011 In this document we face with the problem of exploiting the performance analysis of acyclic graphs

More information

6. Tabu Search. 6.3 Minimum k-tree Problem. Fall 2010 Instructor: Dr. Masoud Yaghini

6. Tabu Search. 6.3 Minimum k-tree Problem. Fall 2010 Instructor: Dr. Masoud Yaghini 6. Tabu Search 6.3 Minimum k-tree Problem Fall 2010 Instructor: Dr. Masoud Yaghini Outline Definition Initial Solution Neighborhood Structure and Move Mechanism Tabu Structure Illustrative Tabu Structure

More information

Bound Consistency for Binary Length-Lex Set Constraints

Bound Consistency for Binary Length-Lex Set Constraints Bound Consistency for Binary Length-Lex Set Constraints Pascal Van Hentenryck and Justin Yip Brown University, Box 1910 Carmen Gervet Boston University, Providence, RI 02912 808 Commonwealth Av. Boston,

More information

Sequential Coordinate-wise Algorithm for Non-negative Least Squares Problem

Sequential Coordinate-wise Algorithm for Non-negative Least Squares Problem CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Sequential Coordinate-wise Algorithm for Non-negative Least Squares Problem Woring document of the EU project COSPAL IST-004176 Vojtěch Franc, Miro

More information

5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing. 6. Meta-heuristic Algorithms and Rectangular Packing

5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing. 6. Meta-heuristic Algorithms and Rectangular Packing 1. Introduction 2. Cutting and Packing Problems 3. Optimisation Techniques 4. Automated Packing Techniques 5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing 6.

More information

14.1 Encoding for different models of computation

14.1 Encoding for different models of computation Lecture 14 Decidable languages In the previous lecture we discussed some examples of encoding schemes, through which various objects can be represented by strings over a given alphabet. We will begin this

More information

Structural Advantages for Ant Colony Optimisation Inherent in Permutation Scheduling Problems

Structural Advantages for Ant Colony Optimisation Inherent in Permutation Scheduling Problems Structural Advantages for Ant Colony Optimisation Inherent in Permutation Scheduling Problems James Montgomery No Institute Given Abstract. When using a constructive search algorithm, solutions to scheduling

More information

A proof-producing CSP solver: A proof supplement

A proof-producing CSP solver: A proof supplement A proof-producing CSP solver: A proof supplement Report IE/IS-2010-02 Michael Veksler Ofer Strichman mveksler@tx.technion.ac.il ofers@ie.technion.ac.il Technion Institute of Technology April 12, 2010 Abstract

More information

Heuristic (Informed) Search

Heuristic (Informed) Search Heuristic (Informed) Search (Where we try to choose smartly) R&N: Chap., Sect..1 3 1 Search Algorithm #2 SEARCH#2 1. INSERT(initial-node,Open-List) 2. Repeat: a. If empty(open-list) then return failure

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

Integer Programming ISE 418. Lecture 7. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 7. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 7 Dr. Ted Ralphs ISE 418 Lecture 7 1 Reading for This Lecture Nemhauser and Wolsey Sections II.3.1, II.3.6, II.4.1, II.4.2, II.5.4 Wolsey Chapter 7 CCZ Chapter 1 Constraint

More information

Mathematical and Algorithmic Foundations Linear Programming and Matchings

Mathematical and Algorithmic Foundations Linear Programming and Matchings Adavnced Algorithms Lectures Mathematical and Algorithmic Foundations Linear Programming and Matchings Paul G. Spirakis Department of Computer Science University of Patras and Liverpool Paul G. Spirakis

More information

Fast algorithms for max independent set

Fast algorithms for max independent set Fast algorithms for max independent set N. Bourgeois 1 B. Escoffier 1 V. Th. Paschos 1 J.M.M. van Rooij 2 1 LAMSADE, CNRS and Université Paris-Dauphine, France {bourgeois,escoffier,paschos}@lamsade.dauphine.fr

More information

Multiobjective Job-Shop Scheduling With Genetic Algorithms Using a New Representation and Standard Uniform Crossover

Multiobjective Job-Shop Scheduling With Genetic Algorithms Using a New Representation and Standard Uniform Crossover Multiobjective Job-Shop Scheduling With Genetic Algorithms Using a New Representation and Standard Uniform Crossover J. Garen 1 1. Department of Economics, University of Osnabrück, Katharinenstraße 3,

More information

Scheduling with Bus Access Optimization for Distributed Embedded Systems

Scheduling with Bus Access Optimization for Distributed Embedded Systems 472 IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 8, NO. 5, OCTOBER 2000 Scheduling with Bus Access Optimization for Distributed Embedded Systems Petru Eles, Member, IEEE, Alex

More information

Chapter S:II. II. Search Space Representation

Chapter S:II. II. Search Space Representation Chapter S:II II. Search Space Representation Systematic Search Encoding of Problems State-Space Representation Problem-Reduction Representation Choosing a Representation S:II-1 Search Space Representation

More information

Lecture 5 Finding meaningful clusters in data. 5.1 Kleinberg s axiomatic framework for clustering

Lecture 5 Finding meaningful clusters in data. 5.1 Kleinberg s axiomatic framework for clustering CSE 291: Unsupervised learning Spring 2008 Lecture 5 Finding meaningful clusters in data So far we ve been in the vector quantization mindset, where we want to approximate a data set by a small number

More information

Table 1 below illustrates the construction for the case of 11 integers selected from 20.

Table 1 below illustrates the construction for the case of 11 integers selected from 20. Q: a) From the first 200 natural numbers 101 of them are arbitrarily chosen. Prove that among the numbers chosen there exists a pair such that one divides the other. b) Prove that if 100 numbers are chosen

More information

Polynomial-Time Approximation Algorithms

Polynomial-Time Approximation Algorithms 6.854 Advanced Algorithms Lecture 20: 10/27/2006 Lecturer: David Karger Scribes: Matt Doherty, John Nham, Sergiy Sidenko, David Schultz Polynomial-Time Approximation Algorithms NP-hard problems are a vast

More information

Methods and Models for Combinatorial Optimization Exact methods for the Traveling Salesman Problem

Methods and Models for Combinatorial Optimization Exact methods for the Traveling Salesman Problem Methods and Models for Combinatorial Optimization Exact methods for the Traveling Salesman Problem L. De Giovanni M. Di Summa The Traveling Salesman Problem (TSP) is an optimization problem on a directed

More information

ACO and other (meta)heuristics for CO

ACO and other (meta)heuristics for CO ACO and other (meta)heuristics for CO 32 33 Outline Notes on combinatorial optimization and algorithmic complexity Construction and modification metaheuristics: two complementary ways of searching a solution

More information

Free-Form Shape Optimization using CAD Models

Free-Form Shape Optimization using CAD Models Free-Form Shape Optimization using CAD Models D. Baumgärtner 1, M. Breitenberger 1, K.-U. Bletzinger 1 1 Lehrstuhl für Statik, Technische Universität München (TUM), Arcisstraße 21, D-80333 München 1 Motivation

More information

Discrete Optimization. Lecture Notes 2

Discrete Optimization. Lecture Notes 2 Discrete Optimization. Lecture Notes 2 Disjunctive Constraints Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The

More information

Paths, Flowers and Vertex Cover

Paths, Flowers and Vertex Cover Paths, Flowers and Vertex Cover Venkatesh Raman M. S. Ramanujan Saket Saurabh Abstract It is well known that in a bipartite (and more generally in a König) graph, the size of the minimum vertex cover is

More information

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science Algorithms For Inference Fall 2014

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science Algorithms For Inference Fall 2014 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.438 Algorithms For Inference Fall 2014 Recitation-6: Hardness of Inference Contents 1 NP-Hardness Part-II

More information

An Approach to Task Attribute Assignment for Uniprocessor Systems

An Approach to Task Attribute Assignment for Uniprocessor Systems An Approach to ttribute Assignment for Uniprocessor Systems I. Bate and A. Burns Real-Time Systems Research Group Department of Computer Science University of York York, United Kingdom e-mail: fijb,burnsg@cs.york.ac.uk

More information

On the Max Coloring Problem

On the Max Coloring Problem On the Max Coloring Problem Leah Epstein Asaf Levin May 22, 2010 Abstract We consider max coloring on hereditary graph classes. The problem is defined as follows. Given a graph G = (V, E) and positive

More information

Branch-and-bound: an example

Branch-and-bound: an example Branch-and-bound: an example Giovanni Righini Università degli Studi di Milano Operations Research Complements The Linear Ordering Problem The Linear Ordering Problem (LOP) is an N P-hard combinatorial

More information

α Coverage to Extend Network Lifetime on Wireless Sensor Networks

α Coverage to Extend Network Lifetime on Wireless Sensor Networks Noname manuscript No. (will be inserted by the editor) α Coverage to Extend Network Lifetime on Wireless Sensor Networks Monica Gentili Andrea Raiconi Received: date / Accepted: date Abstract An important

More information

Interleaving Schemes on Circulant Graphs with Two Offsets

Interleaving Schemes on Circulant Graphs with Two Offsets Interleaving Schemes on Circulant raphs with Two Offsets Aleksandrs Slivkins Department of Computer Science Cornell University Ithaca, NY 14853 slivkins@cs.cornell.edu Jehoshua Bruck Department of Electrical

More information

Stochastic branch & bound applying. target oriented branch & bound method to. optimal scenario tree reduction

Stochastic branch & bound applying. target oriented branch & bound method to. optimal scenario tree reduction Stochastic branch & bound applying target oriented branch & bound method to optimal scenario tree reduction Volker Stix Vienna University of Economics Department of Information Business Augasse 2 6 A-1090

More information

Distributed minimum spanning tree problem

Distributed minimum spanning tree problem Distributed minimum spanning tree problem Juho-Kustaa Kangas 24th November 2012 Abstract Given a connected weighted undirected graph, the minimum spanning tree problem asks for a spanning subtree with

More information

COMPARISON OF ALGORITHMS FOR NONLINEAR REGRESSION ESTIMATES

COMPARISON OF ALGORITHMS FOR NONLINEAR REGRESSION ESTIMATES COMPSTAT 2004 Symposium c Physica-Verlag/Springer 2004 COMPARISON OF ALGORITHMS FOR NONLINEAR REGRESSION ESTIMATES Tvrdík J. and Křivý I. Key words: Global optimization, evolutionary algorithms, heuristics,

More information

Lecture 19. Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer

Lecture 19. Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer CS-621 Theory Gems November 21, 2012 Lecture 19 Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer 1 Introduction We continue our exploration of streaming algorithms. First,

More information

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 The Encoding Complexity of Network Coding Michael Langberg, Member, IEEE, Alexander Sprintson, Member, IEEE, and Jehoshua Bruck,

More information

An Ant Colony Optimization Meta-Heuristic for Subset Selection Problems

An Ant Colony Optimization Meta-Heuristic for Subset Selection Problems Chapter I An Ant Colony Optimization Meta-Heuristic for Subset Selection Problems Christine Solnon I.1 Derek Bridge I.2 Subset selection problems involve finding an optimal feasible subset of an initial

More information

Treewidth and graph minors

Treewidth and graph minors Treewidth and graph minors Lectures 9 and 10, December 29, 2011, January 5, 2012 We shall touch upon the theory of Graph Minors by Robertson and Seymour. This theory gives a very general condition under

More information

EFFICIENT ATTACKS ON HOMOPHONIC SUBSTITUTION CIPHERS

EFFICIENT ATTACKS ON HOMOPHONIC SUBSTITUTION CIPHERS EFFICIENT ATTACKS ON HOMOPHONIC SUBSTITUTION CIPHERS A Project Report Presented to The faculty of the Department of Computer Science San Jose State University In Partial Fulfillment of the Requirements

More information

THREE LECTURES ON BASIC TOPOLOGY. 1. Basic notions.

THREE LECTURES ON BASIC TOPOLOGY. 1. Basic notions. THREE LECTURES ON BASIC TOPOLOGY PHILIP FOTH 1. Basic notions. Let X be a set. To make a topological space out of X, one must specify a collection T of subsets of X, which are said to be open subsets of

More information

Scheduling. Job Shop Scheduling. Example JSP. JSP (cont.)

Scheduling. Job Shop Scheduling. Example JSP. JSP (cont.) Scheduling Scheduling is the problem of allocating scarce resources to activities over time. [Baker 1974] Typically, planning is deciding what to do, and scheduling is deciding when to do it. Generally,

More information

IMPLEMENTATION OF A FIXING STRATEGY AND PARALLELIZATION IN A RECENT GLOBAL OPTIMIZATION METHOD

IMPLEMENTATION OF A FIXING STRATEGY AND PARALLELIZATION IN A RECENT GLOBAL OPTIMIZATION METHOD IMPLEMENTATION OF A FIXING STRATEGY AND PARALLELIZATION IN A RECENT GLOBAL OPTIMIZATION METHOD Figen Öztoprak, Ş.İlker Birbil Sabancı University Istanbul, Turkey figen@su.sabanciuniv.edu, sibirbil@sabanciuniv.edu

More information

Incompatibility Dimensions and Integration of Atomic Commit Protocols

Incompatibility Dimensions and Integration of Atomic Commit Protocols Preprint Incompatibility Dimensions and Integration of Atomic Protocols, Yousef J. Al-Houmaily, International Arab Journal of Information Technology, Vol. 5, No. 4, pp. 381-392, October 2008. Incompatibility

More information

Optimization I : Brute force and Greedy strategy

Optimization I : Brute force and Greedy strategy Chapter 3 Optimization I : Brute force and Greedy strategy A generic definition of an optimization problem involves a set of constraints that defines a subset in some underlying space (like the Euclidean

More information

Design of Flexible Assembly Line to Minimize Equipment Cost

Design of Flexible Assembly Line to Minimize Equipment Cost Design of Flexible Assembly Line to Minimize Equipment Cost Joseph Bukchin Department of Industrial Engineering Faculty of Engineering, Tel-Aviv University, Tel-Aviv 69978 ISRAEL Tel: 972-3-640794; Fax:

More information

Implementation and modeling of two-phase locking concurrency control a performance study

Implementation and modeling of two-phase locking concurrency control a performance study INFSOF 4047 Information and Software Technology 42 (2000) 257 273 www.elsevier.nl/locate/infsof Implementation and modeling of two-phase locking concurrency control a performance study N.B. Al-Jumah a,

More information

Byzantine Consensus in Directed Graphs

Byzantine Consensus in Directed Graphs Byzantine Consensus in Directed Graphs Lewis Tseng 1,3, and Nitin Vaidya 2,3 1 Department of Computer Science, 2 Department of Electrical and Computer Engineering, and 3 Coordinated Science Laboratory

More information

Interprocess Communication By: Kaushik Vaghani

Interprocess Communication By: Kaushik Vaghani Interprocess Communication By: Kaushik Vaghani Background Race Condition: A situation where several processes access and manipulate the same data concurrently and the outcome of execution depends on the

More information

A Re-examination of Limited Discrepancy Search

A Re-examination of Limited Discrepancy Search A Re-examination of Limited Discrepancy Search W. Ken Jackson, Morten Irgens, and William S. Havens Intelligent Systems Lab, Centre for Systems Science Simon Fraser University Burnaby, B.C., CANADA V5A

More information

Worst-case running time for RANDOMIZED-SELECT

Worst-case running time for RANDOMIZED-SELECT Worst-case running time for RANDOMIZED-SELECT is ), even to nd the minimum The algorithm has a linear expected running time, though, and because it is randomized, no particular input elicits the worst-case

More information

Self-Organizing Maps for cyclic and unbounded graphs

Self-Organizing Maps for cyclic and unbounded graphs Self-Organizing Maps for cyclic and unbounded graphs M. Hagenbuchner 1, A. Sperduti 2, A.C. Tsoi 3 1- University of Wollongong, Wollongong, Australia. 2- University of Padova, Padova, Italy. 3- Hong Kong

More information

Chapter 2 Overview of the Design Methodology

Chapter 2 Overview of the Design Methodology Chapter 2 Overview of the Design Methodology This chapter presents an overview of the design methodology which is developed in this thesis, by identifying global abstraction levels at which a distributed

More information

An Evolutionary Algorithm for Minimizing Multimodal Functions

An Evolutionary Algorithm for Minimizing Multimodal Functions An Evolutionary Algorithm for Minimizing Multimodal Functions D.G. Sotiropoulos, V.P. Plagianakos and M.N. Vrahatis University of Patras, Department of Mamatics, Division of Computational Mamatics & Informatics,

More information

Conflict Graphs for Combinatorial Optimization Problems

Conflict Graphs for Combinatorial Optimization Problems Conflict Graphs for Combinatorial Optimization Problems Ulrich Pferschy joint work with Andreas Darmann and Joachim Schauer University of Graz, Austria Introduction Combinatorial Optimization Problem CO

More information

Pincer-Search: An Efficient Algorithm. for Discovering the Maximum Frequent Set

Pincer-Search: An Efficient Algorithm. for Discovering the Maximum Frequent Set Pincer-Search: An Efficient Algorithm for Discovering the Maximum Frequent Set Dao-I Lin Telcordia Technologies, Inc. Zvi M. Kedem New York University July 15, 1999 Abstract Discovering frequent itemsets

More information

PROBLEM FORMULATION AND RESEARCH METHODOLOGY

PROBLEM FORMULATION AND RESEARCH METHODOLOGY PROBLEM FORMULATION AND RESEARCH METHODOLOGY ON THE SOFT COMPUTING BASED APPROACHES FOR OBJECT DETECTION AND TRACKING IN VIDEOS CHAPTER 3 PROBLEM FORMULATION AND RESEARCH METHODOLOGY The foregoing chapter

More information

Leveraging Transitive Relations for Crowdsourced Joins*

Leveraging Transitive Relations for Crowdsourced Joins* Leveraging Transitive Relations for Crowdsourced Joins* Jiannan Wang #, Guoliang Li #, Tim Kraska, Michael J. Franklin, Jianhua Feng # # Department of Computer Science, Tsinghua University, Brown University,

More information

Evolving Variable-Ordering Heuristics for Constrained Optimisation

Evolving Variable-Ordering Heuristics for Constrained Optimisation Griffith Research Online https://research-repository.griffith.edu.au Evolving Variable-Ordering Heuristics for Constrained Optimisation Author Bain, Stuart, Thornton, John, Sattar, Abdul Published 2005

More information

An Eternal Domination Problem in Grids

An Eternal Domination Problem in Grids Theory and Applications of Graphs Volume Issue 1 Article 2 2017 An Eternal Domination Problem in Grids William Klostermeyer University of North Florida, klostermeyer@hotmail.com Margaret-Ellen Messinger

More information

An Ant Approach to the Flow Shop Problem

An Ant Approach to the Flow Shop Problem An Ant Approach to the Flow Shop Problem Thomas Stützle TU Darmstadt, Computer Science Department Alexanderstr. 10, 64283 Darmstadt Phone: +49-6151-166651, Fax +49-6151-165326 email: stuetzle@informatik.tu-darmstadt.de

More information

Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm. Santos and Mateus (2007)

Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm. Santos and Mateus (2007) In the name of God Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm Spring 2009 Instructor: Dr. Masoud Yaghini Outlines Problem Definition Modeling As A Set Partitioning

More information