Tuning MAX MIN Ant System with off-line and on-line methods


Université Libre de Bruxelles
Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle

Tuning MAX MIN Ant System with off-line and on-line methods

Paola Pellegrini, Thomas Stützle, and Mauro Birattari

IRIDIA Technical Report Series
Technical Report No. TR/IRIDIA/
November 2010

IRIDIA Technical Report Series
ISSN
Published by: IRIDIA, Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle, Université Libre de Bruxelles, Av F. D. Roosevelt 50, CP 194/ Bruxelles, Belgium
Technical report number TR/IRIDIA/

The information provided is the sole responsibility of the authors and does not necessarily reflect the opinion of the members of IRIDIA. The authors take full responsibility for any copyright breaches that may result from publication of this paper in the IRIDIA Technical Report Series. IRIDIA is not responsible for any use that might be made of data appearing in this publication.

Tuning MAX MIN Ant System with off-line and on-line methods

Paola Pellegrini
Dipartimento di Matematica Applicata, Università Ca' Foscari Venezia, Venezia, Italia

Thomas Stützle and Mauro Birattari
IRIDIA, CoDE, Université Libre de Bruxelles, Brussels, Belgium

November 2010

Abstract

In this paper, we study two approaches for tuning the parameters of optimization algorithms: off-line and on-line tuning. The goal of tuning is to find an appropriate configuration of an algorithm, where a configuration is a specific setting of all parameters. In off-line tuning, the appropriate configuration is chosen after a preliminary phase. In on-line tuning, the configuration might be changed during the solution process to adapt to the instance being solved and possibly to the state of the search. Parameter tuning is very important in swarm intelligence: the performance of many techniques is strongly influenced by the configuration chosen. In this paper, we study the performance of MAX MIN Ant System when its parameters are tuned off-line and when they are tuned on-line. We consider one off-line and five on-line methods, and we tackle two combinatorial optimization problems. The results show that off-line tuning generally outperforms on-line tuning. In some cases, on-line tuning achieves better results than off-line tuning, but the best on-line method is not always the same.

1 Introduction

Many swarm intelligence techniques, such as ant colony optimization (Dorigo and Stützle, 2004) and particle swarm optimization (Clerc, 2006), have numerical and categorical parameters that have an impact on performance (Birattari, 2009). The appropriate configuration, i.e., the appropriate setting of all relevant parameters, depends on the problem instances to be tackled. The approaches for finding appropriate configurations can be divided into two families: off-line and on-line tuning. In off-line tuning, the appropriate configuration is selected in a preliminary analysis.
Traditionally, off-line tuning was performed manually, but recently a number of automatic procedures have been proposed (Birattari et al, 2002; Balaprakash et al, 2007; Adenso-Díaz and Laguna, 2006; Hutter et al, 2009). In on-line tuning, the configuration changes depending on the status of the search process while the algorithm is executing. Also in this case, several automatic procedures have been proposed in the literature (Anghinolfi et al, 2008; Battiti et al, 2008; Lobo et al, 2007; Martens et al, 2007). Off-line and on-line tuning have both merits and drawbacks. In particular, one of the main merits of off-line tuning is that it does not require knowledge of the specific algorithm to be tuned: it treats optimization algorithms as black boxes. On the other hand, one of the main drawbacks of off-line tuning is that some resources have to be devoted to tuning parameters before deploying the algorithm. In on-line tuning, no additional resource is necessary before actually tackling an instance, and a high flexibility is achieved by adapting the configuration to an instance, depending on the specific phase of the search. This flexibility is paid for in terms of design complexity: the on-line method must be incorporated in the implementation of the optimization algorithm. Thus, a very good understanding of all the different features of the algorithm is necessary for obtaining an effective on-line method. Hence, an on-line method must be designed for a specific algorithm, and it cannot directly be exploited on different ones. In this paper, we analyze the performance of off-line and on-line tuning under different experimental conditions, for testing whether the intuitions that typically drive the choice of one approach against the other may be supported by experimental evidence. In particular, we consider three a priori conjectures: a) off-line tuning outperforms on-line tuning for tackling homogeneous instances; b) the advantage of off-line tuning is mitigated when tackling heterogeneous instances, to the extent that it becomes a disadvantage when instances are highly heterogeneous; c) a large number of parameters to be tuned penalizes on-line tuning. We report the results of an extensive experimental analysis for refuting or corroborating these conjectures. We consider MAX MIN Ant System (MMAS) (Stützle and Hoos, 2000), one of the most successful ant colony optimization (ACO) algorithms, and we apply it to two well-studied combinatorial optimization problems: the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). For each problem we consider several sets of instances with various levels of heterogeneity. We study the performance achieved by the algorithm when it is tuned off-line through the F-Race procedure (Birattari, 2003), and when it is tuned on-line.
We test five on-line methods: one of them is based on a local search approach (Anghinolfi et al, 2008); the remaining four are based on a self-adaptive approach (Martens et al, 2007), which represents the state of the art for on-line tuning in ACO. In self-adaptive approaches, the space of configurations is coupled with the solution space of the instance to be tackled, and the algorithm operates in the joint space so obtained. We analyze the performance of these on-line methods as a function of the number of parameters tuned. In these experiments, off-line tuning achieves better performance than on-line tuning. Moreover, the results show that the relative performance of on-line methods depends both on the instances tackled and on the experimental conditions. This suggests that good results might be obtained by hybridizing off-line and on-line tuning: off-line tuning may help in selecting the most important parameters to be tuned on-line; then, on-line tuning may adapt the configuration to each specific instance and to the various phases of the search. The rest of the paper is organized as follows. In Section 2, we describe the main characteristics of MAX MIN Ant System and the specific algorithms we implement for solving the two problems tackled. In Section 3, we present the procedures we consider for off-line and on-line tuning. In Sections 4 and 5, we report the experimental setup and the results we obtain, respectively. Finally, in Section 6 we draw some conclusions.

2 MAX MIN Ant System

In MMAS, a colony of artificial ants explores the search space iteratively. Ants are independent agents that construct solutions incrementally, component by component. Ants communicate indirectly with each other through pheromone trails, biasing the search toward regions of the search space containing high quality solutions. When applying an ACO algorithm, an optimization problem is typically mapped to a construction graph G = (V, E), with V being the set of nodes and E being the set of edges connecting the nodes.
Solution components may be represented either by nodes or by edges of this graph. In the following we describe the main procedures that characterize MMAS, considering solution components associated to edges, and supposing that a minimization problem is to be solved. The description can be easily extended to the case of selecting components associated to nodes. A pheromone trail τ_ij is associated to each edge (i, j) ∈ E: it represents the cumulated knowledge of the colony on the convenience of including component (i, j) in solutions. At the end of each iteration, in which m ants construct one solution each, this cumulated knowledge is enriched by applying the pheromone update rule given in Equation (1). Some pheromone evaporates from each edge, and some is deposited on the edges belonging to the best solution, considering either the last iteration (iteration-best solution), the whole run (best-so-far solution), or the best since a reinitialization of the pheromone trails (restart-best solution) (Stützle and Hoos, 2000):

    τ_ij = (1 − ρ) τ_ij + Δτ_ij,  with  Δτ_ij = F(s_best) if edge (i, j) is part of the best solution, and Δτ_ij = 0 otherwise,   (1)

where F(s_best) is the inverse of the cost of the best solution (recall that we are dealing with minimization problems here), and ρ is a parameter of the algorithm typically named evaporation rate (0 < ρ < 1). In addition, the pheromone strength is constantly maintained in the interval [τ_min, τ_max]. These bounds are functions of the state of the search. When pheromone trails are excessively concentrated, they are re-initialized uniformly on all edges to favor exploration (Stützle and Hoos, 2000). Exploiting the common knowledge represented by pheromone trails, ants construct solutions incrementally, by selecting at each step the edge to traverse. In MMAS, this selection is done according to the random-proportional rule. Ant k, being in node i, moves to node j ∈ N_k with probability

    p_ij = [τ_ij]^α [η_ij]^β / Σ_{l ∈ N_k} [τ_il]^α [η_il]^β,   (2)

where α and β are parameters of the algorithm, η_ij is a heuristic measure representing the desirability of using edge (i, j) from a greedy point of view, and N_k is the set of nodes to which the ant can move. Typically, the m solutions generated by the ants at each iteration are used as starting points for local search runs, one for each initial solution. The solutions returned are then used in the pheromone update as if they had been built by ants.
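The pheromone update of Equation (1), together with the [τ_min, τ_max] limits, can be illustrated with a short sketch. The dictionary layout and the bound values used here are illustrative assumptions, not the actual MMAS implementation:

```python
def update_pheromones(tau, best_edges, best_cost, rho, tau_min, tau_max):
    """MMAS update: evaporate on every edge, deposit F(s_best) = 1/cost on
    the edges of the chosen best solution, then clamp to [tau_min, tau_max]."""
    deposit = 1.0 / best_cost  # F(s_best) for a minimization problem
    for edge in tau:
        tau[edge] = (1.0 - rho) * tau[edge]
        if edge in best_edges:
            tau[edge] += deposit
        # keep every trail inside the MMAS bounds
        tau[edge] = min(tau_max, max(tau_min, tau[edge]))
    return tau
```

For example, with ρ = 0.2, a trail of 1.0 on an edge of the best solution of cost 10 becomes 0.8 + 0.1 = 0.9, while trails on the remaining edges only evaporate.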
The procedures just described, characterizing MMAS, may be used for designing algorithms for virtually any optimization problem. In Sections 2.1 and 2.2 we describe the specific implementation we use for the TSP and the QAP, respectively.

2.1 MMAS for the TSP

The TSP consists in finding a minimum cost tour visiting a set of cities exactly once, starting and ending at the same location. The cost of using each route connecting two cities is fixed. A TSP instance is mapped to a graph by associating a node to each city, and an edge with a predefined cost to each route. The objective of the problem is to construct a minimum cost Hamiltonian tour. For solving the TSP, we use the algorithm implemented in the ACOTSP software (Stützle, 2002). The heuristic measure η_ij is the inverse of the cost of traversing the edge (i, j). The set of nodes to which ant k, being in node i, can move (N_k) includes all nodes that belong to the candidate list associated to i and that have not been visited yet. The candidate list contains the nn nearest neighbors of node i, where nn is a parameter of the algorithm. The remaining nodes are included in N_k only if all nodes in the candidate list have already been visited. The performance of MMAS for the TSP can possibly be improved by using the pseudo-random proportional rule that is used by another ACO algorithm, namely ant colony system (ACS) (Dorigo and Gambardella, 1997). Thus, we exploit this rule here; with a probability q_0, the next node j to be inserted in an ant's partial solution is the most attractive node according to pheromone and heuristic measure:

    j = arg max_{l ∈ N_k} { [τ_il]^α [η_il]^β }.   (3)
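Taken together, Equations (2) and (3) yield the following construction step. This is a minimal sketch in which tau and eta are dictionaries indexed by edges; it is not the ACOTSP data layout:

```python
import random

def choose_next(i, feasible, tau, eta, alpha, beta, q0, rng=random):
    """Select the next node from node i. With probability q0, take the argmax
    of tau^alpha * eta^beta (Equation 3); otherwise sample a node with
    probability proportional to the same weights (Equation 2)."""
    weights = {j: (tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta) for j in feasible}
    if rng.random() < q0:
        return max(weights, key=weights.get)  # greedy, deterministic choice
    total = sum(weights.values())
    r = rng.uniform(0.0, total)               # roulette-wheel sampling
    acc = 0.0
    for j, w in weights.items():
        acc += w
        if acc >= r:
            return j
    return j  # guard against floating-point underflow of the accumulator
```

With q0 = 0 this reduces to the random-proportional rule of Equation (2); with q0 close to 1 the construction becomes almost greedy.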

With probability 1 − q_0, the ant uses the traditional random proportional rule of Equation (2). Thus, q_0 is a further parameter of the algorithm, with 0 ≤ q_0 < 1. The 2-opt local search is applied to all solutions constructed by ants.

2.2 MMAS for the QAP

The QAP consists in finding a minimum cost assignment of a set of facilities to a set of locations. A fixed distance is given for each pair of locations, and a fixed flow is given for each pair of facilities. The cost contribution for a pair of assignments of two facilities to two different locations is given by the product between the flow exchanged by the facilities and the distance of their respective locations. The total cost in the QAP is the sum of all cost contributions by all pairs of assignments. A QAP instance is mapped to a graph by associating nodes to facilities and locations: using edge (i, j) in the solution construction corresponds to assigning facility i to location j. The implementation of MMAS for the QAP used in this paper is described by Stützle and Hoos (2000). No heuristic measure is associated to edges; thus, parameter β is not considered here. The algorithm strictly follows the general framework of MMAS described at the beginning of the current section. Depending on the characteristics of the set of instances tackled, the best performing local search may vary (Stützle and Hoos, 2000). Thus, the local search is a further parameter of the algorithm. In the following it will be referred to as ls.

3 Off-line and on-line tuning: the methods considered

We study the results achieved by MMAS when its parameters are tuned off-line and when they are tuned on-line. In the following, let P be the vector of all H parameters of the algorithm, and P_t be the vector of parameters to be tuned. The parameters in P_t are a subset of those in P. Tuning methods operate in a configuration space determined a priori. The possible settings of a parameter p are given by a vector S_p. If p is a numerical parameter, a setting is a number.
We assume that the possible values are obtained by a discretization of the feasible intervals in the case of real-valued parameters, or by a selection of representative values in the case of integer parameters. The possible values are listed in S_p in increasing order. If p is a categorical parameter, we use an arbitrary order. The configuration space is then given as the set of all possible combinations of parameter settings. For what concerns off-line tuning, F-Race (Birattari, 2009; Birattari et al, 2002) appears to be a convenient choice for tuning the relatively small number of candidate configurations considered. Before solving the instances of interest, F-Race exploits a set of tuning instances with characteristics similar to those of the instances to be tackled. One step of F-Race consists in evaluating each configuration on one tuning instance. After each step, F-Race discards any configuration that is worse than at least another one. For what concerns on-line tuning, we consider five different methods. The first one, LS, is based on the exploration of the configuration space using a naive local search approach (Anghinolfi et al, 2008). At each step, LS evaluates one reference configuration and its neighbors. The neighborhood includes all configurations that differ from the reference one in the setting of exactly one parameter p. If the setting of this parameter in the reference configuration is the i-th setting in the vector of possible ones S_p, the neighbors are the ones taking the (i−1)-st and the (i+1)-st settings. If either the (i−1)-st or the (i+1)-st setting does not exist in S_p, which means that the reference configuration includes either the first or the last setting in S_p, one neighbor is not valid, and LS doubles the reference configuration. The reference configuration is replaced by one of the neighbors if it is not the best performing one.
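The neighborhood used by LS can be sketched as follows. The reading of "doubles the reference configuration" as "the reference configuration fills the missing neighbor's slot" is our interpretation of the text, and the dictionary representation of a configuration is an illustrative assumption:

```python
def ls_neighborhood(reference, settings):
    """Neighbors of a configuration under the LS on-line method: for each
    parameter, move its setting one position up or down in its vector S_p.
    When an index falls off either end of S_p, the reference configuration
    itself takes that slot, so the neighborhood size is always 2 * |P_t|."""
    neighbors = []
    for p, value in reference.items():
        i = settings[p].index(value)
        for step in (-1, +1):
            if 0 <= i + step < len(settings[p]):
                cand = dict(reference)
                cand[p] = settings[p][i + step]
                neighbors.append(cand)
            else:
                neighbors.append(dict(reference))  # "doubled" reference
    return neighbors
```

For instance, a reference configuration holding the first setting of α and the last setting of ρ has two valid neighbors and two copies of itself in its neighborhood.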
LS evaluates the configurations by splitting the ant colony into as many groups as the size of the neighborhood, and by having each group of ants build solutions using a different configuration. The performance of each configuration is defined as the value of the best solution found by the corresponding group of ants. LS evaluates each candidate configuration for 10 iterations of MMAS before deciding whether to change the reference configuration or not, as done by Anghinolfi et al (2008). In case of ties, they are resolved randomly.

The remaining four on-line methods are self-adaptive approaches. In self-adaptive approaches, the mechanism used to modify configurations is the same as that underlying the algorithm to be tuned (Eiben et al, 2007). This can be realized by associating to each parameter p_i to be tuned a set of nodes V^{p_i} in the construction graph, corresponding to all possible settings in S_{p_i}. This results in an additional set of nodes

    V′ = ∪_{p_i ∈ P_t} V^{p_i}.   (4)

Each node associated to a parameter is connected through an edge to each node associated to the following parameter in P_t, resulting in an edge set E′. Each node associated to the last parameter is connected to all nodes in the original set V. Pheromone trails are associated to all edges in E ∪ E′ and they are all managed with the update rule reported in Equation (1). The configuration to be used is built by the search mechanism that is used in the ACO algorithm, by selecting one node in each set V^{p_i}. Multiple attempts have been made for exploiting efficiently the self-adaptive approach in ACO algorithms: Martens et al (2007); Randall (2004); Förster et al (2007) and Khichane et al (2009) have proposed different interpretations of this approach. They differ in two main points. On the one hand, parameters have been considered either independently or interdependently from one another. If parameters are considered independently, pheromone trails are associated to the nodes in V′. In this way, a setting for one parameter is chosen independently of the other settings. If pheromone is on the edges, interactions among parameters may be taken into account. In all the on-line methods used in this paper, parameters are considered as interdependent. On the other hand, parameters have been managed either at the colony or at the ant level. Three methods we implemented for this study manage parameters at the colony level: at each iteration the same setting is used for the whole colony.
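The additional edge set E′ of the self-adaptive construction graph can be sketched as a layered structure: every setting-node of one parameter is linked to every setting-node of the next parameter, and the last parameter's setting-nodes are linked to the original graph nodes. The tuple encoding of nodes below is an illustrative assumption:

```python
def parameter_graph_edges(tuned, settings, original_nodes):
    """Edges E' added by the self-adaptive approach: each setting-node of
    parameter k is connected to each setting-node of parameter k+1, and the
    settings of the last parameter are connected to all original nodes of V.
    A pheromone trail is kept on every one of these edges."""
    layers = [[(p, s) for s in settings[p]] for p in tuned]
    layers.append([("node", v) for v in original_nodes])
    edges = []
    for a, b in zip(layers, layers[1:]):
        for u in a:
            for v in b:
                edges.append((u, v))
    return edges
```

Walking this chain with the usual ACO selection rule picks one setting per tuned parameter before the ant starts constructing its actual solution, which is how the joint configuration/solution space is searched.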
The three methods differ in the rule used for choosing the edges of E′ on which pheromone is deposited. The first method, SAc, reinforces pheromone on the configuration with which the best-so-far, the iteration-best, or the restart-best solution was found, according to the schedule defined by Dorigo and Stützle (2004). The second and the third methods, SAcb and SAcm, reinforce pheromone on the best configuration among those used in the last 25 iterations. SAcb uses the configuration with which the algorithm found the best solution; SAcm the configuration with which the algorithm found the set of solutions with the best mean cost. The fourth method, SAa, manages the parameters at the ant level, that is, at each iteration ants may use different configurations. Both ρ and m are colony-wise parameters. Thus they cannot be tuned on-line by the methods that use multiple configurations in each iteration of MMAS, that is, methods where parameters are chosen at the ant level. Hence SAc, SAcb and SAcm may tune all the parameters described in Section 2: P = (α, β, ρ, m, q_0, nn) for the TSP, and P = (α, ρ, m) for the QAP. SAa and LS, instead, may tune fewer parameters: P = (α, β, q_0, nn) for the TSP, and P = (α) for the QAP.

4 Experimental Setup

In the experimental analysis, we consider multiple versions of MMAS that differ in the way parameters are managed and in the initial configuration used:

1. the configuration is maintained fixed throughout the instances and throughout the runs. In this case, we distinguish two configurations:
   literature (L): the configuration used is the one suggested in the literature (Stützle and Hoos, 2000; Dorigo and Stützle, 2004);
   off-line (off): the configuration used is the one selected by F-Race.

2. the configuration is tuned on-line according to one of the methods described in Section 3, ON ∈ {LS, SAa, SAc, SAcb, SAcm}:
   literature + on-line (L+ON): the initial configuration is the one reported in the literature (Stützle and Hoos, 2000; Dorigo and Stützle, 2004);
   off-line + on-line (off+ON): the initial configuration is the one selected by F-Race.

We evaluate on-line methods as a function of the number of parameters tuned, for refuting or corroborating the conjecture stating that the performance of these methods worsens if several parameters are to be tuned. For each number h of parameters tuned, we test all vectors P_t of cardinality h (recall that the cardinality of P is indicated as H). The total number of combinations, and thus the total number of P_t's for each h, is the binomial coefficient (H choose h). In the discussion of the results we report the number of parameters tuned immediately after the acronym of the on-line method, between parentheses; e.g., L-SAc(3) means that L-SAc has to tune three parameters. In the experiments, we consider the same sets of parameter settings for all tuning methods to avoid a bias in favor of one method. In fact, different configuration spaces may influence the relative performance of the methods. In Table 1, we report the vector S of possible values of each parameter.

Table 1: Values that can be chosen for each parameter for the TSP and the QAP. The values reported in bold type are the ones suggested in the literature (Stützle and Hoos, 2000; Dorigo and Stützle, 2004); they are used in the literature and on-line versions.

TSP, P = (q_0, β, ρ, m, α, nn)
parameter   settings (S)
q_0         0.0, 0.25, 0.5, 0.75, 0.9
β           1, 2, 3, 5, 10
ρ           0.1, 0.2, 0.3, 0.5, …
m           5, 10, 25, 50, 100
α           0.5, 1, 1.5, 2, 3
nn          10, 20, 40, 60, 80

QAP, P = (m, ρ, α)
parameter   settings (S)
m           1, 2, 5, 8, 16
ρ           0.2, 0.4, 0.6, 0.8
α           0.5, 1, 1.5, 2, 3
ls          0 = first improvement (don't-look bits), 1 = first improvement (no don't-look bits), 2 = best improvement, 3 = tabu search runs of length 2n, 4 = tabu search runs of length 6n
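The sizes of the configuration spaces implied by Table 1, and the number of possible P_t vectors for each cardinality h, can be checked directly. Note one assumption: the ρ row for the TSP is taken to have five settings (the list above appears one value short, and a total of 15,625 = 5^6 requires five); ls is counted only in the off-line QAP space:

```python
import math

# Number of settings per parameter, from Table 1. Five settings are assumed
# for rho in the TSP; ls enters only the off-line QAP configuration space.
tsp_settings = {"q0": 5, "beta": 5, "rho": 5, "m": 5, "alpha": 5, "nn": 5}
qap_settings = {"m": 5, "rho": 4, "alpha": 5, "ls": 5}

tsp_space = math.prod(tsp_settings.values())  # 5^6 = 15,625 candidates
qap_space = math.prod(qap_settings.values())  # 5 * 4 * 5 * 5 = 500 candidates

# Number of distinct parameter vectors P_t of cardinality h, out of H = 6.
H = len(tsp_settings)
subsets = {h: math.comb(H, h) for h in range(1, H + 1)}
```

The `subsets` dictionary makes explicit how many combinations must be averaged over in the "no a priori knowledge" context of Section 5.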
The configuration space includes all combinations of the different settings; it amounts to 15,625 candidates for the TSP, and 500 for the QAP. The configurations suggested in the literature are reported in bold type. We show the vector of parameters of the algorithm next to the problem acronym. When P_t does not include all parameters of P, the same order is maintained, after deleting the parameters not included in the specific subset tested. For the QAP, the literature suggests to use 2-opt with best improvement for structured instances, and tabu search with run length 2n for unstructured instances (Stützle and Hoos, 2000). Since it is not possible to identify a generally appropriate local search, we add parameter ls to the set P considered for off-line tuning. In on-line tuning, instead, P never includes ls: preliminary results show that the on-line methods implemented are penalized by the presence of ls in the set of parameters to be tuned. The difficulty in tuning the local search on-line is that both the computational time and the solution quality may differ for each setting. So, there is a trade-off between computational time and solution quality. Adjusting on-line methods to take this trade-off into account would require some additional effort. We compare off-line and on-line tuning on multiple sets of instances for each problem, performing both short and long runs. The results reported for each version of MMAS are obtained with one single run on 100 instances for each set and for each run length (Birattari, 2004; Birattari and Dorigo, 2007). We perform the experiments on Xeon E5410 quad core 2.33GHz processors with 2x6 MB L2-Cache and 8 GB RAM, running under the Linux Rocks Cluster Distribution, after compiling the code with gcc, version 3.4. For each run length, we execute F-Race on 1000 tuning instances of each set. These instances do not include those used in the evaluations of the tuning methods. Each run of F-Race terminates when either all tuning instances are solved, or a fixed number of runs of MMAS are executed (156,250 in the TSP and 7,500 in the QAP). To examine the impact of the heterogeneity of instance sets on tuning methods, for the TSP we define six sets of instances with different numbers of cities and different spatial distributions of the cities. All instances are generated through portgen, the instance generator used in the 8th DIMACS Challenge on the TSP (Johnson et al, 2001). The characteristics of each set are described in Table 2. In the second part of this table we report the configuration selected by F-Race for short and long runs. Hereafter, we will refer to a set of TSP instances as TSP followed by a parenthesis indicating the number of cities included and their spatial distribution. When either the number of cities or their spatial distribution is not the same in all instances, we report an x in the corresponding position. For example, if a set includes instances with 2000 cities that can be either uniformly distributed in the space or grouped in clusters, we use the acronym TSP(2000, x).

Table 2: Sets of instances considered for the TSP. U(a, b) indicates that a number was randomly drawn between a and b for each instance, according to a uniform probability distribution. The second part reports the configuration (α, β, ρ, q_0, m, nn) selected by F-Race for short and long runs.

set            number of nodes   spatial distribution    F-Race selection
TSP(2000, u)   2000              uniform                 short: …  long: …
TSP(2000, c)   2000              clustered               short: …
TSP(2000, x)   2000              uniform and clustered   short: …
TSP(x, u)      U(1000, 2000)     uniform                 short: …
TSP(x, c)      U(1000, 2000)     clustered               short: …
TSP(x, x)      U(1000, 2000)     uniform and clustered   short: …
The different sets have different levels of heterogeneity: we call a set homogeneous if all instances have the same number of cities and the same spatial distribution, as in TSP(2000, u) and TSP(2000, c); we call a set heterogeneous if neither the number of cities nor their spatial distribution is the same in all instances, as in set TSP(x, x); at an intermediate level between those two extremes, we define instance sets where either the number of cities or their spatial distribution is not the same in all instances, as in sets TSP(2000, x), TSP(x, u) and TSP(x, c). For convenience we refer to these sets as semi-heterogeneous. The run length is 10 and 60 CPU seconds for short and long runs, respectively. In short runs, the literature version completes between 100 and 120 iterations on instances of set TSP(2000, u). We performed long runs only on instances of set TSP(2000, u). For the QAP, we consider 28 sets of instances obtained through the instance generator described by Stützle and Fernandes (2004). We generate instances of three sizes: 60, 80 and 100. For each size, we generate both unstructured (RR) and structured (ES) instances. In unstructured instances, the entries of both distance and flow matrices are random numbers uniformly distributed in the interval [0, 99] (Taillard, 1991). In structured instances, the entries of the distance matrix are the Euclidean distances of points whose coordinates are drawn in a square according to a uniform distribution. The entries are rounded to the nearest integer. For what concerns the flow matrix, first a set of points are randomly located in a square of size …. For each pair of points, if the distance is longer than a predefined threshold t, then the flow is set to zero; otherwise it is equal to a value x resulting from the following procedure: i) consider a random number x_1 ∈ [0, 0.7]; ii) compute x_2 = −ln x_1; iii) set x = 100 x_2, rounded. The result of this procedure is a matrix in which the number of entries for each value is decreasing as a function of the value itself. We generate instances by setting different values of parameter t. The peculiarities of each set are described in Table 3. In the second half of Table 3 we report the configuration selected by F-Race for short and long runs. Hereafter we will refer to the sets of QAP instances as QAP followed by a parenthesis indicating the size of the instances and the characteristics of the matrices. As in the TSP, when the instances in a set have different characteristics, an x is reported in the corresponding position. For example, the set of instances generated inserting random entries in the distance and flow matrices, of size 60, 80 or 100, is indicated as QAP(x, RR). Also for the QAP we define a scale of heterogeneity for the sets of instances: an instance set is homogeneous if all instances have the same size and the same characteristics of the distance and the flow matrices, as in the first 18 sets in Table 3; an instance set is heterogeneous if neither the size nor the characteristics of the distance and the flow matrices are the same in all instances, as in the last set in Table 3; semi-heterogeneous refers to instance sets where either the size or the instance characteristics, either RR or ES, are not the same in all instances, as in the 19th to the 27th sets in Table 3. The run length is 17 and 29 CPU seconds for short and long runs, respectively. For determining these values, we ran the literature version on instances of size 60, 80 and 100, considering as stopping criterion the number of iterations performed. Short runs are stopped after 200 iterations; long runs after 600. In the experiments, the time available for solving one instance is the average computational time used in these runs.
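The flow-entry procedure for structured instances can be sketched as follows. The sign of the logarithm and the rounding are assumptions made to reconstruct the procedure, chosen because they yield the stated property that larger flow values occur less frequently:

```python
import math
import random

def flow_entry(rng, threshold, distance):
    """One flow-matrix entry of a structured (ES) QAP instance: zero if the
    two points are farther apart than t; otherwise 100 * (-ln x1), rounded,
    with x1 drawn uniformly from (0, 0.7]. The minus sign is an assumption:
    it makes larger flow values rarer, as stated in the text."""
    if distance > threshold:
        return 0
    x1 = rng.uniform(1e-9, 0.7)   # step i (lower end shifted to avoid log(0))
    x2 = -math.log(x1)            # step ii, sign assumed
    return round(100 * x2)        # final step: x = 100 * x2, rounded
```

Under this reading, the smallest possible non-zero flow is round(−100 ln 0.7) ≈ 36, and the distribution of non-zero entries decays roughly exponentially with their value.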
5 Experimental Results

We compare off-line and on-line methods when all parameters of MMAS are tuned off-line, and either one or more parameters are tuned on-line. As said before, beyond the relative performance of off-line and on-line tuning, we also study the impact of the number of parameters to be tuned on the performance of on-line tuning methods. Therefore, for each number of parameters to be tuned, we consider two possible contexts:

1. No a priori knowledge on the best parameter(s) to be tuned. This context simulates the situation in which one has no information on the parameter(s) that can be most profitably tuned on-line. The expected performance in this case is neither the best achievable nor the worst. For mimicking this situation, we consider the average result over all combinations of parameters that can be tuned.

2. Perfect a posteriori knowledge on the best parameter(s) to be tuned. This context simulates the situation in which one knows perfectly which are the most important parameters to be tuned on-line for achieving the best performance. For mimicking this situation, we consider only the combination of parameters that turns out to be the best a posteriori.

For each problem, we report the results of all tuning methods for two sets of instances. In Pellegrini et al (2010a), we depict the results for all the sets described earlier.

Table 3: Sets of instances considered for the QAP. The second part reports the configuration selected by F-Race for short and long runs.

set              size           type
QAP(60, RR)      60             unstructured
QAP(60, ES10)    60             structured, t = 10
QAP(60, ES20)    60             structured, t = 20
QAP(60, ES42)    60             structured, t = 42
QAP(60, ES72)    60             structured, t = 72
QAP(60, ES100)   60             structured, t = 100
QAP(80, RR)      80             unstructured
QAP(80, ES10)    80             structured, t = 10
QAP(80, ES20)    80             structured, t = 20
QAP(80, ES42)    80             structured, t = 42
QAP(80, ES72)    80             structured, t = 72
QAP(80, ES100)   80             structured, t = 100
QAP(100, RR)     100            unstructured
QAP(100, ES10)   100            structured, t = 10
QAP(100, ES20)   100            structured, t = 20
QAP(100, ES42)   100            structured, t = 42
QAP(100, ES72)   100            structured, t = 72
QAP(100, ES100)  100            structured, t = 100
QAP(60, Ex)      60             structured, t in {10, 20, 42, 72, 100}
QAP(80, Ex)      80             structured, t in {10, 20, 42, 72, 100}
QAP(80, xx)      80             unstructured or structured, t = 10
QAP(100, Ex)     100            structured, t in {10, 20, 42, 72, 100}
QAP(x, ES10)     {60, 80, 100}  structured, t = 10
QAP(x, ES20)     {60, 80, 100}  structured, t = 20
QAP(x, ES42)     {60, 80, 100}  structured, t = 42
QAP(x, ES72)     {60, 80, 100}  structured, t = 72
QAP(x, ES100)    {60, 80, 100}  structured, t = 100
QAP(x, Ex)       {60, 80, 100}  structured, t in {10, 20, 42, 72, 100}

5.1 Traveling Salesman Problem

We report the results obtained in short runs by the off-line method and by the five on-line methods on the instances of sets TSP(2000, u) and TSP(x, x). They are represented in Figures 1 and 3 for the two contexts considered: no a priori and perfect a posteriori knowledge on the best vector to tune. We present the results in terms of position in the ranking achieved by the different variants of the algorithm within the instances of each set. We test the significance of the differences with the Friedman test for all-pairwise comparisons, and in the plots we report the 95% simultaneous confidence intervals: for each variant we depict the median over all instances, and we represent the bounds of the corresponding confidence interval; if the intervals of two variants overlap, then the difference among the variants is not statistically significant. The order in which the acronyms are listed reflects their median ranking (Chiarandini, 2005). Moreover, each on-line method is evaluated as a function of the number of parameters tuned. We present the results used in this analysis for the same two sets of instances, in Figures 2 and 4, in which we focus on the performance of one well-performing on-line method, namely SAc. We depict the boxplot of the rankings obtained in the 100 instances of each set. In Pellegrini et al (2010b), we reported the results obtained with the same analysis on smaller sets of instances.

5.1.1 Experiment 1: no a priori knowledge on the best parameter(s) to be tuned.

When we consider the context in which the implementer has no a priori knowledge on the impact of the different parameters on the behavior of on-line tuning methods, we achieve the results represented in Figure 1. The off-line version performs significantly better than all on-line versions on both sets, TSP(2000, u) and TSP(x, x). The relative performance of the approaches is the same irrespective of the level of heterogeneity of the sets of instances. Focusing on the relative performance of the on-line versions, using the configuration selected by F-Race as the initial one appears quite advantageous: in both sets TSP(2000, u) and TSP(x, x), the average ranking of the off-line + on-line versions is better than that of the literature + on-line ones. This trend can be detected also by observing the lowest part of both graphics in Figure 1, even if the best performing on-line method on set TSP(2000, u) is a literature + on-line one. Self-adaptive approaches with parameters managed at the colony level outperform quite constantly the other on-line methods.
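The distinction between managing parameters at the colony level and at the ant level can be illustrated with a toy sketch. The code below is not the paper's SAc or SAa: quality() is a hypothetical stand-in for the outcome of one MMAS iteration, and both adaptation rules are deliberately minimal.

```python
import random

rng = random.Random(0)

# Toy contrast between colony-level and ant-level self-adaptation of a single
# parameter (here rho). quality() is a hypothetical stand-in for the outcome
# of one MMAS iteration: it simply rewards rho values close to 0.3.
def quality(rho):
    return -abs(rho - 0.3)

def colony_level(iterations=200):
    """One shared rho for the whole colony, mutated each iteration and
    kept only if the colony's quality does not worsen (a 1+1-style rule)."""
    rho, best = 0.9, quality(0.9)
    for _ in range(iterations):
        cand = min(1.0, max(0.01, rho + rng.gauss(0, 0.05)))
        if quality(cand) >= best:
            rho, best = cand, quality(cand)
    return rho

def ant_level(n_ants=10, iterations=200):
    """Each ant carries its own rho; in each iteration the worst ant adopts
    a perturbed copy of the best ant's value."""
    rhos = [rng.uniform(0.01, 1.0) for _ in range(n_ants)]
    best_i = 0
    for _ in range(iterations):
        qs = [quality(r) for r in rhos]
        best_i = max(range(n_ants), key=lambda i: qs[i])
        worst_i = min(range(n_ants), key=lambda i: qs[i])
        rhos[worst_i] = min(1.0, max(0.01, rhos[best_i] + rng.gauss(0, 0.05)))
    return rhos[best_i]

r_colony = colony_level()
r_ant = ant_level()
print(r_colony, r_ant)
```

In this toy setting both rules converge toward the good value of rho; the point is only the structural difference: one shared parameter value versus one value per ant.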
The methods with parameters tuned at the ant level, LS and SAa, are always in the last positions of the ranking, regardless of the number of parameters tuned. On set TSP(2000, u), the average ranking of the methods that manage parameters at the colony level is clearly better than that of the methods that manage parameters at the ant level; on set TSP(x, x), the ant-level methods reach an average ranking of 42.38, again worse than the colony-level ones. The results achieved on the sets of instances not reported here suggest the same observations: the off-line version is always the best performing, and the results achieved by the off-line + on-line versions are on average better than those of the literature + on-line ones. The best results are found by +SAcm on set TSP(x, c) and by +SAc in the remaining three sets. The results for long runs on set TSP(2000, u) are qualitatively equivalent, apart from two main points. First, the relative performance of the off-line + on-line versions and the literature + on-line ones in terms of average ranking is the same as observed in short runs, with the literature + on-line versions at an average ranking of 29.86. Second, the relative performance of the literature version, which is quite poor in short runs, becomes comparable to the best on-line versions in long runs. In fact, the parameter settings suggested in the literature have been proposed for obtaining good performance with relatively high computational time available. The best performing on-line version in this case is +SAc. The quality of the results achieved by on-line tuning methods, independently of whether they start from the literature or from the off-line tuned parameter setting, worsens as the number of parameters tuned increases. Exemplary results are shown in Figure 2 for SAc on two sets of instances. Here we can observe that, as more parameters are tuned, the ranks for the on-line method get worse: the more parameters are tuned on-line, the worse the solution quality reached by the algorithms.

5.1.2 Experiment 2: perfect a posteriori knowledge on the best parameter(s) to be tuned.
In the context that simulates the situation in which the implementer has perfect a posteriori knowledge on the impact of the different parameters on the performance of on-line tuning methods,

Figure 1: Results simulating no a priori knowledge on the best parameter(s) to be tuned. Simultaneous confidence intervals for all-pairwise comparisons of ranks between all versions of MMAS, applied to two sets of TSP instances: (a) TSP(2000, u); (b) TSP(x, x). Results for short runs are reported.
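The rank-based analysis behind plots such as Figure 1 can be sketched as follows. The numbers are illustrative, not the paper's data: the code computes per-instance ranks, the mean rank of each variant, and the Friedman test statistic; the paper's plots additionally draw simultaneous confidence intervals around median ranks (Chiarandini, 2005), which this sketch omits.

```python
# Minimal sketch of the rank-based analysis (illustrative numbers, not the
# paper's data): each row is one instance, each column one algorithm
# variant; entries are solution costs (lower is better).
costs = [
    [10.0, 12.0, 15.0],
    [ 8.0,  9.0, 14.0],
    [11.0, 11.5, 13.0],
    [ 9.0, 10.0, 12.0],
]

def ranks_per_instance(row):
    """Rank the variants on one instance (1 = best; no ties in this toy data)."""
    order = sorted(range(len(row)), key=lambda j: row[j])
    r = [0.0] * len(row)
    for pos, j in enumerate(order):
        r[j] = pos + 1.0
    return r

rank_rows = [ranks_per_instance(row) for row in costs]
n, k = len(rank_rows), len(rank_rows[0])
mean_ranks = [sum(rr[j] for rr in rank_rows) / n for j in range(k)]

# Friedman test statistic (chi-square approximation, valid without ties).
chi2 = 12.0 * n / (k * (k + 1)) * sum((m - (k + 1) / 2.0) ** 2 for m in mean_ranks)

print(mean_ranks, chi2)
```

With these toy costs the three variants are perfectly ordered on every instance, so the mean ranks are 1, 2, and 3 and the statistic takes its maximal value for n = 4 and k = 3.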

Figure 2: Results simulating no a priori knowledge on the best parameter(s) to be tuned. Ranks achieved by an on-line method as a function of the number of parameters tuned, on (a) TSP(2000, u) and (b) TSP(x, x). Short runs.

MMAS achieves the results represented in Figure 3. The off-line version performs significantly better than all on-line ones in set TSP(x, x), while this is not true for set TSP(2000, u): the difference is statistically significant only in the former case. Increasing the level of heterogeneity of the set of instances, then, does not favor on-line methods. As in the context of no a priori knowledge on the best combination, the best on-line methods are self-adaptive approaches with parameters managed at the colony level: the best performing is either L-SAcb(1) or L-SAcm(1). Even if in both sets of instances depicted the best method is always a literature + on-line version, in general using the configuration selected by F-Race leads to better performance: the average ranking achieved by the off-line + on-line versions is better than that of the literature + on-line ones in both TSP(2000, u) and TSP(x, x). In the sets of instances not shown here, off-line is the best performing method in three out of four sets. The best performing on-line method in all those cases is +SAcm. In long runs on instances of set TSP(2000, u), the best method is L-SAc(1). The best on-line methods always tune only one parameter. The quality of the results clearly decreases as a function of the number of parameters tuned, as in the case of no a priori knowledge on the best parameter(s) to be tuned.

5.2 Quadratic Assignment Problem

For the QAP, we tackle 28 sets of instances with different levels of heterogeneity. As for the TSP, we report the results of the off-line method and of the five on-line methods on one homogeneous, QAP(60, ES100), and one heterogeneous, QAP(80, xx), set of instances.
They are shown in Figures 5 and 7 for the two contexts concerning no a priori knowledge and perfect a posteriori knowledge on the best parameter(s) to tune on-line, respectively. Figures 6 and 8 describe the ranking of the results as a function of the number of parameters tuned.

5.2.1 Experiment 1: no a priori knowledge on the best parameter(s) to be tuned.

The results achieved by all the versions of MMAS implemented in the context of no a priori knowledge on the best parameter(s) to tune are reported in Figure 5. In short runs, represented in Figures 5(a) and 5(b), off-line outperforms all on-line versions. The literature version performs very well, in QAP(80, xx) following immediately the off-line version. The difference between the off-line + on-line and the literature + on-line versions is negligible when looking at the average ranks, which

Figure 3: Results simulating perfect a posteriori knowledge on the best parameter(s) to be tuned. Simultaneous confidence intervals for all-pairwise comparisons of ranks between all versions of MMAS, applied to two sets of TSP instances: (a) TSP(2000, u); (b) TSP(x, x). Results for short runs are reported.
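The two evaluation contexts compared in Figures 1 and 3 can be made concrete with a small numeric sketch (illustrative values, not the paper's data): given the rank obtained with each candidate subset of on-line-tuned parameters on each instance, the no-a-priori context averages over all subsets, while the perfect-a-posteriori context keeps only the subset that is best in hindsight.

```python
# Sketch of the two evaluation contexts (illustrative data, not the paper's).
# rows = candidate parameter subsets that could be tuned on-line,
# cols = test instances; entries = rank of the resulting variant.
ranks = [
    [3.0, 5.0, 4.0],   # e.g. tune {rho}
    [6.0, 2.0, 5.0],   # e.g. tune {alpha}
    [1.0, 4.0, 2.0],   # e.g. tune {rho, alpha}
]

def mean(xs):
    return sum(xs) / len(xs)

# Context 1: no a priori knowledge -> average performance over all subsets.
no_a_priori = mean([mean(row) for row in ranks])

# Context 2: perfect a posteriori knowledge -> the subset with the lowest
# mean rank, chosen in hindsight.
a_posteriori = min(mean(row) for row in ranks)

print(no_a_priori, a_posteriori)
```

The second value can never be worse than the first, which is why the a-posteriori context introduces a bias in favor of on-line tuning.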

Figure 4: Results simulating perfect a posteriori knowledge on the best parameter(s) to be tuned. Ranks achieved by an on-line method as a function of the number of parameters tuned, on (a) TSP(2000, u) and (b) TSP(x, x). Short runs.

Figure 5: Results simulating no a priori knowledge on the best parameter(s) to be tuned. Simultaneous confidence intervals for all-pairwise comparisons of ranks between all versions of MMAS, applied to two sets of QAP instances: (a) QAP(60, ES100) short runs; (b) QAP(80, xx) short runs; (c) QAP(60, ES100) long runs; (d) QAP(80, xx) long runs.
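The worsening trend summarized by boxplots like those in Figures 2 and 4 can be checked numerically, as in this sketch with illustrative ranks grouped by the number of parameters tuned on-line.

```python
# Illustrative ranks of an on-line variant, grouped by the number of
# parameters it tunes (not the paper's data).
ranks_by_n_params = {
    1: [3, 5, 4, 6, 2],
    2: [7, 6, 8, 5, 9],
    3: [12, 10, 11, 13, 9],
}

mean_rank = {n: sum(r) / len(r) for n, r in ranks_by_n_params.items()}

# The trend "the more parameters are tuned on-line, the worse the rank"
# corresponds to the mean rank increasing with n.
ns = sorted(mean_rank)
worsening = all(mean_rank[a] < mean_rank[b] for a, b in zip(ns, ns[1:]))
print(mean_rank, worsening)
```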

Figure 6: Results simulating no a priori knowledge on the best parameter(s) to be tuned. Ranks achieved by an on-line method as a function of the number of parameters tuned: (a) QAP(60, ES100) short runs; (b) QAP(80, xx) short runs; (c) QAP(60, ES100) long runs; (d) QAP(80, xx) long runs.

are very close in both sets QAP(60, ES100) and QAP(80, xx) for the off-line + on-line and the literature + on-line versions. The best on-line method is in both cases a self-adaptive approach with parameters managed at the colony level. This type of on-line method in general outperforms the methods in which parameters are managed at the ant level, LS and SAa. On set QAP(60, ES100), the average rank of the methods that manage parameters at the colony level is 10.34, better than that of the methods that manage parameters at the ant level; on set QAP(80, xx), the latter reach an average rank of 16.28. In long runs the results are qualitatively very similar, as Figures 5(c) and 5(d) show. The heterogeneity of the sets of instances does not have a remarkable impact on the qualitative results. If we consider the results on all the sets of instances represented in Pellegrini et al (2010a), we can observe that in 20 out of 28 sets of instances the difference is statistically significant in favor of the off-line version. In 13 sets of instances, even the literature version outperforms all on-line versions, literature being slightly better than off-line in 5 sets. Concerning only the on-line versions, in nine sets of instances +SAcm achieves the best results, and in seven sets the best performing is L-SAcb; +SAcb and +SAc are the best in four sets of instances each, while L-SAc and L-SAcm in one and two sets, respectively. The best performing on-line versions always tune one parameter, and the performance decreases as a function of the number of parameters tuned, as shown in Figure 6. The trends are not as clear as for the TSP, possibly owing to the lower number of parameters tuned here.

5.2.2 Experiment 2: perfect a posteriori knowledge on the best parameter(s) to be tuned.

For what concerns the context of perfect a posteriori knowledge on the best parameter(s) to be tuned, we report exemplary results in Figure 7. In short runs, shown in Figures 7(a) and 7(b), the best on-line versions often outperform the off-line one.
In long runs, shown in Figures 7(c) and 7(d), the situation is more favorable to the off-line version. As in all other results, the best performing on-line methods are always self-adaptive approaches with parameters managed at the colony level. The average ranking of these methods is better than that of the methods that manage parameters at the ant level in both QAP(60, ES100) and QAP(80, xx); in the latter set, the ant-level methods reach an average ranking of 15.48. In general the off-line + on-line versions perform slightly better than the literature + on-line ones, but the difference in the average ranking is even smaller than in the case of no a priori knowledge on the best parameter(s) to be tuned. The best on-line version tunes one and two parameters in sets QAP(60, ES100) and QAP(80, xx), respectively. Also in this context, the heterogeneity of the sets of instances does not have a remarkable impact on the results. On the instances not represented here, the off-line version is seldom the best one in short runs. The difference is significant in favor of an on-line version in 20 sets of instances, and in favor of the off-line version in 2 sets, namely QAP(80, RR) and QAP(100, RR), while in 4 sets the results are comparable. It is not possible to identify one best performing on-line method: L-SAcb obtains the best results in 10 sets of instances, L-SAcm in 7, L-SAc in 4, +SAcb in 2, and +SAcm in 3 sets. In long runs, in 23 sets of instances the difference between the off-line and the best on-line versions is not statistically significant; off-line is slightly better in 11 sets and slightly worse in 12 sets. The difference is significant in favor of the off-line version in two sets of instances, and in favor of the best off-line + on-line version in one set. Considering only the on-line versions, +SAcb is the best performing in ten sets of instances, +SAc in five sets, +SAcm in four, L-SAcm and L-SAc in three sets each, L-SAcb in two, and +SAa in one set.
As can be observed in the results reported in Figure 7, differently from the context of no a priori knowledge on the best subset and from the results on the TSP, using the configuration selected by F-Race as the initial one for on-line tuning is often not convenient here. Moreover, in short runs the best on-line version tunes two parameters in six sets of instances, and even three parameters in set QAP(100, ES20). In long runs, the best on-line method tunes one parameter in all but two sets, in which it tunes two. As can be seen in Figure 8, this does not imply an inversion in the trend of the quality of

Figure 7: Results simulating perfect a posteriori knowledge on the best parameter(s) to be tuned. Simultaneous confidence intervals for all-pairwise comparisons of ranks between all versions of MMAS, applied to two sets of QAP instances: (a) QAP(60, ES100) short runs; (b) QAP(80, xx) short runs; (c) QAP(60, ES100) long runs; (d) QAP(80, xx) long runs.

Figure 8: Results simulating perfect a posteriori knowledge on the best parameter(s) to be tuned. Ranks achieved by an on-line method as a function of the number of parameters tuned: (a) QAP(60, ES100) short runs; (b) QAP(80, xx) short runs; (c) QAP(60, ES100) long runs; (d) QAP(80, xx) long runs.

the results as a function of the number of parameters tuned: even if in some cases this trend is not very evident, tuning few parameters is more convenient than tuning many.

6 Conclusions

In this paper, we study the performance of MMAS when its parameters are tuned off-line and when they are tuned on-line. We run an extensive experimental analysis using the TSP and the QAP as test problems for assessing the relative results achievable with the two tuning approaches. For on-line tuning we implement five different methods. We observe the results of these methods as a function of the number of parameters tuned, and we consider two different contexts: on the one hand, we simulate no a priori knowledge on the best combination of parameters to be tuned on-line; on the other hand, we simulate perfect a posteriori knowledge. The second context introduces a strong bias in favor of on-line tuning. The results presented corroborate two of the three conjectures that typically drive the choice between off-line and on-line tuning: off-line tuning achieves good performance when the instances to be tackled are homogeneous; and tuning few parameters on-line is more convenient than tuning many. Instead, in the context considered here, the results refute the conjecture according to which on-line tuning is to be preferred when the instances are heterogeneous: the heterogeneity of the instances solved does not have a remarkable impact on the relative results of off-line and on-line tuning. Moreover, the results show that the configuration used as the initial one in on-line tuning may have a significant impact on the performance. Often, using the configuration selected by off-line tuning appears quite convenient with respect to using the configuration proposed in the literature. This observation, together with the result that on-line tuning can achieve better results when it manages few parameters, suggests that understanding the impact of each parameter on the behavior of the algorithms may help in identifying the most important parameters to be tuned on-line.
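One way such knowledge could be combined with off-line tuning is sketched here. Everything in this sketch is hypothetical: solve() is a pseudo-cost stand-in for running the algorithm, the off-line phase mimics a racing-style selection over a tiny grid, and parameter importance is estimated with a crude cost-spread proxy rather than the paper's methodology.

```python
import random

rng = random.Random(1)

# Hypothetical sketch of hybrid tuning: an off-line phase picks an initial
# configuration and ranks parameter importance; the on-line phase then
# adapts only the most important parameter on each instance.
def solve(config, instance):
    """Stand-in for running the algorithm: pseudo-cost, lower is better.
    Each instance shifts the best value of 'rho' a little."""
    return 0.5 * abs(config["alpha"] - 1.0) + 2.0 * abs(config["rho"] - instance)

def offline_phase(train_instances):
    grid = [{"alpha": a, "rho": r} for a in (1.0, 2.0) for r in (0.2, 0.8)]
    avg = lambda c: sum(solve(c, i) for i in train_instances) / len(train_instances)
    best = min(grid, key=avg)
    # Importance proxy: how much the average cost changes when one
    # parameter moves across its grid range, the other held at its best value.
    spread = {
        "alpha": abs(avg({"alpha": 2.0, "rho": best["rho"]})
                     - avg({"alpha": 1.0, "rho": best["rho"]})),
        "rho": abs(avg({"alpha": best["alpha"], "rho": 0.8})
                   - avg({"alpha": best["alpha"], "rho": 0.2})),
    }
    return best, max(spread, key=spread.get)

def online_phase(config, important, instance, steps=100):
    """Hill-climb only the most important parameter, starting from the
    off-line configuration."""
    config = dict(config)
    best_cost = solve(config, instance)
    for _ in range(steps):
        cand = dict(config)
        cand[important] += rng.gauss(0, 0.05)
        cost = solve(cand, instance)
        if cost <= best_cost:
            config, best_cost = cand, cost
    return best_cost

train = [0.2, 0.3, 0.4]
config, important = offline_phase(train)
cost = online_phase(config, important, instance=0.45)
print(config, important, cost)
```

In this toy setting the off-line phase correctly identifies rho as the parameter worth adapting, and the on-line phase then improves on the off-line configuration for an instance outside the training range.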
Achieving the high level of understanding of an algorithm necessary to this aim may require a high effort. This effort may be significantly reduced by exploiting off-line tuning: a tuning approach based on the hybridization of off-line and on-line tuning might boost the performance of optimization algorithms such as swarm intelligence techniques. In particular, the information on the importance of parameters gained in off-line tuning might be used for choosing which parameters to tune on-line. An on-line method might then tune the configuration to adapt to each specific instance and possibly to each phase of the search.

Acknowledgements

This work was supported by the META-X project, an Action de Recherche Concertée funded by the Scientific Research Directorate of the French Community of Belgium, and by the European Union through the ERC Advanced Grant E-SWARM: Engineering Swarm Intelligence Systems (contract ). Mauro Birattari and Thomas Stützle acknowledge support from the Belgian F.R.S.-FNRS, of which they are Research Associates. The authors thank the colleagues that answered the survey described in the paper.

References

Adenso-Díaz B, Laguna M (2006) Fine-tuning of algorithms using fractional experimental designs and local search. Operations Research 54(1)

Anghinolfi D, Boccalatte A, Paolucci M, Vecchiola C (2008) Performance evaluation of an adaptive ant colony optimization applied to single machine scheduling. In: Li X, et al (eds) SEAL, Springer, Heidelberg, Germany, LNCS, vol 5361


The (, D) and (, N) problems in double-step digraphs with unilateral distance Electronic Journal of Grap Teory and Applications () (), Te (, D) and (, N) problems in double-step digraps wit unilateral distance C Dalfó, MA Fiol Departament de Matemàtica Aplicada IV Universitat Politècnica

More information

MAP MOSAICKING WITH DISSIMILAR PROJECTIONS, SPATIAL RESOLUTIONS, DATA TYPES AND NUMBER OF BANDS 1. INTRODUCTION

MAP MOSAICKING WITH DISSIMILAR PROJECTIONS, SPATIAL RESOLUTIONS, DATA TYPES AND NUMBER OF BANDS 1. INTRODUCTION MP MOSICKING WITH DISSIMILR PROJECTIONS, SPTIL RESOLUTIONS, DT TYPES ND NUMBER OF BNDS Tyler J. lumbaug and Peter Bajcsy National Center for Supercomputing pplications 605 East Springfield venue, Campaign,

More information

Tuning Algorithms for Tackling Large Instances: An Experimental Protocol.

Tuning Algorithms for Tackling Large Instances: An Experimental Protocol. Université Libre de Bruxelles Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle Tuning Algorithms for Tackling Large Instances: An Experimental Protocol. Franco

More information

Search-aware Conditions for Probably Approximately Correct Heuristic Search

Search-aware Conditions for Probably Approximately Correct Heuristic Search Searc-aware Conditions for Probably Approximately Correct Heuristic Searc Roni Stern Ariel Felner Information Systems Engineering Ben Gurion University Beer-Seva, Israel 85104 roni.stern@gmail.com, felner@bgu.ac.il

More information

MATH 5a Spring 2018 READING ASSIGNMENTS FOR CHAPTER 2

MATH 5a Spring 2018 READING ASSIGNMENTS FOR CHAPTER 2 MATH 5a Spring 2018 READING ASSIGNMENTS FOR CHAPTER 2 Note: Tere will be a very sort online reading quiz (WebWork) on eac reading assignment due one our before class on its due date. Due dates can be found

More information

A UPnP-based Decentralized Service Discovery Improved Algorithm

A UPnP-based Decentralized Service Discovery Improved Algorithm Indonesian Journal of Electrical Engineering and Informatics (IJEEI) Vol.1, No.1, Marc 2013, pp. 21~26 ISSN: 2089-3272 21 A UPnP-based Decentralized Service Discovery Improved Algoritm Yu Si-cai*, Wu Yan-zi,

More information

Solving a combinatorial problem using a local optimization in ant based system

Solving a combinatorial problem using a local optimization in ant based system Solving a combinatorial problem using a local optimization in ant based system C-M.Pintea and D.Dumitrescu Babeş-Bolyai University of Cluj-Napoca, Department of Computer-Science Kogalniceanu 1, 400084

More information

An Anchor Chain Scheme for IP Mobility Management

An Anchor Chain Scheme for IP Mobility Management An Ancor Cain Sceme for IP Mobility Management Yigal Bejerano and Israel Cidon Department of Electrical Engineering Tecnion - Israel Institute of Tecnology Haifa 32000, Israel E-mail: bej@tx.tecnion.ac.il.

More information

MAPI Computer Vision

MAPI Computer Vision MAPI Computer Vision Multiple View Geometry In tis module we intend to present several tecniques in te domain of te 3D vision Manuel Joao University of Mino Dep Industrial Electronics - Applications -

More information

Distributed and Optimal Rate Allocation in Application-Layer Multicast

Distributed and Optimal Rate Allocation in Application-Layer Multicast Distributed and Optimal Rate Allocation in Application-Layer Multicast Jinyao Yan, Martin May, Bernard Plattner, Wolfgang Mülbauer Computer Engineering and Networks Laboratory, ETH Zuric, CH-8092, Switzerland

More information

Université Libre de Bruxelles

Université Libre de Bruxelles Université Libre de Bruxelles Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle Improvement Strategies for the F-Race algorithm: Sampling Design and Iterative

More information

An Analysis of Algorithmic Components for Multiobjective Ant Colony Optimization: A Case Study on the Biobjective TSP

An Analysis of Algorithmic Components for Multiobjective Ant Colony Optimization: A Case Study on the Biobjective TSP An Analysis of Algorithmic Components for Multiobjective Ant Colony Optimization: A Case Study on the Biobjective TSP Manuel López-Ibáñez and Thomas Stützle IRIDIA, CoDE, Université Libre de Bruxelles,

More information

Ant Colony Optimization for dynamic Traveling Salesman Problems

Ant Colony Optimization for dynamic Traveling Salesman Problems Ant Colony Optimization for dynamic Traveling Salesman Problems Carlos A. Silva and Thomas A. Runkler Siemens AG, Corporate Technology Information and Communications, CT IC 4 81730 Munich - Germany thomas.runkler@siemens.com

More information

An Effective Sensor Deployment Strategy by Linear Density Control in Wireless Sensor Networks Chiming Huang and Rei-Heng Cheng

An Effective Sensor Deployment Strategy by Linear Density Control in Wireless Sensor Networks Chiming Huang and Rei-Heng Cheng An ffective Sensor Deployment Strategy by Linear Density Control in Wireless Sensor Networks Ciming Huang and ei-heng Ceng 5 De c e mbe r0 International Journal of Advanced Information Tecnologies (IJAIT),

More information

LECTURE 20: SWARM INTELLIGENCE 6 / ANT COLONY OPTIMIZATION 2

LECTURE 20: SWARM INTELLIGENCE 6 / ANT COLONY OPTIMIZATION 2 15-382 COLLECTIVE INTELLIGENCE - S18 LECTURE 20: SWARM INTELLIGENCE 6 / ANT COLONY OPTIMIZATION 2 INSTRUCTOR: GIANNI A. DI CARO ANT-ROUTING TABLE: COMBINING PHEROMONE AND HEURISTIC 2 STATE-TRANSITION:

More information

ANT colony optimization (ACO) [1] is a metaheuristic. The Automatic Design of Multi-Objective Ant Colony Optimization Algorithms

ANT colony optimization (ACO) [1] is a metaheuristic. The Automatic Design of Multi-Objective Ant Colony Optimization Algorithms FINAL DRAFT PUBLISHED IN IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION 1 The Automatic Design of Multi-Objective Ant Colony Optimization Algorithms Manuel López-Ibáñez and Thomas Stützle Abstract Multi-objective

More information

1.4 RATIONAL EXPRESSIONS

1.4 RATIONAL EXPRESSIONS 6 CHAPTER Fundamentals.4 RATIONAL EXPRESSIONS Te Domain of an Algebraic Epression Simplifying Rational Epressions Multiplying and Dividing Rational Epressions Adding and Subtracting Rational Epressions

More information

The Euler and trapezoidal stencils to solve d d x y x = f x, y x

The Euler and trapezoidal stencils to solve d d x y x = f x, y x restart; Te Euler and trapezoidal stencils to solve d d x y x = y x Te purpose of tis workseet is to derive te tree simplest numerical stencils to solve te first order d equation y x d x = y x, and study

More information

Adaptive Ant Colony Optimization for the Traveling Salesman Problem

Adaptive Ant Colony Optimization for the Traveling Salesman Problem - Diplomarbeit - (Implementierungsarbeit) Adaptive Ant Colony Optimization for the Traveling Salesman Problem Michael Maur Mat.-Nr.: 1192603 @stud.tu-darmstadt.de Eingereicht im Dezember 2009

More information

Energy efficient temporal load aware resource allocation in cloud computing datacenters

Energy efficient temporal load aware resource allocation in cloud computing datacenters Vakilinia Journal of Cloud Computing: Advances, Systems and Applications (2018) 7:2 DOI 10.1186/s13677-017-0103-2 Journal of Cloud Computing: Advances, Systems and Applications RESEARCH Energy efficient

More information

16th European Signal Processing Conference (EUSIPCO 2008), Lausanne, Switzerland, August 25-29, 2008, copyright by EURASIP

16th European Signal Processing Conference (EUSIPCO 2008), Lausanne, Switzerland, August 25-29, 2008, copyright by EURASIP 16t European Signal Processing Conference (EUSIPCO 008), Lausanne, Switzerland, August 5-9, 008, copyrigt by EURASIP ADAPTIVE WINDOW FOR LOCAL POLYNOMIAL REGRESSION FROM NOISY NONUNIFORM SAMPLES A. Sreenivasa

More information

You Try: A. Dilate the following figure using a scale factor of 2 with center of dilation at the origin.

You Try: A. Dilate the following figure using a scale factor of 2 with center of dilation at the origin. 1 G.SRT.1-Some Tings To Know Dilations affect te size of te pre-image. Te pre-image will enlarge or reduce by te ratio given by te scale factor. A dilation wit a scale factor of 1> x >1enlarges it. A dilation

More information

Two Modifications of Weight Calculation of the Non-Local Means Denoising Method

Two Modifications of Weight Calculation of the Non-Local Means Denoising Method Engineering, 2013, 5, 522-526 ttp://dx.doi.org/10.4236/eng.2013.510b107 Publised Online October 2013 (ttp://www.scirp.org/journal/eng) Two Modifications of Weigt Calculation of te Non-Local Means Denoising

More information

ACO for Maximal Constraint Satisfaction Problems

ACO for Maximal Constraint Satisfaction Problems MIC 2001-4th Metaheuristics International Conference 187 ACO for Maximal Constraint Satisfaction Problems Andrea Roli Christian Blum Marco Dorigo DEIS - Università di Bologna Viale Risorgimento, 2 - Bologna

More information

CHAPTER 7: TRANSCENDENTAL FUNCTIONS

CHAPTER 7: TRANSCENDENTAL FUNCTIONS 7.0 Introduction and One to one Functions Contemporary Calculus 1 CHAPTER 7: TRANSCENDENTAL FUNCTIONS Introduction In te previous capters we saw ow to calculate and use te derivatives and integrals of

More information

Optimal In-Network Packet Aggregation Policy for Maximum Information Freshness

Optimal In-Network Packet Aggregation Policy for Maximum Information Freshness 1 Optimal In-etwork Packet Aggregation Policy for Maimum Information Fresness Alper Sinan Akyurek, Tajana Simunic Rosing Electrical and Computer Engineering, University of California, San Diego aakyurek@ucsd.edu,

More information

A Cost Model for Distributed Shared Memory. Using Competitive Update. Jai-Hoon Kim Nitin H. Vaidya. Department of Computer Science

A Cost Model for Distributed Shared Memory. Using Competitive Update. Jai-Hoon Kim Nitin H. Vaidya. Department of Computer Science A Cost Model for Distributed Sared Memory Using Competitive Update Jai-Hoon Kim Nitin H. Vaidya Department of Computer Science Texas A&M University College Station, Texas, 77843-3112, USA E-mail: fjkim,vaidyag@cs.tamu.edu

More information

PYRAMID FILTERS BASED ON BILINEAR INTERPOLATION

PYRAMID FILTERS BASED ON BILINEAR INTERPOLATION PYRAMID FILTERS BASED ON BILINEAR INTERPOLATION Martin Kraus Computer Grapics and Visualization Group, Tecnisce Universität Müncen, Germany krausma@in.tum.de Magnus Strengert Visualization and Interactive

More information

An Evaluation of Alternative Continuous Media Replication Techniques in Wireless Peer-to-Peer Networks

An Evaluation of Alternative Continuous Media Replication Techniques in Wireless Peer-to-Peer Networks An Evaluation of Alternative Continuous Media eplication Tecniques in Wireless Peer-to-Peer Networks Saram Gandearizade and Tooraj Helmi Department of Computer Science University of Soutern California

More information

Alternating Direction Implicit Methods for FDTD Using the Dey-Mittra Embedded Boundary Method

Alternating Direction Implicit Methods for FDTD Using the Dey-Mittra Embedded Boundary Method Te Open Plasma Pysics Journal, 2010, 3, 29-35 29 Open Access Alternating Direction Implicit Metods for FDTD Using te Dey-Mittra Embedded Boundary Metod T.M. Austin *, J.R. Cary, D.N. Smite C. Nieter Tec-X

More information

12.2 TECHNIQUES FOR EVALUATING LIMITS

12.2 TECHNIQUES FOR EVALUATING LIMITS Section Tecniques for Evaluating Limits 86 TECHNIQUES FOR EVALUATING LIMITS Wat ou sould learn Use te dividing out tecnique to evaluate its of functions Use te rationalizing tecnique to evaluate its of

More information

CSCE476/876 Spring Homework 5

CSCE476/876 Spring Homework 5 CSCE476/876 Spring 2016 Assigned on: Friday, Marc 11, 2016 Due: Monday, Marc 28, 2016 Homework 5 Programming assignment sould be submitted wit andin Te report can eiter be submitted wit andin as a PDF,

More information

Université Libre de Bruxelles

Université Libre de Bruxelles Université Libre de Bruxelles Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle On the Sensitivity of Reactive Tabu Search to its Meta-parameters Paola Pellegrini,

More information

A Statistical Approach for Target Counting in Sensor-Based Surveillance Systems

A Statistical Approach for Target Counting in Sensor-Based Surveillance Systems Proceedings IEEE INFOCOM A Statistical Approac for Target Counting in Sensor-Based Surveillance Systems Dengyuan Wu, Decang Cen,aiXing, Xiuzen Ceng Department of Computer Science, Te George Wasington University,

More information

Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms

Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms Swarm Intell (2012) 6:49 75 DOI 10.1007/s11721-011-0065-9 Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms Zhi Yuan MarcoA.MontesdeOca Mauro Birattari

More information

SlidesGen: Automatic Generation of Presentation Slides for a Technical Paper Using Summarization

SlidesGen: Automatic Generation of Presentation Slides for a Technical Paper Using Summarization Proceedings of te Twenty-Second International FLAIRS Conference (2009) SlidesGen: Automatic Generation of Presentation Slides for a Tecnical Paper Using Summarization M. Sravanti, C. Ravindranat Cowdary

More information

Redundancy Awareness in SQL Queries

Redundancy Awareness in SQL Queries Redundancy Awareness in QL Queries Bin ao and Antonio Badia omputer Engineering and omputer cience Department University of Louisville bin.cao,abadia @louisville.edu Abstract In tis paper, we study QL

More information

13.5 DIRECTIONAL DERIVATIVES and the GRADIENT VECTOR

13.5 DIRECTIONAL DERIVATIVES and the GRADIENT VECTOR 13.5 Directional Derivatives and te Gradient Vector Contemporary Calculus 1 13.5 DIRECTIONAL DERIVATIVES and te GRADIENT VECTOR Directional Derivatives In Section 13.3 te partial derivatives f x and f

More information

Network Coding-Aware Queue Management for Unicast Flows over Coded Wireless Networks

Network Coding-Aware Queue Management for Unicast Flows over Coded Wireless Networks Network Coding-Aware Queue Management for Unicast Flows over Coded Wireless Networks Hulya Seferoglu, Atina Markopoulou EECS Dept, University of California, Irvine {seferog, atina}@uci.edu Abstract We

More information

Efficient Content-Based Indexing of Large Image Databases

Efficient Content-Based Indexing of Large Image Databases Efficient Content-Based Indexing of Large Image Databases ESSAM A. EL-KWAE University of Nort Carolina at Carlotte and MANSUR R. KABUKA University of Miami Large image databases ave emerged in various

More information

Capacity on Demand User s Guide

Capacity on Demand User s Guide System z Capacity on Demand User s Guide SC28-6846-02 Level 02c, October 2009 System z Capacity on Demand User s Guide SC28-6846-02 Level 02c, October 2009 Note Before using tis information and te product

More information

Ant Algorithms for the University Course Timetabling Problem with Regard to the State-of-the-Art

Ant Algorithms for the University Course Timetabling Problem with Regard to the State-of-the-Art Ant Algorithms for the University Course Timetabling Problem with Regard to the State-of-the-Art Krzysztof Socha, Michael Sampels, and Max Manfrin IRIDIA, Université Libre de Bruxelles, CP 194/6, Av. Franklin

More information

Minimizing Memory Access By Improving Register Usage Through High-level Transformations

Minimizing Memory Access By Improving Register Usage Through High-level Transformations Minimizing Memory Access By Improving Register Usage Troug Hig-level Transformations San Li Scool of Computer Engineering anyang Tecnological University anyang Avenue, SIGAPORE 639798 Email: p144102711@ntu.edu.sg

More information

Software Fault Prediction using Machine Learning Algorithm Pooja Garg 1 Mr. Bhushan Dua 2

Software Fault Prediction using Machine Learning Algorithm Pooja Garg 1 Mr. Bhushan Dua 2 IJSRD - International Journal for Scientific Researc & Development Vol. 3, Issue 04, 2015 ISSN (online): 2321-0613 Software Fault Prediction using Macine Learning Algoritm Pooja Garg 1 Mr. Busan Dua 2

More information

2.8 The derivative as a function

2.8 The derivative as a function CHAPTER 2. LIMITS 56 2.8 Te derivative as a function Definition. Te derivative of f(x) istefunction f (x) defined as follows f f(x + ) f(x) (x). 0 Note: tis differs from te definition in section 2.7 in

More information

Swarm Intelligence (Ant Colony Optimization)

Swarm Intelligence (Ant Colony Optimization) (Ant Colony Optimization) Prof. Dr.-Ing. Habil Andreas Mitschele-Thiel M.Sc.-Inf Mohamed Kalil 19 November 2009 1 Course description Introduction Course overview Concepts of System Engineering Swarm Intelligence

More information

Parallel Simulation of Equation-Based Models on CUDA-Enabled GPUs

Parallel Simulation of Equation-Based Models on CUDA-Enabled GPUs Parallel Simulation of Equation-Based Models on CUDA-Enabled GPUs Per Ostlund Department of Computer and Information Science Linkoping University SE-58183 Linkoping, Sweden per.ostlund@liu.se Kristian

More information

Limits and Continuity

Limits and Continuity CHAPTER Limits and Continuit. Rates of Cange and Limits. Limits Involving Infinit.3 Continuit.4 Rates of Cange and Tangent Lines An Economic Injur Level (EIL) is a measurement of te fewest number of insect

More information

Classification of Osteoporosis using Fractal Texture Features

Classification of Osteoporosis using Fractal Texture Features Classification of Osteoporosis using Fractal Texture Features V.Srikant, C.Dines Kumar and A.Tobin Department of Electronics and Communication Engineering Panimalar Engineering College Cennai, Tamil Nadu,

More information

Chapter K. Geometric Optics. Blinn College - Physics Terry Honan

Chapter K. Geometric Optics. Blinn College - Physics Terry Honan Capter K Geometric Optics Blinn College - Pysics 2426 - Terry Honan K. - Properties of Ligt Te Speed of Ligt Te speed of ligt in a vacuum is approximately c > 3.0µ0 8 mês. Because of its most fundamental

More information

An experimental analysis of design choices of multi-objective ant colony optimization algorithms

An experimental analysis of design choices of multi-objective ant colony optimization algorithms Swarm Intell (2012) 6:207 232 DOI 10.1007/s11721-012-0070-7 An experimental analysis of design choices of multi-objective ant colony optimization algorithms Manuel López-Ibáñez Thomas Stützle Received:

More information

Ant Colony Optimization

Ant Colony Optimization DM841 DISCRETE OPTIMIZATION Part 2 Heuristics Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. earch 2. Context Inspiration from Nature 3. 4. 5.

More information

International Journal of Computational Intelligence and Applications c World Scientific Publishing Company

International Journal of Computational Intelligence and Applications c World Scientific Publishing Company International Journal of Computational Intelligence and Applications c World Scientific Publishing Company The Accumulated Experience Ant Colony for the Traveling Salesman Problem JAMES MONTGOMERY MARCUS

More information

Scheduling Non-Preemptible Jobs to Minimize Peak Demand. Received: 22 September 2017; Accepted: 25 October 2017; Published: 28 October 2017

Scheduling Non-Preemptible Jobs to Minimize Peak Demand. Received: 22 September 2017; Accepted: 25 October 2017; Published: 28 October 2017 algoritms Article Sceduling Non-Preemptible Jobs to Minimize Peak Demand Sean Yaw 1 ID and Brendan Mumey 2, * ID 1 Los Alamos National Laboratoy, Los Alamos, NM 87545, USA; yaw@lanl.gov 2 Gianforte Scool

More information

An Analytical Approach to Real-Time Misbehavior Detection in IEEE Based Wireless Networks

An Analytical Approach to Real-Time Misbehavior Detection in IEEE Based Wireless Networks Tis paper was presented as part of te main tecnical program at IEEE INFOCOM 20 An Analytical Approac to Real-Time Misbeavior Detection in IEEE 802. Based Wireless Networks Jin Tang, Yu Ceng Electrical

More information