A Genetic Algorithm hybridized with the Discrete Lagrangian Method for trap escaping
Madalina Raschip and Cornelius Croitoru
Al.I.Cuza University of Iasi, Romania

Abstract. This paper introduces a genetic algorithm enhanced with a trap-escaping strategy derived from the dual information present in the discrete Lagrange multipliers. When the genetic algorithm is trapped in a local optimum, the Discrete Lagrange Multiplier method is called for the best individual found. The information provided by the Lagrangian method is unified, through recombination, with that of the last population of the genetic algorithm, and the genetic algorithm is then restarted from this new, improved configuration. The proposed algorithm is tested on the winner determination problem. Experiments are conducted on instances generated with the CATS system. The results show that the method is viable.

1 Introduction

Genetic algorithms (GAs) are powerful optimization techniques that work with populations of individuals which are improved at each iteration using specific operators. When dealing with difficult real-world problems, they may become trapped in local optima. Different methods for addressing this problem have been developed in the literature. Maintaining population diversity is a preventive approach. A straightforward procedure is to increase the mutation rate after a change has been detected. The loss of diversity depends on the selection intensity; niching methods assist the selection procedure in order to reduce the genetic drift it causes [1]. The island model [2], random immigrants [3] and restarting [4] are other techniques used for maintaining diversity. Restarting techniques, for example, are invoked inside genetic algorithms when some threshold is reached (local convergence is typically detected when no progress has been made for a long time). The current run is terminated and the algorithm is restarted with a new seed.
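As a small illustration of the restart trigger just described, stagnation can be detected by tracking the best fitness over generations (a minimal sketch; the function name and the `patience` parameter are ours, not from the paper):

```python
def should_restart(best_history, patience=75):
    """Return True when the best fitness has not improved over the
    last `patience` generations (local convergence is suspected)."""
    if len(best_history) <= patience:
        return False
    # no generation in the recent window beat the earlier best
    return max(best_history[-patience:]) <= max(best_history[:-patience])
```

The default threshold of 75 mirrors the stagnation limit used later in the experimental settings.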
Recently, restarting techniques have also been applied to complete algorithms based on backtracking for constraint satisfaction problems, including the satisfiability problem [5], where they yield good performance improvements. Recent publications describe hybrid approaches which often lead to faster and more robust algorithms for hard optimization problems [6]. The traditional methods come in two distinct flavors: heuristic search algorithms, which find a satisfactory even if not necessarily optimal solution, and exact algorithms, which guarantee finding a provably optimal solution. Hybrid methods were developed in order to borrow ideas from both sources.
In particular, the hybridization of metaheuristics with Integer (linear) programming techniques has proven to be feasible and useful in practice [7]. The two complementary techniques benefit from the synergy. For example, the information provided by LP-relaxed solutions can be exploited inside metaheuristics: it is useful for creating promising initial solutions, inside repairing procedures, or for guiding local improvement [7]. Approaches which use dual variables and the relations between primal and dual variables are also present in the literature. For example, in [8] the shadow prices of the relaxed Multi-constrained Knapsack problem are used by a genetic algorithm inside a repairing procedure; ratios based on the shadow prices give the likelihood of the items being included in a solution. In [9] a primal-dual variable neighborhood search for the simple plant location problem is presented. After a primal feasible solution is obtained using a variable neighborhood decomposition search, a dual solution which exploits the complementary slackness conditions is created. The dual solution is transformed into an exact solution and used to derive a good lower bound and subsequently to strengthen a Branch and Bound algorithm. In [10] a hybrid technique based on duality information is proposed in order to escape from local optima. When the evolutionary algorithm reaches a local trap, the method leads the search out of the local optimum: it constructs the appropriate dual relaxed space and improves it. The evolutionary algorithm is then restarted with a new population of primal individuals generated using the information from the dual solutions. The method was applied to the winner determination problem. The new approach presented here is based on the ideas from [10], but it uses a Lagrange multiplier method inside the genetic algorithm. Hybrid methods based on Lagrange multipliers are not new.
For example, in [11] an approach that combines Lagrangian decomposition with local search based metaheuristics was proposed. Our idea is to start the Discrete Lagrange Multiplier (DLM) method with the initial solution equal to the best individual from the genetic algorithm whenever the GA is stuck in a local optimum. The DLM method is a general search method based on Lagrange multipliers (the dual solutions). The Lagrangian method gives a new solution to be used by the GA in order to escape from the local optimum. In contrast to local search methods that restart from a new starting point when trapped in a local optimum, the new method moves the search out of the local optimum in a direction provided by recombination with the DLM solution. Traditional restart-based methods introduce breaks in the search trajectory; the new method escapes from a local optimum along a continuous trajectory. The advantage is that the global optimum may lie in the vicinity of the local optimum already found. The method was tested on the winner determination problem (WDP) from the combinatorial auction field. In combinatorial auctions, multiple distinct items are sold simultaneously and the bidders may bid on combinations of items [12]. The valuation of a combination of items is not necessarily equal to the sum of the valuations of the individual items. This expressiveness can lead to more efficient allocations, as applications to many real-world problems have demonstrated [13]. The
problem of determining the winners is computationally complex (NP-complete and inapproximable) [14].

The paper is organized as follows. Section 2 presents the new approach. The following section describes the winner determination problem together with the application of the general scheme to the WDP. Next, the experimental results on CATS instances are shown. Finally, conclusions are drawn.

2 The hybrid method

The method follows the ideas from [10]. When the genetic search reaches a local trap, the approach runs the Discrete Lagrange Multiplier method, with the best individual from the primal genetic algorithm as its starting point. Because of time constraints, DLM is run only for the best individual. The Discrete Lagrange Multiplier method is the discrete version of the continuous Lagrange multiplier method; it uses difference equations instead of differential calculus. Penalty-based schemes use only the weights of the violated constraints to define a direction in which to move; the DLM method relies also on the balance between the objective function and the constraints. When the search reaches a local optimum, DLM uses the Lagrange multipliers to lead the search out of it. The DLM algorithm thus helps the genetic algorithm escape from local optima. The approach restarts the genetic algorithm with a modified initial configuration: the last population of the genetic algorithm is recombined with the solution provided by the DLM method. By using past experience, in the form of the last population of individuals, together with the new information resulting from a DLM run, which continues the previous search direction, the algorithm is able to improve its future performance. These steps are iterated several times, or until an optimal solution is found. The scheme of the algorithm is presented in Figure 1. In contrast to [10], the new scheme does not need to transform a primal solution into a dual one.
The dual solutions are initialized greedily and then modified online in accordance with the modified primal solutions. Another benefit of the new scheme is that the DLM algorithm itself provides the solution used for restarting the evolutionary algorithm; in the previous approach, the primal solutions were constructed in a greedy manner from the dual ones. In [15] a framework based on genetic algorithms and a constrained simulated annealing method was proposed for solving constrained optimization problems. It searches for discrete-neighborhood saddle points by performing ascents in the original-variable subspace and descents in the Lagrange-multiplier subspace. The simulated annealing technique provides initial solutions for the genetic algorithm and could be replaced by the DLM method. The purpose of using DLM in our method is to escape from local optima.
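The overall restart scheme can be sketched as follows. This is a toy, self-contained illustration on a bit-string maximization problem: `run_ga` and `dlm_stand_in` are deliberately simplified stand-ins (the latter is a plain greedy improvement, not the actual DLM), and only the control flow, trap, DLM on the best individual, recombination, restart, follows the method described above:

```python
import random

def fitness(x):
    return sum(x)

def run_ga(pop, gens=30):
    """Simplistic GA: keep the top half, mutate one bit per parent."""
    for _ in range(gens):
        pop = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
        children = []
        for parent in pop:
            child = parent[:]
            i = random.randrange(len(child))
            child[i] ^= 1
            children.append(child)
        pop = pop + children
    return pop, max(pop, key=fitness)

def dlm_stand_in(x):
    """Stand-in for the DLM escape: one greedy distance-1 improvement."""
    y = x[:]
    for i, bit in enumerate(y):
        if bit == 0:
            y[i] = 1
            break
    return y

def uniform_crossover(a, b):
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def primal_dlm_ga(n=20, pop_size=10, restarts=3):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = None
    for _ in range(restarts):
        pop, best = run_ga(pop)              # evolve until "trapped"
        dlm_best = dlm_stand_in(best)        # escape direction from DLM
        # recombine the whole last population with the DLM solution
        pop = [uniform_crossover(ind, dlm_best) for ind in pop]
    return best
```

The point of the sketch is the outer loop: the search is not restarted from scratch, but from the last population pulled towards the DLM solution.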
[Fig. 1. The primal-DLM GA schema: the GA evolves population p; its best individual seeds the DLM search, whose saddle-point solution s is combined with population p to form population p+1.]

3 The hybrid method applied to WDP

3.1 Winner determination

An auctioneer has a set of goods, M = {1, 2, ..., m}, to sell. The buyers (bidders) submit a set of bids, B = {B_1, ..., B_n}. A bid is a tuple B_j = (S_j, p_j), where S_j ⊆ M is a set of goods and p_j is a price. The winner determination problem is to label the bids as winning or losing so as to maximize the auctioneer's revenue (the sum of the accepted bid prices) under the constraint that each good is allocated to at most one bid. The problem can be formulated as an Integer Linear Programming problem as follows:

  max Σ_{j=1}^{n} p_j x_j
  s.t. Σ_{j : i ∈ S_j} x_j ≤ 1,  i = 1, 2, ..., m        (WDP)
       x_j ∈ {0, 1},  j = 1, 2, ..., n

x_j = 1 if bid j with price p_j is selected in the solution and x_j = 0 otherwise. The definition assumes free disposal, i.e. not all items need to be covered. If there is no free disposal, an equality is used in the constraint formulation. Different methods for solving the problem have been developed. Complete methods based on the Branch and Bound procedure [16] or linear programming [17] were designed. Stochastic methods like stochastic local search [18], simulated annealing [19] and genetic algorithms [20] have also been applied to the problem. In [21] a heuristic method based on the Lagrangian relaxation with
subgradient optimization is proposed for the WDP. The heuristic methods compare well with CPLEX and with other exact algorithms.

3.2 The Discrete Lagrangian Method for WDP

Lagrange multiplier methods were developed for continuous constrained optimization problems. For a minimization problem, they perform descents in the original space and ascents in the Lagrange-multiplier space; equilibrium is reached when an optimal solution is found. The Discrete Lagrangian Method (DLM) is a global search method which works on discrete values. It was initially proposed for solving satisfiability problems [22]. A Lagrangian function determines the search direction, and the method escapes from a local optimum by using the information provided by the Lagrange multipliers. Define the problem WDP(λ):

  max Σ_{j=1}^{n} p_j x_j + Σ_{i=1}^{m} λ_i (1 − Σ_{j : i ∈ S_j} x_j)
  s.t. x_j ∈ {0, 1},  j = 1, 2, ..., n        (WDP(λ))

for any Lagrange multiplier vector λ = (λ_1, ..., λ_m) with λ_i ≥ 0 for all i = 1, 2, ..., m, as the discrete Lagrangian formulation of WDP. The formulation with x_j ∈ [0, 1] is the classical (continuous) Lagrangian formulation. The discrete Lagrangian function is defined as:

  L(x, λ) = Σ_{j=1}^{n} p_j x_j + Σ_{i=1}^{m} λ_i (1 − Σ_{j : i ∈ S_j} x_j)

DLM searches for a saddle point of the problem WDP(λ). A saddle point (x*, λ*) of L(x, λ) satisfies the condition

  L(x, λ*) ≤ L(x*, λ*) ≤ L(x*, λ)

for all λ sufficiently close to λ* and for all x whose Hamming distance to x* is 1. The pseudo-code of the algorithm is given next. Note that the solution x may be infeasible for WDP, because it is a solution of the problem WDP(λ). The step of updating the Lagrange multipliers is detailed next. Denote by s(λ) = (s_i(λ)), i = 1, 2, ..., m, the subgradient vector:

  s_i(λ) = 1 − Σ_{j : i ∈ S_j} x_j(λ)
  ‖s(λ)‖² = Σ_{i=1}^{m} s_i(λ)²
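Under the definitions above, the Lagrangian function, the subgradient and one multiplier-update step (formula (1), with the multipliers clamped at zero, a detail we add to match the requirement λ_i ≥ 0) can be sketched in Python; bids are (S_j, p_j) pairs over items 1..m, and the function names are ours:

```python
def lagrangian(bids, m, x, lam):
    """L(x, lam) = sum_j p_j x_j + sum_i lam_i (1 - sum_{j: i in S_j} x_j)."""
    value = sum(p * xj for (_, p), xj in zip(bids, x))
    for i in range(1, m + 1):
        demand = sum(xj for (S, _), xj in zip(bids, x) if i in S)
        value += lam[i - 1] * (1 - demand)
    return value

def subgradient(bids, m, x):
    """s_i = 1 - sum_{j: i in S_j} x_j, the remaining 'stock' of item i."""
    return [1 - sum(xj for (S, _), xj in zip(bids, x) if i in S)
            for i in range(1, m + 1)]

def update_multipliers(lam, s, L_value, LB, step_size):
    """One step of update (1); the clamp at zero keeps lam_i >= 0."""
    norm_sq = sum(si * si for si in s)
    if norm_sq == 0:
        return lam[:]                     # supply and demand are balanced
    factor = step_size * (LB - L_value) / norm_sq   # <= 0 since L >= LB
    return [max(0.0, li + factor * si) for li, si in zip(lam, s)]
```

An over-demanded item (s_i < 0) gets its price raised and an unallocated one (s_i = 1) gets it lowered, matching the interpretation given in the text.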
Algorithm 1 DLM(WDP)
  set the initial solution x (random, in the Lagrangian space)
  set the initial Lagrange multipliers λ (greedily)
  step_size ← 1
  while x is not a solution do
    find the first (or best) improving neighbor x′ of x (at Hamming distance 1)
    if such an x′ exists then
      replace x with x′
    else
      update the Lagrange multipliers (using step_size)
    end if
    if the best solution does not change for a number of consecutive iterations then
      step_size ← step_size / 2
    end if
  end while

The Lagrange multipliers λ^k (at iteration k) can be computed from λ^{k−1} using the following formula:

  λ_i^k = λ_i^{k−1} + step_size · (LB − L(x, λ^{k−1})) / ‖s(λ^{k−1})‖² · s_i(λ^{k−1})        (1)

where LB is the best lower bound found so far. The Lagrange multiplier λ can be interpreted as a price vector for the items. The subgradient s_i(λ) denotes the stock of item i. When an item is out of stock, i.e. s_i(λ) < 0 (more than one bid requests item i), the price λ_i of item i is increased. Otherwise, if the item is not allocated, s_i(λ) = 1 and the price of the item is lowered. When s_i(λ) = 0, supply and demand are balanced, so the price of the item is not changed.

3.3 The scheme of the hybrid algorithm for WDP

The genetic algorithm starts with a population of individuals, which are possible solutions to the WDP. The individuals are evolved towards better solutions using a selection scheme and specific operators such as mutation and crossover. An individual is encoded as a permutation of bids, and a solution is constructed according to the permutation: a first-fit algorithm decodes the permutation into a feasible solution. It starts with an empty allocation and considers each bid in the order determined by the permutation; a bid is included in the solution if it satisfies the restrictions together with the previously selected bids. This representation ensures the feasibility of the children. A disadvantage is that the search space becomes larger, because the same solution can be encoded by multiple permutations. The scheme of the hybrid algorithm is presented below.
The initial population is generated randomly. The fitness function is equal to the objective function of the WDP, that is, the auctioneer's revenue. A fitness-based selection
Algorithm 2 Primal-DLM-GA()
  initialize the primal population
  while the stopping condition is not met do
    while not trapped in a local optimum do
      selection
      apply operators
      local optimization (use the best dual relaxed solution)
    end while
    best_dlm ← DLM(best)
    recombine the population with best_dlm
  end while

is used, as well as the standard permutation operators, namely uniform order-based crossover and swap mutation. After the application of the operators, each solution is improved using a local optimization step. The same optimization method as in [10] is used: an unsatisfied bid is selected greedily to be added to the solution. The bid i with the largest shadow surplus p_i / Σ_{j ∈ S_i} y_j is considered, where the dual prices y_j come from the best dual relaxed solution found at a previous DLM step. The positions of the new bid and of the first bid in the permutation that conflicts with it are swapped, and the assignment of bids is recomputed. If the value of the new chromosome is better, the procedure continues for a number of iterations; otherwise the optimization stops. The genetic algorithm uses elitism: at each iteration the best solution is kept in the population. The algorithm iterates for a number of steps, or until a local optimum is reached. When the genetic algorithm is stuck in a local optimum, the DLM algorithm is called; the primal starting point for DLM is the best individual from the GA. All the individuals of the last GA population are then recombined, using the crossover operator, with the best solution found by DLM, and the genetic algorithm is restarted for a number of steps.

4 Experiments

4.1 Experimental settings

The method was tested on instances from the CATS test suite [23]. Each distribution models a realistic scenario: for example, the arbitrary distribution simulates the auction of various electronic components, and the regions distribution simulates the auction of radio spectrum rights.
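As a concrete illustration of the permutation representation from Section 3.3, the first-fit decoding of a bid permutation can be sketched as follows (a minimal sketch; the function name is ours):

```python
def decode(perm, bids):
    """First-fit decoding: scan bids in permutation order and accept a bid
    when its goods are disjoint from those of the bids already accepted.
    Returns (accepted bid indices, total revenue)."""
    allocated = set()
    accepted = []
    revenue = 0.0
    for j in perm:
        goods, price = bids[j]
        if not (allocated & set(goods)):
            allocated |= set(goods)
            accepted.append(j)
            revenue += price
    return accepted, revenue
```

Different permutations may decode to different allocations, which is why, as remarked in Section 3.3, the same solution can have several encodings while every decoded child is feasible.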
Problems from each of the main distributions were generated: arbitrary, matching, paths, regions and scheduling, with a variable number of bids and items. The number of items ranges from 40 to 400 and the number of bids ranges upwards from 50. Ten problem instances were drawn from each distribution.
The optimal solutions were determined using a mixed integer linear program solver [24]. If the solver could not produce a solution in a reasonable amount of time, the approximation algorithm ALPH [25] was used instead. The ALPH algorithm first runs an approximation algorithm on the linear programming relaxation of the problem; a hill-climbing algorithm then improves the order of the bids determined earlier. For this purpose the ALPH heuristic was run with a small value of the approximation error parameter ϵ of the linear programming phase. For eight instances (out of the ten generated) from the arbitrary distribution the LP solver was unable to find an exact solution; four instances from the regions distribution were also not solved by the LP solver.

The new method, denoted PDLMGA, uses a population size of 500 individuals, a crossover probability of 0.6 and a mutation probability of . The maximum number of iterations is set to 500, and the number of consecutive iterations without a change of the best individual was set to 75. The number of restarts is set to five. To avoid increasing the execution time, the DLM algorithm used the step of finding the first improving neighbor of the current solution. The DLM algorithm was stopped after a maximum of 1500 iterations, and the number of consecutive iterations without a change of the primal best was set to 30.

The PDLMGA method was compared against the stochastic algorithm ALPH, the Casanova approach [18] and the PDGA algorithm [10]. In [25] it was shown that ALPH runs faster than CPLEX on large problem instances. The ALPH algorithm was run with the parameter ϵ equal to 0.2 (the same value as in the experiments from [25]); this approximation error is larger than the value used when computing the optimum. For the Casanova algorithm we considered a walk probability of 0.2 and a novelty probability of . The maximum number of steps for Casanova was equal to the product of the number of individuals and the number of iterations.
The number of independent searches for Casanova was equal to the number of restarts of the genetic algorithm. The PDGA algorithm has the same settings as the PDLMGA.

4.2 Results

Table 1 displays the results obtained for CATS instances with variable-size bids. The results are averaged over 20 independent runs for each problem instance, except for the ALPH algorithm. As the measure of comparison we used the gap from the optimum, which is equal to the difference between the optimum value and the value of the objective function for the solution found, divided by the optimum value. The Wilcoxon signed-rank non-parametric test was conducted for pairs of approaches: the PDLMGA versus each of ALPH, Casanova and PDGA. The null hypothesis was that there is no difference in the performance of the two algorithms; p-values below 0.05 were considered statistically significant. In the cases where the differences are significant, the winner is marked in bold. The best solutions are provided by the ALPH algorithm (note that ALPH is an algorithm specially constructed for the WDP). The new approach gives statistically better results than the Casanova algorithm and the PDGA algorithm for almost
all distributions, except the matching data set. For five problems (out of eight) from the arbitrary distribution and two problems (out of four) from the regions distribution which were not solved by the LP solver, the PDLMGA found better solutions than ALPH.

Table 1. The average gap (in percent) for CATS instances
Distribution | ALPH | Casanova | PDGA | PDLMGA
arbitrary, matching, paths, regions, scheduling

The mean and the standard deviation of the best fitness value found by the genetic algorithms on the instances of the paths distribution are shown in Table 2. The best means are provided by the PDLMGA approach. The small standard deviations show that the algorithms are robust and find good solutions consistently.

Table 2. The mean and the standard deviation for ten instances of the paths data set
Instance (goods,bids) | optimal | Casanova mean (stdev) | PDGA mean (stdev) | PDLMGA mean (stdev)
(219,1132) (1.11) (0.89) (0.89)
(61,1198) (0.37) 25.3 (0.17) (0.11)
(51,279) (0.58) 23.4 (0.15) (0.06)
(302,185) (0.49) (0.11) (0.09)
(159,1028) (0.78) (0.97) (0.36)
(129,1913) (0.57) (0.8) (0.24)
(44,1208) (0.19) 16.9 (0.17) (0.08)
(117,970) (0.64) 39.7 (0.61) (0.46)
(189,1332) (0.80) (1.06) (0.42)
(90,789) (0.41) (0.27) (0.21)

Because the new approach is compared against a stochastic local search algorithm, the time costs also need to be considered; local search approaches are often orders of magnitude faster. Figure 2 presents the running times (in seconds) of the two algorithms¹. In our case, the genetic algorithm runs faster than the Casanova approach. Note that the GA uses a mechanism for early stopping in case of premature convergence, while Casanova does not include such a mechanism. The DLM method itself is simple and runs fast, as Table 3 shows: it reports how much of the total time the new approach spends running the DLM method.

¹ Computer settings: 2 GHz Pentium single-core processor, 1 GB RAM
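The comparison measure defined above can be written compactly (the helper name is ours):

```python
def gap_percent(optimum, found):
    """Gap from the optimum, in percent: (optimum - found) / optimum * 100."""
    return 100.0 * (optimum - found) / optimum
```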
[Fig. 2. The running time (in seconds) of the Casanova and PDLMGA approaches on the five distributions (arbitrary, matching, paths, regions, scheduling)]

Table 3. The percentage of the total PDLMGA time spent running DLM
Distribution | arbitrary | matching | paths | regions | scheduling (time %)

How much does the new method improve? Next we analyze the amount of improvement achieved by the new method. We compared the new approach with: a random restart algorithm without the local optimization step (rrga), and the same algorithm but with the DLM method started from a random initial solution (rpdlmga). Table 4 shows the differences of the gaps (in percent) between the PDLMGA and the two algorithms above. The matching data set appears to be the one that makes the most extensive use of the dual information. The scheduling distribution is easy to solve with a GA.

Table 4. The differences of the (average) gaps (in percent) between the considered algorithms and the PDLMGA
Distribution | rrga | rpdlmga
arbitrary, matching, paths, regions
scheduling | 0.7 | 0
The number of restarts. Table 5 shows the gap versus the number of restarts used in the PDLMGA for the paths distribution; for each case the time requirements are also given. As expected, the accuracy of the algorithm improves with a larger number of restarts; a trade-off between performance and computation cost must be struck.

Table 5. The number of restarts vs. the gap for the paths distribution. For each configuration the running time is shown (in seconds).
restarts | gap | time

Evaluation on more difficult problems. Further experiments were made on instances with a larger number of bids: problems from the matching distribution and from the paths distribution with 10000 and 20000 bids (and 256 items) were considered. The algorithms kept the same parameters as in the experiments with smaller instances. The results are presented in Table 6. The Wilcoxon signed-rank non-parametric test was conducted on pairs of algorithms, except for ALPH; in cases where the differences are significant (at the 0.05 level) the winner is marked in bold.

Table 6. The average gap (in percent) for larger CATS instances
Distribution (bids, items) | ALPH | Casanova | PDGA | PDLMGA
matching (10000,256)
paths (10000,256)
paths (20000,256)

The generated instances appear not to be too difficult for the approximate methods. Casanova is the most affected by the increasing size of the problems. The solution quality of PDLMGA is superior to that of Casanova and PDGA on the large paths instances. For the matching data set, PDGA seems to be the best alternative among the compared algorithms; ALPH again finds better solutions. For difficult problem instances approximate methods are preferred to the classical ones: in experiments, Casanova outperformed the CASS algorithm, a deterministic approach, on large problem instances [18].
The results found by the new approach compare favorably to those of simpler genetic algorithms and of other local search techniques, such as Casanova.
5 Conclusion

The paper investigates the development of a novel hybrid algorithm combining techniques from the Evolutionary Computing and Integer Programming areas. The new hybrid evolutionary algorithm uses dual information, in the form of Lagrange multipliers, to escape from local optima. The method was applied to an important problem from the combinatorial auction realm. It was tested on different types of problem instances, and the obtained allocations are very close to the optimal solutions. Although at first sight the algorithm seems complex and time-consuming, it is fast enough to solve problems with tens of thousands of bids in less than a minute. Applying the method to other optimization problems is left for future work.

References

1. Mahfoud, S.: Niching methods for genetic algorithms. University of Illinois at Urbana-Champaign (1996)
2. Starkweather, T., Whitley, D. and Mathias, K.: Optimization using distributed genetic algorithms. In Schwefel, H.P., Männer, R., editors, Parallel Problem Solving from Nature, Springer-Verlag (1990)
3. Cobb, H.G., Grefenstette, J.F.: Genetic algorithms for tracking changing environments. In Forrest, S., editor, Proceedings of the 5th International Conference on Genetic Algorithms (1993)
4. Fukunaga, A.S.: Restart scheduling for genetic algorithms. In Proceedings of the 5th Conference on Parallel Problem Solving from Nature, LNCS 1498, Springer (1998)
5. Gomes, C., Selman, B., Crato, N. and Kautz, H.: Heavy-tailed phenomena in satisfiability and constraint satisfaction problems. Journal of Automated Reasoning, 24(1/2) (2000)
6. Raidl, G.: A unified view on hybrid metaheuristics. In Proceedings of Hybrid Metaheuristics, LNCS 4030, Springer, 1-12 (2006)
7. Raidl, G. and Puchinger, J.: Combining (Integer) Linear Programming Techniques and Metaheuristics for Combinatorial Optimization. In Hybrid Metaheuristics, An Emerging Approach to Optimization, Studies in Computational Intelligence, volume 114 (2008)
8.
Pfeiffer, J., Rothlauf, F.: Analysis of Greedy Heuristics and Weight-Coded EAs for Multidimensional Knapsack Problems and Multi-Unit Combinatorial Auctions. In Proceedings of the 9th Conference on Genetic and Evolutionary Computation, p.1529 (2007) 9. Hansen, P., Brimberg, J., Mladenović, N., Urosević, D.: Primal-dual variable neighbourhood search for the simple plant location problem. INFORMS Journal on Computing, 19(4): (2007) 10. Raschip, M. and Croitoru, C.: A New Primal-Dual Genetic Algorithm: Case Study for the Winner Determination Problem. Evolutionary Computation in Combinatorial Optimization, Lecture Notes in Computer Science, 6022: (2010) 11. Leitner, M. and Raidl, G.: Lagrangian Decomposition, Metaheuristics, and Hybrid Approaches for the Design of the Last Mile in Fiber Optic Networks. In Proceedings of Hybrid Metaheuristics, volume 5296, Springer LNCS, (2008)
12. de Vries, S. and Vohra, R.: Combinatorial auctions: A survey. INFORMS Journal on Computing, 15(3) (2000)
13. Rassenti, S.J., Smith, V.L. and Bulfin, R.L.: A combinatorial auction mechanism for airport time slot allocation. Bell Journal of Economics, 13 (1982)
14. Rothkopf, M., Pekec, A. and Harstad, R.: Computationally manageable combinatorial auctions. Management Science, 44(8) (1998)
15. Wah, B.W. and Chen, Y.X.: Constrained genetic algorithms and their applications in nonlinear constrained optimization. In Evolutionary Optimization, International Series in Operations Research and Management Science, volume 48 (IV) (2003)
16. Sandholm, T., Suri, S., Gilpin, A. and Levine, D.: CABOB: a fast optimal algorithm for combinatorial auctions. In Proceedings of the International Joint Conference on Artificial Intelligence (2001)
17. Nisan, N.: Bidding and Allocation in Combinatorial Auctions. In Proceedings of the ACM Conference on Electronic Commerce, 1-12 (2000)
18. Hoos, H.H. and Boutilier, C.: Solving combinatorial auctions using stochastic local search. In Proceedings of the 17th National Conference on Artificial Intelligence (2000)
19. Guo, Y., Lim, A., Rodrigues, B. and Zhu, Y.: Heuristics for a bidding problem. Computers and Operations Research, 33(8) (2006)
20. Boughaci, D., Benhamou, B. and Drias, H.: A memetic algorithm for the optimal winner determination problem. Soft Computing, 13(8-9) (2009)
21. Guo, Y., Lim, A., Rodrigues, B. and Tang, J.: Using a Lagrangian Heuristic for a Combinatorial Auction Problem. In Proceedings of the 17th IEEE International Conference on Tools with Artificial Intelligence (2005)
22. Shang, Y. and Wah, B.: A Discrete Lagrangian-Based Global-Search Method for Solving Satisfiability Problems. Journal of Global Optimization, 12:61-99 (1998)
23. Leyton-Brown, K., Pearson, M. and Shoham, Y.: Towards a Universal Test Suite for Combinatorial Auction Algorithms. In Proceedings of the ACM Conference on Electronic Commerce (2000)
24.
Berkelaar, M.: lp_solve, version 5.5. Eindhoven University of Technology
25. Zurel, E. and Nisan, N.: An Efficient Approximate Allocation Algorithm for Combinatorial Auctions. In Proceedings of the ACM Conference on Electronic Commerce (2001)
More informationMeta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic Algorithm and Particle Swarm Optimization
2017 2 nd International Electrical Engineering Conference (IEEC 2017) May. 19 th -20 th, 2017 at IEP Centre, Karachi, Pakistan Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic
More informationMarch 19, Heuristics for Optimization. Outline. Problem formulation. Genetic algorithms
Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis and Dimensioning II Department of Electronics and Communications Engineering Tampere University of Technology, Tampere, Finland March 19, 2014
More informationSPATIAL OPTIMIZATION METHODS
DELMELLE E. (2010). SPATIAL OPTIMIZATION METHODS. IN: B. WHARF (ED). ENCYCLOPEDIA OF HUMAN GEOGRAPHY: 2657-2659. SPATIAL OPTIMIZATION METHODS Spatial optimization is concerned with maximizing or minimizing
More informationVariable Neighborhood Search for Solving the Balanced Location Problem
TECHNISCHE UNIVERSITÄT WIEN Institut für Computergraphik und Algorithmen Variable Neighborhood Search for Solving the Balanced Location Problem Jozef Kratica, Markus Leitner, Ivana Ljubić Forschungsbericht
More informationUsing Penalties instead of Rewards: Solving OCST Problems with Problem-Specific Guided Local Search
Using Penalties instead of Rewards: Solving OCST Problems with Problem-Specific Guided Local Search Wolfgang Steitz, Franz Rothlauf Working Paper 01/2011 March 2011 Working Papers in Information Systems
More informationABSTRACT I. INTRODUCTION. J Kanimozhi *, R Subramanian Department of Computer Science, Pondicherry University, Puducherry, Tamil Nadu, India
ABSTRACT 2018 IJSRSET Volume 4 Issue 4 Print ISSN: 2395-1990 Online ISSN : 2394-4099 Themed Section : Engineering and Technology Travelling Salesman Problem Solved using Genetic Algorithm Combined Data
More informationMetaheuristic Optimization with Evolver, Genocop and OptQuest
Metaheuristic Optimization with Evolver, Genocop and OptQuest MANUEL LAGUNA Graduate School of Business Administration University of Colorado, Boulder, CO 80309-0419 Manuel.Laguna@Colorado.EDU Last revision:
More informationREAL-CODED GENETIC ALGORITHMS CONSTRAINED OPTIMIZATION. Nedim TUTKUN
REAL-CODED GENETIC ALGORITHMS CONSTRAINED OPTIMIZATION Nedim TUTKUN nedimtutkun@gmail.com Outlines Unconstrained Optimization Ackley s Function GA Approach for Ackley s Function Nonlinear Programming Penalty
More informationBinary Representations of Integers and the Performance of Selectorecombinative Genetic Algorithms
Binary Representations of Integers and the Performance of Selectorecombinative Genetic Algorithms Franz Rothlauf Department of Information Systems University of Bayreuth, Germany franz.rothlauf@uni-bayreuth.de
More informationAn Efficient Approximate Algorithm for Winner Determination in Combinatorial Auctions
An Efficient Approximate Algorithm for Winner Determination in Combinatorial Auctions Yuko Sakurai, Makoto Yokoo, and Koji Kamei NTT Communication Science Laboratories, 2-4 Hikaridai, Seika-cho, Soraku-gun,
More informationSOLVING THE TASK ASSIGNMENT PROBLEM WITH A VARIABLE NEIGHBORHOOD SEARCH. Jozef Kratica, Aleksandar Savić, Vladimir Filipović, Marija Milanović
Serdica J. Computing 4 (2010), 435 446 SOLVING THE TASK ASSIGNMENT PROBLEM WITH A VARIABLE NEIGHBORHOOD SEARCH Jozef Kratica, Aleksandar Savić, Vladimir Filipović, Marija Milanović Abstract. In this paper
More informationAlgorithm Design (4) Metaheuristics
Algorithm Design (4) Metaheuristics Takashi Chikayama School of Engineering The University of Tokyo Formalization of Constraint Optimization Minimize (or maximize) the objective function f(x 0,, x n )
More informationGenetic Algorithm Performance with Different Selection Methods in Solving Multi-Objective Network Design Problem
etic Algorithm Performance with Different Selection Methods in Solving Multi-Objective Network Design Problem R. O. Oladele Department of Computer Science University of Ilorin P.M.B. 1515, Ilorin, NIGERIA
More information3. Genetic local search for Earth observation satellites operations scheduling
Distance preserving recombination operator for Earth observation satellites operations scheduling Andrzej Jaszkiewicz Institute of Computing Science, Poznan University of Technology ul. Piotrowo 3a, 60-965
More informationTabu Search for Constraint Solving and Its Applications. Jin-Kao Hao LERIA University of Angers 2 Boulevard Lavoisier Angers Cedex 01 - France
Tabu Search for Constraint Solving and Its Applications Jin-Kao Hao LERIA University of Angers 2 Boulevard Lavoisier 49045 Angers Cedex 01 - France 1. Introduction The Constraint Satisfaction Problem (CSP)
More informationCombinatorial Auctions: A Survey by de Vries and Vohra
Combinatorial Auctions: A Survey by de Vries and Vohra Ashwin Ganesan EE228, Fall 2003 September 30, 2003 1 Combinatorial Auctions Problem N is the set of bidders, M is the set of objects b j (S) is the
More informationExploration vs. Exploitation in Differential Evolution
Exploration vs. Exploitation in Differential Evolution Ângela A. R. Sá 1, Adriano O. Andrade 1, Alcimar B. Soares 1 and Slawomir J. Nasuto 2 Abstract. Differential Evolution (DE) is a tool for efficient
More informationVariable Neighborhood Search Based Algorithm for University Course Timetabling Problem
Variable Neighborhood Search Based Algorithm for University Course Timetabling Problem Velin Kralev, Radoslava Kraleva South-West University "Neofit Rilski", Blagoevgrad, Bulgaria Abstract: In this paper
More informationA HYBRID APPROACH IN GENETIC ALGORITHM: COEVOLUTION OF THREE VECTOR SOLUTION ENCODING. A CASE-STUDY
A HYBRID APPROACH IN GENETIC ALGORITHM: COEVOLUTION OF THREE VECTOR SOLUTION ENCODING. A CASE-STUDY Dmitriy BORODIN, Victor GORELIK, Wim DE BRUYN and Bert VAN VRECKEM University College Ghent, Ghent, Belgium
More informationChapter 14: Optimal Winner Determination Algorithms
Chapter 14: Optimal Winner Determination Algorithms Tuomas Sandholm 1 Introduction This chapter discusses optimal winner determination algorithms for combinatorial auctions (CAs). We say the auctioneer
More informationGRASP. Greedy Randomized Adaptive. Search Procedure
GRASP Greedy Randomized Adaptive Search Procedure Type of problems Combinatorial optimization problem: Finite ensemble E = {1,2,... n } Subset of feasible solutions F 2 Objective function f : 2 Minimisation
More informationC N O S N T S RA R INT N - T BA B SE S D E L O L C O A C L S E S A E RC R H
LECTURE 11 & 12 CONSTRAINT-BASED LOCAL SEARCH Constraint-based Local Search Problem given in CSP form : a set of variables V={V1, V2,, Vn} a set of constraints C={C1, C2,, Ck} i.e. arithmetic or symbolic
More informationA Steady-State Genetic Algorithm for Traveling Salesman Problem with Pickup and Delivery
A Steady-State Genetic Algorithm for Traveling Salesman Problem with Pickup and Delivery Monika Sharma 1, Deepak Sharma 2 1 Research Scholar Department of Computer Science and Engineering, NNSS SGI Samalkha,
More informationData Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University
Data Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Search & Optimization Search and Optimization method deals with
More informationEvolutionary Computation for Combinatorial Optimization
Evolutionary Computation for Combinatorial Optimization Günther Raidl Vienna University of Technology, Vienna, Austria raidl@ads.tuwien.ac.at EvoNet Summer School 2003, Parma, Italy August 25, 2003 Evolutionary
More informationCHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM
20 CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 2.1 CLASSIFICATION OF CONVENTIONAL TECHNIQUES Classical optimization methods can be classified into two distinct groups:
More informationSolving the Capacitated Single Allocation Hub Location Problem Using Genetic Algorithm
Solving the Capacitated Single Allocation Hub Location Problem Using Genetic Algorithm Faculty of Mathematics University of Belgrade Studentski trg 16/IV 11 000, Belgrade, Serbia (e-mail: zoricast@matf.bg.ac.yu)
More informationSurrogate Gradient Algorithm for Lagrangian Relaxation 1,2
Surrogate Gradient Algorithm for Lagrangian Relaxation 1,2 X. Zhao 3, P. B. Luh 4, and J. Wang 5 Communicated by W.B. Gong and D. D. Yao 1 This paper is dedicated to Professor Yu-Chi Ho for his 65th birthday.
More informationHeuristic Optimization Introduction and Simple Heuristics
Heuristic Optimization Introduction and Simple Heuristics José M PEÑA (jmpena@fi.upm.es) (Universidad Politécnica de Madrid) 1 Outline 1. What are optimization problems? 2. Exhaustive vs. Heuristic approaches
More informationCMU-Q Lecture 8: Optimization I: Optimization for CSP Local Search. Teacher: Gianni A. Di Caro
CMU-Q 15-381 Lecture 8: Optimization I: Optimization for CSP Local Search Teacher: Gianni A. Di Caro LOCAL SEARCH FOR CSP Real-life CSPs can be very large and hard to solve Methods so far: construct a
More informationAn Evolutionary Algorithm for the Multi-objective Shortest Path Problem
An Evolutionary Algorithm for the Multi-objective Shortest Path Problem Fangguo He Huan Qi Qiong Fan Institute of Systems Engineering, Huazhong University of Science & Technology, Wuhan 430074, P. R. China
More informationSelected Topics in Column Generation
Selected Topics in Column Generation February 1, 2007 Choosing a solver for the Master Solve in the dual space(kelly s method) by applying a cutting plane algorithm In the bundle method(lemarechal), a
More informationAnalysis of Greedy Heuristics and Weight-Coded EAs for Multidimensional Knapsack Problems and Multi-Unit Combinatorial Auctions
Analysis of Greedy Heuristics and Weight-Coded EAs for Multidimensional Knapsack Problems and Multi-Unit Combinatorial Auctions Jella Pfeiffer and Franz Rothlauf Working paper 1/2007 March 2007 Working
More informationCombinatorial Double Auction Winner Determination in Cloud Computing using Hybrid Genetic and Simulated Annealing Algorithm
Combinatorial Double Auction Winner Determination in Cloud Computing using Hybrid Genetic and Simulated Annealing Algorithm Ali Sadigh Yengi Kand, Ali Asghar Pourhai Kazem Department of Computer Engineering,
More informationGraph Coloring via Constraint Programming-based Column Generation
Graph Coloring via Constraint Programming-based Column Generation Stefano Gualandi Federico Malucelli Dipartimento di Elettronica e Informatica, Politecnico di Milano Viale Ponzio 24/A, 20133, Milan, Italy
More informationDERIVATIVE-FREE OPTIMIZATION
DERIVATIVE-FREE OPTIMIZATION Main bibliography J.-S. Jang, C.-T. Sun and E. Mizutani. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. Prentice Hall, New Jersey,
More informationChapter II. Linear Programming
1 Chapter II Linear Programming 1. Introduction 2. Simplex Method 3. Duality Theory 4. Optimality Conditions 5. Applications (QP & SLP) 6. Sensitivity Analysis 7. Interior Point Methods 1 INTRODUCTION
More informationUsing Genetic Algorithms to solve the Minimum Labeling Spanning Tree Problem
Using to solve the Minimum Labeling Spanning Tree Problem Final Presentation, oliverr@umd.edu Advisor: Dr Bruce L. Golden, bgolden@rhsmith.umd.edu R. H. Smith School of Business (UMD) May 3, 2012 1 / 42
More informationCHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION
131 CHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION 6.1 INTRODUCTION The Orthogonal arrays are helpful in guiding the heuristic algorithms to obtain a good solution when applied to NP-hard problems. This
More informationUsing Genetic Algorithms to optimize ACS-TSP
Using Genetic Algorithms to optimize ACS-TSP Marcin L. Pilat and Tony White School of Computer Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada {mpilat,arpwhite}@scs.carleton.ca
More informationCrew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm. Santos and Mateus (2007)
In the name of God Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm Spring 2009 Instructor: Dr. Masoud Yaghini Outlines Problem Definition Modeling As A Set Partitioning
More informationArtificial Intelligence
Artificial Intelligence Informed Search and Exploration Chapter 4 (4.3 4.6) Searching: So Far We ve discussed how to build goal-based and utility-based agents that search to solve problems We ve also presented
More informationHeuristis for Combinatorial Optimization
Heuristis for Combinatorial Optimization Luigi De Giovanni Dipartimento di Matematica, Università di Padova Luigi De Giovanni Heuristic for Combinatorial Optimization 1 / 57 Exact and heuristic methods
More informationA Firework Algorithm for Solving Capacitated Vehicle Routing Problem
A Firework Algorithm for Solving Capacitated Vehicle Routing Problem 1 Noora Hani Abdulmajeed and 2* Masri Ayob 1,2 Data Mining and Optimization Research Group, Center for Artificial Intelligence, Faculty
More informationOptimizing the Sailing Route for Fixed Groundfish Survey Stations
International Council for the Exploration of the Sea CM 1996/D:17 Optimizing the Sailing Route for Fixed Groundfish Survey Stations Magnus Thor Jonsson Thomas Philip Runarsson Björn Ævar Steinarsson Presented
More informationGenetic Algorithms and Genetic Programming Lecture 7
Genetic Algorithms and Genetic Programming Lecture 7 Gillian Hayes 13th October 2006 Lecture 7: The Building Block Hypothesis The Building Block Hypothesis Experimental evidence for the BBH The Royal Road
More informationLecture 6: The Building Block Hypothesis. Genetic Algorithms and Genetic Programming Lecture 6. The Schema Theorem Reminder
Lecture 6: The Building Block Hypothesis 1 Genetic Algorithms and Genetic Programming Lecture 6 Gillian Hayes 9th October 2007 The Building Block Hypothesis Experimental evidence for the BBH The Royal
More informationA Genetic Algorithm for the Multiple Knapsack Problem in Dynamic Environment
, 23-25 October, 2013, San Francisco, USA A Genetic Algorithm for the Multiple Knapsack Problem in Dynamic Environment Ali Nadi Ünal Abstract The 0/1 Multiple Knapsack Problem is an important class of
More informationARTIFICIAL INTELLIGENCE (CSCU9YE ) LECTURE 5: EVOLUTIONARY ALGORITHMS
ARTIFICIAL INTELLIGENCE (CSCU9YE ) LECTURE 5: EVOLUTIONARY ALGORITHMS Gabriela Ochoa http://www.cs.stir.ac.uk/~goc/ OUTLINE Optimisation problems Optimisation & search Two Examples The knapsack problem
More informationMethods and Models for Combinatorial Optimization Heuristis for Combinatorial Optimization
Methods and Models for Combinatorial Optimization Heuristis for Combinatorial Optimization L. De Giovanni 1 Introduction Solution methods for Combinatorial Optimization Problems (COPs) fall into two classes:
More informationSolving Large Aircraft Landing Problems on Multiple Runways by Applying a Constraint Programming Approach
Solving Large Aircraft Landing Problems on Multiple Runways by Applying a Constraint Programming Approach Amir Salehipour School of Mathematical and Physical Sciences, The University of Newcastle, Australia
More informationFinding optimal configurations Adversarial search
CS 171 Introduction to AI Lecture 10 Finding optimal configurations Adversarial search Milos Hauskrecht milos@cs.pitt.edu 39 Sennott Square Announcements Homework assignment is out Due on Thursday next
More informationAn Improved Hybrid Genetic Algorithm for the Generalized Assignment Problem
An Improved Hybrid Genetic Algorithm for the Generalized Assignment Problem Harald Feltl and Günther R. Raidl Institute of Computer Graphics and Algorithms Vienna University of Technology, Vienna, Austria
More informationLocal search heuristic for multiple knapsack problem
International Journal of Intelligent Information Systems 2015; 4(2): 35-39 Published online February 14, 2015 (http://www.sciencepublishinggroup.com/j/ijiis) doi: 10.11648/j.ijiis.20150402.11 ISSN: 2328-7675
More information5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing. 6. Meta-heuristic Algorithms and Rectangular Packing
1. Introduction 2. Cutting and Packing Problems 3. Optimisation Techniques 4. Automated Packing Techniques 5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing 6.
More informationAnalysis of Greedy Heuristics and Weight-Coded EAs for Multidimensional Knapsack Problems and Multi-Unit Combinatorial Auctions
Analysis of Greedy Heuristics and Weight-Coded EAs for Multidimensional Knapsack Problems and Multi-Unit Combinatorial Auctions Jella Pfeiffer and Franz Rothlauf Working Paper 01.03.2007 Working Papers
More informationA Survey of Solving Approaches for Multiple Objective Flexible Job Shop Scheduling Problems
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 15, No 2 Sofia 2015 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.1515/cait-2015-0025 A Survey of Solving Approaches
More informationHeuristis for Combinatorial Optimization
Heuristis for Combinatorial Optimization Luigi De Giovanni Dipartimento di Matematica, Università di Padova Luigi De Giovanni Heuristic for Combinatorial Optimization 1 / 59 Exact and heuristic methods
More informationn Informally: n How to form solutions n How to traverse the search space n Systematic: guarantee completeness
Advanced Search Applications: Combinatorial Optimization Scheduling Algorithms: Stochastic Local Search and others Analyses: Phase transitions, structural analysis, statistical models Combinatorial Problems
More informationFine-grained efficient resource allocation using approximated combinatorial auctions
Web Intelligence and Agent Systems: An International Journal 7 (2009) 43 63 43 DOI 10.3233/WIA-2009-0154 IOS Press Fine-grained efficient resource allocation using approximated combinatorial auctions A
More informationCS5401 FS2015 Exam 1 Key
CS5401 FS2015 Exam 1 Key This is a closed-book, closed-notes exam. The only items you are allowed to use are writing implements. Mark each sheet of paper you use with your name and the string cs5401fs2015
More informationMassively Parallel Seesaw Search for MAX-SAT
Massively Parallel Seesaw Search for MAX-SAT Harshad Paradkar Rochester Institute of Technology hp7212@rit.edu Prof. Alan Kaminsky (Advisor) Rochester Institute of Technology ark@cs.rit.edu Abstract The
More informationLocal Search for CSPs
Local Search for CSPs Alan Mackworth UBC CS CSP February, 0 Textbook. Lecture Overview Domain splitting: recap, more details & pseudocode Local Search Time-permitting: Stochastic Local Search (start) Searching
More informationLamarckian Repair and Darwinian Repair in EMO Algorithms for Multiobjective 0/1 Knapsack Problems
Repair and Repair in EMO Algorithms for Multiobjective 0/ Knapsack Problems Shiori Kaige, Kaname Narukawa, and Hisao Ishibuchi Department of Industrial Engineering, Osaka Prefecture University, - Gakuen-cho,
More informationIntroduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs
Introduction to Mathematical Programming IE406 Lecture 20 Dr. Ted Ralphs IE406 Lecture 20 1 Reading for This Lecture Bertsimas Sections 10.1, 11.4 IE406 Lecture 20 2 Integer Linear Programming An integer
More informationA Hybrid Genetic Algorithm for the Distributed Permutation Flowshop Scheduling Problem Yan Li 1, a*, Zhigang Chen 2, b
International Conference on Information Technology and Management Innovation (ICITMI 2015) A Hybrid Genetic Algorithm for the Distributed Permutation Flowshop Scheduling Problem Yan Li 1, a*, Zhigang Chen
More informationSpeeding Up the ESG Algorithm
Speeding Up the ESG Algorithm Yousef Kilani 1 and Abdullah. Mohdzin 2 1 Prince Hussein bin Abdullah Information Technology College, Al Al-Bayt University, Jordan 2 Faculty of Information Science and Technology,
More informationSimple mechanisms for escaping from local optima:
The methods we have seen so far are iterative improvement methods, that is, they get stuck in local optima. Simple mechanisms for escaping from local optima: I Restart: re-initialise search whenever a
More informationAn Adaptive Genetic Algorithm for Solving N- Queens Problem
An Adaptive Genetic Algorithm for Solving N- ueens Problem Uddalok Sarkar 1, * Sayan Nag 1 1 Department of Electrical Engineering Jadavpur University Kolkata, India uddaloksarkar@gmail.com, * nagsayan112358@gmail.com
More informationVariable Neighborhood Search for the Bounded Diameter Minimum Spanning Tree Problem
Variable Neighborhood Search for the Bounded Diameter Minimum Spanning Tree Problem Martin Gruber 1, Günther R. Raidl 1 1 Institute of Computer Graphics and Algorithms Vienna University of Technology Favoritenstraße
More informationApplying Metaheuristic Techniques to Search the Space of Bidding Strategies in Combinatorial Auctions
Applying Metaheuristic Techniques to Search the Space of Bidding Strategies in Combinatorial Auctions Ashish Sureka and Peter R. Wurman Department of Computer Science North Carolina State University Raleigh,
More informationAutomatic Generation of Test Case based on GATS Algorithm *
Automatic Generation of Test Case based on GATS Algorithm * Xiajiong Shen and Qian Wang Institute of Data and Knowledge Engineering Henan University Kaifeng, Henan Province 475001, China shenxj@henu.edu.cn
More informationGeneralized Combinatorial Auction for Mixed Integer Linear Programming. Mark Michael
Generalized Combinatorial Auction for Mixed Integer Linear Programming by Mark Michael A thesis submitted in conformity with the requirements for the degree of Master of Applied Science Graduate Department
More informationOpen Vehicle Routing Problem Optimization under Realistic Assumptions
Int. J. Research in Industrial Engineering, pp. 46-55 Volume 3, Number 2, 204 International Journal of Research in Industrial Engineering www.nvlscience.com Open Vehicle Routing Problem Optimization under
More informationBi-Objective Optimization for Scheduling in Heterogeneous Computing Systems
Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Tony Maciejewski, Kyle Tarplee, Ryan Friese, and Howard Jay Siegel Department of Electrical and Computer Engineering Colorado
More informationCABOB: A Fast Optimal Algorithm for Winner Determination in Combinatorial Auctions
: A Fast Optimal Algorithm for Winner Determination in Combinatorial Auctions Tuomas Sandholm sandholm@cs.cmu.edu Computer Science Department Carnegie Mellon University Pittsburgh, PA 523 Subhash Suri
More informationSolving Sudoku Puzzles with Node Based Coincidence Algorithm
Solving Sudoku Puzzles with Node Based Coincidence Algorithm Kiatsopon Waiyapara Department of Compute Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand kiatsopon.w@gmail.com
More information3 INTEGER LINEAR PROGRAMMING
3 INTEGER LINEAR PROGRAMMING PROBLEM DEFINITION Integer linear programming problem (ILP) of the decision variables x 1,..,x n : (ILP) subject to minimize c x j j n j= 1 a ij x j x j 0 x j integer n j=
More information1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra
Pattern Recall Analysis of the Hopfield Neural Network with a Genetic Algorithm Susmita Mohapatra Department of Computer Science, Utkal University, India Abstract: This paper is focused on the implementation
More informationMINIMAL EDGE-ORDERED SPANNING TREES USING A SELF-ADAPTING GENETIC ALGORITHM WITH MULTIPLE GENOMIC REPRESENTATIONS
Proceedings of Student/Faculty Research Day, CSIS, Pace University, May 5 th, 2006 MINIMAL EDGE-ORDERED SPANNING TREES USING A SELF-ADAPTING GENETIC ALGORITHM WITH MULTIPLE GENOMIC REPRESENTATIONS Richard
More informationA Development of Hybrid Cross Entropy-Tabu Search Algorithm for Travelling Repairman Problem
Proceedings of the 2012 International Conference on Industrial Engineering and Operations Management Istanbul, Turkey, July 3 6, 2012 A Development of Hybrid Cross Entropy-Tabu Search Algorithm for Travelling
More informationAutomated Test Data Generation and Optimization Scheme Using Genetic Algorithm
2011 International Conference on Software and Computer Applications IPCSIT vol.9 (2011) (2011) IACSIT Press, Singapore Automated Test Data Generation and Optimization Scheme Using Genetic Algorithm Roshni
More informationA genetic algorithm for kidney transplantation matching
A genetic algorithm for kidney transplantation matching S. Goezinne Research Paper Business Analytics Supervisors: R. Bekker and K. Glorie March 2016 VU Amsterdam Faculty of Exact Sciences De Boelelaan
More informationComputational study of the step size parameter of the subgradient optimization method
1 Computational study of the step size parameter of the subgradient optimization method Mengjie Han 1 Abstract The subgradient optimization method is a simple and flexible linear programming iterative
More informationarxiv: v1 [cs.ai] 12 Feb 2017
GENETIC AND MEMETIC ALGORITHM WITH DIVERSITY EQUILIBRIUM BASED ON GREEDY DIVERSIFICATION ANDRÉS HERRERA-POYATOS 1 AND FRANCISCO HERRERA 1,2 arxiv:1702.03594v1 [cs.ai] 12 Feb 2017 1 Research group Soft
More information