ACO for Maximal Constraint Satisfaction Problems


MIC'2001 - 4th Metaheuristics International Conference

Andrea Roli, Christian Blum, Marco Dorigo
DEIS - Università di Bologna, Viale Risorgimento 2, Bologna (Italy). Email: aroli@deis.unibo.it
IRIDIA - Université Libre de Bruxelles, Av. Roosevelt 50, Bruxelles (Belgium). Email: {aroli, cblum, mdorigo}@ulb.ac.be

1 Introduction

The Ant Colony Optimization (ACO) metaheuristic [2] has recently been applied with success to a number of combinatorial optimization problems (see [3] for an overview). In this paper we present and discuss an application of ACO to Constraint Satisfaction Problems (CSPs).

1.1 CSPs

A CSP is formally defined as a triple $(X, D, C)$, where $X = \{x_1, \dots, x_n\}$ is the set of variables, $D = \{D_1, \dots, D_n\}$ is the set of domains defining the values each variable can assume, and $C = \{C_1, \dots, C_m\}$ is the set of constraints among the variables. In the following we consider only finite-domain problems, i.e., each domain has finite cardinality. The CSP is a decision problem: a solution is a complete assignment that satisfies all the constraints (see [12] for an overview of CSPs). Here we are interested in solving the maximal CSP (max-CSP), defined as the problem of finding an assignment that satisfies the greatest number of constraints. Often, weights are associated with constraints and the goal is to maximize the sum of the weights of the satisfied constraints; a small concrete example of this objective is sketched at the end of this section.

1.2 The Ant Colony Optimization metaheuristic

Ant Colony Optimization is a metaheuristic for designing algorithms for combinatorial optimization problems [2, 4, 3, 11]. In ACO, artificial ants construct a solution by building a path on a construction graph $G = (C, L)$, where the elements of $L$ (called connections) fully connect the set of components $C$. Artificial pheromone can be associated either with components (nodes) or with connections (edges). The behavior of the ants is specified by defining start states and termination conditions, construction rules, pheromone update rules, and daemon actions (for a detailed and formal description see [2]).

In this paper we use the Ant Colony System (ACS) algorithm [4], a particular instance of ACO (see Fig. 1 for the basic algorithm). In ACS, each ant is initially positioned on a randomly chosen node of $G$ and builds a solution by applying a probabilistic rule, called the state transition rule. This rule is biased by the pheromone values: the higher the pheromone on a component/connection, the higher the probability that it will be selected. While building a solution, the ants remove ("eat") some quantity of pheromone from the visited components/connections; this is called the step-by-step pheromone update. After every ant has completed a solution, the offline pheromone update is applied to the components/connections of the best solution found so far, adding a quantity of pheromone that is a function of the solution quality.
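As an illustration of the max-CSP objective of Section 1.1, the following minimal Python sketch represents a weighted max-CSP over binary variables and evaluates the total weight of the constraints satisfied by a complete assignment. It is not part of the original paper; all names (variables, domains, constraints, satisfied_weight) and the toy constraints are purely illustrative.

import itertools

# Illustrative weighted max-CSP with binary variables x, y, z.
variables = ["x", "y", "z"]
domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}

# Each constraint is a pair (weight, predicate over a complete assignment).
constraints = [
    (2.0, lambda a: a["x"] != a["y"]),            # x and y must differ
    (1.0, lambda a: a["y"] == a["z"]),            # y and z must agree
    (3.0, lambda a: a["x"] == 1 or a["z"] == 0),  # x is true or z is false
]

def satisfied_weight(assignment):
    """Max-CSP objective: total weight of the satisfied constraints."""
    return sum(w for w, pred in constraints if pred(assignment))

# Exhaustive check of the tiny instance (feasible only for toy sizes).
best = max((dict(zip(variables, values))
            for values in itertools.product(*(domains[v] for v in variables))),
           key=satisfied_weight)
print(best, satisfied_weight(best))   # e.g. {'x': 1, 'y': 0, 'z': 0} with weight 6.0

The ants described below construct such complete assignments incrementally and use this kind of objective value to reward good solutions.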

procedure ACS
begin
    Initialize
    while stopping criterion not satisfied do
        Position each ant on a starting node
        repeat
            for each ant do
                Choose the next node by applying the state transition rule
                Apply the step-by-step pheromone update
            end for
        until every ant has built a solution
        Update the best solution
        Apply the offline pheromone update
    end while
end

Figure 1: High-level description of the ACS algorithm.

2 Where to put pheromone?

In this paper we adopt the following representation of a CSP: the construction graph $G = (C, L)$ (see Fig. 2) is defined such that the nodes (components) are the pairs (variable, value) and the edges (connections) fully connect the nodes. A solution is a sequence of $n$ nodes, where $n$ is the number of variables. Ants construct a solution by probabilistically choosing a node in their feasible neighborhood; in this work we define the feasible neighborhood of an ant $k$ as the set of pairs (variable, value) such that the variable has not yet been assigned a value. This choice of neighborhood therefore enforces the requirement that nodes associated with the same variable never appear in the same solution. A key decision for the algorithm designer is the choice of the graph elements (components or connections) with which to associate pheromone. This is discussed in the following.

Pheromone on components

A first possibility is to put pheromone on components. In this case, the amount of pheromone is proportional to the (learned) desirability of having a particular assignment in the solution. With this choice, the ACS state transition rule used by ant $k$ (the pseudo-random-proportional rule, see [4]) becomes:

$$
s = \begin{cases} \arg\max_{u \in J_k}\{\tau(u)\} & \text{if } q \le q_0 \\ z & \text{otherwise} \end{cases}
\qquad
p_k(z) = \begin{cases} \dfrac{\tau(z)}{\sum_{u \in J_k}\tau(u)} & \text{if } z \in J_k \\ 0 & \text{otherwise} \end{cases}
\tag{1}
$$

where $s$ is the next node, $\tau(u)$ is the pheromone value on node $u$, $J_k$ is the feasible neighborhood of ant $k$, $q$ is a random variable uniformly distributed in $[0,1]$, $q_0$ ($0 \le q_0 \le 1$) is a parameter of the algorithm, and $z$ is a node chosen with probability $p_k(z)$.

Note that the general rule also includes a heuristic function $\eta$; for simplicity we omit it in this discussion and just observe that $\tau(\cdot)$ should then be replaced by $\tau(\cdot)\,[\eta(\cdot)]^{\beta}$, where $\beta$ is a parameter adjusting the relative importance of pheromone values and heuristic information.

The pheromone update rules (step-by-step and offline) are:

Step-by-step pheromone update: $\forall$ ant $k$, $\forall s \in S_k$: $\tau(s) \leftarrow (1-\rho)\,\tau(s) + \rho\,\tau_0$;

Offline pheromone update: $\forall s \in S_{opt}$: $\tau(s) \leftarrow (1-\alpha)\,\tau(s) + \alpha\, g(S_{opt})$;

where $S_k$ is the solution constructed by ant $k$, $S_{opt}$ is the best solution found so far, $\rho, \alpha \in [0,1]$ and $\tau_0 \in \mathbb{R}^+$ are parameters of the algorithm, and $0 < g(S_{opt}) < +\infty$ is a monotonically nondecreasing function of the quality of the solution $S_{opt}$.
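The following Python sketch illustrates rule (1) and the two update rules with pheromone stored on components, i.e., on (variable, value) pairs. It is a simplified illustration under the assumptions stated in the comments, not the authors' implementation; names such as tau, feasible, q0, rho, alpha and quality mirror the symbols used above.

import random

def choose_component(tau, feasible, q0):
    """Pseudo-random-proportional rule (1) with pheromone on components.

    tau      -- dict mapping components (variable, value) to pheromone values
    feasible -- list of components J_k whose variable is still unassigned
    q0       -- exploitation probability, 0 <= q0 <= 1
    """
    if random.random() <= q0:                        # exploitation: best component
        return max(feasible, key=lambda u: tau[u])
    total = sum(tau[u] for u in feasible)            # biased exploration (roulette wheel)
    r, acc = random.random() * total, 0.0
    for u in feasible:
        acc += tau[u]
        if acc >= r:
            return u
    return feasible[-1]

def step_by_step_update(tau, component, rho, tau0):
    """Local update applied to every component an ant visits."""
    tau[component] = (1.0 - rho) * tau[component] + rho * tau0

def offline_update(tau, best_solution, alpha, quality):
    """Global update on the components of the best solution found so far;
    quality plays the role of g(S_opt)."""
    for component in best_solution:
        tau[component] = (1.0 - alpha) * tau[component] + alpha * quality

In a full ACS loop, choose_component and step_by_step_update would be called inside the inner "for each ant" loop of Figure 1, and offline_update once per iteration on the best solution found so far.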

Figure 2: Construction graph for a CSP with binary variables. The correspondence between nodes and assignments is: (x,1) ↔ x, (x,0) ↔ ¬x, (y,1) ↔ y, (y,0) ↔ ¬y, (z,1) ↔ z, (z,0) ↔ ¬z.

Pheromone on connections

An alternative possibility is to associate pheromone with connections. In this case, the quantity of pheromone on the connection between two nodes is proportional to the (learned) benefit of having the two corresponding assignments together in the solution. For an ant $k$ that moves from node $r$ to node $s$, rule (1) becomes:

$$
s = \begin{cases} \arg\max_{u \in J_k}\{\tau(r,u)\} & \text{if } q \le q_0 \\ z & \text{otherwise} \end{cases}
\qquad
p_k(r,z) = \begin{cases} \dfrac{\tau(r,z)}{\sum_{u \in J_k}\tau(r,u)} & \text{if } z \in J_k \\ 0 & \text{otherwise} \end{cases}
\tag{2}
$$

where $\tau(r,u)$ is the pheromone level on connection $(r,u)$. The pheromone update rules are the obvious extension of the previous ones to pheromone on connections (edges).

Pheromone on connections (with sums)

A last possibility considered in this paper takes into account the dependence of an assignment on all the assignments already made. To do this, rule (1) is changed as follows:

$$
s = \begin{cases} \arg\max_{u \in J_k}\Bigl\{\sum_{w \in S_k}\tau(w,u)\Bigr\} & \text{if } q \le q_0 \\ z & \text{otherwise} \end{cases}
\qquad
p_k(r,z) = \begin{cases} \dfrac{\sum_{w \in S_k}\tau(w,z)}{\sum_{u \in J_k}\sum_{w \in S_k}\tau(w,u)} & \text{if } z \in J_k \\ 0 & \text{otherwise} \end{cases}
\tag{3}
$$

where $S_k$ denotes the set of nodes of the current partial path. Formulas (3) consider the sum of the pheromone values on the edges connecting the candidate node to all the nodes already in the solution. The pheromone update rules are changed so that the connection between every pair of nodes in the solution is updated.
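The sketch below illustrates transition rule (3), where the attractiveness of a candidate component is the sum of the pheromone on the edges linking it to every component already in the partial solution. It is an illustration only, with assumed data structures (a dict keyed by unordered node pairs), and is not the authors' implementation.

import random

def choose_component_with_sums(tau_edges, partial_solution, feasible, q0):
    """Transition rule (3): pheromone on connections, summed over the partial path.

    tau_edges        -- dict mapping frozenset({node_a, node_b}) to pheromone values
    partial_solution -- list of components S_k already chosen by ant k
    feasible         -- list of candidate components J_k
    q0               -- exploitation probability, 0 <= q0 <= 1
    """
    def attractiveness(u):
        # Sum of pheromone on the edges connecting u to every node already chosen.
        return sum(tau_edges[frozenset((w, u))] for w in partial_solution)

    if random.random() <= q0:                         # exploitation
        return max(feasible, key=attractiveness)
    weights = [attractiveness(u) for u in feasible]   # biased exploration
    r, acc = random.random() * sum(weights), 0.0
    for u, w in zip(feasible, weights):
        acc += w
        if acc >= r:
            return u
    return feasible[-1]

Consistently with the update rule described above, the corresponding pheromone updates would touch the entry of tau_edges for every pair of nodes appearing together in a solution, rather than only consecutive pairs along the ant's path.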

3 Results

We applied ACS to the MAX-SAT problem, a typical NP-hard max-CSP. MAX-SAT is the maximization version of the Satisfiability Problem (SAT) [6]. One of the most common definitions is the following: given a set of clauses, each of which is the logical disjunction of $k > 2$ literals (i.e., variables, negated or not), we ask whether an assignment to the variables exists that satisfies all the clauses. In MAX-SAT the problem is to find an assignment that satisfies the greatest number of clauses or, in the weighted version, the assignment that maximizes the sum of the weights of the satisfied clauses.

We implemented three algorithms: ACS-comp (pheromone on components), ACS-conn (pheromone on connections) and ACS-conn+ (pheromone on connections with sums). The parameter settings were: $\rho = \alpha = 0.01$, $\beta = 1$, $\tau_0 = 0.1$, $q_0 = 0.8$ and 10 ants; we stopped the algorithms after 100 iterations. We tested the algorithms on weighted and unweighted MAX-SAT instances: the weighted jnh instances, from the 2nd DIMACS benchmark set, can be retrieved from http://www.research.att.com/~mgcr/data/maxsat.tar.gz, and the unweighted uuf instances from SATLIB, http://www.intellektik.informatik.tu-darmstadt.de/satlib.

Table 1: Results for ACS-comp, ACS-conn and ACS-conn+. For the jnh instances we report the average percentage error from the optimal solution, the average computational time (in seconds) and the average number of iterations needed to find the best solution. For the uuf instances we report the average of the best solution values found (number of satisfied clauses). Averages are over 100 runs. The algorithms were run on a Pentium II at 400 MHz, with 512 MB of RAM and 512 KB of cache memory.

algorithm   instance    avg. error/best  std.dev.  avg. time (s)  std.dev.  avg. iter.  std.dev.
ACS-comp    jnh1             0.558         0.084       12.811       3.056      76.530    18.193
            jnh201           0.568         0.086       11.940       3.175      74.810    19.882
            jnh301           0.777         0.127       14.809       2.949      81.710    16.299
            uuf250-01     1038.19          2.13         32.56      12.65       63.21     24.46
            uuf250-02     1031.35          2.37         36.01      10.78       70.85     21.23
ACS-conn    jnh1             0.878         0.122       80.849      31.214      64.40     24.879
            jnh201           0.885         0.120       78.676      29.035      66.670    24.517
            jnh301           1.188         0.167       81.245      34.218      61.860    26.109
            uuf250-01     1030.90          2.10        133.35      96.22       40.59     29.31
            uuf250-02     1023.97          1.89        174.03      95.76       51.95     28.68
ACS-conn+   jnh1             0.583         0.077       195.78      52.71       76.63     20.54
            jnh201           0.570         0.118       188.49      50.93       75.09     20.18
            jnh301           0.780         0.139       205.68      38.03       78.86     14.56
            uuf250-01     1037.37          2.47       1843.33     785.08       61.99     26.51
            uuf250-02     1031.11          2.54       2079.18     720.81       70.21     24.49

The average quality of the solutions produced by ACS-comp and ACS-conn+ is very similar (see Table 1), while ACS-conn generates slightly worse solutions on average. The behavior of ACS-conn suggests that considering only pairs of assignments can be misleading. The average number of iterations does not change much across the algorithms (ACS-conn shows a slightly lower average than the others). Considering the average computation time, we note that using a matrix of pheromone values, as required when pheromone is on connections, imposes a high computational load. The fact that ACS-conn+ does not perform significantly better than ACS-comp suggests that, for random instances, learning inter-variable correlations does not seem advantageous.

We also tested ACS with local search used as a daemon action. As local search we developed a MAX-SAT version of GSAT, a local search algorithm for SAT [10]. Local search is applied to each solution generated by the ants. The introduction of local search boosts the performance of the algorithms (they find better solutions than the algorithms without local search in the same amount of execution time) and flattens the differences in solution quality (the average error from the optimal solution is statistically equivalent for the three algorithms).
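The daemon action mentioned above is a GSAT-style hill climber adapted to (weighted) MAX-SAT: repeatedly flip the variable whose flip yields the largest increase in the total weight of satisfied clauses. The sketch below shows only this general idea under assumed data structures (clauses as weighted lists of signed literals); it is not the authors' implementation and omits the restart and sideways-move policies of GSAT [10].

def satisfied_weight(assignment, clauses):
    """Total weight of satisfied clauses. A clause is (weight, literals), where a
    literal (v, True) means variable v and (v, False) means its negation."""
    return sum(w for w, lits in clauses
               if any(assignment[v] == sign for v, sign in lits))

def gsat_maxsat(assignment, clauses, max_flips=1000):
    """GSAT-style greedy local search for weighted MAX-SAT (illustrative sketch)."""
    best = satisfied_weight(assignment, clauses)
    for _ in range(max_flips):
        flip_var, flip_score = None, best
        for v in assignment:                      # evaluate every single-variable flip
            assignment[v] = not assignment[v]
            score = satisfied_weight(assignment, clauses)
            assignment[v] = not assignment[v]     # undo the tentative flip
            if score > flip_score:
                flip_var, flip_score = v, score
        if flip_var is None:                      # local optimum: no improving flip
            break
        assignment[flip_var] = not assignment[flip_var]
        best = flip_score
    return assignment, best

In the hybrid algorithms, such a routine would be applied to every complete assignment built by the ants before the offline pheromone update.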
4 Related work and future directions

The first, not implemented, description of an application of ACO to CSPs was presented in [1]. An approach that fits the framework described in Section 2 was introduced in [8]: that work implements an Ant System [5] with pheromone on connections (with sums) and a dynamic heuristic; ants first select a variable (randomly) and then apply the state transition rule to select the value to assign to it. The way of assigning values to variables used in [8] can be covered by our approach by changing the definition of the ants' feasible neighborhood. Different approaches, which do not exactly fit the ACO paradigm, are presented in [9, 7].

Future work is directed at extending the presented representations to other combinatorial optimization problems with a CSP formulation and to problems with a 0-1 formulation (such as the knapsack problem). Furthermore, we are currently implementing ACS on a reduced graph representation, suited for problems with binary variables, in which a node of the construction graph represents the binding (variable, 1).

Acknowledgments

This work was supported by the Metaheuristics Network, a Research Training Network funded by the Improving Human Potential programme of the CEC, contract HPRN-CT-1999-00106. Andrea Roli acknowledges support from the CEC through a Marie Curie Training Site fellowship, contract HPMT-CT-2000-00032. The information provided is the sole responsibility of the authors and does not reflect the Community's opinion. The Community is not responsible for any use that might be made of data appearing in this publication. Marco Dorigo acknowledges support from the Belgian FNRS, of which he is a Senior Research Associate.

References

[1] M. Dorigo. Optimization, Learning and Natural Algorithms (in Italian). PhD thesis, DEI, Politecnico di Milano, Italy, 1992. 140 pp.
[2] M. Dorigo and G. Di Caro. The Ant Colony Optimization meta-heuristic. In D. Corne, M. Dorigo, and F. Glover, editors, New Ideas in Optimization, pages 11-32. McGraw-Hill, 1999.
[3] M. Dorigo, G. Di Caro, and L. M. Gambardella. Ant algorithms for discrete optimization. Artificial Life, 5(2):137-172, 1999.
[4] M. Dorigo and L. M. Gambardella. Ant Colony System: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1(1):53-66, 1997.
[5] M. Dorigo, V. Maniezzo, and A. Colorni. Ant System: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics - Part B, 26(1):29-41, 1996.
[6] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
[7] A. Løkketangen. Satisfied ants. In Abstract Proceedings of ANTS 2000, 2000.
[8] S. Pimont and C. Solnon. A generic ant algorithm for solving constraint satisfaction problems. In Abstract Proceedings of ANTS 2000, 2000.
[9] L. Schoofs and B. Naudts. Solving CSPs with ant colonies. In Abstract Proceedings of ANTS 2000, 2000.
[10] B. Selman, H. J. Levesque, and D. Mitchell. A new method for solving hard satisfiability problems. In P. Rosenbloom and P. Szolovits, editors, Proceedings of the 10th National Conference on Artificial Intelligence, pages 440-446, Menlo Park, CA, 1992. AAAI Press.
[11] T. Stützle and M. Dorigo. ACO algorithms for the quadratic assignment problem. In D. Corne, M. Dorigo, and F. Glover, editors, New Ideas in Optimization, pages 33-50. McGraw-Hill, 1999.
[12] E. Tsang. Foundations of Constraint Satisfaction. Academic Press, 1991.