ACO and other (meta)heuristics for CO

32 ACO and other (meta)heuristics for CO

33 Outline
Notes on combinatorial optimization and algorithmic complexity
Construction and modification metaheuristics: two complementary ways of searching a solution space
Finally, Ant Colony Optimization explained
Other population-based metaheuristics for optimization problems: Genetic Algorithms, Cultural Algorithms, Cross-Entropy
Single-agent metaheuristics: Local Search, Tabu Search, Rollout algorithms

34 Ant Colony Optimization metaheuristic
Reverse engineering of the mechanisms behind the ant colony shortest-path behavior [Dorigo et al., 1991]
Multi-agent architecture based on stigmergy. The agents, called ants, are an abstraction and an engineered version of real ants
Target: solution of combinatorial optimization problems (static and dynamic, centralized and distributed)
General idea: repeated construction of solutions using a stochastic policy, observation of the results, and updating of real-valued variables used in turn by the decision policy, in order to bias subsequent solution construction toward the most promising areas of the search space
Before proceeding, let's first understand what a metaheuristic is and which class of problems ACO is intended for
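
As a minimal sketch of the "construct, observe, update" idea described above, the Python fragment below builds tours whose next-city choice is biased by a pheromone matrix and then reinforces the edges of the best tour found. The distance matrix, the parameter values (alpha, beta, rho) and the update rule are illustrative assumptions, not the exact Ant System of [Dorigo et al., 1991]; the point is only to show where the real-valued variables enter the decision policy.

import random

# Minimal sketch: repeated stochastic construction biased by real-valued
# variables (pheromone), followed by observation and update.
# Distances and parameters below are assumed values for illustration.
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
n = len(dist)
tau = [[1.0] * n for _ in range(n)]        # pheromone on each edge
alpha, beta, rho = 1.0, 2.0, 0.1           # assumed parameter values

def cost(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def build_tour():
    # Stochastic construction policy: bias by pheromone and inverse distance.
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        cand = [j for j in range(n) if j not in tour]
        w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
        tour.append(random.choices(cand, weights=w)[0])
    return tour

best = build_tour()
for _ in range(100):                       # construct, observe, update
    best = min([build_tour() for _ in range(10)] + [best], key=cost)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= 1.0 - rho         # evaporation
    for i in range(n):                     # reinforce the best tour's edges
        a, b = best[i], best[(i + 1) % n]
        tau[a][b] += 1.0 / cost(best)
        tau[b][a] = tau[a][b]
print(best, cost(best))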

35 Algorithmic complexity
The time complexity function of an algorithm for a given problem indicates, for each input size n, the maximum time the algorithm needs to find a solution to an instance of that size (worst-case time complexity). For instance, a polynomial-time algorithm has time complexity O(g(n)), where g is a polynomial function. If the time complexity cannot be bounded by a polynomial, the algorithm is said to be exponential [Garey and Johnson, 1979]
A problem is said to be intractable if there is no polynomial-time algorithm able to solve it
For the so-called NP-hard class of optimization problems, exact algorithms need, in the worst case, exponential time to find the optimum (e.g., traveling salesman, quadratic assignment, vehicle routing, graph coloring, satisfiability, ...)
Exact algorithms are guaranteed to find the optimal solution and to prove its optimality for every finite-size instance of the combinatorial problem in bounded, instance-dependent time

36 Heuristics and metaheuristics
For most NP-hard problems of interest the performance of exact algorithms is not satisfactory: huge computational times are required, sometimes even for small instances. It is therefore necessary in practice to trade optimality for efficiency
Heuristic methods seek to obtain good solutions at relatively low computational cost without being able to guarantee the optimality of the solution. Asymptotic convergence can often be proved for specific cases. An algorithm that comes with a formal proof that it returns a solution worse than the optimal one by at most some fixed value or factor is an approximation algorithm
A metaheuristic is a high-level strategy which guides other (problem-specific) heuristics to search for solutions in a possibly wide set of problem domains (e.g., [Glover and Kochenberger, 2002]). A metaheuristic can be seen as a general algorithmic framework which can be applied to different optimization problems with relatively few modifications to adapt it to the specific problem (e.g., Tabu Search, Simulated Annealing, Iterated Local Search, Evolutionary Computation, ACO)

37 A trivial example of a real-life metaheuristic
Can you think of examples of metaheuristics that you adopt in your real life and/or that are adopted in your scientific field of interest? Do you think it is useful in ...
Meta-strategy: before buying a new item (e.g., a new laptop or a new pair of shoes), it is mandatory to check at least 5 shops and collect related information
The high-level strategy is independent of the specific item and doesn't say anything about how the shops are actually selected and how the final decision is taken. We therefore need at least two additional heuristics for the specific tasks of selecting the shops and taking the final decision

38 Combinatorial problems
Instance of a combinatorial optimization problem: a pair (S, J), where S is a finite set of feasible solutions and J is a function that associates a real cost to each feasible solution, J: S → R. The problem consists in finding the element s* ∈ S which minimizes the function J: s* = arg min_{s ∈ S} J(s). Given the finiteness of the set S, the minimum of J over S indeed exists and is attained by at least one element
In the case of continuous optimization, S is a subset of R^n and the global minimum might not exist (if the set is open)
A combinatorial optimization problem is a set of instances of an optimization problem, usually all sharing some core properties (the relationship is like that between a species' genome and an individual's DNA)
A solution is a feasible solution in the set S. If the mapping J changes during the execution of the algorithm, the problem is said to be dynamic

39 Characteristics of a solution
Given the finiteness of the solution set, the set of elements that can be part of a solution is also required to be finite. Let's call these elements solution components
Example: find the shortest path connecting two end-points. S = {(1237), (137), (13467), (123467), (1467), (1567)} is the loopless solution set. The cost of a solution can be the sum of the costs of the single links. The set C of the solution components is C = {1, 2, 3, 4, 5, 6, 7}
[Figure: graph with nodes 1 (Start) through 7 (End) and link costs c1, ..., c10]
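
To make the definition s* = arg min_{s ∈ S} J(s) and the notion of solution components concrete, the sketch below enumerates the loopless solution set from the example and sums link costs. The numeric link costs are assumptions (the figure's assignment of c1, ..., c10 to specific links is not recoverable from the text), chosen only so the code runs.

# The loopless solution set from the example, each path written as its
# sequence of node components from Start (1) to End (7).
S = [(1, 2, 3, 7), (1, 3, 7), (1, 3, 4, 6, 7),
     (1, 2, 3, 4, 6, 7), (1, 4, 6, 7), (1, 5, 6, 7)]
# Assumed link costs standing in for c1, ..., c10.
link_cost = {(1, 2): 3, (2, 3): 1, (1, 3): 5, (3, 7): 4, (3, 4): 2,
             (1, 4): 6, (4, 6): 2, (6, 7): 1, (1, 5): 5, (5, 6): 3}

def J(path):
    # Cost of a solution = sum of the costs of its links.
    return sum(link_cost[(path[i], path[i + 1])] for i in range(len(path) - 1))

s_star = min(S, key=J)     # the minimum exists because S is finite
print(s_star, J(s_star))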

40 TSP: constrained shortest path
Traveling Salesman Problem: find the minimal-cost closed circuit touching all the nodes
NP-hard, very famous, loads of applications, studied since about 1850
For n cities there are n! solutions (n = 10: n! = 3,628,800)
A solution is a cyclic permutation of cities: s = (1 3 2 4 5 1). J = Σ d_ij, where the d_ij are the distances or, more in general, the link costs: J = d_13 + d_32 + d_24 + d_45 + d_51
[Figure: example instance with 5 cities]
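
The cost formula can be checked directly in code. The sketch below evaluates J for the tour s = (1 3 2 4 5 1) with an assumed symmetric distance matrix (the d_ij values are made up for illustration), and the brute-force line hints at why enumeration stops being viable: already at n = 10 there are 3,628,800 permutations.

import itertools, math

# Assumed symmetric distances between the 5 cities; only the structure of
# the computation matters, not the particular values.
d = {(1, 3): 2, (3, 2): 4, (2, 4): 1, (4, 5): 3, (5, 1): 5,
     (1, 2): 6, (1, 4): 7, (2, 5): 8, (3, 4): 9, (3, 5): 2}
d.update({(j, i): c for (i, j), c in list(d.items())})     # symmetry

def tour_cost(tour):
    # J = d_13 + d_32 + d_24 + d_45 + d_51 for the tour (1 3 2 4 5 1)
    return sum(d[(tour[i], tour[(i + 1) % len(tour)])] for i in range(len(tour)))

s = (1, 3, 2, 4, 5)
print(tour_cost(s))                                        # 2 + 4 + 1 + 3 + 5 = 15

best = min(itertools.permutations(range(1, 6)), key=tour_cost)
print(best, tour_cost(best), math.factorial(10))           # 3628800 tours at n = 10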

41 QAP: quadratic assignment problem
Quadratic Assignment Problem: minimize the cost of assigning activities a_i ∈ A to locations l_j ∈ L
Extension of TSP, which can be seen as the problem of assigning a city to a position from 1 to n (bipartite assignment problems). NP-hard, but more difficult than TSP
A solution is a complete assignment: s = (a_1, l_3), (a_2, l_4), (a_3, l_5), (a_4, l_1), (a_5, l_2)
Every possible pair (a_i, l_j) has a cost. The objective is to minimize the sum of all costs: J(s) = Σ_{(a_i, l_j) ∈ s} c_ij, where c_ij is the cost of assigning activity a_i to location l_j. J(s) = c_13 + c_24 + c_35 + c_41 + c_52
[Figure: bipartite graph of activities and locations]
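
Following the slide's formulation of the objective (a sum of pairwise assignment costs c_ij, rather than the full flow-times-distance products of the general QAP), the sketch below evaluates J for the example assignment and brute-forces the best permutation. The 5x5 cost matrix is an assumption made for illustration.

import itertools

# Assumed costs: cost[i][j] = c_ij, the cost of assigning activity a_(i+1)
# to location l_(j+1). Values are illustrative only.
cost = [[4, 2, 7, 3, 6],
        [1, 9, 5, 2, 8],
        [6, 4, 3, 7, 1],
        [2, 8, 6, 5, 9],
        [7, 3, 4, 1, 2]]

def J(assignment):
    # assignment[i] = index of the location given to activity i
    return sum(cost[i][assignment[i]] for i in range(len(assignment)))

s = (2, 3, 4, 0, 1)      # the example: a1->l3, a2->l4, a3->l5, a4->l1, a5->l2
print(J(s))              # c_13 + c_24 + c_35 + c_41 + c_52 = 7 + 2 + 1 + 2 + 3 = 15

best = min(itertools.permutations(range(5)), key=J)
print(best, J(best))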

42 Construction heuristics
Starting from an empty partial solution x_0 = ∅, a complete solution s ∈ S is incrementally built by adding one at a time a new component c ∈ C to the partial solution. The generic iteration (also termed transition) of a construction process can be described as:
x_j = {c_1, c_2, ..., c_j}  →  x_{j+1} = {c_1, c_2, ..., c_j, c_{j+1}},   c_i ∈ C,
where x_j ∈ X = ℘(C) (the set of subsets of C) is a partial solution of cardinality (length) j, with j ≤ |C| < ∞
Solution construction: a sequential decision process

43 Generic construction algorithm

procedure Generic construction algorithm()
    t ← 0; x_t ← ∅;
    while (x_t ∉ S and not stopping criterion)
        c_t ← select component(C \ x_t);
        x_{t+1} ← add component(x_t, c_t);
        t ← t + 1;
    end while
    return x_t;
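
The pseudocode can be instantiated for the TSP as follows: a partial solution is the list of cities visited so far, select component is realized here as a greedy nearest-neighbour choice (one possible heuristic among many), and construction stops when the partial solution becomes a complete tour. The distance matrix and the greedy rule are assumptions for the sketch, not part of the generic scheme.

# Generic construction scheme instantiated for a small TSP (assumed distances).
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
C = range(len(dist))                       # solution components = cities

def is_complete(x):
    return len(x) == len(dist)             # x belongs to S once every city is in it

def select_component(x):
    if not x:                              # empty partial solution: pick a start city
        return 0
    # greedy instantiation: nearest city not yet in the partial solution
    return min((c for c in C if c not in x), key=lambda c: dist[x[-1]][c])

def add_component(x, c):
    return x + [c]

x = []                                     # x_0 = empty partial solution
while not is_complete(x):
    c = select_component(x)                # transition: x_t -> x_{t+1}
    x = add_component(x, c)
print(x)                                   # a complete tour, built one component at a time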

44 Construction illustrated
[Figure: construction step, in which the decision policy π selects a component c_t from the candidate set C(x_t) ⊆ C of the partial solution x_t, yielding x_{t+1}, until a complete solution x_s ∈ S is reached]
Step-by-step dependence on all the past decisions: P(c_t) = P(c_t | c_1, c_2, ..., c_{t-1})
Appropriate when the cost J(s) is a combination of contributions, each one related to the fact that a particular component c_j is included in the partial solution

45 State diagram of the sequential process

46 Modification methods
Construction algorithms work on the set of solution components. Modification strategies act instead upon the search space of the complete solutions: start with a complete solution and proceed by modifications of it
Construction methods make use of an incremental, local view of a solution, while modification approaches are based on a global view of a solution
This class includes some of the most effective algorithms and heuristics for combinatorial problems

47 Neighborhood of a solution
There is no assigned metric as in the continuous case; the notion of proximity is arbitrary
A neighborhood is a mapping N that associates to each feasible solution s of an optimization problem a set N(s) of other feasible solutions. The mapping can be conveniently expressed in terms of a rule M that, given s, allows the definition of the set of solutions belonging to the neighborhood by applying some modification procedure to the components of s: N(s) = {s′ : s′ ∈ S, s′ can be obtained from s by applying M(s)}
N can be random, but only those mappings preserving a certain degree of correlation between the value J(s) associated to the point s and the values associated to the points in N(s) are really meaningful. Measure of closeness among the values associated to the solutions: if some components of s_t are fixed, the values of the solutions generated by modifying the other components through M should be correlated to the value of s_t
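
A common way to instantiate the modification rule M for permutation-encoded solutions is the exchange (swap) of two components; the mapping N then associates to s the set of all solutions reachable by one swap. The sketch below enumerates such a neighborhood for a small permutation; the choice of swap moves is an assumption used purely as an example of a rule M.

import itertools

def neighborhood(s):
    # N(s) = all solutions obtained from s by applying M = "swap two positions"
    neighbors = []
    for i, j in itertools.combinations(range(len(s)), 2):
        t = list(s)
        t[i], t[j] = t[j], t[i]
        neighbors.append(tuple(t))
    return neighbors

s = (1, 3, 2, 4, 5)
print(len(neighborhood(s)))     # n*(n-1)/2 = 10 neighbours for n = 5
print(neighborhood(s)[:3])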

48 Local search methods
Look for a solution locally optimal with respect to the defined neighborhood structure
They are based on the iteration of the process of neighborhood examination until no further improvements are found
The most effective heuristics [Aarts and Lenstra, 1997]

49 Generic local search algorithm

procedure Modification heuristic()
    define neighborhood structure();
    s ← get initial solution(S);
    s_best ← s;
    while (not stopping criterion)
        s′ ← select solution from neighborhood(N(s));
        if (accept solution(s′))
            s ← s′;
            if (J(s) < J(s_best))
                s_best ← s;
            end if
        end if
    end while
    return s_best;
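
A self-contained Python rendering of the loop above for a small TSP, using the swap neighborhood as the neighborhood structure, a random permutation as the initial solution, and an acceptance rule that keeps only improving moves (so the loop stops at a local optimum). The distance matrix and all of these design choices are assumptions; the pseudocode itself leaves them open, which is exactly what the next slide lists as the main design issues.

import itertools, random

dist = [[0, 2, 9, 10, 7],          # assumed distances for the example
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
n = len(dist)

def J(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def neighborhood(s):               # neighborhood structure: all 2-swaps
    for i, j in itertools.combinations(range(n), 2):
        t = list(s)
        t[i], t[j] = t[j], t[i]
        yield tuple(t)

s = tuple(random.sample(range(n), n))       # get initial solution
s_best = s
while True:
    s_prime = min(neighborhood(s), key=J)   # select solution from neighborhood
    if J(s_prime) >= J(s):                  # accept only improving moves
        break                               # no improvement: local optimum reached
    s = s_prime
    if J(s) < J(s_best):
        s_best = s
print(s_best, J(s_best))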

50 Main design issues
Neighborhood structure
Generation of the initial solution
Selection of a candidate solution from the neighborhood of the current solution
Criterion to accept or reject the selected solution