TABU search and Iterated Local Search classical OR methods


TABU search and Iterated Local Search classical OR methods. tks@imm.dtu.dk, Informatics and Mathematical Modeling, Technical University of Denmark

Outline TSP optimization problem. Tabu Search (TS) (most important). Iterated Local Search (ILS).

Traveling Salesman Problem (TSP) Objective: Find a tour of minimal length that visits all cities exactly once.

2-opt What is the neighborhood?

2-opt Select two non-consecutive links:

2-opt Delete the links:

2-opt Reconnect (only one possibility):

Local search for TSP Concepts: Solution space S, TSP: all feasible tours visiting all cities. Objective function f : S → R, TSP: length of the tour. Neighborhood N(s) ⊆ S, TSP: all feasible tours that can be obtained by removing two edges from the tour s and replacing them with two new edges (2-opt).
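As an illustration, the 2-opt move and neighborhood above can be sketched in Python (a minimal sketch, not from the slides; it assumes a tour is a list of city indices with the return edge to the first city left implicit, and the function names are hypothetical):

```python
import itertools

def two_opt_move(tour, i, j):
    """Delete the links (tour[i], tour[i+1]) and (tour[j], tour[j+1]) and
    reconnect the only possible way: reverse the segment between them."""
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

def two_opt_neighborhood(tour):
    """All tours reachable from `tour` by one 2-opt move on
    non-consecutive links."""
    for i, j in itertools.combinations(range(len(tour) - 1), 2):
        if j > i + 1:  # the two deleted links must not share a city
            yield two_opt_move(tour, i, j)
```

For example, `two_opt_move([0, 1, 2, 3, 4], 0, 2)` reverses the segment between city 0 and city 2 and yields `[0, 2, 1, 3, 4]`.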

TABU search When humans solve problems they use experience and memory. Try to do the same in a local search. Random restart, simulated annealing (SA) and simple ILS: no memory. TS: incorporates memory.

Mountain Climbing using memory Remember our drunk mountain climber? Now he is sober... Look all around you... Choose the next step which increases your height as much as possible, without going back to a place where you have been before!

TABU search features Is difficult to analyze mathematically. More work is needed for designing and implementing a TS heuristic compared to an SA heuristic. Has been applied with success to a wide range of combinatorial optimization problems.

Tabu search history Roots go back to the seventies. Proposed in its current form by Fred Glover in 1986. An enormous number of applications and variations of TS have been proposed since then.

From hill climbing (descending) to Tabu search Problem with hill climbing: we get stuck in a local optimum. Possible solution: allow moves that do not improve the solution or perhaps even make it worse. New problem introduced: cycling (next slide).

Cycling (figure: a search trajectory that cycles among solutions of cost 900, 800 and 850 without reaching the optimum of cost 775). The tabu search way of handling cycling: remember previously visited solutions somehow. Why is Tabu Search called Tabu Search? Because revisiting them is made taboo.

Tabu search strategies for avoiding cycling: Remember all solutions encountered so far (strict Tabu Search). Remember the last j solutions (cycles of length j + 1 are still possible). Remember the last j moves that have been performed. Remember changing attributes of the last j solutions.

Basic Tabu Search algorithm
Choose initial solution s
s* = s; k = 1;
repeat
  Generate V ⊆ N(s,k) ⊆ N(s)
  Choose the best s' in V
  s = s'
  if f(s) < f(s*) then s* = s
  k = k + 1
until stopping condition is met
return s*
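A minimal Python sketch of this loop (assuming a caller-supplied neighborhood function and objective f; whole solutions are kept tabu, as in the "remember the last j solutions" strategy, and all names are hypothetical):

```python
from collections import deque

def tabu_search(initial, neighbors, f, tenure=7, max_iters=1000):
    """Basic tabu search: always move to the best non-tabu neighbor,
    even a worsening one, and track the best solution seen (s*)."""
    s = best = initial
    tabu = deque([s], maxlen=tenure)  # the last `tenure` visited solutions
    for _ in range(max_iters):
        candidates = [t for t in neighbors(s) if t not in tabu]
        if not candidates:            # the whole neighborhood is tabu
            break
        s = min(candidates, key=f)    # best neighbor in the candidate set V
        tabu.append(s)
        if f(s) < f(best):
            best = s                  # s* = s
    return best
```

On a toy problem (minimize x² over the integers with neighbors x ± 1), the loop descends to 0 and then keeps wandering without improving the incumbent, exactly the behavior the slide describes.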

Application of TABU Points to consider when applying TS to a specific problem: Initial solution. Definition of the tabu-based neighborhood N(s,k) ⊆ N(s). Definition of the candidate set V ⊆ N(s,k). Stopping condition.

Neighbors and moves A neighbour s' ∈ N(s) is obtained by applying a move m. A move m identifies the changes needed in order to obtain s' = s ⊕ m. M(s) is the set of moves that can be applied to s.

Neighbors and moves There is a one-to-one relationship between neighbours and moves: N(s) = {s' | ∃ m ∈ M(s) : s' = s ⊕ m}. If every move has an inverse then we can define tabus using moves. When a move has been performed, its inverse is tabu in the following iterations. Even if the inverse move is tabu for j iterations, cycling of length less than j can occur (see next slide).

Inverse moves and cycling (figure: cycling occurs even though inverse moves are tabu).
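Under the assumption that every move has an inverse, a tabu list over moves can be sketched as follows (illustrative, hypothetical names; the toy usage below walks the integers with +1/-1 moves):

```python
from collections import deque

def move_tabu_search(s, moves, apply_move, inverse, f, tenure=5, iters=30):
    """Tabu search where, after performing a move m, its inverse is tabu
    for `tenure` iterations, so the search cannot immediately undo m."""
    tabu = deque(maxlen=tenure)
    best = s
    for _ in range(iters):
        allowed = [m for m in moves(s) if m not in tabu]
        if not allowed:
            break
        m = min(allowed, key=lambda m: f(apply_move(s, m)))
        s = apply_move(s, m)
        tabu.append(inverse(m))  # forbid undoing the move just made
        if f(s) < f(best):
            best = s
    return best
```

For example, minimizing f(x) = (x - 3)² from x = 0 with moves +1 and -1 reaches 3 and keeps it as the incumbent even though the tabu on the inverse move forces the search onward.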

Tabu tenure: the number of iterations that a solution/move/attribute stays tabu. It may be static or dynamic, depending on the problem type and instance size. Static tabu tenures are the easiest to implement.

Reactive tabu search Vary the tabu tenure according to the search history: increase the tabu tenure when the same solution is visited over and over again; decrease it when no duplicate solutions have been spotted for some time.

Aspiration criterion: a condition for overriding tabu status. Global best: the neighbour is a new best solution (this is the most common aspiration criterion). Region best: the neighbour is a new best among solutions having a certain property. Recent best: the neighbour is the best among the most recently visited solutions. Attribute based: store the best objective value seen for each attribute. Similar to region best.
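The global-best criterion amounts to a one-line admissibility test (a sketch with hypothetical names):

```python
def admissible(candidate, tabu, f, best_value):
    """A candidate is admissible if it is not tabu, or if it satisfies the
    global-best aspiration criterion: it beats the best solution so far."""
    return candidate not in tabu or f(candidate) < best_value
```

A tabu candidate that would improve on the incumbent is thus allowed; a tabu candidate that would not remains forbidden.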

Tabu search memory: Short-term Tabu lists, which help to escape local optima and make the search avoid recently visited solutions.

Tabu search memory: Long-term Intensification: information on elite solutions, which extracts the core of good solutions and directs the search accordingly. Diversification: information on the whole search trajectory, which drives the search into new unexplored areas. For many problems the long-term memory can be the difference between a mediocre metaheuristic and a state of the art metaheuristic. Unfortunately it is difficult to set up a general framework for how to use long-term memory - it is problem dependent.

Intuitive comparison of local search methods (figure: a solution-space model with high- and low-quality areas and a global optimum, showing the trajectories of random restart, simulated annealing and tabu search over iterations 1-7).

Tricks for quickly searching the neighborhood The process of evaluating the neighborhood can often be sped up by using Δ-computations:

Tricks for quickly searching the neighborhood Sometimes the cost of a move in iteration t can be computed faster by using the cost of the move in iteration t-1 and modifying this cost according to the move performed in iteration t-1. It can occasionally be profitable to search the neighborhood in a special order, using information from the last move evaluation to calculate the cost of the current move.
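For 2-opt on the TSP, the Δ-computation is the classic example: the cost change of a move involves only the four affected edges, so it is O(1) instead of re-summing the whole tour (sketch; `dist` is assumed to be a symmetric distance matrix):

```python
def two_opt_delta(dist, tour, i, j):
    """Cost change of the 2-opt move removing edges (a,b) and (c,d)
    and inserting (a,c) and (b,d). Negative means the move improves."""
    n = len(tour)
    a, b = tour[i], tour[(i + 1) % n]
    c, d = tour[j], tour[(j + 1) % n]
    return (dist[a][c] + dist[b][d]) - (dist[a][b] + dist[c][d])
```

For instance, on four cities at the corners of a unit square, uncrossing the crossing tour gives Δ = 2 - 2√2 < 0, matching the difference of the two full tour lengths.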

TS Overview Pros: Generates good solutions quickly. Many examples of TS algorithms yielding state of the art results for many problems. The heuristic has been applied to many problem areas. When faced with a new problem it is easy to find a paper that describes a tabu search algorithm for a similar problem (to get inspired). There are many ways to twist and tune the heuristic - this helps you to get good results.

TS Overview Cons: There are many ways to twist and tune the heuristic - this is time consuming. For many applications you have to come up with special tricks to ensure that the tabu search does enough diversification (probably using long-term memory); the tabu list is not always enough. May not be the ideal metaheuristic when the neighbourhood is too large to be searched exhaustively. There is an enormous amount of literature about TS, and the quality of some papers is questionable.

TS for Christmas Lunch I So how does TS work for the Christmas Lunch Problem? In TABU.java: The neighborhood: run through a triple for-loop: for every table, every seat at that table, every other table, and every seat at that other table, try to swap the two persons. If the new value is better and the swap is non-tabu, save this swap. Add the tabu, and check whether the best solution has been improved. This version is SLOW: it uses DeepClone, and tabu solutions are also evaluated...

TS for Christmas Lunch II In DataObject.java the matrix representation is chosen. This makes the swap operation very fast. The evaluation function takes time...

Iterated Local Search (ILS) A descent algorithm maps from the set of solutions S to the set S* ⊆ S of solutions locally optimal w.r.t. the chosen neighborhood. The set S* is usually much smaller than S, so it seems profitable to search S* instead of S. We would expect to find better solutions by randomly sampling S* compared to randomly sampling S.

How to search S*? Simplest approach: use random restart. Problem: we tend to get solutions close to the mean of S*. Better approach: ILS: 1. Generate an initial solution. 2. Perform local search to reach a local optimum. 3. Modify the current solution somehow. 4. Iterate steps 2-3 as desired.

ILS
proc IteratedLocalSearch
  s0 = GenerateInitialSolution;
  s* = LocalSearch(s0);
  repeat
    s_current = Perturb(s*, history);
    s_current = LocalSearch(s_current);
    s* = Accept(s*, s_current, history);
  until termination condition met
end
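The procedure translates almost line for line into Python (a sketch; history is omitted and "accept only improvements" is hard-coded as the acceptance rule, with all names hypothetical):

```python
def iterated_local_search(s0, local_search, perturb, f, iters=100):
    """ILS skeleton: descend, then repeatedly perturb the incumbent,
    descend again, and accept only improving solutions."""
    s = local_search(s0)                 # s* = LocalSearch(s0)
    for _ in range(iters):
        s_current = local_search(perturb(s))
        if f(s_current) < f(s):          # Accept = Better
            s = s_current
    return s
```

On a toy one-dimensional problem with a greedy integer descent as the local search, the skeleton finds a minimizer of f(x) = (x² - 4)² and keeps it against non-improving perturbations.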

ILS comments Note: Accept operates on the solution obtained after perturbation AND local search, not right after the perturbation. First used on a location problem by Baxter in 1981 and on the TSP by Baum in 1986, according to Thomas Stützle. The name Iterated Local Search was coined much later (in the 90s).

Important points Perturb (Diversification) Make changes that are not easily undone by local search (the perturbation should be stronger than or different from the local search moves). Perhaps use the history to determine the strength of the perturbation. Perturbation can sometimes be seen as ruining part of the solution (dropping bombs!)

Important points LocalSearch (Intensification) Trade-off between speed and strength of the local search. Use a successful local search procedure from the scientific literature if available. Accept Simple: only accept improvements, or accept all solutions. More advanced: accept certain deteriorating solutions, use history.

Example ILS applied to TSP Local search: 3-opt. Perturb: a special 4-opt move called the double-bridge move; it is not easy to undo using simpler moves. Accept: accept only improving solutions.
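The double-bridge perturbation cuts the tour into four segments A|B|C|D and reconnects them as A|C|B|D. A minimal sketch, again assuming a tour as a list of city indices:

```python
import random

def double_bridge(tour, rng=None):
    """4-opt double-bridge move: cut the tour at three random points into
    segments A|B|C|D and reconnect as A|C|B|D. No single 2-opt (or 3-opt)
    move can undo it, which makes it a good ILS perturbation."""
    rng = rng or random.Random()
    i, j, k = sorted(rng.sample(range(1, len(tour)), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]
```

The result is always a permutation of the same cities with the first city fixed, which is convenient when tours are stored with a fixed starting city.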

ILS applied to TSP Accept = Better: accept only improving solutions. Accept = RW: random walk (in S*), accept all solutions.

Example ILS applied to TSP Speed: time limit of 120 sec on a PII-266 MHz (experiments by Lourenço, Martin and Stützle)

ILS Overview Pros: Easy to implement a simple ILS. Usually good descent algorithms already exist. Compared to SA it quickly gives you good solutions and gradually improves upon them. For some problems it yields state of the art results.

ILS Overview Cons: It may require a deep understanding of the problem and a lot of trial and error to come up with a good perturbation method. If we use a bad perturbation method we might either keep returning to the same local optima, or our metaheuristic may resemble random restart. May not be the ideal metaheuristic when the neighbourhood is too large to be searched exhaustively. No useful theory.