Simulated Annealing

Key idea: Vary the temperature parameter, i.e., the probability of accepting worsening moves, in Probabilistic Iterative Improvement according to an annealing schedule (aka cooling schedule).

Inspired by the physical annealing process:
- candidate solutions = states of the physical system
- evaluation function = thermodynamic energy
- globally optimal solutions = ground states
- parameter T = physical temperature

Note: In the physical process (e.g., annealing of metals), perfect ground states are achieved by very slow lowering of the temperature.

Simulated Annealing (SA):
    determine initial candidate solution s
    set initial temperature T according to annealing schedule
    While termination condition is not satisfied:
        probabilistically choose a neighbour s' of s using proposal mechanism
        If s' satisfies probabilistic acceptance criterion (depending on T):
            s := s'
        update T according to annealing schedule
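A minimal sketch of this loop, with all problem-specific parts (proposal mechanism, acceptance criterion, annealing schedule, termination condition) supplied as functions; the names and signatures are illustrative, not prescribed by the slides:

    def simulated_annealing(init, propose, accept, update_temp, T0, terminate):
        """Generic SA loop; all problem-specific parts are passed in as functions."""
        s = init()                      # initial candidate solution
        T = T0                          # initial temperature from annealing schedule
        while not terminate(s, T):
            s_new = propose(s)          # proposal mechanism, e.g. random neighbour of s
            if accept(s, s_new, T):     # probabilistic acceptance criterion
                s = s_new
            T = update_temp(T)          # annealing schedule, e.g. geometric cooling
        return s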

Note:
- 2-stage step function based on
  - proposal mechanism (often uniform random choice from N(s))
  - acceptance criterion (often Metropolis condition)
- Annealing schedule (function mapping run-time t onto temperature T(t)):
  - initial temperature T0 (may depend on properties of the given problem instance)
  - temperature update scheme (e.g., geometric cooling: T := α · T)
  - number of search steps to be performed at each temperature (often a multiple of the neighbourhood size)
- Termination predicate: often based on the acceptance ratio, i.e., the ratio of accepted vs proposed steps.

Example: Simulated Annealing for the TSP

Extension of the previous PII algorithm for the TSP, with
- proposal mechanism: uniform random choice from the 2-exchange neighbourhood;
- acceptance criterion: Metropolis condition (always accept improving steps, accept worsening steps with probability exp[(f(s) − f(s'))/T]);
- annealing schedule: geometric cooling T := 0.95 · T with n · (n − 1) steps at each temperature (n = number of vertices in the given graph), T0 chosen such that 97% of proposed steps are accepted;
- termination: when for five successive temperature values there is no improvement in solution quality and the acceptance ratio is < 2%.
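A runnable sketch of this TSP variant, assuming dist is a symmetric distance matrix (illustrative); for brevity it recomputes tour lengths from scratch instead of using incremental delta evaluation, and it terminates after a fixed number of temperature levels rather than via the acceptance-ratio test above:

    import math
    import random

    def tour_length(tour, dist):
        # total weight of the Hamiltonian cycle given as a vertex permutation
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def random_2_exchange(tour):
        # uniform random 2-exchange step: reverse a randomly chosen tour segment
        i, j = sorted(random.sample(range(len(tour)), 2))
        return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

    def sa_tsp(dist, T0, alpha=0.95, n_temperatures=100):
        n = len(dist)
        s = list(range(n))
        random.shuffle(s)                    # random initial tour
        f_s = tour_length(s, dist)
        T = T0
        for _ in range(n_temperatures):      # simplified termination criterion
            for _ in range(n * (n - 1)):     # steps per temperature level
                s2 = random_2_exchange(s)
                f_s2 = tour_length(s2, dist)
                # Metropolis condition: always accept improving steps, accept
                # worsening steps with probability exp((f(s) - f(s')) / T)
                if f_s2 <= f_s or random.random() < math.exp((f_s - f_s2) / T):
                    s, f_s = s2, f_s2
            T *= alpha                       # geometric cooling
        return s, f_s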

Improvements:
- neighbourhood pruning (e.g., candidate lists for the TSP)
- greedy initialisation (e.g., by using the NNH (nearest neighbour heuristic) for the TSP)
- low-temperature starts (to prevent good initial candidate solutions from being too easily destroyed by worsening steps)
- look-up tables for acceptance probabilities: instead of computing the exponential function exp(Δ/T) for each step, with Δ := f(s) − f(s') (expensive!), use a precomputed table for a range of argument values Δ/T.

Example: Simulated Annealing for graph bipartitioning

For a given graph G := (V, E), find a partition of the nodes into two sets V1 and V2 such that |V1| = |V2| and V1 ∪ V2 = V, and such that the number of edges with vertices in each of the two sets is minimal.

SA example: graph bipartitioning [Johnson et al., 1989]
- tests were run on random graphs (G_{n,p}) and random geometric graphs (U_{n,d})
- modified cost function (α: imbalance factor):

      f(V1, V2) := |{(u, v) ∈ E : u ∈ V1 ∧ v ∈ V2}| + α · (|V1| − |V2|)²

  allows infeasible solutions but punishes the amount of infeasibility
- side advantage: allows the use of 1-exchange neighbourhoods of size O(n) instead of the typical neighbourhood that exchanges two nodes at a time and is of size O(n²)

SA example: graph bipartitioning [Johnson et al., 1989]
- initial solution is chosen randomly
- standard geometric cooling schedule
- experimental comparison to the Kernighan-Lin heuristic:
  - Simulated Annealing gave better performance on G_{n,p} graphs
  - just the opposite is true for U_{n,d} graphs
- several further improvements were proposed and tested

General remark: Although relatively old, Johnson et al.'s experimental investigations on SA are still worth a detailed reading!
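A direct sketch of this penalised cost function; the graph representation (edge list over two node collections) and the parameter name alpha are illustrative:

    def bipartition_cost(edges, V1, V2, alpha):
        """Penalised cost from Johnson et al.: cut size + alpha * (|V1| - |V2|)**2."""
        V1, V2 = set(V1), set(V2)
        # number of edges with one endpoint in each set (the quantity to minimise)
        cut = sum(1 for u, v in edges if (u in V1) != (v in V1))
        # quadratic imbalance penalty: infeasible (unbalanced) partitions are
        # allowed but punished, which is what makes 1-exchange moves usable
        return cut + alpha * (len(V1) - len(V2)) ** 2

Because imbalance is only penalised rather than forbidden, a step can move a single node to the other side (the O(n) 1-exchange neighbourhood mentioned above), and the penalty term steers the search back towards balanced partitions.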

Convergence result for SA:
Under certain conditions (extremely slow cooling), any sufficiently long trajectory of SA is guaranteed to end in an optimal solution [Geman and Geman, 1984; Hajek, 1988].

Note:
- Practical relevance for combinatorial problem solving is very limited (impractical nature of the necessary conditions).
- In combinatorial problem solving, ending in an optimal solution is typically unimportant, but finding an optimal solution during the search is (even if it is encountered only once)!

Summary:
- SA is historically one of the first SLS methods (metaheuristics)
- raised significant interest due to simplicity, good results, and theoretical properties
- rather simple to implement
- on standard benchmark problems (e.g., TSP, SAT) typically outperformed by more advanced methods (see following ones)
- nevertheless, for some (messy) problems sometimes surprisingly effective

Tabu Search

Key idea: Use aspects of the search history (memory) to escape from local minima.

Simple Tabu Search:
- Associate tabu attributes with candidate solutions or solution components.
- Forbid steps to search positions recently visited by the underlying iterative best improvement procedure, based on tabu attributes.

Tabu Search (TS):
    determine initial candidate solution s
    While termination criterion is not satisfied:
        determine set N' of non-tabu neighbours of s
        choose a best improving candidate solution s' in N'
        update tabu attributes based on s'
        s := s'
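A minimal sketch of simple tabu search, where whole search positions are declared tabu for tt steps; neighbours(s) and the evaluation function f are assumed to be given (illustrative names), and, as is usual for TS, the best admissible neighbour is taken even if it is worsening:

    from collections import deque

    def tabu_search(s0, neighbours, f, tt, max_steps):
        """Simple tabu search: forbid recently visited positions for tt steps."""
        s, best = s0, s0
        tabu = deque(maxlen=tt)          # positions visited in the last tt steps
        for _ in range(max_steps):
            admissible = [n for n in neighbours(s) if n not in tabu]
            if not admissible:
                break                    # all neighbours tabu: search stagnates
            s = min(admissible, key=f)   # best admissible step, possibly worsening,
            tabu.append(s)               # which is what lets TS escape local minima
            if f(s) < f(best):
                best = s                 # track the incumbent solution
        return best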

Note:
- Non-tabu search positions in N(s) are called admissible neighbours of s.
- After a search step, the current search position or the solution components just added/removed from it are declared tabu for a fixed number of subsequent search steps (tabu tenure).
- Often an additional aspiration criterion is used: this specifies conditions under which tabu status may be overridden (e.g., if the considered step leads to an improvement in the incumbent solution).

Example: Tabu Search for SAT: GSAT/Tabu (1)
- Search space: set of all truth assignments for the propositional variables in a given CNF formula F.
- Solution set: models of F.
- Use the 1-flip neighbourhood relation, i.e., two truth assignments are neighbours iff they differ in the truth value assigned to one variable.
- Memory: Associate a tabu status (Boolean value) with each variable in F.

Example: Tabu Search for SAT: GSAT/Tabu (2)
- Initialisation: random picking, i.e., select uniformly at random from the set of all truth assignments.
- Search steps:
  - variables are tabu iff they have been changed in the last tt steps;
  - neighbouring assignments are admissible iff they can be reached by changing the value of a non-tabu variable or have fewer unsatisfied clauses than the best assignment seen so far (aspiration criterion);
  - choose uniformly at random an admissible assignment with a minimal number of unsatisfied clauses.
- Termination: upon finding a model of F or after a given bound on the number of search steps has been reached.

Note:
- GSAT/Tabu used to be state of the art for SAT solving.
- Crucial for efficient implementation:
  - keep the time complexity of search steps minimal by using special data structures, incremental updating and a caching mechanism for evaluation function values;
  - efficient determination of tabu status: store for each variable x the number it_x of the search step when its value was last changed; x is tabu iff it − it_x < tt, where it = current search step number.
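A sketch of that timestamp-based tabu test (variable indexing and function names are illustrative); it gives an O(1) tabu check per variable, with no explicit tabu list to scan:

    def make_tabu_tracker(n_vars, tt):
        """Tabu status via last-change timestamps: x is tabu iff it - it_x < tt."""
        last_changed = [-tt] * n_vars    # it_x, initialised so no variable starts tabu
        def flip(x, it):                 # record that variable x was flipped at step it
            last_changed[x] = it
        def is_tabu(x, it):              # constant-time tabu test
            return it - last_changed[x] < tt
        return flip, is_tabu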

Note: The performance of Tabu Search depends crucially on the setting of the tabu tenure tt:
- tt too low ⇒ search stagnates due to inability to escape from local minima;
- tt too high ⇒ search becomes ineffective due to an overly restricted search path (admissible neighbourhoods too small).

Advanced TS methods:
- Robust Tabu Search [Taillard, 1991]: repeatedly choose tt from a given interval; also: force specific steps that have not been made for a long time.
- Reactive Tabu Search [Battiti and Tecchiolli, 1994]: dynamically adjust tt during the search; also: use an escape mechanism to overcome stagnation.

Further improvements can be achieved by using intermediate-term or long-term memory to achieve additional intensification or diversification. Examples:
- Occasionally backtrack to elite candidate solutions, i.e., high-quality search positions encountered earlier in the search; when doing this, all associated tabu attributes are cleared.
- Freeze certain solution components and keep them fixed for long periods of the search.
- Occasionally force rarely used solution components to be introduced into the current candidate solution.
- Extend the evaluation function to capture the frequency of use of candidate solutions or solution components.

Tabu search algorithms are state of the art for solving many combinatorial problems, including:
- SAT and MAX-SAT
- the Constraint Satisfaction Problem (CSP)
- many scheduling problems

Crucial factors in many applications:
- choice of the neighbourhood relation
- efficient evaluation of candidate solutions (caching and incremental updating mechanisms)

Dynamic Local Search
- Key idea: Modify the evaluation function whenever a local optimum is encountered, in such a way that further improvement steps become possible.
- Associate penalty weights (penalties) with solution components; these determine the impact of the components on the evaluation function value.
- Perform Iterative Improvement; when in a local minimum, increase the penalties of some solution components until improving steps become available.

Dynamic Local Search (DLS):
    determine initial candidate solution s
    initialise penalties
    While termination criterion is not satisfied:
        compute modified evaluation function g' from g based on penalties
        perform subsidiary local search on s using evaluation function g'
        update penalties based on s

Dynamic Local Search (continued)
- Modified evaluation function:

      g'(π', s) := g(π', s) + Σ_{i ∈ SC(π', s)} penalty(i),

  where SC(π', s) = set of solution components of problem instance π' used in candidate solution s.
- Penalty initialisation: For all i: penalty(i) := 0.
- Penalty update in local minimum s: typically involves a penalty increase of some or all solution components of s; often also occasional penalty decreases or penalty smoothing.
- Subsidiary local search: often Iterative Improvement.
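A sketch of the DLS scheme, assuming components(s) yields the solution components used by s, local_search minimises a given evaluation function, and update_penalties implements the chosen penalty-update rule (all names illustrative):

    from collections import defaultdict

    def dynamic_local_search(s0, g, components, local_search, update_penalties, terminate):
        """DLS: run subsidiary local search on a penalty-modified evaluation function."""
        penalty = defaultdict(int)       # penalty(i) := 0 for all components i
        s = s0
        while not terminate(s):
            # g'(s) := g(s) + sum of penalties of the components used in s;
            # rebuilt each round, mirroring the pseudocode above
            def g_mod(t):
                return g(t) + sum(penalty[i] for i in components(t))
            s = local_search(s, g_mod)   # ends in a local minimum of g'
            update_penalties(penalty, s) # e.g. increase penalties of components of s
        return s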

Potential problem: Solution components required for (optimal) solutions may also be present in many local minima.

Possible solutions:
- A: Occasional decreases/smoothing of penalties.
- B: Only increase the penalties of solution components that are least likely to occur in (optimal) solutions.

Implementation of B (Guided Local Search) [Voudouris and Tsang, 1995]:
Only increase the penalties of solution components i with maximal utility

    util(s', i) := f_i(π, s') / (1 + penalty(i)),

where f_i(π, s') = solution quality contribution of i in s'.

Example: Guided Local Search (GLS) for the TSP [Voudouris and Tsang, 1995; 1999]
- Given: TSP instance G
- Search space: Hamiltonian cycles in G with n vertices; use the standard 2-exchange neighbourhood; solution components = edges of G; f(G, p) := w(p); f_e(G, p) := w(e)
- Penalty initialisation: Set all edge penalties to zero.
- Subsidiary local search: Iterative First Improvement.
- Penalty update: Increment the penalties of all edges with maximal utility by

    λ := 0.3 · w(s_2-opt) / n,

  where s_2-opt = a 2-optimal tour.
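A sketch of that GLS penalty update for the TSP; the edge-weight lookup w, the edge representation, and the mapping penalty (e.g. a defaultdict(float)) are illustrative assumptions:

    def tour_edges(tour):
        """Undirected edges of a Hamiltonian cycle given as a vertex list."""
        n = len(tour)
        return [frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)]

    def gls_penalty_update(tour, w, penalty, lam):
        """Increment penalties of the maximal-utility edges of a 2-optimal tour."""
        edges = tour_edges(tour)
        # util(s', e) := w(e) / (1 + penalty(e)): targets long edges that
        # have rarely been penalised so far
        utils = {e: w(e) / (1 + penalty[e]) for e in edges}
        u_max = max(utils.values())
        for e in edges:
            if utils[e] == u_max:
                penalty[e] += lam        # lam := 0.3 * w(s_2-opt) / n, as above

Note how the division by 1 + penalty(e) makes repeatedly penalising the same edge progressively less attractive, addressing the problem stated at the top of this slide.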

Related methods:
- Breakout Method [Morris, 1993]
- GENET [Davenport et al., 1994]
- clause weighting methods for SAT [Selman and Kautz, 1993; Cha and Iwama, 1996; Frank, 1997]
- several long-term memory schemes of tabu search

Dynamic local search algorithms are state of the art for several problems, including:
- SAT, MAX-SAT
- MAX-CLIQUE [Pullan et al., 2006]

Hybrid SLS Methods

Combining simple SLS methods often yields substantial performance improvements. Simple examples:
- Commonly used restart mechanisms can be seen as hybridisations with Uninformed Random Picking.
- Iterative Improvement + Uninformed Random Walk = Randomised Iterative Improvement.

Iterated Local Search

Key idea: Use two types of SLS steps:
- subsidiary local search steps for reaching local optima as efficiently as possible (intensification);
- perturbation steps for effectively escaping from local optima (diversification).

Also: Use an acceptance criterion to control diversification vs intensification behaviour.

Iterated Local Search (ILS):
    determine initial candidate solution s
    perform subsidiary local search on s
    While termination criterion is not satisfied:
        r := s
        perform perturbation on s
        perform subsidiary local search on s
        based on acceptance criterion, keep s or revert to s := r

Note:
- Subsidiary local search results in a local minimum.
- ILS trajectories can be seen as walks in the space of local minima of the given evaluation function.
- Perturbation phase and acceptance criterion may use aspects of the search history (i.e., limited memory).
- In a high-performance ILS algorithm, subsidiary local search, perturbation mechanism and acceptance criterion need to complement each other well.

In what follows: a closer look at ILS.

ILS algorithmic outline:

    procedure Iterated Local Search
        s_0 := GenerateInitialSolution
        s* := LocalSearch(s_0)
        repeat
            s' := Perturbation(s*, history)
            s*' := LocalSearch(s')
            s* := AcceptanceCriterion(s*, s*', history)
        until termination condition met
    end
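A sketch of this outline with the problem-specific procedures passed in as functions (signatures are illustrative); the acceptance criterion shown is the common "accept if no worse" rule used in the TSP example below:

    def iterated_local_search(generate_initial, local_search, perturb, f, n_iters):
        """ILS: a walk in the space of local minima via perturbation + local search."""
        s_star = local_search(generate_initial())
        for _ in range(n_iters):
            s_prime = perturb(s_star)        # escape the current local minimum
            s_prime = local_search(s_prime)  # descend to a new local minimum
            if f(s_prime) <= f(s_star):      # acceptance criterion: keep if no worse
                s_star = s_prime
        return s_star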

Basic version of ILS:
- initial solution: random or construction heuristic
- subsidiary local search: often readily available
- perturbation: random moves in higher-order neighbourhoods
- acceptance criterion: force cost to decrease

Such a version of ILS:
- often leads to very good performance
- only requires a few lines of additional code on top of an existing local search algorithm
- yields state-of-the-art results with further optimizations

Basic ILS algorithm for the TSP:
- GenerateInitialSolution: greedy heuristic
- LocalSearch: 2-opt, 3-opt, LK (whatever is available)
- Perturbation: double-bridge move (a specific 4-opt move)
- AcceptanceCriterion: accept s' only if f(s') ≤ f(s*)
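A sketch of the double-bridge perturbation, assuming the tour is given as a vertex list with at least 4 vertices; choosing the three cut points uniformly at random is a common convention, used here for illustration:

    import random

    def double_bridge(tour):
        """Double-bridge move: cut the tour into segments A, B, C, D and
        reconnect them as A, C, B, D (a specific 4-opt move that 2-opt and
        3-opt local search cannot undo in a single step)."""
        n = len(tour)                                 # requires n >= 4
        i, j, k = sorted(random.sample(range(1, n), 3))
        return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

The point of this particular perturbation is that it changes the tour substantially (four edges) while preserving most of its structure, so the subsequent local search tends to land in a different, nearby local minimum.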

Basic ILS algorithm for the SMTWTP:
- GenerateInitialSolution: random initial solution or one generated by the EDD heuristic
- LocalSearch: piped VND using local searches based on the interchange and insert neighbourhoods
- Perturbation: random k-opt move, k > 2
- AcceptanceCriterion: accept s' only if f(s') ≤ f(s*)