Advanced Search Simulated annealing Yingyu Liang yliang@cs.wisc.edu Computer Sciences Department University of Wisconsin, Madison [Based on slides from Jerry Zhu, Andrew Moore http://www.cs.cmu.edu/~awm/tutorials ] slide 1

2. SIMULATED ANNEALING slide 2

anneal: to subject (glass or metal) to a process of heating and slow cooling in order to toughen and reduce brittleness. slide 3

1. Pick initial state s
2. Randomly pick t in neighbors(s)
3. IF f(t) better THEN accept s ← t
4. ELSE /* t is worse than s */
5.    accept s ← t with a small probability
6. GOTO 2 until bored.
slide 4

(The same procedure as slide 4.) What are the two key differences from Hill-Climbing? slide 5

(Same procedure.) How should we choose the small probability? Idea 1: p = 0.1. slide 6

What's the drawback of idea 1 (p = 0.1)? Hint: consider the case when we are at the global optimum. slide 7

Idea 2: p decreases with time. slide 8

Idea 3: p decreases with time, and also as the badness f(s) - f(t) increases. slide 9
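To make the skeleton concrete, here is a minimal Python sketch using idea 1 (a hedged illustration: f, neighbors, and the iteration budget are hypothetical stand-ins for a real problem, and the next slides replace the constant p with a temperature-dependent rule):

    import random

    def noisy_hill_climb(f, neighbors, s, iters=10000, p=0.1):
        # The skeleton above with idea 1: accept a worse neighbor
        # with a small constant probability p.
        for _ in range(iters):              # "until bored"
            t = random.choice(neighbors(s))
            if f(t) > f(s):                 # t is better: accept s ← t
                s = t
            elif random.random() < p:       # t is worse: accept anyway with probability p
                s = t
        return s

Note the drawback asked about on slide 7: even after reaching the global optimum, this loop keeps accepting worse moves with probability 0.1 and can wander away, so the returned s is not guaranteed to be the best state visited.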

If f(t) is better than f(s), always accept t. Otherwise, accept t with probability exp( -(f(s) - f(t)) / Temp ), the Boltzmann distribution. slide 10

(Same acceptance rule.) Temp is a temperature parameter that cools (anneals) over time, e.g. Temp ← Temp * 0.9, which gives Temp = T0 * 0.9^(#iterations). slide 11

(Same acceptance rule and cooling schedule.) High temperature: almost always accept any t. Low temperature: behaves like first-choice hill climbing. slide 12

(Same acceptance rule and cooling schedule.) If the badness (formally known as the energy difference) f(s) - f(t) is large, the acceptance probability is small. slide 13
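As a quick worked example (numbers added for illustration, not from the original slides): with badness f(s) - f(t) = 2, a hot temperature Temp = 10 gives acceptance probability exp(-2/10) ≈ 0.82, so the worse move is usually taken, while a cold temperature Temp = 0.5 gives exp(-2/0.5) ≈ 0.02, so it is almost never taken.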

SA algorithm // assuming we want to maximize f()
current = Initial-State(problem)
for t = 1 to ∞ do
    T = Schedule(t)  // T is the current temperature, monotonically decreasing with t
    if T = 0 then return current  // halt when temperature = 0
    next = Select-Random-Successor-State(current)
    deltaE = f(next) - f(current)  // if positive, next is better than current; otherwise, next is worse
    if deltaE > 0 then current = next  // always move to a better state
    else current = next with probability p = exp(deltaE / T)  // as T → 0, p → 0; as deltaE → -∞, p → 0
end
slide 14
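A runnable Python rendering of this pseudocode, as a sketch under assumed details (a geometric schedule T0 * 0.99^t, treated as zero once it falls below a small threshold, and a toy one-dimensional objective for the demo; all names are illustrative):

    import math
    import random

    def simulated_annealing(f, random_successor, initial,
                            T0=1.0, alpha=0.99, T_min=1e-3):
        # Maximize f(), following the pseudocode above.
        current = initial
        t = 0
        while True:
            T = T0 * alpha ** t                 # Schedule(t): geometric cooling
            t += 1
            if T < T_min:                       # treat a tiny T as temperature 0: halt
                return current
            nxt = random_successor(current)
            deltaE = f(nxt) - f(current)        # positive iff nxt is better than current
            if deltaE > 0:
                current = nxt                   # always move to a better state
            elif random.random() < math.exp(deltaE / T):
                current = nxt                   # occasionally move to a worse state

    # Toy demo: maximize a bumpy one-dimensional function.
    def bumpy(x):
        return -x * x + 4 * math.sin(5 * x)

    def step(x):
        return x + random.uniform(-0.5, 0.5)

    print(simulated_annealing(bumpy, step, initial=random.uniform(-5, 5)))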

Issues. Easy to implement. Intuitive: proposed by Metropolis et al. in 1953, based on the analogy that alloys manage to find a near-global minimum energy state when annealed slowly. slide 15

Issues. The cooling scheme is important. Neighborhood design is the real ingenuity, not the decision to use simulated annealing itself. There is not much to say theoretically: with an infinitely slow cooling rate, it finds the global optimum with probability 1. Try hill-climbing with random restarts first (see the sketch below)! slide 16
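For comparison, a minimal sketch of that recommended baseline (greedy hill climbing restarted from random states; random_state, neighbors, and the restart count are hypothetical, and neighbors(s) is assumed to return a finite, non-empty list):

    def hill_climb_random_restarts(f, random_state, neighbors, restarts=20):
        # Greedy hill climbing from several random starts; keep the best result.
        best = None
        for _ in range(restarts):
            s = random_state()
            while True:
                t = max(neighbors(s), key=f)    # best neighbor of s
                if f(t) <= f(s):                # local optimum reached
                    break
                s = t
            if best is None or f(s) > f(best):
                best = s
        return best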