Foundations of Artificial Intelligence


Foundations of Artificial Intelligence
4. Informed Search Methods: Heuristics, Local Search Methods, Genetic Algorithms
Joschka Boedecker, Wolfram Burgard, and Bernhard Nebel
Albert-Ludwigs-Universität Freiburg
May 3, 2017

Contents
1. Best-First Search
2. A* and IDA*
3. Local Search Methods
4. Genetic Algorithms

Best-First Search
Search procedures differ in the way they determine the next node to expand.
- Uninformed search: rigid procedure with no knowledge of the cost from a given node to the goal.
- Informed search: knowledge of the worth of expanding a node n is given in the form of an evaluation function f(n), which assigns a real number to each node. Usually, f(n) includes as a component a heuristic function h(n), which estimates the cost of the cheapest path from n to the goal.
- Best-first search: an informed search procedure that expands the node with the best f-value first.

General Algorithm

function TREE-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier

function GRAPH-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  initialize the explored set to be empty
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    add the node to the explored set
    expand the chosen node, adding the resulting nodes to the frontier

Best-first search is an instance of the general TREE-SEARCH algorithm in which the frontier is a priority queue ordered by an evaluation function f. When f is always correct, we do not need to search!
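The best-first specialization of TREE-SEARCH above can be sketched in a few lines of Python (a minimal illustration, not the lecture's code; the node representation and the graph in the usage example are invented):

```python
import heapq

def best_first_search(start, goal, successors, f):
    """Best-first tree search: repeatedly expand the frontier node with the
    best (lowest) f-value. `successors(state)` yields (next_state, step_cost)."""
    frontier = [(f(start, 0), start, 0, [start])]      # entries: (f, state, g, path)
    while frontier:
        _, state, g, path = heapq.heappop(frontier)    # choose the best leaf node
        if state == goal:                              # goal test on expansion
            return path, g
        for nxt, cost in successors(state):            # expand the chosen node
            g2 = g + cost
            heapq.heappush(frontier, (f(nxt, g2), nxt, g2, path + [nxt]))
    return None                                        # frontier empty: failure

# Invented toy graph for illustration:
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('G', 5)],
         'C': [('G', 1)], 'G': []}
path, cost = best_first_search('A', 'G', lambda s: graph[s], lambda n, g: g)
# With f(n) = g(n) this degenerates to uniform-cost search: path A-B-C-G, cost 3.
```

Choosing f(n) = h(n) yields greedy search, and f(n) = g(n) + h(n) yields A*, both discussed below.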

Greedy Search
A possible way to judge the worth of a node is to estimate its path costs to the goal:
  h(n) = estimated path costs from n to the goal.
The only real restriction is that h(n) = 0 if n is a goal.
A best-first search using h(n) as the evaluation function, i.e., f(n) = h(n), is called a greedy search.
Example: route-finding problem: h(n) = straight-line distance from n to the goal.
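As a sketch (the toy map and h-values below are invented, not the Romania data), greedy search keys the priority queue on h alone; note how it happily returns a dearer route:

```python
import heapq

def greedy_search(start, goal, successors, h):
    """Greedy best-first search: always expand the node with the smallest
    h-value. Returns (path, path_cost) of the first solution found."""
    frontier = [(h(start), start, 0, [start])]
    while frontier:
        _, state, g, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in successors(state):
            heapq.heappush(frontier, (h(nxt), nxt, g + cost, path + [nxt]))
    return None

# Invented toy map: S->B->G costs 4, S->A->G costs 5, but A "looks" closer.
graph = {'S': [('A', 2), ('B', 1)], 'A': [('G', 3)], 'B': [('G', 3)], 'G': []}
h = {'S': 4, 'A': 2, 'B': 3, 'G': 0}
path, cost = greedy_search('S', 'G', lambda s: graph[s], lambda s: h[s])
# Greedy follows the smaller h-value (A before B) and returns the
# suboptimal route S-A-G with cost 5; the optimal route costs 4.
```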

Heuristics
The evaluation function h in greedy searches is also called a heuristic function, or simply a heuristic.
- The word heuristic is derived from the Greek word ευρισκειν, "to find" (note also: ευρηκα!).
- The mathematician Pólya introduced the word in the context of problem-solving techniques.
- In AI it has two meanings:
  - Heuristics are fast, but in certain situations incomplete, methods for problem solving [Newell, Shaw, Simon 1963] (greedy search is in fact incomplete in general).
  - Heuristics are methods that improve the search in the average case.
In all cases, the heuristic is problem-specific and focuses the search!

Greedy Search Example
[Figure: Romania road map with step costs in km, together with the table of straight-line distances to Bucharest (e.g., Arad 366, Sibiu 253, Fagaras 176, Pitesti 100, Bucharest 0).]

Greedy Search from Arad to Bucharest
[Figure: search-tree stages (a)-(d). Starting from Arad (h = 366), greedy search expands Arad, then Sibiu (h = 253), then Fagaras (h = 176), reaching Bucharest (h = 0) via Arad-Sibiu-Fagaras.]


Greedy Search: Properties
- A good heuristic might reduce search time drastically.
- Non-optimal.
- Incomplete (the graph-search version is complete only in finite spaces).
Can we do better?

A*: Minimization of the Estimated Path Costs
A* combines greedy search with uniform-cost search: always expand the node with the lowest f(n) first, where
  g(n) = actual cost from the initial state to n,
  h(n) = estimated cost from n to the nearest goal,
  f(n) = g(n) + h(n), the estimated cost of the cheapest solution through n.
Let h*(n) be the actual cost of the optimal path from n to the nearest goal. h is admissible if the following holds for all n:
  h(n) ≤ h*(n).
For A*, we require that h is admissible (example: the straight-line distance is admissible). In other words, h is an optimistic estimate of the costs that actually occur.
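A minimal A* sketch (the toy graph and admissible h-values below are invented for illustration; on the same toy map where greedy search goes wrong, A* recovers the optimal route):

```python
import heapq

def astar(start, goal, successors, h):
    """A* graph search: expand the node with the lowest f(n) = g(n) + h(n).
    With an admissible h, the first goal expanded is optimal."""
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, state, path)
    best_g = {}                                  # cheapest g seen per expanded state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if best_g.get(state, float('inf')) <= g:
            continue                             # already expanded more cheaply
        best_g[state] = g
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Same invented toy map as in the greedy example; h is admissible here.
graph = {'S': [('A', 2), ('B', 1)], 'A': [('G', 3)], 'B': [('G', 3)], 'G': []}
h = {'S': 4, 'A': 2, 'B': 3, 'G': 0}
path, cost = astar('S', 'G', lambda s: graph[s], lambda s: h[s])
# A* returns the optimal route S-B-G with cost 4.
```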

A* Search Example
[Figure: the same Romania road map and straight-line-distance table as in the greedy search example.]

A* Search from Arad to Bucharest
[Figure: search-tree stages (a)-(d) with f-values g + h. Initially Arad has 366 = 0 + 366; after expanding Arad: Sibiu 393 = 140 + 253, Timisoara 447 = 118 + 329, Zerind 449 = 75 + 374; after expanding Sibiu: Rimnicu Vilcea 413 = 220 + 193, Fagaras 415 = 239 + 176, Oradea 671 = 291 + 380; after expanding Rimnicu Vilcea: Pitesti 417 = 317 + 100, Craiova 526 = 366 + 160.]


A* Search from Arad to Bucharest (cont.)
[Figure: search-tree stages (e)-(f). After expanding Fagaras, Bucharest appears with 450 = 450 + 0, but Pitesti (417) is expanded first; this yields Bucharest with 418 = 418 + 0, which is expanded next, so A* returns the optimal route via Rimnicu Vilcea and Pitesti.]


Example: Path Planning for Robots in a Grid World
Live demo: http://qiao.github.io/pathfinding.js/visual/

Optimality of A*
Claim: The first solution found (i.e., the first node that is expanded and found to be a goal node) has minimum path cost.
Proof: Suppose there exists a goal node G with optimal path cost f*, but A* has found another goal node G2 with g(G2) > f*.
[Figure: the start node, a not-yet-expanded node n on the optimal path to G, and the suboptimal goal G2.]

Optimality of A* (cont.)
Let n be a node on the path from the start to G that has not yet been expanded. Since h is admissible, we have
  f(n) ≤ f*.
Since n was not expanded before G2, the following must hold:
  f(G2) ≤ f(n)   and therefore   f(G2) ≤ f*.
It follows from h(G2) = 0 that
  g(G2) ≤ f*.
This contradicts the assumption!

Completeness and Complexity
Completeness: If a solution exists, A* will find it, provided that
(1) every node has a finite number of successor nodes, and
(2) there exists a positive constant δ > 0 such that every step has cost at least δ.
It then follows that there exists only a finite number of nodes n with f(n) ≤ f*.
Complexity: In general, still exponential in the path length of the solution (space and time). More refined complexity results depend on the assumptions made, e.g., on the quality of the heuristic function. Example: in the case in which h*(n) − h(n) ∈ O(log(h*(n))), only one goal state exists, and the search graph is a tree, a sub-exponential number of nodes will be expanded [Gaschnig 1977; Helmert & Röger 2008]. Unfortunately, this condition almost never holds.

A Note on Graph- vs. Tree-Search
A* as described is a tree search (and may consider duplicates). For the graph-based variant, one either needs to consider re-opening nodes from the explored set when a better estimate becomes known, or one needs to place a stronger restriction on the heuristic estimate: it needs to be consistent.
A heuristic h is called consistent iff for all actions a leading from s to s':
  h(s) − h(s') ≤ c(a),
where c(a) denotes the cost of action a.
Note: Consistency implies admissibility.
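On a finite graph, consistency can be checked mechanically by testing the inequality on every edge. A small helper (the graph and both sets of h-values below are invented for illustration):

```python
def is_consistent(graph, h):
    """Check h(s) - h(s') <= c(a) for every edge (s, a, s') of the graph.
    `graph[s]` is a list of (successor, action_cost) pairs."""
    return all(h[s] - h[t] <= c
               for s, edges in graph.items()
               for t, c in edges)

graph = {'S': [('A', 2), ('B', 1)], 'A': [('G', 3)], 'B': [('G', 3)], 'G': []}
h_ok  = {'S': 3, 'A': 2, 'B': 3, 'G': 0}   # consistent (hence admissible)
h_bad = {'S': 3, 'A': 6, 'B': 3, 'G': 0}   # violated on A->G: 6 - 0 > 3
```

Summing the inequality along any path telescopes into h(s) ≤ cost(path to goal), which is why consistency implies admissibility.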

Heuristic Function Example
[Figure: an 8-puzzle start state (7 2 4 / 5 _ 6 / 8 3 1) and goal state (_ 1 2 / 3 4 5 / 6 7 8).]
- h1 = the number of tiles in the wrong position.
- h2 = the sum of the distances of the tiles from their goal positions (Manhattan distance).
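Both heuristics fit in a few lines of Python. The sketch below (encoding the blank as 0, a choice made for this illustration) computes h1 = 8 and h2 = 18 for the start state above:

```python
def h1(state, goal):
    """Misplaced-tiles heuristic: number of tiles (ignoring the blank)
    that are not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Manhattan-distance heuristic: sum over all tiles of the row plus
    column distance between current and goal position."""
    where = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = where[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # the slide's start state, 0 = blank
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # the slide's goal state
# h1(start, goal) == 8 and h2(start, goal) == 18; h2 dominates h1 here.
```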

Empirical Evaluation
d = distance from goal; averages over 100 instances.

   d |  Search cost (nodes generated)  | Effective branching factor
     |     IDS     A*(h1)    A*(h2)    |   IDS    A*(h1)   A*(h2)
   2 |      10         6         6     |  2.45     1.79     1.79
   4 |     112        13        12     |  2.87     1.48     1.45
   6 |     680        20        18     |  2.73     1.34     1.30
   8 |    6384        39        25     |  2.80     1.33     1.24
  10 |   47127        93        39     |  2.79     1.38     1.22
  12 | 3644035       227        73     |  2.78     1.42     1.24
  14 |       -       539       113     |    -      1.44     1.23
  16 |       -      1301       211     |    -      1.45     1.25
  18 |       -      3056       363     |    -      1.46     1.26
  20 |       -      7276       676     |    -      1.47     1.27
  22 |       -     18094      1219     |    -      1.48     1.28
  24 |       -     39135      1641     |    -      1.48     1.26

Variants of A*
A* in general still suffers from exponential memory growth. Therefore, several refinements have been suggested:
- Iterative-deepening A* (IDA*), where the f-costs are used to define the cutoff (rather than the depth of the search tree).
- Recursive best-first search (RBFS): introduces a variable f_limit to keep track of the best alternative path available from any ancestor of the current node. If the current node exceeds this limit, the recursion unwinds back to the alternative path.
- Other alternatives: memory-bounded A* (MA*) and simplified MA* (SMA*).

Local Search Methods
In many problems, it is unimportant how the goal is reached; only the goal itself matters (8-queens problem, VLSI layout, TSP). If, in addition, a quality measure for states is given, local search can be used to find solutions.
- It operates on a single current node (rather than multiple paths).
- It requires little memory.
Idea: Begin with a randomly chosen configuration and improve on it step by step → hill climbing.
Note: This can be used for maximization or minimization, respectively (see the 8-queens example).

Example: 8-Queens Problem (1)
[Figure: a state with heuristic cost estimate h = 17, where h counts the number of pairs of queens threatening each other directly or indirectly, annotated with the h-values of all successors obtained by moving a single queen within its column; the best successors have h = 12.]

Hill Climbing

function HILL-CLIMBING(problem) returns a state that is a local maximum
  current ← MAKE-NODE(problem.INITIAL-STATE)
  loop do
    neighbor ← a highest-valued successor of current
    if neighbor.VALUE ≤ current.VALUE then return current.STATE
    current ← neighbor

Figure 4.2: The hill-climbing search algorithm, which is the most basic local search technique. At each step the current node is replaced by the best neighbor; in this version, that means the neighbor with the highest VALUE, but if a heuristic cost estimate h is used, we would find the neighbor with the lowest h.
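The same loop in executable form (a generic sketch; the toy landscape below is invented, maximizing -(x - 3)^2 on the integer line):

```python
def hill_climbing(start, neighbors, value):
    """Steepest-ascent hill climbing: move to the highest-valued neighbor
    until no neighbor improves on the current state (a local maximum)."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current          # local maximum reached
        current = best

# Invented toy landscape: a single peak at x = 3.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
# hill_climbing(0, neighbors, value) climbs 0 -> 1 -> 2 -> 3 and returns 3.
```

On a unimodal landscape like this one the local maximum is global; the next slides show where plain hill climbing breaks down.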

Example: 8-Queens Problem (2)
Possible realization of a hill-climbing algorithm: select a column and move the queen to the square with the fewest conflicts.
[Figure: one column of the board annotated with the number of conflicts for each square; the queen moves to the square with 0 conflicts.]

Problems with Local Search Methods
- Local maxima: the algorithm finds a sub-optimal solution.
- Plateaus: here, the algorithm can only explore at random.
- Ridges: similar to plateaus, but escaping might even require sub-optimal moves.
Solutions:
- Start over when no progress is being made.
- Inject noise → random walk.
Which strategies (with which parameters) are successful within a problem class can usually only be determined empirically.

Example: 8-Queens Problem (Local Minimum)
[Figure: a local minimum (h = 1) of the 8-queens problem; every successor has a higher cost.]

Illustration of the Ridge Problem
[Figure: a grid of states (dark circles) superimposed on a ridge rising from left to right, creating a sequence of local maxima that are not directly connected to each other. From each local maximum, all the available actions point downhill.]

Performance Figures for the 8-Queens Problem
The 8-queens problem has about 8^8 ≈ 17 million states. Starting from a random initialization, hill climbing directly finds a solution in about 14% of the cases. On average, a successful run requires only 4 steps!
A better algorithm: allow sideways moves (moves with no improvement), but restrict their number to avoid infinite loops. E.g., with at most 100 sideways moves: solves 94% of instances; the number of steps rises to 21 for successful instances and 64 for failure cases.

Simulated Annealing
In the simulated annealing algorithm, noise is injected systematically: first a lot, then gradually less.

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to "temperature"
  current ← MAKE-NODE(problem.INITIAL-STATE)
  for t = 1 to ∞ do
    T ← schedule(t)
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← next.VALUE − current.VALUE
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)

Figure 4.5: The simulated annealing algorithm, a version of stochastic hill climbing where some downhill moves are allowed. Downhill moves are accepted readily early in the annealing schedule and then less often as time goes on. The schedule input determines the value of the temperature T as a function of time.

Simulated annealing has been used since the early 80s for VLSI layout and other optimization problems.
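A direct transcription into Python (the landscape, schedule, and seed below are invented for the example; the acceptance rule e^(ΔE/T) is the one from the pseudocode above):

```python
import math
import random

def simulated_annealing(start, neighbors, value, schedule, rng=random):
    """Simulated annealing: uphill moves are always accepted; downhill moves
    are accepted with probability e^(dE/T), where T falls over time."""
    current = start
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = rng.choice(neighbors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
        t += 1

# Invented toy run: maximize -(x - 3)^2 on the integers 0..6.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [max(0, x - 1), min(6, x + 1)]
schedule = lambda t: 0 if t > 2000 else 2.0 * 0.995 ** t   # geometric cooling
result = simulated_annealing(0, neighbors, value, schedule, random.Random(0))
```

The schedule is the critical design choice: cool too fast and the search freezes in a local maximum; cool too slowly and it degenerates into a long random walk.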

Genetic Algorithms
Evolution appears to be very successful at finding good solutions.
Idea: Similar to evolution, we search for solutions using three operators: mutation, crossover, and selection.
Ingredients:
- A coding of a solution into a string of symbols or a bit-string.
- A fitness function to judge the worth of configurations.
- A population of configurations.
Example: the 8-queens problem can be coded as a string of eight numbers. Fitness is judged by the number of non-attacking pairs. The population consists of a set of arrangements of queens.
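For the 8-queens coding, the three operators and the fitness function can be sketched as follows (a simplified illustration; board[i] is the row of the queen in column i, and the helper names are invented):

```python
import random

def fitness(board):
    """Number of non-attacking pairs of queens; 28 = C(8,2) means
    a solution for 8 queens (all pairs peaceful)."""
    n = len(board)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if board[i] == board[j]                 # same row
                  or abs(board[i] - board[j]) == j - i)   # same diagonal
    return n * (n - 1) // 2 - attacks

def crossover(a, b, point):
    """One-point crossover: recombine two parents at the breaking point."""
    return a[:point] + b[point:]

def mutate(board, rng, p=0.1):
    """With probability p per position, replace the entry by a random row."""
    n = len(board)
    return [rng.randrange(n) if rng.random() < p else row for row in board]

# A known 8-queens solution scores the maximum fitness:
# fitness([0, 4, 7, 5, 2, 6, 1, 3]) == 28
```

A full GA loop would repeatedly select high-fitness parents, apply crossover and mutation, and replace the population with the offspring.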

Selection, Mutation, and Crossover
There are many variations: how selection is applied, which types of crossover operators are used, etc.
- Selection: individuals are chosen according to a fitness function and paired.
- Crossover: breaking points are calculated and the parent strings recombined.
- Mutation: elements of a string are modified with a given probability.

Summary
- Heuristics focus the search.
- Best-first search expands the node with the highest worth (defined by any measure) first.
- With the minimization of the estimated costs to the goal, h, we obtain greedy search.
- The minimization of f(n) = g(n) + h(n) combines uniform-cost and greedy searches. When h(n) is admissible, i.e., h never overestimates the true cost, we obtain A* search, which is complete and optimal.
- IDA* is a combination of iterative-deepening and A* search.
- Local search methods only ever work on one state, attempting to improve it step-wise.
- Genetic algorithms imitate evolution by combining good solutions.