Lecture Plan: Informed search strategies. Best-first search; greedy search; A* search; designing heuristics; hill-climbing.


Informed search strategies. (KA, AGH), 1 June 2010.

Blind vs. informed search strategies

Blind search methods: you already know them (BFS, DFS, UCS, and others). They do not analyse the nodes in the fringe; they simply pick the first one (going down or sideways).

Informed search methods: they try to guess which node in the fringe is most promising, by using an evaluation function, also known as a heuristic. The heuristic is based on additional knowledge about the search space.

Best-first search: the algorithm family

General approach: pick the node for expansion based on an evaluation function f(n), always expanding the node with the lowest f. The evaluation only estimates which node appears best; if we knew which node truly led to the goal, it would not be a search at all. The heuristic function h(n) is the key component of f(n).

Heuristic functions: h(n) = estimated cost of the cheapest path from node n to the goal node. If n is the goal node, h(n) = 0.
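The whole family can be captured in one routine parameterised by f; a minimal Python sketch (function name, graph encoding, and the toy example in the usage note are illustrative, not from the lecture):

```python
import heapq

def best_first_search(start, goal, neighbors, f):
    """Generic best-first search: always expand the frontier node with
    the lowest value of the evaluation function f(node, g).
    `neighbors(n)` yields (successor, step_cost) pairs."""
    frontier = [(f(start, 0.0), start, 0.0, [start])]  # (f, node, g, path)
    best_g = {}
    while frontier:
        _, node, g, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for succ, cost in neighbors(node):
            g2 = g + cost
            heapq.heappush(frontier, (f(succ, g2), succ, g2, path + [succ]))
    return None, float("inf")
```

With f(n) = h(n) this routine behaves as greedy best-first search; with f(n) = g(n) + h(n) it is A*.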

Heuristic functions: choosing an evaluation function

If we know the locations of cities, we know the straight-line distances between them. h_SLD(n) = straight-line distance from n to Bucharest.

[Romania road map with step costs in km.]

Straight-line distance to Bucharest:
Arad 366, Bucharest 0, Craiova 160, Dobreta 242, Eforie 161, Fagaras 176, Giurgiu 77, Hirsova 151, Iasi 226, Lugoj 244, Mehadia 241, Neamt 234, Oradea 380, Pitesti 100, Rimnicu Vilcea 193, Sibiu 253, Timisoara 329, Urziceni 80, Vaslui 199, Zerind 374.

Heuristic functions: admissible and consistent heuristics

Admissible heuristic: never overestimates the cost to reach the goal, i.e. h(n) ≤ h*(n), where h*(n) is the real cost to get from n to the goal. It is by nature optimistic.

Monotonic (consistent) heuristic: for every node n and every successor n' of n (generated by any action a), the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n': h(n) ≤ c(n, a, n') + h(n'). This is a form of the general triangle inequality: each side of a triangle cannot be longer than the sum of the other two sides.
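Both definitions can be checked numerically on a small graph; a hedged sketch (the function name and the edge encoding as (n, n', cost) triples are my own):

```python
def is_consistent(h, edges):
    """Check h(n) <= c(n, a, n') + h(n') for every directed edge.
    A consistent heuristic with h(goal) = 0 is also admissible."""
    return all(h[n] <= cost + h[m] for n, m, cost in edges)
```

For example, on a two-edge chain A-B-G with unit and double costs, h = {A: 3, B: 2, G: 0} passes the check, while inflating h(A) to 5 breaks the triangle inequality on edge A-B.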

Greedy best-first search

Only takes into account the heuristic, i.e. f(n) = h(n). Expansion trace from Arad towards Bucharest (numbers are h_SLD values):

1. Frontier: Arad 366.
2. Expand Arad: Sibiu 253, Timisoara 329, Zerind 374.
3. Expand Sibiu: Arad 366, Fagaras 176, Oradea 380, Rimnicu Vilcea 193.
4. Expand Fagaras: Sibiu 253, Bucharest 0. Goal reached.
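This trace can be reproduced in a few lines of Python (road distances and h_SLD values are taken from the map slide; the function itself is an illustrative sketch):

```python
import heapq

# Road distances (km) for the fragment of the Romania map used in the trace.
roads = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Oradea": 151, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
}
# Straight-line distance to Bucharest.
h_sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Oradea": 380, "Rimnicu Vilcea": 193, "Bucharest": 0}

def greedy_best_first(start, goal):
    """Expand the frontier node with the lowest h_SLD, i.e. f(n) = h(n)."""
    frontier = [(h_sld[start], [start])]
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        for succ in roads.get(node, {}):
            if succ not in path:        # avoid trivial loops
                heapq.heappush(frontier, (h_sld[succ], path + [succ]))
    return None
```

Note that greedy search returns Arad, Sibiu, Fagaras, Bucharest (140 + 99 + 211 = 450 km), which is not the optimal 418 km route: greediness is not optimal.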

Greedy best-first search: properties

Completeness: no; it can get stuck in loops.
Time complexity: O(b^m), but depends on the quality of the heuristic.
Space complexity: O(b^m); keeps all nodes in memory.
Optimality: no.

A* search

Idea: avoid expanding paths that are already expensive. f(n) = g(n) + h(n), where: g(n) = cost so far to reach n; h(n) = estimated cost from n to the goal (heuristic); therefore f(n) = estimated total cost of the path through n to the goal.

Expansion trace from Arad towards Bucharest (f = g + h):

1. Frontier: Arad 366 = 0 + 366.
2. Expand Arad: Sibiu 393 = 140 + 253, Timisoara 447 = 118 + 329, Zerind 449 = 75 + 374.
3. Expand Sibiu: Arad 646 = 280 + 366, Fagaras 415 = 239 + 176, Oradea 671 = 291 + 380, Rimnicu Vilcea 413 = 220 + 193.
4. Expand Rimnicu Vilcea: Craiova 526 = 366 + 160, Pitesti 417 = 317 + 100, Sibiu 553 = 300 + 253.
5. Expand Fagaras: Sibiu 591 = 338 + 253, Bucharest 450 = 450 + 0.
6. Expand Pitesti: Bucharest 418 = 418 + 0, Craiova 615 = 455 + 160, Rimnicu Vilcea 607 = 414 + 193.
7. Bucharest with f = 418 is now the lowest-f node on the frontier; the route Arad, Sibiu, Rimnicu Vilcea, Pitesti, Bucharest (cost 418) is returned.
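The full A* trace can also be checked in code; a minimal sketch over the relevant fragment of the map (distances from the slides; the function structure and names are my own):

```python
import heapq

roads = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Oradea": 151, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Sibiu": 80, "Craiova": 146, "Pitesti": 97},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
}
h_sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Oradea": 380, "Rimnicu Vilcea": 193,
         "Craiova": 160, "Pitesti": 100, "Bucharest": 0}

def a_star(start, goal):
    """Expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h_sld[start], 0, [start])]   # (f, g, path)
    best_g = {}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for succ, cost in roads.get(node, {}).items():
            heapq.heappush(frontier,
                           (g + cost + h_sld[succ], g + cost, path + [succ]))
    return None, float("inf")
```

Running a_star("Arad", "Bucharest") returns the optimal route through Rimnicu Vilcea and Pitesti with cost 418, matching the trace.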

A* optimality

Suppose a suboptimal goal G2 has been generated and sits on the fringe. Let n be an unexpanded fringe node on a shortest path to the optimal goal G.

[Diagram: Start, with n on the path to G, and G2 elsewhere on the fringe.]

Proof:
f(G2) = g(G2), since h(G2) = 0;
g(G2) > g(G), since G2 is suboptimal;
f(n) ≤ g(G), since h is admissible.
Hence f(G2) > f(n), so A* will never select G2 for expansion.

A* with a consistent heuristic: node expansion order

If the heuristic is consistent, A* expands nodes in order of increasing f value. A* gradually adds f-contours of nodes; contour i contains all nodes with f = f_i, where f_i < f_{i+1}.

[Romania map with f-contours at 380, 400, and 420 spreading out from Arad.]

A* with an inconsistent heuristic

If h is inconsistent: as we go along a path, f may sometimes decrease. A* then does not always expand nodes in order of increasing f value, as it may find lower-cost paths to nodes it has already expanded. A* should re-expand those nodes; however, the Graph-Search algorithm will not re-expand them. (This example is due to Prof. Dana Nau.)


Properties of A*

Completeness: yes, unless there are infinitely many nodes with f ≤ f(G).
Time complexity: exponential; depends on the mean relative error of the heuristic.
Space complexity: keeps all nodes in memory.
Optimality: yes, if: for Tree-Search, the heuristic is admissible; for Graph-Search, the heuristic is admissible and consistent.

Heuristic function quality

An admissible heuristic never overestimates the cost of reaching the goal; it usually underestimates it (see the SLD heuristic). However, a good heuristic function tries to minimise the gap between its value and the actual cost.

Example: how would you design a heuristic for the 8-puzzle?

Start State:    Goal State:
7 2 4           1 2 3
5 _ 6           4 5 6
8 3 1           7 8 _

Heuristic functions: the 8-puzzle

Example: how would you design a heuristic for the 8-puzzle (start state 7 2 4 / 5 _ 6 / 8 3 1, goal state 1 2 3 / 4 5 6 / 7 8 _)?

Two possible heuristics:
h1(n): number of tiles not in their places;
h2(n): sum of the Manhattan distances of all tiles from their places.

Heuristic functions: the 8-puzzle

For the start state S = 7 2 4 / 5 _ 6 / 8 3 1 and goal 1 2 3 / 4 5 6 / 7 8 _:
h1(S) = 6
h2(S) = 4 + 0 + 3 + 3 + 1 + 0 + 2 + 1 = 14
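These two values can be verified with a short sketch (the state encoding, 9-tuples read row by row with 0 for the blank, is my own assumption):

```python
def h1(state, goal):
    """Misplaced tiles; the blank (0) is not counted."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of tiles from their goal squares.
    States are 9-tuples read row by row, 0 = blank."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    return sum(abs(i // 3 - pos[t][0]) + abs(i % 3 - pos[t][1])
               for i, t in enumerate(state) if t != 0)
```

Since each tile's Manhattan distance is at least 1 whenever the tile is misplaced, h2(n) ≥ h1(n) holds automatically for every state.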

Heuristic functions: domination

h1(S) = 6; h2(S) = 4 + 0 + 3 + 3 + 1 + 0 + 2 + 1 = 14.

Heuristic domination: if h2(n) ≥ h1(n) for all n, then h2 dominates h1; h2 is then never worse than h1 for A*.

Performance comparison: number of nodes expanded when the solution is at depth d:

d     IDS             A*(h1)   A*(h2)
14    3,473,951       539      113
24    54,000,000,000  39,135   1,641

Heuristic functions: combining and deriving heuristics

Heuristic domination: if h2(n) ≥ h1(n) for all n, then h2 dominates h1 and is never worse for A*.

Hybrid heuristics: if h_a and h_b are admissible heuristics, then h(n) = max(h_a(n), h_b(n)) is admissible and dominates both h_a and h_b.

Deriving admissible heuristics: admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem.

Deriving admissible heuristics

Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem. For the 8-puzzle:

If the rules are relaxed so that a tile can move anywhere, then h1(n) (number of misplaced tiles) gives the exact cost of the relaxed solution.
If the rules are relaxed so that a tile can move to any adjacent square (regardless of whether it is occupied), then h2(n) (sum of Manhattan distances) gives the exact cost of the relaxed solution.

Local search

For many problems the solution itself is what matters, not how we got there: the path is irrelevant. State space: the set of complete configurations; the task is to find an optimal configuration (e.g., TSP) or a configuration satisfying constraints (e.g., a timetable). Local search: keep a single current state and try to improve it.

Example: Travelling Salesman Problem. Start with any complete tour, then try to improve it by switching pairs of edges.

Example: n-queens

Put n queens on an n × n board so that no two queens are on the same row, column, or diagonal. Repeatedly move a queen to reduce the number of conflicts: h = 5 → h = 2 → h = 0. This almost always solves n-queens problems almost instantaneously, even for very large n, e.g., n = 1 million.

Hill-climbing search

Hill-climbing (gradient ascent/descent), also called greedy local search. Like climbing Mt. Everest in thick fog with amnesia.

function Hill-Climbing(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node; neighbor, a node
  current ← Make-Node(Initial-State[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if Value[neighbor] ≤ Value[current] then return State[current]
    current ← neighbor
  end
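As a concrete sketch, steepest-descent hill-climbing on n-queens (assuming the complete-state formulation with one queen per column; all names are mine):

```python
import random

def conflicts(cols):
    """Number of attacking queen pairs; cols[i] = row of the queen in column i."""
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def hill_climb(n, rng):
    """Greedily move one queen per step to minimise conflicts.
    Returns a solution, or None when stuck on a local minimum."""
    cols = [rng.randrange(n) for _ in range(n)]
    while True:
        current = conflicts(cols)
        if current == 0:
            return cols
        best, move = current, None
        for c in range(n):                 # try every single-queen move
            orig = cols[c]
            for r in range(n):
                if r == orig:
                    continue
                cols[c] = r
                v = conflicts(cols)
                if v < best:
                    best, move = v, (c, r)
            cols[c] = orig
        if move is None:                   # no improving move: local minimum
            return None
        cols[move[0]] = move[1]
```

A None result illustrates the incompleteness discussed below; wrapping hill_climb in random restarts recovers completeness in practice.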

Hill-climbing search: the state-space landscape

[Landscape of the objective function over the state space, showing a shoulder, a "flat" local maximum, a local maximum, the global maximum, and the current state.]

The algorithm can get stuck on: local maxima, ridges, and plateaux (flat areas).

Variants of hill-climbing search

If an algorithm (e.g., standard hill-climbing) only selects moves that improve the solution, it is bound to be incomplete. A pure random walk, by contrast, is complete but very inefficient.
Stochastic hill-climbing picks one of the improving successors at random.
First-choice hill-climbing generates random successors until it finds one better than the current state.
Random-restart hill-climbing conducts a series of searches, each starting from a random start state. It is complete with probability approaching 1, because it must eventually generate a goal state as a start state.

Simulated annealing: background

In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then cooling them gradually. This allows the material to settle into a low-energy crystalline state.

The algorithm selects a move at random. If the move improves the solution, it is always made. If the move worsens the solution, the probability of making it depends on: the amount ΔE by which the solution is worsened, and the so-called temperature T, which is decreased in every iteration. That way, bad moves are more likely to be allowed at the start, and become less likely as the temperature decreases.

Simulated annealing

Idea: escape local maxima by allowing some bad moves, but gradually decrease their size and frequency.

function Simulated-Annealing(problem, schedule) returns a solution state
  inputs: problem, a problem; schedule, a mapping from time to temperature
  local variables: current, a node; next, a node; T, a temperature controlling the probability of downward steps
  current ← Make-Node(Initial-State[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← Value[next] − Value[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
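The pseudocode above translates almost line for line into Python; a minimal sketch, assuming the problem is given as a value function, a successor generator, and a cooling schedule passed as a callable (all names are mine):

```python
import math
import random

def simulated_annealing(state, value, neighbor, schedule, rng):
    """Maximise `value`; accept worsening moves with probability exp(dE/T)."""
    current = state
    for t in range(1, 10**9):
        T = schedule(t)
        if T <= 0:
            return current
        nxt = neighbor(current, rng)
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
```

Note that when dE ≤ 0 the acceptance probability e^(dE/T) is at most 1 and shrinks rapidly as T falls, which is exactly the "bad moves become less likely" behaviour described above.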

Local beam search

Idea: keep k states instead of 1; at each step, choose the top k of all their successors. This is not the same as k individual searches run in parallel: searches that find good states invite the others to join them, and threads that do not produce good results are discarded.

Problem: often, all k states end up on the same local maximum.
Solution: stochastic local beam search chooses the k successors at random, but biased towards the good ones.
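A minimal sketch of the deterministic variant (names and the pooling of successors with the current beam are my own choices):

```python
import heapq

def local_beam_search(starts, value, neighbors, steps):
    """Keep the k best states (k = len(starts)); each round, pool all
    successors together with the current beam and keep the k best."""
    k = len(starts)
    beam = list(starts)
    for _ in range(steps):
        pool = {s for b in beam for s in neighbors(b)} | set(beam)
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)
```

Because successors from all beam members compete in one pool, a single promising state can pull the whole beam towards its region, which is the "invite others to join" effect; it is also why all k states can collapse onto one local maximum.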

Genetic algorithms

A variant of stochastic beam search in which successor states are generated by combining two parent states (rather than by modifying a single state). We keep k randomly generated states, called the population; each state is called an individual and is represented as a string over a finite alphabet.

Steps in a genetic algorithm:
1. Evaluate the fitness of each individual in the population; fitness determines the probability of being selected for reproduction.
2. Perform selection, taking fitness into account.
3. Perform crossover within pairs (at a random crossover point).
4. Introduce random mutation to allow for stochastic exploration.

Genetic algorithms: example

[8-queens example, states encoded as digit strings:]
Population (fitness): 24748552 (24), 32752411 (23), 24415124 (20), 32543213 (11).
Selection probabilities: 31%, 29%, 26%, 14%.
Selected pairs: 32752411 + 24748552, 32752411 + 24415124.
Crossover offspring: 32748552, 24752411, 32752124, 24415411.
After mutation: 32748152, 24752411, 32252124, 24415417.
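The four steps can be sketched in a few small functions (a hedged illustration: the fitness function, mutation rate, and helper names are my own, not from the slides):

```python
import random

def reproduce(x, y, rng):
    """One-point crossover of two equal-length strings."""
    c = rng.randrange(1, len(x))
    return x[:c] + y[c:]

def mutate(s, alphabet, rng, rate=0.1):
    """Independently replace each character with a random one at `rate`."""
    return "".join(rng.choice(alphabet) if rng.random() < rate else ch
                   for ch in s)

def genetic_step(population, fitness, alphabet, rng):
    """One generation: fitness-proportional selection, crossover, mutation."""
    weights = [fitness(ind) for ind in population]
    children = []
    for _ in population:
        x, y = rng.choices(population, weights=weights, k=2)
        children.append(mutate(reproduce(x, y, rng), alphabet, rng))
    return children
```

Applied to the 8-queens population above (with fitness = number of non-attacking pairs, as in the slides), repeated calls to genetic_step reproduce the selection/crossover/mutation pipeline of the figure.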