Informed Search and Exploration


Informed Search and Exploration (Chapter 4)

Outline: Best-first search; A* search; Heuristics; IDA* search; Hill-climbing; Simulated annealing

Review: Tree search

function TREE-SEARCH(problem, fringe) returns a solution, or failure
    fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
    loop do
        if EMPTY?(fringe) then return failure
        node ← REMOVE-FIRST(fringe)
        if GOAL-TEST[problem] applied to STATE[node] succeeds then return SOLUTION(node)
        fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

A strategy is defined by picking the order of node expansion.
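This skeleton can be sketched in Python. A minimal illustration with a FIFO fringe (which makes it breadth-first), run on a hypothetical toy graph; the graph and state names are made up for the example:

```python
from collections import deque

def tree_search(initial_state, goal_test, expand):
    # Generic tree search: the strategy is fixed entirely by how the
    # fringe is ordered; a FIFO deque gives breadth-first behaviour.
    fringe = deque([(initial_state, [initial_state])])  # (state, path so far)
    while fringe:                       # EMPTY?(fringe) check
        state, path = fringe.popleft()  # REMOVE-FIRST(fringe)
        if goal_test(state):            # GOAL-TEST succeeds
            return path                 # SOLUTION(node)
        for succ in expand(state):      # INSERT-ALL(EXPAND(node), fringe)
            fringe.append((succ, path + [succ]))
    return None                         # failure

# Hypothetical toy state graph for illustration
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
path = tree_search('A', lambda s: s == 'D', lambda s: graph[s])  # ['A', 'B', 'D']
```

Swapping the deque for a priority queue keyed on an evaluation function turns the same skeleton into the best-first searches on the following slides.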

Best-first search

Idea: use an evaluation function for each node, an estimate of "desirability"; expand the most desirable unexpanded node.
Implementation: the fringe is a queue sorted in decreasing order of desirability.
Special cases: greedy search, A* search.

Romania with step costs in km

[Map of Romania. Road distances: Oradea-Zerind 71, Arad-Zerind 75, Arad-Sibiu 140, Arad-Timisoara 118, Oradea-Sibiu 151, Sibiu-Fagaras 99, Sibiu-Rimnicu Vilcea 80, Fagaras-Bucharest 211, Rimnicu Vilcea-Pitesti 97, Rimnicu Vilcea-Craiova 146, Pitesti-Bucharest 101, Pitesti-Craiova 138, Timisoara-Lugoj 111, Lugoj-Mehadia 70, Mehadia-Dobreta 75, Dobreta-Craiova 120, Bucharest-Giurgiu 90, Bucharest-Urziceni 85, Urziceni-Hirsova 98, Hirsova-Eforie 86, Urziceni-Vaslui 142, Vaslui-Iasi 92, Iasi-Neamt 87]

Greedy search

Evaluation function h(n) (heuristic) = estimate of cost from n to the closest goal.
E.g., h_SLD(n) = straight-line distance from n to Bucharest.
Greedy search expands the node that appears to be closest to the goal.

Greedy search example: Arad

After expanding Arad: Sibiu (253), Timisoara (329), Zerind (374)

After expanding Sibiu: Timisoara (329) and Zerind (374) remain on the fringe; Sibiu's successors are Arad (366), Fagaras (176), Oradea (380), Rimnicu Vilcea (193)

After expanding Fagaras: its successors are Sibiu (253) and Bucharest (0); Bucharest is the goal

Properties of greedy search

Complete: No. It can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt. It is complete in finite spaces with repeated-state checking.
Time: O(b^m), but a good heuristic can give dramatic improvement.
Space: O(b^m); keeps all nodes in memory.
Optimal: No.
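A compact Python sketch of greedy best-first search on the Romania example above. The straight-line distances and the subset of the road map sufficient for this trace are hard-coded from the slides:

```python
import heapq

# Straight-line distances to Bucharest (from the slides)
h = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
     'Fagaras': 176, 'Oradea': 380, 'Rimnicu Vilcea': 193, 'Bucharest': 0}

# Subset of the road map sufficient for this example
roads = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
         'Sibiu': ['Arad', 'Fagaras', 'Oradea', 'Rimnicu Vilcea'],
         'Fagaras': ['Sibiu', 'Bucharest']}

def greedy_search(start, goal):
    # Fringe ordered by h(n) alone: expand the node that appears closest
    fringe = [(h[start], start, [start])]
    expanded = set()                    # repeated-state checking
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        if node in expanded:
            continue
        expanded.add(node)
        for succ in roads.get(node, []):
            if succ not in expanded:
                heapq.heappush(fringe, (h[succ], succ, path + [succ]))
    return None

route = greedy_search('Arad', 'Bucharest')
# As in the slides: Arad -> Sibiu -> Fagaras -> Bucharest (cost 450, not optimal)
```

Greedy search commits to Fagaras because of its low h value and never considers the cheaper route through Rimnicu Vilcea and Pitesti, which is exactly why it is not optimal.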

A* search

Idea: avoid expanding paths that are already expensive.
Evaluation function f(n) = g(n) + h(n)
  g(n) = cost so far to reach n
  h(n) = estimated cost to goal from n
  f(n) = estimated total cost of path through n to goal
A* search uses an admissible heuristic, i.e., h(n) ≤ h*(n), where h*(n) is the true cost from n. (Also require h(n) ≥ 0, so h(G) = 0 for any goal G.)
E.g., h_SLD(n) never overestimates the actual road distance.

A* search example: Arad, f = 366 = 0 + 366

After expanding Arad: Sibiu 393 = 140 + 253, Timisoara 447 = 118 + 329, Zerind 449 = 75 + 374

After expanding Sibiu: Arad 646 = 280 + 366, Fagaras 415 = 239 + 176, Oradea 671 = 291 + 380, Rimnicu Vilcea 413 = 220 + 193

After expanding Rimnicu Vilcea: Craiova 526 = 366 + 160, Pitesti 417 = 317 + 100, Sibiu 553 = 300 + 253

After expanding Fagaras: Sibiu 591 = 338 + 253, Bucharest 450 = 450 + 0

After expanding Pitesti: Bucharest 418 = 418 + 0, Craiova 615 = 455 + 160, Rimnicu Vilcea 607 = 414 + 193
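The trace above can be reproduced in Python. A sketch of A* with f(n) = g(n) + h(n), with the slides' step costs and straight-line distances hard-coded as assumptions:

```python
import heapq

h = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
     'Fagaras': 176, 'Oradea': 380, 'Rimnicu Vilcea': 193,
     'Pitesti': 100, 'Craiova': 160, 'Bucharest': 0}

road_list = [('Arad', 'Sibiu', 140), ('Arad', 'Timisoara', 118),
             ('Arad', 'Zerind', 75), ('Sibiu', 'Fagaras', 99),
             ('Sibiu', 'Oradea', 151), ('Sibiu', 'Rimnicu Vilcea', 80),
             ('Fagaras', 'Bucharest', 211), ('Rimnicu Vilcea', 'Pitesti', 97),
             ('Rimnicu Vilcea', 'Craiova', 146), ('Pitesti', 'Bucharest', 101),
             ('Pitesti', 'Craiova', 138)]
edges = {}
for a, b, c in road_list:               # roads are two-way
    edges.setdefault(a, []).append((b, c))
    edges.setdefault(b, []).append((a, c))

def a_star(start, goal):
    # Fringe ordered by f(n) = g(n) + h(n)
    fringe = [(h[start], 0, start, [start])]
    best_g = {}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                    # already reached at least as cheaply
        best_g[node] = g
        for succ, cost in edges.get(node, []):
            heapq.heappush(fringe,
                           (g + cost + h[succ], g + cost, succ, path + [succ]))
    return None, float('inf')

route, cost = a_star('Arad', 'Bucharest')
# route: Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest, cost 418
```

Bucharest first appears on the fringe with f = 450 via Fagaras, but A* keeps expanding cheaper nodes and returns the cost-418 path through Pitesti, matching the slide.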

Optimality of A* (standard proof)

Theorem: A* search is optimal.
Suppose some suboptimal goal G2 has been generated and is in the queue. Let n be an unexpanded node on a shortest path to an optimal goal G1.

Optimality of A* (standard proof, cont'd)

[Figure: search tree with root start, an unexpanded node n on the path to the optimal goal G1, and a suboptimal goal G2 in the queue]

f(G2) = g(G2)   since h(G2) = 0
      > g(G1)   since G2 is suboptimal
      ≥ f(n)    since h is admissible

Since f(G2) > f(n), A* will never select G2 for expansion.

Optimality of A* (more intuitive)

Lemma: A* expands nodes in order of increasing f value.
It gradually adds f-contours of nodes (cf. breadth-first search, which adds layers).
Contour i contains all nodes with f = f_i, where f_i < f_{i+1}.

F-contours

[Figure: Romania map with f-contours at 380, 400, and 420 drawn around Arad]

Properties of A*

Complete: Yes, unless there are infinitely many nodes with f ≤ f(G).
Time: Exponential in (relative error in h × length of solution).
Space: Keeps all nodes in memory.
Optimal: Yes. A* cannot expand f_{i+1} until f_i is finished.
A* expands all nodes with f(n) < C*,
expands some nodes with f(n) = C*,
and expands no nodes with f(n) > C*.

Proof of lemma: Consistency

A heuristic is consistent if, for every node n and every successor n' reached by action a,
  h(n) ≤ c(n, a, n') + h(n')
If h is consistent, we have
  f(n') = g(n') + h(n')
        = g(n) + c(n, a, n') + h(n')
        ≥ g(n) + h(n)
        = f(n)
I.e., f(n) is nondecreasing along any path.

Admissible heuristics

E.g., for the 8-puzzle:
  h1(n) = number of misplaced tiles
  h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)

Start state:    Goal state:
  7 2 4           _ 1 2
  5 _ 6           3 4 5
  8 3 1           6 7 8

h1(S) = ??
h2(S) = ??
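Both heuristics are cheap to compute. A Python sketch using the slide's start and goal states, with 0 marking the blank:

```python
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal):
    # Number of misplaced tiles (the blank is not a tile)
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    # Total Manhattan distance of each tile from its goal square
    where = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = where[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

print(h1(start, goal), h2(start, goal))  # 8 18
```

Here every tile is out of place (h1 = 8) and the Manhattan distances sum to 18. Note that h2(n) ≥ h1(n) always holds, since every misplaced tile is at least one square from its goal; this is the dominance relation on the next slide.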

Dominance

If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is better for search.

Typical search costs:
  d = 14: IDS = 3,473,941 nodes; A*(h1) = 539 nodes; A*(h2) = 113 nodes
  d = 24: IDS ≈ 54,000,000,000 nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

Relaxed problems

Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem.
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.
Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.

Relaxed problems (cont'd)

Well-known example: the travelling salesperson problem (TSP). Find the shortest tour visiting all cities exactly once.
A minimum spanning tree can be computed in O(n²) and is a lower bound on the shortest (open) tour.
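The bound can be checked directly on a tiny instance. A sketch with Prim's algorithm in O(n²) and a brute-force shortest open tour; the four city coordinates are made up for illustration:

```python
import itertools
import math

# Made-up city coordinates for illustration (corners of a 4 x 3 rectangle)
cities = [(0, 0), (0, 3), (4, 0), (4, 3)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mst_cost(points):
    # Prim's algorithm in O(n^2): total weight of a minimum spanning tree
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return total

def shortest_open_tour(points):
    # Brute force: shortest path visiting every city exactly once
    n = len(points)
    return min(sum(dist(points[p[i]], points[p[i + 1]]) for i in range(n - 1))
               for p in itertools.permutations(range(n)))
```

On this rectangle both quantities equal 10, so the bound is tight here; in general MST ≤ shortest open tour, which is what makes the MST heuristic admissible for TSP search.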

Iterative Deepening A* (IDA*)

Idea: perform iterations of depth-first search, with the cutoff defined by the f-cost (g + h) rather than the depth of a node. Each iteration expands all nodes inside the contour for the current f-cost, peeping over the contour to find out where the next contour lies.

Iterative Deepening A* (IDA*)

function IDA*(problem) returns a solution sequence
    inputs: problem, a problem
    local variables: f-limit, the current f-COST limit
                     root, a node

    root ← MAKE-NODE(INITIAL-STATE[problem])
    f-limit ← f-COST(root)
    loop do
        solution, f-limit ← DFS-CONTOUR(root, f-limit)
        if solution is non-null then return solution
        if f-limit = ∞ then return failure

Iterative Deepening A* (IDA*)

function DFS-CONTOUR(node, f-limit) returns a solution sequence and a new f-COST limit
    inputs: node, a node
            f-limit, the current f-COST limit
    local variables: next-f, the f-COST limit for the next contour, initially ∞

    if f-COST[node] > f-limit then return null, f-COST[node]
    if GOAL-TEST[problem](STATE[node]) then return node, f-limit
    for each node s in SUCCESSORS(node) do
        solution, new-f ← DFS-CONTOUR(s, f-limit)
        if solution is non-null then return solution, f-limit
        next-f ← MIN(next-f, new-f)
    return null, next-f
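The two procedures translate directly into Python. A sketch reusing the Romania step costs and straight-line distances from the earlier slides (hard-coded as assumptions), with a cycle check along the current path added so that the depth-first search terminates on a graph:

```python
import math

# Straight-line distances to Bucharest and step costs (from the slides)
h = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
     'Fagaras': 176, 'Oradea': 380, 'Rimnicu Vilcea': 193,
     'Pitesti': 100, 'Craiova': 160, 'Bucharest': 0}
road_list = [('Arad', 'Sibiu', 140), ('Arad', 'Timisoara', 118),
             ('Arad', 'Zerind', 75), ('Sibiu', 'Fagaras', 99),
             ('Sibiu', 'Oradea', 151), ('Sibiu', 'Rimnicu Vilcea', 80),
             ('Fagaras', 'Bucharest', 211), ('Rimnicu Vilcea', 'Pitesti', 97),
             ('Rimnicu Vilcea', 'Craiova', 146), ('Pitesti', 'Bucharest', 101),
             ('Pitesti', 'Craiova', 138)]
edges = {}
for a, b, c in road_list:
    edges.setdefault(a, []).append((b, c))
    edges.setdefault(b, []).append((a, c))

def ida_star(start, goal):
    def dfs_contour(node, g, path, f_limit):
        f = g + h[node]
        if f > f_limit:
            return None, f              # peep over the contour
        if node == goal:
            return path, f_limit
        next_f = math.inf               # f-cost limit for the next contour
        for succ, cost in edges.get(node, []):
            if succ in path:            # block cycles along the current path
                continue
            solution, new_f = dfs_contour(succ, g + cost, path + [succ], f_limit)
            if solution is not None:
                return solution, f_limit
            next_f = min(next_f, new_f)
        return None, next_f

    f_limit = h[start]                  # initial limit: f-cost of the root
    while True:
        solution, f_limit = dfs_contour(start, 0, [start], f_limit)
        if solution is not None:
            return solution
        if f_limit == math.inf:
            return None
```

Successive contours here are 366, 393, 413, 415, 417, and 418, at which point the optimal Arad-to-Bucharest path is found, using only linear space per iteration.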

Properties of IDA*

Complete: Yes, similar to A*.
Time: Depends strongly on the number of different values the heuristic can take on.
  8-puzzle: few distinct values, good performance.
  TSP: the heuristic value is different for every state, so each contour includes only one more state than the previous contour. If A* expands N nodes, IDA* expands 1 + 2 + ... + N = O(N²) nodes.
Space: It is DFS, so it only requires space proportional to the longest path it explores. If δ is the smallest operator cost and f* is the optimal solution cost, IDA* requires about bf*/δ nodes.
Optimal: Yes, similar to A*.

Iterative improvement algorithms

In many optimization problems the path is irrelevant; the goal state itself is the solution.
Then the state space is a set of complete configurations. We either find an optimal configuration (e.g., TSP) or find a configuration satisfying constraints (e.g., a timetable).
In such cases we can use iterative improvement algorithms: keep a single current state and try to improve it.
Constant space; suitable for online as well as offline search.

Example: Travelling Salesperson Problem

Start with any complete tour; perform pairwise exchanges.

Example: n-queens

Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
Move a queen to reduce the number of conflicts.

Hill-climbing (or gradient ascent/descent)

function HILL-CLIMBING(problem) returns a state that is a local maximum
    inputs: problem, a problem
    local variables: current, a node
                     neighbor, a node

    current ← MAKE-NODE(INITIAL-STATE[problem])
    loop do
        neighbor ← a highest-valued successor of current
        if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
        current ← neighbor
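A Python sketch of this loop for n-queens. Since we count conflicts, this is the descent variant (lower is better), which mirrors the slide's VALUE maximisation; the board encoding is an assumption of the example:

```python
def conflicts(board):
    # board[c] = row of the queen in column c; count attacking pairs
    n = len(board)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if board[a] == board[b] or abs(board[a] - board[b]) == b - a)

def hill_climb(board):
    # Steepest descent: take the single-queen move that most reduces
    # conflicts; stop at a local minimum (which may not be a solution).
    n = len(board)
    while True:
        best_move, best_value = None, conflicts(board)
        for col in range(n):
            for row in range(n):
                if row != board[col]:
                    candidate = board[:col] + [row] + board[col + 1:]
                    value = conflicts(candidate)
                    if value < best_value:
                        best_move, best_value = candidate, value
        if best_move is None:
            return board
        board = best_move

result = hill_climb([0, 0, 0, 0])
```

Each iteration strictly reduces the conflict count, so the loop always terminates; whether it ends at conflicts = 0 or at a non-solution local minimum depends on the initial state, which is exactly the failure mode discussed on the next slide.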

Hill-climbing (cont'd)

Like climbing Everest in thick fog with amnesia.
Problem: depending on the initial state, it can get stuck on local maxima.

[Figure: objective function over the state space, showing the global maximum, a shoulder, a local maximum, and a flat local maximum relative to the current state]

In continuous spaces there are problems with choosing the step size, and convergence can be slow.

Simulated annealing

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
    inputs: problem, a problem
            schedule, a mapping from time to "temperature"
    local variables: current, a node
                     next, a node
                     T, a "temperature" controlling the probability of downward steps

    current ← MAKE-NODE(INITIAL-STATE[problem])
    for t ← 1 to ∞ do
        T ← schedule[t]
        if T = 0 then return current
        next ← a randomly selected successor of current
        ΔE ← VALUE[next] − VALUE[current]
        if ΔE > 0 then current ← next
        else current ← next only with probability e^(ΔE/T)
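A Python sketch of this procedure on a made-up one-dimensional landscape: maximise value(x) = -(x - 7)² over the integers 0..20, with an assumed linear cooling schedule:

```python
import math
import random

def simulated_annealing(initial, value, neighbor, schedule, rng):
    # Accept every uphill move; accept a downhill move of size dE < 0
    # with probability e^(dE/T), where T follows the cooling schedule.
    current = initial
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = neighbor(current, rng)
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
        t += 1

# Made-up 1-D landscape for illustration
def value(x):
    return -(x - 7) ** 2

def neighbor(x, rng):
    return min(20, max(0, x + rng.choice([-1, 1])))

def schedule(t):
    return max(0.0, 2.0 * (1 - t / 2000))  # linear cooling to zero

best = simulated_annealing(0, value, neighbor, schedule, random.Random(0))
```

Early on, the high temperature lets the walk accept sizeable downhill steps; as T falls, downhill acceptance probability e^(ΔE/T) collapses toward zero and the state settles at (or next to) the maximum x = 7.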

Ch. 04 p.39/39 Properties of simulated annealing Idea: escape local maxima by allowing some bad moves but gradually decrease their size and frequency At fixed temperature reaches Boltzman distribution decreased slowly enough state, state occupation probability always reach best Is this necessarily an interesting guarantee?? Devised by Metropolis et al., 1953, for physical process modelling Widely used in VLSI layout, airline scheduling, etc.