6.825 Search: Lecture 2, September 9, 2003


1 Problem-Solving Problems

When your environment can be effectively modeled as having
- discrete states and actions
- deterministic, known world dynamics
- a known initial state
- an explicit goal or goal test, and a path cost
treat it as a search problem. This formulation is particularly useful when different goals arise in the same world. Also, problems are often presented in this form (scheduling, navigation), so either the agent or the designer has to do problem solving.

Build an abstraction of your domain:
- initial state
- successor function (maps states into possible successor states)
- goal test
- additive path cost, described as non-negative step costs

The objective will be to find the lowest-cost path to a goal state.

2 Uninformed search

This problem is intractable in general: you apparently have to consider all possible paths. It's not that bad, though, because of additivity, which means that, when considering going through some state s, it's enough to find the cheapest path from the start to s, and the cheapest path
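The abstraction above can be sketched in code. This is a minimal illustration, not from the lecture; the class and method names are made up for the example.

```python
class SearchProblem:
    """A problem is an initial state, a successor function, a goal test,
    and non-negative additive step costs."""

    def __init__(self, initial):
        self.initial = initial

    def successors(self, state):
        """Yield (action, next_state, step_cost) triples."""
        raise NotImplementedError

    def is_goal(self, state):
        raise NotImplementedError


class GraphProblem(SearchProblem):
    """Example instance: navigating a directed graph with unit step costs."""

    def __init__(self, graph, initial, goal):
        super().__init__(initial)
        self.graph, self.goal = graph, goal

    def successors(self, state):
        for nxt in self.graph.get(state, []):
            yield ("go-" + nxt, nxt, 1)

    def is_goal(self, state):
        return state == self.goal
```

Any of the search methods below can then be written against this interface, independent of the particular domain.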

from s to a goal. You don't have to think of all possible combinations of paths to and from s. We'll take advantage of this later.

Search tree: the root is the start state; children are successor states. A single state may occur multiple times in the tree. States are not the same as nodes.

Measuring search complexity:
- b: branching factor
- d: depth of the shallowest solution in the search tree
- m: maximum depth of the tree

Basic search methods. Expand a node by checking to see if it's a goal. If so, return it. If not, visit its successors by putting them on the agenda. The methods differ in the order in which the next node to be expanded is chosen.

- depth-first: agenda is a stack; time O(b^m), space O(bm); not optimal
- breadth-first: agenda is a queue; time O(b^d), space O(b^d); optimal
- uniform-cost: agenda is a priority queue; expand the node with the least cost so far; optimal
- iterative deepening: do depth-first search with increasingly deep cutoffs; time b + b^2 + b^3 + ... + b^d = O(b^d), space O(bd); optimal. This is often the uninformed method of choice.

3 Repeated states

Algorithms that forget their history are doomed to repeat it. Consider a rectangular street grid: the search tree has 4^d leaves, but there are only about 2d^2 distinct states at depth d.

Rule 1: In any search, never revisit a state that's on the current path.

In these uninformed methods, it's also okay to follow Rule 2: never visit a node (add it to the agenda) whose state has already been expanded. We retain optimality, because all of our methods (so far) visit nodes via the best path first. Note, though, that we lose the space advantages of DFS if we do this.

4 Formulating problems

- Driving with gas stations
- Short-order cook: 4 dishes, 2 burners; utility is the temperature of all the dishes at the end
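As an illustration of one of the uninformed methods, here is a sketch of breadth-first search with a queue agenda, applying Rule 2 (never add a node whose state has already been visited). The function signature is illustrative, not from the lecture.

```python
from collections import deque

def breadth_first_search(initial, successors, is_goal):
    """successors(s) yields next states; returns a path (list of states),
    or None if no goal is reachable."""
    if is_goal(initial):
        return [initial]
    agenda = deque([[initial]])   # queue of paths, FIFO order
    visited = {initial}           # Rule 2: states already put on the agenda
    while agenda:
        path = agenda.popleft()
        for nxt in successors(path[-1]):
            if nxt in visited:
                continue
            visited.add(nxt)
            if is_goal(nxt):
                return path + [nxt]
            agenda.append(path + [nxt])
    return None
```

Swapping the deque for a stack gives depth-first search; swapping it for a priority queue keyed on path cost gives uniform-cost search.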

5 Informed search

We can do better by focusing the search in a reasonable direction.

- f(n): evaluation function on nodes; choose the node with the lowest f(n) to expand
- h(n): heuristic function: estimated cost of the cheapest path from the state at node n to a goal state; h is admissible if it never overestimates this cost
- g(n): cost so far

A* search sets f(n) = g(n) + h(n). It's optimal if h is admissible and we don't apply Rule 2 above.

Straight-line distance is a good heuristic. In general, you can often relax a problem and use the solution cost in the relaxed problem as a lower bound on the actual costs.

Beware of avoiding repeated states! It is no longer necessarily the case that we expand states via the cheapest path first. So, if we find a new path to a state that's on the agenda, discard the more expensive path.

Another strategy is to ensure that your heuristic is consistent: for every node n and successor n', h(n) <= c(n, a, n') + h(n'). That is, you don't have heuristic values that drop by more than the step cost as you follow a path. If your heuristic is consistent, then f values are non-decreasing along a path, so the sequence of nodes expanded by A* has non-decreasing f values, and the first time we reach the goal, it must be via the cheapest path. So it's okay to use Rule 2.

6 Memory-bounded informed search

A simple version is iterative-deepening A*, called IDA*. It's like iterative deepening, but instead of a depth cutoff on each iteration, we use an f-cost cutoff: set the cutoff to the smallest f-cost of any node that exceeded the cutoff on the previous iteration. Not so good, because it tends to add too few new nodes in each iteration.

Recursive best-first search (RBFS) treats the f values a bit more generally: they are always an underestimate of the total path cost of going through a node, and we can, in general, improve these estimates by looking farther down the tree.
- keep track of the f-cost of the best alternative path from any ancestor of the current node
- if the current node's f-cost exceeds this limit, backtrack to that alternative
- on the way back, replace the f-values along the path with the best f-value of the children

Better than IDA*, but it still generates too many nodes. Optimal if h is admissible. Space O(bd). Time is too hard to characterize; RBFS can't make use of graph-search efficiencies.
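The basic A* loop from Section 5, including the rule of discarding the more expensive path when a cheaper one to the same state is found, can be sketched as follows (the function names are illustrative, not from the lecture):

```python
import heapq

def a_star(initial, successors, is_goal, h):
    """successors(s) yields (next_state, step_cost) pairs; h is the heuristic.
    Returns (path, cost), expanding the node with the lowest f = g + h."""
    agenda = [(h(initial), 0, initial, [initial])]   # (f, g, state, path)
    best_g = {initial: 0}                            # cheapest g found per state
    while agenda:
        f, g, state, path = heapq.heappop(agenda)
        if g > best_g.get(state, float("inf")):
            continue  # stale entry: a cheaper path to this state was found
        if is_goal(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2                     # discard the pricier path
                heapq.heappush(agenda, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

With an admissible h this returns an optimal solution; with a consistent h, no state is ever expanded twice, which is the Rule 2 point made above.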

Simplified Memory-Bounded A* (SMA*) uses all the memory you have available to avoid backtracking too much.

- Works like A* until memory is full
- Then forget the leaf node with the highest f value
- Back its value up to its parent: this means we only regenerate that subtree when all other subtrees have been shown to be worse

SMA* is complete if there is any reachable solution (at a depth that fits in memory), and it returns the best reachable solution.

7 Learning to Search Better

We can try to learn, at the meta-level, a mapping from the state of a search problem to which node to expand next. More later.

8 Online Search

What if
- we don't know the successor function,
- we do have a set of actions that can be taken in each state,
- the states are completely observable, so when we take an action we can observe the successor state, and
- the world is safely explorable: no dead ends?

The agent has to wander around the world in search of the goal. It could try a random walk, but random walks are terrible in general; we need to be more systematic.

LRTA*:
- Let H(s) be the current best estimate of the cost to the goal from s.
- An agent in state s should choose the action a that minimizes c(s, a, s') + H(s').
- Untried actions have cost h(s): optimism under uncertainty.
- The initial h(s) has to be admissible.

LRTA* is guaranteed to find the goal in any safely-explorable finite space, but it can be led astray in infinite spaces.
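The LRTA* update rule above can be sketched as an agent loop. This is a simplified illustration, not the lecture's code; the function names and the max_steps cap are assumptions added for the example.

```python
def lrta_star(start, successors, is_goal, h, max_steps=1000):
    """successors(s) yields (next_state, step_cost) pairs; h is an admissible
    initial estimate of cost-to-goal. Returns the path actually travelled."""
    H = {}            # learned cost-to-goal estimates, overriding h
    s = start
    path = [s]
    for _ in range(max_steps):
        if is_goal(s):
            return path
        # One-step lookahead: score each move by c(s, a, s') + H(s'),
        # using the optimistic h(s') for states not yet estimated.
        options = [(c + H.get(s2, h(s2)), s2) for s2, c in successors(s)]
        best_cost, best_next = min(options)
        H[s] = best_cost   # back the best estimate up into H(s)
        s = best_next
        path.append(s)
    return path
```

Because H(s) only ever rises toward the true cost, the agent cannot cycle forever in a finite safely-explorable space: estimates inside a loop keep increasing until leaving the loop looks cheaper.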

9 Exercises

- 3.7d, 3.9, 3.11a-d, 3.19a, e
- Construct a search example with an inadmissible heuristic and show that it can lead to a suboptimal solution.
- 4.2, 4.3, 4.5, 4.14
- What would happen in SMA* if you didn't back up the values?
- Give an example of a (safely explorable) infinite space in which LRTA* would fail to find the goal.