Problem Solving Agents


Outline: Problem-Solving Agents; Well-defined Problems; Solutions (as sequences of actions); Example Problems; Search Trees; Uninformed Search Algorithms.

Well-defined Problems

1. A State Space, S, an initial state s_0 in S, and a set of operators (or actions) {A_i | i = 1, 2, ...}. Formally, the state space is the set of states that can be reached by applying any sequence of actions to the initial state, e.g., S = {s_0, ..., A_i1 s_0, ..., A_i1 A_i2 s_0, ..., A_i1 A_i2 ... A_ik s_0, ...}. A path is a particular sequence of actions applied to the initial state.

2. A Goal Test, or predicate, γ : S → {0, 1}, which returns 1 if and only if its argument is a goal state. (N.B., zero, one, or more states may satisfy this predicate.)

3. An optional path cost function, g, which assigns a numeric cost to a given path.
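The three ingredients above can be made concrete in a short sketch. The problem chosen here (walk along the integer number line from 0 to 7) and all function names are hypothetical illustrations, not part of the lecture:

```python
# A minimal sketch of a well-defined problem: state space, operators,
# goal test, and path cost.  The number-line problem is a made-up example.

def initial_state():
    return 0                      # s_0

def actions(s):
    """The operators A_i applicable at state s: step left or right."""
    return [+1, -1]

def result(s, a):
    """Applying operator a to state s yields a successor state."""
    return s + a

def goal_test(s):
    """The predicate gamma: True iff s is a goal state."""
    return s == 7

def path_cost(path):
    """The optional cost function g: here, one unit per action."""
    return len(path)

# A path is a particular sequence of actions applied to the initial state:
path = [+1] * 7
s = initial_state()
for a in path:
    s = result(s, a)
assert goal_test(s) and path_cost(path) == 7
```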

A Solution

A solution is a path whose final state satisfies the goal test. An optimal solution is a solution with minimal path cost.

Example Problem: Graph Traversal

Each node in the graph corresponds to a state. Traversing an edge corresponds to an action. Find a path that connects the two blue nodes.

Example Problem: The Eight Puzzle

[Figure: a scrambled 3x3 board of numbered tiles beside the goal configuration.] Can it be done?

Example Problem: Eight Queens

[Figure: an 8x8 chessboard with eight queens placed on it.]

Example Problem: Vacuum World

[Figure: the vacuum-world state-space graph, with transitions labeled L (left), R (right), and S (suck).]

Example Problem: Missionaries and Cannibals

On one side of a river are 3 missionaries, 3 cannibals, and 1 boat. The boat can hold at most two passengers, and at least one passenger must pilot it. If at any time or place the cannibals outnumber the missionaries, the missionaries will be eaten.

Representation: let (m, c, b) denote the state with m missionaries, c cannibals, and b boats on the original side of the river.

Initial State: (3, 3, 1)
Goal State: (0, 0, 0)
Operators:
(m, c, 1) → (m-1, c, 0)        (m, c, 0) → (m+1, c, 1)
(m, c, 1) → (m-2, c, 0)        (m, c, 0) → (m+2, c, 1)
(m, c, 1) → (m-1, c-1, 0)      (m, c, 0) → (m+1, c+1, 1)
(m, c, 1) → (m, c-1, 0)        (m, c, 0) → (m, c+1, 1)
(m, c, 1) → (m, c-2, 0)        (m, c, 0) → (m, c+2, 1)
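This formulation is small enough to solve by brute force. The sketch below (names and structure are my own, not from the lecture) encodes the ten operators as five crossings whose direction depends on the boat, filters out states where someone gets eaten, and finds a shortest solution by breadth-first search:

```python
# A sketch of the missionaries-and-cannibals formulation: states are
# (m, c, b) tuples counted on the original bank.
from collections import deque

MOVES = [(1, 0), (2, 0), (1, 1), (0, 1), (0, 2)]   # passengers per crossing

def legal(m, c):
    # Counts stay in range, and cannibals never outnumber missionaries
    # on a bank where missionaries are present.
    return (0 <= m <= 3 and 0 <= c <= 3 and
            (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c))

def successors(state):
    m, c, b = state
    sign = -1 if b == 1 else +1          # boat leaving or returning
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if legal(nm, nc):
            yield (nm, nc, 1 - b)

def bfs(start=(3, 3, 1), goal=(0, 0, 0)):
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:                    # reconstruct the path of states
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)

solution = bfs()
```

The shortest solution takes 11 crossings, so the returned path contains 12 states.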

Search Trees

The root vertex represents the initial state s_0. Every vertex in the tree corresponds to a state that can be reached from the initial state. Each edge that originates from a parent node s_i corresponds to an action A_j that can be applied at state s_i; the child node s_k adjacent to this edge represents the state obtained by applying this action, and s_k is said to be a successor of s_i. Terminal states, if any occur, are leaves of the tree. The depth of a state is the number of actions required to reach it from the initial state.

Search Tree: Example

[Figure: a search tree for the Eight Puzzle; each vertex is a board configuration, and each edge moves one tile.]

Search Strategies

completeness: A search strategy is complete if it is guaranteed to find a goal state whenever one exists.
informedness: A search strategy is uninformed if it does not attempt to estimate the cost of the path between the current state and the goal.
time complexity: the number of operations that a strategy requires to reach a goal state.
space complexity: the amount of memory that a strategy requires to reach a goal state.
optimality: A strategy is optimal if it is guaranteed to find the solution with the minimum path cost.
expansion: A state is expanded when each applicable action is applied to it.

Uninformed Search

Search algorithms generally maintain two lists:
1. visited: a list of previously expanded states.
2. queue (or frontier): a list of candidate states. At the beginning, this list consists of only the initial state.

Search algorithms generally repeat the following steps:
1. Is the current state a goal?
2. Obtain all successors of the current state: apply all applicable actions.
3. Combine the new candidate states (successors) with the current candidate states.

Search Algorithm (PAIP, page 191)

;;; tree-search is a general-purpose search that can implement
;;; many search algorithms.  Just modify the combiner function.
(defun tree-search (states goal-p successors combiner)
  "Find a state that satisfies goal-p.  Start with states,
  and search according to successors and combiner."
  (format t "~&;; states: ~a" states)
  (cond ((null states) nil)                         ; Stop!
        ((funcall goal-p (first states))
         (first states))                            ; Eureka!
        (t (tree-search                             ; Continue.
             (funcall combiner
                      (funcall successors (first states))
                      (rest states))
             goal-p successors combiner))))

Search Strategies

Breadth-First Search: new candidates are placed at the end of the queue, i.e., states are expanded in the order in which they are discovered.
Uniform-Cost Search: states are sorted in the queue according to the cost of their paths.
Depth-First Search: the last candidates obtained are expanded first.
Depth-Limited Search: a depth-first search that does not expand any node below a predetermined depth.
Iterative Deepening Search: a depth-limited search in which the depth limit is incremented after each failed search.
Bidirectional Search: two simultaneous searches, one directed from the initial state toward the goal and the other from the goal toward the initial state.

Breadth First Search Algorithm

;;; To perform a breadth-first search, the new successors must be
;;; appended to the end of the list of remaining states.  Call this
;;; action a prepend: the old states are prepended to the new ones.
(defun prepend (lista listb)
  "Prepend listb to lista."
  (append listb lista))

(defun breadth-first-search (start goal-p successors)
  "Search by expanding the shallowest active state."
  (tree-search (list start) goal-p successors #'prepend))

Depth First Search Algorithm

;;; Using append as the combiner implements a depth-first search:
;;; new successors go to the front of the queue.
(defun depth-first-search (start goal-p successors)
  "Search by expanding the deepest active state."
  (tree-search (list start) goal-p successors #'append))
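The two Lisp definitions differ only in their combiner, and the same trick carries over to other languages. Here is a Python sketch of the idea (my own port, not PAIP's code), exercised on a made-up state space where the successors of n are 2n and 2n+1:

```python
# A sketch of tree-search with a pluggable combiner: breadth-first and
# depth-first search differ only in where new successors are placed.

def tree_search(states, goal_p, successors, combiner):
    """Find a state satisfying goal_p, expanding states in combiner order."""
    while states:
        state, rest = states[0], states[1:]
        if goal_p(state):
            return state
        states = combiner(list(successors(state)), rest)
    return None

def breadth_first(start, goal_p, successors):
    # "prepend": new successors go to the END of the queue.
    return tree_search([start], goal_p, successors,
                       lambda new, old: old + new)

def depth_first(start, goal_p, successors):
    # "append": new successors go to the FRONT of the queue.
    return tree_search([start], goal_p, successors,
                       lambda new, old: new + old)

# Toy state space: an infinite binary tree of integers rooted at 1.
succ = lambda n: [2 * n, 2 * n + 1]
assert breadth_first(1, lambda n: n == 12, succ) == 12
```

Note that on this infinite tree depth-first search would never return for most goals, which previews the completeness comparison below.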

Comparison (AIMA, Chapter 3)

Criterion   Breadth-First   Uniform-Cost   Depth-First         Depth-Limited    Iterative Deepening   Bidirectional
Complete?   Yes             Yes            Only for finite m   Yes (if l > d)   Yes                   Yes
Time        b^(d+1)         b^(C*/ε)       b^m                 b^l              b^d                   b^(d/2)
Space       b^(d+1)         b^(C*/ε)       b·m                 b·l              b·d                   b^(d/2)
Optimal?    Yes             Yes            No                  No               Yes                   Yes

b denotes the branching factor; d denotes the solution depth; m denotes the maximum depth of the tree; l denotes the depth limit; C* denotes the optimal path cost and ε the minimum action cost.
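The table's most striking row is space: iterative deepening keeps only the current path (b·d) while remaining complete and optimal for unit costs. A sketch of how it does so (my own illustration, again on a toy integer tree):

```python
# A sketch of depth-limited search and iterative deepening: linear space
# (one path in memory at a time) with completeness on infinite trees.

def depth_limited(state, goal_p, successors, limit):
    """Depth-first search that never descends below the given limit."""
    if goal_p(state):
        return [state]
    if limit == 0:
        return None
    for s in successors(state):
        path = depth_limited(s, goal_p, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(start, goal_p, successors, max_depth=50):
    # Re-run depth-limited search with limits 0, 1, 2, ...; the repeated
    # shallow work is dominated by the final b^d iteration.
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal_p, successors, limit)
        if path is not None:
            return path
    return None

succ = lambda n: [2 * n, 2 * n + 1]   # infinite binary tree of integers
path = iterative_deepening(1, lambda n: n == 5, succ)
```

Plain depth-first search would diverge down 1, 2, 4, 8, ... on this tree; the depth limit is what restores termination at each round.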

Breadth-First Search

Depth-First Search: Trémaux's Algorithm (19th century)

Marching forward:
- New junction (no labeled paths): place X at exit; select a new path; place N at its entrance; march forward.
- Old junction (some labeled paths): place N at exit; turn around; march backward.
- Dead end: turn around; march backward.
- Goal: Eureka!

Marching backward:
- Original entrance: give up!
- Old junction with some unlabeled paths: select a new (unlabeled) path; place N at its entrance; march forward.
- Old junction with no unlabeled paths: select the path labeled X; march backward.
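On an explicit graph, the core of these marking rules (never walk the same passage twice, backtrack when stuck) amounts to a depth-first traversal with edge marks. This is a loose sketch of that invariant, not a literal transcription of the table; the graph and names are hypothetical:

```python
# A loose sketch of Tremaux-style maze walking on a graph: the chalk
# marks become a set of traversed edges, and the forward/backward
# marching becomes depth-first backtracking.

def tremaux(graph, start, goal):
    """graph: dict mapping each junction to its adjacent junctions."""
    marked = set()                       # passages we have already walked
    path = [start]
    while path:
        here = path[-1]
        if here == goal:
            return path                  # Eureka!
        for there in graph[here]:
            if (here, there) not in marked and (there, here) not in marked:
                marked.add((here, there))   # mark the new entrance
                path.append(there)          # march forward
                break
        else:
            path.pop()                   # no fresh passage: march backward
    return None                          # back at the entrance: give up!

# Hypothetical maze: a triangle A-B-C with the goal G hanging off C.
MAZE = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B', 'G'], 'G': ['C']}
route = tremaux(MAZE, 'A', 'G')
```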

A Recursive Implementation of Tarry's Algorithm

Marching forward:
- New junction (no labeled paths): place X at exit; select a new path; place N at its entrance; march forward.
- Old junction with some other unlabeled paths (in addition to the current path): place an I at the current exit; select a new (unlabeled) path; place N at its entrance; march forward.
- Old junction with no other unlabeled paths: turn around; place N at the current entrance; march backward.
- Dead end: turn around; march backward.
- Goal: Eureka!

Marching backward:
- Original entrance: give up!
- Old junction with some unlabeled paths: select a new (unlabeled) path; place an N at its entrance; march forward.
- Old junction with no unlabeled paths and at least one path labeled I: select a path labeled I; change this I to an N; march backward.
- Old junction with no unlabeled paths (and none labeled I): select the path labeled X; march backward.

Tarry's Algorithm with Stones

A simpler implementation of Tarry's algorithm involves placing 1, 2, or 3 stones at path entrances and exits. The following is adapted from Peter Harrison's micromouse web site (http://micromouse.cannock.ac.uk/maze/othermethods.htm).

Place 3 stones (formerly an X) at the exit of a path used to enter a new junction. Place 1 stone (formerly an I) at the exit of a path used to enter an old junction. Pick a path entrance according to the following rules, in order:
1. If possible, select a path that has 0 stones at its entrance. Place 2 stones (formerly an N) at its entrance, and proceed.
2. Otherwise, select a path entrance that contains only 1 stone. Add 1 additional stone (converting an I into an N) and proceed.
3. If all else fails, select the path entrance that contains 3 stones, and proceed.

Under no circumstances should you enter a passage that has 2 stones at its entrance, or drop stones in any other manner.

Backtracking Search

Fixed-Depth Search

Iterative Deepening

Constraint Satisfaction Problem (CSP)

Do not expand the same state more than once.
Backtracking: do not expand a state that already violates the constraints.
Forward Checking: do not expand a state if all remaining values for the next variable would violate the constraints.
Arc Consistency: do not expand a state if some remaining variable cannot be assigned any value that satisfies the constraints.