Computer Science and Software Engineering, University of Wisconsin - Platteville
3. Search (Part 1)
CS 3030 Lecture Notes, Yan Shi, UW-Platteville
Read: Textbook Chapters 3.7-3.9, 3.12, 4.

Problem Solving as Search
A search problem is defined by a state space:
- The initial state to start from
- A description of all possible actions available in any state
- A transition model: what each action does
- The goal states (or a goal test)
- A path cost function that assigns a cost to each path
A solution to the problem is an action sequence that leads from the initial state to a goal state.
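As a minimal sketch of how these components might be written down in code (the class name, method names, and the dictionary-based transition model are illustrative assumptions, not something defined in the lecture notes):

# Minimal sketch of the problem components listed above.
# Names and the {state: {action: next_state}} encoding are illustrative assumptions.
class SearchProblem:
    def __init__(self, initial_state, goal_states, transitions, step_cost=1):
        self.initial_state = initial_state      # where the search starts
        self.goal_states = set(goal_states)     # set of goal states
        self.transitions = transitions          # {state: {action: next_state}}
        self.step_cost = step_cost              # cost of a single action

    def actions(self, state):
        """All actions available in the given state."""
        return list(self.transitions.get(state, {}))

    def result(self, state, action):
        """Transition model: the state reached by applying action in state."""
        return self.transitions[state][action]

    def is_goal(self, state):
        return state in self.goal_states

    def path_cost(self, path):
        """Cost of a path; here simply (number of actions) * step_cost."""
        return (len(path) - 1) * self.step_cost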

The river crossing puzzle
- Initial state: P, F, C, G all on the starting bank
- Actions: the person can cross the river alone or take one item along
- Transition model / constraints: the fox eats the chicken, and the chicken eats the grain, if they are left alone together
- Goal state: P, F, C, G all on the other bank
- Path cost function: every crossing takes equal effort
The state space is usually represented as a graph; a search tree, which enumerates the possible paths through that graph, is more often used for problem solving. Can you construct the state space and search tree for the river crossing puzzle?
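One way to see the state space concretely is to generate it in code. The sketch below is not part of the original notes; the state encoding (the set of whoever is on the starting bank) and the helper names are my own illustrative assumptions:

# Illustrative sketch of the river crossing state space.
# A state is a frozenset of whoever is on the starting bank;
# everything not in the set is on the far bank.
ALL = frozenset("PFCG")   # Person, Fox, Chicken, Grain

def is_legal(state):
    """A bank without the person must not pair fox+chicken or chicken+grain."""
    for bank in (state, ALL - state):
        if "P" not in bank:
            if {"F", "C"} <= bank or {"C", "G"} <= bank:
                return False
    return True

def successors(state):
    """All legal states reachable by one crossing."""
    here = state if "P" in state else ALL - state   # bank the person is on
    results = []
    for item in [None] + sorted(here - {"P"}):      # cross alone, or take one item
        moved = {"P"} | ({item} if item else set())
        new_here = here - moved
        new_state = new_here if "P" in state else ALL - new_here
        if is_legal(new_state):
            results.append(frozenset(new_state))
    return results

print(successors(ALL))   # from the initial state, only taking the chicken is legal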

Search in a Search Tree
- Start from the initial state (or, for goal-driven search, the goal state)
- Expand a state by applying the available actions to it, generating ALL of its successor states; these successors form the next level down of the search tree
- The order in which we choose states for expansion is determined by the search strategy; different strategies result in different behavior
- KEY: we want to find a solution while materializing in memory as few of the nodes in the search space as possible
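The following generic skeleton, a sketch that is not part of the original notes and that assumes the illustrative SearchProblem interface from the earlier sketch, makes the point concrete: the only thing a strategy controls is which frontier node is removed next.

# Generic tree search: the strategy is determined entirely by which frontier
# node is popped next. Note: as pure tree search (no closed list), this can
# loop forever on graphs with cycles.
def tree_search(problem, pop_index):
    """pop_index=0 gives FIFO (breadth-first); pop_index=-1 gives LIFO (depth-first)."""
    frontier = [(problem.initial_state, [])]          # (state, path of actions so far)
    while frontier:
        state, path = frontier.pop(pop_index)         # the strategy: which node next?
        if problem.is_goal(state):
            return path                               # solution: sequence of actions
        for action in problem.actions(state):
            child = problem.result(state, action)
            frontier.append((child, path + [action])) # expand ALL successors
    return None                                       # no solution found

# Usage (with the SearchProblem sketch): tree_search(problem, pop_index=0) behaves
# breadth-first; tree_search(problem, pop_index=-1) behaves depth-first.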

More Examples
The 8-squares puzzle:
- Initial state (example board):
    7 2 4
    5 _ 6
    8 3 1
- Goal state:
    1 2 3
    4 5 6
    7 8 _
- Actions: slide the blank up, down, left, right
Maze:
- Initial state: entrance
- Goal state: exit
- Actions: move north, south, east, west

A quick review on graphs
Graph: G = (V, E)
- V: a set of vertices (nodes)
- E: a set of edges
- Directed graph; path, loop
- Connected graph and disconnected graph
- Complete graph
Rooted tree: root, leaf, parent, child, siblings, ancestors, descendants, branching factor

How to represent a graph? (example graph with vertices A-E not reproduced)
- Adjacency list: intuitive; uses less memory for a sparse graph
- Adjacency matrix: faster edge lookup, O(1) vs. O(n) for the adjacency list
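A small sketch of the two representations for the same graph; the five vertices A-E and the particular edges below are assumptions standing in for the omitted figure:

# Two representations of the same (assumed) undirected graph on vertices A-E.
vertices = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]  # illustrative

# Adjacency list: dict from vertex to its neighbors.
# Memory is proportional to |V| + |E|, good for sparse graphs.
adj_list = {v: [] for v in vertices}
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

# Adjacency matrix: |V| x |V| grid of 0/1.
# Edge lookup is O(1), but memory is always O(|V|^2).
index = {v: i for i, v in enumerate(vertices)}
adj_matrix = [[0] * len(vertices) for _ in vertices]
for u, v in edges:
    adj_matrix[index[u]][index[v]] = 1
    adj_matrix[index[v]][index[u]] = 1

print(adj_list["D"])                         # neighbors of D via the list
print(adj_matrix[index["A"]][index["B"]])    # 1 if edge A-B exists: O(1) check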

Graph Problem Examples
- Königsberg bridges problem: is there a walk that crosses each bridge exactly once?
- Traveling salesperson: is there a route that visits each city exactly once?
- Map coloring: can a map be painted so that no two adjacent countries have the same color?

Data-Driven or Goal-Driven Search
Data-driven search:
- Start from an initial state and move forward until a goal is reached
- Top-down approach, a.k.a. forward chaining
- Used when the initial data is available and the goal is not clear
Goal-driven search:
- Start at the goal and work back toward a start state
- Bottom-up approach, a.k.a. backward chaining
- Used when the goal is clear: exiting a maze, medical diagnosis

Example: The Towers of Hanoi
- Initial state: (123)()()  (all three disks on peg 1)
- Actions:
  Op1: move top disk from peg 1 to peg 2
  Op2: move top disk from peg 1 to peg 3
  Op3: move top disk from peg 2 to peg 1
  Op4: move top disk from peg 2 to peg 3
  Op5: move top disk from peg 3 to peg 1
  Op6: move top disk from peg 3 to peg 2
- Goal test: ()()(123)  (all three disks on peg 3)
- Path cost: 1 per step
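A sketch of the state representation and the six operators; the tuple-of-tuples encoding below is an assumption chosen to match the (123)()() notation above, not something specified in the notes:

# Towers of Hanoi state as a tuple of three pegs, each a tuple of disks
# ordered top-to-bottom, e.g. ((1, 2, 3), (), ()) for the initial state.
INITIAL = ((1, 2, 3), (), ())
GOAL    = ((), (), (1, 2, 3))

def moves(state):
    """Yield (description, next_state) for every legal single-disk move."""
    for src in range(3):
        for dst in range(3):
            if src == dst or not state[src]:
                continue
            disk = state[src][0]                     # top disk on the source peg
            # Legal only onto an empty peg or a larger disk.
            if state[dst] and state[dst][0] < disk:
                continue
            pegs = list(state)
            pegs[src] = state[src][1:]
            pegs[dst] = (disk,) + state[dst]
            yield (f"move disk {disk} from peg {src + 1} to peg {dst + 1}", tuple(pegs))

for description, next_state in moves(INITIAL):
    print(description, "->", next_state)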

Example: The Towers of Hanoi, first five levels of the search tree (figure not reproduced)

Goal Tree
A.K.A. and-or tree
- Goal: the solution
- Subgoal: each step along the way
- And-node: a goal that can be achieved only by solving all of its subgoals
- Or-node: a goal that can be achieved by solving any one of its subgoals
- Leaf nodes are either success nodes or failure nodes

Example: The Towers of Hanoi with 4 disks (figures not reproduced)

Properties of Search Methods
- Complexity: time and space
- Completeness: is it guaranteed to find a goal state if one exists?
- Optimality (often used to mean admissibility): is it guaranteed to find a solution in the quickest time?
- Admissibility: is it guaranteed to find the best solution?
- Irrevocability: no backtracking (a method is called tentative if there is backtracking)

P, NP and NP-hard
Real-world search problems are commonly discussed in terms of two complexity classes: P and NP.
- Class P is the set of problems for which polynomial-time algorithms are known.
- Class NP is the set of decision problems whose solutions can be verified in polynomial time; for many of them, only exponential-time algorithms are known.
- A problem is NP-hard if it is at least as hard as every problem in NP (every problem in NP can be reduced to it in polynomial time).
- A decision problem that is both in NP and NP-hard is NP-complete.

Uninformed Search
Brute-force search (exhaustive search, blind search, generate and test):
- Examines every node until it finds a goal
- The simplest form of search
- Assumes no knowledge other than how to traverse the search tree and how to detect a leaf node and a goal node
How many possible states do we have?
- TSP of n cities
- Sliding-block puzzle (8-squares)
- Rubik's cube (3 by 3)
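For a sense of scale, the sizes of these state spaces can be computed directly. The closed-form counts used below (for example, (n-1)!/2 tours for a symmetric TSP and 9!/2 reachable 8-puzzle boards) are standard results, not something derived in these notes:

import math

# Symmetric TSP: fixing the start city and identifying a tour with its
# reverse leaves (n-1)!/2 distinct tours.
def tsp_tours(n):
    return math.factorial(n - 1) // 2

# 8-puzzle: 9! arrangements of 8 tiles + blank, but only half are reachable.
eight_puzzle_states = math.factorial(9) // 2

# 3x3 Rubik's cube: 8 corners (3 orientations) and 12 edges (2 orientations),
# with orientation and parity constraints removing a factor of 12.
rubiks_states = (math.factorial(8) * 3**8 * math.factorial(12) * 2**12) // 12

print(tsp_tours(10))        # 181440 tours for just 10 cities
print(eight_puzzle_states)  # 181440
print(rubiks_states)        # 43252003274489856000, about 4.3e19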

Breadth-First Search
Check all siblings before children. Use a queue.

;; breadth_first_search: StartState -> SUCCESS | FAILURE
Open <- [Start]                      // states to be considered
Closed <- []                         // states that have been considered
while Open != []
    Next <- first(Open)
    Open <- rest(Open)               // remove first item from Open
    if isGoal(Next), return SUCCESS
    let Kids = children(Next) - (Open union Closed)
    Closed <- Closed union [Next]
    Open <- append(Open, Kids)       // Kids go to the BACK of Open: queue behavior
end-while
return FAILURE
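A runnable version of the same idea in Python is sketched below. It assumes a graph given as an adjacency dict (which is not how the pseudocode above stores its states) and returns the path found rather than just SUCCESS or FAILURE:

from collections import deque

def breadth_first_search(graph, start, is_goal):
    """BFS over an adjacency-dict graph; returns a path to a goal or None.

    `graph` maps each state to an iterable of successor states (an assumed
    encoding; the pseudocode above uses an abstract children() function).
    """
    open_queue = deque([[start]])         # frontier of paths; FIFO -> breadth-first
    closed = {start}                      # states already seen
    while open_queue:
        path = open_queue.popleft()       # shallowest unexplored path
        state = path[-1]
        if is_goal(state):
            return path
        for child in graph.get(state, []):
            if child not in closed:
                closed.add(child)
                open_queue.append(path + [child])   # children go to the back
    return None

# Example usage on a tiny graph:
g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(breadth_first_search(g, "A", lambda s: s == "E"))   # ['A', 'B', 'D', 'E']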

Depth-First Search
Check all descendants before siblings. Use a stack.

;; depth_first_search: StartState -> SUCCESS | FAILURE
Open <- [Start]                      // states to be considered
Closed <- []                         // states that have been considered
while Open != []
    Next <- first(Open)
    Open <- rest(Open)               // remove first item from Open
    if isGoal(Next), return SUCCESS
    let Kids = children(Next) - (Open union Closed)
    Closed <- Closed union [Next]
    Open <- append(Kids, Open)       // Kids go to the FRONT of Open: stack behavior
end-while
return FAILURE
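The Python sketch for DFS differs from the BFS version above only in how the frontier is used: it is treated as a stack (last in, first out) instead of a queue, again assuming an adjacency-dict graph:

def depth_first_search(graph, start, is_goal):
    """DFS over an adjacency-dict graph; returns a path to a goal or None."""
    open_stack = [[start]]                # frontier of paths; LIFO -> depth-first
    closed = {start}
    while open_stack:
        path = open_stack.pop()           # deepest unexplored path
        state = path[-1]
        if is_goal(state):
            return path
        for child in graph.get(state, []):
            if child not in closed:
                closed.add(child)
                open_stack.append(path + [child])   # pushed on top of the stack
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(depth_first_search(g, "A", lambda s: s == "E"))   # e.g. ['A', 'C', 'D', 'E']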

Example: TSP
(Map figure not reproduced: five Wisconsin cities - Eau Claire, Appleton, Dodgeville, Brookfield, Cuba City - connected by roads with distances including 40, 100, 120, 150, 180, 190, 200, 240.)
Can we solve TSP using BFS and DFS? How do we order siblings?
- Initial state: ?
- Actions: travel from one city to another
- Transition model: the roads
- Goal test: visit each city exactly once and return to the starting city
- Path cost: traveling distance

Breadth-first vs. Depth-first

                 BFS     DFS
Complexity
Completeness
Optimality
Admissibility
Irrevocability
(table filled in on the next slide)

b is the branching factor of the tree
d is the depth of the shallowest goal state
m is the maximum depth of the search tree (the longest path explored)

Breadth-first vs. Depth-first

                 BFS                  DFS
Complexity       Time: O(b^d)         Time: O(b^m)
                 Space: O(b^d)        Space: O(bm)
Completeness     Yes                  No
Optimality       No                   No
Admissibility    Yes (if no weight)   No
Irrevocability   Yes                  Yes

b is the branching factor of the tree
d is the depth of the shallowest goal state
m is the maximum depth of the search tree (the longest path explored)

Breadth-first vs. Depth-first: when to use which?
- Some paths are extremely long
- All paths are of similar length
- All paths are of similar length and all lead to a goal state
- High branching factor: a state may lead to many different states
- An Internet search engine?

Variations of BFS and DFS
Uniform-cost search:
- Instead of expanding the shallowest node, expand the node with the smallest path cost (e.g. Dijkstra's algorithm)
- If all step costs are equal, it behaves the same as BFS
- The first goal found is an optimal solution
Depth-limited search:
- DFS with a depth limit x
- Avoids getting stuck in infinitely deep paths or loops
- Will find a solution if it lies within the depth limit
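A sketch of uniform-cost search with a priority queue; the weighted-graph encoding as a dict of {state: {neighbor: step_cost}} is my assumption, not the notes':

import heapq

def uniform_cost_search(graph, start, is_goal):
    """Expand the cheapest frontier node first; returns (cost, path) or None.

    `graph` maps a state to {successor: step_cost} (an assumed encoding).
    With all step costs equal this behaves like breadth-first search.
    """
    frontier = [(0, [start])]                 # priority queue ordered by path cost
    best_cost = {start: 0}
    while frontier:
        cost, path = heapq.heappop(frontier)  # node with the smallest path cost
        state = path[-1]
        if is_goal(state):
            return cost, path                 # first goal popped is optimal
        for child, step in graph.get(state, {}).items():
            new_cost = cost + step
            if new_cost < best_cost.get(child, float("inf")):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, path + [child]))
    return None

g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(uniform_cost_search(g, "A", lambda s: s == "D"))   # (3, ['A', 'B', 'C', 'D'])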

Review: Dijkstra's algorithm
Dijkstra's algorithm: single-source shortest path algorithm on G = (V, E)
- S = the set of vertices whose shortest paths from the source have been determined
- d[i] = best estimate of the shortest-path distance to vertex i
- p[i] = predecessor of vertex i on that path
Algorithm:
1. Initialize d[i] and p[i]; set S to empty
2. While there are still vertices in V-S:
   a. Sort the vertices in V-S according to the current best estimate of their distance from the source
   b. Add u, the closest vertex in V-S, to S
   c. Update (relax) every vertex still in V-S that is connected to u to a better estimate if possible
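A compact Python sketch of the same steps, using a heap instead of re-sorting V-S each round; the adjacency-dict input format is assumed, and the small example graph is illustrative rather than the one in the omitted figures:

import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; returns (d, p) dictionaries.

    `graph` maps each vertex to {neighbor: edge_weight} (assumed encoding).
    """
    d = {source: 0}                      # best known distance from the source
    p = {source: None}                   # predecessor on the shortest path
    done = set()                         # the set S of finished vertices
    heap = [(0, source)]
    while heap:
        dist, u = heapq.heappop(heap)    # closest unfinished vertex
        if u in done:
            continue
        done.add(u)
        for v, w in graph.get(u, {}).items():
            if v not in done and dist + w < d.get(v, float("inf")):
                d[v] = dist + w          # relax the edge (u, v)
                p[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, p

g = {0: {1: 7, 2: 10}, 1: {3: 6, 2: 10}, 2: {}, 3: {2: 1, 4: 8}, 4: {2: 2}}
print(dijkstra(g, 0))   # distances from vertex 0 and each vertex's predecessor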

Dijkstra's algorithm example
(Worked example on a small weighted graph, not reproduced here: starting from the source with distance 0, the slides step through the algorithm as the distance estimates are updated, e.g. 0+7=7, 0+10=10, 7+6=13, 7+6+1=14, 7+6+1+2=16.)

Depth-First Iterative Deepening
DFID, a.k.a. Iterative Deepening Search (IDS)
- An exhaustive search technique that combines depth-first with breadth-first search: repeatedly carry out depth-limited search on the tree, starting with a depth limit of 1, then depth 2, 3, and so on, until a goal node is found
- Combines the benefit of BFS (it always finds a shortest-step path) with the memory efficiency of DFS
- Avoids the problem of DFS getting trapped in an infinitely deep path
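A sketch of iterative deepening built on a recursive depth-limited DFS, again assuming the adjacency-dict graph encoding used in the earlier sketches:

def depth_limited_search(graph, path, is_goal, limit):
    """Recursive DFS that refuses to go deeper than `limit` edges."""
    state = path[-1]
    if is_goal(state):
        return path
    if limit == 0:
        return None
    for child in graph.get(state, []):
        if child not in path:                       # avoid cycles along this path
            found = depth_limited_search(graph, path + [child], is_goal, limit - 1)
            if found:
                return found
    return None

def iterative_deepening_search(graph, start, is_goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, [start], is_goal, limit)
        if result:
            return result                           # shortest-step path, like BFS
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(iterative_deepening_search(g, "A", lambda s: s == "E"))   # ['A', 'B', 'D', 'E']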

Is DFID too time consuming?
It is almost as efficient as DFS and BFS, because for most trees the majority of nodes are in the deepest level, and all three approaches spend most of their time examining those nodes.
For a tree of depth d with branching factor b, the total number of nodes is
    1 + b + b^2 + b^3 + ... + b^d = (1 - b^(d+1)) / (1 - b) = O(b^d)
The total number of nodes DFID examines (the root is generated d+1 times, the depth-1 nodes d times, and so on) is
    (d+1) + d*b + (d-1)*b^2 + (d-2)*b^3 + ... + b^d = O(b^d)
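A quick numeric check of the two sums; b = 10 and d = 5 are arbitrary illustrative values, not figures from the notes:

# Compare the node counts for a single bounded DFS/BFS pass versus DFID.
b, d = 10, 5                        # illustrative branching factor and depth

single_pass = sum(b**i for i in range(d + 1))              # 1 + b + ... + b^d
dfid_total = sum((d + 1 - i) * b**i for i in range(d + 1)) # (d+1) + d*b + ... + b^d

print(single_pass)                  # 111111
print(dfid_total)                   # 123456
print(dfid_total / single_pass)     # about 1.11: the repeated shallow work is cheap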

Properties of DFID
- Complete? Yes
- Optimal? Yes (if no weight)
- Time complexity: O(b^d)
- Space complexity: O(bd)
- Backtracking? Yes