Informed (heuristic) search (cont.). Constraint-satisfaction search.


CS 1571 Introduction to AI, Lecture 5: Informed (heuristic) search (cont.). Constraint-satisfaction search.
Milos Hauskrecht, milos@cs.pitt.edu, 539 Sennott Square

Administration: PS-1 is due today (report before the class begins; programs submitted through ftp). PS-2 is out on the course web page and is due next week, on Tuesday, September 17 (report and programs).

Evaluation-function driven search. A search strategy can be defined in terms of a node evaluation function, denoted f(n), which measures the desirability of expanding node n next. Evaluation-function driven search expands the node (state) with the best evaluation-function value. Implementation: successors of the expanded node are inserted into a priority queue ordered by their evaluation-function value, so that the node with the best (smallest) value is expanded first.

Uniform-cost search. Uniform-cost search (Dijkstra's shortest path) is a special case of evaluation-function driven search with f(n) = g(n), where g(n) is the path cost from the initial state to n. Uniform-cost search can handle the general minimum-cost path-search problem, in which weights or costs are associated with operators (links). Note: uniform-cost search relies on the problem definition only; it is an uninformed search method.
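A minimal sketch (in Python) of evaluation-function driven search with a priority queue; the graph, node names, and the best_first_search helper are illustrative assumptions, not part of the lecture. Uniform-cost search is obtained by plugging in f(n) = g(n):

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic evaluation-function driven search.
    successors(state) yields (step_cost, next_state) pairs;
    f(state, g) returns the evaluation value given the path cost g so far."""
    frontier = [(f(start, 0), 0, start, [start])]      # (f-value, g, state, path)
    best_g = {start: 0}
    while frontier:
        fval, g, state, path = heapq.heappop(frontier)  # best value expanded first
        if goal_test(state):
            return path, g
        for cost, nxt in successors(state):
            g2 = g + cost
            if nxt not in best_g or g2 < best_g[nxt]:
                best_g[nxt] = g2
                heapq.heappush(frontier, (f(nxt, g2), g2, nxt, path + [nxt]))
    return None, float('inf')

# Uniform-cost search: f(n) = g(n); the graph below is a made-up example.
graph = {'A': [(1, 'B'), (4, 'C')], 'B': [(2, 'C'), (5, 'D')],
         'C': [(1, 'D')], 'D': []}
path, cost = best_first_search('A', lambda s: s == 'D',
                               lambda s: graph[s], lambda s, g: g)
print(path, cost)   # ['A', 'B', 'C', 'D'] 4
```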

Best-first search. Best-first search incorporates a heuristic function h(n) into the evaluation function f(n). The heuristic function measures the potential of a state (node) to reach a goal. Special cases (they differ in the design of the evaluation function):
Greedy search: f(n) = h(n)
A* algorithm: f(n) = g(n) + h(n)
plus the iterative-deepening version of A*, IDA*.
Example: the traveler problem with straight-line distance information. Straight-line distances give an estimate of the cost of the path between two cities.

Greedy search method. The evaluation function is equal to the heuristic function, f(n) = h(n). Idea: the node that seems to be closest to the goal is expanded first.

Greedy search example (the traveler problem, with f(n) = h(n) the straight-line distance to Bucharest). Initially the queue contains only the start node, Arad, with h = 366.

Expanding Arad puts its successors into the queue, ordered by h: Sibiu 253, Timisoara 329, Zerind 374. Sibiu, which appears closest to the goal, is expanded next, and the queue becomes: Fagaras 178, Rimnicu Vilcea 193, Timisoara 329, Arad 366, Zerind 374, Oradea 380.

Expanding Fagaras adds Bucharest 0 and Sibiu 253 to the queue. Bucharest now has the best value and is selected next; since it is the goal, greedy search stops with the path Arad - Sibiu - Fagaras - Bucharest.

Properties of greedy search. Completeness? Optimality? Time complexity? Memory (space) complexity?

Properties of greedy search.
Completeness: No. We can loop forever; nodes that seem to be the best choices can lead to cycles. Elimination of state repeats can solve the problem.
Optimality: No. Even if we reach the goal, we may be biased by a bad heuristic estimate; the evaluation function disregards the cost of the path built so far.
Time complexity: O(b^m) in the worst case (b the branching factor, m the maximum depth), but often much better.
Memory (space) complexity: O(b^m), often better.

A* search. The problem with greedy search is that it can keep expanding paths that are already very expensive. The problem with uniform-cost search is that it uses only past exploration information (path cost); no additional information is utilized. A* search combines the two:
f(n) = g(n) + h(n)
where g(n) is the cost of reaching the state, h(n) is an estimate of the cost from the current state to a goal, and f(n) is an estimate of the cost of the complete path through n. Additional A* condition: an admissible heuristic, h(n) ≤ h*(n) for all n.
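A minimal A* sketch in the same spirit, assuming a made-up weighted graph and heuristic table (a_star, graph, and h_table are illustrative names, not the lecture's Romania data):

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n).
    successors(state) yields (step_cost, next_state); h(state) is an
    admissible estimate of the remaining cost, i.e. h(n) <= h*(n)."""
    frontier = [(h(start), 0, start, [start])]       # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for cost, nxt in successors(state):
            g2 = g + cost
            if nxt not in best_g or g2 < best_g[nxt]:
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# Hypothetical graph and admissible, straight-line-style heuristic.
graph = {'S': [(2, 'A'), (3, 'B')], 'A': [(4, 'G')], 'B': [(1, 'A'), (6, 'G')], 'G': []}
h_table = {'S': 5, 'A': 3, 'B': 4, 'G': 0}
print(a_star('S', 'G', lambda s: graph[s], lambda s: h_table[s]))  # (['S', 'A', 'G'], 6)
```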

A* search example (the same traveler problem, now with f(n) = g(n) + h(n)). Starting from Arad (f = 366), A* expands Arad, then Sibiu (f = 393), then Rimnicu Vilcea, and then Pitesti, at which point Bucharest enters the queue with the smallest f-value and is selected as the goal. Unlike greedy search, A* returns the optimal route Arad - Sibiu - Rimnicu Vilcea - Pitesti - Bucharest: the detour through Rimnicu Vilcea and Pitesti looks worse in the heuristic estimate but is cheaper in actual path cost than the road through Fagaras.

Properties of A* search. Completeness? Optimality? Time complexity? Memory (space) complexity?

Properties of A* search.
Completeness: Yes.
Optimality: Yes (with an admissible heuristic).
Time complexity: roughly of the order of the number of nodes with f(n) smaller than the cost g* of the optimal path.
Memory (space) complexity: same as the time complexity (all nodes are kept in memory).

Optimality of A*. In general, a heuristic function h(n) can overestimate, equal, or underestimate the true distance h*(n) of a node to the goal. Is A* optimal for an arbitrary heuristic function? No! Admissible heuristic condition: never overestimate the distance to the goal, i.e. h(n) ≤ h*(n) for all n. Example: the straight-line distance in the travel problem never overestimates the actual road distance. Claim: A* search (with an admissible heuristic) is optimal.

Optimality of A* (proof). Let G1 be the optimal goal (reached with the minimum path cost). Assume we also have a sub-optimal goal G. Let n be a node that lies on the optimal path and is in the queue together with G. Then:
f(G) = g(G), since h(G) = 0
     > g(G1), since G is suboptimal
     ≥ g(n) + h(n) = f(n), since h is admissible and n lies on the optimal path to G1.
Thus f(G) > f(n), and A* never selects G before n.

Admissible heuristics. Heuristics are often designed from a relaxed version of the problem. Example: the 8-puzzle problem (an initial position and a goal position of the tiles). Admissible heuristics:
1. the number of misplaced tiles;
2. the sum of the distances of all tiles from their goal positions (Manhattan distance).
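A small sketch of the two admissible 8-puzzle heuristics, assuming states are 9-tuples read row by row with 0 for the blank (a representation chosen here purely for illustration):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # goal position, 0 = blank

def misplaced_tiles(state, goal=GOAL):
    """Heuristic 1: number of tiles not on their goal position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal=GOAL):
    """Heuristic 2: sum of horizontal + vertical distances of each tile
    from its goal position; never smaller than the misplaced-tiles count."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = i // 3, i % 3
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

example = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 shifted one place
print(misplaced_tiles(example))      # 2
print(manhattan_distance(example))   # 2
```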

Admissible heuristics. We can have multiple admissible heuristics for the same problem.
Dominance: heuristic function h1 dominates h2 if h1(n) ≥ h2(n) for all n.
Combination: two or more admissible heuristics can be combined to give a new admissible heuristic. Assume two admissible heuristics h1 and h2; then h3(n) = max(h1(n), h2(n)) is admissible.

IDA*: the iterative-deepening version of A*. It progressively increases an evaluation-function limit (instead of a depth limit) and performs a limited-cost depth-first search for the current limit, expanding nodes in the depth-first manner as long as their evaluation-function value stays within the limit. Problem: by how much should the evaluation limit be increased in each cycle? Solutions: peek over the previous step's boundary to guarantee that in the next cycle more nodes are expanded, or increase the limit by a fixed cost increment, say u.

IDA*, Solution 1: peek over the previous step's boundary, so that in the next cycle more nodes are guaranteed to be expanded. Properties: the choice of the new cost limit influences how many nodes are expanded in each iteration, and we may find a sub-optimal solution first. Fix: complete the search up to the current limit and keep the best solution found within it.
Solution 2: increase the limit by a fixed cost increment u. Properties: too many or too few nodes may be expanded (there is no control over the number of nodes), and the solution found is guaranteed only to be within u of the optimal cost.
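A hedged IDA* sketch using the "raise the limit to the smallest f-value that exceeded the previous limit" policy described above; the graph, heuristic table, and function names are hypothetical:

```python
import math

def ida_star(start, goal, successors, h):
    """Iterative-deepening A*: depth-first search bounded by an f-limit,
    where f(n) = g(n) + h(n); the limit grows to the smallest f-value
    that exceeded it in the previous iteration."""
    def dfs(path, g, limit):
        node = path[-1]
        f = g + h(node)
        if f > limit:
            return None, f                    # report the overflowing f-value
        if node == goal:
            return path, f
        next_limit = math.inf
        for cost, nxt in successors(node):
            if nxt in path:                   # avoid cycles along the current path
                continue
            result, t = dfs(path + [nxt], g + cost, limit)
            if result is not None:
                return result, t
            next_limit = min(next_limit, t)
        return None, next_limit

    limit = h(start)
    while True:
        result, t = dfs([start], 0, limit)
        if result is not None:
            return result
        if t == math.inf:                     # no solution exists
            return None
        limit = t                             # peek: smallest f beyond the old limit

# Hypothetical graph and admissible heuristic.
graph = {'S': [(2, 'A'), (3, 'B')], 'A': [(4, 'G')], 'B': [(6, 'G')], 'G': []}
h_table = {'S': 5, 'A': 3, 'B': 4, 'G': 0}
print(ida_star('S', 'G', lambda s: graph[s], lambda s: h_table[s]))   # ['S', 'A', 'G']
```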

Constraint satisfaction search

Search problem. A search problem consists of: a search space (or state space), the set of objects among which we conduct the search; an initial state, the object we start searching from; operators (actions), which transform one state in the search space into another; and a goal condition, which describes the object we are searching for. A possible metric on the search space measures the quality of an object with regard to the goal. Search problems occur in planning, optimization, and learning.

Constraint satisfaction problem (CSP). Two types of search: path search (find a path from the initial state to a state satisfying the goal condition) and configuration search (find a configuration satisfying the goal conditions). A constraint satisfaction problem (CSP) is a configuration search problem in which a state is defined by a set of variables and the goal condition is represented by a set of constraints on the possible variable values. Special properties of CSPs allow more specific procedures to be designed and applied for solving them.

Example of a CSP: N-queens. Goal: N queens placed in non-attacking positions on the board (here N = 4). Variables represent the queens, one for each column: Q1, Q2, Q3, Q4. Values: the row placement of each queen on the board, {1, 2, 3, 4}. Constraints: Qi ≠ Qj (two queens are not in the same row) and |Qi − Qj| ≠ |i − j| (two queens are not on the same diagonal).

Satisfiability (SAT) problem. Determine whether a sentence in conjunctive normal form (CNF) is satisfiable (can evaluate to true); used in propositional logic (covered later). A CNF sentence is a conjunction of clauses, each a disjunction of literals, e.g. (P ∨ ¬Q ∨ R) ∧ (¬P ∨ R ∨ S) ∧ (P ∨ Q ∨ T) ∧ ... Variables: the propositional symbols (P, Q, R, S, T). Values: True, False. Constraints: every conjunct (clause) must evaluate to true, i.e. at least one of its literals must evaluate to true, for example (P ∨ ¬Q ∨ R) = True, (¬P ∨ R ∨ S) = True, ...
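A small sketch of the N-queens constraints as a consistency test on a (partial) assignment; the function name and the dictionary representation (column index mapped to row) are illustrative:

```python
def queens_consistent(assignment):
    """assignment maps column i -> row Q_i; returns True iff no pair of assigned
    queens shares a row (Qi != Qj) or a diagonal (|Qi - Qj| != |i - j|)."""
    cols = sorted(assignment)
    for a in range(len(cols)):
        for b in range(a + 1, len(cols)):
            i, j = cols[a], cols[b]
            if assignment[i] == assignment[j]:
                return False                      # same row
            if abs(assignment[i] - assignment[j]) == abs(i - j):
                return False                      # same diagonal
    return True

print(queens_consistent({1: 2, 2: 4, 3: 1, 4: 3}))   # True: a solution to 4-queens
print(queens_consistent({1: 1, 2: 2}))               # False: same diagonal
```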

Other real-world CSP problems. Scheduling problems: e.g. telescope scheduling, high-school class schedules. Design problems: hardware configurations, VLSI design. More complex problems may involve real-valued variables, additional preferences on variable assignments, or the search for an optimal configuration.

Map coloring. Color a map using k different colors such that no adjacent countries have the same color. Variables represent the countries: A, B, C, D, E. Values: the k different colors, {Red, Blue, Green, ...}. Constraints: A ≠ B, A ≠ C, C ≠ E, etc. This is an example of a problem with binary constraints.
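A sketch of the map-coloring CSP as data, using only the constraint pairs named above (a real map would add further pairs for the remaining borders):

```python
# Map-coloring as a CSP: variables, value domains, and binary "not equal" constraints.
variables = ['A', 'B', 'C', 'D', 'E']
domains = {v: ['Red', 'Blue', 'Green'] for v in variables}
constraints = [('A', 'B'), ('A', 'C'), ('C', 'E')]   # each pair (X, Y) means X != Y

def violated(assignment):
    """True iff some constraint relates two assigned countries of equal color."""
    return any(x in assignment and y in assignment and assignment[x] == assignment[y]
               for x, y in constraints)

print(violated({'A': 'Red', 'B': 'Blue'}))   # False
print(violated({'A': 'Red', 'C': 'Red'}))    # True: A and C must differ
```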

Constraint satisfaction as a search problem. Formulation of a CSP as a search problem:
States: an assignment (partial or complete) of values to variables.
Initial state: no variable is assigned a value.
Operators: assign a value to one of the unassigned variables.
Goal condition: all variables are assigned and no constraints are violated.
Constraints can be represented explicitly, by a set of allowable values, or implicitly, by a function that tests for the satisfaction of the constraints.

Solving a CSP as a standard search (illustrated in the slides on the 4-queens problem): at the root all of Q1, Q2, Q3, Q4 are unassigned; each level of the search tree assigns a value to one more variable, e.g. first Q1, then Q2, and so on.

Solving a CSP through standard search. Maximum depth of the tree (m)? Depth of the solution (d)? Branching factor (b)?
Maximum depth of the tree: the number of variables of the CSP.
Depth of the solution: the number of variables of the CSP.
Branching factor: if we fix the order of variable assignments, the branching factor depends on the number of values of the variable being assigned.

Solving a CSP through standard search. What search algorithm should we use? Depth-first search: since the depth of the tree equals the depth of the solution (the number of variables), we know how deep the solution lies, and we do not have to keep a large number of nodes in the queue.
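A minimal sketch of this depth-first search over partial assignments (i.e. backtracking, introduced on the next slide), shown on a small map-coloring instance whose adjacency is hypothetical:

```python
def backtracking_search(variables, domains, consistent, assignment=None):
    """Depth-first search for a CSP: assign one unassigned variable per level,
    check the constraints on the partial assignment, backtrack on violation."""
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return assignment                                  # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):                         # check before going deeper
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]                                # undo and try the next value
    return None                                            # no value works: backtrack

# Hypothetical map: each listed pair of countries must receive different colors.
borders = [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D')]
variables = ['A', 'B', 'C', 'D']
domains = {v: ['Red', 'Blue', 'Green'] for v in variables}
ok = lambda a: all(x not in a or y not in a or a[x] != a[y] for x, y in borders)
print(backtracking_search(variables, domains, ok))
# {'A': 'Red', 'B': 'Blue', 'C': 'Green', 'D': 'Red'}
```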

Backtracking. Depth-first search for a CSP is also referred to as backtracking. The violation of constraints needs to be checked for each node, either during its generation or before its expansion. An important problem: the current variable assignments, in concert with the constraints, restrict the remaining legal values of the unassigned variables; the remaining legal and illegal values may be inferred (the effect of the constraints propagates). It is necessary to keep track of the remaining legal values, so that we know when the constraints are violated and when to terminate the search.

Constraint propagation. A state (more broadly) is defined by a set of variables and their legal and illegal assignments. Legal and illegal assignments can be represented through variable equations and variable disequations. Example (map coloring): an equation A = Red, a disequation C ≠ Red. Constraints plus assignments can entail new equations and disequations, e.g. A = Red entails B ≠ Red for a neighbour B.

Constraint propagation example (map coloring with countries A-F and colors Red, Blue, Green; the slides track, in a table of variables versus colors, which assignments are entailed as equations and which are excluded as disequations). Assigning A = Red excludes Red for A's neighbours; assigning E = Blue excludes Blue for E's neighbours; assigning F = Green then produces a conflict: no legal color assignment remains for B and C.

Constraint propagation: we can derive the remaining legal values through propagation. After A = Red and E = Blue, propagation entails B = Green and C = Green, and in turn F = Red, so the conflicting choice F = Green is ruled out before it is ever tried.

Constraint propagation. Three well-known techniques for propagating the effects of past assignments and constraints: value propagation, arc consistency, and forward checking. They differ in the completeness of their inferences and in the time complexity of those inferences.
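As an illustration of the simplest of these techniques, a hedged sketch of forward checking on the map-coloring example: after each assignment the chosen value is pruned from the domains of the neighbouring countries, and an empty domain signals a dead end. The adjacency below is hypothetical:

```python
# Hypothetical adjacency for the A-F map-coloring example (symmetric).
neighbors = {'A': ['B', 'C'],            'B': ['A', 'C', 'E', 'F'],
             'C': ['A', 'B', 'E', 'F'],  'D': ['E', 'F'],
             'E': ['B', 'C', 'D', 'F'],  'F': ['B', 'C', 'D', 'E']}
colors = ['Red', 'Blue', 'Green']

def forward_check(domains, var, value):
    """Assign var = value, then prune that value from the domains of var's
    neighbours; return the pruned domains, or None if a domain becomes empty."""
    new_domains = {v: list(d) for v, d in domains.items()}
    new_domains[var] = [value]
    for other in neighbors[var]:
        new_domains[other] = [c for c in new_domains[other] if c != value]
        if not new_domains[other]:
            return None                      # a neighbour has no legal color left
    return new_domains

domains = {v: list(colors) for v in neighbors}
domains = forward_check(domains, 'A', 'Red')    # B and C lose Red
domains = forward_check(domains, 'E', 'Blue')   # B, C, D, F lose Blue
print(forward_check(domains, 'F', 'Green'))     # None: B (and C) would have no color
```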