
Artificial Intelligence
Academic year 2016/2017
Giorgio Fumera
Pattern Recognition and Applications Lab, Department of Electrical and Electronic Engineering, University of Cagliari
http://pralab.diee.unica.it
fumera@diee.unica.it

Outline
Part I: Historical notes
Part II: Solving Problems by Searching (Uninformed Search, Informed Search)
Part III: Knowledge-based Systems (Logical Languages, Expert Systems)
Part IV: The Lisp Language
Part V: Machine Learning (Decision Trees, Neural Networks)

Part II Solving Problems by Searching

Some motivating problems Consider the following problems, and assume that your goal is to design a rational agent (a computer program, a robot, etc.) capable of autonomously solving them. Remember: a rational agent is a system that acts rationally, according to a well-defined objective.

Some motivating problems Missionaries and cannibals A classic AI toy-problem: three missionaries and three cannibals must cross a river, on a boat that can only hold two people, and without leaving more cannibals than missionaries on either side of the river. How can all six get across the river safely?

Some motivating problems Game playing: 15-puzzle Another classic AI toy problem: transform an array of tiles from an initial configuration into a given desired configuration, by a sequence of moves of a tile into an adjacent empty cell (possibly, the shortest sequence). (Figure: a scrambled initial configuration and the desired configuration with tiles 1-15 in order.)

Some motivating problems Game playing: checkers and chess Two historical problems addressed by many researchers since the early days of AI. Chess has been named the Drosophila of AI.

Some motivating problems Robot navigation: a real-world problem addressed in the mid-1960s. Left: Shakey the robot (1968). Right: a navigation problem for Shakey: finding a route from R to G, possibly the shortest one, avoiding obstacles (in black).

Some motivating problems Route finding in maps An example: finding a route from Arad to Bucharest (possibly, the shortest route). (Figure: road map of Romania with inter-city distances.)

Common features of the above problems Although the above problems may seem very different from each other, they share some high-level features that allow one to solve them using a single, general approach. Main feature: a clear goal can be defined, in terms of a set of desired world states. Once the goal is defined, the task is to find a sequence of actions that leads to a goal state. This requires one to suitably define the actions and the states to be considered.

A framework for search problems Problems exhibiting the above characteristics can be formalized as follows:
1. Goal formulation: what are the desired world states?
2. Problem formulation: given the goal, what are the actions to consider? What are the states to consider? The crucial point in this step is to find a proper level of abstraction, by removing every irrelevant detail.
Under the above formulation:
- the solution of a problem is a sequence of actions that leads to a goal state
- the process of looking for a solution is called search

Goal and problem formulation: examples 15-puzzle (initial and desired configurations as above)
- goal: getting to the desired tile configuration (possibly, by the shortest sequence of moves)
- states: each of the 16! possible tile configurations
- actions: moving the n-th tile (n = 1,...,15) to one of the four adjacent cells, if the cell is empty

Goal and problem formulation: examples Route finding in maps: Arad to Bucharest (Romania road map as above)
- goal: getting from a given city to a destination city (possibly, through the shortest route)
- states: being in each possible city
- actions: moving between two adjacent cities

Goal and problem formulation: examples Chess
- goal: to checkmate (this goal is achieved in many possible chessboard configurations)
- states: each possible chessboard configuration
- actions: all legal moves

Properties of search problems
- Static vs dynamic: does the environment change over time? (e.g., 15-puzzle and chess are static; robot navigation is dynamic, if the position of obstacles changes over time)
- Fully vs partially observable: is the current state completely known? (e.g., 15-puzzle and chess are fully observable; robot navigation is partially observable, if sensors are not perfect)
- Discrete vs continuous sets of states and actions (e.g., 15-puzzle and chess are discrete, robot navigation is continuous)
- Deterministic vs non-deterministic: is the outcome (the resulting state) of any sequence of actions certain? (e.g., 15-puzzle is deterministic, chess is not)

Examples of real-world problems Many challenging real-world problems can be formulated as search problems:
- Traveling salesperson problem: finding the shortest tour that visits every city of a given map exactly once (applications to planning, logistics, microchip manufacture, DNA sequencing, etc.)
- Route finding: routing in computer networks, airline travel planning, etc.
- VLSI design: cell layout, channel routing
- ...

Search problems: formal definition (1/2) How can one devise algorithms (and implement them in some programming language) to solve search problems? First, a rigorous problem definition is needed. The goal and problem formulation sketched above can be formally defined in terms of four components:
- the initial state
- the set of possible actions
- the goal test
- the path cost

Search problems: formal definition (2/2)
- A description of the initial state: the state that the agent starts in
- A description of the possible actions, defined as a successor function SF which, given a state s, returns the set of ordered pairs (a′, s′), where a′ is a legal action in state s and s′ is the resulting state. These components implicitly define the state space: it can be represented as a graph where nodes = states and edges = actions. A path is a sequence of states connected by a sequence of actions
- The goal test function: given a state s, it determines whether or not s is a goal state
- A path cost function assigns a numeric cost to each path. In many problems it is defined as the sum of the costs of the individual actions (step costs) along the path
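As a concrete, purely illustrative sketch of this definition, the four components can be bundled into a small Python container; the class and field names below are assumptions made for this sketch, not part of the slides:

class SearchProblem:
    """Container for the four components of a search problem (illustrative names)."""
    def __init__(self, initial_state, successor_fn, goal_test, step_cost):
        self.initial_state = initial_state  # problem-dependent description of the initial state
        self.successor_fn = successor_fn    # state -> set of (action, resulting state) pairs
        self.goal_test = goal_test          # state -> True if the state is a goal state
        self.step_cost = step_cost          # (state, action) -> numeric cost of that action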

Example: 8-puzzle (1/2) (Figure: a start state and the goal state of the 8-puzzle.)
- State description: the location of each tile on the board
- Initial state: any given state (it can be shown that only half of the states can be reached from any given initial state)
- Goal test: checks whether the input state matches the desired state
- Path cost: if the goal is to reach the desired configuration by the shortest sequence of moves, each action costs 1 move, and the path cost is the number of steps in the path

Example: 8-puzzle (2/2)
- Actions: different descriptions are possible, e.g.: moving the n-th tile (n = 1,...,8) to one of the four adjacent cells (e.g., "move tile 3 right"): 32 actions; or moving the blank to one of the four adjacent cells (e.g., "move the blank down"): 4 actions. Using, e.g., the latter description, the successor function SF returns all the states that can be reached from a given state, together with the corresponding action descriptions; for instance, applied to a state with the blank in the middle of the board, SF returns four (action, state) pairs, one of which is ("move the blank down", the state with the blank swapped with the tile below it).
The resulting state space is a graph made up of 9!/2 nodes (the board configurations reachable from the initial state). Note that it is neither convenient nor necessary to compute and store the state space beforehand in computer memory: it is implicitly defined by the initial state and SF.
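As a hedged illustration of the "move the blank" description, here is a minimal Python successor function; the tuple-of-9 state encoding (0 = blank) and the function name are assumptions made for this sketch:

def successors_8puzzle(state):
    """state: tuple of 9 values, 0 denoting the blank; returns a list of (action, state) pairs."""
    moves = {"up": -3, "down": +3, "left": -1, "right": +1}
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    for action, delta in moves.items():
        # skip moves that would take the blank off the 3x3 board
        if (action == "up" and row == 0) or (action == "down" and row == 2) \
           or (action == "left" and col == 0) or (action == "right" and col == 2):
            continue
        new_state = list(state)
        target = blank + delta
        new_state[blank], new_state[target] = new_state[target], new_state[blank]
        result.append(("move the blank " + action, tuple(new_state)))
    return result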

Example: route finding in maps (1/2) (Romania road map as above)
- State description: the name of the city the agent is in
- Initial state: any given city in the map
- Goal test: checks whether a given city is the destination
- Path cost: if the goal is to reach the destination through the shortest route, the step cost is the length (e.g., in km) of the road between two adjacent cities, and the path cost is the sum of the corresponding step costs

Example: route finding in maps (2/2)
- Actions can be described as moving from a city to an adjacent one. The corresponding successor function returns all the cities adjacent to a given one (no action description is necessary in this particular case), e.g.: SF(Arad) = {Timisoara, Sibiu, Zerind}
Note that in this kind of problem the state space (graph) corresponds to the map (if one views each road as standing for two actions, e.g., moving from Arad to Sibiu and moving from Sibiu to Arad). Accordingly, storing the information on the map in computer memory amounts to explicitly storing the whole state space.

Solving a search problem From now on we shall consider the simplest kind of search problems: static, fully observable, discrete, deterministic (e.g., 15-puzzle and route finding in maps). The key feature of this kind of problem is that the search for a solution can be made offline, i.e., before starting the execution of the corresponding actions. The main steps for solving such problems are thus the following: 1. goal and problem formulation (discussed above) 2. searching for a solution (offline) 3. executing the actions. In the following we shall focus on the search step.

Search algorithms A possible technique for solving a search problem is to iteratively construct and expand a set of partial solutions, each one starting from the initial state. The set of partial solutions can be represented by a search tree, whose root node corresponds to the initial state, whereas every other node is generated by applying the successor function to another node already in the tree. This approach gives rise to a family of search algorithms.

Search tree A search tree represents a set of partial solutions (sequences of actions) starting from the initial state:
- each node corresponds to a state
- an edge from a parent node A to a child node B corresponds to the action that leads from state A to state B
- a leaf node corresponds to the end state of a partial solution
- each sequence from the root to a leaf is a possible path in the state space
- depth of a node: the number of actions in the path from the root to that node (the root node has zero depth)
- the set of leaf nodes is called the fringe

Search tree: an example (search tree on the Romania road map, as above)
- Root node: the starting city, Arad
- Six leaf nodes (fringe, in white), i.e., six partial solutions: Arad → Sibiu → Arad, Arad → Sibiu → Fagaras, ...
- Edges: moving from a city to an adjacent one

State space and search tree (Romania road map as above) Note that a search tree is different from the state space. One key difference is that every state corresponds to a single node of the state space, but may appear in several nodes of the search tree. For instance, this is the case of the state Arad in the example above.

Sketch of a general tree-search algorithm Every tree-search algorithm works as follows:
1. construct the root node R of the search tree, associate the initial state to R, and set the fringe equal to {R} (the initial state is the only partial solution at this point)
2. repeat the following steps:
2.1 if the fringe is empty, then no solution has been found and the algorithm stops
2.2 choose one of the partial solutions, i.e., one of the leaf nodes N in the fringe (in the first iteration only R can be selected)
2.3 if N contains a goal state, then the search is successfully completed, and the sequence of actions from R to N is returned as the solution
2.4 otherwise, expand the state in N: apply SF to it; for each state generated by SF, construct a new leaf node and add it to the tree as a child of N, as well as to the fringe; finally, remove N from the fringe

Sketch of a general tree-search algorithm Note that the above tree-search algorithm is independent of the search problem. The key point is the choice of a leaf node in the fringe in step 2.2:
- different choices are possible
- each one defines a different search strategy
- each search strategy corresponds to a different search algorithm

Tree-search algorithm: an example (1/6) Example: route finding in maps, getting from Arad to Bucharest. (Romania road map as above.)

Tree-search algorithm: an example (2/6) Root node: the initial state (Arad)

Tree-search algorithm: an example (3/6) Choose a partial solution (leaf node) to explore. Currently, the only leaf is Arad. Goal test: is Arad the desired state? No; therefore, the leaf Arad has to be expanded.

Tree-search algorithm: an example (4/6) Expand the chosen leaf (Arad) by applying to it the successor function, which generates all the states reachable from Arad.

Tree-search algorithm: an example (5/6) Choose another partial solution (leaf node). Now there are three leaves: which one should one choose? This must be determined by a search strategy (see later). For instance, assume that Sibiu is chosen. Goal test: is Sibiu the desired state? No; therefore, the leaf Sibiu has to be expanded.

Tree-search algorithm: an example (6/6) Expand the chosen leaf (Sibiu). And so on.

General tree-search algorithm A more concise (but still informal) description:
function Tree-Search (problem, strategy) returns a solution, or failure
  construct the root node using the initial state of problem
  loop do
    if there are no leaf nodes then return failure
    choose a leaf node according to strategy
    if the chosen leaf node contains a goal state then return the corresponding solution
    else expand the node and add its child nodes to the search tree

Implementation hints In the following, a more formal version of the above tree-search algorithm is given in pseudo-code. This version is independent of the search problem, and of the programming language. Details may change depending on the specific programming language used for implementing the algorithm.

Implementation hints: data structure for search tree nodes
- State: a problem-dependent representation of the corresponding state
- Parent-Node: a pointer to the parent node
- Children-Nodes: pointers to the children nodes
- Action: a description of the action applied to the parent node to generate this one
- Path-Cost: the total cost of the actions on the path from the root to this node
- Depth: the number of edges (actions) in the path from the root to this node
(Figure: an example node for an 8-puzzle state, with Action = Right and Path-Cost = 6, linked to its parent node.)
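A minimal Python counterpart of the node data structure above, reusing the slide's field names; this is an illustrative sketch, not the course's reference implementation:

class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state          # problem-dependent state representation
        self.parent = parent        # pointer to the parent node (None for the root)
        self.children = []          # pointers to the children nodes
        self.action = action        # action applied to the parent to generate this node
        self.path_cost = (parent.path_cost + step_cost) if parent else 0
        self.depth = (parent.depth + 1) if parent else 0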

Implementation hints: the fringe of the search tree At each step of the tree-search algorithm, one leaf node must be selected from the fringe to be expanded, according to a given search strategy. The fringe can be conveniently represented as a queue: newly generated nodes are inserted into the fringe in the order in which they will be expanded by the chosen search strategy. This way, the next node to be expanded is always the first node in the current fringe.

Implementation hints: data structure for the search problem The search problem can be represented by the following data structure:
- Initial-State: a problem-dependent representation of the initial state
- Goal-Test: the goal test function (see above) that checks whether a state is a goal state
- Successor-Fn: the function SF (see above) that returns a set of (action, state) pairs from a given state
- Step-Cost: a function that returns the cost of carrying out a given action in a given node

Implementation hints: search problem Note that in the above data structure the fields Goal-Test, Successor-Fn and Step-Cost contain a function. For instance, in the C language these fields can be defined as pointers to functions. The data structure for representing a state, as well as the goal test, successor (or operator) and step cost functions, has to be specifically defined for the search problem at hand.

Implementation hints: the main function
function Tree-Search (problem, Enqueue) returns a solution, or failure
  fringe ← an empty queue
  fringe ← Enqueue(Make-Node(Initial-State[problem]), fringe)
  loop do
    if Empty?(fringe) then return failure
    node ← Remove-First(fringe)
    if Goal-Test[problem](State[node]) succeeds then return Solution(node)
    fringe ← Enqueue(Expand(node, problem), fringe)
Note that in this implementation the search strategy can be implemented through the function Enqueue, which is passed to Tree-Search as an argument (e.g., a pointer to a function in the C language).

Implementation hints: node expansion
function Expand(node, problem) returns a set of nodes
  successors ← the empty set
  for each ⟨action, result⟩ in Successor-Fn[problem](State[node]) do
    n ← a new node
    State[n] ← result
    Parent-Node[n] ← node
    Action[n] ← action
    Path-Cost[n] ← Path-Cost[node] + Step-Cost[problem](node, action)
    Depth[n] ← Depth[node] + 1
    Children-Nodes[n] ← the empty set
    add n to Children-Nodes[node]
    add n to successors
  return successors

Implementation hints: auxiliary functions
- Make-Node(n) returns a new instance of node, storing n in its State field
- Remove-First(q) removes the first element from the queue q and returns it
- Solution(n) returns the sequence of Action field values from the root of the tree to node n, following the pointers in the Parent-Node field from n backwards to the root
- Enqueue(nodes, q) inserts into the queue q each node of the set nodes, in a position defined by the search strategy (note that a different function must be defined for each search strategy)
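A runnable Python sketch of the Tree-Search and Expand pseudo-code above, reusing the SearchProblem and Node classes sketched earlier; the enqueue argument plays the role of Enqueue and therefore encodes the search strategy (all names are illustrative assumptions):

def expand(node, problem):
    """Generate the children of a node via the problem's successor function."""
    successors = []
    for action, result in problem.successor_fn(node.state):
        child = Node(result, parent=node, action=action,
                     step_cost=problem.step_cost(node.state, action))
        node.children.append(child)
        successors.append(child)
    return successors

def solution(node):
    """Return the sequence of actions from the root to the given node."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

def tree_search(problem, enqueue):
    """General tree search; 'enqueue' inserts newly generated nodes into the fringe."""
    fringe = enqueue([Node(problem.initial_state)], [])
    while fringe:
        node = fringe.pop(0)            # Remove-First: take the first node in the fringe
        if problem.goal_test(node.state):
            return solution(node)
        fringe = enqueue(expand(node, problem), fringe)
    return None                         # failure: the fringe became empty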

Measuring problem-solving performance Two main criteria:
- effectiveness:
  - completeness: is the algorithm guaranteed to find a solution, when there is one?
  - optimality: is the solution found by the algorithm the one with minimum path cost?
- efficiency (search cost): computational complexity of the search algorithm:
  - time complexity: how long does it take to find a solution?
  - space complexity: how much memory is needed to perform the search?
A trade-off between effectiveness and efficiency may be required in practice.

Measuring problem-solving performance A note on space complexity: the state space graph does not always need to be stored in memory (it may even be infinite). Examples:
- route finding in maps (finite state space): an explicit representation (fully stored in memory) is possible (Romania road map as above)
- 15-puzzle (finite, but very large state space): the implicit representation defined by the initial state and the successor function is enough (8-puzzle start and goal states as above)

Search strategies The essence of search: choosing one of the partial solutions (leaf nodes) to follow up at each step. An example (search tree on the Romania map, as above): which of the six partial solutions should one choose? Is there any information available about which option is better than another?
- no: uninformed search strategies must be used
- yes: informed search strategies can be used

Uninformed search strategies Main rationale: in the absence of any information about which partial solution is best, systematically explore the state space. Main strategies:
- breadth-first
- depth-first
- uniform-cost
- depth-limited
- iterative-deepening depth-first
- bidirectional

Breadth-first search Expands first the shallowest leaf node. If there is more than one leaf node at the lowest depth, the choice can be made according to any criterion (even randomly). An example (search tree on the Romania map, as above): shallowest leaf nodes: Timisoara, Zerind
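In terms of the tree_search sketch above, breadth-first search corresponds to an Enqueue that appends newly generated nodes at the end of the fringe (FIFO); the function name and the romania_problem instance in the usage comment are hypothetical:

def enqueue_breadth_first(new_nodes, fringe):
    """FIFO insertion: newly generated nodes go to the back of the fringe."""
    return fringe + list(new_nodes)

# e.g.: path = tree_search(romania_problem, enqueue_breadth_first)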

Example of breadth-first search (1/6) Route finding in maps: getting from Arad to Bucharest. (Romania road map as above.) The nodes of the search tree will be numbered to refer to them without ambiguity.

Example of breadth-first search (2/6) Root node: the initial state, Arad (1). Fringe = { Arad (1) }. Goal test: is the first node in the fringe the desired state? No, therefore it will be expanded.

Example of breadth-first search (3/6) Arad (1) is removed from the fringe and expanded, generating three nodes having the same depth:
- since the shallowest leaf nodes have to be expanded first, the newly generated nodes are always inserted at the end of the fringe
- since the newly generated nodes all have the same depth, they can be inserted in the fringe in any order among themselves
Current fringe: { Zerind (2), Sibiu (3), Timisoara (4) }.

Example of breadth-first search (4/6) Goal test: is the first node in the fringe, Zerind (2), the desired state? No, therefore it is removed from the fringe and expanded. Current fringe: { Sibiu (3), Timisoara (4), Oradea (5), Arad (6) }.

Example of breadth-first search (5/6) Goal test: is the first node in the fringe, Sibiu (3), the desired state? No, therefore it is removed from the fringe and expanded. Current fringe: { Timisoara (4), Oradea (5), Arad (6), Oradea (7), Arad (8), Rimnicu Vilcea (9), Fagaras (10) }.

Example of breadth-first search (6/6) Goal test: is the first node in the fringe, Timisoara (4), the desired state? No, therefore it is removed from the fringe and expanded. Current fringe: { Oradea (5), Arad (6), Oradea (7), Arad (8), Rimnicu Vilcea (9), Fagaras (10), Lugoj (11), Arad (12) }. And so on.

Properties of breadth-first search
- Completeness: yes (a solution is always found, if one exists)
- Optimality: no (unless the path cost is a non-decreasing function of depth)
- Time complexity?
- Space complexity?

Evaluating the computational complexity of search algorithms Analytical framework:
- consider a hypothetical state space where each state has b successors (branching factor)
- let d ≥ 0 be the depth of the shallowest solution
- evaluate the worst-case computational complexity: time complexity: how many nodes does a search algorithm generate before it finds a solution? space complexity: what is the maximum number of nodes that have to be simultaneously stored in memory?
Usually computational complexity is evaluated asymptotically, and is represented using the Big O notation.

Breadth-first search: computational complexity Time complexity (number of generated nodes, by depth):
depth 0: 1 (root node)
depth 1: b
depth 2: b^2
depth 3: b^3
...
depth d: b^d
depth d+1: b^(d+1) - b
Total: 1 + b + b^2 + ... + b^d + (b^(d+1) - b) = O(b^(d+1))
(Figure: an example tree with b = 2, i.e., a binary tree.)
Space complexity is O(b^(d+1)) as well, since all generated nodes must remain in memory.

Breadth-first search: computational complexity An example:
- b = 10
- time for generating one node: 10^-4 s
- each node requires 1,000 bytes of storage
Depth | Nodes   | Time        | Memory
2     | 1,100   | 0.11 sec    | 1 megabyte
4     | 111,100 | 11 sec      | 106 megabytes
6     | 10^7    | 19 minutes  | 10 gigabytes
8     | 10^9    | 31 hours    | 1 terabyte
10    | 10^11   | 129 days    | 101 terabytes
12    | 10^13   | 35 years    | 10 petabytes
14    | 10^15   | 3,523 years | 1 exabyte

Depth-first search Expands first the deepest leaf node. If there is more than one leaf node at the greatest depth, the choice can be made according to any criterion (even randomly). An example (search tree on the Romania map, as above): deepest leaf nodes: Arad, Fagaras, Oradea, Rimnicu Vilcea
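Symmetrically, in the tree_search sketch above depth-first search corresponds to an Enqueue that inserts newly generated nodes at the front of the fringe (LIFO); again, the names are illustrative:

def enqueue_depth_first(new_nodes, fringe):
    """LIFO insertion: newly generated nodes go to the front of the fringe."""
    return list(new_nodes) + fringe

# e.g.: path = tree_search(romania_problem, enqueue_depth_first)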

Example of depth-first search (1/5) Route finding in maps: getting from Arad to Bucharest. (Romania road map as above.)

Example of depth-first search (2/5) Root node: the initial state, Arad (1). Fringe = { Arad (1) }. Goal test: is the first node in the fringe the desired state? No, therefore it will be expanded.

Example of depth-first search (3/5) Arad (1) is removed from the fringe and expanded, generating three nodes having the same depth:
- since the deepest leaf nodes have to be expanded first, the newly generated nodes are always inserted at the front of the fringe
- since the newly generated nodes all have the same depth, they can be inserted in the fringe in any order among themselves
Current fringe: { Zerind (2), Sibiu (3), Timisoara (4) }.

Example of depth-first search (4/5) Goal test: is the first node in the fringe, Zerind (2), the desired state? No, therefore it is removed from the fringe and expanded. Current fringe: { Oradea (5), Arad (6), Sibiu (3), Timisoara (4) }.

Example of depth-first search (5/5) Goal test: is the first node in the fringe, Oradea (5), the desired state? No, therefore it is removed from the fringe and expanded. Current fringe: { Sibiu (7), Zerind (8), Arad (6), Sibiu (3), Timisoara (4) }. And so on.

Some remarks on depth-first search
- Depth-first search can get stuck going down very long paths, or even infinite paths, for instance loops (e.g., Arad → Zerind → Arad → Zerind → ...)
- It has modest memory requirements: after a node has been expanded, it can be removed from memory, provided that all the paths starting from it have been fully explored (if they are not infinite); therefore, only a single path from the root to a leaf node needs to be stored in memory, together with the unexpanded sibling nodes of each node on the path
An example is given in the following, assuming a binary search tree (each node has two successors) where each path has depth 3 (shaded nodes are the ones not yet generated).

Example of depth-first search (Figure: successive snapshots of depth-first search on a binary tree of depth 3, showing how fully explored subtrees are removed from memory while the unexpanded siblings along the current path are kept.)

Depth-first search: computational complexity Remember that only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path, must be stored. Branching factor: b; maximum depth of the search tree: m; depth of the shallowest solution: m (worst case).
Depth | N. of generated nodes (time) | N. of stored nodes (space)
0     | 1 (root node)                | 1 (root node)
1     | b                            | b
2     | b^2                          | b
...   | ...                          | ...
m     | b^m                          | b
Total: 1 + b + ... + b^m = O(b^m) generated nodes; 1 + mb = O(mb) stored nodes.
Space complexity is negligible.

Properties of depth-first search
- Completeness: yes (unless there are infinite paths)
- Optimality: no (a deeper, suboptimal solution can be found along a path explored before the one containing the optimal solution at a smaller depth)
- Time complexity: O(b^m)
- Space complexity: O(mb)
Care must be taken to avoid infinite paths (e.g., looping paths: Arad → Zerind → Arad → Zerind → ...).

Other strategies
- Uniform-cost: expands the leaf node with the lowest path cost
- Depth-limited: depth-first search with a predefined depth limit (avoids infinite paths, but is not complete)
- Iterative-deepening depth-first: repeated depth-limited search with depth limit 1, 2, 3, ..., until a solution is found (avoids infinite paths, and is complete)
- Bidirectional: simultaneously searching forward from the initial state and backwards from the goal state, until the two searches meet
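As a hedged sketch, two of these strategies can also be expressed on top of the earlier tree_search code: uniform-cost search keeps the fringe sorted by path cost, and iterative-deepening depth-first search repeats a depth-limited search with growing limits (function names and the default limit are assumptions):

def enqueue_uniform_cost(new_nodes, fringe):
    """Keep the fringe sorted by path cost, so the cheapest node is expanded first."""
    return sorted(fringe + list(new_nodes), key=lambda node: node.path_cost)

def iterative_deepening(problem, max_limit=50):
    """Repeated depth-limited search with increasing depth limits (illustrative)."""
    for limit in range(max_limit + 1):
        def enqueue_depth_limited(new_nodes, fringe, limit=limit):
            # depth-first insertion, discarding nodes beyond the current depth limit
            return [n for n in new_nodes if n.depth <= limit] + fringe
        result = tree_search(problem, enqueue_depth_limited)
        if result is not None:
            return result
    return None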

Avoiding repeated states A critical issue of the search process: wasting time by expanding the same state several times. It may happen, e.g., when:
- actions are reversible (15-puzzle, route finding, etc.)
- more than one path from the root can lead to the same state (15-puzzle, route finding, etc.)

Avoiding repeated states Possible solutions, with increasing complexity:
1. if reversible actions exist (e.g., 8-puzzle, route finding in maps), discard a child node containing the state of its parent node
2. discard children nodes containing states already present in the same path
3. discard children nodes containing states already present in the current search tree (but depth-first search removes some nodes from memory)
4. discard children nodes containing states previously inserted in the search tree, even if not present in the current one (requires storing all generated states)
Issues of solutions 3 and 4:
- exponential complexity for strategies like breadth-first search
- it is better to discard the repeated state with the highest path cost, to avoid affecting optimality
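A small sketch of solution 2 (discarding states already present on the same path), built on the expand() function sketched earlier; names are illustrative:

def on_path(node, state):
    """True if 'state' already appears on the path from the root to 'node'."""
    while node is not None:
        if node.state == state:
            return True
        node = node.parent
    return False

def expand_without_repeats(node, problem):
    """Variant of expand() that drops children whose state already occurs on the current path."""
    return [child for child in expand(node, problem) if not on_path(node, child.state)]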

Effectiveness of uninformed search: an example An (apparently) very simple toy problem, the 8-puzzle (start and goal states as above):
- state space size: 9! = 362,880 distinct states (only 9!/2 = 181,440 reachable from any given initial state)
- average solution depth (over all possible pairs of initial and goal states): about 22
- average branching factor: about 3
Nodes generated/stored by breadth-first search when the shallowest solution has depth 22: about 3^22 ≈ 3.1 × 10^10. Not a toy problem, at least using uninformed search.

Exercise
1. Implement the general tree-search algorithm in a programming language of your choice, including the data structures for a search problem and the search tree
2. Implement the additional, specific functions for breadth-first, depth-first and uniform-cost search
3. Implement the additional, specific data structures and functions for the 8-puzzle problem, and for the route-finding problem on the Romania map
4. Run the above search algorithms on specific problem instances

Informed search Uninformed search, based on systematically exploring the search space, is not effective, even on toy problems. Problem-specific knowledge, when available, can help. Main idea: exploiting such knowledge to identify the best node to expand at each step of the general tree-search algorithm: the best-first search approach.

Best-first search A useful piece of information (when available): an estimate of the cost between any given node n (state) and the solution, which is formalized as a heuristic function h(n). It can be used in a more general node evaluation function f(n), which can take into account other information (e.g., the path cost). The same general tree-search algorithm above can be used: the best-first strategy inserts nodes into the fringe (queue) sorted by increasing values of f(n), such that the leaf node n with the lowest f(n) is selected for expansion at each iteration. Heuristic search is one of the first contributions of AI.
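In the tree_search sketch above, a best-first strategy is obtained by keeping the fringe sorted by increasing f(n); the factory function below is an illustrative way to do this for any evaluation function f:

def make_enqueue_best_first(f):
    """Build an Enqueue function that keeps the fringe sorted by increasing f(n)."""
    def enqueue(new_nodes, fringe):
        return sorted(fringe + list(new_nodes), key=f)
    return enqueue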

Heuristic functions: an example Route finding in maps (from Arad to Bucharest), on the Romania road map as above.
h(n) = straight-line distance from a given city to Bucharest:
Arad 366, Bucharest 0, Craiova 160, Drobeta 242, Eforie 161, Fagaras 176, Giurgiu 77, Hirsova 151, Iasi 226, Lugoj 244, Mehadia 241, Neamt 234, Oradea 380, Pitesti 100, Rimnicu Vilcea 193, Sibiu 253, Timisoara 329, Urziceni 80, Vaslui 199, Zerind 374
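For later use in the sketches, the straight-line distances above can be stored in a Python dictionary; the dictionary name, the h_sld function and the assumption that a route-finding state is the city name are all illustrative:

SLD_TO_BUCHAREST = {
    "Arad": 366, "Bucharest": 0, "Craiova": 160, "Drobeta": 242, "Eforie": 161,
    "Fagaras": 176, "Giurgiu": 77, "Hirsova": 151, "Iasi": 226, "Lugoj": 244,
    "Mehadia": 241, "Neamt": 234, "Oradea": 380, "Pitesti": 100, "Rimnicu Vilcea": 193,
    "Sibiu": 253, "Timisoara": 329, "Urziceni": 80, "Vaslui": 199, "Zerind": 374,
}

def h_sld(node):
    """Heuristic h(n): straight-line distance from the node's city to Bucharest."""
    return SLD_TO_BUCHAREST[node.state]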

Main best-first search strategies
- Greedy search
- A* search
- Iterative-deepening A*
- Memory-bounded A*

Greedy best-first search Tries to expand the node that is closest to the solution: f(n) = h(n). An example (route finding from Arad to Bucharest, with the straight-line-distance heuristic): (a) The initial state: the single node Arad, with h = 366.

Greedy best-first search, continued: (b) After expanding Arad: leaves Sibiu (h = 253), Timisoara (h = 329), Zerind (h = 374).

Greedy best-first search, continued: (c) After expanding Sibiu: its children are Arad (h = 366), Fagaras (h = 176), Oradea (h = 380), Rimnicu Vilcea (h = 193); Timisoara (h = 329) and Zerind (h = 374) are still in the fringe.

Greedy best-first search, continued: (d) After expanding Fagaras: its children are Sibiu (h = 253) and Bucharest (h = 0); Bucharest now has the lowest h, so it is selected next and the goal test succeeds, giving the path Arad → Sibiu → Fagaras → Bucharest.

Properties of greedy best-first search
- Optimality: no (see the previous example)
- Completeness: no (if there are infinite paths)
- Time and space complexity: O(b^m)
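Using the earlier sketches, greedy best-first search is simply best-first search with f(n) = h(n); the romania_problem instance in the comment is hypothetical:

enqueue_greedy = make_enqueue_best_first(h_sld)       # f(n) = h(n)
# path = tree_search(romania_problem, enqueue_greedy)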

A* search The most widely known best-first search strategy. Expands the node with minimum estimated cost to the solution: f(n) = g(n) + h(n), where g(n) is the path cost from the root to n and h(n) is the estimated cost from n to the solution. Devised in the 1960s for robot navigation tasks.
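Under the same assumptions as the earlier sketches, A* is best-first search with f(n) = g(n) + h(n), where g(n) is the node's path cost:

def f_astar(node):
    """A* evaluation function: f(n) = g(n) + h(n)."""
    return node.path_cost + h_sld(node)

enqueue_astar = make_enqueue_best_first(f_astar)
# path = tree_search(romania_problem, enqueue_astar)   # hypothetical problem instance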

A* search An example: expanding the first five nodes in the Arad-Bucharest route-finding problem (the value f(n) = g(n) + h(n) is shown for each node). (a) The initial state: Arad, f = 0 + 366 = 366.

A* search example, continued: (b) After expanding Arad: Sibiu f = 140 + 253 = 393, Timisoara f = 118 + 329 = 447, Zerind f = 75 + 374 = 449.

A* search example, continued: (c) After expanding Sibiu: Arad f = 280 + 366 = 646, Fagaras f = 239 + 176 = 415, Oradea f = 291 + 380 = 671, Rimnicu Vilcea f = 220 + 193 = 413.

A* search example, continued: (d) After expanding Rimnicu Vilcea: Craiova f = 366 + 160 = 526, Pitesti f = 317 + 100 = 417, Sibiu f = 300 + 253 = 553.

A* search example, continued: (e) After expanding Fagaras: Sibiu f = 338 + 253 = 591, Bucharest f = 450 + 0 = 450.

A* search example, continued: (f) After expanding Pitesti: Bucharest f = 418 + 0 = 418, Craiova f = 455 + 160 = 615, Rimnicu Vilcea f = 414 + 193 = 607.

Properties of A* search
- Optimality: yes, provided that h(n) is admissible, i.e., it never overestimates the cost to the solution (e.g., route finding in maps: the straight-line distance)
- Completeness: yes
- Also optimally efficient, for any admissible h(n), among algorithms that extend search paths from the root: A* expands the minimum number of nodes
- Time and space complexity: still exponential in the worst case (O(b^m)), but usually much lower than that of uninformed search strategies

Proof of A* optimality (1/2) Assume the fringe contains one leaf node n′ with a sub-optimal goal state, and no leaf node with an optimal goal state. Can A* ever select n′ to be expanded, thus returning it as a suboptimal solution? First, note that some leaf node n″ ≠ n′ on the path toward an optimal solution must exist in the fringe. (Figure: a search tree with n′ and n″ in the fringe; the optimal solution below n″ has not yet been generated.)

Proof of A* optimality (2/2) Denoting with C* the cost of an optimal solution, the above assumptions imply:
1. h(n′) = 0 (n′ contains a goal state)
2. f(n′) = g(n′) + h(n′) = g(n′) > C* (n′ contains a sub-optimal goal state)
3. f(n″) = g(n″) + h(n″) ≤ C* (n″ is on the path toward an optimal solution, and h is admissible, thus f(n″) does not overestimate the cost of any solution reachable through n″)
In turn, 2 and 3 imply f(n′) > C* ≥ f(n″). This means that n′ will not be selected to be expanded, and thus A* cannot return a suboptimal solution.

Improving A* search
- Good heuristics can reduce time and memory requirements, especially with respect to uninformed search
- However, in many practical problems even A* is infeasible: memory requirements are the main drawback
- Alternative approaches: searching for suboptimal solutions quickly; A* variants with reduced memory requirements (and a small increase in execution time)

Defining heuristic functions Defining a good (and, for A*, admissible) heuristic h(n) is crucial for informed search. Examples:
- route finding in maps: straight-line distance
- what about the 8-puzzle? (remember that about 3.1 × 10^10 nodes are generated on average by breadth-first search)
Exercise: devise admissible heuristics for the 8-puzzle problem.

Defining heuristic functions Well-known admissible heuristics for the k-puzzle:
- number of misplaced tiles (in the following, h1(n))
- sum of the distances of the tiles from their goal positions (city-block or Manhattan distance, h2(n))
Example (the start and goal states shown earlier):
- h1(start state) = 8 (all tiles are misplaced)
- h2(start state) = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18 (tiles 1 to 8)
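A minimal Python sketch of h1 and h2 for the 8-puzzle, using the tuple-of-9 state encoding assumed earlier (0 = blank); the goal configuration below is illustrative:

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != goal[i])

def h2(state, goal=GOAL):
    """Sum of the Manhattan (city-block) distances of each tile from its goal position."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total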

Effect of heuristic accuracy on performance To evaluate the quality of a heuristic, the concept of effective branching factor (denoted b*) is useful:
- let N be the number of nodes generated by A* for a given problem, and d the solution depth
- b* is defined as the branching factor of a uniform tree of depth d containing N nodes, i.e., the solution of the equation N = 1 + b* + (b*)^2 + ... + (b*)^d
The lower the value of b*, the better the heuristic. Note that b* depends on the problem instance: it is usually evaluated empirically as the average over a set of instances.
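Since the defining equation has no simple closed-form solution for b*, it can be solved numerically; the sketch below uses plain bisection (the function name and tolerance are assumptions):

def effective_branching_factor(N, d, tol=1e-6):
    """Solve N = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    def total_nodes(b):
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, float(N)          # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total_nodes(mid) < N:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g.: b_star = effective_branching_factor(N=73, d=12), one of the (N, d) pairs reported below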

Effect of heuristic accuracy on performance An example:
- evaluating h1 and h2 on 600 random 8-puzzle problem instances with solution depth d = 4, 8, ..., 24 (100 instances for each depth)
- comparison with uninformed iterative-deepening search (IDS)
Search cost (expanded nodes) and effective branching factor:
d  | IDS       | A* (h1) | A* (h2) | IDS b* | A* (h1) b* | A* (h2) b*
4  | 112       | 13      | 12      | 2.87   | 1.48       | 1.45
8  | 6,384     | 39      | 25      | 2.80   | 1.33       | 1.24
12 | 3,644,035 | 227     | 73      | 2.78   | 1.42       | 1.24
16 | -         | 1,301   | 211     | -      | 1.45       | 1.25
20 | -         | 7,276   | 676     | -      | 1.47       | 1.27
24 | -         | 39,135  | 1,641   | -      | 1.48       | 1.26

Defining/choosing heuristic functions A general criterion for defining a heuristic: set h(n) to the exact solution cost of a relaxed problem. Examples:
- k-puzzle: a tile can move to any square → h1(n)
- k-puzzle: a tile can move to any adjacent square → h2(n)
- route finding in maps: an adjacent city can be reached by moving straight to it → straight-line distance
If several heuristics h1, ..., hp are available:
- choose the dominating one (if any), i.e., the heuristic hk such that, for each n, k = arg max_{i=1,...,p} h_i(n) (for instance, h2 dominates h1 in the k-puzzle)
- if no dominating heuristic exists, define a new one as h(n) = max{h1(n), ..., hp(n)} (by definition it dominates all the available heuristics); a small code sketch follows
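A small sketch of the last rule (taking the pointwise maximum of several heuristics); the helper name is illustrative:

def combine_heuristics(*heuristics):
    """Return h(n) = max{h1(n), ..., hp(n)}; by construction it dominates each individual h_i."""
    return lambda n: max(h(n) for h in heuristics)

# e.g., for the 8-puzzle sketches above: h_combined = combine_heuristics(h1, h2)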