Dr. Mustafa Jarrar. Chapter 4 Informed Searching. Artificial Intelligence. Sina Institute, University of Birzeit


Lecture Notes on Informed Searching. University of Birzeit, Palestine, 1st Semester, 2014. Artificial Intelligence, Chapter 4: Informed Searching. Dr. Mustafa Jarrar, Sina Institute, University of Birzeit. mjarrar@birzeit.edu, www.jarrar.info

Watch this lecture and download the slides from http://jarrar-courses.blogspot.com/2011/11/artificial-intelligence-fall-2011.html. Most of the information is based on Chapter 4 of [1].

Discussion and Motivation: How do we determine the minimum number of coins to give while making change? Should we always give the coin of highest value first?
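As a minimal sketch of the greedy idea raised above (the denominations are an illustrative assumption, not from the slides): repeatedly give the largest coin that still fits. Greedy is optimal for some coin systems but not all; with denominations {1, 3, 4} and amount 6 it returns 4+1+1 although 3+3 is better.

```python
# Greedy change-making sketch: always give the largest coin that fits.
# The denominations are illustrative; greedy is optimal for some coin
# systems (e.g., 1/5/10/25) but not for all.
def greedy_change(amount, denominations=(25, 10, 5, 1)):
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1] -> 6 coins
```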

Discussion and Motivation: Traveling Salesperson Problem. Given a list of cities and their pairwise distances, the task is to find the shortest possible tour that visits each city exactly once. Any idea how to improve this type of search? What type of information might we use to improve our search? Do you think this idea is useful: at each stage, visit the unvisited city nearest to the current city?
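A hedged sketch of that nearest-city rule (the distance matrix is a made-up example); it is fast, but its tours can be far from optimal:

```python
# Nearest-neighbor tour construction: at each step, visit the unvisited
# city closest to the current one.
def nearest_neighbor_tour(dist, start=0):
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbor_tour(dist))  # [0, 1, 3, 2]
```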

Best-first search. Idea: use an evaluation function f(n) for each node. This gives a family of search methods with various evaluation functions (estimates of "desirability"); the estimate is usually of the distance to the goal and is often referred to as a heuristic in this context. Expand the most desirable unexpanded node. A heuristic function ranks the alternatives at each branching step, based on the available information, in order to decide which branch to follow during the search. Implementation: order the nodes in the fringe by decreasing desirability. Special cases: greedy best-first search and A* search.
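As an implementation sketch (a minimal version, assuming a neighbors() function and an evaluation function f, both names chosen here for illustration), the fringe can be kept as a priority queue ordered by f:

```python
# Generic best-first search: pop the node with the best (lowest) f
# value from a priority-queue fringe, expand it, and repeat.
import heapq

def best_first_search(start, goal, neighbors, f):
    fringe = [(f(start), start)]
    came_from = {start: None}
    while fringe:
        _, node = heapq.heappop(fringe)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nbr in neighbors(node):
            if nbr not in came_from:
                came_from[nbr] = node
                heapq.heappush(fringe, (f(nbr), nbr))
    return None
```

With f = h this behaves as greedy best-first search; A* additionally accumulates path costs, as shown later.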

Romania with step costs in km. Suppose we have this information: the straight-line distance (SLD) from each city to Bucharest. How can we use it to improve our search?

Greedy best-first search. Greedy best-first search expands the node that appears to be closest to the goal, using an estimate of the cost from n to the goal, e.g., h_SLD(n) = straight-line distance from n to Bucharest. It utilizes a heuristic function as the evaluation function: f(n) = h(n) = estimated cost from the current node to a goal. Heuristic functions are problem-specific; straight-line distance is common for route-finding and similar problems. Greedy best-first search is often better than depth-first search, although its worst-case time complexity is equal and its space complexity is worse.
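Plugging a heuristic table directly into the best-first sketch above yields greedy best-first search. The fragment below uses a few of the standard Romania straight-line distances to Bucharest; the tiny graph is trimmed for illustration:

```python
# Greedy best-first search = best-first search with f(n) = h(n).
h = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Bucharest': 0}
graph = {'Arad': ['Sibiu'], 'Sibiu': ['Arad', 'Fagaras'],
         'Fagaras': ['Sibiu', 'Bucharest'], 'Bucharest': []}

path = best_first_search('Arad', 'Bucharest',
                         neighbors=lambda c: graph[c],
                         f=lambda c: h[c])
print(path)  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```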

Greedy best-first search example (figures from [1], stepping through the expansion over four slides).

Properties of greedy best-first search. Complete: No, it can get stuck in loops (e.g., Iasi → Neamt → Iasi → Neamt ...). Time: O(b^m), but a good heuristic can give significant improvement. Space: O(b^m), since it keeps all nodes in memory. Optimal: No. (Here b is the branching factor and m the maximum depth of the search tree.)

Discussion: Do you think h_SLD(n) is admissible? Would you use h_SLD(n) in Palestine? How? Why? Did you find the greedy idea useful? Any ideas to improve it?

A* search. Idea: avoid expanding paths that are already expensive. Evaluation function = path cost + estimated cost to the goal: f(n) = g(n) + h(n), where g(n) = cost so far to reach n, h(n) = estimated cost from n to the goal, and f(n) = estimated total cost of the path through n to the goal. A* combines greedy and uniform-cost search to find the (estimated) cheapest path through the current node. The heuristic must be admissible: it must never overestimate the cost to reach the goal. A* is a very good search method, but it has complexity problems.
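A hedged sketch of A* over a weighted graph given as an adjacency dict {node: [(neighbor, step_cost), ...]} (a representation chosen here for illustration). With an admissible h, the first goal popped from the fringe is optimal:

```python
# A* search: fringe ordered by f(n) = g(n) + h(n); best_g records the
# cheapest known cost to each node so worse duplicates are skipped.
import heapq

def a_star(start, goal, edges, h):
    fringe = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        for nbr, cost in edges.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                heapq.heappush(fringe, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None, float('inf')
```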

A* search example (figures from [1], stepping through the expansion over six slides).

A* Exercise: How will A* get from Iasi to Fagaras?

A* Exercise. Coordinates and straight-line (SL) distances to the goal node K:

Node  Coordinates  SL Distance
A     (5,9)        8.0
B     (3,8)        7.3
C     (8,8)        7.6
D     (5,7)        6.0
E     (7,6)        5.4
F     (4,5)        4.1
G     (6,5)        4.1
H     (3,3)        2.8
I     (5,3)        2.0
J     (7,2)        2.2
K     (5,1)        0.0

Solution to A* Exercise: worked through in the lecture figures, using the table above.

Greedy Best-First Exercise: repeat the search on the same graph and table, using f(n) = h(n).

Solution to Greedy Best-First Exercise: worked through in the lecture figures.

Another Exercise: run 1) A* search and 2) greedy best-first search on the following graph.

Node  Coordinates  g(n)  h(n)
A     (5,10)       0.0   8.0
B     (3,8)        2.8   6.3
C     (7,8)        2.8   6.3
D     (2,6)        5.0   5.0
E     (5,6)        5.6   4.0
F     (6,7)        4.2   5.1
G     (8,6)        5.0   5.0
H     (1,4)        7.2   4.5
I     (3,4)        7.2   2.8
J     (7,3)        8.1   2.2
K     (8,4)        7.0   3.6
L     (5,2)        9.6   0.0

Admissible Heuristics (based on [4]). A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic. Example: h_SLD(n) never overestimates the actual road distance. Theorem 1: If h(n) is admissible, then A* using TREE-SEARCH is optimal. (Ideas on how to prove this theorem?)
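As a sanity-check sketch (assuming undirected, non-negative edge costs in the adjacency-dict format used earlier), admissibility can be verified empirically by computing the true costs h*(n) via uniform-cost search from the goal:

```python
# Empirical admissibility check: compare h(n) with the true cost h*(n)
# computed by Dijkstra/uniform-cost search outward from the goal.
import heapq

def true_costs_to_goal(edges, goal):
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float('inf')):
            continue
        for nbr, cost in edges.get(node, []):
            if d + cost < dist.get(nbr, float('inf')):
                dist[nbr] = d + cost
                heapq.heappush(pq, (d + cost, nbr))
    return dist

def is_admissible(h, edges, goal):
    h_star = true_costs_to_goal(edges, goal)
    return all(h(n) <= h_star[n] for n in h_star)
```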

Optimality of A* (proof), based on [4]. Recall that f(n) = g(n) + h(n). Now suppose some suboptimal goal G2 has been generated and is in the fringe, and let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. We want to prove f(n) < f(G2) (then A* will prefer n over G2):

f(G2) = g(G2), since h(G2) = 0
g(G2) > g(G), since G2 is suboptimal
f(G) = g(G), since h(G) = 0
f(G2) > f(G), from the above
h(n) ≤ h*(n), since h is admissible
g(n) + h(n) ≤ g(n) + h*(n), hence f(n) ≤ f(G)

Hence f(G2) > f(n), so n is expanded before G2; selecting G2 would be a contradiction. Thus A* will never select G2 for expansion.

Optimality of A* (proof, alternative form). Recall that f(n) = g(n) + h(n). Suppose some suboptimal goal G2 has been generated and is in the fringe, and let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. We want to prove f(n) < f(G2) (then A* will prefer n over G2). In other words:

f(G2) = g(G2) + h(G2) = g(G2) > C*, since G2 is a goal on a non-optimal path (C* is the optimal cost)
f(n) = g(n) + h(n) ≤ C*, since h is admissible
f(n) ≤ C* < f(G2), so G2 will never be expanded

A* will not expand goals on sub-optimal paths.

Consistent Heuristics (based on [5]). A heuristic is consistent if for every node n and every successor n' of n generated by any action a: h(n) ≤ c(n,a,n') + h(n'). If h is consistent, we have f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n), i.e., f(n) is non-decreasing along any path. Theorem 2: If h(n) is consistent, A* using GRAPH-SEARCH is optimal. (Consistency is also called monotonicity.)

Optimality of A* (based on [6]). A* expands nodes in order of increasing f value, gradually adding "f-contours" of nodes. Contour i contains all nodes with f = f_i, where f_i < f_{i+1}.

Complexity of A* (based on [2]). The number of nodes within the goal contour of the search space is still exponential in the length of the solution: better than other algorithms, but still problematic. Frequently, space complexity matters more than time complexity: A* keeps all generated nodes in memory.

Properties of A*. Complete: Yes (unless there are infinitely many nodes with f ≤ f(G)). Time: exponential, because all nodes with f(n) ≤ C* are expanded. Space: keeps all nodes in memory; the fringe is exponentially large. Optimal: Yes. Who can propose an idea to improve the time/space complexity?

Memory-Bounded Heuristic Search. How can we solve the memory problem of A* search? Idea: try something like iterative deepening search, but make the cutoff at each iteration an f-cost (g + h) rather than a depth. Two types of memory-bounded heuristic search: recursive best-first search (RBFS) and SMA*.
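The f-cost-cutoff idea is exactly that of IDA* (iterative deepening A*), which the slides do not spell out; a minimal sketch, under the same adjacency-dict graph assumptions as before:

```python
# IDA*: depth-first search with an f-cost cutoff that grows each
# iteration to the smallest f value that exceeded the previous bound.
# Memory use is linear in the depth of the solution.
def ida_star(start, goal, edges, h):
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None
        if node == goal:
            return f, path
        minimum = float('inf')
        for nbr, cost in edges.get(node, []):
            if nbr in path:  # avoid cycles along the current path
                continue
            t, found = dfs(nbr, g + cost, bound, path + [nbr])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h(start)
    while True:
        t, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None
        bound = t
```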

Recursive Best-First Search (RBFS). RBFS keeps track of the best alternative f-value among fringe nodes that are not descendants of the current node, and asks: do I want to back up? In practice RBFS changes its mind very often. This is because f = g + h becomes more accurate (less optimistic) as we approach the goal, so higher-level nodes have smaller f-values and will be explored first. Problem: if we have more memory, RBFS cannot make use of it. Any idea how to improve this?

Simple Memory-Bounded A* (SMA*). This is like A*, but when memory is full we delete the worst node (the one with the largest f-value). Like RBFS, we remember the best descendant in the branch we delete. If there is a tie (equal f-values), we delete the oldest node first. SMA* finds the optimal reachable solution given the memory constraint, but time can still be exponential.

SMA* pseudocode (based on [2]; the assignment arrows and the ∞ value lost in transcription are restored):

function SMA*(problem) returns a solution sequence
  inputs: problem, a problem
  static: Queue, a queue of nodes ordered by f-cost
  Queue ← MAKE-QUEUE({MAKE-NODE(INITIAL-STATE[problem])})
  loop do
    if Queue is empty then return failure
    n ← deepest least-f-cost node in Queue
    if GOAL-TEST(n) then return success
    s ← NEXT-SUCCESSOR(n)
    if s is not a goal and is at maximum depth then
      f(s) ← ∞
    else
      f(s) ← MAX(f(n), g(s) + h(s))
    if all of n's successors have been generated then
      update n's f-cost and those of its ancestors if necessary
    if SUCCESSORS(n) all in memory then remove n from Queue
    if memory is full then
      delete shallowest, highest-f-cost node in Queue
      remove it from its parent's successor list
      insert its parent on Queue if necessary
    insert s in Queue
  end

Simple Memory-Bounded A* (SMA*). SMA* is a shortest-path algorithm based on A*. Its advantage is that it uses bounded memory, while A* might need exponential memory; all other characteristics are inherited from A*. How it works: like A*, it expands the best leaf until memory is full; it then drops the worst leaf node (the one with the highest f-value); like RBFS, SMA* backs up the value of the forgotten node to its parent.

Simple Memory-Bounded A* (SMA*): example with 3-node memory. (The figure shows the search space, with f = g + h for each node, and the progress of SMA*. Each node is labeled with its current f-cost; values in parentheses show the value of the best forgotten descendant. A value of ∞ is given to a node whose path already uses all available memory; this lets SMA* tell whether the best solution found within the memory constraint is optimal or not.)

The algorithm proceeds as follows [3] (stepped through in the lecture figures).

SMA* Properties [2]. It works with a heuristic, just as A* does. It is complete if the allowed memory is high enough to store the shallowest solution. It is optimal if the allowed memory is high enough to store the shallowest optimal solution; otherwise it returns the best solution that fits in the allowed memory. It avoids repeated states as long as the memory bound allows it, and it will use all memory available. Enlarging the memory bound only speeds up the calculation; when enough memory is available to contain the entire search tree, the calculation proceeds at optimal speed.

Admissible Heuristics. How can you invent a good admissible heuristic function? E.g., for the 8-puzzle.

Admissible heuristics for the 8-puzzle: h1(n) = number of misplaced tiles; h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location). For the example state S: h1(S) = ? h2(S) = ?

Admissible heuristics for the 8-puzzle (solution): h1(S) = 8; h2(S) = 3+1+2+2+2+3+3+2 = 18.
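Both heuristics are easy to compute; a sketch, assuming states are 9-tuples in row-major order with 0 for the blank and the goal layout shown below (an illustrative choice):

```python
# The two classic 8-puzzle heuristics from the slides.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # assumed goal layout

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total
```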

Dominance. If h2(n) ≥ h1(n) for all n, and both are admissible, then h2 dominates h1. h2 is better for search: it is guaranteed to expand no more nodes than h1. Typical search costs (average number of nodes expanded):

d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

What if we have h1, ..., hm, but none dominates the others? Use h(n) = max{h1(n), ..., hm(n)}, as in the sketch below.
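The composite heuristic in the last line is a one-liner; a sketch reusing h1 and h2 from the 8-puzzle example above:

```python
# Pointwise maximum of several admissible heuristics: still admissible,
# and it dominates each component heuristic.
def h_max(state, heuristics=(h1, h2)):
    return max(h(state) for h in heuristics)
```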

Relaxed Problems. A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution. If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution.

Admissible Heuristics. How can you invent a good admissible heuristic function? Try to relax the problem into one whose optimal solution can be found easily, or learn from experience. Can machines invent an admissible heuristic automatically?

Local Search Algorithms

Local Search Algorithms. In many optimization problems the path to the goal is irrelevant; the goal state itself is the solution. The state space is then a set of "complete" configurations, and the task is to find a configuration satisfying the constraints, e.g., n-queens. In such cases we can use local search algorithms: keep a single "current" state and try to improve it according to an objective function. Advantages: 1. uses little memory; 2. finds reasonable solutions in large or infinite spaces.

Local Search Algorithms. Local search can be used on problems formulated as finding a solution that maximizes a criterion among a number of candidate solutions. Local search algorithms move from solution to solution in the space of candidate solutions (the search space) until a solution deemed optimal is found or a time bound has elapsed. For example, in the travelling salesman problem a candidate solution is a cycle containing all nodes of the graph, and the target is to minimize the total length of the cycle; the criterion combines covering all nodes with keeping the cycle short. A local search algorithm starts from a candidate solution and then iteratively moves to a neighbor solution.

Local Search Algorithms. If every candidate solution has more than one neighbor solution, the choice of which one to move to is made using only information about the solutions in the neighborhood of the current one, hence the name local search. The search terminates on a time bound, or when the situation has not improved after a number of steps. Local search algorithms are typically incomplete: the search may stop even if the best solution found so far is not optimal.

Hill-Climbing Search. Hill climbing continually moves in the direction of increasing value (i.e., uphill) and terminates when it reaches a peak, where no neighbor has a higher value. It records only the current state and its objective function value, and does not look ahead beyond the immediate neighbors; it is sometimes called greedy local search. Problem: it can get stuck in local maxima. Its success depends very much on the shape of the state-space landscape: if there are few local maxima, random-restart hill climbing will find a good solution very quickly.
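A minimal hill-climbing sketch (steepest ascent, assuming user-supplied neighbors() and value() functions to maximize):

```python
# Steepest-ascent hill climbing: move to the best neighbor until no
# neighbor improves on the current state.
def hill_climbing(state, neighbors, value):
    current = state
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current  # peak reached (possibly only a local maximum)
        current = best
```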

Example: n-queens. Put n queens on an n × n board with no two queens on the same row, column, or diagonal. Move a queen to reduce the number of conflicts.

Hill-climbing search: 8-queens problem. In the figure, each number indicates the value of h obtained by moving the queen in that column to the corresponding square. Here h = number of pairs of queens that are attacking each other, either directly or indirectly (h = 17 for the state shown).
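A sketch of this h for n-queens, assuming a state is a tuple where state[col] gives the row of the queen in that column (one queen per column):

```python
# h for n-queens: the number of pairs of queens attacking each other
# (same row, or same diagonal; columns are distinct by construction).
from itertools import combinations

def attacking_pairs(state):
    pairs = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            pairs += 1
    return pairs
```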

Hill-climbing search: 8-queens problem. The figure shows a local minimum with h = 1.

Simulated Annealing Search (based on [1]). To avoid getting stuck in a local maximum, simulated annealing tries randomly (using a probability function) to move to another state; if the new state is better it moves to it, otherwise it tries another move, and so on. In other words, at each step the SA heuristic considers some neighbour s' of the current state s and probabilistically decides between moving the system to s' or staying in s. It terminates when it finds an acceptably good solution within a fixed amount of time, rather than the best possible solution. The aim is to locate a good approximation to the global optimum of a given function in a large search space. Widely used in VLSI layout, airline scheduling, etc.
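A hedged simulated-annealing sketch for maximization; the exponential cooling schedule and its parameters are illustrative choices, not from the slides:

```python
# Simulated annealing: always accept improving moves; accept worsening
# moves with probability exp(delta / temperature), which shrinks as
# the temperature cools.
import math
import random

def simulated_annealing(state, random_neighbor, value,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    current, temp = state, t0
    while temp > t_min:
        nxt = random_neighbor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = nxt
        temp *= cooling
    return current
```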

Properties of Simulated Annealing Search. The problem with this approach is that the neighbors of a state are not guaranteed to contain any of the existing better solutions, which means that failure to find a better solution among them does not guarantee that no better solution exists. Simulated annealing does not get stuck in a local optimum: if it runs for an infinite amount of time, the global optimum will be found.

Genetic Algorithms. Inspired by evolutionary biology (e.g., inheritance), a genetic algorithm evolves toward better solutions. A successor state is generated by combining two parent states. Start with k randomly generated states (the population).

Genetic Algorithms. A state is represented as a string over a finite alphabet (often a string of 0s and 1s). An evaluation function (fitness function) assigns higher values to better states. The next generation of states is produced by selection, crossover, and mutation. Commonly, the algorithm terminates either when a maximum number of generations has been produced or when a satisfactory fitness level has been reached in the population.

Genetic Algorithms: 8-queens example. Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7 / 2 = 28). A state's fitness determines its probability of being selected for the next generation, e.g., 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.

Genetic Algorithms: crossover example. Splitting two parents at the same crossover point and swapping their tails: parents [2,6,9,3,5 | 0,4,1,7,8] and [3,6,9,7,3 | 8,0,4,7,1] produce children [2,6,9,3,5,8,0,4,7,1] and [3,6,9,7,3,0,4,1,7,8].
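A sketch of the single-point crossover shown above, with an optional mutation step (the mutation rate is an illustrative choice):

```python
# Single-point crossover and random mutation over list-encoded states.
import random

def crossover(parent1, parent2, point=None):
    if point is None:
        point = random.randrange(1, len(parent1))
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

def mutate(state, alphabet, rate=0.1):
    return [random.choice(alphabet) if random.random() < rate else g
            for g in state]

p1 = [2, 6, 9, 3, 5, 0, 4, 1, 7, 8]
p2 = [3, 6, 9, 7, 3, 8, 0, 4, 7, 1]
print(crossover(p1, p2, point=5))
# ([2, 6, 9, 3, 5, 8, 0, 4, 7, 1], [3, 6, 9, 7, 3, 0, 4, 1, 7, 8])
```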

References
[1] S. Russell and P. Norvig: Artificial Intelligence: A Modern Approach. Prentice Hall, 2003, Second Edition.
[2] http://en.wikipedia.org/wiki/sma*
[3] Moonis Ali: Lecture Notes on Artificial Intelligence. http://cs.txstate.edu/~ma04/files/cs5346/sma%20search.pdf
[4] Max Welling: Lecture Notes on Artificial Intelligence. https://www.ics.uci.edu/~welling/teaching/ics175winter12/a-starsearch.pdf
[5] Kathleen McKeown: Lecture Notes on Artificial Intelligence. http://www.cs.columbia.edu/~kathy/cs4701/documents/informedsearch-ar-print.ppt
[6] Franz Kurfess: Lecture Notes on Artificial Intelligence. http://users.csc.calpoly.edu/~fkurfess/courses/artificial-intelligence/f09/slides/3-Search.ppt