Heuristic Search: Intro
- Simon Johns
Heuristic Search: Intro

- Blind search can be applied to all problems, but is inefficient, as it does not incorporate knowledge about the problem to guide the search
- Such knowledge can be used when deciding which node to expand next
- Heuristic search (aka informed or best-first search) uses problem knowledge to select the state that appears to be closest to the goal state as the next to expand
  - This should produce a solution faster than systematically expanding all nodes until a goal is stumbled upon
- The general state evaluation function: f(n) = h(n) + g(n)
  - f(n) is the estimated cost of getting from the start state to the goal state via state n
  - g(n) is the actual cost of getting from the start state to n found so far
  - h(n) is the heuristic function: the estimated cost of going from n to the goal state
    - This function is where knowledge about the problem comes into play
    - h(goal) = 0
- f*(n) = h*(n) + g*(n) represents the minimal cost of getting from the start state to the goal state via state n when all paths through n are considered
  - h*(n) is the actual cost to the goal from n
  - g*(n) is the cost of the best path to n from the initial state
- There are categories of heuristic search, based on variations in f(n)
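The evaluation function above can be sketched in a few lines of Python: a node carries g(n) and its priority f(n) = g(n) + h(n), and a priority queue pops the node with the smallest f first. The states and heuristic values here are purely illustrative.

```python
import heapq
from dataclasses import dataclass, field

# A minimal sketch of a search node carrying the quantities defined above:
# g is the actual cost so far, h(state) the heuristic estimate, and the
# priority f = g + h orders the frontier.
@dataclass(order=True)
class Node:
    f: float                              # f(n) = g(n) + h(n), the sort key
    state: str = field(compare=False)
    g: float = field(compare=False)

def make_node(state, g, h):
    return Node(f=g + h[state], state=state, g=g)

h = {'S': 7, 'A': 6, 'G': 0}              # toy heuristic; note h(goal) = 0
frontier = []
heapq.heappush(frontier, make_node('A', 2, h))   # f = 2 + 6 = 8
heapq.heappush(frontier, make_node('S', 0, h))   # f = 0 + 7 = 7
best = heapq.heappop(frontier)            # node with smallest f expands first
print(best.state, best.f)                 # S 7
```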
Heuristic Search: Intro (2)

General algorithm (tree):

function GIS-TREE (problem, evalfn) returns solution, or failure {
    node <- new(node)
    node.state <- problem.initial-state
    node.cost <- apply(evalfn, node)
    frontier <- new(priorityqueue)
    INSERT(node, frontier)
    loop {
        if (EMPTY?(frontier)) return failure
        node <- POP(frontier)
        if (problem.goal-test(node.state)) return SOLUTION(node)
        for (each action in problem.actions(node.state)) {
            child <- CHILD-NODE(problem, node, action)
            frontier <- INSERT(child, frontier)
        }
    }
}
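A possible Python rendering of GIS-TREE, under some assumptions: the problem is a dictionary with an initial state, a goal set, and an actions function; `evalfn(state, g)` stands in for the eval-function argument; and a counter breaks f ties so the heap never compares path lists. The toy graph and numbers are illustrative only.

```python
import heapq
from itertools import count

def gis_tree(problem, evalfn):
    tie = count()                         # tie-breaker for equal eval values
    start = problem['initial']
    frontier = [(evalfn(start, 0), next(tie), start, 0, [start])]
    while frontier:
        _, _, state, g, path = heapq.heappop(frontier)   # best node first
        if state in problem['goals']:
            return path, g
        for succ, cost in problem['actions'](state):
            child_g = g + cost
            heapq.heappush(
                frontier,
                (evalfn(succ, child_g), next(tie), succ, child_g, path + [succ]))
    return None                           # failure: frontier exhausted

GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)], 'C': [('G', 3)], 'G': []}
H = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
problem = {'initial': 'S', 'goals': {'G'}, 'actions': lambda s: GRAPH[s]}
print(gis_tree(problem, evalfn=lambda s, g: g + H[s]))   # f = g + h
```

With f = g + h this behaves as A* tree search and returns the optimal path S-A-B-C-G at cost 8; plugging in `lambda s, g: H[s]` instead gives greedy search.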
Heuristic Search: Intro (3)

General algorithm (graph):

function GIS-GRAPH (problem, evalfn) returns solution, or failure {
    node <- new(node)
    node.state <- problem.initial-state
    node.cost <- apply(evalfn, node)
    frontier <- new(priorityqueue)
    INSERT(node, frontier)
    explored <- new(set)
    loop {
        if (EMPTY?(frontier)) return failure
        node <- POP(frontier)
        if (problem.goal-test(node.state)) return SOLUTION(node)
        ADD(node.state, explored)
        for (each action in problem.actions(node.state)) {
            child <- CHILD-NODE(problem, node, action)
            if ((child.state not in explored) AND (NOT STATE-FOUND(child.state, frontier))) {
                frontier <- INSERT(child, frontier)
            } else if (STATE-FOUND(child.state, frontier)) {
                existing <- RETRIEVE-NODE(child.state, frontier)
                if (child.cost < existing.cost)
                    frontier <- REPLACE(existing, child, frontier)
            }
        }
    }
}
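A sketch of GIS-GRAPH in Python, with one common implementation shortcut: rather than a REPLACE operation on the priority queue, the cheapest known eval value per state is tracked in a dictionary, and superseded heap entries are skipped lazily when popped. Graph and numbers are the same illustrative toy problem as before.

```python
import heapq

def gis_graph(problem, evalfn):
    start = problem['initial']
    best = {start: evalfn(start, 0)}      # cheapest eval value known per state
    frontier = [(best[start], start, 0, [start])]
    explored = set()
    while frontier:
        f, state, g, path = heapq.heappop(frontier)
        if f > best.get(state, f):        # stale entry: a cheaper copy exists
            continue
        if state in problem['goals']:
            return path, g
        explored.add(state)
        for succ, cost in problem['actions'](state):
            child_g = g + cost
            child_f = evalfn(succ, child_g)
            # insert only if unexplored and cheaper than any queued copy
            if succ not in explored and child_f < best.get(succ, float('inf')):
                best[succ] = child_f
                heapq.heappush(frontier, (child_f, succ, child_g, path + [succ]))
    return None

GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)], 'C': [('G', 3)], 'G': []}
H = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
problem = {'initial': 'S', 'goals': {'G'}, 'actions': lambda s: GRAPH[s]}
print(gis_graph(problem, evalfn=lambda s, g: g + H[s]))
```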
Heuristic Search: Intro (4)

- The above is a modified version of the uniform cost algorithm
  - It accepts an additional argument: the function used to assign a value to a node
  - Function apply(function, node) evaluates function for a given node
  - PATH-COST has been replaced by a more generic cost attribute
- The attributes that need to be stored on a node are dependent on the search strategy
Heuristic Search: Greedy Search

- The greedy search algorithm selects the node that appears to be closest to the goal:
  - f(n) = h(n)
  - I.e., g(n) = 0: we ignore the cost to get to a node
- Algorithm GS:

function GS (problem) returns solution, or failure {
    return GIS-TREE (problem, h)    // OR return GIS-GRAPH (problem, h)
}

- Characteristics:
  - Attempts to get to the goal as quickly as possible
  - May reach dead ends
  - May expand nodes unnecessarily
  - Not complete for the tree representation (due to reversible actions)
  - Complete for the graph representation of a finite state space
  - Not optimal
- Complexity:
  - Storage requirements: O(b^d), where d is the maximum depth of the structure
  - Time requirements: O(b^d)
- Note that the quality of the heuristic greatly affects the search
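Greedy search is the general informed search with f(n) = h(n). On the illustrative toy graph below (hypothetical numbers), it races toward the goal and returns a path of cost 9, even though the optimal path S-A-B-C-G costs 8, demonstrating the non-optimality noted above.

```python
import heapq

GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)], 'C': [('G', 3)], 'G': []}
H = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}

def greedy(start, goal):
    frontier = [(H[start], start, 0, [start])]   # ordered by h(n) alone
    explored = set()
    while frontier:
        _, state, g, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for succ, cost in GRAPH[state]:
            if succ not in explored:
                heapq.heappush(frontier, (H[succ], succ, g + cost, path + [succ]))
    return None

print(greedy('S', 'G'))   # (['S', 'B', 'C', 'G'], 9) -- misses the optimal cost 8
```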
Heuristic Search: A*

- Combines aspects of uniform cost and greedy searches
  - UCS is optimal and complete, but inefficient
    - UCS is based on g(n)
  - GS is neither optimal nor complete, but efficient
    - GS is based on h(n)
- For A*, f(n) = h(n) + g(n)
- Algorithm A* (for trees):

function ASTAR-TREE (problem) returns solution, or failure {
    return GIS-TREE (problem, h + g)
}

- A* is optimal and complete, providing that:
  1. For trees, h(n) is admissible
     - Admissible means that h(n) <= h*(n)
     - I.e., h(n) never over-estimates the cost from n to the goal
     - Such an algorithm is optimistic
  2. For graphs, h(n) is consistent (monotonic)
     - h(n) is consistent if, for every node n and successor node n' generated by action a, the estimated cost of getting from n to the goal is never greater than the actual cost of getting from n to the successor state n' plus the estimated cost of getting from n' to the goal
     - I.e., h(n) <= c(n, a, n') + h(n') for all n
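Both conditions are easy to check mechanically on a small graph. In this sketch the h* values are the true optimal costs to the goal, worked out by hand for the illustrative toy graph; all names and numbers are assumptions for the example.

```python
GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)], 'C': [('G', 3)], 'G': []}
H = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
HSTAR = {'S': 8, 'A': 7, 'B': 5, 'C': 3, 'G': 0}   # actual costs to G

def admissible(h, hstar):
    # h(n) <= h*(n) for every state: h never over-estimates
    return all(h[n] <= hstar[n] for n in h)

def consistent(h, graph):
    # h(n) <= c(n, a, n') + h(n') for every edge
    return all(h[n] <= c + h[m] for n, edges in graph.items() for m, c in edges)

print(admissible(H, HSTAR), consistent(H, GRAPH))           # True True

H_BAD = dict(H, C=4)       # over-estimates h*(C) = 3
print(admissible(H_BAD, HSTAR), consistent(H_BAD, GRAPH))   # False False
```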
Heuristic Search: A* (2)

- This is reflected in the following diagram: [diagram not reproduced: nodes n, n', and goal g forming a triangle with edges c(n, a, n'), h(n'), and h(n)]
- This is referred to as the triangle inequality
- Relevance: the estimated cost to the goal cannot decrease by more than c_ij when we move from n_i to n_j
- Consistency is a stronger requirement than admissibility
- Consider h(n_i) = 4, c(n_i, a, n_j) = 1, h(n_j) = 2
  - The above represents a nonmonotonic function
  - Since h(n_i) underestimates the cost of getting from n_i to g, that cost must be at least 4
  - Since c(n_i, a, n_j) = 1, the cost from n_j to g must be at least 3
  - h(n_j) = 2 is unrealistic (i.e., h should not drop by more than the step cost as you move toward the goal)
Heuristic Search: A* Graphs

- As with blind search, using graphs instead of trees will result in extra effort
- Handling already-visited nodes is more complex in A*
- When we encounter a node that appears on the frontier or on closed, we may have found an alternate path to the node
  - The new path may be cheaper than the original
  - If the new path is not cheaper, just ignore the newly found path
  - If it is cheaper, we must adjust f for the node, and adjust the path pointers between it and its parent
  - If the node is on closed, we will also have to cascade the updated path cost through its descendants
Heuristic Search: A* Graphs (2)

Algorithm:

function ASTAR-GRAPH (problem) returns solution, or failure {
    node <- new(node)
    node.state <- problem.initial-state
    node.cost <- apply(g + h, node)
    frontier <- new(priorityqueue)
    INSERT(node, frontier)
    explored-nodes <- new(set)
    loop {
        if (EMPTY?(frontier)) return failure
        node <- POP(frontier)
        if (problem.goal-test(node.state)) return SOLUTION(node)
        INSERT(node, explored-nodes)
        for (each action in problem.actions(node.state)) {
            child <- CHILD-NODE(problem, node, action)
            // If state has not been generated yet, add it to frontier
            if ((NOT STATE-FOUND(child.state, explored-nodes)) AND (NOT STATE-FOUND(child.state, frontier))) {
                frontier <- INSERT(child, frontier)
            // Replace existing node in frontier with new, cheaper node
            } else if (STATE-FOUND(child.state, frontier)) {
                existing <- RETRIEVE-NODE(child.state, frontier)
                if (child.cost < existing.cost)
                    frontier <- REPLACE(existing, child, frontier)
            // Must propagate cheaper cost to children
            } else if (STATE-FOUND(child.state, explored-nodes)) {
                existing <- RETRIEVE-NODE(child.state, explored-nodes)
                if (child.cost < existing.cost) {
                    explored-nodes <- REPLACE(existing, child, explored-nodes)
                    UPDATE-PATH(child)
                }
            }
        }
    }
}
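A Python sketch of A* graph search. Instead of UPDATE-PATH cascading a cheaper cost through descendants, this common variant simply re-opens a state whenever a cheaper path to it is found; the end result is the same, and with a consistent h no re-opening ever happens. The toy graph and numbers are illustrative.

```python
import heapq

def astar(start, goals, actions, h):
    g_best = {start: 0}                   # cheapest g known per state
    frontier = [(h(start), start, [start])]
    while frontier:
        f, state, path = heapq.heappop(frontier)
        g = g_best[state]
        if f > g + h(state):              # superseded entry: skip it
            continue
        if state in goals:
            return path, g
        for succ, cost in actions(state):
            new_g = g + cost
            if new_g < g_best.get(succ, float('inf')):   # first or cheaper path
                g_best[succ] = new_g                     # re-opens succ if needed
                heapq.heappush(frontier, (new_g + h(succ), succ, path + [succ]))
    return None

GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)], 'C': [('G', 3)], 'G': []}
H = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
print(astar('S', {'G'}, lambda s: GRAPH[s], lambda s: H[s]))
```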
Heuristic Search: A* Graphs (3)

Note that this algorithm is essentially the same as GIS-GRAPH, except for two changes:

1. The last if-else:
   - else if (STATE-FOUND(child.state, explored-nodes)) ... UPDATE-PATH(child)
   - If child's state appears in explored-nodes, this means that it has been expanded, and we may have generated any number of descendants of this state
   - If we have found a cheaper path to child, then the costs associated with those descendants will need to be adjusted, since the paths to the descendants go through child
   - Function UPDATE-PATH will identify those states reached via child
     - These nodes may be in frontier or explored-nodes
     - Their g values will be updated (to a lesser value), which will decrease their f values
2. explored-nodes
   - The text's formulation maintains states - not nodes - on the explored list
   - Since A* graph search requires us to be able to adjust costs and traverse paths to explored nodes, we must retain nodes - not just state information - on the explored list
   - The name explored-nodes emphasizes this change from the graph search algorithms presented earlier
Heuristic Search: A* Optimality

This examines the optimality of A* graph search.

1. If h(n) is consistent, then the values of f(n) are nondecreasing along any path
   - Given g(n') = g(n) + c(n, a, n')
   - Then f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') >= g(n) + h(n) = f(n)
2. At every step prior to termination of A*, there is always a node n' on frontier with the following properties:
   - n' is on an optimal path to a goal
   - A* has found an optimal path to n'
   - f(n') <= f*(n_0)
3. Proof (by induction):
   (a) Base case: At the start, only n_0 is on frontier
       - n_0 is on an optimal path to the goal
       - f(n_0) = h(n_0) <= f*(n_0)
   (b) Inductive assumption: Assume m >= 0 nodes have been expanded and the above holds at each step
   (c) Proof: Consider expansion of the (m+1)st node from frontier
       - Call this node n
       - Let n' be on an optimal path
       - Then either:
         i. n' is not selected as the (m+1)st node
            - Regardless, it still has the properties as noted
Heuristic Search: A* Optimality (2)

         ii. n' is selected
             - In this case, let n_p be a successor of n' on an optimal path
             - This node is on frontier
             - The path to n_p must be optimal; otherwise, there would be a better path to the goal
             - n_p becomes the new n' for the next iteration of the algorithm
   (d) Proof that f(n') <= f*(n_0):
       - Let n' be on an optimal path and assume A* has found an optimal path from n_0 to n'
       - Then:
         f(n') = g(n') + h(n')
               = g*(n') + h(n')      (since A* has found an optimal path to n', g(n') = g*(n'))
               <= g*(n') + h*(n')    (since h(n') <= h*(n'))
               = f*(n')
               = f*(n_0)             (since n' is on an optimal path, f*(n') = f*(n_0))
Heuristic Search: A* Optimality (3)

4. Given the conditions specified for A* and h, and providing there is a path with finite cost from n_0 to a goal, A* is guaranteed to terminate with a minimal cost path to a goal
5. Proof (by contradiction): A* terminates if there is an accessible goal
   - Assume A* does not terminate
   - Since every step cost is at least some epsilon > 0, a point would be reached where f(n) > f*(n_0) for every n on frontier
   - But by the previous lemma, some frontier node n' on an optimal path always has f(n') <= f*(n_0)
   - This contradicts the assumption
6. Proof (by contradiction): Termination of A* with an optimal solution
   (a) A* terminates when frontier is empty
       - This contradicts the assumption that there is an accessible goal
   (b) Or when a goal node is identified
       - Suppose there is an optimal goal g_1 with f*(g_1) = f*(n_0), and A* finds a non-optimal goal g_2 with f(g_2) > f*(n_0), where g_1 != g_2
       - But prior to the selection of g_2, there was a node n' on frontier on an optimal path with f(n') <= f*(n_0), by the previous lemma, so n' would have been selected before g_2
       - This contradicts the assumptions
- Because f(n) is nondecreasing, contours can be drawn in the state space
  - All nodes within a given contour f_i have an f value less than those on the contour, and f_i < f_j for i < j
Heuristic Search: A* Optimality (4)

- As the search progresses, the contours narrow and stretch toward the goal along the cheapest path
- The more accurate h(n), the more focused the contours become
- Note that UCS generates circular contours centered on the initial state
- A* is optimally efficient for any given h function: no other algorithm that extends search paths from the root is guaranteed to expand fewer nodes than A*
Heuristic Search: A* Complexity

- The number of states within the goal contour is exponential with respect to the solution length
- For problems with constant step costs, analysis is based on:
  1. Absolute error: Delta = h* - h
  2. Relative error: epsilon = (h* - h)/h*
- Complexity analysis depends on the characteristics of the state space
- When there is a single goal and reversible actions:
  - Time complexity is O(b^Delta) = O(b^(epsilon d)), where d is the solution depth
  - For heuristics of practical use, the absolute error is at least proportional to the path cost, so epsilon is constant or growing and time complexity is exponential in d
  - O(b^(epsilon d)) = O((b^epsilon)^d), which means the effective branching factor is b^epsilon
- If there are many goal states (especially near-optimal goal states), the path may veer from the optimal path
  - This will result in additional cost proportional to the number of goal states within a factor of epsilon of the optimal cost
- With a graph, there can be exponentially many states with f(n) < C*
- The search usually runs out of space before time becomes an issue
Heuristic Search: Variations to A*

- The main issue with A* is memory usage
- As noted above, the algorithm usually uses up available memory before time becomes an issue
- The following algorithms attempt to limit memory usage while retaining the properties of A*
Heuristic Search: Variations to A* - Iterative Deepening A* Search (IDA*)

- This is the A* version of the depth-first iterative deepening algorithm
- On each iteration, perform a depth-first search
- Instead of using a depth limit, use a bound on f
- Algorithm IDA*:

function IDASTAR (problem) returns solution, or failure {
    root <- new(node)
    root.state <- problem.initial-state
    root.fcost <- problem.get-initial-state-h()
    f-limit <- root.fcost
    while (TRUE) {
        [result, f-limit] <- DFS-CONTOUR(root, problem, f-limit)
        if (result != NULL) return result
        else if (f-limit == infinity) return failure
    }
}

function DFS-CONTOUR (node, problem, f-limit) returns [solution, f-limit] {
    nextf <- infinity
    if (node.fcost > f-limit) return [NULL, node.fcost]
    if (problem.goal-test(node.state)) return [node, f-limit]
    for (each action in problem.actions(node.state)) {
        child <- CHILD-NODE(problem, node, action)
        [solution, newf] <- DFS-CONTOUR(child, problem, f-limit)
        if (solution != NULL) return [solution, f-limit]
        nextf <- min(nextf, newf)
    }
    return [NULL, nextf]
}
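A compact Python sketch following the pseudocode above: each iteration is a depth-first search bounded by the current f-limit, and the next limit becomes the smallest f value that exceeded the current one. The path-membership test for cycle avoidance and the toy graph are illustrative assumptions.

```python
def ida_star(start, goals, actions, h):
    def dfs_contour(state, g, path, f_limit):
        f = g + h(state)
        if f > f_limit:
            return None, f                   # report how far we overshot
        if state in goals:
            return path, f_limit
        next_f = float('inf')
        for succ, cost in actions(state):
            if succ in path:                 # avoid trivial cycles on graphs
                continue
            sol, new_f = dfs_contour(succ, g + cost, path + [succ], f_limit)
            if sol is not None:
                return sol, f_limit
            next_f = min(next_f, new_f)      # smallest f beyond the contour
        return None, next_f

    f_limit = h(start)
    while True:
        sol, f_limit = dfs_contour(start, 0, [start], f_limit)
        if sol is not None:
            return sol
        if f_limit == float('inf'):
            return None                      # failure

GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)], 'C': [('G', 3)], 'G': []}
H = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
print(ida_star('S', {'G'}, lambda s: GRAPH[s], lambda s: H[s]))
```

On this graph the first iteration uses f-limit 7, fails, and reports 8 as the next limit; the second iteration finds the optimal path at cost 8.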
Heuristic Search: Variations to A* - Iterative Deepening A* Search (IDA*) (2)

- Expands nodes along contour lines of equal f
- IDA* is complete and optimal
- Complexity:
  1. Space
     - Let delta = smallest operator cost
     - Let f* = optimal solution cost
     - Worst case requires b * f*/delta nodes
  2. Time
     - Dependent on the range of values that f can assume
     - Best case: the fewer the distinct f values, the fewer the contours, so fewer iterations
       - Thus, IDA* approaches A*
       - Also has less overhead, as it does not require a priority queue
     - Worst case: when there are many unique f values (the absolute worst is every f value unique)
       - Requires many iterations: with n the number of nodes A* expands, this gives O(n^2)
     - To reduce time complexity, increase the f limit by epsilon on every iteration
       - The number of iterations is then proportional to 1/epsilon
       - While this reduces search cost, it may produce a non-optimal solution
       - Such a solution is worse than optimal by at most epsilon
       - Such algorithms are called epsilon-admissible
Heuristic Search: Variations to A* - Recursive Best-First Search (RBFS)

- This uses more memory than IDA* but generates fewer nodes
- Uses the same approach as A*, with the following differences:
  - Assigns a backed-up value to each node
    - Let n be a node with successors m_i
    - Backed-up value of n: b(n) = min(b(m_i))
    - The backed-up value of a leaf node (one on frontier) is f(n)
    - The backed-up value of a node represents the descendant with the lowest f value in the tree rooted at that node
  - Only the most promising path to the goal is maintained at any time
- Algorithm (description):
  - When a node is expanded, f for all successors is computed
  - If one of these (m) has f less than the b value of every other node on frontier:
    - Back up the values of all ancestors of m based on this value
    - Continue from m
  - Otherwise:
    - Let n' be the node on frontier with b(n') < f(m_i)
    - Back up the ancestors of n based on the lowest successor f
    - Find the common ancestor of n' and n; call this node k
    - Let k_n be the child of k that is the root of the subtree containing n
    - Delete everything below k_n (k_n will hold the backed-up value of this subtree)
    - n' will be the next node to expand
Heuristic Search: Variations to A* - Recursive Best-First Search (RBFS) (2)

Algorithm:

function RECURSIVE-BFS (problem) returns solution, or failure {
    [result, f] <- RBFS(problem, MAKE-NODE(problem.INITIAL-STATE), infinity)
    return result
}

function RBFS (problem, node, f-limit) returns [solution, f-value] {
    if (problem.goal-test(node.state)) return [SOLUTION(node), node.f]
    successors <- new(set)
    for (each action in problem.actions(node.state))
        successors <- INSERT(CHILD-NODE(problem, node, action), successors)
    if (EMPTY?(successors)) return [failure, infinity]
    for (each s in successors)
        s.f <- MAX(s.g + s.h, node.f)    // inherit parent's backed-up value
    loop {
        best <- FIND-MIN-NODE(successors)    // node with smallest f value
        if (best.f > f-limit) return [failure, best.f]
        alternative <- FIND-MIN-2-NODE-VALUE(successors, best.f)    // second-smallest f value
        [result, best.f] <- RBFS(problem, best, MIN(f-limit, alternative))
        if (result != failure) return [result, best.f]
    }
}
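A Python sketch mirroring the pseudocode: each call keeps only the current node's successors in memory, and returns a backed-up f value so the parent can revise its estimate for an abandoned subtree. It assumes a tree-shaped (cycle-free) search space; the toy graph and numbers are illustrative.

```python
import math

def rbfs(state, g, f, f_limit, goals, actions, h):
    if state in goals:
        return [state], f
    # successors inherit the parent's backed-up value via max(...)
    succs = [[max(g + c + h(s2), f), s2, g + c] for s2, c in actions(state)]
    if not succs:
        return None, math.inf
    while True:
        succs.sort()                              # best (smallest f) first
        best = succs[0]
        if best[0] > f_limit:
            return None, best[0]                  # fail, backing up best f
        alternative = succs[1][0] if len(succs) > 1 else math.inf
        result, best[0] = rbfs(best[1], best[2], best[0],
                               min(f_limit, alternative), goals, actions, h)
        if result is not None:
            return [state] + result, best[0]

GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)], 'C': [('G', 3)], 'G': []}
H = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
path, cost = rbfs('S', 0, H['S'], math.inf, {'G'},
                  lambda s: GRAPH[s], lambda s: H[s])
print(path, cost)
```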
Heuristic Search: Simplified Memory-Bounded A* Search (SMA*)

- IDA* and RBFS have problems due to using too little memory
  - IDA* holds only the current f-cost limit between iterations
  - RBFS holds more info, using linear space
  - Both have no memory of what went before
    - May expand the same nodes multiple times
    - May experience redundant paths and the computational overhead required to deal with them
- SMA* uses all memory available
- Characteristics:
  - Avoids repeated states within memory constraints
  - Complete if memory is sufficient to store the shallowest solution path
  - Optimal if memory is sufficient to store the optimal solution path
  - Otherwise, finds the best solution possible within memory constraints
  - If the entire search tree fits in memory, is optimally efficient
- Algorithm overview:
  - A* is applied to the problem until memory runs out
  - To generate a successor when no memory is available, a node must be removed from the queue
    - Removed nodes are called forgotten nodes
    - Remove the node with the highest f-cost
  - We want to remember the cost of the best path so far through a forgotten node (in case we need to return to it later)
    - This info is retained in the root of the forgotten subtree
    - These values are called backed-up values
Heuristic Search: Simplified Memory-Bounded A* Search (SMA*) (2)

Algorithm SMA*:

function SMASTAR (problem) returns solution, or failure {
    node <- new(node)
    node.state <- problem.initial-state
    node.fcost <- problem.get-initial-state-h()
    if (problem.goal-test(node.state)) return SOLUTION(node)
    frontier <- new(queue)
    INSERT(node, frontier)
    loop {
        if (EMPTY?(frontier)) return failure
        node <- deepest, least-fcost node in frontier
        if (problem.goal-test(node.state)) return SOLUTION(node)
        child <- CHILD-NODE(problem, node, problem.next-action(node.state))
        if (NOT problem.goal-test(child.state) AND MAX-DEPTH(child))
            child.fcost <- infinity
        else
            child.fcost <- max(node.fcost, child.g + child.h)
        if (no more successors of node)
            update node's fcost and those of its ancestors to the least-cost path through node
        if (all successors of node are in memory) POP(node, frontier)
        if (FULL?(memory)) {
            delete the shallowest, highest-fcost node r in frontier
            remove r from its parent's successor list
            frontier <- INSERT(r.PARENT, frontier)    // if necessary, insert r's parent on frontier
        }
        frontier <- INSERT(child, frontier)
    }
}
Heuristic Search: Simplified Memory-Bounded A* Search (SMA*) (3)

- Issues:
  - If all leaf nodes have the same f value, selection is problematic
  - To preclude cycles of selecting and deselecting the same node:
    - Always expand the newest best leaf; always delete the oldest worst leaf
- Evaluation:
  - SMA* can handle more difficult problems than A* without the memory overhead
  - Better than IDA* on graphs
  - For very hard problems, may have significant regeneration of paths
    - This may make the solution intractable where A* with unlimited memory would find a solution
Heuristic Search: Heuristic Function Evaluation

- Branching factor
  - Let n be the number of nodes expanded by A* for a given problem
  - Let d be the solution depth
  - Let B be the effective branching factor
    - I.e., the average branching factor over the whole problem
  - Then n = sum_{i=1..d} B^i = B(B^d - 1)/(B - 1)
  - B is generally constant over a large range of instances for a given problem, and generally independent of path length
  - The ideal case is B = 1: converge directly to the goal
  - Want the heuristic with the smallest branching factor
  - Can be used to estimate the number of nodes expanded for a given B and depth
- Domination
  - h_a dominates h_b if h_a(n) >= h_b(n) for all n
  - A dominating heuristic will never expand more nodes than the dominated one
  - For admissible heuristics, the larger h(n), the more accurate it is
- Computational cost of h
  - Must consider the cost of computing h
  - Frequently, the more accurate, the more expensive
  - If the computational cost outweighs the cost of generating nodes, it may not be worthwhile
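The identity above can be inverted numerically: given the node count n and solution depth d, solve n = B + B^2 + ... + B^d for the effective branching factor B, here by bisection. The example numbers are illustrative.

```python
def effective_branching_factor(n, d, tol=1e-9):
    def total(b):
        # n(B) = B + B^2 + ... + B^d, which is increasing in B
        return sum(b ** i for i in range(1, d + 1))
    lo, hi = 1.0 + 1e-12, float(n)        # B must lie between 1 and n
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 62 nodes at solution depth 5 corresponds to a full binary tree: B = 2
print(round(effective_branching_factor(62, 5), 3))   # 2.0
```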
Heuristic Search: Designing Heuristics

There are a number of techniques that can be used to design heuristics:

1. Problem relaxation
   - A relaxed problem is one with fewer constraints
   - For example, consider the eight puzzle:
     (a) A tile can move from A to B if they are adjacent: corresponds to the Manhattan distance
     (b) A tile can move from A to B if B is empty: Gaschnig's heuristic
     (c) A tile can move from A to B: corresponds to the number of tiles out of place
   - The state space graph of a relaxed problem is a supergraph of the original
     - It will have more edges than the original
   - Any optimal solution in the original space will be a solution in the relaxed space
     - Since the relaxed space has additional edges, some of its solutions may be better
   - The cost of the optimal solution to a relaxed problem is an admissible heuristic in the original
   - The derived heuristic must obey the triangle inequality, and so is consistent
   - Good heuristics often represent exact costs to a relaxed problem
2. Composite functions
   - If we have several heuristic functions, and none is dominant, use h(n) = max(h_1(n), h_2(n), ..., h_m(n))
   - If each h_i is admissible, so is h(n)
   - h(n) dominates each individual h_i
3. Statistical info
   - By generating random problem instances, we can gather data about real vs. estimated costs
   - If we find that when h(n) = x the true cost is y z% of the time, use y in those cases
   - Admissibility is lost
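Relaxations (a) and (c) from item 1, plus the composite from item 2, can be sketched for the eight puzzle as follows. States are 9-tuples read row by row, with 0 marking the blank; the example state is illustrative.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced(state):
    # relaxation (c): a tile can move anywhere -> tiles out of place
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def manhattan(state):
    # relaxation (a): a tile can move to any adjacent square
    dist = 0
    for i, t in enumerate(state):
        if t != 0:
            gi = GOAL.index(t)
            dist += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return dist

def composite(state):
    # max of admissible heuristics: still admissible, dominates both
    return max(misplaced(state), manhattan(state))

s = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 shifted one square left
print(misplaced(s), manhattan(s), composite(s))   # 2 2 2
```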
Heuristic Search: Designing Heuristics (2)

4. Pattern databases
   - Find the cost of solving a subproblem of the original
     - This cost is a lower bound on the cost of the full problem
   - A pattern database stores exact solutions for every instance of the subproblem
   - When solving the original, look up the solutions of the corresponding matching subproblems in the DB
   - This is an admissible heuristic
   - Take the maximum over all possible matches for a given configuration
5. Features
   - Identify those features that (should) contribute to h
   - Base the heuristic on them
   - The agent learns which features are valuable over time
6. Use weighted functions
   - Let f^(n) = g^(n) + w * h^(n), with w > 0
   - As w decreases, f^ approaches optimal cost search
   - As w increases, f^ approaches greedy search
   - Experimental evidence suggests varying w inversely with tree depth
Heuristic Search: Learning Heuristics

- Can an agent learn a better search strategy?
- To do so, we need a meta-level state space
  - This space represents the internal state of the agent program during the search process
  - The actual problem state space is the object-level state space
- A meta-level learning algorithm monitors the steps of the search process at the meta-level and compares them with properties at the object level to identify which steps are not worthwhile
More informationBest-First Search! Minimizing Space or Time!! RBFS! Save space, take more time!
Best-First Search! Minimizing Space or Time!! RBFS! Save space, take more time! RBFS-1 RBFS general properties! Similar to A* algorithm developed for heuristic search! RBFS-2 RBFS general properties 2!
More informationContents. Foundations of Artificial Intelligence. General Algorithm. Best-First Search
Contents Foundations of Artificial Intelligence 4. Informed Search Methods Heuristics, Local Search Methods, Genetic Algorithms Wolfram Burgard, Bernhard Nebel, and Martin Riedmiller Albert-Ludwigs-Universität
More informationGraphs vs trees up front; use grid too; discuss for BFS, DFS, IDS, UCS Cut back on A* optimality detail; a bit more on importance of heuristics,
Graphs vs trees up front; use grid too; discuss for BFS, DFS, IDS, UCS Cut back on A* optimality detail; a bit more on importance of heuristics, performance data Pattern DBs? General Tree Search function
More informationInformed search algorithms
Artificial Intelligence Topic 4 Informed search algorithms Best-first search Greedy search A search Admissible heuristics Memory-bounded search IDA SMA Reading: Russell and Norvig, Chapter 4, Sections
More informationChapter 3: Informed Search and Exploration. Dr. Daisy Tang
Chapter 3: Informed Search and Exploration Dr. Daisy Tang Informed Search Definition: Use problem-specific knowledge beyond the definition of the problem itself Can find solutions more efficiently Best-first
More informationInformed Search and Exploration for Agents
Informed Search and Exploration for Agents R&N: 3.5, 3.6 Michael Rovatsos University of Edinburgh 29 th January 2015 Outline Best-first search Greedy best-first search A * search Heuristics Admissibility
More informationSolving problems by searching
Solving problems by searching CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2014 Soleymani Artificial Intelligence: A Modern Approach, Chapter 3 Outline Problem-solving
More informationInformed search algorithms
Informed search algorithms This lecture topic Chapter 3.5-3.7 Next lecture topic Chapter 4.1-4.2 (Please read lecture topic material before and after each lecture on that topic) Outline Review limitations
More informationInformed search algorithms
Informed search algorithms This lecture topic Chapter 3.5-3.7 Next lecture topic Chapter 4.1-4.2 (Please read lecture topic material before and after each lecture on that topic) Outline Review limitations
More informationLecture 2: Fun with Search. Rachel Greenstadt CS 510, October 5, 2017
Lecture 2: Fun with Search Rachel Greenstadt CS 510, October 5, 2017 Reminder! Project pre-proposals due tonight Overview Uninformed search BFS, DFS, Uniform-Cost, Graph-Search Informed search Heuristics,
More informationEfficient memory-bounded search methods
Efficient memory-bounded search methods Mikhail Simin Arjang Fahim CSCE 580: Artificial Intelligence Fall 2011 Dr. Marco Voltorta Outline of The Presentation Motivations and Objectives Background - BFS
More informationDr. Mustafa Jarrar. Chapter 4 Informed Searching. Sina Institute, University of Birzeit
Lecture Notes, Advanced Artificial Intelligence (SCOM7341) Sina Institute, University of Birzeit 2 nd Semester, 2012 Advanced Artificial Intelligence (SCOM7341) Chapter 4 Informed Searching Dr. Mustafa
More informationRoute planning / Search Movement Group behavior Decision making
Game AI Where is the AI Route planning / Search Movement Group behavior Decision making General Search Algorithm Design Keep a pair of set of states: One, the set of states to explore, called the open
More informationState Spaces
Unit-2: CONTENT: Introduction to Search: Searching for solutions, Uninformed search strategies, Informed search strategies, Local search algorithms and optimistic problems, Adversarial Search, Search for
More informationS A E RC R H C I H NG N G IN N S T S A T T A E E G R G A R PH P S
LECTURE 2 SEARCHING IN STATE GRAPHS Introduction Idea: Problem Solving as Search Basic formalism as State-Space Graph Graph explored by Tree Search Different algorithms to explore the graph Slides mainly
More informationProblem Solving and Search
Artificial Intelligence Problem Solving and Search Dae-Won Kim School of Computer Science & Engineering Chung-Ang University Outline Problem-solving agents Problem types Problem formulation Example problems
More informationInformed State Space Search B4B36ZUI, LS 2018
Informed State Space Search B4B36ZUI, LS 2018 Branislav Bošanský, Martin Schaefer, David Fiedler, Jaromír Janisch {name.surname}@agents.fel.cvut.cz Artificial Intelligence Center, Czech Technical University
More informationInformed/Heuristic Search
Informed/Heuristic Search Outline Limitations of uninformed search methods Informed (or heuristic) search uses problem-specific heuristics to improve efficiency Best-first A* Techniques for generating
More informationHeuristic Search. Heuristic Search. Heuristic Search. CSE 3401: Intro to AI & LP Informed Search
CSE 3401: Intro to AI & LP Informed Search Heuristic Search. Required Readings: Chapter 3, Sections 5 and 6, and Chapter 4, Section 1. In uninformed search, we don t try to evaluate which of the nodes
More informationSolving Problems by Searching
INF5390 Kunstig intelligens Solving Problems by Searching Roar Fjellheim Outline Problem-solving agents Example problems Search programs Uninformed search Informed search Summary AIMA Chapter 3: Solving
More informationDr. Mustafa Jarrar. Chapter 4 Informed Searching. Artificial Intelligence. Sina Institute, University of Birzeit
Lecture Notes on Informed Searching University of Birzeit, Palestine 1 st Semester, 2014 Artificial Intelligence Chapter 4 Informed Searching Dr. Mustafa Jarrar Sina Institute, University of Birzeit mjarrar@birzeit.edu
More informationCS:4420 Artificial Intelligence
CS:4420 Artificial Intelligence Spring 2018 Informed Search Cesare Tinelli The University of Iowa Copyright 2004 18, Cesare Tinelli and Stuart Russell a a These notes were originally developed by Stuart
More informationSolving Problems by Searching
INF5390 Kunstig intelligens Sony Vaio VPC-Z12 Solving Problems by Searching Roar Fjellheim Outline Problem-solving agents Example problems Search programs Uninformed search Informed search Summary AIMA
More informationAnnouncements. CS 188: Artificial Intelligence
Announcements Projects: Looking for project partners? --- Come to front after lecture. Try pair programming, not divide-and-conquer Account forms available up front during break and after lecture Assignments
More informationHeuris'c Search. Reading note: Chapter 4 covers heuristic search.
Heuris'c Search Reading note: Chapter 4 covers heuristic search. Credits: Slides in this deck are drawn from or inspired by a multitude of sources including: Shaul Markovitch Jurgen Strum Sheila McIlraith
More informationUninformed Search Methods
Uninformed Search Methods Search Algorithms Uninformed Blind search Breadth-first uniform first depth-first Iterative deepening depth-first Bidirectional Branch and Bound Informed Heuristic search Greedy
More informationHW#1 due today. HW#2 due Monday, 9/09/13, in class Continue reading Chapter 3
9-04-2013 Uninformed (blind) search algorithms Breadth-First Search (BFS) Uniform-Cost Search Depth-First Search (DFS) Depth-Limited Search Iterative Deepening Best-First Search HW#1 due today HW#2 due
More informationArtificial Intelligence
Artificial Intelligence CSC348 Unit 3: Problem Solving and Search Syedur Rahman Lecturer, CSE Department North South University syedur.rahman@wolfson.oxon.org Artificial Intelligence: Lecture Notes The
More informationSet 3: Informed Heuristic Search. ICS 271 Fall 2017 Kalev Kask
Set 3: Informed Heuristic Search ICS 271 Fall 2017 Kalev Kask Basic search scheme We have 3 kinds of states explored (past) only graph search frontier (current) unexplored (future) implicitly given Initially
More informationSolving Problems by Searching. Artificial Intelligence Santa Clara University 2016
Solving Problems by Searching Artificial Intelligence Santa Clara University 2016 Problem Solving Agents Problem Solving Agents Use atomic representation of states Planning agents Use factored or structured
More informationARTIFICIAL INTELLIGENCE. Pathfinding and search
INFOB2KI 2017-2018 Utrecht University The Netherlands ARTIFICIAL INTELLIGENCE Pathfinding and search Lecturer: Silja Renooij These slides are part of the INFOB2KI Course Notes available from www.cs.uu.nl/docs/vakken/b2ki/schema.html
More informationHeuristic Search and Advanced Methods
Heuristic Search and Advanced Methods Computer Science cpsc322, Lecture 3 (Textbook Chpt 3.6 3.7) May, 15, 2012 CPSC 322, Lecture 3 Slide 1 Course Announcements Posted on WebCT Assignment1 (due on Thurs!)
More informationInformed search algorithms. Chapter 4
Informed search algorithms Chapter 4 Outline Best-first search Greedy best-first search A * search Heuristics Memory Bounded A* Search Best-first search Idea: use an evaluation function f(n) for each node
More informationInformed Search Lecture 5
Lecture 5 How can we exploit problem-specific knowledge to find solutions more efficiently? Where does this knowledge come from and under what conditions is it useful? 1 Agenda Review: Uninformed Search
More informationLecture 5 Heuristics. Last Time: A* Search
CSE 473 Lecture 5 Heuristics CSE AI Faculty Last Time: A* Search Use an evaluation function f(n) for node n. f(n) = estimated total cost of path thru n to goal f(n) = g(n) + h(n) g(n) = cost so far to
More informationInformed search algorithms. Chapter 3 (Based on Slides by Stuart Russell, Dan Klein, Richard Korf, Subbarao Kambhampati, and UW-AI faculty)
Informed search algorithms Chapter 3 (Based on Slides by Stuart Russell, Dan Klein, Richard Korf, Subbarao Kambhampati, and UW-AI faculty) Intuition, like the rays of the sun, acts only in an inflexibly
More informationVorlesung Grundlagen der Künstlichen Intelligenz
Vorlesung Grundlagen der Künstlichen Intelligenz Reinhard Lafrenz / Prof. A. Knoll Robotics and Embedded Systems Department of Informatics I6 Technische Universität München www6.in.tum.de lafrenz@in.tum.de
More informationCS 4700: Foundations of Artificial Intelligence
CS 4700: Foundations of Artificial Intelligence Bart Selman selman@cs.cornell.edu Informed Search Readings R&N - Chapter 3: 3.5 and 3.6 Search Search strategies determined by choice of node (in queue)
More informationSRI VIDYA COLLEGE OF ENGINEERING & TECHNOLOGY REPRESENTATION OF KNOWLEDGE PART A
UNIT II REPRESENTATION OF KNOWLEDGE PART A 1. What is informed search? One that uses problem specific knowledge beyond the definition of the problem itself and it can find solutions more efficiently than
More informationCS 4700: Foundations of Artificial Intelligence
CS 4700: Foundations of Artificial Intelligence Bart Selman selman@cs.cornell.edu Module: Informed Search Readings R&N - Chapter 3: 3.5 and 3.6 Search Search strategies determined by choice of node (in
More informationCS 188: Artificial Intelligence Fall Search Gone Wrong?
CS 188: Artificial Intelligence Fall 2009 Lecture 3: A* Search 9/3/2009 Pieter Aeel UC Berkeley Many slides from Dan Klein Search Gone Wrong? 1 Announcements Assignments: Project 0 (Python tutorial): due
More informationHeuristic Search. CPSC 470/570 Artificial Intelligence Brian Scassellati
Heuristic Search CPSC 470/570 Artificial Intelligence Brian Scassellati Goal Formulation 200 Denver 300 200 200 Chicago 150 200 Boston 50 1200 210 75 320 255 Key West New York Well-defined function that
More informationARTIFICIAL INTELLIGENCE. Informed search
INFOB2KI 2017-2018 Utrecht University The Netherlands ARTIFICIAL INTELLIGENCE Informed search Lecturer: Silja Renooij These slides are part of the INFOB2KI Course Notes available from www.cs.uu.nl/docs/vakken/b2ki/schema.html
More informationProblem solving and search
Problem solving and search Chapter 3 Chapter 3 1 Outline Problem-solving agents Problem types Problem formulation Example problems Uninformed search algorithms Informed search algorithms Chapter 3 2 Restricted
More informationInformed Search. Notes about the assignment. Outline. Tree search: Reminder. Heuristics. Best-first search. Russell and Norvig chap.
Notes about the assignment Informed Search Russell and Norvig chap. 4 If it says return True or False, return True or False, not "True" or "False Comment out or remove print statements before submitting.
More informationUninformed (also called blind) search algorithms
Uninformed (also called blind) search algorithms First Lecture Today (Thu 30 Jun) Read Chapters 18.6.1-2, 20.3.1 Second Lecture Today (Thu 30 Jun) Read Chapter 3.1-3.4 Next Lecture (Tue 5 Jul) Chapters
More informationInformed search algorithms. Chapter 4, Sections 1 2 1
Informed search algorithms Chapter 4, Sections 1 2 Chapter 4, Sections 1 2 1 Outline Best-first search A search Heuristics Chapter 4, Sections 1 2 2 Review: Tree search function Tree-Search( problem, fringe)
More informationInformed Search and Exploration
Informed Search and Exploration Chapter 4 (4.1-4.3) CS 2710 1 Introduction Ch.3 searches good building blocks for learning about search But vastly inefficient eg: Can we do better? Breadth Depth Uniform
More informationInformed Search and Exploration
Informed Search and Exploration Berlin Chen 2005 Reference: 1. S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach, Chapter 4 2. S. Russell s teaching materials AI - Berlin Chen 1 Introduction
More informationOutline for today s lecture. Informed Search. Informed Search II. Review: Properties of greedy best-first search. Review: Greedy best-first search:
Outline for today s lecture Informed Search II Informed Search Optimal informed search: A* (AIMA 3.5.2) Creating good heuristic functions Hill Climbing 2 Review: Greedy best-first search: f(n): estimated
More informationCAP 4630 Artificial Intelligence
CAP 4630 Artificial Intelligence Instructor: Sam Ganzfried sganzfri@cis.fiu.edu 1 http://www.ultimateaiclass.com/ https://moodle.cis.fiu.edu/ 2 Solving problems by search 3 8-puzzle 4 8-queens 5 Search
More informationSearching with Partial Information
Searching with Partial Information Above we (unrealistically) assumed that the environment is fully observable and deterministic Moreover, we assumed that the agent knows what the effects of each action
More informationInformed Search CS457 David Kauchak Fall 2011
Admin Informed Search CS57 David Kauchak Fall 011 Some material used from : Sara Owsley Sood and others Q3 mean: 6. median: 7 Final projects proposals looked pretty good start working plan out exactly
More informationInformed Search. Xiaojin Zhu Computer Sciences Department University of Wisconsin, Madison
Informed Search Xiaojin Zhu jerryzhu@cs.wisc.edu Computer Sciences Department University of Wisconsin, Madison [Based on slides from Andrew Moore http://www.cs.cmu.edu/~awm/tutorials ] slide 1 Main messages
More informationAdvanced Artificial Intelligence (DT4019, HT10)
Advanced Artificial Intelligence (DT4019, HT10) Problem Solving and Search: Informed Search Strategies (I) Federico Pecora School of Science and Technology Örebro University federico.pecora@oru.se Federico
More informationThe wolf sheep cabbage problem. Search. Terminology. Solution. Finite state acceptor / state space
Search The wolf sheep cabbage problem What is search? Terminology: search space, strategy Modelling Uninformed search (not intelligent ) Breadth first Depth first, some variations omplexity space and time
More information