Wissensverarbeitung (Knowledge Processing) - Search - Alexander Felfernig and Gerald Steinbauer, Institut für Softwaretechnologie, Inffeldgasse 16b/2, A-8010 Graz, Austria

References: Lecture notes (Skriptum) of TU Wien, Institut für Informationssysteme (Thomas Eiter et al.), available at the ÖH-Copyshop, Studienzentrum. Stuart Russell and Peter Norvig: Artificial Intelligence - A Modern Approach. Prentice Hall, 2003. Lecture slides of TU Graz (partially based on the slides of TU Wien).

Goals: Uninformed search strategies. Informed search strategies.

Search: An important method of Artificial Intelligence. Approaches: uninformed search; informed search (exploits information/heuristics about the problem structure). Example: Which is the shortest path from A to D? [Figure: small example graph with nodes A, B, C, D, E]

Elements of a Search Problem: Start node (start state). Goal node (goal state). States are generated dynamically on the basis of a generation rule R (example: move one tile to the empty field of the 8-puzzle). Search goal: a sequence of rule applications that transforms the start state into the goal state. [Figure: 8-puzzle start state and goal state]

Search Problem: Start state (S_0). Non-empty set of goal states (goal test). Non-empty set of operators: transform state S_i into state S_{i+1}; each transformation incurs costs (> 0); if no cost estimation is available, unit costs are assumed by default. Cost function for search paths: sum of the costs of the individual transformations.
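
To make these elements concrete, the following is a minimal Python sketch of how such a search problem could be represented; the names SearchProblem, initial_state, is_goal, and successors are illustrative choices, not taken from the lecture.

    from dataclasses import dataclass
    from typing import Callable, Hashable, Iterable, Tuple

    State = Hashable

    @dataclass
    class SearchProblem:
        # start state S_0
        initial_state: State
        # goal test for the (non-empty) set of goal states
        is_goal: Callable[[State], bool]
        # operators: successors(s) yields pairs (next_state, step_cost) with step_cost > 0
        successors: Callable[[State], Iterable[Tuple[State, float]]]

    def path_cost(step_costs: Iterable[float]) -> float:
        # cost of a search path = sum of the costs of its individual transformations
        return sum(step_costs)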

Properties of Search Methods: Completeness: does the method find a solution if one exists? Space complexity: how do memory requirements grow with the size of the search problem (search space)? Time complexity: how does computation time grow with the size of the search problem (search space)? Optimality: does the method identify the optimal solution (if one exists)?

Breadth-First Search (bfs): Level-wise expansion of states; a systematic strategy: expand all paths of length 1, then all paths of length 2, and so on. Completeness: yes (if the branching factor b is finite). Space complexity: O(b^(d+1)), with branching factor b and depth of the shallowest solution d. Time complexity: O(b^(d+1)). Optimality: yes (if step costs are identical).

bfs example (queue contents after each expansion, goal G): [A], [B,C,D], [C,D,E,F], [D,E,F,G], [E,F,G], [F,G], [G]. [Figure: example search tree with root A and nodes B, C, D, E, F, G]
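
A compact sketch of this queue-based expansion in Python (tree search without a visited set, as in the trace above); the small example tree below is inferred from the queue contents and is only illustrative.

    from collections import deque

    def breadth_first_search(start, is_goal, successors):
        # FIFO queue of paths; the shallowest paths are expanded first
        frontier = deque([[start]])
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if is_goal(node):
                return path
            for nxt, _cost in successors(node):
                frontier.append(path + [nxt])
        return None

    # example tree inferred from the trace: A -> B, C, D; B -> E, F; C -> G (goal)
    tree = {"A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G"],
            "D": [], "E": [], "F": [], "G": []}
    print(breadth_first_search("A", lambda n: n == "G",
                               lambda n: [(c, 1) for c in tree[n]]))  # ['A', 'C', 'G']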

Uniform Cost Search (ucs). Goal: find the cheapest path from S to G (independent of the number of transitions). [Figure: example graph with start S, goal G and intermediate nodes A, B, C; edge costs S-A = 1, S-B = 5, S-C = 15, A-G = 10, B-G = 5] Approach: expand S, then expand A (since A currently has the cheapest path). The resulting path via A with cost 11 is not necessarily optimal, since a cheaper solution could still exist. Expand B; finally the solution via B is identified with path costs = 10.

Uniform Cost Search (ucs): Generalization of bfs; expand the state with the lowest accumulated path cost. There could be a solution with a shorter path but higher costs. Corresponds to bfs if step costs are identical. Completeness: yes (if b is finite and every step cost is at least some ε > 0). Space complexity: O(b^(C*/ε + 1)), with branching factor b and cost of the optimal solution C*. Time complexity: O(b^(C*/ε + 1)). Optimality: yes.
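
A sketch of uniform cost search using a priority queue ordered by accumulated path cost; the small graph in the example is a reconstruction of the S/A/B/C/G figure from the previous slide and may differ in detail.

    import heapq

    def uniform_cost_search(start, is_goal, successors):
        # priority queue of (path cost g, path); the cheapest path is expanded first
        frontier = [(0.0, [start])]
        while frontier:
            cost, path = heapq.heappop(frontier)
            node = path[-1]
            if is_goal(node):            # goal test on expansion guarantees optimality
                return cost, path
            for nxt, step in successors(node):   # step costs must be > 0
                heapq.heappush(frontier, (cost + step, path + [nxt]))
        return None

    # graph reconstructed from the slide: S-A = 1, S-B = 5, S-C = 15, A-G = 10, B-G = 5
    graph = {"S": [("A", 1), ("B", 5), ("C", 15)],
             "A": [("G", 10)], "B": [("G", 5)], "C": [], "G": []}
    print(uniform_cost_search("S", lambda n: n == "G", graph.__getitem__))
    # (10.0, ['S', 'B', 'G'])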

Depth-First Search (dfs): No level-wise expansion; the successor nodes of the deepest state are expanded first. If no more states can be expanded, backtracking occurs. Completeness: no (infinite descent paths are possible). Space complexity: O(b·m), with branching factor b and maximum depth of the search tree m. Time complexity: O(b^m). Optimality: no.

dfs example (stack contents after each expansion, goal G): [A], [B,C,D], [E,F,C,D], [H,F,C,D], [F,C,D], [C,D], [F,G,D], [G,D]. [Figure: example search tree with nodes A to H]

DF Iterative Deepening (dfid): Search depth is limited by a bound l. If search depth l is reached, backtracking occurs. If no solution has been found for bound l, set l = l + 1 and repeat the dfs up to the new bound. [Figure: example using a binary tree, showing the expansions for levels 0, 1, and 2]

DF Iterative Deepening: Completeness: yes (if b is finite). Space complexity: O(b·d), with branching factor b and depth of the shallowest solution d. Time complexity: O(b^d). Optimality: yes (if step costs are identical).
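
An illustrative Python sketch of depth-limited search and the iterative deepening loop around it; for simplicity it does not distinguish between cutoff and failure and performs no loop check.

    def depth_limited_search(node, is_goal, successors, limit, path=None):
        # dfs restricted to depth `limit`; backtracks when the bound is reached
        path = path or [node]
        if is_goal(node):
            return path
        if limit == 0:
            return None
        for nxt, _cost in successors(node):
            result = depth_limited_search(nxt, is_goal, successors, limit - 1, path + [nxt])
            if result is not None:
                return result
        return None

    def iterative_deepening_search(start, is_goal, successors, max_depth=50):
        # repeated depth-limited search with l = 0, 1, 2, ...
        for limit in range(max_depth + 1):
            result = depth_limited_search(start, is_goal, successors, limit)
            if result is not None:
                return result
        return None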

Comparison of Uninformed Search Strategies

Criterion       Breadth-First    Uniform-Cost       Depth-First   Iterative Deepening
Completeness    Yes (a)          Yes (a, b)         No            Yes (a)
Time            O(b^(d+1))       O(b^(C*/ε + 1))    O(b^m)        O(b^d)
Space           O(b^(d+1))       O(b^(C*/ε + 1))    O(b·m)        O(b·d)
Optimality      Yes (c)          Yes                No            Yes (c)

(a) if the branching factor b is finite; (b) if step costs are at least ε > 0; (c) if step costs are identical.

Heuristic Search: Informed search that exploits problem- and domain-specific knowledge. An estimation function estimates the costs from the current node to the goal state. The heuristic approach is also denoted as best-first search, since the best-evaluated state is chosen for expansion. Notation: f is the estimation function, f* the real cost function; h(n) estimates the minimal costs from n to a goal state; for every goal state n, h(n) = 0. Methods to be discussed: greedy search and A* search.

Greedy Search: Estimates the minimal costs from the current node to the goal node; does not take into account the costs already incurred, i.e., f(n) = h(n). Completeness: no (a loop check is needed). Space complexity: O(b^m), with branching factor b and maximum depth of the search tree m. Time complexity: O(b^m). Optimality: no.
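
A sketch of greedy best-first search; it orders the frontier by h(n) only and uses a visited set as the loop check mentioned above (all names are illustrative).

    import heapq

    def greedy_best_first_search(start, is_goal, successors, h):
        # best-first search with f(n) = h(n); ignores the accumulated path cost g(n)
        frontier = [(h(start), [start])]
        visited = set()                      # loop check needed for completeness
        while frontier:
            _f, path = heapq.heappop(frontier)
            node = path[-1]
            if is_goal(node):
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt, _cost in successors(node):
                if nxt not in visited:
                    heapq.heappush(frontier, (h(nxt), path + [nxt]))
        return None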

Structure of Working Example. [Figure: map of Romania with step costs in km; task: find a route from Arad to Bucharest]

Greedy Search with the straight-line distance heuristic h_sld. [Figure: successive expansion steps on the Romania map, using the straight-line distance to Bucharest as heuristic]

Summarizing Greedy Search: Using the sld heuristic to find a short path from Arad to Bucharest. From Arad, expand the successor with the lowest h-value (Sibiu); from Sibiu, again expand the lowest h-value (Fagaras); from Fagaras, Bucharest is found. The solution found, Arad, Sibiu, Fagaras, Bucharest, has length 450 km, in contrast to the optimal path Arad, Sibiu, Rimnicu, Pitesti, Bucharest with 418 km. Greedy search is based on local information only; a better choice is to also take into account the costs already incurred.

Greedy Search: Loops. [Figure: example in which greedy search without a loop check runs into a loop]

A* Search: In contrast to greedy search, it also takes into account the costs already incurred: g(n) = path costs from the start node to the current node. This results in a fairer selection strategy for expansion. Estimation function: f(n) = g(n) + h(n), where h(n) is the same as for greedy search. A* is a best-first search method with the following properties: finite number of possible expansions; transition costs are positive (at least some ε > 0); h is admissible, i.e., h never overestimates the real costs (it is an optimistic heuristic). Consequently, f never overestimates the real costs.
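
A sketch of the tree-search variant of A* in Python, matching f(n) = g(n) + h(n) with an admissible h; duplicate elimination (a closed list) is omitted for brevity.

    import heapq
    from itertools import count

    def a_star_search(start, is_goal, successors, h):
        # frontier entries: (f = g + h, tie-breaker, g, path)
        tie = count()
        frontier = [(h(start), next(tie), 0.0, [start])]
        while frontier:
            f, _, g, path = heapq.heappop(frontier)
            node = path[-1]
            if is_goal(node):            # goal test on expansion; optimal if h is admissible
                return g, path
            for nxt, step in successors(node):
                g2 = g + step
                heapq.heappush(frontier, (g2 + h(nxt), next(tie), g2, path + [nxt]))
        return None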

A* Search. [Figure: successive expansion steps on the Romania map using f(n) = g(n) + h_sld(n)]

Summarizing A* Search: Used functions: g(n) = real travel distance from Arad to n; h(n) = straight-line distance from n to Bucharest. Taking this global information into account improves the quality of the identified solution. Expansion ordering (lowest f-value first): Sibiu, Rimnicu, Pitesti, Bucharest. The optimal solution is found.

Determination of Heuristics: The heuristic function h must be optimistic. No formal rule set for determining heuristic functions exists, but there are some rules of thumb. Example (8-puzzle) with two heuristic functions h_1 and h_2: h_1 = number of tiles in the wrong position; h_2 = sum of the (Manhattan) distances of the tiles from their goal positions. Both simplify the problem by ignoring the influence of a move on the other tiles. [Figure: 8-puzzle start state and goal state] For the shown start state, h_1 = 7 and h_2 = 15. Dominance relationship: h_2 dominates h_1, since h_1(n) ≤ h_2(n) for each n; h_2 is assumed to be more efficient, while h_1 has a tendency towards breadth-first search.
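
The two heuristics can be computed as follows for a board encoded as a tuple of length 9 (row by row, 0 for the blank); this encoding is an assumption for illustration and does not reproduce the concrete board from the slide.

    def h1(state, goal):
        # number of tiles in the wrong position (the blank is not counted)
        return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

    def h2(state, goal):
        # sum of the Manhattan distances of the tiles from their goal positions
        total = 0
        for idx, tile in enumerate(state):
            if tile == 0:
                continue
            goal_idx = goal.index(tile)
            total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
        return total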

Properties of A* Search: The f-values along a path in the search tree are never decreasing (monotonicity property). Theorem: f is monotone if h(n_1) ≤ c(n_1, n_2) + h(n_2) for every successor n_2 of n_1. Consequently, nodes are expanded in order of increasing f-value.

Properties of A* Search: [Figure: contour circles of 380, 400, and 420 km (f-values) on the Romania map] A* expands all n with f(n) ≤ 380, then all n with f(n) ≤ 400, and so on. Let f* be the cost of an optimal solution: A* expands all n with f(n) < f*, some m with f(m) = f*, and then the goal state is identified. Completeness: yes (h is admissible). Space complexity: exponential. Time complexity: exponential. Optimality: yes.

Optimality of A* Search: Suppose a suboptimal (path to a) goal G_2 is in the queue, and let n be an unexpanded node on a shortest path to an optimal goal G. Then f(G_2) = g(G_2), since h(G_2) = 0; g(G_2) > g(G), since G_2 is suboptimal, hence f(G_2) > g(G); and g(G) ≥ f(n), since h is admissible and n lies on an optimal path to G, hence f(G_2) > f(n). Since f(G_2) > f(n), A* will never select G_2 for expansion.

Local Search: Previously we considered the systematic exploration of the search space, where the path to the goal is the solution to the problem. For some problems, however, the path is irrelevant, for example the 8-queens problem: the state space is the set of (complete) configurations, and the goal is to identify a configuration that satisfies a set of constraints. Here other algorithms can be used: local search algorithms try to improve the current state by generating a follow-up state (neighbor state); they are not systematic, not optimal, and not complete.

Local Search: Example. n-queens problem: position n queens on an n×n board such that the number of pairs of queens on the same column, row, or diagonal is minimized (ideally zero).

Hill Climbing Algorithm: "Finding the top of Mount Everest in a thick fog while suffering from amnesia" [Russell & Norvig, 2003]. A form of greedy local search.

Hill Climbing: Local Maxima. A local maximum is a peak that is higher than each of its neighboring states but lower than the global maximum. 8-queens example: h = number of pairs of queens attacking each other; successor function: move a single queen to another square in the same column. [Figure: board with h = 1 that is a local maximum (minimum of h): every move of a single queen makes the situation worse; Russell & Norvig, 2003]
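
A sketch of hill climbing for n-queens with the evaluation and successor function described above; the encoding (one queen per column, board[c] = row of the queen in column c) is an illustrative assumption.

    import random

    def attacking_pairs(board):
        # h = number of pairs of queens attacking each other
        n, h = len(board), 0
        for c1 in range(n):
            for c2 in range(c1 + 1, n):
                if board[c1] == board[c2] or abs(board[c1] - board[c2]) == c2 - c1:
                    h += 1
        return h

    def hill_climbing(n=8):
        # greedy local search: always move to the best neighbor, stop at a local optimum
        board = [random.randrange(n) for _ in range(n)]
        while True:
            current_h = attacking_pairs(board)
            best_h, best_board = current_h, board
            for col in range(n):
                for row in range(n):
                    if row != board[col]:
                        neighbor = board[:col] + [row] + board[col + 1:]
                        nh = attacking_pairs(neighbor)
                        if nh < best_h:
                            best_h, best_board = nh, neighbor
            if best_h == current_h:      # local minimum of h (may still have h > 0)
                return board, current_h
            board = best_board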

Hill Climbing: Ridges. A ridge results in a sequence of local maxima that is difficult for greedy local search to navigate.

Hill Climbing: Plateaux. An area where the evaluation function is flat: either a flat local maximum or a shoulder from which progress is still possible.

Simulated Annealing: Hill climbing never selects downhill moves, so there is a danger of getting stuck in, for example, local maxima. Instead of picking the best move, choose a random one and also accept worse solutions, with a probability that decreases over time.
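
A generic sketch of simulated annealing for a maximization problem; the cooling schedule is passed in as a function of time and is an illustrative assumption (e.g. schedule = lambda t: max(0.0, 1.0 - 0.001 * t)).

    import math
    import random

    def simulated_annealing(start, neighbors, value, schedule):
        current, t = start, 0
        while True:
            T = schedule(t)
            if T <= 0:                            # annealing finished
                return current
            nxt = random.choice(neighbors(current))
            delta = value(nxt) - value(current)   # > 0 means improvement
            # always accept improvements; accept worse moves with probability exp(delta / T)
            if delta > 0 or random.random() < math.exp(delta / T):
                current = nxt
            t += 1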

Genetic Algorithms: Start with k randomly generated states (the population). Each state (individual) is represented by a string over a finite alphabet (e.g., the digits 1..8). Each state is rated by an evaluation (fitness) function, e.g., the number of non-attacking pairs of queens (optimum = 28 for 8 queens). The selection probability is directly proportional to the fitness score, for example 23/(24+23+20+11) ≈ 29%.

Genetic Algorithms: Cross-Over. [Figure: two parent strings are cut at a cross-over point and recombined into offspring]
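
A minimal sketch of the cross-over (and mutation) step on the string representation used above; the two parent strings in the comment are taken from the well-known 8-queens example in Russell & Norvig and serve only as an illustration.

    import random

    def reproduce(x, y):
        # single-point cross-over: prefix of x combined with suffix of y
        point = random.randrange(1, len(x))
        return x[:point] + y[point:]

    def mutate(child, alphabet="12345678"):
        # replace one randomly chosen position by a random symbol
        pos = random.randrange(len(child))
        return child[:pos] + random.choice(alphabet) + child[pos + 1:]

    # e.g. reproduce("24748552", "32752411") yields "24752411" if the cross-over point is 3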

Genetic Algorithm

    function GENETIC_ALGORITHM(population, FITNESS-FN) returns an individual
        inputs: population, a set of individuals
                FITNESS-FN, a function that determines the quality of an individual
        repeat
            new_population <- empty set
            for i = 1 to SIZE(population) do
                x <- RANDOM_SELECTION(population, FITNESS-FN)
                y <- RANDOM_SELECTION(population, FITNESS-FN)
                child <- REPRODUCE(x, y)
                if (small random probability) then child <- MUTATE(child)
                add child to new_population
            population <- new_population
        until some individual is fit enough or enough time has elapsed
        return the best individual according to FITNESS-FN

Exercise
1. Explain the major differences between informed and uninformed search. When should which approach be used?
2. Explain the major differences between local search algorithms and A*.
3. Which search approach would you use for solving the Travelling Salesman Problem?
4. Explain the term admissibility in the context of best-first search.
5. Show the incompleteness and non-optimality of depth-first search on the basis of a simple example.

Thank You!