COMP 8620 Advanced Topics in AI


COMP 8620 Advanced Topics in AI. Lecturers: Philip Kilby & Jinbo Huang. 25/07/2008.

Part 1: Search. Lecturer: Dr Philip Kilby (Philip.Kilby@nicta.com.au). Weeks 1-7 (Week 7 is assignment seminars).
Part 2: Probabilistic Reasoning with Bayesian networks. Lecturer: Dr Jinbo Huang (Jinbo.Huang@nicta.com.au). Weeks 8-13.

Course Outline, Part 1:
1-2: Introduction; Systematic Search
3-4: Linear Programming / Integer Programming / Branch and Bound
5-6: Neighbourhood-based methods: Simulated Annealing, Tabu Search, Genetic Algorithms
7-8: Constraint Programming (Guest lecturer: Jason Li)
9-10: Case Studies: TSP, Planning (Jussi Rintanen)
11: Dynamic Programming
12: Qualitative Reasoning (Guest lecturer: Jason Li)
13-14: Seminars (Guest lecturers: You lot!)

Assessment: 40% Exam, 60% Assignment.
Search: 1 x Assignment (next week) 15%; 1 x Seminar 15%.
Reasoning: 3 x Assignments (10% each) (tentative).

What is Search? Search finds a solution to a problem. Landscape of solutions: some are good, lots (and lots and lots) are bad. How do we find the good ones?

What is Search? The landscape can be continuous or discrete (we will only deal with discrete). The landscape can be (approximated by?) a tree. The landscape can be (approximated by?) a graph.

What is Search? Example: X + 2 * Y < 3, X ∈ [0,5], Y ∈ [1,5].

What is Search? X + 2 * Y < 3, X ∈ [0,5], Y ∈ [1,5]. [Slide shows all 30 candidate assignments, X = 0..5 and Y = 1..5, scattered as points in the landscape of solutions.]

What is Search? X + 2 * Y < 3, X ∈ [0,5], Y ∈ [1,5]. [Slide shows the same assignments arranged as a search tree that branches on X first (X = 0..5), then on Y (Y = 1..5).]

What is Search? X + 2 * Y < 3, X ∈ [0,5], Y ∈ [1,5]. [Slide shows the tree branching on Y first (Y = 1..5), then on X (X = 0..5).]

What is Search? X + 2 * Y < 3, X ∈ [0,5], Y ∈ [1,5]. [Slide shows a tree that branches on X = 0 vs X > 0, then on Y = 1 vs Y > 1.]
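As a concrete (if trivial) illustration, the whole landscape of this toy problem can be enumerated and tested directly; a minimal Python sketch (purely illustrative, not from the slides):

# Enumerate every assignment in the finite landscape and test X + 2*Y < 3.
solutions = [(x, y)
             for x in range(0, 6)   # X in [0, 5]
             for y in range(1, 6)   # Y in [1, 5]
             if x + 2 * y < 3]
print(solutions)                     # [(0, 1)] -- the only good point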

What is Search? Different flavours: find a solution (Satisfaction / Decision); find the best solution (Combinatorial Optimisation). Decision problems: Search variant: find a solution (or show none exists). Decision variant: does a solution exist?

What is Search? Constructive Search: find a solution by construction. Local Search: given a solution, explore the neighbourhood of that solution to find other (possibly better) solutions.

What is Search? Anytime algorithm (for an optimisation problem): after a (short) startup time, an answer is available; the longer the algorithm is run, the better the solution; a quality guarantee may be available (e.g. the solution is within 5% of optimal). One-shot algorithm: the answer is only available at completion.

What is Search? A problem is defined by: an initial state; a successor function (an action leading from one state to another); a goal test; a path cost. A solution is a sequence of actions leading from the start state to a goal state.

Example: Romania. [Slide shows a road map of Romania.]

Problem formulation. A problem is defined by four items:
initial state, e.g., "at Arad"
successor function S(x) = set of action-state pairs, e.g., S(Arad) = {<Arad → Zerind, at Zerind>, ...}
goal test, e.g., x = "at Bucharest"; can be implicit, e.g., HasAirport(x)
path cost (additive), e.g. sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0
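A minimal Python sketch of these four components for a fragment of the Romania route-finding problem (the representation and names below are ours; the road distances follow the standard Russell & Norvig map):

# A fragment of the road map: city -> list of (neighbour, distance).
ROADS = {'Arad': [('Zerind', 75), ('Sibiu', 140), ('Timisoara', 118)],
         'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)]}

class RouteProblem:
    def __init__(self, roads, start, goal):
        self.roads, self.initial_state, self.goal = roads, start, goal

    def successors(self, state):
        # Successor function S(x): a set of (action, resulting-state) pairs.
        return [('go ' + city, city) for city, _ in self.roads.get(state, [])]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, result):
        # Additive path cost: the road distance of this single step.
        return dict(self.roads[state])[result]

problem = RouteProblem(ROADS, start='Arad', goal='Bucharest')
print(problem.successors('Arad'))
# [('go Zerind', 'Zerind'), ('go Sibiu', 'Sibiu'), ('go Timisoara', 'Timisoara')]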

Tree Search

function Tree-Search(problem, fringe) returns a solution, or failure
  fringe ← Insert(Make-Node(Initial-State(problem)), fringe)
  loop do
    if Is-Empty(fringe) then return failure
    node ← Remove-Front(fringe)
    if Goal-Test(problem, State[node]) then return node
    fringe ← Insert-All(Expand(node, problem), fringe)

function Expand(node, problem) returns a set of nodes
  successors ← the empty set
  for each (action, result) in Successor-Fn(problem, State[node]) do
    s ← a new Node
    Parent-Node[s] ← node; Action[s] ← action; State[s] ← result
    Path-Cost[s] ← Path-Cost[node] + Step-Cost(State[node], action, result)
    Depth[s] ← Depth[node] + 1
    successors ← Insert(s, successors)
  return successors
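A rough Python transliteration of the pseudocode above, run on a toy graph; the queuing discipline of the fringe (FIFO, LIFO, priority) is what distinguishes the strategies on the following slides (names and graph are illustrative):

from collections import deque

# Toy state graph: state -> list of successor states (unit-cost actions).
GRAPH = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': [], 'E': [], 'F': [], 'G': []}

class Node:
    def __init__(self, state, parent=None, depth=0):
        self.state, self.parent, self.depth = state, parent, depth

def expand(node):
    # Successor function: one action per outgoing edge of the current state.
    return [Node(s, node, node.depth + 1) for s in GRAPH[node.state]]

def tree_search(start, is_goal):
    fringe = deque([Node(start)])
    while fringe:
        node = fringe.popleft()           # Remove-Front (FIFO here)
        if is_goal(node.state):
            return node                    # solution
        fringe.extend(expand(node))        # Insert-All
    return None                            # failure

goal = tree_search('A', lambda s: s == 'F')
print(goal.state, goal.depth)              # F 2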

Breadth-first Search
[Slides 19-32: animation of breadth-first search on a 15-node binary tree (nodes A-O). The fringe is expanded level by level, left to right, until the goal node is reached (marked '!!' on the final frame).]
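A sketch of breadth-first search in Python on a complete binary tree with nodes A-O, as in the animation above; the FIFO fringe visits the nodes level by level (tree and goal choice are illustrative):

from collections import deque

TREE = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O']}

def bfs(root, goal):
    fringe, visited_order = deque([root]), []
    while fringe:
        node = fringe.popleft()              # FIFO: oldest node first
        visited_order.append(node)
        if node == goal:
            return visited_order
        fringe.extend(TREE.get(node, []))
    return visited_order

print(bfs('A', 'M'))   # ['A','B','C','D','E','F','G','H','I','J','K','L','M'] -- level order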

Depth-first Search
[Slides 33-46: animation of depth-first search on the same 15-node binary tree. The search dives to the bottom of the left-most branch first and backtracks, reaching the goal (marked '!!') on the final frame.]
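The same sketch with a LIFO fringe gives depth-first search, which dives down the left-most branch first (tree and goal choice are illustrative):

TREE = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O']}

def dfs(root, goal):
    fringe, visited_order = [root], []
    while fringe:
        node = fringe.pop()                  # LIFO: newest node first
        visited_order.append(node)
        if node == goal:
            return visited_order
        fringe.extend(reversed(TREE.get(node, [])))  # keep left-to-right order
    return visited_order

print(dfs('A', 'M'))   # ['A','B','D','H','I','E','J','K','C','F','L','M']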

Iterative Deepening Search
[Slides 47-74: animation of iterative deepening (depth-limited DFS with an increasing depth bound) on the same 15-node binary tree, re-expanding the shallow levels on each iteration until the goal (marked '!!') is found.]
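A sketch of iterative deepening: repeated depth-limited DFS with an increasing depth bound (tree, goal and bound are illustrative):

TREE = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O']}

def depth_limited(node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                            # cut off this branch
    for child in TREE.get(node, []):
        path = depth_limited(child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening(root, goal, max_depth=10):
    for limit in range(max_depth + 1):         # depth bound 0, 1, 2, ...
        path = depth_limited(root, goal, limit)
        if path is not None:
            return path
    return None

print(iterative_deepening('A', 'M'))           # ['A', 'C', 'F', 'M']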

Uniform Cost Search. Expand nodes in order of path cost. Identical to Breadth-First Search if all step costs are equal. The number of nodes expanded depends on ε > 0, the smallest action cost. (If ε = 0, Uniform-Cost Search may loop forever.)
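A sketch of uniform-cost search: the fringe is a priority queue ordered by g(n), the cost of the path found so far (graph and costs are illustrative):

import heapq

GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)],
         'B': [('G', 2)], 'G': []}

def uniform_cost(start, goal):
    fringe = [(0, start, [start])]               # (g, state, path)
    while fringe:
        g, state, path = heapq.heappop(fringe)   # cheapest path so far
        if state == goal:
            return g, path
        for succ, step in GRAPH[state]:
            heapq.heappush(fringe, (g + step, succ, path + [succ]))
    return None

print(uniform_cost('S', 'G'))   # (5, ['S', 'A', 'B', 'G'])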

Bi-directional Search. Extend BFS from the Start and Goal states simultaneously: b^(d/2) + b^(d/2) << b^d.

Bi-directional Search. Restrictions: can only be used if there is one (or a few) known goal node(s); can only be used if actions are reversible, i.e. if we can efficiently calculate Pred(s) for a state s, where Pred(s) = { r : r →a s for some action a }.
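A rough sketch of bidirectional breadth-first search on an undirected graph, where actions are trivially reversible and Pred = Succ; the two frontiers expand alternately until they meet (graph is illustrative, and a real implementation would also reconstruct the joined path):

from collections import deque

EDGES = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'],
         'D': ['B', 'C', 'E'], 'E': ['D']}

def bidirectional(start, goal):
    seen_fwd, seen_bwd = {start}, {goal}
    frontier_fwd, frontier_bwd = deque([start]), deque([goal])
    while frontier_fwd and frontier_bwd:
        # Alternate one expansion step forward, then one backward.
        for frontier, seen, other_seen in ((frontier_fwd, seen_fwd, seen_bwd),
                                           (frontier_bwd, seen_bwd, seen_fwd)):
            node = frontier.popleft()
            if node in other_seen:
                return node                      # the two searches have met
            for nbr in EDGES[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append(nbr)
    return None

print(bidirectional('A', 'E'))                   # 'D' -- a meeting node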

Summary

Criterion   BFS              DFS     IDS              Uniform Cost   Bi-directional
Complete    Yes              No      Yes              Yes            Yes
Time        O(b^(d+1))       O(b^m)  O(b^d)           O(b^(C*/ε))    O(b^(d/2))
Space       O(b^(d+1))       O(bm)   O(bd)            O(b^(C*/ε))    O(b^(d/2))
Optimal     Yes, unit cost   No      Yes, unit cost   Yes            Yes, unit cost

b = branching factor; m = max depth of tree; d = depth of goal node; C* = cost of goal; ε = min action cost

Repeated State Detection. Test for previously-seen states. Essentially, the search graph mimics the state graph.

Repeated State Detection. Can give an exponential improvement in time. Can make DFS complete. DFS now has a potentially exponential space requirement. Have to be careful to guarantee optimality. In practice, repeated state detection is almost always used.

A* Search. Idea: avoid expanding paths that are already expensive. Evaluation function f(n) = g(n) + h(n), where g(n) = cost so far to reach n, h(n) = estimated cost from n to the closest goal, and f(n) = estimated total cost of the path through n to the goal. A* search uses an admissible heuristic, i.e., h(n) ≤ h*(n), where h*(n) is the true cost from n. (Also require h(n) ≥ 0, so h(G) = 0 for any goal G.) E.g., the straight-line distance h_SLD(n) never overestimates the actual road distance. When h(n) is admissible, f(n) never overestimates the total cost of the shortest path through n to the goal. Theorem: A* search is optimal.

A* Example
[Slides 82-94: A* on the Romania map, starting from Arad, with f = g + h shown for every fringe node. Arad 366 = 0 + 366; expanding Arad gives Sibiu 393 = 140 + 253, Timisoara 447 = 118 + 329, Zerind 449 = 75 + 374; expanding Sibiu gives Rimnicu Vilcea 413 = 220 + 193, Fagaras 415 = 239 + 176, Arad 646 = 280 + 366, Oradea 671 = 291 + 380; expanding Rimnicu Vilcea gives Pitesti 417 = 317 + 100, Craiova 526 = 366 + 160, Sibiu 553 = 300 + 253; expanding Fagaras gives Bucharest 450 = 450 + 0, Sibiu 591 = 338 + 253; expanding Pitesti gives Bucharest 418 = 418 + 0, Craiova 615 = 455 + 160, Rimnicu Vilcea 607 = 414 + 193; Bucharest is then selected with cost 418, the optimal route Arad-Sibiu-Rimnicu Vilcea-Pitesti-Bucharest.]
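The trace above can be reproduced with a small A* sketch: the fringe is ordered by f(n) = g(n) + h(n), with h the straight-line distance to Bucharest. The road distances and heuristic values follow the standard Russell & Norvig figures (the same ones used on the slides); the code itself is only an illustrative sketch over a fragment of the map:

import heapq

ROADS = {'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
         'Sibiu': [('Rimnicu Vilcea', 80), ('Fagaras', 99), ('Oradea', 151)],
         'Rimnicu Vilcea': [('Pitesti', 97), ('Craiova', 146)],
         'Fagaras': [('Bucharest', 211)],
         'Pitesti': [('Bucharest', 101), ('Craiova', 138)]}
H = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
     'Oradea': 380, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
     'Pitesti': 100, 'Craiova': 160, 'Bucharest': 0}

def a_star(start, goal):
    fringe = [(H[start], 0, start, [start])]          # (f, g, state, path)
    while fringe:
        f, g, state, path = heapq.heappop(fringe)     # lowest f = g + h first
        if state == goal:
            return g, path
        for succ, step in ROADS.get(state, []):
            heapq.heappush(fringe,
                           (g + step + H[succ], g + step, succ, path + [succ]))
    return None

print(a_star('Arad', 'Bucharest'))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])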

A* Search. Optimal: no other optimal algorithm expands fewer nodes (modulo the order of expansion of nodes with the same cost). Exponential space requirement.

Greedy Search. Use the (always admissible) h(n) = 0 in f(n) = g(n) + h(n), i.e. always expand the cheapest path found so far.

Variants of A* Search. Many variants of A* have been developed. Weighted A*: f(n) = g(n) + W·h(n). Not exact, but faster; the answer is within a factor W of optimal.
[Slides 98-99 repeat this text alongside illustrations labelled 'A*' and 'Weighted A*'.]
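Only the evaluation function changes relative to plain A*; a one-line sketch (the weight value below is illustrative):

# Weighted A*: inflate the heuristic by W > 1; with an admissible h the
# solution found costs at most W times the optimum. W = 1 recovers plain A*.
def weighted_f(g, h, W=1.5):
    return g + W * h

print(weighted_f(140, 253))   # 519.5, versus f = 393 for plain A*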

Variants of A* Search. Iterative Deepening A* (IDA*): incrementally increase the cut-off (the bound on f) on each iteration. Memory-bounded A*: impose a bound on memory, discarding the nodes with the largest f(x) if necessary. Others to come in the seminars.
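A rough sketch of IDA*: depth-first search with a cut-off on f = g(n) + h(n), where the cut-off grows each round to the smallest f value that exceeded it (graph and heuristic are illustrative):

GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)],
         'B': [('G', 2)], 'G': []}
H = {'S': 4, 'A': 3, 'B': 2, 'G': 0}     # an admissible heuristic

def ida_star(start, goal):
    bound = H[start]
    while True:
        result = _search(start, goal, 0, bound, [start])
        if isinstance(result, list):
            return result                 # path to the goal
        if result == float('inf'):
            return None                   # no solution at all
        bound = result                    # next round: smallest overshooting f

def _search(state, goal, g, bound, path):
    f = g + H[state]
    if f > bound:
        return f                          # report how far we overshot
    if state == goal:
        return path
    minimum = float('inf')
    for succ, step in GRAPH[state]:
        result = _search(succ, goal, g + step, bound, path + [succ])
        if isinstance(result, list):
            return result
        minimum = min(minimum, result)
    return minimum

print(ida_star('S', 'G'))                 # ['S', 'A', 'B', 'G'], cost 5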

Limited Discrepancy Search. Requires a heuristic to order the choices (e.g. an admissible heuristic, but an underestimate is not required). If the heuristic were always right, we could simply take the choice given by the heuristic at every node. LDS allows the heuristic to make mistakes: k-LDS allows k mistakes (discrepancies).
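A rough sketch of k-LDS on a binary tree: at each node the heuristic prefers the left child (label 0); taking the right child (label 1) spends one of the k allowed discrepancies. The tree, goal and left-is-preferred convention are illustrative assumptions, not from the slides:

TREE = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O']}

def lds(node, is_goal, k):
    if is_goal(node):
        return [node]
    children = TREE.get(node, [])
    if not children:
        return None
    left, right = children
    path = lds(left, is_goal, k)           # follow the heuristic: free
    if path is None and k > 0:
        path = lds(right, is_goal, k - 1)  # one discrepancy spent
    return [node] + path if path else None

print(lds('A', lambda n: n == 'J', k=1))   # ['A', 'B', 'E', 'J'] -- 1 discrepancy
print(lds('A', lambda n: n == 'O', k=1))   # None -- would need 3 discrepancies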

Limited Discrepancy Search (k = 1)
[Slides 102-112: animation on a binary tree (leaves J, K, L, M, N, O visible). Branches are labelled 0 (follow the heuristic) or 1 (a discrepancy); with k = 1 the search explores exactly those root-to-leaf paths containing at most one '1'.]

Next week: (Integer) Linear Programming and Branch & Bound.