Lecture 4 of Artificial Intelligence. Heuristic Search. Produced by Qiangfu Zhao (2008), All rights reserved


Topics of this lecture
- What are heuristics?
- What is heuristic search?
- Best-first search
- The A* algorithm
- Generalization of search problems

What are heuristics? Heuristics are know-how accumulated through experience. They often enable us to make decisions quickly, without reasoning deeply about the underlying causes. In many cases, heuristics are tacit knowledge that cannot be explained verbally. The more experience we have, the better our heuristics become.

Why heuristic search? Based on heuristics, we can obtain good solutions without examining all possible cases. For instance, the deep learner used in AlphaGo acquires heuristics for playing Go, and this learner helps the system make decisions far more efficiently. Without heuristics, many AI-related problems cannot be solved at all (obtaining a solution might take many years). As another example, patterns memorized so far can be used to predict the track of a typhoon, so we do not have to compute the result from the raw data.

Best-first search The basic idea of best-first search is similar to that of uniform cost search. The only difference is that the cost (or, more generally, the evaluation function) is estimated from heuristics, rather than being the true cost computed during the search.

Best-first search
Step 1: Put the initial node x0 and its heuristic value H(x0) into the open list.
Step 2: If the open list is empty, stop with failure. Otherwise, take the node x at the top of the open list. If x is the target node, stop with success.
Step 3: Expand x to get a set S of child nodes, and add x to the closed list.
Step 4: For each child x' in S that is not in the closed list, estimate its heuristic value H(x'). If x' is not in the open list, put x' into the open list together with the edge (x, x') and H(x'); otherwise, if the new estimate is smaller than the old value H(x'), update x' with the new edge and the new heuristic value.
Step 5: Sort the open list according to the heuristic values of the nodes, and return to Step 2.
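
A minimal Python sketch of this procedure (the function name and the graph/heuristic encodings are illustrative, not from the textbook; a priority queue replaces the explicit sorting of Step 5):

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Best-first search guided only by the heuristic H.

    start, goal -- node identifiers
    neighbors   -- function: node -> iterable of adjacent nodes
    h           -- heuristic function: node -> estimated cost to goal
    Returns the path from start to goal, or None on failure.
    """
    open_list = [(h(start), start)]      # priority queue ordered by H only
    parent = {start: None}               # records the chosen edge (x, x')
    closed = set()
    while open_list:                     # empty open list -> failure
        _, x = heapq.heappop(open_list)  # Step 2: node with smallest H
        if x == goal:                    # success: rebuild path via parents
            path = []
            while x is not None:
                path.append(x)
                x = parent[x]
            return path[::-1]
        if x in closed:
            continue
        closed.add(x)                    # Step 3
        for child in neighbors(x):       # Step 4
            # H depends only on the node, so once a child has been
            # queued its heuristic value never changes; no update needed.
            if child not in closed and child not in parent:
                parent[child] = x
                heapq.heappush(open_list, (h(child), child))
    return None
```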

Example 2.6 (p. 25)

Step | Open List                     | Closed List
0    | {(0,0),4}                     | --
1    | {(1,0),3} {(0,1),3}           | (0,0)
2    | {(2,0),2} {(1,1),2} {(0,1),3} | (0,0) (1,0)
3    | {(1,1),2} {(0,1),3}           | (0,0) (1,0) (2,0)
4    | {(2,1),1} {(1,2),1} {(0,1),3} | (0,0) (1,0) (2,0) (1,1)
5    | {(2,2),0} {(1,2),1} {(0,1),3} | (0,0) (1,0) (2,0) (1,1) (2,1)
6    | (2,2) = target node           | (0,0) (1,0) (2,0) (1,1) (2,1)

(Figure: a 3x3 grid maze with heuristic values H(0,2)=2, H(1,2)=1, H(2,2)=0; H(0,1)=3, H(1,1)=2, H(2,1)=1; H(0,0)=4, H(1,0)=3, H(2,0)=2.)

The A* algorithm We cannot guarantee that best-first search finds the optimal solution, because the true cost from the initial node to the current node is ignored. The A* algorithm (A for admissible) solves this problem. In A*, the cost of a node is evaluated using both the true cost so far and the estimated remaining cost: F(x) = C(x) + H(x). It has been proved that the A* algorithm finds the best solution provided that the estimated cost H(x) never exceeds (i.e., is a conservative estimate of) the true remaining cost H*(x).

The A* algorithm
Step 1: Put the initial node x0 and its cost F(x0) = H(x0) into the open list.
Step 2: If the open list is empty, stop with failure. Otherwise, take the node x at the top of the open list. If x is the target node, stop with success.
Step 3: Expand x to get a set S of child nodes, and put x into the closed list.

The A* algorithm
Step 4: For each child x' in S, compute its cost F(x') = F(x) + d(x, x') + [H(x') - H(x)], where d(x, x') is the cost of the edge (x, x').
- If x' is in the closed list but the new cost is smaller than the old one, move x' back to the open list and update the edge (x, x') and the cost.
- Else, if x' is in the open list and the new cost is smaller than the old one, update the edge (x, x') and the cost.
- Else (x' is in neither the open list nor the closed list), put x' into the open list together with the edge (x, x') and the cost F(x').
Step 5: Sort the open list according to the costs of the nodes, and return to Step 2.
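
A Python sketch of the same steps (names and graph encoding are mine; the code tracks C(x) directly, so F(x') = C(x') + H(x'), which is equivalent to the incremental update F(x') = F(x) + d(x, x') + [H(x') - H(x)] above):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search.  neighbors(x) yields (child, edge_cost) pairs.

    Returns (cost, path) from start to goal, or None on failure.
    """
    g = {start: 0.0}                     # C(x): best known true cost so far
    parent = {start: None}
    open_list = [(h(start), start)]      # priority queue of (F, node)
    closed = set()
    while open_list:
        _, x = heapq.heappop(open_list)  # Step 2: node with smallest F
        if x == goal:
            path = []
            while x is not None:
                path.append(x)
                x = parent[x]
            return g[goal], path[::-1]
        if x in closed:
            continue
        closed.add(x)                    # Step 3
        for child, d in neighbors(x):    # Step 4: relax each outgoing edge
            new_g = g[x] + d
            if new_g < g.get(child, float("inf")):
                g[child] = new_g         # smaller cost: update edge and F
                parent[child] = x
                closed.discard(child)    # reopen a closed node if improved
                heapq.heappush(open_list, (new_g + h(child), child))
    return None
```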

Example 2.7 (p. 27) (Figure: the same 3x3 grid maze shown twice; the left copy is labeled with the true cost of each edge, the right copy with the estimated cost, i.e. the same heuristic values as in Example 2.6.)

Example 2.7 (p. 27)

Step | Open List             | Closed List
0    | {(0,0),4}             | --
1    | {(0,1),5} {(1,0),8}   | {(0,0),4}
2    | {(0,2),6} {(1,0),8}   | {(0,0),4} {(0,1),5}
3    | {(1,2),6} {(1,0),8}   | {(0,0),4} {(0,1),5} {(0,2),6}
4    | {(1,0),8} {(1,1),8}   | {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6}
5    | {(1,1),8} {(2,0),8}   | {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8}
6    | {(2,0),8} {(2,1),10}  | {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8} {(1,1),8}
7    | {(2,1),10}            | {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8} {(1,1),8} {(2,0),8}
8    | {(2,2),10}            | {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8} {(1,1),8} {(2,0),8} {(2,1),10}
9    | (2,2) = target node   | {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8} {(1,1),8} {(2,0),8} {(2,1),10}

Properties of the A* algorithm The A* algorithm finds the best solution because it considers both the cost accumulated so far and the estimated future cost. A sufficient condition for obtaining the best solution is that the estimated future cost never exceeds the true future cost. If this condition is not satisfied, the best solution may not be found; algorithms of this kind are called A algorithms.

Comparison between A* algorithms Suppose that A1 and A2 are two A* algorithms. If H1(x) >= H2(x) for every node x (both estimates remaining admissible), we say A1 is more efficient: it expands no more nodes than A2. That is, the closer H(x) is to the true value H*(x), the better the algorithm. If H(x) = H*(x), we can find the solution directly. At the other extreme, if H(x) = 0 for all x (the most conservative case), the A* algorithm reduces to the uniform cost search algorithm.

Some heuristics for finding H(x) For any given node x, we need an estimation function to compute H(x). For the maze problem, for example, H(x) can be estimated by the Manhattan distance between the current node and the target node. This distance is usually smaller than the true cost (which is what we want), because some edges may not exist in practice. For more complex problems, we may need a method (e.g. a neural network) to learn the function from experience or observed data.
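
For the grid maze of Examples 2.6 and 2.7, the Manhattan-distance estimate can be written directly (a sketch; the function name and the (2,2) default target are mine):

```python
def manhattan(node, target=(2, 2)):
    """Manhattan distance from node = (x, y) to the target cell.

    When every edge costs at least 1, this never exceeds the true
    path cost, so it is an admissible H(x) for the maze examples.
    """
    return abs(target[0] - node[0]) + abs(target[1] - node[1])
```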

Generalization of search Generally speaking, search is the problem of finding the best solution x* from a given domain D. Here, D is a set containing all states or candidate solutions, and each state is often represented as an n-dimensional state vector or feature vector. To decide whether a state x is the best (or desired) one, we need an objective function (e.g. a cost function) f(x). The problem can then be formulated as: min f(x), for x in D.
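
In this formulation, search over a small finite domain is just an argmin; a toy sketch (the domain D and the cost function f are made up for illustration):

```python
# Toy instance of "min f(x) for x in D": D is a finite set of
# 2-dimensional state vectors, f is a cost function over states.
D = [(0, 0), (1, 2), (3, 3), (2, 2)]
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2  # cost of a state

x_star = min(D, key=f)    # exhaustive search: feasible only for small D
print(x_star, f(x_star))  # -> (2, 2) 1
```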

Optimization and search The result of a search is not necessarily the best solution (e.g. depth-first search). However, search guided by an objective function usually yields more desirable solutions (e.g. shortest-path search). The goal of heuristic search is to minimize the objective function with the minimum effort (e.g. best-first search), although the solution found may not be optimal.

Various optimization problems If D is the whole n-dimensional Euclidean space, the problem is unconstrained. If D is defined by some functions, the problem is constrained, and is often formulated as: min f(x) s.t. x in D. All x in D are called feasible solutions, and D is usually defined by a group of functions as follows: D = {x | g_i(x) <= 0, i = 1, 2, ...}.

Various optimization problems If the objective function f(x) and all constraint functions are linear, the problem is a linear programming problem, and can be solved with the simplex algorithm. If f(x) is quadratic (2nd-order) and the constraint functions are linear, the problem is called a quadratic programming problem. If f(x) is a convex function and D is a convex set, the global optimal solution can be found relatively easily.
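
As a concrete instance, a small linear program can be handed to an off-the-shelf simplex-style solver; a sketch assuming SciPy is available (the problem data are made up):

```python
from scipy.optimize import linprog

# min  -x1 - 2*x2          (i.e. maximize x1 + 2*x2)
# s.t.  x1 +   x2 <= 4
#       x1 + 3*x2 <= 6
#       x1, x2 >= 0
c = [-1, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimal vertex (3, 1) with objective -5
```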

Local and global optimal solutions If f(x) >= f(x*) for all x in an epsilon-neighborhood of x* (see the first line of p. 31), x* is a local optimal solution. If f(x) >= f(x*) for all x in D, x* is a global optimal solution. The goal of search is to find the global optimal solution, but this is often difficult; in practice, we may settle for a local but useful solution.
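
The distinction is easy to see numerically: a local (gradient-based) optimizer started from different points can land in different minima. A sketch assuming SciPy; the two-minimum function is made up for illustration:

```python
from scipy.optimize import minimize

# x^4 - 3x^2 + x has a global minimum near x = -1.30 and a
# local (non-global) minimum near x = 1.13.
f = lambda x: x[0] ** 4 - 3 * x[0] ** 2 + x[0]

for x0 in (-2.0, 2.0):            # two different starting points
    res = minimize(f, [x0])       # local optimizer (BFGS by default)
    print(x0, res.x, res.fun)     # each start finds the nearest minimum
```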

Convex function and convex set (Figure: left, a convex function whose chord between x1 and x2 lies above the graph; right, a convex set containing the whole segment λx1 + (1-λ)x2 between any two of its points.) A function f is convex if, for any x1, x2 in D and λ in [0, 1]: f(λx1 + (1-λ)x2) <= λf(x1) + (1-λ)f(x2).
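
The defining inequality can be spot-checked numerically for a candidate function (a sketch; f and the sample points are arbitrary):

```python
import numpy as np

f = lambda x: x ** 2                  # a convex function
x1, x2 = -1.5, 2.0                    # any two points of the domain

for lam in np.linspace(0.0, 1.0, 5):  # chord must lie above the graph
    lhs = f(lam * x1 + (1 - lam) * x2)
    rhs = lam * f(x1) + (1 - lam) * f(x2)
    assert lhs <= rhs + 1e-12, (lam, lhs, rhs)
print("convexity inequality holds at the sampled points")
```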

A simple but difficult problem The Schaffer function F6 and its cross-section at x2 = 0: f(x) = 0.5 + (sin^2(sqrt(x1^2 + x2^2)) - 0.5) / (1.0 + 0.001(x1^2 + x2^2))^2. (Figure: the F6 surface and its x2 = 0 slice, showing ripples of local optima around the global minimum at the origin.)
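
A direct transcription of F6 (a sketch; the function name is mine):

```python
import numpy as np

def schaffer_f6(x1, x2):
    """Schaffer function F6: global minimum f = 0 at the origin,
    surrounded by concentric ripples of local minima."""
    r2 = x1 ** 2 + x2 ** 2
    return 0.5 + (np.sin(np.sqrt(r2)) ** 2 - 0.5) / (1.0 + 0.001 * r2) ** 2

print(schaffer_f6(0.0, 0.0))  # -> 0.0, the global optimum
print(schaffer_f6(3.0, 0.0))  # a nearby point on the first ripple
```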

Homework for lecture 4 (1) Ex. 2.4 (p. 29 in the textbook): For the search graph shown in Fig. 2.7, add an edge with cost 1 between nodes (0,1) and (1,1), and find the shortest path between the initial node (0,0) and the target node (2,2) using the A* algorithm. Summarize the search process in the same way as Table 2.9 on p. 28 of the textbook. Submit the result (in hardcopy) to the TA during the exercise class.

Homework for lecture 4 (2) Based on the skeleton program, complete a program containing uniform cost search, best-first search, and the A* algorithm. Here, we suppose that A is the initial node and E is the target. The cost of each edge and the heuristic value of each node are given in the figure. (Figure: a search graph with nodes A (15), B (14), C (10), D (2), E (0), F (5), G (9), H (11); heuristic values in parentheses, edge costs as labeled.)

Quizzes
- What is the result of uniform cost search?
- What are the advantages and disadvantages of the best-first search algorithm compared with the uniform cost search algorithm?
- What condition is needed to guarantee the best solution when we use the A* algorithm?
- Write the definitions of a local optimal solution and a global optimal solution. Local optimal solution: Global optimal solution: