
Monotonicity. Admissible search: a search that is guaranteed to find the shortest path to the goal. Monotonicity (also called consistency) is local admissibility: a monotone heuristic ensures that the search finds a minimal-cost path to every state it encounters, not just to the goal.

It takes the accumulated cost into consideration (for a distance problem): along any path from the root, the f-value never decreases. If this is true, the heuristic is monotonic. Example of a violation: f(n) = g(n) + h(n) = 3 + 4 = 7, but for the successor n', f(n') = g(n') + h(n') = 4 + 2 = 6, so f drops from 7 to 6 and the heuristic is not monotonic. (Figure omitted.)

Non-monotonic vs. monotonic. If f(n') < f(n), the heuristic is non-monotonic at that step. The pathmax correction sets f(n') = max(f(n), g(n') + h(n')), i.e. the child inherits the parent's f-value whenever its own value would be smaller. Stated in terms of the heuristic cost alone, monotonicity requires h(n) - h(n') <= cost(n, n') for every successor n' of n.
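To make the pathmax correction concrete, here is a minimal Python sketch; the function name f_with_pathmax is my own, and the numbers are the ones from the slide's example above.

def f_with_pathmax(f_parent, g_child, h_child):
    # Plain A* would use f_child = g_child + h_child.
    # Pathmax never lets f decrease along a path: if the child's value
    # would drop below the parent's, reuse the parent's f-value instead.
    return max(f_parent, g_child + h_child)

# Values from the slide: parent has g=3, h=4 (f=7); child has g=4, h=2.
f_parent = 3 + 4                       # 7
f_child = f_with_pathmax(f_parent, 4, 2)
print(f_child)                         # 7, not 6 -- the drop is smoothed away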

Informedness. For two A* heuristics h1 and h2, if h1(n) <= h2(n) for all states n in the search space, then h2 is said to be more informed than h1. Both h1 and h2 yield an optimal path, but h2 examines fewer states in the process.
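As an illustration of one heuristic dominating another, here is a small Python sketch using the 8-puzzle (my example, not from the slides): h1 counts misplaced tiles, h2 sums Manhattan distances, and h1(n) <= h2(n) for every state, so h2 is the more informed of the two.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)    # 0 is the blank

def h1(state):
    # Number of misplaced tiles (blank not counted).
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    # Total Manhattan distance of each tile from its goal square.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        goal_i = GOAL.index(tile)
        total += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
    return total

s = (1, 2, 3, 4, 5, 6, 0, 7, 8)
print(h1(s), h2(s))                    # 2 2 here; in general h1(s) <= h2(s)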

Monotonic heuristics are admissible. Consider any path through states S1, S2, ..., Sg, where S1 is the start and Sg is the goal. Monotonicity gives one inequality per step:
h(S1) - h(S2) <= cost(S1, S2)
h(S2) - h(S3) <= cost(S2, S3)
...
h(Sg-1) - h(Sg) <= cost(Sg-1, Sg)
Adding these inequalities, the intermediate terms cancel (a telescoping sum), leaving h(S1) - h(Sg) <= cost(S1, Sg). Since h(Sg) = 0 at the goal, h(S1) never overestimates the true path cost, so the heuristic is admissible.

With h(n) = 0 the search is uninformed; breadth-first search is an example. A* with a non-trivial heuristic is therefore more informed than breadth-first search.
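The point that A* with h(n) = 0 degenerates into an uninformed (cost-ordered, breadth-first-like) search can be seen in a small sketch; the a_star function and the toy graph below are my own illustration, not from the slides.

import heapq

def a_star(start, goal, successors, h):
    # successors(state) yields (next_state, step_cost) pairs.
    # With h = lambda s: 0 this is uniform-cost search, which on
    # unit-cost graphs expands nodes in breadth-first order.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Tiny made-up graph:
graph = {"S": [("A", 1), ("B", 1)], "A": [("G", 1)], "B": [("G", 1)], "G": []}
print(a_star("S", "G", lambda s: graph[s], lambda s: 0))   # ['S', 'A', 'G']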

Adversarial search (games). Aim: move in such a way as to stop the opponent from making a good or winning move. Game playing can use tree search; the game tree alternates between the two players' moves.

Things to remember:
1. Every move is vital.
2. The opponent could win on the next move or on a subsequent move.
3. Keep track of the safest moves.
4. The opponent is well informed.
5. Consider how the opponent is likely to respond to your moves.

Two-move win example (game-tree figure omitted): players P1 and P2 alternate moves in a small tree whose nodes are labelled A through J with the player to move at each level. Reading the diagram, the safest move for P1 is always A, and the safest move for P2 is also A (if allowed the first move).

Minimax procedure for games. Assumption: the opponent has the same knowledge of the state space and makes a consistent effort to win; both players are equally informed. MAX is the player trying to win (maximise advantage); MIN is the label for the opponent trying to minimise MAX's score.

Rules (a minimax sketch follows this list):
1. Label alternating levels of the tree MAX and MIN.
2. Assign values to leaf nodes: 0 if MIN wins, 1 if MAX wins.
3. Propagate values up the graph: a MAX parent is assigned the maximum value of its children; a MIN parent is assigned the minimum value of its children.
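A minimal Python sketch of this propagation rule; the nested-list game tree is a made-up example, not one taken from the slides.

def minimax(node, is_max):
    # Leaves carry 1 (MAX wins) or 0 (MIN wins); internal nodes are
    # lists of children. MAX levels take the max, MIN levels the min.
    if not isinstance(node, list):
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Made-up two-ply tree: MAX to move at the root, MIN at depth 1.
tree = [[1, 0], [1, 1], [0, 0]]
print(minimax(tree, True))    # 1 -- MAX can force a win via the middle branch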

Minimaxing to a fixed ply depth (complex games). Strategy: n-move look-ahead. Suppose you are in the middle of the game: win/lose values cannot be assigned at that stage, so a heuristic evaluation is applied to the frontier nodes, and the values are then propagated back up to indicate a winning or losing trend.

Summary: assign heuristic values to the leaves of the n-level graph and propagate them up to the root. The root value indicates the best state that can be reached in n moves from the start state: maximise for MAX parents, minimise for MIN parents.

Example: tic-tac-toe (board diagram omitted). M(n) = total number of my possible winning lines, O(n) = total number of the opponent's possible winning lines, and the evaluation is E(n) = M(n) - O(n).
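A small Python sketch of this evaluation function; the board encoding ("X", "O", or None in a 9-element list) and the function names are my own choices.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def open_lines(board, player):
    # A line is still a possible winning line for player
    # if the opponent has no mark anywhere on it.
    other = "O" if player == "X" else "X"
    return sum(1 for line in LINES
               if all(board[i] != other for i in line))

def evaluate(board, me="X"):
    opponent = "O" if me == "X" else "X"
    return open_lines(board, me) - open_lines(board, opponent)   # E(n) = M(n) - O(n)

# X in the centre of an otherwise empty board:
board = [None] * 9
board[4] = "X"
print(evaluate(board, "X"))    # 4 -- all 8 lines stay open for X, only 4 for O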

Horizon effect. A heuristic applied with limited look-ahead may steer the player into a position that looks good within the search horizon but is actually bad, and the player may lose the game. Adding some extra search depth can partially reduce this effect.

Alpha-beta procedure. The minimax procedure pursues all branches in the space; some of them could have been ignored or pruned. To improve efficiency, pruning is applied to two-person games.

Simple idea (short-circuit evaluation): in "if A > 5 or B < 0", if the first condition A > 5 succeeds, then B < 0 need not be evaluated; in "if A > 5 and B < 0", if the first condition fails, then evaluating B < 0 is unnecessary. Alpha-beta pruning applies the same idea to game trees.

Alpha-beta example (tree figure omitted). The root a is a MAX node with MIN children b and c. Node b evaluates to 0.4; c's children are d = 0.6 and the MAX node e, whose children are f = -0.5 and g = -0.2, so e = -0.2 and c = min(0.6, -0.2) = -0.2. MAX can score at most -0.2 by moving a - c - e, so MAX has a better option in moving to b.

Alpha-beta rules. A MAX node ignores values <= alpha (the value it is already guaranteed to score at least) found at MIN nodes below it. A MIN node ignores values >= beta (the most it can be forced to concede) found at MAX nodes below it. Example (figure omitted): a MAX node A has children B = 10 and the MIN node C, whose children are G = 0 and H. A can score at least 10 (alpha), while C can score at most 0 (beta); since 0 <= 10, C can never improve on B, so C's remaining child H need not be examined.
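A compact Python sketch of alpha-beta pruning on the same nested-list trees used in the minimax sketch above; the tree below mirrors the A/B/C/G/H example, with H given an arbitrary value since the slide does not state one.

def alphabeta(node, is_max, alpha=float("-inf"), beta=float("inf")):
    # Leaves are numbers; internal nodes are lists of children.
    if not isinstance(node, list):
        return node
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:      # beta cut-off: MIN above will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:      # alpha cut-off: MAX above will avoid this branch
                break
        return value

# B = 10, C = [G = 0, H = 7 (arbitrary)].  H is never examined,
# because C's beta (0) is already <= A's alpha (10).
print(alphabeta([10, [0, 7]], True))   # 10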

Complexity reduction. Complexity cost can be estimated roughly by measuring the size of the open and closed lists. (A) Beam search: only the n most promising states are kept for further consideration, i.e. a bound is applied to the open list. The procedure may miss the solution by pruning it too early. (A sketch follows below.)
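A minimal Python sketch of beam search over a successor function, assuming lower heuristic scores are better; the function beam_search and the toy integer example are my own, not from the slides.

def beam_search(start, goal, successors, h, beam_width):
    # Keep only the beam_width most promising states (lowest h) at each step.
    # This bounds the open list but can prune the branch holding the solution.
    frontier = [start]
    visited = {start}
    while frontier:
        if goal in frontier:
            return goal
        candidates = []
        for state in frontier:
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    candidates.append(nxt)
        candidates.sort(key=h)
        frontier = candidates[:beam_width]    # the bound on the open list
    return None

# Toy example: walk integers from 0 towards 10 by +1/+2 steps, beam width 2.
print(beam_search(0, 10, lambda s: [s + 1, s + 2], lambda s: abs(10 - s), 2))   # 10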

(B) More informedness: apply a more informed heuristic to reduce the number of states examined. This may, however, increase the computational cost of evaluating the heuristic at each node.