
Local Search

Local Search
- Light-memory search method: no search tree; only the current state is represented!
- Only applicable to problems where the path is irrelevant (e.g., 8-queens), unless the path is encoded in the state
- Many similarities with optimisation techniques

Gradient Descent
1) S ← initial state
2) Repeat:
   a) S' ← arg min over S' in SUCCESSORS(S) of h(S')
   b) if GOAL?(S') return S'
   c) if h(S') < h(S) then S ← S' else return failure

Similar to:
- hill climbing with -h
- gradient descent over continuous space
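As a concrete reading of the pseudocode, here is a minimal Python sketch; the names `successors`, `h`, and `is_goal` are placeholders for problem-specific callbacks, not names from the slides:

```python
def greedy_descent(initial, successors, h, is_goal):
    """Move to the best successor while h strictly decreases (hill climbing with -h)."""
    s = initial
    while True:
        best = min(successors(s), key=h)   # arg min over SUCCESSORS(S) of h(S')
        if is_goal(best):
            return best                    # GOAL?(S') -> return S'
        if h(best) < h(s):
            s = best                       # strict improvement: keep descending
        else:
            return None                    # local minimum: failure
```

On a toy one-dimensional problem (states are integers, h measures distance to a target), the loop walks straight downhill to the goal; if no successor improves h, it reports failure exactly as step (c) does.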

Application: 8-Queens
Repeat n times:
1) Pick an initial state S at random with one queen in each column
2) Repeat k times:
   a) If GOAL?(S) then return S
   b) Pick an attacked queen Q at random
   c) Move Q within its column to minimize the number of attacking queens → new S [min-conflicts heuristic]
3) Return failure


Application: 8-Queens
Why does it work?
1) There are many goal states that are well-distributed over the state space
2) If no solution has been found after a few steps, it is better to start all over again; building a search tree would be much less efficient because of the high branching factor
3) Running time is almost independent of the number of queens
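The restart-plus-repair scheme above can be sketched as a short min-conflicts solver. The representation (one row index per column) follows the slide; the helper names and iteration limits are illustrative choices:

```python
import random

def conflicts(rows, col):
    """Number of queens attacking the queen in `col` (same row or same diagonal)."""
    return sum(1 for c in range(len(rows)) if c != col and
               (rows[c] == rows[col] or abs(rows[c] - rows[col]) == abs(c - col)))

def min_conflicts_queens(n=8, restarts=50, k=100, rng=random):
    for _ in range(restarts):                         # Repeat n times: random restart
        rows = [rng.randrange(n) for _ in range(n)]   # one queen per column
        for _ in range(k):                            # Repeat k times
            attacked = [c for c in range(n) if conflicts(rows, c) > 0]
            if not attacked:                          # GOAL?(S): no queen is attacked
                return rows
            col = rng.choice(attacked)                # pick an attacked queen at random
            # move it within its column to minimize the number of attacks
            rows[col] = min(range(n),
                            key=lambda r: conflicts(rows[:col] + [r] + rows[col + 1:], col))
    return None                                       # failure after all restarts
```

With the default limits this almost always returns a valid placement in well under a second, which illustrates point 3: the repair loop, not tree search, does the work.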

Gradient Descent
1) S ← initial state
2) Repeat:
   a) S' ← arg min over S' in SUCCESSORS(S) of h(S')
   b) if GOAL?(S') return S'
   c) if h(S') < h(S) then S ← S' else return failure

May easily get stuck in local minima:
→ Random restart (as in the n-queens example)
→ Monte Carlo descent

Monte Carlo Descent
1) S ← initial state
2) Repeat k times:
   a) If GOAL?(S) then return S
   b) S' ← successor of S picked at random
   c) if h(S') ≤ h(S) then S ← S'
   d) else
      - Δh = h(S') − h(S)
      - with probability ~ exp(−Δh/T), where T is called the temperature, do: S ← S' [Metropolis criterion]
3) Return failure

Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases T.

Simulated Annealing
1) S ← initial state
2) Repeat k times:
   a) If GOAL?(S) then return S
   b) S' ← successor of S picked at random
   c) if h(S') ≤ h(S) then S ← S'
   d) else
      - Δh = h(S') − h(S)
      - with probability ~ exp(−Δh/T), where T is called the temperature, do: S ← S' [Metropolis criterion]
3) Return failure

Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases T.
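A minimal Python sketch of this loop; the geometric cooling schedule and the particular constants are my assumptions, since the slide only says that T starts large and decreases slowly:

```python
import math
import random

def simulated_annealing(initial, random_successor, h, is_goal,
                        k=10000, t0=10.0, cooling=0.999, rng=random):
    s, t = initial, t0
    for _ in range(k):                        # Repeat k times
        if is_goal(s):
            return s
        s2 = random_successor(s, rng)         # successor picked at random
        dh = h(s2) - h(s)
        if dh <= 0 or rng.random() < math.exp(-dh / t):
            s = s2                            # accept: downhill, or uphill by Metropolis
        t *= cooling                          # slowly lower the temperature
    return None                               # failure after k iterations
```

At high T nearly every move is accepted (a random walk); as T falls, uphill moves become rare and the loop behaves like greedy descent, which is why it can escape the local minima that plain gradient descent gets stuck in.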

Parallel Local Search Techniques
They perform several local searches concurrently, but not independently:
- Beam search
- Genetic algorithms
See R&N, pages 115-119.

Local Beam Search
- Idea: keep k states instead of 1; choose the top k of all their successors
- Not the same as k searches run in parallel: searches that find good states recruit other searches to join them
- Problem: quite often, all k states end up on the same local hill
- Idea: choose k successors randomly, biased towards good ones
- Observe the close analogy to natural selection!
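The deterministic top-k variant described above might look like this sketch (the helper names are illustrative; the stochastic variant would sample the pool by fitness instead of taking the k best):

```python
import heapq

def local_beam_search(initials, successors, h, is_goal, k=4, max_iters=100):
    """Keep the k best states; each round, pool ALL their successors and keep the top k."""
    beam = sorted(initials, key=h)[:k]
    for _ in range(max_iters):
        for s in beam:
            if is_goal(s):
                return s
        # successors of every beam state compete together, so good states
        # "recruit" the whole beam toward their region of the space
        pool = {s2 for s in beam for s2 in successors(s)}
        if not pool:
            return None
        beam = heapq.nsmallest(k, pool, key=h)
    return None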

Genetic Algorithms
Idea: stochastic local beam search + generate successors from pairs of states

Genetic Algorithms
- GAs require states encoded as strings (GPs use programs)
- Crossover helps iff substrings are meaningful components
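A toy sketch of such a GA on bit-strings; fitness-proportional selection, one-point crossover, and the mutation rate are conventional choices of mine, not details from the slides:

```python
import random

def genetic_algorithm(population, fitness, generations=100, mutation_rate=0.1, rng=random):
    """Stochastic beam search over strings: parents are sampled by fitness,
    children are built by 1-point crossover plus occasional point mutation."""
    for _ in range(generations):
        # +1 keeps every individual selectable even when all fitnesses are 0
        weights = [fitness(ind) + 1 for ind in population]
        nxt = []
        for _ in range(len(population)):
            mom, dad = rng.choices(population, weights=weights, k=2)
            cut = rng.randrange(1, len(mom))          # 1-point crossover
            child = mom[:cut] + dad[cut:]
            if rng.random() < mutation_rate:          # flip one random bit
                i = rng.randrange(len(child))
                child = child[:i] + ('1' if child[i] == '0' else '0') + child[i + 1:]
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)
```

On a "count the 1s" fitness, crossover works well precisely because every substring is a meaningful component; on a deceptive encoding the same operator would splice together useless fragments, which is the point of the second bullet above.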

Search Problems
- Blind search
- Heuristic search: best-first and A*
- Construction of heuristics
- Variants of A*
- Local search

When to Use Search Techniques?
1) The search space is small, and
   - no other technique is available, or
   - developing a more efficient technique is not worth the effort
2) The search space is large, and
   - no other technique is available, and
   - there exist good heuristics

Anytime A*
Three changes make A* an anytime algorithm:
1) Use a non-admissible heuristic so that suboptimal solutions are found quickly.
2) Continue the search after the first solution is found, using it to prune the open list.
3) When the open list is empty, the best solution generated is optimal.
How to choose a non-admissible heuristic?

Anytime A*
How to choose a non-admissible heuristic?
Weighted evaluation functions: f'(n) = (1−w)·g(n) + w·h(n)
- A higher weight on h(n) tends to make the search go deeper.
- The search is admissible if h(n) is admissible and w ≤ 0.5.
- Otherwise the search is non-admissible, but it normally finds solutions much faster.
- An appropriate w makes possible a tradeoff between solution quality and computation time.
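The weighted evaluation function drops into a standard best-first loop. This is a sketch, assuming a successor function that yields (state, cost) pairs; with a consistent h and w = 0.5, f' is just f = g + h rescaled by 1/2, so the result is still optimal:

```python
import heapq

def weighted_astar(start, goal, neighbors, h, w=0.5):
    """Best-first search with f'(n) = (1-w)*g(n) + w*h(n).
    w = 0.5 rescales plain A*; w > 0.5 trades optimality for speed."""
    frontier = [(w * h(start), 0, start, [start])]   # (f', g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):  # only keep improved routes
                best_g[succ] = g2
                f2 = (1 - w) * g2 + w * h(succ)
                heapq.heappush(frontier, (f2, g2, succ, path + [succ]))
    return None, float('inf')
```

Raising w above 0.5 biases the queue toward states that look close to the goal, which is exactly the "search deeper" behaviour described above, at the cost of the admissibility guarantee.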

Online Search
Can be solved only by an agent executing actions (rather than by a purely computational process).
Action knowledge:
- Actions(s): the list of allowed actions in state s
- c(s, a, s'): step-cost function (cannot be used until s' is determined)
- Goal-Test(s)
Assumptions:
- The agent can recognise previously visited states.
- Actions are deterministic.
- The agent has access to an admissible heuristic h(s) that estimates the distance from the current state to a goal state.

Online DF-Search
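The body of this slide did not survive transcription. Assuming it refers to the standard online depth-first search agent (as in R&N), here is a simplified sketch; for brevity, backtracking here jumps to the predecessor state, whereas a real online agent must execute the reverse action:

```python
def online_dfs(start, actions, step, goal_test):
    """Explore by executing actions one at a time; `step(s, a)` performs a in s
    and returns the resulting state. Assumes states can be recognised on revisit."""
    result = {}          # learned model: result[(s, a)] = s'
    untried = {}         # actions not yet tried in each state
    unbacktracked = {}   # predecessors to fall back to when a state is exhausted
    s = start
    while True:
        if goal_test(s):
            return s
        untried.setdefault(s, list(actions(s)))
        unbacktracked.setdefault(s, [])
        if untried[s]:
            a = untried[s].pop()
            s2 = step(s, a)                         # the agent actually executes a
            result[(s, a)] = s2
            unbacktracked.setdefault(s2, []).append(s)
            s = s2
        elif unbacktracked[s]:
            s = unbacktracked[s].pop()              # backtrack (simplified: teleport)
        else:
            return None                             # exhausted with nowhere to go
```

Because every (state, action) pair is executed at most once, the agent visits each edge a bounded number of times and terminates even in environments with cycles.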
