
Artificial Intelligence: Local Search. Vibhav Gogate, The University of Texas at Dallas. Some material courtesy of Luke Zettlemoyer, Dan Klein, Dan Weld, Alex Ihler, Stuart Russell, Mausam.

Systematic Search: Review. Be able to formulate a problem as a search problem; know tree search vs. graph search, how they operate and expand nodes from a priority queue. Breadth-first: f(n) = d(n). Depth-first: f(n) = -d(n). Uniform-cost search: f(n) = g(n). Best-first search: f(n) = h(n). A*: f(n) = g(n) + h(n).
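To make the review concrete, here is a minimal sketch (not from the lecture) of a generic best-first graph search driven by a priority queue keyed on f(n); plugging in the different definitions of f above yields uniform-cost search, greedy best-first search, or A*. The successors/is_goal interface and the f(g, depth, state) signature are assumptions for illustration.

```python
import heapq
import itertools

def generic_best_first_search(start, successors, is_goal, f):
    """Expand states in increasing order of f; the choice of f decides the strategy.

    successors(s) -> iterable of (next_state, step_cost); f(g, depth, state) -> priority.
    """
    counter = itertools.count()   # tie-breaker so the heap never has to compare states
    frontier = [(f(0.0, 0, start), next(counter), 0.0, 0, start, [start])]
    expanded = set()
    while frontier:
        _, _, g, depth, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if state in expanded:     # graph search: skip states that were already expanded
            continue
        expanded.add(state)
        for nxt, cost in successors(state):
            if nxt not in expanded:
                heapq.heappush(frontier, (f(g + cost, depth + 1, nxt), next(counter),
                                          g + cost, depth + 1, nxt, path + [nxt]))
    return None

# f definitions from the slide (h is a user-supplied heuristic):
# uniform_cost = lambda g, d, s: g           # f(n) = g(n)
# greedy       = lambda g, d, s: h(s)        # f(n) = h(n)
# a_star       = lambda g, d, s: g + h(s)    # f(n) = g(n) + h(n)
```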

Systematic Search: Properties. Completeness? Optimality? (e.g., depth-first search with graph search/path checking; A* search, admissibility vs. consistency). Time complexity? Space complexity? How to construct heuristics. Pros and cons of the various search methods, and which method to use when.

Outline. Local search techniques and optimization: hill-climbing, simulated annealing, genetic algorithms (read the book), issues with local search.

Local search and optimization. Previous lecture: the path to the goal is the solution to the problem, found by systematic exploration of the search space. This lecture: a state is the solution to the problem; for some problems the path is irrelevant, e.g., 8-queens.

Local search and optimization. Local search: keep track of a single current state, move only to neighboring states, ignore paths. Advantages: uses very little memory; can often find reasonable solutions in large or infinite (continuous) state spaces. Pure optimization problems: all states have an objective function, and the goal is to find the state with the maximum (or minimum) objective value. This does not quite fit into the path-cost/goal-state formulation, but local search can do quite well on these problems.

Trivial Algorithms. Random Sampling: generate a state randomly. Random Walk: randomly pick a neighbor of the current state. Both algorithms are asymptotically complete.
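A minimal sketch of both trivial algorithms (not from the lecture); the random_state, neighbors, and is_goal callables and the iteration caps are assumptions for illustration.

```python
import random

def random_sampling(random_state, is_goal, max_iters=100_000):
    """Repeatedly generate independent random states until one is a goal."""
    for _ in range(max_iters):
        state = random_state()
        if is_goal(state):
            return state
    return None

def random_walk(start, neighbors, is_goal, max_iters=100_000):
    """Repeatedly move to a uniformly random neighbor of the current state."""
    state = start
    for _ in range(max_iters):
        if is_goal(state):
            return state
        succs = list(neighbors(state))
        if not succs:
            return None
        state = random.choice(succs)
    return None
```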

Hill-climbing (Greedy Local Search)

function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node; neighbor, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor

The min version reverses the inequality and looks for the lowest-valued successor.

Hill-climbing search: a loop that continually moves in the direction of increasing value and terminates when a peak is reached. Also known as greedy local search. The value can be either an objective function value (maximized) or a heuristic function value (minimized). Hill climbing does not look ahead beyond the immediate neighbors; it can choose randomly among the set of best successors if multiple successors have the best value. It is like climbing Mount Everest in thick fog with amnesia.
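A minimal Python sketch of the HILL-CLIMBING pseudocode above (maximization version, not the lecture's code); neighbors and value are assumed problem-specific callables.

```python
import random

def hill_climbing(initial_state, neighbors, value):
    """Greedy local search: keep moving to the best neighbor until no neighbor improves."""
    current = initial_state
    while True:
        succs = list(neighbors(current))
        if not succs:
            return current
        best_value = max(value(s) for s in succs)
        best = [s for s in succs if value(s) == best_value]
        neighbor = random.choice(best)           # break ties among best successors randomly
        if value(neighbor) <= value(current):    # no uphill move left: a local maximum (or plateau)
            return current
        current = neighbor
```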

Landscape of search. Hill Climbing gets stuck in local minima; where it gets stuck depends on the starting state.

Example: n-queens. Put n queens on an n x n board with no two queens on the same row, column, or diagonal.

Hill-climbing search: 8-queens problem. Formulate it as an optimization problem: h = number of pairs of queens that are attacking each other. h = 17 for the board shown on the slide (3+4+2+3+2+2+1+0, counted column by column from left to right).

Search Space. State: all 8 queens on the board in some configuration. Successor function: move a single queen to another square in the same column. Example of a heuristic function h(n): the number of pairs of queens that are attacking each other (so we want to minimize this).
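A small sketch of this heuristic and successor function, assuming the usual one-queen-per-column representation (state = tuple of row indices, one per column); this is an illustration, not the lecture's code.

```python
from itertools import combinations

def attacking_pairs(state):
    """h(n): number of pairs of queens attacking each other; state[c] is the row of the queen in column c."""
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))   # same row or same diagonal

def successors(state):
    """Move a single queen to another square in the same column."""
    n = len(state)
    for col in range(n):
        for row in range(n):
            if row != state[col]:
                yield state[:col] + (row,) + state[col + 1:]
```

With the hill_climbing sketch above, minimizing h amounts to maximizing the value function lambda s: -attacking_pairs(s).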

Hill-climbing search: 8-queens problem (board shown on the slide). Is this a solution? What is h?

Hill-climbing on 8-queens. From randomly generated 8-queens starting states, 14% of the time it solves the problem and 86% of the time it gets stuck at a local minimum. However, it takes only 4 steps on average when it succeeds, and 3 on average when it gets stuck (for a state space with 8^8 ≈ 17 million states).

Hill Climbing Drawbacks: local maxima and plateaus.

Escaping Plateaus: Sideways Moves. If there are no downhill (uphill) moves, allow sideways moves in the hope that the algorithm can escape the plateau. A limit on the number of consecutive sideways moves is needed to avoid infinite loops. For 8-queens, allowing sideways moves with a limit of 100 raises the percentage of problem instances solved from 14% to 94%. However, success comes at a cost: 21 steps on average for each successful solution and 64 for each failure.
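A sketch of how sideways moves could be bolted onto the hill_climbing sketch above (the cap of 100 mirrors the 8-queens experiment; the interface is the same assumed one as before).

```python
import random

def hill_climbing_sideways(initial_state, neighbors, value, max_sideways=100):
    """Hill climbing that tolerates a bounded number of consecutive sideways moves."""
    current = initial_state
    sideways_left = max_sideways
    while True:
        succs = list(neighbors(current))
        if not succs:
            return current
        best_value = max(value(s) for s in succs)
        if best_value < value(current):
            return current                       # strict local maximum: stop
        if best_value == value(current):
            if sideways_left == 0:
                return current                   # plateau and sideways budget exhausted
            sideways_left -= 1                   # sideways move: same value, spend budget
        else:
            sideways_left = max_sideways         # uphill move: reset the budget
        current = random.choice([s for s in succs if value(s) == best_value])
```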

Hill-Climbing: Review. Solution = goal state; the path to the goal is irrelevant. Algorithm: assign a cost to each state; initialize the current state to a randomly selected state; loop: if the current state is the goal state or a local maximum, return the current state, otherwise set the current state to the best successor of the current state in terms of cost. Problems: local maxima, plateaus, initialization.

Hill Climbing: Stochastic Variations. How to escape local maxima and plateaus? Stochastic variations: restart hill-climbing with different initializations; choose each uphill move with some probability; sometimes even pick a downward move. What we will cover: random-restart hill climbing, random-walk hill climbing, and their combination.

Hill-climbing with random restarts. If at first you don't succeed, try, try again! Different variations: for each restart, run until termination vs. run for a fixed time; run a fixed number of restarts or run indefinitely. Analysis: say each search has probability p of success, e.g., for 8-queens, p = 0.14 with no sideways moves. Expected number of restarts = 1/p (why? the number of independent attempts until the first success is geometrically distributed, with mean 1/p). If you want to pick one local search algorithm, learn this one!!
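A sketch of the random-restart wrapper around the hill_climbing sketch above; random_state and is_goal are assumed problem-specific callables, and the restart cap is an assumption.

```python
def random_restart_hill_climbing(random_state, neighbors, value, is_goal, max_restarts=1000):
    """Run hill climbing from fresh random initial states until a goal is found."""
    best = None
    for _ in range(max_restarts):
        result = hill_climbing(random_state(), neighbors, value)
        if is_goal(result):
            return result                        # success on this restart
        if best is None or value(result) > value(best):
            best = result                        # remember the best local maximum so far
    return best                                  # fall back to the best state seen
```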

Hill-climbing with random walk. At each step do one of two things: Greedy: with probability p, move to the neighbor with the largest value; Random: with probability 1-p, move to a random neighbor. Hill-climbing with both: at each step do one of three things: Greedy: move to the neighbor with the largest value; Random Walk: move to a random neighbor; Random Restart: resample a new current state.
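A sketch of the random-walk variant (the default probability split and the step cap are assumptions, not values from the lecture).

```python
import random

def hill_climbing_random_walk(initial_state, neighbors, value, p=0.9, max_steps=10_000):
    """At each step: with probability p take the greedy move, otherwise a random one."""
    current = initial_state
    best = current
    for _ in range(max_steps):
        succs = list(neighbors(current))
        if not succs:
            break
        if random.random() < p:
            current = max(succs, key=value)      # greedy: move to the best neighbor
        else:
            current = random.choice(succs)       # random walk: move to any neighbor
        if value(current) > value(best):
            best = current                       # track the best state visited so far
    return best
```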

Simulated Annealing. Simulated annealing = a physics-inspired twist on random walk. Basic idea: instead of picking the best move, pick one randomly. Say the change in the objective function is d. If d is positive (i.e., an uphill move), then move to that state; otherwise, move to that state only with a probability that shrinks as d becomes more negative, so worse moves (very large negative d) are executed less often. Over time, make it less likely to accept locally bad moves, using a parameter T called the temperature!

Physical Interpretation of Simulated Annealing. A physical analogy: imagine letting a ball roll downhill on the function surface; this is like hill-climbing (for minimization). Now imagine shaking the surface while the ball rolls, gradually reducing the amount of shaking; this is like simulated annealing. Annealing = the physical process of cooling a liquid or metal until the particles settle into a frozen crystal state. In simulated annealing, the free variables are like particles; we seek a low-energy (high-quality) configuration by slowly reducing the temperature T while the particles move around randomly.

Simulated annealing

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to temperature
  local variables: current, a node; next, a node;
                   T, a temperature controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ∆E ← VALUE[next] - VALUE[current]
    if ∆E > 0 then current ← next
    else current ← next only with probability e^(∆E/T)

Temperature T. current ← next only with probability e^(∆E/T). High T: the probability of a locally bad move is higher. Low T: the probability of a locally bad move is lower. Typically, T is decreased as the algorithm runs, i.e., there is a temperature schedule.
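A minimal Python sketch of the pseudocode above; the exponential cooling schedule and its parameters are assumptions (the lecture only requires some schedule mapping time to temperature).

```python
import math
import random

def simulated_annealing(initial_state, neighbors, value,
                        t0=1.0, cooling=0.995, min_temp=1e-4):
    """Always accept uphill moves; accept downhill moves with probability e^(dE/T)."""
    current = initial_state
    T = t0
    while T > min_temp:                          # schedule: exponential cooling (assumed)
        succs = list(neighbors(current))
        if not succs:
            break
        nxt = random.choice(succs)               # a randomly selected successor of current
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt                        # uphill, or downhill accepted with prob e^(dE/T)
        T *= cooling                             # lower the temperature over time
    return current
```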

Simulated Annealing in Practice. The method was proposed in 1983 by IBM researchers for solving VLSI layout problems (Kirkpatrick et al., Science, 220:671-680, 1983). Theoretically it will always find the global optimum. Other applications: traveling salesman, graph partitioning, graph coloring, scheduling, facility layout, image processing, etc. It is useful for some problems, but can be very slow; the slowness comes about because T must be decreased very gradually to retain optimality.

Local beam search. Idea: keeping only one node in memory is an extreme reaction to memory problems. Keep track of k states instead of one. Initially: k randomly selected states. Next: determine all successors of the k states. If any of the successors is a goal, we are finished; else select the k best from the successors and repeat.

Local Beam Search (contd). Not the same as k random-restart searches run in parallel! Searches that find good states recruit other searches to join them. Problem: quite often, all k states end up on the same local hill. Idea: stochastic beam search, which chooses the k successors randomly, biased towards good ones (see the sketch below).
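A sketch covering both variants above: deterministic selection of the k best successors, or stochastic selection biased towards good ones (the fitness-proportional sampling scheme and the parameter defaults are assumptions).

```python
import random

def local_beam_search(random_state, neighbors, value, is_goal,
                      k=10, max_iters=1000, stochastic=False):
    """Keep k states; each iteration pools all their successors and keeps k of them."""
    states = [random_state() for _ in range(k)]          # initially: k randomly selected states
    for _ in range(max_iters):
        pool = [s for st in states for s in neighbors(st)]
        if not pool:
            break
        for s in pool:
            if is_goal(s):
                return s                                 # some successor is a goal: finished
        if stochastic:
            vals = [value(s) for s in pool]              # stochastic beam search: sample k
            lo = min(vals)                               # successors biased towards good ones
            states = random.choices(pool, weights=[v - lo + 1e-9 for v in vals], k=k)
        else:
            states = sorted(pool, key=value, reverse=True)[:k]   # keep the k best successors
    return max(states, key=value)
```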

Hey! Perhaps sex can improve search? Sure!

Genetic algorithms. A twist on local search: a successor is generated by combining two parent states. A state is represented as a string over a finite alphabet (e.g., binary); for 8-queens, a state = the positions of the 8 queens, one per column. Start with k randomly generated states (the population). Evaluation function (fitness function): higher values for better states, the opposite of a heuristic function, e.g., the number of non-attacking pairs in 8-queens. Produce the next generation of states by simulated evolution: random selection, crossover, and random mutation.
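A small sketch of one generation of a genetic algorithm for 8-queens under the representation above (state = tuple of queen rows, one per column); the selection scheme and mutation rate are assumptions for illustration.

```python
import random
from itertools import combinations

N = 8

def fitness(state):
    """Number of non-attacking pairs of queens (higher is better; 28 is the maximum for 8 queens)."""
    attacking = sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
                    if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))
    return N * (N - 1) // 2 - attacking

def crossover(parent1, parent2):
    """Splice the two parents at a random crossover point."""
    cut = random.randint(1, N - 1)
    return parent1[:cut] + parent2[cut:]

def mutate(state, rate=0.1):
    """With probability rate, move one queen to a random row within its column."""
    if random.random() < rate:
        col = random.randrange(N)
        state = state[:col] + (random.randrange(N),) + state[col + 1:]
    return state

def next_generation(population):
    """Fitness-proportional selection, crossover, and random mutation for one generation."""
    weights = [fitness(s) + 1 for s in population]        # +1 keeps every weight positive
    children = []
    for _ in range(len(population)):
        p1, p2 = random.choices(population, weights=weights, k=2)
        children.append(mutate(crossover(p1, p2)))
    return children

# Usage sketch: population = [tuple(random.randrange(N) for _ in range(N)) for _ in range(20)]
```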

Genetic Algorithms: Read the book. A genetic algorithm is a biologically inspired variant of stochastic beam search. Positive points: an appealing connection to human evolution (neural networks and genetic algorithms are metaphors!); probabilistically complete, just like random walk. Negative points (and that is why we won't cover it!!!): a large number of tunable parameters; difficult to replicate performance from one problem to another; a lack of good empirical studies comparing it to simpler methods; useful on some (small?) set of problems, but no convincing evidence that GAs are better than hill-climbing with random restarts in general.