
Local Search (Ch. 4-4.1)

Local search Before, we tried to find a path from the start state to a goal state using a fringe set Now we will look at algorithms that do not keep a fringe at all, but only consider neighbors of the current state Some problems may not have a clear best goal, yet we have some way of evaluating a state (how good is this state?)

Local search Today we will talk about 4 (more) algorithms: 1. Hill climbing 2. Simulated annealing 3. Beam search 4. Genetic algorithms All of these will only consider neighbors while looking for a goal

Local search General properties of local searches: - Fast and low memory - Can find good solutions if we can estimate the value of a state - Hard to find the optimal path In general these types of searches are used when the tree is too big to find a truly optimal solution

Hill climbing Remember greedy best-first search? 1. Pick the node on the fringe with the best heuristic 2. Repeat step 1... Hill climbing is only a slight variation: 1. Pick the best among the current state and its children 2. Repeat step 1... What are the pros and cons of this?
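The two-step loop above can be sketched in Python (a minimal sketch: `neighbors` and `h` are hypothetical stand-ins for a problem's successor function and heuristic, and here we minimize h):

```python
def hill_climb(start, neighbors, h):
    """Greedy local search: move to the best child while it improves on h."""
    current = start
    while True:
        best = min(neighbors(current), key=h, default=current)
        if h(best) >= h(current):   # no child beats the current state
            return current          # stuck: local optimum or plateau
        current = best

# toy usage: minimize (x - 3)^2 over the integers, neighbors are x-1 and x+1
result = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: (x - 3) ** 2)
```

Starting from 0 this walks one step at a time to 3, the global minimum of this toy landscape.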

Hill climbing This actually works surprisingly well, if getting close to the goal is sufficient (and the actions are not too restrictive) Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)
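For intuition, the Newton update x_{n+1} = x_n - f(x_n)/f'(x_n) can be written out directly (a small sketch, here finding a root of f(x) = x^2 - 2, i.e. sqrt(2)):

```python
def newton(f, df, x, steps=20):
    """Newton's method: repeatedly jump to where the tangent line hits zero."""
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# root of f(x) = x^2 - 2 starting from x = 1, i.e. an approximation of sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```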

Hill climbing

Hill climbing For the 8-puzzle we had two (consistent) heuristics: h1 - number of mismatched tiles h2 - sum of Manhattan distances from each tile's current position to its goal position Let's try hill climbing on this problem!

Hill climbing Can get stuck in: - Local maximum - Plateau/shoulder A local maximum has a basin of attraction around it Can get an infinite loop on a plateau if not careful (limit the step count)

Hill climbing To avoid these pitfalls, most local searches incorporate some form of randomness Hill climbing variants: Stochastic hill climbing - choose a random move from among the improving moves Random-restart hill climbing - run hill climbing until a maximum is found (or it starts looping), then restart from another random spot and repeat
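Random-restart hill climbing can be sketched by wrapping the greedy loop (hypothetical helpers: `random_state` draws a fresh starting state, `neighbors` and `h` as before, minimizing h):

```python
import random

def random_restart(random_state, neighbors, h, restarts=10):
    """Run greedy hill climbing from several random starts; keep the best."""
    best = None
    for _ in range(restarts):
        state = random_state()
        while True:                                  # plain greedy descent
            step = min(neighbors(state), key=h, default=state)
            if h(step) >= h(state):
                break                                # local optimum reached
            state = step
        if best is None or h(state) < h(best):
            best = state
    return best
```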

Simulated annealing The idea behind simulated annealing is to act more randomly at the start (to explore), then make greedier choices later https://www.youtube.com/watch?v=qfd3cmbn28 An analogy might be a hard-boiled egg: 1. To crack the shell you hit it rather hard (but not too hard!) 2. You then hit it lightly to extend the cracked area around the first crack 3. Carefully peel the rest

Simulated annealing The process is: 1. Pick a random action and evaluate the result 2. If the result is better than the current state, take it 3. If the result is worse, accept it probabilistically 4. Decrease the acceptance chance in step 3 5. Repeat... (see: SAacceptance.cpp) Specifically, we track some temperature T: 3. Accept a worse result with probability e^(-ΔE/T), where ΔE is how much worse it is 4. Decrease T (linearly? hard to find the best schedule...)
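The whole loop can be sketched as follows (a sketch, not the course's SAacceptance.cpp: we minimize h, use the standard acceptance probability e^(-ΔE/T), and cool T geometrically rather than linearly):

```python
import math, random

def simulated_annealing(start, random_neighbor, h, T=10.0, cooling=0.99, steps=5000):
    """Accept worse moves with probability exp(-delta/T); cool T every step."""
    current = start
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = h(candidate) - h(current)            # delta > 0 means worse
        if delta <= 0 or random.random() < math.exp(-delta / T):
            current = candidate
        T *= cooling                                 # geometric cooling schedule
    return current
```

Early on, T is large and almost any move is accepted; as T shrinks, the loop degenerates into plain greedy descent.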

Simulated annealing Let's try SA on 8-puzzle:

Simulated annealing Let's try SA on 8-puzzle: This example did not work well, but probably due to the temperature handling We want the temperature to be fairly high at the start (to move around the graph) The hard part is slowly decreasing it over time

Simulated annealing SA does work well on the traveling salesperson problem (see: tsp.zip)
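A compact sketch of that idea (a sketch, not the course's tsp.zip: cities are Euclidean points, and the neighbor move is the classic 2-opt reversal of a random tour segment):

```python
import math, random

def tour_length(tour, pts):
    """Total length of the closed tour through the points, in order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def tsp_anneal(pts, T=10.0, cooling=0.999, steps=20000):
    """2-opt simulated annealing: reverse a random segment, accept per exp(-delta/T)."""
    tour = list(range(len(pts)))
    random.shuffle(tour)
    cost = tour_length(tour, pts)
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
        c = tour_length(cand, pts)
        if c < cost or random.random() < math.exp((cost - c) / T):
            tour, cost = cand, c
        T *= cooling
    return tour, cost
```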

Local beam search Beam search is similar to hill climbing, except we track multiple states simultaneously Initialize: start with K random nodes 1. Find all children of the K nodes 2. Select the best K children from the whole pool 3. Repeat... Unlike the previous approaches, this uses more memory to better explore the hopeful options
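The three steps above can be sketched as (hypothetical helpers as before; `heapq.nsmallest` picks the best K of the pooled children when minimizing h):

```python
import heapq, random

def beam_search(random_state, neighbors, h, k=3, iters=100):
    """Track the k best states; expand them all, keep the k best children."""
    beam = [random_state() for _ in range(k)]
    for _ in range(iters):
        pool = [c for s in beam for c in neighbors(s)]        # all children
        best = heapq.nsmallest(k, pool, key=h)                # best k of pool
        if not pool or h(best[0]) >= h(min(beam, key=h)):
            break                                             # no improvement
        beam = best
    return min(beam, key=h)

# toy usage: minimize (x - 3)^2, neighbors are x-1 and x+1
found = beam_search(lambda: random.randint(-20, 20),
                    lambda x: [x - 1, x + 1],
                    lambda x: (x - 3) ** 2)
```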

Local beam search Beam search with 3 beams Pick the best 3 options at each depth to expand Stop as in hill climbing (when the best child is worse than its parents)

Local beam search However, the basic version of beam search can get stuck in local maxima as well To help avoid this, stochastic beam search picks children with probability proportional to their values This is different from hill climbing with K restarts, as better options get more consideration than worse ones

Local beam search

Genetic algorithms Nice examples of GAs: http://rednuht.org/genetic_cars_2/ http://boxcar2d.com/

Genetic algorithms Genetic algorithms are based on how life has evolved over time They (in general) have 3 (or 5) parts: 1. Select/generate children 1a. Select 2 random parents 1b. Mutate/crossover 2. Test the fitness of the children to see if they survive 3. Repeat until convergence

Genetic algorithms Selection/survival: Typically children have a probabilistic survival rate (randomness ensures genetic diversity) Crossover: Split each parent's information into two parts, then take part 1 from parent A and part 2 from parent B Mutation: Change a random part to a random value
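Crossover and mutation as just described can be sketched as follows (a sketch assuming states are encoded as lists of genes; names are hypothetical):

```python
import random

def crossover(a, b):
    """One-point crossover: prefix from parent a, suffix from parent b."""
    cut = random.randrange(1, len(a))        # split point; both parents contribute
    return a[:cut] + b[cut:]

def mutate(child, values, rate=0.1):
    """Independently replace each gene with a random value at the given rate."""
    return [random.choice(values) if random.random() < rate else g
            for g in child]
```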

Genetic algorithms Genetic algorithms are very good at optimizing the fitness evaluation function (assuming the fitness landscape is fairly continuous) While you have to choose parameters (i.e. mutation frequency, how often to take a gene, etc.), GAs typically converge for most reasonable settings The downside is that it often takes many generations to converge to the optimum

Genetic algorithms There are a wide range of options for selecting who to bring to the next generation: - always the top (similar to hill climbing... gets stuck a lot) - choose purely by weighted random (i.e. a fitness of 4 is chosen twice as often as a fitness of 2) - choose the best plus others by weighted random Can get stuck if the pool's diversity becomes too low (then we can only hope for many random mutations)
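Weighted random selection as described can be sketched with `random.choices`, whose weights give fitness-proportional probabilities (names hypothetical):

```python
import random

def select(pool, fitness, n):
    """Draw n survivors, each chosen with probability proportional to fitness."""
    return random.choices(pool, weights=[fitness(s) for s in pool], k=n)
```

A state with fitness 4 is thus drawn twice as often as one with fitness 2, and a zero-fitness state is never drawn.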

Genetic algorithms Let's make a small (fake) example with the 4-queens problem [Slide figure: two parents with fitnesses (20) and (10) are crossed over, taking the right 1/4 of one and the left 3/4 of the other, and a third board with fitness (15) gets a mutation in column 2; the resulting child pool has fitnesses (30), (20), and (30)]

Genetic algorithms Let's make a small (fake) example with the 4-queens problem [Slide figure continues: from the child pool with fitnesses (30), (20), and (35), the next generation is drawn by weighted random selection]

Genetic algorithms https://www.youtube.com/watch?v=r9ohn5zf4uo