
Hill Climbing Many search spaces are too big for systematic search. A useful method in practice for some consistency and optimization problems is hill climbing: assume a heuristic value for each assignment of values to all variables; maintain an assignment of a value to each variable; repeatedly select a neighbor of the current assignment that improves the heuristic value to be the next current assignment.
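A minimal Python sketch of this loop, assuming the problem supplies a neighbors function and a heuristic h to be minimized (all names here are illustrative, not from the slides):

```python
def hill_climb(assignment, neighbors, h, max_steps=1000):
    """Greedy local search: repeatedly move to the best neighbor,
    stopping when no neighbor improves the heuristic (lower is better)."""
    for _ in range(max_steps):
        best = min(neighbors(assignment), key=h, default=None)
        if best is None or h(best) >= h(assignment):
            return assignment  # local optimum: no improving neighbor
        assignment = best
    return assignment
```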

Selecting Neighbors in Hill Climbing When the domains are small or unordered, the neighbors of a node correspond to choosing another value for one of the variables. When the domains are large and ordered, the neighbors of a node are the adjacent values for one of the dimensions. If the domains are continuous, you can use gradient ascent: change each variable proportionally to the gradient of the heuristic function in that direction; the value of variable $X_i$ goes from $v_i$ to $v_i + \eta \, \partial h / \partial X_i$. Gradient descent: go downhill; $v_i$ becomes $v_i - \eta \, \partial h / \partial X_i$.
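As one possible reading of the gradient rule, here is a sketch of a single descent step; grad_h is assumed to return the vector of partial derivatives $\partial h / \partial X_i$:

```python
def gradient_descent_step(values, grad_h, eta=0.01):
    """One gradient-descent step: each v_i becomes v_i - eta * dh/dX_i.
    For gradient ascent, flip the sign of the update."""
    return [v - eta * g for v, g in zip(values, grad_h(values))]
```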

Problems with Hill Climbing Foothills: local maxima that are not global maxima. Plateaus: heuristic values are uninformative. Ridges: foothills where n-step lookahead might help. Ignorance of the peak. (Figure: a search landscape with a ridge, a plateau, and a foothill labelled.)

Randomized Algorithms Consider two methods to find a maximum value: hill climbing (starting from some position, keep moving uphill and report the maximum value found), and picking values at random (report the maximum value found). Which do you expect to work better to find a maximum? Can a mix work better?

Randomized Hill Climbing As well as uphill steps, we can allow for: random steps (move to a random neighbor) and random restarts (reassign random values to all variables). Which is more expensive computationally?
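A sketch combining both ideas, under the minimizing-h convention used later for CSPs (p_random_step, restarts, and steps are illustrative parameters):

```python
import random

def randomized_hill_climb(random_assignment, neighbors, h,
                          p_random_step=0.1, restarts=10, steps=100):
    """Hill climbing with occasional random steps, wrapped in random
    restarts. A random step is cheap (one move); a random restart
    reassigns every variable, which costs more per use."""
    best = None
    for _ in range(restarts):              # random restart
        a = random_assignment()
        for _ in range(steps):
            ns = neighbors(a)
            if not ns:
                break
            if random.random() < p_random_step:
                a = random.choice(ns)      # random step
            else:
                a = min(ns, key=h)         # greedy (downhill) step
        if best is None or h(a) < h(best):
            best = a
    return best
```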

1-Dimensional Ordered Examples Two 1-dimensional search spaces; step right or left: Which method would most easily find the maximum? What happens in hundreds or thousands of dimensions? What if different parts of the search space have different structure?

Stochastic Local Search for CSPs The goal is to find an assignment with zero unsatisfied relations. Heuristic function: the number of unsatisfied relations. We want an assignment with minimum heuristic value. Stochastic local search is a mix of: greedy descent (move to a lowest neighbor), random walk (taking some random steps), and random restart (reassigning values to all variables).
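The heuristic can be computed directly from the constraints. A sketch, assuming each constraint is represented as a (scope, predicate) pair (a representation chosen here for illustration):

```python
def unsatisfied(constraints, assignment):
    """Number of violated constraints: the SLS heuristic to minimize.
    Each constraint is a (scope, predicate) pair; the predicate is
    applied to the current values of the variables in its scope."""
    return sum(1 for scope, pred in constraints
               if not pred(*(assignment[v] for v in scope)))
```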

Greedy Descent It may be too expensive to find the variable-value pair that minimizes the heuristic function at every step. An alternative is: select a variable that participates in the largest number of conflicts, then choose a (different) value for that variable that resolves the most conflicts. The alternative is easier to compute even if it doesn't always maximally reduce the number of conflicts.
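A sketch of one such step, reusing the unsatisfied helper and the (scope, predicate) constraint representation assumed above; it also assumes each domain has more than one value:

```python
def greedy_descent_step(assignment, domains, constraints):
    """Pick a variable in the most conflicts, then give it the
    different value that leaves the fewest unsatisfied constraints."""
    def conflicts_of(var):
        return sum(1 for scope, pred in constraints
                   if var in scope
                   and not pred(*(assignment[v] for v in scope)))

    var = max(assignment, key=conflicts_of)

    def with_value(val):
        a = dict(assignment)
        a[var] = val
        return a

    best_val = min((d for d in domains[var] if d != assignment[var]),
                   key=lambda val: unsatisfied(constraints, with_value(val)))
    return with_value(best_val)
```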

Random Walk You can add randomness: When choosing the best variable-value pair, sometimes choose a random variable-value pair instead. When selecting a variable and then a value: sometimes choose, at random, a variable that participates in a conflict (a red node); sometimes choose a random variable. Then sometimes choose the best value and sometimes choose a random value.
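One way to mix these, sketched with an illustrative probability parameter and reusing greedy_descent_step from above:

```python
import random

def random_walk_step(assignment, domains, constraints, p_walk=0.2):
    """With probability p_walk, take a random step: pick a conflicted
    variable at random and give it a random value. Otherwise take the
    greedy descent step."""
    if random.random() < p_walk:
        conflicted = [v for v in assignment
                      if any(v in scope
                             and not pred(*(assignment[u] for u in scope))
                             for scope, pred in constraints)]
        var = random.choice(conflicted or list(assignment))
        a = dict(assignment)
        a[var] = random.choice(list(domains[var]))
        return a
    return greedy_descent_step(assignment, domains, constraints)
```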

Comparing Stochastic Algorithms How can you compare three algorithms when: one solves the problem 30% of the time very quickly but doesn't halt for the other 70% of the cases; one solves 60% of the cases reasonably quickly but doesn't solve the rest; and one solves the problem in 100% of the cases, but slowly? Summary statistics, such as mean run time, median run time, and mode run time, don't make much sense.

Runtime Distribution Plots runtime (or number of steps) against the proportion (or number) of the runs that are solved within that runtime. (Figure: proportion of runs solved, from 0 to 1, against runtime from 1 to 1000 on a logarithmic scale.)
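Such a plot can be produced from the solved runs alone. A sketch using matplotlib (the data format is an assumption):

```python
import matplotlib.pyplot as plt

def plot_runtime_distribution(solved_runtimes, n_runs):
    """Plot the proportion of all runs solved within each runtime.
    solved_runtimes holds runtimes of successful runs only; runs
    that never halt simply keep the curve below 1."""
    xs = sorted(solved_runtimes)
    ys = [(i + 1) / n_runs for i in range(len(xs))]
    plt.step(xs, ys, where="post")
    plt.xscale("log")
    plt.xlabel("runtime (steps)")
    plt.ylabel("proportion of runs solved")
    plt.show()
```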

Variant: Simulated Annealing Pick a variable at random and a new value at random. If it is an improvement, adopt it. If it isn't an improvement, adopt it probabilistically depending on a temperature parameter $T$: with current node $n$ and proposed node $n'$, we move to $n'$ with probability $e^{-(h(n') - h(n))/T}$. The temperature can be reduced over time.
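A sketch of one annealing step under the minimizing-h convention (the acceptance rule is the one reconstructed above):

```python
import math
import random

def anneal_step(assignment, neighbors, h, T):
    """Propose a random neighbor; always accept improvements, and
    accept a worsening of dh with probability exp(-dh / T)."""
    candidate = random.choice(neighbors(assignment))
    dh = h(candidate) - h(assignment)
    if dh <= 0 or random.random() < math.exp(-dh / T):
        return candidate
    return assignment
```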

Tabu lists To prevent cycling we can maintain a tabu list of the $k$ last nodes visited, and not allow a move to a node that is already on the tabu list. If $k = 1$, we don't allow a move back to the node just visited (e.g., reassigning a variable the value it just had). It can be implemented more efficiently than as a list of complete nodes, but it can be expensive if $k$ is large.
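A sketch of greedy descent with a tabu list, storing the last k complete nodes for simplicity (as the slide notes, a real implementation would store something cheaper):

```python
from collections import deque

def tabu_descent(start, neighbors, h, k=10, steps=1000):
    """Greedy descent that never revisits any of the k most
    recently visited nodes."""
    tabu = deque([start], maxlen=k)   # oldest entries fall off
    current = best = start
    for _ in range(steps):
        allowed = [n for n in neighbors(current) if n not in tabu]
        if not allowed:
            break
        current = min(allowed, key=h)
        tabu.append(current)
        if h(current) < h(best):
            best = current
    return best
```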

Parallel Search Idea: maintain k nodes instead of one. At every stage, update each node. Whenever one node is a solution, it can be reported. Like k restarts, but uses k times the minimum number of steps.

Beam Search Like parallel search, with $k$ nodes, but you choose the $k$ best out of all of the neighbors. When $k = 1$, it is hill climbing. When $k = \infty$, it is breadth-first search. The value of $k$ lets us limit space and parallelism. Randomness can also be added.
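A sketch, minimizing h and using illustrative parameters:

```python
import heapq

def beam_search(starts, neighbors, h, k=5, steps=100):
    """Keep the k best nodes; each step pools all their neighbors
    and keeps the k best of the pool."""
    beam = heapq.nsmallest(k, starts, key=h)
    for _ in range(steps):
        pool = [n for node in beam for n in neighbors(node)]
        if not pool:
            break
        beam = heapq.nsmallest(k, pool, key=h)
    return min(beam, key=h)
```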

Stochastic Beam Search Like beam search, but you probabilistically choose the $k$ nodes at the next generation. The probability that a neighbor is chosen is proportional to its heuristic value. This maintains diversity amongst the nodes. The heuristic value reflects the fitness of the node. Like asexual reproduction: each node passes on its mutations, and the fittest ones survive.
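A sketch of the selection step. Because h is minimized here, the fitness used for the weights is taken as 1/(1 + h), which is an assumption, not the slides' definition; note that random.choices samples with replacement:

```python
import random

def stochastic_beam_step(beam, neighbors, h, k):
    """Pool all neighbors, then draw the next k nodes with
    probability proportional to fitness (higher for lower h)."""
    pool = [n for node in beam for n in neighbors(node)]
    weights = [1.0 / (1.0 + h(n)) for n in pool]  # assumed fitness
    return random.choices(pool, weights=weights, k=k)
```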

Genetic Algorithms Like stochastic beam search, but pairs of nodes are combined to create the offspring. For each generation: randomly choose pairs of nodes, where the fittest individuals are more likely to be chosen; for each pair, perform a cross-over, forming two offspring each taking different parts of their parents; then mutate some values. Report the best node found.

Crossover Given two nodes $X_1 = a_1, X_2 = a_2, \ldots, X_m = a_m$ and $X_1 = b_1, X_2 = b_2, \ldots, X_m = b_m$, select $i$ at random and form two offspring: $X_1 = a_1, \ldots, X_i = a_i, X_{i+1} = b_{i+1}, \ldots, X_m = b_m$ and $X_1 = b_1, \ldots, X_i = b_i, X_{i+1} = a_{i+1}, \ldots, X_m = a_m$. Note that this depends on an ordering of the variables. Many variations are possible.
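A sketch of single-point crossover, with parents represented as equal-length lists ordered by variable:

```python
import random

def crossover(parent_a, parent_b):
    """Pick a crossover point i at random and swap the tails,
    producing two offspring."""
    i = random.randrange(1, len(parent_a))
    return (parent_a[:i] + parent_b[i:],
            parent_b[:i] + parent_a[i:])
```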

Example: Crossword Puzzle (Figure: a crossword grid with four numbered slots.) Words: ant, big, bus, car, has; book, buys, hold, lane, year; beast, ginger, search, symbol, syntax.