Local Search
© D. Poole and A. Mackworth 2010, Artificial Intelligence, Lecture 4.2

Local Search (Greedy Descent): Maintain an assignment of a value to each variable. Repeat:
- Select a variable to change.
- Select a new value for that variable.
Until a satisfying assignment is found.
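A minimal sketch of this loop in Python, using 8-queens as a stand-in CSP (the problem, the helper names, and the random-variable/best-value choice are illustrative assumptions, not from the slides):

```python
import random

def conflicts(rows):
    """Count pairs of queens that attack each other (unsatisfied constraints)."""
    n = len(rows)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if rows[i] == rows[j] or abs(rows[i] - rows[j]) == j - i)

def greedy_descent(n=8, max_steps=10_000):
    rows = [random.randrange(n) for _ in range(n)]   # random initial assignment
    for _ in range(max_steps):
        if conflicts(rows) == 0:                     # satisfying assignment found
            return rows
        var = random.randrange(n)                    # select a variable to change
        # select a new value for that variable: the one minimizing conflicts
        rows[var] = min(range(n),
                        key=lambda v: conflicts(rows[:var] + [v] + rows[var+1:]))
    return None                                      # give up after max_steps

print(greedy_descent())
```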

Local Search for CSPs

Aim: find an assignment with zero unsatisfied constraints.
- Given an assignment of a value to each variable, a conflict is an unsatisfied constraint.
- The goal is an assignment with zero conflicts.
- Heuristic function to be minimized: the number of conflicts.
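A sketch of the conflict-count heuristic for a generic CSP, with constraints represented as (scope, predicate) pairs; this representation is an assumption for illustration, not from the slides:

```python
def conflict_count(assignment, constraints):
    """Number of unsatisfied constraints under the given total assignment."""
    return sum(1 for scope, pred in constraints
               if not pred(*(assignment[v] for v in scope)))

# Example: a tiny CSP with variables A, B and constraints A < B and A != 1.
constraints = [(("A", "B"), lambda a, b: a < b),
               (("A",), lambda a: a != 1)]
print(conflict_count({"A": 1, "B": 0}, constraints))  # 2 conflicts
print(conflict_count({"A": 0, "B": 2}, constraints))  # 0 conflicts: a solution
```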

Greedy Descent Variants

To choose a variable to change and a new value for it:
- Find a variable-value pair that minimizes the number of conflicts.
- Select a variable that participates in the most conflicts; select a value that minimizes the number of conflicts.
- Select a variable that appears in any conflict; select a value that minimizes the number of conflicts.
- Select a variable at random; select a value that minimizes the number of conflicts.
- Select a variable and value at random; accept this change if it doesn't increase the number of conflicts.
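A sketch of the second variant on n-queens (pick the variable in the most conflicts, then the value minimizing that variable's conflicts); helper names are assumptions:

```python
import random

def col_conflicts(rows, c):
    """Conflicts involving the queen in column c."""
    return sum(1 for j in range(len(rows)) if j != c and
               (rows[j] == rows[c] or abs(rows[j] - rows[c]) == abs(j - c)))

def step(rows):
    # variable in the most conflicts; ties broken by index
    # (a real implementation would break ties randomly to avoid cycling)
    c = max(range(len(rows)), key=lambda i: col_conflicts(rows, i))
    # value minimizing that variable's conflicts
    rows[c] = min(range(len(rows)),
                  key=lambda r: col_conflicts(rows[:c] + [r] + rows[c+1:], c))
    return rows

rows = [random.randrange(8) for _ in range(8)]
for _ in range(100):
    rows = step(rows)
print(rows, col_conflicts(rows, 0))
```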

Complex Domains

- When the domains are small or unordered, the neighbors of an assignment can correspond to choosing another value for one of the variables.
- When the domains are large and ordered, the neighbors of an assignment are the adjacent values for one of the variables.
- If the domains are continuous, gradient descent changes each variable proportionally to the gradient of the heuristic function in that direction: the value of variable $X_i$ goes from $v_i$ to $v_i - \eta \frac{\partial h}{\partial X_i}$, where $\eta$ is the step size.
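A minimal sketch of this update rule using a numerical partial derivative; the quadratic $h$ and the step size are illustrative choices, not from the slides:

```python
def gradient_descent_step(values, h, eta=0.1, eps=1e-6):
    new = list(values)
    for i in range(len(values)):
        bumped = list(values)
        bumped[i] += eps
        dh_dxi = (h(bumped) - h(values)) / eps   # numerical partial derivative
        new[i] = values[i] - eta * dh_dxi        # v_i <- v_i - eta * dh/dX_i
    return new

h = lambda xs: (xs[0] - 3) ** 2 + (xs[1] + 1) ** 2  # minimum at (3, -1)
v = [0.0, 0.0]
for _ in range(200):
    v = gradient_descent_step(v, h)
print([round(x, 3) for x in v])  # approaches [3.0, -1.0]
```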

Problems with Greedy Descent

- A local minimum that is not a global minimum.
- A plateau, where the heuristic values are uninformative.
- A ridge, a local minimum where n-step look-ahead might help.

(Figure: examples of a ridge, a plateau, and a local minimum.)

Randomized Algorithms

Consider two methods to find a minimum value:
- Greedy descent: starting from some position, keep moving down and report the minimum value found.
- Pick values at random and report the minimum value found.

Which do you expect to work better to find a global minimum? Can a mix work better?

Randomized Greedy Descent

As well as downward steps we can allow for:
- Random steps: move to a random neighbor.
- Random restart: reassign random values to all variables.

Which is more expensive computationally?

1-Dimensional Ordered Examples

Two 1-dimensional search spaces; the only moves are one step right or one step left.

(Figure: two 1-dimensional search spaces, (a) and (b).)

Which method would most easily find the global minimum? What happens in hundreds or thousands of dimensions? What if different parts of the search space have different structure?

Stochastic Local Search

Stochastic local search is a mix of:
- Greedy descent: move to a lowest neighbor.
- Random walk: taking some random steps.
- Random restart: reassigning values to all variables.
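A sketch mixing all three ingredients on n-queens; the mixing probabilities are illustrative assumptions, not values from the slides:

```python
import random

def conflicts(rows):
    n = len(rows)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if rows[i] == rows[j] or abs(rows[i] - rows[j]) == j - i)

def sls(n=8, max_steps=20_000, p_walk=0.2, p_restart=0.001):
    rows = [random.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        if conflicts(rows) == 0:
            return rows
        r = random.random()
        if r < p_restart:                        # random restart
            rows = [random.randrange(n) for _ in range(n)]
        elif r < p_restart + p_walk:             # random walk: random neighbor
            rows[random.randrange(n)] = random.randrange(n)
        else:                                    # greedy descent: best single change
            _, i, v = min((conflicts(rows[:i] + [v] + rows[i+1:]), i, v)
                          for i in range(n) for v in range(n))
            rows[i] = v
    return None

print(sls())
```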

Random Walk

Variants of random walk:
- When choosing the best variable-value pair, sometimes choose a random variable-value pair instead.
- When selecting a variable then a value:
  - Sometimes choose any variable that participates in the most conflicts.
  - Sometimes choose any variable that participates in any conflict.
  - Sometimes choose any variable.
- Sometimes choose the best value and sometimes choose a random value.

Comparing Stochastic Algorithms

How can you compare three algorithms when:
- one solves the problem 30% of the time very quickly but doesn't halt for the other 70% of the cases;
- one solves 60% of the cases reasonably quickly but doesn't solve the rest;
- one solves the problem in 100% of the cases, but slowly?

Summary statistics, such as mean run time, median run time, and mode run time, don't make much sense.

Runtime Distribution

Plots runtime (or number of steps) against the proportion (or number) of the runs that are solved within that runtime.

(Figure: a runtime distribution; x-axis is runtime on a log scale from 1 to 1000, y-axis is the proportion of runs solved, from 0 to 1.)
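A sketch of producing such a plot; `solve` here is a toy stand-in for any randomized solver that reports its step count, an assumption for illustration:

```python
import random
import matplotlib.pyplot as plt

def solve():
    """Toy stand-in: geometric number of steps until 'success'."""
    steps = 1
    while random.random() > 0.01:
        steps += 1
    return steps

runtimes = sorted(solve() for _ in range(1000))
proportion = [(i + 1) / len(runtimes) for i in range(len(runtimes))]

plt.plot(runtimes, proportion)
plt.xscale("log")                    # runtimes typically span orders of magnitude
plt.xlabel("runtime (steps)")
plt.ylabel("proportion of runs solved")
plt.show()
```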

Variant: Simulated Annealing

- Pick a variable at random and a new value at random.
- If it is an improvement, adopt it.
- If it isn't an improvement, adopt it probabilistically, depending on a temperature parameter T: with current assignment n and proposed assignment n', we move to n' with probability $e^{-(h(n') - h(n))/T}$.
- The temperature can be reduced over time.

Probability of accepting a change:

Temperature   1-worse    2-worse    3-worse
10            0.91       0.81       0.74
1             0.37       0.14       0.05
0.25          0.02       0.0003     0.000006
0.1           0.00005    2×10⁻⁹     9×10⁻¹⁴
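A sketch of the acceptance rule, reproducing the T = 1 row of the table; the geometric cooling schedule in the final comment is a common choice, not prescribed by the slides:

```python
import math
import random

def accept(delta, T):
    """delta = h(n') - h(n); worsening moves pass with probability e^(-delta/T)."""
    return delta <= 0 or random.random() < math.exp(-delta / T)

# Reproducing the T = 1 row: accepting a 1-, 2-, or 3-worse move.
for delta in (1, 2, 3):
    print(round(math.exp(-delta / 1.0), 2))   # 0.37, 0.14, 0.05

# Cooling: T is typically reduced over time, e.g. T <- 0.99 * T per step.
```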

Tabu Lists

To prevent cycling we can maintain a tabu list of the k most recent assignments, and not allow an assignment that is already on the tabu list. If k = 1, we don't allow the chosen variable to be reassigned the value it already has. The list can be implemented more efficiently than as a list of complete assignments, but it can still be expensive if k is large.
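A sketch of a length-k tabu list kept over (variable, value) moves rather than complete assignments, one cheap implementation of the kind the slide alludes to; all names are assumptions:

```python
from collections import deque

class TabuList:
    """Remembers the last k moves and forbids repeating them."""
    def __init__(self, k):
        self.k = k
        self.queue = deque()
        self.members = set()

    def forbidden(self, move):
        return move in self.members

    def add(self, move):
        self.queue.append(move)
        self.members.add(move)
        if len(self.queue) > self.k:
            old = self.queue.popleft()         # evict the oldest move
            if old not in self.queue:          # keep it if still in the window
                self.members.discard(old)

tabu = TabuList(k=3)
tabu.add(("X", 1))
tabu.add(("Y", 2))
print(tabu.forbidden(("X", 1)))  # True: this move is still taboo
```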

Parallel Search

A total assignment is called an individual. Idea: maintain a population of k individuals instead of one. At every stage, update each individual in the population. Whenever an individual is a solution, it can be reported. This is like k restarts, but uses k times the minimum number of steps.

Beam Search

Like parallel search with k individuals, but choose the k best out of all of the neighbors.
- When k = 1, it is greedy descent.
- When k = ∞, it is breadth-first search.
The value of k lets us limit space and parallelism.
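A sketch of beam search on n-queens, keeping the k lowest-conflict neighbors of the whole population at each step; the problem choice and helpers are illustrative assumptions:

```python
import heapq
import random

def conflicts(rows):
    n = len(rows)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if rows[i] == rows[j] or abs(rows[i] - rows[j]) == j - i)

def neighbors(rows):
    """All assignments differing from rows in one variable."""
    return [rows[:i] + (v,) + rows[i+1:]
            for i in range(len(rows)) for v in range(len(rows)) if v != rows[i]]

def beam_search(n=6, k=10, max_steps=200):
    population = [tuple(random.randrange(n) for _ in range(n)) for _ in range(k)]
    for _ in range(max_steps):
        for ind in population:
            if conflicts(ind) == 0:            # report a solution when found
                return ind
        pool = {nb for ind in population for nb in neighbors(ind)}
        population = heapq.nsmallest(k, pool, key=conflicts)  # k best neighbors
    return None  # deterministic selection can stagnate; restarts would help

print(beam_search())
```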

Stochastic Beam Search

Like beam search, but it probabilistically chooses the k individuals at the next generation. The probability that a neighbor is chosen is proportional to its heuristic value. This maintains diversity amongst the individuals. The heuristic value reflects the fitness of the individual. Like asexual reproduction: each individual mutates and the fittest ones survive.
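A sketch of the selection step; turning the to-be-minimized conflict count into a selection weight via 1/(1 + conflicts) is an assumption here, since the slide only ties the probability to the heuristic value:

```python
import random

def select_next_generation(pool, conflicts, k):
    """Sample k individuals, weighting low-conflict (fit) ones more heavily."""
    weights = [1.0 / (1 + conflicts(ind)) for ind in pool]
    return random.choices(pool, weights=weights, k=k)  # with replacement

# Tiny example with a hypothetical pool and precomputed conflict counts.
pool = [(0, 2, 4), (1, 1, 1), (2, 0, 3)]
fake_conflicts = {(0, 2, 4): 0, (1, 1, 1): 3, (2, 0, 3): 1}
print(select_next_generation(pool, fake_conflicts.get, k=2))
```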

Genetic Algorithms

Like stochastic beam search, but pairs of individuals are combined to create the offspring. For each generation:
- Randomly choose pairs of individuals, where the fittest individuals are more likely to be chosen.
- For each pair, perform a crossover: form two offspring, each taking different parts of their parents.
- Mutate some values.
Stop when a solution is found.

Crossover

Given two individuals:
$X_1 = a_1, X_2 = a_2, \ldots, X_m = a_m$
$X_1 = b_1, X_2 = b_2, \ldots, X_m = b_m$

Select i at random and form two offspring:
$X_1 = a_1, \ldots, X_i = a_i, X_{i+1} = b_{i+1}, \ldots, X_m = b_m$
$X_1 = b_1, \ldots, X_i = b_i, X_{i+1} = a_{i+1}, \ldots, X_m = a_m$

The effectiveness depends on the ordering of the variables. Many variations are possible.
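A sketch of one-point crossover plus mutation; the mutation rate and the example domain are illustrative assumptions:

```python
import random

def crossover(a, b):
    """Split both parents at a random point i and swap the tails."""
    i = random.randrange(1, len(a))   # 1 <= i < m, so both parts are non-empty
    return a[:i] + b[i:], b[:i] + a[i:]

def mutate(ind, domain, rate=0.1):
    """Replace each value with a random domain element with probability rate."""
    return tuple(random.choice(domain) if random.random() < rate else v
                 for v in ind)

p1, p2 = (1, 2, 3, 4), (5, 6, 7, 8)
c1, c2 = crossover(p1, p2)
print(c1, c2)                          # e.g. (1, 2, 7, 8) (5, 6, 3, 4)
print(mutate(c1, domain=range(1, 9)))
```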