Local Search for CSPs


Local Search for CSPs. Alan Mackworth, UBC CS. CPSC 322, CSP, February. Textbook 4.8.

Lecture Overview
- Domain splitting: recap, more details & pseudocode
- Local Search
- Time-permitting: Stochastic Local Search (start)

Searching by domain splitting
Apply arc consistency (AC) to the CSP. If some domains still contain multiple values, split on one of those domains, producing sub-CSPs CSP1, ..., CSPn; apply AC to each sub-CSP and recurse. Stop recursing when no domains with multiple values remain.

Example: simple problem in AIspace
Variables: A, B, C. Domains: all {1,2,3,4}. Constraints: A=B, B=C, A≠C. Trace arc consistency + domain splitting for this network in AIspace.

Arc consistency + domain splitting: example
Variables: A, B, C. Domains: all {1,2,3,4}. Constraints: A=B, B=C, A≠C.
- Start with ({1,2,3,4}, {1,2,3,4}, {1,2,3,4}); AC (arc consistency) prunes nothing.
- Split on A: A ∈ {1,2} and A ∈ {3,4}. AC reduces each branch to two-valued domains for all three variables.
- Split on B in each branch, making B a singleton. AC then forces A = B = C, which violates A ≠ C, so a domain becomes empty in every branch.
- No branch has a solution, hence the CSP has no solution.

Arc consistency + domain splitting: another example
Variables: A, B, C. Domains: all {1,2,3,4}. Constraints: A=B, B=C, A=C.
- Start with ({1,2,3,4}, {1,2,3,4}, {1,2,3,4}); AC prunes nothing.
- Split on A: A ∈ {1,2} and A ∈ {3,4}. AC reduces each branch to two-valued domains for all three variables.
- Split on B in each branch, making B a singleton. AC then forces A = B = C, which satisfies all constraints.
- Every branch yields a solution: one model A = B = C = v for each value v in {1,2,3,4}.

Third formulation of CSP as search: arc consistency with domain splitting
- States: vector (D(V1), ..., D(Vn)) of remaining domains, with D(Vi) ⊆ dom(Vi) for each Vi
- Start state: vector of original domains (dom(V1), ..., dom(Vn))
- Successor function: reduce one of the domains + run arc consistency => new CSP
- Goal state: vector of unary domains that satisfies all constraints. That is, only one value is left for each variable; the assignment of each variable to its single value is a model.
- Solution: that assignment

Arc consistency with domain splitting algorithm

Procedure AC_DS(V, dom, C, TDA)
Inputs:
  V: a set of variables
  dom: a function such that dom(V) is the domain of variable V
  C: set of constraints to be satisfied
  TDA: set of possibly arc-inconsistent edges of the constraint network
Output: set of models of the CSP (empty if no model exists)
1: dom ← GAC(V, dom, C, TDA)                       // run arc consistency initialized with TDA
2: if dom includes an empty domain then return {}  // base case 1: no solution
3: if for all V1, ..., Vn ∈ V, dom(Vi) has a single value vi then
4:   return the model {V1 = v1, ..., Vn = vn}      // base case 2: single model
5: choose a variable V ∈ V
6: [D1, ..., Dn] ← partition dom(V) into non-empty domains   // domain splitting
7: models ← {}
8: for i = 1, ..., n do
9:   dom_i ← dom with dom(V) replaced by Di
10:  TDA ← {⟨Z, c⟩ | Z ∈ V \ {V}, Z ∈ scope(c), V ∈ scope(c)}  // arcs that could become inconsistent by reducing V's domain (Z is some variable, c some constraint involving both Z and V)
11:  models ← models ∪ AC_DS(V, dom_i, C, TDA)     // recursive case
12: return models
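To make the pseudocode concrete, here is a minimal Python sketch of AC_DS. The representation is an assumption for illustration: a constraint is a (scope, predicate) pair, domains are sets, TDA is a set of (variable, constraint) arcs, and GAC is the standard arc-consistency pruning loop.

```python
from itertools import product

def gac(domains, constraints, tda):
    """Generalized arc consistency: repeatedly prune values of X that have no
    support in the other variables' domains, driven by the TDA set of arcs."""
    domains = {v: set(d) for v, d in domains.items()}
    tda = set(tda)
    while tda:
        X, (scope, pred) = tda.pop()
        others = [v for v in scope if v != X]
        supported = set()
        for x in domains[X]:
            for combo in product(*(domains[o] for o in others)):
                asg = dict(zip(others, combo))
                asg[X] = x
                if pred(asg):
                    supported.add(x)
                    break
        if supported != domains[X]:
            domains[X] = supported
            # pruning X can make arcs of *other* constraints on X inconsistent
            for c2 in constraints:
                if X in c2[0] and c2[1] is not pred:
                    tda |= {(Z, c2) for Z in c2[0] if Z != X}
    return domains

def ac_ds(domains, constraints, tda):
    """Arc consistency with domain splitting: returns a list of all models."""
    domains = gac(domains, constraints, tda)
    if any(len(d) == 0 for d in domains.values()):
        return []  # base case 1: no solution
    if all(len(d) == 1 for d in domains.values()):
        return [{v: next(iter(d)) for v, d in domains.items()}]  # base case 2
    V = next(v for v in domains if len(domains[v]) > 1)  # variable to split on
    vals = sorted(domains[V])
    models = []
    for part in (vals[:len(vals) // 2], vals[len(vals) // 2:]):  # split in two
        sub = dict(domains)
        sub[V] = set(part)
        # arcs that could become inconsistent by reducing V's domain
        new_tda = {(Z, c) for c in constraints if V in c[0]
                   for Z in c[0] if Z != V}
        models += ac_ds(sub, constraints, new_tda)
    return models
```

On an instance like the lecture's example (domains {1,2,3,4} here, constraints A=B, B=C, A≠C) this returns an empty list; replacing A≠C with A=C yields four models, one per value.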

Learning Goals for arc consistency
- Define/read/write/trace/debug the arc consistency algorithm. Compute its complexity and assess its possible outcomes.
- Define/read/write/trace/debug domain splitting and its integration with arc consistency.

Lecture Overview
- Domain splitting: recap, more details & pseudocode
- Local Search
- Time-permitting: Stochastic Local Search (start)

Local Search: Motivation
Solving CSPs is NP-hard:
- The search space for many CSPs is huge: exponential in the number of variables.
- Even arc consistency with domain splitting is often not enough.
Alternative: local search.
- Often finds a solution quickly, but cannot prove that there is no solution.
- Useful method in practice: the best available method for many constraint satisfaction and constraint optimization problems.
- Extremely general! Works for problems other than CSPs (e.g. arc consistency only works for CSPs).

Some Successful Application Areas for Local Search
Probabilistic reasoning, RNA structure design, propositional satisfiability (SAT), university timetabling, protein folding, and scheduling of the Hubble Space Telescope (scheduling time reduced from weeks to seconds).

Local Search
Idea: consider the space of complete assignments of values to variables (all possible worlds). Neighbours of the current node are similar variable assignments. Move from one node to another according to a function that scores how good each assignment is.
[Figure: two Sudoku grids illustrating neighbouring assignments.]

Local Search Problem: Definition
Definition: a local search problem consists of a:
- CSP: a set of variables, domains for these variables, and constraints on their joint values. A node in the search space is a complete assignment to all of the variables.
- Neighbour relation: an edge in the search space exists when the neighbour relation holds between a pair of nodes.
- Scoring function h(n): judges the cost of a node (which we want to minimize), e.g. the number of constraints violated in node n, or the cost of a state in an optimization context.

Example: Sudoku as a local search problem
- CSP: the usual Sudoku CSP. One variable per cell; domains {1,...,9}; constraints: each number occurs once per row, per column, and per 3×3 box.
- Neighbour relation: the value of a single cell differs.
- Scoring function: number of constraint violations.
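As a concrete sketch of this scoring function: the code below counts one violation per pair of equal values sharing a row, column, or 3×3 box. The representation (a dict mapping (row, col) to a value) and the pairwise counting convention are assumptions for illustration.

```python
from itertools import combinations

def sudoku_violations(assignment):
    """Scoring function h for Sudoku as local search: one violation per pair
    of equal values that share a row, a column, or a 3x3 box.
    `assignment` maps (row, col), both in 0..8, to a value in 1..9."""
    violations = 0
    for (r1, c1), (r2, c2) in combinations(sorted(assignment), 2):
        same_unit = (r1 == r2 or c1 == c2
                     or (r1 // 3, c1 // 3) == (r2 // 3, c2 // 3))
        if same_unit and assignment[(r1, c1)] == assignment[(r2, c2)]:
            violations += 1
    return violations
```

A solved grid scores 0; changing any single cell to a duplicate raises the score, which is exactly the signal greedy descent uses to compare neighbours.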

Search Space for Local Search
[Figure: a current node, a complete assignment V1 = v1, ..., Vn = vn, surrounded by neighbouring assignments that each differ in one variable's value.]
Only the current node is kept in memory at each step. Very different from the systematic tree search approaches we have seen so far! Local search does NOT backtrack!

Local search: only use local information

Iterative Best Improvement
How do we determine the neighbour node to be selected? Iterative best improvement: select the neighbour that optimizes some evaluation function. Which strategy would make sense? Select the neighbour with:
- Maximal number of constraint violations
- Similar number of constraint violations as the current state
- No constraint violations
- Minimal number of constraint violations
Evaluation function h(n): number of constraint violations in state n.
- Greedy descent: evaluate h(n) for each neighbour, pick the neighbour n with minimal h(n).
- Hill climbing: the equivalent algorithm for maximization problems. Minimizing h(n) is identical to maximizing -h(n).

Example: Greedy descent for Sudoku
Assign random numbers between 1 and 9 to the blank fields. Repeat:
- For each cell & each number, evaluate how many constraint violations changing the assignment would yield.
- Choose the cell and number that leads to the fewest violated constraints; change it.
Until solved.

Example: Greedy descent for Sudoku
Example for one local search step: changing one cell reduces the number of constraint violations by 3, removing a duplicated value in the first column, in the first row, and in the top-left box.

General Local Search Algorithm
1: Procedure Local-Search(V, dom, C)
2: Inputs
3:   V: a set of variables
4:   dom: a function such that dom(X) is the domain of variable X
5:   C: set of constraints to be satisfied
6: Output: complete assignment that satisfies the constraints
7: Local
8:   A[V]: an array of values indexed by V
9: repeat
10:   for each variable X do
11:     A[X] ← a random value in dom(X)      // random initialization
13:   while (stopping criterion not met & A is not a satisfying assignment) do
14:     select a variable Y and a value V ∈ dom(Y)   // local search step
15:     set A[Y] ← V
17:   if (A is a satisfying assignment) then
18:     return A
20: until termination

General Local Search Algorithm (continued)
The selection of Y and V in the pseudocode above is based on local information, e.g. for each neighbour evaluate how many constraints are unsatisfied. Greedy descent: select Y and V to minimize the number of unsatisfied constraints at each step.
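The greedy-descent instantiation of this pseudocode can be sketched in Python. The CSP representation (constraints as (scope, predicate) pairs) is an assumption for illustration; the sketch returns None when it gets stuck in a local minimum or runs out of steps.

```python
import random

def violations(assignment, constraints):
    """Evaluation function h: number of constraints whose predicate is false."""
    return sum(1 for scope, pred in constraints
               if not pred({v: assignment[v] for v in scope}))

def greedy_descent(domains, constraints, max_steps=1000, rng=random):
    """Iterative best improvement: start from a random complete assignment and
    repeatedly move to the 1-exchange neighbour with the smallest h.
    Returns a satisfying assignment, or None if stuck in a local minimum."""
    A = {v: rng.choice(sorted(d)) for v, d in domains.items()}  # random init
    for _ in range(max_steps):
        h = violations(A, constraints)
        if h == 0:
            return A
        best = None  # (h, variable, value) of the best neighbour so far
        for Y, dom in domains.items():
            for val in dom:
                if val == A[Y]:
                    continue
                hn = violations({**A, Y: val}, constraints)
                if best is None or hn < best[0]:
                    best = (hn, Y, val)
        if best is None or best[0] >= h:
            return None  # local minimum: no neighbour improves h
        A[best[1]] = best[2]
    return None
```

Note that, unlike systematic search, this keeps only the single current assignment A in memory and never backtracks.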

Another example: N-Queens
Put n queens on an n×n board with no two queens on the same row, column, or diagonal (i.e. attacking each other).
[Figure: positions a queen can attack.]

Example: N-queens
[Figure: three board configurations; the first is labelled with its h value, the other two are left as exercises (h = ?).]

Example: N-Queens steps
[Figure: two successive board configurations and their h values.] Each cell lists h (i.e. the number of unsatisfied constraints) if you move the queen from that column into the cell.
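The h values on these slides can be computed directly. Here is a sketch assuming the usual one-queen-per-column representation (rows[c] is the row of the queen in column c) and h = number of attacking pairs:

```python
from itertools import combinations

def nqueens_h(rows):
    """h = number of attacking pairs; rows[c] is the row of the queen in
    column c (one queen per column, so only rows and diagonals can clash)."""
    h = 0
    for c1, c2 in combinations(range(len(rows)), 2):
        same_row = rows[c1] == rows[c2]
        same_diag = abs(rows[c1] - rows[c2]) == c2 - c1
        if same_row or same_diag:
            h += 1
    return h

def h_table(rows):
    """For each (column, row) cell, the h value obtained by moving that
    column's queen into that row: the numbers written in the cells on the slide."""
    n = len(rows)
    return {(c, r): nqueens_h(rows[:c] + [r] + rows[c + 1:])
            for c in range(n) for r in range(n)}
```

A greedy-descent step then just moves the queen into the cell whose table entry is smallest.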

The problem of local minima
Which move should we pick in this situation?
- No single move can improve on the current cost h; in fact, every single move only makes things worse.
- This is a locally optimal solution; since we are minimizing, it is a local minimum.

Local minima
[Figure: an evaluation function over a one-variable state space, showing several local minima.]
Most research in local search concerns effective mechanisms for escaping from local minima. We want to quickly explore many local minima: the global minimum is a local minimum, too.

Different neighbourhoods
Local minima are defined with respect to a neighbourhood. Neighbourhood: states resulting from some small incremental change to the current variable assignment.
1-exchange neighbourhood:
- One-stage selection: all assignments that differ in exactly one variable. How many of those are there for N variables and domain size d? O(Nd): N variables, and for each of them we need to check d-1 values.
- Two-stage selection: first choose a variable (e.g. the one involved in the most conflicts), then the best value. Lower computational complexity, O(N+d), but less progress per step.
2-exchange neighbourhood:
- All variable assignments that differ in exactly two variables: O(N²d²). More powerful: a local optimum for the 1-exchange neighbourhood might not be a local optimum for the 2-exchange neighbourhood.

Different neighbourhoods
How about an N-exchange neighbourhood, where N is the number of variables?
- All minima with respect to the N-exchange neighbourhood are global minima. Why? In general, the N-exchange neighbourhood includes all assignments, and hence all solutions.
- How expensive is the N-exchange neighbourhood? It is exponentially large.

Lecture Overview
- Domain splitting: recap, more details & pseudocode
- Local Search
- Time-permitting: Stochastic Local Search (start)

Stochastic Local Search
We will use greedy steps to find local minima: move to the neighbour with the best evaluation function value. We will use randomness to avoid getting trapped in local minima.

General Local Search Algorithm: extreme case 1, random sampling
In the pseudocode above, restart at every step (the stopping criterion is always true): the random initialization of A acts as a random restart, and the algorithm just samples random assignments.

General Local Search Algorithm: extreme case 2, greedy descent
Only restart in local minima: the stopping criterion is "no more improvement in the evaluation function h".

Greedy descent vs. Random sampling
- Greedy descent is good for finding local minima, bad for exploring new parts of the search space.
- Random sampling is good for exploring new parts of the search space, bad for finding local minima.
- A mix of the two can work very well.

Greedy Descent + Randomness
Greedy steps: move to the neighbour with the best evaluation function value. Next to greedy steps, we can allow for:
1. Random restart: reassign random values to all variables (i.e. start fresh).
2. Random steps: move to a random neighbour.

Which randomized method would work best in each of these two search spaces?
[Figure: evaluation functions A and B over a one-variable state space.]
- Hill climbing with random steps best on A, hill climbing with random restart best on B
- Hill climbing with random steps best on B, hill climbing with random restart best on A
- They are equivalent

[Answer] Hill climbing with random steps works best on B; hill climbing with random restart works best on A. But these examples are simplified extreme cases for illustration; in reality you don't know what your search space looks like. Usually integrating both kinds of randomization works best.

Stochastic Local Search for CSPs
- Start node: random assignment
- Goal: assignment with zero unsatisfied constraints
- Heuristic function h: number of unsatisfied constraints (lower values of the function are better)
Stochastic local search is a mix of:
- Greedy descent: move to the neighbour with lowest h
- Random walk: take some random steps
- Random restart: reassign values to all variables

Stochastic Local Search for CSPs: details
Examples of ways to add randomness to local search for a CSP:
- In one-stage selection of variable and value: instead choose a random variable-value pair.
- In two-stage selection (first select variable V, then a new value for V):
  - Selecting variables: sometimes choose the variable which participates in the largest number of conflicts; sometimes choose a random variable that participates in some conflict; sometimes choose a random variable.
  - Selecting values: sometimes choose the best value for the chosen variable; sometimes choose a random value for the chosen variable.
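Putting these pieces together, here is a sketch of SLS with two-stage selection: pick a random variable occurring in some violated constraint, then usually its best value, but with some probability take a random step instead. The CSP representation and the parameter names (p_walk, max_steps) are assumptions for illustration.

```python
import random

def sls_csp(domains, constraints, max_steps=10000, p_walk=0.2, seed=0):
    """Stochastic local search with two-stage selection.
    constraints: iterable of (scope, predicate) pairs (assumed representation).
    Stage 1: pick a random variable occurring in some violated constraint.
    Stage 2: with probability p_walk take a random walk step (random value),
    otherwise greedily pick the value minimizing unsatisfied constraints."""
    rng = random.Random(seed)

    def unsat(A):
        return [c for c in constraints
                if not c[1]({v: A[v] for v in c[0]})]

    # random restart: start from a random complete assignment
    A = {v: rng.choice(sorted(d)) for v, d in domains.items()}
    for _ in range(max_steps):
        conflicts = unsat(A)
        if not conflicts:
            return A  # satisfying assignment: zero unsatisfied constraints
        scope, _ = rng.choice(conflicts)       # a violated constraint
        Y = rng.choice(list(scope))            # stage 1: variable in a conflict
        if rng.random() < p_walk:
            A[Y] = rng.choice(sorted(domains[Y]))      # random walk step
        else:
            A[Y] = min(sorted(domains[Y]),             # greedy descent step
                       key=lambda val: len(unsat({**A, Y: val})))
    return None  # no satisfying assignment found within max_steps
```

The random walk steps are what lets this escape the local minima that pure greedy descent gets trapped in.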

Learning Goals for local search (started)
- Implement local search for a CSP: implement different ways to generate neighbours, and implement scoring functions to solve a CSP by local search through either greedy descent or hill climbing.
- Implement SLS with random steps (one-stage and two-stage versions) and random restart.
The assignment is available on Connect today (due on a Wednesday in February). The exercise on SLS is available on the home page - do it! Coming up: more local search.