Constraint Programming

Roman Barták, Department of Theoretical Computer Science and Mathematical Logic

Depth-first search

Let us go back to foundations: DFS = Depth-First Search.
[Figure: a binary search tree explored depth-first; nodes are numbered 1-9 in order of visit.]

Observation: Real-life problems have huge state spaces that cannot be fully explored. We can explore just part of the state space! → incomplete tree search

Search and Optimization Techniques

Incomplete techniques do not explore the state space completely: they guarantee neither a proof that no solution exists nor finding a solution if one exists (completeness). Sometimes completeness can be restored at the price of a time-complexity trade-off. For many problems, incomplete techniques find solutions faster. They are frequently based on a complete search algorithm such as DFS, with a cutoff applied after exhausting the allocated resources (time, backtracks, credit, ...); the cutoff may be global (for the whole search tree) or local (for a given sub-tree or a search node). A restart then runs the incomplete search again with different parameters (for example with more resources); learning can be applied before the next iteration.

Bounded Backtrack Search

Limit the number of backtracks (cutoff): count the number of backtracks (failures) and stop after exceeding the limit. Backtracking is counted from the last point with other alternatives, so the limit effectively bounds the number of visited leaves. After failure, increase the limit by one (restart).
Example: BBS(8). [Figure: a binary search tree; the search stops after its eighth failing leaf.]
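To make the idea concrete, here is a minimal Python sketch of BBS, not from the slides: the CSP interface (variables, domains, a consistent predicate) and the 4-queens demo are illustrative assumptions.

# Minimal sketch of Bounded Backtrack Search (BBS).
# The CSP interface (variables, domains, consistent) is an assumed,
# illustrative one, not an API from the lecture.

class BacktrackLimit(Exception):
    """Raised when the allowed number of backtracks is exhausted."""

def bbs(variables, domains, consistent, limit):
    counter = {"backtracks": 0}

    def search(assignment):
        if len(assignment) == len(variables):
            return dict(assignment)              # all variables labelled
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            assignment[var] = value
            if consistent(assignment):
                result = search(assignment)
                if result is not None:
                    return result
            del assignment[var]
        counter["backtracks"] += 1               # this node failed: one backtrack
        if counter["backtracks"] > limit:
            raise BacktrackLimit()
        return None

    try:
        return search({})
    except BacktrackLimit:
        return None                              # cutoff reached without a solution

if __name__ == "__main__":
    # 4-queens: one queen per column; variable = column, value = row
    n = 4
    variables = list(range(n))
    domains = {v: list(range(n)) for v in variables}

    def consistent(assignment):
        items = list(assignment.items())
        return all(ri != rj and abs(ri - rj) != abs(ci - cj)
                   for k, (ci, ri) in enumerate(items)
                   for cj, rj in items[k + 1:])

    print(bbs(variables, domains, consistent, limit=8))   # BBS(8)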

Iterative Broadening

Limit the number of alternatives (width) tried in each node (cutoff): only a given number of values is tried per node, i.e., the inner for-loop over values is restricted. Beware, the search is still exponential! After failure, increase the width by one (restart).
Example: IB(2). [Figure: a search tree where at most two branches are explored in each node.]

Depth Bounded Search

Limit the depth of the tree search (cutoff): up to the given depth, all alternatives are tried; after exceeding the depth, another incomplete search can be used, e.g. BBS(0). Technically, keep the number of instantiated variables; if this number is larger than the given limit, try just one alternative. After failure, increase the depth by one (restart).
Example: DBS(3). [Figure: a search tree fully branched to depth 3, with single-branch descent below.]

Credit Search

Limit the credit (the number of backtracks) for the search (cutoff): in each node, the (non-unit) credit is uniformly split among the alternative sub-trees; a unit credit means no alternatives are tried, i.e., just one value. After failure, increase the credit by one (restart). A Python sketch follows at the end of this section.
Example: CS(7). [Figure: credit 7 split as 4+3 between the first two sub-trees, then split further down the tree.]

Search and heuristics

When solving real-life problems we frequently have some experience with solving the problem manually. Heuristics are a guide where to go: they recommend a value for assignment (value ordering) and frequently lead to a solution. But what to do when the heuristic is wrong? DFS takes care of the ends of branches (the leaves of the tree): it repairs the latest failures of the heuristic rather than the earlier ones, so it implicitly assumes that the heuristic was right at the beginning of the search.
Observation 1: The number of wrong heuristic decisions is low.
Observation 2: Heuristics are usually less reliable at the beginning of search than at its end (more information and fewer choices are available there).
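Returning to credit search, here is a minimal Python sketch; the CSP interface is the same assumed one as in the BBS example, and the uniform split with the remainder going to earlier branches is one reasonable reading of the slides, not their literal algorithm.

# Minimal sketch of Credit Search (CS). Credit is split uniformly among
# the sub-trees of a node; with unit credit only one value is tried.
# The interface (variables, domains, consistent) is an illustrative assumption.

def credit_search(variables, domains, consistent, credit, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)                  # all variables labelled
    var = next(v for v in variables if v not in assignment)
    values = list(domains[var])
    if not values:
        return None                              # dead end
    branches = len(values) if credit > 1 else 1  # unit credit: no alternatives
    for i in range(min(branches, len(values))):
        # uniform split; earlier branches receive the remainder first
        sub_credit = credit // branches + (1 if i < credit % branches else 0)
        if sub_credit == 0:
            break                                # credit exhausted
        assignment[var] = values[i]
        if consistent(assignment):
            result = credit_search(variables, domains, consistent,
                                   sub_credit, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Example: credit_search(variables, domains, consistent, credit=7) splits
# the credit 7 as 4+3 between two alternatives, as in the CS(7) figure.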

Limited Discrepancy Search (Harvey & Ginsberg, IJCAI 1995)

Recovery from mistakes: how can we make search more efficient when backtracking is blind with respect to heuristics? A discrepancy is a violation of the heuristic (a value different from the recommended one is used). Core principles of discrepancy search: change the order of branches based on discrepancies; explore first the branches with fewer discrepancies; explore first the branches with earlier discrepancies (the heuristic is less reliable there).

LDS limits the number of discrepancies (cutoff), and branches with fewer discrepancies are explored first: first follow the heuristic everywhere, then explore the paths with at most one discrepancy, and so on. After failure, increase the number of allowed discrepancies by one (restart).
Example: LDS(1), where the heuristic suggests going left. [Figure: a binary tree; the all-left branch is explored first, then the one-discrepancy branches, earlier discrepancies first.]

A note on non-binary domains: either all non-heuristic values count as one discrepancy (as here), or each further non-heuristic value increases the number of discrepancies (e.g., the third value = two discrepancies).

Algorithm LDS (Harvey & Ginsberg, IJCAI 1995)

procedure LDS-PROBE(Unlabelled, Labelled, Constraints, D)
  if Unlabelled = {} then return Labelled
  select X in Unlabelled
  ValuesX ← DX - {values inconsistent with Labelled using Constraints}
  if ValuesX = {} then return fail
  select HV in ValuesX using heuristic
  if D > 0 then
    for each value V from ValuesX - {HV} do
      R ← LDS-PROBE(Unlabelled - {X}, Labelled ∪ {X/V}, Constraints, D-1)
      if R ≠ fail then return R
  return LDS-PROBE(Unlabelled - {X}, Labelled ∪ {X/HV}, Constraints, D)
end LDS-PROBE

procedure LDS(Variables, Constraints)
  for D = 0 to |Variables| do   % D determines the allowed number of discrepancies
    R ← LDS-PROBE(Variables, {}, Constraints, D)
    if R ≠ fail then return R
  return fail
end LDS

(A runnable Python rendering of LDS follows below.)

Improved LDS (Korf, AAAI 1996)

In each iteration, LDS re-explores the branches from the previous iteration, i.e., it repeats already-done computation and returns to already-explored parts of the tree. → ILDS: a given number of discrepancies (cutoff), with branches with later discrepancies explored first. After failure, increase the number of discrepancies by one (restart).
Example: ILDS(1), where the heuristic suggests going left.
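As a cross-check of the LDS pseudocode above, here is a minimal runnable rendering in Python. It is a sketch under the same assumed CSP interface as the earlier examples (variables, domains, consistent), plus an assumed heuristic(x, values) parameter that returns the recommended value.

# LDS in Python; interface and parameter names are illustrative assumptions.

def lds_probe(unlabelled, labelled, domains, consistent, heuristic, d):
    if not unlabelled:
        return labelled                          # all variables labelled
    x = unlabelled[0]
    values = [v for v in domains[x] if consistent({**labelled, x: v})]
    if not values:
        return None                              # dead end
    hv = heuristic(x, values)                    # heuristically best value
    if d > 0:
        for v in values:                         # discrepancy branches first
            if v == hv:
                continue
            r = lds_probe(unlabelled[1:], {**labelled, x: v},
                          domains, consistent, heuristic, d - 1)
            if r is not None:
                return r
    return lds_probe(unlabelled[1:], {**labelled, x: hv},
                     domains, consistent, heuristic, d)

def lds(variables, domains, consistent, heuristic):
    # D determines the allowed number of discrepancies
    for d in range(len(variables) + 1):
        r = lds_probe(list(variables), {}, domains, consistent, heuristic, d)
        if r is not None:
            return r
    return None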

Algorithm ILDS (Korf, AAAI 1996)

procedure ILDS-PROBE(Unlabelled, Labelled, Constraints, D)
  if Unlabelled = {} then return Labelled
  select X in Unlabelled
  ValuesX ← DX - {values inconsistent with Labelled using Constraints}
  if ValuesX = {} then return fail
  select HV in ValuesX using heuristic
  if D < |Unlabelled| then   % difference from LDS
    R ← ILDS-PROBE(Unlabelled - {X}, Labelled ∪ {X/HV}, Constraints, D)
    if R ≠ fail then return R
  if D > 0 then
    for each value V from ValuesX - {HV} do
      R ← ILDS-PROBE(Unlabelled - {X}, Labelled ∪ {X/V}, Constraints, D-1)
      if R ≠ fail then return R
  return fail
end ILDS-PROBE

procedure ILDS(Variables, Constraints)
  for D = 0 to |Variables| do   % D determines the allowed number of discrepancies
    R ← ILDS-PROBE(Variables, {}, Constraints, D)
    if R ≠ fail then return R
  return fail
end ILDS

Depth-Bounded Discrepancy Search (Walsh, IJCAI 1997)

ILDS explores branches with later discrepancies first, yet the heuristic is typically less reliable early in the search. → DDS: discrepancies are allowed only up to some depth (cutoff); at the limit depth there must be a discrepancy (so no branches from previous iterations are re-visited); the depth limit also restricts the number of discrepancies; branches with earlier discrepancies are tried first. After failure, increase the depth limit by one (restart).
Example: DDS(3), where the heuristic suggests going left.

Other branching strategies

Enumeration. So far we assumed search by labelling, i.e., assignment of values to variables: assign a value, propagate, and backtrack in case of failure (then try another value). This is called enumeration; propagation is used only after instantiating a variable. Enumeration resolves disjunctions of the form X=0 ∨ X=1 ∨ ... ∨ X=N-1; if there is no correct value, the algorithm tries all the values.
Example: X, Y, Z in 0..N-1 (N is a constant), with constraints X = Y, X = Z, Z = (Y+1) mod N. The problem is arc consistent but has no solution; enumeration will try all the values for X (for N = 10^7, runtime 45 s at a 1.7 GHz P4). Can we label faster?

Step labelling. We can use propagation as soon as we find some value to be wrong: that value is deleted from the domain, which starts propagation that filters out other values. We then resolve disjunctions of the form X=H ∨ X≠H; this is called step labelling (usually the default strategy). The previous example is solved in 22 s by trying and refuting the value 0 for X. Why still so long? In each AC cycle we remove just one value.

Bisection. Another typical branching is bisection / domain splitting: we resolve disjunctions of the form X ≤ H ∨ X > H, where H is a value in the middle of the domain. A sketch of search with this branching follows below.
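Here is a minimal Python sketch of search with bisection branching. The propagate parameter stands in for the solver's consistency routine and is an assumption (it returns pruned domains, or None on a domain wipe-out); it is not an API from the lecture.

# Search with bisection (domain splitting) branching; illustrative sketch.

def split_search(domains, propagate):
    domains = propagate(domains)                 # prune; None means failure
    if domains is None:
        return None
    if all(len(d) == 1 for d in domains.values()):
        return {x: min(d) for x, d in domains.items()}   # all singletons
    # choose a variable with a non-singleton domain (here: the smallest one)
    x = min((v for v in domains if len(domains[v]) > 1),
            key=lambda v: len(domains[v]))
    values = sorted(domains[x])
    h = values[len(values) // 2 - 1]             # value in the middle
    for half in ({v for v in values if v <= h},  # branch X <= H
                 {v for v in values if v > h}):  # branch X > H
        result = split_search({**domains, x: half}, propagate)
        if result is not None:
            return result
    return None

All constraint reasoning lives in propagate here; with a strong propagator, each split can remove many values at once, which is exactly what the slow one-value-per-AC-cycle behaviour of the example above lacks.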

Constrained optimization

So far we looked for any solution satisfying the constraints. Frequently, we need to find an optimal solution, where solution quality is defined by an objective function.

Definition: A Constraint Satisfaction Optimisation Problem (CSOP) consists of a CSP P and an objective function f mapping solutions of P to real numbers. A solution to a CSOP is a solution to P minimizing/maximizing the value of f. When solving CSOPs we need methods that can provide more than one solution.

Branch and bound

Branch and bound is a frequently used optimisation technique based on pruning branches that cannot contain an optimal solution. It uses a heuristic function h that estimates the value of the objective function f. An admissible heuristic for minimization satisfies h(x) ≤ f(x) [for maximization, f(x) ≤ h(x)]; a heuristic closer to f is better. We stop exploring a search branch when there is no solution in the sub-tree, or when there is no optimal solution in the sub-tree: Bound ≤ h(x), where Bound is the maximal value of f for an acceptable solution. How to obtain the Bound? For example, take the value of the best solution found so far.

Branch and bound for constrained optimization

The objective function is encoded in a constraint: we optimize a value v, where v = f(x). The first solution is found with no bound on v; each next solution must be better than the last solution found (v < Bound); repeat until no feasible solution is found.

Algorithm Branch & Bound

procedure BB-Min(Variables, V, Constraints)
  Bound ← sup
  NewSolution ← fail
  repeat
    Solution ← NewSolution
    NewSolution ← Solve(Variables, Constraints ∪ {V < Bound})
    Bound ← value of V in NewSolution (if any)
  until NewSolution = fail
  return Solution
end BB-Min

Branch and bound: notes

The heuristic h is hidden in the propagation of the constraint v = f(x). The efficiency of the search depends on a good heuristic (good propagation through the objective constraint) and on finding a good solution early; supplying an initial bound may help. We often find the optimal solution quickly, but the proof of optimality takes time (the rest of the search tree must be explored). Frequently, we do not need an optimal solution and a good one is enough; BB can then stop after finding a good enough solution. BB can also be sped up by using both upper and lower bounds:

repeat
  TempBound ← (UBound + LBound) / 2
  NewSolution ← Solve(Variables, Constraints ∪ {V ≤ TempBound})
  if NewSolution = fail then LBound ← TempBound + 1
  else UBound ← TempBound
until LBound = UBound
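To ground the BB-Min pseudocode, here is a minimal Python sketch. The solve parameter stands for any complete CSP solver returning one solution or None; the interface, the constraints-as-predicates representation, and all names are assumptions for illustration.

# Branch and bound for minimization: repeatedly solve with f(x) < Bound.

import math

def bb_min(variables, domains, constraints, objective, solve):
    bound = math.inf                   # "sup": no bound on the first solution
    solution = None
    while True:
        # each new solution must be strictly better than the last one found;
        # b=bound freezes the current bound inside the added constraint
        bounded = constraints + [lambda a, b=bound: objective(a) < b]
        new_solution = solve(variables, domains, bounded)
        if new_solution is None:
            return solution            # the last solution found is optimal
        solution = new_solution
        bound = objective(solution)    # tighten the bound (v < Bound)

Note that the objective enters the search only through the added constraint, matching the slides' point that the heuristic h is hidden in the propagation of v = f(x).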