Other approaches Marco Kuhlmann & Guido Tack Lecture 11

Plan for today
- CSP-based propagation
- Constraint-based local search
- Linear programming

CSP-based propagation

History of CSP-based propagation
Originated in AI research:
- backtracking (generate and test)
- arc consistency, a.k.a. domain consistency
- path consistency
- k-consistency

Reminder: CSP
A constraint satisfaction problem contains:
- variables with associated domains
- constraints (relations) between the variables
Here: only binary constraints, i.e. each constraint is the full relation on all but two variables.
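To make the later algorithms concrete, here is one possible Python encoding of such a binary CSP (a minimal sketch of my own, not the lecture's code): domains as sets, and each binary constraint c_{i,j} as a predicate on value pairs. The example constraints mimic the < network of the following slides; the edge directions between x1, x2, x3 are inferred from the pruned domains shown later.

    # One possible encoding of a binary CSP (illustrative sketch).
    # Domains: variable -> set of values.
    # Constraints: (i, j) -> predicate p(a, b) that holds iff i = a, j = b is allowed.

    domains = {i: set(range(1, 101)) for i in (1, 2, 3, 4, 5)}

    less = lambda a, b: a < b
    constraints = {
        (3, 1): less,  # x3 < x1  (direction inferred from the slides' pruned domains)
        (3, 2): less,  # x3 < x2  (likewise inferred)
        (3, 5): less,  # x3 < x5  (named explicitly on the path-inconsistency slide)
        (5, 4): less,  # x5 < x4
        (4, 3): less,  # x4 < x3, i.e. x3 > x4
    }

    def check(i, j, a, b):
        """True iff assigning i = a and j = b satisfies the constraint between i and j."""
        if (i, j) in constraints:
            return constraints[(i, j)](a, b)
        if (j, i) in constraints:
            return constraints[(j, i)](b, a)
        return True  # unconstrained pair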

Constraint Network
[Figure: a network of five variables x1, ..., x5, drawn as nodes labelled with their domains (all 1..100); the labelled edges between the nodes are the binary < constraints. Nodes are variables, edges are constraints.]

Arc inconsistency
[Figure: the subnetwork over x1, x2, x3, domains 1..100, with < constraints.]
There is no x in D3 such that x < 1: the arc 3-2 is inconsistent.

Arc consistency
[Figure: after pruning, D1 = 2..100, D2 = 2..100, D3 = 1..99.]
All arcs are consistent: the network is arc consistent.

AC-1

revise(i,j):
    modified := false
    for x in Di do
        if there is no y in Dj with (x,y) in ci,j then
            Di := Di - {x}
            modified := true
    return modified

ac1():
    Q := {(i,j) for all ci,j}
    do
        change := false
        for (i,j) in Q do
            change := change or revise(i,j)
    while (change)

Inefficiency: always revises all arcs!

AC-3

revise(i,j): as in AC-1

ac3():
    Q := {(i,j) for all ci,j}
    while Q not empty do
        remove (k,m) from Q
        if revise(k,m) then
            N := {(i,k) for all ci,k, i ≠ m}
            Q := Q ∪ N

Better: only revise arcs whose variable was modified.
Still: possibly check many constraints several times.
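Here is a runnable Python version of AC-3 over the representation sketched earlier (my sketch, not the lecture's code):

    from collections import deque

    def neighbours(constraints, k):
        """Variables that share a constraint with k."""
        return {j for (i, j) in constraints if i == k} | \
               {i for (i, j) in constraints if j == k}

    def revise(domains, check, i, j):
        """Remove values of Di that have no support in Dj; True iff Di changed."""
        removed = {a for a in domains[i]
                   if not any(check(i, j, a, b) for b in domains[j])}
        domains[i] -= removed
        return bool(removed)

    def ac3(domains, constraints, check):
        """AC-3; returns False as soon as a domain becomes empty."""
        queue = deque()
        for (i, j) in constraints:
            queue.append((i, j))
            queue.append((j, i))
        while queue:
            k, m = queue.popleft()
            if revise(domains, check, k, m):
                if not domains[k]:
                    return False
                # only arcs pointing at the modified variable k need re-revising
                queue.extend((i, k) for i in neighbours(constraints, k) if i != m)
        return True

On the cyclic example network, ac3(domains, constraints, check) keeps tightening bounds round after round before failing, which is exactly the O(|D|)-step behaviour the path-inconsistency slide points out.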

AC-2001
Idea: remember the smallest support for each value
- when a support is removed, try to find a new support
- only check bigger supports
Benefit: every possible support is checked at most once.

AC-2001

revise(i,j):
    modified := false
    for x in Di with Last[i,x,j] not in Dj do
        y := min { n in Dj | n > Last[i,x,j], (x,n) in ci,j }
        if y exists then
            Last[i,x,j] := y
        else
            Di := Di - {x}
            modified := true
    return modified

ac2001():
    for all i, x in Di, j do
        Last[i,x,j] := min(Dj) - 1
    Q := {(i,j) for all ci,j}
    while Q not empty do
        remove (k,m) from Q
        if revise(k,m) then
            N := {(i,k) for all ci,k, i ≠ m}
            Q := Q ∪ N
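A Python sketch of the AC-2001 bookkeeping (helper names are mine; the main loop is the same as AC-3's, with revise2001 plugged in):

    def make_last(domains, constraints):
        """Initialise Last[i, x, j] just below the smallest value of Dj."""
        arcs = list(constraints) + [(j, i) for (i, j) in constraints]
        return {(i, x, j): min(domains[j]) - 1
                for (i, j) in arcs for x in domains[i]}

    def revise2001(domains, check, last, i, j):
        """Like revise, but resume the support search above the last support
        found, so every candidate support is examined at most once overall."""
        modified = False
        for x in sorted(domains[i]):
            if last[(i, x, j)] in domains[j]:
                continue  # previous support still present
            support = next((n for n in sorted(domains[j])
                            if n > last[(i, x, j)] and check(i, j, x, n)), None)
            if support is not None:
                last[(i, x, j)] = support
            else:
                domains[i].discard(x)
                modified = True
        return modified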

Propagators for arc consistency
Given: a constraint c on variables x, y.

ac_c,x(s)(z) = s(z) for z ≠ x
ac_c,x(s)(x) = { n ∈ s(x) | ∃ m ∈ s(y). (n,m) ∈ c }

ac_c,y(s)(z) = s(z) for z ≠ y
ac_c,y(s)(y) = { m ∈ s(y) | ∃ n ∈ s(x). (n,m) ∈ c }

Propagation with these propagators achieves arc consistency for c.

Strong assumptions
Consider only normalized networks: at most one constraint between each pair of variables.
Consequence: (x < y and y < x) is detected as inconsistent by normalization, since the combined relation is empty.

Path inconsistency
[Figure: the five-variable network after arc consistency: D1 = D2 = 2..100, D3 = 1..99, D4 = D5 = 1..100, with the < constraints as edges.]
The network is unsatisfiable, but it takes O(|D|) propagation steps to find out!
Idea: combine two constraints.
Combining x3 < x5 and x5 < x4 yields x3 < x4; normalization then detects that x3 < x4 and x3 > x4 are inconsistent.

Path consistency
Algorithm to make a network path consistent: compute combinations of constraints, then use AC.
Problem: arbitrary combinations of constraints are expensive to compute.
Not used in actual CP systems.

Literature
Apt. Principles of Constraint Programming. Cambridge University Press, 2003.
Mackworth. Consistency in Networks of Relations. Artificial Intelligence 8(1), 1977.

Constraint-based local search

Local search
Idea (incomplete!):
- start from some assignment
- move to neighbouring assignments
- improve the overall "quality" of the assignment
Quality of an assignment:
- number of violated constraints
- objective function (for optimization)

Neighbourhoods and moves
For an assignment a, N(a) are the neighbouring assignments.
Some neighbours may be forbidden; the legal neighbours are identified by a function L.
Local search performs a move from a to one of the legal neighbours in N(a), selected by an operation S.

Local search blueprint

a := GenerateInitialAssignment()
a' := a
for k in 1..MaxTrials do        -- cut-off
    if f(a) < f(a') then        -- f measures the quality of an assignment
        a' := a
    a := S(L(N(a), a), a)       -- move
return a'
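As a runnable illustration, here is the blueprint as a generic Python function (a sketch; the parameter names mirror the slide's placeholders f, N, L, S):

    def local_search(generate_initial, f, N, L, S, max_trials=1000):
        """Generic local-search blueprint from the slide.

        generate_initial() -> assignment
        f(a)               -> quality (lower is better)
        N(a)               -> neighbouring assignments of a
        L(neighbours, a)   -> the legal neighbours
        S(legal, a)        -> the neighbour to move to
        """
        a = generate_initial()
        best = a
        for _ in range(max_trials):      # cut-off
            if f(a) < f(best):           # track the best assignment so far
                best = a
            a = S(L(N(a), a), a)         # move
        return best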

Reminder: GSAT

GSAT(α, MAX_FLIPS, MAX_TRIES):
    for i := 1 to MAX_TRIES do
        T := random truth assignment
        for j := 1 to MAX_FLIPS do
            if T satisfies α then return T
            p := variable such that changing its truth value gives the largest
                 increase in the number of clauses of α satisfied by T
            T := T with the truth assignment of p reversed
    return "no satisfying assignment found"
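A compact Python rendering of GSAT (my sketch; DIMACS-style signed literals assumed for the clause representation):

    import random

    def gsat(clauses, n_vars, max_flips=1000, max_tries=10):
        """GSAT sketch. Clauses are lists of non-zero ints:
        literal v means variable v is true, -v means it is false."""
        def satisfied(assign):
            return sum(any((lit > 0) == assign[abs(lit)] for lit in cl)
                       for cl in clauses)
        for _ in range(max_tries):
            assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
            for _ in range(max_flips):
                if satisfied(assign) == len(clauses):
                    return assign
                # flip the variable whose flip satisfies the most clauses
                def score(v):
                    assign[v] = not assign[v]
                    s = satisfied(assign)
                    assign[v] = not assign[v]
                    return s
                best = max(range(1, n_vars + 1), key=score)
                assign[best] = not assign[best]
        return None  # no satisfying assignment found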

Example: Graph partitioning
Find a balanced partition of the vertices of a graph G = (V, E):
V = P1 ∪ P2, |P1| = |P2|
with a minimal number of cross-partition edges:
f(⟨P1, P2⟩) = |{ {v1, v2} ∈ E | v1 ∈ P1, v2 ∈ P2 }|

Approach: greedy local improvement
- N returns only balanced partitions
- L licenses only better assignments

Example: Graph partitioning
Initial assignment: an arbitrary balanced partition.
Move: swap vertices between P1 and P2, so assignments remain balanced partitions!
Neighbourhood:
N(⟨P1, P2⟩) = { ⟨(P1 \ {a}) ∪ {b}, (P2 \ {b}) ∪ {a}⟩ | a ∈ P1, b ∈ P2 }

Example: Graph partitioning
Legal moves improve the objective value:
L(N, a) = { n ∈ N | f(n) < f(a) }
Greedy selection: choose one among the best neighbours:
S(M, a) ∈ { m ∈ M | f(m) = min_{m' ∈ M} f(m') }
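Putting the pieces together, a sketch of this greedy partition improvement in Python (the graph representation, edges as pairs of vertices, is my choice, not from the slides):

    import random

    def cut_size(edges, p1):
        """f: number of edges crossing the partition (P2 is the complement)."""
        return sum(1 for (u, v) in edges if (u in p1) != (v in p1))

    def improve_partition(vertices, edges, steps=1000):
        """Greedy local improvement for balanced graph partitioning."""
        vs = list(vertices)
        random.shuffle(vs)
        p1 = set(vs[:len(vs) // 2])       # arbitrary balanced partition
        for _ in range(steps):
            p2 = set(vertices) - p1
            # N: all single swaps; L: only improving swaps; S: pick a best one
            best, best_cost = None, cut_size(edges, p1)
            for a in p1:
                for b in p2:
                    cand = (p1 - {a}) | {b}
                    cost = cut_size(edges, cand)
                    if cost < best_cost:
                        best, best_cost = cand, cost
            if best is None:
                break                      # local minimum: no legal move left
            p1 = best
        return p1, set(vertices) - p1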

Heuristics and metaheuristics
Heuristics typically choose the next neighbour.
Metaheuristics collect information on the execution sequence:
- how to escape local minima
- how to drive towards global optimality

Example heuristics
- locally improve f
- choose the best neighbour, randomize in case of ties
- choose the first better neighbour
- multistage heuristics
- random improvement
- ...

Example metaheuristics
- Iterated local search: start from different initial assignments
- Simulated annealing: based on a "temperature"; initially sample the search space widely, later move towards random improvement
- Tabu search
- ...
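For illustration, a minimal simulated-annealing loop using the standard Metropolis acceptance rule exp(-Δ/T) (a textbook formulation, not from the slides; all names are mine):

    import math, random

    def anneal(initial, f, random_neighbour, t0=10.0, cooling=0.995, steps=10000):
        """Minimal simulated annealing: high T ~ random walk, low T ~ greedy."""
        a, t = initial, t0
        best = a
        for _ in range(steps):
            b = random_neighbour(a)
            delta = f(b) - f(a)
            # always accept improvements; accept worsenings with prob exp(-delta/t)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                a = b
            if f(a) < f(best):
                best = a
            t *= cooling  # lower the temperature
        return best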

Implementation aspects

a := GenerateInitialAssignment()
a' := a
for k in 1..MaxTrials do
    if f(a) < f(a') then a' := a
    a := S(L(N(a), a), a)       -- incremental algorithms!
return a'

Computing f, N, L and S from scratch in every iteration is too expensive: they must be maintained incrementally.

Constraint-based local search
Idea: a declarative model based on constraints
- constraints give structure to the neighbourhood computation
- efficient incremental algorithms
- separate the model from the search specification

COMET
System for constraint-based local search:
- a special, object-oriented programming language
- http://www.comet-online.org
- developed at Brown University by P. Van Hentenryck and Laurent Michel

n-queens in Comet

int n = 8;
range Size = 1..n;
LocalSolver m();
UniformDistribution distr(Size);
// incremental variables, with a random initial assignment
var{int} queen[i in Size](m, Size) := distr.get();
int neg[i in Size] = -i;
int pos[i in Size] = i;
// constraints
ConstraintSystem S(m);
S.post(new AllDifferent(queen));
S.post(new AllDifferent(queen, neg));
S.post(new AllDifferent(queen, pos));
m.close();
// search
while (S.violationDegree() > 0)
  selectmax(q in Size)(S.getViolations(queen[q]))
    selectmin(v in Size)(S.getAssignDelta(queen[q], v))
      queen[q] := v;

Metaheuristic: tabu search
Idea: do not modify variables that you have modified recently.
Simple tabu data structure:
- tabu[v] = earliest iteration in which v may be modified again
- when modifying v in iteration i, set tabu[v] to i + t, where t is the tabu tenure

n-queens (tabu search)

int n = 20;
...
int tabu[i in Size] = -1;
int iteration = 0;
int tenure = 4;
while (S.violationDegree()) {
  selectmax(q in Size : tabu[q] <= iteration)
           (S.getViolations(queen[q]))
    selectmin(v in Size)(S.getAssignDelta(queen[q], v)) {
      queen[q] := v;
      tabu[q] = iteration + tenure;
    }
  iteration++;
}
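For readers without Comet, a rough Python analogue of this tabu search (the board representation, one queen per column, and all names are mine):

    import random

    def conflicts(queen, q):
        """Number of queens attacking queen q (same row or same diagonal)."""
        return sum(1 for r in range(len(queen)) if r != q and
                   (queen[r] == queen[q] or abs(queen[r] - queen[q]) == abs(r - q)))

    def tabu_queens(n=20, tenure=4, max_iter=100000):
        queen = [random.randrange(n) for _ in range(n)]  # random initial assignment
        tabu = [-1] * n
        for it in range(max_iter):
            if sum(conflicts(queen, q) for q in range(n)) == 0:
                return queen                              # solution found
            # most violated non-tabu queen
            free = [q for q in range(n) if tabu[q] <= it]
            q = max(free, key=lambda q: conflicts(queen, q))
            # best value for that queen
            def delta(v):
                old = queen[q]
                queen[q] = v
                d = conflicts(queen, q)
                queen[q] = old
                return d
            queen[q] = min(range(n), key=delta)
            tabu[q] = it + tenure                         # tabu until it + tenure
        return None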

Constraints as differentiable objects
Maintain properties incrementally:
- degree of violation
- set of violated variables
Allow differentiation, i.e. querying the effect of modifications: how would the violation change when moving to this assignment?
Possibly difficult, definitely tedious to implement!

CBLS - Architecture
[Figure: a layered architecture: search on top of differentiable objects (functions and constraints), which are in turn built on invariants.]

Invariants
Example:
inc{int} s(m) <- sum(i in 1..10) a[i]
Incrementally maintain that s is the sum of the a[i] ("one-way constraints").
Differentiable.
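A toy Python sketch of such a one-way sum invariant (illustrative only; in Comet this machinery is generated from the declaration):

    class SumInvariant:
        """Incrementally maintains s = sum(a). One-way: updates flow from a to s."""
        def __init__(self, a):
            self.a = list(a)
            self.value = sum(self.a)     # computed once, then maintained

        def assign(self, i, v):
            """Update a[i] and fix up the sum in O(1) instead of re-summing."""
            self.value += v - self.a[i]
            self.a[i] = v

        def delta(self, i, v):
            """Differentiation: effect on s of a[i] := v, without performing it."""
            return v - self.a[i]

delta is the kind of what-if query that getAssignDelta answers in the Comet model above.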

Constraints based on invariants
Implement constraints as objective (violation) functions:
V(e1 = e2) = abs(e1 - e2)
V(e1 ≤ e2) = max(e1 - e2, 0)
V(e1 ≠ e2) = 1 - min(1, abs(e1 - e2))
V(r1 ∧ r2) = V(r1) + V(r2)
V(r1 ∨ r2) = min(V(r1), V(r2))
V(¬r) = 1 - min(1, V(r))
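These definitions translate one-to-one into code; a minimal sketch:

    # Violation degrees composed exactly as above (0 means satisfied).
    def v_eq(e1, e2):  return abs(e1 - e2)               # V(e1 = e2)
    def v_le(e1, e2):  return max(e1 - e2, 0)            # V(e1 <= e2)
    def v_ne(e1, e2):  return 1 - min(1, abs(e1 - e2))   # V(e1 != e2)
    def v_and(v1, v2): return v1 + v2                    # V(r1 and r2)
    def v_or(v1, v2):  return min(v1, v2)                # V(r1 or r2)
    def v_not(v):      return 1 - min(1, v)              # V(not r)

    # e.g. under x=3, y=5, z=3, the violation of (x = y) and (x != z) is
    # v_and(v_eq(3, 5), v_ne(3, 3)) == 2 + 1 == 3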

Local search vs. CP
Local search:
- needs only memory linear in the problem size
- deals well with large and over-constrained problems
- is good for online algorithms (dynamically changing constraints)
CP:
- can guarantee that constraints hold
- can prove unsatisfiability
- can prove optimality

Literature
Van Hentenryck, Michel. Constraint-Based Local Search. MIT Press, 2005.
Michel, Van Hentenryck. A Constraint-Based Architecture for Local Search. OOPSLA, 2002.
Van Hentenryck, Michel. Differentiable Invariants. CP, 2006.

Linear programming

Linear Programming (LP)
Describe the problem by:
- linear equations
- linear inequalities
- a linear objective function
Find an optimal solution over the reals.
Variants where some variables must take integer values:
- ILP: Integer Linear Programming
- MILP: Mixed Integer Linear Programming
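As a small illustration with SciPy's linprog solver (the toy model below is my own): maximize x + y subject to x + 2y ≤ 14, 3x − y ≥ 0 and x − y ≤ 2.

    from scipy.optimize import linprog

    # maximize x + y  <=>  minimize -x - y   (linprog minimizes)
    c = [-1, -1]
    A_ub = [[1, 2],    # x + 2y <= 14
            [-3, 1],   # 3x - y >= 0   <=>  -3x + y <= 0
            [1, -1]]   # x - y <= 2
    b_ub = [14, 0, 2]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # optimal vertex (6, 4) with objective value 10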

LP and CP hybrids
Relaxations of global constraints:
- can guide search
- enable pruning in optimization
Decomposition schemes (column generation, Benders decomposition):
- tighten bounds on variables
- use CP/LP on the problem parts each is well suited to

Simplex method
[Figure: a 2D feasible region, the polygon bounded by the linear inequalities; the annotations mark the vertices attained at "minimum x, maximum y", "maximum x" and "minimum y".]
The optimum of a linear objective is always attained at a vertex of the feasible region; the simplex method walks from vertex to vertex, improving the objective.

Linear Programming
Advantages:
- very robust and scalable technique
- very well understood
- numerically highly stable
- very good at optimization
Disadvantages:
- common structures are difficult to express (distinct, ...)

Literature Cormen, Leiserson, Rivest, Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001.

Summary
Propagation-based approaches:
- exploit problem structure (global constraints)
- good at finding feasible solutions
- good at combining algorithmic techniques
Other approaches:
- often scale better
- may offer fewer guarantees
- sometimes better at optimization