
Search
Krzysztof Kuchcinski
Krzysztof.Kuchcinski@cs.lth.se
Department of Computer Science, Lund Institute of Technology, Sweden
January 12, 2015

"He who can properly define and divide is to be considered a god." Plato (ca. 429-347 BC)
"Test everything. Hold on to the good. Avoid every kind of evil." (1 Thessalonians 5:21-22)

Outline
1 Complete Search Methods: Chronological Backtracking, Backtrack-Free Search
2 Incomplete Search Methods: Iterative Broadening, Discrepancy Search, Credit Search
3 Optimization
4 Multi-objective Optimization
5 Conclusions

Complete Search Methods

- constraint consistency methods prune some FDVs and enforce constraint consistency, but this is not a complete procedure; the set of constraints is not solved yet,
- a solution requires finding an assignment that satisfies all constraints,
- the method is based on constrain-and-generate: it assigns a new value to a selected FDV and performs consistency checking,
- if at some point an inconsistency is detected (by obtaining an empty finite domain), the method backtracks,
- the standard algorithm for the constrain-and-generate method is the depth-first search (DFS) algorithm, also called labeling,
- an implementation of the DFS method is often called chronological backtracking.

Depth-First Search Algorithm

  label DFS(V)
    if solver.consistency()
      if V ≠ ∅
        var ← select FDV from V
        V' ← V \ {var}
        v ← select value from D_var
        impose var = v
        result ← DFS(V')
        if result ≠ nil
          return result ∪ (var, v)
        else
          undo all decisions        // backtrack
          impose var ≠ v
          return DFS(V)
      else
        return {}                   // solution found, return empty result
    else
      return nil                    // inconsistent constraints

DFS (cont'd)

Several decisions need to be made when implementing a DFS algorithm:
- first, we need to determine the method for selecting an FDV from the vector of FDVs that is given as a parameter to the DFS method,
- second, we need to select a value from the domain of the selected variable var.
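To make the control flow concrete, here is a minimal, self-contained Java sketch of chronological backtracking. The consistency check is a toy all-different test standing in for a real solver's propagation; nothing here is JaCoP's implementation.

```java
import java.util.*;

/** A minimal sketch of chronological backtracking (DFS labeling). */
public class SimpleBacktrack {

    // domains.get(i) holds the remaining domain of variable i
    static boolean dfs(List<Set<Integer>> domains, int[] assignment, int depth) {
        if (depth == assignment.length)
            return true;                                  // all variables labeled: solution
        for (int v : new ArrayList<>(domains.get(depth))) {
            assignment[depth] = v;                        // impose var = v
            if (consistent(assignment, depth)             // prune inconsistent branches
                    && dfs(domains, assignment, depth + 1))
                return true;
            // falling through to the next value is the chronological backtrack
        }
        return false;                                     // no value works: fail
    }

    // toy "consistency": the labeled prefix must be all-different (an assumption)
    static boolean consistent(int[] a, int depth) {
        for (int i = 0; i < depth; i++)
            if (a[i] == a[depth])
                return false;
        return true;
    }

    public static void main(String[] args) {
        List<Set<Integer>> domains = new ArrayList<>();
        for (int i = 0; i < 3; i++)
            domains.add(new TreeSet<>(Arrays.asList(1, 2, 3)));
        int[] assignment = new int[3];
        if (dfs(domains, assignment, 0))
            System.out.println(Arrays.toString(assignment)); // e.g. [1, 2, 3]
    }
}
```

Undoing decisions is implicit here: the loop simply overwrites assignment[depth] with the next candidate value, which corresponds to the "undo all decisions" step of the pseudocode.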

DFS Variable Selection

- input order: selects the first variable from the current input vector,
- first fail: selects the variable with the smallest number of elements in its domain,
- most constrained: selects the variable that appears in the most constraints,
- smallest: selects the variable that has the smallest minimal value in its domain,
- largest: selects the variable that has the greatest minimal value in its domain,
- smallest maximal: selects the variable that has the smallest maximal value in its domain, and
- max regret: selects the variable that has the largest difference between the smallest and the second smallest value in its domain.

DFS Value Selection

- the minimal value from the domain is selected,
- the maximal value from the domain is selected,
- the middle value from the domain is selected first, and then values to its left and right are tried, and
- a random value from the domain is selected.

DFS Example

Example (graph coloring problem): a graph with nodes v0, v1, v2, v3 (constraint-graph figure omitted in the transcription), with
v0 :: {1..3}, v1 :: {1..2}, v2 :: {1..2} and v3 :: {1..3}; the vector of variables is [v0, v3, v1, v2].
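The example can be run with JaCoP's standard search, where the variable and value selection options above are plugged in as objects. The sketch below follows the JaCoP 4.x API (class names as in org.jacop.*); the edge set is an assumption, since the constraint-graph figure was lost in transcription.

```java
import org.jacop.core.*;
import org.jacop.constraints.*;
import org.jacop.search.*;

public class ColoringSearch {
    public static void main(String[] args) {
        Store store = new Store();
        IntVar v0 = new IntVar(store, "v0", 1, 3);
        IntVar v1 = new IntVar(store, "v1", 1, 2);
        IntVar v2 = new IntVar(store, "v2", 1, 2);
        IntVar v3 = new IntVar(store, "v3", 1, 3);

        // adjacent nodes must take different colors (edges assumed)
        store.impose(new XneqY(v0, v1));
        store.impose(new XneqY(v0, v2));
        store.impose(new XneqY(v0, v3));
        store.impose(new XneqY(v1, v3));
        store.impose(new XneqY(v2, v3));

        IntVar[] vars = {v0, v3, v1, v2};                 // search vector from the slide
        Search<IntVar> search = new DepthFirstSearch<IntVar>();

        // first fail variable selection, minimal value selection
        SelectChoicePoint<IntVar> select = new SimpleSelect<IntVar>(
                vars, new SmallestDomain<IntVar>(), new IndomainMin<IntVar>());

        if (search.labeling(store, select))
            System.out.println(v0 + ", " + v1 + ", " + v2 + ", " + v3);
    }
}
```

Other comparators (e.g., MostConstrainedStatic, MaxRegret) and value orderings (IndomainMax, IndomainMiddle, IndomainRandom) slot in the same way; the names here are taken from the JaCoP 4.x documentation and should be checked against the version in use.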

DFS Search Tree

(a) input order option, (b) first fail option: search trees for the example above, showing failed assignments such as v0 = 1 and v0 = 2 before a solution is reached (tree figures omitted in the transcription).

Variable Selection Motivation

- first fail: selecting small domains at the beginning of the search is motivated by the fact that small domains might give the solver a higher chance to fail on an assignment; this strategy removes parts of the search space early in the search. If an assignment produces a failure, it is better to backtrack there immediately, since a later failure means backtracking to an early decision, often located close to the root of the search tree.
- most constrained: tries to enforce as much propagation as possible at the beginning of the search; it selects an FDV that is present in the most constraints and therefore might wake up a large number of constraints and achieve good search-tree pruning.
- smallest and smallest maximal: justified by scheduling applications; they are usually applied to the FDVs that define task starting times and try to schedule tasks based on their urgency, somewhat like the priorities used in list-scheduling methods.
- max regret: usually used when minimizing a cost function, when we would like to first try a small value from a variable's domain before going to the next one, which is much larger. We try the small value first; if it fails later in the search tree, the next value to try is not much larger, because the variables with the largest jump to their next value were already handled at the beginning of the search.

Variable Selection: Breaking Ties

- several FDVs may exist with the same value of the variable selection criterion,
- a second option to break ties between variables can be useful; for example, the first fail option can be combined with the most constrained option (see the JaCoP sketch after the SimpleDFS listing below),
- this can be implemented in the solver implicitly or explicitly.

Variable Selection: Breaking Ties (cont'd)

- not all problem decision variables are comparable, so our selection options cannot be applied to them uniformly,
- for example, if we model scheduling and assignment, we have two types of FDVs: the starting time t_i of task i, and the resource r_i assigned for the task's execution,
- they are not comparable with respect to the selection options, and placing them in the same vector of FDVs will not create a good search strategy,
- instead, group the variables and organize the search so that it takes the differences between groups into account; for example, the subvector [t_i, r_i] groups the start time and the assigned resource of task i.

JaCoP SimpleDFS (without minimization)

  public boolean label(IntVar[] vars) {
      ChoicePoint choice = null;
      boolean consistent = store.consistency();
      if (!consistent) {
          return false;                    // failed leaf of the search tree
      } else {                             // consistent
          if (vars.length == 0)
              return true;                 // solution found; no more variables to label
          choice = new ChoicePoint(vars);  // selects a variable and a value for it
          levelUp();
          store.impose(choice.getConstraint());            // choice point imposed
          consistent = label(choice.getSearchVariables());
      }
      if (consistent) {
          levelDown();
          return true;
      } else {
          restoreLevel();
          store.impose(new Not(choice.getConstraint()));   // negated choice point imposed
          consistent = label(vars);
          levelDown();
          return consistent;
      }
  }
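As referenced above, JaCoP's SimpleSelect also has a constructor with a second, tie-breaking variable comparator; a one-line sketch (constructor shape and class names follow JaCoP 4.x and should be treated as assumptions):

```java
// first fail as the primary criterion, most constrained breaks ties,
// minimal value selected from the chosen variable's domain
SelectChoicePoint<IntVar> select = new SimpleSelect<IntVar>(
        vars,
        new SmallestDomain<IntVar>(),          // primary: first fail
        new MostConstrainedStatic<IntVar>(),   // tie break: most constrained
        new IndomainMin<IntVar>());
```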

Variable Selection: Breaking Ties Example

Scheduling and assignment of data-flow graphs (task graphs). Two possible search organizations:
- group [t_i, r_i]: use the vector [[t1, r1], [t2, r2], ..., [t11, r11]] and select the pair of FDVs t_i and r_i for value assignment based on the first variable, using, for example, the smallest selection option, or
- create the vector [[t1, t2, ..., t11], [r1, r2, ..., r11]] and assign values first to the first subvector and then to the second one.

DFS Domain Splitting

- our DFS organizes the search space based on variable assignment; more advanced strategies are based on domain splitting,
- domain splitting reduces the domain of a variable by splitting the current domain into several possible subdomains,
- the solver can then trigger consistency checking of the involved constraints even though the final value is not yet determined,
- many constraints can prune the domains of variables that have been assigned subdomains, since their consistency methods are based on bounds consistency and do not require a final assignment to a variable,
- the most popular domain splitting method splits the domain into two parts, the lower and upper halves (weak commitment).

DFS Algorithm with Domain Splitting

  label DFS_split(V)
    if solver.consistency()
      if V ≠ ∅
        var ← select FDV from V
        if min(var) = max(var)              // var already has a single value
          V' ← V \ {var}
          result ← DFS_split(V')
          if result ≠ nil
            return result ∪ (var, min(var))
          else
            return nil
        else
          mid ← (min(var) + max(var)) / 2
          impose var ≤ mid
          result ← DFS_split(V)
          if result ≠ nil
            return result
          else
            undo all decisions              // backtrack
            impose var > mid
            return DFS_split(V)
      else
        return {}                           // solution found, return empty result
    else
      return nil                            // inconsistent constraints
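JaCoP ships a choice-point generator for this strategy as well: SplitSelect imposes var ≤ mid and var > mid instead of assigning single values. A sketch under the same JaCoP 4.x assumptions as before (the class name and constructor shape are assumptions to verify against the library version):

```java
// domain splitting: each choice point halves the selected variable's domain
SelectChoicePoint<IntVar> split = new SplitSelect<IntVar>(
        vars, new SmallestDomain<IntVar>(), new IndomainMin<IntVar>());
boolean found = new DepthFirstSearch<IntVar>().labeling(store, split);
```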

Backtrack-Free Search

- there exists a class of constraint programming problems for which we can find an ordering such that the search does not need to backtrack,
- formally, a given CP problem is backtrack-free for a given order of variables (x1, ..., xn) if for every i < n, every partial solution (x1, ..., xi) can be consistently extended to include x_{i+1}.

To be able to discuss backtrack-free search we need to define several properties of constraint graphs:
- an ordered constraint graph is a constraint graph whose nodes have been ordered linearly,
- the width of a node in an ordered constraint graph is its number of parent nodes,
- the width of the ordered constraint graph is the maximum width of any of its nodes, and finally,
- the width of the constraint graph is the minimum width over all orderings of that graph.

In general, the width of a constraint graph depends upon its structure.

Backtrack-Free Search Example

Consider a simple graph coloring problem with three nodes, A, B and C, and the following constraints:
A :: {1..2}, B :: {1..2}, C :: {1..2}, A ≠ B, A ≠ C
(a) constraint graph, (b) examples of ordering (figures omitted in the transcription).

Backtrack-Free Search (cont'd)

Theorem (Freuder 1982). Given a constraint satisfaction problem:
1. A search order is backtrack-free if the level of strong consistency is greater than the width of the corresponding ordered constraint graph.
2. There exists a backtrack-free search order for the problem if the level of strong consistency is greater than the width of the constraint graph.

Proof. In instantiating any variable v we must check consistency requirements involving at most j other variables, where j is the width of the node. If the consistency level k is at least j + 1, then, given prior choices for the j variables that are consistent among themselves, there exists a value for v consistent with the prior choices.
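The width definitions translate directly into code. Below is a small self-contained Java sketch that computes the width of an ordered constraint graph for the three-node example (edges A-B and A-C, as in the constraints above):

```java
import java.util.*;

/** Width of an ordered constraint graph: for each node, count its
 *  neighbors that appear earlier in the ordering (its "parents");
 *  the width of the ordering is the maximum such count. */
public class GraphWidth {

    static int width(List<String> order, Map<String, Set<String>> adj) {
        int w = 0;
        for (int i = 0; i < order.size(); i++) {
            int parents = 0;
            for (int j = 0; j < i; j++)
                if (adj.get(order.get(i)).contains(order.get(j)))
                    parents++;
            w = Math.max(w, parents);
        }
        return w;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> adj = new HashMap<>();
        adj.put("A", new HashSet<>(Arrays.asList("B", "C")));
        adj.put("B", new HashSet<>(Arrays.asList("A")));
        adj.put("C", new HashSet<>(Arrays.asList("A")));
        System.out.println(width(Arrays.asList("A", "B", "C"), adj)); // 1
        System.out.println(width(Arrays.asList("B", "C", "A"), adj)); // 2: A has two parents
    }
}
```

For the ordering (A, B, C) the width is 1, so by the theorem arc consistency (strong 2-consistency) already guarantees backtrack-free search; the ordering (B, C, A) has width 2 and would need strong 3-consistency.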

Backtrack-Free Search (cont'd)

- tree-structured binary constraint graphs have a special property: any such arc-consistent graph has a backtrack-free search for a variety of orderings, since there exist many orderings with width 1,
- directional arc consistency, directional path consistency and directional k-consistency have been developed to restrict consistency checking to a given order of variables.

Incomplete Search Methods

- search methods that do not necessarily need to be complete, i.e., to check all possible assignments,
- they might not search the whole search space and may skip some parts on purpose,
- they might miss solutions that exist in the parts of the search space they skipped,
- depending on the given parameters, the methods can usually search the full search space as well.

Iterative Broadening

- the algorithm restricts the number of values assigned to each selected variable by specifying a new parameter b,
- the search starts with b = 1 and, in the first iteration, the DFS algorithm artificially backtracks after trying only a single assignment to each variable,
- if a solution is found, the whole search stops and the found solution is reported,
- in case of failure, the algorithm continues and tries to find a solution with two possible assignments for each variable; if this fails, it continues until all assignments for all variables have been tried, at which point it has performed a full DFS search.

Iterative Broadening Algorithm

  label IB(V)
    b ← 1
    do
      result ← breadth_bounded_dfs(b, V)
      b ← b + 1
    while result = nil ∧ b ≤ max_i(|D_i|)
    return result

Iterative Broadening Algorithm (cont'd)

  label breadth_bounded_dfs(b, V)
    if solver.consistency()
      if V ≠ ∅
        var ← select one variable from V
        do
          v ← select value from D_var
          delete v from D_var
          impose var = v
          result ← breadth_bounded_dfs(b, V \ {var})
          if result ≠ nil
            return result ∪ (var, v)
          else
            undo all decisions            // backtrack
        while D_var ≠ {} ∧ fewer than b values of D_var have been tried
        return nil                        // no more values to assign to var
      else
        return {}                         // assignment for all FDVs found
    else
      return nil                          // inconsistent constraints

Discrepancy Search

Several search methods are based on the notion of discrepancy. They try to limit the search-tree size by examining only a limited number of nodes that get assigned values different from the first value chosen by the variable and value selection options.
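A compact Java sketch of the same control structure, reusing the toy consistent check from the SimpleBacktrack sketch earlier (so equally illustrative, not solver code):

```java
import java.util.*;

/** Iterative broadening over the toy setting of SimpleBacktrack:
 *  bound the number of values tried per variable by b, then widen b. */
public class IterativeBroadening {

    static boolean broadenedDfs(List<Set<Integer>> domains, int[] a, int depth, int b) {
        if (depth == a.length)
            return true;                      // assignment for all variables found
        int tried = 0;
        for (int v : new ArrayList<>(domains.get(depth))) {
            if (tried++ == b)
                break;                        // artificial backtrack after b values
            a[depth] = v;
            if (SimpleBacktrack.consistent(a, depth)
                    && broadenedDfs(domains, a, depth + 1, b))
                return true;
        }
        return false;
    }

    static boolean search(List<Set<Integer>> domains, int[] a) {
        int maxDomain = domains.stream().mapToInt(Set::size).max().orElse(0);
        for (int b = 1; b <= maxDomain; b++)  // b = maxDomain is a full DFS
            if (broadenedDfs(domains, a, 0, b))
                return true;
        return false;
    }
}
```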

Limited Discrepancy Search (LDS)

- limited discrepancy search (LDS) was originally proposed by Harvey and Ginsberg for binary search trees,
- the intuition behind this search method is that our variable ordering is often good enough to order the variables in such a way that a solution can be found without backtracking,
- in situations where a wrong variable assignment is made, we need to backtrack and change the decisions on variable assignment,
- the decisions that are wrong and need to be changed are called discrepancies, and therefore the method is called limited discrepancy search.

LDS Algorithm

  label LDS(V)
    for i = 0 .. maximum depth
      result ← LDS_probe(V, i)
      if result ≠ nil
        return result
    return nil

LDS Algorithm (cont'd)

  label LDS_probe(V, k)
    if V = ∅
      return {}
    if ¬solver.consistency()
      return nil
    v ← select variable from V
    V ← V \ {v}
    if k = 0
      impose v = first_value(v)
      result ← LDS_probe(V, 0)
      if result ≠ nil
        return result ∪ (v, first_value(v))
      return nil
    else                                  // k > 0
      impose v = second_value(v)
      result ← LDS_probe(V, k - 1)
      if result ≠ nil
        return result ∪ (v, second_value(v))
      undo decisions
      impose v = first_value(v)
      result ← LDS_probe(V, k)
      if result ≠ nil
        return result ∪ (v, first_value(v))
      return nil
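The probe can be phrased very compactly over a binary tree, in the spirit of the original formulation; the Node type and its goal flag below are illustrative assumptions, not part of any solver API:

```java
/** LDS over a binary tree: the left child is the heuristically preferred
 *  branch, and taking a right child costs one discrepancy. A sketch. */
public class Lds {

    static class Node {
        Node left, right;
        boolean goal;                             // leaf that is a solution
        boolean isLeaf() { return left == null && right == null; }
    }

    static boolean probe(Node n, int k) {
        if (n == null)
            return false;                         // dead end
        if (n.isLeaf())
            return n.goal;
        if (k == 0)
            return probe(n.left, 0);              // no discrepancies left: follow heuristic
        // spend a discrepancy here, or keep all k for deeper levels
        return probe(n.right, k - 1) || probe(n.left, k);
    }

    static boolean lds(Node root, int maxDepth) {
        for (int i = 0; i <= maxDepth; i++)       // iteration i allows i discrepancies
            if (probe(root, i))
                return true;
        return false;
    }
}
```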

LDS Search Tree

(a) 0th iteration, (b) 1st iteration, (c) 2nd iteration, (d) 3rd iteration, (e) 4th iteration (search-tree figures omitted in the transcription).

Improved Limited Discrepancy Search (ILDS)

- ILDS visits only those paths that have exactly i discrepancies,
- it keeps track of the remaining depth of the search tree; if this depth is less than or equal to the number of remaining discrepancies, only the higher values in the domain are assigned (constraint v = second_value(v)).

ILDS Search

(a) 0th iteration, (b) 1st iteration, (c) 2nd iteration, (d) 3rd iteration, (e) 4th iteration (search-tree figures omitted in the transcription).

Depth-Bounded Discrepancy Search (DDS)

- depth-bounded discrepancy search (DDS) is another extension of the LDS method,
- it biases the search toward discrepancies located high in the search tree,
- it uses a depth-bound parameter to iteratively increase the depth of the discrepancy search,
- on iteration i + 1, DDS explores those branches on which discrepancies occur at depth i or less,
- it takes only right branches at depth i on the (i + 1)-th iteration; left branches are avoided, since they would lead to previously visited nodes.

DDS Tree

(a) 0th iteration, (b) 1st iteration, (c) 2nd iteration, (d) 3rd iteration, (e) 4th iteration (search-tree figures omitted in the transcription).

Credit Search

- credit search is an incomplete depth-first search method where the search is controlled by two parameters: the value of a credit and the number of backtracks,
- first, the search uses the credit points and distributes them to different branches of the DFS tree,
- when the credit is used up (its value is zero), only the number of backtracks specified by the second parameter is allowed.
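Over the same toy Node type as the LDS sketch, credit search can be outlined as follows. The halving policy and the shared backtrack budget mirror the parameters on the next slide (credit 8, distribution 1/2, 3 backtracks), but they describe one common variant and are assumptions, not a normative definition:

```java
/** Credit search sketch over Lds.Node: credit is split between branches,
 *  and once it is exhausted only a bounded-backtrack DFS remains. */
public class CreditSearch {

    static int backtracks;                            // remaining backtrack budget

    static boolean search(Lds.Node n, int credit) {
        if (n == null) return false;
        if (n.isLeaf()) return n.goal;
        if (credit <= 1)
            return boundedDfs(n);                     // credit spent: local search
        int half = credit / 2;                        // distribution 1/2
        return search(n.left, credit - half) || search(n.right, half);
    }

    static boolean boundedDfs(Lds.Node n) {
        if (n == null) return false;
        if (n.isLeaf()) return n.goal;
        if (boundedDfs(n.left)) return true;
        if (backtracks-- <= 0) return false;          // no backtracks left
        return boundedDfs(n.right);
    }

    static boolean creditSearch(Lds.Node root, int credit, int allowedBacktracks) {
        backtracks = allowedBacktracks;
        return search(root, credit);
    }
}
```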

Credit Search Example

Initial credit = 8, backtracks = 3. The credit is distributed down the tree (for example 4, 2, 1 along successive branches with a distribution of 1/2); when it runs out, a local search with the given number of backtracks takes over. Parameters: credit (e.g., 8), credit distribution (e.g., 1/2), number of backtracks (e.g., 3). (Tree figure omitted in the transcription.)

Optimization

- optimization is an important part of constraint programming,
- we discuss methods for minimizing a given cost function and assume that our model contains a single FDV that represents the cost to be minimized,
- the branch-and-bound (B&B) algorithm, when used, reduces the search space by cutting solutions that cannot provide a better cost value.

Optimization Algorithm

  label minimize(V, cost)
    result ← nil
    solution ← nil
    do
      result ← DFS(V)
      if result ≠ nil
        solution ← result
        impose cost < valueOfCost(result)
    while result ≠ nil
    return solution

The optimization algorithm restarts the DFS algorithm after each improving solution.
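In JaCoP the same effect is obtained without writing the restart loop: DepthFirstSearch.labeling accepts the cost variable and performs branch-and-bound internally, pruning with cost < best after every solution. A sketch (JaCoP 4.x API; store, vars and cost are assumed to be set up as in the earlier examples):

```java
// branch-and-bound minimization: cost is the single FDV representing the objective
Search<IntVar> search = new DepthFirstSearch<IntVar>();
SelectChoicePoint<IntVar> select = new SimpleSelect<IntVar>(
        vars, new SmallestDomain<IntVar>(), new IndomainMin<IntVar>());

if (search.labeling(store, select, cost))
    System.out.println("minimum cost = " + cost.value());
```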

Multi-objective Optimization

- multi-objective optimization simultaneously optimizes more than one cost function,
- there does not exist a single optimal solution; instead, we introduce the notion of Pareto optimality,
- given several optimization criteria, or cost functions, f_i(x), i = 1, ..., n, we define a Pareto optimal solution as a vector x such that every other vector y has a higher value for at least one of the cost functions, i.e., ∃i: f_i(y) > f_i(x), or has the same value for all cost functions,
- each vector x that is Pareto optimal is also called a Pareto point,
- Pareto points form a trade-off curve or surface, called the Pareto curve or Pareto surface.

Pareto Points

Plot of cost versus time with points A, B, C and D (figure omitted in the transcription).

Multi-objective Optimization (cont'd)

The generation of Pareto points:
- start the search with the most relaxed constraints,
- when a solution is found, an additional constraint is imposed; this constraint forbids solutions that are worse or equal in both criteria, i.e., cost < c_i ∨ time < et_i is imposed,
- points found earlier might be dominated by the newly found point, and therefore the algorithm removes such dominated points from the list of candidate Pareto points.

Pareto Points Generation Algorithm

  vector pareto(V, criteria)
    paretoPoints ← nil
    while DFS(V) ≠ nil
      paretoPoints ← paretoPoints ∪ (val_0, ..., val_n)
      impose criteria_0 < val_0 ∨ ... ∨ criteria_n < val_n
      remove points dominated by (val_0, ..., val_n) from paretoPoints
    return paretoPoints

Pareto Points Generation Example

(a) step 1, (b) step 2, (c) step 3, (d) Pareto points: a sequence of cost-versus-time plots with points A, B, C and D, showing dominated points being removed (figures omitted in the transcription).

Multi-objective Optimization Example

Example: Consider the differential equation solver example. We optimize three cost functions: the number of adders, the number of multipliers and the number of execution cycles. The Pareto points generation algorithm produces the following Pareto points:

  Number of adders   Number of multipliers   Execution cycles
  1                  4                        6
  2                  3                        6
  1                  3                        7
  2                  2                        7
  1                  2                        8
  1                  1                       13
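The dominance bookkeeping in the loop above (add the new point, drop the candidates it dominates) is easy to isolate. A self-contained Java sketch, assuming all criteria are minimized:

```java
import java.util.*;

/** Pareto-front maintenance: a new point evicts every candidate it
 *  dominates and is kept only if nothing dominates it. */
public class ParetoFront {

    // p dominates q: no worse in every criterion, strictly better in one
    static boolean dominates(int[] p, int[] q) {
        boolean strictlyBetter = false;
        for (int i = 0; i < p.length; i++) {
            if (p[i] > q[i]) return false;
            if (p[i] < q[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    static void add(List<int[]> front, int[] point) {
        front.removeIf(q -> dominates(point, q));     // remove dominated candidates
        if (front.stream().noneMatch(q -> dominates(q, point)))
            front.add(point);
    }

    public static void main(String[] args) {
        List<int[]> front = new ArrayList<>();
        add(front, new int[]{4, 6});                  // (cost, time) pairs
        add(front, new int[]{6, 2});
        add(front, new int[]{4, 4});                  // evicts (4, 6)
        for (int[] p : front)
            System.out.println(Arrays.toString(p));   // [6, 2] and [4, 4] remain
    }
}
```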

Conclusions

- search is built on the constrain-and-generate method together with depth-first search,
- the ordering of variables and the selection of values for assignment influence search performance,
- optimization uses the branch-and-bound method,
- different search styles can be defined for different problems,
- there exist both complete and incomplete search methods.