Temporal Constraint Satisfaction Problems An Evaluation of Search Strategies


Bruce Lo, John Duchi, and James Connor
Stanford University, CS227: Assignment 3
May 19, 2005

Introduction

Temporal constraint satisfaction problems (TCSPs) arise in many areas of computer science. The TCSP originates in the domain of job scheduling, where each job consists of a set of operations that must be completed, and these operations use different resources and demand different time windows. Scheduling problems of this kind are therefore one of the areas in which TCSP solving techniques apply most readily. Although temporal constraint satisfaction problems are NP-hard [2], there are heuristic techniques that find solutions to TCSPs very quickly, especially when the problems are not too highly constrained. In this paper we investigate the efficacy of different algorithms and heuristics for solving TCSPs. We begin by describing the translation of a TCSP into a simple temporal problem (an STP, as presented in Dechter et al. [2]) and the solution of the STP to constrain search. We then explore chronological backtracking with constraint propagation and two strategies for dynamically ordering variables during the search: the slack and bslack heuristics. We also discuss possibilities for extending our basic search framework, particularly some form of dependency-directed (rather than chronological) backtracking, and evaluate the results of our testing. The problems we use to test our search procedures are drawn from the Sadeh suite. There are sixty problems with similar features. Each problem has fifty operations with specified durations and five resources, and each operation uses one of the resources for its entire duration. Each problem also defines a set of release and due times, which correspond, respectively, to the earliest allowed start time and the latest allowed end time of each operation.
There are also precedence constraints that require some operation i to come after some other operation j. We ran all tests on a SunBlade 2000 with two UltraSPARC III+ 900 MHz CPUs and 2 gigabytes of RAM.

Problem Representation

In our framework, each operation is represented by its start time and its duration; that is, operation i is represented by its start time X_i and its duration. Following Dechter et al.'s approach to the TCSP, we constrain the temporal distance between time points (which in our framework are the start times of events), allowing us to quickly shrink the possible domains of the operations. With this representation we can build a distance graph over the possible temporal distances between each X_i and X_j, and run shortest-path algorithms on it to shrink our search domain.

This translation brings out the STP embedded within our TCSP. Following Laborie's framework, we treat resources as unary resources that are completely consumed and returned by each operation, which imposes a total ordering constraint on operations using the same resource. In the problems we investigate, there are always ten operations sharing each resource (operations 0 through 9 share resource 0, operations 10 through 19 share resource 1, and so on), which means any operation i in [0, 9] may never overlap with another operation j in [0, 9]. To solve our TCSP, then, we must impose an ordering on every pair of operations (i, j) that share a resource. These orderings must allow each operation to finish by its due time, start after its release time, and obey the precedence constraints given in the problem description. Because of the structure of the Sadeh problems (each resource shared by 10 operations), there are (10 choose 2) * 5 = 45 * 5 = 225 orderings between operations that we must assign. The ordering between two operations is a variable in our problem's framework; these variables are discussed in more detail below. Having the STP embedded in the TCSP gives us two graphs in which to search for solutions: a distance graph, representing the temporal distances between different operations, and a constraint graph, which represents the ordering and resource constraints on different pairs (i, j) of operations.

Algorithms and Techniques

Bellman-Ford

Similar in spirit to arc-consistency algorithms such as AC-3 and AC-4 for ordinary constraint satisfaction problems, solving the embedded STP (as described above) tightens the domain of possible solutions to the TCSP before search actually begins. To solve the embedded simple temporal problem, we run Bellman-Ford on a distance graph that represents the temporal distance constraints. Each constraint can be expressed in the form X_i - X_j <= w_ij, where X_i and X_j represent the start times of operations i and j.
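As an illustration, release times, due times, and precedence constraints can all be written as edges of this form. The sketch below is hypothetical: the names (add_edge, encode_operation, encode_precedence) and the edge convention (an edge j -> i of weight w for each constraint X_i - X_j <= w) are ours for illustration, not the paper's actual data structures.

```python
# Hypothetical sketch: each temporal constraint X_i - X_j <= w becomes an
# edge j -> i of weight w in the distance graph. Node 0 stands for t_0.
INF = float("inf")

def add_edge(graph, j, i, w):
    # keep only the tightest bound between a given pair of nodes
    graph[(j, i)] = min(graph.get((j, i), INF), w)

def encode_operation(graph, i, release, due, duration):
    add_edge(graph, 0, i, due - duration)   # X_i - t_0 <= due - duration
    add_edge(graph, i, 0, -release)         # t_0 - X_i <= -release

def encode_precedence(graph, i, j, duration_i):
    # operation i before operation j: X_j >= X_i + duration_i
    add_edge(graph, j, i, -duration_i)      # X_i - X_j <= -duration_i

graph = {}
encode_operation(graph, 1, release=2, due=10, duration=3)
encode_precedence(graph, 1, 2, duration_i=3)
```

With these example numbers, operation 1 gets the bounds X_1 - t_0 <= 7 and t_0 - X_1 <= -2, and the precedence adds X_1 - X_2 <= -3.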
For convenience of representation, we introduce a dummy time point t_0 for the initial time, fix its value to 0, and treat t_0 as a node in the distance graph. Our distance graph contains a node for every operation i and a node for t_0; each constraint X_i - X_j <= w_ij contributes an edge from node j to node i of weight w_ij, each node has a weight-0 edge to itself, and the weight is infinite where no constraint exists. The weights represent upper bounds on the differences between the start times of operations. Minimum-weight paths in the distance graph represent the tightest upper bounds implied by the constraints of the problem. For example, we may have

    X_i - X_j <= w_ij,  X_i - X_k <= w_ik,  and  X_k - X_j <= w_kj.

Adding together the latter two gives

    (X_i - X_k) + (X_k - X_j) = X_i - X_j <= w_ik + w_kj.

Whichever of w_ij and (w_ik + w_kj) is smaller is the least upper bound on X_i - X_j, and it corresponds to the shortest path between the two nodes. To solve the simple temporal problem, for each X_i we need to find a greatest lower bound and a least upper bound. We can do this by finding shortest paths in the distance graph. To get the greatest lower bound (GLB), we find the shortest path from X_i to the t_0 node; this encodes 0 - X_i <= -GLB, or X_i >= GLB. To get the least upper bound (LUB), we find the shortest path from t_0 to X_i; this encodes X_i - 0 <= LUB. We can do this efficiently in O(|V| * |E|) time for a graph with node set V and edge set E, using Bellman-Ford to find single-source shortest paths. To find shortest paths from t_0 to the operation nodes, we treat t_0 as the source. To find shortest paths from all the operation nodes X_i to t_0, we treat t_0 as the source in a new graph formed by reversing all the edges of the original graph: a path from t_0 to X_i in the reversed graph corresponds to a path from X_i to t_0 in the original. If there is any negative-weight cycle in the distance graph, our simple temporal problem is infeasible, because the GLB and LUB can then become arbitrarily large and small respectively. Bellman-Ford incrementally maintains best estimates of the minimum-weight paths from the source to each node. For |V| iterations, and for each edge (u, v), Bellman-Ford checks whether the current estimate of the distance from the source (in our framework, the t_0 we introduced) to u, plus the weight of edge (u, v), is less than the current estimate of the distance from the source to v; if so, it updates the estimate for v to reflect the shorter path just found. Assuming there are no negative-weight cycles, Bellman-Ford needs only (|V| - 1) iterations to find the shortest paths, because a shortest path can use at most |V| nodes and (|V| - 1) edges: repeating a node (going around a cycle) cannot shorten the distance when there are no negative-weight cycles.
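The GLB/LUB computation just described can be sketched as follows. This is a minimal illustration, assuming edges (u, v) of weight w encode X_v - X_u <= w and node 0 plays the role of t_0; it is not the paper's actual implementation.

```python
# Minimal Bellman-Ford over the STP distance graph; edges[(u, v)] = w
# encodes the constraint X_v - X_u <= w, and node 0 stands for t_0.
INF = float("inf")

def bellman_ford(nodes, edges, source):
    """Return distances from source, or None if a negative cycle exists."""
    dist = {v: INF for v in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):            # |V| - 1 relaxation passes
        for (u, v), w in edges.items():
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for (u, v), w in edges.items():            # extra pass: cycle detection
        if dist[u] + w < dist[v]:
            return None
    return dist

def bounds(nodes, edges):
    """LUB_i = shortest path t_0 -> i; GLB_i = -(shortest path i -> t_0)."""
    lub = bellman_ford(nodes, edges, 0)
    reversed_edges = {(v, u): w for (u, v), w in edges.items()}
    to_t0 = bellman_ford(nodes, reversed_edges, 0)
    if lub is None or to_t0 is None:
        return None                            # the STP is infeasible
    return {i: (-to_t0[i], lub[i]) for i in nodes if i != 0}
```

For instance, with the single pair of constraints X_1 - t_0 <= 7 and t_0 - X_1 <= -2, bounds returns the window (2, 7) for operation 1.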
Bellman-Ford is guaranteed to find the minimum-weight path from the source to any node u because in each iteration it performs a new distance update for at least one edge on the shortest path, until the path is complete. If Bellman-Ford can still update the distance from the source to some node in the |V|th iteration, then there must be a negative-weight cycle, because otherwise all shortest paths would already have been found by the (|V| - 1)th iteration.

Search

We implemented the following general search framework, with constraint propagation at each assignment. Here initialize() includes a call to run Bellman-Ford on the STP distance graph that undergirds the TCSP, and each ij denotes a variable in the search. These variables, described in more detail below, represent the ordering of two operations i and j. During the labeling stage for a variable, we must propagate the constraints that ordering the two operations imposes on their start times and on the start times of any other operations known to precede or follow them. Likewise, during the un-labeling stage, we un-propagate the constraints (modifications to operation start times) that had been propagated to any operations affected by the variable we un-label. This is explained in more detail in the constraint propagation section of the paper.

procedure schedule(n)
    consistent = true
    ij = initialize()
    loop
        if consistent then (ij, consistent) = label(ij)
        else (ij, consistent) = unlabel(ij)
        if ij = nn then return "solution found"
        else if ij = 00 then return "no solution"
    endloop
end schedule

Variables

As mentioned in the problem modeling section, in using the search procedure to schedule the operations of a job, we define the variables to be pairs of operations that share the same resource (denoted ij hereafter). Two operations i and j that share a resource cannot overlap; as a result, each variable has two values in its domain: either i is scheduled strictly before j (denoted i->j) or j is scheduled strictly before i (denoted j->i). As there are 225 ij variables to assign, we keep them in an array rs_list, and accessing a variable is a simple index into this array. In our implementation, we model a variable as an object consisting of two operation IDs, a domain bit array, a current-domain bit array, and its assigned value (either i->j or j->i). We use two domains so that we can track precedence constraints fixed by the problem description; that is, if operation i must finish before j simply because the problem description says so, we know the domain size is one. When resetting variable domains during search, we then reset to the original domain rather than allowing search over both possible assignments to ij.

Constraint Propagation

Our constraint propagation algorithm relies on the fact that once we have instantiated a particular ordering of two operations, we can shrink the range of their respective earliest and latest start times (est and lst) using the following rule:

    If i -> j then
        est_j = max(est_j, est_i + length_i)
        lst_i = min(lst_i, lst_j - length_i)

Furthermore, once an operation j's start-time range changes, its predecessors and successors may be affected, so we must propagate the time constraints we establish for j.
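That rule translates directly into code. The helper below is a hypothetical sketch (the array-based est, lst, and length names are ours); it returns the operations whose windows were tightened, so the caller knows what to propagate next.

```python
# Hypothetical helper applying the ordering i -> j to the current
# earliest/latest start-time windows.
def apply_ordering(est, lst, length, i, j):
    changed = []
    if est[i] + length[i] > est[j]:      # j cannot start before i finishes
        est[j] = est[i] + length[i]
        changed.append(j)
    if lst[j] - length[i] < lst[i]:      # i must leave room to finish before j
        lst[i] = lst[j] - length[i]
        changed.append(i)
    return changed
```

For example, apply_ordering(est, lst, length, 0, 1) with est = [0, 0], lst = [10, 10], length = [3, 4] tightens est[1] to 3 and lst[0] to 7.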
The terms predecessor and successor describe ordering relationships between operations established either by the problem input or by our search procedure: operation j's predecessors are those operations scheduled strictly before j, and j's successors are those scheduled strictly after j. Constraint propagation is the workhorse of the search procedure, and its efficiency is important for achieving good performance. We use a number of auxiliary data structures to achieve it. First, when examining a newly assigned ij variable (assigned either i->j or j->i), we make sure that any inconsistency is found as soon as possible for both operations before proceeding to the next variable, so that no work is wasted. We know an inconsistency exists if any operation's earliest start time exceeds its latest start time. If we discover an inconsistency, constraint propagation has effectively annihilated the variable ij's domain, and we can backtrack rather than continue searching. Second, to find the predecessors or successors of a particular operation quickly, we use a constraint propagation list, a two-dimensional array of vectors called cp_list. cp_list is an Operations x 2 array of vectors, indexed first by operation ID (1, 2, ..., 50 in the Sadeh test suite) and then by 0 or 1, representing successor or predecessor. These vectors track the precedence constraints of a particular operation. They are initialized with the precedence constraints loaded from the input file specifying the problem, and whenever i->j or j->i is assigned to an ij resource-constraint variable, we append the newly created precedence constraint to the tail of the appropriate vector. For example, choosing the ordering i->j would add j as a successor to cp_list[i][successor] and i as a predecessor to cp_list[j][predecessor]. Third, whenever we examine a constraint and one of its member operations j's earliest or latest possible start times change, we push j's predecessors or successors onto a queue. We propagate the time changes made to j to each element of the queue, adding successors or predecessors of each operation we pop back onto the queue. We continue working off the queue until it is empty, which, given that there are only 50 operations and we only push predecessors or successors of a given operation, terminates relatively quickly.
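The queue-driven propagation described above might be sketched as follows. The names mirror the text (cp_list, est, lst), but the structure (a dict of predecessor/successor lists) is illustrative, not the paper's array-of-vectors implementation.

```python
# Sketch of queue-based propagation of start-time windows. cp_list maps each
# operation to its predecessor and successor lists; est/lst hold the current
# earliest/latest start-time windows.
from collections import deque

def propagate(assigned, cp_list, est, lst, length):
    """Propagate windows outward from a changed operation; False = inconsistent."""
    queue = deque([assigned])
    while queue:
        op = queue.popleft()
        if est[op] > lst[op]:              # window annihilated: backtrack
            return False
        for succ in cp_list[op]["successors"]:
            new_est = est[op] + length[op]
            if new_est > est[succ]:        # successor starts after op finishes
                est[succ] = new_est
                queue.append(succ)
        for pred in cp_list[op]["predecessors"]:
            new_lst = lst[op] - length[pred]
            if new_lst < lst[pred]:        # predecessor must finish before op
                lst[pred] = new_lst
                queue.append(pred)
    return True
```

On a two-operation chain 0 -> 1 with length [3, 2] and windows [0, 5], this tightens est[1] to 3 and lst[0] to 2 before the queue drains.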
Variable Ordering Heuristics

In traditional CSPs, dynamic variable ordering, especially in conjunction with forward checking, has proved a very valuable tool for speeding up search [1]. Through constraint propagation, our search includes a weak version of forward checking: while we check whether individual operations become inconsistent, rather than whether the domains of ij variables are annihilated, propagating constraints does let us eliminate some orderings of operations and thus shrink variable domains, so it seems natural that variable and value ordering heuristics would be useful. Smith and Cheng propose heuristics that estimate sequencing flexibility to help select which variable to instantiate when an ordering i->j or j->i is not forced by the start times of operations i and j. There are four possible relationships between the start times of the two operations:

1. est_i + processtime_i <= lst_j and est_j + processtime_j > lst_i
2. est_j + processtime_j <= lst_i and est_i + processtime_i > lst_j
3. est_i + processtime_i > lst_j and est_j + processtime_j > lst_i
4. est_i + processtime_i <= lst_j and est_j + processtime_j <= lst_i

In the first case the ordering i->j is forced, in the second the ordering j->i is forced, and in the last we do not know which of i->j or j->i is correct. Thus, whenever we select a variable to label, we first choose variables whose domains are unary, that is, those in cases 1 or 2. This is isomorphic to the minimum remaining values heuristic used for dynamic variable ordering in traditional CSPs. In TCSPs, however, many variables have a domain of size two, because both orderings remain possible, and if no variables are in cases 1 or 2 above, we must select among the remaining variables (if any variable is in case 3, we backtrack). Intuitively, if we cannot choose a variable from cases 1 or 2, we would like to choose the variable whose domain is most constrained: if we delay choosing that variable, we are likely to reach inconsistent states deeper in the search, which can be costly. We would rather hit the inconsistent states early and focus on the more difficult variables first. We therefore want to measure the temporal flexibility of an unassigned variable ij. To this end, Smith and Cheng define two heuristics that we evaluate: slack and bslack. Given a variable ij, if we choose the ordering i->j, the slack is

    slack(i->j) = lst_j - est_i - processtime_i

while if we choose the ordering j->i, the temporal slack is

    slack(j->i) = lst_i - est_j - processtime_j

Given these definitions, we select the variable ij that minimizes the smaller of its two slacks over all unassigned pairs (u, v):

    min[slack(i->j), slack(j->i)] = min over (u, v) of min[slack(u->v), slack(v->u)]

Once the variable ij has been selected, we go 'all out' for that variable, selecting the value (ordering of i and j) that leaves the most slack. This variable and value selection heuristic, as we shall see, provides very good search guidance; however, it ignores some of the information the two orderings provide. For example, suppose the slack of ordering i->j is 3 but the slack of ordering j->i is 20, while the slack of ordering k->l is 4 and the slack of ordering l->k is also 4; which variable, ij or kl, is better for the search? Slack selects the variable ij, but the large difference between the slacks of ij's two orderings makes this choice perhaps not the best.
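In code, the slack definitions and the min-slack selection rule might look like this sketch (the inputs pairs, est, lst, and length are hypothetical names, not our actual structures):

```python
# Hedged sketch of slack-based variable and value selection.
def slack(est, lst, length, i, j):
    """Temporal slack remaining if we order i -> j."""
    return lst[j] - est[i] - length[i]

def pick_by_slack(pairs, est, lst, length):
    """Choose the pair minimizing min-directional slack, then the
    direction (value) that leaves the most slack."""
    def key(pair):
        i, j = pair
        return min(slack(est, lst, length, i, j),
                   slack(est, lst, length, j, i))
    i, j = min(pairs, key=key)
    if slack(est, lst, length, i, j) >= slack(est, lst, length, j, i):
        return (i, j)                    # instantiate the ordering i -> j
    return (j, i)                        # instantiate the ordering j -> i
```

With identical windows, the pair whose windows are tighter (smaller slack) is selected first, matching the intuition of attacking the most constrained decision early.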
We therefore define a similarity metric for a variable:

    S = min[slack(i->j), slack(j->i)] / max[slack(i->j), slack(j->i)]

With this S, we can define what Smith and Cheng call biased temporal slack:

    Bslack(i->j) = slack(i->j) / f(S)

Here f is a monotonically increasing function of S. As suggested by Smith and Cheng, we test different roots of S for f, and we also use a multi-parameter composite bslack,

    Bslack(i->j) = slack(i->j) / S^(1/n_1) + slack(i->j) / S^(1/n_2)

When choosing the next variable for the search, we select the variable ij with the minimum overall bslack value. This places the most weight on the actual slack of a decision, but when a variable's two orderings give very different slack values, S is small and the variable's bslack values blow up. With bslack, we select the value to instantiate (i->j or j->i) in the same way as we do for the slack variable and value ordering heuristic. There is some inefficiency in calculating bslack and slack values for every possible operation ordering, since at every step of the search we must compute the temporal slack of each variable. Caching might help: we could update only those slacks that change during the constraint propagation phase, but we would still have to examine every ij variable's slack anyway, and since computing the slack and bslack of a single variable is a constant-time operation, we lose no asymptotic efficiency. We do implement one small speedup, however. To quickly pick and attempt to instantiate those orderings where only one of i->j or j->i is possible, we maintain a queue of unit-domain variables. Whenever this queue becomes empty, we must search for variables with minimum slack; during this search we may come across variables whose orderings fall into cases 1 or 2 above, since assigning the case 1 and 2 variables seen at earlier stages of search may create new case 1 and 2 variables. When we find such singleton-domain variables, we push them onto the unit-domain queue and continue scanning for all variables with unit domains. This way we maintain the queue and need to search for variables to choose less often, since we can simply pull variables off of the queue.

Start-Time Stack

To track the current earliest and latest start times of each operation, we use an array of pairs of earliest and latest possible start times. The pairs are initialized with the times determined for each operation by the distances from t_0 to X_i and from X_i to t_0 computed by the Bellman-Ford algorithm. During the search, every time we label some ij variable, we propagate the constraints it imposes, and the start times of the other operations are updated.
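A minimal sketch of this frame-stack bookkeeping follows: snapshot the current (est, lst) frame before propagating a new ordering, and restore the snapshot if propagation fails. The Python class is illustrative only; the implementation described here uses contiguous arrays rather than Python lists.

```python
# Illustrative frame stack for undoing propagation: push a copy of the
# current start-time frame before propagating, pop it back on failure.
import copy

class StartTimeStack:
    def __init__(self, est, lst):
        self.frame = [est, lst]          # current working frame
        self._stack = []

    def push(self):
        # called before propagating a newly assigned ordering
        self._stack.append(copy.deepcopy(self.frame))

    def restore(self):
        # called when propagation finds an inconsistency
        self.frame = self._stack.pop()
```

Because each level stores a full copy of the frame, undoing propagation is a single pop rather than a replay of individual changes.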
Whenever all of a particular variable's values make the TCSP inconsistent, we must backtrack and restore the start times of all operations to the values they had before propagation. We therefore maintain a stack, start_times_stack, whose levels are arrays of earliest and latest start times corresponding to the times possible at some level of the search. Before propagating the constraints from a variable assignment, we push a copy of the current start-time frame (the array of pairs of earliest and latest start times) onto start_times_stack. If the cycle of constraint propagation finishes without inconsistency, the frame stays on the stack and the current frame is modified according to the propagation; this can carry on for any number of ij value selections. Whenever an inconsistency arises during constraint propagation, we restore the previous start-time frame: start_times_stack lets us pop the last frame, replace the current frame with the stored one, and deallocate the old frame. This strategy makes undoing propagation extremely fast. The space requirement is Operations * 2 * 4 bytes per frame (400 bytes). In our dataset there are only 225 variables, so the search can go at most 225 levels deep, which amounts to a mere 90 KB of memory. Even with the full (50 choose 2) = 1225 ij variables over 50 operations, the stack would take less than 490 KB. Although we do not currently do so, we could pre-allocate the space for the full stack to further avoid the cost of memory allocation. One small detail worth noting: we deliberately store each start-time frame as a contiguous array of pairs so that, in making a copy to push onto the stack, we can use memcpy to improve speed.

Results

In this section we examine the results of running the TCSP-solving backtracking procedures described above, and we explore reasons why certain problems proved more difficult than others. In our empirical evaluation, a solver using a two-parameter bslack variable ordering scheme proved the most effective; for the parameters n_1 and n_2 we chose 2 and 3, and with these parameters our search never even backtracked (see Figure 1). In this and the following tables, Bellman-Ford is the time spent running Bellman-Ford on the embedded STP, Search is the time (in seconds) spent on the actual backtracking search, Assignments is the number of variable assignments attempted during the search, and Success is the proportion of the 60 problems the algorithm solved.

Figure 1: Optimum bslack, using a composite of f(S) = S^(1/2) and f(S) = S^(1/3). (Table columns: Bellman-Ford, Search, Assignments, Success; rows: Average, Median, STDev.)

Other bslack variable ordering schemes also produced very good TCSP solvers; every version of our search with bslack that we tested solved all of the Sadeh problems, and all averaged well under a second of solving time. Figure 2 shows the average number of assignments and the success rates for bslack with different parameters n in f(S) = S^(1/n); where two parameters are given, we use the composite bslack function with n_1 as the first parameter and n_2 as the second. In general, the single-parameter bslack functions run more quickly than the composite ones, but they also do more backtracking and more assignments. The slack heuristic also proved valuable; it let us solve 95% of the Sadeh problems, all but three of the sixty, within ten minutes.
Of the problems slack did solve, all but one ran in under 0.05 seconds; the exception took 19 seconds. The temporal slack of a variable is very quick to compute and provides relatively good information about the search, so when search succeeded, searches using slack ran very quickly, in most cases even more quickly than our TCSP solver with bslack (see Figures 3 and 4).

Figure 2: Bslack statistics for different parameters of f(S). (Rows for bslack with n = 2, 3, 4 and the composites (2, 3) and (3, 4), each reporting Average and Median of Bellman-Ford time, Search time, Assignments, and Success.)

Figure 3: Total slack statistics. (Average, Median, and STDev of Bellman-Ford time, Search time, Assignments, and Success.)

Figure 4: Slack statistics for successful solutions (the 95% of attempts that succeeded). (Average, Median, and STDev of Bellman-Ford time, Search time, and Assignments.)

In all but the four hard problems, our solver using the slack heuristic never had to backtrack, much like the searcher using bslack. We hypothesize that in domains such as the Sadeh problems, where each set of operations shares only one resource, the slack heuristic provides a good estimate of future temporal difficulties in the search. For the most part, the relatively few constraints among variables that do not share a resource allow slack to be accurate; bslack's advantage emerges in more tightly constrained domains, where the similarity of the temporal slacks given by the two orderings becomes important. The similarity metric lets bslack say with more confidence that certain variables and operation orderings will become more difficult later in the search, which allows us to assign those variables early in the search. The last method we evaluated was simple chronological backtracking search with constraint propagation. This method worked well on some of the Sadeh problems, but many proved too difficult for it within a 10-minute time limit: non-heuristic search solved 58% of the Sadeh problems, and on those it solved it was significantly slower than search with either bslack or slack (see Figures 5 and 6). Slack and bslack appear to provide very good heuristic estimates of which variables ought to be instantiated, which lets them significantly outpace simple, non-heuristic constraint propagation search.

Figure 5: No-heuristic search with constraint propagation statistics. (Average, Median, and STDev of Bellman-Ford time, Search time, and Success.)

Figure 6: Statistics on successful runs (under 600 seconds) of no-heuristic search. (Average, Median, and STDev of Bellman-Ford time, Search time, and Assignments.)

To help explain why some problems proved more difficult than others, Figure 7 gives a graphical representation of a solution to the twelfth Sadeh problem, which proved to be among the most difficult, causing even some searches using bslack heuristics to backtrack. Each bar represents an entire operation from start to finish. Operations 0 through 19 are clustered near the beginning, but there are large gaps between a few operations, such as between operations 38 and 36 and between 33 and 31. These gaps imply dependencies on other operations' completion, which make search harder because of the interplay between operations that do not share a resource. Nonetheless, a solution exists, and search using a composite bslack heuristic found it without backtracking.

Figure 7: Solution to problem 12. Each bar represents the full execution of one operation (Op 0 through Op 49 on the vertical axis). Horizontal lines indicate groups of operations that share the same resource; the horizontal scale is time.

Discussion and Future Directions

Dependency-Directed Backtracking

Dependency-directed backtracking would be an obvious extension to the search framework evaluated thus far. Because our more effective heuristic searches did no actual backtracking, we expected dependency-directed backtracking to add little to our solver, and we did not implement it; the following nonetheless describes a possible extension of our solver to include it. Our search variables are pairs of operations whose domain values are i->j and j->i. We assign orderings to each pair of constrained operations (ij, a pair of resource-sharing operations), and we must maintain each operation's feasibility as we search; an operation i is feasible if its earliest start time is less than or equal to its latest start time. In chronological backtracking, we assign orderings to pairs of operations until either every constrained pair i, j has an order i->j or j->i assigned, in which case we have a solution, or an assignment makes some operation infeasible. If i->j is an impossible ordering, we try to assign j->i to ij, or else we backtrack one level further in the search. As described in the Start-Time Stack section, each level stores the earliest and latest possible start times of every operation, so as we backtrack we can revert to a previous level's start times instead of tracking the specific changes made when propagating assigned constraints. To do dependency-directed backtracking, we would search back through these historical start times to find a level at which there is a feasible assignment to the current job-pair.
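That backward scan over saved levels might be sketched as follows. The names are hypothetical: frames[level] holds the (est, lst) arrays saved at that search level, and the feasibility test is the est/lst condition from the text.

```python
# Hypothetical sketch of the proposed dependency-directed backtrack: walk
# back through saved start-time frames until one admits a feasible ordering
# for the pair of operations that failed.
def feasible(frame, length, i, j):
    est, lst = frame
    return est[i] + length[i] <= lst[j]      # i -> j can still be scheduled

def backjump_level(frames, length, i, j):
    """Deepest level at which i->j or j->i is still feasible, or None."""
    for level in range(len(frames) - 1, -1, -1):
        if feasible(frames[level], length, i, j) or \
           feasible(frames[level], length, j, i):
            return level
    return None
```

The search would then undo assignments down to the returned level and flip the assignment made there, since that assignment first made every ordering of the failed pair infeasible.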
An illustrated example may help explain this.

Figure 8: Example of dependency-directed backtracking

Suppose that we have already assigned the job-pair orderings A->C, C->D, and B->C, in that order, and we find that for job pair (D, H) neither D->H nor H->D is feasible. In chronological backtracking, we simply backtrack to the B->C assignment. However, if neither D->H nor H->D was feasible given the start times recorded at the level at which we assigned (B, C), then we know that no matter what value we assign to pair (B, C), we will still not find a feasible ordering of H and D, since variable assignments can only make start-time domains smaller. Instead, we want to backtrack until we find a level l at which either D->H or H->D keeps both D and H feasible; we then change the assignment at l, since we know that l's assignment is what first made all orderings of H and D infeasible. In this example, that entails rolling back ordering assignments until we reach the assignment that ordered operations A and C as A->C; then, knowing that this assignment caused the conflict, we reassign it to C->A and continue the search. This dependency-directed backtracking algorithm is equivalent to backjumping in standard CSPs.

We could use a more elaborate backtracking procedure by storing, with each value, the deepest level with which it conflicted. This would allow us to do the equivalent of conflict-directed backjumping in standard CSPs. However, we believe the expense of this approach would outweigh its benefit, because finding the level with which a value conflicts is relatively expensive: it requires propagating constraints through the start times at each previous level until we find a feasible assignment.

Conclusion

In this paper, we evaluated a set of algorithms for solving temporal constraint satisfaction problems. In our empirical evaluations, we found that certain heuristics and search strategies radically pare down the search space, so much so that backtracking becomes unnecessary. A temporal-constraint solving algorithm that implements backtracking search with the bslack heuristic for variable and value selection, propagating constraints as it searches, appears to be very effective for solving TCSPs with unary resource constraints. That these algorithms are so fast, yet sound and complete, is a testament to the usefulness of heuristics and extended search strategies in temporal constraint satisfaction problems.

References

[1] F. Bacchus and P. van Run. Dynamic Variable Ordering in CSPs. Principles and Practice of Constraint Programming, 1995.

[2] R. Dechter, I. Meiri, and J. Pearl. Temporal Constraint Networks. Artificial Intelligence 49, 1991.

[3] P. Laborie. Algorithms for propagating resource constraints in AI planning and scheduling: Existing approaches and new results. Artificial Intelligence 143, 2003.

[4] S. Smith and C. Cheng. Slack-Based Heuristics for Constraint Satisfaction Scheduling. Proceedings of AAAI-93, 1993.
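As a concrete illustration of the equivalence noted above between the dependency-directed scheme and backjumping in standard CSPs, the following sketch implements Gaschnig-style backjumping on a generic binary CSP. The pairwise `conflict` predicate and the function names are our illustrative assumptions for exposition; this is not the TCSP implementation evaluated in the paper, where the analogous role is played by propagating start-time windows.

```python
def backjump_solve(variables, domains, conflict):
    """Gaschnig-style backjumping (a sketch, assuming a pairwise model).

    variables: ordered list of variable names.
    domains:   dict mapping each variable to a list of candidate values.
    conflict:  conflict(x, vx, y, vy) -> True iff x=vx clashes with y=vy.
    Returns a satisfying assignment dict, or None if none exists.
    """
    assignment = {}

    def shallowest_clash(var, val, level):
        # Shallowest earlier level whose assignment clashes with var=val,
        # or None if var=val is consistent with everything assigned so far.
        for i in range(level):
            y = variables[i]
            if conflict(var, val, y, assignment[y]):
                return i
        return None

    def solve(level):
        # Returns (True, None) on success, or (False, jump_level) where
        # jump_level is the level the search should back up to.
        if level == len(variables):
            return True, None
        var = variables[level]
        deepest_culprit = -1   # deepest level that ruled out some value
        descended = False      # did we ever recurse below this level?
        for val in domains[var]:
            clash = shallowest_clash(var, val, level)
            if clash is None:
                descended = True
                assignment[var] = val
                ok, jump = solve(level + 1)
                if ok:
                    return True, None
                del assignment[var]
                if jump < level:
                    # The failure below does not depend on this level's
                    # choice: skip the remaining values and jump past it.
                    return False, jump
            else:
                deepest_culprit = max(deepest_culprit, clash)
        if descended:
            return False, level - 1       # chronological step after real search
        return False, deepest_culprit     # leaf dead end: jump to the culprit

    ok, _ = solve(0)
    return dict(assignment) if ok else None
```

At a dead end where every value of the current variable is directly ruled out, the solver jumps to the deepest level responsible rather than to the chronologically previous one, mirroring how the paper's scheme rolls back past (B, C) directly to the A->C assignment.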


More information

Lecture Notes on Contracts

Lecture Notes on Contracts Lecture Notes on Contracts 15-122: Principles of Imperative Computation Frank Pfenning Lecture 2 August 30, 2012 1 Introduction For an overview the course goals and the mechanics and schedule of the course,

More information

TIE Graph algorithms

TIE Graph algorithms TIE-20106 1 1 Graph algorithms This chapter discusses the data structure that is a collection of points (called nodes or vertices) and connections between them (called edges or arcs) a graph. The common

More information

Steve Levine. Programs with Flexible Time. When? Tuesday, Feb 16 th. courtesy of JPL

Steve Levine. Programs with Flexible Time. When? Tuesday, Feb 16 th. courtesy of JPL Programs with Flexible Time When? Contributions: Brian Williams Patrick Conrad Simon Fang Paul Morris Nicola Muscettola Pedro Santana Julie Shah John Stedl Andrew Wang Steve Levine Tuesday, Feb 16 th courtesy

More information