On Generating Templates for Hypothesis in Inductive Logic Programming


Andrej Chovanec and Roman Barták
Charles University in Prague, Faculty of Mathematics and Physics
Malostranské nám. 25, Praha 1, Czech Republic

Abstract. Inductive logic programming is a subfield of machine learning that uses first-order logic as a uniform representation for examples and hypotheses. In its core form, it deals with the problem of finding a hypothesis that covers all positive examples and excludes all negative examples. The coverage test and the method to obtain a hypothesis from a given template have been efficiently implemented using constraint satisfaction techniques. In this paper we suggest a method to generate the template efficiently by remembering a history of generated templates and using this history when adding predicates to a new candidate template. This method significantly outperforms the existing method based on brute-force incremental extension of the template.

Keywords: inductive logic programming, template generation, constraint satisfaction.

1 Introduction

Inductive logic programming (ILP) is a discipline investigating the invention of clausal theories from observed examples and additional background knowledge. Formally, for a given set of positive examples E+ and a set of negative examples E- we are looking for a hypothesis H such that H entails all examples from E+ and does not entail any example from E- (a so-called consistent hypothesis). Background knowledge is domain-specific knowledge which is available prior to the learning process. Throughout this paper we suppose without loss of generality that background knowledge is empty. Since logical entailment is undecidable in first-order logic, θ-subsumption is used as a decidable restriction of logical entailment [8]. In our work, examples are represented as sets of fully instantiated atoms and a hypothesis is a set of atoms with variables.
In such a setting, the process of inventing the consistent hypothesis H consists of (1) determining its structure, that is, which atoms and how many of them are in the hypothesis, and (2) finding the unifications between variables in the atoms in such a way that we obtain a consistent hypothesis. The hypothesis structure is called a template, and hypothesis H is formed from template T by applying a substitution θ of variables such that Tθ = H. For instance, T = {arc(X1,X2), arc(X3,X4)} is a template consisting of two atoms and four variables and H = {arc(X1,X2), arc(X2,X1)} is one

I. Batyrshin and G. Sidorov (Eds.): MICAI 2011, Part I, LNAI 7094. Springer-Verlag Berlin Heidelberg 2011
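To make the template-to-hypothesis step concrete, here is a minimal Python sketch of applying a substitution to the template T = {arc(X1,X2), arc(X3,X4)}. The paper's system is implemented in SICStus Prolog; the representation and function name below are our own illustration.

```python
# A template is a list of atoms whose variables are all mutually distinct;
# applying a substitution (a mapping between variables) turns it into a
# candidate hypothesis. Representation: an atom is (predicate, args-tuple).

def apply_substitution(template, theta):
    """Rename variables in each atom according to the substitution theta."""
    return [(pred, tuple(theta.get(v, v) for v in args))
            for pred, args in template]

# Template T = {arc(X1,X2), arc(X3,X4)} with fresh variables everywhere.
T = [("arc", ("X1", "X2")), ("arc", ("X3", "X4"))]

# Substitution theta = {X3/X2, X4/X1} unifies variables across the atoms.
theta = {"X3": "X2", "X4": "X1"}

H = apply_substitution(T, theta)
print(H)  # [('arc', ('X1', 'X2')), ('arc', ('X2', 'X1'))]
```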

particular hypothesis obtained from the template T by applying the substitution θ = {X3/X2, X4/X1}. Depending on whether we know the template a priori, we speak about template ILP consistency (the template is known) or general ILP consistency (the template is not known). Although both consistency problems belong to the same complexity class [5], the general task deals with the extra problem of determining the structure of the template, which has a radical impact on the overall performance of an ILP system. Consequently, there are huge differences in running time between algorithms solving the general and the template ILP consistency problems. This situation motivated us to study the general ILP consistency problem, and we developed a new algorithm intended to make solving general consistency more efficient.

In this paper we present a new approach to generating templates for the general ILP consistency problem. Our work focuses on how the initial template is generated; the method uses existing algorithms to transform the template into a consistent hypothesis by finding certain unifications between variables. We propose a generate-and-test algorithm which successively generates templates until it is possible to obtain a consistent hypothesis from the template. The algorithm remembers the previously generated templates and their properties and takes them into account when generating the next template. In other words, the algorithm continuously learns from the rejected templates in order to improve the templates generated in later iterations.

The paper is organized as follows. In section 2 we introduce the existing algorithms used for the template ILP consistency problem; these algorithms are used as subroutines in our template generating algorithm. There we also describe a basic version of the template generating algorithm.
Section 3 gives the main result of the paper: we propose how to extend the basic template generating algorithm with several improvements and extensions, yielding a self-learning template generating algorithm that is significantly more efficient. In section 4 we further enhance the algorithm, and finally in section 5 we present experimental results comparing the effectiveness of the proposed methods. Throughout the paper we work with examples of identifying a common structure (i.e., a hypothesis) in random graphs (i.e., positive and negative examples).

2 Background

As we mentioned in the introduction, ILP deals with the problem of finding a hypothesis that covers a set of positive examples and excludes a set of negative examples. Hypotheses and examples are supposed to be clauses, and the hypothesis is required to logically entail all positive examples and no negative example. Entailment is checked using θ-subsumption [8], which is a decidable restriction of logical entailment. We assume the clauses to be expressed as sets of literals and, without loss of generality, we work only with positive literals, that is, non-negated atoms. All terms in learning examples (hypotheses, respectively) are constants (variables) written in lower (upper) case. For instance, E = {arc(a,b), arc(b,c), arc(c,a)} is an

example and H = {arc(X,Y), arc(Y,Z)} is a hypothesis. Hypothesis H subsumes example E if there exists a substitution θ of variables such that Hθ ⊆ E. In the above example, the substitution θ = {X/a, Y/b, Z/c} shows that H subsumes E.

In order to develop a general ILP system based on the generate-and-test approach, we must specify algorithms for three major components of the system: generating the template, testing θ-subsumption, and deciding template consistency. The ILP algorithm then usually repeats the following three steps: generate a template, find unifications of variables in the template to obtain a hypothesis, and test consistency of the hypothesis using the θ-subsumption check. If the hypothesis is not consistent, the algorithm tries other unifications, and if all unifications are exhausted, it generates a new template.

The θ-subsumption check and deciding template consistency have already been addressed by the utilization of constraint satisfaction (CSP) techniques. The θ-subsumption check is a crucial part of ILP systems and thus naturally motivated new efficient approaches. Maloberti and Sebag proposed an efficient CSP algorithm, Django [6], that significantly outperformed existing systems. The dramatic speed-up brought by Django encouraged exploitation of CSP techniques in other parts of ILP systems. In [2] Barták suggested a novel approach utilizing CSP for deciding template consistency. His work was based on finding unifications between variables in the template to obtain a consistent hypothesis. The idea is as follows: we start with a hypothesis consisting of mutually different variables (the template) and then we systematically search the space of possible unifications until we obtain a consistent hypothesis or the search exhausts all possibilities.
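As an illustration of the test Hθ ⊆ E, here is a brute-force θ-subsumption sketch in Python. It enumerates all variable-to-constant substitutions, so it is exponential in the number of variables, unlike the CSP-based Django algorithm the paper relies on; the representation and names are our own.

```python
from itertools import product

def theta_subsumes(hypothesis, example):
    """Brute-force check whether some substitution theta maps every atom
    of the hypothesis onto an atom of the example (i.e. H.theta is a
    subset of E). Atoms are (predicate, args-tuple) pairs."""
    variables = sorted({v for _, args in hypothesis for v in args})
    constants = sorted({c for _, args in example for c in args})
    example_set = set(example)
    # Try every assignment of constants to variables.
    for values in product(constants, repeat=len(variables)):
        theta = dict(zip(variables, values))
        image = {(p, tuple(theta[v] for v in args)) for p, args in hypothesis}
        if image <= example_set:
            return True
    return False

E = [("arc", ("a", "b")), ("arc", ("b", "c")), ("arc", ("c", "a"))]
H = [("arc", ("X", "Y")), ("arc", ("Y", "Z"))]
print(theta_subsumes(H, E))  # True, witnessed e.g. by theta = {X/a, Y/b, Z/c}
```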
Barták proposed a sophisticated CSP model that extensively prunes the space of all unifications and thus keeps the runtime within very sensible bounds.

For generating the template, Barták used a simple algorithm based on the generate-and-test method that works as follows. First, recall that each variable in the template appears exactly once (it is the goal of template consistency to decide which variables in atoms should be unified to obtain a consistent hypothesis). Hence template generation is about deciding how many atoms of each predicate symbol will appear in the template. In [2] this is done by systematically exploring all possible templates with increasing template length. Let k denote the number of different predicate symbols in all examples (in case the background knowledge is not empty, predicate symbols from the background knowledge should also be considered); for example, {arc(a,b), arc(b,c), arc(c,a)} contains one predicate symbol (arc) and three atoms for this predicate symbol. To simplify notation, we write predicate p instead of writing an atom of predicate symbol p; for example, adding predicate p to a template means adding a new atom of the predicate symbol p with fresh variables to the template. The algorithm starts with a template of length k containing one atom for each predicate symbol, and as soon as the constraint model for template consistency rejects the template, a new template is generated. The template space is searched systematically by generating all possible templates of length k, k+1, k+2, and so on.

The complete algorithm solving the general ILP consistency problem may look like the one in Figure 1. As the stop criterion we can use either reaching the maximal k or reaching the maximal runtime.

1: k ← number of all different predicates in examples
2: repeat
3:   S_T ← all templates of length k
4:   for each T ∈ S_T do
5:     H ← Decide-Consistency(T)
6:     if H is consistent then return H
7:   end for
8:   k ← k + 1
9: until stop criterion satisfied

Fig. 1. A basic generate-and-test template generating algorithm (IDS)

We can easily notice that the algorithm in Figure 1 performs iterative deepening search (IDS). An advantage of this approach is that it guarantees finding the shortest consistent hypothesis. However, we are not always interested in finding the shortest possible solution. In fact, the trade-off between finding the optimal hypothesis and the cost of this search is so cumbersome that the method becomes impractical, especially for longer templates (see the section with experimental results). Hence we suggest a different method for exploring the space of templates based on incremental extension of the template by adding new predicates.

3 Incremental Template Generation

When we analyzed the reasons why the IDS algorithm was not capable of finding more complicated (longer) hypotheses, we found out that it is because of its leaps between two distinct parts of the template space. Consider the following example: we are solving a problem whose final template equals 5×a, 5×b, 5×c (5 predicates a, 5 predicates b, and 5 predicates c) and we have examined all templates of length 14. Suppose, without loss of generality, that the last examined template of length 14 was 4×a, 5×b, 5×c. We can see that it is sufficient to add predicate a to the template. Despite this, the basic systematic algorithm starts blindly generating all templates of length 15 from the beginning until it reaches the correct template.
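For reference, the IDS loop of Figure 1 can be sketched as follows. Templates are represented as multisets of predicate symbols and Decide-Consistency is stubbed by a caller-supplied function; this is a toy sketch with names of our own choosing, not the paper's Prolog implementation.

```python
from itertools import combinations_with_replacement

def ids_templates(predicates, decide_consistency, max_k):
    """Iterative deepening over the template space: for growing length k,
    enumerate every multiset of predicate symbols of size k and hand it
    to the consistency test, which returns a hypothesis or None."""
    k = len(predicates)  # start with one atom per predicate symbol
    while k <= max_k:
        for template in combinations_with_replacement(predicates, k):
            hypothesis = decide_consistency(template)
            if hypothesis is not None:
                return hypothesis
        k += 1  # deepen: all templates of length k were rejected
    return None

# Demo with a stubbed consistency test that accepts one fixed multiset.
target = ("a", "a", "b")
found = ids_templates(["a", "b"],
                      lambda t: t if tuple(sorted(t)) == target else None,
                      max_k=5)
print(found)  # ('a', 'a', 'b')
```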
Based on the above observation, we suggest searching the space of templates in one direction only: new predicates can be added to the template and, once added, they cannot be removed. Further, we need to determine which predicate should be added to the template in each step of the algorithm. We can use a systematic search that adds different predicates one by one in cycles: add predicate a, followed by predicate b, followed by predicate c, and again predicate a, etc. Or we can add each predicate with a certain probability. If the probability of adding a predicate to the template is distributed uniformly among all predicates, the former and the latter approaches are identical in

terms of developing the template. Both approaches are identical because the probability of the case when the probabilistic approach would significantly prefer adding one particular predicate (the number of its occurrences would be k times the number of occurrences of the other predicates for any constant k > 0) diminishes exponentially (a so-called Chernoff bound [7]). However, the probabilistic approach is still preferable to the systematic one as it allows us to guide the search algorithm according to custom conditions, which may even evolve over time.

In Figure 2 we show the basic incremental probabilistic approach (IPS) to template generation. The procedure Decide-Consistency does the actual template validation. The stop criterion is met when either Decide-Consistency succeeds in validating the template or the execution time expires. It may sometimes happen that the hypothesis created from the template contains some isolated atoms, i.e., atoms with variables that are not unified with other variables. These atoms are obviously redundant and hence they are removed from the final hypothesis (the Remove-Isolated-Atoms procedure in the algorithm). A drawback of the whole algorithm is that this can negatively affect the performance of the Decide-Consistency function, as the function has to consider unifications between variables in atoms that are eventually identified as redundant.

1: P ← all different predicate symbols in examples
2: T ← empty template
3: repeat
4:   generate predicate p ∈ P with uniformly distributed probability
5:   T ← T ∪ {p}
6:   H ← Decide-Consistency(T)
7: until stop criterion satisfied
8: H ← Remove-Isolated-Atoms(H)

Fig. 2. An incremental probabilistic template generation (IPS)

4 History-Driven Tabu Template Generation

The key drawback of the pure incremental approach described in the previous section is that it generates new atoms without any respect to the previous work it did.
In particular, we noticed that if the added atom contributes to better consistency of the hypothesis, it might be useful to add an atom of the same predicate symbol again. We formalize this idea in another template generation algorithm based on two techniques: (1) For a given template, we record the maximal number B of broken negative examples achieved by some hypothesis generated from the template (a hypothesis breaks a negative example if it does not subsume it while subsuming all positive examples). We then compare the value of B between two subsequent templates; based on whether B increases or decreases between the iterations, we decide about the next predicate to be added to the template. (2) We maintain a tabu list [4] of predicates that cannot be added to the template in the next steps of the algorithm.

Firstly, let us discuss point (1) in detail. After running the formerly described algorithms (either IDS or IPS), we analyzed the maximal numbers of negative examples broken by any hypothesis formed from the last two templates. In particular, if we added predicate p to the last template and the maximal number B of broken negative examples increased in comparison to the number B of the last-but-one template, then we studied what happens with the maximal number of broken negative examples for the next template if we add the same predicate p again. Empirical results showed that if a predicate added to the template increased the number of broken negative examples in one iteration, then it is very likely to increase the number B again if we add the same predicate in the next iteration. In particular, in our experiments the average number of such iterations, where the number of broken negative examples was increased obeying the suggested rule, was 69.3% (the base was computed as the number of all subsequent pairs of templates where the first template of the pair increased the maximal number of broken negative examples). In contrast, we also averaged the number of iterations where the number of broken negative examples was increased by adding a different predicate than the one added last time. The number of such iterations was 26.19%, which is much less than the number of improving iterations in the first case. Thus adding the same predicate seems to be beneficial. Finally, we should note that the remaining 30.7% of the first case, when the rule did not increase the maximal number of broken negative examples, does not mean the heuristic always went wrong. It might also happen that there was no predicate at all that would increase the maximal number of broken negative examples for the given template structure (no matter which one was added).
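The statistic B used above can be sketched as follows, with the subsumption test passed in as a parameter; the function name mirrors the Max-Broken-Neg-Examples procedure of the pseudocode, but the representation and demo data are our own illustration.

```python
def max_broken_negatives(hypotheses, positives, negatives, subsumes):
    """For each candidate hypothesis generated from a template, count the
    negative examples it breaks (it subsumes every positive example but
    not that negative); return the maximum B over all candidates."""
    best = 0
    for h in hypotheses:
        # A hypothesis that misses a positive example breaks nothing.
        if all(subsumes(h, e) for e in positives):
            broken = sum(1 for e in negatives if not subsumes(h, e))
            best = max(best, broken)
    return best

# Toy demo: propositional "subsumption" as plain set inclusion.
subsumes = lambda h, e: set(h) <= set(e)
hyps = [["p"], ["p", "q"]]
positives = [["p", "q", "r"]]
negatives = [["p"], ["q"]]
print(max_broken_negatives(hyps, positives, negatives, subsumes))  # 2
```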
Now let us continue with point (2), which resembles the idea of Tabu Search [4], though there are some differences in handling the tabu list. If we add some predicate that did not cause the maximal number of broken negative examples to increase, then we forbid adding this predicate to the template in the following iterations of the algorithm until some condition (a so-called aspiration criterion) is met and the predicate is allowed to be used again. For this purpose we maintain a tabu list consisting of all predicates that cannot be added to the template in the next iterations. As soon as the tabu list contains all predicates or some predicate increases the maximal number of broken negative examples, the tabu list is emptied and the process continues. In the latter case, we empty the tabu list because if the last predicate increased the maximal number of broken negative examples, then the template changed relatively significantly and all predicates (including those in the tabu list) are now more likely to break some new negative examples than they were before.

If we put these two concepts together, we get a basic history-driven tabu template generation algorithm, which is illustrated in Figure 3. In each iteration of the algorithm, we try to validate the current template (line 6) and if we succeed, we are done. Otherwise we store the maximal number of broken negative examples found during the validation (line 9) and then we proceed with the next steps depending on the maximal number of broken negative examples in the previous iteration. If the number

increased, we add the same predicate to the template as we did the last time (line 11). Otherwise we randomly generate a new predicate that is not in the tabu list and add it to the template (lines 15-17).

1: initialize template T
2: B_last ← 0  {max. number of broken negative examples in the last iteration}
3: P_last ← any predicate  {the predicate appended to the template in the last iteration}
4: Tabu ← ∅  {tabu list}
5: repeat
6:   H_consistent ← Decide-Consistency(T)
7:   if H_consistent exists then return H_consistent
8:   else
9:     B_current ← Max-Broken-Neg-Examples(T)
10:    if B_current > B_last then
11:      T ← T ∪ {P_last}
12:      Tabu ← ∅
13:    else
14:      if all predicates are in Tabu then Tabu ← ∅
15:      P_current ← random predicate not in Tabu
16:      T ← T ∪ {P_current}
17:      Tabu ← Tabu ∪ {P_current}
18:      P_last ← P_current
19:    end if
20:    B_last ← B_current
21:  end if
22: until stop criterion satisfied

Fig. 3. A history-driven template generation

The complexity of the above algorithm strongly depends on the complexity of the Decide-Consistency function. It has been proved that the problem of deciding template consistency is Σ₂ᴾ-complete [5], hence the whole algorithm belongs at least to this complexity class. If we want to determine an upper bound, it is necessary to bound the number of calls of the Decide-Consistency function in the repeat loop. In general, the loop does not have to finish at all. However, we are usually not interested in finding an arbitrary solution of the ILP problem; rather, it makes sense to impose an upper bound on the size of the desired solution. Furthermore, if the bound is polynomially related to the size of the evidence, we get a so-called bounded ILP problem, which is Σ₂ᴾ-complete [5].

4.1 Stochastic Extension of History-Driven Tabu Template Generation

The algorithm in Figure 3 behaves like a modification of the well-known hill-climbing algorithm and is fully deterministic except for the step at line 15. We can further

improve its performance by modifying the probability of selecting a predicate at line 15. We developed a stochastic model as an extension of the basic history-driven algorithm, yielding the stochastic history-driven tabu algorithm. The idea is that a predicate that behaved well in previous iterations is preferred when adding to the template.

First, besides the tabu list we introduce a candidate list, which is the list of all predicates not in the tabu list. Furthermore, each predicate in the candidate list is assigned a probability such that all probabilities in the candidate list sum to one. In each iteration of the algorithm we pick a predicate from the candidate list according to its probability and add it to the template. The candidate list is a dynamic structure: in addition to predicates being added and removed, the probabilities may change as well. There are two situations when the probabilities in the candidate list are recomputed.

The first situation arises when the last predicate added to the template increased the maximal number of broken negative examples. In that case we (1) move all predicates from the tabu list back to the candidate list and set their probabilities to some low value p, and (2) set the probability of the last added predicate in the candidate list to some high value p_high. After adding the predicates from the tabu list back to the candidate list, we should still prefer the predicates originally in the candidate list to those recently moved from the tabu list when selecting the next predicate to be added to the template. The reason is that the predicates originally in the tabu list behaved worse than those in the candidate list (that is why they were tabu). The intended effect can be achieved by modifying the probabilities of the predicates in the candidate list in the following way.
We find a predicate with the minimal probability p_min among all predicates in the candidate list and then we assign the probability p = p_min · p_tabu to all predicates moved from the tabu list. In our implementation we use p_tabu = 0.1, so the probability of selecting a predicate originally from the tabu list is at least ten times smaller than the probability of selecting any predicate originally in the candidate list. Setting the probability of the last successful predicate to p_high means that we prefer appending those predicates that increased the maximal number of broken negative examples in the last iteration (as we have already proposed). In our algorithm we use a fixed value of p_high. We should remark that after every change of the probabilities in the candidate list we normalize them so that they sum to one.

The second situation when the probabilities of the predicates in the candidate list are adjusted is when the candidate list becomes empty (meaning that all predicates are in the tabu list). In this case we put all predicates back to the candidate list and distribute the probability among them uniformly.

In Figure 4 we give pseudocode for the stochastic history-driven tabu template generation. This algorithm is very similar to the algorithm in Figure 3 except that it extends it with the stochastic steps. We use a few new procedures in the algorithm. Procedure Distribute-Probability-Uniformly returns the set of all predicates with identical probabilities. Procedure Predicate-With-Min-Prob returns the predicate with the minimal probability. Procedure Update-Probability takes the list provided in the first argument and updates the probability of the predicates in the second argument to the value in the third argument. Finally, procedure Normalize-Probabilities adjusts the probabilities so that they sum to one. As the stop criterion we can use an expiration timeout.
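One selection step with the candidate-list bookkeeping described above can be sketched as follows. Here p_tabu = 0.1 matches the text, while p_high = 0.9 is our illustrative choice (the source value is not preserved); all names in the sketch are our own.

```python
import random

def shdts_step(state, predicates, increased, rng):
    """One probability-update-and-pick step of the stochastic
    history-driven tabu search. state holds 'probs' (the candidate list
    as predicate -> probability), 'tabu' (a set) and 'last' (last pick)."""
    P_HIGH, P_TABU = 0.9, 0.1  # p_tabu = 0.1 as in the text; p_high is our guess
    probs, tabu = state["probs"], state["tabu"]
    if increased:  # the last predicate broke more negatives: reward it
        p_min = min(probs.values()) if probs else 1.0
        for p in tabu:
            probs[p] = p_min * P_TABU  # readmit tabu predicates, weakly
        probs[state["last"]] = P_HIGH  # strongly prefer the last predicate
        tabu.clear()
    elif not probs:  # everything became tabu: restart uniformly
        for p in predicates:
            probs[p] = 1.0 / len(predicates)
        tabu.clear()
    total = sum(probs.values())
    for p in probs:  # normalize so the probabilities sum to one
        probs[p] /= total
    pick = rng.choices(list(probs), weights=probs.values())[0]
    tabu.add(pick)   # the picked predicate becomes tabu for now
    del probs[pick]  # and leaves the candidate list
    state["last"] = pick
    return pick
```

The outer loop (omitted) would append the returned predicate to the template, call the consistency test, and feed the resulting "did B increase?" flag back into the next call.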

1: initialize template T
2: B_last ← 0  {maximal number of broken negative examples in the last iteration}
3: P_last ← any predicate  {the predicate appended to the template in the last iteration}
4: Tabu ← ∅  {tabu list}
5: Cand ← Distribute-Probability-Uniformly  {initialization of the candidate list}
6: repeat
7:   H_consistent ← Decide-Consistency(T)
8:   if H_consistent exists then return H_consistent
9:   else
10:    B_current ← Max-Broken-Neg-Examples(T)
11:    if B_current > B_last then
12:      p_min ← Predicate-With-Min-Prob(Cand)
13:      Update-Probability(Cand, {P_last}, p_high)
14:      Update-Probability(Cand, Tabu, p_min · p_tabu)
15:      Normalize-Probabilities(Cand)
16:      Tabu ← ∅
17:    else
18:      if all predicates are in Tabu then
19:        Tabu ← ∅
20:        Cand ← Distribute-Probability-Uniformly
21:      end if
22:    end if
23:    P_current ← predicate picked from Cand with its corresponding probability
24:    T ← T ∪ {P_current}
25:    Tabu ← Tabu ∪ {P_current}
26:    Cand ← Cand \ {P_current}
27:    Normalize-Probabilities(Cand)
28:    B_last ← B_current
29:    P_last ← P_current
30:  end if
31: until stop criterion satisfied

Fig. 4. A stochastic history-driven tabu template generation

5 Experimental Results

In order to compare the effectiveness of the proposed methods, we implemented and benchmarked the algorithms in SICStus Prolog on a 2.0 GHz Intel Xeon processor with 12 GB RAM under Gentoo Linux. The first set of experiments was executed on ten instances of identifying common structures in random graphs generated according to the Barabási-Albert model [1]. We used graphs consisting of 20 nodes that were constructed by incrementally adding new nodes and connecting them with three arcs

to existing nodes in the graph. The hidden structure that we were looking for (the consistent hypothesis) consisted of five nodes. Both the positive and the negative evidence contained ten instances of the graphs.

In Table 1 we show a comprehensive comparison of all methods described in the paper: the naïve iterative deepening search (IDS) from [2], the incremental probabilistic search (IPS) and finally the stochastic history-driven tabu search (SHDTS). For each method we show the overall running time and the length of the found template. The runtime limit was set to 600 seconds. Since IPS and SHDTS are randomized algorithms, the results were obtained by averaging values over five runs. In case any of these runs exceeded the maximal time limit, the final values were averaged only over the successful runs and the number of unfinished runs is shown in the corresponding column.

Table 1. A comparison of the naïve iterative deepening search (IDS), the incremental probabilistic search (IPS) and the stochastic history-driven tabu search (SHDTS); for each method the columns give time [s], hypothesis length and, for the randomized methods, the number of unfinished runs.

First let us compare the results of IDS and IPS. If we look at the hypothesis lengths¹, we can see that they are almost identical for both searches; thus IPS almost always finds the optimal solution (we know that the solution given by IDS is optimal). Further let us compare the runtimes. Except for one dataset, IPS clearly outperforms IDS. On the other hand, we have to realize that these results are not guaranteed, as IPS is a randomized algorithm. In fact, in seven of ten cases the algorithm did not finish in one of its five runs. The reason why the search algorithm ran so long is that the random generator repeatedly generated wrong predicates until the template was so long that its validation exceeded the time limit.
Now let us focus on the results of SHDTS. We can see that the length of the final hypothesis is again almost identical to the optimal solution, and thus the templates generated by SHDTS are appropriate. The actual significance of this method becomes evident when we analyze the runtimes. The method not only beats IPS in almost every case¹

¹ When comparing the lengths of the hypotheses, we only compare the number of all atoms in the hypothesis; we do not consider the discrepancies between the actual predicate symbols of the atoms and their arities in the hypotheses.

but it also finished for all instances. The worse result in the ninth case is due to the randomized nature of the algorithms; on the other hand, IPS did not finish once on this instance. Hence the experiments showed that the stochastic history-driven tabu search noticeably outperforms the former two methods.

To further support the claim that the proposed algorithms contribute to better performance of the whole ILP system, we evaluated them on another set of input instances generated according to a different model than in the first case. We were again identifying common structures in random graphs; however, new nodes were connected to the existing ones according to the Erdős-Rényi model [3]. In particular, each input dataset consists of ten positive and ten negative examples (graphs), where the graphs contain 20 nodes with an arc density of 0.2. In Table 2 we present results for finding an implanted hidden structure consisting of 5, 6 and 7 nodes with an arc density of 0.4. The first column gives the number of nodes of the hidden common structure; the other columns give the runtimes and hypothesis lengths of each algorithm. The overall runtime limit was set to 1200 seconds. From Table 2 it clearly follows that SHDTS works best for most of the datasets.

Table 2. A comparison of the three methods (IDS, IPS, SHDTS) for identifying common structures (subgraphs) of various numbers of nodes; for each method the columns give time [s] and hypothesis length.

6 Conclusions

In this paper we addressed the problem of generating templates for inductive logic programming. We presented a novel approach that uses existing CSP algorithms as its subroutines; specifically, we use subroutines for the θ-subsumption check [6] and for deciding template consistency [2]. We started with a simple algorithm from [2] performing iterative deepening search, which is shown to be inefficient.
The main result of the paper is a history-driven tabu template generating algorithm which is guided by evaluating how the negative evidence is covered by the generated template. Furthermore, we suggested a stochastic extension of this algorithm yielding a stochastic history-driven

tabu search. The efficiency of the last algorithm is demonstrated experimentally on the problem of identifying common structures in graphs randomly generated according to two different models. The stochastic history-driven search does better on almost all input instances and significantly decreases the template generation time in comparison to the former algorithms. An interesting feature of this algorithm is that it uses some form of learning, so the algorithm itself learns during the learning process. Future work should mainly deal with testing the performance of the algorithms on instances of domains other than random graphs, for example bioinformatics. Real-life problems often have a particular structure, and thus it is challenging to examine our methods in these fields.

Acknowledgments. The authors would like to thank Filip Železný and Ondřej Kuželka for useful discussions on ILP techniques and for providing a generator of random problems for experiments.

References

1. Barabási, A.-L., Albert, R.: Emergence of scaling in random networks. Science 286 (1999)
2. Barták, R.: Constraint Models for Reasoning on Unification in Inductive Logic Programming. In: Dicheva, D., Dochev, D. (eds.) AIMSA. LNCS (LNAI), vol. 6304. Springer, Heidelberg (2010)
3. Erdős, P., Rényi, A.: On Random Graphs I. Publicationes Mathematicae Debrecen 6 (1959)
4. Glover, F., Laguna, M.: Tabu Search. Kluwer, Norwell (1997)
5. Gottlob, G., Leone, N., Scarcello, F.: On the complexity of some inductive logic programming problems. New Generation Computing 17 (1999)
6. Maloberti, J., Sebag, M.: Fast Theta-Subsumption with Constraint Satisfaction Algorithms. Machine Learning 55 (2004)
7. Mitzenmacher, M., Upfal, E.: Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press (2005)
8. Plotkin, G.: A note on inductive generalization.
In: Meltzer, B., Michie, D. (eds.) Machine Intelligence, vol. 5, pp Edinburgh University Press (1970)
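As a rough illustration of the approach summarized in the conclusion, the history-driven tabu generation loop can be sketched as follows. This is a minimal sketch, not the paper's actual implementation: the template representation (a multiset of predicate symbols) and the `evaluate` scoring callback are assumptions introduced for illustration.

```python
def tabu_template_search(predicates, evaluate, max_iters=100, tabu_size=10):
    """History-driven tabu search over templates.

    A template is modelled here as a list (multiset) of predicate symbols.
    `evaluate(template)` is a hypothetical scoring callback: it returns 0
    when a consistent hypothesis exists for the template, and otherwise a
    penalty reflecting how badly the negative evidence is covered
    (lower is better).
    """
    template = []                          # start from the empty template
    tabu = []                              # history of generated templates
    best, best_score = list(template), evaluate(template)

    for _ in range(max_iters):
        # Candidate moves: extend the current template by one predicate.
        candidates = [template + [p] for p in predicates]
        # History/tabu filter: never regenerate a remembered template.
        candidates = [c for c in candidates if sorted(c) not in tabu]
        if not candidates:
            break
        # Greedy step guided by the negative-evidence score; the stochastic
        # variant would instead sample a candidate biased toward low scores.
        template = min(candidates, key=evaluate)
        tabu.append(sorted(template))
        tabu = tabu[-tabu_size:]           # bounded history memory
        score = evaluate(template)
        if score < best_score:
            best, best_score = list(template), score
        if best_score == 0:                # consistent hypothesis exists
            break
    return best, best_score
```

With a toy scoring function such as `lambda t: len({'a', 'b'} - set(t))`, the search incrementally extends the empty template until it contains both required predicates, never revisiting a remembered template along the way.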


More information

6.001 Notes: Section 8.1

6.001 Notes: Section 8.1 6.001 Notes: Section 8.1 Slide 8.1.1 In this lecture we are going to introduce a new data type, specifically to deal with symbols. This may sound a bit odd, but if you step back, you may realize that everything

More information