Towards Higher-Level Supercompilation


Ilya Klyuchnikov and Sergei Romanenko

Keldysh Institute of Applied Mathematics, Russian Academy of Sciences
4 Miusskaya sq., Moscow, Russia
ilya.klyuchnikov@gmail.com, romansa@keldysh.ru

Abstract. We show that the power of supercompilation can be increased by constructing a hierarchy of supercompilers, in which a lower-level supercompiler is used by a higher-level one for proving improvement lemmas. The lemmas thus obtained are used to transform expressions labeling nodes in process trees, in order to avoid premature generalizations. This kind of supercompilation, based on a combination of several metalevels, is called higher-level supercompilation (to distinguish it from higher-order supercompilation, which is concerned with transforming higher-order functions). Higher-level supercompilation may be considered an application of the more general principle of metasystem transition.

1 Introduction

The concept of metasystem transition was introduced by V. F. Turchin in his 1977 book The Phenomenon of Science [26]. In the context of computer science, Turchin gives the following (somewhat simplified) formulation of the main idea of metasystem transition [28]:

    Consider a system S of any kind. Suppose that there is a way to make some number of copies of it, possibly with variations. Suppose that these systems are united into a new system S' which has the systems of the S type as its subsystems, and includes also an additional mechanism which somehow examines, controls, modifies and reproduces the S-subsystems. Then we call S' a metasystem with respect to S, and the creation of S' a metasystem transition. As a result of consecutive metasystem transitions a multilevel hierarchy of control arises, which exhibits complicated forms of behavior.

Futamura projections [6] may serve as a good example of metasystem transition. Let p be a program, i an interpreter, and s a program specializer. Then s(i, p) may be regarded as a compiled program (the "first projection"), s(s, i) as a compiler (the "second projection"), and s(s, s) as a compiler generator (the "third projection").

* Supported by Russian Foundation for Basic Research projects No. ...-a and No. ...-a.

(The second Futamura projection is also referred to by Ershov as Turchin's theorem of double driving [5].) In the second projection, the evaluation of s(s, i) involves two copies of the specializer s, the second copy examining and controlling the first one. In the third projection there are three copies of s, the third one controlling the second one, which in turn controls the first one. Moreover, as shown by Glück [7], one may even consider a fourth Futamura projection, corresponding to the next metasystem transition!

Futamura projections, however, are not the only possible way of exploiting the idea of metasystem transition by combining program transformers. In the present paper we consider another technique of constructing a multilevel hierarchy of control, using supercompilers as its building blocks.

A supercompiler is a program transformer based on supercompilation [27], a program transformation technique closely related to the fold/unfold method of Burstall and Darlington [4]. Unfortunately, pure supercompilation is known not to be very good at transforming non-linear expressions (i.e. expressions containing repeated variables) and functions with accumulating parameters. We argue, however, that the power of supercompilation can be increased by combining several copies of a classic supercompiler and making them control each other. This kind of supercompilation, based on a combination of several metalevels, will be called higher-level supercompilation (to distinguish it from higher-order supercompilation, which is concerned with transforming higher-order functions).

The technique suggested in the paper is (conceptually) simple and modular, and is based on the use of improvement lemmas [20,21], which are automatically generated by lower-level supercompilers for a higher-level supercompiler.

2 Higher-Level Supercompilation

2.1 What is a zero-level supercompiler?

The descriptions of supercompilation given in the literature differ in some secondary details that are irrelevant to the main idea of higher-level supercompilation. We follow the terminology and notation used by Sørensen and Glück [24,23,25]. All program transformation examples considered in the paper have been carried out by HOSC [13,14], a higher-order supercompiler whose general structure is shown in Fig. 1.

2.2 Accumulating parameter: zero-level supercompilation

Let us try to apply the supercompiler HOSC [13] to the program shown in Fig. 2. At the beginning, a few steps of driving produce the process tree shown in Fig. 3. At this point the whistle signals that there is a node b embedding a previously encountered node a, although b is not an instance of a (here ⊴c denotes homeomorphic embedding by coupling):

    case double x Z of {Z -> True; S y -> odd y;}
      ⊴c
    case double n (S (S Z)) of {Z -> True; S m -> odd m;}

def scp(tree)
    b = unprocessed_leaf(tree)
    if b == null return makeprogram(tree)
    if trivial(b) return scp(drive(b, tree))
    a = ancestor(tree, b, renaming)
    if a != null return scp(fold(tree, a, b))
    a = ancestor(tree, b, instance)
    if a != null return scp(abstract(tree, b, a))
    a = ancestor(tree, b, whistle)
    if a == null return scp(drive(b, tree))
    return scp(abstract(tree, a, b))

unprocessed_leaf(tree) returns an unprocessed leaf b of the process tree.
trivial(b) checks whether the leaf b is trivial. (A leaf is trivial if driving it does not result in unfolding a function call or applying a substitution to a variable.)
drive(b, tree) performs a driving step for a node b and returns the modified tree.
ancestor(tree, b, renaming) returns a node a such that b is a renaming of a.
ancestor(tree, b, instance) returns a node a such that b is an instance of a.
ancestor(tree, b, whistle) returns a node a such that a is homeomorphically embedded in b by coupling.
fold(tree, a, b) makes a cycle in the process tree from b to a.
abstract(tree, a, b) generalizes a to b.

Fig. 1. Zero-level supercompilation algorithm

data Bool = True | False;
data Nat = Z | S Nat;

even (double x Z) where
  even   = λx -> case x of { Z -> True;  S x1 -> odd x1; };
  odd    = λx -> case x of { Z -> False; S x1 -> even x1; };
  double = λx y -> case x of { Z -> y; S x1 -> double x1 (S (S y)); };

Fig. 2. even (double x Z): source program

Hence, HOSC has to throw away the whole subtree under a and generalize a, replacing it with a new node a' such that both a and b are instances of a'. Supercompilation then continues and produces the residual program shown in Fig. 4.

This result is correct, but it is not especially exciting! In the goal expression even (double x Z), the inner function double multiplies its argument by 2, and the outer function even checks whether this number is even. Hence, the whole expression never returns False. But this cannot be immediately seen from the residual program, whose text still contains False.
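For readers who want to experiment with this example outside of HOSC, the following is a small Haskell rendition of the Fig. 2 program (an assumption on our part: HOSC's input language is not Haskell, but the translation is direct), together with a quick empirical check that the goal expression never returns False for small inputs.

-- A Haskell rendition of the program in Fig. 2 (HOSC's source language is not
-- Haskell, so this is only a transcription for experimentation).
data Nat = Z | S Nat

even' :: Nat -> Bool
even' Z      = True
even' (S x1) = odd' x1

odd' :: Nat -> Bool
odd' Z      = False
odd' (S x1) = even' x1

double :: Nat -> Nat -> Nat
double Z      y = y
double (S x1) y = double x1 (S (S y))

-- Peano numeral for an Int, purely for testing.
nat :: Int -> Nat
nat 0 = Z
nat k = S (nat (k - 1))

-- The goal expression of Fig. 2: even (double x Z).
goal :: Nat -> Bool
goal x = even' (double x Z)

main :: IO ()
main = print [ goal (nat k) | k <- [0 .. 10] ]   -- prints only True: False never shows up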

even (double x Z)
  case (double x Z) of {Z -> True; S y -> odd y;}
    case (case x of {Z -> Z; S z -> double z (S (S Z));}) of {Z -> True; S y -> odd y;}
      [x = Z]    True
      [x = S n]  case (double n (S (S Z))) of {Z -> True; S m -> odd m;}

Fig. 3. even (double x Z): driving (each node is drawn under its parent; branch labels are shown in brackets)

letrec
  f = λw2 -> λp2 ->
        case w2 of {
          Z   -> letrec
                   g = λr2 -> case r2 of {
                                S r -> case r of { Z -> False; S z2 -> g z2; };
                                Z   -> True; }
                 in g p2;
          S z -> f z (S (S p2)); }
in f x Z

Fig. 4. The result of zero-level supercompilation

2.3 Accumulating parameter: applying a lemma

As was pointed out by Burstall and Darlington [4], the power of a program transformation system can be increased by enabling it to use laws, or lemmas (such as the associativity and commutativity of addition and multiplication). In terms of supercompilation, this amounts to replacing a node in the process tree with an equivalent one.

Let us return to the tree in Fig. 3. The result of supercompilation was not good enough because HOSC had to generalize a node, and the generalization resulted in a loss of precision. But we can avoid the generalization by making use of the following (mysterious) equivalence:

    case double n (S (S Z)) of {Z -> True; S m -> odd m;}  ≅  even (double n Z)

Since the node even (double n Z) is a renaming of an upper one, the supercompiler can now form a cycle, producing the program shown in Fig. 5. Note that this time there are no occurrences of False: the supercompiler has succeeded in proving that False can never be returned by the program.
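Before being used, the lemma can at least be sanity-checked by direct evaluation. The following Haskell sketch (again a transcription, not HOSC syntax) compares both sides of the equivalence on the first few numerals; this is evidence rather than a proof, which is exactly why a supercompiler-based proof is wanted in Section 2.4 below.

-- Sanity-checking the equivalence
--   case double n (S (S Z)) of {Z -> True; S m -> odd m;}  ≅  even (double n Z)
-- on small inputs (testing only; the paper proves it by supercompilation).
data Nat = Z | S Nat

even', odd' :: Nat -> Bool
even' Z      = True
even' (S x1) = odd' x1
odd'  Z      = False
odd'  (S x1) = even' x1

double :: Nat -> Nat -> Nat
double Z      y = y
double (S x1) y = double x1 (S (S y))

lhs, rhs :: Nat -> Bool
lhs n = case double n (S (S Z)) of { Z -> True; S m -> odd' m }
rhs n = even' (double n Z)

nat :: Int -> Nat
nat 0 = Z
nat k = S (nat (k - 1))

main :: IO ()
main = print [ lhs (nat k) == rhs (nat k) | k <- [0 .. 10] ]   -- prints only True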

letrec f = λt -> case t of { Z -> True; S s -> f s; }
in f x

Fig. 5. The result of applying a lemma

def scp(tree, n)
    b = unprocessed_leaf(tree)
    if b == null return makeprogram(tree)
    if trivial(b) return scp(drive(b, tree), n)
    a = ancestor(tree, b, renaming)
    if a != null return scp(fold(tree, a, b), n)
    a = ancestor(tree, b, instance)
    if a != null return scp(abstract(tree, b, a), n)
    a = ancestor(tree, b, whistle)
    if a == null return scp(drive(b, tree), n)
    if n > 0
        e = findeqexpr(b.expr, n)
        if e != null return scp(replace(tree, b, e), n)
    return scp(abstract(tree, a, b), n)

def findeqexpr(e1, n)
    e = scp(e1, n-1)
    cands = candidates(e1, n)
    for cand <- cands
        if equivalent(scp(cand, n-1), e) return cand
    return null

def candidates(e1, n)
    ...

Fig. 6. "Multi-level" supercompilation algorithm

Hence, there are good reasons to believe that lemmas are a good thing, but two questions arise: (1) how to prove lemmas, and (2) how to find useful lemmas.

2.4 Proving lemmas by supercompilation

In [15] we have shown that the supercompiler HOSC [13] may be used for proving interesting equivalences of higher-order expressions. The technique is quite straightforward. Let e1 and e2 be expressions appearing in a program p, and let e1' and e2' be the results of supercompiling e1 and e2 with respect to p. If e1' and e2' are the same (modulo alpha-renaming), then the original expressions e1 and e2 are equivalent (provided that the supercompiler strictly preserves the equivalence of programs). Since e1 and e2 may contain free variables, the use of a higher-order supercompiler enables us to prove equalities with universal quantification over functions and over infinite data types.
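The comparison "the same modulo alpha-renaming" is the only nontrivial part of this check, and even it is simple. The Haskell sketch below illustrates the idea on a toy term representation of our own (not HOSC's): with de Bruijn indices for bound variables, alpha-equivalence becomes plain structural equality.

-- Alpha-equivalence of residual terms as structural equality, illustrated on a
-- toy lambda-calculus with de Bruijn indices (not HOSC's actual representation).
data Term
  = Var Int          -- de Bruijn index of a bound variable
  | Free String      -- free variable, kept by name
  | Lam Term         -- lambda abstraction
  | App Term Term    -- application
  deriving (Eq, Show)

-- Two residual terms are "the same modulo alpha-renaming" exactly when their
-- de Bruijn representations are equal.
alphaEq :: Term -> Term -> Bool
alphaEq = (==)

main :: IO ()
main = do
  print (alphaEq (Lam (Var 0)) (Lam (Var 0)))   -- \x -> x vs \y -> y: True
  print (alphaEq (Lam (App (Free "f") (Var 0)))
                 (Lam (App (Free "f") (App (Free "f") (Var 0)))))   -- False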

def scp(tree, n)
    b = unprocessed_leaf(tree)
    if b == null return [makeprogram(tree)]
    if trivial(b) return scp(drive(b, tree), n)
    a = ancestor(tree, b, renaming)
    if a != null return scp(fold(tree, a, b), n)
    a = ancestor(tree, b, instance)
    if a != null return scp(abstract(tree, b, a), n)
    a = ancestor(tree, b, whistle)
    if a == null return scp(drive(b, tree), n)
    progs = scp(abstract(tree, a, b), n)
    if n > 0
        for e <- findeqexpr(b.expr, n)
            progs ++= scp(replace(tree, b, e), n)
    return progs

def findeqexpr(e1, n)
    es = scp(e1, n-1)
    cands = candidates(e1, n)
    exps = []
    for cand <- cands
        if not_disjoint(scp(cand, n-1), es)
            exps = exps + [cand]
    return exps

def candidates(e1, n)
    ...

Fig. 7. "Multi-level" supercompilation algorithm: multiple residual programs

Thus the reasoning about operational equivalence (≅) of programs can be reduced to a trivial check of the syntactic equality of supercompiled programs. Since all residual programs produced by HOSC are expressions (possibly containing letrec-subexpressions), checking the equality of residual programs boils down to a syntactic comparison of expressions.

Generally speaking, the idea of proving equivalence by normalization is a well-known one, being a standard technique in such fields as computer algebra. The idea of using supercompilation for normalization is due to Lisitsa and Webster [17], who successfully applied supercompilation to proving the equivalence of programs written in a first-order functional language, on condition that the programs deal with finite input data and are guaranteed to terminate. Later it was found [15,13] that these restrictions can be lifted in cases where the supercompiler deals with programs in a lazy functional language and preserves the termination properties of programs.

2.5 Stacking supercompilers, jumping to higher-level supercompilation

As we have seen, the power of supercompilation can be increased by using lemmas (i.e. by replacing some expressions with equivalent ones). On the other hand, supercompilers can be used for checking the equality of expressions. Hence, recalling the principle of metasystem transition [26,28], we come to the following idea: let us construct a tower of supercompilers, the higher-level ones running the lower-level ones in order to obtain useful lemmas.

This can be done by adding to the function scp(tree) shown in Fig. 1 an additional parameter n, the level at which the supercompiler is invoked. The modified supercompilation algorithm is shown in Fig. 6. Note that for n = 0 the algorithm degenerates into classic supercompilation. But when n > 0 the supercompiler calls the function findeqexpr(b.expr, n), passing to it the expression in the node b and the current level. This function tries to produce an expression equivalent to b.expr by generating a set of candidate expressions and selecting one of them that is equivalent to b.expr. The check for equivalence is performed by invoking the same supercompiler at the lower level n-1.

2.6 Generating sets of residual programs

The algorithm in Fig. 6 assumes that there is a single result of supercompilation. However, there are certain points in the process of supercompilation where the supercompiler has an opportunity to choose among several options, so that, given a source program, several (equivalent) residual programs may be generated. This can be used to increase the power of the supercompilation-based equality check.

Suppose we have to check two expressions e1 and e2 for equivalence. Then, instead of generating and comparing just two residual expressions e1' and e2', we can supercompile e1 and e2 to produce two sets of residual expressions and try to find a residual expression common to both sets (modulo alpha-renaming). To implement this idea we need a version of the supercompilation algorithm that produces a set of residual programs (see Fig. 7). This version, instead of choosing an arbitrary acceptable expression from the set of candidate expressions, returns the set of all acceptable expressions. Note that for any n the set of residual programs produced by the algorithm includes the result returned by the zero-level supercompiler.

2.7 A few open questions

Fig. 6 and Fig. 7 present the general idea of higher-level supercompilation, but a few open questions remain.

Correctness. At first glance, replacing an expression with an equivalent one looks like a natural and safe-by-construction operation. And yet, as shown by Sands [20,21], the unrestricted use of equivalences may lead to incorrect transformations that do not preserve the meaning of programs.

How to generate candidate expressions. Supercompilation can be used for checking the equivalence of expressions, but it does not help us in finding candidate expressions that are worth being checked for equivalence.
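The level-indexed recursion at the heart of Fig. 6 fits in a few lines. The Haskell sketch below expresses findeqexpr with the supercompiler, the candidate generator, and the residual comparison passed in as parameters; all of the names are ours (hypothetical), not HOSC's actual interface.

-- A sketch of findeqexpr from Fig. 6: search the candidates for an expression
-- whose level-(n-1) residual coincides with that of e. The supercompiler, the
-- candidate generator and the comparison are passed in as parameters.
import Data.List (find)

findEqExpr
  :: (Int -> expr -> prog)   -- scpAt lvl e: supercompile e at level lvl
  -> (expr -> [expr])        -- candidates: enumerate candidate replacements for e
  -> (prog -> prog -> Bool)  -- sameResidual: equality of residuals modulo renaming
  -> Int                     -- n: the current level, assumed to be > 0
  -> expr                    -- the expression labelling the dangerous node
  -> Maybe expr              -- an equivalent replacement, if one is found
findEqExpr scpAt candidates sameResidual n e =
  let residual = scpAt (n - 1) e
  in find (\c -> sameResidual (scpAt (n - 1) c) residual) (candidates e)

Instantiating scpAt with a zero-level supercompiler and candidates with the size-bounded enumeration described in Section 4 gives, roughly, the search performed by the prototype described there.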

3 Correctness = equivalence + improvement

3.1 Notation

It is clear that the meaning of an expression may depend on the context. Thus, to avoid making our notation unnecessarily cumbersome, when speaking about the equivalence of expressions we will assume that the expressions are evaluated and supercompiled in the context of the same program. We use SC[[e]] to denote the expression produced by supercompiling the expression e with a classic, zero-level supercompiler, and e ≡ e' to denote the fact that e is the same as e' (modulo alpha-renaming).

3.2 Operational equivalence

Definition 1 (Operational approximation). An expression e operationally approximates e', written e ⊑ e', if for all contexts C such that C[e] and C[e'] are closed, whenever the evaluation of C[e] terminates, the evaluation of C[e'] terminates as well.

Definition 2 (Operational equivalence). An expression e is operationally equivalent to e', written e ≅ e', if e ⊑ e' and e' ⊑ e.

In the following we assume the supercompilers to preserve operational equivalence, i.e. that e ≅ SC[[e]] for every expression e (which is true of the supercompiler HOSC [13]).

3.3 Improvement

The replacement of an expression e with an equivalent expression e', followed by a fold, may result in an incorrect residual program (some examples can be found in [21]).

Definition 3 (Improvement). An expression e is improved by e', written e ⊵ e', if for all contexts C such that C[e] and C[e'] are closed, whenever the computation of C[e] terminates using n function calls, the computation of C[e'] also terminates and uses no more than n function calls.

As has been shown by Sands [21], the replacement of an expression e1 with an expression e2 does not violate the correctness of the transformation if the following two conditions are met: e1 ≅ e2 and e1 ⊵ e2.

Definition 4 (Improvement lemma). A pair (e1, e2) is an improvement lemma if e1 ≅ e2 and e1 ⊵ e2.
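Definition 3 is phrased in terms of counting function calls, which is easy to make concrete. The following Haskell sketch (ours, for illustration) threads an explicit call counter through the even/odd/or program of Fig. 10, which is used in Section 3.4 below; running it reproduces the call counts quoted there.

-- Counting function calls ("unfoldings") explicitly, to illustrate Definition 3.
-- Each use of a named function (or, even, odd) costs one tick; laziness of the
-- second argument of `or` is modelled by passing a thunk.
data Nat = Z | S Nat

type Counted a = (a, Int)   -- (result, number of function calls)

tick :: Counted a -> Counted a
tick (x, n) = (x, n + 1)

evenC, oddC :: Nat -> Counted Bool
evenC x = tick (case x of { Z -> (True, 0);  S x1 -> oddC x1 })
oddC  x = tick (case x of { Z -> (False, 0); S x1 -> evenC x1 })

orC :: Counted Bool -> (() -> Counted Bool) -> Counted Bool
orC (b, m) y = tick (case b of
                       True  -> (True, m)
                       False -> let (b', k) = y () in (b', m + k))

-- The two expressions compared in Section 3.4:
lhs, rhs :: Nat -> Counted Bool
lhs n = orC (evenC n) (\_ -> oddC n)        -- or (even n) (odd n)
rhs n = case evenC n of                      -- case (even n) of
          (True,  m) -> (True, m)            --   True  -> True
          (False, m) -> let (b, k) = oddC (S (S n)) in (b, m + k)   --   False -> odd (S (S n))

main :: IO ()
main = do
  print (lhs Z, rhs Z)          -- ((True,2),(True,1))
  print (lhs (S Z), rhs (S Z))  -- ((True,5),(True,6))

Running main prints ((True,2),(True,1)) and ((True,5),(True,6)), matching the counts discussed in Section 3.4: neither expression improves the other.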

    m ≥ n        e_i ⪰ e'_i  (for all i)
    -----------------------------------------------
    *^m φ(e_1, ..., e_k)  ⪰  *^n φ(e'_1, ..., e'_k)

    SC[[e1]] ≡ SC[[e2]]        SC[[e1]] ⪰ SC[[e2]]
    -----------------------------------------------
                      e1 ⊵ e2

Fig. 8. Estimation of improvement based on annotated supercompiled expressions

3.4 Checking the improvement relation by supercompilation

Let e1 and e2 be expressions whose equivalence has been proven by supercompilation. Does it follow that one of the expressions is an improvement over the other? Not at all! Supercompiling the following two expressions with respect to the program shown in Fig. 10 proves them to be operationally equivalent:

    or (even n) (odd n)  ≅  case (even n) of {True -> True; False -> odd (S (S n));}

However, neither is an improvement over the other. Indeed, if n = Z, the evaluation of the two expressions involves 2 and 1 function calls, respectively; but if n = S Z, the evaluation involves 5 and 6 function calls. Therefore, this lemma is unsafe to use in program transformation.

Fortunately, the check that e1 ⊵ e2 holds for two expressions e1 and e2 such that e1 ≅ e2 can also be performed by supercompilation! And this can be done almost for free, in the following way.

In order to check e1 and e2 for equivalence, we have to supercompile them to e1' and e2'. The check for equivalence succeeds if e1' and e2' are the same (modulo alpha-renaming). Hence e1' and e2' by themselves contain insufficient information to draw any conclusions about e2 being an improvement over e1. However, the process trees produced by supercompiling e1 and e2 contain more information than the residual expressions. Namely, let us examine the process trees and mark with a star (*) the edges corresponding to an unfolding (a function call). For example, supercompiling the expressions considered above produces the process trees shown in Fig. 12 and Fig. 14 (the subtrees for odd x are omitted for brevity).

Now the starred edges provide information that can be used for checking that one expression improves another, but this information is not accessible from outside. In the case of HOSC [13], however, it can be made visible by modifying the algorithm that converts process trees into residual expressions. The modified algorithm converts starred edges into annotations in residual expressions: when traversing a starred edge, the residual expression produced by traversing the (single) child node is annotated with a star (*). In this way the information about unfolds is recorded in residual expressions, so that a lower-level supercompiler can still be used as a black box. For example, Fig. 13 and Fig. 16 show the annotated programs produced from the process trees in Fig. 12 and Fig. 14.
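Concretely, comparing two annotated residuals amounts to checking that they have the same shape and that, position by position, stars only disappear when going from the first to the second; this is the relation ⪰ defined formally below and in Fig. 8 above. A minimal Haskell sketch over a toy annotated-term type (ours, for illustration only):

-- A toy type of annotated expressions: a "functor" (constructor name) applied
-- to subexpressions, prefixed by some number of stars (recorded unfoldings).
-- (Illustration only; HOSC's annotated residuals are ordinary expressions.)
data AExpr = Node Int String [AExpr]   -- Node stars functor arguments

-- e `coversStars` e' holds when e and e' differ only in annotations and e' can
-- be obtained from e by erasing some stars (the relation written ⪰ in the text).
coversStars :: AExpr -> AExpr -> Bool
coversStars (Node m f xs) (Node n g ys) =
  m >= n && f == g && length xs == length ys && and (zipWith coversStars xs ys)

main :: IO ()
main = do
  let e1 = Node 2 "letrec" [Node 1 "lam" []]   -- two stars, one star inside
      e2 = Node 1 "letrec" [Node 1 "lam" []]   -- a star erased at the top
      e3 = Node 0 "letrec" [Node 2 "lam" []]   -- a star added inside
  print (coversStars e1 e2)   -- True:  stars were only erased
  print (coversStars e1 e3)   -- False: a star was added in a subexpression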

Now, let ⪰ denote a binary relation on annotated expressions such that e ⪰ e' iff (1) e and e' differ only in their annotations, and (2) e' can be obtained from e by erasing some of the stars in e. See Fig. 8 for a more formal definition, where *^n φ(...) denotes a functor φ prefixed with n stars. (It is curious to note that ⪰ can be considered a special case of the homeomorphic embedding relation.)

Theorem 1. Let e1' = SC[[e1]] and e2' = SC[[e2]]. If e1' ≡ e2' and e1' ⪰ e2', then e1 ⊵ e2.

Proof. Since e1' ⪰ e2', e1' is the same as e2' modulo annotations and a renaming of bound variables. Therefore, if we put the two expressions into the same context C and evaluate C[e1'] and C[e2'] (disregarding the annotations), we obtain two sequences of reduction steps that differ only in the number of stars encountered during the computation. Since e1' ⪰ e2', after any number of reduction steps the number of stars encountered in the evaluation of C[e2'] cannot be greater than the number of stars encountered in the evaluation of C[e1'], and the stars correspond to unfoldings in the original expressions e1 and e2. So, by evaluating C[e1'] and C[e2'] and counting stars, we can count the number of unfolds in the evaluation of C[e1] and C[e2]. Hence, the number of unfolds in the evaluation of C[e2] is no greater than in the evaluation of C[e1]. Therefore e1 ⊵ e2.

Thus, by examining annotated supercompiled expressions, we can check the improvement relation for the original expressions. For example, consider the annotated supercompiled expressions for or (even n) (odd n) and case (even n) of {True -> True; False -> odd (S (S n));} shown in Fig. 13 and Fig. 16. Since the supercompiled expressions are not related by ⪰, we cannot conclude that the original expressions are related by ⊵.

4 A proof-of-concept implementation

Although the general idea of higher-level supercompilation is conceptually simple, a number of problems have to be solved in a practical implementation.

- Which zero-level supercompiler should be used as the basis for implementing higher-level supercompilation?
- How can the correctness of transformations be guaranteed?
- How can useful lemmas be generated?
- How can the termination of higher-level supercompilation be ensured?

To show the feasibility of higher-level supercompilation we have implemented a simple proof-of-concept higher-level supercompiler, HLSC, by modifying the supercompiler HOSC [13,15]. HOSC has been chosen because it (1) preserves the meaning of programs (including their termination properties), (2) is able to prove lemmas with universal quantification over functions and infinite data types, and (3) generates residual programs in the form of expressions, which enables program equivalence checking to be reduced to expression equivalence checking.

The correctness of the transformations is guaranteed, since HLSC only uses lemmas that are improvement lemmas. Note, however, that the check for improvement is based on zero-level supercompilation, for which reason HLSC currently implements only a two-level hierarchy of supercompilers, consisting of a top and a bottom supercompiler, rather than a multi-level one. The least elaborated points are the search for useful lemmas and ensuring the termination of higher-level supercompilation.

    S[[v]] = 1
    S[[c e_1 ... e_k]] = 1 + Σ_i S[[e_i]]
    S[[λv -> e]] = 1 + S[[e]]
    S[[case e_0 of {c_i v_i1 ... v_ik -> e_i;}]] = 1 + S[[e_0]] + Σ_i S[[e_i]]
    S[[e_1 e_2]] = S[[e_1]] + S[[e_2]]

Fig. 9. The size of an expression

Presently the generation of candidate expressions is implemented in a rather crude and straightforward way. When the top supercompiler finds a node b containing an expression e that embeds a previously encountered node a, it generates and tries all expressions e' whose size (see Fig. 9) is less than the size of e. The bottom supercompiler is used to check whether (e, e') is an improvement lemma; as soon as such an e' is found, the search for a lemma stops.

Ensuring the termination of the top supercompiler is an exciting problem that requires further investigation. The termination of zero-level supercompilation is achieved by the check for homeomorphic embedding and generalization [23,22,13]. However, the main idea of higher-level supercompilation (in the version presented in Fig. 6) consists in avoiding generalization: when an embedding is detected, the use of an improvement lemma enables the supercompiler to avoid generalization and continue to build the process tree. And this, potentially, may lead to non-termination.

From the practical point of view, though, non-termination can be avoided by imposing some restrictions on the use of lemmas. A simple and straightforward solution is the following. Suppose the check for embedding finds that an upper node a is embedded in a lower node b. Then b is replaced with b' with the aid of an improvement lemma, and supercompilation continues without generalization. But this fact is recorded in the node a, so that the next time a gets embedded in another node, no lemma will be used, and a will be generalized as in the case of zero-level supercompilation.

To some extent this idea is similar to the cross-fertilization used by Boyer and Moore [2] in their theorem prover. They argue that the induction hypothesis should be used just once, and then thrown away. (And then comes the turn of generalization.)
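Returning to the size measure of Fig. 9 used by the lemma generator: it is the obvious structural size, and a direct Haskell rendition over a toy expression type (the datatype is ours, mirroring the grammar used in the figures; HOSC's internal representation will differ) looks as follows.

-- The size measure S[[e]] of Fig. 9, over a toy expression type.
-- (The datatype is an assumption made for illustration.)
data Expr
  = Var String                            -- variables and function names
  | Con String [Expr]                     -- constructor applications: c e1 ... ek
  | Lam String Expr                       -- lambda abstractions
  | Case Expr [(String, [String], Expr)]  -- case e0 of { ci vi1 ... vik -> ei; }
  | App Expr Expr                         -- applications

size :: Expr -> Int
size (Var _)        = 1
size (Con _ es)     = 1 + sum (map size es)
size (Lam _ e)      = 1 + size e
size (Case e0 alts) = 1 + size e0 + sum [ size e | (_, _, e) <- alts ]
size (App e1 e2)    = size e1 + size e2

main :: IO ()
main =
  -- size of (even x): App (Var "even") (Var "x") has size 1 + 1 = 2
  print (size (App (Var "even") (Var "x")))

Under this measure, for instance, or (even n) (odd n) has size 5, which is why it is the first candidate found in the example of Section 5.1.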

data Bool = True | False;
data Nat = Z | S Nat;

or (even m) (odd m) where
  even = λx -> case x of { Z -> True;  S x1 -> odd x1; };
  odd  = λx -> case x of { Z -> False; S x1 -> even x1; };
  or   = λx y -> case x of { True -> True; False -> y; };

Fig. 10. or (even m) (odd m): the source program

or (even m) (odd m)
  case (even m) of {True -> True; False -> odd m;}
    case (case m of {Z -> True; S x -> odd x;}) of {True -> True; False -> odd m;}
      [m = Z]    True
      [m = S x]  case (odd x) of {True -> True; False -> odd (S x);}
                   case (case x of {Z -> False; S n -> even n;}) of {True -> True; False -> odd (S x);}
                     [x = Z]    True
                     [x = S n]  case (even n) of {True -> True; False -> odd (S (S n));}

Fig. 11. or (even m) (odd m): the whistle blows

5 Examples

5.1 Supercompiling a non-linear expression

Let us try to supercompile the program shown in Fig. 10. We know in advance that the expression or (even m) (odd m) can never return False, since a natural number m is either even or odd. But this cannot be readily seen from the program's text! After a few driving steps, we get the process tree shown in Fig. 11. At this point an embedding (by coupling) is detected:

    case (even m) of {True -> True; False -> odd m;}
      ⊴c
    case (even n) of {True -> True; False -> odd (S (S n));}

or (even m) (odd m)
  * let x=m in case (even m) of {True -> True; False -> odd x;}
      case (even m) of {True -> True; False -> odd x;}
        * case (case m of {Z -> True; S y -> odd y;}) of {True -> True; False -> odd x;}
            [m = Z]    True
            [m = S y]  case (odd y) of {True -> True; False -> odd x;}
                         * case (case y of {Z -> False; S z -> even z;}) of {True -> True; False -> odd x;}
                             [y = Z]    odd x
                             [y = S z]  case (even z) of {True -> True; False -> odd x;}

Fig. 12. or (even m) (odd m): after generalization

*(letrec
    f = *(λv -> case v of {
            Z -> True;
            S p -> *(case p of {
                      Z -> letrec
                             g = *(λw -> case w of {
                                     Z -> False;
                                     S t -> *(case t of { Z -> True; S z -> g z; }); })
                           in g m;
                      S x -> f x; }); })
  in f m)

Fig. 13. or (even m) (odd m): the result of zero-level supercompilation

let x=n in case (even n) of {True -> True; False -> odd (S (S x));}
  case (even n) of {True -> True; False -> odd (S (S x));}
    * case (case n of {Z -> True; S y -> odd y;}) of {True -> True; False -> odd (S (S x));}
        [n = Z]    True
        [n = S y]  case (odd y) of {True -> True; False -> odd (S (S x));}
                     * case (case y of {Z -> False; S z -> even z;}) of {True -> True; False -> odd (S (S x));}
                         [y = Z]    odd (S (S x))
                                      * even (S x)
                                          * odd x
                         [y = S z]  case (even z) of {True -> True; False -> odd (S (S x));}

Fig. 14. case (even n) of {True -> True; False -> odd (S (S n));}: annotated process tree

But the second expression is not an instance of the first one, for which reason a folding cannot be performed. So the zero-level supercompiler HOSC would perform a generalization, replacing the first expression with the let-expression

    let x = m in case (even m) of {True -> True; False -> odd x;}

Then it would continue by transforming the body of the let-expression instead of the original expression, thereby forgetting that x and m have the same value. This loss of information results in the residual program in Fig. 13, which contains False, despite the fact that False can never be returned by the program.

The higher-level version of HOSC, however, tries to find and apply an improvement lemma. The first lemma it finds has size 5:

    case (even n) of {True -> True; False -> odd (S (S n));}  ≅  or (even n) (odd n)

But this lemma is not an improvement lemma, and the higher-level supercompiler rejects it, supercompiling its left and right sides with the bottom supercompiler to produce the annotated expressions shown in Fig. 16 and Fig. 13, respectively. However, there exist two improvement lemmas of size 6:

    case (even n) of {True -> True; False -> odd (S (S n));}
      ⊵
    case (even n) of {True -> True; False -> odd n;}

    case (even n) of {True -> True; False -> odd (S (S n));}
      ⊵
    case (odd n) of {True -> odd n; False -> True;}

let x=n in case (even n) of {True -> True; False -> odd x;}
  case (even n) of {True -> True; False -> odd x;}
    * case (case n of {Z -> True; S y -> odd y;}) of {True -> True; False -> odd x;}
        [n = Z]    True
        [n = S y]  case (odd y) of {True -> True; False -> odd x;}
                     * case (case y of {Z -> False; S z -> even z;}) of {True -> True; False -> odd x;}
                         [y = Z]    odd x
                         [y = S z]  case (even z) of {True -> True; False -> odd x;}

Fig. 15. case (even n) of {True -> True; False -> odd n;}: annotated process tree

letrec
  f = *(λv -> case v of {
          Z -> True;
          S p -> *(case p of {
                    Z -> **(letrec
                              g = *(λw -> case w of {
                                      Z -> False;
                                      S t -> *(case t of { Z -> True; S z -> g z; }); })
                            in g n);
                    S x -> f x; }); })
in f n

Fig. 16. case (even n) of {True -> True; False -> odd (S (S n));}: annotated residual program

letrec
  f = *(λv -> case v of {
          Z -> True;
          S p -> *(case p of {
                    Z -> *(letrec
                             g = *(λw -> case w of {
                                     Z -> False;
                                     S t -> *(case t of { Z -> True; S z -> g z; }); })
                           in g n);
                    S x -> f x; }); })
in f n

Fig. 17. case (even n) of {True -> True; False -> odd n;}: annotated residual program

letrec
  f = λw -> case w of {
        Z -> True;
        S x -> case x of { Z -> True; S z -> f z; }; }
in f m

Fig. 18. or (even m) (odd m): the result of higher-level supercompilation

The higher-level HOSC finds and applies the first of these lemmas, thereby avoiding generalization and producing the program in Fig. 18. Now False does not appear in the program!

5.2 Accumulating parameter: using an improvement lemma

Let us reconsider the program with an accumulating parameter shown in Fig. 2. If we try to supercompile it, the whistle blows for the following expressions:

    case double x Z of {Z -> True; S y -> odd y;}
      ⊴c
    case double n (S (S Z)) of {Z -> True; S m -> odd m;}

There are two improvement lemmas (of minimal size):

    case double n (S (S Z)) of {Z -> True; S m -> odd m;}
      ⊵
    case double n (S Z) of {Z -> True; S m -> even m;}

    case double n (S (S Z)) of {Z -> True; S m -> odd m;}
      ⊵
    case double n (S Z) of {Z -> False; S m -> even m;}

The higher-level HOSC finds and applies the first one and, after some driving, the whistle blows for a second time:

    case double n (S Z) of {Z -> True; S m -> even m;}
      ⊴c
    case double p (S (S (S Z))) of {Z -> True; S m -> even m;}

Again, there are two improvement lemmas (of minimal size):

    case double p (S (S (S Z))) of {Z -> True; S m -> even m;}
      ⊵
    case double p (S Z) of {Z -> True; S m -> even m;}

    case double p (S (S (S Z))) of {Z -> True; S m -> even m;}
      ⊵
    case double p (S Z) of {Z -> False; S m -> even m;}

The application of the first lemma enables a fold to be performed without generalization, so that the higher-level HOSC produces the program in Fig. 19.

case x of {
  Z -> True;
  S y1 -> letrec f = λt2 -> case t2 of { Z -> True; S u2 -> f u2; }
          in f y1; }

Fig. 19. even (double x Z): the result of higher-level supercompilation

6 Discussion and conclusion

The main idea of higher-level supercompilation is based on the principle of metasystem transition [27,28]. Another approach to increasing the power of supercompilation on the basis of metasystem transition is distillation [8,10,9]. In many cases distillation and higher-level supercompilation produce similar results, but an apparent advantage of higher-level supercompilation is its conceptual simplicity and modularity: it can be implemented by slightly modifying a classic supercompiler, adding a (conceptually) trivial lemma generator, and making several copies of the same supercompiler interact. Since the lemma generator uses the supercompiler as a black box, its design does not depend on the subtle details of the supercompilation process.

Our current implementation of higher-level supercompilation is a proof-of-concept one; it is rather naive and can be improved in a variety of ways.

First, the higher-level supercompilation algorithm shown in Fig. 6 tries to apply a lemma only to the whole embedding (lower) expression. But lemmas could be applied in a more refined way:

- An (instance of an) improvement lemma could be applied to a subexpression of the embedding expression.
- To avoid generalization, an (instance of an) improvement lemma could also be applied to a (sub)expression of the embedded (upper) expression.

Second, the search for lemmas is implemented in a straightforward way: no attempt is made to take into account the structure of the embedding (lower) and embedded (upper) expressions. However, there are a few techniques developed in the field of inductive theorem proving (such as difference matching [1], rippling [3] and the divergence critic [29]) that could be used to implement a more refined lemma generator.

Higher-level supercompilation does not depend on minor implementation details of the supercompiler it is based upon. However, some properties of the supercompiler do matter. First of all, the check whether a pair of expressions forms an improvement lemma [20,21] relies on the supercompiler preserving the termination properties of programs [15,13]. This requirement is not met by all supercompilers. For example, the supercompiler SCP4 [16], which deals with programs in Refal, a strict first-order functional language, may extend the domain of a transformed function, for which reason the equivalence of expressions can be proven by supercompilation only for total expressions operating on finite data structures [17].

During supercompilation, termination properties are easier to preserve for a lazy functional language than for a strict one. Nevertheless, Jonsson [11] succeeded in developing a termination-preserving supercompilation technique for a higher-order call-by-value language. Therefore, higher-level supercompilation is certainly applicable to higher-order strict languages as well.

Since any residual program produced by HOSC is a self-contained expression, the check for equality and improvement amounts to a trivial comparison of expressions. In the case of a supercompiler like Supero [19,18], residual programs may have a less trivial structure, so comparing them for syntactic isomorphism may be more intricate than in the case of HOSC.

In principle, higher-level supercompilation should also be implementable on the basis of a supercompiler for an imperative or object-oriented language, such as the Java supercompiler by Klimov [12], but a number of technical problems remain to be investigated.

Acknowledgements

We would like to express our gratitude to Andrei V. Klimov and Yuri A. Klimov for their comments and fruitful discussions. Special thanks are due to Sergei M. Abramov for his encouragement and support for the project.

References

1. D. A. Basin and T. Walsh. Difference matching. In CADE-11: Proceedings of the 11th International Conference on Automated Deduction. Springer-Verlag, London, UK.

2. R. S. Boyer and J. S. Moore. Proving theorems about LISP functions. J. ACM, 22(1).
3. A. Bundy, A. Stevens, F. van Harmelen, A. Ireland, and A. Smaill. Rippling: a heuristic for guiding inductive proofs. Artif. Intell., 62(2).
4. R. M. Burstall and J. Darlington. A transformation system for developing recursive programs. Journal of the ACM (JACM), 24(1):44–67.
5. A. P. Ershov. On the essence of compilation. In E. Neuhold, editor, Formal Description of Programming Concepts. North-Holland.
6. Y. Futamura. Partial evaluation of computation process – an approach to a compiler-compiler. Systems, Computers, Controls, 2(5):45–50.
7. R. Glück. Is there a fourth Futamura projection? In PEPM '09: Proceedings of the 2009 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation, pages 51–60. ACM, New York, NY, USA.
8. G. W. Hamilton. Distillation: extracting the essence of programs. In Proceedings of the 2007 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation. ACM Press, New York, NY, USA.
9. G. W. Hamilton. Extracting the essence of distillation. In Perspectives of Systems Informatics, volume 5947 of LNCS.
10. G. W. Hamilton and M. H. Kabir. Constructing programs from metasystem transition proofs. In Proceedings of the First International Workshop on Metacomputation in Russia, pages 9–26. Ailamazyan University of Pereslavl, Pereslavl-Zalessky, Russia.
11. P. A. Jonsson. Positive supercompilation for a higher-order call-by-value language. Master's thesis, Luleå University of Technology.
12. And. V. Klimov. A Java supercompiler and its application to verification of cache-coherence protocols. In Perspectives of Systems Informatics, volume 5947 of LNCS.
13. I. Klyuchnikov. Supercompiler HOSC 1.0: under the hood. Preprint 63, Keldysh Institute of Applied Mathematics.
14. I. Klyuchnikov. Supercompiler HOSC 1.1: proof of termination. Preprint 21, Keldysh Institute of Applied Mathematics.
15. I. Klyuchnikov and S. Romanenko. Proving the equivalence of higher-order terms by means of supercompilation. In Perspectives of Systems Informatics, volume 5947 of LNCS.
16. A. P. Lisitsa and A. P. Nemytykh. Verification as a parameterized testing (experiments with the SCP4 supercompiler). Program. Comput. Softw., 33(1):14–23.
17. A. P. Lisitsa and M. Webster. Supercompilation for equivalence testing in metamorphic computer viruses detection. In Proceedings of the First International Workshop on Metacomputation in Russia. Ailamazyan University of Pereslavl.
18. N. Mitchell. Transformation and Analysis of Functional Programs. PhD thesis, University of York.
19. N. Mitchell and C. Runciman. A supercompiler for core Haskell. In Implementation and Application of Functional Languages, volume 5083 of LNCS. Springer-Verlag, Berlin, Heidelberg.
20. D. Sands. Proving the correctness of recursion-based automatic program transformations. Theoretical Computer Science, 167(1–2).
21. D. Sands. Total correctness by local improvement in the transformation of functional programs. ACM Trans. Program. Lang. Syst., 18(2), 1996.

22. M. H. Sørensen. Convergence of program transformers in the metric space of trees. Sci. Comput. Program., 37(1–3).
23. M. H. Sørensen and R. Glück. An algorithm of generalization in positive supercompilation. In J. W. Lloyd, editor, Logic Programming: The 1995 International Symposium.
24. M. H. Sørensen, R. Glück, and N. D. Jones. Towards unifying partial evaluation, deforestation, supercompilation, and GPC. In ESOP '94: Proceedings of the 5th European Symposium on Programming. Springer-Verlag, London, UK.
25. M. H. Sørensen, R. Glück, and N. D. Jones. A positive supercompiler. Journal of Functional Programming, 6(6).
26. V. F. Turchin. The Phenomenon of Science: A Cybernetic Approach to Human Evolution. Columbia University Press, New York, 1977.
27. V. F. Turchin. The concept of a supercompiler. ACM Transactions on Programming Languages and Systems (TOPLAS), 8(3).
28. V. F. Turchin. Metacomputation: Metasystem transitions plus supercompilation. In Partial Evaluation, volume 1110 of LNCS. Springer.
29. T. Walsh. A divergence critic for inductive proof. J. Artif. Int. Res., 4(1), 1996.


CSCI.6962/4962 Software Verification Fundamental Proof Methods in Computer Science (Arkoudas and Musser) Sections p. CSCI.6962/4962 Software Verification Fundamental Proof Methods in Computer Science (Arkoudas and Musser) Sections 10.1-10.3 p. 1/106 CSCI.6962/4962 Software Verification Fundamental Proof Methods in Computer

More information

Chapter 2 The Language PCF

Chapter 2 The Language PCF Chapter 2 The Language PCF We will illustrate the various styles of semantics of programming languages with an example: the language PCF Programming language for computable functions, also called Mini-ML.

More information

3.7 Denotational Semantics

3.7 Denotational Semantics 3.7 Denotational Semantics Denotational semantics, also known as fixed-point semantics, associates to each programming language construct a well-defined and rigorously understood mathematical object. These

More information

Modal Logic: Implications for Design of a Language for Distributed Computation p.1/53

Modal Logic: Implications for Design of a Language for Distributed Computation p.1/53 Modal Logic: Implications for Design of a Language for Distributed Computation Jonathan Moody (with Frank Pfenning) Department of Computer Science Carnegie Mellon University Modal Logic: Implications for

More information

Typed Lambda Calculus and Exception Handling

Typed Lambda Calculus and Exception Handling Typed Lambda Calculus and Exception Handling Dan Zingaro zingard@mcmaster.ca McMaster University Typed Lambda Calculus and Exception Handling p. 1/2 Untyped Lambda Calculus Goal is to introduce typing

More information

Lecture Notes on Loop Optimizations

Lecture Notes on Loop Optimizations Lecture Notes on Loop Optimizations 15-411: Compiler Design Frank Pfenning Lecture 17 October 22, 2013 1 Introduction Optimizing loops is particularly important in compilation, since loops (and in particular

More information

Translation validation: from Simulink to C

Translation validation: from Simulink to C Translation validation: from Simulink to C Ofer Strichman Michael Ryabtsev Technion, Haifa, Israel. Email: ofers@ie.technion.ac.il, michaelr@cs.technion.ac.il Abstract. Translation validation is a technique

More information

SCHOOL: a Small Chorded Object-Oriented Language

SCHOOL: a Small Chorded Object-Oriented Language SCHOOL: a Small Chorded Object-Oriented Language S. Drossopoulou, A. Petrounias, A. Buckley, S. Eisenbach { s.drossopoulou, a.petrounias, a.buckley, s.eisenbach } @ imperial.ac.uk Department of Computing,

More information

Type-Safe Update Programming

Type-Safe Update Programming Type-Safe Update Programming Martin Erwig and Deling Ren Oregon State University Department of Computer Science [erwig rende]@cs.orst.edu Abstract. Many software maintenance problems are caused by using

More information

Basic concepts. Chapter Toplevel loop

Basic concepts. Chapter Toplevel loop Chapter 3 Basic concepts We examine in this chapter some fundamental concepts which we will use and study in the following chapters. Some of them are specific to the interface with the Caml language (toplevel,

More information

Program Calculus Calculational Programming

Program Calculus Calculational Programming Program Calculus Calculational Programming National Institute of Informatics June 21 / June 28 / July 5, 2010 Program Calculus Calculational Programming What we will learn? Discussing the mathematical

More information

5. Introduction to the Lambda Calculus. Oscar Nierstrasz

5. Introduction to the Lambda Calculus. Oscar Nierstrasz 5. Introduction to the Lambda Calculus Oscar Nierstrasz Roadmap > What is Computability? Church s Thesis > Lambda Calculus operational semantics > The Church-Rosser Property > Modelling basic programming

More information

CONSECUTIVE INTEGERS AND THE COLLATZ CONJECTURE. Marcus Elia Department of Mathematics, SUNY Geneseo, Geneseo, NY

CONSECUTIVE INTEGERS AND THE COLLATZ CONJECTURE. Marcus Elia Department of Mathematics, SUNY Geneseo, Geneseo, NY CONSECUTIVE INTEGERS AND THE COLLATZ CONJECTURE Marcus Elia Department of Mathematics, SUNY Geneseo, Geneseo, NY mse1@geneseo.edu Amanda Tucker Department of Mathematics, University of Rochester, Rochester,

More information

Functional Languages. Hwansoo Han

Functional Languages. Hwansoo Han Functional Languages Hwansoo Han Historical Origins Imperative and functional models Alan Turing, Alonzo Church, Stephen Kleene, Emil Post, etc. ~1930s Different formalizations of the notion of an algorithm

More information

Handout 10: Imperative programs and the Lambda Calculus

Handout 10: Imperative programs and the Lambda Calculus 06-02552 Princ of Progr Languages (and Extended ) The University of Birmingham Spring Semester 2016-17 School of Computer Science c Uday Reddy2016-17 Handout 10: Imperative programs and the Lambda Calculus

More information

Abstract Path Planning for Multiple Robots: An Empirical Study

Abstract Path Planning for Multiple Robots: An Empirical Study Abstract Path Planning for Multiple Robots: An Empirical Study Charles University in Prague Faculty of Mathematics and Physics Department of Theoretical Computer Science and Mathematical Logic Malostranské

More information

Lecture 9: Typed Lambda Calculus

Lecture 9: Typed Lambda Calculus Advanced Topics in Programming Languages Spring Semester, 2012 Lecture 9: Typed Lambda Calculus May 8, 2012 Lecturer: Mooly Sagiv Scribe: Guy Golan Gueta and Shachar Itzhaky 1 Defining a Type System for

More information

On the Definition of Sequential Consistency

On the Definition of Sequential Consistency On the Definition of Sequential Consistency Ali Sezgin Ganesh Gopalakrishnan Abstract The definition of sequential consistency is compared with an intuitive notion of correctness. A relation between what

More information

1 Scope, Bound and Free Occurrences, Closed Terms

1 Scope, Bound and Free Occurrences, Closed Terms CS 6110 S18 Lecture 2 The λ-calculus Last time we introduced the λ-calculus, a mathematical system for studying the interaction of functional abstraction and functional application. We discussed the syntax

More information

Unlabeled equivalence for matroids representable over finite fields

Unlabeled equivalence for matroids representable over finite fields Unlabeled equivalence for matroids representable over finite fields November 16, 2012 S. R. Kingan Department of Mathematics Brooklyn College, City University of New York 2900 Bedford Avenue Brooklyn,

More information

7. Introduction to Denotational Semantics. Oscar Nierstrasz

7. Introduction to Denotational Semantics. Oscar Nierstrasz 7. Introduction to Denotational Semantics Oscar Nierstrasz Roadmap > Syntax and Semantics > Semantics of Expressions > Semantics of Assignment > Other Issues References > D. A. Schmidt, Denotational Semantics,

More information

Subsumption. Principle of safe substitution

Subsumption. Principle of safe substitution Recap on Subtyping Subsumption Some types are better than others, in the sense that a value of one can always safely be used where a value of the other is expected. Which can be formalized as by introducing:

More information

CSCC24 Functional Programming Scheme Part 2

CSCC24 Functional Programming Scheme Part 2 CSCC24 Functional Programming Scheme Part 2 Carolyn MacLeod 1 winter 2012 1 Based on slides from Anya Tafliovich, and with many thanks to Gerald Penn and Prabhakar Ragde. 1 The Spirit of Lisp-like Languages

More information

ALGORITHMIC DECIDABILITY OF COMPUTER PROGRAM-FUNCTIONS LANGUAGE PROPERTIES. Nikolay Kosovskiy

ALGORITHMIC DECIDABILITY OF COMPUTER PROGRAM-FUNCTIONS LANGUAGE PROPERTIES. Nikolay Kosovskiy International Journal Information Theories and Applications, Vol. 20, Number 2, 2013 131 ALGORITHMIC DECIDABILITY OF COMPUTER PROGRAM-FUNCTIONS LANGUAGE PROPERTIES Nikolay Kosovskiy Abstract: A mathematical

More information

Imperative Functional Programming

Imperative Functional Programming Imperative Functional Programming Uday S. Reddy Department of Computer Science The University of Illinois at Urbana-Champaign Urbana, Illinois 61801 reddy@cs.uiuc.edu Our intuitive idea of a function is

More information

A Model of Machine Learning Based on User Preference of Attributes

A Model of Machine Learning Based on User Preference of Attributes 1 A Model of Machine Learning Based on User Preference of Attributes Yiyu Yao 1, Yan Zhao 1, Jue Wang 2 and Suqing Han 2 1 Department of Computer Science, University of Regina, Regina, Saskatchewan, Canada

More information

ACONCURRENT system may be viewed as a collection of

ACONCURRENT system may be viewed as a collection of 252 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 10, NO. 3, MARCH 1999 Constructing a Reliable Test&Set Bit Frank Stomp and Gadi Taubenfeld AbstractÐThe problem of computing with faulty

More information

the application rule M : x:a: B N : A M N : (x:a: B) N and the reduction rule (x: A: B) N! Bfx := Ng. Their algorithm is not fully satisfactory in the

the application rule M : x:a: B N : A M N : (x:a: B) N and the reduction rule (x: A: B) N! Bfx := Ng. Their algorithm is not fully satisfactory in the The Semi-Full Closure of Pure Type Systems? Gilles Barthe Institutionen for Datavetenskap, Chalmers Tekniska Hogskola, Goteborg, Sweden Departamento de Informatica, Universidade do Minho, Braga, Portugal

More information

Theory and Algorithms for the Generation and Validation of Speculative Loop Optimizations

Theory and Algorithms for the Generation and Validation of Speculative Loop Optimizations Theory and Algorithms for the Generation and Validation of Speculative Loop Optimizations Ying Hu Clark Barrett Benjamin Goldberg Department of Computer Science New York University yinghubarrettgoldberg

More information

An ACL2 Tutorial. Matt Kaufmann and J Strother Moore

An ACL2 Tutorial. Matt Kaufmann and J Strother Moore An ACL2 Tutorial Matt Kaufmann and J Strother Moore Department of Computer Sciences, University of Texas at Austin, Taylor Hall 2.124, Austin, Texas 78712 {kaufmann,moore}@cs.utexas.edu Abstract. We describe

More information

A Fast Algorithm for Optimal Alignment between Similar Ordered Trees

A Fast Algorithm for Optimal Alignment between Similar Ordered Trees Fundamenta Informaticae 56 (2003) 105 120 105 IOS Press A Fast Algorithm for Optimal Alignment between Similar Ordered Trees Jesper Jansson Department of Computer Science Lund University, Box 118 SE-221

More information

The Systematic Generation of Channelling Constraints

The Systematic Generation of Channelling Constraints The Systematic Generation of Channelling Constraints Bernadette Martínez-Hernández and Alan M. Frisch Artificial Intelligence Group, Dept. of Computer Science, Univ. of York, York, UK Abstract. The automatic

More information

Proving Invariants of Functional Programs

Proving Invariants of Functional Programs Proving Invariants of Functional Programs Zoltán Horváth, Tamás Kozsik, Máté Tejfel Eötvös Loránd University Department of General Computer Science {hz,kto,matej}@inf.elte.hu Abstract. In a pure functional

More information

Foundations. Yu Zhang. Acknowledgement: modified from Stanford CS242

Foundations. Yu Zhang. Acknowledgement: modified from Stanford CS242 Spring 2013 Foundations Yu Zhang Acknowledgement: modified from Stanford CS242 https://courseware.stanford.edu/pg/courses/317431/ Course web site: http://staff.ustc.edu.cn/~yuzhang/fpl Reading Concepts

More information

The Markov Reformulation Theorem

The Markov Reformulation Theorem The Markov Reformulation Theorem Michael Kassoff and Michael Genesereth Logic Group, Department of Computer Science Stanford University {mkassoff, genesereth}@cs.stanford.edu Abstract In this paper, we

More information

Functional Programming Languages (FPL)

Functional Programming Languages (FPL) Functional Programming Languages (FPL) 1. Definitions... 2 2. Applications... 2 3. Examples... 3 4. FPL Characteristics:... 3 5. Lambda calculus (LC)... 4 6. Functions in FPLs... 7 7. Modern functional

More information

(Refer Slide Time: 4:00)

(Refer Slide Time: 4:00) Principles of Programming Languages Dr. S. Arun Kumar Department of Computer Science & Engineering Indian Institute of Technology, Delhi Lecture - 38 Meanings Let us look at abstracts namely functional

More information