Notes on evaluating λ-calculus terms and abstract machines
J.R.B. Cockett
Department of Computer Science, University of Calgary
Calgary, T2N 1N4, Alberta, Canada
November 17

1 Introduction

In this document we discuss evaluation strategies for λ-terms. We shall present six different evaluation strategies. They are distinguished by whether they are by-value (basically innermost) or by-name (basically outermost) and by what sort of normal form they aim to produce. There are three normal forms: weak head normal form, head normal form, and normal form. To implement the λ-calculus one must take the further step of translating these reduction strategies into the actions of an abstract machine for which one compiles λ-terms into code. This makes for very simple yet powerful evaluation techniques which are relatively efficient. We illustrate this with two machines: a modern version of Landin's SECD machine and the Krivine machine.

2 Evaluation strategies

There are two basic families of evaluation strategies (reduction strategies) for the λ-calculus: by-value and by-name. The former family, by-value, evaluates arguments (and sometimes function bodies) before applying a function to its arguments. The by-value strategies are relatively simple to implement and are reasonably efficient; however, they suffer from a significant defect: they may not find a normal form (or a weak head normal form) even if there is one. The by-name strategies are (perhaps) more complex to implement as they leave the arguments unevaluated: this can make the evaluation very inefficient, as unevaluated terms can be duplicated, thereby doubling the cost of their evaluation. However, the by-name strategies have an important theoretical advantage: if there is a normal form (respectively a weak head normal form) they will find it.
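The practical difference between the two families can be seen even in Python, whose functions are by-value (like SML and OCaml): wrapping an argument in a zero-argument function (a thunk) simulates by-name evaluation. A minimal sketch; the names `K_by_name` and `omega` are our own:

```python
def omega():
    """Ω = (λx.x x)(λx.x x): calling this never returns (it blows the stack)."""
    f = lambda x: x(x)
    return f(f)

# By-value: K(omega())(3) would evaluate omega() first and never return,
# because Python evaluates arguments before calling the function.

# By-name version of K = λx.λy.y: arguments arrive as unevaluated thunks
# and are only forced (called) when their value is actually needed.
def K_by_name(x_thunk):
    return lambda y_thunk: y_thunk()

# The diverging argument is wrapped in a thunk that is never forced:
result = K_by_name(lambda: omega())(lambda: 3)
print(result)  # 3
```

This is exactly the theoretical advantage of by-name evaluation: the answer is found even though a subterm has no value.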
Because of this, considerable effort has been put into making these strategies more efficient, and this has led to the development of graph reduction techniques for lazy evaluation: these underlie the evaluation of Haskell programs, for example. In graph reduction the duplicated terms are shared and, thus, are only evaluated once: in terms of the number of β-reduction steps, lazy evaluation does the minimal number possible. However, there is an overhead to sharing terms, in order to keep track of when a term has already been evaluated, which will sometimes make it less efficient than the simpler by-value evaluation. Both by-value and by-name strategies come in different forms depending on where the evaluation terminates, if it does terminate. The weak strategies aim to produce a result in weak head normal form. The essential feature of the weak evaluation strategies is that they never
evaluate the bodies of λ-abstractions. These evaluation strategies are closely related to what is actually implemented in functional languages: the tendency is to want to compile (and optimize) function bodies and thus never to modify them while the program is running. The strong evaluation strategies aim to produce a term in normal form. Strong reduction strategies are primarily of theoretical interest: for example, strong by-name evaluation is a leftmost outermost reduction, often called normal order reduction, and is guaranteed to find a normal form if there is one. Finally there are the head reduction strategies: these evaluate the term to head normal form. Head reduction strategies do go inside the top-level λ-abstractions but no others. Head evaluation is also closely related to evaluation strategies for functional languages: when constructors for datatypes are not built in but are instead expressed in the λ-calculus, they involve a top-level binding which the evaluation must go under.

Lastly, a remark: writing down formally how an evaluation strategy works may seem easy... but, in fact, it is not so easy! The crucial issue is: how exactly does one efficiently find the next redex to reduce? For example, saying that one always reduces the leftmost outermost redex tells one how to find the next redex from the global perspective of the whole term; however, an implementation which searches for redexes by starting from scratch at each step would be very inefficient, and this is never done in practice. Thus, more detailed descriptions are necessary. To this end, below, we use both inference systems and recursive descriptions of the evaluation techniques. (One reason for using inference systems is actually because you must get used to using them!) These lead ultimately to the abstract machines which are used in practice for evaluation.

2.1 By-value evaluation strategies

We shall start by discussing by-value evaluation strategies.
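The three normal forms can be made concrete as predicates on a small term representation. This is a sketch using our own tuple encoding of λ-terms, not anything from the notes:

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', fun, arg)

def head_of_spine(t):
    """Follow the left branch of an application chain down to its head."""
    while t[0] == 'app':
        t = t[1]
    return t

def in_whnf(t):
    """Weak head normal form: a λ-abstraction, or x N1 ... Np."""
    return t[0] == 'lam' or head_of_spine(t)[0] == 'var'

def in_hnf(t):
    """Head normal form: λx1...xn. y N1 ... Np (n and p may be 0)."""
    while t[0] == 'lam':
        t = t[2]
    return head_of_spine(t)[0] == 'var'

def in_nf(t):
    """Normal form: no β-redex anywhere in the term."""
    if t[0] == 'var':
        return True
    if t[0] == 'lam':
        return in_nf(t[2])
    return t[1][0] != 'lam' and in_nf(t[1]) and in_nf(t[2])

# λx.(λy.y) x is a whnf (it is an abstraction) but not an hnf or nf,
# since its body still contains the redex (λy.y) x.
t = ('lam', 'x', ('app', ('lam', 'y', ('var', 'y')), ('var', 'x')))
print(in_whnf(t), in_hnf(t), in_nf(t))  # True False False
```

The predicates make the inclusions visible: every normal form is a head normal form, and every head normal form is a weak head normal form.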
We shall discuss two of these by-value strategies. The first strategy is the innermost strategy: here, whenever a reduction is performed, all the subexpressions of the redex must be in normal form. Thus the reduction

    (λx.M) N  ⇒  M[N/x]

can only be performed once M and N have been reduced to normal form. The second strategy is the rightmost evaluation strategy: here the above reduction can be performed once N is in normal form. This means that in rightmost evaluation one does not evaluate function bodies (λ-abstractions) before applying them. From a programming perspective this is very natural, as normally one wishes to compile function bodies into code which one does not rewrite.

2.1.1 Innermost evaluation

This is a strong reduction strategy. As described above, in this strategy one can only perform a β-reduction when the subexpressions of the redex are already in normal form. This means that when one encounters a redex one must recursively reduce all its subexpressions (this can be done in parallel) before one performs the reduction of that redex. This does have the slightly peculiar property that one reduces the body of functions before one applies them, and this is not usually how functional languages are implemented. The strategy can be described by the inference system in Table 1. The inferences in the system are to be read as saying that, in order to conclude the statement below the line, one must first have
concluded all the things above the line.

    M ⇓ λx.M′    N ⇓ N′    (λx.M′) N′ →β L    L ⇓ L′
    --------------------------------------------------  Evaluate to abstraction
                        M N ⇓ L′

    M ⇓ P Q    N ⇓ N′
    --------------------  Evaluate to application
      M N ⇓ (P Q) N′

    M ⇓ y    N ⇓ N′
    ------------------  Evaluate to variable
       M N ⇓ y N′

    ---------  Variable
     x ⇓ x

        M ⇓ M′
    ----------------  Abstraction
    λx.M ⇓ λx.M′

            Table 1: Innermost evaluation

Thus, the first rule of Table 1 says that, in order to evaluate M N to L′ (written M N ⇓ L′ below the line) one must:

(a) Evaluate M to an abstraction of the form (here we are working up to α-equality) λx.M′. This is indicated by M ⇓ λx.M′. If, in fact, M does not evaluate to an abstraction we use one of the next two rules, which are designed to cover the other possible cases.

(b) Evaluate N to N′, written N ⇓ N′.

(c) Perform the β-reduction (λx.M′) N′ →β L.

(d) Evaluate L to L′, that is L ⇓ L′.

Consider the following examples:

Example 2.1
(1) Consider the by-value evaluation of (λx.(λy.y y) x) z. First the body of the abstraction is evaluated:

    (a) λy.y y ⇓ λy.y y
    (b) x ⇓ x
    (c) (λy.y y) x →β x x
    (d) x x ⇓ x x

so (λy.y y) x ⇓ x x, and hence λx.(λy.y y) x ⇓ λx.x x. Then the whole term is evaluated:

    (a) λx.(λy.y y) x ⇓ λx.x x
    (b) z ⇓ z
    (c) (λx.x x) z →β z z
    (d) z z ⇓ z z

so (λx.(λy.y y) x) z ⇓ z z.

(2) Notice that (λxy.y) Ω does not have a terminating innermost by-value reduction, as one must evaluate Ω := (λx.x x) (λx.x x) before one can perform the β-reduction at the root of this term: the reduction of Ω, of course, never terminates. Similarly, (λxy.y) (λy.Ω) will not have a terminating innermost reduction because of the presence of Ω.

(3) We shall use the evaluation of square (square 2) as a running example, where

    square := λx. x ∗ x

In addition we shall, to facilitate this example, allow ourselves the assumption that when two numbers are multiplied they can be reduced to the answer, e.g. 2 ∗ 2 ⇒ 4. Here is the structure of the by-value evaluation:

    (a) square ⇓ square
    (b) square 2 ⇓ 4    (since square ⇓ square, 2 ⇓ 2, and square 2 →β 2 ∗ 2 ⇒ 4)
    (c) square 4 →β 4 ∗ 4 ⇒ 16
    (d) 16 ⇓ 16

so square (square 2) ⇓ 16.
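Derivations like those above can be reproduced mechanically. Here is a Python sketch of innermost evaluation over a tuple encoding of λ-terms; the encoding is our own, and the substitution is naive (no capture avoidance), which is adequate for the examples here:

```python
def subst(t, x, s):
    """t[s/x]; naive, fine when s is closed or no capture can occur."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def I(t):
    """Innermost (strong by-value) evaluation: normalize all subexpressions
    of a redex, including function bodies, before reducing it."""
    if t[0] == 'var':
        return t
    if t[0] == 'lam':
        return ('lam', t[1], I(t[2]))        # normalize the body
    m, n = I(t[1]), I(t[2])                  # normalize both sides first
    if m[0] == 'lam':
        return I(subst(m[2], m[1], n))       # then β-reduce
    return ('app', m, n)

# Example 2.1(1): (λx.(λy.y y) x) z evaluates to z z.
term = ('app',
        ('lam', 'x', ('app', ('lam', 'y', ('app', ('var', 'y'), ('var', 'y'))),
                             ('var', 'x'))),
        ('var', 'z'))
print(I(term))  # ('app', ('var', 'z'), ('var', 'z'))
```

Note that `I` loops forever on terms containing Ω, exactly as the example warns.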
Notice how every subexpression gets evaluated before the whole expression, and also how one normalizes the function bodies: here they are already taken as being in normal form. We may also express the innermost reduction strategy very simply as a recursive function on λ-terms:

    I(x) = x
    I(λx.M) = λx.I(M)
    I(M N) = { I(L[I(N)/y])   if I(M) = λy.L
             { I(M) I(N)      otherwise

2.1.2 Rightmost evaluation

The basic reduction strategy adopted by SML and OCaml is a weak rightmost reduction strategy. This means arguments to functions are evaluated before functions are applied to them, and it is weak because function bodies are never evaluated. The strong rightmost by-value reduction strategy, however, does allow function bodies to be evaluated, but only when they are not applied to any argument. This makes it more complicated to describe, as the reduction involves aspects of weak reduction until it is discovered that the λ-term cannot ever be applied, when the reduction is pushed under the λ-abstractions.

Weak rightmost evaluation

We shall start by describing the weak rightmost reduction system. The weak reduction strategies reduce λ-terms to what is called weak head normal form. Thus, t ⇓w whnf(t) when the reduction terminates: we shall see shortly that Ω has no weak head normal form. By way of contrast, the strong rightmost reduction strategy, if it terminates, will do so only at a normal form, thus t ⇓s nf(t). A λ-term, M, is in weak head normal form (1) in case it is a λ-abstraction, M = λx.N, or it is of the form M = x N1 ... Np where x is any variable. Shortly we shall see why it is called a weak head normal form: it is a head normal form which, in addition, could be a λ-abstraction. The weak rightmost reduction strategy is described in Table 2. The key difference is that when a λ-abstraction is encountered the evaluation stops and never goes inside/under the abstraction.
Notice also that the system does evaluate arguments in a rightmost fashion, although the evaluation which distinguishes the cases has been put first for clarity. In fact, the order of evaluation of the arguments does not actually matter and, indeed, can be done in parallel. Thus, if we take the first rule, what it says in detail is: in order to evaluate M N to L′,

(a) Evaluate N to N′.

(1) Weak head normal forms were introduced by Simon Peyton Jones to reflect the form to which functional languages actually evaluate.
    (b) M ⇓w λx.M′    (a) N ⇓w N′    (c) (λx.M′) N′ →β L    (d) L ⇓w L′
    ----------------------------------------------------------------------  M-evaluate to abstraction
                                M N ⇓w L′

    (b) M ⇓w P Q    (a) N ⇓w N′
    ------------------------------  M-evaluate to application
         M N ⇓w (P Q) N′

    (b) M ⇓w y    (a) N ⇓w N′
    ----------------------------  M-evaluate to variable
         M N ⇓w y N′

    ----------  Variable
     x ⇓w x

    ------------------  Weak reduction
    λx.M ⇓w λx.M

            Table 2: Weak rightmost evaluation

(b) Evaluate M to an abstraction λx.M′. If it does not evaluate to an abstraction, use the appropriate one of the rules which follow... note that each requires that (a) be performed.

(c) Perform the β-reduction (λx.M′) N′ →β L.

(d) Evaluate L to L′.

The system, even if one evaluates the arguments in a different order, does ensure that arguments are evaluated before functions are applied: this is the key feature of by-value evaluation.

Example 2.2

(1) Consider the weak rightmost evaluation of (λxy.y) (λy.y Ω):

    (a) (λy.y Ω) ⇓w (λy.y Ω)    (λ-abstraction)
    (b) (λxy.y) ⇓w (λxy.y)      (λ-abstraction)
    (c) (λxy.y) (λy.y Ω) →β λy.y
    (d) (λy.y) ⇓w (λy.y)        (λ-abstraction)

so (λxy.y) (λy.y Ω) ⇓w λy.y. This shows that under weak rightmost reduction (here we have swapped (a) and (b) to get strict rightmost reduction) one can have non-terminating subterms which never get evaluated. Note that the innermost reduction strategy will not terminate on this term.
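The weak rightmost strategy of Table 2 can be run directly. A Python sketch, over our own tuple encoding of λ-terms, with naive substitution (adequate for the closed examples here):

```python
def subst(t, x, s):
    """t[s/x]; naive, fine when s is closed or no capture can occur."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def Rw(t):
    """Weak rightmost by-value evaluation to weak head normal form."""
    if t[0] in ('var', 'lam'):           # variables and abstractions are whnf
        return t
    n = Rw(t[2])                         # rightmost: the argument first
    m = Rw(t[1])                         # then the function position
    if m[0] == 'lam':
        return Rw(subst(m[2], m[1], n))  # β-reduce and continue
    return ('app', m, n)

# Example 2.2(1): (λx.λy.y)(λy.y Ω) evaluates to λy.y; the Ω inside the
# argument sits under a λ and so is never touched.
delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
omega = ('app', delta, delta)
K = ('lam', 'x', ('lam', 'y', ('var', 'y')))
arg = ('lam', 'y', ('app', ('var', 'y'), omega))
print(Rw(('app', K, arg)))  # ('lam', 'y', ('var', 'y'))
```

By contrast, `Rw(('app', K, omega))` loops forever: the argument Ω is not under a λ, so by-value evaluation must evaluate it.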
(2) Note that weak rightmost reduction does not terminate on Ω, nor does it terminate on (λxy.y) Ω.

We may also express weak rightmost reduction as a recursive algorithm:

    Rw(x) = x
    Rw(λx.M) = λx.M
    Rw(M N) = { Rw(M′[Rw(N)/y])   if Rw(M) = λy.M′
              { Rw(M) Rw(N)       otherwise

Strong rightmost evaluation

The idea of strong rightmost evaluation is to perform a rightmost evaluation which does end at a normal form when it terminates. This means, in particular, that the evaluation will sometimes have to go inside λ-abstractions, but only when the abstraction is not the function of a β-redex. The reduction strategy is described in Table 3: notice that it is more complex, as one must flip from a weak reduction strategy to a strong reduction when the head term is not a λ-abstraction and it is necessary to force the evaluation of the arguments. This does mean that one needs to know how the head term evaluates before deciding whether to initiate a strong reduction of the argument. This system is largely a curiosity! It illustrates the technique of getting a full reduction by having a mutually recursive reduction strategy. It is perhaps worth mentioning that this reduction system is best implemented by first evaluating the function to determine how the argument needs to be evaluated (either strongly or weakly). It may be expressed as a recursive function by:

    Rs(x) = x
    Rs(λx.M) = λx.Rs(M)
    Rs(M N) = { Rs(M′[Rws(N)/y])   if Rws(M) = λy.M′
              { Rws(M) Rs(N)       otherwise

    Rws(x) = x
    Rws(λx.M) = λx.M
    Rws(M N) = { Rws(M′[Rws(N)/y])   if Rws(M) = λy.M′
               { Rws(M) Rs(N)        otherwise

2.2 By-name evaluation strategies

These are basically outermost evaluation strategies. In an outermost evaluation strategy one always performs β-reduction steps which are closer to the root of the term before those which are further from the root. This is exactly the opposite of an innermost evaluation strategy.
Thus, if two redexes are nested, one must always choose to develop the one closest to the root first. Of course, this does not completely determine the order, so one usually combines this with a leftmost strategy, which requires that one always first evaluates the leftmost of any two redexes which are in parallel, in the sense of being on different branches of the tree: this is called the leftmost outermost evaluation strategy (where one reads "leftmost" as a modifier on the basic outermost strategy).
    (b) M ⇓ws λx.M′    (a) N ⇓ws N′    (c) (λx.M′) N′ →β L    (d) L ⇓s L′
    -------------------------------------------------------------------------  M-evaluate to abstraction
                                 M N ⇓s L′

    (b) M ⇓ws P Q    (a) N ⇓s N′
    -------------------------------  M-evaluate to application
          M N ⇓s (P Q) N′

    (b) M ⇓ws y    (a) N ⇓s N′
    -----------------------------  M-evaluate to variable
          M N ⇓s y N′

    ----------  Variable
     x ⇓s x

        M ⇓s M′
    -----------------  Strong reduction
    λx.M ⇓s λx.M′

    (b) M ⇓ws λx.M′    (a) N ⇓ws N′    (c) (λx.M′) N′ →β L    (d) L ⇓ws L′
    --------------------------------------------------------------------------  M-evaluate to abstraction
                                 M N ⇓ws L′

    (b) M ⇓ws P Q    (a) N ⇓s N′
    -------------------------------  M-evaluate to application
         M N ⇓ws (P Q) N′

    (b) M ⇓ws y    (a) N ⇓s N′
    -----------------------------  M-evaluate to variable
         M N ⇓ws y N′

    -----------  Variable
     x ⇓ws x

    -------------------  Weak reduction
    λx.M ⇓ws λx.M

            Table 3: Strong rightmost evaluation
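Table 3's mutually recursive pair can be rendered as two Python functions. A sketch over our own tuple encoding of λ-terms, with naive substitution (adequate for closed examples):

```python
def subst(t, x, s):
    """t[s/x]; naive, fine when s is closed or no capture can occur."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def Rs(t):
    """Strong rightmost evaluation: aims at a full normal form."""
    if t[0] == 'var':
        return t
    if t[0] == 'lam':
        return ('lam', t[1], Rs(t[2]))              # go under the λ
    m = Rws(t[1])
    if m[0] == 'lam':
        return Rs(subst(m[2], m[1], Rws(t[2])))
    return ('app', m, Rs(t[2]))

def Rws(t):
    """The weak partner: stop at λ-abstractions in function position."""
    if t[0] in ('var', 'lam'):
        return t
    m = Rws(t[1])
    if m[0] == 'lam':
        return Rws(subst(m[2], m[1], Rws(t[2])))
    return ('app', m, Rs(t[2]))

# Strong evaluation reduces under the unapplied λ: λx.(λy.y) x evaluates
# to λx.x, a term which weak rightmost evaluation would leave untouched.
t = ('lam', 'x', ('app', ('lam', 'y', ('var', 'y')), ('var', 'x')))
print(Rs(t))  # ('lam', 'x', ('var', 'x'))
```

Notice how the flip between the two modes mirrors the two halves of Table 3: `Rws` is used wherever the term might still be applied, `Rs` everywhere else.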
    (a) M ⇓w λx.M′    (b) (λx.M′) N →β L    (c) L ⇓w L′
    -------------------------------------------------------  Leftmost (to abstraction)
                         M N ⇓w L′

    (a) M ⇓w P Q    (b) N ⇓w N′
    ------------------------------  Leftmost (to application)
         M N ⇓w (P Q) N′

    (a) M ⇓w y    (b) N ⇓w N′
    ----------------------------  Leftmost (to variable)
         M N ⇓w y N′

    ----------  Variable
     x ⇓w x

    ------------------  Weak reduction
    λx.M ⇓w λx.M

            Table 4: Weak by-name evaluation

2.2.1 Weak by-name evaluation

This is essentially the strategy underlying lazy functional languages, except, of course, that these languages use graph reduction techniques to avoid the duplication of subterms which is caused by a β-reduction. The reduction here is to weak head normal form. Thus, the evaluation never goes inside/under a λ-abstraction. The strategy is described in Table 4 and, if it terminates, will do so on a weak head normal form. The strategy has the merit of being remarkably simple. As mentioned, a defect of by-name evaluation is that it can duplicate unevaluated terms; this duplicates the evaluation work, making the technique very inefficient. Recall that this problem can be overcome by using graph reduction techniques to share the duplicated terms. Below is a simple illustration of the problem:

Example 2.3
(1) Consider the evaluation of (λxy.y) Ω (λz.N):

    (a) λxy.y ⇓w λxy.y                               (λ-abstraction)
    (b) (λxy.y) Ω →β λy.y and λy.y ⇓w λy.y, so (λxy.y) Ω ⇓w λy.y
    (c) (λy.y) (λz.N) →β λz.N and λz.N ⇓w λz.N       (λ-abstraction)

so (λxy.y) Ω (λz.N) ⇓w λz.N. Notice that this evaluates to a weak head normal form: Ω, in contrast to the by-value strategies, is never evaluated and, because N is the body of an abstraction, we never have to reduce it.

(2) Consider the evaluation of square (square 2) where square := λx. x ∗ x. Again, to facilitate this example, we allow ourselves the assumption that when two numbers are multiplied they can be reduced to the answer, e.g. 2 ∗ 2 ⇒ 4, and that to evaluate a multiplication one must first evaluate each of the multiplied expressions. Here, then, is (roughly) the structure of the weak by-name evaluation of this expression:

    (a) square ⇓w λx. x ∗ x    (defn. and λ-abstraction)
    (b) (λx. x ∗ x) (square 2) →β (square 2) ∗ (square 2)
    (c) square 2 ⇒ 4 (computed twice), so (square 2) ∗ (square 2) ⇒ 16

so square (square 2) ⇓w 16. Notice how subexpressions are not evaluated but duplicated, leading to duplicated evaluation work.

We may write this evaluation strategy as a recursive function:

    Nw(x) = x
    Nw(λx.M) = λx.M
    Nw(M N) = { Nw(M′[N/y])    if Nw(M) = λy.M′
              { Nw(M) Nw(N)    otherwise
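The function N_w runs Example 2.3(1) directly; the key difference from the by-value sketches is that the argument is substituted unevaluated. A sketch with naive substitution over our own tuple encoding:

```python
def subst(t, x, s):
    """t[s/x]; naive, fine when s is closed or no capture can occur."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def Nw(t):
    """Weak by-name (leftmost outermost, never under a λ)."""
    if t[0] in ('var', 'lam'):
        return t
    m = Nw(t[1])                            # evaluate the function position
    if m[0] == 'lam':
        return Nw(subst(m[2], m[1], t[2]))  # argument substituted UNevaluated
    return ('app', m, Nw(t[2]))

# Example 2.3(1): (λx.λy.y) Ω (λz.n) evaluates to λz.n, discarding Ω.
delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
omega = ('app', delta, delta)
K = ('lam', 'x', ('lam', 'y', ('var', 'y')))
t = ('app', ('app', K, omega), ('lam', 'z', ('var', 'n')))
print(Nw(t))  # ('lam', 'z', ('var', 'n'))
```

The same term makes the by-value sketch loop forever, since Ω would have to be evaluated before K could be applied.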
2.2.2 Strong by-name evaluation

As before, the purpose of strong by-name reduction is to obtain a normal form rather than a weak head normal form. This means the reduction strategy must sometimes be forced to go inside λ-abstractions: controlling this aspect of the evaluation causes the strategy to be more complex. As above, this leads to needing a second evaluation relation, N ⇓w N′, which recursively uses the strong evaluation N ⇓ N′. The evaluation strategy is described in Table 5, together with its mutually recursive definition. The importance of this evaluation strategy is that, if there is a normal form for N, then this reduction strategy will terminate by finding that normal form. This is proven using the standardization theorem, which says that any reduction sequence in the λ-calculus can be rearranged to be an outermost reduction, and this in turn can be rearranged to be a leftmost outermost reduction.

2.3 Head evaluation

The last evaluation strategy is to head normal form: a λ-term is in head normal form if it is of the form

    λx1 ... xn. y N1 ... Np

where n, p ∈ ℕ and so include the possibility of being 0! To evaluate to head normal form it is necessary to evaluate inside the topmost λ-abstractions. This means, again, that the expression of the reduction strategy is more complex and requires a mutual recursion. This is called a head reduction because, after going inside top-level λ-abstractions, one searches for reductions on the application chains by repeatedly looking down the left branch until a non-application is found: this is the head term of the application chain. If the head term is a λ-abstraction one has a redex. One simply repeats this process until the head is a variable.
    (a) M ⇓w λx.M′    (b) (λx.M′) N →β L    (c) L ⇓ L′
    ------------------------------------------------------  Leftmost (to abstraction)
                        M N ⇓ L′

    (a) M ⇓w P Q    (b) N ⇓ N′
    -----------------------------  Leftmost (to application)
         M N ⇓ (P Q) N′

    (a) M ⇓w y    (b) N ⇓ N′
    ---------------------------  Leftmost (to variable)
         M N ⇓ y N′

    ---------  Variable
     x ⇓ x

        M ⇓ M′
    ----------------  Abstraction
    λx.M ⇓ λx.M′

    (a) M ⇓w λx.M′    (b) (λx.M′) N →β L    (c) L ⇓w L′
    --------------------------------------------------------  Leftmost (to abstraction)
                        M N ⇓w L′

    (a) M ⇓w P Q    (b) N ⇓ N′
    -----------------------------  Leftmost (to application)
         M N ⇓w (P Q) N′

    (a) M ⇓w y    (b) N ⇓ N′
    ---------------------------  Leftmost (to variable)
         M N ⇓w y N′

    ----------  Variable
     x ⇓w x

    ------------------  Weak reduction
    λx.M ⇓w λx.M

    N(x) = x
    N(λx.M) = λx.N(M)
    N(M N) = { N(M′[N/y])    if Nw(M) = λy.M′
             { Nw(M) N(N)    otherwise

    Nw(x) = x
    Nw(λx.M) = λx.M
    Nw(M N) = { Nw(M′[N/y])   if Nw(M) = λy.M′
              { Nw(M) N(N)    otherwise

        Table 5: Strong by-name evaluation (normal order reduction)
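Normal order reduction can be sketched in Python as the pair N/N_w. Note one liberty: in the "otherwise" branch the notes leave the neutral function position only weakly evaluated, while this sketch normalizes it fully so the result is a genuine normal form. Encoding and substitution are as in the earlier sketches (naive, closed examples only):

```python
def subst(t, x, s):
    """t[s/x]; naive, fine when s is closed or no capture can occur."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def Nw(t):
    """Weak by-name: stop at λ-abstractions, pass arguments unevaluated."""
    if t[0] in ('var', 'lam'):
        return t
    m = Nw(t[1])
    if m[0] == 'lam':
        return Nw(subst(m[2], m[1], t[2]))
    return ('app', m, Nw(t[2]))

def N(t):
    """Normal order (strong by-name): finds the normal form if one exists."""
    if t[0] == 'var':
        return t
    if t[0] == 'lam':
        return ('lam', t[1], N(t[2]))        # go under the λ
    m = Nw(t[1])                             # weak-evaluate the head first
    if m[0] == 'lam':
        return N(subst(m[2], m[1], t[2]))    # leftmost outermost β-step
    return ('app', N(m), N(t[2]))            # neutral: normalize the pieces

# succ applied to the Church numeral 1 normalizes to the numeral 2:
one = ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'), ('var', 'x'))))
succ = ('lam', 'n', ('lam', 'f', ('lam', 'x',
        ('app', ('var', 'f'),
                ('app', ('app', ('var', 'n'), ('var', 'f')), ('var', 'x'))))))
two = N(('app', succ, one))
print(two)
```

The normalization goes under both λs of the numeral, which neither of the weak sketches would do.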
    (a) M ⇓w λx.M′    (b) (λx.M′) N →β L    (c) L ⇓ L′
    ------------------------------------------------------  Head evaluate (to abstraction)
                        M N ⇓ L′

    M ⇓w P Q    N ⇓ N′
    ---------------------  Head evaluate (to application)
      M N ⇓ (P Q) N′

       M ⇓w y
    --------------  Head evaluate (to variable)
     M N ⇓ y N

    ---------  Variable
     x ⇓ x

        M ⇓ M′
    ----------------  Abstraction
    λx.M ⇓ λx.M′

    (a) M ⇓w λx.M′    (b) (λx.M′) N →β L    (c) L ⇓w L′
    --------------------------------------------------------  Head evaluate (to abstraction)
                        M N ⇓w L′

    M ⇓w P Q    N ⇓ N′
    ---------------------  Head evaluate (to application)
      M N ⇓w (P Q) N′

       M ⇓w y
    ---------------  Head evaluate (to variable)
     M N ⇓w y N

    ----------  Variable
     x ⇓w x

    ------------------  Weak reduction
    λx.M ⇓w λx.M

    H(x) = x
    H(λx.M) = λx.H(M)
    H(M N) = { H(M′[N/y])    if Hw(M) = λy.M′
             { Hw(M) H(N)    otherwise

    Hw(x) = x
    Hw(λx.M) = λx.M
    Hw(M N) = { Hw(M′[N/y])   if Hw(M) = λy.M′
              { Hw(M) H(N)    otherwise

            Table 6: Head evaluation
3 Abstract machines

Abstract machines reorganize the evaluation into simple machine transitions which one aims to make nearly constant-time steps. One then starts the machine in a state determined by the desired computation and repeatedly performs transitions until a final state is reached, when one publishes the answer! If one arranges each step to correspond to an instruction, then one can compile λ-terms down to a sequence of instructions. Reorganizing the computation into this form allows one to compile λ-terms into code (sequences of instructions) and introduces the opportunity for more efficient and simpler evaluation due to this compilation step. We shall describe two abstract machines below: a modern version of Landin's SECD machine and the Krivine machine.

3.1 The CES machine, or the modern SECD machine

The modern SECD machine, which is the CES machine, is a simplification of Landin's original SECD machine (see Paulson's notes). It implements a weak by-value reduction of the λ-term by compiling the term into CES code which is run to obtain the weak head normal form (if it exists) of the λ-term. There are two improvements over Landin's SECD machine: it does not have a dump, as the stack doubles up as a dump, and it uses a simpler instruction set. The CES machine is a state machine whose state is a triple consisting of:

C: A code pointer, C, which points to the current instruction.

E: An environment, E, which holds the values of variables. These are accessed using de Bruijn indices.

S: A stack, S, holding both intermediate results and continuations (closures).

Here we shall illustrate the CES machine with additional instructions for integers, Booleans, and lists. This means that we must augment the λ-calculus with arithmetic instructions and list instructions. For arithmetic we add constants (meaning integers, k), with instructions for addition, Add, multiplication, Mul, and comparison, Leq.
For Booleans we add constants True and False and the conditional If. For lists we add the constants Nil and Cons and the case instruction. The resulting instructions for the CES machine are:
    Instruction    Explanation
    Clo(c)         Push a closure of code c with the current environment onto the stack
    App            Pop a function closure and an argument, perform the application
    Access(n)      Push the n-th value in the environment onto the stack
    Ret            Return the top value on the stack and jump to the continuation on the stack

Arithmetic instructions:

    Const(k)       Push the constant k onto the stack
    Add            Pop two arguments from the top of the stack and add them
    Mul            Pop two arguments from the top of the stack and multiply them
    Leq            Pop two arguments from the top of the stack and compare them

Boolean instructions:

    True           Push the constant True onto the stack
    False          Push the constant False onto the stack
    If(c0, c1)     Pop an argument from the top of the stack and, depending on whether it is True or False, evaluate the appropriate branch

List instructions:

    Cons           Push Cons applied to the top two elements of the stack onto the stack
    Nil            Push the constant Nil onto the stack
    Case(c1, c2)   If Cons t1 t2 is on the stack, pop it, push t2 and t1 onto the environment, and evaluate c2; if Nil is on the stack, pop it and evaluate c1

To compile a λ-term into CES code one essentially converts the λ-term into de Bruijn notation and then translates the term into CES-machine code. This one can do in two steps; however, below we present it as one big step. Recall that the translation into de Bruijn notation replaces variables by indices:

Example 3.1 Here is the de Bruijn notation for the following λ-terms:

    (λx.x x) (λx.x)                 (λ.(#1 #1)) (λ.#1)
    (λx.λy.(x y)) (λx.x) (λy.y)     (λ.λ.(#2 #1)) (λ.#1) (λ.#1)

The compilation of a λ-term into CES-machine code is as follows:
    Lambda term                               Compilation
    ⟦λx.t⟧v                                   [Clo(⟦t⟧x:v ++ [Ret])]
    ⟦M N⟧v                                    ⟦N⟧v ++ ⟦M⟧v ++ [App]
    ⟦x⟧v                                      [Access(n)]   where n = index of x in v
    ⟦k⟧v                                      [Const(k)]
    ⟦a + b⟧v                                  ⟦b⟧v ++ ⟦a⟧v ++ [Add]
    ⟦a ∗ b⟧v                                  ⟦b⟧v ++ ⟦a⟧v ++ [Mul]
    ⟦a ≤ b⟧v                                  ⟦b⟧v ++ ⟦a⟧v ++ [Leq]
    ⟦True⟧v                                   [True]
    ⟦False⟧v                                  [False]
    ⟦If t {True ⇒ t0 | False ⇒ t1}⟧v          ⟦t⟧v ++ [If(⟦t0⟧v ++ [Ret], ⟦t1⟧v ++ [Ret])]
    ⟦Nil⟧v                                    [Nil]
    ⟦Cons(a, b)⟧v                             ⟦b⟧v ++ ⟦a⟧v ++ [Cons]
    ⟦Case t {Nil ⇒ t0 | Cons x y ⇒ t1}⟧v      ⟦t⟧v ++ [Case(⟦t0⟧v ++ [Ret], ⟦t1⟧x:y:v ++ [Ret])]

This translation has two slightly subtle aspects.

1. The translation is always done in a variable context, v: one starts with the empty variable context. The variable context is a stack of variable names and is used when one wants to translate a variable into its de Bruijn index: the index is just the depth of the variable in the context subscripting the translation. The de Bruijn index indicates how the value of the variable can be accessed in the environment of the CES machine. In particular, when translating a case expression, the Cons branch requires that one add the variables in the pattern to the context, which changes how the de Bruijn indices are calculated.

2. In the translations for closures, cases, and conditionals, one must add a return instruction, Ret, to the end of the code fragments within these constructs, as after executing such a code fragment one must return to executing the code which followed the construct and which is pushed as a continuation onto the stack.

Example 3.2 Examples of compilation:

(1) Let us compile (λx.x + 1) 2:

    ⟦(λx.x + 1) 2⟧[] = [Const(2), Clo(⟦x + 1⟧[x] ++ [Ret]), App]
                     = [Const(2), Clo([Const(1), Access(1), Add, Ret]), App]

(2) Let us compile Ω:

    ⟦(λx.x x) (λx.x x)⟧[] = ⟦λx.x x⟧[] ++ ⟦λx.x x⟧[] ++ [App]
                          = [Clo(⟦x x⟧[x] ++ [Ret]), Clo(⟦x x⟧[x] ++ [Ret]), App]
                          = [Clo([Access(1), Access(1), App, Ret]), Clo([Access(1), Access(1), App, Ret]), App]

The transitions of the machine are:
    Before                                            After
    Code               Env   Stack                    Code   Env            Stack
    Clo(c′) : c        e     s                        c      e              Clos(c′, e) : s
    App : c            e     Clos(c′, e′) : v : s     c′     v : e′         Clos(c, e) : s
    Access(n) : c      e     s                        c      e              e(n) : s
    Ret : c            e     v : Clos(c′, e′) : s     c′     e′             v : s
    Const(k) : c       e     s                        c      e              k : s
    Add : c            e     n : m : s                c      e              (n + m) : s
    Mul : c            e     n : m : s                c      e              (n ∗ m) : s
    Leq : c            e     n : m : s                c      e              (n ≤ m) : s
    True : c           e     s                        c      e              True : s
    False : c          e     s                        c      e              False : s
    If(c0, c1) : c     e     True : s                 c0     e              Clos(c, e) : s
    If(c0, c1) : c     e     False : s                c1     e              Clos(c, e) : s
    Nil : c            e     s                        c      e              Nil : s
    Cons : c           e     v1 : v2 : s              c      e              Cons(v1, v2) : s
    Case(c1, c2) : c   e     Cons(v1, v2) : s         c2     v1 : v2 : e    Clos(c, e) : s
    Case(c1, c2) : c   e     Nil : s                  c1     e              Clos(c, e) : s

where Clos(c, e) denotes the closure of code c with environment e, and e(n) is the n-th element of the environment. The machine is started with a code pointer and the environment and stack empty:

    Code = c    Environment = Nil    Stack = Nil

The final state is reached when the code is empty: the answer should then be sitting on the top of the stack:

    Code = Nil    Environment = Nil    Stack = v : ...

the answer is v! As an example of an evaluation in the modern SECD machine, let us evaluate

    Const(2) : Clo(Const(1) : Access(1) : Add : Ret) : App

This, as was shown above, is the compilation of (λx.x + 1) 2.
    Code                               Env    Stack
    Const(2) : Clo(c) : App            Nil    Nil
    Clo(c) : App                       Nil    2
    App                                Nil    Clos(c, Nil) : 2
    Const(1) : Access(1) : Add : Ret   2      Clos(Nil, Nil)
    Access(1) : Add : Ret              2      1 : Clos(Nil, Nil)
    Add : Ret                          2      2 : 1 : Clos(Nil, Nil)
    Ret                                2      3 : Clos(Nil, Nil)
    Nil                                Nil    3

where c := Const(1) : Access(1) : Add : Ret.

Fixed points

For a by-value machine, such as the CES machine above, using a fixed point combinator will often cause non-terminating behaviour. However, this can be avoided by a judicious choice of fixed point combinator. The idea is to use the fact that the machine only evaluates terms to weak head normal form; thus it is possible to avoid the undesirable behaviour of a fixed point combinator by choosing one which evaluates initially to a closure. Here is an example of one which works (see Larry Paulson's notes, for example):

    λf.(λa.f (λx.a a x)) (λa.f (λx.a a x)).

In fact, this is a fixed point combinator in a slightly unusual way, as to show it is one we must use the η-rule:

    (λf.(λa.f (λx.a a x)) (λa.f (λx.a a x))) F
        ⇒ (λa.F (λx.a a x)) (λa.F (λx.a a x))
        ⇒ F (λx.(λa.F (λx.a a x)) (λa.F (λx.a a x)) x)
        =η F ((λa.F (λx.a a x)) (λa.F (λx.a a x)))
        ⇐ F ((λf.(λa.f (λx.a a x)) (λa.f (λx.a a x))) F)

The extra abstraction produced by this fixed point combinator, which necessitated the use of η-equality in the proof above, stops the CES machine from repeatedly unwrapping the fixed point, which would lead to an undesirable infinite behaviour.

While this works, it is not very efficient! The question of how to make this more efficient has, of course, been considered, as the machine has been the basis of many implementations. The technique which is often used in practical implementations is to replace a recursive definition by a closure which, when the fixed point is called, actually points back to itself: this is called "tying the knot" and gives an efficient implementation. However, it is fairly drastic, as it requires a pointer modification.
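The compilation scheme and the transition table can be combined into a small end-to-end interpreter. A sketch for the integer fragment only; the tuple encodings of terms, instructions, and closures are our own:

```python
# Terms in de Bruijn form: ('var', n) | ('lam', body) | ('app', f, a)
#                        | ('const', k) | ('add', a, b)

def compile_ces(t):
    """Compile a de Bruijn term to CES code (integer fragment)."""
    tag = t[0]
    if tag == 'var':
        return [('Access', t[1])]
    if tag == 'const':
        return [('Const', t[1])]
    if tag == 'lam':                   # body code must end with Ret
        return [('Clo', compile_ces(t[1]) + [('Ret',)])]
    if tag == 'app':                   # argument code, function code, App
        return compile_ces(t[2]) + compile_ces(t[1]) + [('App',)]
    if tag == 'add':
        return compile_ces(t[2]) + compile_ces(t[1]) + [('Add',)]
    raise ValueError(tag)

def run_ces(code):
    """Run CES code from an empty environment and stack; return the answer."""
    c, e, s = code, [], []
    while c:
        ins, c = c[0], c[1:]
        op = ins[0]
        if op == 'Const':
            s = [ins[1]] + s
        elif op == 'Clo':                        # push closure of code + env
            s = [('clos', ins[1], e)] + s
        elif op == 'Access':                     # n-th environment entry
            s = [e[ins[1] - 1]] + s
        elif op == 'Add':
            s = [s[1] + s[0]] + s[2:]
        elif op == 'App':                        # enter closure, save continuation
            (_, c2, e2), v, s = s[0], s[1], s[2:]
            s = [('clos', c, e)] + s
            c, e = c2, [v] + e2
        elif op == 'Ret':                        # restore the saved continuation
            v, (_, c2, e2), s = s[0], s[1], s[2:]
            c, e, s = c2, e2, [v] + s
    return s[0]

# (λx. x + 1) 2, i.e. (λ. #1 + 1) 2 in de Bruijn form:
term = ('app', ('lam', ('add', ('var', 1), ('const', 1))), ('const', 2))
code = compile_ces(term)
print(code)           # matches the compilation in Example 3.2(1)
print(run_ces(code))  # 3
```

Running it reproduces the trace above step for step, ending with 3 on the stack.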
A less drastic approach is to allow the machine to call subroutines, in which case recursion can be implemented as a direct recursive call. In effect one is still tying the knot with such an approach, but by allowing pointers to code. To implement this one has to add to the machine a program store, Prog, and a command Call, which in the machine has the transition:
    Before                       After
    Code          Env   Stack    Code       Env   Stack
    Call(f) : c   e     s        Prog(f)    e     Clos(c, e) : s

The called code must always end with a Ret to ensure that the code in the closure after the call is correctly resumed.

The question remains, however, whether one can implement fixed points directly in the machine. It turns out that this can also be done (2). This does not require the use of pointers or adding a program store: one adds instead a native fixed point combinator, Fix(l). One then has to have a compilation to CES-machine code for this native fixed point and some additional transitions in the CES machine. The compilation step is as follows:

    ⟦Fix(λf x.N)⟧v = [Fix(⟦N⟧x:f:v ++ [Ret])]
    ⟦Fix(λf.N)⟧v   = [Fixc(⟦N⟧f:v ++ [Ret])]

The two translation phrases are needed to handle the case when the function being fixed has one or more arguments and the (more unusual but possible) case when the function has no arguments. The code has to accommodate these two cases slightly differently. Here the translation of N assumes that the two arguments f and x are bound, so the de Bruijn indices for f and x in the code must take this into account. Notice also that the code inside the fixed point combinator must be applied to two arguments: the first is the fixed point itself and the second is the argument of the function. Here is how the machine must be modified:

    Before                                        After
    Code           Env   Stack                    Code   Env                        Stack
    Fix(c′) : c    e     s                        c      e                          FixClos(c′, e) : s
    App : c        e     FixClos(c′, e′) : v : s  c′     v : FixClos(c′, e′) : e′   Clos(c, e) : s
    Fixc(c′) : c   e     s                        c      e                          FixcClos(c′, e) : s
    App : c        e     FixcClos(c′, e′) : s     c′     FixcClos(c′, e′) : e′      Clos(c, e) : s

The application for a fixed point not only applies the code but also inserts the fixed point below the argument of the application (if there is one) in the environment, so that a recursive call can be serviced correctly.

Here are some questions:

1. What happens when one evaluates Ω?

2. Given a Church numeral, can you translate it into an integer?

3.
Can you sum the elements of a list of integers?

4. Given the λ-representation of a list of numbers, can you translate it into a built-in list?

5. Can you program the factorial of a number recursively?

(2) Thanks to Nathan Harms for working this out!
3.2 The Krivine machine

The Krivine machine is remarkable for its simplicity. An aspect which is a bit subtle is interpreting its output: for this one must reverse compile the state of the machine into a λ-term. It performs a weak by-name reduction of pure λ-terms, producing a weak head normal form. As for the CES machine, it is based on a state consisting of a triple of code, environment, and stack. The environment and the stack, however, now both contain closures, which makes the machine more uniform. As for the CES machine, the Krivine machine uses de Bruijn notation to facilitate compilation to code. The Krivine machine has just three instructions:

    Instruction   Explanation
    Push(c)       Push a closure of the code c with the current environment onto the stack
    Access(n)     Continue with the n-th closure in the environment
    Grab          Move the top value of the stack onto the environment

The compilation from de Bruijn terms is also remarkably simple; unlike for the CES machine translation, no appending of code is required:

    Lambda term   Compilation
    λ.M           Grab : ⟦M⟧
    M N           Push(⟦N⟧) : ⟦M⟧
    #(n)          [Access(n)]

Finally, the machine transitions are:

    Before                                               After
    Code                Env               Stack          Code            Env               Stack
    Access(1) : c       Cls(c′, e′) : e   s              c′              e′                s
    Access(n + 1) : c   cl : e            s              Access(n) : c   e                 s
    Grab : c            e                 Cls(c′, e′) : s    c           Cls(c′, e′) : e   s
    Push(c′) : c        e                 s              c               e                 Cls(c′, e) : s

The start state for the Krivine machine has an empty environment and stack, with the code generated from a (closed) λ-term. The final state is reached when no more transitions can be made. The result is a (suspended) machine state which can be reverse compiled into a λ-term. There are three possible ways in which the machine can get stuck: while doing an Access, while doing a Grab, or by running out of code. If one starts with a closed term, for an Access to fail a Grab must first have failed; thus, the machine must get stuck at a Grab or when the code is empty. However, it is not hard to see that the code never becomes empty.
Thus, if the machine does terminate, it terminates with some code paired with an environment. The environment is a list of closures, each consisting of a code and environment pair; thus, to reverse compile, one first recursively reverse compiles the closures of the environment into λ-terms. To reverse compile the top code/environment pair, one then uses these λ-terms to substitute for the Access commands which are not bound in the code being reverse compiled: a de Bruijn index beyond the binding depth of the term itself indicates the depth in the environment of the term with which it should be substituted.

Here is the description of reverse compiling: one starts with the final machine state (c, e, []), for which the reverse compilation is ⟨c, e⟩₀rev, as defined below (writing e(j) for the j-th closure in the environment e):

    ⟨Push(c′) : c, e⟩ᵢrev  =  ⟨c, e⟩ᵢrev ⟨c′, e⟩ᵢrev
    ⟨Grab : c, e⟩ᵢrev      =  λ.⟨c, e⟩ᵢ₊₁rev
    ⟨Access(n), e⟩ᵢrev     =  #(n)               (n ≤ i)
    ⟨Access(n), e⟩ᵢrev     =  ⟨e(n − i)⟩₀rev      (n > i)
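The reverse-compilation rules above can be sketched in Python as well (again the encoding is my own: terms and instructions are tuples, closures are (code, environment) pairs, and the argument i tracks the binding depth):

```python
# Code is a list of instruction tuples: ("Push", code), ("Grab",), ("Access", n).
# A closure is a (code, env) pair; terms come back in de Bruijn notation.

def rev(code, env, i=0):
    """Reverse compile a (code, env) pair at binding depth i."""
    instr, rest = code[0], code[1:]
    if instr[0] == "Push":                 # Push(c') : c  is the application c c'
        return ("app", rev(rest, env, i), rev(instr[1], env, i))
    if instr[0] == "Grab":                 # Grab : c  is λ.c, one binder deeper
        return ("lam", rev(rest, env, i + 1))
    if instr[0] == "Access":
        n = instr[1]
        if n <= i:                         # bound within the term itself
            return ("var", n)
        c2, e2 = env[n - i - 1]            # free: substitute the (n - i)-th closure
        return rev(c2, e2, 0)
    raise ValueError(instr)

# The stuck state ([Grab, Access(1)], [], []) decompiles to λ.#(1):
rev([("Grab",), ("Access", 1)], [])
# → ("lam", ("var", 1))
```

Note how the n > i case restarts the depth counter at 0, since the closure's code was compiled from a closed subterm relative to its own environment.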