Handout 10: Imperative programs and the Lambda Calculus

06-02552 Principles of Programming Languages (and Extended), The University of Birmingham, Spring Semester 2016-17, School of Computer Science. (c) Uday Reddy 2016-17.

Algol-like languages as typed lambda calculi

The Algol 60 programming language was defined by an international committee of computer scientists, some of whom were familiar with the lambda calculus. Perhaps as a result, the procedure mechanism of Algol 60 was defined along the same lines as that of the lambda calculus. Peter Landin [1] showed the correspondence between the two procedure mechanisms a few years later. However, the lambda calculus, by itself, expresses only functional programs, not imperative programs. John Reynolds [2] proposed that Algol-like languages should be viewed as typed lambda calculi, which share the procedure mechanism of the lambda calculus, but have base types that represent imperative computations (which are not by themselves part of the lambda calculus). His formal language is called Idealized Algol. It represents a very satisfying view of Algol-like programming languages, and we examine it in this handout.

[1] Landin, Peter. "A correspondence between ALGOL 60 and Church's Lambda-notation: Part II", Communications of the ACM, March 1965.
[2] Reynolds, John. "The essence of Algol", in Algorithmic Languages, North-Holland, 1981.

1  Typed lambda calculus

We define a typed lambda calculus by first stating its types. There will be some collection of basic types, which may be chosen depending on the application one wishes to model.

  - Each basic type is a type.
  - Whenever T1 and T2 are types, T1 → T2 is a type. (It evidently represents the collection of functions from values of type T1 to those of type T2.)

One might also add other type constructors to the calculus without upsetting its fundamental structure. For example:

  - Whenever T1, ..., Tn are types, (T1, ..., Tn) is a type. (It evidently represents the collection of tuples whose components are of types T1, ..., Tn respectively.)

The terms of a typed lambda calculus have the same surface syntax as those of the (untyped) lambda calculus:

    M ::= x | c | λx. M | M1 M2

However, the terms are expected to obey the type rules. We omit the details in this handout. A functional programming language can be obtained by picking the basic types to be data types such as integer, boolean, character, etc.

2  Lambda calculus for imperative programs

To obtain a typed imperative programming language, we pick the basic types to be those representing imperative programming concepts (cf. Handout 9). These are:

  - Mutable variables, also called references, representing storage locations.
  - Expressions that read the state of variables and return a value.
  - Commands that alter the state of variables.

Variables, expressions and commands are not treated as types in typical imperative programming languages. Rather, they are designated as separate syntactic categories. However, to obtain the full power of the typed lambda calculus, it is useful to regard them as types.

The basic data types such as int, bool and char are not regarded as types of the lambda calculus. This is because there are no terms in imperative programming languages that directly denote data values (except constants). Rather, terms denote either mutable variables or expressions, each of which might deal with values of particular data types. Let δ stand for data types such as int, bool, ... Then the basic types of the lambda calculus for imperative programs are:

  - var[δ], also written as ref[δ], for variables that store δ-typed data values;
  - exp[δ], for expressions that return δ-typed data values;
  - comm, for commands.

In summary, the types of our lambda calculus for imperative programs are as follows:

    T ::= var[δ] | exp[δ] | comm | T1 → T2
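
If it helps to see this grammar machine-checked, here is a minimal Haskell sketch of it (an encoding added purely for illustration, not part of Idealized Algol; the names DataType, Ty and so on are invented):

    -- The data types δ over which var[δ] and exp[δ] are indexed.
    data DataType = DInt | DBool | DChar
      deriving (Show, Eq)

    -- The phrase types T ::= var[δ] | exp[δ] | comm | T1 → T2.
    data Ty
      = Var DataType      -- var[δ]: mutable variables holding δ-typed values
      | Exp DataType      -- exp[δ]: state-reading expressions returning δ-typed values
      | Comm              -- comm: state-changing commands
      | Arrow Ty Ty       -- T1 → T2: procedure (function) types
      deriving (Show, Eq)

    -- For instance, the type var[int] → var[int] → comm (the type of the swap
    -- procedure in Section 5) is written:
    swapType :: Ty
    swapType = Arrow (Var DInt) (Arrow (Var DInt) Comm)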

3  Terminology: variables and references

Note that the term "variable" in imperative programming refers to storage locations whose values can be modified. In contrast, the lambda calculus, as well as standard mathematics, uses the term "variable" for a completely different concept, viz., symbols used to stand for values. To avoid conflict between the two uses, Algol 68 introduced the term "reference" for mutable variables in the sense of imperative programming. The terminology did not catch on within the imperative language culture (except in isolated usages like "call by reference"). However, it became standard in the functional programming culture. So, we use both the terms "variable" and "reference" for referring to this concept, and we use the term "identifier" for the variables of the lambda calculus. For example, in the term λx. x + y of type exp[int] → exp[int], the symbol x is a bound identifier and y is a free identifier.

4  Constants for imperative programs

All the primitive operations of imperative programs are modelled as constants in our typed lambda calculus. We group them into four classes, for ease of exposition.

Primitive operations for expressions. All the constants and primitive operations needed for data values are expressed as constants that act on exp types. Some examples are:

    0, 1, 2, ...  :: exp[int]
    true, false   :: exp[bool]
    +, -, ...     :: exp[int] → exp[int] → exp[int]
    =, <, ...     :: exp[int] → exp[int] → exp[bool]
    &&, ...       :: exp[bool] → exp[bool] → exp[bool]
    not           :: exp[bool] → exp[bool]

The only thing surprising about these types is that they involve the exp type constructor. We need the exp type constructor because, in general, the arguments for operations such as + are expressions which can read the state of variables. The result of such an application, e.g., x + y, is again an expression that is state-dependent.

Primitive operations that deal with commands. These are as follows:

    skip :: comm
    seq  :: comm → comm → comm
    if   :: exp[bool] → comm → comm → comm

We will use syntactic sugar that makes these operations more convenient to write:

    C1; C2                ≡  seq C1 C2
    if B then C1 else C2  ≡  if B C1 C2

Primitive operations that deal with variables. These are as follows:

    read  :: var[δ] → exp[δ]
    write :: var[δ] → exp[δ] → comm

The types we assign to these constants are quite important. You might find them surprising at first sight. However, they are entirely consistent with the normal interpretation of these operations:

  - If V is a variable, then read V is an expression whose effect is to read the value of V and return it.
  - If V is a variable and E an expression, then write V E is a command whose effect is to evaluate E and assign its value to V.

We will use the syntactic sugar:

    V := E  ≡  write V E

To obtain the conventional imperative programming notation, we also treat read as an implicit coercion: whenever a variable V is used as an expression, we understand it to mean (read V). Here are some examples of syntactically sugared terms obtained from these conventions:

    x := x + 1                        ≡  write x (+ (read x) 1)
    if x > y then m := x else m := y  ≡  if (> (read x) (read y)) (write m (read x)) (write m (read y))

Finally, we have an operation for local variable declarations:

    local[δ] :: (var[δ] → comm) → comm

This constant is used to desugar local variable declarations as follows:

    {int x; C}  ≡  local[int] (λx. C)
    {δ x; C}    ≡  local[δ] (λx. C)

The effect of local[δ] B is to create a new local variable for δ-typed values, say V, and then execute B(V). After B(V) finishes, the local variable is deallocated. Note that this is exactly what we expect from a local variable declaration of the form {δ x; C}.

In summary, all the behaviour of imperative programs can be modelled using a few primitive functions in terms of the basic types var[δ], exp[δ] and comm.
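
As a further worked example (added here for illustration), the complete block {int x; x := 0; x := x + 1} desugars step by step into the constants above:

    {int x; x := 0; x := x + 1}
      ≡  local[int] (λx. (x := 0; x := x + 1))
      ≡  local[int] (λx. seq (write x 0) (write x (+ (read x) 1)))

The body λx. seq (write x 0) (write x (+ (read x) 1)) has type var[int] → comm, so applying local[int] to it yields a term of type comm, as we expect of a complete block.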

5  Procedures

The procedures of Algol-like languages are mapped directly into the functions of the lambda calculus. For example, the Algol 60 procedure declaration:

    procedure swap(int x, int y) { int t; t := x; x := y; y := t }

is thought of as the definition of a function swap:

    let swap = λx. λy. {int t; t := read x; x := read y; y := read t}

What is the type of this function? Even though the Algol 60 declaration seems to suggest that the parameters of swap are integers, in reality they are variables that hold integers, i.e., entities of type var[int] in our terminology. We can apply swap to two integer variables, say i and j. The result of the application (swap i j) is then a command that can be executed as part of a larger command:

    swap i j;

Therefore, the type of swap is:

    swap : var[int] → var[int] → comm

6  Semantics of procedure call

Prior to Algol 60, the meaning of a procedure such as swap was understood operationally, in terms of the machine instructions that would be executed. That story might run as follows:

  1. Push references to the variables i and j on the system stack.
  2. Push the program counter on the stack, and jump to the code of swap.
  3. When the code of swap finishes, pop the arguments i and j as well as the saved program counter from the system stack, and jump back to the saved program counter position.

The definition of Algol 60 put paid to such operational descriptions. The semantics of a procedure call, as given in the Algol 60 Report, is to simply copy the body of the procedure to where the procedure call appears, and replace the formal parameters by the arguments, like so:

    swap i j;  =  {int t; t := read i; i := read j; j := read t}

This semantics came to be known as the Algol copy rule. We might also call it procedure unfolding. Note that the copy rule is precisely the β-reduction rule of the lambda calculus.
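
To make the connection with β-reduction explicit, the unfolding above can be written as two reduction steps (a worked example added here):

    swap i j
      =   (λx. λy. {int t; t := read x; x := read y; y := read t}) i j
      →β  (λy. {int t; t := read i; i := read y; y := read t}) j
      →β  {int t; t := read i; i := read j; j := read t}

Each step substitutes an argument term for the corresponding formal parameter, which is exactly what the copy rule prescribes.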

7  Orthogonality of procedures and commands

John Reynolds noted that the procedure call mechanism and command execution in Algol 60 are orthogonal, i.e., they are independent mechanisms that are both used in the program interpretation, but there is no interference between them. More concretely, it means something like this. An Algol 60 program is a term C of type comm. We can repeatedly unfold all its procedure calls:

    C = C1 = C2 = ... = Cn = ...

This process might go on for ever. However, if C has a terminating execution, there will be a finite term in the series, say Cn, which can be executed as if it were a simple imperative program. Any procedure calls embedded in it will not need to be unfolded, because they will be in conditional branches that are skipped. We do not know, in advance, the finite unfolding Cn needed for the execution. So, in practice, we interleave the unfolding process and the command execution process, and only unfold those procedure calls that are needed for command execution. However, orthogonality means that we could in principle unfold all the needed procedure calls in advance, before the command execution begins.

8  Call by name

The mode of parameter passing in Algol 60 is termed call by name, the same as in the lambda calculus. That means that the terms denoting the arguments are substituted for the formal parameters in the body of the procedure. When used in conjunction with imperative programs, call-by-name parameter passing gives rise to a surprising range of interference effects.

9  Interference between parameters

Consider using the swap procedure with arrays. Let a be an array of integers, whose component variables are written as a[0], a[1], ..., and consider a procedure call swap i a[i]. Recall that a[i] should really be thought of as a[read i], because i is a variable used in place of an expression and therefore involves the implicit coercion read. As per the Algol 60 copy rule, the procedure call swap i (a[read i]) unfolds to the following command:

    {int t; t := read i; i := read a[read i]; a[read i] := read t}

Consider executing this command from an initial state where i = 0, a[0] = 1, a[1] = 2, ... We might expect that the effect of the procedure call swap i (a[read i]) should be to swap the variables i and a[read i], i.e., i should become 1, a[0] should become 0, and all other elements of the array should remain unchanged. However, what happens is quite different. The first assignment sets t to 0. The second assignment sets i to 1. Since i is now 1, the third assignment has the effect of a[1] := t. So, a[1] changes to 0, and a[0] remains unchanged!
The problem here is that the two parameters i and a[read i] interfere, i.e., changing one of them affects the meaning of the other term. On the other hand, when we define the procedure swap x y, we tend to assume that x and y are independent, i.e., changing one of them does not affect the meaning of the other. Thus, there is a mismatch of expectations.
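
If you would like to experiment with this behaviour, here is a minimal Haskell sketch (added here, not part of the handout) that simulates the call-by-name parameters by representing an Algol variable as a pair of a read action and a write action. It uses the IORef operations introduced in Section 12 below, and the do blocks are Haskell's standard syntactic sugar for chains of >>=. Running it reproduces the outcome described above: i becomes 1, a[1] becomes 0, and a[0] is left unchanged.

    import Data.IORef

    -- A call-by-name "variable": an action that reads its current value and an
    -- action that overwrites it.  For the argument a[read i], the index is
    -- recomputed on every use, exactly as the copy rule dictates.
    data Var a = Var { readVar :: IO a, writeVar :: a -> IO () }

    fromRef :: IORef a -> Var a
    fromRef r = Var (readIORef r) (writeIORef r)

    -- The Algol swap, transcribed literally: t := x; x := y; y := t.
    swap :: Var Int -> Var Int -> IO ()
    swap x y = do
      t <- newIORef 0               -- the local variable t
      readVar x >>= writeIORef t    -- t := read x
      readVar y >>= writeVar x      -- x := read y
      readIORef t >>= writeVar y    -- y := read t

    main :: IO ()
    main = do
      i <- newIORef 0
      a <- mapM newIORef [1, 2, 3]                 -- a[0] = 1, a[1] = 2, a[2] = 3
      let ai = Var (readIORef i >>= \j -> readIORef (a !! j))
                   (\v -> readIORef i >>= \j -> writeIORef (a !! j) v)
      swap (fromRef i) ai                          -- swap i (a[read i])
      readIORef i >>= print                        -- prints 1
      mapM readIORef a >>= print                   -- prints [1,0,3]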

Imperative programming in Haskell

Haskell is a functional programming language, but it also has imperative features. The imperative features are a bit cumbersome compared to Algol. But since most of the programming in Haskell is done in the functional style, this is not a problem in practice.

10  IO type constructor

In Haskell, the two separate type constructors exp and comm of Algol are combined into a single type constructor IO. Computations of type IO T can do both reading and writing of the state variables. (In contrast, exp[δ]-typed computations can only read the state and comm-typed computations can modify the state.) Secondly, (IO T)-typed computations can return results of any type T. They are not limited to a special class of data types as in Algol.

The name IO is short for "input-output". When Haskell was designed, only input-output actions were considered. Later, general state-change actions were also incorporated into the same type.

11  Constants for IO

The primitive operations for IO-typed computations are as follows:

    return :: t → IO t
    (>>=)  :: IO t → (t → IO u) → IO u

The return operation simply returns its argument, without doing any state changes. (It is similar in spirit to the skip command.) The >>= operation (normally read as "bind") sequences two state-change actions. An expression of the form C >>= F represents a compound action which carries out the two actions represented by C and F. However, note that actions return values. The value returned by C would be some value x of type t. This value is passed as an argument to F, which is of type t → IO u. The value returned by F(x) is of type u. This is the result returned by the whole action C >>= F. Since the result is of type u, the whole action is of type IO u.

Here is an example to illustrate these operations, using Haskell's standard putStrLn and getLine actions:

    dialogue = putStrLn "Type in a string" >>= \() -> getLine >>= \str -> putStrLn ("you typed " ++ str)

This action first outputs the string "Type in a string". This is an action of type IO (). Next it inputs a string, an action of type IO String. Finally it writes back the string that has been typed. We normally type the above code with line breaks, as follows:

    dialogue =
      putStrLn "Type in a string" >>= \() ->
      getLine                     >>= \str ->
      putStrLn ("you typed " ++ str)

This seems a bit strange because we are splitting the formal parameter of a lambda abstraction and the body of the lambda abstraction into separate lines. However, this style is in fact easier to read:

  1. Output "Type in a string". Let the result be ().
  2. Read a string. Let the result be str.
  3. Output "you typed " followed by str.

So the lambda abstraction allows the English-language locution "Let the result be x". Because the scope of a lambda abstraction extends as far to the right (and down) as possible, the variable x mentioned in the locution can be used in the rest of the body of the definition.
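
To see return in action as well, here is a small example added here (not from the original handout): an action that reads two lines and returns their concatenation without printing anything, leaving it to the caller to decide what to do with the result.

    twoLines :: IO String
    twoLines =
      getLine >>= \first ->
      getLine >>= \second ->
      return (first ++ " " ++ second)

    -- For example, print the combined string:
    main :: IO ()
    main = twoLines >>= putStrLn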

12  References

For every type T, Haskell allows us to create references (i.e., mutable variables) that can hold values of type T. Their type is denoted IORef T. The constants (primitive operations) dealing with references are as follows:

    readIORef  :: IORef t → IO t
    writeIORef :: IORef t → t → IO ()
    newIORef   :: t → IO (IORef t)

The operation readIORef is the counterpart of read in Idealized Algol. Since the exp type constructor is subsumed by IO, we have lost the information that the read operation is a pure state reader; here, the type says only that it is some state action. The operation writeIORef is the counterpart of write in Idealized Algol; the type IO () corresponds to comm. The operation newIORef is similar to local in Idealized Algol. It takes an initial value of type t and produces an IO action which creates a reference with that initial value. Why can't the result type just be IORef t? The answer is that there isn't a single unique reference for a type t. Every time we execute newIORef, we expect to get a new reference. Hence this is an action, not just a reference.
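
As a closing illustration (added here, not in the handout), the Idealized Algol block {int x; x := 0; x := x + 1} from Section 4 can be transcribed into Haskell using these operations, with newIORef playing the role of local[int]:

    import Data.IORef

    block :: IO Int
    block =
      newIORef 0           >>= \x ->    -- {int x; x := 0;  (declaration with initial value)
      readIORef x          >>= \v ->
      writeIORef x (v + 1) >>= \() ->   --   x := x + 1;
      readIORef x                       --   read back the final value of x }

    main :: IO ()
    main = block >>= print              -- prints 1

One difference from local[int] is worth noting: nothing forces the new reference to stay local, since x could also be returned as part of the result, whereas a local[δ]-bound variable disappears when its block finishes.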