Semantics and Concurrency: Module on Semantics of Programming Languages. Pietro Di Gianantonio, Università di Udine

Presentation. A module of 6 credits (48 hours) on the semantics of programming languages: describing the behavior of programs in a formal and rigorous way. We present three (mainly two) different approaches, or methods, to semantics, applied to a series of increasingly complex programming languages. Some topics overlap with the Formal Methods course. Prerequisite for: the module on Concurrency (Lenisa), Abstract Interpretation (Comini), and any formal and rigorous treatment of programs.

Semantics of programming languages. Objective: to formally describe the behavior of programs and of programming constructs, as opposed to the informal definitions commonly used. Useful for: avoiding ambiguity in defining a programming language; highlighting the critical points of a programming language (e.g. static vs. dynamic environment, evaluation and parameter passing); building compilers; reasoning about single programs (defining logics and reasoning principles for programs): proving that a given program satisfies a given property or specification, that is, that it is correct.

Semantic approaches. Operational Semantics: describes how programs are executed, not on a real computer (too complex) but on a simple formal machine (in the style of Turing machines). Structural Operational Semantics (SOS): the formal machine consists of a rewriting system, a system of rules on the syntax.

Semantic approaches. Denotational Semantics: the meaning of a program is described by a mathematical object, e.g. a partial function, an element in an ordered set. An element of an ordered set represents the meaning of a program, of a part of a program, or of a program construct. Alternatives: action semantics, game semantics, categorical semantics. Axiomatic Semantics: the meaning of a program is expressed in terms of pre-conditions and post-conditions. There are several semantics because none is completely satisfactory: each describes one aspect of program behavior and has a different goal.

SOS pros Simple, syntactic, intuitive. Quite flexible. It can easily handle complex programming languages. The structure of rules remains constant in different languages.

SOS cons. The semantics depends on the syntax: it is formulated using the syntax. It is difficult to correlate programs written in different languages. The semantics is not compositional (compositional: the semantics of an element depends only on the semantics of its components). It induces a notion of equivalence between programs that is difficult to verify (and to use).

Denotational Semantics. Goals: a syntax-independent semantics: one can compare programs written in different languages; a compositional semantics (the semantics of an element depends only on the semantics of its components); more abstract: it provides tools for reasoning about programs. Main feature: describing the behavior of a program through a mathematical object.

Different features of programming languages: non-termination; store (memory, imperative languages); environment; nondeterminism; concurrency; higher-order functions (functional languages); exceptions; ... The complexity of denotational semantics grows rapidly with the features of the programming language.

Axiomatic semantics. An indirect description of a program, through a set of assertions: {Pre} p {Post}. It immediately gives a logic to reason about programs. It is complementary to, and justified by, the other semantics.

Relations among semantics. Different descriptions of programming languages: one can prove the coherence among the descriptions; one semantic description can be justified by the others.

Textbook. Glynn Winskel: The Formal Semantics of Programming Languages. An Introduction. A classic: a simple, complete presentation, with few frills and few general considerations. Three copies in the library. We present a good part of the book, skipping: most of the proofs of the theorems; the general introduction of the formalism and of the methodology, which is just presented on working examples. Some additional topics.

Exam. A set of exercises to solve at home; an oral exam, by appointment.

Prerequisites. Notions from mathematical logic: predicate calculus; set constructors: product, disjoint sum, function space, power set; context-free grammars; inductive definitions and induction principles; model theory: language and model (syntax and semantics).

A simple imperative language: IMP. Syntactic categories: integer numbers (N): n; Boolean values (T): b; locations (Loc): X. Arithmetic expressions (AExp): a ::= n | X | a0 + a1 | a0 − a1 | a0 × a1. Boolean expressions (BExp): b ::= true | false | a0 = a1 | a0 ≤ a1 | not b | b0 or b1 | b0 and b1. Commands (Com): c ::= skip | X := a | c0 ; c1 | if b then c0 else c1 | while b do c
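The abstract syntax above can be transcribed directly as Haskell data types (a sketch; the constructor names are ours):

```haskell
-- Abstract syntax of IMP; one constructor per production.
type Loc = String

data AExp = Num Integer          -- n
          | Var Loc              -- X
          | Add AExp AExp        -- a0 + a1
          | Sub AExp AExp        -- a0 - a1
          | Mul AExp AExp        -- a0 * a1
          deriving (Eq, Show)

data BExp = BTrue | BFalse
          | Eq  AExp AExp        -- a0 = a1
          | Leq AExp AExp        -- a0 <= a1
          | Not BExp
          | Or  BExp BExp
          | And BExp BExp
          deriving (Eq, Show)

data Com  = Skip
          | Assign Loc AExp      -- X := a
          | Seq Com Com          -- c0 ; c1
          | If BExp Com Com
          | While BExp Com
          deriving (Eq, Show)
```

Ambiguity disappears: a term of these types is already a parse tree, matching the remark below that abstract syntax is simpler than concrete syntax.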

IMP. Abstract syntax: ambiguous but simpler than concrete syntax. Incomplete syntax: the syntax for numerical constants and locations is left unspecified: not interesting, low-level details, orthogonal to our discussion. A minimal language capable of computing all computable functions (Turing-complete), provided the locations can store arbitrarily large numbers. Missing: variables (environment), procedures, function definitions, recursive definitions.

Operational semantics for IMP. A set of rules describes the behavior of arithmetic expressions, Boolean expressions, and commands. One associates to arithmetic expressions assertions (judgments) of the form ⟨a, σ⟩ → n, where σ : Loc → N is the state (memory, store). Judgments are derived by rules in natural deduction style, driven by the syntax (structural operational semantics). To the basic expressions one associates axioms: ⟨n, σ⟩ → n and ⟨X, σ⟩ → σ(X)

IMP: SOS rules. To composite expressions one associates derivation rules:

⟨a0, σ⟩ → n0    ⟨a1, σ⟩ → n1
-----------------------------  (n = n0 + n1)
⟨a0 + a1, σ⟩ → n

...
Evaluating an arithmetic expression is trivial, so the rules are trivial. Each expression has one or more associated rules, determined by its main connective. The rules induce a deterministic algorithm.

SOS rules. The rules assume, via side conditions, a preexisting mechanism for computing the arithmetic operations: one abstracts from the problem of implementing them. It would be possible, given a chosen number representation, to define the algorithm for addition via SOS rules; we avoid dealing with this low-level implementation. Similarly, the rules do not define a syntax for stores and for the operations on stores: one abstracts from the problem of implementing store operations. Exercise: evaluate 2 × 3 + y in a store σ with σ(x) = 4, σ(y) = 1
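Read as a Haskell program, the rules for arithmetic expressions become a one-line-per-rule evaluator (a sketch; the Store type and all names are ours):

```haskell
-- A direct reading of the big-step rules for arithmetic expressions.
type Loc   = String
type Store = Loc -> Integer      -- sigma : Loc -> N

data AExp = Num Integer | Var Loc | Add AExp AExp | Mul AExp AExp

-- evalA a sigma computes the n such that <a, sigma> -> n
evalA :: AExp -> Store -> Integer
evalA (Num n)     _   = n                            -- axiom <n, sigma> -> n
evalA (Var x)     sig = sig x                        -- axiom <X, sigma> -> sigma(X)
evalA (Add a0 a1) sig = evalA a0 sig + evalA a1 sig  -- rule for +
evalA (Mul a0 a1) sig = evalA a0 sig * evalA a1 sig  -- rule for ×
```

With sig "y" = 1, evaluating 2 × 3 + y gives 7, matching a derivation built from the rules.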

Boolean expressions: SOS rules. Axioms ... and rules:

⟨a0, σ⟩ → n    ⟨a1, σ⟩ → n
-----------------------------
⟨a0 = a1, σ⟩ → true

⟨a0, σ⟩ → n    ⟨a1, σ⟩ → m
-----------------------------  (n ≠ m)
⟨a0 = a1, σ⟩ → false

⟨b0, σ⟩ → t0    ⟨b1, σ⟩ → t1
-----------------------------
⟨b0 and b1, σ⟩ → t

where t = true if t0 = true and t1 = true; otherwise t = false.

Boolean operators. Boolean operators can be evaluated in different ways, e.g. short-circuit evaluation. This alternative evaluation can also be expressed by SOS rules: four alternative rules for the connective and.

Exercises. Through the rules one can explicitly define the arithmetic operations, without using side conditions. Simplification: represent just the natural numbers (not the integers), in unary notation: n ::= 0 | S n. Give the rules for addition, product, and comparison. The same problem in binary notation: n ::= 0 | n:0 | n:1, where n:0 denotes 2n and n:1 denotes 2n + 1.

Commands. Executing a command has the effect of modifying the memory (store). Judgments have the form ⟨c, σ⟩ → σ′. To represent the updated store one uses the notation σ[m/X]: σ[m/X](X) = m, and σ[m/X](Y) = σ(Y) if X ≠ Y. In a completely operational approach the state should itself be a syntactic object: a grammar defining states (ground states, update operations) and a set of rules describing their behavior.

Commands, rules.

⟨skip, σ⟩ → σ

⟨a, σ⟩ → n
-----------------------------
⟨X := a, σ⟩ → σ[n/X]

⟨c0, σ⟩ → σ″    ⟨c1, σ″⟩ → σ′
-----------------------------
⟨c0 ; c1, σ⟩ → σ′

⟨b, σ⟩ → true    ⟨c0, σ⟩ → σ′
-----------------------------
⟨if b then c0 else c1, σ⟩ → σ′

...

⟨b, σ⟩ → true    ⟨c, σ⟩ → σ″    ⟨while b do c, σ″⟩ → σ′
-----------------------------
⟨while b do c, σ⟩ → σ′

...
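Read as a Haskell program, the command rules become a big-step interpreter (a sketch; the data types and names are ours, and a diverging while shows up as non-termination of exec):

```haskell
-- Big-step execution of commands, one equation per SOS rule.
type Loc   = String
type Store = Loc -> Integer

data AExp = Num Integer | Var Loc | Add AExp AExp
data BExp = BTrue | Leq AExp AExp
data Com  = Skip | Assign Loc AExp | Seq Com Com
          | If BExp Com Com | While BExp Com

evalA :: AExp -> Store -> Integer
evalA (Num n)     _ = n
evalA (Var x)     s = s x
evalA (Add a0 a1) s = evalA a0 s + evalA a1 s

evalB :: BExp -> Store -> Bool
evalB BTrue       _ = True
evalB (Leq a0 a1) s = evalA a0 s <= evalA a1 s

update :: Store -> Integer -> Loc -> Store   -- sigma[n/X]
update s n x = \y -> if y == x then n else s y

-- exec c sigma computes the sigma' such that <c, sigma> -> sigma'
exec :: Com -> Store -> Store
exec Skip         s = s
exec (Assign x a) s = update s (evalA a s) x
exec (Seq c0 c1)  s = exec c1 (exec c0 s)
exec (If b c0 c1) s = if evalB b s then exec c0 s else exec c1 s
exec (While b c)  s
  | evalB b s = exec (While b c) (exec c s)   -- unfold the loop once
  | otherwise = s
```

For example, executing X := 3; while 1 ≤ X do X := X + (−1) from any store leaves X = 0.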

Equivalences. The semantics induces a notion of equivalence between commands (and expressions): c0 ∼ c1 if, for every pair of stores σ, σ′: ⟨c0, σ⟩ → σ′ if and only if ⟨c1, σ⟩ → σ′. There are several notions of equivalence: c0 ≡ c1 if the commands c0 and c1, written in the abstract syntax, are the same; σ0 and σ1 are equal if they are the same function Loc → N.

Exercises. Encode in Haskell the rules of the operational semantics. Given w ≡ while b do c, show that: w ∼ if b then c; w else skip, and w ∼ if b then w else skip.

Determinism. The rules are deterministic. Strong formulation: for each c, σ there exists at most one σ′, and a single derivation, of ⟨c, σ⟩ → σ′. Weak formulation: for each c, σ there exists at most one σ′ for which ⟨c, σ⟩ → σ′ is derivable.

Big-step, small-step SOS. Alternative formulation: describe a single step of computation, with judgments of the form ⟨c, σ⟩ → ⟨c′, σ′⟩. New rules for while:

⟨b, σ⟩ → true
-----------------------------
⟨while b do c, σ⟩ → ⟨c; while b do c, σ⟩

and a second rule for while ... Both formulations are used. For some languages it is easier to provide a big-step semantics, which is more abstract. In languages for concurrency it is crucial to consider the small-step semantics: it contains additional information about computation steps and execution order. It can be non-trivial to prove the equivalence between the two formulations.

Induction. In mathematics and computer science there are many inductive definitions: natural numbers, lists, grammars, derivations, proofs. Set constructors: zero and successor; empty list and concatenation; the constructs of a grammar; axioms and derivation rules.

Induction. On inductive structures: definitions by recursion, proofs by induction. Reinforcements, extensions: generalized (complete) induction: from ∀n. (∀m < n. P(m)) ⇒ P(n) conclude ∀n. P(n); well-founded sets.

Denotational semantics for IMP. Associate to IMP expressions mathematical objects ((partial) functions), in a compositional way. Basic building blocks: N = {..., −2, −1, 0, 1, 2, ...}, the set of integers; T = {true, false}, the set of Boolean values; Σ = Loc → N, the set of possible states (memory, store configurations). Note the difference between the numerals N (syntax) and the integers N (semantics).

Interpreting functions. Each syntactic category is associated with an interpretation function: A : AExp → (Σ → N): an arithmetic expression denotes a function from states to integers; B : BExp → (Σ → T); C : Com → (Σ ⇀ Σ): a command denotes a partial function from states to states. We resume the ideas of operational semantics, expressing them differently. Double brackets ⟦ ⟧ enclose the syntax elements.

Semantics of expressions (arithmetic, Boolean). Interpretation functions are defined by induction on the grammar (on the structure of the term):
A⟦n⟧(σ) = n
A⟦X⟧(σ) = σ(X)
...
A⟦a0 + a1⟧(σ) = A⟦a0⟧(σ) + A⟦a1⟧(σ)
B⟦a0 ≤ a1⟧(σ) = A⟦a0⟧(σ) ≤ A⟦a1⟧(σ)
Each element, each operator, is interpreted by its semantic correspondent.

Commands semantics. Commands denote partial functions. In the definition, partial functions are seen as relations (sets of pairs ⟨argument, value⟩), represented through their graphs:
C⟦skip⟧ = {(σ, σ) | σ ∈ Σ}
C⟦X := a⟧ = {(σ, σ[n/X]) | σ ∈ Σ, A⟦a⟧(σ) = n}
C⟦c0 ; c1⟧ = C⟦c1⟧ ∘ C⟦c0⟧
C⟦if b then c0 else c1⟧ = {(σ, σ′) ∈ C⟦c0⟧ | B⟦b⟧(σ) = true} ∪ {(σ, σ′) ∈ C⟦c1⟧ | B⟦b⟧(σ) = false}

The hard case: the while constructor.
C⟦while b do c⟧ = C⟦if b then c; while b do c else skip⟧
= {(σ, σ) | σ ∈ Σ, B⟦b⟧(σ) = false} ∪ {(σ, σ′) ∈ C⟦while b do c⟧ ∘ C⟦c⟧ | B⟦b⟧(σ) = true}
A recursive definition. Existence of a solution: reduce the problem to a fixed-point problem; consider the operator
Γ(R) = {(σ, σ) | σ ∈ Σ, B⟦b⟧(σ) = false} ∪ {(σ, σ′) ∈ R ∘ C⟦c⟧ | B⟦b⟧(σ) = true}
Γ transforms a relation between Σ and Σ into another relation: Γ : Rel(Σ, Σ) → Rel(Σ, Σ). Define C⟦while b do c⟧ as the least fixed point of Γ.

Background idea. Given Ω ≡ while true do skip, define C⟦while b do c⟧ as the limit of its approximations:
C⟦Ω⟧
C⟦if b then c; Ω else skip⟧
C⟦if b then c; (if b then c; Ω else skip) else skip⟧
...
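The chain of approximations can be computed in Haskell: the n-th approximation unfolds the loop at most n times and is undefined beyond that (a toy sketch with a one-integer store; gamma, approx, and the use of Maybe for partiality are our choices):

```haskell
type Store = Integer   -- toy store with a single location (our simplification)

-- gamma maps an approximation r of the loop's meaning to a better one,
-- mirroring: if b fails, stop; if b holds, run c then r.
gamma :: (Store -> Bool) -> (Store -> Store)
      -> (Store -> Maybe Store) -> (Store -> Maybe Store)
gamma b c r s = if b s then r (c s) else Just s

-- The chain: approx 0 is the everywhere-undefined function (bottom),
-- approx n is gamma applied n times to it.
approx :: (Store -> Bool) -> (Store -> Store) -> Int -> (Store -> Maybe Store)
approx _ _ 0 = \_ -> Nothing
approx b c n = gamma b c (approx b c (n - 1))
```

For the loop "while 2 ≤ x do x := x − 2" started at 7, the fourth approximation already yields the answer 1, while the third is still undefined: the approximations only grow.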

Fixed-point theorems (on orders). They provide a solution to the previous problem. Knaster-Tarski theorem: in a complete lattice, each monotone function has a least (and a greatest) fixed point (important in mathematics). In a complete partial order, each monotone and continuous function has a least fixed point (fundamental in denotational semantics): the properties of the order are weakened, the properties of the function are strengthened. In a complete metric space, each contractive function has a unique fixed point (used in analysis).

Ordered set. Definition. A partial order (P, ⊑) consists of a set P and a binary relation ⊑ on P such that, for every p, q, r ∈ P: p ⊑ p (reflexivity); if p ⊑ q and q ⊑ r then p ⊑ r (transitivity); if p ⊑ q and q ⊑ p then p = q (antisymmetry).

Examples. Natural numbers, rationals, reals (total orders); powerset, finite powersets (Boolean algebras); partial functions (e.g. Nat ⇀ Nat), ordered by graph inclusion.

Least upper bound. Definition. Given a subset X of P, p is an upper bound for X if ∀q ∈ X. q ⊑ p. The least upper bound (l.u.b.) of X, written ⊔X, if it exists, is the smallest among the upper bounds, that is: ∀q ∈ X. q ⊑ ⊔X, and for each p, if ∀q ∈ X. q ⊑ p then ⊔X ⊑ p. The least upper bound is unique (show as an exercise). The dual notions are: lower bound and greatest lower bound (g.l.b.).

Lattice. Definition. A lattice is a partial order in which each pair of elements has a least upper bound and a greatest lower bound. We write ⊔{p, q} as p ⊔ q. It follows that in a lattice every finite set has a least upper bound and a greatest lower bound. A complete lattice is a partial order in which every subset has a least upper bound. Exercise: show that in a complete lattice every subset also has a greatest lower bound.

Knaster-Tarski theorem. Definition. A function f : P → Q between orders is monotone if it respects the order: if p1 ⊑ p2 then f(p1) ⊑ f(p2). Theorem (Knaster-Tarski). Each monotone function f on a complete lattice P has a least (and a greatest) fixed point. Proof. One shows that ⊓{p | f(p) ⊑ p} is a fixed point for f, i.e. ⊓{p | f(p) ⊑ p} = f(⊓{p | f(p) ⊑ p}).

Relations. Partial functions do not form a lattice. Relations are an extension of partial functions and do form a lattice. Partial functions and (total) functions can be seen as special cases of relations. Definition. A relation between X and Y is a subset of X × Y. A partial function f : X ⇀ Y is a relation such that ∀x ∈ X. ∀y, y′ ∈ Y. (x, y) ∈ f ∧ (x, y′) ∈ f ⇒ y = y′. A total function is a partial function such that ∀x. ∃y. (x, y) ∈ f. (x, y) ∈ f can also be written y = f(x).

Relations. Definition (Composition). Given two relations, R between X and Y and S between Y and Z, we define S ∘ R by: (x, z) ∈ S ∘ R iff ∃y. (x, y) ∈ R ∧ (y, z) ∈ S. Note the inversion in the order. If the relations are also functions, composition as relations coincides with composition as functions.

Application to semantics. The set of relations on Σ × Σ forms a complete lattice. The operator Γ(R) = {(σ, σ) | B⟦b⟧(σ) = false} ∪ {(σ, σ′) ∈ R ∘ C⟦c⟧ | B⟦b⟧(σ) = true} is monotone. The Knaster-Tarski theorem proves the existence of a fixed point for the operator Γ (the recursive definition of the semantics). The recursive definition has multiple solutions; the least solution is the one that correctly describes the behavior of the program.

Structures for denotational semantics. It is preferable to use partial functions instead of relations: the previous construction does not prove that the fixed point is a partial function, it could be a generic relation. Remember: in IMP, the semantics of a command lives in Σ ⇀ Σ. The set of partial functions forms an order: f ⊑ g when the following equivalent conditions are met: g is more defined than f; ∀x. if f(x) is defined then f(x) = g(x); as sets of pairs (argument, result), f ⊆ g. This space is not a lattice: it is necessary to use a second fixed-point theorem.

Complete partial orders. Definition. In a partial order P, an ω-chain (a chain of length ω) is a countable sequence of elements of P, each element greater than (or equal to) the previous one: p1 ⊑ p2 ⊑ p3 ⊑ ... A complete partial order (CPO) P is a partial order in which each ω-chain has a least upper bound. A complete partial order with bottom is a CPO containing a minimum element ⊥. CPOs are the typical structures used to interpret programs. An example of a CPO with bottom is the set of partial functions between any pair of sets (e.g. from N to N).

Continuous functions on CPOs. Definition (Continuity). A function f : D → E between CPOs is continuous if it preserves the least upper bounds of chains (the limits of chains): for each chain p1 ⊑ p2 ⊑ p3 ⊑ ..., ⊔_{i∈N} f(p_i) = f(⊔_{i∈N} p_i). Similarly, in mathematical analysis a function is continuous iff it preserves the limits of convergent sequences.

New fixed-point theorem. Theorem. Each continuous function f : D → D on a CPO with bottom has a least fixed point. Proof (sketch). The least fixed point is the least upper bound of the chain ⊥, f(⊥), f(f(⊥)), f³(⊥), ... Notice that this proof is constructive: it suggests a way to compute the fixed point. The proof of Knaster-Tarski is not constructive: it uses uncountably large subsets.
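The constructive proof is exactly how recursion works in Haskell: general recursion computes the limit of the chain ⊥, f(⊥), f²(⊥), ... (a sketch; fact is our example):

```haskell
-- Least fixed point: fix f = the limit of bottom, f bottom, f (f bottom), ...
-- Haskell's own recursion realizes this limit.
fix :: (a -> a) -> a
fix f = f (fix f)

-- Example: factorial as the least fixed point of a functional,
-- rec being the "approximation so far".
fact :: Integer -> Integer
fact = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

The n-th approximation fⁿ(⊥) is defined exactly on inputs smaller than n, so the chain of approximations converges to the full factorial function.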

Semantics of while. Remember that C⟦while b do c⟧ is defined as the least fixed point of Γ:
Γ(R) = {(σ, σ) | σ ∈ Σ, B⟦b⟧(σ) = false} ∪ {(σ, σ′) ∈ R ∘ C⟦c⟧ | B⟦b⟧(σ) = true}
One should prove that Γ is monotone (easy) and continuous (not easy; we postpone the proof).

Definition of while by approximations. By the proof of the fixed-point theorem, the function C⟦while b do c⟧ is the limit of the following ω-chain of partial functions:
C⟦Ω⟧
C⟦if b then (c; Ω)⟧
C⟦if b then (c; if b then (c; Ω))⟧
C⟦if b then (c; if b then (c; if b then (c; Ω)))⟧
...
where Ω denotes an always divergent command, and "if b then c" stands for the command "if b then c else skip": syntactic sugar, enriching the language without adding new semantic definitions.

Examples. Compute: C⟦while true do skip⟧; the operational and denotational semantics of: while 2 ≤ X do X := X − 2, and of: Z := 0; while Y ≤ X do (X := X − Y; Z := Z + 1). The store can be restricted to the locations contained in the command.

Operational and denotational semantics agree. Theorem. The judgment ⟨c, σ⟩ → σ′ is derivable iff (σ, σ′) ∈ C⟦c⟧. Proved by induction on the syntax of c and, when c is a while command, by induction on the derivation.

A category for denotational semantics. Generalizing the previous example (the denotational semantics of IMP), programming languages (and their components: data types, commands) are interpreted using: 1. complete partial orders (CPOs): the order is the information order; ⊥ represents the absence of information, the program that always diverges; the structure allows solutions of recursive definitions, fixed-point equations; 2. functions between CPOs that are monotone and continuous: they preserve the least upper bounds of ω-chains.

Examples of CPOs. N with the flat order: n ⊑ m iff n = m. N⊥ = N ∪ {⊥}, with n ⊑ m iff (n = m or n = ⊥): the set of natural numbers with ⊥. T⊥ = {true, false}⊥. O = {⊥, ⊤}, with ⊥ ⊑ ⊤ (the Sierpinski space). N → N⊥ with the pointwise order: f ⊑ g iff for every n, f(n) ⊑ g(n); isomorphic to N ⇀ N (with the order defined above). It internalizes divergence: f(n) = ⊥ denotes the divergence of f on n, and partial functions are avoided.

Examples of CPOs. Ω = the natural numbers plus a top element ∞, with the linear order: the lazy natural numbers, the Haskell computations of type data Nat = Zero | Suc Nat. Streams of Boolean values: partial strings, arbitrarily long: data StreamBool = TT StreamBool | FF StreamBool

Monotonicity. Intuitive meaning: consider a program with a functional type F : (Nat -> Bool) -> Bool. Its semantics ⟦F⟧ : (N ⇀ T) → T⊥ is a monotone function: if ⟦F⟧(f) = true then, for any function g more defined than f, ⟦F⟧(g) = true. It preserves the information order: more information in the input generates more information in the output.

Continuity. A functional of type (Nat -> Bool) -> Bool is continuous if, whenever ⟦F⟧(f) = true, F generates true after evaluating f on a finite number of values: there must exist a partial function g, defined on a finite number of elements, such that g ⊑ f and ⟦F⟧(g) = true. Functions need to be finitary: to generate a finite piece of information in the result, the computation examines just a finite piece of information of the argument. Examples of non-continuous functionals: testing whether a function converges on all even numbers; testing whether a function returns the parity. Exercise: show that composition preserves continuity.

The lambda (λ) notation. In mathematics one defines functions by means of equations: f(x) = sin(x) + cos(x). With the λ-notation one writes directly: λx. sin(x) + cos(x), or λx ∈ R. sin(x) + cos(x), or f = λx. sin(x) + cos(x). In Haskell: \x -> (sin x) + (cos x). Benefits: nameless functions: one can define a function without giving it a name; more concise definitions of functions; functions become similar to the other elements, first-class objects; conceptually clearer: the ambiguous sin(x) becomes λx. sin(x), or simply sin.

CPO builders. To give semantics to complex languages, one associates: to types, suitably structured CPOs; to programs and program subexpressions, CPO elements and CPO functions. To do this, new CPOs are built from existing ones, and standard operators and functions are used in the process.

Discrete CPOs. CPOs without bottom: to a set D of values (e.g. the integers N, the Booleans B) one associates the CPO D with the flat order: d1 ⊑ d2 iff d1 = d2. With respect to the information order, all elements are unrelated: different pieces of information, all completely defined, none more defined than another. A set of completely defined values.

Lifting. Definition. D⊥ = D ∪ {⊥}, with the order: d1 ⊑ d2 iff d1 = ⊥ or (d1, d2 ∈ D and d1 ⊑_D d2). One adds to D an element ⊥ representing the divergent computation; the other elements represent computations producing an element of D. Associated functions: ⌊−⌋ : D → D⊥: given a value d, it builds the computation ⌊d⌋ returning d as result; from a function f : D → E defined on values, one constructs a function f* : D⊥ → E defined on computations (E needs to be a CPO with bottom). Notation: (λx. e)*(d) is also written let x ⇐ d. e.

Some remarks. We use an approach inspired by category theory: for each constructor, we define the set of functions characterizing it; these functions are constructors or destructors, the basic ingredients for building other functions. CPO builders have corresponding type constructors in Haskell. In category theory, the lifting operation defines a monad; the Haskell correspondent is the Maybe monad. The correspondence is not exact: in Haskell one can define
test :: Maybe a -> Bool
test Nothing = True
test (Just _) = False
which has no continuous counterpart on the corresponding CPOs.
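The lifting functions can be sketched with Maybe, under the assumption that Nothing models ⊥ and Just d models ⌊d⌋ (the names floorL and star are ours):

```haskell
-- Lifting modeled with Maybe: Nothing plays bottom, Just d plays |d|.
floorL :: a -> Maybe a            -- the map d |-> |d|
floorL = Just

-- f |-> f*: extend f, defined on values, to computations.
star :: (a -> Maybe b) -> (Maybe a -> Maybe b)
star _ Nothing  = Nothing         -- f* is strict: bottom goes to bottom
star f (Just d) = f d             -- on proper values, f* behaves as f
```

Note that star f coincides with (>>= f), the bind of the Maybe monad, which is the monadic reading of lifting mentioned above.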

Product. Definition. D × E is the set of pairs, with the order: ⟨d1, e1⟩ ⊑ ⟨d2, e2⟩ iff d1 ⊑ d2 and e1 ⊑ e2. It generalizes to finite products. It builds the CPOs associated with pairs, records, vectors. Associated functions: the projections π1 : (D × E) → D, π1(⟨d, e⟩) = d, and π2 : (D × E) → E; from each pair of functions f : C → D, g : C → E we derive ⟨f, g⟩ : C → (D × E), ⟨f, g⟩(c) = ⟨f(c), g(c)⟩ (returning pairs of elements). These functions define an isomorphism between (C → D) × (C → E) and C → (D × E), given by ...
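In Haskell the product constructions read as follows (a sketch; pairF is our name, and Prelude's fst/snd play the projections):

```haskell
-- Projections on the product CPO.
proj1 :: (d, e) -> d
proj1 (d, _) = d

proj2 :: (d, e) -> e
proj2 (_, e) = e

-- The pairing <f, g> : C -> (D x E) built from f : C -> D and g : C -> E.
pairF :: (c -> d) -> (c -> e) -> c -> (d, e)
pairF f g c = (f c, g c)
```

The isomorphism mentioned above sends (f, g) to pairF f g in one direction, and h to (proj1 . h, proj2 . h) in the other.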

Exercises. Show that the definitions are well-given: the order on D × E is a CPO; the functions πi and ⟨f, g⟩ are continuous. Build O × O (= O²) and O³: which of these orders are isomorphic? Build (T⊥)².

Properties. Proposition. For any doubly indexed set of elements {d_{i,j} | i, j ∈ Nat} in D such that i ≤ i′ and j ≤ j′ imply d_{i,j} ⊑ d_{i′,j′}: ⊔_i d_{i,i} = ⊔_i ⊔_j d_{i,j} = ⊔_j ⊔_i d_{i,j}. Proposition. A function f : (C × D) → E is continuous iff it is continuous in each of its arguments.

Function space. Definition. [D → E] is the set of continuous functions from D to E, with the pointwise order: f ⊑ g iff for every d ∈ D, f(d) ⊑ g(d). It builds the CPOs for functional languages: type FunInt = Integer -> Integer. Exercise: prove that [D → E] is a CPO. Notice the square brackets [ ] in the notation.

Associated functions. Application: app : ([D → E] × D) → E, app(⟨f, d⟩) = f(d). Currying (after Haskell Curry): a function f : (D × E) → F induces a function curry(f) : D → [E → F], curry(f)(d)(e) = f(⟨d, e⟩). These functions define an isomorphism between C → [D → E] and (C × D) → E, given by ...
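Both associated functions exist directly in Haskell (a sketch; app and curry' are our names for what the Prelude offers as uncurried application and curry):

```haskell
-- app : ([D -> E] x D) -> E
app :: (d -> e, d) -> e
app (f, d) = f d

-- curry(f) : D -> [E -> F] from f : (D x E) -> F
curry' :: ((d, e) -> f) -> d -> e -> f
curry' f d e = f (d, e)
```

curry' and its inverse (uncurry in the Prelude) realize the isomorphism between (C × D) → E and C → [D → E].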

Properties and exercises. Show that the fixed-point operator Y : [D → D] → D is continuous. Show that [T → D] is isomorphic to D². Draw the CPOs: [O → T], [O → T⊥], [T → T⊥], ...

Disjoint sum. Definition. D + E = {⟨1, d⟩ | d ∈ D} ∪ {⟨2, e⟩ | e ∈ E}, with the order: ⟨1, d⟩ ⊑ ⟨1, d′⟩ iff d ⊑ d′; ⟨2, e⟩ ⊑ ⟨2, e′⟩ iff e ⊑ e′; ⟨1, d⟩ incomparable with ⟨2, e⟩. Exercise: show that D + E is a CPO. It is the CPO associated with variant types: data Bool = True | False

Associated functions. Insertions: in1 : D → (D + E), in1(d) = ⟨1, d⟩, and in2 : E → (D + E). From functions f : D → C and g : E → C one constructs the function [f, g] : (D + E) → C: [f, g](⟨1, d⟩) = f(d), [f, g](⟨2, e⟩) = g(e). This induces an isomorphism between (D → C) × (E → C) and (D + E) → C.
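In Haskell the disjoint sum is Either, the insertions are Left and Right, and [f, g] is the either combinator (a sketch; copair is our name for it):

```haskell
-- Insertions into the sum CPO.
inl :: d -> Either d e
inl = Left

inr :: e -> Either d e
inr = Right

-- The copairing [f, g] : (D + E) -> C, case analysis on the tag.
copair :: (d -> c) -> (e -> c) -> Either d e -> c
copair f _ (Left d)  = f d
copair _ g (Right e) = g e
```

copair coincides with the Prelude's either; the isomorphism above sends (f, g) to copair f g.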

Exercises and properties. Define the n-ary sum (of n CPOs). Can one reduce the n-ary sum to repeated applications of the binary one? Define, through standard functions and constructors, the semantics of if then else. Define, by means of standard functions and constructors, the semantics of the Boolean functions.

Metalanguage. CPO functions can be constructed using a language with: variables with a domain type: x1 : D1, ..., xi : Di; constants: true, false, −1, 0, 1, ...; basic functions: ⊥, πi, app, ini, fix; builders: (−)*, ⌊−⌋, ⟨−, −⟩, curry(−), [−, −], application, lambda abstraction, composition of functions, let. Examples: ⟨π2, π1⟩; λx. f(g(x)); λx. f(π1(x))(π2(x)); λ(x, y). f(x)(y)

Metalanguage. Proposition. Each expression e of type E, with variables x1 : D1, ..., xi : Di, denotes a function f : (D1 × ... × Di) → E continuous in each of its variables. Proof. By induction on the structure of e, one defines the meaning of e and, from this, its continuity. The metalanguage allows one to define mathematical objects with a syntax similar to Haskell. The metalanguage and Haskell define objects of a different nature.

Functional languages. Semantics (operational and denotational) of two simple functional languages, with two different evaluation mechanisms: call-by-value (eager), as in Standard ML, OCaml; call-by-name (lazy), as in Haskell, Miranda.

Expressions. t ::= x | n | t1 op t2 | (t1, t2) | fst t | snd t | λx. t | (t1 t2) | if t0 then t1 else t2 | let x ⇐ t1 in t2 | rec f. λx. t
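As with IMP, the term grammar can be transcribed as a Haskell data type (a sketch; the constructor names are ours):

```haskell
-- The term grammar of the functional language, one constructor per production.
data Term = TVar String               -- x
          | TNum Integer              -- n
          | TOp String Term Term      -- t1 op t2
          | TPair Term Term           -- (t1, t2)
          | TFst Term | TSnd Term
          | TLam String Term          -- \x. t
          | TApp Term Term            -- (t1 t2)
          | TIf Term Term Term        -- if t0 then t1 else t2
          | TLet String Term Term     -- let x <= t1 in t2
          | TRec String String Term   -- rec f. \x. t
          deriving (Eq, Show)
```

The type-checking and reduction rules that follow are then inductive definitions over this one data type.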

Type checking. Not all expressions are meaningful: (1 3), ((λx. x + 1)(2, 3)). Type checking determines the correct expressions: it derives judgments t : τ. Types: τ ::= int | τ1 × τ2 | τ1 → τ2. Rules by induction on the structure of terms: every syntactic construct has an associated rule, e.g.

t1 : τ1    t2 : τ2 (under the assumption x : τ1)
-----------------------------
let x ⇐ t1 in t2 : τ2

No polymorphism. Each variable is associated with a single type. Property: each expression has a unique type. With no polymorphism, the type system and the denotational semantics are simpler.

Operational Semantics defines a system of rules describing how a term reduces. Two alternatives: big-step reduction, t ⇓ c, describes the value c generated by the computation of t, and is closer to denotational semantics; small-step reduction, t₁ → t₂, describes how a single computation step turns t₁ into t₂. In the small-step semantics of functional languages, a single step often substitutes several occurrences of a variable by a term; this is not an elementary computing step. Small-step reduction can be useful for debugging (see Haskell).

Call-by-value, big-step reduction. For each type, a set of values or canonical forms: computational results, terms that cannot be further reduced. At int (the ground type), the numeric constants ..., −1, 0, 1, 2, ... At τ₁ * τ₂, the pairs (v₁, v₂) with each vᵢ a value: eager reduction, fully defined elements. At τ₁ → τ₂, the λ-abstractions λx.t, with t not necessarily a value. Alternative definitions are possible: λx.v with v an irreducible term, e.g. x + 3, (x 1). By definition, values are closed terms: one can only evaluate closed terms; this restriction simplifies the definitions.

Reduction rules: by induction on term structure.

t₀ ⇓ m    t₁ ⇓ n
-----------------  (o = m + n)
 t₀ + t₁ ⇓ o

t₀ ⇓ 0    t₁ ⇓ c
----------------------------
if t₀ then t₁ else t₂ ⇓ c

t₀ ⇓ n    t₂ ⇓ c
----------------------------  (n ≠ 0)
if t₀ then t₁ else t₂ ⇓ c

t₁ ⇓ c₁    t₂ ⇓ c₂
--------------------
(t₁, t₂) ⇓ (c₁, c₂)

t ⇓ (c₁, c₂)
-------------
 fst t ⇓ c₁

Reduction rules

t₁ ⇓ λx.t₁′    t₂ ⇓ v₂    t₁′[v₂/x] ⇓ c
-----------------------------------------
            (t₁ t₂) ⇓ c

t₁ ⇓ c₁    t₂[c₁/x] ⇓ c
-------------------------
 let x ⇐ t₁ in t₂ ⇓ c

rec x.λy.t ⇓ λy.t[rec x.λy.t / x]
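A minimal sketch of a big-step call-by-value evaluator implementing the rules above. The tagged-tuple encoding of terms is my own, and environments replace literal substitution t[v/x]:

```python
# Terms are tagged tuples: ('var',x) ('num',n) ('op',f,t1,t2) ('pair',t1,t2)
# ('fst',t) ('snd',t) ('lam',x,t) ('app',t1,t2) ('if',t0,t1,t2)
# ('let',x,t1,t2) ('rec',f,x,t), with op carried as a Python function.

def eval_cbv(t, env):
    tag = t[0]
    if tag == 'num':  return t[1]
    if tag == 'var':  return env[t[1]]
    if tag == 'op':   return t[1](eval_cbv(t[2], env), eval_cbv(t[3], env))
    if tag == 'pair': return (eval_cbv(t[1], env), eval_cbv(t[2], env))
    if tag == 'fst':  return eval_cbv(t[1], env)[0]
    if tag == 'snd':  return eval_cbv(t[1], env)[1]
    if tag == 'if':   # 0 selects the then-branch, as in the rules above
        return eval_cbv(t[2] if eval_cbv(t[1], env) == 0 else t[3], env)
    if tag == 'lam':  return ('closure', t[1], t[2], env)
    if tag == 'app':
        _, x, body, cenv = eval_cbv(t[1], env)
        v = eval_cbv(t[2], env)            # argument evaluated first: eager
        return eval_cbv(body, {**cenv, x: v})
    if tag == 'let':
        v = eval_cbv(t[2], env)            # let is eager too
        return eval_cbv(t[3], {**env, t[1]: v})
    if tag == 'rec':
        env2 = dict(env)
        clo = ('closure', t[2], t[3], env2)
        env2[t[1]] = clo                   # cyclic: f bound to its own closure
        return clo
    raise ValueError(tag)
```

Running it on the exercise term ((rec f.λx. if x then 0 else (f (x − 1)) + 2) 2) yields 4.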

Reduction properties. Reduction is deterministic: each term reduces to at most one value, since at most one rule is applicable at each point. With rec, endless derivations correspond to infinite computations; example: ((rec f : N → N. λy.(f 1)) 2). Reduction preserves types: subject reduction. Exercise: compute the value of ((rec f.λx. if x then 0 else (f (x − 1)) + 2) 2).

Domains for Denotational Semantics, separated into: domains for interpreting values; domains for interpreting computations. Domains for values, by induction on the structure of the type:
  V_int = N
  V_{τ₁ * τ₂} = V_{τ₁} × V_{τ₂}
  V_{τ₁ → τ₂} = [V_{τ₁} → (V_{τ₂})⊥]
Domain for computations: (V_τ)⊥

Environment. The interpretation of an open term depends on how we interpret its variables, that is, on the environment. In call-by-value languages, variables denote values. Env is the set of functions from variables to the domains for values, ρ : Var → ⋃_{τ ∈ T} V_τ, that respect the types: x : τ implies ρ(x) ∈ V_τ. The interpretation of a term t : τ is a function ⟦t⟧ : Env → (V_τ)⊥.

Inductive definition on terms
  ⟦x⟧ρ = ⌊ρ(x)⌋
  ⟦n⟧ρ = ⌊n⌋
  ⟦t₁ op t₂⟧ρ = ⟦t₁⟧ρ op⊥ ⟦t₂⟧ρ
  ⟦(t₁, t₂)⟧ρ = let v₁ ⇐ ⟦t₁⟧ρ. let v₂ ⇐ ⟦t₂⟧ρ. ⌊(v₁, v₂)⌋
  ⟦fst t⟧ρ = let v ⇐ ⟦t⟧ρ. ⌊π₁(v)⌋
  ⟦λx.t⟧ρ = ⌊λv : V_σ. ⟦t⟧(ρ[v/x])⌋
  ⟦(t₁ t₂)⟧ρ = let v₁ ⇐ ⟦t₁⟧ρ. let v₂ ⇐ ⟦t₂⟧ρ. v₁(v₂)
  ⟦rec f.λx.t⟧ρ = ⌊Fix(λv₁ : V_{σ→τ}. λv₂ : V_σ. ⟦t⟧ρ[v₁/f, v₂/x])⌋
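The strict "let v ⇐ e" used in these equations can be modelled directly; the sketch below (all names mine) represents ⊥ by a sentinel value and shows it propagating strictly, as in the call-by-value equations:

```python
# Sketch of the lifted domain (V)_bot and the strict "let v <= e. ...".
# A diverging subterm (denotation BOT) poisons the whole denotation.

BOT = object()          # the extra element "bottom" of (V)_bot

def down(v):            # the lifting map v |-> |v| ("floor v")
    return v

def slet(e, k):         # let v <= e. k(v)   (strict let)
    return BOT if e is BOT else k(e)

# Example: the equation for pairs,
#   [[(t1, t2)]]rho = let v1 <= [[t1]]rho. let v2 <= [[t2]]rho. |(v1, v2)|
def pair_sem(d1, d2):
    return slet(d1, lambda v1: slet(d2, lambda v2: down((v1, v2))))
```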

Categorical inductive definition on terms
  ⟦x⟧ = π_x
  ⟦n⟧ = n ∘ !
  ⟦t₁ op t₂⟧ = op ∘ ⟨⟦t₁⟧, ⟦t₂⟧⟩
  ⟦(t₁, t₂)⟧ = ⟨⟦t₁⟧, ⟦t₂⟧⟩
  ⟦fst t⟧ = (π₁)* ∘ ⟦t⟧
  ⟦λx.t⟧ = curry(⟦t⟧ ∘ ⟨in_y ∘ f_y⟩_{y ∈ Var})
  ⟦(t₁ t₂)⟧ = app* ∘ ⟨⟦t₁⟧, ⟦t₂⟧⟩
  ⟦rec x.λy.t⟧ = Fix ∘ curry(⟦t⟧ ∘ ⟨in_y ∘ f_y⟩_{y ∈ Var})
where f_x = π₁ and f_y = π_y ∘ π₂ if x ≠ y, for y ∈ Var

Properties of the denotational semantics. Substitution lemma: for each closed term s, if ⟦s⟧ρ = ⌊v⌋ then ⟦t[s/x]⟧ρ = ⟦t⟧ρ[v/x]. It is proved by induction on the structure of the term t, and is instrumental in proving other properties. For each value c, ⟦c⟧ρ ≠ ⊥.

Comparison between semantics: Correctness. Correctness of operational semantics. Proposition: for each term t and value c, t ⇓ c implies ⟦t⟧ = ⟦c⟧. It is proved by induction on the derivation of t ⇓ c: the rules of the operational semantics respect the denotation.

Comparison between semantics: Adequacy. The opposite implication, ⟦t⟧ = ⟦c⟧ implies t ⇓ c, is not valid, since there are two different values c₁ and c₂ with the same denotational semantics: ⟦c₁⟧ = ⟦c₂⟧ although c₁ and c₂ are distinct values, and a value reduces only to itself. Example: λx.0 and λx.0 + 0.

Adequacy. A weaker property holds. Proposition: for each term t and value c, ⟦t⟧ = ⟦c⟧ implies the existence of a value c′ such that t ⇓ c′. If ⟦t⟧ is different from ⊥, then the computation of t terminates. The proof is not obvious; it uses sophisticated techniques (logical relations). Corollary: for each t : int and integer n, t ⇓ n iff ⟦t⟧ = ⌊n⌋.

Call-by-name language. A different construct for recursive definitions, rec x.t, and different sets of values for pair types. At int, the numeric constants ..., −1, 0, 1, 2, ... At τ₁ * τ₂, the pairs (t₁, t₂), with the tᵢ closed terms, not necessarily values. At τ₁ → τ₂, the closed terms λx.t, with t not necessarily a value. Remark on lazy reduction: defining values in this way, one can handle infinite lists; what matters is this definition of values, regardless of the call-by-value or call-by-name reduction mechanism.

Reduction rules. Differences from the call-by-value semantics:

(t₁, t₂) ⇓ (t₁, t₂)

t ⇓ (t₁, t₂)    t₁ ⇓ c₁
-------------------------
      fst t ⇓ c₁

t₁ ⇓ λx.t₁′    t₁′[t₂/x] ⇓ c
------------------------------
        (t₁ t₂) ⇓ c

t₂[t₁/x] ⇓ c
----------------------
let x ⇐ t₁ in t₂ ⇓ c

t[rec x.t / x] ⇓ c
-------------------
   rec x.t ⇓ c
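A sketch of a call-by-name evaluator implementing these rules, with thunks standing in for unevaluated terms (the tagged-tuple encoding is my own, and environments replace literal substitution):

```python
# Terms: ('var',x) ('num',n) ('op',f,t1,t2) ('pair',t1,t2) ('fst',t)
# ('lam',x,t) ('app',t1,t2) ('let',x,t1,t2) ('rec',x,t).
# Arguments, let-bound terms and pair components are passed as thunks,
# and rec x.t is unfolded only when demanded.

def eval_cbn(t, env):
    tag = t[0]
    if tag == 'num':  return t[1]
    if tag == 'var':  return env[t[1]]()       # force the thunk
    if tag == 'op':   return t[1](eval_cbn(t[2], env), eval_cbn(t[3], env))
    if tag == 'pair': # a pair of unevaluated components is already a value
        return ('pairv',
                lambda: eval_cbn(t[1], env),
                lambda: eval_cbn(t[2], env))
    if tag == 'fst':  return eval_cbn(t[1], env)[1]()
    if tag == 'lam':  return ('closure', t[1], t[2], env)
    if tag == 'app':
        _, x, body, cenv = eval_cbn(t[1], env)
        arg = lambda: eval_cbn(t[2], env)       # argument NOT evaluated
        return eval_cbn(body, {**cenv, x: arg})
    if tag == 'let':
        return eval_cbn(t[3], {**env, t[1]: (lambda: eval_cbn(t[2], env))})
    if tag == 'rec':  # rec x.t reduces via t[rec x.t / x], on demand
        return eval_cbn(t[2], {**env, t[1]: (lambda: eval_cbn(t, env))})
    raise ValueError(tag)
```

A pair with a diverging component is harmless as long as that component is never demanded: fst (5, rec x.x) evaluates to 5.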

Main differences: call-by-name evaluation of arguments; for uniformity, call-by-name evaluation also for the let constructor; recursion applicable to all terms, not only to λ-abstractions; the different set of values for the pair type forces different rules. Properties preserved: deterministic reduction (each term reduces to at most one value; no more than one rule is applicable); reduction preserves the type (subject reduction).

Domains for Denotational Semantics. Domains for values:
  V_int = N
  V_{τ₁ * τ₂} = (V_{τ₁})⊥ × (V_{τ₂})⊥
  V_{τ₁ → τ₂} = [(V_{τ₁})⊥ → (V_{τ₂})⊥]
Domains for computations: (V_τ)⊥. Environment: in call-by-name languages, variables denote computations. Env is the set of functions from variables to the domains for computations, ρ : Var → ⋃_{τ type} (V_τ)⊥, that respect the types. ⟦t⟧ : Env → (V_τ)⊥

Inductive definition on terms
  ⟦x⟧ρ = ρ(x)
  ⟦n⟧ρ = ⌊n⌋
  ⟦t₁ op t₂⟧ρ = ⟦t₁⟧ρ op⊥ ⟦t₂⟧ρ
  ⟦(t₁, t₂)⟧ρ = ⌊(⟦t₁⟧ρ, ⟦t₂⟧ρ)⌋
  ⟦fst t⟧ρ = let v ⇐ ⟦t⟧ρ. π₁(v)
  ⟦λx.t⟧ρ = ⌊λv : (V_σ)⊥. ⟦t⟧(ρ[v/x])⌋
  ⟦(t₁ t₂)⟧ρ = let v ⇐ ⟦t₁⟧ρ. v(⟦t₂⟧ρ)
  ⟦rec x.t⟧ρ = Fix(λv : (V_σ)⊥. ⟦t⟧(ρ[v/x]))

Categorical definition
  ⟦x⟧ = π_x
  ⟦n⟧ = n ∘ !
  ⟦t₁ op t₂⟧ = op ∘ ⟨⟦t₁⟧, ⟦t₂⟧⟩
  ⟦(t₁, t₂)⟧ = ⟨⟦t₁⟧, ⟦t₂⟧⟩
  ⟦fst t⟧ = (π₁)* ∘ ⟦t⟧
  ⟦λx.t⟧ = curry(⟦t⟧ ∘ ⟨in_y ∘ f_y⟩_{y ∈ Var})
  ⟦(t₁ t₂)⟧ = app* ∘ ⟨⟦t₁⟧, ⟦t₂⟧⟩
  ⟦rec x.t⟧ = Fix ∘ curry(⟦t⟧ ∘ ⟨in_y ∘ f_y⟩_{y ∈ Var})
where f_x = π₁ and f_y = π_y ∘ π₂ if x ≠ y, for y ∈ Var

Properties of the denotational semantics. Substitution lemma: if ⟦s⟧ρ = v then ⟦t[s/x]⟧ρ = ⟦t⟧ρ[v/x]. It is proved by induction on the structure of the term t. For each value c, ⟦c⟧ρ ≠ ⊥.

Comparison of Semantics: Correctness and Adequacy. Proposition (Correctness): for each term t and value c, t ⇓ c implies ⟦t⟧ = ⟦c⟧. Proposition (Adequacy): for each term t and value c, ⟦t⟧ = ⟦c⟧ implies the existence of a value c′ such that t ⇓ c′. Corollary: for each t : int and integer n, t ⇓ n iff ⟦t⟧ = ⌊n⌋.

Observational equality and full abstraction. Two terms t₁, t₂ are observationally equivalent (indistinguishable by the operational semantics), written t₁ ≈ t₂, if for each context C[ ]: C[t₁] ⇓ iff C[t₂] ⇓. From the adequacy theorem: ⟦t₁⟧ = ⟦t₂⟧ implies t₁ ≈ t₂. The opposite implication (full abstraction) is only true for a sequential denotational semantics: the existence of a parallel-or function on CPOs causes full abstraction to fail.

Semantics usage. Observational equivalence is difficult to prove directly: it requires a test on infinitely many contexts. Operational semantics is useful to specify language implementations. Denotational semantics is a first step in reasoning about programs.

Simplified denotational semantics. For call-by-name languages, observational equivalence tested just on integer contexts C[ ] : int identifies more terms and allows a simpler denotational semantics, with simpler domains that do not separate the domains of values from those of computations:
  D_int = (N)⊥
  D_{τ₁ * τ₂} = D_{τ₁} × D_{τ₂}
  D_{τ₁ → τ₂} = [D_{τ₁} → D_{τ₂}]
It is important to specify which observations can be made on programs.

Inductive definition on terms
  ⟦x⟧ρ = ρ(x)
  ⟦n⟧ρ = ⌊n⌋
  ⟦t₁ op t₂⟧ρ = ⟦t₁⟧ρ op⊥ ⟦t₂⟧ρ
  ⟦(t₁, t₂)⟧ρ = (⟦t₁⟧ρ, ⟦t₂⟧ρ)
  ⟦fst t⟧ρ = π₁(⟦t⟧ρ)
  ⟦λx.t⟧ρ = λd : D_σ. ⟦t⟧(ρ[d/x])
  ⟦(t₁ t₂)⟧ρ = ⟦t₁⟧ρ(⟦t₂⟧ρ)
  ⟦rec x.t⟧ρ = Fix(λd : D_σ. ⟦t⟧(ρ[d/x]))

Sum Types. We enrich the functional languages with sum types: τ ::= ... | τ₁ + τ₂. In Haskell:
data Fig = Circle Integer | Rectangle Integer Integer
Constructors and destructors: inl(t), inr(t), case t of inl(x₁).t₁, inr(x₂).t₂. In Haskell:
case fig of
  Circle n -> t1
  Rectangle n m -> t2

Sum Types. Typing rules: an expression can have more than one type (in contrast with Haskell). Operational semantics (eager and lazy). Denotational semantics. Simplified denotational semantics.

Haskell's denotational semantics. We aim to extend the operational and denotational semantics of the lazy functional language to a large part of Haskell. Some critical points: definitions of (non-recursive) types; recursive definitions and pattern matching; recursive definitions of types; polymorphism.

Core (Kernel) Language. When defining the semantics of a language, or its implementation, it is useful to consider a core language: a simple language, with a reduced set of primitive builders, a reduced subset of the main language. The main language, a superset of the core language, can be (easily) reduced to it: a modular approach to implementation and semantics. From a theoretical point of view, core languages highlight: the fundamental mechanisms of computation; the similarities and differences between languages.

Core Language for Haskell: System FC. Lambda calculus: application and λ-abstraction. Polymorphism: explicit types, type variables, type quantifiers. Algebraic (recursive) data types: a set of basic functions, constructors and destructors. Equivalences between types: coercion operators.

Definition of non-recursive types. These can be simulated in Fun (our functional language): we associate to each type a type in Fun. The declaration
data TypeId = Const1 TypeExp11 ... TypeExp1M | Const2 TypeExp21 ... TypeExp2N | ...
translates to the Fun type tr(TypeId):
tr(TypeId) = (tr(TExp11) * ... * tr(TExp1M)) + (tr(TExp21) * ... * tr(TExp2N)) + ...
Each constructor translates to a function having type:
Const1 : tr(TExp11) → ... → tr(TExp1M) → tr(TypeId)

Denotational semantics for non-recursive types. Given the previous translation, the domains for non-recursive types are:
V_TId = ((V_TExp11)⊥ × ... × (V_TExp1M)⊥) + ((V_TExp21)⊥ × ... × (V_TExp2N)⊥) + ...
and the semantics of the constructors is:
⟦Const1⟧ρ = λv₁ : (V_TExp11)⊥. ... λv_M : (V_TExp1M)⊥. in₁⟨v₁, ..., v_M⟩

Data Types, Constructors and Destructors.
data Nat = O | S Nat
The definition generates two constructor functions, having types
O :: Nat
S :: Nat -> Nat
and a destructor: the case construct.

Pattern matching. Pattern matching does not belong to the core language: definitions by pattern matching are reduced to the case construct on a single argument. Example:
add x O = x
add x (S y) = S (add x y)
can be translated into the core language as:
let add = rec add. \x y -> case y of
                             O -> x
                             S y1 -> S (add x y1)
in ...

Patterns with several arguments. The definition
and True True = True
and _ False = False
and False _ = False
can be translated into:
let and = \x y -> case x of
                    True -> case y of
                              True -> True
                              False -> False
                    False -> False
The case construct makes the order in which the arguments are examined explicit. Pattern matching allows more elegant, compact definitions.

Pattern matching: another example. The definition
leq O _ = True
leq (S _) O = False
leq (S x) (S y) = leq x y
can be translated into:
let leq = rec leq. \x y -> case x of
                             O -> True
                             S x1 -> case y of
                                       O -> False
                                       S y1 -> leq x1 y1
in ...
It is more complex to define mutually recursive functions: one recursively defines a tuple of functions, one element in a product type.

Recursive data types. It is necessary to solve domain equations:
data Nat = O | S Nat                           LazyNat = 1 + (LazyNat)⊥
data ListBool = Empty | Cons Bool ListBool     ListBool = 1 + T × (ListBool)⊥
It is sufficient to find CPOs satisfying the equation up to isomorphism, i.e. such that the left and right sides of the equation are isomorphic domains.

Domain equations. In general we consider the problem of solving a domain equation, e.g. Nat = 1 + Nat. In simple cases one can imitate the fixed-point solution on CPOs: start with the empty domain; evaluate the sequence of approximations; take the union (the limit of the approximations). Some care has to be taken, e.g. for L = (1 + L)⊥. Sometimes several solutions are possible: List = 1 + (Bool × List) is solved both by the finite lists and by the finite and infinite lists.
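The iteration just described can be imitated on plain finite sets; the sketch below (names mine) computes the chain of approximations for Nat = 1 + Nat, encoding the numeral n as n applications of 'S' to 'O':

```python
# Imitating the fixed-point solution of  Nat = 1 + Nat  at the level of sets.
# F(X) = {'O'} union { S x : x in X }; starting from the empty set, each
# iteration adds one more numeral, and the union of the chain is a
# set-level picture of the least solution.

def F(X):
    return {'O'} | {('S', x) for x in X}

def approximations(n):
    """The first n elements of the chain F(empty), F(F(empty)), ..."""
    X, chain = set(), []
    for _ in range(n):
        X = F(X)
        chain.append(X)
    return chain

# After k steps the approximation contains exactly the numerals 0 .. k-1.
```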

Recursive data types. In Haskell, one can define recursive types to which it is not obvious how to associate a CPO:
data FunBool = Fun [FunBool -> Bool]        FunBool = (FunBool → T)⊥
data Untyped = Abst [Untyped -> Untyped]    Un = (Un → Un)⊥
A difficult problem, left open for thirty years, solved by Dana Scott in 1969. There are several techniques for solving domain equations; all require an enrichment of CPO theory. We present the one based on Information Systems.

Information Systems. A different presentation of a CPO and its elements: an element of a CPO is described as a subset of elementary basic information. An information system A is given by: a set A of tokens (pieces of elementary information); Con ⊆ P_fin(A), a consistency predicate: Con defines the finite sets of consistent tokens; an entailment relation (implication) ⊢ ⊆ Con × A: X ⊢ a denotes that the information contained in X implies the information a. Examples: N, B, 1, 1⊥, B⊥, [B⊥ → B⊥], [B⊥ → [B⊥ → B⊥]].

Information System: definition. An information system A = (A, Con, ⊢) must meet the following conditions:
1. Con(X) and Y ⊆ X implies Con(Y);
2. for every a ∈ A, Con({a});
3. X ⊢ a implies Con(X ∪ {a}) and X ≠ ∅;
4. Con(X) and a ∈ X implies X ⊢ a;
5. Con(X), Con(Y), X ⊢ Y and Y ⊢ a implies X ⊢ a, where X ⊢ Y indicates ∀b ∈ Y. X ⊢ b.

From Information Systems to CPOs. The CPO |A| associated with the information system A = (A, Con, ⊢) is composed of those subsets x of A such that: x is consistent, i.e. for every Y ⊆_fin x, Con(Y); x is closed under the entailment relation, i.e. for every Y ⊆_fin x, Y ⊢ a implies a ∈ x. The order on |A| is the information order: x ⊑ y if and only if x ⊆ y. |A| is a CPO with a minimum element.
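For a finite information system, the elements of |A| can be enumerated by brute force; the sketch below (names mine) does so for the flat domain of booleans, where at most one of the tokens tt, ff may be held and entailment is trivial:

```python
from itertools import combinations

# Elements of |A|: the subsets of tokens that are consistent and closed
# under entailment.  con and entails are supplied as predicates.

def elements(tokens, con, entails):
    """All consistent, entailment-closed subsets of a finite token set."""
    subsets = [set(c) for r in range(len(tokens) + 1)
                      for c in combinations(tokens, r)]
    def ok(x):
        consistent = all(con(y) for y in subsets if y <= x)
        closed = all(a in x for y in subsets if y and y <= x
                            for a in tokens if entails(y, a))
        return consistent and closed
    return [x for x in subsets if ok(x)]

# Flat booleans: Con(X) iff |X| <= 1; X |- a iff a in X.
bool_elems = elements({'tt', 'ff'},
                      con=lambda X: len(X) <= 1,
                      entails=lambda X, a: a in X)
```

The three elements found (the empty set, {tt} and {ff}) are exactly ⊥, true and false of the flat boolean CPO, ordered by inclusion.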

Alternative Definition. One can also define the CPO ⟨A⟩ by adding the condition that x be different from the empty set. Winskel uses this second definition; |A| = (⟨A⟩)⊥. ⟨A⟩ can be a CPO without minimum element. |A| generates the CPO of computations, ⟨A⟩ the CPO of values.

Some comments. Different information systems can induce the same CPO. There are CPOs that cannot be defined through an information system. The class of CPOs defined by information systems is called the class of Scott domains (Dana Scott): the consistently complete, ω-algebraic CPOs. There are variants of information systems called coherent spaces: without the entailment relation, and with a binary coherence relation. Scott domains vs. information systems is an example of Stone duality: the elements of a space can be described by their properties.

Information System Builders. The empty information system: O = (∅, {∅}, ∅), with |O| = {∅}. Lifting: given the information system A = (A, Con, ⊢), define A⊥ = (A′, Con′, ⊢′) as:
A′ = in_l A ∪ {in_r •}
Con′(X) iff ∃Y with Con(Y). (X = in_l Y or X = in_l Y ∪ {in_r •})
X ⊢′ a iff a = in_r •, or ∃(Y ⊢ b). a = in_l b and (X = in_l Y or X = in_l Y ∪ {in_r •})
Proposition: ⟨A⊥⟩ = |A| and |A⊥| = (|A|)⊥.

Product. Given A = (A, Con_A, ⊢_A) and B = (B, Con_B, ⊢_B), define A × B = (in_l A ∪ in_r B, Con_{A×B}, ⊢_{A×B}) as:
Con_{A×B}(in_l X ∪ in_r Y) iff Con_A(X) and Con_B(Y)
(in_l X ∪ in_r Y) ⊢_{A×B} in_l a iff X ⊢_A a
(in_l X ∪ in_r Y) ⊢_{A×B} in_r b iff Y ⊢_B b
Proposition: |A × B| = |A| × |B|.

Coalesced product. Given A = (A, Con_A, ⊢_A) and B = (B, Con_B, ⊢_B), define A ⊗ B = (A × B, Con_{A⊗B}, ⊢_{A⊗B}) as:
Con_{A⊗B}(Z) iff Con_A(π₁ Z) and Con_B(π₂ Z)
Z ⊢_{A⊗B} c iff π₁ Z ⊢_A π₁ c and π₂ Z ⊢_B π₂ c
Proposition: ⟨A ⊗ B⟩ = ⟨A⟩ × ⟨B⟩.

Sum. Given the ISs A = (A, Con_A, ⊢_A) and B = (B, Con_B, ⊢_B), define A + B = (in_l A ∪ in_r B, Con_{A+B}, ⊢_{A+B}) as:
Con_{A+B}(X) iff X = in_l Y with Con_A(Y), or X = in_r Y with Con_B(Y)
(in_l X) ⊢_{A+B} in_l a iff X ⊢_A a
(in_r Y) ⊢_{A+B} in_r b iff Y ⊢_B b
Proposition: ⟨A + B⟩ is the coalesced sum of ⟨A⟩ and ⟨B⟩; |A + B| = |A| + |B|.

Function space. Given A = (A, Con_A, ⊢_A) and B = (B, Con_B, ⊢_B), define A → B = (C, Con_C, ⊢_C) as:
C = {(X, b) | X ∈ Con_A, b ∈ B}
Con_C({(X₁, b₁), ..., (X_n, b_n)}) iff ∀I ⊆ {1, ..., n}. Con_A(⋃_{i∈I} Xᵢ) implies Con_B({bᵢ | i ∈ I})
{(X₁, b₁), ..., (X_n, b_n)} ⊢_C (X, b) iff {bᵢ | X ⊢_A Xᵢ} ⊢_B b
Proposition: |A → B| = [|A| → |B|].

Strict function space. The space of strict (continuous) functions [C →_s D], the functions f such that f(⊥) = ⊥, is described by the IS that repeats the previous construction, modifying only the definition of the first point:
C = {(X, b) | X ∈ Con_A, X ≠ ∅, b ∈ B}
Proposition: |A →_s B| = [|A| →_s |B|] = [⟨A⟩ → (⟨B⟩)⊥].

Denotational semantics via information systems. ISs allow an alternative, dual, finitary presentation of the semantics of terms: a set of rules deriving judgements of the form Γ ⊢ t : a, stating that, under the assumptions Γ, the token a belongs to the semantics of t. Γ has the form x : {a₁, ..., a_n}, ..., y : {b₁, ..., b_m}, where x, ..., y are (possibly) the free variables in t; it assumes that the interpretation of x contains the tokens a₁, ..., a_n.

Rules

Γ ⊢ t₀ : m    Γ ⊢ t₁ : n
-------------------------  (o = m + n)
Γ ⊢ t₀ + t₁ : o

Γ ⊢ t₀ : 0    Γ ⊢ t₁ : a
-----------------------------------
Γ ⊢ if t₀ then t₁ else t₂ : a

Γ ⊢ t₀ : n    Γ ⊢ t₂ : a
-----------------------------------  (n ≠ 0)
Γ ⊢ if t₀ then t₁ else t₂ : a

Γ ⊢ t₁ : a
--------------------------
Γ ⊢ (t₁, t₂) : in_l(a)

Γ ⊢ t : in_l(a)
-----------------
Γ ⊢ fst t : a

Rules

Γ, x : {a₁, ..., aᵢ} ⊢ t : b
---------------------------------
Γ ⊢ λx.t : ({a₁, ..., aᵢ}, b)

Γ ⊢ t₁ : ({a₁, ..., aᵢ}, b)    Γ ⊢ t₂ : a₁  ...  Γ ⊢ t₂ : aᵢ
--------------------------------------------------------------
Γ ⊢ (t₁ t₂) : b

Γ, x : {a₁, ..., aᵢ} ⊢ t₁ : b    Γ ⊢ t₂ : a₁  ...  Γ ⊢ t₂ : aᵢ
----------------------------------------------------------------
Γ ⊢ let x ⇐ t₂ in t₁ : b

Γ, x : {a₁, ..., aᵢ} ⊢ t : b    Γ ⊢ rec x.t : a₁  ...  Γ ⊢ rec x.t : aᵢ
-------------------------------------------------------------------------
Γ ⊢ rec x.t : b

An Order on ISs. We define an order relation ⊴ on ISs: (A, Con_A, ⊢_A) ⊴ (B, Con_B, ⊢_B) iff:
  A ⊆ B;
  Con_A(X) iff X ⊆ A and Con_B(X);
  X ⊢_A a iff (X ∪ {a}) ⊆ A and X ⊢_B a.
Proposition: the set of ISs with ⊴ forms a CPO; the minimum element is O, and the limit of a chain A₀ ⊴ A₁ ⊴ A₂ ⊴ ..., with Aᵢ = (Aᵢ, Conᵢ, ⊢ᵢ), is obtained through union: ⊔_{i∈ω} Aᵢ = (⋃_{i∈ω} Aᵢ, ⋃_{i∈ω} Conᵢ, ⋃_{i∈ω} ⊢ᵢ).

Continuity. Proposition: the constructors on ISs (( )⊥, +, ×, ⊗, →) are continuous w.r.t. ⊴. Corollary: each constructor built using the base constructors is continuous and admits a fixed point.

Eager language with recursive types. Types:
τ ::= 1 | τ₁ * τ₂ | τ₁ → τ₂ | τ₁ + τ₂ | X | μX.τ
Expressions:
t ::= x | () | (t₁, t₂) | fst t | snd t | λx.t | (t₁ t₂) | inl(t) | inr(t) | case t of inl(x₁).t₁, inr(x₂).t₂ | abs(t) | rep(t) | rec f.λx.t
One basic type, 1. Nat is defined as a recursive type.

Type checking. Each variable has a single associated type, x : τ_x; induction rules on the structure of the term: each syntactic construct has a typing rule. New rules:

unit:  () : 1

abstraction:      t : τ[μX.τ/X]
                 ----------------
                 abs(t) : μX.τ

representation:   t : μX.τ
                 --------------------
                 rep(t) : τ[μX.τ/X]

abs and rep are explicit type conversions (casts), necessary to ensure the uniqueness of types; the CPO associated with μX.τ is isomorphic to the one associated with τ[μX.τ/X].

Operational Semantics: Set of Values. Because of recursive types, values must be defined by a set of inductive rules, and not by induction on the structure of the type:

c : C_{τ[μX.τ/X]}
-------------------
abs(c) : C_{μX.τ}

Reduction rules. New rules for terms of recursive types:

t ⇓ c
-------------------
abs(t) ⇓ abs(c)

t ⇓ abs(c)
-------------
rep(t) ⇓ c

Denotational Semantics, by information systems. Types are associated with information systems by means of: a type environment χ; a set of definitions, by induction on the type structure:
  V⟦1⟧χ = O⊥ = ({•}, {∅, {•}}, {{•} ⊢ •})
  V⟦τ₁ * τ₂⟧χ = V⟦τ₁⟧χ × V⟦τ₂⟧χ
  V⟦τ₁ + τ₂⟧χ = V⟦τ₁⟧χ + V⟦τ₂⟧χ
  V⟦τ₁ → τ₂⟧χ = (V⟦τ₁⟧χ → V⟦τ₂⟧χ)
  V⟦X⟧χ = χ(X)
  V⟦μX.τ⟧χ = μI.V⟦τ⟧χ[I/X]
Note: |V⟦τ⟧| defines the computation CPO associated with τ, and ⟨V⟦τ⟧⟩ the value CPO associated with τ. To simplify, we do not define two separate information systems (for computations and values).

Denotational Semantics. The equations should be reformulated to fit the domains defined in terms of information systems. abs and rep are represented by the identity function. The usual properties of correctness and adequacy apply. The semantics can also be defined through rules, in type-assignment-system style.

Denotational semantics via a Type Assignment System. A set of rules deriving judgements of the form Γ ⊢ t : a, stating that, under the assumptions Γ, the token a belongs to the semantics of t. Γ has the form x : {a₁, ..., a_n}, ..., y : {b₁, ..., b_m}, stating that the interpretation of x (i.e. ρ(x)) contains the tokens a₁, ..., a_n, that is, {a₁, ..., a_n} ⊆ ρ(x).

Rules

Γ ⊢ () : •

1 ≤ i ≤ n
----------------------------------
Γ, x : {a₁, ..., a_n} ⊢ x : aᵢ

Γ ⊢ t₁ : a₁    Γ ⊢ t₂ : a₂
-----------------------------
Γ ⊢ (t₁, t₂) : (a₁, a₂)

Γ ⊢ t : (a₁, a₂)
------------------
Γ ⊢ fst t : a₁

Rules

Γ, x : {a₁, ..., aᵢ} ⊢ t : b
---------------------------------
Γ ⊢ λx.t : ({a₁, ..., aᵢ}, b)

Γ ⊢ λx.t : •

Γ ⊢ t₁ : ({a₁, ..., aᵢ}, b)    Γ ⊢ t₂ : a₁  ...  Γ ⊢ t₂ : aᵢ
--------------------------------------------------------------
Γ ⊢ (t₁ t₂) : b

Γ, x : {a₁, ..., aᵢ} ⊢ t : b    Γ ⊢ rec x.t : a₁  ...  Γ ⊢ rec x.t : aᵢ
-------------------------------------------------------------------------
Γ ⊢ rec x.t : b

Lazy language with recursive types. Types:
τ ::= 0 | τ₁ * τ₂ | τ₁ → τ₂ | τ₁ + τ₂ | X | μX.τ
Expressions:
t ::= x | (t₁, t₂) | fst t | snd t | λx.t | (t₁ t₂) | inl(t) | inr(t) | case t of inl(x₁).t₁, inr(x₂).t₂ | abs(t) | rep(t) | rec x.t
The type 0 contains no values: its terms are divergent, with no reduction rule.

Operational Semantics. The eager rules still apply, while the values now are (t₁, t₂), inl(t), inr(t) for arbitrary terms, together with:

c : C_{τ[μX.τ/X]}
-------------------
abs(c) : C_{μX.τ}

The eager rules for terms of recursive types apply:

t ⇓ c
-------------------
abs(t) ⇓ abs(c)

t ⇓ abs(c)
-------------
rep(t) ⇓ c

Denotational Semantics
  V⟦0⟧χ = O
  V⟦τ₁ * τ₂⟧χ = (V⟦τ₁⟧χ)⊥ × (V⟦τ₂⟧χ)⊥
  V⟦τ₁ + τ₂⟧χ = (V⟦τ₁⟧χ)⊥ + (V⟦τ₂⟧χ)⊥
  V⟦τ₁ → τ₂⟧χ = ((V⟦τ₁⟧χ)⊥ → (V⟦τ₂⟧χ)⊥)
  V⟦X⟧χ = χ(X)
  V⟦μX.τ⟧χ = μI.V⟦τ⟧χ[I/X]

Rules

1 ≤ i ≤ n
----------------------------------
Γ, x : {a₁, ..., a_n} ⊢ x : aᵢ

Γ ⊢ (t₁, t₂) : •

Γ ⊢ t₁ : a₁
--------------------------
Γ ⊢ (t₁, t₂) : in_l a₁

Γ ⊢ t : in_l a₁
-----------------
Γ ⊢ fst t : a₁

Rules ({a₁, ..., aᵢ} can be ∅)

Γ, x : {a₁, ..., aᵢ} ⊢ t : b
---------------------------------
Γ ⊢ λx.t : ({a₁, ..., aᵢ}, b)

Γ ⊢ λx.t : •

Γ ⊢ t₁ : ({a₁, ..., aᵢ}, b)    Γ ⊢ t₂ : a₁  ...  Γ ⊢ t₂ : aᵢ
--------------------------------------------------------------
Γ ⊢ (t₁ t₂) : b