An Operational and Axiomatic Semantics for Non-determinism and Sequence Points in C

Robbert Krebbers
ICIS, Radboud University Nijmegen, The Netherlands
mail@robbertkrebbers.nl

Abstract

The C11 standard of the C programming language does not specify the execution order of expressions. Moreover, to make more effective optimizations possible (e.g. delaying of side-effects and interleaving), it gives compilers in certain cases the freedom to use even more behaviors than just those of all execution orders. Widely used C compilers actually exploit this freedom given by the C standard for optimizations, so it should be taken seriously in formal verification. This paper presents an operational and axiomatic semantics (based on separation logic) for non-determinism and sequence points in C. We prove soundness of our axiomatic semantics with respect to our operational semantics. This proof has been fully formalized using the Coq proof assistant.

Categories and Subject Descriptors: D.3.1 [Programming Languages]: Formal Definitions and Theory; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs

Keywords: Operational Semantics, Separation Logic, C Verification, Interactive Theorem Proving, Coq

(Part of this research has been done while the author was visiting INRIA Paris-Rocquencourt, France.)

POPL '14, January 22-24, 2014, San Diego, CA, USA. Copyright is held by the owner/author(s). ACM 978-1-4503-2544-8/14/01. http://dx.doi.org/10.1145/2535838.2535878

1. Introduction

The C programming language [16, 17] is not only among the most popular programming languages in the world, but also among the most dangerous. Due to weak static typing and the absence of runtime checks, it is extremely easy for C programs to have bugs that make the program crash or behave badly in other ways: NULL pointers can be dereferenced, arrays can be accessed outside their bounds, memory can be used after it is freed, and so on. Furthermore, C programs can be developed with a too specific interpretation of the language in mind, giving portability and maintenance problems later.

Instead of forcing compilers to use a predefined execution order for expressions (e.g. left to right), the C standard leaves this order unspecified. This is a common cause of portability and maintenance problems, as a compiler may use an arbitrary execution order for each expression. Hence, to prove the correctness of a C program with respect to an arbitrary compiler, one has to verify that each possible execution order is legal and gives the correct result. To make more effective optimizations possible (e.g. delaying of side-effects and interleaving), the C standard requires the programmer to ensure that all execution orders satisfy certain conditions. If these conditions are not met, the program may do anything. Let us take a look at an example where one of those conditions is not met.
int main() {
  int x;
  int y = (x = 3) + (x = 4);
  printf("%d %d\n", x, y);
}

By considering all possible execution orders, one would naively expect this program to print 4 7 or 3 7, depending on whether the assignment x = 3 or x = 4 is executed first. However, the sequence point restriction does not allow an object to be modified more than once (or to be read after being modified) between two sequence points [16, 6.5p2]. A sequence point occurs, for example, at the end ; of a full expression, before a function call, and after the first operand of the conditional ?: operator [16, Annex C]. Hence, both execution orders lead to a sequence point violation, and are thus illegal. As a result, the execution of this program exhibits undefined behavior, meaning it may do literally anything.

The C standard uses a garbage in, garbage out principle for undefined behavior so that compilers do not have to generate (possibly expensive) runtime checks to handle corner cases. A compiler therefore does not have to generate code to test whether a sequence point violation occurs, but instead is allowed to assume that no sequence point violations will occur, and can use this assumption to perform more effective optimizations. Indeed, when compiled with gcc -O1 (version 4.7.2), the above program prints 4 8, which does not correspond to any of the execution orders.

Non-determinism in C is even more unrestrained than some may think. That the execution order in e1 + e2 is unspecified does not mean that either e1 or e2 will be executed entirely before the other. Instead, it means that execution can be interleaved: first a part of e1, then a part of e2, then another part of e1, and so on. Hence, the following expression is also allowed to print bac.

printf("a") + (printf("b") + printf("c"));

Many existing tools for C verification determinize the target program in one of their first phases, and let the user verify the correctness of the determinized version. When targeting a specific compiler for which the execution order is known, and which does not perform optimizations based on the sequence point restriction, this approach works. But to prove the correctness of a program with respect to an arbitrary compiler, this approach is insufficient even if all possible execution orders are considered (see the gcc counterexample above).
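Unspecified evaluation order is a portability hazard even when no undefined behavior is involved. The following program is a minimal sketch of our own (not an example from the paper): both calls modify the same global, but because the modifications happen inside function calls, which are separated from the rest of the expression by sequence points and may not interleave, the program merely has two legal outcomes instead of undefined behavior. A verifier that fixes one evaluation order would miss one of them.

#include <stdio.h>

static int x = 0;

int set(int v) { x = v; return v; }

int main(void) {
  /* set(1) and set(2) are indeterminately sequenced: either may run first,
     but their bodies do not interleave. Hence x ends up as 1 or 2, and the
     program prints "1 3" or "2 3"; both results are allowed. */
  int y = set(1) + set(2);
  printf("%d %d\n", x, y);
  return 0;
}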

Some verification tools perform syntactic checks to exclude sequence point violations. However, as it is undecidable whether a sequence point violation may occur [12], determinization combined with such checks is either unsound (slides 13 and 15 of Ellison and Rosu's POPL 2012 talk [13] present some examples in existing tools), or will exclude valid C programs.

Approach. As a step towards taking non-determinism and the sequence point restriction seriously in C verification, we extend the small step operational semantics and separation logic for non-local control flow of Krebbers and Wiedijk [21] with non-deterministic expressions with side-effects and the sequence point restriction. Soundness of the axiomatic semantics is proved with respect to the operational semantics using the Coq proof assistant.

The straightforward approach to verification of programs with non-determinism is to consider all possible execution orders. However, naively this may result in a combinatorial explosion, and it is also not entirely clear how to use this approach in an axiomatic semantics that also handles the sequence point restriction. Hence, we take a different approach: we extend the axiomatic semantics with a Hoare judgment {P} e {Q} for expressions. As usual, P is an assertion called the precondition. Like Von Oheimb [29], we let the postcondition Q be a function from values to assertions, because expressions not only have side-effects but primarily yield a value. Intuitively, the judgment {P} e {Q} means that if P holds for the memory beforehand, and execution of e yields a value v, then Q v holds for the resulting memory afterwards. Besides partial program correctness, the judgment {P} e {Q} ensures that e exhibits no undefined behavior due to the sequence point restriction.

To deal with the unrestrained non-determinism in C, we observe that non-determinism in expressions corresponds to a form of concurrency, which separation logic is well capable of dealing with. Inspired by the rule for the parallel composition of separation logic (see [27]), we propose the following kind of rule for each binary operator ⊙:

  {Pl} el {Ql}    {Pr} er {Qr}
  ------------------------------
  {Pl ∗ Pr} el ⊙ er {Ql ∗ Qr}

The idea is that, if the memory can be split up into two disjoint parts (using the separating conjunction ∗), in which the subexpressions el respectively er can be executed safely, then the full expression el ⊙ er can be executed safely in the whole memory. The actual rules of the axiomatic semantics (see Section 5) are more complicated: we have to deal with the return value, and have to account for undefined behavior due to integer overflow.

To ensure no sequence point violations occur, we use separation logic with permission accounting. Like in ordinary separation logic with permissions [6], the singleton assertion becomes e1 ↦x e2, where x is the permission of the object e2 at address e1, but we introduce a special class of locked permissions. The inference rules of our axiomatic semantics are set up in such a way that reads and writes are only allowed for objects that are not locked, and moreover such that objects become locked after they have been written to. At constructs that contain a sequence point, the inference rules ensure that these locks are released. Fractional permissions [7] are used so that memory that will not be written to can be shared by multiple subexpressions.

Our approach to handling non-determinism and sequence points at the level of the operational semantics is inspired by Ellison and Rosu [12] and Norrish [25].
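To give a feel for what the permission discipline sketched above admits, the following snippet is our own illustration (not an example from the paper). It contrasts expressions that the rule above can verify with one that it rules out: shared reads are possible by splitting a fractional permission over both operands, whereas a write consumes the full permission and locks the object until the next sequence point.

int main(void) {
  int x = 1, y = 0, z = 0;
  z = x + x;        /* fine: both operands only need a read fragment of x   */
  y = (x = 2) + z;  /* fine: the two operands touch disjoint objects x, z   */
  /* y = (x = 2) + x;   sequence point violation: the write locks x, so no
                        permission is left for the read in the other
                        operand, and no rule of the above form applies.     */
  return y;
}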
We annotate each object in memory with a permission that is changed into a locked variant whenever a write occurs. This permission is changed back into the unlocked variant at the subsequent sequence point. Furthermore, we have a special state undef for undefined behavior that is used, for example, whenever a write to a locked object occurs.

To bring the operational semantics closer to our axiomatic semantics, we make locks local to the subexpression where they were created. That is, at a sequence point we only unlock objects that have been locked by that particular subexpression, instead of unlocking all objects. This modification lets some artificial programs that were legal by the C standard exhibit undefined behavior (see page 5), but is not unsound for program verification.

An important part of both the operational and axiomatic semantics is the underlying permission system. We give an abstract specification of it (as an extension of permission algebras [8]), and give various instances of this abstraction.

Related work. Non-determinism, side-effects in expressions, and sequence points have been treated numerous times in formal treatments of C, but to our surprise, there is little evidence of work on program verification and program logics for these concepts.

The first formalization of a significant part of C is due to Norrish [25] using the proof assistant HOL4. An important part of his work was to accurately formalize non-determinism and sequence points as described by the C89 standard. He proved various Hoare rules for statements to facilitate reasoning about C programs. To reason about non-deterministic expressions, he proved that execution of sequence point free expressions is confluent [26]. This result is also useful for more efficient symbolic execution. Reasoning about arbitrary C expressions was left as an open problem.

Papaspyrou [30] has given a denotational semantics for a part of C89, including non-determinism and sequence points. He has implemented his semantics in Haskell to obtain a tool that can be used to explore all behaviors of a C program. Papaspyrou did not consider an axiomatic semantics.

More recently, Ellison and Rosu [12] have defined an executable semantics of the C11 standard using the K-framework. Their semantics is very comprehensive and also describes non-determinism and the sequence point restriction. Moreover, since their semantics is effectively executable, it can be used as a debugger and an interpreter to explore all behaviors of a C program. It has been thoroughly tested against C test suites, and has been used by Regehr et al. to find bugs in widely used C compilers [31].

CompCert, a verified C compiler by Leroy et al. written in Coq [23], supports non-determinism in expressions in the semantics of its source language. The interpreter by Campbell [9] can be used to explore this non-determinism. Krebbers [18] has extended CompCert's source language and interpreter with the sequence point restriction in the style of Ellison and Rosu [12].

There have been various efforts to verify programs in CompCert C. Appel and Blazy's axiomatic semantics for CompCert [2] operates on an intermediate language in which expressions have been determinized and side-effects have been removed. Their axiomatic semantics is thus limited to verification of programs compiled with CompCert, and will not work for arbitrary compilers.
Herms [15] has formalized a verification condition generator based on the Why platform in Coq that can be used as a standalone tool via Coq's extraction mechanism. He proved the tool's soundness with respect to an intermediate language of CompCert, and it thus suffers from the same limitations as Appel and Blazy's work.

Black and Windley [5] have developed an axiomatic semantics for C. They define inference rules to factor out side-effects of expressions by translating these into semantically equivalent versions. Their axiomatic semantics seems rather limited, and soundness has not been proven with respect to an operational semantics.

Extending an axiomatic semantics with a judgment for expressions is not new, and has been done for example by Von Oheimb [29] for Java in the proof assistant Isabelle. His judgments for expressions are quite similar to ours, but his inference rules are not. Since he considered Java, he was able to use the fact that the execution order of expressions is fully defined, which is not the case for C.

Contribution. Our contribution is fourfold: We define an abstract interface for permissions on top of which the memory model is defined, and present an algebraic method to reason about disjoint memories (Section 2). We define a small step operational semantics that handles nondeterminism and sequence points (Section 3 and 4). We give an axiomatic semantics that allows reasoning about programs with non-determinism. This axiomatic semantics ensures that no undefined behavior (e.g. sequence point violations and integer overflow) occurs (Section 5). We prove the soundness of our axiomatic semantics (Section 6). This proof, together with some extensions (Section 7), has been fully formalized using Coq (Section 8). As this paper describes a large formalization effort, we often omit details and proofs. The interested reader can find all details online as part of our Coq formalization. 2. The memory and permissions We model memories as finite partial functions from some countable set of memory indices (b index) to pairs of values and permissions. A value is either indet (which is used for uninitialized memory and the return value of functions without explicit return), a bounded integer, a pointer, or a NULL-pointer. Values are untyped (apart from integer values) and intentionally kept simple to focus on the issues of the paper. DEFINITION 2.1. A partial function from A to B is a (total) function from A to B opt, where B opt is the option type, defined as containing either or x for some x B. A partial function is called finite if its domain is finite. The operation f[x := y] stores the value y at index x, and f[x := ] deletes the value at index x. DEFINITION 2.2. Integer types and values are defined as: τ inttype ::= signed char unsigned char signed int... v val ::= indet int τ n ptr b NULL For an integer value int τ n, the mathematical integer n Z should be within the bounds of τ. A value v is true, notation istrue v, if it is of the shape int τ n with n 0, or ptr b. It is false, notation isfalse v, if it is of the shape int τ 0 or NULL. Notice that indet is neither true nor false, because at an actual machine uninitialized memory has arbitrary contents. In order to abstract from a concrete choice for a permission system (no permissions, simple read/write/free permissions, fractional permissions, etc.) in the definition of the memory, we define an abstract interface for permissions. We organize permissions using four different permission kinds (pkind): Free, which allows all operations (read, write, free), Write, which allows just reading and writing, Read, which allows solely reading, and Locked, which is temporarily used to lock an object between a write to it and a subsequent sequence point. This organization is inspired by Leroy et al. [24], but differs by the fact that we abstract away from a concrete implementation and allow much more complex permission systems (e.g. those based on fractional permissions). These are needed for our separation logic. We define a partial order on permission kinds as the reflexivetransitive closure of Read Write, Locked Write and Write Free. Since read-only memory cannot be used for writing, and therefore cannot be locked, the kinds Locked and Read are incomparable on purpose. DEFINITION 2.3. A permission system consists of a set P, binary relations and, binary operations and \, and functions kind : P pkind and lock, unlock : P P, satisfying: 1. (P, ) is a partial order 2. is symmetric 3. (P, ) is a commutative semigroup 4. If x y, then kind x kind y 5. 
If x y, then kind x = Read 6. If Write kind x, then unlock (lock x) = x 7. If kind x Locked, then unlock x = x 8. kind (unlock x) Locked 9. If x y and x x, then x y 10. If x y z, then x z and x y z 11. If x y, then x x y 12. If x y, then z x z y 13. If z x, z y, and z x z y, then x y 14. If x y, then x y \ x and x y \ x = y Permission systems extend permission algebras by Calcagno et al. [8] by organizing permissions using kinds, and by having operations for locking and unlocking. Whereas is a partial function in a permission algebra, we require it to be a total function, and account for partiality using a relation to describe that two permissions are disjoint and may be joined. For permissions that are not disjoint, the result of is not specified in the definition of a permission system. This relieves us from dealing with partial functions in Coq. Lastly, we require \ to be a primitive so we can lift it to an operation on memories (see Definition 2.9) that is an actual Coq function. Dockins et al. [11] remedy the issue of partially by defining as a relation instead of a function. But as for the operation \, we prefer to use functions to ease reasoning about memories. Rule 5 ensures that only permissions whose kind is Read are disjoint. This is to ensure that only readable parts of disjoint memories may overlap. Before we define memories, we define three instances of permission systems. We begin with fractional permissions [7]. DEFINITION 2.4. Fractional permissions (z frac) are rational numbers within (0, 1]. We let z 1 z 2 iff z 1 z 2, and z 1 z 2 iff z 1 + z 2 1. The operations are defined as: z 1 z 2 := z 1 \ z 2 := z 1 + z 2 if z 1 + z 2 1 1 otherwise z 1 z 2 if 0 < z 1 z 2 1 otherwise Write if z = 1 kind z := Read otherwise lock z := unlock z := z Notice that in the above definition we yield the dummy value 1 for z 1 z 2 if z 1 z 2, and z 1 \ z 2 if z 2 z 1. To account for locks due to the sequence point restriction, we extend fractional permissions with a special permission Seq. DEFINITION 2.5. Sequenceable permissions are defined as: s seqfrac ::= Seq UnSeq z Disjointness is inductively defined as: 1. if z 1 z 2 then UnSeq z 1 UnSeq z 2 The order is inductively defined as: 1. Seq Seq 2. if z 1 z 2, then UnSeq z 1 UnSeq z 2 The operations and \ are defined point-wise, and: kind Seq := Locked lock Seq := Seq unlock Seq := UnSeq 1 kind (UnSeq z) := kind z lock (UnSeq z) := Seq unlock (UnSeq z) := UnSeq z
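As a concrete reading of Definition 2.4, the following C sketch is ours (the paper's formalization uses exact rationals in Coq, so the doubles here are only an approximation, and all identifiers are illustrative). It spells out the operations on fractional permissions, including the dummy value 1 returned when the operands are not disjoint.

/* Fractional permissions: rationals in (0,1], approximated here by doubles. */
typedef double frac;

/* Locked and Free are only used by the richer instances of Defs. 2.5 and 2.6. */
typedef enum { K_READ, K_WRITE, K_LOCKED, K_FREE } pkind;

int   frac_subseteq(frac z1, frac z2) { return z1 <= z2; }        /* order        */
int   frac_disjoint(frac z1, frac z2) { return z1 + z2 <= 1.0; }  /* disjointness */
frac  frac_union(frac z1, frac z2) {                              /* join         */
  return (z1 + z2 <= 1.0) ? z1 + z2 : 1.0;  /* dummy value 1 if not disjoint      */
}
frac  frac_diff(frac z1, frac z2) {                               /* difference   */
  return (0.0 < z1 - z2) ? z1 - z2 : 1.0;   /* dummy value 1 if z2 is not below z1 */
}
pkind frac_kind(frac z)   { return z == 1.0 ? K_WRITE : K_READ; }
frac  frac_lock(frac z)   { return z; }     /* locking only becomes meaningful     */
frac  frac_unlock(frac z) { return z; }     /* with the Seq/UnSeq layer of Def. 2.5 */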

We extend sequenceable permissions to account for some subtleties of C. First of all, we need to deal with objects that are declared using the const qualifier (those are read-only). Secondly, whereas dynamically allocated memory obtained via alloc can be freed manually using free, memory of block scope variables cannot be freed using free. Freeing it should therefore be prohibited by the permission system. DEFINITION 2.6. Full permissions are defined as: perm ::= Freeable s Writable s ReadOnly z Here, all relations and operations are defined point-wise by lifting those on fractional and sequenceable permissions. We use the abbreviations F, W and R for the permissions Freeable (UnSeq 1), Writable (UnSeq 1) and ReadOnly 1, respectively. Given an arbitrary permission system with carrier P, the definition of the memory is now straightforward. DEFINITION 2.7. Memories (m mem) are finite partial functions from memory indices to pairs (v, x) with v val and x P. First we define the operations that are used by the operational semantics (Section 4) to interact with the memory. DEFINITION 2.8. The memory operations are defined as: x if m b = (v, x) perm b m := otherwise v if m b = (v, x) and kind x Locked m!! b := otherwise m[b := (v, x)] if m b = (v, x) m[b := v] := m otherwise alloc b v x m := m[b := (v, x)] free b m := m[b := ] locks m := b m b = (v, x) kind x = Locked} m[b := (v, lock x)] if m b = (v, x) lock b m := m otherwise unlock Ω m := (b, (v, unlock x)) b Ω m b = (v, x)} Here, we have Ω fin index. (b, (v, x)) b / Ω m b = (v, x)} The function perm is used to obtain the permission of an object. Allocation alloc b v x m of an object with value v and permission x should only be used with unused indices b in m (i.e. with perm b m = ). Likewise, a store [ := ] and deallocation using free, should only be used when the permissions are appropriate. We will take care of these side-conditionals in the rules of the operational semantics. Next, we define the memory operations that are used by the axiomatic semantics (Section 5). DEFINITION 2.9. The separation memory relations are defined as: We let m 1 m 2 iff for all b, v 1, x 1, v 2 and x 2 with m 1 b = (v 1, x 2) and m 2 b = (v 2, x 2) we have v 1 = v 2 and x 1 x 2. We let m 1 m 2 iff for all b, v 1 and x 1 with m 1 b = (v, x 1) there exists an x 2 x 1 with m 2 b = (v, x 2). The separation memory operations are defined as: m 1 m 2 := (b, (v 1, x 1 x 2 )) m 1 b = (v 1, x 1 ) m 2 b = (v 2, x 2 )} (b, (v 1, x 1 )) m 1 b = (v 1, x 1 ) m 2 b = } (b, (v 2, x 2 )) m 1 b = m 2 b = (v 2, x 2 )} m 1 \ m 2 := (b, (v 1, x 1 \ x 2 )) m 1 b = (v 1, x 1 ) m 2 b = (v 2, x 2 ) x 2 x 1 } (b, (v 1, x 1 )) m 1 b = (v 1, x 1 ) m 2 b = } The union and empty memory form a monoid that is commutative and cancellative for disjoint memories. For the soundness proof of our axiomatic semantics (Section 6) we often need to reason about preservation of disjointness under memory operations. To ease that kind of reasoning, we define a relation m 1 m 2 that describes that the memories m 1 and m 2 behave equivalently with respect to disjointness. DEFINITION 2.10. Disjointness of a list of memories m, notation m, is inductively defined as: 1. [ ] 2. If m m and m, then (m :: m) Notice that m is stronger than merely having m i m j for each i j. For example, using fractional permissions, we do not have [ (b, (v, 0.5))}, (b, (v, 0.5))}, (b, (v, 0.5))} ], whereas we clearly do have (b, (v, 0.5))} (b, (v, 0.5))}. DEFINITION 2.11. 
Equivalence of m 1 and m 2 with respect to disjointness, notation m 1 m 2, is defined as: m 1 m 2 := m. (m :: m 1) (m :: m 2) m 1 m 2 := m 1 m 2 m 2 m 1 It is straightforward to show that is reflexive and transitive, being respected by concatenation of sequences, and is being preserved by list containment. Hence, is an equivalence relation, a congruence with respect to concatenation of sequences, and also being preserved by permutations. The following results allow us to reason algebraically about disjoint memories. FACT 2.12. If m 1 m 2 and m 1, then m 2. THEOREM 2.13. We have the following algebraic properties: 1. :: m m 2. (m 1 m 2) :: m m 1 :: m 2 :: m provided that m 1 m 2 3. m 1 :: m 2 m 1 ++ m 2 provided that m 1 4. m[b := v] :: m m :: m provided that perm b m = x for some x that is not a fragment 5. (b, (v 1, x))} :: m (b, (v 2, x))} :: m provided x is not a fragment 6. m 2 :: m m 1 :: (m 2 \ m 1) :: m provided that m 1 m 2 7. m :: m lock b m :: m provided that perm b m = x for some x that is not a fragment 8. m :: m unlock Ω m :: m Here, a permission x is a fragment if there is a y such that x y. 3. The language In this section we define the syntax of expressions and statements. DEFINITION 3.1. Expressions are defined as: binop ::= == <= + - * / %... e expr ::= x i [v] Ω e 1 := e 2 f( e) load e alloc Instead of [v], we just write v. free e e 1 e 2 e 1? e 2 : e 3 (τ) e Expressions may contain side-effects: assignments e 1 := e 2, function calls f( e), and allocation alloc and deallocation free e of dynamic memory. Unary operators, prefix and postfix increment

and decrement, assignment operators, and the comma operator, are omitted in this paper, but are included in the Coq formalization. The logical operators are defined in terms of the conditional. Values [v] Ω are annotated with a finite set Ω of locked memory indices. This set is initially empty, but whenever a write is performed, the written object is locked in memory and its memory index is added to Ω (see Section 4). Whenever a sequence point occurs, the locked objects in Ω will be unlocked in memory. The operation locks e collects the annotated locks in e. Stacks (ρ stack) are lists of memory indices. Variables are De Bruijn indices, i.e. the variable x i refers to the ith memory index on the stack. De Bruijn indices avoid us from having to deal with shadowing due to block scope variables. Especially in the axiomatic semantics this is useful, as we do not want to lose information by a local variable shadowing an already existing one. The stack contains references to the value of each variable instead of the values itself so as to treat pointers to both local and allocated storage in a uniform way. Execution of a variable x i thus consists of looking up its address b at position i in the stack, and returning a pointer ptr b to that address. Execution of load e consists of evaluating e to ptr b and looking up b in the memory. DEFINITION 3.2. Statements are defined as: s stmt ::= e skip goto l return e block c s s 1 ; s 2 l : s while(e) s if (e) s 1 else s 2 The statement syntax is adapted from Krebbers and Wiedijk [21], but we treat assignments and function calls as expression constructs instead of statements constructs. Hence, assignments and function calls can be nested, and can occur in the expressions of a return, while, and conditional statement. The construct block c s opens a block scope with one local variable, where the boolean c specifies whether the variable is const-qualified or not. The permission blockperm c is defined as: blockperm True := R and blockperm False := W. Since we use De Bruijn indices, the construct block c s is nameless. 4. Operational semantics We define the semantics of expressions and statements by a small step operational semantics. That means, computation is defined by the reflexive transitive closure of a reduction relation on program states. To define this reduction, we first define head reduction of expressions, and then use evaluation contexts (as introduced by Felleisen et al. [14]) to define reduction of whole programs. In the remainder of this paper, we will use a memory instantiated with full permissions (Definition 2.6). DEFINITION 4.1. Given a stack ρ, head reduction of expressions (e 1, m 1) h (e 2, m 2) is inductively defined as: 1. (x i, m) h (ptr b, m), in case ρ i = b 2. ([ptr b] Ω1 :=[v] Ω2, m) h ([v] b} Ω1 Ω 2, lock b (m[b:=v])), in case perm b m = and Write kind 3. (load [ptr b] Ω, m) h ([v] Ω, m), for any v with m!! b = v 4. (alloc, m) h (ptr b, alloc b indet F m), for any b with perm b m = 5. (free [ptr b] Ω, m) h ([indet] Ω, free b m), in case perm b m = and kind = Free 6. ([v 1] Ω1 [v 2] Ω2, m) h ([v ] Ω1 Ω 2, m), in case v 1 v 2 = v 7. ([v] Ω? e 2 : e 3, m) h (e 2, unlock Ω m), in case istrue v 8. ([v] Ω? e 2 : e 3, m) h (e 3, unlock Ω m), in case isfalse v 9. ((τ) [v] Ω, m) h ([v ] Ω, m), in case (τ) v = v If the stack ρ is not clear from the context, we write ρ (e 1, m 1) h (e 2, m 2). Note that picking an unused index for allocation in Rule 4 is non-deterministic. 
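Rules 2, 7 and 8 can be seen at work on ordinary C code. The sketch below is our own example: the first full expression is accepted because the sequence point after the first operand of ?: unlocks x before the second write, whereas the commented-out expression leaves x locked when the second write happens, so the redex cannot be contracted and the program ends up in the undef state.

int main(void) {
  int x = 0;
  (x = 1) ? (x = 2) : 0;   /* defined: Rule 2 locks x at the first write, and
                              Rule 7 unlocks it at the sequence point after the
                              first operand of ?:                               */
  /* (x = 1) + (x = 2);       undefined: the second write finds x locked, so the
                              redex is unsafe (see Definition 4.10)             */
  return x;
}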
In Rules 6 and 9, v1 ⊙ v2 and (τ) v are partial functions that evaluate a binary operation and an integer cast on values, respectively. These functions fail in case of integer overflow, or if an operation is used wrongly (e.g. division by zero).

An assignment [ptr b]Ω1 := [v]Ω2 (Rule 2) not only stores the value v at index b, but also locks b in the memory. Locking b enforces the sequence point restriction, because consecutive reads and writes to b will fail (as ensured by the side-condition of the assignment rule, and Definition 2.8 of the !! operation). In order to keep track of the lock of b, we add b to the set {b} ∪ Ω1 ∪ Ω2. Rules 7 and 8 for the conditional [v]Ω ? e2 : e3 model a sequence point by unlocking the indices Ω in the memory, making future reads and writes possible again. Other constructs with a sequence point will be given a similar semantics (see Definition 4.10).

Like Ellison and Rosu [12], we implicitly use non-determinism to capture undefined behavior due to sequence point violations. For example, in x + (x = 10) only one execution order (performing the read after the assignment) leads to a sequence point violation. In Norrish's semantics [25] both execution orders lead to a sequence point violation, as he also keeps track of reads.

Our treatment of sequence points assigns undefined behavior to more programs than the C standard and Ellison and Rosu do. Ellison and Rosu release the locks of all objects at a sequence point, whereas we just release the locks that have been created by the subexpression where the sequence point occurred. For example, we assign undefined behavior to the following program of Ellison.

int x, y, *p = &y;

int f() { if (x) { p = &x; } return 0; }

int main() { return (x = 1) + (*p = 2) + f(); }

The execution order that leads to undefined behavior is (a) x = 1, (b) call f, which changes p to &x, (c) *p = 2. Here, the lock of &x survives the function call. In the semantics of Ellison and Rosu, this program has defined behavior, as the sequence point before the function call releases all locks, so also the lock of &x.

We believe that this is a reasonable trade-off, because dealing with sequence points locally instead of globally brings the operational semantics closer to the axiomatic semantics, as separation logic only talks about a local part of the memory. We believe only artificial programs become illegal, because different function calls in the same expression can still write to a shared part of the memory (which is useful for memoization). For example, given

int f(int y) {
  static int map[map_size];
  if (map[y]) { return map[y]; }
  return map[y] = expensive_function(y);
}

the expression f(3) + f(3) has defined behavior according to our semantics of sequence points.

Since the C standard does not allow interleaved execution of function calls [16, 6.5.2.2p10], these are not described by the head reduction relation. Instead, a function call changes the whole program state to a state in which its body is executed. When execution of the function body is finished, the result will be plugged back into the whole expression. To describe this behavior, and to select a head redex in an expression, we define expression contexts.

DEFINITION 4.2. Singular expression contexts are defined as:

  Es ∈ ectxs ::= □ := e | e := □ | f(ē1, □, ē2) | load □ | free □
               | □ ⊙ e2 | e1 ⊙ □ | □ ? e2 : e3 | (τ) □

Expression contexts (E ectx) are lists of singular contexts. Given an expression context E and an expression e, the substitution of e for in E, notation E[ e ], is defined as usual. In the reduction of whole programs (Definition 4.10), we allow E[ e 1 ] to reduce to E[ e 2 ] provided that (e 1, m 1) h (e 2, m 2). To enforce that the first operand of the conditional e 1? e 2 : e 3 is executed entirely before the others, it is essential that we omit the contexts e 1? : e 3 and e 1? e 2 :. The reduction of whole programs uses a zipper-like data structure, called a program context [21], to store the location of the substatement that is being executed. Execution of the program occurs by traversal through the program context in the direction down, up, jump, or top. When a goto l statement is executed, the direction is changed to l, and the semantics performs a small step traversal through the program context until the label l has been reached. Program contexts extend the zipper by annotating each block scope variable with its associated memory index, and furthermore contain the full call stack of the program. Program contexts can also be seen as a generalization of continuations (as for example being used in CompCert [2, 23]). However, there are some notable differences. Program contexts implicitly contain the stack, whereas a continuation semantics typically stores the stack separately. Program contexts also contain the part of the program that has been executed, whereas continuations only contain the part that remains to be done. Since the complete program is preserved, looping constructs like the while statement do not have to duplicate code. The fact that program contexts do not throw away the parts of the statement that have been executed is essential for the treatment of goto. Upon an invocation of a goto, the semantics traverses through the program context until the corresponding label has been found. During this traversal it passes all block scope variables that go out of scope, allowing it to perform required allocations and deallocations in a natural way. Hence, the point of this traversal is not so much to search for the label, but much more to incrementally calculate the required allocations and deallocations. DEFINITION 4.3. Singular statement contexts are defined as: S s sctx s ::= ; s 2 s 1 ; l : while(e) if (e) else s 2 if (e) s 1 else Given a singular statement context S s and a statement s, substitution of s for in S s, notation S s[ s ], is defined as usual. A pair ( S s, s) consisting of a list Ss of singular statement contexts and a statement s forms a zipper for statements without block scope variables. That means, S s is a statement turned insideout that represents a path from the focused substatement s to the top of the whole statement. DEFINITION 4.4. Expression statement contexts and singular program contexts are defined as: S e sctx e ::= return while( ) s if ( ) s 1 else s 2 P s ctx s ::= S s block b c (S e, e) resume E params b Program contexts (k ctx) are lists of singular program contexts. Given an expression statement context S e and an expression e, substitution of e for in S e, notation S e[ e ], is defined as usual. The previously defined program contexts will be used as follows in the operational semantics. When entering a block scope block c s, the singular context block b c is appended to the head of the program context. It associates the block scope with its memory index b. 
When executing a statement construct S e[ e ] that contains an expression e, the singular context (S e, e) is appended to the head of the program context to keep track of the statement. We need to keep track of the expression e as well so that it can be restored when execution of the expression is finished. When executing a function call E[ f( v) ], the singular context resume E is appended to the head of the program context. When the function returns with value v, execution of the expression E[ v ] with the return value v plugged in, is continued. When a function body is entered, the singular context params b is appended to the head of the program context. It contains a list b of memory indices of the function parameters. As program contexts implicitly contain the stack, we define a function to extract it from them. DEFINITION 4.5. The corresponding stack getstack k of a program context k is defined as: getstack (S s :: k) := getstack k getstack (block b c :: k) := b :: getstack k getstack ((S e, e) :: k) := getstack k getstack (resume E :: k) := [ ] getstack (params b :: k) := b ++ getstack k We define getstack (resume E :: k) as [ ] instead of getstack k, as otherwise it would be possible to refer to the local variables of the calling function. DEFINITION 4.6. Directions, focuses and program states are defined as: d direction ::= l v φ focus ::= (d, s) e call f v return v undef S state ::= S(k, φ, m) A program state S(k, φ, m) consists of a program context k, the part of the program φ that is focused, and the memory m. Similar to Leroy s Cmedium [23], we have five kinds of states: (d, s) for execution of a statement s in direction d, e for execution of an expression e, call f v for calling a function f with arguments v, return v for returning from a function with return value v, and undef to capture undefined behavior. We have additional states for execution of expressions and undefined behavior. The semantics of Leroy s Cminor [22], and the semantics of Krebbers and Wiedijk [21], do not have such states, because their expressions are deterministic and side-effect free. They capture undefined behavior by letting the reduction get stuck, whereas that will not work in the presence of non-determinism. DEFINITION 4.7. The relation allocparams m 1 b v m2 allocates fresh blocks b for function parameters v with permission (non-deterministically). It is inductively defined as: 1. allocparams m [ ] [ ] m 2. If allocparams m 1 b v m2 and perm b m 2 =, then allocparams m 1 (b :: b) (v :: v) (alloc b v m 2)
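Definition 4.5 is easy to transcribe into executable form. The sketch below is our own C rendering (the identifiers are illustrative, not the paper's Coq names): a program context is a list of singular contexts, and only block, params and resume contexts affect the corresponding stack.

#include <stddef.h>

typedef size_t index_t;   /* memory indices */

enum ctx_kind { CTX_STMT, CTX_BLOCK, CTX_EXPR, CTX_RESUME, CTX_PARAMS };

struct ctx {               /* one singular program context */
  enum ctx_kind kind;
  index_t block_index;     /* used when kind == CTX_BLOCK   */
  const index_t *params;   /* used when kind == CTX_PARAMS  */
  size_t nparams;
  struct ctx *next;        /* rest of the program context k */
};

/* Writes getstack k into out (assumed large enough) and returns its length. */
size_t getstack(const struct ctx *k, index_t *out) {
  size_t n = 0;
  for (; k != NULL; k = k->next) {
    switch (k->kind) {
    case CTX_BLOCK:                 /* block context contributes b :: getstack k */
      out[n++] = k->block_index;
      break;
    case CTX_PARAMS:                /* params context contributes b ++ getstack k */
      for (size_t i = 0; i < k->nparams; i++)
        out[n++] = k->params[i];
      break;
    case CTX_RESUME:                /* resume context yields [], so the callee
                                       cannot refer to the caller's locals       */
      return n;
    default:                        /* statement/expression contexts: skip       */
      break;
    }
  }
  return n;
}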

DEFINITION 4.8. An expression is a redex if it is of the following shape: (a) x i, (b) [v 1] Ω1 := [v 2] Ω2, (c) f([v 1] Ω1,..., [v 2] Ω2 ), (d) load [v] Ω, (e) alloc (f) free [v] Ω, (g) [v 1] Ω1 [v 2] Ω2, (h) [v] Ω? e 2 : e 3, or (i) (τ) [v] Ω. DEFINITION 4.9. An expression is safe in stack ρ and memory m if: (a) it is of the shape f( e), or (b) there is an expression e and memory m such that ρ (e, m) h (e, m ). DEFINITION 4.10. Given a finite partial function δ mapping function names to statements, the small step reduction S 1 S 2 is inductively defined as: 1. For simple statements: (a) S(k, (, skip), m) S(k, (, skip), m) (b) S(k, (, goto l), m) S(k, ( l, goto l), m) (c) S(k, (, S e[ e ]), m) S((S e, e) :: k, e, m) 2. For expressions: (a) S(k, E[ e 1 ], m 1 ) S(k, E[ e 2 ], m 2 ) for any e 2 and m 2 s.t. getstack k (e 1, m 1 ) h (e 2, m 2 ) (b) S(k, (, E[ f([v 0 ] Ω0,..., [v n] Ωn ) ]), m) S(resume E :: k, call f v, unlock ( Ω) m) (c) S(k, (, E[ e ]), m) S(k, undef, m), provided that e is an unsafe redex 3. For finished expressions: (a) S((, e) :: k, [v] Ω, m) S(k, (, e), unlock Ω m) (b) S((return, e) :: k, [v] Ω, m) S(k, ( v, return e), unlock Ω m) (c) S((while( ) s, e) :: k, [v] Ω, m) S(while(e) :: k, (, s), unlock Ω m) provided that istrue v (d) S((while( ) s, e) :: k, [v] Ω, m) S(k, (, while(e) s), unlock Ω m) provided that isfalse v (e) S((if ( ) s 1 else s 2, e) :: k, [v] Ω, m) S(if (e) else s 2 :: k, (, s 1 ), unlock Ω m) provided that istrue v (f) S((if ( ) s 1 else s 2, e) :: k, [v] Ω, m) S(if (e) s 1 else :: k, (, s 2 ), unlock Ω m) provided that isfalse v 4. For compound statements: (a) S(k, (, block c s), m) S((block b c ) :: k, (, s), alloc b indet (blockperm c) m) for any b such that perm b m = (b) S(k, (, s 1 ; s 2 ), m) S(( ; s 2 ) :: k, (, s 1 ), m) (c) S(k, (, l : s), m) S((l : ) :: k, (, s), m) (d) S((block b c ) :: k, (, s), m) S(k, (, block c s), free b m) (e) S(( ; s 2 ) :: k, (, s 1 ), m) S((s 1 ; ) :: k, (, s 2 ), m) (f) S((s 1 ; ) :: k, (, s 2 ), m) S(k, (, s 1 ; s 2 ), m) (g) S(while(e) :: k, (, s), m) S(k, (, while(e) s), m) (h) S((if (e) else s 2 ) :: k, (, s 1 ), m) S(k, (, if (e) s 1 else s 2 ), m) (i) S((if (e) s 1 else ) :: k, (, s 2 ), m) S(k, (, if (e) s 1 else s 2 ), m) (j) S((l : ) :: k, (, s), m) S(k, (, l : s), m) 5. For function calls: (a) S(k, call f v, m 1 ) S(params b :: k, (, s), m 2 ) for any s, b and m 2 s.t. δ f = s and allocparams W m 1 b v m2 (b) S(params b :: k, (, s), m) S(k, return indet, free b m) (c) S(params b :: k, ( v, s), m) S(k, return v, free b m) (d) S(resume E :: k, return v, m) S(k, E[ v ], m) 6. For non-local control flow: (a) S((block b c ) :: k, ( v, s), m) S(k, ( v, block c s), free b m) (b) S(S s :: k, ( v, s), m) S(k, ( v, S s[ s ]), m) (c) S(k, ( l, l : s), m) S((l : ) :: k, (, s), m) (d) S(k, ( l, block c s), m) S((block b c ) :: k, ( l, s), alloc b indet (blockperm c) m) for any b such that perm b m =, and provided that l labels s (e) S(block b c :: k, ( l, s), m) S(k, ( l, block c s), free b m) provided that l / labels s (f) S(k, ( l, S s[ s ]), m) S(S s :: k, ( l, s), m) provided that l labels s (g) S(S s :: k, ( l, s), m) S(k, ( l, S s[ s ]), m) provided that l / labels s Note that the selection of redexes in Rule 2a, 2b, and 2c is nondeterministic. Moreover, note that the rules 6c and 6f overlap, and that the splitting into S s and s in rule 6f is non-deterministic. DEFINITION 4.11. We let denote the reflexive-transitive closure of, and let n denote paths of n -reduction steps. 
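Rule 2c also covers undefined behavior that is unrelated to sequence points. In the sketch below (our example), the evaluation function for + is partial and fails on signed overflow, so the redex is unsafe and the whole state reduces to undef, matching the undefined behavior the C standard assigns to this program.

#include <limits.h>

int main(void) {
  int n = INT_MAX;
  return n + 1;   /* the binary operation on values fails on overflow, so the
                     redex [INT_MAX] + [1] is unsafe and Rule 2c yields undef */
}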
Execution of a statement S(k, (d, s), m) is performed by traversing through the program context k and statement s in direction d. The direction down (respectively up ) is used to traverse downwards (respectively upwards) to the next substatement that has to be executed. When a substatement S e[ e ] containing an expression e has been reached (Rule 1c), the location of the expression S e is stored on the program context, and execution is continued in S((S e, e) :: k, e, m). Execution of an expression S(k, e, m) is performed by nondeterministically decomposing e into E[ e ]. Rule 2a allows e to perform a h -step, and Rule 2b allows e to perform a function call. If the redex e is unsafe and thereby cannot be contracted (e.g. because of a sequence violation or integer overflow), Rule 2c ensures that the whole program reduces to the undef state. When execution of an expression has resulted in a value [v] Ω, we model a sequence point by unlocking Ω in memory (Rule 3a-3f). For a function call S(k, E[ f([v 0] Ω0,..., [v n] Ωn ) ], m), two reductions occur before the function body will be executed. The first to S(resume E :: k, call f v, unlock ( Ω) m) (Rule 2b) stores the location of the caller on the program context and takes care of the sequence point before the function call. The subsequent reduction to S(params b :: resume E :: k, (, s), m ) (Rule 5a) looks up the callee s body s, allocates the parameters v, and then performs a transition to execute the function body s. We consider two directions for non-local control flow: jump l and top v. After a goto l (Rule 1b), the direction l is used to traverse to the substatement labeled l (Rule 6c-6g). Although this traversal is non-deterministic (in the case of duplicate labels), there are some side-conditions in order to ensure that the reduction is not going back and forth between the same locations. This is required because we may otherwise impose non-terminating behavior on terminating programs. The non-determinism could be removed entirely by adding additional side-conditions. However, as we already have other sources of non-determinism, we omitted doing so to ease formalization. When execution of the expression e of a return e statement has resulted in a value v (Rule 3b), the direction v is used to traverse to the top of the whole statement (Rule 6a and 6b). When this traversal reaches the top of the statement, there are two reductions to give the return value v to the caller. The first reduction, from S(params b :: resume E :: k, ( v, s), m) to S(resume E :: k, return v, free b m) (Rule 5c), deallocates the function parameters, and the second reduction, to S(k, E[ v ], free b m) (Rule 5d), resumes execution of the expression E[ v ] at the caller.
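The traversal rules 6c-6g have a direct reading on C programs with block scope variables. The program below is our own illustration: in the semantics of Definition 4.10, following the jump direction for the label inside allocates the block's variable on the way into the block (Rule 6d), and the variable is freed again when execution leaves the block (Rule 4d).

int main(void) {
  int i = 0;
  goto inside;       /* the traversal for this goto enters the block below and
                        allocates its local x (Rule 6d)                         */
  {
    int x;
  inside:
    x = i;           /* x is assigned before it is read, so no use of indet     */
    if (x < 3) { i = x + 1; goto inside; }
  }                  /* leaving the block frees x (Rule 4d)                     */
  return i;          /* returns 3 */
}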

5. Axiomatic semantics Judgments of Hoare logic are triples P } s Q}, where s is a statement, and P and Q are assertions called the pre- and postcondition. The intuitive reading of such a triple is: if P holds for the memory before executing s, and execution of s terminates, then Q holds afterwards. For our language, we have two such judgments: one for expressions and one for statements. Our expression judgments are quadruples P } e Q}. As usual, P and Q are the pre- and postcondition of e respectively, but whereas P is just an assertion, Q is a function from values to assertions. It ensures that if execution of e yields a value v, then Q v holds afterwards. The environment is a finite function from function names to their pre- and postconditions that is used to cope with (mutually) recursive functions. As in Krebbers and Wiedijk [21], our judgments for statements are sextuples ; R; J P } s Q}, where R is a function from values to assertions, and J is a function from labels to assertions. The assertion R v has to hold when executing a return e (for each value v obtained by execution of e), and J l is the jumping condition that has to hold when executing a goto l. We use a shallow embedding to represent assertions. This treatment is similar to that of Appel and Blazy [2], Von Oheimb [29], Krebbers and Wiedijk [21], etc. In order to talk about expressions without side-effects (assignments, allocation and deallocation, and function calls) in assertions (we need this for various rules of our axiomatic semantics, see Definition 5.9), we define an evaluation function for pure expressions. DEFINITION 5.1. Evaluation [[ e ]] ρ,m of an expression e in a stack ρ and memory m is a partial function that is defined as: [[ x i ]] ρ,m := ptr b if ρ i = b [[ v ]] ρ,m := v [[ load e ]] ρ,m := m!! b if [[ e ]] ρ,m = ptr b [[ e 1 e 2 ]] ρ,m := [[ e 1 ]] ρ,m [[ e 2 ]] ρ,m [[ e 2 ]] ρ,m if [[ e 1 ]] ρ,m = v and istrue v [[ e 1? e 2 : e 3 ]] ρ,m := [[ e 3 ]] ρ,m if [[ e 1 ]] ρ,m = v and isfalse v [[ (τ) e ]] ρ,m := (τ) [[ e ]] ρ,m DEFINITION 5.2. Assertions are predicates over the the stack and the memory. We define the following connectives on assertions. P Q := λρ m. P ρ m Q ρ m x. P x := λρ m. x. P x ρ m P Q := λρ m. P ρ m Q ρ m P Q := λρ m. P ρ m Q ρ m P := λρ m. P ρ m x. P x := λρ m. x. P x ρ m P := λρ m. P e v := λρ m. [[ e ]] ρ,m = v We treat as an implicit coercion, e.g. we write True instead of True. Also, we often lift these connectives to functions to assertions, e.g. we write P Q instead of λv. P v Q v. DEFINITION 5.3. We let P Q denote that for all stacks ρ and memories m we have P ρ m implies Q ρ m. DEFINITION 5.4. An assertion P is called stack independent if for all ρ 1, ρ 2 stack and m mem with P ρ 1 m we have P ρ 2 m. Similarly, P is called unlock independent if for all ρ stack, m mem and Ω index with P ρ m we have P ρ (unlock Ω m). Next, we define the assertions of separation logic [28]. The separating conjunction P Q asserts that the memory can be split into two disjoint parts such that P holds in the one part, and Q in the other (due to the use of fractional permissions, these parts may overlap as long as their permissions are disjoint and their values agree). Finally, e 1 e2 asserts that the memory consists of exactly one object at e 1 with contents e 2 and permission. DEFINITION 5.5. The connectives of separation logic are: e 1 emp := λρ m. m = P Q := λρ m. m 1 m 2. m = m 1 m 2 e 1 m 1 m 2 P ρ m 1 Q ρ m 2 e2 := λρ m. b v. [[ e 1 ]] ρ,m = ptr b := e2. 
e 1 [[ e 2 ]] ρ,m = v m = (b, (v, ))} e2 To enforce the sequence point restriction, the rule for assignment changes the permission of e 1 e2 into lock. Subsequent reads and writes are therefore no longer possible. At constructs that have a sequence point, we use the assertion P to release these locks, and to make future reads and writes possible again. DEFINITION 5.6. The unlocking assertion P is defined as: P := λρ m. P ρ (unlock (locks m) m). We proved properties as P Q (P Q) to push the -connective through an assertion. Similar to Krebbers and Wiedijk [21], we need to lift an assertion such that the De Bruijn indices of its variables are increased so as to deal with block scope variables. The lifting P of P is defined semantically, and we prove that it behaves as expected. DEFINITION 5.7. The lifting assertion P is defined as: P := λρ m. P (tail ρ) m. LEMMA 5.8. The operation ( ) distributes over the connectives,,,,,, and. We have (e v) = (e ) v and (e 1 e2) = (e 1 ) (e 2 ), where the operation e replaces each variable x i in e by x i+1. The specification of a function with parameters v consists of an assertion P y v called the precondition, and a function Q y v from (return) values to assertions called the postcondition. We allow universal quantification over arbitrary logical variables y in order to relate the pre- and postcondition. The pre- and postcondition should moreover be stack independent because local variables will have a different meaning at the caller than at callee. We denote such a specification as y v. P y v} Q y v}. DEFINITION 5.9. Given a finite partial function δ mapping function names to statements, the expression judgment P } e Q} and statement judgment ; R; J P } s Q} of the axiomatic semantics are mutually inductively defined as shown in Figure 1. We have a frame, weaken, and exist rule for both the expression and statement judgments. The traditional frame rule of separation logic [28] includes a side-condition modifies s free A =. Like Krebbers and Wiedijk [21], we do not need this side-condition as our local variables are (immutable) references into the memory. Since the return and goto statements leave the normal control flow, the postconditions of the (goto) and (return) rules are arbitrary. The rules for function calls are based on those by Krebbers and Wiedijk [21]. The (e-call) rule is used to call a function f( e) that is already in. The assertion B v is used to frame a part of the memory that is used by e but not by the f itself. The (add funs) rule can be used to add an arbitrary family of specifications of (possibly mutually recursive) functions to. For each function f in with precondition P and postcondition Q, it has to be verified that the function body δ f is correct for all instantiations of the logical variables y and input values v. The precondition Π [ x W i v i ] P y v, where Π [ e i i e i ] denotes the