AXIOMS OF AN IMPERATIVE LANGUAGE: PARTIAL CORRECTNESS, WEAK AND STRONG CONDITIONS, THE AXIOM FOR nop


AXIOMS OF AN IMPERATIVE LANGUAGE

We will use the same language, with the same abstract syntax, that we used for operational semantics. However, we will only be concerned with the commands, since the language's expressions have no side effects. In other words, only commands (and essentially only assignment) can alter program state. There will be an axiom for each alternative command in the abstract syntax.

The main form in each axiom is a conclusion with the assertion-command-assertion (or ACA) form:

{p} C {q}

In this form C is any command, p is an assertion that is true before C is executed, and q is one that is true after it. p is the pre-condition and q is the post-condition. Both are expressed in the language of assertions, which contains expressions about arithmetic variables; each variable corresponds to a program variable in the whole program.

Some axioms are expressed as rules, in a similar fashion to the operational semantic rules. Below the line is a conclusion, and above the line will be one or more ACA triples, or possibly other assertions or logical expressions. The meaning of these is similar to that of the operational semantics rules, i.e. the conclusion below the line can only be drawn if the expressions above the line are true.

We will build correctness proofs of programs by incorporating axioms relating to individual assignments, and building these up with the help of the compound command forms (sequence, conditional, loop) to obtain a proof for the whole program.

PARTIAL CORRECTNESS

To start with, we will make one big assumption concerning each ACA triple: it will only hold if the command C terminates. We can make no real judgment about the state after the command unless it does. This is called partial correctness. It can be expressed as:

if p is true before C, and if C terminates, then q is true after it has finished

Each axiom makes the same assumption. Total correctness will be looked at later, when we will be able to prove that C terminates.
If C is a whole program, then [p, q] is the specification of the program. Correctness proofs start with a specification, and break the program down into its simple commands, where the axioms apply. We will take the axioms in the same order that the abstract syntax presents them:

nop
I := E
C1 ; C2
if B then C1 else C2 end
while B do C end

WEAK AND STRONG CONDITIONS

Since we are dealing with assertions that can only be true or false, and there can be an infinite number of equivalent expressions, there is one extra goal that we must look out for. When we build our pre- and post-conditions, we should always try to be as accurate as possible. This means that the post-conditions should be as strong as possible, so that they express the most specific version of what the command actually does. Conversely, the pre-condition should be as weak (or general) as possible, so as not to overly constrain our expression of what the command does. Later we shall express this goal in an axiom, but let us turn now to the basic command types.

THE AXIOM FOR nop

Since nop is meant to do nothing, not surprisingly, its axiom is very simple:

{p} nop {p}

What this says is that nop does not alter the program state. Whatever was true before the command is also true after it.
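As an illustration (not part of the original notes), ACA triples for this toy language can be modelled in Python by representing a state as a dict of variable values, an assertion as a predicate over states, and a command as a state transformer. The helper names below (holds_triple, nop) are invented for this sketch; it checks partial correctness over a finite set of test states only.

```python
# A state maps variable names to integers; an assertion is a predicate on
# states; a command maps a state to a new state (or None if it diverges).

def holds_triple(p, command, q, states):
    """Check partial correctness {p} C {q} over a finite set of test states:
    whenever p holds before and C terminates, q must hold afterwards."""
    for s in states:
        if p(s):
            t = command(dict(s))        # run C on a copy of the state
            if t is not None and not q(t):
                return False
    return True

def nop(s):
    return s                            # nop: the state is unchanged

# The nop axiom {p} nop {p}: whatever was true before is true after.
states = [{"x": x} for x in range(-5, 6)]
p = lambda s: s["x"] > 4
assert holds_triple(p, nop, p, states)
```

Running the check with an unrelated post-condition, e.g. {true} nop {x > 0}, fails as expected, since nop cannot establish anything new.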

THE AXIOM FOR ASSIGNMENT

The axiom for assignment is counter-intuitive, so we will present it after a couple of examples. First, expression evaluation cannot alter program state, so we need to concentrate on how to express the change in state caused by the assignment of the value of an expression to the identifier in I := E.

We might think that we should add an additional assertion to the post-condition, something like:

{p} I := E {p ∧ something about I and E}

e.g. for x := 2, we could use:

{p} x := 2 {p ∧ x = 2}

Unfortunately, this will not work, as a simple example shows:

{x > 4} x := 2 {x > 4 ∧ x = 2}

The post-condition is clearly false for any x, so we have gone from true in the pre-condition to false in the post-condition. This is clearly no good. All axioms, and the proofs that contain them, must proceed from true to true. Logic cannot tolerate a rule such as this attempt.

Instead, we must think like this: what must the pre-condition be for the post-condition to be true? Expressions not involving the assigned variable can remain the same from pre- to post-condition. Expressions that do involve the assigned variable must be true for the value of the expression being assigned, for the post-condition to be true. In this way, the assignment has already been pre-judged, in a way. Only certain expressions can be true after an assignment, since assignments are destructive. We could never have:

{ } x := 2 {x > 4}

because 2 (the value of the assigned expression) is not greater than 4. However:

{ } x := 2 {x > 0}

could work, because 2 > 0. Following this line of thought, we can see that the relationship between pre- and post-condition must be such that, for all possible post-conditions that could be true, they must always work as pre-conditions with the value of the expression substituted. This is where our substitution form comes in very handily.
We can write the assignment axiom as:

{p[E\I]} I := E {p}

This says that the only post-conditions which can possibly be true are those for which the assertion with E substituted for I in p is also true. In fact, this usually means that we work backwards with assignments: we have a post-condition, and we push it through the assignment to produce the correct pre-condition. Simple examples are:

{2 > 0} x := 2 {x > 0}
{x - 1 < x} y := x - 1 {y < x}
{x + 1 = 3} x := x + 1 {x = 3}

In each case the left-hand side is either true because of a fact, like {2 > 0}; true because of the axioms of arithmetic, like x - 1 < x; or true by solving some constraint, like {x + 1 = 3}. What is not possible is to use post-conditions whose pre-conditions cannot possibly be true:

{2 < 0} x := 2 {x < 0}
{x - 1 = x} y := x - 1 {y = x}
{x + 1 = 0 ∧ x + 1 = 1} x := x + 1 {x = 0 ∧ x = 1}
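The substitution p[E\I] can be sketched in Python: semantically, p[E\I] holds in a state s exactly when p holds in s with I replaced by the value of E in s. This is a minimal illustration, not part of the original notes; the helper name substitute is invented, states are dicts, and assertions and expressions are plain Python functions over states.

```python
def substitute(p, var, expr):
    """The assertion p[E\\I]: p with expression E substituted for
    identifier I.  p[E\\I] holds in state s exactly when p holds in the
    state s with var rebound to the value of expr in s."""
    def pre(s):
        t = dict(s)
        t[var] = expr(s)
        return p(t)
    return pre

# {2 > 0} x := 2 {x > 0}: the computed pre-condition is 2 > 0, always true.
post = lambda s: s["x"] > 0
pre = substitute(post, "x", lambda s: 2)
assert all(pre({"x": x}) for x in range(-5, 6))

# {x - 1 < x} y := x - 1 {y < x}: the pre-condition x - 1 < x is an
# arithmetic fact, true in every state.
post2 = lambda s: s["y"] < s["x"]
pre2 = substitute(post2, "y", lambda s: s["x"] - 1)
assert all(pre2({"x": x, "y": y}) for x in range(-3, 4) for y in range(-3, 4))
```

The impossible examples behave accordingly: substituting 2 for x in x > 4 yields 2 > 4, which is false in every state.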

In each of these cases the post-condition produces a pre-condition that is false, or it is false to start with, like the last example. Strictly, these might be considered correct, since false produces false, but they are not useful in this form. We only want to apply these axioms when both assertions are true.

THE AXIOM FOR THE COMMAND SEQUENCE

Having got the base cases right, i.e. nop and assignment, the rest are recursive in nature, as expressed in the abstract syntax. In the case of the sequence, two commands are executed one after the other, so there are potentially two changes of state. We can express this by asking that the post-condition of the first command be the pre-condition of the second. These ACA triples are conditions to be proved before the sequence can be proved correct:

{p} C1 {r}    {r} C2 {q}
------------------------
     {p} C1;C2 {q}

As an example, let us put two assignments together, and apply this rule by asserting a post-condition and pushing it back twice through the commands. The two commands are x := 2; y := x plus 1, and the post-condition will be {y > x}. Pushing this back through the second command gives:

{x + 1 > x} y := x plus 1 {y > x}

Pushing this pre-condition back through the first gives:

{2 + 1 > 2} x := 2 {x + 1 > x}

This pre-condition is true, so we can assert:

{true} x := 2; y := x plus 1 {y > x}

for the sequence. This says that it does not matter what values x and y have to start with: after these two assignments, y will be greater than x.

It is pointless trying to generate separate ACA triples for each command, since our correctness proofs will be goal-directed, i.e. we will have some specification that we are trying to prove. The method does not lend itself well to forward reasoning, such as trying to find all possible assertions which might be true after the program terminates. Operational semantics is much better at that, since it can propagate values through different stores as the program executes.
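The backwards push through a sequence can be sketched mechanically. This is an illustrative Python fragment, not part of the original notes; the helper wp_assign is an invented name for the assignment-axiom substitution, and the check over a finite grid of states stands in for a real proof.

```python
def wp_assign(post, var, expr):
    """Pre-condition post[E\\I] for the assignment I := E
    (the assignment axiom, applied backwards)."""
    def pre(s):
        t = dict(s)
        t[var] = expr(s)
        return post(t)
    return pre

# Sequence rule example: push {y > x} back through  x := 2 ; y := x plus 1.
post = lambda s: s["y"] > s["x"]
mid = wp_assign(post, "y", lambda s: s["x"] + 1)   # {x + 1 > x}
pre = wp_assign(mid, "x", lambda s: 2)             # {2 + 1 > 2}, i.e. true

# The final pre-condition holds in every test state, matching
# {true} x := 2; y := x plus 1 {y > x}.
assert all(pre({"x": x, "y": y}) for x in range(-3, 4) for y in range(-3, 4))
```

Note the direction: the intermediate assertion r of the sequence rule falls out automatically as the pre-condition of the second command.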
THE AXIOM FOR THE CONDITIONAL

The conditional axiom has two difficulties. First, we must ensure that both pre- and post-conditions apply regardless of which branch is taken. Second, we have somehow to get the condition itself into the act. To do this, we must ensure that all Boolean expressions in the language have a corresponding form in the arithmetic language. This is the case with ours, since we only have equal, greater and less, which correspond to =, >, < in arithmetic.

We can put both branches in the same axiom by asking that they both be prerequisites for the conditional to be asserted:

{p ∧ b} C1 {q}    {p ∧ ~b} C2 {q}
---------------------------------
{p} if B then C1 else C2 end {q}

Here b is a meta-variable for the arithmetic expression obtained from the Boolean expression B. Two things are notable. One is that p and q are the conditions for both branches; the other is that q must be one assertion, which essentially handles both branches together. Sometimes this assertion is quite simple:

{true} if x less 0 then x := 1 else x := 2 end {x > 0}

is provable because the test is really irrelevant. If we push the post-condition through both branches we get true in both cases, since 1 > 0 and 2 > 0. Since true is already the weakest possible pre-condition, we don't care whether the test is true or false. However,

{true} if x less 0 then y := x plus 1 else y := neg x end {y ≤ 0}

is more interesting (neg is the unary negative operator). Using the assignment axiom on both branches gives {x + 1 ≤ 0} on the then branch and {-x ≤ 0} on the else branch. In order to use the conditional axiom, we must show, since p is just true, that {x + 1 ≤ 0} is the same as the test {x < 0}, and that {-x ≤ 0} is the same as its negation ~(x < 0). In this case both are true, but not without some work:

x + 1 ≤ 0  ⟺  x ≤ -1  ⟺  x < 0    and    -x ≤ 0  ⟺  x ≥ 0  ⟺  ~(x < 0)

using algebraic rewriting rules.

THE AXIOM FOR THE LOOP

The loop is the most difficult axiom, since it is clearly impossible in our assertion language to express the iterative nature of the execution. What we must do is capture the state changes in such a way that we can express the semantics of the loop by looking at what happens each time we execute the body. We would like to write something like:

{ } C { }
------------------------
{p} while B do C end {q}

But what are the proper pre- and post-conditions for C, and how do we handle the loop exit? The solution is to find a single assertion that is true before the loop starts, after each iteration (so it is the next pre-condition as well), and after the loop ends. This is the loop invariant. It expresses, logically, the fact that the loop continues until the test becomes false, and then exits. The axiom is:

{p ∧ b} C {p}
-----------------------------
{p} while B do C end {p ∧ ~b}

This is very subtle. We are unable to show the loop explicitly, as we were in operational semantics. Instead we characterize the loop by what it keeps the same, rather than what it changes. While the invariant p handles the loop's continuation, the test b handles the non-continuation. Essentially we have the annotated sequence:

p ∧ b  C  p ∧ b  C  p ∧ b  C  ...  p ∧ ~b

Obviously, finding the loop invariant is crucial to handling loops. Let's start very simply with the loop while x greater 0 do x := x minus 1 end, assuming that x starts out positive. What is the invariant in this case? What is true before, during and after the loop?
Since this is really a count-down loop, the only thing we can say is {x ≥ 0}. Can we prove the condition above the line? i.e. is

{x ≥ 0 ∧ x > 0} x := x minus 1 {x ≥ 0}

true? If we push the post-condition back through the assignment we get {x - 1 ≥ 0}. Is this the same as {x ≥ 0 ∧ x > 0}? First, x ≥ 0 ∧ x > 0 is just the same as x > 0, since they are both false or both true together. Then we have x - 1 ≥ 0 ⟺ x ≥ 1 ⟺ x > 0 by algebraic rewrite, so they are indeed the same. So, by applying the axiom we can write:

{x ≥ 0} while x greater 0 do x := x minus 1 end {x ≥ 0 ∧ ~(x > 0)}

The post-condition is interesting, since ~(x > 0) is the same as x ≤ 0, and x ≥ 0 ∧ x ≤ 0 is the same as x = 0. We have proved, by choosing the right invariant, that the loop terminates with x = 0, quite a result. Other loops will be harder, as we shall see.

For another simple example, let's look at the loop while y greater x do y := y minus 1; x := x plus 1 end. In this loop, the values of y and x approach each other until either x = y, or y crosses over so that y = x - 1. Potentially we can prove the final assertion y = x ∨ y = x - 1, which makes this explicit. The only realistic invariant is y > x - 2, which takes both possible end results into account. In order to prove the loop correct with respect to the final assertion, we need to prove:

{y > x - 2 ∧ y > x} y := y minus 1; x := x plus 1 {y > x - 2}

If we take the invariant back through the two assignments, using the assignment axiom, we get a pre-condition of y - 1 > x + 1 - 2, or y > x using algebra and arithmetic. As before, we need to show that this is the same as y > x - 2 ∧ y > x. First, we know that A ∧ B ⇒ B for all A and B. Then, just like before, y > x - 2 ∧ y > x is the same as y > x, so we do indeed have the right invariant. We can thus write the loop conclusion:

{y > x - 2} while y greater x do y := y minus 1; x := x plus 1 end {y > x - 2 ∧ ~(y > x)}
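The premise of the loop axiom, {p ∧ b} C {p}, can be spot-checked mechanically for both loops. This Python sketch is not part of the original notes; preserves_invariant, dec_body and meet_body are invented names, and a finite grid of states substitutes for the algebraic argument above.

```python
def preserves_invariant(inv, test, body, states):
    """Check the loop premise {inv ∧ b} C {inv} over a finite set of states."""
    for s in states:
        if inv(s) and test(s):
            t = body(dict(s))
            if not inv(t):
                return False
    return True

# First loop: while x greater 0 do x := x minus 1 end, invariant x >= 0.
def dec_body(s):
    s["x"] = s["x"] - 1
    return s

xs = [{"x": x} for x in range(-3, 10)]
assert preserves_invariant(lambda s: s["x"] >= 0,
                           lambda s: s["x"] > 0, dec_body, xs)

# Second loop: while y greater x do y := y minus 1; x := x plus 1 end,
# invariant y > x - 2.
def meet_body(s):
    s["y"] -= 1
    s["x"] += 1
    return s

xys = [{"x": x, "y": y} for x in range(-4, 5) for y in range(-4, 5)]
assert preserves_invariant(lambda s: s["y"] > s["x"] - 2,
                           lambda s: s["y"] > s["x"], meet_body, xys)
```

A too-strong candidate such as x ≥ 1 for the first loop fails the same check at x = 1, which is one quick way to weed out wrong invariant guesses before attempting a proof.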

The proof that this post-condition is the same as {y = x ∨ y = x - 1} is hard. Showing that they have the same value in all cases is easier. We leave that as an exercise.

STRENGTHENING AND WEAKENING

The final axiom (really two axioms) is called the consequence axiom. Very often we will end up with a pre-condition that is not immediately convertible into some desired assertion. Are we doomed to failure in this case? The answer is no, because we may be able to show that one assertion implies the other. What this means on the pre-condition side is that we may not have the weakest pre-condition, but a stronger one. If we can show that the stronger one implies the weaker, then we can still apply the axiom in question. As an axiom of its own this is:

p ⇒ r    {r} C {q}
------------------
    {p} C {q}

Sometimes this is called the strengthening axiom. There is a corresponding one on the post-condition side, the weakening axiom:

{p} C {r}    r ⇒ q
------------------
    {p} C {q}

Here we make the post-condition weaker by finding something that is implied by it. These can be combined in the consequence axiom:

p ⇒ r    {r} C {s}    s ⇒ q
---------------------------
         {p} C {q}

Now we can see that in fact we were a little too strong in our previous loop examples in insisting, for instance, that x ≥ 0 ∧ x ≤ 0 is the same as x = 0. Logically this could be written as (x ≥ 0 ∧ x ≤ 0) ⟺ (x = 0), whereas we only really need the forward implication (x ≥ 0 ∧ x ≤ 0) ⇒ (x = 0). We can then use the weakening axiom to prove the final result. Whether this is easier than proving the equivalence is not obvious, but if we cannot prove equivalence, we may still be able to prove the implication.
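The side conditions of the consequence axiom are ordinary implications between assertions, and the difference between needing an implication and needing an equivalence can be spot-checked. This is an illustrative Python fragment, not part of the original notes; implies is an invented helper, and a finite range of states stands in for a logical proof.

```python
def implies(p, q, states):
    """Check the implication p ⇒ q over a finite set of test states."""
    return all(q(s) for s in states if p(s))

states = [{"x": x} for x in range(-10, 11)]
conj = lambda s: s["x"] >= 0 and s["x"] <= 0
eq0  = lambda s: s["x"] == 0

# The weakening axiom only needs the forward implication
# (x >= 0 ∧ x <= 0) ⇒ (x = 0):
assert implies(conj, eq0, states)

# Here the reverse implication also happens to hold, so the two
# assertions are in fact equivalent, but the axiom does not require it.
assert implies(eq0, conj, states)
```

In general, checking one implication is a weaker (and often much easier) obligation than checking an equivalence, which is exactly the flexibility the consequence axiom buys us.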