Weakly Relational Domains for Floating-Point Computation Analysis


Eric Goubault, Sylvie Putot
CEA Saclay, F-91191 Gif-sur-Yvette Cedex, France
{eric.goubault,sylvie.putot}@cea.fr

1 Introduction

We present new numerical abstract domains, based on abstract interpretation [2], for the static analysis of the errors introduced when floating-point arithmetic approximates computations over real numbers. The analysis follows the floating-point computation and bounds, at each operation, the error committed between the floating-point result and the real result.

These new domains extend a former domain [3], [5], which abstracts, for each variable, the floating-point value by an interval and the error by a sum of intervals decomposing the provenance of the total error along the control points (or groups of points) of the source code. The idea of this former domain is to provide information on the sources of errors in the program. The main losses of precision are most of the time very localized, so identifying the operations responsible for these losses, while bounding the total error, can be very useful. However, this domain does not improve the accuracy of the total error on a variable compared to the much cheaper "floating-point value plus total error" version; it can at times even prove worse. Indeed, correlations between variables are not used to tighten the computation of values or errors, and the method suffers from the classical drawbacks of interval arithmetic.

Resembling forms, though used in a very different way, were introduced in the interval community under the name of affine arithmetic [1], precisely in order to overcome the loss of correlation between variables in interval arithmetic. But in affine arithmetic, the decomposition into a sum of coefficients is only a means to compute tighter bounds for the real value of a variable; the coefficients themselves carry no meaning.
The new domains we propose combine the interest of providing information on the sources of errors with the possibility of using linear correlations between variables to improve the approximation. We propose to use these correlations, on the one hand, for the computation of errors, which can be done for a cost very comparable to the existing approximation. On the other hand, we propose to use a form close to affine interval arithmetic for the computation of the floating-point value, except that this form must suit floating-point computations instead of real computations. This approximation is much more expensive than interval arithmetic, but comparable in cost to the approximation used for the errors.

The ideas presented here are still preliminary; in particular, they have not yet been fully experimented with. We also limit our study to the case where exactly one label i \in L is associated to each node of the control flow graph of the program.

2 New domain for the errors

The domain introduced in [3], [5] abstracts the real value of a variable x by

  x = f^x + \sum_{i \in L \cup \{os\}} \omega^x_i \varepsilon_i,   (1)

where the floating-point number f^x and the real error terms \omega^x_i are abstracted by intervals, and \varepsilon_i is a formal variable associated to label i. Errors of order higher than one are grouped in one term associated to the point os.

In this domain, an error term \omega^x_i expresses both the rounding error committed at point i and its propagation during further computations on variable x. Thus the fact that the same rounding error can be propagated to different variables is lost. The idea here is to distinguish the rounding error from its propagation. Variable x is represented by

  x = f^x + \sum_{i \in L} t^x_i \gamma_i \zeta_i + \omega^x_{os} \varepsilon_{os},

where f^x \in F is the floating-point value, abstracted either by an interval or as presented in section 3; \gamma_i \in R is a symbolic variable associated with the rounding error committed at point i; t^x_i \in R expresses the propagation of error \gamma_i to the current variable x; \zeta_i is a formal variable associated to label i; and \omega^x_{os} \varepsilon_{os} is the higher-order error term. An abstract environment thus consists of such a formal sum for each variable, together with a set of intervals giving the ranges of the rounding errors \gamma_i.

Let \rho(x) denote the error committed when rounding x to a floating-point number. The addition, for example, writes

  z = x +_l y = f^z + \sum_{i \in L} (t^x_i + t^y_i) \gamma_i \zeta_i + (\omega^x_{os} + \omega^y_{os}) \varepsilon_{os} + \rho(f^x + f^y) \zeta_l,

where the last term is the new \gamma_l \zeta_l. The error \rho(f^x + f^y) due to the rounding of f^x + f^y to a floating-point number can be bounded by an interval, which defines the range of \gamma_l; the corresponding coefficient t^z_l is initialized to 1.
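As a rough sketch of the addition rule above (all names are hypothetical; intervals are (lo, hi) pairs of floats), the propagation coefficients can be summed pointwise while a fresh error term is opened at the operation's label:

```python
# Sketch of the error-domain addition z = x +_l y (all names hypothetical).
# An abstract value is a dict mapping error labels to interval propagation
# coefficients t_i = (lo, hi); key 'os' holds the higher-order error interval.
# The rounding-error ranges gamma_i live in a shared environment.

def iadd(a, b):
    """Interval addition: (lo, hi) + (lo, hi)."""
    return (a[0] + b[0], a[1] + b[1])

def add(x, y, l, gamma, rounding_error):
    """z = x +_l y: coefficients are summed pointwise; the rounding error of
    f^x + f^y defines the range of the fresh gamma_l, with t^z_l = 1."""
    z = {}
    for i in set(x) | set(y):
        z[i] = iadd(x.get(i, (0.0, 0.0)), y.get(i, (0.0, 0.0)))
    gamma[l] = rounding_error       # range of the new symbolic error gamma_l
    z[l] = (1.0, 1.0)               # propagation coefficient of that term
    return z

gamma = {}
x = {1: (1.0, 1.0), 'os': (0.0, 0.0)}   # x carries error gamma_1 once
y = {1: (0.5, 0.5), 'os': (0.0, 0.0)}   # y carries half of gamma_1
z = add(x, y, 2, gamma, (-2**-53, 2**-53))
assert z[1] == (1.5, 1.5) and z[2] == (1.0, 1.0)
```

Note how the coefficient on the shared error gamma_1 becomes 1.5, recording that the same rounding error was propagated to both operands.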
In further computations, the coefficients t^x_l express the linear correlations between variables along which the error \gamma_l is propagated. They are abstracted by intervals, usually of small width, the width being due to rounding errors in the analysis or to unions.

3 New domain for the floating-point value

Linear correlations between variables can be used directly on the errors or on the real values of variables, but not on floating-point values. We thus propose

to decompose the floating-point value f of a variable resulting from a set of operations into the real value of this set of operations, r(f), plus the sum of errors \delta(f) accumulated along the computation: f = r(f) + \delta(f). Other proposals have been made to overcome this problem, most notably [6]. Our approach is different in that [6] linearizes the expressions (1), whereas we do not need to.

- We use affine interval arithmetic [1] to compute accurately the real part r(f), keeping track of linear correlations between variables:

  r(f) = \{ \alpha^f_0 + \sum_{i=1}^{n} \alpha^f_i \theta_i \varphi_i, \ \theta_i \in [-1, 1] \} \subseteq [r^-(f), r^+(f)],

where the \alpha^f_i, 0 \le i \le n, are real numbers, and the \theta_i are symbolic variables with values in [-1, 1]. The assignment of a constant interval at label l, f =_l [a, b], is written r(f) = (a+b)/2 + ((b-a)/2) \theta_l \varphi_l to express the fact that r(f) takes one value in this interval. For example, supposing (a+b)/2 and (b-a)/2 are represented exactly, f - f will be found equal to 0. Indeed, the addition of two affine forms is computed componentwise:

  r(f + g) = (\alpha^f_0 + \alpha^g_0) + \sum_{i \in L} (\alpha^f_i + \alpha^g_i) \theta_i \varphi_i.

In the multiplication, the non-linear part creates a new term \alpha_l \theta_l \varphi_l:

  r(f \times_l g) = \alpha^f_0 \alpha^g_0 + \sum_{i \in L, i \neq l} (\alpha^f_i \alpha^g_0 + \alpha^g_i \alpha^f_0) \theta_i \varphi_i + ( |\alpha^f_l \alpha^g_0 + \alpha^g_l \alpha^f_0| + \sum_{i \in L} \sum_{j \in L} |\alpha^f_i \alpha^g_j| ) \theta_l \varphi_l.

We do not discuss here the case of division, nor of the union and intersection operators, which cannot systematically be done componentwise. In these cases, using the centered form with only one symbol \theta_l can be an acceptable compromise.

- For the error, we do not use the sum of errors as computed in section 2. Indeed, we do not just bound the error due to the successive roundings on the whole interval: we also want to compute the errors on the bounds, in order to get much tighter results in some cases (see the example at the end of the section). We note \delta_{f^-} and \delta_{f^+} the errors due to the successive roundings committed on the bounds r^-(f) and r^+(f).
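A minimal sketch of these affine-form operations, under the assumption of a hypothetical `Affine` class with double-precision stand-ins for the real coefficients and dictionary-indexed noise symbols:

```python
# Minimal affine-form arithmetic in the style of [1] (hypothetical class).
# A form is alpha_0 plus a dict {label i: alpha_i}, each theta_i in [-1, 1].

class Affine:
    def __init__(self, center, coeffs=None):
        self.c = center
        self.coeffs = dict(coeffs or {})

    @staticmethod
    def const(lo, hi, label):
        # f =_l [a, b]  ->  (a+b)/2 + ((b-a)/2) theta_l
        return Affine((lo + hi) / 2, {label: (hi - lo) / 2})

    def bounds(self):
        # Enclosing interval [r^-(f), r^+(f)]
        rad = sum(abs(a) for a in self.coeffs.values())
        return (self.c - rad, self.c + rad)

    def __add__(self, g):
        # Componentwise addition of the two forms
        out = dict(self.coeffs)
        for i, a in g.coeffs.items():
            out[i] = out.get(i, 0.0) + a
        return Affine(self.c + g.c, out)

    def __sub__(self, g):
        return self + Affine(-g.c, {i: -a for i, a in g.coeffs.items()})

    def mul(self, g, l):
        # Linear part componentwise; the non-linear part and the label-l
        # linear part are folded into a fresh term at label l.
        out = {}
        for i in set(self.coeffs) | set(g.coeffs):
            if i == l:
                continue
            out[i] = self.coeffs.get(i, 0.0) * g.c + g.coeffs.get(i, 0.0) * self.c
        nonlin = sum(abs(a * b) for a in self.coeffs.values()
                     for b in g.coeffs.values())
        lin_l = self.coeffs.get(l, 0.0) * g.c + g.coeffs.get(l, 0.0) * self.c
        out[l] = abs(lin_l) + nonlin
        return Affine(self.c * g.c, out)

x = Affine.const(0.0, 1.0, 1)    # x in [0, 1] at label 1
print((x - x).bounds())          # correlated cancellation: (0.0, 0.0)
```

As in the text, x - x yields exactly 0 because both operands share the same noise symbol theta_1, whereas classical interval arithmetic would return [-1, 1].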
The set of floating-point numbers represented by f is the interval \gamma(f) = [r^-(f) + \delta_{f^-}, r^+(f) + \delta_{f^+}]. However, with affine interval arithmetic, the bounds of the set resulting from an arithmetic operation f . g are not necessarily obtained from the bounds of f and g, as they are in classical interval arithmetic. Thus, we also need the maximum error \delta^M_f committed on the interval [r^-(f), r^+(f)].

(1) A floating-point expression is transformed into a linear expression over the real field, with interval coefficients.

We describe very briefly the principle of the propagation of errors. For an operation h = f . g, if for example the maximum r^+(h) is obtained from the bounds of f and g, then the error \delta_{h^+} is computed using the errors on these bounds; otherwise it is computed using the maximum errors on the intervals. To this must be added the new rounding error due to the conversion to a floating-point number of the sum of this real result and the previous errors. This error can be computed quite accurately for the bounds.

Note that another way to use affine arithmetic for the computation of floating-point values, without the burden of an extra error term \delta(f), would be to use directly the affine form r(f), but with floating-point instead of real coefficients \alpha^f_i. Instead of bounding them using higher-precision numbers, we would bound them using floating-point numbers with outward rounding (it is not possible to use the current rounding mode for each coefficient, because the sum of rounded intermediate results is not equal to the rounded sum). The result would be similar to the formulation proposed here, except that only the maximum errors on the intervals would be used. We show this on the following example.

Consider the computation y = x - a x, for a < 1 such that a/2 is not a floating-point number, starting with x \in [0, 1]. With interval arithmetic, we get y \in [-a, 1]. If this computation, followed by x = y, is in a loop, the loop will be found unstable and the fixpoint over-approximation will be unacceptable.

Consider now affine interval arithmetic with floating-point coefficients, supposing the initial value of x can be associated to label 1. Here, x can be represented without over-approximation by x = 1/2 + (1/2) \theta_1 \varphi_1. This is not the case for a x: we note f^- the nearest floating-point value smaller than a/2, and f^+ the nearest floating-point value greater than a/2. Then a x = [f^-, f^+] + [f^-, f^+] \theta_1 \varphi_1, and y = [1/2 - f^+, 1/2 - f^-] + [1/2 - f^+, 1/2 - f^-] \theta_1 \varphi_1.
Then the values of y are found in [f^- - f^+, 1 - 2 f^-], which is better than with interval arithmetic, but the property that y is greater than 0, crucial to the study of the loop, is still lost.

Finally, consider affine interval arithmetic with real coefficients, plus the rounding error. Variable x is again represented by r(x) = 1/2 + (1/2) \theta_1 \varphi_1, with no error. The real result of the multiplication is r(a x) = a/2 + (a/2) \theta_1 \varphi_1, and supposing that we use higher-precision numbers to abstract the real numbers, a/2 is known exactly. The error when rounding the lower bound of the result (0) is 0, and the error on the upper bound (a) is also 0. The maximum rounding error is of the order of the unit in the last place of a. The real value of y is r(y) = (1-a)/2 + ((1-a)/2) \theta_1 \varphi_1. The lower bound of r(y) is reached for \theta_1 = -1, that is, from the lower bound of the set containing r(x) and the upper bound of r(a x); thus \delta_{y^-} = 0. We get in the same way \delta_{y^+} = 0; the maximum error on a x is not used. The floating-point value of y is thus in [0, 1-a].

4 Conclusion

We have presented in this paper preliminary ideas about two new weakly relational abstract domains: one for the safe computation of invariants describing

the range of floating-point computations, and the other for the estimation of the contributions of floating-point operations to the imprecision errors with respect to an ideal real-number semantics. Other variations using correlations between values and errors, for example by means of relative errors, could also be used, and should be developed elsewhere.

The computations using these two domains are independent, except for the fact that the rounding error at one point is computed using the bounds of the floating-point result at that point. Nevertheless, the computation rules of the domains of errors and of floating-point values have noticeable similarities. There seems to be a formal link, yet to be fully understood, between some of the propagation coefficients appearing in the two domains. These domains are probably part of a broader picture, namely that they might be linked to a more general semantics through abstraction and concretization pairs. This should be investigated in the near future.

References

1. J. L. D. Comba and J. Stolfi. Affine arithmetic and its applications to computer graphics. In SIBGRAPI'93, Recife, PE (Brazil), October 20-22, 1993.
2. P. Cousot and R. Cousot. Abstract interpretation frameworks. Journal of Logic and Symbolic Computation, 2(4):511-547, 1992.
3. E. Goubault. Static analyses of the precision of floating-point operations. In Static Analysis Symposium, SAS'01, number 2126 in LNCS. Springer-Verlag, 2001.
4. E. Goubault, M. Martel, and S. Putot. Asserting the precision of floating-point computations: a simple abstract interpreter. In ESOP'02, LNCS. Springer-Verlag, 2002.
5. M. Martel. Propagation of roundoff errors in finite precision computations: a semantics approach. In ESOP'02, number 2305 in LNCS. Springer-Verlag, 2002.
6. A. Miné. Relational abstract domains for the detection of floating-point run-time errors. In ESOP'04, number 2986 in LNCS. Springer-Verlag, 2004.
7. S. Putot, E. Goubault and M. Martel. Static analysis-based validation of floating-point computations. In LNCS 2991. Springer-Verlag, 2004.