FPSim: A Floating-Point Arithmetic Demonstration Applet


FPSim: A Floating-Point Arithmetic Demonstration Applet

Jeffrey Ward
Department of Computer Science
St. Cloud State University
waje9901@stcloudstate.edu

Abstract

An understanding of IEEE 754 standard-conforming floating-point arithmetic is essential for any computer science student focusing on numerical computing. This topic, however, can often be difficult for students to understand from an abstract, conceptual point of view. FPSim presents instructors with a tool to visually demonstrate the operation of IEEE 754 floating-point arithmetic. This Java applet is designed to be a full and accurate implementation of the IEEE 754 specification. The applet demonstrates addition, subtraction, multiplication, and division of Single and Double floating-point numbers. Output shows the process of normalizing and rounding the number with guard bits and the sticky bit. It also shows the effects of the operation on the five floating-point exception flags. In order to accurately implement the IEEE 754 standard, FPSim performs most arithmetic in software, with little reliance on the floating-point capabilities of either Java or the underlying hardware. The program stores numbers in binary as arrays of integers. Arithmetic functions then work with these arrays. Division is implemented by reciprocation through the use of Newton's Method. This paper will present the internal workings of FPSim.

Introduction

The ANSI/IEEE Std 754-1985, IEEE Standard for Binary Floating-Point Arithmetic, defines the most widely used standard for binary floating-point arithmetic. Specifically, the standard defines how floating-point numbers are to be stored and rounded and how exceptional conditions are to be handled. Any student taking a course in numerical computing must begin with a solid comprehension of binary floating-point arithmetic as defined by this standard. As with many such topics, a visual demonstration can give students a more thorough understanding. FPSim is designed to do just that: it is a Java applet which fully implements the IEEE 754 standard and provides a graphical display of the arithmetic process. What follows is a brief overview of the standard, followed by a description of FPSim and its internal data structures and algorithms. For a more thorough overview of the IEEE 754 standard and floating-point arithmetic, the reader should see [3] and [4].

Binary Representation

Let β be the base of a floating-point number and p its precision. Then the notation

    ± d_0.d_1 d_2...d_(p-1) x β^e

represents the number

    ± (d_0 + d_1 β^(-1) + d_2 β^(-2) + ... + d_(p-1) β^(-(p-1))) x β^e,  where 0 <= d_i < β.

The part d_0.d_1 d_2...d_(p-1) is referred to as the significand (also known as the fraction or mantissa). The IEEE 754 standard defines two floating-point number storage formats: Single and Double. A Single floating-point number consists of 32 bits and a Double floating-point number consists of 64 bits. Each format is separated into three fields, as shown in Figure 1 and Figure 2.

Figure 1: The Single format bit fields. Sign: 1 bit (bit 31); Biased Exponent: 8 bits (bits 30-23); Significand: 23 bits (bits 22-0), with the most significant bit at the high end.

Figure 2: The Double format bit fields. Sign: 1 bit (bit 63); Biased Exponent: 11 bits (bits 62-52); Significand: 52 bits (bits 51-0), with the most significant bit at the high end.
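The field layout in Figures 1 and 2 can be inspected directly from Java, which exposes the raw IEEE 754 Single encoding of a float. The following sketch (illustrative, not part of FPSim itself) extracts the three fields with shifts and masks:

```java
// Decode the three fields of an IEEE 754 Single number from a Java float.
// Widths follow Figure 1: 1 sign bit, 8 exponent bits, 23 significand bits.
public class SingleFields {
    public static int sign(float f)        { return Float.floatToRawIntBits(f) >>> 31; }
    public static int biasedExp(float f)   { return (Float.floatToRawIntBits(f) >>> 23) & 0xFF; }
    public static int significand(float f) { return Float.floatToRawIntBits(f) & 0x7FFFFF; }

    public static void main(String[] args) {
        // -6.25 = -1.5625 x 2^2: sign 1, biased exponent 129, fraction bits 1001000...0
        System.out.println(sign(-6.25f) + " " + biasedExp(-6.25f) + " "
                + Integer.toBinaryString(significand(-6.25f)));
    }
}
```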

The exponent is stored as an unsigned integer. From this integer a bias is subtracted to retrieve the correct, signed exponent. There are two numbers of interest associated with the exponent: e_min and e_max, the smallest and largest possible exponent values for a given format. Table 1 shows the details of the exponent for Single and Double format numbers.

Table 1: The exponent ranges for the standard IEEE 754 formats.

Format   Exponent Length   Bias   Biased Range   e_min    e_max
Single   8 bits            127    0 to 255       -126     127
Double   11 bits           1023   0 to 2047      -1022    1023

In addition to Single and Double types, the IEEE 754 standard allows for architecture-specific Extended formats. For example, the Intel IA-32 architecture includes 80-bit Double Extended floating-point numbers.

Normal and Subnormal Numbers

The most significant bit of the binary floating-point significand is always assumed to be the only digit to the left of the binary point. Let b_0.b_1 b_2...b_(p-1) be the significand of a binary floating-point number of precision p. If b_0 is one, the number is said to be normal; if b_0 is zero, the number is subnormal. IEEE 754 treats normalized numbers as the common case and does not store the most significant bit. Subnormal numbers are required when a number's exponent would be less than e_min if normalized. In this case, the significand is right-shifted until the number can be stored with the exponent e_min - 1, which is interpreted as shown in Table 2. This leads to a loss of precision and to the underflow exception being thrown.

Table 2: Interpretation of binary floating-point numbers from their exponent.

Unbiased Exponent       Interpretation
e = e_min - 1           ± 0.b_1 b_2...b_(p-1) x 2^e_min
e_min <= e <= e_max     ± 1.b_1 b_2...b_(p-1) x 2^e
e = e_max + 1           ± Infinity if b_1 = b_2 = ... = b_(p-1) = 0, NaN otherwise

Special Values

IEEE 754 defines two special values: Infinity and Not a Number (NaN). NaN results from performing some invalid operation like those listed in Table 3
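The bias subtraction in Table 1 can be checked from Java. This short sketch (illustrative, not FPSim code) recovers the unbiased Single exponent and compares it with Java's built-in Math.getExponent:

```java
// Recover the unbiased exponent of a float by subtracting the Single-format
// bias of 127 (Table 1) from the stored biased exponent field.
public class UnbiasedExponent {
    static final int BIAS = 127;

    public static int unbiased(float f) {
        return ((Float.floatToRawIntBits(f) >>> 23) & 0xFF) - BIAS;
    }

    public static void main(String[] args) {
        System.out.println(unbiased(8.0f));          // 8.0 = 1.0 x 2^3, so 3
        System.out.println(Math.getExponent(8.0f));  // Java's equivalent for normal numbers
        // A subnormal has biased exponent 0, i.e. unbiased e_min - 1 = -127 (Table 2).
        System.out.println(unbiased(Float.MIN_VALUE));
    }
}
```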

below. There is no single representation of NaN, but instead a whole set of possible representations: any number with an exponent of e_max + 1 and at least one bit of the significand field set is a NaN. Only one distinction is made among the possible bit patterns of a NaN. If the most significant bit of the significand field is set, the number is a quiet NaN; if that bit is clear, the number is a signaling NaN. A signaling NaN used in one of the invalid operations listed in Table 3 will trigger an invalid exception, but a quiet NaN will not. An invalid operation involving either a signaling or a quiet NaN as an input will output a quiet NaN.

Table 3: Invalid operations which produce NaN.

Operation                NaN Produced By
Addition/subtraction     +Infinity + (-Infinity)
Multiplication           0 x Infinity
Division                 0 / 0, Infinity / Infinity

Rounding

Regarding the issue of rounding, the IEEE 754 standard states that "...every operation...shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then rounded that result according to one of the [four rounding] modes" [5]. Of course, producing a result that is correct to infinite precision would be impossible much of the time, so guard bits and a sticky bit are employed for the purpose of correct rounding. A set of n guard bits will contain the last n bits right-shifted out of the result during an operation; essentially, the number of significant bits is temporarily increased by n. The sticky bit will capture any set bits that are right-shifted out of the guard bits. It is referred to as sticky because, once set, it remains set until the current operation is complete. FPSim uses two guard bits and a sticky bit to ensure correctly rounded results. There are four different rounding methods specified by IEEE 754. The default, round to nearest, will round to the nearest representable number. In the event that two representable numbers are equidistant from the result, the even number is taken.
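The quiet/signaling distinction above is a single bit test. As a sketch in Java (assuming the most-significant-significand-bit convention the text describes, applied to the Single format):

```java
// Classify Single-format NaNs: exponent field all ones and a nonzero
// significand; a set bit 22 (the MSB of the significand field) marks a
// quiet NaN, a clear bit 22 a signaling NaN.
public class NanKind {
    public static boolean isNaN(int bits) {
        return ((bits >>> 23) & 0xFF) == 0xFF && (bits & 0x7FFFFF) != 0;
    }
    public static boolean isQuiet(int bits)     { return isNaN(bits) && (bits & 0x400000) != 0; }
    public static boolean isSignaling(int bits) { return isNaN(bits) && (bits & 0x400000) == 0; }

    public static void main(String[] args) {
        // Java's canonical NaN (0x7FC00000) is quiet; 0x7F800001 is a signaling NaN.
        System.out.println(isQuiet(Float.floatToRawIntBits(Float.NaN)));
        System.out.println(isSignaling(0x7F800001));
    }
}
```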
The three other modes are round towards +Infinity, round towards -Infinity, and round towards zero. Care must be taken when rounding ±Infinity with these last three modes; see Table 4.
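With two guard bits and a sticky bit, round to nearest reduces to comparing the three-bit value g1 g2 s against the halfway point. A minimal sketch of that decision on an integer significand (illustrative; FPSim's own rounding procedure appears in Appendix A.7):

```java
// Round-to-nearest-even using two guard bits and a sticky bit:
// guardValue = g1*4 + g2*2 + s, where 4 marks a result exactly halfway
// between two representable significands.
public class RoundNearestEven {
    public static long round(long significand, int g1, int g2, int sticky) {
        int guardValue = g1 * 4 + g2 * 2 + sticky;
        if (guardValue < 4) return significand;      // below halfway: truncate
        if (guardValue > 4) return significand + 1;  // above halfway: round up
        // exactly halfway: choose the neighbor with an even low bit
        return (significand & 1) == 0 ? significand : significand + 1;
    }

    public static void main(String[] args) {
        System.out.println(round(0b1010, 1, 0, 1));  // above halfway: rounds up to 11
        System.out.println(round(0b1010, 1, 0, 0));  // tie, already even: stays 10
        System.out.println(round(0b1011, 1, 0, 0));  // tie, odd: rounds up to 12
    }
}
```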

Table 4: Results of rounding ±Infinity.

Round Mode                 Result for +Infinity     Result for -Infinity
Round Towards Zero         1.11...1 x 2^e_max       -1.11...1 x 2^e_max
Round Towards +Infinity    +Infinity                -1.11...1 x 2^e_max
Round Towards -Infinity    1.11...1 x 2^e_max       -Infinity

Floating-Point Exceptions

The IEEE 754 standard defines five floating-point arithmetic exceptions.

1) Invalid: Occurs from any of the operations listed in Table 3.
2) Division by zero: Occurs when attempting to divide a finite, nonzero number by zero.
3) Overflow: Indicates that the result of an operation was larger than the maximum possible value for the format being used.
4) Underflow: Occurs when the result of an operation is subnormal.
5) Inexact: Indicates that the rounded result of an operation is not exact; in other words, some precision may have been lost.

Design

One of the primary goals for FPSim was to create an implementation that would be as true as possible to the IEEE 754 standard. For this reason, the program stores all numbers and performs all floating-point arithmetic in software. There is little dependence on the floating-point functionality of the underlying hardware or the Java virtual machine.

Internal Floating-Point Number Storage

Within the program, the significand of each number is stored as an array of integers taking on the values zero or one. For the sake of simplicity in the arithmetic algorithms, the leading bit of the significand, which is implicit in the IEEE Single and Double formats, is included in this array. The array is exactly the size of the number of significant bits, with the most significant bit at index zero. Several other fields also describe the number. The exponent is stored as an integer and is unbiased. The sign bit, as well, is an integer, being either zero to indicate positive or one to indicate negative. The current state of the number, whether it is normal, subnormal, zero, NaN, or infinity, is also stored.
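The storage scheme just described might look like the following sketch. The class and field names here are illustrative, not FPSim's actual identifiers:

```java
// Illustrative sketch of FPSim's internal number record: a bit-per-element
// significand array (MSB at index 0, explicit leading bit), an unbiased
// integer exponent, an integer sign, and a state tag.
public class FPNumber {
    public static final int NORMAL = 0, SUBNORMAL = 1, ZERO = 2, NAN = 3, INFINITY = 4;

    int sign;           // 0 = positive, 1 = negative
    int exponent;       // stored unbiased
    int[] significand;  // each element is 0 or 1; length = precision
    int state;          // one of the constants above

    FPNumber(int precision) {
        significand = new int[precision];  // 24 for Single, 53 for Double
    }

    public static void main(String[] args) {
        FPNumber single = new FPNumber(24);  // 23 stored bits + explicit leading bit
        System.out.println(single.significand.length);
    }
}
```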

Addition and Subtraction

Addition and subtraction within FPSim both begin in the same procedure, which decides whether to add or subtract the two operands based on the specified operation and the operand signs. This algorithm is outlined in Appendix A.1. Once this decision has been made, the operands are sent to either the addition or the subtraction procedure; these procedures are outlined in Appendices A.2 and A.3. The task of adding or subtracting the significands is performed by software implementations of binary adder and subtracter circuits. Early in the development of FPSim, subtraction and addition of negative numbers were implemented by obtaining the two's complement of the operand's significand and then adding. This led to an overly complicated process for subtracting, and eventually a separate binary subtracter circuit was implemented.

Multiplication

The multiplication algorithm used in FPSim is outlined in Appendix A.4. Multiplication of the significands is performed by a software binary multiplier circuit. An array of size 2n, where n is the number of bits in the significand, is used for the algorithm. When the multiplication is finished, the result is taken to be the first n bits in the array. The next two bits comprise the guard bits; any set bit after those two will cause the sticky bit to be set.

Division

Division in FPSim is by far the most complex of the four arithmetic operations. It is performed by reciprocation: the reciprocal of the divisor is found and then multiplied by the dividend. FPSim uses Newton's Method to find this reciprocal. We begin with the equation

    f(r) = 1/r - d,

where d is the divisor. The root of this function corresponds to the reciprocal of the divisor. Note that for the derivative, we have

    f'(r) = -1/r^2.
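The software binary adder mentioned above can be sketched as a ripple-carry loop over FPSim-style bit arrays. This is an illustrative reconstruction, assuming the MSB-at-index-zero layout described earlier, not FPSim's actual code:

```java
import java.util.Arrays;

// Ripple-carry addition over bit arrays with the most significant bit at
// index 0. Returns the carry out of the top bit, which signals that the
// sum must be right-shifted by one during normalization.
public class BitArrayAdder {
    public static int add(int[] a, int[] b, int[] sum) {
        int carry = 0;
        for (int i = a.length - 1; i >= 0; i--) {  // walk from LSB to MSB
            int s = a[i] + b[i] + carry;
            sum[i] = s & 1;   // low bit of the full-adder output
            carry = s >> 1;   // carry into the next position
        }
        return carry;
    }

    public static void main(String[] args) {
        int[] a = {1, 0, 1, 1};  // 11
        int[] b = {0, 1, 1, 0};  // 6
        int[] sum = new int[4];
        int carry = add(a, b, sum);
        // 11 + 6 = 17 = 10001 in binary: carry out 1, remaining bits 0001
        System.out.println(carry + " " + Arrays.toString(sum));
    }
}
```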

If A_0 is an initial approximation for the reciprocal of the divisor, then at each iteration of Newton's Method we compute

    A_n = A_(n-1) - f(A_(n-1)) / f'(A_(n-1))
        = A_(n-1) - (1/r - d) / (-1/r^2)
        = A_(n-1) + r - d*r^2,   for n >= 1.

Since A_(n-1) is an approximation of r, we replace r with A_(n-1) and obtain

    A_n = 2*A_(n-1) - d*A_(n-1)^2 = A_(n-1) * (2 - d*A_(n-1)),   for n >= 1.

The greatest difficulty in this approach to division is finding a good initial approximation for the reciprocal. FPSim uses a formula described in [2] to make this approximation. Let Y be the divisor, n the number of significant bits in Y, and m an integer such that 0 < m < n. Let p = 1.y_1 y_2...y_m and q = 0.00...0 y_(m+1) y_(m+2)...y_n. Then the initial approximation A_0 is given by

    A_0 = (p^2 - p*q + q^2) / p^3.

The computation of A_0 is the one and only part of FPSim that uses Java's floating-point functions. A hardware implementation would likely use a lookup table with a number of values of this formula already computed. This is not as feasible in FPSim, where a floating-point number requires much more memory than the 4 to 16 bytes needed in hardware; even a relatively small lookup table would consume a very large amount of memory. Neither could the approximation be computed on demand in software, since it requires a division, the very operation that we are ultimately trying to perform. So the approximation is computed in hardware and then converted to FPSim's internal floating-point format.

Testing

A program called TestFloat [1] is being used to verify the accuracy of FPSim. TestFloat compares the results of a computation from a software-based floating-point arithmetic engine, called SoftFloat, to those of the target implementation. It reports any discrepancies between the two in the resulting number or exception flags. TestFloat was designed for testing hardware, so some modifications were necessary. Each line of C
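The iteration A_n = A_(n-1)(2 - d*A_(n-1)) converges quadratically, roughly doubling the number of correct bits per step. A quick demonstration using Java doubles for clarity (FPSim itself runs the iteration on its software bit arrays):

```java
// Newton's Method for the reciprocal 1/d: A_n = A_(n-1) * (2 - d * A_(n-1)).
// Each iteration roughly squares the relative error of the approximation.
public class NewtonReciprocal {
    public static double reciprocal(double d, double a0, int iterations) {
        double a = a0;
        for (int i = 0; i < iterations; i++) {
            a = a * (2.0 - d * a);
        }
        return a;
    }

    public static void main(String[] args) {
        // Even a deliberately crude starting guess converges in a few steps.
        for (int n = 0; n <= 5; n++) {
            System.out.println(n + ": " + reciprocal(1.5, 0.5, n));
        }
    }
}
```

Note that the iteration uses only multiplication and subtraction, which is exactly why it suits a divider built from a software multiplier: no division is ever performed directly.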

code in TestFloat that would normally compute the given operation was replaced with a function to execute a simple Java program. This program contained all of FPSim's arithmetic code. The operands and results were then passed back and forth between TestFloat and Java. At the time of this paper's writing, testing is still in progress. At present, addition is largely error-free, with only a few minor corrections that need to be made. Multiplication is about 50% error-free, and division is almost completely untested.

References

1) Hauser, J. TestFloat. Retrieved February 21, 2004, from http://www.jhauser.us/arithmetic/testfloat.html
2) Hennessy, J. L., & Patterson, D. A. (2003). Computer Architecture: A Quantitative Approach (3rd ed.). San Francisco: Morgan Kaufmann.
3) Ito, M., Takagi, N., & Yajima, S. (1995). Efficient Initial Approximation and Fast Converging Methods for Division and Square Root. Proceedings of the 12th IEEE Symposium on Computer Arithmetic, 2-9.
4) Overton, M. L. (2001). Numerical Computing with IEEE Floating Point Arithmetic. Philadelphia: SIAM.
5) Sun Microsystems. (1996). Numerical Computing Guide.
6) The Institute of Electrical and Electronics Engineers, Inc. (1985). IEEE Standard for Binary Floating-Point Arithmetic (ANSI/IEEE Std 754-1985).

Appendix A

A.1 Addition and Subtraction

Input: Floating-point numbers OPA and OPB
Output: Floating-point number RESULT

If (OPA.exp < OPB.exp)
    Shift significand of OPA right until the exponent of OPA matches that of OPB;
If (OPA.exp > OPB.exp)
    Shift significand of OPB right until the exponent of OPB matches that of OPA;
If (OPA == NaN or OPB == NaN)
    RESULT = NaN;
    If (the NaN operand is a signaling NaN)
        Throw an invalid exception;
If (the specified operation is addition)
    If (OPA.sign == OPB.sign)
        RESULT = OPA + OPB;
    Else If (OPA > OPB)
        RESULT = OPA - OPB;
    Else If (OPA < OPB)
        RESULT = OPB - OPA;
    Else
        RESULT = Zero;
If (the specified operation is subtraction)
    If (OPA.sign == OPB.sign)
        RESULT = OPA - OPB;
    Else
        RESULT = OPA + OPB;

A.2 Addition

Input: Floating-point numbers OPA and OPB
Output: Floating-point number RESULT

If (OPA == Infinity or OPB == Infinity)
    RESULT = Infinity;
Else
    RESULT.significand = OPA.significand + OPB.significand;
    If (the addition resulted in a carry out)
        Shift RESULT's significand right by one and set the most significant bit to one;
Normalize RESULT;
Round RESULT;

A.3 Subtraction

Input: Floating-point numbers OPA and OPB
Output: Floating-point number RESULT

If (OPA == Infinity and OPB == Infinity)
    RESULT = NaN;
    Throw an invalid exception;
Else If (OPA == Infinity or OPB == Infinity)
    RESULT = Infinity;
Else
    RESULT.significand = OPA.significand - OPB.significand;
Normalize RESULT;
Round RESULT;

A.4 Multiplication

Input: Floating-point numbers OPA and OPB
Output: Floating-point number RESULT

If (OPA == NaN or OPB == NaN)
    RESULT = NaN;
    If (the NaN operand is a signaling NaN)
        Throw an invalid exception;
If (one operand is Infinity and the other is Zero)
    RESULT = NaN;
    Throw an invalid exception;
If (OPA == Infinity or OPB == Infinity)
    RESULT = Infinity;
If (OPA.sign == OPB.sign)
    RESULT.sign = Positive;
Else
    RESULT.sign = Negative;
RESULT.exponent = OPA.exponent + OPB.exponent;
RESULT.significand = OPA.significand * OPB.significand;
Normalize RESULT;
Round RESULT;

A.5 Division

Input: Floating-point numbers OPA and OPB
Output: Floating-point number RESULT

If (OPA == NaN or OPB == NaN)
    RESULT = NaN;
    If (the NaN operand is a signaling NaN)
        Throw an invalid exception;
If (OPA == Zero and OPB == Zero)
    RESULT = NaN;
    Throw an invalid exception;
If (OPB == Zero)
    RESULT = Infinity;
    Throw a divide-by-zero exception;
If (OPA.sign == OPB.sign)
    RESULT.sign = Positive;
Else
    RESULT.sign = Negative;
Compute OPB_INVERSE, the reciprocal of OPB;
RESULT = OPA * OPB_INVERSE;
Normalize RESULT;
Round RESULT;

A.6 Normalization

Input: Floating-point number NUMBER

If (NUMBER.exponent < e_min)
    Right-shift NUMBER until NUMBER.exponent == e_min;
If (e_min <= NUMBER.exponent <= e_max)
    Left-shift NUMBER until the most significant bit is one or until NUMBER.exponent == e_min;
If (NUMBER.exponent > e_max)
    NUMBER = Infinity;
    Throw overflow and inexact exceptions;

A.7 Rounding

Input: Floating-point number NUMBER; guard/sticky bits from the last operation stored in the three-element array GUARD

If (no guard/sticky bits are set)
    Return NUMBER unchanged;
If (Rounding Mode == Round Towards -Infinity)
    If (NUMBER.sign == Positive)
        Truncate NUMBER.significand;
    Else
        Increment NUMBER.significand;
If (Rounding Mode == Round Towards +Infinity)
    If (NUMBER.sign == Positive)
        Increment NUMBER.significand;
    Else
        Truncate NUMBER.significand;
If (Rounding Mode == Round Towards Zero)
    Truncate NUMBER.significand;
If (Rounding Mode == Round to Nearest)
    GUARDVALUE = GUARD[0] * 4 + GUARD[1] * 2 + GUARD[2];
    If (GUARDVALUE < 4)
        Truncate NUMBER.significand;
    Else If (GUARDVALUE > 4)
        Increment NUMBER.significand;
    Else If (least significant bit of NUMBER.significand == 1)
        Increment NUMBER.significand;
    Else
        Truncate NUMBER.significand;