Floating-Point Arithmetic


Floating-Point Arithmetic

    if ((A + A) - A == A) { SelfDestruct(); }

Reading: Study Chapter 4.

Why Floating Point? Aren't Integers Enough?

Many applications require numbers with a VERY large range (e.g., nanoseconds to centuries). Most scientific applications need real numbers (e.g., π), but so far we have only used integers. We *could* use integers with a fixed binary point, or we *could* implement fractions using two integers (e.g., 1/2, 1023/102934). Floating point is a better answer for most applications.

Recall Scientific Notation

Let's start our discussion of floating point by recalling scientific notation from high school. Numbers are represented in parts, the significant digits and an exponent:

        42    =  4.200 x 10^1
      1024    =  1.024 x 10^3
    -0.0625   = -6.250 x 10^-2

Arithmetic is done in pieces. Before adding, we must match the exponents, effectively denormalizing the smaller-magnitude number:

     1024     1.024 x 10^3
    -  42   - 0.042 x 10^3
    -----     ------------
      982     0.982 x 10^3  =  9.820 x 10^2

We then normalize the final result so there is one digit to the left of the decimal point and adjust the exponent accordingly.
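
The align-add-renormalize steps can be played out in code as well; this small C sketch uses a made-up SciNum type (not from the slides) to mimic the decimal example above:

    #include <math.h>
    #include <stdio.h>

    /* A toy decimal scientific-notation value: sig x 10^exp, with 1 <= |sig| < 10. */
    struct SciNum { double sig; int exp; };

    /* Illustrative helper: add two SciNums the way the slide describes. */
    static struct SciNum sci_add(struct SciNum a, struct SciNum b) {
        /* Step 1: match exponents by denormalizing the smaller-exponent operand. */
        if (a.exp < b.exp) { struct SciNum t = a; a = b; b = t; }
        b.sig /= pow(10.0, a.exp - b.exp);              /* e.g. -4.200e1 becomes -0.042e3 */
        /* Step 2: add significands. */
        struct SciNum r = { a.sig + b.sig, a.exp };
        /* Step 3: renormalize so exactly one digit is left of the decimal point. */
        while (r.sig != 0.0 && fabs(r.sig) < 1.0)  { r.sig *= 10.0; r.exp--; }
        while (fabs(r.sig) >= 10.0)                { r.sig /= 10.0; r.exp++; }
        return r;
    }

    int main(void) {
        struct SciNum a = {  1.024, 3 };   /*  1024 */
        struct SciNum b = { -4.200, 1 };   /*   -42 */
        struct SciNum r = sci_add(a, b);
        printf("%.3f x 10^%d\n", r.sig, r.exp);   /* prints 9.820 x 10^2 */
        return 0;
    }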

Multiplication in Scientific Notation

Multiplication is straightforward: multiply together the significant parts, add the exponents, and normalize if required. Examples:

       1024      1.024 x 10^3
    x 0.0625   x 6.250 x 10^-2
    --------     -------------
         64      6.400 x 10^1

         42      4.200 x 10^1
    x 0.0625   x 6.250 x 10^-2
    --------     -------------
      2.625     26.250 x 10^-1  =  2.625 x 10^0  (normalized)

In multiplication, how far is the most you will ever normalize? In addition?
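
The multiplication rule is just as easy to express in code: multiply the significands, add the exponents, and shift at most one digit to renormalize. A minimal sketch, repeating the made-up SciNum type so the example stands alone:

    #include <math.h>
    #include <stdio.h>

    /* Toy decimal scientific-notation value, as in the addition sketch above. */
    struct SciNum { double sig; int exp; };

    /* Multiply significands, add exponents, renormalize by at most one digit. */
    static struct SciNum sci_mul(struct SciNum a, struct SciNum b) {
        struct SciNum r = { a.sig * b.sig, a.exp + b.exp };
        if (fabs(r.sig) >= 10.0) { r.sig /= 10.0; r.exp++; }   /* [1,100) -> [1,10) */
        return r;
    }

    int main(void) {
        struct SciNum r = sci_mul((struct SciNum){ 4.200, 1 },
                                  (struct SciNum){ 6.250, -2 });
        printf("%.3f x 10^%d\n", r.sig, r.exp);   /* 2.625 x 10^0, i.e. 42 * 0.0625 */
        return 0;
    }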

FP == Binary Scientific Notation

IEEE single-precision floating-point format (the example below is 0x42280000 in hexadecimal):

    S  E = Exponent + 127   F = Significand (Mantissa) - 1
    0  10000100             01010000000000000000000

Exponent: an unsigned 8-bit integer with a bias of 127, E = Exponent + 127, so Exponent = 10000100 (132) - 127 = 5.
Significand: unsigned fixed binary point with a hidden one, Significand = 1 + 0.01010000000000000000000 = 1.3125.
Putting it all together: N = (-1)^S (1 + F) x 2^(E-127) = (-1)^0 (1.3125) x 2^5 = 42.
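
One way to check this decoding is to pull the three fields straight out of the bit pattern; the decode_float helper below is made up for illustration, not a standard function:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative helper: print the sign, biased exponent, and fraction fields of a float. */
    static void decode_float(float x) {
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);              /* reinterpret the 32-bit pattern */
        uint32_t sign = bits >> 31;
        uint32_t exp  = (bits >> 23) & 0xFF;         /* E = Exponent + 127 */
        uint32_t frac = bits & 0x7FFFFF;             /* F, the 23 fraction bits */
        printf("0x%08X: S=%u E=%u (Exponent=%d) F=0x%06X\n",
               bits, sign, exp, (int)exp - 127, frac);
    }

    int main(void) {
        decode_float(42.0f);   /* 0x42280000: S=0 E=132 (Exponent=5) F=0x280000 */
        return 0;
    }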

Example Numbers

One: Sign = +, Exponent = 0, Significand = 1.0; (-1)^0 (1.0) x 2^0 = 1; S = 0, E = 0 + 127, F = 1.0 - 1
    0 01111111 00000000000000000000000  =  0x3f800000

One-half: Sign = +, Exponent = -1, Significand = 1.0; (-1)^0 (1.0) x 2^-1 = 1/2; S = 0, E = -1 + 127, F = 1.0 - 1
    0 01111110 00000000000000000000000  =  0x3f000000

Minus Two: Sign = -, Exponent = 1, Significand = 1.0
    1 10000000 00000000000000000000000  =  0xc0000000
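
These encodings can be reproduced by printing each float's raw 32-bit pattern; float_bits below is just an illustrative helper:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t float_bits(float x) {
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);   /* copy out the raw encoding */
        return bits;
    }

    int main(void) {
        printf("1.0f  -> 0x%08x\n", float_bits(1.0f));    /* 0x3f800000 */
        printf("0.5f  -> 0x%08x\n", float_bits(0.5f));    /* 0x3f000000 */
        printf("-2.0f -> 0x%08x\n", float_bits(-2.0f));   /* 0xc0000000 */
        return 0;
    }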

Zeros

How do you represent 0? Zero: Sign = ?, Exponent = ?, Significand = ? Here's where the hidden 1 comes back to bite you. Hint: zero is small. What's the smallest number you can generate? With E = Exponent + 127, Exponent = -127, Significand = 1.0:
    (-1)^0 (1.0) x 2^-127 = 5.87747 x 10^-39

IEEE Convention: when E = 0 (Exponent = -127), we'll interpret numbers differently:

    0 00000000 00000000000000000000000  =  0,  not  1.0 x 2^-127
    1 00000000 00000000000000000000000  = -0,  not -1.0 x 2^-127

Yes, there are 2 zeros. Setting E = 0 is also used to represent a few other small numbers besides 0. In all of these numbers there is no hidden one assumed in F; they are called the denormalized numbers. WARNING: If you rely on these values you are skating on thin ice!
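
A quick C check shows that the two zeros compare equal even though their bit patterns differ; signbit() from <math.h> is the portable way to tell them apart:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float pz = 0.0f, nz = -0.0f;
        uint32_t pb, nb;
        memcpy(&pb, &pz, sizeof pb);
        memcpy(&nb, &nz, sizeof nb);
        printf("+0 = 0x%08x, -0 = 0x%08x\n", pb, nb);    /* 0x00000000 vs 0x80000000 */
        printf("+0 == -0 ? %d\n", pz == nz);             /* 1: they compare equal */
        printf("signbit(-0) = %d\n", signbit(nz) != 0);  /* 1: only the sign bit differs */
        return 0;
    }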

Low-End of the IEEE Spectrum

[Figure: the number line near zero, marking 0, 2^-bias, 2^(1-bias), and 2^(2-bias). Normal numbers with the hidden bit start at 2^-bias, leaving a "denorm gap" below; denormalized numbers fill it with p-1 bits of precision versus p bits for normals.]

The gap between 0 and the next representable normalized number is much larger than the gaps between nearby representable numbers. The IEEE standard uses denormalized numbers to fill in the gap, making the distances between numbers near 0 more alike. Denormalized numbers have a hidden 0 and a fixed exponent of -126:

    X = (-1)^S x 2^-126 x (0.F)

NOTE: Zero is represented using 0 for the exponent and 0 for the mantissa. Either +0 or -0 can be represented, based on the sign bit.
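
The denorm gap is easy to probe from C: FLT_MIN is the smallest normalized float (2^-126), and multiplying it by FLT_EPSILON lands on the smallest denormal (2^-149). A short sketch, assuming the platform does not flush subnormals to zero:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        float smallest_norm   = FLT_MIN;                 /* 2^-126: hidden 1, full 24-bit precision */
        float smallest_denorm = FLT_MIN * FLT_EPSILON;   /* 2^-149: hidden 0, only 1 bit of precision */
        float below           = smallest_denorm / 2;     /* nothing left: underflows to 0 */
        printf("FLT_MIN          = %g\n", smallest_norm);
        printf("smallest denorm  = %g\n", smallest_denorm);
        printf("half of that     = %g\n", below);
        return 0;
    }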

Infinities

IEEE floating point also reserves the largest possible exponent to represent unrepresentably large numbers.

Positive infinity: S = 0, E = 255, F = 0
    0 11111111 00000000000000000000000  =  +∞  =  0x7f800000
Negative infinity: S = 1, E = 255, F = 0
    1 11111111 00000000000000000000000  =  -∞  =  0xff800000

Other numbers with E = 255 (F ≠ 0) are used to represent exceptions, or Not-A-Number (NaN): √-1, ∞ - ∞, 0/0, ∞/∞, log(-5). The standard does, however, attempt to handle a few special cases: 1/0 = +∞, -1/0 = -∞, log(0) = -∞.
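
These special values are easy to generate and classify from C using isinf() and isnan() from <math.h>; note that the divisions below are on floats, where IEEE 754 defines the result, not on integers:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        float zero    = 0.0f;
        float pos_inf =  1.0f / zero;    /* +infinity */
        float neg_inf = -1.0f / zero;    /* -infinity */
        float nan_val =  0.0f / zero;    /* NaN       */
        printf("%f %f %f\n", pos_inf, neg_inf, nan_val);   /* inf -inf nan (typical output) */
        printf("isinf: %d, isnan: %d\n", isinf(pos_inf) != 0, isnan(nan_val) != 0);
        printf("NaN == NaN ? %d\n", nan_val == nan_val);   /* 0: NaN never compares equal */
        return 0;
    }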

Floating Point Anomalies

It is CRUCIAL for computer scientists to know that floating-point arithmetic is NOT the same arithmetic you learned in grade school! 1.0 is NOT EQUAL to 10*0.1. (Why?) For example, 1.0 * 10.0 == 10.0, but 0.1 * 10.0 != 1.0, because 0.1 decimal is a repeating fraction in binary:

    0.1 decimal = 1/16 + 1/32 + 1/256 + 1/512 + 1/4096 + ... = 0.0 0011 0011 0011 0011 0011 ... (binary)

Consider the analogous case in decimal: 1/3 is the repeating fraction 0.333333... If you quit at some fixed number of digits, then 3 * 1/3 != 1.
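
A tiny experiment makes the point: summing the rounded value 0.1 ten times does not give exactly 1, and printing 0.1 with enough digits exposes the stored approximation:

    #include <stdio.h>

    int main(void) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) sum += 0.1;   /* ten copies of the rounded 0.1 */
        printf("sum       = %.17g\n", sum);        /* 0.99999999999999989, not 1 */
        printf("sum == 1 ? %d\n", sum == 1.0);     /* 0 */
        printf("0.1       = %.17g\n", 0.1);        /* 0.10000000000000001 */
        return 0;
    }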

F.P. ≈ Real Numbers

Floating point is an approximation of the real numbers; it even breaks basic LAWS of math. Floating-point arithmetic IS NOT, in general, associative: x + (y + z) is not necessarily equal to (x + y) + z. Thus, the order of calculations matters and should be considered. Floating-point arithmetic is approximate, whereas integer arithmetic is exact. Operations may not result in any visible change: (x + 1.0) MAY == x. Floating point is often overused by programmers, since the finiteness of the representation is less apparent. Programmers who assume that floating-point numbers are real numbers do so at their peril.
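
Both effects show up with nothing more than one large and one small operand; the constants below are only chosen so that float's 24-bit significand cannot hold the exact sums (they are not from the slide):

    #include <stdio.h>

    int main(void) {
        /* At 1e8f the spacing between floats (ulp) is 8, so a half-ulp is 4. */
        float big = 1e8f, small = 3.0f;

        /* Associativity failure: grouping changes the answer. */
        float left  = (big + small) + small;   /* each 3.0 is below a half-ulp and rounds away: 100000000 */
        float right = big + (small + small);   /* 6.0 is above a half-ulp and rounds up:        100000008 */
        printf("left  = %.1f\nright = %.1f\nequal ? %d\n", left, right, left == right);

        /* Absorption: adding 1.0 may not change x at all. */
        float x = 1e8f;
        printf("(x + 1.0f) == x ? %d\n", (x + 1.0f) == x);   /* 1 */
        return 0;
    }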

Floating Point Disasters

Scud missiles get through, 28 die: In 1991, during the 1st Gulf War, a Patriot missile defense system let a Scud get through; it hit a barracks and killed 28 people. The problem was due to a floating-point error when taking the difference of a converted and scaled integer. (Source: Robert Skeel, "Round-off Error Cripples Patriot Missile", SIAM News, July 1992.)

$7B rocket crashes (Ariane 5): When the first ESA Ariane 5 was launched on June 4, 1996, it lasted only 39 seconds; then the rocket veered off course and self-destructed. An inertial system produced a floating-point exception while trying to convert a 64-bit floating-point number to an integer. Ironically, the same code was used in the Ariane 4, but the larger values were never generated (http://www.around.com/ariane.html).

Intel ships and denies bugs: In 1994, Intel shipped its first Pentium processors with a floating-point divide bug. The bug was due to bad look-up tables used to speed up quotient calculations. After months of denials, Intel adopted a no-questions-asked replacement policy, costing $300M. (http://www.intel.com/support/processors/pentium/fdiv/)

Floating-Point Addition H/W

[Figure: block diagram of the floating-point adder hardware.]
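
The adder follows the same recipe as the scientific-notation example a few slides back: compare exponents, shift the smaller significand right to align, add, then renormalize. A much-simplified C sketch of that flow for positive, normalized inputs only (no sign handling, rounding, or special cases, so it is not a faithful model of the real datapath):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static float fp_add_sketch(float fa, float fb) {
        uint32_t a, b;
        memcpy(&a, &fa, 4);  memcpy(&b, &fb, 4);
        /* Keep the larger magnitude in a (works for positive normalized floats). */
        if ((a & 0x7FFFFFFF) < (b & 0x7FFFFFFF)) { uint32_t t = a; a = b; b = t; }

        int32_t  ea = (a >> 23) & 0xFF,  eb = (b >> 23) & 0xFF;
        uint64_t sa = (a & 0x7FFFFF) | 0x800000;        /* restore the hidden 1 */
        uint64_t sb = (b & 0x7FFFFF) | 0x800000;

        int shift = ea - eb;
        sb = (shift < 24) ? sb >> shift : 0;            /* Step 1: align exponents */
        uint64_t sum = sa + sb;                         /* Step 2: add significands */
        if (sum & 0x1000000) { sum >>= 1; ea++; }       /* Step 3: normalize on carry-out */

        uint32_t r = ((uint32_t)ea << 23) | ((uint32_t)sum & 0x7FFFFF);
        float result;
        memcpy(&result, &r, 4);
        return result;
    }

    int main(void) {
        printf("%g\n", fp_add_sketch(1024.0f, 42.0f));   /* 1066 */
        return 0;
    }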

Floating-Point Multiplication H/W

[Figure: multiplier datapath. Two operands (S, E, F) feed a small exponent adder with a "subtract 127" stage, a 24-by-24 significand multiplier with rounding, and a mux that shifts right by 1 for normalization, all under control logic.]

Surprisingly, FP multiplication is simpler than FP addition.

Step 1: Multiply the significands and add the exponents. Since both stored exponents carry the bias, one bias must be subtracted:
    E_R = E_1 + E_2 - 127, because Exponent_R + 127 = (Exponent_1 + 127) + (Exponent_2 + 127) - 127.

Step 2: Normalize the result. The product of significands in [1,2) x [1,2) lies in [1,4), so at most we shift right one bit and fix the exponent.
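
A matching sketch of the multiply flow, again restricted to positive normalized inputs and truncating instead of rounding, just to make the two steps concrete:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static float fp_mul_sketch(float fa, float fb) {
        uint32_t a, b;
        memcpy(&a, &fa, 4);  memcpy(&b, &fb, 4);

        /* Step 1: add biased exponents, subtract one bias; multiply the 24-bit significands. */
        int32_t  er  = (int32_t)((a >> 23) & 0xFF) + (int32_t)((b >> 23) & 0xFF) - 127;
        uint64_t sig = (uint64_t)((a & 0x7FFFFF) | 0x800000) *
                       (uint64_t)((b & 0x7FFFFF) | 0x800000);   /* product in [1,4) x 2^46 */

        /* Step 2: normalize -- at most one right shift. */
        if (sig & ((uint64_t)1 << 47)) { sig >>= 1; er++; }
        sig >>= 23;                                              /* drop low bits (truncate) */

        uint32_t r = ((uint32_t)er << 23) | ((uint32_t)sig & 0x7FFFFF);
        float result;
        memcpy(&result, &r, 4);
        return result;
    }

    int main(void) {
        printf("%g\n", fp_mul_sketch(1024.0f, 0.0625f));  /* 64    */
        printf("%g\n", fp_mul_sketch(42.0f,   0.0625f));  /* 2.625 */
        return 0;
    }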

MIPS Floating Point

Floating point is handled by a coprocessor with 32 floating-point registers, separate from the 32 general-purpose registers, each 32 bits wide; an even-odd pair is used for double precision.

    add.d fd, fs, ft    # fd = fs + ft in double precision
    add.s fd, fs, ft    # fd = fs + ft in single precision
    sub.d, sub.s, mul.d, mul.s, div.d, div.s, abs.d, abs.s
    l.d fd, address     # load a double from address
    l.s, s.d, s.s

Plus conversion instructions, compare instructions, and branches (bc1t, bc1f).

Chapter Three/Four Summary

Computer arithmetic is constrained by limited precision. Bit patterns have no inherent meaning, but standards do exist: two's complement and IEEE 754 floating point. Computer instructions determine the meaning of the bit patterns. Performance and accuracy are important, so there are many complexities in real machines (i.e., algorithms and implementation). Numerical computing employs methods that differ from those of the math you learned in grade school.