Data Representation: Floating Point Numbers

Data Representation: Floating Point Numbers. CSci 2021 - Machine Architecture and Organization. Professor Pen-Chung Yew. With slides from Randy Bryant and Dave O'Hallaron.

Floating Point Numbers. Background: fractional binary numbers. IEEE floating point standard: definition. Example and properties. Rounding, addition, multiplication. Floating point in C. Summary.

Fractional binary numbers. What is 1011.101₂?

Fractional Binary Numbers. Representation: bits b_i b_{i−1} … b_2 b_1 b_0 . b_{−1} b_{−2} b_{−3} … b_{−j}, with weights 2^i, 2^{i−1}, …, 4, 2, 1 to the left of the binary point and 1/2, 1/4, 1/8, …, 2^{−j} to the right. Bits to the right of the binary point represent fractional powers of 2. The string represents the rational number Σ_{k=−j}^{i} b_k × 2^k.
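
As a concrete check of this positional formula, the short C sketch below (the helper name frac_bin_value is mine, not from the slides) evaluates a bit string such as "1011.101" by summing the weighted bits on each side of the binary point:

    #include <stdio.h>

    /* Evaluate a fractional binary string such as "1011.101".
     * (Illustrative helper; the name frac_bin_value is mine.) */
    static double frac_bin_value(const char *s) {
        double value = 0.0, weight = 1.0;
        int i, dot = 0;
        while (s[dot] && s[dot] != '.') dot++;        /* find the binary point */
        for (i = dot - 1; i >= 0; i--, weight *= 2.0) /* weights 1, 2, 4, ... to the left */
            if (s[i] == '1') value += weight;
        weight = 0.5;                                 /* weights 1/2, 1/4, 1/8, ... to the right */
        if (s[dot] == '.')
            for (i = dot + 1; s[i]; i++, weight /= 2.0)
                if (s[i] == '1') value += weight;
        return value;
    }

    int main(void) {
        printf("%f\n", frac_bin_value("1011.101"));   /* 8 + 2 + 1 + 1/2 + 1/8 = 11.625 */
        return 0;
    }

Running it prints 11.625, which answers the question above: 1011.101₂ = 8 + 2 + 1 + 1/2 + 1/8.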

Fractional Binary Numbers: Examples. Value / representation: 5 3/4 = 101.11₂; 2 7/8 = 010.111₂; 1 7/16 = 001.0111₂. Observations: divide by 2 by shifting right (unsigned); multiply by 2 by shifting left. Numbers of the form 0.111111…₂ are just below 1.0: 1/2 + 1/4 + 1/8 + … + 1/2^i + … → 1.0; we use the notation 1.0 − ε.

Representable Numbers. Limitation #1: can only exactly represent numbers of the form x/2^k; other rational numbers have repeating bit representations. Value / representation: 1/3 = 0.0101010101[01]…₂; 1/5 = 0.001100110011[0011]…₂; 1/10 = 0.0001100110011[0011]…₂. Limitation #2: just one setting of the binary point within the w bits, so only a limited range of numbers (very small values? very large?).
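
A quick way to see Limitation #1 in practice: 1/10 has a repeating binary expansion, so the stored double is only an approximation of 0.1. A minimal C check (standard library only):

    #include <stdio.h>

    int main(void) {
        /* 1/10 repeats in binary, so the stored double is only close to 0.1. */
        double d = 0.1;
        printf("%.20f\n", d);             /* prints 0.10000000000000000555... */
        printf("%d\n", 0.1 + 0.2 == 0.3); /* prints 0: the rounded sums differ */
        return 0;
    }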

Floating Point Numbers. Background: fractional binary numbers. IEEE floating point standard: definition. Example and properties. Rounding, addition, multiplication. Floating point in C. Summary.

IEEE Floating Point. IEEE Standard 754: established in 1985 as a uniform standard for floating point arithmetic; before that, many idiosyncratic formats. Supported by all major CPU vendors. Driven by numerical concerns: nice standards for rounding, overflow, and underflow; hard to make fast in hardware; numerical analysts predominated over hardware designers in defining the standard.

Announcement 2/3/2016. Prof. Yew's office hour on Friday 2/5/2016 has been extended by ½ hour, i.e., from 10am to 11:30am. Data Lab is due 11:55pm next Monday, 2/8/2016; check the Data Lab Forum for common Q&As, or post your problems there. Homework Assignment #1 was issued on Monday 2/1/2016; download it from the Moodle class web page. It is due before class on Wednesday 2/10/2016 (check the class schedule).

Review: Representable Numbers. Limitation #1: can only exactly represent numbers of the form x/2^k; other rational numbers have repeating bit representations. Value / representation: 1/3 = 0.0101010101[01]…₂; 1/5 = 0.001100110011[0011]…₂; 1/10 = 0.0001100110011[0011]…₂. Limitation #2: just one setting of the binary point within the w bits, so only a limited range of numbers (very small values? very large?).

Floating Point Representation. Numerical form: v = (−1)^s × M × 2^E. The sign bit s determines whether the number is negative or positive. The significand M is normally a fractional value in the range [1.0, 2.0). The exponent E weights the value by a power of two. Encoding: the most significant bit (MSB) is the sign bit s; the exp field encodes E (but is not equal to E); the frac field encodes M (but is not equal to M). Layout: s | exp | frac. Three different encoding schemes: (1) Normalized (exp ≠ 000…0 and exp ≠ 111…1), (2) Denormalized (exp = 000…0), (3) Special values, including Not-a-Number (NaN) (exp = 111…1).
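
The encoding can be inspected directly. The sketch below (the helper name show_fields is mine) copies a float's bits into a 32-bit integer and masks out the s, exp, and frac fields as laid out above:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Pull apart the s / exp / frac fields of a 32-bit float.
     * (Sketch; the helper name show_fields is mine, field layout as on the slide.) */
    static void show_fields(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);          /* reinterpret the float's bit pattern */
        unsigned s    = bits >> 31;              /* 1 sign bit   */
        unsigned exp  = (bits >> 23) & 0xFF;     /* 8 exp bits   */
        unsigned frac = bits & 0x7FFFFF;         /* 23 frac bits */
        printf("%f: s=%u exp=0x%02x frac=0x%06x\n", f, s, exp, frac);
    }

    int main(void) {
        show_fields(1.0f);     /* s=0, exp=0x7f (the bias, 127), frac=0        */
        show_fields(-0.75f);   /* s=1, exp=0x7e (E = -1),        frac=0x400000 */
        return 0;
    }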

Precision options. Single precision: 32 bits — s (1 bit) | exp (8 bits) | frac (23 bits). Double precision: 64 bits — s (1 bit) | exp (11 bits) | frac (52 bits). Extended precision: 80 bits (Intel only, internal to hardware) — s (1 bit) | exp (15 bits) | frac (63 or 64 bits).

(1) Normalized Values. When: exp ≠ 000…0 and exp ≠ 111…1. v = (−1)^s × M × 2^E. The exponent is coded as a biased value: Exp = E + Bias, where Exp is the unsigned value of the exp field and Bias = 2^(k−1) − 1, with k the number of exponent bits. Single precision: Bias = 127 (Exp: 1…254, E: −126…127). Double precision: Bias = 1023 (Exp: 1…2046, E: −1022…1023). The significand is coded with an implied leading 1: M = 1.xxx…x₂, where xxx…x are the bits of the frac field. Minimum when frac = 000…0 (M = 1.0); maximum when frac = 111…1 (M = 2.0 − ε). We get the extra leading bit for free.

Normalized Encoding Example. Value: float F = 15213.0; 15213₁₀ = 11101101101101₂ = 1.1101101101101₂ × 2^13. v = (−1)^s × M × 2^E, E = Exp − Bias. Significand: M = 1.1101101101101₂, so frac = 11011011011010000000000₂. Exponent: E = 13, Bias = 127, so Exp = 140 = 10001100₂. Result (s exp frac): 0 10001100 11011011011010000000000.
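
This result can be checked on a machine: assuming the usual IEEE single-precision layout, the bit pattern above regroups to the hex value 0x466db400.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        float F = 15213.0f;
        uint32_t bits;
        memcpy(&bits, &F, sizeof bits);
        /* 0 10001100 11011011011010000000000 regrouped in hex is 0x466db400 */
        printf("0x%08x\n", (unsigned)bits);
        return 0;
    }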

(2) Denormalized Values. Condition: exp = 000…0. v = (−1)^s × M × 2^E, with E = 1 − Bias (instead of E = 0 − Bias). The significand is coded with an implied leading 0: M = 0.xxx…x₂, where xxx…x are the bits of frac. This allows gradual underflow. Cases: exp = 000…0, frac = 000…0 represents the zero value; note the distinct values +0 and −0 (why?). exp = 000…0, frac ≠ 000…0 gives the numbers closest to 0.0, equi-spaced.

(3) Special Values. Condition: exp = 111…1. Case exp = 111…1, frac = 000…0: represents ∞ (infinity), the result of an operation that overflows; both positive and negative; e.g., 1.0/0.0 = −1.0/−0.0 = +∞, 1.0/−0.0 = −∞. Case exp = 111…1, frac ≠ 000…0: Not-a-Number (NaN), representing cases when no numeric value can be determined; e.g., sqrt(−1), ∞ − ∞, ∞ × 0.
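
These special values are easy to produce from C; a small sketch (assumes IEEE semantics, as on all mainstream hardware):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double pinf = 1.0 / 0.0;     /* overflowing operation gives +infinity */
        double ninf = 1.0 / -0.0;    /* -infinity */
        double nan1 = sqrt(-1.0);    /* NaN: no numeric value can be determined */
        double nan2 = pinf - pinf;   /* inf - inf is also NaN */
        printf("%g %g %g %g\n", pinf, ninf, nan1, nan2);   /* inf -inf nan nan (NaN sign may vary) */
        printf("isinf=%d isnan=%d\n", isinf(pinf) != 0, isnan(nan1) != 0);
        printf("%d\n", nan1 == nan1);   /* 0: a NaN compares unequal even to itself */
        return 0;
    }

The last line prints 0 because a NaN compares unequal to everything, including itself.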

Visualization: Floating Point Encodings. (Number line, left to right: NaN, −∞, −Normalized, −Denorm, −0 / +0, +Denorm, +Normalized, +∞, NaN.)

Floating Point Numbers. Background: fractional binary numbers. IEEE floating point standard: definition. Example and properties. Rounding, addition, multiplication. Floating point in C. Summary.

Tiny Floating Point Example. Layout: s (1 bit) | exp (4 bits) | frac (3 bits). 8-bit floating point representation: the sign bit is in the most significant bit; the next four bits are the exponent, with a bias of 7; the last three bits are the frac. Same general form as the IEEE format: normalized and denormalized values; representations of 0, NaN, and infinity.

Dynamic Range (Positive Only). v = (−1)^s × M × 2^E. Normalized: E = Exp − Bias = Exp − 7. Denormalized: E = 1 − Bias = 1 − 7 = −6.

                 s exp  frac    E   Value
    Denormalized 0 0000 000    -6   0
                 0 0000 001    -6   1/8 × 1/64 = 1/512   (closest to zero)
                 0 0000 010    -6   2/8 × 1/64 = 2/512
                 ...
                 0 0000 110    -6   6/8 × 1/64 = 6/512
                 0 0000 111    -6   7/8 × 1/64 = 7/512   (largest denorm)
    Normalized   0 0001 000    -6   8/8 × 1/64 = 8/512   (smallest norm)
                 0 0001 001    -6   9/8 × 1/64 = 9/512
                 ...
                 0 0110 110    -1   14/8 × 1/2 = 14/16
                 0 0110 111    -1   15/8 × 1/2 = 15/16   (closest to 1 below)
                 0 0111 000     0   8/8 × 1 = 1
                 0 0111 001     0   9/8 × 1 = 9/8        (closest to 1 above)
                 0 0111 010     0   10/8 × 1 = 10/8
                 ...
                 0 1110 110     7   14/8 × 128 = 224
                 0 1110 111     7   15/8 × 128 = 240     (largest norm)
                 0 1111 000   n/a   +inf
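
The table entries can be reproduced with a small decoder for this 8-bit format; the function name tiny_fp_value is mine, and the sketch mirrors the three cases (denormalized, normalized, special):

    #include <stdio.h>
    #include <math.h>

    /* Decode the slide's 8-bit format: 1 sign bit, 4 exp bits (bias 7), 3 frac bits.
     * (Sketch; the helper name tiny_fp_value is mine.) */
    static double tiny_fp_value(unsigned char b) {
        int s    = (b >> 7) & 1;
        int exp  = (b >> 3) & 0xF;
        int frac = b & 0x7;
        double sign = s ? -1.0 : 1.0;
        if (exp == 0xF)                                   /* exp = 1111: infinity or NaN */
            return frac ? NAN : sign * INFINITY;
        if (exp == 0)                                     /* denormalized: M = frac/8, E = 1 - 7 */
            return sign * ldexp(frac / 8.0, 1 - 7);
        return sign * ldexp(1.0 + frac / 8.0, exp - 7);   /* normalized: M = 1 + frac/8 */
    }

    int main(void) {
        printf("%g\n", tiny_fp_value(0x01));   /* 0 0000 001 -> 1/512, closest to zero */
        printf("%g\n", tiny_fp_value(0x08));   /* 0 0001 000 -> 8/512, smallest norm   */
        printf("%g\n", tiny_fp_value(0x38));   /* 0 0111 000 -> 1                      */
        printf("%g\n", tiny_fp_value(0x77));   /* 0 1110 111 -> 240, largest norm      */
        return 0;
    }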

Distribution of Values. 6-bit IEEE-like format: e = 3 exponent bits, f = 2 fraction bits, bias = 2^(3−1) − 1 = 3. Layout: s (1 bit) | exp (3 bits) | frac (2 bits). Notice how the distribution gets denser toward zero. (Number line from −15 to +15 marking denormalized, normalized, and infinite values.)

Distribution of Values (close-up view). 6-bit IEEE-like format: e = 3 exponent bits, f = 2 fraction bits, bias = 3. Layout: s (1 bit) | exp (3 bits) | frac (2 bits). (Close-up of the number line from −1 to +1 marking denormalized and normalized values.)

Special Properties of the IEEE Encoding. FP zero is the same as integer zero: all bits = 0. Can (almost) use unsigned integer comparison: must first compare sign bits; must consider −0 = +0; NaNs are problematic (they will be greater than any other value; what should a comparison yield?); otherwise OK: denorm vs. normalized, normalized vs. infinity. Negative is smaller than positive (not so in 2's complement).
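
The "almost unsigned comparison" property can be demonstrated for non-negative values; a sketch (the helper name bits_less is mine) that compares raw bit patterns:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Compare two non-negative floats by treating their bit patterns as unsigned
     * integers (possible because exp sits above frac in the encoding). Sketch only:
     * real code must handle the sign bit, -0 vs +0, and NaN separately. */
    static int bits_less(float a, float b) {
        uint32_t ua, ub;
        memcpy(&ua, &a, sizeof ua);
        memcpy(&ub, &b, sizeof ub);
        return ua < ub;
    }

    int main(void) {
        printf("%d\n", bits_less(1.5f, 2.0f));       /* 1: same order as the float compare  */
        printf("%d\n", bits_less(1.0e-40f, 1.0f));   /* 1: denorm vs. normalized also works */
        return 0;
    }

Handling the sign bit, −0 vs. +0, and NaN is exactly the extra work the slide lists.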

Floating Point Numbers. Background: fractional binary numbers. IEEE floating point standard: definition. Example and properties. Rounding, addition, multiplication. Floating point in C. Summary.

Floating Point Operations: Basic Idea. x +f y = Round(x + y); x ×f y = Round(x × y). Basic idea: first compute the exact result, then make it fit into the desired precision; possibly overflow if the exponent is too large; possibly round to fit into frac.

Rounding. Rounding modes (illustrated with dollar rounding):

                              $1.40   $1.60   $1.50   $2.50   -$1.50
    Towards zero               $1      $1      $1      $2      -$1
    Round down (toward -inf)   $1      $1      $1      $2      -$2
    Round up (toward +inf)     $2      $2      $2      $3      -$1
    Nearest even (default)     $1      $2      $2      $2      -$2

Closer Look at Round-To-Even. Default rounding mode: round to even. Need to use assembly programming to get other rounding modes. All others are statistically biased: the sum of a set of positive numbers will consistently be over- or under-estimated. Applying to other decimal places / bit positions: when exactly halfway between two possible values, round so that the least significant digit is even. E.g., round to nearest hundredth: 7.8949999 → 7.89 (less than half way); 7.8950001 → 7.90 (greater than half way); 7.8950000 → 7.90 (half way, round up); 7.8850000 → 7.88 (half way, round down).
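
For completeness, C99's <fenv.h> also exposes the hardware rounding modes without dropping to assembly; a sketch (assumes IEEE hardware and a libm whose rint() honors the current mode):

    #include <stdio.h>
    #include <fenv.h>
    #include <math.h>

    /* Round-to-nearest-even is the default; <fenv.h> can select the other modes. */
    int main(void) {
        double x = 2.5;
        fesetround(FE_TONEAREST);  printf("%g\n", rint(x));   /* 2: halfway rounds to even */
        fesetround(FE_UPWARD);     printf("%g\n", rint(x));   /* 3 */
        fesetround(FE_DOWNWARD);   printf("%g\n", rint(x));   /* 2 */
        fesetround(FE_TOWARDZERO); printf("%g\n", rint(x));   /* 2 */
        return 0;
    }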

Rounding Binary Numbers. Binary fractional numbers: "even" when the least significant bit is 0; "half way" when the bits to the right of the rounding position are 100…₂. Examples, rounding to the nearest 1/4 (2 bits right of the binary point):

    Value    Binary      Rounded   Action         Rounded value
    2 3/32   10.00011₂   10.00₂    < 1/2, down    2
    2 3/16   10.00110₂   10.01₂    > 1/2, up      2 1/4
    2 7/8    10.11100₂   11.00₂    = 1/2, up      3
    2 5/8    10.10100₂   10.10₂    = 1/2, down    2 1/2

FP Multiplication. (−1)^s1 × M1 × 2^E1 times (−1)^s2 × M2 × 2^E2. Exact result: (−1)^s × M × 2^E, with sign s = s1 ^ s2, significand M = M1 × M2, exponent E = E1 + E2. Fixing: if M ≥ 2, shift M right and increment E; if E is out of range, overflow; round M to fit the frac precision. Implementation: the biggest chore is multiplying the significands.

Floating Point Addition. (−1)^s1 × M1 × 2^E1 + (−1)^s2 × M2 × 2^E2, assuming E1 > E2. Exact result: (−1)^s × M × 2^E. Sign s and significand M: the result of a signed align-and-add, with the binary points lined up by shifting M2 right by E1 − E2 positions. Exponent E: E1 (the larger of E1 and E2). Fixing: if M ≥ 2, shift M right and increment E; if M < 1, shift M left k positions and decrement E by k; overflow if E is out of range; round M to fit the frac precision.


Review: Floating Point Encodings. (Number line, left to right: NaN, −∞, −Normalized, −Denorm, −0 / +0, +Denorm, +Normalized, +∞, NaN.)

Review: Distribution of Values. 6-bit IEEE-like format: e = 3 exponent bits, f = 2 fraction bits, bias = 2^(3−1) − 1 = 3. Layout: s (1 bit) | exp (3 bits) | frac (2 bits). Notice how the distribution gets denser toward zero. (Number line from −15 to +15 marking denormalized, normalized, and infinite values.)

Review: A Close-up View. 6-bit IEEE-like format: e = 3 exponent bits, f = 2 fraction bits, bias = 3. Layout: s (1 bit) | exp (3 bits) | frac (2 bits). (Close-up of the number line from −1 to +1 marking denormalized and normalized values.)

Review: Dynamic Range (Positive Only). v = (−1)^s × M × 2^E. Normalized: E = Exp − Bias = Exp − 7. Denormalized: E = 1 − Bias = 1 − 7 = −6.

                 s exp  frac    E   Value
    Denormalized 0 0000 000    -6   0
                 0 0000 001    -6   1/8 × 1/64 = 1/512   (closest to zero)
                 0 0000 010    -6   2/8 × 1/64 = 2/512
                 ...
                 0 0000 110    -6   6/8 × 1/64 = 6/512
                 0 0000 111    -6   7/8 × 1/64 = 7/512   (largest denorm)
    Normalized   0 0001 000    -6   8/8 × 1/64 = 8/512   (smallest norm)
                 0 0001 001    -6   9/8 × 1/64 = 9/512
                 ...
                 0 0110 110    -1   14/8 × 1/2 = 14/16
                 0 0110 111    -1   15/8 × 1/2 = 15/16   (closest to 1 below)
                 0 0111 000     0   8/8 × 1 = 1
                 0 0111 001     0   9/8 × 1 = 9/8        (closest to 1 above)
                 0 0111 010     0   10/8 × 1 = 10/8
                 ...
                 0 1110 110     7   14/8 × 128 = 224
                 0 1110 111     7   15/8 × 128 = 240     (largest norm)
                 0 1111 000   n/a   +inf

Review: Floating Point Addition. (−1)^s1 × M1 × 2^E1 + (−1)^s2 × M2 × 2^E2, assuming E1 > E2. Exact result: (−1)^s × M × 2^E. Sign s and significand M: the result of a signed align-and-add, with the binary points lined up by shifting M2 right by E1 − E2 positions. Exponent E: E1 (the larger of E1 and E2). Fixing: if M ≥ 2, shift M right and increment E; if M < 1, shift M left k positions and decrement E by k; overflow if E is out of range; round M to fit the frac precision.

Mathematical Properties of FP Add. Commutative? Yes. Associative? No, because of overflow and the inexactness of rounding: (3.14 + 1e10) - 1e10 = 0, but 3.14 + (1e10 - 1e10) = 3.14. Is 0 an additive identity? Yes. Does every element have an additive inverse? Almost: yes, except for infinities and NaNs. Monotonicity (a ≥ b implies a + c ≥ b + c)? Almost: holds except for infinities and NaNs.
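
The associativity counterexample can be run directly; in single precision the slide's numbers come out exactly as stated, since 3.14 is smaller than half an ulp of 1e10 in float:

    #include <stdio.h>

    int main(void) {
        /* The slide's counterexample, in single precision: 3.14 is absorbed
         * by the first addition because it is below half an ulp of 1e10. */
        float a = (3.14f + 1e10f) - 1e10f;   /* 0    */
        float b = 3.14f + (1e10f - 1e10f);   /* 3.14 */
        printf("%g %g\n", a, b);
        return 0;
    }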

Mathematical Properties of FP Mult. Multiplication commutative? Yes. Multiplication associative? No, because of the possibility of overflow and the inexactness of rounding: (1e20 * 1e20) * 1e-20 = inf, but 1e20 * (1e20 * 1e-20) = 1e20. Is 1 a multiplicative identity? Yes. Does multiplication distribute over addition? No, again because of overflow and rounding: 1e20 * (1e20 - 1e20) = 0.0, but 1e20 * 1e20 - 1e20 * 1e20 = NaN. Monotonicity (a ≥ b and c ≥ 0 imply a * c ≥ b * c)? Almost: holds except for infinities and NaNs.
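
Likewise for multiplication; in single precision 1e20 × 1e20 overflows to infinity, so both counterexamples behave as the slide describes:

    #include <stdio.h>

    int main(void) {
        /* The slide's counterexamples, in single precision (1e20 * 1e20 overflows a float). */
        float assoc1 = (1e20f * 1e20f) * 1e-20f;       /* inf: the inner product overflows */
        float assoc2 = 1e20f * (1e20f * 1e-20f);       /* 1e20                             */
        float dist1  = 1e20f * (1e20f - 1e20f);        /* 0                                */
        float dist2  = 1e20f * 1e20f - 1e20f * 1e20f;  /* inf - inf = NaN                  */
        printf("%g %g %g %g\n", assoc1, assoc2, dist1, dist2);
        return 0;
    }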

Floating Point Numbers. Background: fractional binary numbers. IEEE floating point standard: definition. Example and properties. Rounding, addition, multiplication. Floating point in C. Summary.

Floating Point in C. C guarantees two levels: float (single precision) and double (double precision). Conversions/casting: casting between int, float, and double changes the bit representation. double/float → int: truncates the fractional part, like rounding toward zero; not defined when out of range or NaN (generally sets to TMin). int → double: exact conversion, as long as int has at most a 53-bit word size. int → float: will round according to the rounding mode.
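
A few of these conversion rules, demonstrated in C (the 16777217 example shows int → float rounding, since a float significand holds only 24 bits):

    #include <stdio.h>

    int main(void) {
        double d = -2.9;
        int i = (int)d;              /* truncates toward zero: -2, not -3 */
        printf("%d\n", i);

        int big = (1 << 24) + 1;     /* 16777217 needs 25 significand bits */
        float f = (float)big;        /* rounds: a float significand holds only 24 bits */
        printf("%.1f\n", f);         /* 16777216.0 */

        double back = (double)big;   /* exact: a double's 53-bit significand is enough */
        printf("%.1f\n", back);      /* 16777217.0 */
        return 0;
    }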

Summary. IEEE floating point has clear mathematical properties. It represents numbers of the form M × 2^E. One can reason about operations independent of the implementation, as if they were computed with perfect precision and then rounded. It is not the same as real arithmetic: it violates associativity/distributivity, which makes life difficult for compilers and serious numerical applications programmers.

Interesting Numbers ({single, double}).

    Description               exp      frac     Numeric value
    Zero                      00...00  00...00  0.0
    Smallest pos. denorm.     00...00  00...01  2^−{23,52} × 2^−{126,1022}   (single ≈ 1.4 × 10^−45, double ≈ 4.9 × 10^−324)
    Largest denormalized      00...00  11...11  (1.0 − ε) × 2^−{126,1022}    (single ≈ 1.18 × 10^−38, double ≈ 2.2 × 10^−308)
    Smallest pos. normalized  00...01  00...00  1.0 × 2^−{126,1022}          (just larger than the largest denormalized)
    One                       01...11  00...00  1.0
    Largest normalized        11...10  11...11  (2.0 − ε) × 2^{127,1023}     (single ≈ 3.4 × 10^38, double ≈ 1.8 × 10^308)
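
Most of these interesting values are available from <float.h>; a sketch that prints them for comparison with the table (the smallest positive denormal is obtained with nextafterf, since FLT_TRUE_MIN is only guaranteed from C11 on):

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void) {
        printf("smallest pos. denorm (float):  %g\n", nextafterf(0.0f, 1.0f));  /* ~1.4e-45  */
        printf("smallest pos. norm   (float):  %g\n", FLT_MIN);                 /* ~1.18e-38 */
        printf("largest norm         (float):  %g\n", FLT_MAX);                 /* ~3.4e38   */
        printf("smallest pos. norm   (double): %g\n", DBL_MIN);                 /* ~2.2e-308 */
        printf("largest norm         (double): %g\n", DBL_MAX);                 /* ~1.8e308  */
        return 0;
    }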

Machine-Level Data Representation (done) to Machine-Level Program Representation (next).