AM702 Applied Computational Methods
Lecture 03: Approximations, Errors and Their Analysis
Approximations and Errors

Approximation is unavoidable in mathematical modeling of real-world phenomena. Approximation leads to errors. Estimating the errors in a computation is necessary for the reliability of the computed results.
Accuracy and Precision

Accuracy: how closely the measured or computed value matches the true value.
Precision: how closely the computed or measured values agree with each other in repeated computation or measurement.

Consider Fig. 4.1 from Chapra, as shown here: (a) inaccurate and imprecise, (b) accurate and imprecise, (c) inaccurate and precise, (d) accurate and precise.
Errors in Computations

Error: the deviation from an expected or true value. The true value may not always be known!

E.g. the area of a rectangle: A = l × b = 12 × 10 = 120 cm². But with the measured dimensions, l_m × b_m = 11.9 × 10.1, the computed value is A_c = 120.19 cm².

The error E = |A − A_c| = 0.19 cm² is known as the Absolute Error. When it is compared with the true value, it is known as the Relative Error, E_r = |A − A_c| / A = 0.19/120. Expressed as a percentage, E_r = |A − A_c| / A × 100 = 0.19/120 × 100 ≈ 0.16%.
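The rectangle example above can be checked with a short script (Python here rather than MATLAB; the helper names absolute_error and relative_error are hypothetical, chosen for illustration):

```python
def absolute_error(true_value, computed_value):
    """Absolute error: magnitude of the deviation from the true value."""
    return abs(true_value - computed_value)

def relative_error(true_value, computed_value):
    """Relative error: absolute error normalized by the true value."""
    return absolute_error(true_value, computed_value) / abs(true_value)

A_true = 12 * 10          # 120 cm^2, from the exact dimensions
A_comp = 11.9 * 10.1      # 120.19 cm^2, from the measured dimensions

E = absolute_error(A_true, A_comp)          # ~0.19 cm^2
E_r = relative_error(A_true, A_comp) * 100  # ~0.158 %
```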
Floating Points and Machine Accuracy

Floating Point Representation: these numbers are represented by a sign bit s, an exact integer exponent e, and an exact positive integer mantissa M (with a base B and a bias).

Machine Accuracy, ε_m: the smallest floating point number which, when added to the floating point number 1.0, produces a floating point result different from 1.0. A typical 32-bit computer with base 2 has ε_m ≈ 3 × 10^-8.
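The definition of ε_m suggests a direct experiment: keep halving a candidate until adding it to 1.0 no longer changes the result. A minimal sketch in Python (whose floats are 64-bit doubles, so the answer is the double-precision value, not the 32-bit figure quoted above):

```python
def machine_eps():
    """Smallest power of two that, added to 1.0, still changes the result."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:  # halving once more would be invisible
        eps /= 2.0
    return eps

# For IEEE 754 double precision this yields 2**-52 ≈ 2.22e-16.
```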
Machine Representation

Representing real numbers on digital computers: computers have a finite amount of memory. Consider representing π = 3.141592653589793…, or the repeating fraction 1/3 = 0.333333333333333… Any such number x ∈ R needs infinitely many digits for an exact representation:

x = ±(1.d1d2d3d4…)_2 × 2^e

where the d_i are binary digits with values 0 or 1 and e is an integer exponent. The mantissa can be expanded as

1.d1d2d3d4… = 1 + d1/2 + d2/2^2 + d3/2^3 + d4/2^4 + …

E.g. the binary number x = (1.11010…)_2 × 2^1 = (1 + 1/2 + 1/4 + 0 + 1/16 + …) × 2 = 3.625.
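The mantissa expansion can be evaluated mechanically; a small sketch in Python (the function name mantissa_value is hypothetical):

```python
def mantissa_value(bits):
    """Value of the binary mantissa 1.d1 d2 d3 ... = 1 + sum(d_i / 2**i)."""
    return 1.0 + sum(int(d) / 2.0 ** (i + 1) for i, d in enumerate(bits))

# The example from the text: (1.11010)_2 * 2^1
x = mantissa_value("11010") * 2 ** 1   # 1.8125 * 2 = 3.625
```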
Floating Point Representation

Floating point representation requires finite digits. Corresponding to a real number x, its digital counterpart fl(x) keeps some k bits:

fl(x) = ±(1.d1d2d3d4…dk)_2 × 2^e

Storing this in memory requires k bits, so fl(x) is an approximation to x, which would need infinitely many bits. It is interesting to know the accuracy of such a representation. Equivalently, we wish to determine the relative error |fl(x) − x| / |x|. The standard gives confidence by bounding this error by η = (1/2) × 2^-k, where k is the number of digits (bits) used in the representation.
Floating Point and Machine Precision

The standard bounds this error by η = (1/2) × 2^-k, where k is the number of digits (bits) used in the representation. This η is known as the machine precision or rounding unit.

The floating point word in common double precision systems such as MATLAB has, by default, 64 bits. Of these, 52 bits are used for the mantissa (or fraction), while the rest store the sign and the exponent. Therefore the rounding unit for MATLAB is η = 2^-53 ≈ 1.1 × 10^-16 (here k = 52). Note that MATLAB's eps = 2^-52 ≈ 2.2204 × 10^-16 is the spacing between 1.0 and the next larger representable number, i.e. eps = 2η; check eps in MATLAB.
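The distinction between eps and the rounding unit η can be seen directly; a sketch in Python (also IEEE 754 doubles), relying on round-to-nearest-even:

```python
import sys

eps = sys.float_info.epsilon   # 2**-52: gap between 1.0 and the next double
eta = eps / 2.0                # 2**-53: the rounding unit

print(1.0 + eta == 1.0)   # True: 1 + eta rounds back down to 1.0
print(1.0 + eps == 1.0)   # False: 1 + eps is exactly representable
```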
Floating Point Ranges

The exponent range is −1022 to 1023 (stored in 11 bits with a bias, not as a signed integer). The largest possible number MATLAB can store is

+1.111111…111 × 2^1023 = (2 − 2^-52) × 2^1023

This yields approximately 2^1024 ≈ 1.7977 × 10^308. The smallest possible normalized number MATLAB can store with full precision is

+1.00000…00000 × 2^-1022

This yields 2^-1022 ≈ 2.2251 × 10^-308.
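These limits can be reproduced numerically; a sketch in Python, whose floats are the same IEEE 754 doubles:

```python
import sys

realmax = (2.0 - 2.0 ** -52) * 2.0 ** 1023  # largest finite double
realmin = 2.0 ** -1022                      # smallest normalized double

print(realmax)   # 1.7976931348623157e+308
print(realmin)   # 2.2250738585072014e-308
print(realmax == sys.float_info.max and realmin == sys.float_info.min)  # True
```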
Real vs Floating Point Numbers

Comparing floating point numbers to real numbers:

Property   | Real numbers | Floating point numbers
Range      | Infinite     | Finite
Precision  | Infinite     | Finite
Existence  | Real         | Subset of the real numbers
Roundoff Errors

Finite precision causes roundoff errors in numerical computation. Roundoff errors accumulate slowly. Subtracting nearly equal numbers leads to a severe loss of precision. A similar loss of precision occurs when two numbers of widely different magnitudes are added. Roundoff errors are unavoidable, but good algorithms can minimize their effect.
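Both failure modes can be demonstrated with the quadratic formula; a sketch in Python (the example equation x^2 − 10^8 x + 1 = 0 is hypothetical, chosen so the roots differ hugely in magnitude):

```python
import math

# x^2 - b*x + c = 0 with roots near 1e8 and 1e-8.
b, c = 1.0e8, 1.0
disc = math.sqrt(b * b - 4.0 * c)   # nearly equal to b

# Naive formula subtracts two nearly equal numbers: severe cancellation.
small_naive = (b - disc) / 2.0        # ~7.45e-9, about 25% off

# Rationalized form avoids the subtraction entirely.
small_stable = 2.0 * c / (b + disc)   # ~1.0e-8, correct to machine precision

# Adding numbers of widely different magnitude also loses information:
print((1.0e16 + 1.0) - 1.0e16)        # 0.0, not 1.0
```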
Truncation Errors

Truncation errors result from using an approximation in place of an exact mathematical procedure, e.g. keeping only finitely many terms of an infinite Taylor series.
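As an illustration (a Python sketch, not from the slides; exp_taylor is a hypothetical helper), truncating the Taylor series of e^x after n terms leaves a truncation error that shrinks as n grows:

```python
import math

def exp_taylor(x, n):
    """Approximate e**x by the first n terms of its Taylor series."""
    return sum(x ** k / math.factorial(k) for k in range(n))

# The omitted tail of the series is the truncation error.
err4 = abs(exp_taylor(1.0, 4) - math.e)   # ~5.2e-2 with 4 terms
err8 = abs(exp_taylor(1.0, 8) - math.e)   # ~2.8e-5 with 8 terms
```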
Total Numerical Error

The total numerical error is the sum of the truncation and roundoff errors. The truncation error generally increases as the step size increases, while the roundoff error decreases as the step size increases; this leads to a point of diminishing returns when shrinking the step size.
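The trade-off can be observed with a forward-difference derivative; a sketch in Python (the test function sin and the particular step sizes are hypothetical choices for illustration):

```python
import math

def fd_error(h, x=1.0):
    """Error of the forward difference (sin(x+h) - sin(x))/h vs. cos(x)."""
    approx = (math.sin(x + h) - math.sin(x)) / h
    return abs(approx - math.cos(x))

# Truncation error dominates for large h, roundoff error for tiny h;
# the total error is smallest at an intermediate step size.
for h in (1e-1, 1e-5, 1e-8, 1e-12, 1e-15):
    print(h, fd_error(h))
```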