Computational Economics and Finance
Part I: Elementary Concepts of Numerical Analysis
Spring 2016

Outline

- Computer arithmetic
- Error analysis: Sources of error; Error propagation; Controlling the error
- Rates of convergence
- Compute and verify
Computer Arithmetic

Unlike pure mathematics, computer arithmetic has finite precision and is limited by time and space.

Real numbers are represented as floating-point numbers of the form

    ±d_0.d_1 d_2 ... d_{p-1} × β^e

- d_0.d_1 d_2 ... d_{p-1} is called the significand (old: mantissa); it has p digits d_j ∈ {0, 1, ..., β−1}.
- β is called the base.
- e ∈ {e_min, e_min+1, ..., e_max} is the exponent.

Example

Consider the decimal number 0.1.
- If β = 10 and p = 3, then 1.00 × 10^−1 is exact.
- If β = 2 and p = 24, then 1.10011001100110011001101 × 2^−4 is not exact. In fact, with β = 2 the number 0.1 lies strictly between two floating-point numbers and is not exactly representable by either of them.
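The 0.1 example is easy to reproduce. A minimal sketch in Python (the slides use Matlab and Mathematica, but any IEEE 754 double-precision environment behaves the same):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(0.1) prints the exact value of the binary64 double nearest to 0.1.
stored = Decimal(0.1)
print(stored)  # 0.1000000000000000055511151231257827021181583404541015625

# The stored double is not equal to the rational number 1/10.
assert Fraction(0.1) != Fraction(1, 10)
```

The long decimal tail is the exact value of the nearest binary64 number; the decimal 0.1 itself has no finite binary expansion.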
Double Precision

- Most widely-used standard for floating-point computation: IEEE Standard for Floating-Point Arithmetic (IEEE 754)
- IEEE: Institute of Electrical and Electronics Engineers
- Followed by many hardware (central processing unit, CPU, and floating-point unit, FPU) and software implementations
- Current version is IEEE 754-2008, published in August 2008
- Perhaps most widely used: IEEE 754 double-precision binary floating-point format: binary64

binary64

- Base β = 2; exponent and significand written in binary form
- Total of 64 bits: 1 bit for the +/− sign, 11 bits for the exponent, 52 bits for the significand
- Normalized such that the most significant bit d_0 = 1 for all numbers
- Exponent is biased by 1023
- The number (−1)^sign × (1.d_1 d_2 ... d_52)_2 × 2^(e−1023) has the value

    (−1)^sign × (1 + Σ_{i=1}^{52} d_i 2^(−i)) × 2^(e−1023)
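The 64-bit layout can be inspected directly. A small sketch in Python (the helper name decode_binary64 is mine, not part of any standard library):

```python
import struct

def decode_binary64(x):
    """Split a double into sign bit, 11-bit biased exponent, 52 fraction bits."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    return sign, exponent, fraction

# 1.0 = (-1)^0 * (1.00...0)_2 * 2^(1023-1023): the stored exponent equals the bias.
print(decode_binary64(1.0))  # (0, 1023, 0)

# Reconstruct a normal number from its parts, per the formula on the slide.
s, e, f = decode_binary64(0.1)
assert (-1)**s * (1 + f / 2**52) * 2.0**(e - 1023) == 0.1
```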
Machine Epsilon

- Smallest quantity ε such that 1 − ε and 1 + ε are both different from one; smallest possible difference in the significand between two numbers
- Double precision has at most 16 decimal digits of accuracy: 2^−52 ≈ 2.220446049 × 10^−16
- Matlab: eps = 2.2204e−016
- Mathematica: $MachineEpsilon = 2.22045 × 10^−16

Machine Infinity

- Largest quantity that can be represented; an overflow error occurs if an operation produces a larger quantity.
- Double precision has maximal exponent 2^0 + 2^1 + 2^2 + ... + 2^10 − 1023 = 2047 − 1023 = 1024; 2^1024 ≈ 1.797693135 × 10^308
- Bias in representation: 1023
- Largest number: (2 − eps) × 2^1023 ≈ 1.797693135 × 10^308
- Matlab: realmax = 1.7977e+308
- Mathematica: $MaxMachineNumber = 1.79769 × 10^308
Machine Zero

- Any quantity that cannot be distinguished from zero. An underflow error occurs if an operation on nonzero quantities produces a smaller quantity.
- Double precision has smallest exponent −1023: 2^−1023 ≈ 1.112536929 × 10^−308. By convention this number represents 0, since normalization requires d_0 = 1.
- Smallest positive normalized number: 2^−1022 ≈ 2.225073859 × 10^−308
- Matlab: realmin = 2.2251e−308
- Mathematica: $MinMachineNumber = 2.22507 × 10^−308

Extended Precision

- Often desirable and occasionally necessary to increase precision
- Some software packages can produce arbitrary-precision arithmetic
- Mathematica: $MinNumber = 1.887662394852453 × 10^−323228468, $MaxNumber = 5.297557459040040 × 10^323228467
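The machine constants above are exposed in most environments. A sketch in Python, mirroring Matlab's eps, realmax, and realmin:

```python
import math
import sys

eps = sys.float_info.epsilon   # 2^-52, Matlab's eps
assert eps == 2.0**-52

realmax = sys.float_info.max   # (2 - eps) * 2^1023, Matlab's realmax
assert realmax == (2.0 - eps) * 2.0**1023

realmin = sys.float_info.min   # 2^-1022, smallest positive normalized double
assert realmin == 2.0**-1022

# Overflow beyond realmax produces infinity; adding far less than eps
# to 1 is lost to rounding.
assert math.isinf(realmax * 2.0)
assert 1.0 + eps / 4.0 == 1.0
```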
Computer Arithmetic

A computer can only execute the basic arithmetic operations of addition, subtraction, multiplication, and division. Everything else is approximated.

Relative speeds, old values (Exercise 2.7):

    operation        speed relative to addition
    subtraction      1.03
    multiplication   1.03
    division         1.06
    exponentiation   5.09
    sine function    4.20

Computer Arithmetic

Efficient evaluation of polynomials (Horner's method):

    a_0 + a_1 x + a_2 x^2 + a_3 x^3 = a_0 + x(a_1 + x(a_2 + x a_3))

Efficient computation of derivatives (automatic differentiation): Consider f(x, y, z) = (x^α + y^α + z^α)^β. Then

    ∂f/∂x = (x^α + y^α + z^α)^(β−1) β α x^(α−1) = [f(x, y, z) / (x^α + y^α + z^α)] × β α x^α / x
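Horner's method is a one-line loop. A minimal sketch in Python (the function name horner is mine):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n with coeffs = [a_0, ..., a_n].
    Uses n multiplications and n additions; no explicit powers of x."""
    result = 0.0
    for a in reversed(coeffs):
        result = a + x * result
    return result

# a_0 + x*(a_1 + x*(a_2 + x*a_3)) with a = (1, 2, 3, 4) at x = 2:
print(horner([1, 2, 3, 4], 2.0))  # 49.0  (= 1 + 4 + 12 + 32)
```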
Error Analysis: Sources of Error

- Model error: an economic model is only an approximation of a real phenomenon.
- Data error: parameters of the model have to be estimated, forecasted, simulated, or approximated; data may be missing; available data may not reflect the true but unknown process well.
- Numerical errors: solving a model on a computer typically yields an approximation of the solution; such approximations are the essence of numerical analysis and involve two types of numerical errors, round-off errors and truncation errors.

    reality → model → numerical solution

Numerical Analysis: Sources of Error

Numbers are represented by a finite number of bits. Real numbers with a significand longer than the number of bits available have to be shortened. Examples: irrational numbers, finite numbers that are too long, finite numbers in decimal form that have no finite exact representation in binary form.

Round-off error: chopping off extra digits or rounding, e.g. 2/3 stored as 0.66666 or as 0.66667.
Round-off Errors

If β = 2 and p = 24, then the binary floating-point representation of the decimal number 0.1, 1.10011001100110011001101 × 2^−4 (in single precision), is not exact.

Round-off errors are likely to occur when the numbers involved in calculations differ significantly in magnitude, or when two nearly identical numbers are subtracted from each other.

Example in Matlab

Solve the quadratic equation x^2 − 100.0001x + 0.01 = 0 using the quadratic formula

    x = (−b ± sqrt(b^2 − 4ac)) / (2a)

Exact solutions: x_1 = 100 and x_2 = 0.0001
Round-off Errors in Matlab

format long;
a = 1; b = -100.0001; c = 0.01;
RootDis = sqrt(b^2 - 4*a*c)
RootDis = 99.999899999999997
x1 = (-b + RootDis)/(2*a)
x1 = 100
x2 = (-b - RootDis)/(2*a)
x2 = 1.000000000033197e-004

Truncation Errors

Truncation errors occur when a numerical method for solving a mathematical problem uses an approximate mathematical procedure.

Example: The infinite sum e^x = Σ_{n=0}^{∞} x^n/n! becomes Σ_{n=0}^{N} x^n/n! for some finite N.

Truncation error is independent of round-off error and occurs even when the mathematical operations are exact.
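Returning to the quadratic example: the damaged root x2 comes from subtracting the nearly equal quantities −b and RootDis. A standard remedy is to compute the larger-magnitude root with the formula and recover the smaller one from the product of roots x1·x2 = c/a. A sketch in Python (the slides use Matlab; the double-precision arithmetic is identical):

```python
import math

# x^2 - 100.0001x + 0.01 = 0 with exact roots 100 and 0.0001
a, b, c = 1.0, -100.0001, 0.01
root_dis = math.sqrt(b * b - 4.0 * a * c)

# Naive formula: -b and root_dis nearly cancel for the small root.
x2_naive = (-b - root_dis) / (2.0 * a)

# Stable variant: no cancellation in x1, then x2 = c / (a * x1).
x1 = (-b + math.copysign(root_dis, -b)) / (2.0 * a)
x2 = c / (a * x1)

print(x1)        # 100.0
print(x2_naive)  # ~1.000000000033e-04: about five digits lost
print(x2)        # accurate to machine precision
```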
Error Analysis: Error Propagation

Catastrophic cancellation occurs when subtracting rounded quantities. Benign cancellation occurs when subtracting exact quantities.

Example: The area of a triangle with sides a, b, and c is

    A = sqrt(s(s − a)(s − b)(s − c)),  where s = (a + b + c)/2   (Heron of Alexandria).

Suppose a = 9 and b = c = 4.53. The correct answer is s = 9.03 and A = 2.342... If β = 10 and p = 3, then s = 9.05 and A = 3.04. Rewriting as

    A = sqrt((a + (b + c))(c − (a − b))(c + (a − b))(a + (b − c))) / 4,  a ≥ b ≥ c,

yields A = 2.35.

Error Analysis: Controlling Rounding Error

Rules of thumb:
- Avoid unnecessary subtractions of numbers of similar magnitude.
- First add the smaller numbers and then add the result to the larger numbers.
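The two forms of Heron's formula can be compared directly. A sketch in Python (the function names are mine); in full double precision both are fine for this triangle, while the slide's 3-digit arithmetic magnifies the cancellation in s − a:

```python
import math

def area_heron(a, b, c):
    """Classical Heron: A = sqrt(s(s-a)(s-b)(s-c)), s = (a+b+c)/2."""
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_stable(a, b, c):
    """Cancellation-avoiding rewrite; requires a >= b >= c, so sort first."""
    a, b, c = sorted((a, b, c), reverse=True)
    return math.sqrt((a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))) / 4.0

A1 = area_heron(9.0, 4.53, 4.53)
A2 = area_stable(9.0, 4.53, 4.53)
print(A1, A2)   # both ~2.342162 in double precision
```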
Example: Rounding Error

Exercise 2.3: Consider the system of linear equations

    64919121x − 159018721y = 1
    41869520.5x − 102558961y = 0

Exact solution: x = 205117922 and y = 83739041

However, double-precision arithmetic yields x = 1.02559e+008 and y = 4.18695e+007 due to catastrophic cancellation:

    x = −102558961 / (64919121(−102558961) − 41869520.5(−159018721))
      = −102558961 / (−6658037598793281 + 6658037598793280.5)

Matlab: 102558961
Mathematica: 1.02559 × 10^8

Solving the Equations in Matlab

A = [64919121 -159018721; 41869520.5 -102558961];
b = [1; 0];
A \ b
ans =
   1.0e+008 *
   1.0602
   0.4328
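Exercise 2.3 can be replayed with exact rational arithmetic to see exactly where double precision goes wrong. A sketch in Python using fractions (variable names are mine):

```python
from fractions import Fraction as F

# 64919121x - 159018721y = 1
# 41869520.5x - 102558961y = 0
a11, a12, b1 = F(64919121), F(-159018721), F(1)
a21, a22, b2 = F("41869520.5"), F(-102558961), F(0)

det = a11 * a22 - a12 * a21          # exactly -1/2
x = (b1 * a22 - b2 * a12) / det
y = (a11 * b2 - a21 * b1) / det
print(det, x, y)                     # -1/2 205117922 83739041

# In doubles, one product near 6.658e15 is exactly representable and the
# other (ending in .5) rounds to the adjacent integer, so the determinant
# comes out as -1 instead of -1/2:
det_f = 64919121.0 * (-102558961.0) - (-159018721.0) * 41869520.5
print(det_f)                         # -1.0
print(-102558961.0 / det_f)          # 102558961.0, the wrong x from the slide
```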
Solving the Equations in Mathematica

Clear[x, y];
Solve[{64919121 x - 159018721 y == 1, 41869520.5 x - 102558961 y == 0}, {x, y}]
{}

According to Mathematica the system has no solution.

Clear[x, y];
Solve[{64919121 x - 159018721 y == 1, 83739041/2 x - 102558961 y == 0}, {x, y}]
{{x -> 205117922, y -> 83739041}}

Now Mathematica finds the exact solution correctly.

Error Analysis: Controlling Rounding Error

Exercise 2.5: Compute

    83521y^8 + 578x^2y^4 − 2x^4 + 2x^6 − x^8

for x = 9478657 and y = 2298912.

Exact answer: −179689877047297

However, double-precision arithmetic yields −1.0889e+040 (depending on ordering). Individual terms:

    83521y^8       =  6.5159e+055
    578x^2y^4      =  1.45048e+042
    2x^4           =  1.61442e+028
    2x^6           =  1.45048e+042
    x^8            =  6.5159e+055
    83521y^8 − x^8 = −2.9074e+042
Exercise 2.5 in Matlab

x = 9478657; y = 2298912;
83521*y^8 + 578*x^2*y^4 - 2*x^4 + 2*x^6 - x^8
ans = -1.0889e+040

Exercise 2.5 in Mathematica

x = 9478657; y = 2298912;
83521 y^8 + 578 x^2 y^4 - 2 x^4 + 2 x^6 - x^8
-179689877047297

Mathematica finds the correct solution.

x = 9478657.; y = 2298912;
83521 y^8 + 578 x^2 y^4 - 2 x^4 + 2 x^6 - x^8
0

Mathematica states that the solution is zero!
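The same contrast shows up in Python, whose integers are exact (like Mathematica's) while its floats are binary64 (like Matlab's doubles). A sketch (the function name poly is mine):

```python
def poly(x, y):
    return 83521*y**8 + 578*x**2*y**4 - 2*x**4 + 2*x**6 - x**8

# Exact integer arithmetic reproduces Mathematica's exact answer.
exact = poly(9478657, 2298912)
print(exact)                           # -179689877047297

# Double precision loses everything: the terms near 6.5e55 cancel,
# leaving only rounding noise (Matlab reports -1.0889e+040).
approx = poly(9478657.0, 2298912.0)
print(approx)
assert approx != exact
```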
Error Analysis: Controlling Truncation Error

Truncation error occurs in the application of many numerical methods.

Example: iterative method x^(k+1) = g^(k+1)(x^(k), x^(k−1), ...)

We need stopping rules to stop the sequence {x^(k)} when we are close to the unknown solution x*. Unless the sequence x^(k) converges for small k, the stopping rule leads to truncation error.

Stopping Rules

- Stop when the sequence is not changing much anymore.
- Stop when ||x^(k+1) − x^(k)|| is small relative to ||x^(k)||:

    ||x^(k+1) − x^(k)|| / ||x^(k)|| ≤ ε

  for small ε. This rule may never stop the sequence if x^(k) converges to zero.
- General stopping rule: stop and accept x^(k+1) if

    ||x^(k+1) − x^(k)|| / (1 + ||x^(k)||) ≤ ε
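The general stopping rule drops into a fixed-point loop in a few lines. A sketch in Python (the helper name and the cosine example are mine, not from the slides):

```python
import math

def fixed_point(g, x0, eps=1e-4, max_iter=10_000):
    """Iterate x_{k+1} = g(x_k); stop when |x_{k+1} - x_k| / (1 + |x_k|) <= eps."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) / (1.0 + abs(x)) <= eps:
            return x_new, k
        x = x_new
    raise RuntimeError("general stopping rule never fired")

# x_{k+1} = cos(x_k) converges linearly to the fixed point 0.739085...
x_hat, iters = fixed_point(math.cos, 1.0)
print(x_hat, iters)
```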
Failure of the General Stopping Rule

Consider the sequence

    x_k = Σ_{j=1}^{k} 1/j

This sequence diverges, but x_k tends to infinity very slowly, e.g. x_10000 = 9.78761. For ε = 0.0001 the general stopping rule would stop the sequence at k = 1159 with x_1159 = 7.63296.

The general stopping rule is not reliable.

Rates of Convergence

- Key measure for the performance of an algorithm
- Suppose the sequence {x^(k)} with x^(k) ∈ R^n converges to x*.
- {x^(k)} converges at rate q > 1 to x* if

    ||x^(k+1) − x*|| / ||x^(k) − x*||^q ≤ M < ∞

  for all k sufficiently large.
- Quadratic convergence: q = 2. Example: 1 + 2^(−2^k) converges at rate q = 2 to 1.
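The harmonic-series counterexample above is quick to reproduce; a sketch in Python:

```python
# Partial sums x_k = sum_{j=1}^k 1/j diverge, but the steps 1/(k+1)
# shrink, so the general stopping rule fires anyway.
eps = 0.0001
x, k = 0.0, 0
while True:
    k += 1
    x_new = x + 1.0 / k
    if abs(x_new - x) / (1.0 + abs(x)) <= eps:
        break
    x = x_new

print(k, x_new)   # stops at k = 1159 with x ~ 7.63296, far from "converged"
```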
Linear Convergence

{x^(k)} converges linearly to x* at rate β < 1 if

    ||x^(k+1) − x*|| / ||x^(k) − x*|| ≤ β

for all k sufficiently large.

Example: 1 + 2^(−k) converges linearly to 1 at rate β = 0.5.

Superlinear convergence:

    lim_{k→∞} ||x^(k+1) − x*|| / ||x^(k) − x*|| = 0

Example: 1 + k^(−k) converges superlinearly to 1.

Error Analysis: Controlling Truncation Error

Adaptive stopping rule: Suppose the sequence {x^(k)} converges linearly at rate β to x* and ||x^(k+1) − x*|| ≤ β||x^(k) − x*||. Then

    ||x^(k+1) − x*|| ≤ ||x^(k+1) − x^(k)|| / (1 − β)

Stop and accept x^(k+1) if ||x^(k+1) − x^(k)|| ≤ ε(1 − β).

Estimate β as the maximum over

    ||x^(k−l) − x^(k+1)|| / ||x^(k−l−1) − x^(k+1)||

for l = 0, 1, 2, ...
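The example sequences make these definitions concrete. A sketch in Python checking the error ratios (all variable names are mine):

```python
# Linear: x_k = 1 + 2^-k has errors 2^-k, so each ratio is exactly 0.5.
lin_errors = [2.0**-k for k in range(1, 20)]
assert all(e1 / e0 == 0.5 for e0, e1 in zip(lin_errors, lin_errors[1:]))

# Quadratic: x_k = 1 + 2^(-2^k) has errors 2^(-2^k), and
# |x_{k+1} - x*| = |x_k - x*|^2 holds exactly here (M = 1, q = 2).
quad_errors = [2.0**-(2**k) for k in range(1, 6)]
assert all(e1 == e0**2 for e0, e1 in zip(quad_errors, quad_errors[1:]))

# Superlinear: x_k = 1 + k^-k; the error ratio tends to 0.
sup_errors = [float(k)**-k for k in range(1, 12)]
ratios = [e1 / e0 for e0, e1 in zip(sup_errors, sup_errors[1:])]
assert ratios[-1] < ratios[0] < 1.0
```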
Error Analysis: Controlling Truncation Error

Exercise 2.11a: Consider the sequence x_k = Σ_{n=1}^{k} 3^n/n!. Note that lim_{k→∞} x_k = e^3 − 1 = 19.08553...

Compare the general stopping rule with the adaptive stopping rule, estimating either

    β̂ = max_{l=0,1,2,...} ||x^(k−l) − x^(k+1)|| / ||x^(k−l−1) − x^(k+1)||

or simply

    β̂ = ||x^(k) − x^(k+1)|| / ||x^(k−1) − x^(k+1)||

Compute and Verify

First, compute an approximate solution to your problem. Second, verify that it is an acceptable approximation according to economically meaningful criteria.

Example: Consider the problem of solving f(x) = 0. The exact solution is x*, our approximate solution is x̂.

- Forward error analysis: How far is x̂ from x*?
- Backward error analysis: Construct a similar problem f̂ such that f̂(x̂) = 0. How far is f̂ from f?
- Compute and verify: How far is f(x̂) from its target value of 0?
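Exercise 2.11a can be sketched as follows, using the simple one-step estimate of β from the slide (Python; variable names are mine):

```python
import math

# x_k = sum_{n=1}^k 3^n/n!  converges to e^3 - 1 = 19.08553...
eps = 1e-4
x_prev2, x_prev = 0.0, 3.0           # x_0 = 0, x_1 = 3
term, k = 3.0, 1
while True:
    k += 1
    term *= 3.0 / k                  # term is now 3^k / k!
    x = x_prev + term
    # estimate beta with x standing in for the unknown limit x*
    beta_hat = abs(x_prev - x) / abs(x_prev2 - x)
    if abs(x - x_prev) <= eps * (1.0 - beta_hat):   # adaptive rule
        break
    x_prev2, x_prev = x_prev, x

print(k, x)   # stops at k = 14, within 1e-4 of e^3 - 1
```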
Compute and Verify

Example: f(x) = x^2 − 2 = 0

- Approximate solution x̂ = 1.41
- f(1.41) = 1.9881 − 2 < 0, f(1.42) = 2.0164 − 2 > 0
- Bound on forward error: |x̂ − x*| < 0.01
- f̂(x) = x^2 − 1.9881 satisfies f̂(x̂) = 0
- Backward error: f̂(x) − f(x) = 0.0119
- f(x̂) = −0.0119

How large or important is this error?

Compute and Verify

Quantify the importance of the error in economically meaningful terms.

Example: Excess demand function E(p) = D(p) − S(p)

- What does E(p̂) = 0.01 mean? Not much.
- What does E(p̂)/D(p̂) = 0.01 mean? A lot.

Interpretation of the relative error in this example:
- Leakage between demand and supply due to market frictions.
- Optimization error of boundedly rational agents.
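The sqrt(2) example above, with its three error notions, in a short Python sketch:

```python
def f(x):          # original problem
    return x * x - 2.0

def f_hat(x):      # nearby problem solved exactly by x_hat (backward view)
    return x * x - 1.9881

x_hat = 1.41

# Forward error: the sign change brackets x* between 1.41 and 1.42.
assert f(1.41) < 0.0 < f(1.42)

# Backward error: f_hat differs from f by the constant 0.0119.
backward = f_hat(0.0) - f(0.0)
print(backward)          # 0.0119 (up to rounding)

# Compute and verify: the residual f(x_hat) is -0.0119.
residual = f(x_hat)
print(residual)
```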
Compute and Verify

Relative errors in economically meaningful terms:
- Advantage: Generally applicable (unlike forward error analysis).
- Disadvantage: More than one solution may be deemed acceptable (like backward error analysis).

Summary

- Computer arithmetic
- Error analysis: Sources of error; Error propagation; Controlling the error
- Rates of convergence
- Compute and verify