Computational Economics and Finance

Computational Economics and Finance Part I: Elementary Concepts of Numerical Analysis Spring 2015

Outline

- Computer arithmetic
- Error analysis: sources of error, error propagation, controlling the error
- Rates of convergence
- Compute and verify

Computer Arithmetic

Unlike pure mathematics, computer arithmetic has finite precision and is limited by time and space. Real numbers are represented as floating-point numbers of the form

    ±d_0.d_1 d_2 ... d_{p-1} × β^e

- d_0.d_1 d_2 ... d_{p-1} is called the significand (older term: mantissa); it has p digits with d_j ∈ {0, 1, ..., β − 1}.
- β is called the base.
- e ∈ {e_min, e_min + 1, ..., e_max} is the exponent.

Example

Consider the decimal number 0.1.

- If β = 10 and p = 3, then 1.00 × 10^{-1} is exact.
- If β = 2 and p = 24, then 1.10011001100110011001101 × 2^{-4} is not exact.

In fact, with β = 2 the number 0.1 lies strictly between two floating-point numbers and is not exactly representable by either of them.

Double Precision

- Most widely used standard for floating-point computation: the IEEE Standard for Floating-Point Arithmetic (IEEE 754)
- IEEE: Institute of Electrical and Electronics Engineers
- Followed by many hardware (central processing unit, CPU, and floating-point unit, FPU) and software implementations
- Current version is IEEE 754-2008, published in August 2008
- Perhaps the most widely used format: the IEEE 754 double-precision binary floating-point format, binary64

binary64

- Base β = 2; exponent and significand are written in binary form
- Total of 64 bits: 1 bit for the ± sign, 11 bits for the exponent, 52 bits for the significand
- Normalized such that the most significant bit d_0 = 1 for all numbers (so d_0 need not be stored)
- Exponent is biased by 1023

The number (−1)^sign × (1.d_1 d_2 ... d_52)_2 × 2^{e−1023} has the value

    (−1)^sign × (1 + Σ_{i=1}^{52} d_i 2^{−i}) × 2^{e−1023}
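As an aside, the bit layout above can be inspected directly. The following is an illustrative Python sketch (the course itself uses Matlab and Mathematica); the helper name decode_binary64 is ours, not part of any standard library:

```python
import struct

def decode_binary64(x):
    """Split an IEEE 754 binary64 value into (sign, biased exponent, fraction)."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))  # reinterpret the 64 bits
    sign = bits >> 63                  # 1 bit
    exponent = (bits >> 52) & 0x7FF    # 11 bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)  # 52 stored significand bits d_1 ... d_52
    return sign, exponent, fraction

sign, e, f = decode_binary64(-0.1)
# reconstruct (-1)^sign * (1 + sum d_i 2^-i) * 2^(e-1023)
value = (-1) ** sign * (1.0 + f / 2**52) * 2.0 ** (e - 1023)
```

For -0.1 the unbiased exponent e − 1023 comes out as −4, matching the single-precision example above, and the reconstruction reproduces the stored double exactly.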

Machine Epsilon

- Smallest quantity ε such that 1 − ε and 1 + ε are both different from one; the smallest possible difference in the significand between two numbers
- Double precision has at most 16 decimal digits of accuracy: 2^{−52} ≈ 2.220446049 × 10^{−16}
- Matlab: eps = 2.2204e-016
- Mathematica: $MachineEpsilon = 2.22045 × 10^{−16}
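The same value is exposed by Python, which we use here as an illustrative stand-in for the Matlab/Mathematica commands on the slide:

```python
import sys

eps = sys.float_info.epsilon   # 2**-52: gap between 1.0 and the next double
one_plus = 1.0 + eps           # strictly greater than 1.0
one_same = 1.0 + eps / 2       # halfway case rounds back to 1.0 (round-to-even)
```

Adding anything smaller than eps (here eps/2) to 1.0 leaves 1.0 unchanged, which is exactly what "cannot be distinguished from one" means in finite precision.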

Machine Infinity

- Largest quantity that can be represented; an overflow error occurs if an operation produces a larger quantity.
- Double precision has maximal exponent 2^10 = 1024 = (2^0 + 2^1 + 2^2 + ... + 2^10) − 1023, and 2^1024 ≈ 1.797693135 × 10^308
- Bias in representation: 1023
- Largest number: (2 − eps) × 2^1023 ≈ 1.797693135 × 10^308
- Matlab: realmax = 1.7977e+308
- Mathematica: $MaxMachineNumber = 1.79769 × 10^308

Machine Zero

- Any quantity that cannot be distinguished from zero. An underflow error occurs if an operation on nonzero quantities produces a smaller quantity.
- Double precision has smallest exponent −1023: 2^{−1023} ≈ 1.112536929 × 10^{−308}
- By convention this exponent represents 0, since normalization requires d_0 = 1.
- Smallest positive number: 2^{−1022} ≈ 2.225073859 × 10^{−308}
- Matlab: realmin = 2.2251e-308
- Mathematica: $MinMachineNumber = 2.22507 × 10^{−308}
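Both limits, and the overflow/underflow behavior they imply, can be demonstrated in a few lines; again a Python sketch of what realmax/realmin show in Matlab:

```python
import sys
import math

largest = sys.float_info.max    # (2 - 2**-52) * 2**1023 ≈ 1.7977e+308 (realmax)
smallest = sys.float_info.min   # 2**-1022 ≈ 2.2251e-308 (realmin)
overflow = largest * 2.0        # too large to represent: becomes inf
underflow = smallest * 1e-20    # far below the smallest positive double: becomes 0.0
```

Overflow produces the special value inf rather than an error, while a sufficiently severe underflow silently yields machine zero.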

Extended Precision

- Often desirable and occasionally necessary to increase precision
- Some software packages can perform arbitrary-precision arithmetic
- Mathematica: $MinNumber = 1.887662394852453 × 10^{−323228468}, $MaxNumber = 5.297557459040040 × 10^{323228467}

Computer Arithmetic

A computer can only execute the basic arithmetic operations of addition, subtraction, multiplication, and division. Everything else is approximated.

Relative speeds, old values (Exercise 2.7):

    operation         speed relative to addition
    subtraction         1.03
    multiplication      1.03
    division            1.06
    exponentiation      5.09
    sine function       4.20

Computer Arithmetic

Efficient evaluation of polynomials (Horner's method):

    a_0 + a_1 x + a_2 x^2 + a_3 x^3 = a_0 + x(a_1 + x(a_2 + x a_3))

Efficient computation of derivatives (automatic differentiation): consider

    f(x, y, z) = (x^α + y^α + z^α)^β

Then

    f_x = (x^α + y^α + z^α)^{β−1} βα x^{α−1} = [f(x, y, z) / (x^α + y^α + z^α)] · βα x^α / x
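Horner's rule is short enough to spell out. A minimal Python sketch of the nested evaluation above (the function name horner is ours):

```python
def horner(coeffs, x):
    """Evaluate a0 + a1*x + ... + an*x**n as a0 + x*(a1 + x*(a2 + ...)).
    Uses n multiplications and n additions instead of evaluating each power."""
    result = 0.0
    for a in reversed(coeffs):   # start from the innermost bracket, a_n
        result = a + x * result
    return result

# 1 + 2x + 3x^2 + 4x^3 at x = 2: 1 + 4 + 12 + 32 = 49
p = horner([1.0, 2.0, 3.0, 4.0], 2.0)
```

Beyond speed, the nested form often accumulates less round-off than summing separately computed powers.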

Error Analysis: Sources of Error

- Model error: an economic model is only an approximation of a real phenomenon.
- Data error: parameters of the model have to be estimated, forecasted, simulated, or approximated; data may be missing; available data may not reflect well the true but unknown process.
- Numerical errors: solving a model on a computer typically yields an approximation of the solution; such approximations are the essence of numerical analysis and involve two types of numerical errors, round-off errors and truncation errors.

    reality → model → numerical solution

Numerical Analysis: Sources of Error

Numbers are represented by a finite number of bits. Real numbers with a significand longer than the number of bits available have to be shortened. Examples: irrational numbers, finite numbers that are too long, finite numbers in decimal form that have no finite exact representation in binary form.

Round-off error: chopping off extra digits or rounding, e.g. 2/3 stored as 0.66666 or as 0.66667.

Round-off Errors

- If β = 2 and p = 24, then the binary floating-point representation of the decimal number 0.1, 1.10011001100110011001101 × 2^{−4} (in single precision), is not exact.
- Round-off errors are likely to occur when the numbers involved in calculations differ significantly in their magnitude, or when two numbers that are nearly identical are subtracted from each other.

Example in Matlab

Solve the quadratic equation

    x^2 − 100.0001x + 0.01 = 0

using the quadratic formula

    x = (−b ± √(b^2 − 4ac)) / (2a)

Exact solutions: x_1 = 100 and x_2 = 0.0001

Round-off Errors in Matlab

    format long;
    a = 1; b = 100.0001; c = 0.01;
    RootDis = sqrt(b^2 - 4*a*c)
    RootDis = 99.999899999999997
    x1 = (b + RootDis)/(2*a)
    x1 = 100
    x2 = (b - RootDis)/(2*a)
    x2 = 1.000000000033197e-004
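The same computation can be reproduced in Python, together with the standard remedy for the inaccurate small root: since the product of the roots equals c/a (Vieta), the small root can be obtained by division instead of by the cancellation-prone subtraction. The remedy is a standard trick, not shown on the slides:

```python
import math

# x**2 - 100.0001*x + 0.01 = 0, exact roots 100 and 0.0001
a, b, c = 1.0, -100.0001, 0.01

root_dis = math.sqrt(b * b - 4.0 * a * c)
x1 = (-b + root_dis) / (2.0 * a)        # large root: accurate
x2_naive = (-b - root_dis) / (2.0 * a)  # small root: subtracts nearly equal numbers
x2_vieta = c / (a * x1)                 # x1 * x2 = c/a: avoids the subtraction
```

The naive small root matches Matlab's 1.000000000033197e-004, while the Vieta form recovers 0.0001 to full precision.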

Truncation Errors

Truncation errors occur when the numerical methods used for solving a mathematical problem employ an approximate mathematical procedure.

Example: the infinite sum

    e^x = Σ_{n=0}^{∞} x^n / n!

becomes

    Σ_{n=0}^{N} x^n / n!

for some finite N.

Truncation error is independent of round-off error and occurs even when the mathematical operations are exact.
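The truncated series is easy to tabulate; a small Python sketch (the function name exp_series is ours) showing that the truncation error shrinks as N grows, independently of round-off:

```python
import math

def exp_series(x, N):
    """Truncated Taylor series sum_{n=0}^{N} x**n / n! for e**x."""
    term, total = 1.0, 1.0        # n = 0 term
    for n in range(1, N + 1):
        term *= x / n             # builds x**n / n! incrementally
        total += term
    return total

err_5 = abs(math.exp(1.0) - exp_series(1.0, 5))    # truncation error with N = 5
err_15 = abs(math.exp(1.0) - exp_series(1.0, 15))  # much smaller with N = 15
```

With N = 5 the error is around 1.6e-3; with N = 15 it is already below the round-off level of double precision.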

Error Analysis: Controlling Rounding Error

Rules of thumb:

- Avoid unnecessary subtractions of numbers of similar magnitude.
- First add the smaller numbers and then add the result to the larger numbers.

Example: Rounding Error

Exercise 2.3: Consider the system of linear equations

    64919121x − 159018721y = 1
    41869520.5x − 102558961y = 0

Exact solution: x = 205117922 and y = 83739041.

However, double-precision arithmetic yields x = 1.02559e+008 and y = 4.18695e+007 due to catastrophic cancellation:

    x = −102558961 / [64919121 · (−102558961) − 41869520.5 · (−159018721)]
      = −102558961 / (−6658037598793281 + 6658037598793280.5)

The exact denominator is −0.5, giving x = 205117922; in double precision the two products cancel catastrophically.

Matlab: 102558961
Mathematica: 1.02559 × 10^8

Solving the Equations in Matlab

    A = [64919121 -159018721; 41869520.5 -102558961];
    b = [1; 0];
    A \ b

    ans =
       1.0e+008 *
        1.0602
        0.4328

Solving the Equations in Mathematica

    Clear[x, y];
    Solve[{64919121 x - 159018721 y == 1, 41869520.5 x - 102558961 y == 0}, {x, y}]
    {}

According to Mathematica the system has no solution.

    Clear[x, y];
    Solve[{64919121 x - 159018721 y == 1, 83739041/2 x - 102558961 y == 0}, {x, y}]
    {{x -> 205117922, y -> 83739041}}

Now Mathematica finds the exact solution correctly.

Error Analysis: Controlling Rounding Error

Exercise 2.5: Compute

    83521y^8 + 578x^2y^4 − 2x^4 + 2x^6 − x^8

for x = 9478657 and y = 2298912.

Exact answer: −179689877047297

However, double-precision arithmetic yields −1.0889e+040 (depending on ordering).

Individual terms:

    83521y^8        = 6.5159e+055
    578x^2y^4       = 1.45048e+042
    2x^4            = 1.61442e+028
    2x^6            = 1.45048e+042
    x^8             = 6.5159e+055
    83521y^8 − x^8  = −2.9074e+042

Exercise 2.5 in Matlab

    x = 9478657; y = 2298912;
    83521*y^8 + 578*x^2*y^4 - 2*x^4 + 2*x^6 - x^8

    ans =
      -1.0889e+040

Exercise 2.5 in Mathematica

    x = 9478657; y = 2298912;
    83521 y^8 + 578 x^2 y^4 - 2 x^4 + 2 x^6 - x^8
    -179689877047297

Mathematica finds the correct solution.

    x = 9478657.; y = 2298912;
    83521 y^8 + 578 x^2 y^4 - 2 x^4 + 2 x^6 - x^8
    0

Mathematica states that the solution is zero!
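Python makes the same contrast visible in one script, because its integers are exact (arbitrary precision) while its floats are binary64:

```python
def p(x, y):
    # the polynomial from Exercise 2.5
    return 83521*y**8 + 578*x**2*y**4 - 2*x**4 + 2*x**6 - x**8

exact = p(9478657, 2298912)       # integer arithmetic: exact result
approx = p(9478657.0, 2298912.0)  # doubles: terms of size ~6.5e55, answer ~1.8e14
```

The terms are so large that the spacing between adjacent doubles near 6.5e55 (about 1e40) dwarfs the true answer, so the floating-point result is pure round-off noise.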

Error Analysis: Controlling Truncation Error

Truncation error occurs in the application of many numerical methods.

Example: an iterative method

    x^(k+1) = g^(k+1)(x^(k), x^(k−1), ...)

We need stopping rules to stop the sequence {x^(k)} when we are close to the unknown solution x*. Unless the sequence x^(k) converges for small k, the stopping rule leads to truncation error.

Stopping Rules

- Stop when the sequence is not changing much anymore.
- Stop when |x^(k+1) − x^(k)| is small relative to |x^(k)|, for small ε:

      |x^(k+1) − x^(k)| / |x^(k)| ≤ ε

  This rule may never stop the sequence if x^(k+1) converges to zero.
- General stopping rule: stop and accept x^(k+1) if

      |x^(k+1) − x^(k)| / (1 + |x^(k)|) ≤ ε

Failure of the General Stopping Rule

Consider the sequence

    x_k = Σ_{j=1}^{k} 1/j

This sequence diverges, but x_k tends to infinity very slowly, e.g. x_10000 = 9.78761. For ε = 0.0001 the general stopping rule would stop the sequence at k = 1159 with x_1159 = 7.63296.

The general stopping rule is not reliable.
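The failure above can be reproduced directly. A Python sketch (the helper name stop_index is ours) that applies the general stopping rule to the harmonic series:

```python
def stop_index(eps, max_iter=10**6):
    """Apply the general rule |x_{k+1} - x_k| <= eps*(1 + |x_k|)
    to the divergent harmonic series x_k = sum_{j=1}^k 1/j."""
    x = 1.0                          # x_1
    for k in range(1, max_iter):
        step = 1.0 / (k + 1)         # x_{k+1} - x_k
        if step <= eps * (1.0 + x):  # rule fires: accept x_{k+1}
            return k + 1, x + step
        x += step
    return max_iter, x

k_stop, x_stop = stop_index(1e-4)    # stops at k = 1159, x ~ 7.63296
```

The rule confidently "accepts" x_1159 ≈ 7.63 even though the true limit is infinite, which is exactly the unreliability the slide warns about.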

Rates of Convergence

A key measure of the performance of an algorithm. Suppose the sequence {x^(k)} with x^(k) ∈ R^n converges to x*.

{x^(k)} converges at rate q > 1 to x* if

    ‖x^(k+1) − x*‖ / ‖x^(k) − x*‖^q ≤ M < ∞

for all k sufficiently large.

Quadratic convergence: q = 2. Example: 1 + 2^(−2^k) converges at rate q = 2 to 1.

Linear Convergence

{x^(k)} converges linearly to x* at rate β if

    ‖x^(k+1) − x*‖ / ‖x^(k) − x*‖ ≤ β < 1

for all k sufficiently large.

Example: 1 + 2^(−k) converges linearly to 1 at rate β = 0.5.

Superlinear convergence:

    lim_{k→∞} ‖x^(k+1) − x*‖ / ‖x^(k) − x*‖ = 0

Example: 1 + k^(−k) converges superlinearly to 1.
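The two example sequences make the definitions concrete: for linear convergence the error ratio ‖x^(k+1) − x*‖ / ‖x^(k) − x*‖ settles at β, while for quadratic convergence the ratio with the squared previous error is bounded. A short Python check:

```python
# x_k = 1 + 2**-k: errors e_k = 2**-k, linear with ratio exactly 0.5
lin_err = [2.0 ** -k for k in range(1, 12)]
lin_ratios = [lin_err[i + 1] / lin_err[i] for i in range(10)]

# x_k = 1 + 2**-(2**k): errors e_k = 2**-(2**k), so e_{k+1} = e_k**2 (q = 2)
quad_err = [2.0 ** -(2 ** k) for k in range(1, 6)]
quad_ratios = [quad_err[i + 1] / quad_err[i] ** 2 for i in range(4)]
```

For these exact powers of two the linear ratios are all 0.5 and the quadratic ratios are all 1.0, i.e. the number of correct digits roughly doubles each quadratic step.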

Error Analysis: Controlling Truncation Error

Adaptive stopping rule: Suppose the sequence {x^(k)} converges linearly at rate β to x* and ‖x^(k+1) − x*‖ ≈ β‖x^(k) − x*‖. Then

    ‖x^(k+1) − x*‖ ≤ ‖x^(k+1) − x^(k)‖ / (1 − β)

Stop and accept x^(k+1) if ‖x^(k+1) − x^(k)‖ ≤ ε(1 − β).

Estimate β as the maximum over

    ‖x^(k−l) − x^(k+1)‖ / ‖x^(k−l−1) − x^(k+1)‖,   l = 0, 1, 2, ...

Error Analysis: Controlling Truncation Error

Exercise 2.11a: Consider the sequence

    x_k = Σ_{n=1}^{k} 3^n / n!

Note that lim_{k→∞} x_k = e^3 − 1 = 19.08553...

- General stopping rule
- Adaptive stopping rule with

    β̂ = max_{l=0,1,2,...} ‖x^(k−l) − x^(k+1)‖ / ‖x^(k−l−1) − x^(k+1)‖

  or simply β̂ = ‖x^(k) − x^(k+1)‖ / ‖x^(k−1) − x^(k+1)‖
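The partial sums of Exercise 2.11a are cheap to compute, which lets one watch the truncation error vanish as k grows. An illustrative Python sketch (the function name x_k is ours):

```python
import math

def x_k(k):
    """Partial sum x_k = sum_{n=1}^{k} 3**n / n!, which converges to e**3 - 1."""
    term, total = 1.0, 0.0        # term starts at 3**0 / 0! = 1
    for n in range(1, k + 1):
        term *= 3.0 / n           # 3**n / n! built incrementally
        total += term
    return total

limit = math.exp(3.0) - 1.0       # 19.08553...
```

Early partial sums are still far from the limit (x_5 is off by more than 1), while x_30 agrees with e^3 − 1 to well beyond single-precision accuracy; a stopping rule decides where in between to truncate.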

Compute and Verify

First, compute an approximate solution to your problem. Second, verify that it is an acceptable approximation according to economically meaningful criteria.

Example: Consider the problem of solving f(x) = 0. The exact solution is x*, our approximate solution is x̂.

- Forward error analysis: How far is x̂ from x*?
- Backward error analysis: Construct a similar problem f̂ such that f̂(x̂) = 0. How far is f̂ from f?
- Compute and verify: How far is f(x̂) from its target value of 0?

Compute and Verify

Example: f(x) = x^2 − 2 = 0

- Approximate solution x̂ = 1.41
- f(1.41) = 1.9881 − 2 < 0, f(1.42) = 2.0164 − 2 > 0
- Bound on forward error: |x̂ − x*| < 0.01
- f̂(x) = x^2 − 1.9881 satisfies f̂(x̂) = 0
- Backward error: f̂(x) − f(x) = 0.0119
- f(x̂) = −0.0119

How large or important is this error?
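The three error notions for this example fit in a few lines of Python (x̂ written as x_hat):

```python
import math

def f(x):
    return x * x - 2.0

x_hat = 1.41                          # candidate approximation to sqrt(2)
residual = f(x_hat)                   # compute-and-verify: distance of f(x_hat) from 0
bracketed = f(1.41) < 0.0 < f(1.42)   # sign change => root lies in (1.41, 1.42)
forward_err = abs(x_hat - math.sqrt(2.0))  # forward error, bounded by 0.01
```

The residual −0.0119 is also the backward error: x_hat solves the perturbed problem x^2 − 1.9881 = 0 exactly, so "how wrong is the answer" and "how wrong is the problem it solves" are two views of the same 0.0119.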

Compute and Verify

Quantify the importance of the error in economically meaningful terms.

Example: Excess demand function E(p) = D(p) − S(p)

- What does E(p̂) = 0.01 mean? Not much.
- What does E(p̂)/D(p̂) = 0.01 mean? A lot.

Interpretations of the relative error in this example:

- Leakage between demand and supply due to market frictions.
- Optimization error of boundedly rational agents.

Compute and Verify

Relative errors in economically meaningful terms:

- Advantage: generally applicable (unlike forward error analysis).
- Disadvantage: more than one solution may be deemed acceptable (like backward error analysis).

Summary

- Computer arithmetic
- Error analysis: sources of error, error propagation, controlling the error
- Rates of convergence
- Compute and verify