
Numerical methods in physics

Marc Wagner
Goethe-Universität Frankfurt am Main
summer semester 2018
Version: April 9, 2018

Contents

1 Introduction
2 Representation of numbers in computers, roundoff errors
  2.1 Integers
  2.2 Real numbers, floating point numbers
  2.3 Roundoff errors
    2.3.1 Simple examples
    2.3.2 Another example: numerical derivative via finite difference

***** April 13, 2018 (1. lecture) *****

1 Introduction

"Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics)." (Wiki)

"Numerical mathematics, also called numerics for short, is the branch of mathematics concerned with the construction and analysis of algorithms for continuous mathematical problems. Its main application is the approximate computation of solutions with the help of computers." (Wiki, translated from German)

Almost no modern physics without computers. Even analytical calculations often require computer algebra systems (Mathematica, Maple, ...), or are not fully analytical but numerically exact (e.g. mainly analytical work with simple 1-dimensional numerical integrations at the end, which can be carried out up to arbitrary precision).

Goal of this lecture: learn how to use computers in an efficient and purposeful way.

- Implement numerical algorithms, e.g. in C or Fortran, ...
- ... write program code specifically for your physics problems ...
- ... use floating point numbers appropriately (understand roundoff errors, why and to what extent accuracy is limited, ...) ...
- ... quite often computations run for several days, weeks or even months, i.e. decide on the most efficient algorithms ...
- ... in practice: parts of your code have to be written from scratch, other parts use existing numerical libraries (e.g. GSL, LAPACK, ARPACK, ...), i.e. learn to use such libraries.

Typical problems in physics which can be solved numerically:

- Linear systems.
- Eigenvalue and eigenvector problems.
- Integration in 1 or more dimensions.
- Differential equations.
- Root finding (Nullstellensuche), optimization (finding minima or maxima).
- ...

Computer algebra systems (e.g. Mathematica, Maple, ...) will not be discussed in this lecture:

- They complement numerical calculations.
- Automated analytical calculations, e.g. solve standard integrals (find the antiderivative [Stammfunktion]), simplify lengthy expressions, transform coordinates (e.g. Cartesian coordinates to spherical coordinates), ...

2 Representation of numbers in computers, roundoff errors

2.1 Integers

Computer memory can store 0s and 1s, so-called bits, b_j ∈ {0, 1}.

Integer:

    z = b_{N-1} ... b_2 b_1 b_0   (stored in this way in computer memory, i.e. in the binary numeral system),

    z = sum_{j=0}^{N-1} b_j 2^j   (for positive integers).                   (1)

Typically N = 32 (sometimes also N = 8, 16, 64, 128), i.e. 0 <= z <= 2^32 - 1 = 4 294 967 295.

Negative integers: very similar (homework: study Wiki, https://en.wikipedia.org/wiki/Integer_(computer_science)).

Many arithmetic operations are exact; exceptions: exceeding the range, and division, square root, ..., which yield another integer obtained by rounding down (Nachkommastellen abschneiden), e.g. 7/3 = 2.

2.2 Real numbers, floating point numbers

Real numbers are approximated in computers by floating point numbers,

    z = S M 2^{E-e}.                                                         (2)

Sign: S = ±1.

Mantissa:

    M = sum_{j=0}^{N_M} m_j (1/2)^j,                                         (3)

m_0 = 1 (phantom bit, i.e. M = 1.???, "normalized"), m_j ∈ {0, 1} for j >= 1 (representation analogous to the representation of integers).

Exponent: E is an integer, e is an integer constant.

Two frequently used data types: float (32 bits) and double (64 bits)^1.

float:

S: 1 bit.

E: 8 bits.

^1 float and double are C data types. In Fortran: real and double precision.

M: N_M = 23 bits.

Range: M = 1, 1+ε, 1+2ε, ..., 2-2ε, 2-ε, where ε = (1/2)^23 ≈ 1.19 × 10^-7; ε is the relative precision.

e = 127, E = 1, ..., 254, i.e. 2^{E-e} = 2^{-126} ... 2^{+127} ≈ 10^{-38} ... 10^{+38}; 10^{-38} is roughly the smallest positive normalized number, 10^{+38} the largest number.

double:

S: 1 bit.

E: 11 bits.

M: N_M = 52 bits.

Range: ε = (1/2)^52 ≈ 2.22 × 10^{-16}, 2^{E-e} ≈ 10^{-308} ... 10^{+308}.

Homework: study [1], section 1.1.1 "Floating-Point Representation".

2.3 Roundoff errors

Due to the finite number of bits of the mantissa M, real numbers cannot be stored exactly; they are approximated by the closest floating point numbers, cf. equation (2):

    z ≈ S M 2^{E-e},                                                         (4)

i.e. relative precision ≈ 10^{-7} for float and ≈ 10^{-16} for double.

2.3.1 Simple examples

1 + ε' = 1, if ε' < ε (in floating point arithmetic the small contribution is lost completely).

Difference of similar numbers z_1 and z_2 (i.e. the first n decimal digits of z_1 and z_2 are identical, they differ from the (n+1)-th digit on): writing z_1 = α_1 × 10^β and z_2 = α_2 × 10^β with α_{1,2} = 1.???,

    z_1 - z_2 = (α_1 - α_2) × 10^β,   α_1 - α_2 = O(10^{-n}).                (5)

When α_1 - α_2 is computed, the first n digits cancel each other, i.e. the resulting mantissa has accuracy ≈ 10^{-(7-n)} (float) or ≈ 10^{-(16-n)} (double). E.g. the difference of two floats which differ relatively by 10^{-6} is accurate only up to 1 digit.

2.3.2 Another example: numerical derivative via finite difference

Starting point: the function f(x) can be evaluated, f'(x) cannot (e.g. the expression is very long and complicated).

Common approach: approximate f'(x) numerically by a finite difference, e.g.

    f(x+h) = f(x) + f'(x) h + (1/2) f''(x) h^2 + O(h^3)                      (6)

    f'(x) = (f(x+h) - f(x)) / h + O(h)            (asymmetric)               (7)

    f'(x) = (f(x+h) - f(x-h)) / (2h) + O(h^2)     (symmetric).               (8)

Problems:

If h is large, the neglected terms O(h), O(h^2) are large.

If h is small, f(x+h) - f(x) and f(x+h) - f(x-h) are differences of similar numbers (cf. section 2.3.1).

Optimal choice h = h_opt for the asymmetric finite difference (7):

Relative error due to the neglected O(h) term: δf'(x) = f'(x) - f'_{finite difference}(x) = -(1/2) f''(x) h + O(h^2), i.e.

    |δf'(x) / f'(x)| ≈ |f''(x) h / f'(x)|.                                   (9)

Relative error due to the difference f(x+h) - f(x): since f(x+h) - f(x) ≈ f'(x) h, the relative loss of accuracy is ≈ |f(x) / (f'(x) h)| (e.g. 10^3 implies that 3 digits are lost), i.e.

    |δf'(x) / f'(x)| ≈ |f(x) / (f'(x) h)| ε.                                 (10)

For h = h_opt both errors are similar:

    |f''(x) h_opt / f'(x)| ≈ |f(x) / (f'(x) h_opt)| ε

    →  h_opt ≈ |f(x) / f''(x)|^{1/2} ε^{1/2} ∼ ε^{1/2}

    →  |δf'(x) / f'(x)|_opt ≈ |f''(x) h_opt / f'(x)| ∼ ε^{1/2}.              (11)

Optimal choice h = h_opt for the symmetric finite difference (8): an analogous analysis yields

    h_opt ∼ ε^{1/3},   |δf'(x) / f'(x)|_opt ∼ ε^{2/3},                       (12)

i.e. the symmetric derivative is superior to the asymmetric derivative.

In practice:

Estimate errors analytically as sketched above ...

... and test the stability of your results with respect to numerical parameters (h in the derivative example).

The above estimates are confirmed by the following example program^2.

    // derivative of sin(x) at x = 1.0 via finite differences

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
      int j;

      // *****

      printf("h        rel_err_asym rel_err_sym\n");

      for(j = 1; j <= 15; j++)
        {
          double h = pow(10.0, -(double)j);

          double df_exact = cos(1.0);
          double df_asym = (sin(1.0+h) - sin(1.0)) / h;
          double df_sym = (sin(1.0+h) - sin(1.0-h)) / (2.0 * h);

          double rel_err_asym = fabs((df_exact - df_asym) / df_exact);
          double rel_err_sym = fabs((df_exact - df_sym) / df_exact);

          printf("%.1e %.3e %.3e\n", h, rel_err_asym, rel_err_sym);
        }

      // *****

      return EXIT_SUCCESS;
    }

^2 Throughout this lecture I use C.

h        rel_err_asym rel_err_sym
1.0e-01  7.947e-02    1.666e-03
1.0e-02  7.804e-03    1.667e-05
1.0e-03  7.789e-04    1.667e-07
1.0e-04  7.787e-05    1.667e-09
1.0e-05  7.787e-06    2.062e-11
1.0e-06  7.787e-07    5.130e-11
1.0e-07  7.742e-08    3.597e-10
1.0e-08  5.497e-09    4.777e-09
1.0e-09  9.724e-08    5.497e-09
1.0e-10  1.082e-07    1.082e-07
1.0e-11  2.163e-06    2.163e-06
1.0e-12  8.003e-05    2.271e-05
1.0e-13  1.358e-03    3.309e-04
1.0e-14  6.861e-03    6.861e-03
1.0e-15  2.741e-02    2.741e-02
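The minima in the table match the estimates (11) and (12) up to factors of order 1 (the numbers below are a quick consistency check added here, using the double precision ε ≈ 2.2 × 10^{-16}):

```latex
% asymmetric difference, estimate (11):
h_{\mathrm{opt}} \sim \epsilon^{1/2} \approx 1.5 \times 10^{-8} , \qquad
\left| \delta f'/f' \right|_{\mathrm{opt}} \sim \epsilon^{1/2} \approx 1.5 \times 10^{-8}
% (table: minimum 5.497e-09 at h = 1.0e-08)

% symmetric difference, estimate (12):
h_{\mathrm{opt}} \sim \epsilon^{1/3} \approx 6.1 \times 10^{-6} , \qquad
\left| \delta f'/f' \right|_{\mathrm{opt}} \sim \epsilon^{2/3} \approx 3.7 \times 10^{-11}
% (table: minimum 2.062e-11 at h = 1.0e-05)
```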

References

[1] W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, "Numerical recipes 3rd edition: the art of scientific computing", Cambridge University Press (2007).