Mathematics 256 a course in differential equations for engineering students


Chapter 5. More efficient methods of numerical solution

Euler's method is quite inefficient. Because the error is essentially proportional to the step size, if we want to halve our error we must double the number of steps taken across the same interval. So if N steps give us one significant figure, then 10N give us 2 figures, 100N give us 3, 100,000N give us 6, and 10^(n-1) N give us n figures. Not too good, because even a fast computer will think a bit about doing several billion operations. The method can be improved dramatically by a simple modification.

1. The improved Euler's method

Look again at what goes on in one step of Euler's method for solving y' = f(t, y). Say we are at step n. We know t_n and y_n and want to calculate t_{n+1}, y_{n+1}. In Euler's method we assume that the slope across the interval [t_n, t_n + Δt] = [t_n, t_{n+1}] is what it is at t_n. Of course that is not true, but it is very roughly OK.

    [Figure: one Euler step, from (t_0, y_0) to (t_1, y_1), plotted against nearby solution curves.]

After we have taken one step of Euler's method, we can tell to what extent the assumption on the slope was false, by comparing the slope of the segment coming from the left at (t_{n+1}, y_{n+1}) with the value f(t_{n+1}, y_{n+1}) that it ought now to be. The new method will use this idea to get a better estimate for the slope across the interval [t_n, t_{n+1}]. One step of the new method goes like this: Start at t_n with the estimate y_n. Do one step of Euler's method to get an approximate value y* = y_n + Δt f(t_n, y_n) for y_{n+1}. Calculate what the slope is at the point (t_{n+1}, y*), i.e. f(t_{n+1}, y*). We now make a modified guess as to what the slope across [t_n, t_{n+1}] should have been, by taking the average of the two values f(t_n, y_n) and f(t_{n+1}, y*). In other words:

In one step of the new method we calculate the sequence

    s_0 = f(x_n, y_n)
    y* = y_n + h s_0
    x_{n+1} = x_n + h
    s_1 = f(x_{n+1}, y*)
    y_{n+1} = y_n + h (s_0 + s_1)/2

This is in contrast with one step of Euler's:

    s_0 = f(x_n, y_n)
    y_{n+1} = y_n + h s_0
    x_{n+1} = x_n + h

In the new method we are doing essentially twice as much work in one step, but it should be somewhat more accurate. It is, and dramatically so. Let's see, for example, how the new method, which is called the improved Euler's method, does with the simple problem

    y' = y - t,  y(0) = 0.5

we looked at before. Here is how one run of the new method goes in covering the interval [0, 1] in 4 steps:

    t     y          s_0        y*         s_1
    0.00  0.500000   0.500000   0.625000   0.375000
    0.25  0.609375   0.359375   0.699219   0.199219
    0.50  0.679199   0.179199   0.723999  -0.026001
    0.75  0.698349  -0.051651   0.685436  -0.314564
    1.00  0.652572

And here is how the estimates for y(1) compare for different step sizes:

    N     estimated y(1)  true y(1)  error
    2     0.679688        0.640859   0.038828
    4     0.652572        ...        0.011713
    8     0.644079        ...        0.003220
    16    0.641703        ...        0.000844
    32    0.641075        ...        0.000216
    64    0.640914        ...        0.000055
    128   0.640873        ...        0.000014
    256   0.640863        ...        0.000003
    512   0.640860        ...        0.000001
    1024  0.640859        0.640859   0.000000

Halving the step size here cuts down the error by a factor of 4. This suggests:

    The error in the improved Euler's method is proportional to (Δt)^2.

The new method, which is called either the improved Euler's method or the Runge-Kutta method of order 2, is much more efficient than Euler's. If it takes N intervals to get 2-decimal accuracy, it will take 10N to get 4 decimals and 100N to get 6 (as opposed to 100,000N in Euler's method). Very roughly, to get accuracy ε it takes 1/√ε steps.
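The step just described is easy to carry out on a computer. Here is a minimal Python sketch (illustrative code, not part of the original notes) that reproduces the run above:

```python
import math

def improved_euler(f, t0, y0, t_end, n):
    """Integrate y' = f(t, y) from t0 to t_end with n improved-Euler steps."""
    dt = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        s0 = f(t, y)               # slope at the left endpoint
        y_star = y + dt * s0       # plain Euler prediction y*
        s1 = f(t + dt, y_star)     # slope at the predicted right endpoint
        y += dt * (s0 + s1) / 2    # step with the averaged slope
        t += dt
    return y

f = lambda t, y: y - t
exact = 2 - math.exp(1) / 2        # true y(1) = 0.640859...
for n in (2, 4, 8, 16):
    est = improved_euler(f, 0.0, 0.5, 1.0, n)
    print(n, round(est, 6), round(abs(est - exact), 6))
```

Each doubling of n cuts the error by roughly a factor of 4, matching the table.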

Exercise 1.1. Do one step of the new method for y' = y - t, y(0) = 1 to estimate y(0.1). Then do two steps of size Δt = 0.05. Estimate the error in the second value.

2. Relations with numerical calculation of integrals

If y is a solution of y'(t) = f(t, y(t)), then for any interval [t_n, t_{n+1}] with t_{n+1} = t_n + Δt we can integrate to get

    y(t_{n+1}) - y(t_n) = ∫ from t_n to t_{n+1} of f(t, y(t)) dt.

This does not amount to a formula for y(t_{n+1}) in terms of y(t_n), because the right hand side involves the unknown function y(t) throughout the interval [t_n, t_{n+1}]. In order to get from it an estimate for y(t_{n+1}) we must use some sort of approximation for the integral. In Euler's method we set

    ∫ from t_n to t_{n+1} of f(t, y(t)) dt ≈ f(t_n, y(t_n)) Δt.

We then replace y(t_n) by its approximate value y_n to see how one step of Euler's method goes. In the special case where f(t, y) = f(t) does not depend on y, the solution of the differential equation and initial condition

    y' = f(t),  y(t_0) = y_0

is given by a definite integral

    y(t_n) = y_0 + ∫ from t_0 to t_n of f(t) dt,

and Euler's method is equivalent to the rectangle rule for estimating the integral numerically. Recall that the rectangle rule approximates the graph of f(t) by a sequence of horizontal lines, each one agreeing with the value of f(t) at its left end. In this special case it is simple to see geometrically why the overall error is roughly proportional to the step size.

In the improved Euler's method we approximate the integral ∫ from t_n to t_{n+1} of f(t, y(t)) dt

by the trapezoid rule, which uses the estimate

    (Δt/2) [f(t_n, y(t_n)) + f(t_{n+1}, y(t_{n+1}))]

for it. We don't know the exact values of y(t_n) or y(t_{n+1}), but we approximate them by the values y_n and y_n + Δt f(t_n, y_n). In the special case f(t, y) = f(t) the improved Euler's method reduces exactly to the trapezoid rule. Again, it is easy to visualize why the error over a fixed interval is proportional to the square of the step size.

3. Runge-Kutta

Neither of the two methods discussed so far is used much in practice. The simplest method which is in fact practical is one called the Runge-Kutta method of order 4. It involves a more complicated sequence of calculations in each step. Let h be the step size. We calculate in succession:

    s_1 = f(x_n, y_n)
    y* = y_n + (h/2) s_1
    x* = x_n + h/2
    s_2 = f(x*, y*)
    y** = y_n + (h/2) s_2
    s_3 = f(x*, y**)
    x_{n+1} = x_n + h
    y*** = y_n + h s_3
    s_4 = f(x_{n+1}, y***)
    y_{n+1} = y_n + h (s_1 + 2 s_2 + 2 s_3 + s_4)/6

The purpose of using * here is that the numbers like y* etc. are not of interest beyond the immediate calculation they are involved in. You wouldn't want to do this by hand, but again it is not too bad for a spreadsheet. Here is a table of the calculations for y' = y - t, y(0) = 0.5 with 4 steps.

    t     y          s_1        y*         s_2        y**        s_3        y***       s_4
    0.00  0.500000   0.500000   0.562500   0.437500   0.554688   0.429688   0.607422   0.357422
    0.25  0.607992   0.357992   0.652740   0.277740   0.642709   0.267709   0.674919   0.174919
    0.50  0.675650   0.175650   0.697607   0.072607   0.684726   0.059726   0.690582  -0.059418
    0.75  0.691521  -0.058479   0.684211  -0.190789   0.667672  -0.207328   0.639689  -0.360311
    1.00  0.640895

And here is how the estimates for y(1) compare for different step sizes. The method is so accurate that we must use nearly all the digits of accuracy possible:

    N     estimated y(1)      true y(1)           error
    2     0.64132690429688    0.64085908577048    0.00046781852640
    4     0.64089503039934    ...                 0.00003594462886
    8     0.64086157779163    ...                 0.00000249202115
    16    0.64085924982971    ...                 0.00000016405923
    32    0.64085909629440    ...                 0.00000001052392
    64    0.64085908643684    ...                 0.00000000066636
    128   0.64085908581240    ...                 0.00000000004192
    256   0.64085908577311    ...                 0.00000000000263
    512   0.64085908577064    ...                 0.00000000000016
    1024  0.64085908577049    0.64085908577048    0.00000000000001

Here halving the step size reduces the error by a factor of 16. Since 16 = 2^4, this suggests:

    The error in RK4 is proportional to (Δt)^4.

Euler's method and the improved Euler's method arise from using the rectangle rule and the trapezoid rule for estimating the integral ∫ from t_n to t_{n+1} of f(t, y(t)) dt. RK4 uses a variant of Simpson's Rule for calculating integrals, and reduces to it in the special case when f(t, y) doesn't depend on y.

4. Remarks on how to do the calculations

When implementing these by hand, which you might have to do every now and then, you should set the calculations out neatly in tables. Here is what it might look like for the improved Euler's method applied to the differential equation y' = 2xy - 1 with step size h = 0.1:

    x         y          s = f(x, y)   y* = y + h s   s* = f(x + h, y*)
    0.000000  1.000000  -1.000000      0.900000      -0.820000
    0.100000  0.909000  -0.818200      0.827180      -0.669128
    0.200000  0.834634  -0.666147      0.768019      -0.539189

etc.

5. How to estimate the error

When you set out to approximate the solution of a differential equation y' = f(x, y)

by a numerical method, you are given initial conditions y(x_0) = y_0 and a range of values [x_0, x_f] (f for "final") over which you want to approximate the solution. The particular numerical method you are going to use is usually specified in advance, and your major choice is then the step size, the increment by which x changes in each step of the method. The basic idea is that if you choose a small step size your approximation will be closer to the true solution, but at the cost of a larger number of steps. What is the exact relationship between step size and accuracy? We have discussed this already to some extent, but now we shall look at the question again. Some explicit data can give you a feel for how things go. Here is the outcome of several runs of Euler's method for solving the equation

    y' = 2xy - 1,  y(0) = 1

over the interval [0, 1].

    Number of steps  Step size  Estimated value of y(1)
    4                0.250000   0.426758
    8                0.125000   0.540508
    16               0.062500   0.608672
    32               0.031250   0.646763
    64               0.015625   0.667026
    128              0.007812   0.677495
    256              0.003906   0.682818
    512              0.001953   0.685503
    1024             0.000977   0.686851

This table suggests that if we keep making the step size smaller, then the approximations to y(1) will converge to something, but it does not give a good idea, for example, of how many steps we would need to take if we wanted an answer correct to 4 decimals. To get this we need some idea of how rapidly the estimates are converging. The easiest way to do this is to add a column recording the difference between each estimate and the previous one.

    Number of steps  Step size  Estimated value of y(1)  Difference from previous estimate
    4                0.250000   0.426758
    8                0.125000   0.540508                 0.113750
    16               0.062500   0.608672                 0.068164
    32               0.031250   0.646763                 0.038091
    64               0.015625   0.667026                 0.020263
    128              0.007812   0.677495                 0.010469
    256              0.003906   0.682818                 0.005323
    512              0.001953   0.685503                 0.002685
    1024             0.000977   0.686851                 0.001348

The pattern in the last column is simple: eventually, the difference gets cut in half at each line. This means that, at least when h is small, the difference is roughly proportional to the step size. Thus we can extrapolate entries in the last column to be

    0.000674, 0.000337, 0.000169, 0.000084, ...

But the total error should be the sum of all these differences, which can be written as

    0.001348 (1/2 + 1/4 + 1/8 + 1/16 + ...) = 0.001348.

It is not in fact necessary to look at a whole sequence of runs. We can summarize the discussion more simply: if we make two runs of Euler's method, the error in the second of the two estimates for y(x_f) will be (roughly) the size of the difference between the two estimates.
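The two-run rule is easy to check in code. An illustrative sketch (not from the original notes), using Euler's method on the same problem y' = 2xy - 1, y(0) = 1:

```python
def euler(f, x0, y0, x_end, n):
    """Integrate y' = f(x, y) from x0 to x_end with n Euler steps."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: 2 * x * y - 1
est_h  = euler(f, 0.0, 1.0, 1.0, 512)     # step size h
est_h2 = euler(f, 0.0, 1.0, 1.0, 1024)    # step size h/2
diff = est_h2 - est_h
print(diff)              # about 0.001348, as in the table
print(est_h2 + diff)     # extrapolated (improved) estimate of y(1)
```

The extrapolated value est_h2 + diff already agrees with the true y(1) = 0.688204... to about four decimals, far better than either run on its own.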

The reason the error can be estimated in this way is that the error in Euler's method is proportional to the step size. Suppose we have made two runs of the method, one with step size h and another with step size h/2. Then because of the proportionality, and the meaning of error, we know that

    true answer ≈ estimate_h + C h
    true answer ≈ estimate_{h/2} + C (h/2)

for some constant C. Here estimate_h means the estimate corresponding to step size h. If we treat the approximate equalities as equalities, we get two equations in the two unknowns "true answer" and C. We can solve them to get

    true answer ≈ 2 estimate_{h/2} - estimate_h
                = estimate_{h/2} + (estimate_{h/2} - estimate_h)

    error in estimate_{h/2} ≈ estimate_{h/2} - estimate_h

In other words, one run of Euler's method will tell us virtually nothing about how accurate the run is, but two runs will allow us to get an idea of the error, and an improved estimate for the value of y(x_f) as well. The same is true for any of the other methods, since we know that in each case the error is roughly proportional to some power of h: for improved Euler's it is h^2, and for RK4 it is h^4. In general, if the error is proportional to h^k and we make two runs with step sizes h and h/2, we get

    true answer ≈ estimate_h + C h^k
    true answer ≈ estimate_{h/2} + C (h/2)^k

We multiply the second equation by 2^k,

    2^k (true answer) ≈ 2^k estimate_{h/2} + C h^k,

and subtract, getting

    (2^k - 1) (true answer) ≈ 2^k estimate_{h/2} - estimate_h,

and finally:

    true answer ≈ (2^k estimate_{h/2} - estimate_h) / (2^k - 1)
                = estimate_{h/2} + (estimate_{h/2} - estimate_h) / (2^k - 1)

If you do one run with step size h and another of size h/2 with a method of order k, then

    error in estimate_{h/2} ≈ (estimate_{h/2} - estimate_h) / (2^k - 1)
    true answer ≈ estimate_{h/2} + error in estimate_{h/2}

The earlier formula is the special case of this with k = 1.

6. Step size choice

When we use one of the numerical methods to approximate the solution of a differential equation, we usually have ahead of time some idea of how accurate we want the approximation to be: to within 6 or 7 decimals of accuracy, say. The question is, how can we choose the step size in order to obtain this accuracy? The discussion above suggests the following procedure: Make two runs of the method over the given range, with step size h and then with step size h/2.

We get two estimates for y(x_f), estimate_h and estimate_{h/2}. If the method has error proportional to h^k (i.e. if the error is of order k), then we know that the error in estimate_h is approximately

    error in estimate_h ≈ (estimate_h - estimate_{h/2}) / (1 - 2^{-k}).

If we multiply the step size by σ, we multiply the error by σ^k; so if we want an error of about ε:

    ε ≈ (h*/h)^k (error in estimate_h).

Solve this equation to get the new step size h*:

    h* = h (ε / error in estimate_h)^{1/k}.

The procedure looks tedious, but plausible: we choose some initial step size h which we judge (arbitrarily) to be reasonable. We do two runs at step sizes h and h/2 to see how accuracy depends on step size (to get the constant of proportionality). We then calculate the step size h* needed to get the accuracy we want, and do a third run at the new size.

Example. Here is a typical question: In solving the differential equation

    y' = xy + 1,  y(0) = 1

by RK4, with step sizes 0.0625 and 0.03125, the estimates y_1 = 3.0594077069 and y_2 = 3.0594078333 for y(1) were calculated. (a) Make an estimate of the error involved in these calculated values. (b) What step size should be chosen to obtain 16-decimal accuracy?

One thing to notice is that we don't need to know at all what the differential equation is, or even the exact method used. The only important thing to know is the order of the method used, which is k = 4. The formula for the error in the second value, say, is simple. We estimate it to be

    (y_2 - y_1) / (2^4 - 1) = 0.0000001264 / 15 = 0.0000000084.

To figure out what step size to use to get a given accuracy, we must know how the error will depend on step size. It is c Δ^4 for some constant c, where Δ is the step size. In the second run the step size is 0.03125, so we estimate

    c = 0.0000000084 / (0.03125)^4 ≈ 0.0088.

To get the step size we want, we solve

    c Δ^4 = 0.0088 Δ^4 = 10^{-16},  giving  Δ = (10^{-16} / 0.0088)^{1/4} ≈ 0.00033.
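The whole procedure of this section can be mirrored in a few lines of code. Here is an illustrative sketch (not from the original notes; the routine is a standard RK4 integrator and the variable names are mine) that reruns the example y' = xy + 1, y(0) = 1:

```python
def rk4(f, x0, y0, x_end, n):
    """Integrate y' = f(x, y) from x0 to x_end with n fourth-order Runge-Kutta steps."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        s1 = f(x, y)
        s2 = f(x + h / 2, y + h / 2 * s1)
        s3 = f(x + h / 2, y + h / 2 * s2)
        s4 = f(x + h, y + h * s3)
        y += h * (s1 + 2 * s2 + 2 * s3 + s4) / 6
        x += h
    return y

f = lambda x, y: x * y + 1
est_h  = rk4(f, 0.0, 1.0, 1.0, 16)      # step size 0.0625
est_h2 = rk4(f, 0.0, 1.0, 1.0, 32)      # step size 0.03125
err = (est_h2 - est_h) / (2 ** 4 - 1)   # error in the finer run (order k = 4)
c = abs(err) / 0.03125 ** 4             # error ~ c * (step size)^4
h_new = (1e-16 / c) ** 0.25             # step size aimed at ~16-decimal accuracy
print(est_h, est_h2)    # about 3.0594077 and 3.0594078
print(err, h_new)       # about 8e-09 and 0.0003
```

The two runs, the error estimate, and the new step size come out essentially as in the worked example above.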

7. Summary of the numerical methods introduced so far

We have seen three different methods of solving differential equations by numerical approximation. In choosing among them there is a trade-off between simplicity and efficiency. Euler's method is relatively simple to understand and to program, for example, but almost hopelessly inefficient. The third and final method, the order 4 method of Runge-Kutta, is very efficient but rather difficult to understand and even to program. The improved Euler's method lies somewhere in between these two on both grounds.

Each one produces a sequence of approximations to the solution over a range of x-values, and proceeds from an approximation of y at one value of x to an approximation at the next in one more or less complicated step. In each method one step goes from x_n to x_{n+1} = x_n + h. The methods differ in how the step from y_n to y_{n+1} is performed. More efficient single steps come about at the cost of higher complexity for one step. But for a given desired accuracy, the overall savings in time is good.

    Method                   The step from (x_n, y_n) to (x_{n+1}, y_{n+1})   Overall error

    Euler's                  s_0 = f(x_n, y_n)                                proportional to h
                             y_{n+1} = y_n + h s_0
                             x_{n+1} = x_n + h

    Improved Euler's         s_0 = f(x_n, y_n)                                proportional to h^2
    (RK of order 2)          y* = y_n + h s_0
                             x_{n+1} = x_n + h
                             s_1 = f(x_{n+1}, y*)
                             y_{n+1} = y_n + h (s_0 + s_1)/2

    Runge-Kutta of order 4   s_1 = f(x_n, y_n)                                proportional to h^4
                             y* = y_n + (h/2) s_1
                             x* = x_n + h/2
                             s_2 = f(x*, y*)
                             y** = y_n + (h/2) s_2
                             s_3 = f(x*, y**)
                             x_{n+1} = x_n + h
                             y*** = y_n + h s_3
                             s_4 = f(x_{n+1}, y***)
                             y_{n+1} = y_n + h (s_1 + 2 s_2 + 2 s_3 + s_4)/6

In practice, of course, you cannot expect to do several steps of any of these methods except by computer. You should, however, understand how to do one step of each of them; how to estimate errors in using them; and how to use error estimates to choose efficient step sizes. Numerical methods are often in practice the only way to solve a differential equation. They can be of amazing accuracy, and in fact the methods we have described here are not too different from those used to send space vehicles on incredibly long voyages. But often, too, in practice you want some overall idea about how solutions

of a differential equation behave, in addition to precise numerical values. There is no automatic way of obtaining this.

In all these methods, one of the main problems is to choose the right step size. The basic principle in doing this is:

    Two runs of any method will let you estimate the error in the calculation, as well as decide what step size to use in order to get the accuracy you require.

8. Remarks on rounding error

Computers can only deal with numbers of limited accuracy, normally about 16 digits of precision. They cannot calculate even the products and sums of numbers exactly, much less the exact values of functions like sin. Errors in computations due to this limited accuracy are called rounding errors. Rounding errors are not usually a problem in the methods we have seen, except in the circumstance where Euler's method is used to obtain extremely high accuracy, in which case it will require enough steps for serious error to build up. But then Euler's method should never be used for high accuracy anyway.

There is one circumstance, however, in which there will be a problem. Consider this differential equation:

    y' = cos t + cy,  y(0) = y_0.

Suppose c > 0; for c = 1 the general solution is

    y = (sin t - cos t)/2 + (y_0 + 1/2) e^t.

Whenever y_0 ≠ -1/2 the solution will grow exponentially. In the exceptional case y_0 = -1/2 it will simply oscillate. Here, however, is the graph of a good numerical approximation matching the initial condition y(0) = -1/2:

    [Figure 8.1. How rounding errors affect the calculations for a stiff equation.]

Rounding errors are inevitably made at some point in almost any computer calculation, and as soon as that happens the estimated y(x) will make an almost certainly irreversible move from the graph of the unique bounded solution to one of the nearby ones with exponential growth. From that point on it must grow unbounded, no matter how well it has looked so far.
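A small experiment (illustrative code, not from the notes) makes this concrete: take y' = cos t + y, the section's example with c = 1, whose bounded solution has y(0) = -1/2, and nudge the initial value by 10^-8 to stand in for a rounding error (exaggerated from the true 10^-16 scale so the effect shows quickly):

```python
import math

def rk4(f, t0, y0, t_end, n):
    """Standard fourth-order Runge-Kutta integration of y' = f(t, y)."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        s1 = f(t, y)
        s2 = f(t + h / 2, y + h / 2 * s1)
        s3 = f(t + h / 2, y + h / 2 * s2)
        s4 = f(t + h, y + h * s3)
        y += h * (s1 + 2 * s2 + 2 * s3 + s4) / 6
        t += h
    return y

f = lambda t, y: math.cos(t) + y
y0 = -0.5 + 1e-8    # bounded solution's initial value plus a simulated rounding error
print(rk4(f, 0.0, y0, 5.0, 500))    # still near the bounded solution (about -0.62)
print(rk4(f, 0.0, y0, 30.0, 3000))  # exponential growth has taken over (order 1e5)
```

The deviation from the bounded solution itself satisfies u' = u, so any error δ made near t = 0 shows up later as roughly δ e^t, no matter how accurate the method is.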
This is not a serious problem, in the following sense: in the real world we can never measure initial conditions exactly, so we can never be sure to hit the one bounded solution exactly. Nor can nature. Since the bounded solution is unstable, in the sense that nearby solutions eventually diverge considerably from it, we should not expect to meet with it in physical problems.

Unstable solutions do not realistically occur in nature. Rounding errors simulate the approximate nature of our measurements. Another way to say this: because of rounding errors, the best we can hope to do with machine computations is to find the exact answer to a problem near the one that was originally posed.
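The 16-digit limit mentioned at the start of this section is easy to observe directly; a tiny illustration (not from the notes):

```python
import sys

eps = sys.float_info.epsilon    # gap between 1.0 and the next representable double
print(eps)                      # about 2.22e-16: roughly 16 significant digits
print(1.0 + eps / 2 == 1.0)     # True: the tiny addition is rounded away
print(0.1 + 0.2 == 0.3)         # False: even 0.1 and 0.2 are not stored exactly
```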