Numerical Methods and Programming
P. B. Sunil Kumar
Department of Physics, Indian Institute of Technology, Madras
Lecture - 7: Error Propagation and Stability

Last class we discussed the representation of numbers and some of the errors that can come because of this finite representation. We continue on that today and look at some concrete examples of these errors. We said that the errors are of mainly two types: one is the round-off error, and the other is the truncation error. These errors we can quantify. We can write the error as (true value - value obtained) / true value, and we call that the true error; multiplied by 100, it is the % true error. We saw that in real applications we would actually not know what the true value is, and thus we need another definition of the error, and that is what is shown here: the approximate error. The approximate error is written ε_a, to show that it is an approximate error, and the % approximate error is the approximate error divided by the present approximation, times 100. So what will the approximate error be? For example, in a series, if you make two successive approximations, you could take the difference between these two and divide by the present approximation, and that would give us an approximate error.

(Refer Slide Time: 02:59)

We will see this in an example here; just to remind you, the subscript a is used to show that this is an approximate error.

One place where we actually use this approximate error is in iterative procedures. As I said, the approximate error there is the difference between the present and previous approximations, divided by the present approximation.

One could ask whether I should take (previous approximation - present approximation) or (present approximation - previous approximation), that is, whether the sign matters. We are generally not concerned with the sign, only with the magnitude, so we can write it either way and define that as the approximate error.

(Refer Slide Time: 04:00)

When you are doing a series expansion, you would be interested in making this approximate error less than the machine tolerance. We saw what this machine tolerance is: if you want a precision of n digits, the tolerance we can use is 0.5 x 10^(2-n) %, which corresponds to n significant figures. So we can compute an approximate error in a series expansion and demand that we keep terms up to the point where the approximate error falls below the tolerance 0.5 x 10^(2-n) %, where n is the precision of the machine, or the number of significant digits we are interested in. For some reason we may not want to go all the way to the machine tolerance, because some other part of the computation does not have that precision. For example, when we take experimental data and want to do some analysis with it, most of the time the measurement itself is not precise beyond a certain number of digits, so there is no point in keeping very high precision, which would only introduce additional errors; we could just cut the numbers off and keep a certain number of figures. So let us look at an example of this iterative method, just to drive home what we mean by the approximate error. Let us take the case where we try to compute e^0.5. We will use the series expansion to actually compute e^0.5.
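To restate the transcript's two definitions and the stopping rule compactly:

    approximate % error:                ε_a = |(present approximation - previous approximation) / present approximation| x 100
    tolerance for n significant digits: ε_s = 0.5 x 10^(2-n) %

and the series is summed term by term until ε_a < ε_s.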

We know the true value in this particular case: e^0.5 = 1.648721... So what series expansion would we use? You would expand e^x in general as 1 + x + x^2/2! + ..., and let us say, to take an example, that we want to keep 3 significant digits.

(Refer Slide Time: 05:09)

So then what do we demand? We want a tolerance of 0.5 x 10^(2-3) %, since n here is 3; that is 0.05 %.

(Refer Slide Time: 07:10)

So let us look at that series. The series is e^x = 1 + x + x^2/2! + x^3/3! + ... + x^n/n!. The question is up to what n I should go, and that depends on what accuracy I want. I am going to compute two different errors here, the true error and the approximate error, and then we demand that the approximate error go below the tolerance we wanted. Let us start the calculation. The first approximation is simply 1, and the second, for e^0.5, is 1 + 0.5 = 1.5.

(Refer Slide Time: 08:01)

Now, what are the errors? The true error is (1.648721 - 1.5) / 1.648721, the true value minus the approximate value divided by the true value, which is about 9%, a 9.02% error. Then we take the approximate error: 1.5 is the present approximation, the previous approximation was 1, and we divide the difference by the present approximation, which gives 33.3%. Then we go to the next term, x^2/2!, and the error decreases. So we continue the series in this way: the next term we keep is x^2/2! = 0.25/2 = 0.125, we add that to 1.5 and get 1.625, and for the approximate error we subtract 1.5 from 1.625 and divide by 1.625, which definitely gives a much smaller error than before. We continue like this, taking the series up to the term where the approximate error goes below the tolerance we specified. We will see later that it is not guaranteed that going higher and higher in the series always gives a better value; there are other reasons why a machine has a limiting precision. We will see that later.
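For concreteness, the arithmetic of the second and third approximations works out as follows (the percentages are computed from the value e^0.5 = 1.648721 quoted above):

    second approximation: 1 + 0.5 = 1.5
        true error        = |1.648721 - 1.5| / 1.648721 x 100 = 9.02 %
        approximate error = |1.5 - 1| / 1.5 x 100 = 33.3 %
    third approximation:  1.5 + 0.5^2/2! = 1.625
        true error        = |1.648721 - 1.625| / 1.648721 x 100 = 1.44 %
        approximate error = |1.625 - 1.5| / 1.625 x 100 = 7.69 %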

(Refer Slide Time: 09:02)

Let us summarize the computation we just did. The first approximation is just 1; that has a true error of 39.3%, since the correct answer is e^0.5 = 1.648721..., and there is no approximate error because this is the first approximation. The second approximation is 1.5, with a true error of about 9% and an approximate error of 33.3%; in this case we can compute the approximate error because we have both a first and a second approximation. The third approximation adds the x^2/2! term and gives 1.625, and we compute the approximate error by taking the difference between the last two approximations divided by the latest one. When we go to the fourth term, we again take the difference between the last two numbers and divide by the latest one, and that gives the next % approximate error, and so on. That is how we compute it. So then we say: I was demanding 0.05% as my error, and by the time I reach the sixth term I have gone below 0.05%, so I can stop the computation there. That is an example of how we can use this error estimate in a computation. Let us summarize once again: there are two types of errors encountered in numerical calculations.
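The whole procedure can be written as a short C program; the following is a minimal sketch of my own (not the program shown in the lecture; variable names and output format are illustrative):

#include <stdio.h>
#include <math.h>

/* Sum the series for exp(x) at x = 0.5 and stop when the approximate
 * relative error drops below the tolerance 0.5 * 10^(2-n) percent,
 * here for n = 3 significant digits.  i counts the terms added after
 * the leading 1, so the lecture's "sixth term" corresponds to i = 5. */
int main(void)
{
    double x = 0.5;
    int n_sig = 3;                              /* desired significant digits  */
    double tol = 0.5 * pow(10.0, 2 - n_sig);    /* tolerance in percent: 0.05  */

    double term = 1.0;   /* current term, starts at x^0 / 0! = 1 */
    double sum = 1.0;    /* running partial sum                   */
    double ea = 100.0;   /* approximate relative error in percent */
    int i = 0;

    while (ea > tol) {
        i++;
        term *= x / i;                          /* term becomes x^i / i!       */
        double prev = sum;
        sum += term;
        ea = fabs((sum - prev) / sum) * 100.0;  /* approximate error, percent  */
        printf("i = %d  sum = %.7f  ea = %.4f %%\n", i, sum, ea);
    }
    printf("series value %.7f, library exp(0.5) = %.7f\n", sum, exp(0.5));
    return 0;
}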

(Refer Slide Time: 11:44)

We said one is the round-off error, due to the finite representation of numbers. For example, 1/6 cannot be represented exactly; we have to use a finite representation for it. There is an error due to this: if I approximate 1/6 by a finite number of digits and then multiply it again by 6, I would expect to get 1, but I will get something different from 1. We will see how we can write a program to actually demonstrate these errors; it will also help us get familiar with some small programming.

(Refer Slide Time: 12:53)

(Refer Slide Time: 12:59)

We did look at some of the elements of programming before, so let us put them into practice with some small programs. Here we will see how I can approximate 1/6 to a certain number of digits and then try to compute the errors from that. As I said, sometimes in experimental data analysis we need to restrict the calculation to a certain number of digits; we do not want to go beyond that. So how do we actually do the chopping and rounding we mentioned before? That is, given 1/6, we can either chop it off or round it off at some number of digits. Let us look at an example: a simple code to limit the number of decimal places of some variable in a calculation. I have only a main program here, and I have declared an integer i and two floating point numbers, y and x, and I define my x with 9 decimal places in base 10. Now let us say I want to limit this to 2 decimal places. I do it in the following way: I take this number x and multiply it by 100, which gives 53.something, and then I apply floor; floor is a function which takes this number down to the integer just below it, and the result is stored in the integer i. So if I take x, multiply it by 100, and take the floor, I get 53, and then I divide that by 100.0 and store it as a floating point number again, which gives me 0.53. In this process I have chopped off everything beyond two decimal places. That is what we see if I run this program: I started with the number whose scaled value is 53.something, made an integer of it, 53, turned it back into a floating point, and got 0.530000.

Ignore the trailing zeros; they do not mean anything here, since I chopped off everything below the second decimal place. In this way I can approximate numbers to a certain number of decimal places. Now we can use this on 1/6. So I put my x as 1.0/6.0. Remember, when you define a floating point variable and you are writing a fraction, you always have to write it as a fraction of two floating point constants, or at least one floating point constant. If I write 1/6 with integer constants, the result will be 0, because the division is done in integer arithmetic even though x is a floating point variable. Now I approximate this to 2 decimal places, and let us look at what we get: we got 0.16. The actual number is 0.166666...; you can see that in the printed single-precision value it is already rounded, the string of 6s ending in a 7, but I have reduced it even further by simply chopping the number off after the second decimal place. So this is chopping. We can also do rounding: what we have to do there is add 0.5 before taking the floor, and if we run that, we get 0.17 instead of the 0.16 we got here, because the next decimal digit is more than 5, so the second decimal place is rounded up to 7. That is an example of chopping and rounding.

(Refer Slide Time: 18:00)

So now take this 1/6, and let us say I had only a 2-decimal-place computer, so 1/6 would be stored as 0.17 if rounded, or 0.16 if chopped. What is the consequence of that? The consequence, for example, is what I said before: if I take 1/6, kept to two decimal places, and multiply it by 6 again, what do I get? That is what we should look at, so we print one more quantity here: y, which is 1/6 approximated to 2 decimal places.
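A minimal C sketch of the chopping and rounding just described, including the multiply-by-6 check discussed next (the variable names are illustrative, not necessarily the lecturer's):

#include <stdio.h>
#include <math.h>

int main(void)
{
    int i;
    float x, y;

    x = 1.0 / 6.0;                      /* 0.1666666..., written with floating point constants */

    /* chop to 2 decimal places: scale by 100, drop the fraction with floor, scale back */
    i = (int) floor(x * 100.0);
    y = i / 100.0;
    printf("chopped  : %f\n", y);       /* prints 0.160000 */

    /* round to 2 decimal places: add 0.5 before taking the floor */
    i = (int) floor(x * 100.0 + 0.5);
    y = i / 100.0;
    printf("rounded  : %f\n", y);       /* prints 0.170000 */

    /* multiply the chopped 2-decimal value of 1/6 by 6: we do not get 1 */
    y = (int) floor(x * 100.0) / 100.0;
    printf("6 * y    : %f\n", 6.0 * y); /* prints 0.960000 */
    return 0;
}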

Now we multiply that by 6 and see what we get. y is 1/6 chopped to two decimal places; we multiply it by 6, and we definitely do not get 1, we get 0.96. That is the kind of error we were talking about coming from round-off: because we decided to keep the precision, the number of significant digits, only up to 2 decimal places, if you take 1/6 and multiply it by 6, 6 being an integer, you find that you do not get 1, you get 0.96 in this case. We can also see how it comes out if I round instead of chop; it is interesting that there is a difference between those two values as well. So we got 0.96 instead of 1, because I restricted this to 2 decimal places. If you keep more decimal places, you get a better answer; here, for example, when it is kept to 5 digits you get a value very close to 1, a much better answer, because it has 5 digits. So that is one kind of error. Then we also have round-off errors which appear due to arithmetic operations with finite-digit floating point numbers: not only is the representation of the numbers themselves rounded off, as we said, 1/6 being rounded off to 0.17, but the arithmetic operations also carry round-off errors.

(Refer Slide Time: 21:06)

This is again a major source of error, and here are some examples of it. Consider trying to compute e^(-5.5). We can use a series, as we saw before: we compute e^(-x) with the series 1 - x + x^2/2! - x^3/3! + ..., and so on. I will define a function which gives me this sum up to n terms, just so that I can quantify my errors.

You could plot this function value at x = -5.5 for different values of n, computed in 5-digit floating point arithmetic, and you would find that this value, as you keep increasing n, saturates beyond some point. We will look at this saturation with an example: we want to compute e^(-5.5) from this kind of series.

(Refer Slide Time: 22:01)

(Refer Slide Time: 22:56)

So let us compute that.

What I am doing here is the following: I have a small program. Again, I define some integers and some floating point numbers, y and x.

I start with y = 1, because the first term in the series is 1; y holds the running sum, and x is 5.5, with the alternating sign handled separately. Then I run a loop going up to 10, that is, up to n = 10 terms in the series, with i going from 1 to 10. I know that the denominator of this series goes as 2!, 3!, and so on up to n!, so to compute that I have a variable j = 1 to start with, and j keeps getting multiplied by i: when i = 2 it becomes 2, when i = 3 it becomes 1 x 2 x 3, and so on. So when i is 1, j is simply 1; when i is 2, the second factorial comes in and j becomes 2! = 2; when i becomes 3, j, which was already 2, gets multiplied by 3 and becomes 3! = 6, et cetera. Then I compute the n-th term, which is just (-1)^n x^n / n!. pow is a built-in C math library function which computes powers, so (-1)^i is computed using pow, and then I multiply that by x^i and divide by j, which is my i!. Now again I want to keep 5 digits, so, as we saw before, I multiply by 100,000, take the integer part, and divide by 100,000; that keeps the value to 5 digits, and I add it into y. Now I am going to print the value of the function and see how it changes as the series goes on, starting from i = 1. Let me just make sure that what we are doing is correct: we took 5.5 as x, we multiply by x^i and divide by j, which is the i!, we keep 5 digits, and we add that to the series. Let us go up to n = 4 and look at these terms one by one, and see whether it converges or not; we print out each term, term by term, these values of (-1)^n x^n / n!. We compile the program first, and then run it. The first term in the series, the n = 0 term, is 1, which I have not printed out. The n = 1 term is simply -5.5, and that is what we see here, so 1 plus this term is -4.5. The next term is +15.125, to the 5 digits I am keeping; the whole computation is done to 5 digits. Adding it, the sum becomes 10.625.

The next term is again negative, the fourth term is positive, and so on. So the value we are getting goes as -4.5, then about 10, then about -17, then about 21, et cetera; we are not getting anywhere in this example. The reason, summarized again here, is the following. I kept 5 digits, in a normalized floating point representation to base 10. The first term is 1, the next term is -5.5, or -0.55 x 10^1, the next is +15.125, that is 0.15125 x 10^2, the next is -27.729, or about -0.2773 x 10^2, et cetera; that is what we saw here. The point is, the sum is not going to converge well, and the reason is that we are adding and subtracting numbers of the same large magnitude, while the final answer, as we saw, is very small, something like 0.0041. Look at the numbers we are trying to add and subtract; they are of this order:

(Refer Slide Time: 29:58)

5.5, 15, 27, 38, and so on. They are all much larger than the final answer we want to get. We are looking for something much smaller than the numbers involved in the addition and subtraction, so we are losing significant figures.

We are not going to get a very good answer by doing this kind of series expansion in this particular case; it is a typical example of a bad series approximation.

(Refer Slide Time: 30:38)

(Refer Slide Time: 30:56)
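For reference, the first few terms and partial sums of this series at x = 5.5 (computed from (-1)^n x^n / n! without any 5-digit rounding, shown to three decimals) are:

    n    term         partial sum
    0    +1.000        1.000
    1    -5.500       -4.500
    2    +15.125      10.625
    3    -27.729     -17.104
    4    +38.128      21.023

while the value being sought, e^(-5.5), is only about 0.0040868.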

So one way to get rid of this catastrophic cancellation, as it is called, of digits of precision, or significant digits, is to use a different form of the series. Instead of summing the series for e^(-x) directly, we compute e^x and take 1/e^x. What is the advantage? The advantage is that we are then simply adding terms: there is no plus-minus alternation, everything is positive, and we take 1 over the final sum. That gives a much faster, much more reliable answer. We can try that here: instead of computing e to the power of minus 5.5 directly, we compute e to the power of plus 5.5, and the final answer is 1 over that, so instead of printing y we print 1/y. So now I use the series for e^x with x positive, and every term is positive because the sign factor is now +1 instead of (-1)^i. I take the power series, again approximate to 5 digits, and compute the value 1/y, which is what we wanted. Let us see what we get with this example. Doing the same thing again, I keep the same terms, that is, x = 5.5, now with plus signs: x, x^2/2!, x^3/3! with 6 in the denominator, x^4/4!, and so on. I sum up these terms and take the inverse, 1 over that sum, and I can see that this time it is converging to the value I want, unlike the previous case where it was not converging at all.

(Refer Slide Time: 35:43)
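The comparison can be made in a few lines of C; this is an illustrative sketch, not the lecture's program, and round5 is my own helper that rounds to 5 significant figures (the lecture instead scaled by 100,000, which keeps 5 decimal places):

#include <stdio.h>
#include <math.h>

/* Round a value to 5 significant decimal digits, as a stand-in for
 * "5 digit floating point arithmetic". */
static double round5(double v)
{
    if (v == 0.0)
        return 0.0;
    double scale = pow(10.0, 4 - floor(log10(fabs(v))));
    return floor(v * scale + 0.5) / scale;
}

int main(void)
{
    double x = 5.5;
    double term = 1.0, direct = 1.0, recip = 1.0;
    int i;

    for (i = 1; i <= 25; i++) {
        term = round5(term * x / i);                         /* x^i / i!, kept to 5 digits */
        direct = round5(direct + ((i % 2) ? -term : term));  /* 1 - x + x^2/2! - ...       */
        recip = round5(recip + term);                        /* 1 + x + x^2/2! + ...       */
    }
    printf("direct series for exp(-5.5) : %.7f\n", direct);
    printf("1 / (series for exp(+5.5))  : %.7f\n", 1.0 / recip);
    printf("library exp(-5.5)           : %.7f\n", exp(-5.5));
    return 0;
}

With this kind of 5-digit arithmetic, the direct alternating sum typically retains few, if any, correct significant figures, while the reciprocal form agrees with exp(-5.5) to several figures.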

So here we are getting much better convergence, and if we take a few more terms of the series, we find that we come very close to the value we wanted, which we were not getting by doing e^(-x) directly with 5-digit arithmetic. That is a clear demonstration of cancellation errors, or the catastrophic cancellation of significant digits. So, I repeat: we can compute e^(-x) in two different ways, using the power series 1 - x + x^2/2! - x^3/3! + ..., or using 1/e^x, which is 1/(1 + x + x^2/2! + x^3/3! + ...). What we find is that these two give very different results, and this is because of the loss of significant digits, due to the fact that we are trying to get a number which is much smaller than the numbers involved in the arithmetic operations themselves. So that is one thing which we can see. To summarize so far, we have round-off errors from the finite representation of numbers, and round-off errors appearing due to arithmetic operations; that is what we just saw. Then we have the other type of error, the truncation error, which is due to the use of an approximation in place of an exact mathematical operation. What I mean is, for example, if you want sin(x), instead of evaluating sin(x) exactly we could use a series for sin(x), and that gives us truncation errors.

(Refer Slide Time: 36:10)

For example, sin(pi/3) is 0.8660254. If I use the series and stop at one term, two terms, three terms, and so on, then with only one term I get a value which is completely off, with two terms I get slightly better, with three terms a little better still, et cetera. What I want to show you here is that if you use a truncated series for this function, you introduce some error.
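A small C sketch of these partial sums (an illustrative example; the printout format is my own):

#include <stdio.h>
#include <math.h>

/* Partial sums of the series sin(x) = x - x^3/3! + x^5/5! - ... at x = pi/3,
 * showing how the truncation error shrinks as terms are added. */
int main(void)
{
    double x = acos(-1.0) / 3.0;    /* pi/3 */
    double term = x, sum = 0.0;
    int n;

    for (n = 1; n <= 4; n++) {
        sum += term;
        printf("%d term(s): %.7f   truncation error = %.7f\n",
               n, sum, fabs(sin(x) - sum));
        term *= -x * x / ((2 * n) * (2 * n + 1));   /* next term of the series */
    }
    return 0;
}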

So that is the other type of error, called the truncation error. That is the summary of the errors we have. Now the point is this: the two errors, the truncation error and the round-off error, can often compete with each other. As we saw with the sine series, I can use one term, or two, or three, and so on up to n terms, and as I increase the number of terms in the series I get better and better answers, as we saw; but that is not always true.

(Refer Slide Time: 37:00)

The reason is what we saw previously: as we increase the number of mathematical operations, round-off errors are introduced. So we are competing between round-off errors and truncation errors.

(Refer Slide Time: 37:59)

Increasing the number of terms in the series reduces the truncation error, but it increases the number of computations, of arithmetic operations, and that introduces round-off error.

(Refer Slide Time: 38:08)

So there is actually a competition; that is what we are trying to show here. Think of it in terms of a step size, which is roughly the inverse of the number of terms or steps used. If I use a small step size, for example if I want to get to sin(pi/3) by building it up step by step from small increments, I take a large number of steps to reach that point, so the truncation error is very low, but the round-off error is larger because many more arithmetic operations are involved. As I increase the step size, the round-off error comes down, because fewer operations are needed to reach that point, but the truncation error goes up. There is a point at which the two balance, and that is what is called the point of diminishing returns. So it is not always true that using a larger number of terms in the series gives a better answer; that is the point I want to emphasize here. Now we have seen that we introduce errors whenever we do a mathematical operation. The next question is: can we say something about how the error propagates through the program? That is, you have a series of computations in a program; if I make an error, a round-off error or a truncation error, at one step, how does it propagate? That is what we will look at in this part.

So we look at error propagation in a function of a single variable to start with; we first try to quantify this, then we will look at multiple variables, and then we will see some examples using programs.

(Refer Slide Time: 39:59)

Let us first define this correctly. We have a function f(x) which depends on a variable x. Now suppose we introduce an error in x itself: because of the representation with a finite number of digits, there is an error in x. How does that reflect on the function? In other words, how much of this error in x propagates into the evaluation of f(x)? That is what is called propagation of error. So let x' be an approximation of x; for example, x was 1/6, and we approximated it to 2 decimal places, or 3 decimal places. We want to find the effect of the discrepancy between x and x' on the value of the function, that is, on f(x). In short, our aim is to estimate f(x) - f(x'). I say estimate, because we really do not know f(x), since we may not know the actual value of x. In other words, if Δx = x - x' is the error in the variable x, and Δf = f(x) - f(x') is the resulting error in the function, what is the relation between Δf and Δx? That is what we should try to see. Again, we are not interested in the sign, only in the magnitude. As I said, the problem is that we do not know the actual value f(x); what we know is x', so we can compute only f(x').

(Refer Slide Time: 43:01)

So let us assume that x' is close to x, and let us also assume that f is a continuous and differentiable function. With these two assumptions we can employ a Taylor series to compute f(x) near x'. That is the series given here: the Taylor series of f(x), x being the real value, expanded around the approximation x'. To expand in a Taylor series we need the derivatives of the function f, and hence the assumption that it is differentiable at all points; what we mean is that f is a smooth function, without sharp kinks. So we expand

f(x) = f(x') + f'(x') (x - x') + f''(x')/2! (x - x')^2 + f^(3)(x')/3! (x - x')^3 + ...

where f'(x') is the derivative df/dx evaluated at x = x', f''(x') is the second derivative d^2 f/dx^2 at x = x', f^(3)(x') is the third derivative d^3 f/dx^3 at x = x', and so on up to the n-th term. That gives a relation between f(x), f(x'), and x - x'. Taking the difference, it tells me that Δf, the difference between f(x) and f(x'), is related to Δx, the difference between x and x', through the derivative of f, to first approximation: f(x) - f(x') ≈ f'(x') (x - x').
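As a small numerical check of this first-order estimate, here is an illustrative C sketch (the choice f(x) = exp(x) and the 2-decimal chopping of 1/6 are my own; the lecture only states the general formula):

#include <stdio.h>
#include <math.h>

/* Compare the actual error |f(x) - f(x')| with the first-order estimate
 * |f'(x')| * |x - x'| for f(x) = exp(x), where x' is 1/6 chopped to
 * 2 decimal places. */
int main(void)
{
    double x  = 1.0 / 6.0;                 /* "true" value                   */
    double xp = floor(x * 100.0) / 100.0;  /* approximation x' = 0.16        */
    double dx = fabs(x - xp);              /* error in the argument          */

    double actual   = fabs(exp(x) - exp(xp));
    double estimate = exp(xp) * dx;        /* |f'(x')| * dx, since f' = exp  */

    printf("dx                   = %.7f\n", dx);
    printf("actual error in f    = %.7f\n", actual);
    printf("first-order estimate = %.7f\n", estimate);
    return 0;
}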

(Refer Slide Time: 46:04)

What I am assuming here is that Δx = x - x' is small, so the higher terms in the series die off quickly: (Δx)^2/2, (Δx)^3/6, and so on. If Δx is much smaller than 1, these decrease very fast, so to first approximation I can take f(x) = f(x') + f'(x') (x - x'). That gives us the propagation of error: if you ask, when I make an error in the representation of x, what is the error in f, the answer is f'(x') times (x - x'), the derivative of f at x = x' times the error in x. So if it is a sharply varying function, if f' is very large, then even a small error in x introduces a large error in f. We can generalize this idea to functions of more than one variable, again expanding in a Taylor series; I give the series only up to second order here, since we are only interested in the first approximation. Say there are two variables, u and v, and we compare the (i+1)-th level approximation with the i-th level approximation, with Δu and Δv the differences between them. Then

f(u_{i+1}, v_{i+1}) = f(u_i, v_i) + (∂f/∂u) Δu + (∂f/∂v) Δv + 1/2! [ (∂^2 f/∂u^2) (Δu)^2 + 2 (∂^2 f/∂u∂v) Δu Δv + (∂^2 f/∂v^2) (Δv)^2 ] + ...

with all the derivatives evaluated at (u_i, v_i). The first-order part has two terms, one derivative with respect to u and one with respect to v, and the second-order part has three terms: the second derivative with respect to u times (Δu)^2, twice the cross derivative ∂^2 f/∂u∂v times Δu Δv, and the second derivative with respect to v times (Δv)^2. So again, to first approximation, the error in f due to errors in u and v is given by the derivatives of f with respect to u and v, evaluated at the approximate values, multiplied by Δu and Δv.

Again, we do not know the actual values of these functions; the only thing we are trying to address is this: if we have an idea of how large the error in the variables is, how much of it is propagated into the function evaluation? That is the only thing we are trying to understand here, and that is summarized here.

(Refer Slide Time: 48:28)

(Refer Slide Time: 50:02)

I can keep writing like this and generalize to any number of variables. Suppose f depends on many variables x_1, x_2, ..., x_n, and I have approximations x'_1, x'_2, ..., x'_n with errors Δx_1, Δx_2, ..., Δx_n. Generalizing our two-variable result, to first order

Δf ≈ |∂f/∂x_1| Δx_1 + |∂f/∂x_2| Δx_2 + ... + |∂f/∂x_n| Δx_n,

with the derivatives evaluated at the approximate values. So any of these derivatives being large means that the function will have a large error. That is where we stop. We will see the implementation of these ideas, evaluating errors and the propagation of errors in some real problems, as we go along in this course.

(Refer Slide Time: 51:02)

(Refer Slide Time: 51:22)
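As a concrete check of the multi-variable formula, here is an illustrative C sketch for f(u, v) = u * v (this example is mine, not from the lecture):

#include <stdio.h>
#include <math.h>

/* First-order error propagation for f(u, v) = u * v:
 * compare the actual error |u*v - u'*v'| with the estimate
 * |df/du| * du + |df/dv| * dv = |v'| * du + |u'| * dv. */
int main(void)
{
    double u = 1.0 / 3.0, v = 1.0 / 7.0;   /* "true" values               */
    double up = 0.333,    vp = 0.142;      /* chopped to 3 decimal places */
    double du = fabs(u - up), dv = fabs(v - vp);

    double actual   = fabs(u * v - up * vp);
    double estimate = fabs(vp) * du + fabs(up) * dv;

    printf("actual error in u*v  = %.8f\n", actual);
    printf("first-order estimate = %.8f\n", estimate);
    return 0;
}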

(Refer Slide Time: 51:42)


More information

Floating Point. The World is Not Just Integers. Programming languages support numbers with fraction

Floating Point. The World is Not Just Integers. Programming languages support numbers with fraction 1 Floating Point The World is Not Just Integers Programming languages support numbers with fraction Called floating-point numbers Examples: 3.14159265 (π) 2.71828 (e) 0.000000001 or 1.0 10 9 (seconds in

More information

There are many other applications like constructing the expression tree from the postorder expression. I leave you with an idea as how to do it.

There are many other applications like constructing the expression tree from the postorder expression. I leave you with an idea as how to do it. Programming, Data Structures and Algorithms Prof. Hema Murthy Department of Computer Science and Engineering Indian Institute of Technology, Madras Lecture 49 Module 09 Other applications: expression tree

More information

Programming, Data Structures and Algorithms Prof. Hema Murthy Department of Computer Science and Engineering Indian Institute of Technology, Madras

Programming, Data Structures and Algorithms Prof. Hema Murthy Department of Computer Science and Engineering Indian Institute of Technology, Madras Programming, Data Structures and Algorithms Prof. Hema Murthy Department of Computer Science and Engineering Indian Institute of Technology, Madras Module 06 Lecture - 46 Stacks: Last in first out Operations:

More information

Roundoff Errors and Computer Arithmetic

Roundoff Errors and Computer Arithmetic Jim Lambers Math 105A Summer Session I 2003-04 Lecture 2 Notes These notes correspond to Section 1.2 in the text. Roundoff Errors and Computer Arithmetic In computing the solution to any mathematical problem,

More information

Design and Analysis of Algorithms Prof. Madhavan Mukund Chennai Mathematical Institute. Module 02 Lecture - 45 Memoization

Design and Analysis of Algorithms Prof. Madhavan Mukund Chennai Mathematical Institute. Module 02 Lecture - 45 Memoization Design and Analysis of Algorithms Prof. Madhavan Mukund Chennai Mathematical Institute Module 02 Lecture - 45 Memoization Let us continue our discussion of inductive definitions. (Refer Slide Time: 00:05)

More information

Truncation Errors. Applied Numerical Methods with MATLAB for Engineers and Scientists, 2nd ed., Steven C. Chapra, McGraw Hill, 2008, Ch. 4.

Truncation Errors. Applied Numerical Methods with MATLAB for Engineers and Scientists, 2nd ed., Steven C. Chapra, McGraw Hill, 2008, Ch. 4. Chapter 4: Roundoff and Truncation Errors Applied Numerical Methods with MATLAB for Engineers and Scientists, 2nd ed., Steven C. Chapra, McGraw Hill, 2008, Ch. 4. 1 Outline Errors Accuracy and Precision

More information

Using Arithmetic of Real Numbers to Explore Limits and Continuity

Using Arithmetic of Real Numbers to Explore Limits and Continuity Using Arithmetic of Real Numbers to Explore Limits and Continuity by Maria Terrell Cornell University Problem Let a =.898989... and b =.000000... (a) Find a + b. (b) Use your ideas about how to add a and

More information

MITOCW watch?v=4dj1oguwtem

MITOCW watch?v=4dj1oguwtem MITOCW watch?v=4dj1oguwtem PROFESSOR: So it's time to examine uncountable sets. And that's what we're going to do in this segment. So Cantor's question was, are all sets the same size? And he gives a definitive

More information

EC121 Mathematical Techniques A Revision Notes

EC121 Mathematical Techniques A Revision Notes EC Mathematical Techniques A Revision Notes EC Mathematical Techniques A Revision Notes Mathematical Techniques A begins with two weeks of intensive revision of basic arithmetic and algebra, to the level

More information

Introduction to Cryptology Dr. Sugata Gangopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Roorkee

Introduction to Cryptology Dr. Sugata Gangopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Roorkee Introduction to Cryptology Dr. Sugata Gangopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Roorkee Lecture 09 Cryptanalysis and its variants, linear attack Welcome

More information

AM205: lecture 2. 1 These have been shifted to MD 323 for the rest of the semester.

AM205: lecture 2. 1 These have been shifted to MD 323 for the rest of the semester. AM205: lecture 2 Luna and Gary will hold a Python tutorial on Wednesday in 60 Oxford Street, Room 330 Assignment 1 will be posted this week Chris will hold office hours on Thursday (1:30pm 3:30pm, Pierce

More information

Floating-Point Data Representation and Manipulation 198:231 Introduction to Computer Organization Lecture 3

Floating-Point Data Representation and Manipulation 198:231 Introduction to Computer Organization Lecture 3 Floating-Point Data Representation and Manipulation 198:231 Introduction to Computer Organization Instructor: Nicole Hynes nicole.hynes@rutgers.edu 1 Fixed Point Numbers Fixed point number: integer part

More information

Rational Expressions Sections

Rational Expressions Sections Rational Expressions Sections Multiplying / Dividing Let s first review how we multiply and divide fractions. Multiplying / Dividing When multiplying/ dividing, do we have to have a common denominator?

More information

Week - 01 Lecture - 04 Downloading and installing Python

Week - 01 Lecture - 04 Downloading and installing Python Programming, Data Structures and Algorithms in Python Prof. Madhavan Mukund Department of Computer Science and Engineering Indian Institute of Technology, Madras Week - 01 Lecture - 04 Downloading and

More information

Numerical computing. How computers store real numbers and the problems that result

Numerical computing. How computers store real numbers and the problems that result Numerical computing How computers store real numbers and the problems that result The scientific method Theory: Mathematical equations provide a description or model Experiment Inference from data Test

More information

Introduction to Programming in C Department of Computer Science and Engineering. Lecture No. #29 Arrays in C

Introduction to Programming in C Department of Computer Science and Engineering. Lecture No. #29 Arrays in C Introduction to Programming in C Department of Computer Science and Engineering Lecture No. #29 Arrays in C (Refer Slide Time: 00:08) This session will learn about arrays in C. Now, what is the word array

More information

Programming, Data Structures and Algorithms Prof. Hema Murthy Department of Computer Science and Engineering Indian Institute Technology, Madras

Programming, Data Structures and Algorithms Prof. Hema Murthy Department of Computer Science and Engineering Indian Institute Technology, Madras Programming, Data Structures and Algorithms Prof. Hema Murthy Department of Computer Science and Engineering Indian Institute Technology, Madras Module 03 Lecture - 26 Example of computing time complexity

More information

9. Elementary Algebraic and Transcendental Scalar Functions

9. Elementary Algebraic and Transcendental Scalar Functions Scalar Functions Summary. Introduction 2. Constants 2a. Numeric Constants 2b. Character Constants 2c. Symbol Constants 2d. Nested Constants 3. Scalar Functions 4. Arithmetic Scalar Functions 5. Operators

More information

(Refer Slide Time: 1:40)

(Refer Slide Time: 1:40) Computer Architecture Prof. Anshul Kumar Department of Computer Science and Engineering, Indian Institute of Technology, Delhi Lecture - 3 Instruction Set Architecture - 1 Today I will start discussion

More information

(Refer Slide Time 01:41 min)

(Refer Slide Time 01:41 min) Programming and Data Structure Dr. P.P.Chakraborty Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Lecture # 03 C Programming - II We shall continue our study of

More information

Unit 1 Integers, Fractions & Order of Operations

Unit 1 Integers, Fractions & Order of Operations Unit 1 Integers, Fractions & Order of Operations In this unit I will learn Date: I have finished this work! I can do this on the test! Operations with positive and negative numbers The order of operations

More information

Machine Computation of the Sine Function

Machine Computation of the Sine Function Machine Computation of the Sine Function Chuck Allison CNS 3320 Numerical Software Engineering Utah Valley State College February 2007 Abstract This note shows how to determine the number of terms to use

More information

Continuing with whatever we saw in the previous lectures, we are going to discuss or continue to discuss the hardwired logic design.

Continuing with whatever we saw in the previous lectures, we are going to discuss or continue to discuss the hardwired logic design. Computer Organization Part I Prof. S. Raman Department of Computer Science & Engineering Indian Institute of Technology Lecture 10 Controller Design: Micro programmed and hard wired (contd) Continuing

More information

Math 7 Notes Unit Three: Applying Rational Numbers

Math 7 Notes Unit Three: Applying Rational Numbers Math 7 Notes Unit Three: Applying Rational Numbers Strategy note to teachers: Typically students need more practice doing computations with fractions. You may want to consider teaching the sections on

More information

3.1 DATA REPRESENTATION (PART C)

3.1 DATA REPRESENTATION (PART C) 3.1 DATA REPRESENTATION (PART C) 3.1.3 REAL NUMBERS AND NORMALISED FLOATING-POINT REPRESENTATION In decimal notation, the number 23.456 can be written as 0.23456 x 10 2. This means that in decimal notation,

More information

Long (or LONGMATH ) floating-point (or integer) variables (length up to 1 million, limited by machine memory, range: approx. ±10 1,000,000.

Long (or LONGMATH ) floating-point (or integer) variables (length up to 1 million, limited by machine memory, range: approx. ±10 1,000,000. QuickCalc User Guide. Number Representation, Assignment, and Conversion Variables Constants Usage Double (or DOUBLE ) floating-point variables (approx. 16 significant digits, range: approx. ±10 308 The

More information

Introduction to Programming in C Department of Computer Science and Engineering. Lecture No. #06 Loops: Operators

Introduction to Programming in C Department of Computer Science and Engineering. Lecture No. #06 Loops: Operators Introduction to Programming in C Department of Computer Science and Engineering Lecture No. #06 Loops: Operators We have seen comparison operators, like less then, equal to, less than or equal. to and

More information

(Refer Slide Time: 00:51)

(Refer Slide Time: 00:51) Programming, Data Structures and Algorithms Prof. Shankar Balachandran Department of Computer Science and Engineering Indian Institute Technology, Madras Module 10 E Lecture 24 Content Example: factorial

More information

Section A Arithmetic ( 5) Exercise A

Section A Arithmetic ( 5) Exercise A Section A Arithmetic In the non-calculator section of the examination there might be times when you need to work with quite awkward numbers quickly and accurately. In particular you must be very familiar

More information