Lab 5 Monte Carlo integration

Edvin Listo Zec (edvinli@student.chalmers.se)
October 20, 2014
Co-worker: Jessica Fredby

Introduction

In this computer assignment we discuss a technique for solving difficult definite integrals, called Monte Carlo integration. The method performs numerical integration with the use of random numbers. In this assignment we will try to estimate π using this method, and we will also look at how the technique can be used to study the expected shortfall in a given scenario. Further, we will take a look at different variance reduction methods and try to minimise the variance when applying Monte Carlo integration.

1 Assignment 1

In this assignment we are going to calculate π numerically by Monte Carlo integration of the integral

    I(f) = \int_0^1 \frac{4}{1+x^2} \, dx = \pi.

We're also going to perform an error estimate. Further, we're going to implement four different variance reduction techniques and calculate the integral using those.

Assignment 1.1

Problem

The task in this assignment is to use ordinary Monte Carlo integration to approximate the integral I(f) numerically. We should do this for several sample sizes N = 10^5, 10^6, 10^7, .... We should further perform an error estimate, as if the true value of π were unknown, and compare it to the actual error (calculated with the true value of π).

Theory and implementation

Monte Carlo integration is a non-deterministic technique used when one wants to calculate difficult integrals numerically. It uses random numbers to calculate the integral: the integrand is evaluated at randomly chosen points, and the method is often used for higher-dimensional integrals. Consider the multidimensional integral

    I = \int_{[0,1]^n} f(x) \, dx = \int_{x_1=0}^{x_1=1} \cdots \int_{x_n=0}^{x_n=1} f(x_1, \ldots, x_n) \, dx_1 \cdots dx_n

of a function f over the unit hypercube [0,1]^n = [0,1] \times \cdots \times [0,1] in R^n. This can be interpreted as the expectation E[f(X)] of the random variable f(X), where X is an R^n-valued random variable with uniform distribution over [0,1]^n. This means that X_1, ..., X_n are independent and identically uniformly distributed over [0,1], which is the same as X_1, ..., X_n being random numbers. The Monte Carlo approximation of I(f) is then

    S_N = V \frac{1}{N} \sum_{i=1}^{N} f(x_i),

where V is the volume of the integration region (in this case V = 1), N is the sample size and {x_i}_{i=1}^N are independent observations of X. The law of large numbers guarantees that

    \lim_{N \to \infty} S_N = I.

Now that we have the approximation of I from the sum, it is natural to estimate the error of S_N. From the central limit theorem we get that the sample mean of a random variable with expected value µ and variance σ² is approximately normally distributed, N(µ, σ²/N). This means that the error can be estimated by applying the Monte Carlo method to σ² = \int (f(x) - I(f))² dx:

    \sigma^2 = \int (f(x) - I(f))^2 \, dx \approx \frac{1}{N} \sum_{i=1}^{N} (f(x_i) - S_N)^2 = \frac{1}{N} \sum_{i=1}^{N} f(x_i)^2 - S_N^2 = \hat{\sigma}^2.

This gives us

    \mathrm{Var}[S_N] = \frac{V^2}{N^2} \sum_{i=1}^{N} \mathrm{Var}[f(x_i)] = \frac{V^2 \hat{\sigma}^2}{N},

so the estimate of the error of S_N is

    \delta S_N \approx \sqrt{\mathrm{Var}[S_N]} = \frac{V \hat{\sigma}}{\sqrt{N}}.

This means that the error decreases as 1/\sqrt{N}, independently of the number of dimensions of the integral. This is a big advantage of Monte Carlo integration over many deterministic methods, whose cost grows exponentially with the dimension. However, while plain Monte Carlo works for simple examples, it is not always adequate in other cases. That is why we will look into four different ways of improving the error (importance sampling, control variates, antithetic variates and stratified sampling).

We implemented this Monte Carlo integration in C by writing a function that takes the sample-size exponents as arguments (i.e. 6 and 8 if you want to sample from N = 10^6 to 10^8). This was done with two for loops: the outer loop controls the sample size N, and the inner loop samples N uniformly distributed random numbers x_i using drand48. For each number we calculate the function value 4/(1 + x_i^2) and add it to a running sum; we also accumulate the error term by summing the squared function values. In the outer loop we divide the total by N and calculate the error by:

    Error = sqrt(Errorterm/n - pow(S_n,2))/sqrt(n);

We also calculate the actual error as the difference between S_N and C's own value of π, M_PI.
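As a condensed illustration of the estimator and error formula above, a minimal stand-alone sketch of the method might look as follows (the full program, with the loop over several sample sizes, is listed in Appendix A.1; the fixed sample size and seed here are our own choices):

    /* Minimal sketch: plain Monte Carlo estimate of pi = int_0^1 4/(1+x^2) dx,
       with the error estimate V*sigma_hat/sqrt(N) derived above. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void){
        long N = 1000000;          /* sample size */
        double sum = 0, sum2 = 0;  /* running sums of f and f^2 */
        srand48(42);               /* fixed seed, for reproducibility */
        for(long i = 0; i < N; i++){
            double x = drand48();        /* x ~ Uniform(0,1) */
            double f = 4.0/(1.0 + x*x);  /* integrand */
            sum  += f;
            sum2 += f*f;
        }
        double S_N = sum/N;                          /* estimate of pi */
        double err = sqrt(sum2/N - S_N*S_N)/sqrt(N); /* estimated error */
        printf("pi ~ %.6f, estimated error %.6f\n", S_N, err);
        return 0;
    }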

Result and discussion

The results are seen in table 1.1. As expected, the approximation of π becomes more accurate each time we use a larger N, and naturally both the estimated error and the actual error decrease. We see that the estimated error is almost the same as the actual error.

Table 1.1: The approximation S_N using ordinary Monte Carlo integration and the error for different N.

    N    S_N    Error    Actual error

Assignment 1.2

Problem

The task in this assignment is to re-calculate the integral by applying four different variance reduction techniques and checking whether we get more accurate values of π, in other words whether the variances/errors are reduced.

Theory and implementation

We want to find estimators with small variance, since the variance of S_N governs the performance of the estimator. A way to achieve this is to apply variance reduction techniques, the idea being to transform the original observations by a transformation that keeps the expected value unchanged but reduces the variance of the estimator.

We begin with a technique called importance sampling. The idea behind importance sampling is to choose the distribution from which we simulate the random variables. The trick is to choose a distribution such that the density of the sampling points is close to the shape of the integrand; in other words, we choose a distribution that over-weights the important region (hence the name importance sampling). The uniform distribution is thus replaced by this other distribution. Let p(x) be a probability density function of a random variable X that only takes values in Ω. We then have

    I = \int_\Omega f(x) \, dx = \int_\Omega \frac{f(x)}{p(x)} p(x) \, dx = E_p\left[\frac{f(X)}{p(X)}\right],

as long as p(x) ≠ 0 for every x ∈ Ω for which f(x) ≠ 0. Here E_p denotes the expectation with respect to the density p. Instead of using the uniform distribution to generate random observations we can use p, and thus approximate the integral I with

    S_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}.

This Monte Carlo approximation yields the error σ(f/p)/\sqrt{N} as before, where σ²(f/p) is estimated as

    \hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} \left(\frac{f(x_i)}{p(x_i)}\right)^2 - S_N^2.

We note that if p is similar to f, the ratio f/p will be close to constant, which implies that the variance will be small.

The importance sampling was implemented in C by using the density function p(x) = (1/3)(4 - 2x) for x ∈ [0,1] and zero elsewhere. Since p is non-negative and its integral from 0 to 1 equals 1, it is a density function. Also, it behaves similarly to f(x), which is a desired trait, see figure 1.1. We calculated the approximation in the same fashion as in assignment 1.1, except that we now evaluate f and p at the random number x_i and add f/p in each pass through the inner loop. The random number x_i comes from the inverse CDF:

    p(x) = \frac{1}{3}(4 - 2x),
    P(x) = \int_0^x p(t) \, dt = \frac{1}{3}(4x - x^2).

Solving y = P(x) gives x = 2 ± \sqrt{4 - 3y}, and since x ∈ [0,1] the inverse function becomes

    P^{-1}(y) = 2 - \sqrt{4 - 3y}.

The error was calculated in the same way as before.

Figure 1.1: Plot of f(x) and a scaled version of the density function p(x).
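A small sketch of the inverse-transform step just derived (the helper name is ours; in the full program in Appendix A.1 this logic sits inline in the importance-sampling loop, and <math.h>/<stdlib.h> are assumed to be included):

    /* One importance-sampling draw from p(x) = (4 - 2x)/3 on [0,1] via
       inverse-transform sampling; accumulates f/p and (f/p)^2. */
    static void is_sample(double *sum, double *sum2){
        double u = drand48();                /* u ~ Uniform(0,1) */
        double x = 2.0 - sqrt(4.0 - 3.0*u);  /* x = P^{-1}(u), so x ~ p */
        double f = 4.0/(1.0 + x*x);          /* integrand */
        double p = (4.0 - 2.0*x)/3.0;        /* density at x */
        *sum  += f/p;                        /* importance-sampling sum */
        *sum2 += (f/p)*(f/p);                /* for the error estimate */
    }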

The second technique is control variates. This method is popular because it is effective at reducing the variance. The idea is to introduce a new function g, called a control variate, which is similar to f and whose integral I(g) is known. By adding and subtracting the same quantity we get

    I = \int f(x) \, dx = \int (f(x) - g(x)) \, dx + \int g(x) \, dx = \int (f(x) - g(x)) \, dx + I(g).

Since f and g are similar, the variance of f - g should be smaller than the variance of f. We then get the following approximation of I(f) by ordinary Monte Carlo integration:

    S_N = \frac{1}{N} \sum_{i=1}^{N} (f(x_i) - g(x_i)) + I(g).

Note that the variance now comes directly from the difference f - g; since f and g are similar, this variance will be small. The method of control variates was implemented in C by choosing g(x) = 4 - 2x for x ∈ [0,1] and zero elsewhere. We see that I(g) = 3, and we did the usual Monte Carlo integration on f(x) - g(x).

The third technique is called antithetic variates. This method relies on choosing pairs of observations (Y_1, Y_2) whose correlation is negative, which reduces the variance. If we want to estimate θ = E[Y] and have two generated samples Y_1, Y_2, the unbiased estimate of θ is

    \hat{\theta} = \frac{\hat{\theta}_1 + \hat{\theta}_2}{2},

and the variance is

    \mathrm{Var}[\hat{\theta}] = \frac{\mathrm{Var}[Y_1] + \mathrm{Var}[Y_2] + 2\,\mathrm{Cov}[Y_1, Y_2]}{4}.

If Y_1, Y_2 are i.i.d. we get Var[Y_1] = Var[Y_2], which gives Var[θ̂] = Var[Y_1]/2 = Var[Y_2]/2. Using the antithetic variates technique means choosing the second sample so that Y_1, Y_2 are no longer independent and Cov[Y_1, Y_2] is negative; this reduces the variance Var[θ̂]. We implemented the antithetic variates method in C using ordinary Monte Carlo integration, with the same implementation as before, but this time on both f(x_i) and f(1 - x_i):

    S_N = \frac{1}{2N} \sum_{i=1}^{N} f(x_i) + \frac{1}{2N} \sum_{i=1}^{N} f(1 - x_i).

The variance is thus estimated in the same way as before too:

    \hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} \left(\frac{f(x_i) + f(1 - x_i)}{2}\right)^2 - S_N^2.
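A condensed sketch of the two per-sample updates just described (the function name and accumulator layout are ours; the full versions are calcpi3 and calcpi4 in Appendix A.1, and <stdlib.h> is assumed for drand48):

    /* Control-variate and antithetic estimates of pi from the same draws. */
    static void sketch_cv_av(long N, double *pi_cv, double *pi_av){
        double sum_cv = 0, sum_av = 0;
        for(long i = 0; i < N; i++){
            double x  = drand48();
            double f  = 4.0/(1.0 + x*x);              /* f(x) */
            double fa = 4.0/(1.0 + (1.0-x)*(1.0-x));  /* f(1-x) */
            double g  = 4.0 - 2.0*x;                  /* control variate */
            sum_cv += f - g;                          /* f - g */
            sum_av += 0.5*(f + fa);                   /* antithetic average */
        }
        *pi_cv = sum_cv/N + 3.0; /* add back I(g) = 3 */
        *pi_av = sum_av/N;
    }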

The fourth and final technique we'll mention is the method of stratified sampling. The idea behind this method is to divide the region of integration into smaller parts and apply the usual Monte Carlo integration to each part, possibly with different sample sizes in different parts. This means that if we have the region of integration Ω = [0,1]^n, we divide it into k regions Ω_1, ..., Ω_k. A region Ω_j has volume V_j, and in each region we use a sample of N_j observations {x_{ij}}_{i=1}^{N_j} of a random variable X_j. Here X_j has a uniform distribution over Ω_j, and this leads to the Monte Carlo approximation

    S_N = \sum_{j=1}^{k} \frac{V_j}{N_j} \sum_{i=1}^{N_j} f(x_{ij}).

The error of this approximation is

    \delta S_N \approx \sqrt{\sum_{j=1}^{k} \frac{V_j^2 \sigma_j^2(f)}{N_j}},

where the variance σ_j² over the region Ω_j is

    \sigma_j^2(f) = \frac{1}{V_j} \int_{\Omega_j} f(x)^2 \, dx - \left(\frac{1}{V_j} \int_{\Omega_j} f(x) \, dx\right)^2.

Here the variance σ_j² over Ω_j is also calculated using Monte Carlo integration. We implemented this in C as before, but with a slight modification: we added a loop running from 1 to k, where k is the number of parts into which we divided the integration region. When we then sampled the uniformly distributed random number, we had to take into account that the integration region is divided into k parts. We did this with the following line of code:

    x_i = drand48()/(double)k + (double)(j-1)/(double)k; // all intervals have the same size, since we just divide the region by k

Then we calculated f(x_i) as before, the difference being that we did it in each sub-region. This means that V_j = 1/k and N_j = N/k. However, one could choose N_j ∝ V_j σ_j if one wants the stratified sampling to perform optimally.
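Putting these pieces together, a sketch of the stratified loop might look as follows (function name ours; the full version is calcpi5 in Appendix A.1, with <math.h>/<stdlib.h> assumed):

    /* Stratified sampling of [0,1] split into k equal strata:
       V_j = 1/k, N_j = N/k; returns S_N and sets the combined error. */
    static double sketch_stratified(long N, int k, double *error){
        double S = 0, err2 = 0;
        for(int j = 1; j <= k; j++){
            long Nj = N/k;
            double sum = 0, sum2 = 0;
            for(long i = 0; i < Nj; i++){
                /* uniform draw inside stratum j: [(j-1)/k, j/k) */
                double x = drand48()/k + (double)(j-1)/k;
                double f = 4.0/(1.0 + x*x);
                sum  += f;
                sum2 += f*f;
            }
            double mean = sum/Nj;
            double var  = sum2/Nj - mean*mean; /* sigma_j^2 estimate */
            S    += mean/k;                    /* V_j * (sum/N_j) */
            err2 += (var/Nj)/((double)k*k);    /* V_j^2 sigma_j^2 / N_j */
        }
        *error = sqrt(err2);
        return S;
    }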

Result and discussion

In tables 1.2, 1.3, 1.4 and 1.5 we see the results after using the four different variance reduction methods. All methods seem to be effective, since the error is smaller in all cases, meaning that the estimate of π is better than with ordinary Monte Carlo integration. With respect to the theory above, this is what is to be expected. The reason we get better results is that the four variance reduction techniques transform our observations X = x_1, ..., x_n by different transformations that keep the expectation E[X] unchanged while reducing the variance Var[X]; the net result is that each Monte Carlo simulation becomes more efficient.

Table 1.2: The approximation S_N and the error for different N using importance sampling.

    N    S_N    Error    Actual error

Table 1.3: The approximation S_N and the error for different N using control variates.

    N    S_N    Error    Actual error

Table 1.4: The approximation S_N and the error for different N using antithetic variates.

    N    S_N    Error    Actual error

Table 1.5: The approximation S_N and the error for different N using stratified sampling.

    N    S_N    Error    Actual error

2 Assignment 2

Assignment 2.1

Problem

The task of this assignment is to compute estimates of

    I(f) = \int_0^1 \int_0^1 f(x, y) \, dx \, dy

by the Monte Carlo method, where f(x, y) = |sin(xy)/(x - 1/2)| for all (x, y) ∈ [0,1]^2 and zero elsewhere.

Theory and implementation

We computed the integral using ordinary Monte Carlo integration as in assignment 1.1. The implementation in C is the same, the differences being the function and that we must generate two uniformly distributed random numbers x_i, y_i using drand48.
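The per-sample evaluation, as a small sketch (hypothetical helper name; Appendix A.2 contains the full program and its includes):

    /* One sample of the two-dimensional integrand f(x,y) = |sin(xy)/(x - 1/2)|. */
    static double sample_f2d(void){
        double x = drand48();             /* x ~ Uniform(0,1) */
        double y = drand48();             /* y ~ Uniform(0,1) */
        return fabs(sin(x*y)/(x - 0.5));  /* blows up near x = 1/2 */
    }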

9 Figure.1: Plot of f(x, y) = sin(xy) x 1. Table.1: The approximation S N f(x, y). and the error for different N using Monte Carlo integration on N S N Error Assignment 3 Assignment 3.1 Problem The task in this assignment is to study the expected shortfall E[S X (u)], which is a measure of the worst case scenario. We should use Monte Carlo integration to estimate the expected shortfall of a given scenario. 8

Theory and implementation

The mathematical definition of the expected shortfall is the expectation of a loss random variable X, given that the loss is greater than a certain threshold u:

    E[S_X(u)] = E[X | X > u].

The given scenario is that an insurance company has found that the probability of a flooding is p = 0.1. If a flood occurs, the loss is exponentially distributed with parameter λ = 1/3.4, i.e. with mean 1/λ = 3.4. This can be simulated by taking the loss X = YZ, where Y ~ Bernoulli(p) and Z ~ exp(λ), independent of Y. Also, we let u = 10. We circumvent the Bernoulli distribution by using the fact that Y is 1 with probability p = 0.1 and 0 with probability 1 - p = 0.9, meaning that X = Z with probability p = 0.1 and zero otherwise. We thus get

    E[Z | Z > u] \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i) = S_N, where f(x_i) = x_i if x_i > u, and 0 otherwise.

The estimate of the error is calculated as before:

    \delta S_N \approx \sqrt{\mathrm{Var}[S_N]} = \frac{V \hat{\sigma}}{\sqrt{N}}.

The implementation in C was done by creating a function that takes λ and a uniformly distributed random number in [0,1] as arguments and returns an exp(λ)-distributed random number. This is done by inverse transform sampling: if we have a uniformly distributed random number u ∈ [0,1], we get an exponentially distributed random number by taking the inverse of the CDF of the exponential distribution, i.e.

    F^{-1}(u) = -\frac{\ln(1 - u)}{\lambda}.

Since u is uniformly distributed on [0,1], the same goes for 1 - u, meaning we can use y = -ln(u)/λ to get an exponentially distributed random number. Once we have an exp(λ)-distributed random number, we use an if statement that only lets the random number z_i pass if it is greater than 10 and if pr > p, where pr is a uniformly distributed random number in [0,1] and p is the probability 0.1. Inside this if statement we then calculate the approximation S_N with ordinary Monte Carlo integration, and the error term is calculated the same way as usual. A small difference is that we also count, in a variable m, how many times the if statement is entered, and then divide S_N by m instead of N (the same goes for the error).
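A stripped-down sketch of this estimator (our own simplification: the Bernoulli filter on pr is omitted here, since conditioning on Z > u does not involve it; the report's full version is expshort in Appendix A.3, and <math.h>/<stdlib.h> are assumed):

    /* Estimate E[Z | Z > u] for Z ~ exp(lambda) by filtering
       inverse-transform draws; assumes N large enough that m > 0. */
    static double sketch_shortfall(long N, double lambda, double u_thr){
        double sum = 0;
        long m = 0;                            /* accepted samples */
        for(long i = 0; i < N; i++){
            double z = -log(drand48())/lambda; /* z ~ exp(lambda) */
            if(z > u_thr){                     /* loss exceeds the threshold */
                sum += z;
                m++;
            }
        }
        return sum/m;                          /* tends to u + 1/lambda */
    }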

Result and discussion

We see that for larger N the value converges towards 13.4, which is not that surprising since u = 10 and the mean of the exponentially distributed random variable is 3.4.

Table 3.1: The approximation of the expected shortfall S_N and the error for different N using ordinary Monte Carlo integration.

    N    S_N    Error

Assignment 3.2

Problem

The task in this assignment is to choose a variance reduction method and to recalculate the integral with it, using the same sample sizes N.

Theory and implementation

We choose the antithetic variates method, for the reason that our function is monotone and the technique can therefore be applied. For example, the exponentially distributed random variables X_1 = -ln(U)/λ, with U uniformly distributed on [0,1], and X_2 = -ln(1 - U)/λ will be negatively correlated, due to the fact that if U increases then 1 - U decreases and vice versa. More generally, we can state that for any distribution F(x) with inverse F^{-1}(y) we can generate a negatively correlated pair

    X_1 = F^{-1}(U), X_2 = F^{-1}(1 - U).

This is because F^{-1}(y) is a monotone function, and the monotonicity preserves the negative correlation.

Figure 3.1: Plot of F^{-1}(y) = -ln(y)/λ.
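The pair construction, as a short sketch (helper name ours; the report's full antithetic routine is anti in Appendix A.3):

    /* An antithetic pair of exponential draws from a single uniform;
       X1 and X2 are negatively correlated. */
    static void exp_pair(double lambda, double *x1, double *x2){
        double u = drand48();        /* U ~ Uniform(0,1) */
        *x1 = -log(u)/lambda;        /* F^{-1}(U)   */
        *x2 = -log(1.0 - u)/lambda;  /* F^{-1}(1-U) */
    }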

Result and discussion

The value still converges towards 13.4 when N gets large, which is expected. The antithetic variates technique does indeed reduce the variance, as seen in table 3.2, which is a result we are content with.

Table 3.2: The approximation of the expected shortfall S_N and the error for different N using antithetic variates.

    N    S_N    Error

Appendix A - C code

A.1 Assignment 1

// Assignment 1
#include <math.h>   // for math functions
#include <stdio.h>
#include <stdlib.h>
#include <time.h>   // for timing

// Ordinary Monte Carlo
void calcpi(int start, int end){
    int n, ex;
    double Errorterm, y, x_i, S_n, Error;
    srand48(time(NULL));
    for(ex=start; ex<=end; ex=ex+1){ // loop over sample sizes between 10^start and 10^end
        S_n=0; Errorterm=0;
        for(n=1; n<=pow(10,ex); n=n+1){ // loop over the sample size
            x_i=drand48();           // generate random numbers from Uniform(0,1)
            y=4/(1+(x_i)*(x_i));     // calculate the function value
            S_n=S_n+y;               // sum the function values
            Errorterm=Errorterm+y*y; // estimate the sum in the error expression
        }
        S_n = S_n/n; // total S_n
        Error = sqrt(Errorterm/n - pow(S_n,2))/sqrt(n); // total error

        printf("Power: %d\n", ex);
        printf("pi: %3.10f\n", S_n);
        printf("error : %3.10f\n", Error);
        printf("actual: %3.10f\n", M_PI - S_n);
        printf("--------------------\n");
    }
}

// importance sampling
void calcpi2(int start, int end){
    int n, ex;
    double y, x_i, S_n, var, p, u;
    srand48(time(NULL));
    for(ex=start; ex<=end; ex=ex+1){
        S_n=0; var=0;
        for(n=1; n<=pow(10,ex); n=n+1){
            u=drand48();
            x_i=2-sqrt(4-3*u);   // random variable generated from the distribution with pdf p
            y=4/(1+(x_i)*(x_i)); // function value
            p=(4-2*x_i)/3;       // pdf for the random variable
            S_n=S_n+y/p;         // sum the function values
            var=var+pow(y/p,2);  // sum in variance expression
        }
        S_n = S_n/n;          // total S_n
        var=var/n-pow(S_n,2); // total variance

        printf("Power: %d\n", ex);
        printf("pi: %3.10f\n", S_n);
        printf("error : %3.10f\n", sqrt(var/n)); // estimate error
        printf("actual: %3.10f\n", M_PI - S_n);
        printf("--------------------\n");
    }
}

// control variates
void calcpi3(int start, int end){
    int n, ex;
    double y, x_i, S_n, var, g;
    int I_g=3;
    srand48(time(NULL));
    for(ex=start; ex<=end; ex=ex+1){
        S_n=0; var=0;
        for(n=1; n<=pow(10,ex); n=n+1){
            x_i=drand48();
            y=4/(1+(x_i)*(x_i));    // function value
            g=(4-2*x_i);            // the value of the other function g
            S_n=S_n+y-g;            // sum the function values
            var=var+pow(y-g+I_g,2); // sum in variance expression
        }
        S_n = S_n/n + I_g;    // total S_n
        var=var/n-pow(S_n,2); // total variance

        printf("Power: %d\n", ex);
        printf("pi: %3.10f\n", S_n);
        printf("error : %3.10f\n", sqrt(var/n)); // estimate error
        printf("actual: %3.10f\n", M_PI - S_n);
        printf("--------------------\n");
    }
}

// antithetic variates
void calcpi4(int start, int end){
    int n, ex;
    double y, y2, x_i, S_n, S_n2, var;
    srand48(time(NULL));
    for(ex=start; ex<=end; ex=ex+1){
        S_n=0; S_n2=0;
        var=0;
        for(n=1; n<=pow(10,ex); n=n+1){
            x_i=drand48();
            y=4/(1+(x_i)*(x_i));      // function value for f(x)
            y2=4/(1+(1-x_i)*(1-x_i)); // function value for f(1-x)
            S_n=S_n+y/2;              // sum over f(x)
            S_n2=S_n2+y2/2;           // sum over f(1-x)
            var=var+pow(y/2+y2/2,2);  // sum in variance expression
        }
        S_n = S_n/n + S_n2/n; // total S_n
        var=var/n-pow(S_n,2); // total variance

        printf("Power: %d\n", ex);
        printf("pi: %3.10f\n", S_n);
        printf("error : %3.10f\n", sqrt(var/n)); // estimate error
        printf("actual: %3.10f\n", M_PI - S_n);
        printf("--------------------\n");
    }
}

// stratified sampling
void calcpi5(int start, int end){
    int n, j, ex, M=1;
    double y, x_i, S_n, S_tot, var, M_j, n_j;
    int k=4; // number of domains
    double error, sum1, sum2, sigma2;
    srand48(time(NULL));
    for(ex=start; ex<=end; ex=ex+1){ // loop over sample sizes between 10^start and 10^end
        S_tot=0;
        var=0; error=0;
        M_j=(double)M/(double)k;  // length of each domain
        n_j=pow(10,ex)/(double)k; // sample size in each domain
        for(j=1; j<=k; j=j+1){ // loop over all domains
            S_n=0; sum1=0; sum2=0; sigma2=0;
            for(n=1; n<=n_j; n=n+1){ // loop over the sample size for each domain
                x_i=drand48()/(double)k + (double)(j-1)/(double)k; // generate uniformly distributed random numbers from domain M_j
                y=4/(1+(x_i)*(x_i)); // the function value
                S_n=S_n+y;
                sum1=sum1+pow(y,2); // first sum in variance expression
                sum2=S_n;           // second sum in variance expression
            }
            S_tot=S_tot+M_j*(S_n/n_j); // total S_n
            sum1=sum1/(n_j);
            sum2=sum2/(n_j);
            sigma2=(sum1-pow(sum2,2));           // calculate variance
            error=error+pow(M_j,2)*(sigma2/n_j); // calculate error
        }
        error=sqrt(error);

        printf("Power: %d \n", ex);
        printf("pi: %3.10f\n", S_tot);
        printf("error : %3.10f\n", error);
        printf("actual: %3.10f\n", M_PI - S_tot);
        printf("--------------------\n");
    }
}

int main(){
    // Conduct Monte Carlo simulations
    printf("-------- Ordinary Monte Carlo --------\n");
    calcpi(4,8);
    printf("-------- Importance sampling --------\n");
    calcpi2(4,8);
    printf("-------- Control variates --------\n");
    calcpi3(4,8);
    printf("-------- Antithetic variates --------\n");
    calcpi4(4,8);
    printf("-------- Stratified sampling --------\n");
    calcpi5(4,8);
    return 0;
}

A.2 Assignment 2

// Assignment 2
#include <math.h>   // for math functions
#include <stdio.h>
#include <stdlib.h>
#include <time.h>   // for timing

void montcarl(int start, int end){
    int n, ex;
    double Errorterm, f, y_i, x_i, S_n, Error;

    srand48(time(NULL));
    for(ex=start; ex<=end; ex=ex+1){ // loop over sample sizes between 10^start and 10^end
        S_n=0; Errorterm=0;
        for(n=1; n<=pow(10,ex); n=n+1){ // loop over the sample size
            x_i=drand48();
            y_i=drand48();
            f=fabs(sin(x_i*y_i)/(x_i-0.5)); // calculate the function value
            S_n=S_n+f;                      // sum the function values
            Errorterm=Errorterm+pow(f,2);   // sum in error expression
        }
        S_n = S_n/n; // final S_n
        Error=sqrt(Errorterm/n - S_n*S_n)/sqrt(n); // total estimated error

        printf("Power: %d\n", ex);
        printf("function: %f\n", S_n);
        printf("error : %f\n", Error);
        printf("--------------------\n");
    }
}

int main(){
    montcarl(4,8);
    return 0;
}

A.3 Assignment 3

// Assignment 3
#include <math.h>   // for math functions
#include <stdio.h>
#include <stdlib.h>
#include <time.h>   // for timing

double normrnd(double mu, double sigma){
    double chi, eta;
    chi=drand48();
    eta=drand48();
    // M_PI is a constant in math.h
    return mu+sigma*sqrt(-2*log(chi))*cos(2*M_PI*eta);
}

double exprnd(double lambda){
    double u;
    u = drand48();
    return (-log(u)/lambda);
}

double exprnd2(double lambda, double u){
    return (-log(u)/lambda);
}

void expshort(int start, int end){
    int n, ex, m;
    double Errorterm, z_i, f1, f2, S_n, S_n1, S_n2, S_tot, Error, pr;
    double lambda = 1/3.4; double p=0.1;
    srand48(time(NULL));
    for(ex=start; ex<=end; ex++){
        Errorterm=0; m=0; S_n=0;
        for(n=1; n<=pow(10,ex); n++){
            z_i=exprnd(lambda);
            //printf("z: %f\n", z_i);
            pr=drand48();
            if(pr>=p && z_i>10){
                //f2=lambda*exp(-lambda*z_i);
                //f1=z_i*lambda*exp(-lambda*z_i);
                //printf("f : %f\n", f2);
                S_n += z_i;
                //S_n1 = S_n1+f1; // should it be 1-f here?
                //S_n2 = S_n2+f2;
                //printf("sn1: %f\n", S_n1);
                //printf("sn2: %f\n", S_n2);
                Errorterm += z_i*z_i;
                m++; // this is our true sample size
            }
        }
        S_n = S_n/m;
        //S_n1 = S_n1/(0.9*n);
        //S_n2 = S_n2/(n);
        //S_tot = S_tot + S_n1/S_n2;

        printf("Power: %d\n", ex);
        Error = sqrt(Errorterm/m - pow(S_n,2))/sqrt(m);
        printf("Exp.sh.: %f\n", S_n);
        printf("error : %f\n", Error);
        printf("--------------------\n");
    }
}

// antithetic
void anti(int start, int end){
    int n, ex; double m, m2;
    double Errorterm, z_i, f1, f2, S_n, S_n1, S_n2, S_tot, Error, pr, d, z_i2, var, y, y2, m3;
    double lambda = 1/3.4; double p=0.1;
    srand48(time(NULL));
    for(ex=start; ex<=end; ex++){
        Errorterm=0; m=0; S_n1=0; S_n2=0;
        S_tot=0; m2=0; m3=0; var=0;

        for(n=1; n<=pow(10,ex); n++){
            pr=drand48();
            d=drand48();
            z_i=exprnd2(lambda, d);
            z_i2=exprnd2(lambda, 1-d);

            y=0; y2=0;
            //printf("z: %f\n", z_i);

            if(pr>=p && z_i>10){
                S_n1 += z_i/2;
                y = z_i/2;
                m++; // this is our true sample size
            }
            if(pr>=p && z_i2>10){
                S_n2 += z_i2/2;
                y2 = z_i2/2;
                m2++;
            }
            var += pow(z_i/2+z_i2/2,2);
        }
        S_n1 = S_n1/m;
        S_n2 = S_n2/m2;
        S_tot = S_n1 + S_n2;
        var=var/((m+m2)/2)-pow(S_tot,2); // what should n be here???
        printf("Power: %d\n", ex);
        printf("Exp.sh.: %f\n", S_tot);
        printf("error : %3.10f\n", sqrt(var/((m+m2)/2)));
        printf("--------------------\n");
    }
}

int main(){
    //expshort(4,8);
    anti(4,8);
    return 0;
}

/*
 ass1
 regarding importance sampling: how do we replace the uniform distribution with the p(x)
 distribution? or is it also uniform since x is in [0,1]?

 ass3
 where should we multiply with 0.1?
 should we have f or 1-f?

 general
 about n=10^6, why is this one worse?
*/
