STAT 725 Notes: Monte Carlo Integration


Two major classes of numerical problems arise in statistical inference: optimization and integration. We have already spent some time discussing different optimization techniques, but there are plenty more! Typically, optimization is associated with the likelihood approach to inference and integration with the Bayesian approach, but this is not a strict rule. Whether your problem involves a frequentist approach or a Bayesian approach, Monte Carlo integration can be very useful. Generally speaking, Monte Carlo integration refers to the process of approximating the integral of interest through the random generation of data. We will discuss classical Monte Carlo integration and importance sampling, but certainly there are other approaches.

Classical Monte Carlo Integration:

Consider the generic problem of evaluating the integral

    E_f[h(X)] = \int_S h(x) f(x) \, dx.    (1)

We assume that X is a random variable with pdf f and support S. It is natural to propose using a sample X_1, ..., X_n generated from the density f to approximate (1) by the empirical average (sample mean)

    \bar{h}_n = \frac{1}{n} \sum_{i=1}^n h(X_i),    (2)

since \bar{h}_n converges almost surely to E_f[h(X)] by the Strong Law of Large Numbers. (This method is sometimes referred to as the Monte Carlo method.) Moreover, when h^2 has a finite expectation under f, i.e. \int_S h^2(x) f(x) \, dx < \infty, the speed of convergence of \bar{h}_n can be assessed, since the variance

    Var(\bar{h}_n) = \frac{1}{n} \int_S \{ h(x) - E_f[h(X)] \}^2 f(x) \, dx    (3)

can also be estimated from the sample X_1, ..., X_n through

    v_n = \frac{1}{n^2} \sum_{i=1}^n [h(X_i) - \bar{h}_n]^2.    (4)

For large n,

    \frac{\bar{h}_n - E_f[h(X)]}{\sqrt{v_n}}    (5)

is therefore approximately distributed as a N(0, 1) random variable, and this leads to the construction of confidence bounds on the approximation of E_f[h(X)]:

    \bar{h}_n \pm 1.96 \sqrt{v_n}.    (6)
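As a concrete illustration, here is a minimal R sketch of (2), (4), and (6). It is a generic helper, not part of the original notes; the test function h(x) = exp(-x^2) and the choice n = 10000 are illustrative assumptions.

mc.integrate <- function(h, rgen, n){
  #
  # Classical Monte Carlo integration of E_f[h(X)]:
  # draw X_1, ..., X_n from f via rgen, form the empirical
  # average (2), and attach the 95% confidence bounds (6).
  #
  x <- rgen(n)
  hx <- h(x)
  h.bar <- mean(hx)                   # empirical average (2)
  v.n <- sum((hx - h.bar)^2)/n^2      # variance estimate (4)
  moe <- 1.96*sqrt(v.n)               # half-width of the interval (6)
  c(estimate = h.bar, lower = h.bar - moe, upper = h.bar + moe)
}

# Example: E[exp(-X^2)] with X ~ N(0,1); the exact value is 1/sqrt(3).
mc.integrate(function(x) exp(-x^2), rnorm, 10000)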

Example 1: A Bayesian application. In the classical (frequentist) approach to estimation, the parameter θ is thought of as an unknown, but fixed, quantity. A random sample X_1, ..., X_n is taken from a population indexed by θ and, based on the observed sample values, knowledge about θ is obtained. In the Bayesian approach, θ is considered to be a random quantity whose variation can be described by a probability distribution, called the prior distribution. The prior distribution is a subjective distribution, formulated before the data are seen. A sample is taken from a population indexed by θ, and the prior distribution is updated with the sample information. This updated distribution is known as the posterior distribution. The updating is done via Bayes' rule. Denote the prior distribution by π(θ) and the sample distribution by f(x|θ). Then the posterior distribution, the conditional distribution of θ given the sample x, is

    π(θ|x) = \frac{π(θ) f(x|θ)}{m(x)},    (7)

where m(x) is the marginal distribution of X,

    m(x) = \int π(θ) f(x|θ) \, dθ.    (8)

The posterior distribution is now used to estimate θ. In particular, the Bayes estimate of θ is the mean of the posterior distribution, i.e.

    E(θ|x) = \int θ \, π(θ|x) \, dθ    (9)
           = \frac{\int θ \, π(θ) f(x|θ) \, dθ}{\int π(θ) f(x|θ) \, dθ}.    (10)

Suppose that X|θ ~ N(θ, 1) and θ ~ N(0, 1). Then

    f(x|θ) = \frac{1}{\sqrt{2π}} e^{-(x-θ)^2/2},    (11)

    π(θ) = \frac{1}{\sqrt{2π}} e^{-θ^2/2}.    (12)

It is then relatively easy to show the following:

    m(x) = \frac{1}{2\sqrt{π}} e^{-x^2/4},    (13)

    π(θ|x) = \frac{1}{\sqrt{2π(1/2)}} \exp\left\{ -\frac{(θ - x/2)^2}{2(1/2)} \right\}.    (14)

Note that π(θ|x) is the pdf of a normal distribution with mean x/2 and variance 1/2. Therefore, E(θ|x) = x/2 is the Bayes estimator of θ.
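Although (14) makes simulation unnecessary in this conjugate example, it provides a convenient sanity check for the ratio representation (10): sample from the prior and weight by the likelihood. A minimal R sketch, where the observed value x = 1.5 and the simulation size n are illustrative assumptions:

# Monte Carlo approximation of the Bayes estimator (10):
# draw theta_i from the prior N(0,1) and weight by the
# likelihood f(x | theta_i); the ratio estimates E(theta | x).
x <- 1.5
n <- 100000
theta <- rnorm(n)
w <- dnorm(x, mean = theta)
sum(theta*w)/sum(w)    # should be close to x/2 = 0.75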

It is sometimes the case that a robust prior is desired (Robert and Casella, 1999). A degree of robustness can be obtained with a Cauchy prior. Therefore, let's now assume that X|θ ~ N(θ, 1), as before, and θ ~ Cauchy(0, 1). Then

    f(x|θ) = \frac{1}{\sqrt{2π}} e^{-(x-θ)^2/2},    (15)

    π(θ) = \frac{1}{π(1 + θ^2)}.    (16)

An analytic solution to this problem, i.e. a closed-form expression for the Bayes estimator of θ, is, to my knowledge, impossible to find. From (10), we need to evaluate

    E(θ|x) = \frac{\int \frac{θ}{1+θ^2} e^{-(x-θ)^2/2} \, dθ}{\int \frac{1}{1+θ^2} e^{-(x-θ)^2/2} \, dθ} ≡ δ^π(x).    (17)

Monte Carlo integration can be used to estimate both integrals (numerator and denominator). Let θ_1, ..., θ_n be a random sample from N(x, 1) (why?). Then

    \hat{δ}^π(x) = \frac{\frac{1}{n} \sum_{i=1}^n \frac{θ_i}{1+θ_i^2}}{\frac{1}{n} \sum_{i=1}^n \frac{1}{1+θ_i^2}}    (18)

provides us with a Monte Carlo estimate of the Bayes estimator of θ. The law of large numbers implies that \hat{δ}^π(x) converges in probability to δ^π(x) as n → ∞.
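A minimal R sketch of (18) follows; the observed value x = 3 passed to the function is an illustrative assumption.

bayes.cauchy <- function(x, n){
  #
  # Monte Carlo estimate (18) of the Bayes estimator under a
  # Cauchy(0,1) prior: sampling theta_i ~ N(x,1) absorbs the
  # normal likelihood, and the Cauchy prior contributes the
  # weights 1/(1 + theta^2) in numerator and denominator.
  #
  theta <- rnorm(n, mean = x)
  w <- 1/(1 + theta^2)
  sum(theta*w)/sum(w)
}

bayes.cauchy(x = 3, n = 100000)   # shrinks x = 3 toward the prior median 0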

Now, suppose that we want to evaluate the integral

    \int_a^b g(x) \, dx    (19)

for a continuous function g over the closed and bounded interval [a, b]. If the anti-derivative of g is not available in closed form, then Monte Carlo integration is in order. We can write the integral as

    \int_a^b g(x) \, dx = (b - a) \int_a^b g(x) \frac{1}{b - a} \, dx = (b - a) E[g(X)],    (20)

where the random variable X is uniformly distributed on the interval (a, b), i.e. X ~ Uniform(a, b). The Monte Carlo method is then to generate a random sample X_1, ..., X_n from the Uniform(a, b) distribution and compute Y_i = (b - a) g(X_i), i = 1, ..., n. Then \bar{Y}_n is a consistent estimator of \int_a^b g(x) \, dx.

Example 2: Estimation of π. Consider the estimation of π via Monte Carlo integration. Let

    g(x) = 4\sqrt{1 - x^2}, for 0 < x < 1.    (21)

(Why?) Then

    π = \int_0^1 g(x) \, dx = E[g(X)],    (22)

where X ~ Uniform(0, 1). Hence, we need to generate a random sample X_1, ..., X_n from the uniform distribution on the interval (0, 1) and form Y_i = 4\sqrt{1 - X_i^2}. Thus, \bar{Y}_n is a consistent estimator of π. Note that \bar{Y}_n is estimating a mean, so the large-sample confidence interval for means, \bar{Y}_n ± 1.96 s/\sqrt{n}, can be used to estimate the error of estimation. The following R code can be used to estimate π via Monte Carlo integration.

mc.int.pi <- function(n){
  #
  # This function estimates pi via Monte Carlo integration.
  # Samples of size n from Uniform(0,1) are used.
  # A 95% confidence interval is also computed.
  #
  y <- 4*sqrt(1 - runif(n)^2)
  pi.est <- mean(y)
  moe <- 1.96*sqrt(var(y)/n)
  ci.lower <- pi.est - moe
  ci.upper <- pi.est + moe
  c(pi.est, ci.lower, ci.upper)
}

Executing this code for various n yields:

> pi.out <- mc.int.pi(100)
> pi.out
[1] ...
> pi.out <- mc.int.pi(1000)
> pi.out
[1] ...
> pi.out <- mc.int.pi(10000)
> pi.out
[1] ...
> pi.out <- mc.int.pi(100000)
> pi.out
[1] ...

Example 3: Normal cdf. Since the normal cdf cannot be written in an explicit form, a possible way to construct normal distribution tables is to use Monte Carlo simulation. The approximation of

    Φ(t) = \int_{-\infty}^{t} \frac{1}{\sqrt{2π}} e^{-y^2/2} \, dy    (23)

by Monte Carlo simulation is

    \hat{Φ}(t) = \frac{1}{n} \sum_{i=1}^n I(X_i ≤ t),    (24)

where I is the indicator function and X_1, ..., X_n is a random sample from a standard normal distribution. Note that the exact variance of \hat{Φ}(t) is Φ(t)[1 - Φ(t)]/n, since the variables I(X_i ≤ t) are independent Bernoulli random variables with success probability Φ(t). For values of t around t = 0, the variance is thus approximately 1/(4n), and to achieve a precision of four digits (after the decimal point), the approximation requires on average n ≈ 2 × 10^8 simulations, about 200 million iterations. The following R code was used in generating Table 1:

normal.prob <- function(n){
  #
  # This function estimates P(X <= a) where X is N(0,1)
  # via Monte Carlo integration.
  # Several values of a are considered.
  #
  # The helper Phi.hat computes the Monte Carlo estimate of
  # the normal cdf evaluated at v.
  Phi.hat <- function(v, x){ mean(ifelse(x <= v, 1, 0)) }
  a <- matrix(c(0, 0.67, 0.84, 1.28, 1.65, 2.32, 2.58, 3.09, 3.72), nrow = 1)
  x <- rnorm(n)
  apply(a, 2, Phi.hat, x = x)
}

[Table 1: Estimates of normal probabilities Φ(t) via Monte Carlo integration. Columns correspond to t = 0, 0.67, 0.84, 1.28, 1.65, 2.32, 2.58, 3.09, 3.72; rows correspond to increasing simulation sizes n, with the exact values of Φ(t) in the final row; the numerical entries are not reproduced here.]

What do you observe?
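One way to investigate is to rebuild the table and set the Monte Carlo estimates beside R's built-in pnorm, which supplies the exact row. A minimal sketch; the simulation sizes below are illustrative assumptions, not necessarily those used for Table 1.

a <- c(0, 0.67, 0.84, 1.28, 1.65, 2.32, 2.58, 3.09, 3.72)
# one column of estimates per simulation size; transpose to rows
estimates <- sapply(c(100, 10000, 1000000), normal.prob)
rbind(t(estimates), exact = pnorm(a))   # compare Phi.hat with the exact Phi(t)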

Importance Sampling:

Importance sampling is named such because it is based on so-called importance functions, although a more accurate name might be weighted sampling. The idea is to estimate

    E_f[h(X)] = \int_S h(x) f(x) \, dx    (25)

[earlier (1)] by sampling from a distribution other than f, the distribution of interest. There are many reasons for doing this, one of which is illustrated in the next example.

Example 4: Cauchy probability. Consider the problem of approximating p = P(X > 2), where X ~ Cauchy(0, 1). Thus, the quantity of interest is

    p = \int_2^{\infty} \frac{1}{π(1 + x^2)} \, dx.    (26)

This probability can be approximated through classical Monte Carlo integration:

    \hat{p}_1 = \frac{1}{n} \sum_{i=1}^n I(X_i > 2),    (27)

where X_1, ..., X_n are an iid sample from Cauchy(0, 1). The variance of this estimator is p(1 - p)/n = 0.126/n, because p = 0.1476. This variance can be reduced by taking into account the symmetry of the Cauchy distribution, because

    \hat{p}_2 = \frac{1}{2n} \sum_{i=1}^n I(|X_i| > 2)    (28)

has variance p(1 - 2p)/(2n) = 0.052/n. The relative inefficiency of these methods is due to the generation of values outside the domain of interest, [2, ∞); such values are, in some sense, irrelevant to the approximation of p. We can rewrite p as

    p = \frac{1}{2} - \int_0^2 \frac{1}{π(1 + x^2)} \, dx.    (29)

The integral in this expression can be considered an expectation of h(X) = 2/[π(1 + X^2)], where X ~ Uniform(0, 2). Thus, an alternative approximation of p is

    \hat{p}_3 = \frac{1}{2} - \frac{1}{n} \sum_{i=1}^n h(U_i),    (30)

where U_1, ..., U_n are an iid sample from Uniform(0, 2). The variance of this estimator is [E(h^2) - \{E(h)\}^2]/n = 0.0285/n (by integration by parts). Finally, p can also be written as

    p = \int_0^{1/2} \frac{y^{-2}}{π(1 + y^{-2})} \, dy,    (31)

which can be thought of as the expectation of (1/4)h(Y) = 1/[2π(1 + Y^2)], where Y ~ Uniform(0, 1/2). This leads to the estimator

    \hat{p}_4 = \frac{1}{4n} \sum_{i=1}^n h(Y_i),    (32)

where Y_1, ..., Y_n is an iid sample from Uniform(0, 1/2). The variance of this estimator can be shown to be 0.95 × 10^{-4}/n.
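A minimal R sketch comparing the four estimators on a common simulation size; n = 10000 is an illustrative assumption, and the exact value p = 1/2 - arctan(2)/π ≈ 0.1476 is used to report the errors.

n <- 10000
h <- function(x) 2/(pi*(1 + x^2))
x <- rcauchy(n)
p1 <- mean(x > 2)                       # (27): raw indicator
p2 <- mean(abs(x) > 2)/2                # (28): exploits symmetry
p3 <- 1/2 - mean(h(runif(n, 0, 2)))     # (30): integrates over (0, 2)
p4 <- mean(h(runif(n, 0, 1/2)))/4       # (32): integrates over (0, 1/2)
c(p1, p2, p3, p4) - (1/2 - atan(2)/pi)  # errors typically shrink from p1 to p4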

In comparing \hat{p}_1 and \hat{p}_4, one sees that the reduction in variance is of order 10^{-3}, which implies that \hat{p}_4 requires roughly 1000 times fewer simulations than \hat{p}_1 to achieve the same precision. The lesson here is that the approximation of (1) based on simulation from f is not necessarily optimal. In fact, it can be shown that such an approximation will always be suboptimal. An alternative to direct sampling from f for the approximation of (1) is importance sampling.

Definition: The method of importance sampling is an evaluation of (1) based on generating an iid sample X_1, ..., X_n from a given distribution g and approximating

    E_f[h(X)] ≈ \frac{1}{n} \sum_{i=1}^n \frac{f(X_i)}{g(X_i)} h(X_i).

Importance sampling is based on an alternative representation of (1):

    E_f[h(X)] = \int_S h(x) \frac{f(x)}{g(x)} g(x) \, dx.    (33)

The importance sampling estimator converges to (1) for the same reason that \bar{h}_n converges, regardless of the choice of the distribution g (as long as supp(g) ⊃ supp(f)). This estimator is of considerable interest, since it puts very little restriction on the choice of the instrumental distribution g, which can be chosen from distributions that are easy to simulate from. Furthermore, the same sample (generated from g) can be used repeatedly, not only for different functions h but also for different densities f. This feature is attractive for robustness and Bayesian sensitivity analyses. However, some choices of g are better than others. While the importance sampling estimator does converge almost surely to (1), its variance is finite only when

    E_g\left[ h^2(X) \frac{f^2(X)}{g^2(X)} \right] = E_f\left[ h^2(X) \frac{f(X)}{g(X)} \right] = \int_S h^2(x) \frac{f^2(x)}{g(x)} \, dx < \infty.    (34)

Therefore:

Instrumental distributions g with tails lighter than those of f (that is, those with unbounded ratios f/g) are not appropriate for importance sampling. In these cases, the variances of the corresponding estimators are infinite for many functions h, and the weights f(x_i)/g(x_i) vary widely, giving too much importance to a few values x_i.

Instrumental distributions g with thicker tails than f ensure that the ratio f/g does not cause the divergence of E_f[h^2 f/g]. Thus, one possible sufficient condition is

    f(x)/g(x) < M for all x, and Var_f(h) < \infty,

although there are others.
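To illustrate the tail condition, here is a minimal R sketch (an illustration, not part of the original notes). It takes f = N(0, 1) as the target, h(x) = x^2, and a Cauchy(0, 1) instrumental distribution g: the Cauchy tails are heavier than the normal tails, so the ratio f/g is bounded and (34) is finite. The exact answer is E_f[X^2] = 1.

importance.sample <- function(h, n){
  #
  # Importance sampling estimate of E_f[h(X)] with target
  # f = N(0,1) and instrumental distribution g = Cauchy(0,1).
  # The heavy Cauchy tails keep the weights f/g bounded.
  #
  x <- rcauchy(n)
  w <- dnorm(x)/dcauchy(x)    # importance weights f(x_i)/g(x_i)
  mean(w*h(x))
}

importance.sample(function(x) x^2, 100000)   # approximately 1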

Other Monte Carlo Integration Techniques:

For those of you who are interested, there are several other methods of Monte Carlo integration, each with advantages and disadvantages. These methods include

    Riemann Approximations
    Laplace Approximations
    Saddlepoint Approximations

For information on these methods, see Chapter 3 of Monte Carlo Statistical Methods by Christian P. Robert and George Casella (Springer, 1999).
