Laboratorio di Problemi Inversi, Esercitazione 4: metodi Bayesiani e importance sampling
1 Laboratorio di Problemi Inversi, Esercitazione 4: metodi Bayesiani e importance sampling. Luca Calatroni, Dipartimento di Matematica, Università degli Studi di Genova. May 19.
2 Outline: 1. Recap on the theory; 2. MATLAB implementation.
3 Bayesian methods for inverse problems I

Again, consider the inverse problem y = Ax + n, with n ∼ N(0, σ²), but now from a different perspective.

A different point of view:
- deterministic approaches: we look for a reconstruction x from the measurement y;
- Bayesian approaches: we aim to determine the probability of the reconstruction x given y (the posterior probability).
6 Bayesian methods for inverse problems II

We have: p(y | x), the probability of observing y if x is given (but x is unknown). We seek: p(x | y).

Bayes' theorem:

p(x | y) = p(y | x) p(x) / p(y)

Ingredients:
- p(y | x): the likelihood;
- p(x): the prior, encoding a-priori information on the desired reconstruction;
- p(y): often interpreted as a normalisation constant.

The unknown can be estimated using the Maximum A Posteriori (MAP) estimate:

x_MAP = argmax_x p(x | y)

Bayesian inverse problem solution: the solution of the inverse problem is the posterior distribution itself.
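As a quick numerical illustration of Bayes' theorem and the MAP estimate (not part of the original slides), here is a sketch with a hypothetical three-valued unknown; Python/NumPy is used in place of MATLAB purely for illustration, and the prior/likelihood values are invented.

```python
import numpy as np

# Hypothetical discrete example: the unknown x takes one of 3 values.
prior = np.array([0.5, 0.3, 0.2])        # p(x)
likelihood = np.array([0.1, 0.7, 0.2])   # p(y | x) for the observed y

# Bayes' theorem: p(x | y) = p(y | x) p(x) / p(y)
evidence = np.sum(likelihood * prior)    # p(y), the normalisation constant
posterior = likelihood * prior / evidence

x_map = int(np.argmax(posterior))        # MAP estimate: argmax_x p(x | y)
```

The posterior sums to one by construction, and the MAP estimate is simply the index at which it is maximal.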
7 Work plan

Often, it is difficult to determine an analytic expression for the posterior.

Objectives: we want to implement two basic Monte Carlo methods to estimate the posterior distribution that solves a very simplified deblurring/denoising problem. We will consider:
- Importance Sampling;
- Metropolis-Hastings.
8 Outline: 1. Recap on the theory; 2. MATLAB implementation.
12 Building the parametric image

Test image: construct the image of a blurred circle with given:
- centre coordinates and radius;
- background/foreground values;
- Gaussian blur with fixed blur variance.

Parameters: x_C = y_C = 32, r = 15, bg_val = 200, fg_val = 125, σ²_PSF = 2.

Write a MATLAB function create_circle that creates a circle depending on the input parameters above. For the blurring, use fft2, ifft2, fftshift. Add Gaussian noise with noise level σ_noise = 12.
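A hedged sketch of what create_circle might look like, written in Python/NumPy as a stand-in for the MATLAB implementation the lab asks for (np.fft plays the role of fft2/ifft2/fftshift; the argument order, grid conventions, and image size n = 64 are assumptions):

```python
import numpy as np

def create_circle(n, xc, yc, r, bg_val, fg_val, sigma2_psf, sigma_noise, rng):
    """Build an n-by-n image of a disc, blur it with a Gaussian PSF via the
    FFT, and add Gaussian noise (illustrative sketch of the lab exercise)."""
    yy, xx = np.mgrid[0:n, 0:n]
    disc = (xx - xc) ** 2 + (yy - yc) ** 2 <= r ** 2
    img = np.where(disc, fg_val, bg_val).astype(float)

    # Gaussian PSF centred in the image, normalised to sum to 1, then
    # shifted so its peak sits at index (0, 0) for circular convolution
    psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * sigma2_psf))
    psf /= psf.sum()
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))

    # Additive Gaussian noise with standard deviation sigma_noise
    return blurred + sigma_noise * rng.standard_normal((n, n))

rng = np.random.default_rng(0)
y = create_circle(64, 32, 32, 15, 200, 125, 2.0, 12.0, rng)
```

Since the PSF sums to one, the circular convolution preserves the mean grey level of the clean image.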
15 Writing Importance Sampling

Fix a number N of samples. Unknowns: x_C, y_C, r, bg_val, fg_val, σ²_PSF.

Idea: for each sample, draw a random value for each parameter from a known probability distribution (here, a uniform distribution), build the corresponding blurred image, and construct a weight encoding how likely that combination of parameters is.

- For every sample and each parameter involved:
  - use the MATLAB command rand to generate a random number in the interval [0, 1] with uniform probability distribution;
  - scale the generated number to the desired interval;
  - for simplicity, use ceil to round the resulting value up to the nearest integer.
- For each sample, construct the corresponding PSF and use it to build a blurred version of the estimated image.
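The rand / scale / ceil recipe for drawing integer parameters can be sketched as follows (a Python/NumPy stand-in for the MATLAB commands; the sample count N and the ranges max_coord and max_r are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                 # hypothetical sample count (the slides leave N open)
max_coord, max_r = 64, 30  # assumed ranges for the centre coordinates and radius

# rand: uniform draws in [0, 1]; scale to the desired range; ceil to integers
xc = np.ceil(rng.random(N) * max_coord).astype(int)  # values in {1, ..., 64}
r = np.ceil(rng.random(N) * max_r).astype(int)       # values in {1, ..., 30}
```

Each parameter thus gets one uniformly distributed integer value per sample, exactly as in the MATLAB loop.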
18 Importance weights

- For each sample, construct the weight from the density we want to estimate and the importance density η as follows:

w_i = p(x_i | y) / η(x_i) = p(y | x_i) p(x_i) / (η(x_i) p(y)) = p(y | x_i) / p(y) ∝ p(y | x_i),

assuming that p(x_i) = η(x_i) (importance distribution = prior) and working up to multiplicative constants.

In our case the likelihood is the one corresponding to the noise distribution:

p(y | x_i) = (1 / √(2πσ²)) exp( −‖y − Bx_i‖²_F / (2σ²) ) ∝ exp( −‖y − Bx_i‖²_F / (2σ²) )

Numerical problems in MATLAB (numerical zeros)!
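To sidestep the numerical zeros, one works with the log of the weights directly; a minimal Python sketch, with randomly generated stand-in images (the data and the three candidate images are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 12.0
y = rng.standard_normal((8, 8))          # stand-in for the observed image

# Hypothetical candidate images B x_i for three samples
candidates = [y + 0.1 * rng.standard_normal((8, 8)) for _ in range(3)]

# log p(y | x_i) = -||y - B x_i||_F^2 / (2 sigma^2), up to an additive constant
log_w = np.array([-np.sum((y - c) ** 2) / (2 * sigma ** 2)
                  for c in candidates])
```

Exponentiating these values directly is what underflows on real-sized images; keeping them in log form defers that problem to the normalisation step.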
22 Way around: the log-sum-exp formula

MATLAB may return 0 when calculating the weights, due to the large negative exponents of e. Once computed for each sample, the weight vector needs to be normalised:

W = w / Σ_i w_i, so that log(W) = log(w) − log(Σ_i w_i).   (1)

- In the MC loop, compute the log of the w_i instead of the w_i themselves (and avoid the numerical problems);
- use the log-sum-exp formula

log(Σ_i w_i) = max_i log(w_i) + log( Σ_i exp( log(w_i) − max_i log(w_i) ) )

and calculate log(W) via (1);
- compute W and find the indices where the weight vector is maximum;
- compute the corresponding set of parameters and use it to plot the reconstructed image.
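The log-sum-exp normalisation can be sketched as follows (a Python stand-in; the log-weight values are chosen to illustrate the magnitudes at which exp underflows to numerical zero):

```python
import numpy as np

log_w = np.array([-1050.0, -1000.0, -1020.0])   # typical log-weight magnitudes

# Naive normalisation fails: np.exp(-1000.0) is numerically zero.
# Log-sum-exp: log(sum_i w_i) = m + log(sum_i exp(log w_i - m)), m = max log w_i
m = log_w.max()
log_sum = m + np.log(np.sum(np.exp(log_w - m)))

log_W = log_w - log_sum          # log of the normalised weights, eq. (1)
W = np.exp(log_W)                # safe to exponentiate: values are O(1)
```

Subtracting the maximum makes the largest exponent exactly zero, so at least one term of the sum is representable and the normalised weights come out correct.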
24 Weighted histograms

We now want to plot the weighted histograms, to see how IS works and to find their peaks (MAP estimation):
- on the x-axis: the range of values for the parameter to estimate;
- on the y-axis: an accumulator h for each possible value of the parameter.
25 Problems: narrow distributions

The likelihood concentrates its mass on a very narrow set of values, so the posterior distribution promotes very few values. Recall:

w_i = p(x_i | y) / η(x_i) = p(y | x_i) p(x_i) / (η(x_i) p(y)) = p(y | x_i) / p(y) ∝ p(y | x_i)

Consequences:
- the residuals ‖y − Bx_i‖²_F are very large, so the weights are very small and concentrate on just a few values (and MATLAB has numerical limitations...);
- with the log-sum-exp formula we can at least recover the maximum, but no weighted histograms...

TASK: how can we make the posterior broader? Choose a different importance distribution, centred on the true values but not uniform...
29 Weighted histograms in MATLAB

- Initialise the range of values and a histogram for each estimated parameter.
- For each sample i = 1, ..., N: whenever an occurrence of a value is found, increment the corresponding histogram bin by w_i (think of the Hough transform: here, the accumulator increases by w_i).

Example for a generic parameter t:

tt = 1:max_t;
ht = zeros(size(tt));
for i = 1:N
    ht(t(i)) = ht(t(i)) + w(i);
end

- Plot the histograms for every estimated parameter. Comment on the resulting plots and find the peaks.
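An equivalent weighted-histogram accumulation in Python, as a stand-in for the MATLAB loop above (the sampled values and weights here are synthetic, and np.bincount replaces the explicit loop):

```python
import numpy as np

rng = np.random.default_rng(3)
N, max_t = 1000, 30
t = rng.integers(1, max_t + 1, size=N)   # sampled integer parameter values
w = rng.random(N)
w /= w.sum()                             # normalised importance weights

# Weighted histogram: accumulate each sample's weight in the bin of its value
ht = np.bincount(t, weights=w, minlength=max_t + 1)[1:]   # bins 1..max_t

# The histogram peak gives the marginal MAP estimate for t
t_peak = 1 + int(np.argmax(ht))
```

Because the weights are normalised, the accumulated histogram sums to one and can be read directly as an estimated marginal posterior for t.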
30 MAP estimation

In this way, we are computing the MAP estimate:

max_x p(x | y) = max_x Σ_{i=1}^{N} w_i h(x − x_i)

and finding the marginal distributions whose peaks are exactly the parameters maximising the posterior.
More informationMonte Carlo Methods and Statistical Computing: My Personal E
Monte Carlo Methods and Statistical Computing: My Personal Experience Department of Mathematics & Statistics Indian Institute of Technology Kanpur November 29, 2014 Outline Preface 1 Preface 2 3 4 5 6
More informationTracking Algorithms. Lecture16: Visual Tracking I. Probabilistic Tracking. Joint Probability and Graphical Model. Deterministic methods
Tracking Algorithms CSED441:Introduction to Computer Vision (2017F) Lecture16: Visual Tracking I Bohyung Han CSE, POSTECH bhhan@postech.ac.kr Deterministic methods Given input video and current state,
More informationTracking Computer Vision Spring 2018, Lecture 24
Tracking http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 24 Course announcements Homework 6 has been posted and is due on April 20 th. - Any questions about the homework? - How
More informationWhat is machine learning?
Machine learning, pattern recognition and statistical data modelling Lecture 12. The last lecture Coryn Bailer-Jones 1 What is machine learning? Data description and interpretation finding simpler relationship
More informationIntroduction. Chapter Overview
Chapter 1 Introduction The Hough Transform is an algorithm presented by Paul Hough in 1962 for the detection of features of a particular shape like lines or circles in digitalized images. In its classical
More informationA stochastic approach of Residual Move Out Analysis in seismic data processing
A stochastic approach of Residual ove Out Analysis in seismic data processing JOHNG-AY T.,, BORDES L., DOSSOU-GBÉTÉ S. and LANDA E. Laboratoire de athématique et leurs Applications PAU Applied Geophysical
More informationVARIANCE REDUCTION TECHNIQUES IN MONTE CARLO SIMULATIONS K. Ming Leung
POLYTECHNIC UNIVERSITY Department of Computer and Information Science VARIANCE REDUCTION TECHNIQUES IN MONTE CARLO SIMULATIONS K. Ming Leung Abstract: Techniques for reducing the variance in Monte Carlo
More informationGraphical Models, Bayesian Method, Sampling, and Variational Inference
Graphical Models, Bayesian Method, Sampling, and Variational Inference With Application in Function MRI Analysis and Other Imaging Problems Wei Liu Scientific Computing and Imaging Institute University
More informationShort-Cut MCMC: An Alternative to Adaptation
Short-Cut MCMC: An Alternative to Adaptation Radford M. Neal Dept. of Statistics and Dept. of Computer Science University of Toronto http://www.cs.utoronto.ca/ radford/ Third Workshop on Monte Carlo Methods,
More informationAn Introduction to PDF Estimation and Clustering
Sigmedia, Electronic Engineering Dept., Trinity College, Dublin. 1 An Introduction to PDF Estimation and Clustering David Corrigan corrigad@tcd.ie Electrical and Electronic Engineering Dept., University
More informationBayesian estimation of optical properties of the human head via 3D structural MRI p.1
Bayesian estimation of optical properties of the human head via 3D structural MRI June 23, 2003 at ECBO 2003 Alex Barnett Courant Institute, New York University Collaborators (NMR Center, Mass. Gen. Hosp.,
More informationMatting & Compositing
Matting & Compositing Image Compositing Slides from Bill Freeman and Alyosha Efros. Compositing Procedure 1. Extract Sprites (e.g using Intelligent Scissors in Photoshop) 2. Blend them into the composite
More information10.4 Linear interpolation method Newton s method
10.4 Linear interpolation method The next best thing one can do is the linear interpolation method, also known as the double false position method. This method works similarly to the bisection method by
More informationEstimating the Information Rate of Noisy Two-Dimensional Constrained Channels
Estimating the Information Rate of Noisy Two-Dimensional Constrained Channels Mehdi Molkaraie and Hans-Andrea Loeliger Dept. of Information Technology and Electrical Engineering ETH Zurich, Switzerland
More informationA Fast Estimation of SRAM Failure Rate Using Probability Collectives
A Fast Estimation of SRAM Failure Rate Using Probability Collectives Fang Gong Electrical Engineering Department, UCLA http://www.ee.ucla.edu/~fang08 Collaborators: Sina Basir-Kazeruni, Lara Dolecek, Lei
More informationIn the real world, light sources emit light particles, which travel in space, reflect at objects or scatter in volumetric media (potentially multiple
1 In the real world, light sources emit light particles, which travel in space, reflect at objects or scatter in volumetric media (potentially multiple times) until they are absorbed. On their way, they
More informationDigital Image Restoration
Digital Image Restoration Blur as a chance and not a nuisance Filip Šroubek sroubekf@utia.cas.cz www.utia.cas.cz Institute of Information Theory and Automation Academy of Sciences of the Czech Republic
More informationChapter 1. Introduction
Chapter 1 Introduction A Monte Carlo method is a compuational method that uses random numbers to compute (estimate) some quantity of interest. Very often the quantity we want to compute is the mean of
More informationIntroduction to Image Super-resolution. Presenter: Kevin Su
Introduction to Image Super-resolution Presenter: Kevin Su References 1. S.C. Park, M.K. Park, and M.G. KANG, Super-Resolution Image Reconstruction: A Technical Overview, IEEE Signal Processing Magazine,
More informationOutline. Bayesian Data Analysis Hierarchical models. Rat tumor data. Errandum: exercise GCSR 3.11
Outline Bayesian Data Analysis Hierarchical models Helle Sørensen May 15, 2009 Today: More about the rat tumor data: model, derivation of posteriors, the actual computations in R. : a hierarchical normal
More informationImage restoration. Lecture 14. Milan Gavrilovic Centre for Image Analysis Uppsala University
Image restoration Lecture 14 Milan Gavrilovic milan@cb.uu.se Centre for Image Analysis Uppsala University Computer Assisted Image Analysis 2009-05-08 M. Gavrilovic (Uppsala University) L14 Image restoration
More informationEdge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels
Edge Detection Edge detection Convert a 2D image into a set of curves Extracts salient features of the scene More compact than pixels Origin of Edges surface normal discontinuity depth discontinuity surface
More informationL10. PARTICLE FILTERING CONTINUED. NA568 Mobile Robotics: Methods & Algorithms
L10. PARTICLE FILTERING CONTINUED NA568 Mobile Robotics: Methods & Algorithms Gaussian Filters The Kalman filter and its variants can only model (unimodal) Gaussian distributions Courtesy: K. Arras Motivation
More informationExpectation Maximization. Machine Learning 10701/15781 Carlos Guestrin Carnegie Mellon University
Expectation Maximization Machine Learning 10701/15781 Carlos Guestrin Carnegie Mellon University April 10 th, 2006 1 Announcements Reminder: Project milestone due Wednesday beginning of class 2 Coordinate
More informationMonte Carlo Localization using Dynamically Expanding Occupancy Grids. Karan M. Gupta
1 Monte Carlo Localization using Dynamically Expanding Occupancy Grids Karan M. Gupta Agenda Introduction Occupancy Grids Sonar Sensor Model Dynamically Expanding Occupancy Grids Monte Carlo Localization
More informationImage Restoration and Reconstruction
Image Restoration and Reconstruction Image restoration Objective process to improve an image, as opposed to the subjective process of image enhancement Enhancement uses heuristics to improve the image
More informationThe Plan: Basic statistics: Random and pseudorandom numbers and their generation: Chapter 16.
Scientific Computing with Case Studies SIAM Press, 29 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit IV Monte Carlo Computations Dianne P. O Leary c 28 What is a Monte-Carlo method?
More information3. Data Structures for Image Analysis L AK S H M O U. E D U
3. Data Structures for Image Analysis L AK S H M AN @ O U. E D U Different formulations Can be advantageous to treat a spatial grid as a: Levelset Matrix Markov chain Topographic map Relational structure
More information