Bayesian Methods in Vision: MAP Estimation, MRFs, Optimization
1 Bayesian Methods in Vision: MAP Estimation, MRFs, Optimization CS 650: Computer Vision Bryan S. Morse
2 Optimization Approaches to Vision / Image Processing. Recurring theme: cast a vision problem as an optimization problem, then use established optimization techniques. Examples: automated thresholding, fitting-based segmentation, edge-based segmentation using graphs, graph-cut region segmentation, snakes.
3 Bayesian Inference. We want to know the real world... but we only have pictures of it. The world causes the picture... but we lose information in the process, and all kinds of things can go wrong. So, can we infer the world from the (corrupted) image? Common approach: Bayesian inference.
4 Bayesian Reconstruction. Example: Bayesian image reconstruction. Reconstruction: the process of attempting to recreate the original signal given a corrupted one. Terms in image reconstruction: the scene is the real world; the image is a (possibly corrupted) picture of a scene. Image reconstruction attempts to recreate the scene from an image.
5 Bayesian Reconstruction: Likelihood. [Figure: the original image shown alongside two possible reconstructions.] Which possible reconstruction is more likely to have produced the original image when corrupted?
6 Bayesian Reconstruction: Knowledge About the Corruption Process. Knowledge about the corruption process puts limits on the reconstruction. Usually thought of as "fitting the data": the reconstructed image can't vary too much from the original corrupted image. Key: we can evaluate how plausible any solution is given the input.
7 Bayesian Reconstruction: Prior Knowledge. [Figure: a possible original and two possible reconstructions.] Which possible reconstruction seems better to you?
8 Bayesian Reconstruction: Knowledge About Properties of the Original Scene. Possible general properties: generally smooth, with a few scattered rapid transitions. Possible specific properties: known scene contents (subject, anatomy, etc.); other related images/scenes (video frames, other views, etc.). Key: we can evaluate how plausible any solution is at all.
9 Review: Conditional Probabilities and Bayes' Theorem. Notation: the probability of discrete event A occurring is P(A). The continuous random variable x has a probability density function (pdf) p(x). For vector-valued random variables x, we write this as p(x). Properly speaking, it's really the probability that some specific variable X has the value x, p(X = x), but we usually just assume we know what variable x refers to.
10 Review: Conditional Probabilities and Bayes' Theorem. Conditional Probabilities. We write the conditional probability of A given B as P(A|B). This means "the probability of A given B" or "given B, what is the probability of A?" For random variables, we write p(x|A) or p(x|y).
11 Review: Conditional Probabilities and Bayes' Theorem. Bayes' Theorem. Generally: P(A|B) = P(B|A) P(A) / P(B).
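As a quick numeric sanity check (not from the slides; all numbers are made up for illustration), suppose a test detects a condition with hit rate P(B|A) = 0.9, false-positive rate P(B|not A) = 0.05, and prior P(A) = 0.01:

    # Hypothetical numeric check of Bayes' theorem; the numbers are invented.
    p_b_given_a = 0.90       # P(B|A)
    p_a = 0.01               # P(A)
    p_b_given_not_a = 0.05   # P(B|not A)

    # Total probability: P(B) = P(B|A) P(A) + P(B|not A) P(not A)
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

    # Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
    print(p_b_given_a * p_a / p_b)   # ~0.154: well above the prior, yet most
                                     # positives are still false alarms

The same mechanics, with scenes and images in place of events, drive the reconstruction below.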
12 Bayesian Reconstruction (revisited). Goal: over all possible reconstructed scenes f, find the one that maximizes p(f|g) for the measured image g. Problem: your knowledge of the imaging process tells you P(g|f), but how do you determine P(f|g)? Really big problem: how big is the space of all possible scenes f?
13 Bayesian Reconstruction (revisited). P(f|g) = P(g|f) P(f) / P(g). P(g|f) is the likelihood or data term. P(f) is the a priori knowledge (prior). P(g) is independent of f, and can thus be ignored when choosing the best f. P(f|g) is called the a posteriori estimate. This is why the approach is often called maximum a posteriori (MAP) estimation.
14 Bayesian Reconstruction (revisited): Likelihood. The likelihood is all about what can go wrong with the imaging process. Uncertainty is introduced by pixel noise, measurement noise, error, etc. Example: assuming white noise with standard deviation \sigma, the probability of getting noisy image g from scene f is P(g|f) = \prod_i e^{-(f_i - g_i)^2/\sigma^2}.
15 Bayesian Reconstruction (revisited): Priors. The priors are all about what we expect of good solutions. Example (penalize unsmooth images): P(f) = \prod_i \prod_{k \in N(i)} e^{-(f_i - f_k)^2}, where N(i) denotes the neighborhood of i. Notice that one large discontinuity in intensity is more likely than several smaller discontinuities. This results in piecewise-constant images with infrequent but rapid discontinuities. This is an example of what is formally called a Markov random field.
16 Bayesian Reconstruction (revisited). Likelihood: P(g|f) = \prod_i e^{-(f_i - g_i)^2/\sigma^2}. Prior: P(f) = \prod_i \prod_{k \in N(i)} e^{-(f_i - f_k)^2}. So, we want to choose f to optimize P(g|f) P(f) = \prod_i e^{-(f_i - g_i)^2/\sigma^2} \prod_i \prod_{k \in N(i)} e^{-(f_i - f_k)^2}. Looks like an expensive computation, right?
17 Bayesian Reconstruction (revisited): Bayesian Reconstruction (cont'd). Optimizing this is the same as optimizing its logarithm:

    \log \left[ \prod_i e^{-(f_i - g_i)^2/\sigma^2} \prod_i \prod_{k \in N(i)} e^{-(f_i - f_k)^2} \right]
    = \log \prod_i e^{-(f_i - g_i)^2/\sigma^2} + \log \prod_i \prod_{k \in N(i)} e^{-(f_i - f_k)^2}
    = \sum_i \log e^{-(f_i - g_i)^2/\sigma^2} + \sum_i \sum_{k \in N(i)} \log e^{-(f_i - f_k)^2}
    = -\sum_i (f_i - g_i)^2/\sigma^2 - \sum_i \sum_{k \in N(i)} (f_i - f_k)^2
18 Bayesian Reconstruction (revisited): Example (cont'd). Maximizing this is the same as minimizing its negation:

    \underbrace{\sum_i (f_i - g_i)^2/\sigma^2}_{\text{fitting data}} + \underbrace{\sum_i \sum_{k \in N(i)} (f_i - f_k)^2}_{\text{prior}}
19 Bayesian Reconstruction (revisited). If P(g|f) and P(f) are negative exponentials, the process usually boils down to minimizing some function of the form data(f, g) + \lambda prior(f), where data(f, g) penalizes reconstructions f that don't agree with the original image g, prior(f) penalizes reconstructions that are a priori unlikely, and the weight \lambda controls the relative importance of the two.
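To make this concrete, here is a minimal Python sketch (not from the slides) of such an energy for the denoising example above, with a squared data term and a squared 4-neighbor smoothness prior; the names energy, data_term, prior_term, and lam are illustrative.

    import numpy as np

    def data_term(f, g, sigma=1.0):
        # sum_i (f_i - g_i)^2 / sigma^2: penalize disagreement with the observed image
        return np.sum((f - g) ** 2) / sigma ** 2

    def prior_term(f):
        # sum of (f_i - f_k)^2 over 4-neighbor pairs: penalize unsmooth reconstructions
        dx = f[:, 1:] - f[:, :-1]   # horizontal neighbor differences
        dy = f[1:, :] - f[:-1, :]   # vertical neighbor differences
        return np.sum(dx ** 2) + np.sum(dy ** 2)

    def energy(f, g, lam=0.5, sigma=1.0):
        # negative log posterior (up to constants): data(f, g) + lambda * prior(f)
        return data_term(f, g, sigma) + lam * prior_term(f)

Lower energy means a more probable reconstruction under this model; the trivial choice f = g zeroes the data term but usually pays heavily in the prior.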
20 Other Bayesian Methods. Many other vision techniques share this general framework of optimizing a function of the form data(f, g) + \lambda prior(f). The data term drives the system toward solutions that fit the image data; the prior term drives the system toward desirable solutions (smooth, known shape, other known priors, etc.). Basic approach: set up the likelihood and prior terms, then choose a suitable optimization technique.
21 Other Bayesian Methods: Balancing the Data and Prior Terms. data(f, g) + \lambda prior(f). If \lambda is set too low, the data term dominates and the solution may overfit the noisy data. If \lambda is set too high, the prior term dominates and the solution may depend too much on the prior(s).
22 Optimization Techniques: Optimization. Since the space of all f to search is far too large, non-exhaustive optimization techniques must be used: gradient descent and variants (conjugate gradient, etc.), simulated annealing, genetic or other evolutionary algorithms, and graduated non-convexity. All of these give good, but not necessarily the best possible, solutions. Note: sometimes there exist globally optimal polynomial-time solvers for what you want; if so, big win!
23 Optimization Techniques: Gradient Descent. The simplest form of optimization is gradient-descent minimization. Idea: find the minimum of a function f by iteratively taking a step downhill. For one variable: x_{t+1} = x_t - \gamma \frac{df}{dx}(x_t), where \gamma controls the size of the step at each iteration. For two variables: x_{t+1} = x_t - \gamma \frac{\partial f}{\partial x}(x_t, y_t) and y_{t+1} = y_t - \gamma \frac{\partial f}{\partial y}(x_t, y_t). Or, more generally, for any function of a vector x: \mathbf{x}_{t+1} = \mathbf{x}_t - \gamma \nabla f(\mathbf{x}_t).
24 Optimization Techniques: Implementation: Gradient Descent. General form: \mathbf{x}_{t+1} = \mathbf{x}_t - \gamma \nabla f(\mathbf{x}_t). Implementation is actually quite simple:

    grad = CalculateGradient(f, x);
    while (magnitude(grad) > convergence_threshold) {
        x -= gamma * grad;
        grad = CalculateGradient(f, x);
    }

The difficult part isn't implementing the minimization; it's differentiating the function you're trying to minimize.
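A runnable Python version of the same loop, sketched under mild assumptions (the caller supplies the gradient; the function name, step size gamma, and tolerance are illustrative):

    import numpy as np

    def gradient_descent(gradient, x0, gamma=0.01, tol=1e-6, max_iters=10000):
        # Minimize a function given its gradient by repeatedly stepping downhill.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iters):
            grad = gradient(x)
            if np.linalg.norm(grad) <= tol:   # converged: gradient is (nearly) zero
                break
            x = x - gamma * grad              # step downhill, scaled by gamma
        return x

    # Example: minimize f(x, y) = (x - 3)^2 + (y + 1)^2, whose gradient is known.
    print(gradient_descent(lambda p: 2 * (p - np.array([3.0, -1.0])), [0.0, 0.0]))
    # prints approximately [ 3. -1.]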
25 Optimization Techniques: Simulated Annealing. Simulated annealing tries to overcome two problems of gradient descent: having to differentiate the criterion function, and getting stuck in local minima. Basic approach: randomly generate a small change; if it is better, keep it; with some probability, also accept a change to a worse solution in order to explore the solution space more broadly (and decrease this probability as you go).
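A minimal Python sketch of this loop; the Metropolis-style acceptance rule and the geometric cooling schedule are common choices assumed here, not prescribed by the slides:

    import math
    import random

    def simulated_annealing(cost, x0, step=0.1, temp=1.0, cooling=0.999, iters=10000):
        x, fx = x0, cost(x0)
        best, fbest = x, fx
        for _ in range(iters):
            cand = x + random.gauss(0.0, step)    # randomly generate a small change
            fcand = cost(cand)
            # Keep improvements; accept worse solutions with probability
            # exp(-(increase in cost)/temp), which shrinks as the system cools.
            if fcand < fx or random.random() < math.exp((fx - fcand) / temp):
                x, fx = cand, fcand
                if fx < fbest:
                    best, fbest = x, fx
            temp *= cooling                       # decrease this probability as you go
        return best

    # Example: a 1-D cost with many local minima.
    print(simulated_annealing(lambda x: x * x + 3 * math.sin(5 * x), x0=4.0))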
26 Optimization Techniques: Genetic Algorithms. Encode possible solutions as strings, and start with a set of initial potential solutions. For each iteration (each "generation"), three possible operations: reproduction (test each string to see how good it is; if good, keep it and produce more; if bad, delete it), crossover (with some probability, splice pieces of different existing strings together), and mutation (with some probability, randomly change a character). Similar in spirit to simulated annealing, but with the idea of occasionally merging parts of promising solutions; a small sketch follows.
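A toy Python sketch with bit strings; the selection scheme (keep the fitter half) and the crossover/mutation rates are illustrative assumptions:

    import random

    def genetic_search(fitness, length, pop_size=30, generations=200,
                       crossover_p=0.7, mutation_p=0.01):
        # Encode solutions as bit strings (lists of 0/1).
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)     # reproduction: rank by fitness...
            parents = pop[: pop_size // 2]          # ...keep the good, delete the bad
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                if random.random() < crossover_p:   # crossover: splice two strings
                    cut = random.randrange(1, length)
                    child = a[:cut] + b[cut:]
                else:
                    child = a[:]
                for i in range(length):             # mutation: randomly flip characters
                    if random.random() < mutation_p:
                        child[i] = 1 - child[i]
                children.append(child)
            pop = children
        return max(pop, key=fitness)

    # Example: maximize the number of 1 bits ("one-max").
    print(genetic_search(fitness=sum, length=20))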
27 Optimization Techniques: Graduated Non-Convexity. Idea: smooth the function to get rid of local minima. Problem: it's not the same function anymore! But its minimum makes a good starting point for a less-smoothed form of the function.
28 Optimization Techniques: Graduated Non-Convexity (cont'd). Algorithm: choose a large smoothing factor s so that the smoothed function has only one minimum, choose a starting point, and iterate until convergence: 1. use gradient descent to find the minimum of the smoothed function; 2. use this minimum as the starting point for the next iteration; 3. reduce s and smooth the original function again by s. A sketch appears below.
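A Python sketch on a 1-D toy problem chosen so that the Gaussian-smoothed function has a closed form; the test function, smoothing schedule, and step size are illustrative assumptions, not from the slides:

    import math

    def gnc_minimize(s0=1.0, shrink=0.5, levels=8, gamma=0.02, steps=2000):
        # Toy objective: f(x) = (x - 1)^2 + 2 sin(4x). Convolving with a Gaussian
        # of scale s gives f_s(x) = (x - 1)^2 + s^2 + 2 exp(-8 s^2) sin(4x),
        # so a large s damps the wiggles and leaves a single minimum.
        x, s = 0.0, s0                       # any starting point works at large s
        for _ in range(levels):
            damp = 2.0 * math.exp(-8.0 * s * s)
            for _ in range(steps):
                grad = 2.0 * (x - 1.0) + 4.0 * damp * math.cos(4.0 * x)
                x -= gamma * grad            # gradient descent on the smoothed f
            s *= shrink                      # reduce s; reuse x as the next start
        return x

    print(gnc_minimize())  # near 1.17, the global minimum of the unsmoothed f

Each level's minimum seeds the next, sharper level, so the iterate tracks the global basin as the wiggles reappear.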
29 Optimization Techniques: Optimization and Iterative Algorithms. All of the approximate optimization techniques we've talked about involve iteration, and each employs a specific strategy for finding local (or, hopefully, global) minima. Nearly every iterative vision algorithm employs similar strategies, explicitly or not; look for it!