Sparse & Redundant Representations and Their Applications in Signal and Image Processing
1 Sparse & Redundant Representations and Their Applications in Signal and Image Processing
Sparseland: An Estimation Point of View
Michael Elad, The Computer Science Department, The Technion - Israel Institute of Technology, Haifa 32000, Israel
2 A Strange Experiment
3 What If?
Consider the denoising problem
    \min_\alpha \|\alpha\|_0 \quad \text{s.t.} \quad \|D\alpha - z\|_2^2 \le \varepsilon^2,
and suppose that we can find a group of J candidate solutions \{\alpha_j\}_{j=1}^J such that each is sparse, \|\alpha_j\|_0 \ll n, and each explains the measurements, \|D\alpha_j - z\|_2^2 \le \varepsilon^2.
Basic questions:
What could we do with such a set of competing solutions in order to better denoise z?
Why should this help?
How shall we practically find such a set of solutions?
Relevant work: [Leung & Barron ('06)] [Larsson & Selen ('07)] [Schintter et al. ('08)] [Elad and Yavneh ('08)] [Giraud ('08)] [Protter et al. ('10)]
4 Why Bother?
Because each representation conveys a different story about the signal.
Because pursuit algorithms are often wrong in finding the sparsest representation, and relying on their solution is then too sensitive.
And then maybe there are deeper reasons?
[Figure: two different sparse representations over the dictionary D explaining the same signal]
5 Generating Many Sparse Solutions
Our answer: randomizing the OMP.
Initialization: k = 0, \alpha_0 = 0, r_0 = z - D\alpha_0 = z, S_0 = \emptyset.
Main iteration (k \leftarrow k+1):
1. Compute p(i) = |d_i^T r_{k-1}| for 1 \le i \le m.
2. Choose i_0 such that p(i_0) \ge p(i) for all 1 \le i \le m.
3. Update support: S_k = S_{k-1} \cup \{i_0\}.
4. LS: \alpha_k = \arg\min_\alpha \|D\alpha - z\|_2^2 \ \text{s.t.} \ \mathrm{supp}(\alpha) = S_k.
5. Update residual: r_k = z - D\alpha_k.
Stop when \|r_k\|_2 \le \varepsilon.
We randomize step 2: choose i_0 at random with probability proportional to \exp\{c \cdot (d_i^T r_{k-1})^2\}.
For now, let us set the parameter c manually for best performance. Later we shall define a way to set it automatically.
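As a concrete illustration, here is a minimal Python/NumPy sketch of this randomized pursuit. It is not the lecture's code: the function name, the stopping rule, and the numerically stabilized weighting are choices made here, and c is passed in manually as the slide suggests. Passing c = None falls back to the deterministic argmax selection, i.e., plain OMP.

    import numpy as np

    def random_omp(D, z, eps, c=None, rng=None):
        # One pursuit run. Assumes the columns of D are unit-norm.
        # c = None reproduces plain OMP (deterministic step 2).
        n, m = D.shape
        support, residual = [], z.copy()
        alpha = np.zeros(m)
        while np.linalg.norm(residual) > eps and len(support) < n:
            p = D.T @ residual                          # correlations d_i^T r_{k-1}
            if c is None:
                i0 = int(np.argmax(np.abs(p)))          # step 2, deterministic (OMP)
            else:
                logw = c * p**2
                w = np.exp(logw - logw.max())           # subtract max for stability
                i0 = int(rng.choice(m, p=w / w.sum()))  # randomized step 2
            if i0 not in support:
                support.append(i0)
            Ds = D[:, support]
            coef, *_ = np.linalg.lstsq(Ds, z, rcond=None)  # LS over the support
            alpha = np.zeros(m)
            alpha[support] = coef
            residual = z - Ds @ coef
        return alpha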
6 Let's Try the Following:
Form a random dictionary D of size 100 x 200.
Multiply it by a sparse vector \alpha_0 having 10 non-zeros.
Add Gaussian iid noise v with \sigma = 1 and obtain z = D\alpha_0 + v.
Solve the P_0 problem by OMP and obtain \alpha_{OMP}:
    \min_\alpha \|\alpha\|_0 \quad \text{s.t.} \quad \|D\alpha - z\|_2^2 \le 100.
Use Random-OMP and obtain the set \{\alpha_j^{RandOMP}\}.
Let's look at the obtained representations.
7 Results: The Obtained Set \{\alpha_j^{RandOMP}\}
The OMP gives the sparsest solution; the Random-OMP representations are somewhat denser.
As expected, all the representations satisfy \|D\alpha - z\|_2^2 \le 100.
8 Results: Denoising Performance?
Denoising is measured by the relative error \|D\hat\alpha - D\alpha_0\|_2^2 / \|z - D\alpha_0\|_2^2 (values below 1 mean the noise was attenuated).
Even though the OMP solution is the sparsest, it is not the most effective for denoising.
The cardinality of a representation does not reveal its denoising efficiency.
9 And Now to the Surprise
Let's propose the average
    \hat\alpha = \frac{1}{1000} \sum_{j=1}^{1000} \alpha_j^{RandOMP}
as our representation. This representation is not sparse at all, and yet its relative denoising error \|D\hat\alpha - D\alpha_0\|_2^2 / \|z - D\alpha_0\|_2^2 is markedly smaller than that of the OMP.
10 Is It Consistent?
Let's repeat this experiment many (1000) times:
A random dictionary of size n = 100, m = 200.
The true support of \alpha is of size 10.
We run OMP for denoising.
We run Random-OMP J = 1000 times and average: \hat\alpha = \frac{1}{J} \sum_{j=1}^{J} \alpha_j^{RandOMP}.
Denoising is assessed as before.
[Figure: histograms of the denoising errors of OMP vs. averaged Random-OMP over the 1000 trials]
The results of 1000 trials lead to the same conclusion. How could we explain this?
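This experiment can be reproduced with the sketch below, reusing the random_omp routine given earlier; the constant c = 1 is an arbitrary manual choice, as discussed on slide 5. The printed numbers are the relative denoising errors \|D\hat\alpha - D\alpha_0\|_2^2 / \|z - D\alpha_0\|_2^2.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k, sigma, J = 100, 200, 10, 1.0, 1000

    D = rng.standard_normal((n, m))
    D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
    alpha0 = np.zeros(m)
    supp = rng.choice(m, size=k, replace=False)
    alpha0[supp] = rng.standard_normal(k)
    z = D @ alpha0 + sigma * rng.standard_normal(n)     # z = D alpha_0 + v

    eps = np.sqrt(n) * sigma                            # ||D a - z||_2^2 <= n sigma^2
    a_omp = random_omp(D, z, eps)                       # plain OMP (c = None)
    a_avg = np.mean([random_omp(D, z, eps, c=1.0, rng=rng)
                     for _ in range(J)], axis=0)        # averaged Random-OMP

    den = np.linalg.norm(z - D @ alpha0)**2
    print("OMP:     ", np.linalg.norm(D @ a_omp - D @ alpha0)**2 / den)
    print("RandOMP: ", np.linalg.norm(D @ a_avg - D @ alpha0)**2 / den)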
11 A Crash-Course on Estimation Theory
12 Defining Our Goal
We are interested in the signal x. Unfortunately, we get instead a measurement z. We do know that z is related to x via the conditional probabilities P(z|x) or P(x|z). Estimation theory is all about algorithms for inferring x from z based on these. Obviously, a key element in our story is the need to know these probabilities; sometimes this is a tough task by itself.
13 The Maximum-Likelihood (ML)
The conditional P(z|x) is known as the likelihood function, describing the probability of getting the measurements z if x is given. A natural estimator to propose is the Maximum Likelihood:
    \hat{x}_{ML} = \arg\max_x P(z|x).
ML is very popular, but in many situations it is quite weak or even useless. This brings us to the Bayesian approach.
14 The MAP Estimation
This is a deep philosophical change in the approach we take, as now we consider x as random as well. Due to Bayes' rule we have
    P(x|z) = \frac{P(z|x) P(x)}{P(z)}.
The Maximum A-posteriori Probability (MAP) estimator suggests
    \hat{x}_{MAP} = \arg\max_x P(x|z) = \arg\max_x P(z|x) P(x).
15 The MAP: A Closer Look
The MAP estimator is given by
    \hat{x}_{MAP} = \arg\max_x P(x|z) = \arg\max_x P(z|x) P(x).
In words, it seeks the most probable x given z, which fits exactly our story (z is given while x is unknown). MAP resembles ML with one major distinction: P(x) enters and affects the outcome. This is known as the prior. This is the Bayesian approach in estimation, but MAP is not the ONLY Bayesian estimator.
16 MMSE Estimation
Given the posterior P(x|z), what is the best one could do? Answer: find the estimator that minimizes the Mean-Squared-Error, given by
    MSE(\hat{x}) = \int \|\hat{x} - x\|_2^2 \, P(x|z) \, dx = E\left[ \|\hat{x} - x\|_2^2 \,\big|\, z \right].
Taking a derivative with respect to the unknown \hat{x} and equating to zero,
    2 \int (\hat{x} - x) \, P(x|z) \, dx = 0 \;\Rightarrow\; \hat{x} = \frac{\int x \, P(x|z) \, dx}{\int P(x|z) \, dx} = E[x|z].
MMSE estimation is a conditional expectation.
17 MMSE versus MAP
In general, these two estimators are different. They align if the posterior is symmetric and unimodal (e.g., a Gaussian distribution). Typically, MMSE is much harder to compute, which explains the popularity of MAP.
[Figure: a skewed posterior P(x|z) with \hat{x}_{MAP} at its peak and \hat{x}_{MMSE} at its mean]
18 Sparseland: An Estimation Point of View
19 Our Signal Model
The dictionary D, of size n x m, is fixed and known.
Assume that \alpha is built by:
o Choosing the support s with probability P(s): each index enters the support independently with probability P(k \in s) = p.
o Drawing the coefficients on s as iid Gaussian entries, \alpha_s \sim N(0, \sigma_x^2 I).
The ideal signal is x = D\alpha = D_s \alpha_s.
P(\alpha) and P(x) are known.
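The generative side of this model is easy to make concrete. The following short sketch (the function name is ours, not the slides') draws a signal exactly as described above:

    import numpy as np

    def draw_signal(D, p, sigma_x, rng):
        # Each index enters the support independently with probability p;
        # on-support coefficients are iid N(0, sigma_x^2); x = D alpha.
        n, m = D.shape
        s = rng.random(m) < p
        alpha = np.zeros(m)
        alpha[s] = sigma_x * rng.standard_normal(int(s.sum()))
        return D @ alpha, alpha, s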
20 Adding Noise
The measurement is z = x + v, where the noise v is additive, white, and Gaussian:
    P(z|x) = C \exp\left\{ -\frac{\|x - z\|_2^2}{2\sigma^2} \right\}.
From this, P(z|\alpha) and P(\alpha|z), and even P(z|s) and P(s|z), could all be derived.
21 So, Let's Estimate
Given P(\alpha|z) or P(s|z), we consider:
Oracle (known support s): \hat\alpha_s^{oracle}.
MAP: \hat\alpha_{MAP} = \arg\max_\alpha P(\alpha|z), or for the support, \hat{s}_{MAP} = \arg\max_s P(s|z).
MMSE: \hat\alpha_{MMSE} = E[\alpha|z].
Why the oracle? Because it is a building block in the derivation of MAP and MMSE, and it poses a bound on the best achievable performance.
22 Deriving the Oracle Estimate
For a known support s, Bayes' rule gives P(\alpha_s | z, s) \propto P(z | \alpha_s, s) P(\alpha_s | s), with
    P(z|\alpha_s) \propto \exp\left\{ -\frac{\|D_s \alpha_s - z\|_2^2}{2\sigma^2} \right\}, \qquad P(\alpha_s) \propto \exp\left\{ -\frac{\|\alpha_s\|_2^2}{2\sigma_x^2} \right\}.
The posterior is Gaussian, and its mean (which is also its peak) is
    \hat\alpha_s = \left( \frac{1}{\sigma^2} D_s^T D_s + \frac{1}{\sigma_x^2} I \right)^{-1} \frac{1}{\sigma^2} D_s^T z = Q_s^{-1} h_s.
This is both the MAP and the MMSE for a known support. The estimate of x is obtained by \hat{x} = D_s \hat\alpha_s.
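In code, the oracle is a single linear solve. A sketch following the formulas above (the function name is an assumption of ours):

    import numpy as np

    def oracle(D, z, s, sigma, sigma_x):
        # alpha_s = Q_s^{-1} h_s with Q_s = D_s^T D_s / sigma^2 + I / sigma_x^2
        # and h_s = D_s^T z / sigma^2; also returns x-hat = D_s alpha_s.
        Ds = D[:, s]
        Qs = Ds.T @ Ds / sigma**2 + np.eye(Ds.shape[1]) / sigma_x**2
        hs = Ds.T @ z / sigma**2
        alpha_s = np.linalg.solve(Qs, hs)
        return Ds @ alpha_s, alpha_s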
23 The MAP Estimate of the Support
    \hat{s}_{MAP} = \arg\max_s P(s|z) = \arg\max_s \frac{P(z|s) P(s)}{P(z)}.
Using a marginalization trick we get
    P(z|s) = \int P(z|s, \alpha_s) \, P(\alpha_s) \, d\alpha_s \propto \frac{1}{\sigma_x^{|s|}} \exp\left\{ \frac{1}{2} h_s^T Q_s^{-1} h_s - \frac{1}{2} \log\det(Q_s) \right\}.
The expression within the integral is pure Gaussian, and thus a closed-form expression is within reach.
24 The MAP Estimate of the Support
    \hat{s}_{MAP} = \arg\max_s P(s|z) = \arg\max_s \frac{P(z|s) P(s)}{P(z)}.
Based on our prior for generating the support, P(s) = p^{|s|} (1-p)^{m-|s|}, we get
    \hat{s}_{MAP} = \arg\max_s \exp\left\{ \frac{1}{2} h_s^T Q_s^{-1} h_s - \frac{1}{2} \log\det(Q_s) \right\} \cdot \left( \frac{p}{(1-p)\,\sigma_x} \right)^{|s|}.
25 The MAP Estimate: Implications
    \hat{s}_{MAP} = \arg\max_s \exp\left\{ \frac{1}{2} h_s^T Q_s^{-1} h_s - \frac{1}{2} \log\det(Q_s) \right\} \cdot \left( \frac{p}{(1-p)\,\sigma_x} \right)^{|s|}.
The MAP estimator requires testing all the possible supports for the maximization. Once the support is found, the oracle formula is used to obtain \hat\alpha_s. This process is usually impossible due to the combinatorial number of possibilities. This is why we rarely use exact MAP and replace it with an approximation (e.g., OMP).
26 The MMSE Estimation
    \hat\alpha_{MMSE} = E[\alpha|z] = \sum_s P(s|z) \, E[\alpha|z, s].
Here E[\alpha|z, s] = \hat\alpha_s = Q_s^{-1} h_s is exactly the oracle for the support s, and the weights are
    P(s|z) \propto P(z|s) P(s) \propto \exp\left\{ \frac{1}{2} h_s^T Q_s^{-1} h_s - \frac{1}{2} \log\det(Q_s) \right\} \cdot \left( \frac{p}{(1-p)\,\sigma_x} \right)^{|s|}.
Thus
    \hat\alpha_{MMSE} = \sum_s P(s|z) \, \hat\alpha_s.
27 The MMSE Estimation: Implications
    \hat\alpha_{MMSE} = \sum_s P(s|z) \, \hat\alpha_s.
The best estimator (in terms of L_2 error) is a weighted average of many sparse representations!!! As such, it is not expected to be sparse at all. As in the MAP case, one cannot compute this expression, as the summation is over a combinatorial set of possibilities. We should propose approximations here as well.
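For a dictionary small enough to enumerate (as in the demo a few slides ahead), both estimators can be computed exactly. A sketch, with the support size capped at max_card to keep the enumeration finite; the weights follow the P(s|z) expression above, with all s-independent constants dropped:

    import numpy as np
    from itertools import combinations

    def exact_map_mmse(D, z, p, sigma, sigma_x, max_card=3):
        m = D.shape[1]
        logw, alphas = [], []
        for k in range(max_card + 1):
            for s in combinations(range(m), k):
                a = np.zeros(m)
                if k == 0:
                    quad, logdet = 0.0, 0.0
                else:
                    Ds = D[:, list(s)]
                    Qs = Ds.T @ Ds / sigma**2 + np.eye(k) / sigma_x**2
                    hs = Ds.T @ z / sigma**2
                    a[list(s)] = np.linalg.solve(Qs, hs)   # oracle on s
                    quad = 0.5 * hs @ a[list(s)]           # h_s^T Q_s^{-1} h_s / 2
                    _, logdet = np.linalg.slogdet(Qs)
                logw.append(quad - 0.5 * logdet
                            + k * np.log(p / ((1 - p) * sigma_x)))
                alphas.append(a)
        w = np.exp(np.array(logw) - max(logw))             # normalize safely
        w /= w.sum()
        a_map = alphas[int(np.argmax(w))]                  # most probable support
        a_mmse = sum(wi * ai for wi, ai in zip(w, alphas)) # weighted average
        return a_map, a_mmse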
28 Sparseland: Approximate Estimation
29 The Case of |s| = 1
Recall
    P(s|z) \propto \exp\left\{ \frac{1}{2} h_s^T Q_s^{-1} h_s - \frac{1}{2} \log\det(Q_s) \right\} \cdot \left( \frac{p}{(1-p)\,\sigma_x} \right)^{|s|}, \quad \text{where} \quad Q_s = \frac{1}{\sigma^2} D_s^T D_s + \frac{1}{\sigma_x^2} I \ \ \text{and} \ \ h_s = \frac{1}{\sigma^2} D_s^T z.
|s| = 1 implies that D_s has only one column d_k, and thus Q_s = 1/\sigma^2 + 1/\sigma_x^2 (a scalar) and h_s = (d_k^T z)/\sigma^2. The remaining terms are the same for every k and are omitted. A little bit of algebra gives
    P(s = \{k\} \mid z) \propto \exp\left\{ \frac{\sigma_x^2}{2\sigma^2(\sigma_x^2 + \sigma^2)} \, (d_k^T z)^2 \right\}.
30 The Case of |s| = 1: A Closer Look
    P(s = \{k\} \mid z) \propto \exp\left\{ \frac{\sigma_x^2}{2\sigma^2(\sigma_x^2 + \sigma^2)} \, (d_k^T z)^2 \right\}.
The constant \sigma_x^2 / (2\sigma^2(\sigma_x^2 + \sigma^2)) is exactly the c in the Random-OMP. Based on this we can propose the first step of a greedy algorithm for both MAP and MMSE:
MAP: choose the atom with the largest |d_k^T z| value (out of m) -- the same as OMP does.
MMSE: compute these m probabilities, and draw an atom at random from this distribution -- this is exactly what Random-OMP does.
31 What About the Next Steps? (Random-OMP)
Suppose we have k-1 non-zeros, and we are about to choose the k-th one:
Option 1: Compute the probabilities
    P(s|z) \propto \exp\left\{ \frac{1}{2} h_s^T Q_s^{-1} h_s - \frac{1}{2} \log\det(Q_s) \right\}
(m-k+1 scalar values), in which the k-1 chosen atoms are held fixed, and use this distribution to either maximize (MAP) or draw a random atom (MMSE).
Option 2 (simpler): Use the same rule as in the first step, applied to the current residual:
    P(\text{k-th atom} = d_k \mid z) \propto \exp\left\{ \frac{\sigma_x^2}{2\sigma^2(\sigma_x^2 + \sigma^2)} \, (d_k^T r_{k-1})^2 \right\}.
32 A Demo
[Figure: relative representation Mean-Squared-Error of the Oracle, MMSE, MAP, OMP, and Random-OMP estimators.]
These results correspond to a small dictionary (10 x 16), where the combinatorial (exact) formulas can be evaluated as well. Parameters: n = 10, m = 16, p = 0.1, \sigma_x = 1, J = 50 Random-OMP runs, averaged over many experiments.
33 A Few Words on the Unitary Case
We will not derive the equations for this case, and simply show the outcome: both MAP and MMSE have a closed-form solution. Writing \beta_k = d_k^T z and c^2 = \sigma_x^2 / (\sigma_x^2 + \sigma^2), the MAP estimate is obtained by hard-thresholding:
    \hat\alpha_k^{MAP} = \begin{cases} c^2 \beta_k & \text{if} \ \ \frac{c^2 \beta_k^2}{2\sigma^2} \ge \log\left( \frac{(1-p)\,\sigma_x}{p\, c\, \sigma} \right) \\ 0 & \text{otherwise.} \end{cases}
[Figure: the resulting hard-thresholding curve of \hat\alpha_k^{MAP} versus d_k^T z, for p = 0.1, \sigma = 0.3, \sigma_x = 1.]
34 A Few Words on the Unitary Case
As for the MMSE estimate, we get a smoothed shrinkage curve:
    \hat\alpha_k^{MMSE} = \frac{c^2 \beta_k}{1 + \frac{(1-p)\,\sigma_x}{p\, c\, \sigma} \exp\left\{ -\frac{c^2 \beta_k^2}{2\sigma^2} \right\}}.
This leads to a dense representation vector, just like the ones we have seen earlier, both in the deblurring result and in the synthetic experiment.
[Figure: the resulting shrinkage curve of \hat\alpha_k^{MMSE} versus d_k^T z, for p = 0.1, \sigma = 0.3, \sigma_x = 1.]
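Both unitary-case rules are cheap per-coefficient operations. Here is a sketch of the two curves as reconstructed above (the function name and parameterization are ours); evaluating it on beta = np.linspace(-3, 3, 601) with p = 0.1, sigma = 0.3, sigma_x = 1 reproduces the plotted shapes:

    import numpy as np

    def unitary_shrink(beta, p, sigma, sigma_x):
        # beta = d_k^T z; c^2 = sigma_x^2 / (sigma_x^2 + sigma^2).
        c2 = sigma_x**2 / (sigma_x**2 + sigma**2)
        # Posterior odds of the atom being inactive vs. active:
        ratio = ((1 - p) * sigma_x / (p * np.sqrt(c2) * sigma)) \
                * np.exp(-c2 * beta**2 / (2 * sigma**2))
        prob_on = 1.0 / (1.0 + ratio)
        mmse = prob_on * c2 * beta                       # smoothed shrinkage
        map_ = np.where(prob_on > 0.5, c2 * beta, 0.0)   # hard thresholding
        return map_, mmse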
35 A Demo
[Figure: relative representation Mean-Squared-Error -- empirical and theoretical curves for the Oracle, MMSE, and MAP estimators.]
These results correspond to a unitary dictionary of size n = 100. In this case, exact MAP and MMSE are accessible, and we also have formulae for the resulting errors. Parameters: p = 0.1; results averaged over many experiments.
36 MMSE: Back to Reality
37 The Main Lesson
The main lesson to take from the above discussion is this: even under the Sparseland model, which assumes that our signals are built as sparse combinations of atoms, estimation of such signals is better done by using a dense representation.
38 Implications
This is a counter-intuitive statement, and yet it is critical to the use of Sparseland. It may explain a few of the results we saw earlier:
In the deblurring experiment we got the best result for a very dense representation.
In the thresholding-based denoising experiment, the best threshold led to a dense representation.
Warning: this does not mean that any dense representation is good!!
39 Merging MMSE Estimation into Our Algorithms
The concept of MMSE estimation can be added to various algorithms in order to boost their performance. For example:
o In the deblurring task, one may seek several sparse explanations of the signal by Random-OMP and then average them. This is challenging due to the dimensions involved.
o When denoising patches (e.g., in the K-SVD algorithm), one could replace the OMP by a Random-OMP and get a better denoising outcome. In practice, the benefit is low due to the subsequent patch-averaging.
40 Here is Another Manifestation of MMSE
Consider the following rationale:
Observation: K-SVD denoising applied on portions of an image leads to improved results.
Thus: it seems that a locally adaptive dictionary is beneficial. At the extreme, use a different dictionary for each pixel -- use the surrounding patches as the atoms.
A sparse representation in this case is bad: the atoms are noisy, and such a dictionary is too hard to train.
Alternative: an MMSE estimate, allowing a dense combination of atoms.
41 The NLM Algorithm: The Practice
    \hat{x}_{k_0} = \frac{\sum_k w(k_0, k) \, R_k z}{\sum_k w(k_0, k)}, \qquad w(k_0, k) = \exp\left\{ -\frac{\|R_{k_0} z - R_k z\|_2^2}{h^2} \right\},
where R_k is the operator extracting the patch around location k.
Comments:
Once the cleaned patches are created, they are averaged as they overlap. Note that the original NLM simply uses the center pixel, and thus averaging is not done.
Our interpretation of NLM is very far from the original idea the authors had in mind.
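For contrast with the patch-averaging interpretation above, here is a sketch of the original (center-pixel) NLM rule for a single pixel; boundary handling is omitted, and the search-window and patch half-widths are arbitrary choices of ours:

    import numpy as np

    def nlm_pixel(z, i0, j0, half=3, win=10, h=10.0):
        # Weighted mean of pixels whose surrounding patches resemble
        # the patch around (i0, j0); w = exp(-||patch diff||^2 / h^2).
        ref = z[i0-half:i0+half+1, j0-half:j0+half+1]
        num = den = 0.0
        for i in range(i0 - win, i0 + win + 1):
            for j in range(j0 - win, j0 + win + 1):
                patch = z[i-half:i+half+1, j-half:j+half+1]
                w = np.exp(-np.sum((ref - patch)**2) / h**2)
                num += w * z[i, j]
                den += w
        return num / den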
42 Summary
Sparsity and redundancy are used for denoising of signals/images. How? Estimation theory tells us exactly what should be done (or approximated). All this provides an interesting explanation to earlier observations. Conclusions? Averaging leads to better denoising, as it approximates the MMSE. So, can we do better than OMP?