ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis


ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Yuejie Chi Departments of ECE and BMI The Ohio State University September 24, 2015

Time, location, and office hours
Time: Tue/Thu 5:30-6:55pm. No lectures on 10/1 (Thu) and 11/10 (Tue) due to travel of the instructor; we will extend each lecture by 5 minutes to compensate for these two lectures.
Location: Bolz Hall 314
Instructor: Dr. Yuejie Chi (chi.97@osu.edu)
Office: 606 Dreese Lab
Office hours: Thu 3:00-5:00pm

Underdetermined linear systems We're interested in solving underdetermined systems of linear equations: estimate x ∈ R^n from linear measurements b = Ax ∈ R^m, where m < n. This seems hopelessly ill-posed, since there are more unknowns than equations...
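For intuition, here is a minimal numpy sketch (all sizes illustrative) of why a generic solution does not help: the minimum-l2-norm solution fits the measurements exactly, yet is dense and far from a sparse ground truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 30                       # more unknowns than equations
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)   # 5-sparse truth

A = rng.standard_normal((m, n))
b = A @ x                            # m = 30 measurements of an n = 100 vector

# The minimum-l2-norm solution fits the data exactly...
x_ls = np.linalg.pinv(A) @ b
print(np.allclose(A @ x_ls, b))      # consistent with every equation

# ...but it is dense and far from the sparse ground truth.
print(np.linalg.norm(x_ls - x) / np.linalg.norm(x))
```

The point of the course is that exploiting the sparsity of x turns this ill-posed problem into a solvable one.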

Compressed Sensing Compressed Sensing/Compressive Sensing/Compressive Sampling allows perfect recovery of signals from underdetermined linear systems by exploiting additional structure in x, e.g. sparsity, low-rankness, etc. Compressed Sensing [name coined by David Donoho] was pioneered by Donoho and by Candès, Romberg, and Tao in 2004.

Three motivating examples Through three motivating examples, we will see that such underdetermined linear systems arise quite frequently in science and engineering applications, and exhibit intriguing signal structures that make the problem well-posed and solvable using, e.g., convex/non-convex optimization techniques:
Sparse signal recovery
Low-rank matrix completion
Phase retrieval

Example 1: Sparse signal recovery Conventional paradigm of data acquisition: acquire, then compress. Why can we compress? There's essentially no perceptible loss in quality between the raw image and its JPEG-compressed form.

Sparse representation Sparsity: many real-world signals admit a sparse representation. A signal s ∈ C^n is sparse in a basis Ψ ∈ C^{n×n} if s = Ψx with x having only a few nonzero entries. Images, for instance, are sparse in the wavelet domain: the number of large wavelet coefficients is small, which is what allows compression.

Sparse representation allows compression Take a mega-pixel image. Compute its 10^6 wavelet coefficients. Set to zero all but the 2.5 × 10^4 largest coefficients. Invert the wavelet transform.
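The same keep-the-largest-coefficients experiment can be sketched in a few lines of numpy. As assumptions of this sketch, the FFT stands in for the wavelet transform (numpy ships no wavelets) and a 1-D signal stands in for the image; the 2.5% retention ratio matches the slide.

```python
import numpy as np

# Transform, keep the largest 2.5% of coefficients, invert.
n = 4096
t = np.arange(n) / n
signal = np.cos(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)

coeffs = np.fft.fft(signal, norm="ortho")        # orthonormal transform
k = int(0.025 * n)                               # keep 2.5% of coefficients
thresh = np.sort(np.abs(coeffs))[-k]             # k-th largest magnitude
coeffs[np.abs(coeffs) < thresh] = 0              # zero out all the rest

approx = np.fft.ifft(coeffs, norm="ortho").real  # invert the transform
rel_err = np.linalg.norm(approx - signal) / np.linalg.norm(signal)
print(rel_err)   # tiny, because the signal is sparse in this basis
```

A signal that is sparse in the transform domain survives this drastic truncation almost untouched; that is exactly the compressibility JPEG(2000) exploits.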

Compressed sensing: compression on the fly Why can we not directly acquire the compressed data and then reconstruct? Take m linear measurements y_i = ⟨a_i, x⟩, i = 1, ..., m, of a sparse signal x with few nonzero entries. Mathematically, this gives rise to an underdetermined system of equations in which the signal of interest is sparse.

Sparse recovery Questions of interest include: How to design the measurement/sampling matrix? What are the efficient algorithms? Are they stable with respect to noise? How many measurements are necessary/sufficient? Spoiler: it turns out that on the order of m ≈ K log n random measurements suffice for a K-sparse signal.
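To make the algorithms question concrete, here is a numpy sketch of sparse recovery with orthogonal matching pursuit (OMP), a simple greedy stand-in for the convex (l1) methods the course will develop; the sizes and the choice of OMP are illustrative assumptions, not the course's canonical algorithm.

```python
import numpy as np

def omp(A, b, k):
    """Greedy sparse recovery: select k columns of A to explain b."""
    support, residual, coef = [], b.copy(), None
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef          # re-fit, update residual
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, k, m = 400, 8, 120                   # m on the order of k log n, far below n
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurements
x_hat = omp(A, A @ x, k)
print(np.linalg.norm(x_hat - x))               # small when support recovery succeeds
```

With random Gaussian measurements and m well above k log n, the greedy method finds the true support with high probability, after which the least-squares fit is exact in this noiseless setting.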

CS application in MRI Imaging speed is important in many MRI applications, so it is of great importance to reduce the image acquisition time. MR images are acquired by sampling the k-space (frequency domain), and fully sampling it takes a long time! By subsampling the k-space, the acquisition time is reduced; however, image artifacts arise under conventional reconstruction. Novel algorithms that exploit sparsity allow much better reconstructions.

Transform Domain Sparsity of MRI Figure: Transform-domain sparsity of images. The DCT, wavelet, and finite-differences transforms were calculated for the image (left column). The image was then reconstructed from a subset of 5, 10, and 20% of the largest transform coefficients [Lustig et al. 2007].

Accelerate MRI via CS Reconstruction Figure: Reconstruction from 5-fold accelerated acquisition of first-pass contrast-enhanced abdominal angiography. (a) Reconstruction from complete data; (b) low-resolution (LR); (c) zero-filled with density compensation (ZF-w/dc); (d) CS reconstruction from random subsampling [Lustig et al. 2007].

Many applications of CS CS is expected to become useful when measurements are: expensive (e.g. fuel cell imaging, near-IR imaging); slow (e.g. MRI); beyond current capabilities (e.g. wideband analog-to-digital conversion); wasteful; missing; etc.

Example 2: The Netflix problem In 2006, Netflix offered a $1 million prize to improve its movie rating prediction algorithm. With about a million users and 25,000 movies, the ratings are sparsely sampled. How can we estimate the missing ratings?

Low-rank matrix completion Completion problem: letting M ∈ R^{n_1×n_2} represent the Netflix data, we may model it through a low-rank factorization M = UV^T with U ∈ R^{n_1×r} and V ∈ R^{n_2×r}. In other words, the rank r of M is much smaller than its dimensions: r ≪ min{n_1, n_2}.

Low-rank matrix completion Questions of interest include: What are the efficient algorithms for matrix completion? Are they stable with respect to noise and outliers? How many measurements are necessary/sufficient? Spoiler: under appropriate conditions, matrix completion is possible from m ≳ r max{n_1, n_2} log²(max{n_1, n_2}) samples.
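As an illustration of the algorithms question, here is a numpy sketch of a simple hard-imputation scheme: fill in the observed entries, project back to rank r, repeat. The scheme and all sizes are illustrative assumptions, not the methods analyzed in the course.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 60, 50, 2
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank-r truth
mask = rng.random((n1, n2)) < 0.4        # observe ~40% of the entries

def project_rank(X, r):
    """Best rank-r approximation via the truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((n1, n2))
for _ in range(300):
    # overwrite the observed entries with the data, then re-project to rank r
    X = project_rank(np.where(mask, M, X), r)

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(rel_err)   # small: the missing 60% of entries are filled in
```

Even this naive iteration recovers the missing entries here, because a rank-2 matrix has only r(n_1 + n_2 - r) degrees of freedom, far fewer than the number of observed samples.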

Applications in computer vision Separation of background (low-rank) and foreground (sparse) in video:

Example 3: Phase retrieval In optics, we only observe the magnitude of the measurements but not the phase. What can we tell about a signal from the intensity of its Fourier transform?

X-ray crystallography Useful in many applications, such as X-ray crystallography, which allows determination of atomic structures within a crystal. Example: the discovery of the double-helix structure of DNA (1962 Nobel Prize). There have been 10 Nobel Prizes in X-ray crystallography so far.

Phase retrieval We formulate the phase retrieval problem as follows. Let the measurement vectors be {a_i ∈ R^n}, i = 1, ..., m. For each i, we measure y_i = |⟨a_i, x⟩|², i = 1, ..., m. How can we recover x ∈ R^n? Standard Gerchberg-Saxton (or Fienup) iterative algorithms suffer from local minima. Note that these equations are quadratic in x, and no sparsity is assumed for x, so m ≳ n would be necessary.

Applying lifting Expand y_i = |⟨a_i, x⟩|² = (a_i^T x)(x^T a_i) = a_i^T (xx^T) a_i := a_i^T X a_i, where X = xx^T is a rank-one symmetric matrix. Recovering x is thus equivalent to recovering X (up to a global sign ambiguity). We have again converted the problem to an underdetermined linear system!

Phase retrieval Questions of interest include: How to design the measurement/sampling matrix? What are the efficient algorithms? Are they stable with respect to noise? How many measurements are necessary/sufficient? Spoiler: under appropriate conditions, phase retrieval is possible from m ≳ n samples.
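For a taste of the algorithms question, here is a numpy sketch of phase retrieval from random real Gaussian measurements via spectral initialization followed by alternating minimization (a Gerchberg-Saxton-style sign update plus least squares). The Gaussian measurement model and all parameters are illustrative assumptions; the Fourier-magnitude setting from optics is harder.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 300                     # m on the order of n measurements
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x) ** 2                   # intensity-only measurements

# Spectral initialization: top eigenvector of (1/m) * sum_i y_i a_i a_i^T,
# which concentrates around ||x||^2 I + 2 x x^T.
Y = (A.T * y) @ A / m
w, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(y.mean())   # scale to match the measurement energy

for _ in range(200):
    signs = np.sign(A @ z)                                      # guess lost signs
    z, *_ = np.linalg.lstsq(A, signs * np.sqrt(y), rcond=None)  # re-fit x

# x can only be recovered up to a global sign
rel_err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(rel_err)
```

Starting close enough to the truth (courtesy of the spectral initialization), the sign pattern stabilizes and the least-squares step then solves the now-linear system exactly, sidestepping the local minima that plague random initialization.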

Variations on a common theme We can solve an underdetermined linear system if we have additional information about the underlying signal, together with a well-posed sampling scheme. We will show that for each of the three examples above, there is a convex optimization algorithm that provides near-optimal performance guarantees. We will also discuss extensions of the theory: What if my measurements are nonlinear? What if I have some low-dimensional signal structure that is not sparse/low-rank?

If time allows... Dimensionality reduction Dictionary learning Online learning

Warning There will be no textbook, only references... Boyd and Vandenberghe, Convex Optimization (available online). Foucart and Rauhut, A Mathematical Introduction to Compressive Sensing (available online). Vershynin, Introduction to the Non-Asymptotic Analysis of Random Matrices (available as arXiv:1011.3027). We will also provide references to related papers in the literature. There will be quite a few PROOFS... This course is research-oriented.

Prerequisites Probability; linear algebra; knowledge of convex optimization is a plus; mathematical maturity; knowing how to use MATLAB.

Grading Homework: 25%. We shall have no more than five homework assignments. Midterm paper: 25% (due at the beginning of Week 9): a literature review that sets the stage for the final project. Final project: 50% (in-class presentation 15% + final report 35%). Around Week 12, we will ask for a short project proposal/progress report to make sure you're on the right track. In-class presentations take place during Week 16, with the final report due at the end of Week 16.

Projects Projects should be independent work. Topics are flexible: a project can be related to your current research, but it must use something learned from this course. A good project should contain something novel and useful: it could be an application of existing algorithms to a new domain, a proposal of a new algorithm, a new theoretical analysis of an existing algorithm, etc. It is important that you justify its novelty!

Project evaluation In-class presentation: each student will be allocated 15 minutes - this is about the time for a conference presentation. Final report: it should follow the format of an IEEE journal paper. (It could become a publication of yours!)

How to best prepare for the lectures Read, read, read! Chapters 1-2 of [Foucart-Rauhut]. "Introduction to Compressed Sensing", book chapter by Davenport et al.: http://statweb.stanford.edu/~markad/publications/ddek-chapter1-2011.pdf