Constrained adaptive sensing


Constrained adaptive sensing. Deanna Needell, Claremont McKenna College, Dept. of Mathematics.

Joint work with Mark Davenport, Andrew Massimino, and Tina Woolf.

Sensing sparse signals. We observe noisy linear measurements $y_j = \langle a_j, x \rangle + z_j$, $j = 1, \dots, m$, where $x \in \mathbb{R}^n$ is $k$-sparse ($\|x\|_0 \le k$). When (and how well) can we estimate $x$ from the measurements?

Nonadaptive sensing. Prototypical sensing model: $y = Ax + z$, where $A$ is an $m \times n$ matrix with unit-norm rows and $z \sim \mathcal{N}(0, \sigma^2 I)$. There exist matrices $A$ and recovery algorithms that produce an estimate $\hat{x}$ such that for any $x$ with $\|x\|_0 \le k$ we have $\|\hat{x} - x\|_2^2 \lesssim \frac{n}{m} k \log(n/k)\, \sigma^2$. Conversely, for any matrix $A$ and any recovery algorithm, there exist $x$ with $\|x\|_0 \le k$ such that $\mathbb{E}\|\hat{x} - x\|_2^2 \gtrsim \frac{n}{m} k \sigma^2$.
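
As a small illustration of this model (not from the original slides), the following Python sketch senses a $k$-sparse signal with a Gaussian matrix and recovers it with orthogonal matching pursuit; all problem sizes are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k, sigma = 256, 80, 5, 0.05

# k-sparse signal
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Gaussian sensing matrix, rows normalized to unit norm
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)

y = A @ x + sigma * rng.standard_normal(m)

# sparse recovery via orthogonal matching pursuit
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
xhat = omp.fit(A, y).coef_
print("relative error:", np.linalg.norm(xhat - x) / np.linalg.norm(x))
```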

Adaptive sensing. Think of sensing as a game of 20 questions: the measurement vector $a_j$ may now be chosen based on the previous observations $y_1, \dots, y_{j-1}$. Simple strategy: use half of our sensing energy to find the support, and the remainder to estimate the values.

Thought experiment. Suppose that after the first stage we have perfectly estimated the support $\Lambda$. Spending the remaining sensing energy on a least-squares estimate of the $k$ nonzero values then yields an error on the order of $\frac{k^2}{m} \sigma^2$, with no dependence on the ambient dimension $n$.

Benefits of adaptivity. Adaptivity offers the potential for tremendous benefits. Suppose we wish to estimate a $1$-sparse vector whose nonzero has amplitude $\mu$: no method can find the nonzero when $\mu \lesssim \sigma\sqrt{n/m}$, but a simple binary search procedure will succeed in finding the location of the nonzero with high probability once $\mu$ exceeds this threshold by a modest (logarithmic) factor. Not hard to extend to $k$-sparse vectors. See Arias-Castro, Candès, Davenport; Castro; Malloy, Nowak. Provided that the SNR is sufficiently large, adaptivity can reduce our error dramatically, essentially removing the dependence on the ambient dimension $n$!
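
A minimal sketch of the binary-search idea for a $1$-sparse signal, assuming unconstrained (unit-norm) measurement vectors; a careful implementation would also divide the energy budget across levels, which this sketch glosses over.

```python
import numpy as np

def bisect_support(x, sigma, reps=4, rng=None):
    """Locate the nonzero of a 1-sparse vector by noisy bisection.

    At each level, measure each half of the current candidate interval
    with a unit-norm indicator vector (averaging `reps` noisy
    measurements) and descend into the half with the larger response.
    """
    rng = np.random.default_rng(rng)
    n = len(x)
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        responses = []
        for a_lo, a_hi in ((lo, mid), (mid, hi)):
            a = np.zeros(n)
            a[a_lo:a_hi] = 1.0 / np.sqrt(a_hi - a_lo)  # unit-norm test vector
            y = np.mean([a @ x + sigma * rng.standard_normal()
                         for _ in range(reps)])
            responses.append(abs(y))
        lo, hi = (lo, mid) if responses[0] > responses[1] else (mid, hi)
    return lo

# example: nonzero at index 37
x = np.zeros(64); x[37] = 2.0
print(bisect_support(x, sigma=0.1))  # -> 37 (with high probability)
```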

Sensing with constraints. Existing approaches to adaptivity require the ability to acquire arbitrary linear measurements, but in many (most?) real-world systems our measurements are highly constrained. Suppose we are limited to using measurement vectors chosen from some fixed (finite) ensemble $\{v_1, \dots, v_N\}$. How much room for improvement do we have in this case? How should we actually go about adaptively selecting our measurements?

Room for improvement? It depends! If $x$ is $k$-sparse and the $a_j$ are chosen (potentially adaptively) by selecting rows from the DFT matrix, then for any adaptive scheme we will have $\mathbb{E}\|\hat{x} - x\|_2^2 \gtrsim \frac{n}{m} k \sigma^2$, i.e., essentially no improvement over the nonadaptive rate. On the other hand, if the ensemble contains vectors which are better aligned with our class of signals (or if $x$ is sparse in an alternative basis/dictionary), then dramatic improvements may still be possible.

How to adapt? Suppose we knew the locations $\Lambda$ of the nonzeros. One can show that the error of the oracle least-squares estimator in this case is given by $\mathbb{E}\|\hat{x} - x\|_2^2 = \sigma^2 \operatorname{tr}\big[(A_\Lambda^* A_\Lambda)^{-1}\big]$, where $A_\Lambda$ denotes the matrix with rows given by the sequence $a_1, \dots, a_m$, restricted to the columns indexed by $\Lambda$. Ideally, we would like to choose the sequence $a_1, \dots, a_m$ to minimize $\operatorname{tr}\big[(A_\Lambda^* A_\Lambda)^{-1}\big]$.
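
A quick Monte Carlo sanity check of this oracle formula (the ensemble and sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k, sigma = 64, 32, 3, 0.1

support = np.sort(rng.choice(n, size=k, replace=False))
x = np.zeros(n); x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)  # unit-norm rows
AL = A[:, support]                             # restrict to the support

# predicted oracle error: sigma^2 * tr[(A_L^T A_L)^{-1}]
predicted = sigma**2 * np.trace(np.linalg.inv(AL.T @ AL))

# empirical error of least squares restricted to the known support
errs = []
for _ in range(2000):
    y = A @ x + sigma * rng.standard_normal(m)
    coef, *_ = np.linalg.lstsq(AL, y, rcond=None)
    errs.append(np.sum((coef - x[support]) ** 2))
print(predicted, np.mean(errs))
```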

A toy problem. Suppose our signal is $1$-sparse in the Haar wavelet basis and, after $m/2$ measurements, we know the location $\Lambda$ of the nonzero. Which Fourier measurements should we take next, and how large is the resulting error? We want to minimize the oracle error $\sigma^2 / \sum_j |\langle f_j, h_\Lambda \rangle|^2$, so we want to maximize $\sum_j |\langle f_j, h_\Lambda \rangle|^2$. One easily optimizes this by repeatedly sampling with the single row $f_j$ that maximizes $|\langle f_j, h_\Lambda \rangle|$.
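
A sketch of this greedy selection: compute $|\langle f_j, h_\Lambda \rangle|^2$ for every row of the unitary DFT and repeatedly sample the best one. The haar_wavelet helper and the particular scale/shift are illustrative assumptions, not the talk's exact setup.

```python
import numpy as np

n = 64  # signal length (power of 2)

def haar_wavelet(n, scale, shift):
    """Unit-norm discrete Haar wavelet: +1 on the first half of its
    support, -1 on the second half."""
    width = n // 2**scale
    h = np.zeros(n)
    h[shift * width : shift * width + width // 2] = 1.0
    h[shift * width + width // 2 : (shift + 1) * width] = -1.0
    return h / np.linalg.norm(h)

F = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT; rows are the f_j

h = haar_wavelet(n, scale=2, shift=1)   # the (known) wavelet h_Lambda
corr = np.abs(F @ h) ** 2               # |<f_j, h_Lambda>|^2 for each row
best = int(np.argmax(corr))
print("best row:", best, "energy per measurement:", corr[best])
# Greedy strategy: spend the remaining m/2 measurements on row `best`;
# the resulting oracle error is sigma^2 / ((m/2) * corr[best]).
```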

A toy problem. For this Fourier/Haar pair the MSE of the repeat-the-best-row strategy can be computed in closed form, and it is within a constant factor of the lower bound for this constrained ensemble: the simple greedy strategy matches the lower bound.

A toy problem. How many signals actually benefit?

How to adapt? Recall: if we knew the locations $\Lambda$ of the nonzeros, the oracle error would be $\mathbb{E}\|\hat{x} - x\|_2^2 = \sigma^2 \operatorname{tr}\big[(A_\Lambda^* A_\Lambda)^{-1}\big]$, and ideally we would choose the sequence $a_1, \dots, a_m$ to minimize $\operatorname{tr}\big[(A_\Lambda^* A_\Lambda)^{-1}\big]$.

Convex relaxation. We would like to solve $\min \operatorname{tr}\big[(A_\Lambda^* A_\Lambda)^{-1}\big]$ over all choices of $m$ rows from the ensemble, a combinatorial problem. Instead we consider the relaxation $\min_{w \ge 0,\ \sum_i w_i \le m} \operatorname{tr}\big[(V_\Lambda^* \operatorname{diag}(w)\, V_\Lambda)^{-1}\big]$, where $V$ is the matrix of candidate vectors and $V_\Lambda$ its restriction to the columns indexed by $\Lambda$. The entries of $w$ tell us how much of each sensing vector we should use, and the constraint $\sum_i w_i \le m$ ensures that the total sensing energy is at most $m$ (assuming $V$ has unit-norm rows). This is equivalent to the A-optimality criterion in optimal experimental design.
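
A sketch of this relaxation in CVXPY, using the matrix_frac atom ($\operatorname{matrix\_frac}(X, P) = \operatorname{tr}(X^T P^{-1} X)$, so with $X = I$ it is the trace of the inverse). A real-valued random ensemble stands in for the candidate vectors to sidestep complex arithmetic.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
N, n, k, m = 120, 64, 3, 40  # candidates, dimension, sparsity, energy budget

# candidate ensemble with unit-norm rows (real-valued for simplicity)
V = rng.standard_normal((N, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)

support = rng.choice(n, size=k, replace=False)
VL = V[:, support]           # candidate vectors restricted to Lambda

w = cp.Variable(N, nonneg=True)
M = VL.T @ cp.diag(w) @ VL   # V_Lambda^T diag(w) V_Lambda
# matrix_frac(I, M) = tr(M^{-1}), convex in M
prob = cp.Problem(cp.Minimize(cp.matrix_frac(np.eye(k), M)),
                  [cp.sum(w) <= m])
prob.solve()
print("relaxed objective tr[(V_L^T diag(w) V_L)^{-1}]:", prob.value)
```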

Generating the sensing matrix. In practice, the optimal weight vector $w$ tends to be somewhat sparse, placing high weight on a small number of measurements and low weights on many others. Where sensing energy is the operative constraint (as opposed to the number of measurements), we can sense directly with the reweighted rows $\sqrt{w_i}\, v_i$. If we wish to take exactly $m$ measurements, one option is to draw $m$ measurement vectors by sampling with replacement according to the probability mass function $w / \sum_i w_i$.
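
Continuing the previous sketch, drawing exactly $m$ measurement vectors from the optimized weights:

```python
# Draw m candidate rows with replacement from the pmf induced by the
# optimized weights w (reusing w, V, N, m, rng from the sketch above).
pmf = np.maximum(w.value, 0)
pmf /= pmf.sum()
rows = rng.choice(N, size=m, replace=True, p=pmf)
A_sense = V[rows]  # the adaptively designed sensing matrix
```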

Example. DFT measurements of a signal with a sparse Haar wavelet transform (supported on a connected tree). Recovery performed using CoSaMP.

Constrained sensing in practice. The oracle adaptive approach can be used as a building block for a practical algorithm. Simple approach: divide the sensing energy / measurements in half; use the first half by randomly selecting measurement vectors and running a conventional sparse recovery algorithm to estimate the support; then use this support estimate to choose the second half of the measurements. An outline follows below.
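
An illustrative outline of this two-stage pipeline. The recover and design arguments are hypothetical callbacks (e.g., CoSaMP for recovery and sampling from the optimized weights for the design step), not functions from the paper.

```python
import numpy as np

def two_stage_sense(x, V, m, sigma, recover, design, rng=None):
    """Two-stage constrained adaptive sensing (illustrative outline).

    Stage 1: spend m/2 randomly chosen rows of the candidate ensemble V
    and estimate the support with a conventional sparse recovery
    algorithm (`recover`). Stage 2: choose the remaining rows with the
    support-aware design rule (`design`), then re-estimate the values
    by least squares on the estimated support.
    """
    rng = np.random.default_rng(rng)
    N, n = V.shape
    m1 = m // 2

    # Stage 1: random rows + conventional sparse recovery
    idx1 = rng.choice(N, size=m1, replace=False)
    A1 = V[idx1]
    y1 = A1 @ x + sigma * rng.standard_normal(m1)
    support = recover(A1, y1)           # estimated support set

    # Stage 2: adaptively chosen rows, then least squares on the support
    idx2 = design(V, support, m - m1)   # e.g. sample from the optimal pmf
    A2 = V[idx2]
    y2 = A2 @ x + sigma * rng.standard_normal(m - m1)

    A = np.vstack([A1, A2])[:, support]
    coef, *_ = np.linalg.lstsq(A, np.concatenate([y1, y2]), rcond=None)
    xhat = np.zeros(n); xhat[support] = coef
    return xhat
```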

Simulation results

Summary. Adaptivity (sometimes) allows tremendous improvements. It is not always easy to realize these improvements in the constrained setting: existing algorithms are not applicable, and the room for improvement may not be quite as large. Simple strategies for adaptively selecting the measurements based on convex optimization can be surprisingly effective.

Thank You! arXiv:1506.05889 http://www.cmc.edu/pages/faculty/dneedell/