Dictionary Learning for Blind One Bit Compressed Sensing


Hadi Zayyani, Mehdi Korki, Student Member, IEEE, and Farrokh Marvasti, Senior Member, IEEE

Abstract: This letter proposes a dictionary learning algorithm for blind one bit compressed sensing. In the blind one bit compressed sensing framework, the original signal to be reconstructed from one bit linear random measurements is sparse in an unknown domain. In this context, the multiplication of the measurement matrix $\mathbf{\Phi}$ and the sparse domain matrix $\mathbf{\Psi}$, i.e., $\mathbf{D}=\mathbf{\Phi}\mathbf{\Psi}$, should be learned. Hence, we use dictionary learning to train this matrix. Towards that end, an appropriate continuous convex cost function is suggested for one bit compressed sensing, and a simple steepest-descent method is exploited to learn the rows of the matrix $\mathbf{D}$. Experimental results show the effectiveness of the proposed algorithm against the case of no dictionary learning, especially as the number of training signals and the number of sign measurements increase.

Index Terms: Compressed sensing, dictionary learning, one bit measurements, steepest-descent.

Manuscript received August 28, 2015; revised November 11, 2015; accepted November 24, 2015. Date of publication November 25, 2015; date of current version December 21, 2015. The work of H. Zayyani was supported by the Iran National Elites Foundation. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Michael Lexa. H. Zayyani is with the Department of Electrical and Computer Engineering, Qom University of Technology, Qom 12345, Iran (e-mail: zayyani@qut.ac.ir). M. Korki is with the Department of Telecommunications, Electrical, Robotics, and Biomedical Engineering, Swinburne University of Technology, Hawthorn, VIC 3122, Australia (e-mail: mkorki@swin.edu.au). F. Marvasti is with the Department of Electrical Engineering, Sharif University of Technology, Tehran 14174, Iran (e-mail: marvasti@sharif.edu). Digital Object Identifier 10.1109/LSP.2015.2503804

I. INTRODUCTION

One bit compressed sensing, the extreme case of quantized compressed sensing [1], has been extensively investigated recently [2]-[11]. According to compressed sensing (CS) theory, a sparse signal can be reconstructed from a number of linear measurements which can be much smaller than the signal dimension [12], [13]. Classical CS neglects the quantization process and assumes that the measurements are real continuous values. In practice, however, the measurements must be quantized to some discrete levels. This is known as quantized compressed sensing [1]. In the extreme case, there are only two discrete levels. This is called one bit compressed sensing, and it has recently gained much attention in the research community [2]-[11], especially in wireless sensor networks [9]. In the one bit compressed sensing framework, it is proved that an accurate and stable recovery can be achieved by using only the sign of linear measurements [5].

Many algorithms have been developed for one bit compressed sensing. A renormalized fixed-point iteration (RFPI) algorithm based on $\ell_1$-norm minimization was presented in [2], and a matching sign pursuit (MSP) algorithm was proposed in [3]. A binary iterative hard thresholding (BIHT) algorithm, introduced in [5], has been shown to have better recovery performance than MSP.
Moreover, a restricted-step shrinkage (RSS) algorithm devised in [4] has provable convergence guarantees. In addition to noise-free settings, there may be noisy sign measurements. In this case, we may encounter sign flips, which worsen the performance. In [6], an adaptive outlier pursuit (AOP) algorithm is developed to detect the sign flips and reconstruct the signals with very high accuracy even when there is a large number of sign flips. Moreover, the noise-adaptive RFPI (NARFPI) algorithm combines the ideas of RFPI and AOP [7]. In addition, [8] proposes a convex approach to the problem. Recently, a one bit Bayesian compressed sensing method [10] and a MAP approach [11] have been developed for solving the problem.

The basic assumption imposed by CS is that the signal is sparse in some domain, i.e., in a dictionary. The dictionary may be predefined, or it may be constructed for a class of signals. A dictionary learning algorithm attempts to find an adaptive dictionary for sparse representation of a class of signals [14]. The most important dictionary learning algorithms are the method of optimal directions (MOD) [15] and K-SVD [16]. Some research works use dictionary learning within the CS framework [17]-[19]. However, to the best of our knowledge, dictionary learning in the one bit CS framework has not been investigated in the literature.

Blind compressed sensing [20] is an extension of CS in which the sparse domain is not known in advance. For example, for unknown signals we may not know the sparse domain in advance, or we may have only a perturbed version of the sparse domain matrix [21]. In these cases, we use the blind CS framework. One application of blind CS is the recovery of dynamic magnetic resonance images from undersampled measurements [22].

In this letter, similar to blind CS, we introduce the notion of blind one bit compressed sensing. In conventional one bit CS, we need to know both the measurement matrix $\mathbf{\Phi}$ and the sparse domain matrix $\mathbf{\Psi}$ to form the multiplication matrix $\mathbf{D}=\mathbf{\Phi}\mathbf{\Psi}$. In the sequel, however, we assume that the sparse domain matrix is unknown. Thus, we learn the matrix $\mathbf{D}$ by minimizing an appropriate cost function. The proposed algorithm, like most dictionary learning algorithms, iterates between two steps: the first step is one bit CS recovery, and the second step is the dictionary update.

Simulation results show the effectiveness of the proposed algorithm for reconstructing the sparse vector from one bit linear random measurements, especially when the number of training signals and sign measurements is large.

The rest of the letter is organized as follows. Section II introduces the proposed algorithm, including the problem formulation and the two steps of the algorithm. Simulation results are presented in Section III. Finally, conclusions are drawn in Section IV.

II. THE PROPOSED ALGORITHM

A. Problem Formulation

Consider a signal $\mathbf{x}=\mathbf{\Psi}\mathbf{s}$ in an unknown domain $\mathbf{\Psi}$, where $\mathbf{s}$ is a sparse vector. In one bit compressed sensing, only the signs of the linear random measurements are available, i.e.,

$$\mathbf{y}=\mathrm{sign}(\mathbf{\Phi}\mathbf{x}+\mathbf{n})=\mathrm{sign}(\mathbf{\Phi}\mathbf{\Psi}\mathbf{s}+\mathbf{n}),\tag{1}$$

where $\mathbf{\Phi}$ is a random measurement matrix, $\mathbf{y}$ is the sign measurement vector, and $\mathbf{n}$ is the measurement noise vector, which is assumed to be i.i.d. random Gaussian. We aim to estimate the sparse vector $\mathbf{s}$ from only the sign measurements. The problem is to find $\mathbf{D}=\mathbf{\Phi}\mathbf{\Psi}$ and then $\mathbf{s}$ from the sign measurements; hereafter, the matrix $\mathbf{D}$ is called the dictionary. The sparse domain $\mathbf{\Psi}$ is unknown in advance; as a result, the dictionary $\mathbf{D}$ is also unknown. We use $L$ training signals to learn the dictionary matrix from their sign measurements. The overall problem of dictionary learning for one bit compressed sensing is to find the sparse matrix $\mathbf{S}=[\mathbf{s}_1,\dots,\mathbf{s}_L]$ and the dictionary $\mathbf{D}$ from a training matrix

$$\mathbf{Y}=\mathrm{sign}(\mathbf{D}\mathbf{S}+\mathbf{N}),\tag{2}$$

where $\mathbf{N}$ is the collection of noise vectors. After learning the matrix $\mathbf{D}$ with the proposed algorithm and finding the sparse matrix $\mathbf{S}$, the estimate of the sparse domain matrix is

$$\hat{\mathbf{\Psi}}=\mathbf{\Phi}^{\dagger}\hat{\mathbf{D}},\tag{3}$$

where $\mathbf{\Phi}^{\dagger}$ is the left inverse of $\mathbf{\Phi}$. Finally, the estimate of the original signals will be $\hat{\mathbf{X}}=\hat{\mathbf{\Psi}}\hat{\mathbf{S}}$.

B. Two Steps of the Proposed Algorithm

Inspired by most dictionary learning algorithms [14], the optimization problem for one bit blind CS is to minimize

$$\sum_{l=1}^{L}\big\|\mathbf{y}_l-\mathrm{sign}(\mathbf{D}\mathbf{s}_l)\big\|_p^p+\lambda\sum_{l=1}^{L}\|\mathbf{s}_l\|_q,$$

where the first term is the error term with $p$ either 1 or 2, the second term is the sparsity regularization term with $q$ either 0 or 1, and $\lambda$ is a weighting coefficient. Similar to most dictionary learning algorithms [14], we divide the problem into two steps. The first step is sparse recovery from one bit measurements when the dictionary is fixed. The second step is the dictionary update when the sparse coefficients are fixed.

1) One Bit Compressed Sensing (Dictionary Is Fixed): Various algorithms have been proposed to solve the conventional one bit compressed sensing (CS) problem, such as BIHT [5], MSP [3], RFPI [2], and AOP [6], to name a few. Because of its simplicity, in this letter we use the BIHT algorithm to perform sparse recovery for all of the training signals. Note that the proposed dictionary learning algorithm can use any of the sparse recovery methods in the one bit CS framework. For notational convenience, we use the following notation for this step:

$$\hat{\mathbf{s}}_l=\mathrm{BIHT}(\mathbf{y}_l,\mathbf{D},k),\quad l=1,\dots,L.\tag{4}$$
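As a concrete illustration of this step, the following is a minimal Python sketch of a BIHT-style iteration under the model (1); the step size tau, the sparsity level k, and the iteration count are illustrative assumptions, not values prescribed by the letter.

```python
import numpy as np

def biht(y, D, k, tau=1.0, n_iter=20):
    """BIHT-style recovery (sketch): estimate a k-sparse s from y = sign(D s + n)."""
    m, n = D.shape
    s = np.zeros(n)
    for _ in range(n_iter):
        # Gradient step that pushes sign(D s) toward the observed signs y.
        s = s + (tau / m) * (D.T @ (y - np.sign(D @ s)))
        # Hard thresholding: keep only the k largest-magnitude coefficients.
        small = np.argsort(np.abs(s))[:-k]
        s[small] = 0.0
    # One-bit measurements lose all amplitude information, so normalize.
    nrm = np.linalg.norm(s)
    return s / nrm if nrm > 0 else s
```

In the proposed algorithm this routine is simply applied column by column with the current dictionary estimate, as in (4).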
2) Dictionary Update (Sparse Matrix Is Fixed): For the dictionary update, since we have only the signs of the measurements, it is infeasible to use classical dictionary learning algorithms such as MOD [15] or K-SVD [16]. In order to learn the dictionary, we propose the following cost function for the one bit CS framework:

$$f(\mathbf{D})=\sum_{l=1}^{L}\big\|\mathbf{y}_l-\mathrm{sign}(\mathbf{D}\mathbf{s}_l)\big\|_p^p=\sum_{l=1}^{L}\sum_{i=1}^{m}\big|y_{il}-\mathrm{sign}(\mathbf{d}_i^T\mathbf{s}_l)\big|^p,\tag{5}$$

where $p$ can be either 1 or 2 and $\mathbf{d}_i^T$ is the $i$'th row of the dictionary matrix. Therefore, the dictionary update step is to solve the following optimization problem:

$$\hat{\mathbf{D}}=\arg\min_{\mathbf{D}}f(\mathbf{D}),\tag{6}$$

where $f(\mathbf{D})$ is given in (5). The optimization problem in (6) can be divided into sub-optimization problems to find the rows $\mathbf{d}_i$ of the dictionary matrix. The sub-optimization problems are

$$\hat{\mathbf{d}}_i=\arg\min_{\mathbf{d}_i}\sum_{l=1}^{L}\big|y_{il}-\mathrm{sign}(\mathbf{d}_i^T\mathbf{s}_l)\big|^p.\tag{7}$$

To solve (7), we use a continuous S-shaped approximation $S(\cdot)$ of the sign function,

$$\mathrm{sign}(x)\approx S(x).\tag{8}$$

Hence, the sub-optimization problem is

$$\hat{\mathbf{d}}_i=\arg\min_{\mathbf{d}_i}g_i(\mathbf{d}_i)=\arg\min_{\mathbf{d}_i}\sum_{l=1}^{L}\big|y_{il}-S(\mathbf{d}_i^T\mathbf{s}_l)\big|^p.\tag{9}$$

Thanks to the approximation, the cost function in (9) is continuous. It can be shown that with this approximation $g_i$ is convex with respect to $\mathbf{d}_i$; the proof is postponed to the Appendix. Hence, it has a unique minimizer, which can be found by parallel simple steepest-descent methods, each responsible for finding one row of the dictionary. The recursion of the $i$'th steepest descent is

$$\mathbf{d}_i^{(k+1)}=\mathbf{d}_i^{(k)}-\mu\,\frac{\partial g_i}{\partial\mathbf{d}_i}\bigg|_{\mathbf{d}_i=\mathbf{d}_i^{(k)}},\tag{10}$$

where the partial derivative is equal to

$$\frac{\partial g_i}{\partial\mathbf{d}_i}=-p\sum_{l=1}^{L}\big|y_{il}-S(\mathbf{d}_i^T\mathbf{s}_l)\big|^{p-1}\,\mathrm{sign}\big(y_{il}-S(\mathbf{d}_i^T\mathbf{s}_l)\big)\,S'(\mathbf{d}_i^T\mathbf{s}_l)\,\mathbf{s}_l.\tag{11}$$

Substituting (11) into (10) results in the final recursion

$$\mathbf{d}_i^{(k+1)}=\mathbf{d}_i^{(k)}+\mu'\sum_{l=1}^{L}\big|y_{il}-S(\mathbf{d}_i^T\mathbf{s}_l)\big|^{p-1}\,\mathrm{sign}\big(y_{il}-S(\mathbf{d}_i^T\mathbf{s}_l)\big)\,S'(\mathbf{d}_i^T\mathbf{s}_l)\,\mathbf{s}_l,\tag{12}$$

where the multiplicative term $p$ is absorbed into the step-size parameter $\mu'$, and $S'(\cdot)$ is the derivative of $S(\cdot)$. Therefore, the overall algorithm is a two-step iterative algorithm which iterates between (4) and (12). We call the two versions of our algorithm DL-BIHT ($p=1$) and DL-BIHT ($p=2$), respectively.
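The following Python sketch shows the row-wise update (12) applied to all rows at once, together with the overall two-step loop that alternates it with (4) (reusing the biht sketch above). Here tanh is used only as one illustrative choice of the S-shaped surrogate S, and the step size mu and iteration counts are assumptions, not values from the letter.

```python
import numpy as np

def update_dictionary(D, S_hat, Y, mu=0.05, p=2):
    """One steepest-descent pass of (12) over all rows of D simultaneously.

    S(u) = tanh(u) is an illustrative smooth surrogate for sign(u);
    its derivative is S'(u) = 1 - tanh(u)**2.
    """
    U = D @ S_hat                      # pre-sign values d_i^T s_l, shape (m, L)
    E = Y - np.tanh(U)                 # residuals y_il - S(d_i^T s_l)
    if p == 1:
        E = np.sign(E)                 # |.|^{p-1} sign(.) reduces to sign(.)
    # Descent step for every row; the factor p is absorbed into mu as in (12).
    return D + mu * ((E * (1.0 - np.tanh(U) ** 2)) @ S_hat.T)

def dl_biht(Y, D0, k, n_outer=40):
    """Blind one-bit CS: alternate sparse recovery (4) and dictionary update (12)."""
    D = D0.copy()
    for _ in range(n_outer):
        # Step 1: one-bit sparse recovery per training signal (dictionary fixed).
        S_hat = np.column_stack([biht(Y[:, l], D, k) for l in range(Y.shape[1])])
        # Step 2: dictionary update (sparse matrix fixed).
        D = update_dictionary(D, S_hat, Y)
    return D, S_hat
```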
III. SIMULATION RESULTS

This section presents the simulation results. In the simulations, the unknown sparse vector is drawn from a Bernoulli-Gaussian (BG) model with a given activity probability and a given variance of the active samples. To resolve the amplitude ambiguity that arises in one bit compressed sensing, we normalize the sparse vector to have unit norm. The elements of the sensing matrix $\mathbf{\Phi}$ are obtained from a standard Gaussian distribution. A canonical basis is assumed for the sparse domain matrix $\mathbf{\Psi}$, and the size of the signal vector and the number of atoms are fixed across the experiments. The additive noise is a Gaussian random variable. For initialization of the sparse domain matrix and the dictionary, we use perturbed versions of the true matrices. For the BIHT algorithm, we used 20 iterations, with the remaining parameter chosen as in [5].

In the first experiment, we examine the convergence behavior of the proposed cost function for different values of the step-size parameter $\mu$. The number of iterations is selected as 40, which is sufficient for the convergence of the cost function in most of the simulation cases. Fig. 1 shows the cost function versus the number of iterations for both DL-BIHT ($p=1$) and DL-BIHT ($p=2$) and for three values of $\mu$. Both DL-BIHT ($p=1$) and DL-BIHT ($p=2$) exhibit monotonically decreasing cost functions that reach their lowest values after almost 40 iterations. Among the three step sizes, the best value leads to the fastest convergence; we use this value in the subsequent experiments.

Fig. 1. Cost function versus the number of iterations.

In the second experiment, we use the Normalized Mean Square Error (NMSE) as a performance metric, defined as $\mathrm{NMSE}=\|\hat{\mathbf{x}}-\mathbf{x}\|_2^2/\|\mathbf{x}\|_2^2$, where $\hat{\mathbf{x}}$ is the estimate of the true signal $\mathbf{x}$. All NMSEs are averaged over 50 Monte Carlo (MC) simulations. Fig. 2 shows the NMSE performance versus the number of training signals for DL-BIHT ($p=1$), DL-BIHT ($p=2$), the blind algorithm without dictionary learning (DL), and the non-blind algorithm. In the blind case without dictionary learning, the initial guess of the dictionary is used to estimate the sparse vectors by the BIHT algorithm; the signal estimate is then obtained as explained in Section II-A. In the non-blind case, the sparse domain matrix is known in advance. It is seen that the DL-BIHT algorithms outperform the case without dictionary learning by a 4 dB performance gain, while the non-blind case outperforms the DL-BIHT algorithms by about 5.5 dB. Hence, the proposed algorithms are trade-offs between the blind (no dictionary learning) and non-blind cases.

Fig. 2. NMSE versus number of training signals.

In the third experiment, we explore the role of the number of measurements. The number of training signals is fixed, and the other parameters are the same as in the second experiment. The number of sign measurements varies from 100 to 500. Fig. 3 shows the NMSE performance versus the number of sign measurements. The figure shows that, as the number of measurements increases, the performance of recovering the original signal by the proposed algorithms improves. Also, both DL-BIHT ($p=1$) and DL-BIHT ($p=2$) significantly outperform the case without dictionary learning. In particular, when the number of measurements is 500, both algorithms achieve about a 15 dB performance gain with respect to the case of no dictionary learning. Moreover, the non-blind case performs better than the DL-BIHT algorithms by about a 12.5 dB gain.

Fig. 3. NMSE versus number of sign measurements.
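A short sketch of the experimental pipeline as described above: Bernoulli-Gaussian sparse vectors normalized to unit norm, and the NMSE metric reported in dB. The activity probability and dimensions are placeholders, since the specific values did not survive the transcription.

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_gaussian(n, p_active=0.05, sigma=1.0):
    """Draw a BG sparse vector (placeholder activity probability) and
    normalize it to unit norm, resolving the one-bit amplitude ambiguity."""
    s = (rng.random(n) < p_active) * rng.normal(0.0, sigma, n)
    nrm = np.linalg.norm(s)
    return s / nrm if nrm > 0 else bernoulli_gaussian(n, p_active, sigma)

def nmse_db(x_true, x_hat):
    """NMSE = ||x_hat - x||^2 / ||x||^2, reported in dB."""
    return 10.0 * np.log10(np.linalg.norm(x_hat - x_true) ** 2
                           / np.linalg.norm(x_true) ** 2)
```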

In the fourth experiment, we investigate the effect of the measurement noise and the data noise. The data noise enters the model generating the signals and is a Gaussian random variable; the measurement noise is the noise term $\mathbf{n}$ in (1). The measurement SNR and the data SNR are defined with respect to these two noise sources, respectively. The other parameters are the same as in the third experiment, and the number of sign measurements is equal to 100. Fig. 4(a) shows the NMSE versus the measurement SNR with a fixed data SNR, while Fig. 4(b) illustrates the NMSE versus the data SNR with a fixed measurement SNR. From Fig. 4, it is observed that the proposed algorithms outperform the blind case without DL.

Fig. 4. NMSE versus Signal to Noise Ratio. (a) Fixed data noise and variable measurement noise. (b) Fixed measurement noise and variable data noise.

IV. CONCLUSION

We have proposed a new iterative dictionary learning algorithm for noisy sparse signal reconstruction in the one bit compressed sensing framework when the sparse domain is unknown in advance. The algorithm has two steps. The first step is sparse signal recovery from one bit measurements, which is performed by the BIHT algorithm in this letter. The second step updates the dictionary matrix by minimizing a suitable cost function in the one bit compressed sensing framework; a simple steepest-descent method is used to update the rows of the dictionary matrix. Simulation results show the effectiveness of dictionary learning in the monotone convergence of the cost function and in estimating the original signals, especially as the number of training signals and the number of sign measurements increase. Adaptively selecting the step-size parameter by a simple strategy such as a line search in every iteration of the algorithm is left as future work.

APPENDIX

To verify the convexity of $g_i$ in (9), where the S-shaped approximation $S(\cdot)$ is defined in (8), we separately prove the convexity for $p=2$ and $p=1$.

For the case $p=2$, we prove that the second derivative is positive. Each summand of (9) depends on $\mathbf{d}_i$ only through the scalar $u_l=\mathbf{d}_i^T\mathbf{s}_l$. Following some simple manipulations, we obtain the scalar first-order partial derivative and the second-order derivative of the summand, denoted (A). Replacing the two partial derivatives in (A) and considering the two cases $y_{il}=1$ and $y_{il}=-1$, it can be shown that the expression in the summation in (A) is positive: for the case $y_{il}=1$, defining a suitable auxiliary variable, some calculations establish positivity, and the case $y_{il}=-1$ follows analogously. Therefore, by proving that the second derivative is positive, the proof of the convexity of $g_i$ for $p=2$ is complete.

For the case $p=1$, the cost function is $g_i(\mathbf{d}_i)=\sum_{l=1}^{L}\big|y_{il}-S(\mathbf{d}_i^T\mathbf{s}_l)\big|$, where $S(\cdot)$ is given in (8). We prove that each of the summands is convex. Depending on the sign of $y_{il}$, the second-order vector derivative of the inner function $y_{il}-S(\mathbf{d}_i^T\mathbf{s}_l)$ with respect to $\mathbf{d}_i$ is either negative, so the inner function is concave, or positive, so the inner function is convex. Using the composition property ([23, p. 84]): in the first case the outer function is convex and non-increasing on the relevant range while the inner function is concave, so the summand is convex; in the second case the outer function is convex and non-decreasing while the inner function is convex, so the summand is again convex. Since a sum of convex functions is convex, $g_i$ is convex, which establishes the convexity of the cost function for $p=1$.
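For reference, the scalar composition rule from [23, p. 84] that the $p=1$ argument invokes can be stated as follows.

```latex
% Composition rule ([23, p. 84]): for f(x) = h(g(x)), with h a scalar
% function and g a function of the vector x, f is convex in either case:
\[
f(\mathbf{x}) = h\big(g(\mathbf{x})\big)\ \text{is convex if}\
\begin{cases}
h\ \text{is convex and nondecreasing, and } g\ \text{is convex;}\\[2pt]
h\ \text{is convex and nonincreasing, and } g\ \text{is concave.}
\end{cases}
\]
```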

REFERENCES

[1] A. Zymnis, S. Boyd, and E. Candes, "Compressed sensing with quantized measurements," IEEE Signal Process. Lett., vol. 17, no. 2, pp. 149–152, Feb. 2010.
[2] P. Boufounos and R. Baraniuk, "1-bit compressive sensing," in Proc. 42nd Annu. Conf. Inf. Sci. Syst. (CISS), Princeton, NJ, USA, Mar. 2008, pp. 16–21.
[3] P. Boufounos, "Greedy sparse signal reconstruction from sign measurements," in Proc. 43rd Asilomar Conf. Signals, Syst., Comput., 2009, pp. 1305–1309.
[4] J. N. Laska, Z. Wen, W. Yin, and R. G. Baraniuk, "Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements," IEEE Trans. Signal Process., vol. 59, no. 11, pp. 5289–5301, Nov. 2011.
[5] L. Jacques, J. Laska, P. Boufounos, and R. Baraniuk, "Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors," IEEE Trans. Inf. Theory, vol. 59, no. 4, pp. 2082–2102, Apr. 2013.
[6] M. Yan, Y. Yang, and S. Osher, "Robust 1-bit compressive sensing using adaptive outlier pursuit," IEEE Trans. Signal Process., vol. 60, no. 7, pp. 3868–3875, Jul. 2012.
[7] A. Movahed, A. Panahi, and G. Durisi, "A robust RFPI-based 1-bit compressive sensing reconstruction algorithm," in Proc. IEEE ITW, Sep. 2012, pp. 567–571.
[8] Y. Plan and R. Vershynin, "Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach," IEEE Trans. Inf. Theory, vol. 59, no. 1, pp. 482–494, Jan. 2013.
[9] C. H. Chen and J. Y. Wu, "Amplitude-aided 1-bit compressive sensing over noisy wireless sensor networks," IEEE Wireless Commun. Lett., vol. 4, no. 5, pp. 473–476, Oct. 2015.
[10] F. Li, J. Fang, H. Li, and L. Huang, "Robust one-bit Bayesian compressed sensing with sign-flip errors," IEEE Signal Process. Lett., vol. 22, no. 7, pp. 857–861, Jul. 2015.
[11] X. Dong and Y. Zhang, "A MAP approach for 1-bit compressive sensing in synthetic aperture radar imaging," IEEE Geosci. Remote Sens. Lett., vol. 12, no. 6, pp. 1237–1241, Jun. 2015.
[12] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?," IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, Dec. 2006.
[13] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[14] I. Tosic and P. Frossard, "Dictionary learning," IEEE Signal Process. Mag., vol. 28, no. 2, pp. 27–38, Mar. 2011.
[15] K. Engan, S. Aase, and J. Hakon Husoy, "Method of optimal directions for frame design," in Proc. ICASSP, 1999, pp. 2443–2446.
[16] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311–4322, Nov. 2006.
[17] J. M. D. Carvajalino and G. Sapiro, "Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization," IEEE Trans. Image Process., vol. 18, no. 7, pp. 1395–1408, Jul. 2009.
[18] W. Chen and M. R. D. Rodrigues, "Dictionary learning with optimized projection design for compressive sensing applications," IEEE Signal Process. Lett., vol. 20, no. 10, pp. 992–995, Oct. 2013.
[19] W. Chen, I. J. Wassell, and M. R. D. Rodrigues, "Dictionary design for distributed compressive sensing," IEEE Signal Process. Lett., vol. 22, no. 1, pp. 95–99, Jan. 2015.
[20] S. Gleichman and Y. C. Eldar, "Blind compressed sensing," IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 6958–6975, Oct. 2011.
[21] O. Teke, A. C. Gurbuz, and O. Arikan, "Perturbed orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 61, no. 24, pp. 6220–6231, Dec. 2013.
[22] S. G. Lingala and M. Jacob, "Blind compressive sensing dynamic MRI," IEEE Trans. Med. Imag., vol. 32, no. 6, pp. 1132–1145, Jun. 2013.
[23] S. Boyd and L. Vandenberghe, Convex Optimization. New York, NY, USA: Cambridge Univ. Press, 2004.