Compressed Sensing
David L. Donoho
Presented by: Nitesh Shroff, University of Maryland
Outline
1 Introduction: Compressed Sensing
2 Problem Formulation: Sparse Signal; Problem Statement
3 Proposed Solution: Near Optimal Information; Near Optimal Reconstruction
4 Applications: Example
5 Conclusion
Compressed Sensing: Motivation
Why go to so much effort to acquire all the data when most of what we get will be thrown away?
Can't we just directly measure the part that won't end up being thrown away?
Wouldn't it be possible to acquire the data in already-compressed form, so that nothing needs to be thrown away?
Sparse Signal
A signal x is K-sparse if its support {i : x_i ≠ 0} has cardinality at most K.
Figure: 1-sparse signal [richb]
Figure: 2-sparse signal [richb]
Figure: 3-sparse signal [richb]
Sparse Signal
Sparse signals have few non-zero coefficients.
Sparsity leads to efficient estimators.
Sparsity leads to dimensionality reduction.
Signals of interest are often sparse or compressible.
This is a known property of standard images and many other digital content types.
Transform compression: the object has transform coefficients θ_i = ⟨x, ψ_i⟩, assumed sparse in the sense that, for some 0 < p < 2 and some R > 0: ‖θ‖_p = (Σ_i |θ_i|^p)^{1/p} ≤ R.
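The sparsity and transform-compression notions above can be sketched numerically. This is a minimal illustration assuming numpy; the random orthonormal basis stands in for a real sparsifying basis such as wavelets, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 64

# A K-sparse signal: at most K nonzero entries, the rest exactly zero.
K = 4
x = np.zeros(m)
x[rng.choice(m, K, replace=False)] = rng.standard_normal(K)

# An orthonormal basis Psi (a random one via QR, standing in for a
# wavelet/DCT basis); transform coefficients theta_i = <x, psi_i>.
Psi, _ = np.linalg.qr(rng.standard_normal((m, m)))
theta = Psi.T @ x

# ell_p quasi-norm of the coefficient vector, for 0 < p < 2.
def lp_norm(v, p):
    return float((np.abs(v) ** p).sum() ** (1.0 / p))

print(lp_norm(theta, 1.0))  # finite; small when theta is compressible
```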
Problem Statement: Compressive Data Acquisition
When data is sparse/compressible, we can directly acquire a condensed representation, with little or no information loss, through dimensionality reduction: y = Φx.
Directly acquire compressed data: n = O(K log(m)) measurements instead of m measurements.
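The acquisition step y = Φx can be sketched as follows, assuming numpy; the constant in the measurement count and all dimensions are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, K = 256, 5
n = int(4 * K * np.log(m))           # on the order of K log m, with n << m

# K-sparse signal in R^m.
x = np.zeros(m)
x[rng.choice(m, K, replace=False)] = rng.standard_normal(K)

# Nonadaptive random measurement matrix; acquire the condensed data directly.
Phi = rng.standard_normal((n, m)) / np.sqrt(n)
y = Phi @ x                          # y = Phi x

print(n, m)                          # far fewer measurements than unknowns
```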
Problem Statement
x ∈ R^m.
Class X of such signals.
Design an information operator I_n : X → R^n that samples n pieces of information about x.
Design an algorithm A_n : R^n → R^m that offers an approximate reconstruction of x.
Here the information operator takes the form I_n(x) = (⟨ξ_1, x⟩, …, ⟨ξ_n, x⟩), where the ξ_i are sampling kernels.
Problem Statement
Nonadaptive sampling, i.e., I_n is fixed independently of x.
A_n is an unspecified, possibly nonlinear, reconstruction operator.
We are interested in the ℓ_2 error of reconstruction: E_n(X) = inf_{A_n, I_n} sup_{x ∈ X} ‖x − A_n(I_n(x))‖_2.
Why not adaptive? For 0 < p < 1: E_n(X_{p,m}(R)) ≤ 2^{1/p} · E_n(X_{p,m}^{Adapt}(R)), so adaptive information is of minimal help.
Near Optimal Information: Information Operator I_n
Ψ: orthogonal matrix of basis elements ψ_i.
I_n = ΦΨ^T.
Assume Ψ to be the identity, for simplicity.
J ⊂ {1, 2, …, m}.
Φ_J: submatrix of Φ with columns indexed by J.
V_J: the range of Φ_J in R^n.
Near Optimal Information: CS Conditions
Structural conditions on an n × m matrix which imply that its nullspace is optimal.
CS1: the minimal singular value of Φ_J exceeds η_1 > 0, uniformly in |J| < ρn/log(m). This demands a certain quantitative degree of linear independence among all small groups of columns.
CS2: on each subspace V_J we have, uniformly in |J| < ρn/log(m), ‖v‖_1 ≥ η_2 · √n · ‖v‖_2 for all v ∈ V_J. This says that linear combinations of small groups of columns give vectors that look much like random noise, at least as far as the comparison of ℓ_1 and ℓ_2 norms is concerned: every V_J slices through the ℓ_1^m ball in such a way that the resulting convex section is actually close to spherical.
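CS1 and CS2 can be probed empirically for one random subset J. This is a numeric sketch assuming numpy; the thresholds checked are illustrative, not the paper's constants η_1, η_2.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 80, 400
Phi = rng.standard_normal((n, m)) / np.sqrt(n)   # columns ~ unit norm

rho = 0.5
k = max(1, int(rho * n / np.log(m)))             # a size with |J| < rho*n/log(m)
J = rng.choice(m, k, replace=False)
Phi_J = Phi[:, J]

# CS1: smallest singular value of Phi_J bounded away from zero.
sigma_min = np.linalg.svd(Phi_J, compute_uv=False)[-1]

# CS2: for v in V_J = range(Phi_J), compare ||v||_1 against sqrt(n)*||v||_2;
# noise-like vectors have this ratio bounded away from zero.
v = Phi_J @ rng.standard_normal(k)
ratio = np.abs(v).sum() / (np.sqrt(n) * np.linalg.norm(v))

print(sigma_min, ratio)
```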
Near Optimal Information: CS Conditions
Family of quotient norms on R^n: Q^{J^c}(v) = min ‖θ‖_{ℓ_1(J^c)} subject to Φ_{J^c} θ = v. These describe the minimal ℓ_1-norm representation of v achievable using only the specified subsets of columns of Φ.
CS3: on each subspace V_J, uniformly in |J| < ρn/log(m), Q^{J^c}(v) ≥ (η_3 / √(log(m/n))) · ‖v‖_1 for all v ∈ V_J. This says that for every vector in some V_J, the associated quotient norm Q^{J^c} is never dramatically better than the simple ℓ_1 norm on R^n.
Matrices satisfying these conditions are ubiquitous for large n and m.
It is possible to choose the η_i and ρ independent of n and of m.
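The quotient norm Q^{J^c}(v) is itself a linear program, so it can be evaluated directly. A sketch assuming numpy and scipy; the split θ = p − q with p, q ≥ 0 is the standard LP reformulation of ℓ_1 minimization, and all dimensions are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, k = 30, 120, 4
Phi = rng.standard_normal((n, m)) / np.sqrt(n)

J = np.arange(k)                          # a small column set J
Jc = np.arange(k, m)                      # its complement J^c
v = Phi[:, J] @ rng.standard_normal(k)    # a vector in V_J = range(Phi_J)

# Q_{J^c}(v) = min ||theta||_{l1(J^c)} s.t. Phi_{J^c} theta = v,
# solved as an LP via theta = p - q with p, q >= 0.
A = Phi[:, Jc]
res = linprog(np.ones(2 * A.shape[1]),
              A_eq=np.hstack([A, -A]), b_eq=v, bounds=(0, None))
Q = res.fun                               # minimal l1 representation cost

print(Q, np.abs(v).sum())                 # quotient norm vs. ||v||_1
```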
Near Optimal Information: Ubiquity
Theorem: Let (n, m_n) be a sequence of problem sizes with n → ∞, n < m_n, and m_n ≤ A·n^γ, for A > 0 and γ ≥ 1. There exist η_i > 0 and ρ > 0, depending only on A and γ, so that for each δ > 0 the proportion of all n × m matrices Φ satisfying CS1-CS3 with parameters (η_i) and ρ eventually exceeds 1 − δ.
Near Optimal Information: The Φ Matrix
Φ is a random projection.
Not full rank: n × m with n < m.
Loses information, in general.
But it preserves the structure and information in sparse signals, with high probability.
Generated by randomly sampling the columns, with different columns i.i.d. uniform on the sphere S^{n−1}.
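Columns that are i.i.d. uniform on S^{n−1} can be generated by normalizing i.i.d. Gaussian vectors, since the Gaussian distribution is rotationally invariant. A minimal sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 50, 200

# Draw i.i.d. Gaussian columns and normalize each to unit length:
# the result is i.i.d. uniform on the unit sphere S^{n-1}.
G = rng.standard_normal((n, m))
Phi = G / np.linalg.norm(G, axis=0)

print(Phi.shape)   # an n x m random projection with n < m
```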
Near Optimal Reconstruction: Reconstruction Algorithm
Least-norm solution: min ‖θ(x)‖_p subject to y_n = I_n(x). (P_p)
Drawbacks:
p must be known.
For p < 1, this is a nonconvex optimization problem.
Solving the ℓ_0-norm problem requires combinatorial optimization.
Near Optimal Reconstruction: Basis Pursuit
Let A_{n×2m} = [Φ, −Φ].
The linear program min_z 1^T z subject to Az = y_n, z ≥ 0
has solution z* = [u*, v*],
and θ* = u* − v*.
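The basis pursuit LP above can be run end to end on a synthetic sparse signal. A sketch assuming numpy and scipy, with illustrative dimensions chosen well inside the regime where ℓ_1 minimization typically recovers the sparse vector exactly:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, m, K = 40, 120, 4

# K-sparse ground truth and random measurements y = Phi theta.
theta = np.zeros(m)
theta[rng.choice(m, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((n, m)) / np.sqrt(n)
y = Phi @ theta

# Basis pursuit as the LP: min 1^T z s.t. [Phi, -Phi] z = y, z >= 0.
A = np.hstack([Phi, -Phi])
res = linprog(np.ones(2 * m), A_eq=A, b_eq=y, bounds=(0, None))
z = res.x
theta_hat = z[:m] - z[m:]              # theta* = u* - v*

print(np.max(np.abs(theta_hat - theta)))   # near zero when recovery is exact
```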
Near Optimal Reconstruction: Basis Pursuit
Fix ε > 0.
Consider x such that ‖θ‖_1 ≤ c·m^α, with α < 1/2.
Make n ≥ C_ε · m^{2α} · log(m) measurements.
Then ‖x − x̂_{1,θ}‖_2 ≤ ε · ‖x‖_2.
Near Optimal Reconstruction: Reconstruction Algorithm
When (P_0) has a solution, (P_1) will find it.
Theorem: Suppose that Φ satisfies CS1-CS3 with given positive constants ρ, (η_i). There is a constant ρ_0 > 0, depending only on ρ and (η_i) and not on n or m, so that if θ has at most ρ_0·n/log(m) nonzeros, then (P_0) and (P_1) both have the same unique solution.
That is, although the system of equations is massively underdetermined, ℓ_1 minimization and the sparse solution coincide when the result is sufficiently sparse.
Example: Partial Fourier Ensemble
Collection of n × m matrices made by sampling n rows out of the m × m Fourier matrix.
Concrete examples of Φ working within the CS framework.
Allows ℓ_1 minimization to reconstruct from such information for all 0 < p < 1.
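A member of the partial Fourier ensemble can be constructed by keeping a random subset of rows of the unitary DFT matrix. A minimal sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 128, 32

# The m x m Fourier matrix, normalized to be unitary.
F = np.fft.fft(np.eye(m)) / np.sqrt(m)

# Keep n randomly chosen rows: an n x m partial Fourier measurement matrix.
rows = rng.choice(m, n, replace=False)
Phi = F[rows, :]

print(Phi.shape)
```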
Example: Images of Bounded Variation
f(x), x ∈ [0,1]^2.
Bounded in absolute value: |f| ≤ B.
The wavelet coefficients obey ‖θ^{(j)}‖ ≤ C·B·2^j.
Fix a finest scale j_1.
Take j_0 = j_1/2.
CS takes 4^{j_0} resume (coarse-scale) coefficients and n ≈ c·4^{j_0}·log^2(4^{j_1}) samples for the detail coefficients.
Total of c·j_1^2·4^{j_1/2} pieces of measured information.
Error of the same order as linear sampling with 4^{j_1} samples: ‖f − f̂‖ ≤ c·2^{−j_1}, with c independent of f.
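The count comparison in this example is easy to check numerically: with j_0 = j_1/2, the CS sample count 4^{j_0} + c·4^{j_0}·log^2(4^{j_1}) grows like j_1^2·4^{j_1/2}, far below the 4^{j_1} coefficients of direct linear sampling. A small arithmetic sketch, taking c = 1 for illustration:

```python
import math

j1 = 10
j0 = j1 // 2          # j0 = j1 / 2, as in the BV example

direct = 4 ** j1                                      # linear sampling count
cs = 4 ** j0 + 4 ** j0 * math.log(4 ** j1) ** 2       # CS sample count, c = 1

print(cs / direct)    # small ratio: far fewer measurements than pixels
```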
Example: Piecewise C² Images with C² Edges
Class C^{2,2}(B, L) of piecewise smooth f(x), x ∈ [0,1]^2.
Bounded in absolute value, and in first and second partial derivatives, by B.
Edge curves of total length ≤ L.
Fix a finest scale j_1.
Take j_0 = j_1/4.
CS takes 4^{j_0} resume (coarse-scale) coefficients and n ≈ c·4^{j_0}·log^{5/2}(4^{j_1}) samples for the detail coefficients.
Total of c·j_1^{5/2}·4^{j_1/4} pieces of measured information.
Error of the same order as linear sampling with 4^{j_1} samples: ‖f − f̂‖ ≤ c·2^{−j_1}, with c independent of f.
Conclusion
A framework for compressed sensing of objects x ∈ R^m.
Conditions CS1-CS3 on Φ.
Information operator I_n(x) = ΦΨ^T x.
Random sampling yields a near-optimal I_n with overwhelmingly high probability.
x is reconstructed by solving an ℓ_1 convex optimization problem.
Exploits a priori signal-sparsity information. [richb]
96 Thank You!
More informationAdvanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs
Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material
More informationarxiv: v2 [physics.med-ph] 22 Jul 2014
Multichannel Compressive Sensing MRI Using Noiselet Encoding arxiv:1407.5536v2 [physics.med-ph] 22 Jul 2014 Kamlesh Pawar 1,2,3, Gary Egan 4, and Jingxin Zhang 1,5,* 1 Department of Electrical and Computer
More informationCS675: Convex and Combinatorial Optimization Spring 2018 Convex Sets. Instructor: Shaddin Dughmi
CS675: Convex and Combinatorial Optimization Spring 2018 Convex Sets Instructor: Shaddin Dughmi Outline 1 Convex sets, Affine sets, and Cones 2 Examples of Convex Sets 3 Convexity-Preserving Operations
More informationMultiresponse Sparse Regression with Application to Multidimensional Scaling
Multiresponse Sparse Regression with Application to Multidimensional Scaling Timo Similä and Jarkko Tikka Helsinki University of Technology, Laboratory of Computer and Information Science P.O. Box 54,
More informationA* Orthogonal Matching Pursuit: Best-First Search for Compressed Sensing Signal Recovery
A* Orthogonal Matching Pursuit: Best-First Search for Compressed Sensing Signal Recovery Nazim Burak Karahanoglu a,b,, Hakan Erdogan a a Department of Electronics Engineering, Sabanci University, Istanbul
More informationI How does the formulation (5) serve the purpose of the composite parameterization
Supplemental Material to Identifying Alzheimer s Disease-Related Brain Regions from Multi-Modality Neuroimaging Data using Sparse Composite Linear Discrimination Analysis I How does the formulation (5)
More informationSection 5 Convex Optimisation 1. W. Dai (IC) EE4.66 Data Proc. Convex Optimisation page 5-1
Section 5 Convex Optimisation 1 W. Dai (IC) EE4.66 Data Proc. Convex Optimisation 1 2018 page 5-1 Convex Combination Denition 5.1 A convex combination is a linear combination of points where all coecients
More informationP257 Transform-domain Sparsity Regularization in Reconstruction of Channelized Facies
P257 Transform-domain Sparsity Regularization in Reconstruction of Channelized Facies. azemi* (University of Alberta) & H.R. Siahkoohi (University of Tehran) SUMMARY Petrophysical reservoir properties,
More informationADAPTIVE LOW RANK AND SPARSE DECOMPOSITION OF VIDEO USING COMPRESSIVE SENSING
ADAPTIVE LOW RANK AND SPARSE DECOMPOSITION OF VIDEO USING COMPRESSIVE SENSING Fei Yang 1 Hong Jiang 2 Zuowei Shen 3 Wei Deng 4 Dimitris Metaxas 1 1 Rutgers University 2 Bell Labs 3 National University
More informationIterative CT Reconstruction Using Curvelet-Based Regularization
Iterative CT Reconstruction Using Curvelet-Based Regularization Haibo Wu 1,2, Andreas Maier 1, Joachim Hornegger 1,2 1 Pattern Recognition Lab (LME), Department of Computer Science, 2 Graduate School in
More informationCHAPTER 9 INPAINTING USING SPARSE REPRESENTATION AND INVERSE DCT
CHAPTER 9 INPAINTING USING SPARSE REPRESENTATION AND INVERSE DCT 9.1 Introduction In the previous chapters the inpainting was considered as an iterative algorithm. PDE based method uses iterations to converge
More informationProgress In Electromagnetics Research, Vol. 125, , 2012
Progress In Electromagnetics Research, Vol. 125, 527 542, 2012 A NOVEL IMAGE FORMATION ALGORITHM FOR HIGH-RESOLUTION WIDE-SWATH SPACEBORNE SAR USING COMPRESSED SENSING ON AZIMUTH DIS- PLACEMENT PHASE CENTER
More informationLecture 2. Topology of Sets in R n. August 27, 2008
Lecture 2 Topology of Sets in R n August 27, 2008 Outline Vectors, Matrices, Norms, Convergence Open and Closed Sets Special Sets: Subspace, Affine Set, Cone, Convex Set Special Convex Sets: Hyperplane,
More informationRobust Poisson Surface Reconstruction
Robust Poisson Surface Reconstruction V. Estellers, M. Scott, K. Tew, and S. Soatto Univeristy of California, Los Angeles Brigham Young University June 2, 2015 1/19 Goals: Surface reconstruction from noisy
More informationOutlier Pursuit: Robust PCA and Collaborative Filtering
Outlier Pursuit: Robust PCA and Collaborative Filtering Huan Xu Dept. of Mechanical Engineering & Dept. of Mathematics National University of Singapore Joint w/ Constantine Caramanis, Yudong Chen, Sujay
More informationNon-Differentiable Image Manifolds
The Multiscale Structure of Non-Differentiable Image Manifolds Michael Wakin Electrical l Engineering i Colorado School of Mines Joint work with Richard Baraniuk, Hyeokho Choi, David Donoho Models for
More informationOutline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014
5/2/24 Outline CS38 Introduction to Algorithms Lecture 5 May 2, 24 Linear programming simplex algorithm LP duality ellipsoid algorithm * slides from Kevin Wayne May 2, 24 CS38 Lecture 5 May 2, 24 CS38
More informationThe convex geometry of inverse problems
The convex geometry of inverse problems Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Joint work with Venkat Chandrasekaran Pablo Parrilo Alan Willsky Linear Inverse Problems
More informationContourlets: Construction and Properties
Contourlets: Construction and Properties Minh N. Do Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign www.ifp.uiuc.edu/ minhdo minhdo@uiuc.edu Joint work with
More informationConvex or non-convex: which is better?
Sparsity Amplified Ivan Selesnick Electrical and Computer Engineering Tandon School of Engineering New York University Brooklyn, New York March 7 / 4 Convex or non-convex: which is better? (for sparse-regularized
More informationGEOMETRIC MANIFOLD APPROXIMATION USING LOCALLY LINEAR APPROXIMATIONS
GEOMETRIC MANIFOLD APPROXIMATION USING LOCALLY LINEAR APPROXIMATIONS BY TALAL AHMED A thesis submitted to the Graduate School New Brunswick Rutgers, The State University of New Jersey in partial fulfillment
More informationMathematical and Algorithmic Foundations Linear Programming and Matchings
Adavnced Algorithms Lectures Mathematical and Algorithmic Foundations Linear Programming and Matchings Paul G. Spirakis Department of Computer Science University of Patras and Liverpool Paul G. Spirakis
More informationSupplemental Material for Efficient MR Image Reconstruction for Compressed MR Imaging
Supplemental Material for Efficient MR Image Reconstruction for Compressed MR Imaging Paper ID: 1195 No Institute Given 1 More Experiment Results 1.1 Visual Comparisons We apply all methods on four 2D
More informationBagging for One-Class Learning
Bagging for One-Class Learning David Kamm December 13, 2008 1 Introduction Consider the following outlier detection problem: suppose you are given an unlabeled data set and make the assumptions that one
More informationSpatially-Localized Compressed Sensing and Routing in Multi-Hop Sensor Networks 1
Spatially-Localized Compressed Sensing and Routing in Multi-Hop Sensor Networks 1 Sungwon Lee, Sundeep Pattem, Maheswaran Sathiamoorthy, Bhaskar Krishnamachari and Antonio Ortega University of Southern
More informationIteratively Re-weighted Least Squares for Sums of Convex Functions
Iteratively Re-weighted Least Squares for Sums of Convex Functions James Burke University of Washington Jiashan Wang LinkedIn Frank Curtis Lehigh University Hao Wang Shanghai Tech University Daiwei He
More informationSpatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks
sensors Article Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks Haifeng Zheng 1 ID, Jiayin Li 1, Xinxin Feng 1 ID, Wenzhong Guo 2,3, *, Zhonghui Chen 1 and Neal Xiong
More informationConvexization in Markov Chain Monte Carlo
in Markov Chain Monte Carlo 1 IBM T. J. Watson Yorktown Heights, NY 2 Department of Aerospace Engineering Technion, Israel August 23, 2011 Problem Statement MCMC processes in general are governed by non
More informationDiffusion Wavelets for Natural Image Analysis
Diffusion Wavelets for Natural Image Analysis Tyrus Berry December 16, 2011 Contents 1 Project Description 2 2 Introduction to Diffusion Wavelets 2 2.1 Diffusion Multiresolution............................
More informationEE123 Digital Signal Processing
EE123 Digital Signal Processing Lecture 24 Compressed Sensing III M. Lustig, EECS UC Berkeley RADIOS https://inst.eecs.berkeley.edu/~ee123/ sp15/radio.html Interfaces and radios on Wednesday -- please
More informationOptimal Sampling Geometries for TV-Norm Reconstruction of fmri Data
Optimal Sampling Geometries for TV-Norm Reconstruction of fmri Data Oliver M. Jeromin, Student Member, IEEE, Vince D. Calhoun, Senior Member, IEEE, and Marios S. Pattichis, Senior Member, IEEE Abstract
More informationCoherence Pattern Guided Compressive Sensing with Unresolved Grids
SIAM J. IMAGING SCIENCES Vol. 5, No., pp. 79 22 c 22 Society for Industrial and Applied Mathematics Coherence Pattern Guided Compressive Sensing with Unresolved Grids Albert Fannjiang and Wenjing Liao
More informationRecent Developments in Model-based Derivative-free Optimization
Recent Developments in Model-based Derivative-free Optimization Seppo Pulkkinen April 23, 2010 Introduction Problem definition The problem we are considering is a nonlinear optimization problem with constraints:
More informationSPARSE COMPONENT ANALYSIS FOR BLIND SOURCE SEPARATION WITH LESS SENSORS THAN SOURCES. Yuanqing Li, Andrzej Cichocki and Shun-ichi Amari
SPARSE COMPONENT ANALYSIS FOR BLIND SOURCE SEPARATION WITH LESS SENSORS THAN SOURCES Yuanqing Li, Andrzej Cichocki and Shun-ichi Amari Laboratory for Advanced Brain Signal Processing Laboratory for Mathematical
More informationLinear Methods for Regression and Shrinkage Methods
Linear Methods for Regression and Shrinkage Methods Reference: The Elements of Statistical Learning, by T. Hastie, R. Tibshirani, J. Friedman, Springer 1 Linear Regression Models Least Squares Input vectors
More informationPolynomials tend to oscillate (wiggle) a lot, even when our true function does not.
AMSC/CMSC 460 Computational Methods, Fall 2007 UNIT 2: Spline Approximations Dianne P O Leary c 2001, 2002, 2007 Piecewise polynomial interpolation Piecewise polynomial interpolation Read: Chapter 3 Skip:
More informationMODEL SELECTION AND REGULARIZATION PARAMETER CHOICE
MODEL SELECTION AND REGULARIZATION PARAMETER CHOICE REGULARIZATION METHODS FOR HIGH DIMENSIONAL LEARNING Francesca Odone and Lorenzo Rosasco odone@disi.unige.it - lrosasco@mit.edu June 6, 2011 ABOUT THIS
More informationMar. 20 Math 2335 sec 001 Spring 2014
Mar. 20 Math 2335 sec 001 Spring 2014 Chebyshev Polynomials Definition: For an integer n 0 define the function ( ) T n (x) = cos n cos 1 (x), 1 x 1. It can be shown that T n is a polynomial of degree n.
More informationA Relationship between the Robust Statistics Theory and Sparse Compressive Sensed Signals Reconstruction
THIS PAPER IS A POSTPRINT OF A PAPER SUBMITTED TO AND ACCEPTED FOR PUBLICATION IN IET SIGNAL PROCESSING AND IS SUBJECT TO INSTITUTION OF ENGINEERING AND TECHNOLOGY COPYRIGHT. THE COPY OF RECORD IS AVAILABLE
More informationRobust Kernel Methods in Clustering and Dimensionality Reduction Problems
Robust Kernel Methods in Clustering and Dimensionality Reduction Problems Jian Guo, Debadyuti Roy, Jing Wang University of Michigan, Department of Statistics Introduction In this report we propose robust
More informationCompressive Sensing of High-Dimensional Visual Signals. Aswin C Sankaranarayanan Rice University
Compressive Sensing of High-Dimensional Visual Signals Aswin C Sankaranarayanan Rice University Interaction of light with objects Reflection Fog Volumetric scattering Human skin Sub-surface scattering
More informationSparse Signals Reconstruction Via Adaptive Iterative Greedy Algorithm
Sparse Signals Reconstruction Via Adaptive Iterative Greedy Algorithm Ahmed Aziz CS Dept.,Fac. of computers and Informatics Univ. of Benha Benha, Egypt Walid Osamy CS Dept.,Fac. of computers and Informatics
More informationCellular Tree Classifiers. Gérard Biau & Luc Devroye
Cellular Tree Classifiers Gérard Biau & Luc Devroye Paris, December 2013 Outline 1 Context 2 Cellular tree classifiers 3 A mathematical model 4 Are there consistent cellular tree classifiers? 5 A non-randomized
More informationCurve fitting using linear models
Curve fitting using linear models Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark September 28, 2012 1 / 12 Outline for today linear models and basis functions polynomial regression
More informationELEG Compressive Sensing and Sparse Signal Representations
ELEG 867 - Compressive Sensing and Sparse Signal Representations Introduction to Matrix Completion and Robust PCA Gonzalo Garateguy Depart. of Electrical and Computer Engineering University of Delaware
More informationG Practical Magnetic Resonance Imaging II Sackler Institute of Biomedical Sciences New York University School of Medicine. Compressed Sensing
G16.4428 Practical Magnetic Resonance Imaging II Sackler Institute of Biomedical Sciences New York University School of Medicine Compressed Sensing Ricardo Otazo, PhD ricardo.otazo@nyumc.org Compressed
More informationHaar Wavelet Image Compression
Math 57 Haar Wavelet Image Compression. Preliminaries Haar wavelet compression is an efficient way to perform both lossless and lossy image compression. It relies on averaging and differencing the values
More informationSparsity Based Regularization
9.520: Statistical Learning Theory and Applications March 8th, 200 Sparsity Based Regularization Lecturer: Lorenzo Rosasco Scribe: Ioannis Gkioulekas Introduction In previous lectures, we saw how regularization
More informationDEEP LEARNING OF COMPRESSED SENSING OPERATORS WITH STRUCTURAL SIMILARITY (SSIM) LOSS
DEEP LEARNING OF COMPRESSED SENSING OPERATORS WITH STRUCTURAL SIMILARITY (SSIM) LOSS ABSTRACT Compressed sensing (CS) is a signal processing framework for efficiently reconstructing a signal from a small
More informationSparse sampling in MRI: From basic theory to clinical application. R. Marc Lebel, PhD Department of Electrical Engineering Department of Radiology
Sparse sampling in MRI: From basic theory to clinical application R. Marc Lebel, PhD Department of Electrical Engineering Department of Radiology Objective Provide an intuitive overview of compressed sensing
More informationINTRODUCTION TO FINITE ELEMENT METHODS
INTRODUCTION TO FINITE ELEMENT METHODS LONG CHEN Finite element methods are based on the variational formulation of partial differential equations which only need to compute the gradient of a function.
More informationPROJECTION ONTO A POLYHEDRON THAT EXPLOITS SPARSITY
PROJECTION ONTO A POLYHEDRON THAT EXPLOITS SPARSITY WILLIAM W. HAGER AND HONGCHAO ZHANG Abstract. An algorithm is developed for projecting a point onto a polyhedron. The algorithm solves a dual version
More information