Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform
Noname manuscript No. (will be inserted by the editor)

Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform

Rick Archibald · Anne Gelb · Rodrigo B. Platte

the date of receipt and acceptance should be inserted later

Abstract Fourier samples are collected in a variety of applications, including magnetic resonance imaging (MRI) and synthetic aperture radar (SAR). The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from undersampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem consisting of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l1 regularization terms. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well-known drawback of using TV as an l1 regularization term is that the reconstructed image tends to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher-order sparsifying transform, coined the polynomial annihilation (PA) transform. This paper adapts the Split Bregman Algorithm to the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.
Keywords Fourier Data · l1 regularization · Split Bregman · Edge Detection · Polynomial Annihilation

1 Introduction

Data are acquired as partial Fourier samples in several applications, including magnetic resonance imaging (MRI) and synthetic aperture radar (SAR). In an idealized situation, recovering images from partial Fourier data may be done simply and efficiently by using the inverse fast Fourier transform (FFT). In practice, the data acquisition system is usually under-prescribed and noisy. Moreover, the Fourier domain is not well suited for recovering the underlying image, which is generally only piecewise smooth. In recent years l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy data for images that have some sparsity properties, that is, some measurable features of the image have sparse representation. Also, l1 regularization provides a formulation that is compatible with compressed sensing (CS) applications, specifically, when an image can be reconstructed

Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 3783 (archibaldrk@ornl.gov) · School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ (annegelb@asu.edu) · School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ (rbp@asu.edu)

Address(es) of author(s) should be given

1 Technically, the l0 norm of an expression is a better measure of sparsity. However, the l0 norm does not meet the convexity requirements and is very slow to compute. Additional detail on using the l1 norm in place of the l0 in order to measure sparsity can be found in [7].
from a very small number of measurements [4–6]. In particular, the goal for reconstructing an image from SAR or MRI data is to solve

min_f J(f) such that ||Ff − f̂||_2 = 0, (1)

where f̂ consists of samples of the Fourier transform of the unknown image f, F contains a subset of rows of a Fourier matrix, and J is an appropriate l1 regularization term, [2, 23]. Typically for measured data the related (TV) denoising problem,

min_f J(f) such that ||Ff − f̂||_2 < σ, (2)

is solved. It is in general still difficult to develop efficient and robust techniques for solving (2). The Split Bregman Algorithm, [2], is a numerically efficient and stable algorithm that has successfully solved (2) for a variety of applications. In this paper we use the Split Bregman Algorithm as a launching point to develop a new technique for solving (2) based on the polynomial annihilation l1 regularization introduced in [27]. We will demonstrate that our method yields improved accuracy in regions away from discontinuities, especially in the case of under-sampled data. We adopt the standardizations and terminology from [2] to describe our algorithm.

A well-known drawback of using TV as an l1 regularization term is that the reconstructed image defaults to a piecewise constant approximation. While suitable for some applications, in others it is desirable to see more detail. This has been addressed in several ways. For example, total generalized variation (TGV), which generates a piecewise (typically quadratic) polynomial approximation in smooth regions, was developed in [2]. Multi-wavelets have also been used to formulate sparsifying transforms, [24]. The polynomial annihilation (PA) transform, which exploits the sparsity of the underlying image in the jump discontinuity domain, was introduced in [27].
It was demonstrated there that generating a sparsifying transform based on the sparsity of edges in the underlying image (as expressed in (11)) yields improved accuracy and convergence properties for both image reconstruction and edge identification. In particular, high order accuracy is possible in regions of smoothness, which has two important consequences, also true in multiple dimensions. First, it is possible to see more variation in the underlying image, and second, fewer data points are needed to reconstruct an image. In [27], the MATLAB CVX package [5, 4] was used to implement (2) for the PA transform l1 regularization. Although suitable for one-dimensional problems, CVX is not efficient enough for higher-dimensional problems. Because of this limitation, the technique introduced in [27] was not pursued for other applications, such as compression or reducing the dimensionality of the data for efficient processing.

This paper thus seeks to expand the recent results in [27] in two ways. First, we will improve the efficiency of the numerical algorithms in multiple dimensions. In this regard, we will adapt the Split Bregman Algorithm, [2], to the PA transform. Once this is accomplished, we will demonstrate how the PA transform is an effective tool for reconstructing piecewise smooth images from under-sampled Fourier data.

The rest of the paper is organized as follows. In Section 2 we describe the reconstruction problem in one dimension and discuss how the polynomial annihilation edge detection method is used to construct an l1 regularization term for solving (2). In Section 3 we review the Split Bregman Algorithm for sparse Fourier data using the TV operator and demonstrate how it can be adapted to the PA transform operator without increasing the computational cost. In Section 4 we provide some numerical examples and demonstrate that our method is robust to noise and undersampling.
We compare our results to those obtained using TV and TGV (combined with shearlet regularization), based on the code given in [7].2 Concluding remarks are provided in Section 5.

2 Preliminaries

We begin by describing the one-dimensional problem. Let f be a piecewise smooth function on [−1, 1], with f(−1) = f(1). Suppose we are given its first 2N + 1 (normalized) Fourier coefficients,

f̂(k) = (1/2) ∫_{−1}^{1} f(x) e^{−ikπx} dx, for k = −N, ..., N. (3)

2 In [27] our method compared favorably to multi-wavelet constructed regularization terms, [24]. We do not repeat those experiments here.
We wish to recover f from (3) at a finite set of grid points, x_j = −1 + jΔx, j = 0, ..., 2N, with Δx = 1/N.3 The standard Fourier partial sum reconstruction used to approximate periodic replicas of f is defined as

S_N f(x) = Σ_{k=−N}^{N} f̂_k e^{ikπx}. (4)

At the grid points x = x_j, we can write (4) as the linear system

Ff = f̂, (5)

where F denotes the discrete Fourier transform matrix,

F_{k,j} = e^{−ikπx_j}, 0 ≤ j ≤ 2N, −N ≤ k ≤ N. (6)

Here f̂ is the 2N + 1 vector of Fourier coefficients given in (3), and f = {f_j}_{j=0}^{2N} is the calculated solution approximating f(x_j), j = 0, ..., 2N. The FFT can be used to efficiently solve (5), [3]. Since the underlying function f is only piecewise smooth, the approximation given by (4) will exhibit the Gibbs phenomenon. Filtering is often used to alleviate the Gibbs phenomenon and to reduce the effects of high frequency noise. The filtered approximation is given by

S_N^σ f(x) = Σ_{k=−N}^{N} σ_k f̂_k e^{ikπx}, (7)

where σ_k is an admissible filter, [3]. The corresponding linear system at the grid points x = x_j is then

F^σ f = f̂, (8)

where F^σ denotes the filtered discrete Fourier transform matrix

F^σ_{k,j} = σ_k e^{−ikπx_j}, 0 ≤ j ≤ 2N, −N ≤ k ≤ N, (9)

and once again the FFT can be employed. As will be seen in Section 4, filtering may cause too much smoothing over the jump discontinuities, especially if the Gibbs oscillations are to be completely removed. It is also evident that filtering cannot address the poor reconstruction quality due to undersampling. Thus we seek other methods of regularization. Since noise is inherent to all sampling systems, we will demonstrate that our technique is effective for noisy input data f̂ + ε.

2.1 Sparse Representation in the Jump Function Domain

As mentioned in the introduction, we seek to regularize (2) by enforcing the sparsity of edges in the image domain. To do this, we must first define the jump function of a piecewise smooth function.
Note that the words jump and edge are used interchangeably throughout the remainder of this paper.

Definition 1 Let f : [−1, 1] → R. For all x ∈ (−1, 1), let f(x−) and f(x+) denote its left and right hand limits. The jump function of f is defined at each x as

[f](x) = f(x+) − f(x−). (10)

3 For simplicity we choose 2N + 1 equally spaced grid points to match the number of Fourier coefficients. The techniques described in this paper are easily extended to different gridding schemes in the image domain.
From the above definition, we see that [f] is zero everywhere that f is continuous, while it takes on the jump value at each jump discontinuity location. We make the assumption that there is at most one jump within a cell I_j = [x_j, x_{j+1}). Thus, if [f](x_j) is the value of the jump that occurs within the cell I_j, we can write

[f](x) = Σ_{j=0}^{2N−1} [f](x_j) χ_j(x), (11)

where χ_j(x) is defined as

χ_j(x) = { 1 if x ∈ I_j, 0 for all other x.

For simplicity, the numerical algorithms used in this investigation all place the jump discontinuity at the left boundary of its corresponding cell. (This also means that the solution to (2) can be written as the expansion coefficients of the standard basis.) Since [f](x) = 0 for almost all grid point values x = x_j, it is apparent that (11) has only a few nonzero coefficients. Therefore we say that [f](x) has sparse representation, or equivalently, that the jump function domain of f is sparse. Hence we seek to regularize (5) in the form of (2) using (11). This will require an approximation to (11) that is suitable for convex optimization problems.

2.2 Convex Optimization Using Sparsity of Edges

While there are a variety of ways to approximate (11), for our purpose we will use the polynomial annihilation edge detection method, [1]. The advantage of using the polynomial annihilation method is that it is high order, meaning that in regions of smoothness the coefficients of the approximation of (11) will indeed be sparse. Moreover, it is simple to generate a transform matrix for l1 regularization. This was accomplished in [27], where the polynomial annihilation (PA) transform matrix was introduced. The polynomial annihilation edge detection method, [1], is defined as

L_m f(x) = (1/q_m(x)) Σ_{x_j ∈ S_x} c_j(x) f(x_j), (12)

where S_x is the local set of m + 1 grid points from the set of given grid points about x, c_j(x) are the polynomial annihilation edge detection coefficients, (13), and q_m(x) is the normalization factor, (14).
Each parameter of the method is further described as follows:

S_x: For any particular cell I_j = [x_j, x_{j+1}), there are m possible stencils S_x of size m + 1 that contain the interval I_j. For simplicity, we assume that the stencils are centered around the interval of interest, I_j, and are given by

S_{I_j} = {x_{j−m/2}, ..., x_{j+m/2}} for m even, S_{I_j} = {x_{j−(m−1)/2}, ..., x_{j+(m+1)/2}} for m odd.

For non-periodic solutions the stencils are adapted to be more one sided as the boundaries of the interval are approached, [1]. To avoid cumbersome notation, we write S_x as the generic stencil unless further clarification is needed.

c_j(x): The polynomial annihilation edge detection coefficients, c_j(x), j = 1, ..., m + 1, are constructed to annihilate polynomials up to degree m. They are obtained by solving the system

Σ_{x_j ∈ S_x} c_j(x) p_l(x_j) = p_l^{(m)}(x), l = 1, ..., m + 1, (13)

where p_l, l = 1, ..., m + 1, is a basis for the space of polynomials of degree m.

q_m(x): The normalization factor, q_m(x), normalizes the approximation to assure the proper convergence of L_m f to the jump value at each discontinuity. It is computed as

q_m(x) = Σ_{x_j ∈ S_x^+} c_j(x), (14)

where S_x^+ is the set of points x_j ∈ S_x such that x_j ≥ x.
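The linear system defining the annihilation coefficients c_j(x) is small and can be solved directly. The following sketch is an illustration, not the authors' code; it assumes the monomial basis p_l(x) = x^l as the choice of polynomial basis, for which the right-hand side is independent of the evaluation point x:

```python
import numpy as np
from math import factorial

def pa_coeffs(stencil, m):
    """Solve the annihilation conditions for the coefficients c_j on a
    stencil of m+1 points, using the monomial basis p_l(x) = x**l.
    For monomials, the m-th derivative is 0 for degree l < m and m!
    for degree l = m, so the right-hand side is (0, ..., 0, m!)."""
    stencil = np.asarray(stencil, dtype=float)
    # V[l, j] = x_j**l : one equation per basis polynomial
    V = np.vander(stencil, m + 1, increasing=True).T
    rhs = np.zeros(m + 1)
    rhs[m] = factorial(m)
    return np.linalg.solve(V, rhs)
```

Applied to data sampled from any polynomial of degree below m, the weighted sum with these coefficients vanishes, which is exactly the annihilation property that makes the transformed vector small in smooth regions.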
It was shown in [1] that the polynomial annihilation edge detection method has mth order convergence away from the jump discontinuities. More precisely, this accuracy is seen in the region outside the stencil that contains the jump. Oscillations will naturally develop in the region of the jump, which will increase with m. The consequences of these oscillations for our method will be discussed further in Section 4.

If the solution vector f is on uniform points in [−1, 1], then we solve (2) on the set of points x_j = −1 + jΔx, j = 0, ..., 2N, with Δx = 1/N. In this case there is an explicit formula for the polynomial annihilation edge detection coefficients, independent of the location x, computed as ([1])

c_j = m! / Π_{k=1, k≠j}^{m+1} ((j − k)Δx), j = 1, ..., m + 1. (15)

We can now define the polynomial annihilation (PA) transform matrix, L_m, as

(L_m)_{j,l} = c(j, l) / q_m(x_l), 0 ≤ l ≤ 2N, 0 ≤ j < 2N, (16)

where c(j, l) places the uniform stencil coefficients (15) along the band of row j,

c(j, l) = c_{j−l+s(j,l)} for −m/2 < j − l ≤ m/2 + s(j, l), and c(j, l) = 0 otherwise,

and the index shift

s(j, l) = { m/2 − l if l < m/2, 2N − l − m/2 if l + m/2 > 2N, 0 otherwise }

makes the stencils one sided near the boundaries. For example, not assuming periodicity, the banded matrix L_m for m = 4 has centered five-point stencils in its interior rows and one-sided stencils in the rows near each boundary.

The PA transform produces a vector with small, nonzero values in the smooth regions of f and large values at the jump locations. Therefore, by minimizing ||L_m f||_1, we encourage a solution f that has sparse representation in the jump function domain, as given in (11). As was demonstrated in [27], using the PA transform in (2) reduces the Gibbs oscillations without smearing over the discontinuities or causing a staircasing effect.

Remark 1 When m = 1, applying the PA transform is equivalent to TV regularization (up to a normalization constant). As is evident in Figure 2, using TV as a minimizing constraint causes a staircasing effect, since it encourages a solution that minimizes the differences between the solution vector components.
In this regard, the PA transform can be seen as high order, meaning that the accuracy away from the jump discontinuities is O(Δx^m). As mentioned in the introduction, the MATLAB CVX package was used in [27] to implement (2) with (16). While suitable for one-dimensional problems, it is not efficient for multiple dimensions. Below we describe how the Split Bregman Algorithm, [2], can be adapted to incorporate the PA transform, (16), into (2), and thus improve the overall efficiency.
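To make the sparsifying action of the PA transform concrete, the sketch below is illustrative numpy code under stated assumptions: a uniform grid, the closed-form uniform-grid coefficients, and the row normalization q_m omitted (it rescales rows but does not change the sparsity pattern). It applies the m-th order annihilation stencil across a piecewise linear function with a jump:

```python
import numpy as np
from math import factorial

def pa_stencil(m, dx):
    """Closed-form uniform-grid annihilation coefficients:
    c_j = m! / prod_{k != j} ((j - k) * dx), j = 1, ..., m+1."""
    return np.array([
        factorial(m) / np.prod([(j - k) * dx for k in range(1, m + 2) if k != j])
        for j in range(1, m + 2)])

def pa_transform(f, m, dx):
    """Slide the stencil across the grid (normalization q_m omitted in
    this sketch).  The output is near zero wherever f is locally a
    polynomial of degree < m, and large where a stencil crosses a jump."""
    c = pa_stencil(m, dx)
    return np.array([np.dot(c, f[i:i + m + 1]) for i in range(len(f) - m)])

x = np.linspace(-1.0, 1.0, 65)
f = np.where(x < 0, x, 2.0 - x)   # piecewise linear, jump at x = 0
g = pa_transform(f, 2, x[1] - x[0])
# Only the two stencils straddling the jump produce nonzero output,
# so g is a sparse vector: the l1 penalty on it promotes sharp edges.
```

For m = 1 the stencil reduces (up to scaling) to a first difference, recovering the TV operator of Remark 1.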
3 The Split Bregman Algorithm for l1 Regularization of Sparse Fourier Data

Bregman iteration, first developed to find extrema of convex functionals, [3], has been used in many applications where the optimization problem is of the form (1) or (2), [8, 22, 28]. The Split Bregman Algorithm, developed in [2], was shown to be equivalent to Bregman iteration. Its popularity is owed to the fact that it is very fast, typically having a computational cost on the order of an FFT, and also to the fact that the nonlinear steps involve only soft thresholding, while all other aspects of the algorithm involve solving invertible linear systems. In recent years the Split Bregman Algorithm has been used to solve a broad class of l1 regularized optimization problems. In particular, the Split Bregman Algorithm was used to solve (2) for the two-dimensional Fourier transform, termed the sparse MRI data reconstruction problem in [8, 2, 25].4 In this case we assume that f : R^2 → R is a periodic piecewise smooth function on [−1, 1]^2 for which we are given the (normalized) Fourier coefficients,

f̂(k, l) = (1/4) ∫_{−1}^{1} ∫_{−1}^{1} f(x, y) e^{−iπ(kx+ly)} dy dx, for k, l = −N, ..., N. (17)

We wish to recover f from (17) at a finite set of uniform grid points, (x_i, y_j), for i, j = 0, ..., 2N. The fidelity term in (2) is directly extended, with F being the analogous two-dimensional Fourier transform matrix of (6). Although polynomial annihilation edge detection is an inherently multi-dimensional method, [1], our results indicate that using a dimension-by-dimension construction, J_x and J_y, for the two-dimensional regularization term J is more efficient for uniformly spaced data. We now seek f = {f(x_i, y_j) : 0 ≤ i, j ≤ 2N} that solves the convex optimization problem

min_f (||J_x f||_1 + ||J_y f||_1) such that ||Ff − f̂||_2 ≤ σ. (18)

In the case where only partial information is given, the corresponding convex optimization problem we solve is

min_f (||J_x f||_1 + ||J_y f||_1) such that ||MFf − f̂||_2 ≤ σ. (19)
Here the matrix M represents a row selector matrix, comprising a subset of the rows of an identity matrix corresponding to the known set of Fourier samples. We note that the one-dimensional version posed in (2) can be similarly adapted. Fast algorithms were developed for (19) as a means of approximating (18) in [28] and [2]. It was first demonstrated in [28] that (19) could be solved using a Bregman iteration of a sequence of two unconstrained problems of the form

f^{k+1} = argmin_f ||J_x f||_1 + ||J_y f||_1 + (µ/2) ||MFf − f̂^k||_2^2, (20a)
f̂^{k+1} = f̂^k + f̂ − MFf^{k+1}, (20b)

where µ > 0 is an optimization parameter. The computational challenge comes in solving (20a). Using the Split Bregman methodology, a fast explicit solution was developed in [2] for the case where TV is used for the l1 regularization terms, that is, when J_x = ∇_x and J_y = ∇_y. We outline the basic steps of the Split Bregman Algorithm for the TV denoising model below and then demonstrate in Section 3.2 how it can be extended to the case where J_x and J_y are the polynomial annihilation transforms, resulting in a more accurate solution without additional computational cost. Since we do not explicitly motivate the algorithm, we refer interested readers to [2] for more general details.

4 Although designed for MRI data, it is applicable whenever data are sampled in the Fourier domain, especially when dimension reduction is desirable.
3.1 The Split Bregman Algorithm for TV Denoising

Following the framework in [2], for J_x = ∇_x and J_y = ∇_y, we make the replacements d_x ← ∇_x f and d_y ← ∇_y f. Using

||v|| = Σ_{i,j} sqrt(|∇_x v_{i,j}|^2 + |∇_y v_{i,j}|^2),

we obtain as the augmented optimization of (20a):

min_{f, d_x, d_y} Σ_{i,j} sqrt(|d_{x,i,j}|^2 + |d_{y,i,j}|^2) + (λ/2) ||d_x − ∇_x f − b_x||_2^2 + (λ/2) ||d_y − ∇_y f − b_y||_2^2 + (µ/2) ||MFf − f̂||_2^2. (21)

The variables b_x and b_y arise from the derivation of the Split Bregman Algorithm, [2], and their calculation is given in Algorithm 1. Solving (21) is accomplished in steps. When f is held fixed, the exact optimization of d_x and d_y can be calculated using the shrink operator, [26], as

d_x^opt = max(s − 1/λ, 0) (∇_x f + b_x)/s and d_y^opt = max(s − 1/λ, 0) (∇_y f + b_y)/s, (22)

where

s = sqrt(|∇_x f + b_x|^2 + |∇_y f + b_y|^2). (23)

When d_x and d_y are held fixed, optimizing (21) reduces to the l2 minimization problem

min_f (λ/2) ||d_x − ∇_x f − b_x||_2^2 + (λ/2) ||d_y − ∇_y f − b_y||_2^2 + (µ/2) ||MFf − f̂||_2^2. (24)

Since the subproblem in (24) is differentiable, the optimal solution f can be found by differentiating with respect to f and setting the result equal to zero, arriving at

(µF^T M^T MF + λ ∇_x^T ∇_x + λ ∇_y^T ∇_y) f = rhs, (25)

where

rhs = µF^T M^T f̂ + λ ∇_x^T (d_x − b_x) + λ ∇_y^T (d_y − b_y). (26)

Using the identities ∇^T ∇ = −∆ and F^T = F^{−1} produces the inverse problem

F^{−1} K F f = rhs, (27)

where K = µM^T M − λ F ∆ F^{−1}. Note that K is a diagonal operator. Therefore, the optimal solution f to (24) can be calculated at the cost of two Fourier transforms as

f^opt = F^{−1} K^{−1} F(rhs). (28)

Incorporating the above calculations into (19) leads to a fast algorithm for the TV denoising problem, [2]:

Algorithm 1 The Split Bregman Algorithm for TV Denoising
Define J_x = ∇_x and J_y = ∇_y in (19).
Initialize k = 0, f^0 = F^{−1} M^T f̂, and b_x = b_y = d_x = d_y = 0.
while ||MFf^k − f̂||_2 > σ
  f^{k+1} = F^{−1} K^{−1} F rhs^k
  d_x^{k+1} = max(s^k − 1/λ, 0) (∇_x f^k + b_x^k)/s^k
  d_y^{k+1} = max(s^k − 1/λ, 0) (∇_y f^k + b_y^k)/s^k
  b_x^{k+1} = b_x^k + (∇_x f^{k+1} − d_x^{k+1})
  b_y^{k+1} = b_y^k + (∇_y f^{k+1} − d_y^{k+1})
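The only nonlinear operation in the algorithm is the shrink step (22)-(23). Here is a minimal numpy sketch of the isotropic shrink (illustrative, not the authors' implementation; the zero-magnitude guard is an assumption added to avoid division by zero):

```python
import numpy as np

def shrink(gx, gy, lam):
    """Isotropic shrink step (22)-(23): with s = sqrt(gx^2 + gy^2),
    return max(s - 1/lam, 0) * gx / s, and likewise for gy.
    Here gx, gy play the roles of grad_x f + b_x and grad_y f + b_y."""
    s = np.sqrt(gx**2 + gy**2)
    # Guard the division where s == 0 (the result there is 0 anyway).
    safe_s = np.where(s > 0, s, 1.0)
    scale = np.maximum(s - 1.0 / lam, 0.0) / safe_s
    return scale * gx, scale * gy
```

Entries with s ≤ 1/λ are set exactly to zero, which is how the l1 term promotes sparsity of the transformed solution, while larger entries are shrunk toward zero by 1/λ.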
  f̂^{k+1} = f̂^k + f̂ − MFf^{k+1}
  k = k + 1
end

The iteration terms s^k and rhs^k are provided in (23) and (26), where all variables are superscripted with k. The first five steps of Algorithm 1 can alternatively be iterated in a short inner loop before updating f̂^{k+1}. We note that Algorithm 1 works for the one-dimensional optimization problem given in (2) by using the one-dimensional Fourier transform with J as the TV operator and eliminating the y components, b_y and d_y.

3.2 The Split Bregman Algorithm for PA l1 Regularization

As noted in the introduction, the main limitation of using the TV operator as an l1 regularization term is that it will generate a piecewise constant solution for the underlying signal or image. In [27] it was demonstrated that the TV operator is equivalent to the PA transform operator, (12), with m = 1, and that better accuracy may be achieved when m > 1, especially if the underlying signal or image has significant variation between discontinuities. We now demonstrate that solving (2) with the PA transform operator as the l1 regularization term can be made efficient via an extension of the Split Bregman Algorithm for TV denoising.

We begin by defining J_x := L_x^m and J_y := L_y^m in (19), where L_x^m and L_y^m are the respective directional PA transform operators defined in one dimension by (16). Bregman iteration of (19) is then given by the sequence of two unconstrained optimization problems of the form

f^{k+1} = argmin_f ||L_x^m f||_1 + ||L_y^m f||_1 + (µ/2) ||MFf − f̂^k||_2^2, (29a)
f̂^{k+1} = f̂^k + f̂ − MFf^{k+1}, (29b)

where µ > 0 is again an optimization parameter. Following the technique described in Section 3.1, we write the augmented problem corresponding to (29a) and (29b) as

min_{f, d_x, d_y} Σ_{i,j} sqrt(|d_{x,i,j}|^2 + |d_{y,i,j}|^2) + (λ/2) ||d_x − L_x^m f − b_x||_2^2 + (λ/2) ||d_y − L_y^m f − b_y||_2^2 + (µ/2) ||MFf − f̂||_2^2. (30)
When f is held fixed, the exact optimization of d_x and d_y can be calculated using the shrink operator, [26], similar to the derivation of (22), as

d_x^opt = max(s − 1/λ, 0) (L_x^m f + b_x)/s and d_y^opt = max(s − 1/λ, 0) (L_y^m f + b_y)/s, (31)

where s is given by

s = sqrt(|L_x^m f + b_x|^2 + |L_y^m f + b_y|^2). (32)

When d_x and d_y are held fixed, (30) reduces to the l2 minimization problem

min_f (λ/2) ||d_x − L_x^m f − b_x||_2^2 + (λ/2) ||d_y − L_y^m f − b_y||_2^2 + (µ/2) ||MFf − f̂||_2^2. (33)

Once again, since the subproblem in (33) is differentiable, the optimal solution f can be found by differentiating with respect to f and setting the result equal to zero, arriving at

(µF^T M^T MF + λ(L_x^m)^T L_x^m + λ(L_y^m)^T L_y^m) f = rhs, (34)

where

rhs = µF^T M^T f̂ + λ(L_x^m)^T (d_x − b_x) + λ(L_y^m)^T (d_y − b_y). (35)
Of significant importance here is that

(L^m)^T L^m = (L_x^m)^T L_x^m + (L_y^m)^T L_y^m,

which for periodic boundary conditions acts as a constant multiple of ∂^{2m}/∂x^{2m} + ∂^{2m}/∂y^{2m}, is a diagonal operator in the Fourier domain. Hence the inverse problem is solved by

(F^{−1} K_{L^m} F) f = rhs. (36)

Note that K_{L^m} = µM^T M + F (L^m)^T L^m F^{−1} is a diagonal operator. Therefore, the optimal solution f to (33) can be calculated at the cost of two Fourier transforms as

f^opt = F^{−1} K_{L^m}^{−1} F rhs. (37)

Thus, just as in the case of the TV denoising model, solving (19) with PA l1 regularization, (16), can be made numerically efficient via the Split Bregman Algorithm:

Algorithm 2 The Split Bregman Algorithm for PA l1 Regularization
Define J_x = L_x^m and J_y = L_y^m in (19).
Initialize k = 0, f^0 = F^{−1} M^T f̂, and b_x = b_y = d_x = d_y = 0.
while ||MFf^k − f̂||_2 > σ
  f^{k+1} = F^{−1} K_{L^m}^{−1} F rhs^k
  d_x^{k+1} = max(s^k − 1/λ, 0) (L_x^m f^k + b_x^k)/s^k
  d_y^{k+1} = max(s^k − 1/λ, 0) (L_y^m f^k + b_y^k)/s^k
  b_x^{k+1} = b_x^k + (L_x^m f^{k+1} − d_x^{k+1})
  b_y^{k+1} = b_y^k + (L_y^m f^{k+1} − d_y^{k+1})
  f̂^{k+1} = f̂^k + f̂ − MFf^{k+1}
  k = k + 1
end

The iteration terms s^k and rhs^k are provided in (32) and (35), where all variables are superscripted with k. The first five steps of Algorithm 2 can alternatively be iterated in a short inner loop before updating f̂^{k+1}. As before, Algorithm 2 works for the one-dimensional optimization problem (2) by using the one-dimensional Fourier transform with J = L^m and eliminating the y components b_y and d_y. Finally, we note that Algorithm 2 is equivalent to Algorithm 1 when m = 1.

4 Numerical Results

We are now ready to demonstrate the Split Bregman Algorithm for PA l1 regularization, given by Algorithm 2, for solving the one- and two-dimensional optimization problems. We will assume that we are given a finite number of Fourier coefficients for the underlying signal or image.
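The efficiency of the f-update above rests on the claim preceding (36): with periodic (circulant) difference operators, (L^m)^T L^m is diagonalized by the DFT, so the solve costs only two FFTs. The sketch below is an illustrative numpy check, not the authors' code; the periodic wrap of the uniform stencil is an assumption of the sketch. It verifies numerically that the eigenvalues of the circulant matrix (L^m)^T L^m are exactly the FFT of its first column:

```python
import numpy as np
from math import factorial

def periodic_pa_matrix(n, m, dx=1.0):
    """Circulant PA-type operator on n periodic points: each row carries
    the uniform-grid annihilation stencil, wrapped periodically.  For
    m = 1 this is the periodic forward-difference (TV) operator."""
    c = np.array([
        factorial(m) / np.prod([(j - k) * dx for k in range(1, m + 2) if k != j])
        for j in range(1, m + 2)])
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(m + 1):
            L[i, (i + j) % n] = c[j]
    return L

n, m = 16, 2
L = periodic_pa_matrix(n, m)
A = L.T @ L                         # symmetric circulant matrix
eig_fft = np.fft.fft(A[:, 0]).real  # circulant eigenvalues = FFT of first column
eig_dir = np.linalg.eigvalsh(A)
# The two spectra agree, confirming that the DFT diagonalizes (L^m)^T L^m,
# which is what reduces the f-subproblem to two FFTs per iteration.
```

Inverting the resulting diagonal operator K_{L^m} entrywise in the Fourier domain is then an O(N^2) operation, so the per-iteration cost is dominated by the FFTs, exactly as in the TV case.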
The wave numbers of the chosen coefficients were drawn from a normal distribution with standard deviation 2N/6 (rounded to the nearest integer), where 2N + 1 is the number of recovered function values. For our one-dimensional experiments, we consider the two test functions defined on [−1, 1]:

Example 1

f_a(x) = { −x if x < 0, x otherwise };  f_b(x) = { cos(πx/2) if −1 ≤ x < −1/2, cos(3πx/2) if −1/2 ≤ x < 1/2, cos(7πx/2) if 1/2 ≤ x ≤ 1 }
Fig. 1 (a) f_a and (d) f_b plotted on 2N + 1 = 129 uniform gridpoints; (b) and (e) Fourier partial sum approximation, (4), for f_a and f_b respectively; (c) and (f) filtered Fourier partial sum approximation, (7), with filter σ_k = exp(−α(k/N)^{2p}), α = 32 and 2p = 4, for f_a and f_b respectively.

Fig. 2 Results using Algorithm 2 with 2N = 128, µ = 1 in (29a), λ = 0.5, and m = 1, 2 and 3, for f_a and f_b.

Figures 1(a) and (d) show f_a and f_b plotted on a uniform grid of 2N + 1 = 129 points in [−1, 1]. The corresponding Fourier and filtered Fourier partial sum approximations, (4) and (7), are illustrated in Figures 1(b) and (e), and in Figures 1(c) and (f), respectively. In this reconstruction we used 5% of the original 2N + 1 = 129 Fourier coefficients sampled. It is not surprising that having a sparse sampling of Fourier data severely limits the reconstruction quality when directly applying either the standard or filtered Fourier approximation. The convex optimization framework is clearly better suited in this case. As illustrated in Figure 2, the approximation is readily improved by employing the PA transform, (16), for the l1 regularization term in (2). Algorithm 2 was used in all cases with the number of iterations ranging from 3-4.
Fig. 3 Log of the pointwise error, (38), for m = 1, 2, and 3: (a) f_a and (b) f_b.

Figure 3 illustrates the pointwise error, given by

Err(f(j)) = |f_j − f(x_j)|, j = 0, ..., 2N, (38)

where f_j are the components of our solution and f(x_j) are the corresponding values of the underlying function. The smallest error near the discontinuities occurs when m = 2. It is furthermore evident that using m = 2 is the best choice for f_a, which is piecewise linear. As demonstrated in Figure 3, the pointwise error decreases as m increases in regions away from the discontinuity, which is consistent with earlier discussions on the effect of m on reconstruction (see also [27]).

Fig. 4 Log of the pointwise error, (38), for increasing N, for f_a and f_b with fixed parameters µ = 1 in (29a), λ = 0.5, m = 3. Here 2N = 32, 64, 128, 256, and 5% of the coefficients were used.

Figure 4 shows the pointwise error using Algorithm 2 for 2N = 32, 64, 128, and 256. It is evident that as N increases, the resolution at the jump location becomes sharper while the error generally improves in smooth regions.

Phase transition diagrams are effective at showing when an undersampled reconstruction is likely to be accurate, [8, 9, 19]. Figure 5 demonstrates how using the PA transform as the l1 regularization term increases the likelihood of recovering the correct values of piecewise polynomials with multiple jump discontinuities as the undersampling rates are changed. Each target function was generated using the following random process: first, the locations of the jumps were drawn uniformly on the interval [−1, 1]; then, the polynomial pieces were generated by interpolating random values drawn from a standard normal distribution.
All trials were carried out without noise, and solutions were obtained by solving (1). Each point in these diagrams corresponds to the fraction of successful recoveries in 20 trials. The sparsity factor (number of jumps / number of samples) varies along the y-axis, while the undersampling factor (number of samples / number of recovered values) varies along the x-axis. Not surprisingly, TV works best when the target functions are piecewise constants. For piecewise linear and quadratic polynomials, this is no longer the case. The bottom two rows in this figure
Fig. 5 Phase transition diagrams for the reconstruction of piecewise polynomials using TV (m = 1) and PA with m = 2 and 3. The colormap shows the fraction of successful recoveries in 20 trials. A trial is deemed successful when the relative l2 error is below 10^{−2}. Top row: piecewise constant functions. Middle row: piecewise linear functions. Bottom row: piecewise quadratic polynomials.

show that TV is likely to fail if the data are undersampled by a factor of 0.4 or less, regardless of the number of jumps. As expected, higher values of m are more appropriate when the polynomial degree of the piecewise target functions is increased. For piecewise linear polynomials, the algorithm works best when m = 2, which is consistent with our other findings; that is, there are fewer oscillations near the edges for m = 2, and since the underlying function is piecewise linear, m = 2 is sufficient for recovery.

Although we did not formally analyze the parameter sensitivity, our experiments indicate that the results are generally robust with respect to parameter selection. In particular, Figures 6(a) and (b) show the mean l1 error for ten random trials when 5% of the coefficients are used for calculating f_a and f_b using m = 1, 2, 3. The best results occur near λ = 0.5. Figure 6(c) shows the l1 error for f_c over a range of λ, and Figure 6(d)
Fig. 6 The l1 error, (38), for various λ used in Algorithm 2, with PA m = 1, 2, 3 and 2N = 64: (a) Err(f_a); (b) Err(f_b); (c) Err(f_c); (d) Err(f_c) with 5 dB SNR.

shows the same result with 5 dB SNR. We observe that while λ is somewhat sensitive to noise, the transitions are smooth and they do not vary greatly from the cases where no noise is present. Future work will include a more detailed investigation into optimal parameter selection.

To illustrate Algorithm 2 in two dimensions, we consider the following test function defined on [−1, 1]^2:

Example 2

f_c(x, y) = { sin(π sqrt(x^2 + y^2)/2) if −3/4 < x, y < 3/4, g(x, y) otherwise },

with

g(x, y) = { cos(3π sqrt(x^2 + y^2)/2) if x^2 + y^2 ≤ 1/2, cos(π sqrt(x^2 + y^2)/2) if x^2 + y^2 > 1/2 }.

The Fourier data for our numerical experiments were chosen either randomly from a Gaussian distribution, shown in Figure 7(a), or from a tomographic sampling pattern, shown in Figure 7(b). In each case we used 5% of the [2N + 1]^2 Fourier coefficients of the underlying image. To demonstrate the accuracy of reconstruction in smooth regions, we also calculated the l2 error after removing all values within two pixels of internal jump discontinuities. Figure 7(c) shows the mask subtracted out of the l2 error calculation.

Fig. 7 (a) Gaussian sampling of the Fourier data; (b) tomographic sampling of the Fourier data; (c) mask of f_c(x, y) used to calculate the l2 error.

We compared our results with those generated by Algorithm 1 as well as those constructed using the total generalized variation shearlet (TGVSH) based image reconstruction algorithm, [6]. In the latter case, we used the publicly available code in [7] with its default parameters.
Figure 8 displays the various methods for reconstructing f_c(x, y) in Example 2. Figure 9 shows the cross section error comparison of the various methods. (We tried a variety of parameters in our experiments to ensure that our comparisons were fair; as it turned out, the default parameters yielded the best results.)
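On uniform grids the m-th order PA transform behaves (up to scaling) like m-th order finite differences, which is why the choice of m matters in the comparisons above: a higher order transform annihilates higher degree polynomial pieces. A short sketch with an illustrative piecewise linear signal (our own example, not one of the paper's test functions):

```python
import numpy as np

def pa_transform(f, m):
    """Uniform-grid surrogate for the m-th order PA transform: m-th differences."""
    return np.diff(f, n=m)

x = np.linspace(-1, 1, 65)
f_lin = np.where(x < 0, 1 + 2 * x, 3 - x)   # piecewise linear with one jump at x = 0

tv = pa_transform(f_lin, 1)    # m = 1 behaves like TV
pa2 = pa_transform(f_lin, 2)   # m = 2

# First differences are nonzero wherever the slope is nonzero, so the signal
# is not sparse under m = 1; second differences vanish on the linear pieces
# and survive only at the jump, so it is very sparse under m = 2.
print(np.count_nonzero(np.abs(tv) > 1e-12),
      np.count_nonzero(np.abs(pa2) > 1e-12))   # → 64 2
```

This is the mechanism behind the phase transition diagrams: a piecewise linear image is sparse under the m = 2 transform but dense under TV, so l1 recovery succeeds with far fewer samples for m = 2.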
Fig. 8 Reconstruction of f_c(x, y): (a) TV; (b) TGVSH; (c) PA (m = 2); (d) PA (m = 3). The parameters chosen for Algorithm 2 were µ = (29a) and λ =.

Fig. 9 Cross section error of f_c(x, y) at x = 0.55 for the Fourier, TV, TGVSHCS, PA (m = 2), and PA (m = 3) reconstructions.

Figure 10 compares the results for reconstructing f_c(x, y) using the same techniques for the case where the Fourier data are sampled using the tomographic pattern with a noise level of 10 dB SNR. Similarly, the reconstruction results in Figure 11 display the case when the data are chosen randomly from the Gaussian sampling pattern. Here the noise level is 5 dB SNR. We note that our results were not sensitive to the particular sampling pattern chosen.
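For reference, noisy Fourier data at a target SNR can be simulated by adding complex white Gaussian noise scaled to the mean power of the samples. This is one standard convention; the paper's exact noise normalization is not reproduced here, so treat the sketch as an assumption.

```python
import numpy as np

def add_noise(fhat, snr_db, rng):
    """Add complex white Gaussian noise to Fourier samples at snr_db dB SNR."""
    signal_power = np.mean(np.abs(fhat) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.sqrt(noise_power / 2.0) * (
        rng.standard_normal(fhat.shape) + 1j * rng.standard_normal(fhat.shape)
    )
    return fhat + noise

rng = np.random.default_rng(1)
fhat = np.fft.fft(np.sin(np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)))
noisy = add_noise(fhat, 10.0, rng)

# The empirical SNR of the realization should land near the 10 dB target.
emp_snr = 10.0 * np.log10(
    np.mean(np.abs(fhat) ** 2) / np.mean(np.abs(noisy - fhat) ** 2)
)
```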
Fig. 10 Reconstruction of f_c(x, y) given noisy Fourier data (10 dB SNR) and using tomographic sampling: (a) TV; (b) TGVSH; (c) PA (m = 2); (d) PA (m = 3). Parameters for Algorithm 2 chosen were µ = (29a) and λ =.

Table 1 compares the l2 errors for reconstructing f_c when 5% of the Fourier coefficients, selected randomly from a Gaussian distribution, are used. Observe that Algorithm 2 is particularly effective when the Fourier samples are noisy.

Table 1 l2 errors for reconstructing f_c with various methods when using 5% of the Fourier coefficients selected randomly from a Gaussian distribution. l2 is the standard l2 error; l2,2 is the l2 error calculated 2 pixel points away from the internal edges. The parameters chosen are the same as in Figures 8, 10 and 11, respectively.

method   noise       l2   l2,2
TV       None
TGVSH    None
m = 2    None
m = 3    None
TV       10 dB SNR
TGVSH    10 dB SNR
m = 2    10 dB SNR
m = 3    10 dB SNR
TV       5 dB SNR
TGVSH    5 dB SNR
m = 2    5 dB SNR
m = 3    5 dB SNR
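The two error measures in Table 1 can be written as a single helper: the standard relative l2 error, and the same quantity restricted to pixels at least two pixels away from the internal edges, as in the mask of Figure 7(c). The function name and the 1-D example below are our own illustration.

```python
import numpy as np

def rel_l2(u, ref, keep=None):
    """Relative l2 error, optionally restricted to pixels where keep is True."""
    if keep is None:
        keep = np.ones(ref.shape, dtype=bool)
    return np.linalg.norm((u - ref)[keep]) / np.linalg.norm(ref[keep])

# 1-D illustration: a step function and an approximation whose error is
# concentrated at the jump (e.g. Gibbs-type oscillations).
n = 100
ref = np.where(np.arange(n) < n // 2, 0.0, 1.0)
u = ref.copy()
u[n // 2 - 2 : n // 2 + 2] += 0.3           # error only near the jump

keep = np.abs(np.arange(n) - n // 2) > 2    # drop pixels within 2 of the edge
print(rel_l2(u, ref) > rel_l2(u, ref, keep))   # → True: masked error is smaller
```

The masked measure isolates accuracy in the smooth regions, which is where the higher order PA transform gains over TV.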
Fig. 11 Reconstruction of f_c(x, y) given noisy Fourier data (5 dB SNR): (a) TV; (b) TGVSH; (c) PA (m = 2); (d) PA (m = 3). Parameters for Algorithm 2 chosen were µ = (29a) and λ =.

Finally, Figure 12 compares these same algorithms when applied to the synthetic aperture radar (SAR) image of a golf course [11], using µ = (29a) and λ = in Algorithm 2. In this case we computed the Fourier coefficients from the given image data. The ground truth image is down-sampled from the original image, while the Fourier coefficients were calculated via the trapezoidal rule from the original image. As illustrated in Figure 12(d), using the PA transform for l1 regularization better captures the underlying features of the image.

5 Conclusions and Future Considerations

This paper demonstrates how the Split Bregman Algorithm can be adapted to use the PA transform as the l1 regularization term in solving the denoising model (2) when the data are acquired as Fourier coefficients. The method is especially effective when the data are undersampled and noisy, as demonstrated by the examples in Section 4. In particular, the PA transform demonstrates improved accuracy away from the boundaries as compared to other methods. The phase transition diagrams illustrate that using the PA transform yields a greater likelihood of success in undersampled cases when the underlying image is not piecewise constant. Moreover, Table 1 verifies that the PA transform is particularly effective as the SNR is reduced. Our adaptation of the Split Bregman Algorithm means that the PA transform is just as efficient as using TV. The optimization parameters, although not fully tested, appear to be robust; this will be the subject of future investigations. A downside to using the PA transform for l1 regularization is that oscillations start to form near the discontinuities as the order is increased.
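The adaptation summarized above can be illustrated with a one-dimensional sketch: Split Bregman applied to an l1 penalty on an m-th order difference operator, which serves as a uniform-grid stand-in for the PA transform, with a real random matrix standing in for the undersampled Fourier operator. The operators and parameter values here are illustrative simplifications, not the paper's Algorithm 2.

```python
import numpy as np

def shrink(v, t):
    # Soft thresholding: the closed-form solution of the l1 subproblem.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman(A, f, L, mu=1000.0, lam=10.0, iters=1000):
    """Sketch of min_u ||L u||_1 + (mu/2)||A u - f||_2^2 via Split Bregman."""
    d = np.zeros(L.shape[0])
    b = np.zeros(L.shape[0])
    M = mu * A.T @ A + lam * L.T @ L       # fixed SPD system for the u-update
    rhs0 = mu * A.T @ f
    for _ in range(iters):
        u = np.linalg.solve(M, rhs0 + lam * L.T @ (d - b))
        w = L @ u + b
        d = shrink(w, 1.0 / lam)           # explicit shrinkage step
        b = w - d                          # Bregman variable update
    return u

rng = np.random.default_rng(0)
n, m_meas = 64, 32
x = np.abs(np.linspace(-1.0, 1.0, n))      # piecewise linear test signal
A = rng.standard_normal((m_meas, n)) / np.sqrt(m_meas)
f = A @ x                                  # 2x undersampled (real) measurements

L2 = np.diff(np.eye(n), n=2, axis=0)       # 2nd order differences (m = 2)
u = split_bregman(A, f, L2)
rel_err = np.linalg.norm(u - x) / np.linalg.norm(x)
```

Because every subproblem is either a fixed linear solve or an explicit shrinkage, swapping the first-difference (TV) operator for a higher order one changes nothing about the cost per iteration, which is the efficiency point made above.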
Preliminary investigations suggest that it is possible to combine the results of the low order (TV) and higher order PA transforms for l1 regularization at very little additional cost.
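A sketch of this combination under our own simplifying assumptions: given the two reconstructions and an internal edge map, use the TV result within a couple of pixels of the detected edges and the higher order PA result elsewhere. The helper name and the 1-D dilation below are illustrative, not the paper's implementation.

```python
import numpy as np

def combine(u_tv, u_pa, edges, width=2):
    """Use u_tv within `width` pixels of a detected edge, u_pa elsewhere (1-D)."""
    near = edges.astype(bool).copy()
    for _ in range(width):          # grow the edge set by one pixel per pass
        grown = near.copy()
        grown[1:] |= near[:-1]
        grown[:-1] |= near[1:]
        near = grown
    return np.where(near, u_tv, u_pa)

u_tv = np.zeros(10)                 # stand-in for the TV reconstruction
u_pa = np.ones(10)                  # stand-in for the higher order PA result
edges = np.zeros(10); edges[5] = 1  # one detected edge
out = combine(u_tv, u_pa, edges)
print(out)   # → TV values at indices 3..7 (near the edge), PA values elsewhere
```

An error in the edge map only swaps which of the two reconstructions is used locally, so, as noted below, accuracy degrades at worst to the less accurate of the two approximations.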
(a) Original SAR Image (b) TV (c) TGVSH (d) PA m = 2 (e) PA m = 3

Fig. 12 Reconstruction of a synthetic aperture radar (SAR) image of a golf course [11]. We calculated 6 6 Fourier coefficients and added 5 dB SNR noise. Algorithm 2 was applied to 5% of the Fourier coefficients randomly selected from a Gaussian distribution. Parameter values used were µ = (29a) and λ =.

Specifically, both algorithms can be run simultaneously, with a map of the internal edges calculated as a byproduct. Then, as a final step, the solution would use the TV results near the edges and the higher order PA transform in smooth regions. We note that errors made in calculating the internal edge map will only result in downgrading the local accuracy to the less accurate of the two approximations. Moreover, a tolerance threshold can be applied depending on the SNR, undersampling, and any other prior information. These ideas will be explored in future investigations.

Acknowledgments

This work is supported in part by grants NSF-DMS and AFOSR FA. The submitted manuscript is based upon work, authored in part by contractors [UT-Battelle LLC, manager of Oak Ridge National Laboratory (ORNL)], and supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.

References

1. Archibald, R., Gelb, A., and Yoon, J. Polynomial fitting for edge detection in irregularly sampled signals and images. SIAM J. Numer. Anal. 43 (2005).
2. Bredies, K., Kunisch, K., and Pock, T. Total generalized variation. SIAM J. Imaging Sci. 3, 3 (2010).
3. Bregman, L. The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex optimization. USSR Comput. Math. Math. Phys. 7 (1967).
4. Candès, E. J., and Romberg, J. Signal recovery from random projections. In Proc. SPIE Comput. Imaging III (2005), vol. 5674.
5. Candès, E. J., Romberg, J., and Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52 (2006).
6. Donoho, D. Compressed sensing. IEEE Trans. Inform. Theory 52 (2006).
7. Donoho, D. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 59, 6 (2006).
8. Donoho, D., and Tanner, J. Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 367 (2009).
9. Donoho, D. L., Maleki, A., and Montanari, A. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. 106, 45 (2009).
10. Durand, S., and Froment, J. Reconstruction of wavelet coefficients using total variation minimization. SIAM J. Sci. Comput. 24, 5 (2003).
11. Ellsworth, M., and Thomas, C. A fast algorithm for image deblurring with total variation regularization. Unmanned Tech Solutions 4 (2014).
12. Goldstein, T., and Osher, S. The split Bregman method for l1-regularized problems. SIAM J. Imaging Sci. 2, 2 (2009).
13. Gottlieb, D., and Orszag, S. A. Numerical analysis of spectral methods: theory and applications. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1977. CBMS-NSF Regional Conference Series in Applied Mathematics, No. 26.
14. Grant, M., and Boyd, S. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H.
Kimura, Eds., Lecture Notes in Control and Information Sciences. Springer-Verlag Limited, 2008.
15. Grant, M., and Boyd, S. CVX: Matlab software for disciplined convex programming, version 2.1, Mar. 2014.
16. Guo, W., Qin, J., and Yin, W. A new detail-preserving regularization scheme. SIAM J. Imaging Sci. 7, 2 (2014).
17. Guo, W., Qin, J., and Yin, W. Matlab scripts for TGV shearlet based image reconstruction algorithm. ucla.edu/~qinjingonly/tgvshcs/webpage.html, Feb.
18. He, L., Chang, T.-C., and Osher, S. MR image reconstruction from sparse radial samples by using iterative refinement procedures. In Proc. 13th Annu. Meet. ISMRM (2006).
19. Krzakala, F., Mézard, M., Sausset, F., Sun, Y., and Zdeborová, L. Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices. J. Stat. Mech. Theory Exp. 2012, 8 (2012), P08009.
20. Lustig, M., Donoho, D., and Pauly, J. M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 58, 6 (2007).
21. Moulin, P. A wavelet regularization method for diffuse radar-target imaging and speckle-noise reduction. J. Math. Imaging Vis. 3, 1 (1993).
22. Osher, S., Burger, M., Goldfarb, D., Xu, J., and Yin, W. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4, 2 (2005).
23. Rudin, L., Osher, S., and Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D 60 (1992).
24. Schiavazzi, D., Doostan, A., and Iaccarino, G. Sparse multiresolution stochastic approximation for uncertainty quantification. Recent Adv. Sci. Comput. Appl. 586 (2013).
25. Trzasko, J., Manduca, A., and Borisch, E. Sparse MRI reconstruction via multiscale L0-continuation. In Stat. Signal Process. 2007 (SSP '07), IEEE/SP 14th Workshop (Aug. 2007).
26. Wang, Y., Yin, W., and Zhang, Y. A fast algorithm for image deblurring with total variation regularization. CAAM Tech. Reports (2007).
27. Wasserman, G., Archibald, R., and Gelb, A. Image reconstruction from Fourier data using sparsity of edges. J. Sci.
Comput., to appear (2015).
28. Yin, W., Osher, S., Goldfarb, D., and Darbon, J. Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1 (2008).
More informationSparse Sampling Methods for Large Scale Experimental Data
Sparse Sampling Methods for Large Scale Experimental Data Rick Archibald Oak Ridge National Laboratory IPAM January 2017 Big Data Meets Computation Outline DOE Facilities ACUMEN Project Sparse Optimization
More informationFully discrete Finite Element Approximations of Semilinear Parabolic Equations in a Nonconvex Polygon
Fully discrete Finite Element Approximations of Semilinear Parabolic Equations in a Nonconvex Polygon Tamal Pramanick 1,a) 1 Department of Mathematics, Indian Institute of Technology Guwahati, Guwahati
More informationFace Recognition via Sparse Representation
Face Recognition via Sparse Representation John Wright, Allen Y. Yang, Arvind, S. Shankar Sastry and Yi Ma IEEE Trans. PAMI, March 2008 Research About Face Face Detection Face Alignment Face Recognition
More informationModified Iterative Method for Recovery of Sparse Multiple Measurement Problems
Journal of Electrical Engineering 6 (2018) 124-128 doi: 10.17265/2328-2223/2018.02.009 D DAVID PUBLISHING Modified Iterative Method for Recovery of Sparse Multiple Measurement Problems Sina Mortazavi and
More informationENHANCED RADAR IMAGING VIA SPARSITY REGULARIZED 2D LINEAR PREDICTION
ENHANCED RADAR IMAGING VIA SPARSITY REGULARIZED 2D LINEAR PREDICTION I.Erer 1, K. Sarikaya 1,2, H.Bozkurt 1 1 Department of Electronics and Telecommunications Engineering Electrics and Electronics Faculty,
More informationCompressed Sensing for Electron Tomography
University of Maryland, College Park Department of Mathematics February 10, 2015 1/33 Outline I Introduction 1 Introduction 2 3 4 2/33 1 Introduction 2 3 4 3/33 Tomography Introduction Tomography - Producing
More informationMid-Year Report. Discontinuous Galerkin Euler Equation Solver. Friday, December 14, Andrey Andreyev. Advisor: Dr.
Mid-Year Report Discontinuous Galerkin Euler Equation Solver Friday, December 14, 2012 Andrey Andreyev Advisor: Dr. James Baeder Abstract: The focus of this effort is to produce a two dimensional inviscid,
More informationNon-Differentiable Image Manifolds
The Multiscale Structure of Non-Differentiable Image Manifolds Michael Wakin Electrical l Engineering i Colorado School of Mines Joint work with Richard Baraniuk, Hyeokho Choi, David Donoho Models for
More informationModule 1: Introduction to Finite Difference Method and Fundamentals of CFD Lecture 6:
file:///d:/chitra/nptel_phase2/mechanical/cfd/lecture6/6_1.htm 1 of 1 6/20/2012 12:24 PM The Lecture deals with: ADI Method file:///d:/chitra/nptel_phase2/mechanical/cfd/lecture6/6_2.htm 1 of 2 6/20/2012
More informationImage denoising using curvelet transform: an approach for edge preservation
Journal of Scientific & Industrial Research Vol. 3469, January 00, pp. 34-38 J SCI IN RES VOL 69 JANUARY 00 Image denoising using curvelet transform: an approach for edge preservation Anil A Patil * and
More informationRemoving a mixture of Gaussian and impulsive noise using the total variation functional and split Bregman iterative method
ANZIAM J. 56 (CTAC2014) pp.c52 C67, 2015 C52 Removing a mixture of Gaussian and impulsive noise using the total variation functional and split Bregman iterative method Bishnu P. Lamichhane 1 (Received
More informationA MORPHOLOGY-BASED FILTER STRUCTURE FOR EDGE-ENHANCING SMOOTHING
Proceedings of the 1994 IEEE International Conference on Image Processing (ICIP-94), pp. 530-534. (Austin, Texas, 13-16 November 1994.) A MORPHOLOGY-BASED FILTER STRUCTURE FOR EDGE-ENHANCING SMOOTHING
More informationTotal variation tomographic inversion via the Alternating Direction Method of Multipliers
Total variation tomographic inversion via the Alternating Direction Method of Multipliers Landon Safron and Mauricio D. Sacchi Department of Physics, University of Alberta Summary Geophysical inverse problems
More informationSupplemental Material for Efficient MR Image Reconstruction for Compressed MR Imaging
Supplemental Material for Efficient MR Image Reconstruction for Compressed MR Imaging Paper ID: 1195 No Institute Given 1 More Experiment Results 1.1 Visual Comparisons We apply all methods on four 2D
More informationRecent advances in Metamodel of Optimal Prognosis. Lectures. Thomas Most & Johannes Will
Lectures Recent advances in Metamodel of Optimal Prognosis Thomas Most & Johannes Will presented at the Weimar Optimization and Stochastic Days 2010 Source: www.dynardo.de/en/library Recent advances in
More informationLimitations of Matrix Completion via Trace Norm Minimization
Limitations of Matrix Completion via Trace Norm Minimization ABSTRACT Xiaoxiao Shi Computer Science Department University of Illinois at Chicago xiaoxiao@cs.uic.edu In recent years, compressive sensing
More information1 2 (3 + x 3) x 2 = 1 3 (3 + x 1 2x 3 ) 1. 3 ( 1 x 2) (3 + x(0) 3 ) = 1 2 (3 + 0) = 3. 2 (3 + x(0) 1 2x (0) ( ) = 1 ( 1 x(0) 2 ) = 1 3 ) = 1 3
6 Iterative Solvers Lab Objective: Many real-world problems of the form Ax = b have tens of thousands of parameters Solving such systems with Gaussian elimination or matrix factorizations could require
More informationImage Super-Resolution Reconstruction Based On L 1/2 Sparsity
Buletin Teknik Elektro dan Informatika (Bulletin of Electrical Engineering and Informatics) Vol. 3, No. 3, September 4, pp. 55~6 ISSN: 89-39 55 Image Super-Resolution Reconstruction Based On L / Sparsity
More information