Framelet based blind motion deblurring from a single image


Jian-Feng Cai, Hui Ji, Chaoqiang Liu and Zuowei Shen

J.-F. Cai is with the Department of Mathematics, University of California, Los Angeles, CA 90095, USA (cai@math.ucla.edu). H. Ji, C. Liu and Z. Shen are with the Department of Mathematics, National University of Singapore, Singapore (matjh, matliucq, matzuows@nus.edu.sg).

Abstract: How to recover a clear image from a single motion-blurred image has long been a challenging open problem in digital imaging. In this paper, we focus on how to recover a motion-blurred image due to camera shake. A regularization-based approach is proposed to remove motion blurring from the image by regularizing the sparsity of both the original image and the motion-blur kernel under tight wavelet frame systems. Furthermore, an adapted version of the split Bregman method ([1], [2]) is proposed to efficiently solve the resulting minimization problem. Experiments on both synthesized images and real images show that our algorithm can effectively remove complex motion blurring from natural images without requiring any prior information on the motion-blur kernel.

Index Terms: tight frame, split Bregman method, motion blur, blind deconvolution

EDICS Category: TEC-MRS, TEC-RST

I. INTRODUCTION

Motion blurring is one of the prime causes of poor image quality in digital imaging. When an image is captured by a digital camera, the image represents not just the scene at a single instant of time, but the scene over a period of time.

If objects in the scene are moving fast, or the camera is moving during the exposure time, the objects or the whole scene will look blurry along the direction of relative motion between the object/scene and the camera. Camera shake is one main cause of motion blurring, especially when taking images with a telephoto lens or with a long shutter speed under low lighting conditions.

In the past, many researchers have worked on motion deblurring, which recovers clear images from motion-blurred images. In most works, the motion blur caused by camera shake is modeled by a spatially invariant convolution process:
$f = g \ast p + \eta,$ (1)
where $\ast$ is the discrete convolution operator, $g$ is the original image to recover, $f$ is the observed blurry image, $p$ is the blur kernel (or point spread function), and $\eta$ is the noise. Recovering the original image $g$ from the observed image $f$ is the so-called image deconvolution problem. Based on the availability of $p$, there are two categories of image deconvolution problems. If the blur kernel $p$ is given as a prior, recovering the original image becomes a non-blind deconvolution problem. Non-blind deconvolution is known to be an ill-conditioned inverse problem, as a small perturbation of $f$ may cause the direct solution of (1) to be heavily distorted. There is an extensive research literature on robust non-blind deconvolution algorithms (e.g., [1], [3]-[7]). If the blur kernel $p$ is also unknown, reversing the effect of the convolution by $p$ on the blurred image $f$ is a blind deconvolution problem. In general, blind deconvolution is a very challenging ill-conditioned and ill-posed inverse problem, because it is not only sensitive to image noise but also under-constrained, with infinitely many solutions. Removing motion blurring from images is a typical blind deconvolution problem, as the relative motion between the camera and the scene varies for individual images. Certain prior assumptions on both the blur kernel $p$ and the original image $g$ have to be made to overcome the ill-posedness of motion deblurring. The motion-blur kernel is quite different from some other types of blur kernels, e.g., the out-of-focus blur kernel and the Gaussian optical blur kernel, as motion-blur kernels cannot be represented by a simple parametric form.

In this paper, we assume that the only significant motion of the camera is a translation and that the scene being photographed is static. In the ideal case, each point on the sensor sees a single scene point throughout the exposure, which gives a sharp image $g: \mathbb{R}^2 \to \mathbb{R}$. However, if the camera undergoes a translational 2D movement in the image plane, parametrized by $(u(t), v(t))$, while the shutter is open, each ray from the static scene traces a sequence of points on the image. In other words, for each point $(x, y)$ in the observed blurred image, we can trace the record of rays $(x(t), y(t))$ contributing to it as $(x(t), y(t)) \sim (x + u(t), y + v(t))$, where $\sim$ denotes equality up to a scale.
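To make the observation model (1) concrete, the following is a minimal NumPy/SciPy sketch that simulates a spatially invariant blur with additive noise. It is an illustrative example, not the authors' Matlab implementation; the function name, noise level, and clipping to [0, 1] are assumptions for the sketch.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_image(g, p, noise_std=0.01, seed=0):
    """Simulate the observation model f = g * p + eta of Eq. (1).

    g : 2D array, sharp image with values in [0, 1]
    p : 2D array, blur kernel (non-negative, sums to one)
    """
    rng = np.random.default_rng(seed)
    f = fftconvolve(g, p, mode="same")             # spatially invariant convolution
    f += noise_std * rng.standard_normal(g.shape)  # additive noise eta
    return np.clip(f, 0.0, 1.0)

# usage: f = blur_image(g, p) for any sharp image g and normalized kernel p
```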

Fig. 1. (a) A motion-blurred image; (b) the blur kernel of (a). Both images are from [8].

Thus, the observed blurred image $f$ is the integral over the exposure time $[0, T]$ of all translated versions of $g$, up to a scale, plus some imaging noise $\eta$:
$f(x, y) = \frac{1}{c}\int_0^T g(x + u(t), y + v(t))\,dt + \eta(x, y).$ (2)
Discretizing equation (2) in the time variable $t$, the relationship between $f$ and $g$ becomes the convolution process (1) with a blur kernel $p$ that vanishes outside the motion trajectory generated by $(u(t), v(t))$ during the exposure time. In other words, $p$ can be expressed in terms of a function $s$ associated with the camera motion and the motion trajectory $\Gamma$ traced by $(u(t), v(t))$ from $0$ to $T$: $p$ is supported on $\Gamma$, with values determined by $s$. See Fig. 1(b) for an illustration of a real motion-blur kernel $p$. In brief, the motion-blur kernel $p$ is approximately a smooth function whose support lies on a continuous curve in the image plane. From this perspective, motion deblurring is a challenging blind deconvolution problem, as motion-blur kernels cannot be characterized easily by a parametric function; as a result, a significant number of unknowns is required to represent motion-blur kernels.
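As an illustration of the discretization of (2), the sketch below rasterizes a sampled camera trajectory $(u(t), v(t))$ into a normalized kernel supported on the motion path. This is a simple nearest-neighbor accumulation for illustration only; the kernel size, the example trajectory, and the equal weighting of time samples are assumptions, not the authors' simulation code.

```python
import numpy as np

def trajectory_kernel(u, v, size=31):
    """Accumulate a sampled camera trajectory (u(t), v(t)) into a blur kernel
    supported on the motion path, as in a discretization of Eq. (2)."""
    p = np.zeros((size, size))
    c = size // 2                                   # kernel center
    for ui, vi in zip(u, v):
        x, y = int(round(c + ui)), int(round(c + vi))
        if 0 <= x < size and 0 <= y < size:
            p[y, x] += 1.0                          # each time sample adds equal weight
    return p / p.sum()                              # normalize so the kernel sums to one

# example: an S-shaped trajectory traversed at roughly constant speed
t = np.linspace(0.0, 1.0, 400)
u = 10.0 * np.sin(2.0 * np.pi * t)
v = 12.0 * (t - 0.5)
p = trajectory_kernel(u, v)
```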

A. Previous work on blind deconvolution

In the past, there has been extensive research on single-image blind deconvolution. Early works on blind deblurring usually use a single image and assume a prior parametric form of the blur kernel $p$, such as a linear motion-blur kernel model (e.g., [9]). These parametric motion-blur kernel models can be obtained by estimating only a few parameters, but they are often overly simplified for practical motion blurring. To remove more general motion blurring from images, probabilistic priors on the edge distributions of natural images have been proposed to derive the blur kernel (e.g., [10]-[13]). One weakness of these methods is either that the assumed probabilistic priors do not always hold true for natural images, or that certain user interactions are needed to obtain an accurate estimation. It is also noted that there has been active research on multi-image based blind motion deblurring, as multiple images provide more information about the scene and can lead to an easier configuration for accurately estimating blur kernels; interested readers are referred to [8], [14]-[19] for more details.

An alternative approach is to formulate blind deconvolution as a joint minimization problem that simultaneously estimates both the blur kernel and the clear image. To overcome the inherent ambiguities between the blur kernel $p$ and the clear image $g$, certain regularization terms on both $p$ and $g$ have to be added to the minimization, which results in the following formulation:
$\min_{p, g} E(p, g) := \Phi(g \ast p - f) + \lambda_1 \Theta_1(g) + \lambda_2 \Theta_2(p),$ (3)
where $\Phi(p \ast g - f)$ is the fidelity term, and $\Theta_1(g)$ and $\Theta_2(p)$ are the regularization terms on the clear image and on the blur kernel, respectively. Early regularization-based methods assume smoothness constraints on images and kernels. One such regularization (e.g., [20]) uses the squared $\ell_2$ norm of the image/kernel derivatives as the regularization term, which is the so-called Tikhonov regularization method. A variational approach is proposed in [21], which also assumes a smoothness prior on both images and kernels by considering Gaussian distribution priors; moreover, the parameters involved in the regularization are inferred automatically in [21] by using conjugate hyperpriors on the parameters. In recent years, TV (total variation) and its variants have been popular choices of regularization term for various blind deblurring problems (e.g., [5], [22]-[26]). These TV-based blind deconvolution techniques show good performance in removing certain types of blurring from specific types of images, such as out-of-focus blurring in medical and satellite images. However, TV regularization is not the optimal choice for removing motion blurring, because TV regularization penalizes, e.g., the total length of the edges for piecewise constant functions (see [5]); as a result, the support of the resulting blur kernel tends to be a disk or several isolated disks. A more sophisticated TV-norm-related model is presented in [27], with good performance in removing modest motion blurring from images without rich textures; however, it depends on accurate prior information about the blur kernel being supplied as input. The main limitation of TV-based regularization for natural images is that it does not preserve details and textures well in regions of complex structure, due to the staircasing effect (see, e.g., [28], [29]).

Another type of regularization technique for blind deconvolution uses various sparsity-based priors to regularize the images, the kernels, or both. Considering a smooth blur kernel, a quasi-maximum-likelihood approach is proposed in [30] to deconvolve both sparse images and natural images, the latter being sparsified in [30] through a sparsifying kernel learned from training data.

Based on a Bayesian approach, a sparsity-based prior on the kernel is proposed in [31], which assumes the kernel can be represented by a weighted combination of Gaussian-type basis functions with weights satisfying a heavy-tailed Student's-t distribution. The regularization on images is also based on the assumption that the image differences satisfy a heavy-tailed Student's-t distribution.

B. Our approach and most related works

In this paper, we propose a new optimization approach to remove complex motion blurring from a single image by introducing new sparsity-based regularization terms on both images and motion-blur kernels. Our approach is closely related to recent works on both non-blind image deconvolution ([1], [32]) and blind motion deblurring ([33]). The two non-blind image deconvolution algorithms in [1], [32] are both based on the observation that images usually have sparse representations or sparse approximations in some redundant transform domains, e.g., wavelet ([34]) and framelet ([35], [36]) transforms. Given the blur kernel $p$, (1) is solved in [1], [32] by seeking a sparse solution in the corresponding transform domain. The main difference between the two methods lies in how the sparsity prior is enforced: one uses the so-called synthesis-based sparsity prior ([32]), and the other uses the so-called analysis-based sparsity prior ([1]). Here we give a brief explanation of the two sparsity priors in the context of deconvolution; interested readers are referred to [1], [37] for more detailed discussions.

For simplicity, we denote images as vectors in $\mathbb{R}^n$ by concatenating their columns. Let $D \in \mathbb{R}^{m \times n}$ be an analysis operator which decomposes the data $g \in \mathbb{R}^n$ into transform coefficients $Dg \in \mathbb{R}^m$, and let $R \in \mathbb{R}^{n \times m}$ be a synthesis operator which synthesizes the data $g \in \mathbb{R}^n$ from transform coefficients $u \in \mathbb{R}^m$. If $R$ is the left inverse of $D$, i.e., $RD = I$, we have the perfect reconstruction formula $g = Ru = R(Dg)$. The approach using the synthesis-based sparsity prior gives the solution $\bar g$ as follows:
$\bar g := R\bar u, \quad \bar u := \operatorname{argmin}_{u \in \mathbb{R}^m} \Phi(p \ast Ru - f) + \lambda\|u\|_1,$ (4)
where $\Phi$ is the fidelity term, $\lambda$ is the regularization parameter and $p$ is the blur kernel. The approach using the analysis-based sparsity prior yields the following solution:
$\bar g := \operatorname{argmin}_{g \in \mathbb{R}^n} \Phi(p \ast g - f) + \lambda\|Dg\|_1,$ (5)
or, equivalently, since $RD = I$,
$\bar g := R\bar u, \quad \bar u := \operatorname{argmin}_{u \in \operatorname{range}(D)} \Phi(p \ast Ru - f) + \lambda\|u\|_1.$ (6)
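The sketch below spells out the two objectives, taking the fidelity $\Phi$ as $\frac12\|\cdot\|_2^2$ as elsewhere in the paper. It is an illustrative toy, not the paper's formulation verbatim: the blur is replaced by a placeholder callable, the regularization weight is arbitrary, and the small random tight frame is only there so the snippet runs end to end.

```python
import numpy as np

def synthesis_objective(u, R, blur, f, lam):
    """Objective of Eq. (4): the unknown is a coefficient vector u, with g = R u."""
    g = R @ u
    return 0.5 * np.sum((blur(g) - f) ** 2) + lam * np.sum(np.abs(u))

def analysis_objective(g, D, blur, f, lam):
    """Objective of Eq. (5): the unknown is the image g itself, penalized via ||D g||_1."""
    return 0.5 * np.sum((blur(g) - f) ** 2) + lam * np.sum(np.abs(D @ g))

# toy redundant tight frame on R^n: D^T D = I, so R = D^T is a left inverse of D
n = 8
Q = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))[0]
D = np.vstack([np.eye(n), Q]) / np.sqrt(2.0)    # 2n x n analysis operator
R = D.T                                         # n x 2n synthesis operator

blur = lambda g: g                              # identity stands in for convolution with p
f = np.random.default_rng(1).standard_normal(n)
# (4) searches over all u in R^{2n}; (6) restricts u to range(D), i.e. u = D g for some image g
print(analysis_objective(f, D, blur, f, lam=0.1))
print(synthesis_objective(D @ f, R, blur, f, lam=0.1))
```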

The two minimizations (4) and (6) are equivalent only if $\operatorname{range}(D) = \mathbb{R}^m$, i.e., $D$ (or $R$) is an invertible square matrix with $m = n$. When the analysis operator $D$ is a redundant transform (e.g., a tight frame transform) whose row dimension $m$ is larger than its column dimension $n$, the two minimizations give different solutions. Since the domain of the unknowns in (6) is only a strict subset of the domain of the unknowns in (4), the synthesis-based approach (4) seeks the sparsest solution among all transform coefficient vectors, while the analysis-based minimization (6) (or (5)) seeks the sparsest solution only among canonical framelet coefficient vectors (coefficient vectors obtained by decomposing an image). Since the weighted norm of the canonical framelet coefficients is closely related to the smoothness of the underlying function ([38]), the analysis-based approach (6) gives a less sparse but smoother solution than the synthesis-based approach (4). It is observed in extensive experiments that the deblurred result of (6), with its certain smoothness, tends to have better visual quality than that of (4), which often shows visible artifacts along image edges.

In [33], the sparsity prior under a redundant tight frame system is used to remove motion blurring from natural images for the first time: the sparsity prior of images in the framelet ([35], [36]) domain and the sparsity prior of motion-blur kernels in the curvelet ([39]) domain are used to regularize both images and kernels, and both sparsity priors are enforced by the synthesis-based approach. Although impressive results are demonstrated in [33], there is still room for further improvement, especially in reducing the artifacts of the estimated original images and in better estimating more complex motion-blur kernels. Motivated by the benefits gained in non-blind deconvolution by using the analysis-based sparsity prior ([1]), we also adopt the analysis-based sparsity prior of images under a suitable tight frame system to regularize images. Regarding the motion-blur kernel, we take a different approach from [33]: based on the assumption that the motion-blur kernel can be approximated by a smooth function with support close to a continuous thin curve, we propose a mixed regularization strategy for the blur kernel which includes both an analysis-based sparsity prior and an $\ell_2$-norm regularization of the blur kernel under a certain tight frame system. In our implementation, the framelet system ([35], [36]) is chosen to represent both the original images and the motion-blur kernels for its simplicity of implementation and computational efficiency. Also, the minimization (6) is a more challenging problem to solve than (4): the linearized Bregman iteration ([25], [26]) used in [33] for solving (4) is not applicable to (6). Thus, in this paper we introduce another efficient solver for (6), namely the split Bregman method (see [1], [2]).

The rest of the paper is organized as follows. In Section II, we formulate the minimization model and the corresponding algorithms. Section III is devoted to the experimental evaluation and the discussion of future work.

II. FORMULATIONS AND ALGORITHMS

It is known that non-blind deconvolution is an ill-conditioned problem, as it is sensitive to noise: a small perturbation of $f$ may lead to a large distortion of the direct solution of (1). Extensive studies have been devoted to developing algorithms robust to noise, and imposing regularization terms has proven to be an effective approach. However, blind deblurring is a much more challenging problem, as it is also under-constrained: mathematically, (1) has infinitely many solutions. One type of degenerate solution $(\tilde g, \tilde p)$ of (1) troubles many existing blind deconvolution methods:
$\tilde g = g \ast h, \qquad \tilde p = p \ast h^{-1},$ (7)
where $h$ is some low-pass or high-pass filter. In such a case, the deblurred image is either over-deblurred, when $h$ is a high-pass filter, or less-deblurred, when $h$ is a low-pass filter. The extreme case of less-deblurring is
$\tilde g := f, \qquad \tilde p := \delta,$ (8)
where the image is not deblurred at all. To overcome this ill-posedness of blind deconvolution, certain priors on both images and kernels should be enforced by adding corresponding regularization terms to the minimization, and one main role of these regularization terms is to guarantee that the solution generated by the algorithm does not fall into the degenerate case. In the remainder of this section, we introduce a new approach to solving (1) with analysis-based sparsity priors on both images and kernels under suitable tight frame systems. In our approach, we choose the framelet system ([35], [36]) as the frame system to represent both the original images and the blur kernels. Before presenting our formulation for blind motion deblurring, we first give a brief introduction to the framelet system; interested readers are referred to [6], [40], [41] for more implementation details.

A. Wavelet tight frame and image representation

The wavelet tight frames used here are in the bivariate setting; however, for simplicity we only present wavelet tight frames in the univariate setting, since we use tensor product wavelet frames in the implementation. A countable subset $X \subset L_2(\mathbb{R})$ is called a tight frame of $L_2(\mathbb{R})$ if
$f = \sum_{x \in X} \langle f, x\rangle\, x, \qquad \forall f \in L_2(\mathbb{R}).$ (9)

This is equivalent to $\|f\|^2 = \sum_{x \in X} |\langle f, x\rangle|^2$ for all $f \in L_2(\mathbb{R})$, where $\langle f, g\rangle$ and $\|f\|$ denote the inner product and the norm of $L_2(\mathbb{R})$ for any two functions $f, g \in L_2(\mathbb{R})$, respectively. Tight frames, as a generalization of orthonormal bases, relax the requirement that $X$ be a basis of $L_2(\mathbb{R})$ and bring in redundancy that has proved useful in many applications in signal and image processing (see, e.g., [34], [41]). Since a tight frame is redundant, there are infinitely many possible expansions of $f$ in the system $X$. The particular expansion given in (9) is called the canonical expansion, and $\{\langle f, x\rangle\}_{x \in X}$ is the canonical frame coefficient sequence.

A wavelet system $X(\Psi)$ is defined to be the collection of dilations and shifts of a finite set $\Psi = \{\psi_1, \ldots, \psi_r\} \subset L_2(\mathbb{R})$:
$X(\Psi) := \{\psi_{j,k} := 2^{j/2}\psi(2^j x - k):\ j \in \mathbb{Z},\ k \in \mathbb{Z},\ \psi \in \Psi\}.$
When $X(\Psi)$ forms a tight frame, it is called a wavelet tight frame, and each $\psi \in \Psi$ is called a framelet. To construct compactly supported wavelet tight frames, one usually starts from a compactly supported refinable function $\phi$ (called a scaling function) with a refinement mask $\hat h_0$ satisfying $\hat\phi(2\omega) = \hat h_0(\omega)\hat\phi(\omega)$, where $\hat\phi$ is the Fourier transform ([34]) of $\phi$ and $\hat h_0$ is a $2\pi$-periodic trigonometric polynomial with $\hat h_0(0) = 1$. For a given compactly supported refinable function $\phi$, the construction of a wavelet tight frame amounts to finding an appropriate set of framelets $\Psi = \{\psi_1, \ldots, \psi_r\}$ defined in the Fourier domain by $\hat\psi_i(2\omega) = \hat h_i(\omega)\hat\phi(\omega)$, $i = 1, 2, \ldots, r$, where the framelet masks $\hat h_i$ are $2\pi$-periodic trigonometric polynomials. The unitary extension principle (UEP) of [35] says that $X(\Psi)$ forms a tight frame provided that
$\hat h_0(\omega)\overline{\hat h_0(\omega + \gamma\pi)} + \sum_{i=1}^r \hat h_i(\omega)\overline{\hat h_i(\omega + \gamma\pi)} = \delta_{\gamma,0}, \qquad \gamma = 0, 1.$
As an application of the UEP, a family of wavelet tight frame systems is derived in [35] by using uniform B-splines ([42]) as the refinable function $\phi$. The simplest system in this family is the piecewise linear B-spline tight frame, which uses the piecewise linear B-spline as $\phi$. This $\phi$ has the refinement mask $\hat h_0(\omega) = \cos^2(\omega/2)$, and the corresponding low-pass filter is $h_0 = \frac14[1, 2, 1]$.

The two framelets $\psi_1, \psi_2$ are defined by the framelet masks $\hat h_1(\omega) = \frac{\sqrt{2}\,i}{2}\sin(\omega)$ and $\hat h_2(\omega) = \sin^2(\omega/2)$, whose corresponding high-pass filters are
$h_1 = \frac{\sqrt{2}}{4}[1, 0, -1], \qquad h_2 = \frac14[-1, 2, -1].$ (10)
Numerical computation of the wavelet frame transform is done using the wavelet frame decomposition algorithm given in [36]. In fact, we use the decomposition algorithm without down-sampling, which can easily be implemented using the refinement and framelet masks. The transform can be represented by a matrix $W$ whose construction depends on the boundary conditions; in this paper we use the Neumann (symmetric) boundary condition. Since [6], [40] give details of how to generate such matrices, we omit the detailed discussion here and refer the interested reader to [6], [40]. With the matrix $W$, the transform is easy to describe: let $g$ be the vector obtained from the image by column concatenation; then the frame coefficient vector $u$ is computed via $u = Wg$. The inverse transform, i.e., the reconstruction algorithm, is $W^T$, i.e., $g = W^T u$. It is very important to note that $W$ can be constructed from the refinement and framelet masks such that $W^T W = I$. The rows of the matrix $W$ form a tight frame in a finite-dimensional space which connects well to the wavelet frame system in function spaces (see, e.g., [36] for details). In general, since $W$ has more rows than columns, $WW^T \neq I$; when $WW^T = I$, the rows of $W$ form an orthonormal basis. Finally, we remark that in practical computation we work in the bivariate setting: we employ the tensor product of the 1D wavelet tight frame, and the corresponding matrix $W$ can be constructed easily via the Kronecker product of the matrices of the univariate wavelet frame transform (see [6] for details). In the rest of this paper, we still use $W$ to denote the discrete transform generated by the bivariate wavelet tight frame. We further note that $W$ is used mainly for notational convenience: in the actual computation we do not use matrix multiplication, but rather a wavelet decomposition and reconstruction algorithm modified directly from [36].
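To make the transform $W$ concrete, here is an illustrative single-level undecimated 2D framelet decomposition and its adjoint built from the piecewise linear B-spline filters above, together with a numerical check of $W^T W = I$. It is a re-implementation for illustration only, not the authors' code; in particular it uses periodic rather than Neumann boundary handling, which is a simplification of the paper's setting.

```python
import numpy as np
from scipy import ndimage

# 1D piecewise linear B-spline framelet filters (Eq. (10) and the low-pass filter h0)
h0 = np.array([1.0, 2.0, 1.0]) / 4.0
h1 = np.sqrt(2.0) / 4.0 * np.array([1.0, 0.0, -1.0])
h2 = np.array([-1.0, 2.0, -1.0]) / 4.0
FILTERS_1D = [h0, h1, h2]

# tensor-product 2D filters: nine separable kernels h_i(x) h_j(y)
KERNELS_2D = [np.outer(a, b) for a in FILTERS_1D for b in FILTERS_1D]

def framelet_analysis(g):
    """Single-level undecimated framelet decomposition (W g), periodic boundaries."""
    return [ndimage.correlate(g, k, mode="wrap") for k in KERNELS_2D]

def framelet_synthesis(coeffs):
    """Adjoint reconstruction (W^T u); W^T W = I follows from the UEP."""
    return sum(ndimage.convolve(c, k, mode="wrap") for c, k in zip(coeffs, KERNELS_2D))

# numerical check of the perfect reconstruction property W^T W = I
g = np.random.default_rng(0).standard_normal((64, 64))
assert np.allclose(framelet_synthesis(framelet_analysis(g)), g)
```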

B. Formulation of our minimization model

Given a blurred image $f$ satisfying the relationship (1), $f = p \ast g + \eta$, we take a regularization-based approach to solve the blind motion deblurring problem, which requires the simultaneous estimation of both the original image $g$ and the blur kernel $p$. It is well known that the regularization-based blind deconvolution approach usually results in a challenging non-convex minimization problem. The most commonly used approach is an alternating iteration scheme; see [22] for instance. Let $p^{(0)}$ be the initial guess of the blur kernel; the alternating iteration scheme is described in Algorithm 1.

Algorithm 1 (Outline of the alternating iterations). For $k = 0, 1, \ldots$:
1) given the blur kernel $p^{(k)}$, compute the clear image $g^{(k+1)}$:
$g^{(k+1)} := \operatorname{argmin}_g \tfrac12\|p^{(k)} \ast g - f\|_2^2 + \lambda_1\Theta_1(g),$ (11)
where $\Theta_1(\cdot)$ is the regularization term on images and $\lambda_1$ is the corresponding regularization parameter;
2) given the clear image $g^{(k+1)}$, compute the blur kernel $p^{(k+1)}$:
$p^{(k+1)} := \operatorname{argmin}_p \tfrac12\|g^{(k+1)} \ast p - f\|_2^2 + \lambda_2\Theta_2(p),$ (12)
where $\Theta_2(\cdot)$ is the regularization term on kernels and $\lambda_2$ is the corresponding regularization parameter.

There are two steps in Algorithm 1, and both use a regularization-based approach for non-blind deconvolution. Step 1 is a non-blind image deblurring problem, which has been studied extensively in the literature; see, for instance, [3]-[7], [32]. However, there is a subtle difference between Step 1 and classic non-blind deconvolution: the intermediate estimated blur kernel $p^{(k)}$ used for deblurring in Step 1 is not perfect, and it is far from the truth during the initial iterations. Inspired by the strong noise robustness of the recent non-blind deblurring technique of [1], we use the analysis-based sparsity prior on the original image $g$ under the framelet system to regularize the non-blind image deblurring and to alleviate the distortion caused by erroneous intermediate estimates of the blur kernel. Thus, we propose the following regularization term in (11) of Step 1:
$\Theta_1(g) = \|Wg\|_1,$ (13)
where $g$ is the vectorized form of the image and $W$ is the framelet transform given in the last section.
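Before turning to Step 2, here is a minimal sketch of the outer structure of Algorithm 1. The two inner solvers are placeholder callables standing in for the split Bregman updates developed later in this section; all names and the delta-kernel initialization shown here are illustrative rather than the authors' implementation.

```python
import numpy as np

def alternating_blind_deblur(f, solve_image_step, solve_kernel_step,
                             kernel_shape, n_outer=100):
    """Outline of Algorithm 1: alternate between the subproblems (11) and (12).

    f                 : observed blurred image
    solve_image_step  : callable (p, f) -> g, approximating the minimizer of (11)
    solve_kernel_step : callable (g, f) -> p, approximating the minimizer of (12)
    """
    # initial kernel guess: a delta function, i.e. no deblurring at all
    p = np.zeros(kernel_shape)
    p[kernel_shape[0] // 2, kernel_shape[1] // 2] = 1.0
    g = f.copy()
    for _ in range(n_outer):
        g = solve_image_step(p, f)    # Step 1: update the clear image
        p = solve_kernel_step(g, f)   # Step 2: update the blur kernel
    return g, p
```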

Step 2 is also a non-blind deconvolution problem, but the data to recover is the motion-blur kernel $p$. A motion-blur kernel can also be viewed as an image, but with unique image content; see Fig. 1 for a sample motion-blur kernel visualized as an image. It is observed that there are two essential constraints for a function to be a sound motion-blur kernel: one is its curvy support, which implies its sparsity in the spatial domain; the other is the continuity of its support and the smoothness of the kernel along its support. We propose to translate these two constraints into the following regularization term in (12) of Step 2:
$\Theta_2(p) = \|Wp\|_1 + \tfrac{\tau}{2}\|p\|_2^2,$ (14)
where $p$ is the vectorized form of the kernel, $W$ is the same framelet transform as in (13), and $\tau$ is a parameter that balances the sparsity of the blur kernel against the continuity of its support.

The motivation for the proposed regularization term (14) on motion-blur kernels is as follows. There are two components in (14): the analysis-based sparsity regularization in the framelet domain, $\|Wp\|_1$, and the $\ell_2$-norm regularization $\|p\|_2^2$. The term $\|Wp\|_1$ penalizes the number of large framelet coefficients, which can be viewed as penalizing the number of pixels with large discontinuities; thus the resulting minimizer $p$ tends to be a function with a small support area. However, with the analysis-based sparsity regularization alone, the support of the resulting kernel is biased toward sparse isolated points, especially when there are large oscillations in the speed of the camera motion. The second term $\|p\|_2^2$ corrects this bias: regularizing the kernel with the $\ell_2$ norm favors blur kernels with larger connected support. By balancing the sparsity prior $\|Wp\|_1$ and the support-continuity prior $\|p\|_2^2$ through the parameter $\tau$, the proposed regularization term (14) yields a sound motion-blur kernel.
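As a small illustration of how the two terms in (14) trade off against each other, the sketch below evaluates $\Theta_1$ and $\Theta_2$ for given arrays using a framelet analysis callable such as the one sketched earlier. The function names are illustrative; $\tau$ is passed in by the caller.

```python
import numpy as np

def theta1(g_img, analysis):
    """Image regularizer of Eq. (13): ||W g||_1."""
    return sum(np.abs(c).sum() for c in analysis(g_img))

def theta2(p_kernel, analysis, tau):
    """Kernel regularizer of Eq. (14): ||W p||_1 + (tau/2) ||p||_2^2.

    The l1 term favors kernels with small, sparse support; the l2 term
    discourages the kernel mass from collapsing onto a few isolated spikes.
    """
    l1 = sum(np.abs(c).sum() for c in analysis(p_kernel))
    return l1 + 0.5 * tau * np.sum(p_kernel ** 2)
```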

In general, the minimization problem resulting from the regularization-based formulation of blind deconvolution is not convex, due to the non-convexity in $(g, p)$ of the fidelity term $\Phi(g \ast p - f)$ (with $\Phi(\cdot) = \|\cdot\|_2^2$ in our approach). Thus, the method may converge to a local minimum instead of the global one, depending on how it is initialized. As a result, many existing blind deconvolution algorithms require an accurate physical measurement of the blur kernel, e.g., the kernel size ([10]). Algorithm 1 cannot guarantee convergence to the global minimum either; however, in our experiments it has consistently converged to results of high visual quality without requiring knowledge of the kernel size. In the remainder of this section, we give a heuristic argument for why Algorithm 1 with the proposed regularization terms (13)-(14) is very likely to converge to a sound solution instead of a degenerate solution (7).

The two regularization terms (13) and (14) play important roles in avoiding convergence to the degenerate solution (7). For the case of over-deblurring, where $h$ is a high-pass filter, the deblurred image $\tilde g = g \ast h$ is a wrongly sharpened version of the natural image obtained by high-pass filtering, which significantly increases the high-frequency content of the image. Thus, $\tilde g$ has many more large framelet coefficients than $g$ does, or equivalently $\|W\tilde g\|_1$ is much larger than $\|Wg\|_1$. On the other hand, the resulting blur kernel $\tilde p = p \ast h^{-1}$ is a smoothed version of $p$ obtained by low-pass filtering. Notice that the blur kernel $p$ itself is a smooth kernel with very few high-frequency components compared to natural images; thus the number of large framelet coefficients of $\tilde p$ is nearly the same as that of $p$, which implies that $\|W\tilde p\|_1$ is still very close to $\|Wp\|_1$. With an appropriate parameter $\tau$, the overall cost (3) of the over-deblurred solution $(\tilde g, \tilde p)$ is therefore larger than that of the true solution. For the case of less-deblurring, where $h$ is a low-pass filter, the deblurred image $\tilde g = g \ast h$ is still a smoothed version of $g$, but with less blurring; in this case the resulting blur kernel $\tilde p$ usually has a smaller support than the truth. Although there is a small decrease in the cost $\|W\tilde g\|_1$, both $\|W\tilde p\|_1$ and the penalty term $\|\tilde p\|_2^2$ in (14) increase. By assigning an appropriate value to the parameter $\tau$, the overall cost of such a solution $(\tilde g, \tilde p)$ is still larger than that of the true solution. In summary, the balance among the sparsity-based regularization $\|Wg\|_1$ on the image $g$, the sparsity-based regularization $\|Wp\|_1$ and the $\ell_2$-norm regularization $\|p\|_2^2$ on the kernel $p$ is likely to avoid these degenerate solutions and to generate a sound solution of high visual quality.

C. Numerical algorithms

This section is devoted to the detailed numerical algorithm of our blind motion deblurring method outlined in Algorithm 1. Both steps of Algorithm 1 solve the same type of large-scale minimization problem, and the difficulty lies in the non-separable $\ell_1$-norm terms $\|Wg\|_1$ and $\|Wp\|_1$. One efficient solver for minimizations involving such terms is the split Bregman iteration [1], [2], which is used in our solver. The split Bregman iteration is based on the Bregman iteration, which was first introduced for the non-differentiable TV energy in [43] and then successfully applied to wavelet-based denoising in [44]. The Bregman iteration was also used in TV-based blind deconvolution in [25], [26]. To further improve its performance, a linearized Bregman iteration was introduced in [45]; more details and an improvement called kicking are described in [46], and a rigorous theory is given in [47]. The linearized Bregman iteration for frame-based image deblurring was proposed in [32]. Recently, a new type of iteration based on the Bregman distance, called the split Bregman iteration, was introduced in [2], which extends the utility of the Bregman and linearized Bregman iterations to more general $\ell_1$-norm minimization problems.

The split Bregman iteration for frame-based image deblurring was first proposed in [1]. The basic idea of the split Bregman iteration is to convert the unconstrained minimization problem (11) with (13) (respectively, (12) with (14)) into a constrained one by introducing an auxiliary variable $d_1 = Wg$ (respectively, $d_2 = Wp$), and then to invoke the Bregman iteration to solve the constrained problem. Numerical simulations in [1], [2] show that it converges quickly and has a small memory footprint, which makes it very attractive for large-scale problems.

Let $f, g \in \mathbb{R}^n$ denote the given blurred image and the original image after column concatenation. We assume that the size of the blur kernel is not larger than that of the image, and let $p \in \mathbb{R}^n$ denote its vectorized version. Let $[\cdot]$ denote the matrix form of the convolution operator after concatenation, so that $p \ast g = [p]\,g = [g]\,p$.

In Step 1, we need to solve the following $\ell_1$-norm based minimization:
$\min_g \tfrac12\|[p^{(k)}]\,g - f\|_2^2 + \lambda_1\|Wg\|_1.$ (15)
By letting $d_1 = Wg$, the minimization (15) is equivalent to
$\min_{g, d_1} \tfrac12\|[p^{(k)}]\,g - f\|_2^2 + \lambda_1\|d_1\|_1 \quad \text{s.t.}\ d_1 = Wg.$ (16)
Problem (16) is then further transferred into an unconstrained minimization
$\min_{g, d_1} \tfrac12\|[p^{(k)}]\,g - f\|_2^2 + \lambda_1\|d_1\|_1 + \tfrac{\lambda_1\mu}{2}\|Wg - d_1 + b_1\|_2^2,$ (17)
where $\mu$ is a constant. This is the so-called split Bregman iteration. The iterative numerical algorithm for solving (17), and hence (15), is as follows:
$g^{(l+1,k)} := \operatorname{argmin}_g \tfrac12\|[p^{(k)}]\,g - f\|_2^2 + \tfrac{\lambda_1\mu}{2}\|Wg - d_1^{(l,k)} + b_1^{(l,k)}\|_2^2,$
$d_1^{(l+1,k)} := T_{1/\mu}(Wg^{(l+1,k)} + b_1^{(l,k)}),$
$b_1^{(l+1,k)} := b_1^{(l,k)} + (Wg^{(l+1,k)} - d_1^{(l+1,k)}).$ (18)
Interested readers are referred to [1] for a more detailed derivation of (18). The following theorem, which follows directly from [1, Theorem 3.2], says that iteration (18) is the right one to use.

Theorem 1: The iteration (18) satisfies
$\lim_{l\to+\infty}\ \tfrac12\|[p^{(k)}]\,g^{(l,k)} - f\|_2^2 + \lambda_1\|Wg^{(l,k)}\|_1 = \tfrac12\|[p^{(k)}]\,g^{(\infty,k)} - f\|_2^2 + \lambda_1\|Wg^{(\infty,k)}\|_1,$
where $g^{(\infty,k)}$ is a minimizer of (15). Furthermore, if (15) has a unique minimizer, then iteration (18) converges, i.e., $\lim_{l\to+\infty}\|g^{(l,k)} - g^{(\infty,k)}\|_2 = 0$.

Proof: To apply [1, Theorem 3.2], we only need to show that (15) has a minimizer, and the existence of such a minimizer follows immediately from the definition of the cost functional.
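The sketch below spells out one possible implementation of the image-step iteration (18). It assumes periodic boundaries so that convolution with $p^{(k)}$ diagonalizes under the FFT, and uses $W^T W = I$ to reduce the $g$-update to a pointwise division in the Fourier domain; the helper names, default parameter values, and the `analysis`/`synthesis` callables (e.g., the framelet routines sketched earlier) are assumptions of this illustration, not the authors' code.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def soft_threshold(x, t):
    """Soft-thresholding operator T_t used in (18)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def kernel_to_otf(p, shape):
    """Embed a small kernel into an array of `shape` with its center moved to
    index (0, 0), then take the FFT (periodic-boundary transfer function)."""
    big = np.zeros(shape)
    big[:p.shape[0], :p.shape[1]] = p
    big = np.roll(big, (-(p.shape[0] // 2), -(p.shape[1] // 2)), axis=(0, 1))
    return fft2(big)

def image_step_split_bregman(p, f, analysis, synthesis, lam1=1e-3, mu=1.0, n_inner=1):
    """Sketch of the split Bregman iteration (18) for the image subproblem (15)."""
    p_hat = kernel_to_otf(p, f.shape)
    denom = np.abs(p_hat) ** 2 + lam1 * mu
    g = f.copy()
    d = [np.zeros_like(c) for c in analysis(g)]     # auxiliary variable d1 = W g
    b = [np.zeros_like(c) for c in d]               # Bregman variable b1
    for _ in range(n_inner):
        # g-update: solve ([p]^T[p] + lam1*mu*I) g = [p]^T f + lam1*mu*W^T(d - b)
        rhs = np.real(ifft2(np.conj(p_hat) * fft2(f))) \
              + lam1 * mu * synthesis([di - bi for di, bi in zip(d, b)])
        g = np.real(ifft2(fft2(rhs) / denom))
        Wg = analysis(g)
        # d-update by soft thresholding, then Bregman variable update
        d = [soft_threshold(ci + bi, 1.0 / mu) for ci, bi in zip(Wg, b)]
        b = [bi + ci - di for bi, ci, di in zip(b, Wg, d)]
    return g
```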

In Step 2 of Algorithm 1, we need to solve a minimization problem similar to (15):
$\min_p \tfrac12\|[g^{(k+1)}]\,p - f\|_2^2 + \lambda_2\bigl(\|Wp\|_1 + \tfrac{\tau}{2}\|p\|_2^2\bigr).$ (19)
The split Bregman iteration can also be used to solve this minimization efficiently with some modifications. The sequences $p^{(l,k)}$, $d_2^{(l,k)}$, $b_2^{(l,k)}$ are generated as follows:
$p^{(l+1,k)} := \operatorname{argmin}_p \tfrac12\|[g^{(k+1)}]\,p - f\|_2^2 + \tfrac{\lambda_2\tau}{2}\|p\|_2^2 + \tfrac{\lambda_2\mu}{2}\|Wp - d_2^{(l,k)} + b_2^{(l,k)}\|_2^2,$
$d_2^{(l+1,k)} := T_{1/\mu}(Wp^{(l+1,k)} + b_2^{(l,k)}),$
$b_2^{(l+1,k)} := b_2^{(l,k)} + (Wp^{(l+1,k)} - d_2^{(l+1,k)}),$ (20)
where $\mu > 0$ is a parameter of the iteration and $T_\theta$ is the soft-thresholding operator. The first step of each iteration in (20) amounts to solving the positive definite linear system
$\bigl([g^{(k+1)}]^T[g^{(k+1)}] + \lambda_2(\mu + \tau)I\bigr)p = [g^{(k+1)}]^T f + \lambda_2\mu\,W^T\bigl(d_2^{(l,k)} - b_2^{(l,k)}\bigr).$
We have the same convergence result for (20) as for (18).

In Step 1 (respectively, Step 2), to obtain an exact solution of (15) (respectively, (19)) one would need to take $g^{(k+1)} = g^{(\infty,k)}$ (respectively, $p^{(k+1)} = p^{(\infty,k)}$), as stated in Theorem 1. However, it would be too conservative to run the inner iteration to convergence: the current image $g^{(k)}$ or kernel $p^{(k)}$ may not be accurate enough, and the accuracy obtained by an infinite inner loop would be wasted. This is typical for split Bregman iterations; see [1], [2]. Therefore, for computational efficiency we perform only one iteration of (18) and (20), i.e., we set $g^{(k+1)} = g^{(1,k)}$ and $p^{(k+1)} = p^{(1,k)}$. Moreover, in practice the iterates $g^{(k)}$ and $p^{(k)}$ may not always be physically sound. Thus, we impose the following physical conditions:
$p \geq 0, \qquad \sum_j p(j) = 1, \qquad 0 \leq g \leq 1,$ (21)
which say that the blur kernel is non-negative and normalized, and that the range of the original image is $[0, 1]$. In each step, we project $g^{(k)}$ and $p^{(k)}$ so that they satisfy the physical conditions (21). Taking all of the above into account, we obtain the complete iteration used in our blind motion deblurring algorithm; see Algorithm 2 for a detailed description.
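Here is a companion sketch of one kernel update of (20) followed by the projection onto the constraints (21). As in the image-step sketch, the least-squares subproblem is solved in the Fourier domain under a periodic-boundary assumption, the kernel is kept at the full image size as in the paper's vectorized formulation, and the function names and default parameter values are illustrative.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def kernel_step_split_bregman(g, f, analysis, synthesis,
                              lam2=1e-2, mu=1.0, tau=0.1, n_inner=1):
    """Sketch of one split Bregman pass (20) for the kernel subproblem (19),
    followed by the projection onto the physical constraints (21)."""
    g_hat = fft2(g)
    denom = np.abs(g_hat) ** 2 + lam2 * (mu + tau)
    d = [np.zeros_like(c) for c in analysis(f)]     # auxiliary variable d2 = W p
    b = [np.zeros_like(c) for c in d]               # Bregman variable b2
    p = np.zeros_like(f)
    for _ in range(n_inner):
        # p-update: ([g]^T[g] + lam2*(mu + tau) I) p = [g]^T f + lam2*mu*W^T(d - b)
        rhs = np.real(ifft2(np.conj(g_hat) * fft2(f))) \
              + lam2 * mu * synthesis([di - bi for di, bi in zip(d, b)])
        p = np.real(ifft2(fft2(rhs) / denom))
        Wp = analysis(p)
        d = [np.sign(c + bi) * np.maximum(np.abs(c + bi) - 1.0 / mu, 0.0)
             for c, bi in zip(Wp, b)]
        b = [bi + c - di for bi, c, di in zip(b, Wp, d)]
    # projection onto (21): non-negative kernel with unit mass
    p = np.maximum(p, 0.0)
    return p / max(p.sum(), 1e-12)
```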

Algorithm 2 (Numerical algorithm for blind motion deblurring)
(1) Initialize $k := 0$, $p^{(0)} := \delta$, $d_1 = b_1 := 0$ and $d_2 = b_2 := 0$, where $\delta$ is the delta function.
(2) DO
$g^{(k+1/2)} := \operatorname{argmin}_g \tfrac12\|[p^{(k)}]\,g - f\|_2^2 + \tfrac{\lambda_1\mu_1}{2}\|Wg - d_1^{(k)} + b_1^{(k)}\|_2^2,$
$g^{(k+1)}(j) := \begin{cases} 1, & g^{(k+1/2)}(j) > 1,\\ g^{(k+1/2)}(j), & 0 \le g^{(k+1/2)}(j) \le 1,\\ 0, & g^{(k+1/2)}(j) < 0, \end{cases} \quad j = 1, 2, \ldots, N,$
$d_1^{(k+1)} := T_{1/\mu_1}(Wg^{(k+1)} + b_1^{(k)}),$
$b_1^{(k+1)} := b_1^{(k)} + (Wg^{(k+1)} - d_1^{(k+1)}),$
$p^{(k+1/2)} := \operatorname{argmin}_p \tfrac12\|[g^{(k+1)}]\,p - f\|_2^2 + \tfrac{\lambda_2\tau}{2}\|p\|_2^2 + \tfrac{\lambda_2\mu_2}{2}\|Wp - d_2^{(k)} + b_2^{(k)}\|_2^2,$
$p^{(k+1)}(j) := \max(p^{(k+1/2)}(j), 0),\ j = 1, 2, \ldots, N, \quad p^{(k+1)} := p^{(k+1)}/\|p^{(k+1)}\|_1,$
$d_2^{(k+1)} := T_{1/\mu_2}(Wp^{(k+1)} + b_2^{(k)}),$
$b_2^{(k+1)} := b_2^{(k)} + (Wp^{(k+1)} - d_2^{(k+1)}),$
$k := k + 1,$ (22)
UNTIL ($k \ge K$ or $\|p^{(k)} - p^{(k-1)}\|_2^2 \le \epsilon$).

III. EXPERIMENTS AND CONCLUSIONS

In our implementation, the maximum iteration number of Algorithm 2 is set to 100. For blurred color images, each iteration of Algorithm 2 (implemented in Matlab 7.8) takes roughly 10 seconds on a Windows PC with 4 GB of memory and a single Intel Core 2 CPU running at 2 GHz. In Section III-B, when running Algorithm 2 on real image data, the maximum iteration number is set to K = 200, and a stopping threshold $\epsilon$ is used as in (22). The parameters used inside Algorithm 2 are set as follows: $\lambda_2 = 10^{-1}\lambda_1\bigl(\sum_{i,j} f(i, j)\bigr)$, $\mu_1 = 2000$, $\mu_2 = 10^{-1}\mu_1$, and $\tau = 10^{-1}\mu_2$. Among these parameters, $\mu_1$ and $\mu_2$ only affect the convergence rate of the Bregman iteration; setting different values of $\mu_1$ and $\mu_2$ does not change the outcome of Algorithm 2. The parameter $\lambda_1$ depends on the complexity of the image content: an image with simple structure prefers a smaller $\lambda_1$. It is noted that Algorithm 2 is rather insensitive to the value of $\lambda_1$. In our experiments, we tried Algorithm 2 on image data using three candidate values of $\lambda_1$, $\{10^{-4}, 10^{-3}, 10^{-2}\}$, and set $\lambda_1 = 10^{-3}$ uniformly for all experiments.
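For completeness, the earlier sketches can be wired together into a rough analogue of Algorithm 2. This toy driver assumes the previously sketched functions are in scope; its parameter defaults are placeholders rather than the tuned settings reported above, and, unlike Algorithm 2, it re-initializes the Bregman variables on every outer pass, which is a simplification.

```python
import numpy as np

def blind_deblur(f, n_outer=100, lam1=1e-3, lam2=1e-2, mu1=1.0, mu2=1.0, tau=0.1):
    """Toy driver combining the earlier sketches into an Algorithm-2-like loop."""
    g = np.clip(f, 0.0, 1.0)
    p = np.zeros_like(f)
    p[p.shape[0] // 2, p.shape[1] // 2] = 1.0       # delta kernel, centered in the array
    for _ in range(n_outer):
        g = np.clip(image_step_split_bregman(p, f, framelet_analysis, framelet_synthesis,
                                             lam1=lam1, mu=mu1), 0.0, 1.0)
        # recenter: the kernel step estimates p with its origin at index (0, 0)
        p = np.fft.fftshift(kernel_step_split_bregman(g, f, framelet_analysis,
                                                      framelet_synthesis,
                                                      lam2=lam2, mu=mu2, tau=tau))
    return g, p
```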

Fig. 2. (a) A clear image; (b) the blurred image; (c) the corresponding blur kernel.

A. Simulated images

In the first part of the experiments, we synthesized one sample motion-blurred image to illustrate the convergence behavior of Algorithm 2. The motion-blur kernel is simulated by letting the camera move at a constant speed along an S-shaped curve in the image plane. The sample original image is shown in Fig. 2(a), and the synthesized blurred image is shown in Fig. 2(b) with the corresponding blur kernel shown in Fig. 2(c). The intermediate results obtained during the iterations of Algorithm 2 are shown in Fig. 3: the estimated motion-blur kernels are shown in the first row, the intermediate recovered images in the second row, and the same region of the recovered images, zoomed in for better visual inspection, in the third row. It is clear that Algorithm 2 empirically converges to the ground truth of the original image and blur kernel, and the convergence is fairly fast, as it took only 160 iterations to obtain a satisfactory result. With more iterations, it is seen from Fig. 3(h) that Algorithm 2 estimates the motion-blur kernel very accurately and yields a high-quality deblurred image with few artifacts.

B. Real images

In the second part of the experiments, Algorithm 2 is applied to real image data from various sources ([10], [33], [48]). We compare our results against five other single-image based methods: You & Kaveh's method ([20]), Fergus et al.'s method ([10]), Shan et al.'s method ([27]), Tzikas et al.'s method ([31]) and Cai et al.'s method ([33]). As discussed in Section I, the approach proposed in [20] is based on Tikhonov regularization of both images and kernels. Fergus et al.'s method uses the statistical properties of image derivatives to infer the motion-blur kernel; to get a good result, it needs fairly accurate information about the size of the motion-blur kernel, especially when the kernel is large (around 30 pixels or more).

Fig. 3. (a)-(h): results of deblurring the blurred image shown in Fig. 2(b) at the k-th iteration of Algorithm 2, for k = 0, 10, 20, 40, 80, 160, 320, 640, respectively. Estimated blur kernels are shown in the first row, intermediate deblurred images in the second row, and a zoomed-in region in the third row.

Shan et al.'s method is based on a sophisticated TV-norm based minimization model of both the image intensity and the image gradients; the regularization term on the motion-blur kernel is the $\ell_1$ norm of the kernel intensity. Similar to [10], it also requires the kernel size as input. Cai et al.'s method ([33]) is based on a synthesis-based sparsity constraint on motion-blur kernels in the curvelet domain and a synthesis-based sparsity constraint on images in the wavelet domain. For [10], [27] and [33], the results are from the authors' implementations; for [20] and [31], the results are from our own implementations. The parameters of the above methods were tuned to find visually pleasant results on the tested images. Tzikas et al.'s method estimates the blur kernel using a weighted combination of Gaussian-type basis functions whose weights satisfy a heavy-tailed Student's-t distribution. One advantage of Tzikas et al.'s method is that many parameters are estimated automatically; we used the default values of the remaining few parameters proposed in their paper.

Figs. 4-7 show the recovered results from all six methods on four real blurred images, which differ from one another in the type of image content, the degree of blurring and the type of camera motion. The motion-blur kernels estimated by Algorithm 2 are shown in Figs. 4(h)-7(h) for all tested images. One can see camera motion along straight line segments but with varying speed, camera motion along curves, and camera motion along trajectories with sharp corners. The visual quality of the images recovered by Algorithm 2 degrades slightly compared with the previous simulated experiment. The main reason is that blurring in real life is hardly a perfect spatially invariant motion blur: either other blurring effects, e.g., out-of-focus blur, are present, or the motion blur is not completely uniform over the whole image. Overall, compared to the other evaluated methods, Algorithm 2 performs consistently over these images, and the results are of good quality with few noticeable image artifacts.

The results from the five other existing methods vary in visual quality and are in general visually inferior to those from Algorithm 2. You & Kaveh's method ([20]) only worked well on the image shown in Fig. 6 and did poorly on the three other images. Such a result is not surprising, as the Tikhonov regularization used in You & Kaveh's method places too much emphasis on the smoothness of blur kernels, so that it tends to yield Gaussian-type blur kernels; as a result, the kernel is often over-sized, which leads to over-deblurred results. The performance of Fergus et al.'s method ([10]) relies on how well the distribution of image derivatives fits the assumed prior, which has its limitations since the image content varies significantly in the experiments; it is seen from Fig. 6 and Fig. 7 that it tends to under-estimate the degree of blur, so that the results still look blurry. The kernel estimation in Shan et al.'s method ([27]) is based on $\ell_1$-norm regularization of the blur kernel, which does not seem adequate to yield accurate estimates of the motion-blur kernels tested in the experiments.

Thus, the results of Shan et al.'s method on the tested images are not satisfactory. Tzikas et al.'s method ([31]) has an advantage over the other methods in its autonomous estimation of the regularization parameters; however, the kernel model considered in [31] is a linear combination of Gaussian-type blur kernels, and as a result it did not perform well on images blurred by thin kernels, such as Fig. 5. Cai et al.'s method ([33]) is quite stable compared to the other methods, but its results tend to show noticeable artifacts along edges; the main reason is the use of the synthesis-based approach to enforce the sparsity constraints on the image and kernel. This drawback is well addressed by Algorithm 2. The results from Algorithm 2 are consistent across the tested images and are in general more visually pleasant, with fewer artifacts, than those of the other methods.

C. Conclusions and future work

In this paper, a new algorithm is presented to remove camera shake from a single image. Based on an analysis-based sparsity prior on images in the framelet domain and a mixed regularization on motion-blur kernels, which includes both an analysis-based sparsity prior on kernels in the framelet domain and a smoothness prior on kernels, our new formulation of motion deblurring leads to a powerful algorithm that can recover a clear image from a given motion-blurred image. The analysis-based sparsity prior used in our approach empirically yields more visually pleasant results than the synthesis-based sparsity prior used in other methods ([33]). Also, our formulation does not suffer from converging to the well-known degenerate cases as many other approaches might; as a result, our method does not require any prior information about the kernel, while many existing techniques depend on accurate information about the motion blur, which usually requires user interaction. The resulting minimization problem can be solved efficiently by the split Bregman method. The experiments on both synthesized and real images show that the proposed algorithm is efficient and effective at removing complicated blurring from natural images with complex structures.

Blind motion deblurring is a challenging blind deconvolution problem, and many open questions remain. Our proposed algorithm performs quite well when the motion blur is uniform over the image, but it cannot deal with non-uniform motion blur, including image blur caused by camera rotation and partial image blur caused by fast-moving objects in the scene. In the future, we would like to extend the proposed algorithm to remove non-uniform motion blur from images. Also, although the parameter setting in our algorithm is rather simple and does not require rigorous tuning, a completely automatic parameter-setting process is still preferable for some applications. Recently, there have been blind deconvolution techniques that use either a variational Bayesian approach (e.g., [49]) or cross-validation techniques to determine the optimal parameter values automatically (e.g., [50]). In the future, we would also like to investigate how to incorporate these techniques into our method to automatically infer the optimal parameter settings of the deblurring algorithm.

Fig. 4. A blurred image and its deblurred results with zoomed regions. (a) Blurred image; (b) You & Kaveh [20]; (c) Fergus et al. [10]; (d) Shan et al. [27]; (e) Tzikas et al. [31]; (f) Cai et al. [33]; (g) Algorithm 2; (h) color visualization of the motion-blur kernel estimated by Algorithm 2.

Fig. 5. A blurred image and its deblurred results with zoomed regions. (a) Blurred image; (b) You & Kaveh [20]; (c) Fergus et al. [10]; (d) Shan et al. [27]; (e) Tzikas et al. [31]; (f) Cai et al. [33]; (g) Algorithm 2; (h) color visualization of the motion-blur kernel estimated by Algorithm 2.

REFERENCES

[1] J.-F. Cai, S. Osher, and Z. Shen, "Split Bregman method and frame based image restoration," Multiscale Model. Simul., vol. 8, no. 2, 2009.

Fig. 6. A blurred image and its deblurred results with zoomed regions. (a) Blurred image; (b) You & Kaveh [20]; (c) Fergus et al. [10]; (d) Shan et al. [27]; (e) Tzikas et al. [31]; (f) Cai et al. [33]; (g) Algorithm 2; (h) color visualization of the motion-blur kernel estimated by Algorithm 2.

[2] T. Goldstein and S. Osher, "The split Bregman method for L1-regularized problems," SIAM J. Imaging Sci., vol. 2, no. 2.
[3] H. C. Andrews and B. R. Hunt, Digital Image Restoration. Englewood Cliffs, NJ: Prentice-Hall.
[4] M. K. Ng, R. H. Chan, and W. Tang, "A fast algorithm for deblurring models with Neumann boundary conditions," SIAM J. Sci. Comput., vol. 21, no. 3.
[5] T. F. Chan and J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM), 2005.

Fig. 7. A blurred image and its deblurred results with zoomed regions. (a) Blurred image; (b) You & Kaveh [20]; (c) Fergus et al. [10]; (d) Shan et al. [27]; (e) Tzikas et al. [31]; (f) Cai et al. [33]; (g) Algorithm 2; (h) color visualization of the motion-blur kernel estimated by Algorithm 2.

[6] A. Chai and Z. Shen, "Deconvolution: a wavelet frame approach," Numer. Math., vol. 106.
[7] Y. Lou, X. Zhang, S. Osher, and A. Bertozzi, "Image recovery via nonlocal operators," Journal of Scientific Computing, vol. 42, no. 2.
[8] J.-F. Cai, H. Ji, C. Liu, and Z. Shen, "Blind motion deblurring using multiple images," Journal of Computational Physics, vol. 228, no. 14.
[9] G. Pavlovic and A. M. Tekalp, "Maximum likelihood parametric blur identification based on a continuous spatial domain model," IEEE Trans. Image Processing, vol. 1, no. 4, Oct.
[10] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, "Removing camera shake from a single photograph," in SIGGRAPH, vol. 25, 2006.
[11] A. Levin, "Blind motion deblurring using image statistics," in NIPS, Dec. 2006.
[12] J. Jia, "Single image motion deblurring using transparency," in CVPR, 2007.
[13] N. Joshi, R. Szeliski, and D. Kriegman, "PSF estimation using sharp edge prediction," in CVPR.
[14] B. Bascle, A. Blake, and A. Zisserman, "Motion deblurring and super-resolution from an image sequence," in ECCV, 1996.
[15] M. Ben-Ezra and S. K. Nayar, "Motion-based motion deblurring," IEEE Trans. PAMI, vol. 26, no. 6, 2004.

[16] F. Sroubek and J. Flusser, "Multichannel blind deconvolution of spatially misaligned images," IEEE Trans. Image Processing, vol. 14, no. 7.
[17] R. Raskar, A. Agrawal, and J. Tumblin, "Coded exposure photography: motion deblurring via fluttered shutter," in SIGGRAPH, vol. 25, 2006.
[18] J. Chen, L. Yuan, C. K. Tang, and L. Quan, "Robust dual motion deblurring," in CVPR.
[19] Y. Lu, J. Sun, L. Quan, and H. Shum, "Blurred/non-blurred image alignment using an image sequence," in SIGGRAPH.
[20] Y. You and M. Kaveh, "A regularization approach to joint blur identification and image restoration," IEEE Trans. Image Processing, vol. 5.
[21] R. Molina, J. Mateos, and A. K. Katsaggelos, "Blind deconvolution using a variational approach to parameter, image, and blur estimation," IEEE Trans. Image Processing, vol. 15, no. 12.
[22] T. F. Chan and C. K. Wong, "Total variation blind deconvolution," IEEE Trans. Image Processing, vol. 7, no. 3.
[23] L. Bar, B. Berkels, M. Rumpf, and G. Sapiro, "A variational framework for simultaneous motion estimation and restoration of motion-blurred video," in ICCV.
[24] P. D. Romero and V. F. Candela, "Blind deconvolution models regularized by fractional powers of the Laplacian," Journal of Mathematical Imaging and Vision, vol. 32.
[25] L. He, A. Marquina, and S. Osher, "Blind deconvolution using TV regularization and Bregman iteration," Int. J. Imaging Syst. Technol., vol. 15.
[26] A. Marquina, "Inverse scale space methods for blind deconvolution," UCLA CAM Report.
[27] Q. Shan, J. Jia, and A. Agarwala, "High-quality motion deblurring from a single image," in SIGGRAPH.
[28] D. C. Dobson and F. Santosa, "Recovery of blocky images from noisy and blurred data," SIAM J. Appl. Math., vol. 56.
[29] M. Nikolova, "Local strong homogeneity of a regularized estimator," SIAM J. Appl. Math., vol. 61.
[30] M. M. Bronstein, A. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, "Blind deconvolution of images using optimal sparse representations," IEEE Trans. Image Processing, vol. 14, no. 6.
[31] D. G. Tzikas, A. C. Likas, and N. P. Galatsanos, "Variational Bayesian sparse kernel-based blind image deconvolution with Student's-t priors," IEEE Trans. Image Processing, vol. 18, no. 4.
[32] J.-F. Cai, S. Osher, and Z. Shen, "Linearized Bregman iterations for frame-based image deblurring," SIAM J. Imaging Sci.
[33] J.-F. Cai, H. Ji, C. Liu, and Z. Shen, "Blind motion deblurring from a single image using sparse approximation," in CVPR.
[34] S. Mallat, A Wavelet Tour of Signal Processing. Academic Press.
[35] A. Ron and Z. Shen, "Affine systems in L2(R^d): the analysis of the analysis operator," J. Funct. Anal., vol. 148.
[36] I. Daubechies, B. Han, A. Ron, and Z. Shen, "Framelets: MRA-based constructions of wavelet frames," Appl. Comput. Harmon. Anal., vol. 14, pp. 1-46.
[37] M. Elad, P. Milanfar, and R. Rubinstein, "Analysis versus synthesis in signal priors," in EUSIPCO.
[38] L. Borup, R. Gribonval, and M. Nielsen, "Bi-framelet systems with few vanishing moments characterize Besov spaces," Appl. Comput. Harmon. Anal., vol. 14, no. 1, 2004.


ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N. ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N. Dartmouth, MA USA Abstract: The significant progress in ultrasonic NDE systems has now

More information

EECS 556 Image Processing W 09. Interpolation. Interpolation techniques B splines

EECS 556 Image Processing W 09. Interpolation. Interpolation techniques B splines EECS 556 Image Processing W 09 Interpolation Interpolation techniques B splines What is image processing? Image processing is the application of 2D signal processing methods to images Image representation

More information

A hybrid GMRES and TV-norm based method for image restoration

A hybrid GMRES and TV-norm based method for image restoration A hybrid GMRES and TV-norm based method for image restoration D. Calvetti a, B. Lewis b and L. Reichel c a Department of Mathematics, Case Western Reserve University, Cleveland, OH 44106 b Rocketcalc,

More information

A Comparative Study & Analysis of Image Restoration by Non Blind Technique

A Comparative Study & Analysis of Image Restoration by Non Blind Technique A Comparative Study & Analysis of Image Restoration by Non Blind Technique Saurav Rawat 1, S.N.Tazi 2 M.Tech Student, Assistant Professor, CSE Department, Government Engineering College, Ajmer Abstract:

More information

Author(s): Title: Journal: ISSN: Year: 2014 Pages: Volume: 25 Issue: 5

Author(s): Title: Journal: ISSN: Year: 2014 Pages: Volume: 25 Issue: 5 Author(s): Ming Yin, Junbin Gao, David Tien, Shuting Cai Title: Blind image deblurring via coupled sparse representation Journal: Journal of Visual Communication and Image Representation ISSN: 1047-3203

More information

Image Restoration under Significant Additive Noise

Image Restoration under Significant Additive Noise 1 Image Restoration under Significant Additive Noise Wenyi Zhao 1 and Art Pope Sarnoff Corporation 01 Washington Road, Princeton, NJ 08540, USA email: { wzhao, apope }@ieee.org Tel: 408-53-178 Abstract

More information

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear

More information

Single Image Motion Deblurring Using Transparency

Single Image Motion Deblurring Using Transparency Single Image Motion Deblurring Using Transparency Jiaya Jia Department of Computer Science and Engineering The Chinese University of Hong Kong leojia@cse.cuhk.edu.hk Abstract One of the key problems of

More information

Method of Background Subtraction for Medical Image Segmentation

Method of Background Subtraction for Medical Image Segmentation Method of Background Subtraction for Medical Image Segmentation Seongjai Kim Department of Mathematics and Statistics, Mississippi State University Mississippi State, MS 39762, USA and Hyeona Lim Department

More information

Some Blind Deconvolution Techniques in Image Processing

Some Blind Deconvolution Techniques in Image Processing Some Blind Deconvolution Techniques in Image Processing Tony Chan Math Dept., UCLA Joint work with Frederick Park and Andy M. Yip IPAM Workshop on Mathematical Challenges in Astronomical Imaging July 26-30,

More information

Deconvolution with curvelet-domain sparsity Vishal Kumar, EOS-UBC and Felix J. Herrmann, EOS-UBC

Deconvolution with curvelet-domain sparsity Vishal Kumar, EOS-UBC and Felix J. Herrmann, EOS-UBC Deconvolution with curvelet-domain sparsity Vishal Kumar, EOS-UBC and Felix J. Herrmann, EOS-UBC SUMMARY We use the recently introduced multiscale and multidirectional curvelet transform to exploit the

More information

GRID WARPING IN TOTAL VARIATION IMAGE ENHANCEMENT METHODS. Andrey Nasonov, and Andrey Krylov

GRID WARPING IN TOTAL VARIATION IMAGE ENHANCEMENT METHODS. Andrey Nasonov, and Andrey Krylov GRID WARPING IN TOTAL VARIATION IMAGE ENHANCEMENT METHODS Andrey Nasonov, and Andrey Krylov Lomonosov Moscow State University, Moscow, Department of Computational Mathematics and Cybernetics, e-mail: nasonov@cs.msu.ru,

More information

An Approach for Reduction of Rain Streaks from a Single Image

An Approach for Reduction of Rain Streaks from a Single Image An Approach for Reduction of Rain Streaks from a Single Image Vijayakumar Majjagi 1, Netravati U M 2 1 4 th Semester, M. Tech, Digital Electronics, Department of Electronics and Communication G M Institute

More information

Removing Atmospheric Turbulence

Removing Atmospheric Turbulence Removing Atmospheric Turbulence Xiang Zhu, Peyman Milanfar EE Department University of California, Santa Cruz SIAM Imaging Science, May 20 th, 2012 1 What is the Problem? time 2 Atmospheric Turbulence

More information

Total Variation Denoising with Overlapping Group Sparsity

Total Variation Denoising with Overlapping Group Sparsity 1 Total Variation Denoising with Overlapping Group Sparsity Ivan W. Selesnick and Po-Yu Chen Polytechnic Institute of New York University Brooklyn, New York selesi@poly.edu 2 Abstract This paper describes

More information

Joint Depth Estimation and Camera Shake Removal from Single Blurry Image

Joint Depth Estimation and Camera Shake Removal from Single Blurry Image Joint Depth Estimation and Camera Shake Removal from Single Blurry Image Zhe Hu1 Li Xu2 Ming-Hsuan Yang1 1 University of California, Merced 2 Image and Visual Computing Lab, Lenovo R & T zhu@ucmerced.edu,

More information

Sparse wavelet expansions for seismic tomography: Methods and algorithms

Sparse wavelet expansions for seismic tomography: Methods and algorithms Sparse wavelet expansions for seismic tomography: Methods and algorithms Ignace Loris Université Libre de Bruxelles International symposium on geophysical imaging with localized waves 24 28 July 2011 (Joint

More information

WEINER FILTER AND SUB-BLOCK DECOMPOSITION BASED IMAGE RESTORATION FOR MEDICAL APPLICATIONS

WEINER FILTER AND SUB-BLOCK DECOMPOSITION BASED IMAGE RESTORATION FOR MEDICAL APPLICATIONS WEINER FILTER AND SUB-BLOCK DECOMPOSITION BASED IMAGE RESTORATION FOR MEDICAL APPLICATIONS ARIFA SULTANA 1 & KANDARPA KUMAR SARMA 2 1,2 Department of Electronics and Communication Engineering, Gauhati

More information

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009 Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer

More information

Robust Single Image Super-resolution based on Gradient Enhancement

Robust Single Image Super-resolution based on Gradient Enhancement Robust Single Image Super-resolution based on Gradient Enhancement Licheng Yu, Hongteng Xu, Yi Xu and Xiaokang Yang Department of Electronic Engineering, Shanghai Jiaotong University, Shanghai 200240,

More information

Anisotropic representations for superresolution of hyperspectral data

Anisotropic representations for superresolution of hyperspectral data Anisotropic representations for superresolution of hyperspectral data Edward H. Bosch, Wojciech Czaja, James M. Murphy, and Daniel Weinberg Norbert Wiener Center Department of Mathematics University of

More information

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model TAE IN SEOL*, SUN-TAE CHUNG*, SUNHO KI**, SEONGWON CHO**, YUN-KWANG HONG*** *School of Electronic Engineering

More information

Yunyun Yang, Chunming Li, Chiu-Yen Kao and Stanley Osher. Speaker: Chiu-Yen Kao (Math Department, The Ohio State University) BIRS, Banff, Canada

Yunyun Yang, Chunming Li, Chiu-Yen Kao and Stanley Osher. Speaker: Chiu-Yen Kao (Math Department, The Ohio State University) BIRS, Banff, Canada Yunyun Yang, Chunming Li, Chiu-Yen Kao and Stanley Osher Speaker: Chiu-Yen Kao (Math Department, The Ohio State University) BIRS, Banff, Canada Outline Review of Region-based Active Contour Models Mumford

More information

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Moritz Baecher May 15, 29 1 Introduction Edge-preserving smoothing and super-resolution are classic and important

More information

SPARSE COMPONENT ANALYSIS FOR BLIND SOURCE SEPARATION WITH LESS SENSORS THAN SOURCES. Yuanqing Li, Andrzej Cichocki and Shun-ichi Amari

SPARSE COMPONENT ANALYSIS FOR BLIND SOURCE SEPARATION WITH LESS SENSORS THAN SOURCES. Yuanqing Li, Andrzej Cichocki and Shun-ichi Amari SPARSE COMPONENT ANALYSIS FOR BLIND SOURCE SEPARATION WITH LESS SENSORS THAN SOURCES Yuanqing Li, Andrzej Cichocki and Shun-ichi Amari Laboratory for Advanced Brain Signal Processing Laboratory for Mathematical

More information

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H.

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H. Nonrigid Surface Modelling and Fast Recovery Zhu Jianke Supervisor: Prof. Michael R. Lyu Committee: Prof. Leo J. Jia and Prof. K. H. Wong Department of Computer Science and Engineering May 11, 2007 1 2

More information

Deblurring Face Images with Exemplars

Deblurring Face Images with Exemplars Deblurring Face Images with Exemplars Jinshan Pan 1, Zhe Hu 2, Zhixun Su 1, and Ming-Hsuan Yang 2 1 Dalian University of Technology 2 University of California at Merced Abstract. The human face is one

More information

A Study on Blur Kernel Estimation from Blurred and Noisy Image Pairs

A Study on Blur Kernel Estimation from Blurred and Noisy Image Pairs A Study on Blur Kernel Estimation from Blurred and Noisy Image Pairs Mushfiqur Rouf Department of Computer Science University of British Columbia nasarouf@cs.ubc.ca Abstract The course can be split in

More information

A Modified Spline Interpolation Method for Function Reconstruction from Its Zero-Crossings

A Modified Spline Interpolation Method for Function Reconstruction from Its Zero-Crossings Scientific Papers, University of Latvia, 2010. Vol. 756 Computer Science and Information Technologies 207 220 P. A Modified Spline Interpolation Method for Function Reconstruction from Its Zero-Crossings

More information

Diffusion Wavelets for Natural Image Analysis

Diffusion Wavelets for Natural Image Analysis Diffusion Wavelets for Natural Image Analysis Tyrus Berry December 16, 2011 Contents 1 Project Description 2 2 Introduction to Diffusion Wavelets 2 2.1 Diffusion Multiresolution............................

More information

Motion Deblurring With Graph Laplacian Regularization

Motion Deblurring With Graph Laplacian Regularization Motion Deblurring With Graph Laplacian Regularization Amin Kheradmand and Peyman Milanfar Department of Electrical Engineering University of California, Santa Cruz ABSTRACT In this paper, we develop a

More information

Computer Vision I - Filtering and Feature detection

Computer Vision I - Filtering and Feature detection Computer Vision I - Filtering and Feature detection Carsten Rother 30/10/2015 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Numerical Methods on the Image Processing Problems

Numerical Methods on the Image Processing Problems Numerical Methods on the Image Processing Problems Department of Mathematics and Statistics Mississippi State University December 13, 2006 Objective Develop efficient PDE (partial differential equations)

More information

Efficient MR Image Reconstruction for Compressed MR Imaging

Efficient MR Image Reconstruction for Compressed MR Imaging Efficient MR Image Reconstruction for Compressed MR Imaging Junzhou Huang, Shaoting Zhang, and Dimitris Metaxas Division of Computer and Information Sciences, Rutgers University, NJ, USA 08854 Abstract.

More information

Ping Tan. Simon Fraser University

Ping Tan. Simon Fraser University Ping Tan Simon Fraser University Photos vs. Videos (live photos) A good photo tells a story Stories are better told in videos Videos in the Mobile Era (mobile & share) More videos are captured by mobile

More information

Image Enhancement Techniques for Fingerprint Identification

Image Enhancement Techniques for Fingerprint Identification March 2013 1 Image Enhancement Techniques for Fingerprint Identification Pankaj Deshmukh, Siraj Pathan, Riyaz Pathan Abstract The aim of this paper is to propose a new method in fingerprint enhancement

More information

Collaborative Sparsity and Compressive MRI

Collaborative Sparsity and Compressive MRI Modeling and Computation Seminar February 14, 2013 Table of Contents 1 T2 Estimation 2 Undersampling in MRI 3 Compressed Sensing 4 Model-Based Approach 5 From L1 to L0 6 Spatially Adaptive Sparsity MRI

More information

Fundamentals of Digital Image Processing

Fundamentals of Digital Image Processing \L\.6 Gw.i Fundamentals of Digital Image Processing A Practical Approach with Examples in Matlab Chris Solomon School of Physical Sciences, University of Kent, Canterbury, UK Toby Breckon School of Engineering,

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION

IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION Chiruvella Suresh Assistant professor, Department of Electronics & Communication

More information

Handling Noise in Single Image Deblurring using Directional Filters

Handling Noise in Single Image Deblurring using Directional Filters 2013 IEEE Conference on Computer Vision and Pattern Recognition Handling Noise in Single Image Deblurring using Directional Filters Lin Zhong 1 Sunghyun Cho 2 Dimitris Metaxas 1 Sylvain Paris 2 Jue Wang

More information

Image reconstruction based on back propagation learning in Compressed Sensing theory

Image reconstruction based on back propagation learning in Compressed Sensing theory Image reconstruction based on back propagation learning in Compressed Sensing theory Gaoang Wang Project for ECE 539 Fall 2013 Abstract Over the past few years, a new framework known as compressive sampling

More information

A Novel Method for Enhanced Needle Localization Using Ultrasound-Guidance

A Novel Method for Enhanced Needle Localization Using Ultrasound-Guidance A Novel Method for Enhanced Needle Localization Using Ultrasound-Guidance Bin Dong 1,EricSavitsky 2, and Stanley Osher 3 1 Department of Mathematics, University of California, San Diego, 9500 Gilman Drive,

More information

Coding and Modulation in Cameras

Coding and Modulation in Cameras Mitsubishi Electric Research Laboratories Raskar 2007 Coding and Modulation in Cameras Ramesh Raskar with Ashok Veeraraghavan, Amit Agrawal, Jack Tumblin, Ankit Mohan Mitsubishi Electric Research Labs

More information

ADAPTIVE DEBLURRING OF SURVEILLANCE VIDEO SEQUENCES THAT DETERIORATE OVER TIME. Konstantinos Vougioukas, Bastiaan J. Boom and Robert B.

ADAPTIVE DEBLURRING OF SURVEILLANCE VIDEO SEQUENCES THAT DETERIORATE OVER TIME. Konstantinos Vougioukas, Bastiaan J. Boom and Robert B. ADAPTIVE DELURRING OF SURVEILLANCE VIDEO SEQUENCES THAT DETERIORATE OVER TIME Konstantinos Vougioukas, astiaan J. oom and Robert. Fisher University of Edinburgh School of Informatics, 10 Crichton St, Edinburgh,

More information

Recovery of Piecewise Smooth Images from Few Fourier Samples

Recovery of Piecewise Smooth Images from Few Fourier Samples Recovery of Piecewise Smooth Images from Few Fourier Samples Greg Ongie*, Mathews Jacob Computational Biomedical Imaging Group (CBIG) University of Iowa SampTA 2015 Washington, D.C. 1. Introduction 2.

More information

CONTENT ADAPTIVE SCREEN IMAGE SCALING

CONTENT ADAPTIVE SCREEN IMAGE SCALING CONTENT ADAPTIVE SCREEN IMAGE SCALING Yao Zhai (*), Qifei Wang, Yan Lu, Shipeng Li University of Science and Technology of China, Hefei, Anhui, 37, China Microsoft Research, Beijing, 8, China ABSTRACT

More information

Super Resolution Using Graph-cut

Super Resolution Using Graph-cut Super Resolution Using Graph-cut Uma Mudenagudi, Ram Singla, Prem Kalra, and Subhashis Banerjee Department of Computer Science and Engineering Indian Institute of Technology Delhi Hauz Khas, New Delhi,

More information

Image Segmentation Via Iterative Geodesic Averaging

Image Segmentation Via Iterative Geodesic Averaging Image Segmentation Via Iterative Geodesic Averaging Asmaa Hosni, Michael Bleyer and Margrit Gelautz Institute for Software Technology and Interactive Systems, Vienna University of Technology Favoritenstr.

More information

Compressed Sensing for Electron Tomography

Compressed Sensing for Electron Tomography University of Maryland, College Park Department of Mathematics February 10, 2015 1/33 Outline I Introduction 1 Introduction 2 3 4 2/33 1 Introduction 2 3 4 3/33 Tomography Introduction Tomography - Producing

More information

A Wavelet Tour of Signal Processing The Sparse Way

A Wavelet Tour of Signal Processing The Sparse Way A Wavelet Tour of Signal Processing The Sparse Way Stephane Mallat with contributions from Gabriel Peyre AMSTERDAM BOSTON HEIDELBERG LONDON NEWYORK OXFORD PARIS SAN DIEGO SAN FRANCISCO SINGAPORE SYDNEY»TOKYO

More information

Recent Advances inelectrical & Electronic Engineering

Recent Advances inelectrical & Electronic Engineering 4 Send Orders for Reprints to reprints@benthamscience.ae Recent Advances in Electrical & Electronic Engineering, 017, 10, 4-47 RESEARCH ARTICLE ISSN: 35-0965 eissn: 35-0973 Image Inpainting Method Based

More information

Image recovery via geometrically structured approximation

Image recovery via geometrically structured approximation Image recovery via geometrically structured approximation Hui Ji a,, Yu Luo b,a, Zuowei Shen a a Department of Mathematics, National University of Singapore, Singapore 7543 b School of Computer and Engineering,

More information

Suppression of Missing Data Artifacts. for Deblurring Images Corrupted by

Suppression of Missing Data Artifacts. for Deblurring Images Corrupted by Suppression of Missing Data Artifacts for Deblurring Images Corrupted by Random Valued Noise Nam-Yong Lee Department of Applied Mathematics Institute of Basic Sciences, Inje University Gimhae, Gyeongnam

More information

Sparse Component Analysis (SCA) in Random-valued and Salt and Pepper Noise Removal

Sparse Component Analysis (SCA) in Random-valued and Salt and Pepper Noise Removal Sparse Component Analysis (SCA) in Random-valued and Salt and Pepper Noise Removal Hadi. Zayyani, Seyyedmajid. Valliollahzadeh Sharif University of Technology zayyani000@yahoo.com, valliollahzadeh@yahoo.com

More information

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400

More information

Image denoising using curvelet transform: an approach for edge preservation

Image denoising using curvelet transform: an approach for edge preservation Journal of Scientific & Industrial Research Vol. 3469, January 00, pp. 34-38 J SCI IN RES VOL 69 JANUARY 00 Image denoising using curvelet transform: an approach for edge preservation Anil A Patil * and

More information

Learning Data Terms for Non-blind Deblurring Supplemental Material

Learning Data Terms for Non-blind Deblurring Supplemental Material Learning Data Terms for Non-blind Deblurring Supplemental Material Jiangxin Dong 1, Jinshan Pan 2, Deqing Sun 3, Zhixun Su 1,4, and Ming-Hsuan Yang 5 1 Dalian University of Technology dongjxjx@gmail.com,

More information

Computer Vision I. Announcements. Fourier Tansform. Efficient Implementation. Edge and Corner Detection. CSE252A Lecture 13.

Computer Vision I. Announcements. Fourier Tansform. Efficient Implementation. Edge and Corner Detection. CSE252A Lecture 13. Announcements Edge and Corner Detection HW3 assigned CSE252A Lecture 13 Efficient Implementation Both, the Box filter and the Gaussian filter are separable: First convolve each row of input image I with

More information

Image Deblurring Using Adaptive Sparse Domain Selection and Adaptive Regularization

Image Deblurring Using Adaptive Sparse Domain Selection and Adaptive Regularization Volume 3, No. 3, May-June 2012 International Journal of Advanced Research in Computer Science RESEARCH PAPER Available Online at www.ijarcs.info ISSN No. 0976-5697 Image Deblurring Using Adaptive Sparse

More information

x' = c 1 x + c 2 y + c 3 xy + c 4 y' = c 5 x + c 6 y + c 7 xy + c 8

x' = c 1 x + c 2 y + c 3 xy + c 4 y' = c 5 x + c 6 y + c 7 xy + c 8 1. Explain about gray level interpolation. The distortion correction equations yield non integer values for x' and y'. Because the distorted image g is digital, its pixel values are defined only at integer

More information

Photographic stitching with optimized object and color matching based on image derivatives

Photographic stitching with optimized object and color matching based on image derivatives Photographic stitching with optimized object and color matching based on image derivatives Simon T.Y. Suen, Edmund Y. Lam, and Kenneth K.Y. Wong Department of Electrical and Electronic Engineering, The

More information

An R Package flare for High Dimensional Linear Regression and Precision Matrix Estimation

An R Package flare for High Dimensional Linear Regression and Precision Matrix Estimation An R Package flare for High Dimensional Linear Regression and Precision Matrix Estimation Xingguo Li Tuo Zhao Xiaoming Yuan Han Liu Abstract This paper describes an R package named flare, which implements

More information

1510 IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 16, NO. 6, OCTOBER Efficient Patch-Wise Non-Uniform Deblurring for a Single Image

1510 IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 16, NO. 6, OCTOBER Efficient Patch-Wise Non-Uniform Deblurring for a Single Image 1510 IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 16, NO. 6, OCTOBER 2014 Efficient Patch-Wise Non-Uniform Deblurring for a Single Image Xin Yu, Feng Xu, Shunli Zhang, and Li Zhang Abstract In this paper, we

More information

Limitations of Matrix Completion via Trace Norm Minimization

Limitations of Matrix Completion via Trace Norm Minimization Limitations of Matrix Completion via Trace Norm Minimization ABSTRACT Xiaoxiao Shi Computer Science Department University of Illinois at Chicago xiaoxiao@cs.uic.edu In recent years, compressive sensing

More information

REGULARIZATION PARAMETER TRIMMING FOR ITERATIVE IMAGE RECONSTRUCTION

REGULARIZATION PARAMETER TRIMMING FOR ITERATIVE IMAGE RECONSTRUCTION REGULARIZATION PARAMETER TRIMMING FOR ITERATIVE IMAGE RECONSTRUCTION Haoyi Liang and Daniel S. Weller University of Virginia, Department of ECE, Charlottesville, VA, 2294, USA ABSTRACT Conventional automatic

More information

Light Field Occlusion Removal

Light Field Occlusion Removal Light Field Occlusion Removal Shannon Kao Stanford University kaos@stanford.edu Figure 1: Occlusion removal pipeline. The input image (left) is part of a focal stack representing a light field. Each image

More information

Bilevel Sparse Coding

Bilevel Sparse Coding Adobe Research 345 Park Ave, San Jose, CA Mar 15, 2013 Outline 1 2 The learning model The learning algorithm 3 4 Sparse Modeling Many types of sensory data, e.g., images and audio, are in high-dimensional

More information

The Benefit of Tree Sparsity in Accelerated MRI

The Benefit of Tree Sparsity in Accelerated MRI The Benefit of Tree Sparsity in Accelerated MRI Chen Chen and Junzhou Huang Department of Computer Science and Engineering, The University of Texas at Arlington, TX, USA 76019 Abstract. The wavelet coefficients

More information

Adaptive Multiple-Frame Image Super- Resolution Based on U-Curve

Adaptive Multiple-Frame Image Super- Resolution Based on U-Curve Adaptive Multiple-Frame Image Super- Resolution Based on U-Curve IEEE Transaction on Image Processing, Vol. 19, No. 12, 2010 Qiangqiang Yuan, Liangpei Zhang, Huanfeng Shen, and Pingxiang Li Presented by

More information

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation , pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,

More information

Edge-Preserving MRI Super Resolution Using a High Frequency Regularization Technique

Edge-Preserving MRI Super Resolution Using a High Frequency Regularization Technique Edge-Preserving MRI Super Resolution Using a High Frequency Regularization Technique Kaveh Ahmadi Department of EECS University of Toledo, Toledo, Ohio, USA 43606 Email: Kaveh.ahmadi@utoledo.edu Ezzatollah

More information

Storage Efficient NL-Means Burst Denoising for Programmable Cameras

Storage Efficient NL-Means Burst Denoising for Programmable Cameras Storage Efficient NL-Means Burst Denoising for Programmable Cameras Brendan Duncan Stanford University brendand@stanford.edu Miroslav Kukla Stanford University mkukla@stanford.edu Abstract An effective

More information

Optimal Denoising of Natural Images and their Multiscale Geometry and Density

Optimal Denoising of Natural Images and their Multiscale Geometry and Density Optimal Denoising of Natural Images and their Multiscale Geometry and Density Department of Computer Science and Applied Mathematics Weizmann Institute of Science, Israel. Joint work with Anat Levin (WIS),

More information

IN MANY applications, it is desirable that the acquisition

IN MANY applications, it is desirable that the acquisition 1288 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL 17, NO 10, OCTOBER 2007 A Robust and Computationally Efficient Simultaneous Super-Resolution Scheme for Image Sequences Marcelo

More information

Image Restoration using Accelerated Proximal Gradient method

Image Restoration using Accelerated Proximal Gradient method Image Restoration using Accelerated Proximal Gradient method Alluri.Samuyl Department of computer science, KIET Engineering College. D.Srinivas Asso.prof, Department of computer science, KIET Engineering

More information

CS 450 Numerical Analysis. Chapter 7: Interpolation

CS 450 Numerical Analysis. Chapter 7: Interpolation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information