Learning Data Terms for Non-blind Deblurring
Supplemental Material


Jiangxin Dong(1), Jinshan Pan(2), Deqing Sun(3), Zhixun Su(1,4), and Ming-Hsuan Yang(5)
(1) Dalian University of Technology: dongjxjx@gmail.com, zxsu@dlut.edu.cn
(2) Nanjing University of Science and Technology: sdluran@gmail.com
(3) NVIDIA: deqings@nvidia.com
(4) Guilin University of Electronic Technology
(5) University of California at Merced: mhyang@ucmerced.edu

Overview

In this supplemental material, we provide a detailed analysis of the proposed method in Section 1. The specific formulas for the derivatives in (14) of the manuscript are presented in Section 2. Section 3 demonstrates the effectiveness of the proposed method in improving the performance of existing deblurring approaches. Section 4 presents additional experimental results comparing the proposed algorithm against state-of-the-art methods.

1 Further Analysis of the Proposed Method

The flexibility of learning the shrinkage function. Compared with learning the penalty function, learning the corresponding shrinkage function is more flexible. The learned shrinkage functions (see Figure 4(b) of the manuscript) include both monotonic and non-monotonic functions; the latter cannot be obtained by learning penalty functions [7]. To further understand the learned shrinkage functions, we derive the corresponding penalty function by integrating the inverse of the learned shrinkage function, following [9]. However, some learned non-monotonic shrinkage functions (e.g., the 2nd column in Figure 4(b) of the manuscript) do not have inverse functions, so their associated penalty functions cannot be easily obtained. We therefore derive only a few penalty functions to facilitate understanding. For the learned shrinkage functions in Figure 1(a) and (c), the corresponding penalty functions are plotted in Figure 1(b) and (d), respectively.
The function shapes in Figure 1(b) and (d) resemble those of truncated l2-norm-based penalties, which are effective at reducing the influence of outliers. For outlier pixels, whose values are relatively large, the corresponding data-term penalty is close to a constant or is minimized; the image prior then plays the major role in the restoration, so most outliers are smoothed out. This property enables our method to handle outliers.
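The derivation above (recovering a penalty by integrating the inverse of the shrinkage function, following [9]) can be sketched numerically. Below is a minimal Python illustration using the soft-thresholding shrinkage, whose penalty is known in closed form to be lam*|x|; the grid sizes and the restriction to the invertible region x > 0 are our assumptions for the sketch, not part of the paper's implementation.

```python
import numpy as np

# Soft-thresholding shrinkage S(y) = sign(y) * max(|y| - lam, 0), whose
# penalty function is known in closed form: rho(x) = lam * |x|.
lam = 1.0
y = np.linspace(lam, 10.0, 2000)   # region where S is strictly monotone
s = np.maximum(y - lam, 0.0)       # shrinkage output for y >= lam

# Invert the shrinkage numerically (x -> y) by interpolation.
x = np.linspace(s[0], s[-1], 2000)
s_inv = np.interp(x, s, y)

# rho'(x) = S^{-1}(x) - x for a unit quadratic data weight [9];
# integrate with the trapezoidal rule to recover the penalty.
rho_prime = s_inv - x
rho = np.concatenate(
    [[0.0], np.cumsum(0.5 * (rho_prime[1:] + rho_prime[:-1]) * np.diff(x))]
)

# The recovered penalty matches lam * x up to the integration constant.
print(np.max(np.abs(rho - lam * (x - x[0]))))  # ~0
```

The same numerical recipe applies to any strictly monotone learned shrinkage function; non-monotonic ones fail exactly at the inversion step, which is the point made above.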

Fig. 1. The flexibility of learning the shrinkage function (solution). (a) and (c) are two learned shrinkage functions; (b) and (d) are the approximated penalty functions corresponding to (a) and (c), respectively.

Similar results can also be obtained from our analysis in Section 5 of the manuscript, which is conducted from the perspective of the shrinkage functions. However, Figure 1 shows that seemingly simple shrinkage functions correspond to complex non-linear penalty functions. If we directly learned the penalty function and the learned penalty function resembled Figure 1(b) or (d), we would need to solve a complicated optimization problem (e.g., (6) of the manuscript), which is difficult and time-consuming. In addition, the proposed method can learn non-monotonic shrinkage functions that cannot be derived from learned penalty functions. Thus, it is more flexible and efficient to learn the shrinkage functions directly.

Effects of learning filters for the data term. In the proposed method, we model the residue image in both the raw data space and the filtered data space to better capture the property of the noise distribution [6]. Figure 2(a) shows some learned filters. We quantitatively evaluate the effect of the learned filters on image restoration. For a fair comparison, we remove the filters from the data term (i.e., N_f = 0 in (4) of the manuscript). Figure 2(b)-(e) show that the method exploiting only the residue image in the raw data space (Ours-NF) generates deblurred results with severe ringing artifacts. In contrast, the method with the learned filters generates a clearer image with finer textures.
To further evaluate the effectiveness of learning linear filters, we compare the learned filters with the commonly used 1st- and 2nd-order derivative filters on 200 images with Gaussian noise. Table 1 shows that the method with our learned filters performs better.

Table 1. The effectiveness of learning linear filters

            1st     2nd     Ours
Avg. PSNR   26.07   25.95   26.47
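The residues entering this comparison can be sketched concretely: form b - k*l under circular boundary conditions and apply derivative filters to the residue. The sketch below uses hand-crafted 1st-order derivative filters and toy sizes as illustrative assumptions, not the paper's learned filters.

```python
import numpy as np

def cconv2(img, ker):
    """2-D circular convolution via the FFT (circular boundaries, as in [7])."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(ker, s=img.shape)))

rng = np.random.default_rng(0)
l = rng.random((64, 64))                   # stand-in latent image
k = np.zeros((64, 64)); k[:3, :3] = 1 / 9  # 3x3 averaging blur, zero-padded
b = cconv2(l, k)                           # noise-free blurred observation

# Residue in the raw data space and in the filtered data space
# (1st-order derivative filters as the simple hand-crafted baseline).
r = b - cconv2(l, k)
dx = np.zeros((64, 64)); dx[0, 0], dx[0, 1] = 1.0, -1.0
dy = np.zeros((64, 64)); dy[0, 0], dy[1, 0] = 1.0, -1.0
r_dx, r_dy = cconv2(r, dx), cconv2(r, dy)

# With the exact latent image and kernel, all residues vanish.
print(np.abs(r).max(), np.abs(r_dx).max(), np.abs(r_dy).max())  # ~0 each
```

In the learned model the fixed dx/dy above are replaced by trained filters, which is exactly the difference Table 1 quantifies.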

Fig. 2. Effects of learning filters for the data term. (a) Some learned filters (8 filters of size 3x3); (b) Blurred; (c) Ours-NF; (d) Ours; (e) Ground truth. The method with learned filters generates clearer images with finer details.

Running time. Table 5 of the manuscript shows the running time of the proposed method and some state-of-the-art methods. With a similar half-quadratic optimization scheme, CSF [7] needs two update steps in each iteration with the fixed l2-norm-based data term and the learned image prior, and TVL1 [11] requires three update steps with the fixed l1-norm-based data term and image prior. The proposed method has to perform four update steps (i.e., (6)-(9) of the manuscript) to learn the more flexible data term with the l1-norm-based image prior. Therefore, with the same number of iterations, the runtime of the proposed method is slightly longer than that of CSF and TVL1. However, our method runs faster than Cho et al. [1], which is based on the EM algorithm and the iteratively reweighted least squares method. Given the complexity of outlier handling, our algorithm is relatively fast among the outlier-handling methods and performs much better than all the other methods.

Robustness of the proposed method to inaccurate blur kernels. We evaluate the proposed algorithm on real-world images with unknown noise, where the blur kernels are estimated by [2, 5] (Figure 3 of the manuscript). The proposed algorithm generates clearer images with fewer artifacts than the state-of-the-art methods. To further examine the robustness of our algorithm to inaccurate kernels, we retrain a new model with a mix of ground truth (GT) and estimated kernels (obtained by the blind deblurring method [5]). The learned model is then evaluated on the 32 blurred images from [4] with impulse noise. We use kernel similarity (KS) to measure the accuracy of the kernels. We use GT kernels (KS = 1), GT kernels corrupted by Gaussian noise (KS > 0.9), and kernels estimated by the blind deblurring method [2] (KS < 0.9), respectively, to restore the blurred images. As shown in Table 2, our algorithm achieves higher PSNR/SSIM values than the state-of-the-art methods. As expected, less accurate kernels lead to degraded performance, but the proposed algorithm still performs favorably against the other methods.
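Kernel similarity is commonly computed as the maximum normalized cross-correlation between two blur kernels over all translations; the paper does not spell out its exact KS formula, so the implementation below is a sketch under that common assumption.

```python
import numpy as np

def kernel_similarity(k1, k2):
    """Maximum normalized cross-correlation of two blur kernels over all shifts."""
    h, w = k2.shape
    padded = np.pad(k1, ((h - 1, h - 1), (w - 1, w - 1)))
    best = 0.0
    for i in range(padded.shape[0] - h + 1):   # slide k2 over every shift
        for j in range(padded.shape[1] - w + 1):
            best = max(best, float((padded[i:i + h, j:j + w] * k2).sum()))
    return best / (np.linalg.norm(k1) * np.linalg.norm(k2))

delta = np.zeros((5, 5)); delta[2, 2] = 1.0
shifted = np.zeros((5, 5)); shifted[1, 3] = 1.0
print(kernel_similarity(delta, delta))    # 1.0 (identical kernels)
print(kernel_similarity(delta, shifted))  # 1.0 (invariant to translation)
```

The translation invariance matters because a blind estimate that is merely shifted relative to the GT kernel produces an equally valid deconvolution, so it should score KS = 1.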

Table 2. Robustness of the proposed method to inaccurate blur kernels (PSNR/SSIM)

            GT kernels     GT kernels with Gaussian noise   Kernels by [2]
TVL1 [11]   31.13/0.8506   21.73/0.6308                     21.11/0.5279
Cho [1]     30.95/0.8485   22.60/0.6236                     20.73/0.5114
Ours        31.83/0.8602   23.23/0.6410                     21.23/0.5300

Comparisons with baselines that use explicit-form solutions. In Tables 1-4 of the manuscript, we compare the proposed algorithm with two baseline methods (TVL2 and TVL1), which use the same image prior combined with l2- and l1-norm-based data terms. The proposed algorithm achieves consistently higher PSNR/SSIM values than the baseline methods. We further compare our learned data term with l2/l1-norm-based data terms under the proposed framework, i.e., we learn the other parameters while fixing the shrinkage functions to the analytical solutions of the l2/l1-norm-based data terms. Table 3 shows the PSNR/SSIM results. Learning the data term performs favorably against the methods with fixed l2/l1-norm-based data terms.

Table 3. Comparison of our learned data term with l2/l1-norm-based data terms under the proposed framework (PSNR/SSIM)

                          Gaussian noise (5%)   Impulse noise (5%)
l2-norm-based data term   23.57/0.6277          21.80/0.4308
l1-norm-based data term   23.39/0.5926          26.62/0.7042
Ours                      23.73/0.6576          28.47/0.8828

Limitations. The proposed algorithm is less effective for blurred images whose main structures are severely saturated, as shown in Figure 3.

Fig. 3. One failure example. (a) Blurred image; (b) Our result.

2 Derivations

All the following derivations can be computed efficiently, and the Matlab code will be made available to the public. All convolution-related computations use the

same circular boundary conditions to reduce the adverse effect of the image boundary, as in [7]. Since the computation only requires matrix calculus and the principal approach is similar to [7], we omit the redundant derivation and directly present the derivatives of $l_t$ needed in (14) of the manuscript w.r.t. the specific model parameters.

First, since the parameters $\lambda_t$ and $\beta_t$ are supposed to be positive and directly learning them may lead to negative values, we instead use $\lambda_t = \exp(\tilde{\lambda}_t)$ and $\beta_t = \exp(\tilde{\beta}_t)$. Then $\tilde{\lambda}_t$ and $\tilde{\beta}_t$ can be learned via their partial derivatives

$$
\frac{\partial l_t}{\partial \tilde{\lambda}_t} = \lambda_t \left[ (F_h \hat{c}_t)^\top (u_{th} - F_h l_t^B) + (F_v \hat{c}_t)^\top (u_{tv} - F_v l_t^B) \right],
\qquad
\frac{\partial l_t}{\partial \tilde{\beta}_t} = \beta_t \sum_{i=1}^{N_f} (F_{ti} K \hat{c}_t)^\top \left( F_{ti} b - v_{ti} - F_{ti} K l_t^B \right),
\tag{1}
$$

where $\hat{c}_t = \zeta_t^{-\top} \frac{\partial L(l_t, l_{gt})}{\partial l_t}$ and $l_t^B$ is the uncropped deblurred image.

The derivative of $l_t$ w.r.t. each weight $\pi_{t0,j}$ and $\pi_{ti,j}$ $(i = 1, \ldots, N_f)$ can be computed as

$$
\frac{\partial l_t}{\partial \pi_{t0,j}} = (K \hat{c}_t)^\top \frac{\partial \phi_0(b - K l_{t-1}, \pi_{t0})}{\partial \pi_{t0,j}},
\qquad
\frac{\partial l_t}{\partial \pi_{ti,j}} = \beta (F_{ti} K \hat{c}_t)^\top \frac{\partial \phi_i(F_{ti}(b - K l_{t-1}), \pi_{ti})}{\partial \pi_{ti,j}},
\tag{2}
$$

where $l_{t-1}$ denotes the processed image with the boundary handling operation in iteration $t-1$. To constrain the solution space, we follow [7] and set $f_{ti} = \mathcal{B} \tilde{f}_{ti}$, where $\mathcal{B}$ is the filter basis. The derivative of $l_t$ w.r.t. each filter coefficient vector $\tilde{f}_{ti}$ $(i = 1, \ldots, N_f)$ is

$$
\frac{\partial l_t}{\partial \tilde{f}_{ti}} = \beta_t \Big[ [K \hat{c}_t]_C^\top \big( F_{ti}(b - K l_t^B) - v_{ti} \big) + \big( [b - K l_t^B]_C - [b - K l_{t-1}]_C \, \phi_i'\big(F_{ti}(b - K l_{t-1}), \pi_{ti}\big) \big)^\top (F_{ti} K \hat{c}_t) \Big] \mathcal{B},
\tag{3}
$$

where $[\cdot]_C$ denotes the matrix form whose multiplication implements the convolution w.r.t. the filter coefficients.

In addition to learning the parameters stage by stage, the parameters of all $T$ stages can also be trained jointly by minimizing

$$
J(\Omega_{1,\ldots,T}) = \sum_{s=1}^{S} L\big(l_T^{\{s\}}(\Omega_T),\, l_{gt}^{\{s\}}\big),
\tag{4}
$$

where only the loss of the final inference $l_T$ is included. We need to compute not only the gradient of $l_T$ w.r.t. $\Omega_T$, which can be obtained by (14) of the
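The exp reparameterization above can be checked numerically: for $\lambda_t = \exp(\tilde{\lambda}_t)$, the chain rule gives $\partial/\partial\tilde{\lambda}_t = \lambda_t \cdot \partial/\partial\lambda_t$. A small finite-difference sketch with a toy loss (our choice, purely illustrative):

```python
import numpy as np

def loss(lam):
    # toy loss in the positive parameter; stands in for the training objective
    return (lam - 2.0) ** 2

t = 0.3                  # unconstrained parameter; lam = exp(t) is always > 0
lam = np.exp(t)

grad_lam = 2.0 * (lam - 2.0)    # d loss / d lam
grad_t_chain = lam * grad_lam   # chain rule: d loss / d t = lam * (d loss / d lam)

eps = 1e-6                      # central finite difference in t
grad_t_fd = (loss(np.exp(t + eps)) - loss(np.exp(t - eps))) / (2 * eps)

print(abs(grad_t_chain - grad_t_fd))  # ~0
```

The same factor $\lambda_t$ (respectively $\beta_t$) is exactly the leading multiplier in the derivatives (1).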

manuscript, but also the gradient of $l_T$ w.r.t. $\Omega_t$ $(t = 1, \ldots, T-1)$. We take $t = T-1$ as an example. The gradient of $l_T$ w.r.t. $\Omega_{T-1}$ is given by

$$
\frac{\partial L(l_T, l_{gt})}{\partial \Omega_{T-1}} = \frac{\partial L(l_T, l_{gt})}{\partial l_T} \frac{\partial l_T}{\partial \Omega_{T-1}} = \hat{c}_{T-1}^\top \left[ \frac{\partial \xi_{T-1}}{\partial \Omega_{T-1}} - \frac{\partial \zeta_{T-1}}{\partial \Omega_{T-1}} l_{T-1}^B \right],
\tag{5}
$$

where
$$
\hat{c}_{T-1} = \zeta_{T-1}^{-\top} P_E^\top \Big( K^\top \phi_0'(b - K l_{T-1}) K \hat{c}_T + \beta \sum_{i=1}^{N_f} K^\top F_{Ti}^\top \phi_i'\big(F_{Ti}(b - K l_{T-1})\big) F_{Ti} K \hat{c}_T \Big).
$$

We note that (5) has a similar form to (14) of the manuscript except for the concrete value of $\hat{c}$. The gradient of $l_T$ w.r.t. the parameters of the earlier stages can be computed in the same manner. However, due to the non-convexity and complexity of the objective function, joint training initialized with the parameters obtained from the greedy training does not considerably improve the performance (at most 0.1 dB improvement).

3 Extensions of the Proposed Method

To understand the role of the data term, we have used a standard total variation prior for the latent image. It is straightforward to combine the proposed data term with different image priors, and the discriminative learning algorithm for a general image prior family is analogous to that in the manuscript for the total variation prior. In the case of EPLL [12], we modify (8) of the manuscript and obtain

$$
u_i = \arg\min_{u_i} \frac{\gamma}{2} \| Q_i l - u_i \|_2^2 - \log p(u_i),
\tag{6}
$$

where $Q_i$ extracts the $i$-th patch from the image and $p(u_i)$ models the corresponding image prior of the $i$-th patch, as in [12]. The optimization (9) in the manuscript is then replaced by

$$
l = \arg\min_l \frac{\tau}{2} \| b - K l - z \|_2^2 + \frac{\beta}{2} \sum_{i=1}^{N_f} \| F_i (b - K l) - v_i \|_2^2 + \frac{\lambda \gamma}{2} \sum_i \| Q_i l - u_i \|_2^2 = A_E^{-1} B_E,
\tag{7}
$$

where
$$
A_E = K^\top K + \frac{\beta}{\tau} \sum_{i=1}^{N_f} K^\top F_i^\top F_i K + \frac{\lambda \gamma}{\tau} \sum_i Q_i^\top Q_i,
\qquad
B_E = K^\top (b - z) + \frac{\beta}{\tau} \sum_{i=1}^{N_f} K^\top F_i^\top (F_i b - v_i) + \frac{\lambda \gamma}{\tau} \sum_i Q_i^\top u_i.
$$

Figure 4 shows that the original EPLL method is less effective for images with impulse noise, since it is combined with an l2-norm-based data term. In contrast, the method under the proposed framework effectively removes the impulse noise and generates clearer images, as shown in Figure 4(c). Furthermore, the expressive EPLL prior recovers fine details, such as the koala's fur, better than the TV prior. The results in Figure 4 demonstrate that the method under the proposed framework is able to handle blurred images with outliers.
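The l-update in (7) is a quadratic problem with the closed-form solution $l = A_E^{-1} B_E$. The sketch below is a small dense analogue (a random stand-in for $K$, the filter terms dropped, and $Q_i = I$, all our simplifying assumptions) verifying that the gradient of the objective vanishes at the solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
K = rng.normal(size=(n, n)) / np.sqrt(n)  # dense stand-in for the blur operator
b, z, u = rng.normal(size=(3, n))         # observation, auxiliary, prior variables
tau, lam_gamma = 1.0, 0.5                 # tau and the product lambda * gamma

# Simplified A_E and B_E with the filter terms dropped and Q_i = I:
A_E = K.T @ K + (lam_gamma / tau) * np.eye(n)
B_E = K.T @ (b - z) + (lam_gamma / tau) * u
l = np.linalg.solve(A_E, B_E)

# Gradient of  tau/2 ||b - K l - z||^2 + lam_gamma/2 ||l - u||^2  at the solution:
grad = tau * K.T @ (K @ l - (b - z)) + lam_gamma * (l - u)
print(np.abs(grad).max())  # ~0
```

With an actual blur operator and circular boundaries, $A_E$ is not formed densely; the same normal equations are typically solved in the Fourier domain or by conjugate gradients.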

Fig. 4. The effectiveness of the proposed method in improving EPLL for images with significant impulse noise. (a) Blurred; (b) EPLL [12]; (c) EPLL+Ours; (d) Ours; (e) Ground truth. Using the more expressive EPLL prior recovers fine details better than the TV prior.

4 More Experimental Results

In this section, we provide more visual comparisons with the state-of-the-art deblurring methods.

Fig. 5. The proposed method performs comparably with the state-of-the-art methods on images with Gaussian noise. Each row compares the blurred input, TVL2 [3], EPLL [12], MLP [8], CSF [7], TVL1 [11], Cho [1], and Ours; the blurred images are corrupted with 1% and 5% Gaussian noise, respectively.

Fig. 6. The effectiveness of the proposed method on images with impulse noise. (a) Blurred; (b) TVL2 [3]; (c) EPLL [12]; (d) CSF [7]; (e) TVL1 [11]; (f) Cho [1]; (g) Ours; (h) Ground truth. The blurred image (a) is corrupted with 5% impulse noise. The proposed method generates much clearer images with finer details, such as the stripes of the zebra.

Fig. 7. The effectiveness of the proposed method on images with impulse noise. (a) Blurred; (b) TVL2 [3]; (c) EPLL [12]; (d) CSF [7]; (e) TVL1 [11]; (f) Cho [1]; (g) Ours; (h) Ground truth. The blurred image (a) is corrupted with 3% impulse noise. The proposed method generates much clearer images with finer details, such as the bear's fur.

Fig. 8. The effectiveness of the proposed method on images with impulse noise. (a) Blurred; (b) TVL2 [3]; (c) EPLL [12]; (d) CSF [7]; (e) TVL1 [11]; (f) Cho [1]; (g) Ours; (h) Ground truth. The blurred image (a) is corrupted with 1% impulse noise. The proposed method generates much clearer images with finer details, such as the chicks' feathers and eyes.

Fig. 9. The effectiveness of the proposed method on images with impulse noise. (a) Blurred; (b) TVL2 [3]; (c) EPLL [12]; (d) CSF [7]; (e) TVL1 [11]; (f) Cho [1]; (g) Ours; (h) Ground truth. The blurred image (a) is corrupted with 4% impulse noise. The proposed method generates much clearer images with finer details, such as the texture of the turtle.

Fig. 10. The effectiveness of the proposed method on images with impulse noise. (a) Blurred; (b) TVL2 [3]; (c) EPLL [12]; (d) CSF [7]; (e) TVL1 [11]; (f) Cho [1]; (g) Ours; (h) Ground truth. The blurred image (a) is corrupted with 2% impulse noise. The proposed method generates much clearer images with finer details, such as the characters on the clock.

Fig. 11. The effectiveness of the proposed method on images with impulse noise. (a) Blurred; (b) TVL2 [3]; (c) EPLL [12]; (d) CSF [7]; (e) TVL1 [11]; (f) Cho [1]; (g) Ours; (h) Ground truth. The blurred image (a) is corrupted with 2% impulse noise. The proposed method generates much clearer images with finer details.

Fig. 12. The effectiveness of the proposed method on images with mixed noise. (a) Blurred; (b) TVL2 [3]; (c) EPLL [12]; (d) MLP [8]; (e) CSF [7]; (f) TVL1 [11]; (g) Cho [1]; (h) Ours. The blurred image (a) is corrupted with 2% Gaussian noise and 2% impulse noise. The proposed method performs favorably against the state-of-the-art methods.

Fig. 13. The effectiveness of the proposed method on images with saturated pixels. (a) Blurred; (b) TVL1 [11]; (c) Whyte [10]; (d) Cho [1]; (e) Ours. The proposed method performs favorably against the state-of-the-art methods.

Fig. 14. The effectiveness of the proposed method on real captured images with unknown noise. (a) Blurred; (b) Whyte [10]; (c) Cho [1]; (d) Ours. The proposed method performs favorably against the state-of-the-art methods.

Fig. 15. The effectiveness of the proposed method on real captured images with unknown noise. Top row: (a) Blurred; (b) TVL1 [11]; (c) Cho [1]; (d) Ours. Bottom row: (e) Blurred; (f) CSF [7]; (g) Cho [1]; (h) Ours. The proposed method performs favorably against the state-of-the-art methods.

References

1. Cho, S., Wang, J., Lee, S.: Handling outliers in non-blind image deconvolution. In: ICCV, pp. 495-502 (2011)
2. Dong, J., Pan, J., Su, Z., Yang, M.H.: Blind image deblurring with outlier handling. In: ICCV (2017)
3. Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-Laplacian priors. In: NIPS, pp. 1033-1041 (2009)
4. Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Understanding and evaluating blind deconvolution algorithms. In: CVPR, pp. 1964-1971 (2009)
5. Pan, J., Lin, Z., Su, Z., Yang, M.H.: Robust kernel estimation with outliers handling for image deblurring. In: CVPR, pp. 2800-2808 (2016)
6. Schmidt, U., Gao, Q., Roth, S.: A generative perspective on MRFs in low-level vision. In: CVPR, pp. 1751-1758 (2010)
7. Schmidt, U., Roth, S.: Shrinkage fields for effective image restoration. In: CVPR, pp. 2774-2781 (2014)
8. Schuler, C.J., Burger, H.C., Harmeling, S., Schölkopf, B.: A machine learning approach for non-blind image deconvolution. In: CVPR, pp. 1067-1074 (2013)
9. Selesnick, I.: Penalty and shrinkage functions for sparse signal processing. Connexions 11 (2012)
10. Whyte, O., Sivic, J., Zisserman, A.: Deblurring shaken and partially saturated images. IJCV 110(2), 185-201 (2014)
11. Xu, L., Jia, J.: Two-phase kernel estimation for robust motion deblurring. In: ECCV, pp. 157-170 (2010)
12. Zoran, D., Weiss, Y.: From learning models of natural image patches to whole image restoration. In: ICCV, pp. 479-486 (2011)