Sparse Representation and Low Rank Methods for Image Restoration and Classification. Lei Zhang
1 Sparse Representation and Low Rank Methods for Image Restoration and Classification Lei Zhang Dept. of Computing The Hong Kong Polytechnic University
2 My recent research focuses: sparse representation, dictionary learning and low rank (image restoration; collaborative representation based pattern classification); image quality assessment (full-reference and no-reference IQA models); visual tracking (fast and robust trackers); image segmentation evaluation; biometrics (face, finger-knuckle-print, palmprint).
3 A linear system $y = \Phi\alpha$: a dense solution vs. a sparse solution.
4 Sparse solutions 4
5 How to solve the sparse coding problem? Greedy search approaches for L0-minimization: Orthogonal Matching Pursuit, Least Angle Regression. Convex optimization methods for L1-minimization: Interior Point, Gradient Projection, Proximal Gradient Descent (iterative soft-thresholding), Augmented Lagrangian Methods, Alternating Direction Method of Multipliers. Non-convex Lp-minimization: W. Zuo, D. Meng, L. Zhang, X. Feng and D. Zhang, A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding, in ICCV 2013.
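As a concrete illustration of the proximal gradient descent (iterative soft-thresholding, ISTA) solver listed above, here is a minimal NumPy sketch; the step size, iteration count and regularization weight are illustrative assumptions, not tuned values:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm: shrink each entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(Phi, y, lam, n_iter=2000):
    # Minimize ||y - Phi a||_2^2 + lam * ||a||_1 by iterative soft-thresholding.
    L = 2.0 * np.linalg.norm(Phi, 2) ** 2   # Lipschitz constant of the gradient
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Phi.T @ (Phi @ a - y)  # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

FISTA adds a momentum step on top of the same iteration, and generalized shrinkage (as in the GISA paper above) replaces the soft-thresholding operator to handle non-convex Lp penalties.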
6 Example applications Denoising 6
7 Example applications Deblurring 7
8 Example applications Superresolution 8
9 Example applications Medical image reconstruction (e.g., CT) TV-based method Our method Qiong Xu, Hengyong Yu, Xuanqin Mou, Lei Zhang, Jiang Hsieh, and Ge Wang, Low-dose X-ray CT Reconstruction via Dictionary Learning, IEEE Transactions on Medical Imaging, vol. 31, pp ,
10 Example applications Inpainting 10
11 Example applications Morphological component analysis (cartoon-texture decomposition) = + J. Bobin, J.-L. Starck J. Fadili, Y. Moudden and D.L Donoho, "Morphological Component Analysis: an adaptive thresholding strategy", IEEE Transactions on Image Processing, Vol 16, No 11, pp ,
12 Why sparse: neuroscience perspective. Observations on the primary visual cortex: the monkey experiment by Hubel and Wiesel, 1968. Responses of a simple cell in the monkey's right striate cortex. David Hubel and Torsten Wiesel, Nobel Prize winners.
13 Why sparse: neuroscience perspective. Olshausen and Field's sparse coding, 1996. The basis functions can be updated by gradient descent. Resulting basis functions, courtesy of Olshausen and Field.
14 Why sparse: probabilistic Bayes perspective. Signal recovery in a Bayesian viewpoint: $\hat{x} = \arg\max_x P(x|y) = \arg\max_x P(y|x)P(x)$ (likelihood times prior). Represent x as a linear combination of bases (dictionary atoms), $x = \Phi\alpha$, and assume that the representation coefficients are i.i.d. and follow a prior distribution $P(\alpha) \propto \exp(-\lambda \sum_i |\alpha_i|^p)$.
15 Why sparse: probabilistic Bayes perspective. The maximum a posteriori (MAP) solution: $\hat{\alpha}_{MAP} = \arg\max_\alpha p(\alpha|y) = \arg\max_\alpha \{\log p(y|\alpha) + \log p(\alpha)\} = \arg\min_\alpha \|y - \Phi\alpha\|_2^2 + \lambda \sum_i |\alpha_i|^p$. If p=0, it is the L0-norm sparse coding problem. If p=1, it becomes the convex L1-norm sparse coding. If 0&lt;p&lt;1, it is non-convex Lp-norm minimization.
16 Why sparse: signal processing perspective. x is called K-sparse if it is a linear combination of only K basis vectors: $x = \sum_{i=1}^{K} \alpha_i \psi_i$. If K&lt;&lt;N, we say x is compressible. Measurement of x: $y = \Phi x$.
17 Why sparse: signal processing perspective. Reconstruction: if x is K-sparse, we can reconstruct x from y with M (M&lt;&lt;N) measurements: $\hat{\alpha} = \arg\min_\alpha \|\alpha\|_0$, s.t. $y = \Phi\Psi\alpha$. But the measurement matrix should satisfy the restricted isometry property (RIP) condition: for any vector v sharing the same K nonzero entries as $\alpha$, $(1-\delta)\|v\|_2^2 \le \|\Phi\Psi v\|_2^2 \le (1+\delta)\|v\|_2^2$ for some small $\delta > 0$.
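For the L0 problem above, Orthogonal Matching Pursuit builds the support greedily, one atom at a time. A self-contained sketch, where the matrix A stands in for the product of measurement matrix and basis (exact recovery requires RIP-type conditions, so this is an illustration rather than a guarantee):

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily pick k atoms of A to explain y.
    residual, support = y.copy(), []
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit of y on all atoms selected so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

The re-fit over the whole support at every step is what makes the method "orthogonal": the residual stays orthogonal to all selected atoms.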
18 Image reconstruction: the problem. Reconstruct x from its degraded measurement y, where y = Hx + v; H is the degradation matrix and v is Gaussian white noise.
19 Image reconstruction by sparse coding: the basic procedure. 1. Partition the degraded image into overlapped patches. 2. Denote by $\Phi$ the employed dictionary. For each patch, solve the following L1-norm sparse coding problem: $\hat{\alpha} = \arg\min_\alpha \|y - \Phi\alpha\|_2^2 + \lambda\|\alpha\|_1$. 3. Reconstruct each patch by $\hat{x} = \Phi\hat{\alpha}$. 4. Put the reconstructed patch back into the image; for pixels where patches overlap, average them. 5. In practice, the above procedure can be iterated for several rounds.
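Steps 1 and 4 of the procedure (patch extraction and overlap averaging) can be sketched as follows; the patch size and step are illustrative choices:

```python
import numpy as np

def extract_patches(img, p, step):
    # Step 1: collect overlapping p x p patches as columns of a matrix.
    H, W = img.shape
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, step)
                     for j in range(0, W - p + 1, step)], axis=1)

def aggregate_patches(patches, shape, p, step):
    # Step 4: put patches back and average the overlapping pixels.
    H, W = shape
    acc, cnt = np.zeros(shape), np.zeros(shape)
    idx = 0
    for i in range(0, H - p + 1, step):
        for j in range(0, W - p + 1, step):
            acc[i:i + p, j:j + p] += patches[:, idx].reshape(p, p)
            cnt[i:i + p, j:j + p] += 1
            idx += 1
    return acc / np.maximum(cnt, 1)
```

Between extraction and aggregation, each column would be replaced by its sparse-coding reconstruction $\Phi\hat{\alpha}$ (step 2 and 3); with unmodified patches the round trip returns the original image exactly.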
20 How does sparsity help? An illustrative example. You are looking for a girlfriend/boyfriend, i.e., you are reconstructing the desired signal. Your objective is that she/he is fair-rich-beautiful / tall-rich-handsome (白-富-美 / 高-富-帅), i.e., you want a clean and perfect reconstruction. However, the candidates are limited, i.e., the dictionary is small. Can you find your ideal girlfriend/boyfriend?
21 How does sparsity help? An illustrative example. Candidate A is tall, but he is not handsome. Candidate B is rich, but he is too fat. Candidate C is handsome, but he is poor. If you sparsely select one of them, none is ideal for you, i.e., a sparse representation vector such as [0, 1, 0]. How about a dense solution, (A+B+C)/3, i.e., a dense representation vector [1, 1, 1]/3? The reconstructed boyfriend is a compromise of tall-rich-handsome, and he is fat (i.e., has some noise) at the same time.
22 How does sparsity help? An illustrative example. So what's wrong? The dictionary is too small! If you can select your boyfriend/girlfriend from boys/girls all over the world (i.e., a large enough dictionary), there is a very high probability (nearly 1) that you will find him/her, i.e., a very sparse solution such as [0, ..., 1, ..., 0]. In summary, a sparse solution with an over-complete dictionary often works! Sparsity and redundancy are two sides of the same coin.
23 The dictionary. Usually, an over-complete dictionary is required for sparse representation. The dictionary can be formed by off-the-shelf bases such as DCT bases, wavelets, curvelets, etc. Learning dictionaries from natural images has shown very promising results in image reconstruction, and dictionary learning has become a hot topic in image processing and computer vision. M. Aharon, M. Elad, and A.M. Bruckstein, The K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, IEEE Trans. Signal Processing, vol. 54, no. 11, Nov. 2006.
24 Nonlocal self-similarity. In natural images we can usually find many patches similar to a given patch, possibly spatially far from it. This is called nonlocal self-similarity, and it has been widely and successfully used in image reconstruction.
25 Representative image restoration methods. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, TIP (BM3D). J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, Non-local sparse models for image restoration, ICCV (LSSC). J. Yang, J. Wright, T. Huang and Y. Ma, Image super-resolution via sparse representation, TIP (ScSR). D. Zoran and Y. Weiss, From learning models of natural image patches to whole image restoration, ICCV (EPLL). W. Dong, L. Zhang, G. Shi and X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization, TIP (ASDS). S. Wang, L. Zhang, Y. Liang and Q. Pan, Semi-Coupled Dictionary Learning with Applications to Image Super-resolution and Photo-Sketch Image Synthesis, CVPR (SCDL). W. Dong, L. Zhang, G. Shi, and X. Li, Nonlocally Centralized Sparse Representation for Image Restoration, TIP (NCSR).
26 NCSR (ICCV 11, TIP 13). A simple but very effective sparse representation model. It outperforms many state-of-the-art methods in image denoising, deblurring and super-resolution. W. Dong, L. Zhang and G. Shi, Centralized Sparse Representation for Image Restoration, in ICCV 2011. W. Dong, L. Zhang, G. Shi and X. Li, Nonlocally Centralized Sparse Representation for Image Restoration, IEEE Trans. on Image Processing, vol. 22, no. 4, April 2013.
27 NCSR: the idea. For the true signal x: $\alpha_x = \arg\min_\alpha \|\alpha\|_1$, s.t. $\|x - \Phi\alpha\|_2 \le \varepsilon$. For the degraded signal y: $\alpha_y = \arg\min_\alpha \|\alpha\|_1$, s.t. $\|y - H\Phi\alpha\|_2 \le \varepsilon$. The sparse coding noise (SCN): $\upsilon_\alpha = \alpha_y - \alpha_x$. To better reconstruct the signal, we need to reduce the SCN, because $\hat{x} - x \approx \Phi\alpha_y - \Phi\alpha_x = \Phi\upsilon_\alpha$.
28 NCSR: the objective function. The proposed objective: $\alpha_y = \arg\min_\alpha \|y - H\Phi\alpha\|_2^2 + \lambda\|\alpha - \hat{\alpha}_x\|_{\ell_p}$. Key idea: suppressing the SCN. How to estimate $\alpha_x$? The unbiased estimate is $\hat{\alpha}_x = E[\alpha_x]$, and the zero-mean property of the SCN makes $\hat{\alpha}_x = E[\alpha_x] = E[\alpha_y]$.
29 NCSR: the solution. The nonlocal estimate of $\mu_i$: $\mu_i = \sum_{j \in C_i} w_{i,j}\, \alpha_{i,j}$, with weights $w_{i,j} = \exp(-\|\hat{x}_i - \hat{x}_{i,j}\|_2^2 / h) / W$. The simplified objective function: $\alpha_y = \arg\min_\alpha \|y - H\Phi\alpha\|_2^2 + \lambda \sum_{i=1}^{N} \|\alpha_i - \mu_i\|_{\ell_p}$. The iterative solution: $\alpha^{(j+1)} = \arg\min_\alpha \|y - H\Phi\alpha\|_2^2 + \lambda \sum_{i=1}^{N} \|\alpha_i - \mu_i^{(j)}\|_{\ell_p}$.
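For the denoising case (H = I) with an orthonormal per-patch dictionary and p = 1, each iteration reduces to an elementwise shrinkage of the coefficients toward their nonlocal means: shift by μ, soft-threshold, shift back. A sketch of just this shrinkage step, under those simplifying assumptions (the full NCSR solver handles general H and learned dictionaries):

```python
import numpy as np

def ncsr_shrink(v, mu, tau):
    # Elementwise solution of  min_a (v - a)^2 + tau * |a - mu|:
    # substitute b = a - mu, soft-threshold b by tau/2, then shift back.
    d = v - mu
    return mu + np.sign(d) * np.maximum(np.abs(d) - tau / 2.0, 0.0)
```

Coefficients close to their nonlocal mean are snapped onto it, which is exactly how the model suppresses the sparse coding noise.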
30 NCSR: the parameters and dictionaries. The Lp-norm is set to the L1-norm, since the SCN is generally Laplacian distributed. The regularization parameter is adaptively determined based on the MAP estimation principle. Local PCA dictionaries are used, adaptively learned from the image: we cluster the image patches, and for each cluster a PCA dictionary is learned and used to code the patches within that cluster.
31 Denoising results. From left to right and top to bottom: original image, noisy image (σ=100), denoised images by SAPCA-BM3D (PSNR=25.20 dB; FSIM=0.8065), LSSC (PSNR=25.63 dB; FSIM=0.8017), EPLL (PSNR=25.44 dB), and NCSR (PSNR=25.65 dB; FSIM=0.8068).
32 Deblurring results. Blurred, FISTA (27.75 dB), BM3D (28.61 dB), NCSR (30.30 dB). Blurred, Fergus et al. [SIGGRAPH 06], NCSR. Close-up view.
33 Super-resolution results. Low resolution, TV (31.24 dB), ScSR (32.87 dB), NCSR (33.68 dB). Low resolution, TV (31.34 dB), ScSR (31.55 dB), NCSR (34.00 dB).
34 GHP (CVPR 13, TIP 14). Like noise, textures are fine-scale structures in images, and most denoising algorithms remove the textures while removing noise. Is it possible to preserve the texture structures, to some extent, during denoising? We made a good attempt in: W. Zuo, L. Zhang, C. Song, and D. Zhang, Texture Enhanced Image Denoising via Gradient Histogram Preservation, in CVPR 2013. W. Zuo, L. Zhang, C. Song, D. Zhang, and H. Gao, Gradient Histogram Estimation and Preservation for Texture Enhanced Image Denoising, in TIP 2014.
35 GHP. The key is to estimate the gradient histogram of the true image and preserve it in the denoised image: the model seeks the denoised image x that is close to y under a sparse regularizer, subject to the constraint that the histogram of the (transformed) gradients of x matches an estimated reference gradient histogram $h_r$. An iterative histogram specification algorithm is developed for the efficient solution of the GHP model. GHP achieves PSNR/SSIM measures similar to BM3D, LSSC and NCSR, but gives more natural and visually pleasant denoising results.
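The histogram specification step inside GHP can be illustrated with a minimal rank-matching sketch; for simplicity it assumes equal-length value arrays and a strict monotone mapping, which is a simplification of the paper's iterative algorithm:

```python
import numpy as np

def match_histogram(values, reference):
    # Histogram specification by quantile (rank) matching: the k-th
    # smallest value is replaced by the k-th smallest reference value,
    # so the output has exactly the reference histogram while keeping
    # the relative ordering of the input.
    order = np.argsort(values, kind="stable")
    out = np.empty(len(values), dtype=float)
    out[order] = np.sort(reference)
    return out
```

Applied to gradient magnitudes during denoising, such a mapping pushes the result's gradient distribution back toward that of the (estimated) clean image, which is what preserves texture.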
36 GHP results: CVPR 13 logo Original Noisy BM3D GHP 36
37 Noisy image BM3D LSSC NCSR GHP Ground truth 37
38 Group sparsity. In a group sparse solution the nonzero coefficients occur in clusters (whole groups of variables are selected together), whereas in a plain sparse solution they can appear anywhere. A sparse solution vs. a group sparse solution.
39 From 1D to 2D: rank minimization 39
40 Nuclear norm 40
41 Nuclear Norm Minimization (NNM) 41
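NNM with a Frobenius data term has a closed-form solution by soft-thresholding the singular values: for $\min_X \|Y - X\|_F^2 + \lambda\|X\|_*$ every singular value of Y is shrunk by λ/2. A sketch:

```python
import numpy as np

def svt(Y, lam):
    # Singular value thresholding: closed-form minimizer of
    #   ||Y - X||_F^2 + lam * ||X||_*   over X.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam / 2.0, 0.0)) @ Vt
```

Note the uniform shrinkage: every singular value is reduced by the same amount, which is exactly the shortcoming the next slide lists and WNNM addresses.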
42 NNM: pros and cons. Pros: the nuclear norm is the tightest convex envelope of the rank function, and NNM has a closed-form solution. Cons: it treats all singular values equally, ignoring the different significance of different singular values.
43 Weighted nuclear norm minimization (WNNM) 43
44 Optimization of WNNM Q. Xie, D. Meng, S. Gu, L. Zhang, W. Zuo, X. Feng, and Z. Xu, On the optimization of weighted nuclear norm minimization, Technical Report, to be online soon. 44
45 An important corollary Q. Xie, D. Meng, S. Gu, L. Zhang, W. Zuo, X. Feng, and Z. Xu, On the optimization of weighted nuclear norm minimization, Technical Report, to be online soon. 45
46 Application of WNNM to image denoising. 1. For each noisy patch, search the image for its nonlocal similar patches to form a matrix Y. 2. Solve the WNNM problem to estimate the clean patches X from Y. 3. Put the clean patches back into the image. 4. Repeat the above procedure several times to obtain the denoised image.
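Step 2 on a single group of similar patches can be sketched as below. The weight rule (weights inversely proportional to the estimated clean singular values, which are themselves estimated from the noisy ones) follows the CVPR 2014 paper; the constant c is an assumed illustrative value:

```python
import numpy as np

def wnnm_denoise_group(Y, sigma, c=2.8, eps=1e-8):
    # One WNNM step on a matrix Y whose columns are similar noisy patches.
    # Large (important) singular values get small weights and are shrunk
    # little; small (noisy) singular values get large weights and vanish.
    n = Y.shape[1]                        # number of patches in the group
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Estimate the clean singular values from the noisy ones.
    s_clean = np.sqrt(np.maximum(s ** 2 - n * sigma ** 2, 0.0))
    w = c * np.sqrt(n) / (s_clean + eps)  # smaller value -> larger weight
    s_new = np.maximum(s - w, 0.0)        # weighted soft-thresholding
    return U @ np.diag(s_new) @ Vt
```

Because the patches in a group are similar, the clean matrix is close to low rank, so this weighted shrinkage removes noise while keeping the dominant structure almost untouched.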
47 WNNM based image denoising. S. Gu, L. Zhang, W. Zuo and X. Feng, Weighted Nuclear Norm Minimization with Application to Image Denoising, CVPR 2014.
48 The weights 48
49 Experimental results. (a) Ground truth, (b) noisy image (PSNR: 14.16 dB), (c) BM3D (26.78 dB), (d) EPLL (26.65 dB), (e) LSSC (26.77 dB), (f) NCSR (26.66 dB), (g) SAIST (26.63 dB), (h) WNNM (26.98 dB). Denoising results on image Boats by different methods (noise level σ=50).
50 Experimental results. (a) Ground truth, (b) noisy image, (c) BM3D (24.22 dB), (d) EPLL (22.46 dB), (e) LSSC (24.04 dB), (f) NCSR (23.76 dB), (g) SAIST (24.26 dB), (h) WNNM (24.68 dB). Denoising results on image Fence by different methods (noise level σ=75).
51 Experimental results. (a) Ground truth, (b) noisy image (8.10 dB), (c) BM3D (22.52 dB), (d) EPLL (22.23 dB), (e) LSSC (22.24 dB), (f) NCSR (22.11 dB), (g) SAIST (22.61 dB), (h) WNNM (22.91 dB). Denoising results on image Monarch by different methods (noise level σ=100).
52 Experimental results. (a) Ground truth, (b) noisy image (8.10 dB), (c) BM3D (33.05 dB), (d) EPLL (32.61 dB), (e) LSSC (32.88 dB), (f) NCSR (32.95 dB), (g) SAIST (33.08 dB), (h) WNNM (33.12 dB). Denoising results on image House by different methods (noise level σ=100).
53 Experimental results. [Tables: PSNR of BM3D, EPLL, LSSC, NCSR, SAIST and WNNM at σ = 20, 40, 50, 75 and 100 on 20 test images (C-Man, House, Peppers, Montage, Leaves, Starfish, Monarch, Airplane, Paint, JellyBean, Fence, Parrot, Lena, Barbara, Boat, Hill, F.print, Man, Couple, Straw), plus the per-level averages; the numeric entries did not survive transcription.]
58 With patch based image modeling, nonlocal self-similarity, sparse representation, low rank and dictionary learning can be used individually or jointly for image processing.
59 What's next? Actually I don't know. Probably sparse/low-rank + big data? Theoretical analysis? Algorithms and implementation? W.r.t. image restoration, one interesting topic (at least I think so) is perceptual-quality oriented image restoration.
60 Sparse representation: data perspective. Curse of dimensionality: for real-world high-dimensional data, the available samples are usually insufficient. Fortunately, real data often lie on low-dimensional, sparse, or degenerate structures in the high-dimensional space. Subspace methods: PCA, LLE, ISOMAP, ICA, etc. Coding methods: bag-of-words, mixture models, etc.
61 Sparse representation based classification (SRC). Test image y, training dictionary $X = [X_1, X_2, \ldots, X_K]$, coefficients $\alpha = [\alpha_1; \alpha_2; \ldots; \alpha_K]$. Representation: $\min_\alpha \|\alpha\|_1$ s.t. $y = X\alpha$. $\alpha$ is sparse: ideally, it is supported only on images of the same subject. Classification: label(y) = $\arg\min_k r_k$, where $r_k = \|y - X_k\hat{\alpha}_k\|_2$. J. Wright, A. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust Face Recognition via Sparse Representation, PAMI 2009.
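Given a coefficient vector from any L1 solver, the SRC decision rule compares class-wise reconstruction residuals. A sketch, where `labels` is a per-column class array (an assumed data layout):

```python
import numpy as np

def src_classify(X, labels, y, alpha):
    # SRC decision rule: keep only the coefficients of one class at a
    # time and pick the class whose samples best reconstruct y.
    labels = np.asarray(labels)
    classes = np.unique(labels)
    residuals = []
    for k in classes:
        a_k = np.where(labels == k, alpha, 0.0)   # delta_k(alpha)
        residuals.append(np.linalg.norm(y - X @ a_k))
    return classes[int(np.argmin(residuals))]
```

The same rule is reused by the robust and collaborative variants on the following slides; only the way alpha is computed changes.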
62 How to process a corrupted face? Test image y, training dictionary $X = [X_1, X_2, \ldots, X_K]$, coefficients $\alpha = [\alpha_1; \alpha_2; \ldots; \alpha_K]$, corruption e. Representation: $\min \|[\alpha; e]\|_1$ s.t. $y = [X, I]\,[\alpha; e]$, equivalent to $\min_{\alpha, e} \|\alpha\|_1 + \|e\|_1$ s.t. $y = X\alpha + e$. Classification: label(y) = $\arg\min_k r_k$, where $r_k = \|y - X_k\hat{\alpha}_k - \hat{e}\|_2$.
63 Regularized robust coding. Can we have a more principled way to deal with various types of outliers in face images? Our solution: Meng Yang, Lei Zhang, Jian Yang and David Zhang. Robust sparse coding for face recognition. In CVPR 2011. Meng Yang, Lei Zhang, Jian Yang, and David Zhang, Regularized robust coding for face recognition. IEEE Trans. Image Processing, 2013.
64 One big question! Is it true that sparse representation helps face recognition? L. Zhang, M. Yang, and X. Feng. Sparse Representation or Collaborative Representation: Which Helps Face Recognition? In ICCV 2011. L. Zhang, M. Yang, X. Feng, Y. Ma and D. Zhang, Collaborative Representation based Classification for Face Recognition, arXiv preprint.
65 Within-class or across-class representation. Analyze the working mechanism of SRC: if the training samples of a class are enough, use within-class representation (regularized nearest subspace); if not, use across-class representation (collaborative representation).
66 Regularized nearest subspace (RNS). When the training samples are enough: the query sample is represented over the training samples of its own class with a regularized representation coefficient: $\min_\alpha \|y - X_i\alpha\|_2^2$ s.t. $\|\alpha\|_{\ell_p} \le \tau$.
67 Why regularization? Assume we have enough training samples for each class, so that all images of class i can be faithfully represented by $X_i$. All face images are somewhat similar, and some subjects may have very similar face images. Let $X_j = X_i + \Delta$. If $\Delta$ is small (meeting some conditions), the representation error $e_j$ of representing y over $X_j$ can be nearly as small as the error $e_i$ over $X_i$, where $e_j$ and $e_i$ are the representation errors without any constraint on the representation coefficients. Unconstrained residuals alone therefore cannot reliably separate the classes.
68 Regularized nearest subspace with Lp-norm: $\min_\alpha \|y - X_i\alpha\|_2^2$ s.t. $\|\alpha\|_{\ell_p} \le \tau$. (Figure: representation residuals of the correct class vs. a wrong class under L0-, L1- and L2-norm regularization of the coefficients.) Regularization makes classification more stable, and L2-norm regularization can play a similar role to the L0 and L1 norms in this classification task.
69 Why collaborative representation? When the training samples are not enough: FR is a typical small-sample-size problem, and $X_i$ is under-complete in general. Face images of different classes share similarities, so samples from other classes can be used to collaboratively represent the sample of one class: $\min_\alpha \|y - X\alpha\|_2^2 + \lambda\|\alpha\|_{\ell_p}$, with $X = [X_1, X_2, \ldots, X_K]$. This dilutes the small-sample-size problem and considers the competition between different classes.
70 Why collaborative representation? Without the $\ell_p$-norm regularization in coding, the representation is the perpendicular projection of y onto the space spanned by X: $\hat{\alpha} = \arg\min_\alpha \|y - X\alpha\|_2^2$, $\hat{y} = X\hat{\alpha} = \sum_i X_i\hat{\alpha}_i$. Only $e_i^* = \|\hat{y} - X_i\hat{\alpha}_i\|_2$ works for classification; the double checking in $e^*$ makes the classification more effective and robust.
71 L1 vs. L2 in regularization: $\min_\alpha \|y - X\alpha\|_2^2 + \lambda\|\alpha\|_{\ell_p}$. (Figure: coefficients of $\ell_1$-regularized minimization are sparse; coefficients of $\ell_2$-regularized minimization are non-sparse.) Though L1 leads to sparser coefficients, the recognition rates of the two are similar over a wide range of regularization strengths (e.g., 0, 1e-6, 5e-5, 5e-4, 5e-3, ...).
72 Collaborative representation model: $\min_\alpha \|y - X\alpha\|_{\ell_q}^q + \lambda\|\alpha\|_{\ell_p}^p$, with p, q = 1 or 2. q=2, p=1: Sparse Representation based Classification (S-SRC). q=2, p=2: Collaborative Representation based Classification with regularized least squares (CRC_RLS). q=1, p=1: Robust Sparse Representation based Classification (R-SRC). q=1, p=2: Robust Collaborative Representation based Classification (R-CRC). CRC_RLS has a closed-form solution; the others have iterative solutions.
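The q=2, p=2 case (CRC_RLS) is ridge regression, so the coding is a single linear solve, $\hat{\alpha} = (X^\top X + \lambda I)^{-1} X^\top y$, followed by classification with the regularized class-wise residual $\|y - X_k\hat{\alpha}_k\|_2 / \|\hat{\alpha}_k\|_2$ as in the CRC paper. A sketch (`labels` is an assumed per-column class array):

```python
import numpy as np

def crc_rls(X, labels, y, lam=1e-3):
    # CRC_RLS: collaborative representation with regularized least squares.
    # Closed-form coding: alpha = (X^T X + lam I)^{-1} X^T y.
    labels = np.asarray(labels)
    P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    alpha = P @ y
    classes = np.unique(labels)
    # Classify by regularized residual: ||y - X_k alpha_k|| / ||alpha_k||.
    scores = []
    for k in classes:
        a_k = np.where(labels == k, alpha, 0.0)
        scores.append(np.linalg.norm(y - X @ a_k)
                      / (np.linalg.norm(alpha[labels == k]) + 1e-12))
    return classes[int(np.argmin(scores))], alpha
```

Since the projection matrix P does not depend on y, it can be precomputed once for a whole test set, which is the source of the large speed-ups reported later.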
73 Gender classification. 700 male and 700 female samples in the training set; feature: 300-d Eigenface (AR database). Results: RNS_L1 94.9%, RNS_L2 94.9%, CRC-RLS 93.7%, S-SRC 92.3%, SVM 92.4%, LRC 27.3%, NN 90.7%. Big benefit (67% improvement over LRC) brought by regularization on the coding vector!
74 Face recognition without occlusion. Training samples per subject are limited. (Figure: recognition rate vs. Eigenface dimension on the AR database for NN, LRC, SVM, S-SRC and CRC_RLS; CRC_RLS attains the highest accuracy when the feature dimension is not too low.)
75 Face recognition with pixel corruption. (Figure: recognition rates of R-SRC and R-CRC on Extended Yale B as the random pixel corruption percentage increases; example shown with 70% random corruption.)
76 Face recognition with real disguise (AR database). Sunglasses (test 1) / scarf (test 1) / sunglasses (test 2) / scarf (test 2): R-SRC 87.0% / 59.5% / 69.8% / 40.8%; CRC-RLS 68.5% / 90.5% / 57.2% / 71.8%; R-CRC 87.0% / 86.0% / 65.8% / 73.2%. Significant improvement in the case of scarf.
77 Running time. No occlusion (MPIE): recognition rates L1_ls 92.6%, Homotopy 92.0%, FISTA 79.6%, ALM 92.0%, CRC-RLS 92.2%, with a large speed-up for CRC-RLS. With corruption (MPIE): R-CRC likewise runs much faster than L1_ls, Homotopy, SpaRSA, FISTA and ALM at comparable accuracy. [The running-time numbers did not survive transcription.]
78 One even bigger question! SRC/CRC represents the query face over gallery faces from all classes, yet uses the representation residual of each class for classification. So what kind of classifier is SRC/CRC? Why does SRC/CRC work? L. Zhang, W. Zuo, X. Feng, and Y. Ma, A Probabilistic Formulation of Collaborative Representation based Classifier, Preprint, to be online soon.
79 Probabilistic subspace of $X_k$. Samples of class k: $X_k = [x_1, x_2, \ldots, x_n]$. S: the subspace spanned by $X_k$. Each data point x in S can be written as $x = X_k\alpha$. We assume that the probability that x belongs to class k is determined by $\alpha$: $P(\mathrm{label}(x)=k) \propto \exp(-c\|\alpha\|_2^2)$. It can be shown that such a probability depends on the distribution of the samples in $X_k$; the red point will have much higher probability than the green one.
80 Representation of the query sample y. The query sample y usually lies outside the subspace of $X_k$. The probability that y belongs to class k is determined by two factors: given $x = X_k\alpha$, how likely is it that y has the same class label as x? And what is the probability that x belongs to class k? By maximizing the product of the two probabilities, we have $p_k = \max_\alpha \log P(\mathrm{label}(y)=k) \;\Leftrightarrow\; \min_\alpha \|y - X_k\alpha\|_2^2 + c\|\alpha\|_2^2$.
81 Two classes. $X_1 = [x_{1,1}, x_{1,2}, \ldots, x_{1,n}]$; $X_2 = [x_{2,1}, x_{2,2}, \ldots, x_{2,n}]$. S: the subspace spanned by $[X_1, X_2]$. Each data point x in S can be written as $x = X_1\alpha_1 + X_2\alpha_2$. x belongs to $X_1$ or $X_2$ with certain probability: $P(\mathrm{label}(x)=1) \propto \exp(-(\|x - X_1\alpha_1\|_2^2 + c\|\alpha\|_2^2))$, and $P(\mathrm{label}(x)=2) \propto \exp(-(\|x - X_2\alpha_2\|_2^2 + c\|\alpha\|_2^2))$.
82 Collaborative representation of y. y lies outside the subspace $\{X_1\alpha_1 + X_2\alpha_2\}$. The probability that y belongs to class 1 or 2 depends on how likely y has the same class label as $x = X_1\alpha_1 + X_2\alpha_2$ and on the probability that x belongs to class 1 or 2: $p_1 = \max_{\{\alpha_1, \alpha_2\}} \log P(\mathrm{label}(y)=1) \;\Leftrightarrow\; \min_\alpha \|y - (X_1\alpha_1 + X_2\alpha_2)\|_2^2 + \|X\alpha - X_1\alpha_1\|_2^2 + c\|\alpha\|_2^2$, and analogously for $p_2$.
83 General case. The probability that the query sample y belongs to class k can be computed as $p_k \;\Leftrightarrow\; \min_\alpha \|y - \sum_{i=1}^{K} X_i\alpha_i\|_2^2 + \|\sum_{i=1}^{K} X_i\alpha_i - X_k\alpha_k\|_2^2 + c\|\alpha\|_2^2$. The classification rule: label(y) = $\arg\max_k \{p_k\}$. Problem: for each class k, we need to solve the optimization once, which can be costly.
84 Joint probability. For a data point x in the subspace spanned by all classes X, we define the joint probability $P(\mathrm{label}(x)=1, \ldots, \mathrm{label}(x)=K) \propto \exp(-(\sum_{i=1}^{K}\|x - X_i\alpha_i\|_2^2 + c\|\alpha\|_2^2))$. For the query sample y outside the subspace of X, we have $\max_\alpha \log P \;\Leftrightarrow\; \min_\alpha \|y - X\alpha\|_2^2 + \sum_{i=1}^{K}\|X\alpha - X_i\alpha_i\|_2^2 + c\|\alpha\|_2^2$. We use the marginal probability for classification: $p_k = P(\mathrm{label}(y)=k) \propto \exp(-(\|y - X\hat{\alpha}\|_2^2 + \|X\hat{\alpha} - X_k\hat{\alpha}_k\|_2^2 + c\|\hat{\alpha}\|_2^2))$, label(y) = $\arg\max_k \{p_k\}$. We only need to solve the optimization once.
85 Variants. ProCRC-$\ell_2$ (closed-form solution): $\min_\alpha \|y - X\alpha\|_2^2 + \sum_{i=1}^{K}\|X\alpha - X_i\alpha_i\|_2^2 + \lambda\|\alpha\|_2^2$. ProCRC-$\ell_1$: the same model with an $\ell_1$ regularizer on $\alpha$. Robust ProCRC (ProCRC-r): the $\ell_2$ fidelity term is replaced by a robust $\ell_1$ fidelity $\|y - X\alpha\|_1$.
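Since every term of ProCRC-$\ell_2$ is quadratic, the coding vector has a closed form. A sketch under the model form reconstructed above, with λ and γ as assumed regularization parameters and `labels` an assumed per-column class array:

```python
import numpy as np

def procrc_l2(X, labels, y, lam=1e-3, gamma=1e-3):
    # ProCRC-l2: all terms are quadratic, so coding is one linear solve.
    labels = np.asarray(labels)
    classes = np.unique(labels)
    K = len(classes)
    A = X.T @ X + lam * np.eye(X.shape[1])
    for k in classes:
        Xk_bar = X.copy()
        Xk_bar[:, labels == k] = 0.0   # X alpha - X_k alpha_k = Xk_bar @ alpha
        A += (gamma / K) * (Xk_bar.T @ Xk_bar)
    alpha = np.linalg.solve(A, X.T @ y)
    # Classify by the largest marginal probability, i.e. the smallest
    # ||X alpha - X_k alpha_k||.
    scores = []
    for k in classes:
        a_k = np.where(labels == k, alpha, 0.0)
        scores.append(np.linalg.norm(X @ alpha - X @ a_k))
    return classes[int(np.argmin(scores))]
```

As with CRC_RLS, the matrix A is independent of y, so its factorization can be shared across all test samples.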
86 Face recognition: AR. [Table: accuracy of SVM, NSC, CRC, SRC, CROC, ProCRC-$\ell_1$ and ProCRC-$\ell_2$ at several feature dimensions; numeric values did not survive transcription.]
87 Face recognition: Extended Yale B. [Table: same classifiers at several feature dimensions; numeric values did not survive transcription.]
88 Robust face recognition. [Tables: SRC-r vs. ProCRC-r under random corruption on YaleB (corruption ratios 10%, 20%, 40%, 60%), block occlusion on YaleB (occlusion ratios 10%, 20%, 30%, 40%), and disguise on AR (sunglasses, scarf); numeric values did not survive transcription.]
89 Handwritten digit recognition: MNIST. [Table: SVM, NSC, CRC, SRC, CROC, ProCRC-$\ell_1$ and ProCRC-$\ell_2$ for varying numbers of training samples per class; numeric values did not survive transcription.]
90 Handwritten digit recognition: USPS. [Table: same classifiers for varying numbers of training samples per class; numeric values did not survive transcription.]
91 Running time. Intel Core(TM) i7-2720QM 2.20 GHz CPU with 8 GB RAM. [Table: running time in seconds of the different methods on the Extended Yale B dataset; numeric values did not survive transcription.]
92 Remarks. ProCRC provides a good probabilistic interpretation of collaborative representation based classifiers (NSC, SRC and CRC). ProCRC achieves higher classification accuracy than the competing classifiers in most experiments. ProCRC shows small performance variation across different numbers of training samples and feature dimensions, i.e., it is robust to training sample size and feature dimension.
93 Take Research as Fun! Thank you!
More informationLearning based face hallucination techniques: A survey
Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)
More informationGeneralized Tree-Based Wavelet Transform and Applications to Patch-Based Image Processing
Generalized Tree-Based Wavelet Transform and * Michael Elad The Computer Science Department The Technion Israel Institute of technology Haifa 32000, Israel *Joint work with A Seminar in the Hebrew University
More informationStructure-adaptive Image Denoising with 3D Collaborative Filtering
, pp.42-47 http://dx.doi.org/10.14257/astl.2015.80.09 Structure-adaptive Image Denoising with 3D Collaborative Filtering Xuemei Wang 1, Dengyin Zhang 2, Min Zhu 2,3, Yingtian Ji 2, Jin Wang 4 1 College
More informationImage denoising with patch based PCA: local versus global
DELEDALLE, SALMON, DALALYAN: PATCH BASED PCA 1 Image denoising with patch based PCA: local versus global Charles-Alban Deledalle http://perso.telecom-paristech.fr/~deledall/ Joseph Salmon http://www.math.jussieu.fr/~salmon/
More informationDepartment of Electronics and Communication KMP College of Engineering, Perumbavoor, Kerala, India 1 2
Vol.3, Issue 3, 2015, Page.1115-1021 Effect of Anti-Forensics and Dic.TV Method for Reducing Artifact in JPEG Decompression 1 Deepthy Mohan, 2 Sreejith.H 1 PG Scholar, 2 Assistant Professor Department
More informationDetecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference
Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400
More informationImage Super-Resolution Reconstruction Based On L 1/2 Sparsity
Buletin Teknik Elektro dan Informatika (Bulletin of Electrical Engineering and Informatics) Vol. 3, No. 3, September 4, pp. 55~6 ISSN: 89-39 55 Image Super-Resolution Reconstruction Based On L / Sparsity
More informationRobust Face Recognition via Sparse Representation
Robust Face Recognition via Sparse Representation Panqu Wang Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA 92092 pawang@ucsd.edu Can Xu Department of
More informationMATCHING PURSUIT BASED CONVOLUTIONAL SPARSE CODING. Elad Plaut and Raja Giryes
MATCHING PURSUIT BASED CONVOLUTIONAL SPARSE CODING Elad Plaut and Raa Giryes School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel ABSTRACT Convolutional sparse coding using the l 0,
More informationExtended Dictionary Learning : Convolutional and Multiple Feature Spaces
Extended Dictionary Learning : Convolutional and Multiple Feature Spaces Konstantina Fotiadou, Greg Tsagkatakis & Panagiotis Tsakalides kfot@ics.forth.gr, greg@ics.forth.gr, tsakalid@ics.forth.gr ICS-
More informationNonlocal Spectral Prior Model for Low-level Vision
Nonlocal Spectral Prior Model for Low-level Vision Shenlong Wang, Lei Zhang, Yan Liang Northwestern Polytechnical University, The Hong Kong Polytechnic University Abstract. Image nonlocal self-similarity
More informationProjective dictionary pair learning for pattern classification
Projective dictionary pair learning for pattern classification Shuhang Gu 1, Lei Zhang 1, Wangmeng Zuo 2, Xiangchu Feng 3 1 Dept. of Computing, The Hong Kong Polytechnic University, Hong Kong, China 2
More informationPRINCIPAL COMPONENT ANALYSIS IMAGE DENOISING USING LOCAL PIXEL GROUPING
PRINCIPAL COMPONENT ANALYSIS IMAGE DENOISING USING LOCAL PIXEL GROUPING Divesh Kumar 1 and Dheeraj Kalra 2 1 Department of Electronics & Communication Engineering, IET, GLA University, Mathura 2 Department
More informationImage Denoising Using Sparse Representations
Image Denoising Using Sparse Representations SeyyedMajid Valiollahzadeh 1,,HamedFirouzi 1, Massoud Babaie-Zadeh 1, and Christian Jutten 2 1 Department of Electrical Engineering, Sharif University of Technology,
More informationTHE goal of image denoising is to restore the clean image
1 Non-Convex Weighted l p Minimization based Group Sparse Representation Framework for Image Denoising Qiong Wang, Xinggan Zhang, Yu Wu, Lan Tang and Zhiyuan Zha model favors the piecewise constant image
More informationLearning Splines for Sparse Tomographic Reconstruction. Elham Sakhaee and Alireza Entezari University of Florida
Learning Splines for Sparse Tomographic Reconstruction Elham Sakhaee and Alireza Entezari University of Florida esakhaee@cise.ufl.edu 2 Tomographic Reconstruction Recover the image given X-ray measurements
More informationRobust Principal Component Analysis (RPCA)
Robust Principal Component Analysis (RPCA) & Matrix decomposition: into low-rank and sparse components Zhenfang Hu 2010.4.1 reference [1] Chandrasekharan, V., Sanghavi, S., Parillo, P., Wilsky, A.: Ranksparsity
More informationSparse Models in Image Understanding And Computer Vision
Sparse Models in Image Understanding And Computer Vision Jayaraman J. Thiagarajan Arizona State University Collaborators Prof. Andreas Spanias Karthikeyan Natesan Ramamurthy Sparsity Sparsity of a vector
More informationRobust and Secure Iris Recognition
Robust and Secure Iris Recognition Vishal M. Patel University of Maryland, UMIACS pvishalm@umiacs.umd.edu IJCB 2011 Tutorial Sparse Representation and Low-Rank Representation for Biometrics Outline Iris
More informationOn Single Image Scale-Up using Sparse-Representation
On Single Image Scale-Up using Sparse-Representation Roman Zeyde, Matan Protter and Michael Elad The Computer Science Department Technion Israel Institute of Technology Haifa 32000, Israel {romanz,matanpr,elad}@cs.technion.ac.il
More informationPart-based and local feature models for generic object recognition
Part-based and local feature models for generic object recognition May 28 th, 2015 Yong Jae Lee UC Davis Announcements PS2 grades up on SmartSite PS2 stats: Mean: 80.15 Standard Dev: 22.77 Vote on piazza
More informationLocally Adaptive Learning for Translation-Variant MRF Image Priors
Locally Adaptive Learning for Translation-Variant MRF Image Priors Masayuki Tanaka and Masatoshi Okutomi Tokyo Institute of Technology 2-12-1 O-okayama, Meguro-ku, Tokyo, JAPAN mtanaka@ok.ctrl.titech.ac.p,
More informationImage Inpainting Using Sparsity of the Transform Domain
Image Inpainting Using Sparsity of the Transform Domain H. Hosseini*, N.B. Marvasti, Student Member, IEEE, F. Marvasti, Senior Member, IEEE Advanced Communication Research Institute (ACRI) Department of
More informationAn Overview on Dictionary and Sparse Representation in Image Denoising
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p- ISSN: 2278-8735.Volume 9, Issue 6, Ver. I (Nov - Dec. 2014), PP 65-70 An Overview on Dictionary and Sparse Representation
More informationAn efficient face recognition algorithm based on multi-kernel regularization learning
Acta Technica 61, No. 4A/2016, 75 84 c 2017 Institute of Thermomechanics CAS, v.v.i. An efficient face recognition algorithm based on multi-kernel regularization learning Bi Rongrong 1 Abstract. A novel
More informationOne Network to Solve Them All Solving Linear Inverse Problems using Deep Projection Models
One Network to Solve Them All Solving Linear Inverse Problems using Deep Projection Models [Supplemental Materials] 1. Network Architecture b ref b ref +1 We now describe the architecture of the networks
More informationSparse Variation Dictionary Learning for Face Recognition with A Single Training Sample Per Person
Sparse Variation Dictionary Learning for Face Recognition with A Single Training Sample Per Person Meng Yang, Luc Van Gool ETH Zurich Switzerland {yang,vangool}@vision.ee.ethz.ch Lei Zhang The Hong Kong
More informationInverse Problems and Machine Learning
Inverse Problems and Machine Learning Julian Wörmann Research Group for Geometric Optimization and Machine Learning (GOL) 1 What are inverse problems? 2 Inverse Problems cause/ excitation 3 Inverse Problems
More informationSingle-patch low-rank prior for non-pointwise impulse noise removal
Single-patch low-rank prior for non-pointwise impulse noise removal Ruixuan Wang Emanuele Trucco School of Computing, University of Dundee, UK {ruixuanwang, manueltrucco}@computing.dundee.ac.uk Abstract
More informationRestoration of Images Corrupted by Mixed Gaussian Impulse Noise with Weighted Encoding
Restoration of Images Corrupted by Mixed Gaussian Impulse Noise with Weighted Encoding Om Prakash V. Bhat 1, Shrividya G. 2, Nagaraj N. S. 3 1 Post Graduation student, Dept. of ECE, NMAMIT-Nitte, Karnataka,
More informationSHIP WAKE DETECTION FOR SAR IMAGES WITH COMPLEX BACKGROUNDS BASED ON MORPHOLOGICAL DICTIONARY LEARNING
SHIP WAKE DETECTION FOR SAR IMAGES WITH COMPLEX BACKGROUNDS BASED ON MORPHOLOGICAL DICTIONARY LEARNING Guozheng Yang 1, 2, Jing Yu 3, Chuangbai Xiao 3, Weidong Sun 1 1 State Key Laboratory of Intelligent
More informationA Comparative Analysis of Noise Reduction Filters in Images Mandeep kaur 1, Deepinder kaur 2
A Comparative Analysis of Noise Reduction Filters in Images Mandeep kaur 1, Deepinder kaur 2 1 Research Scholar, Dept. Of Computer Science & Engineering, CT Institute of Technology & Research, Jalandhar,
More informationarxiv: v1 [cs.cv] 18 Jan 2019
Good Similar Patches for Image Denoising Si Lu Portland State University lusi@pdx.edu arxiv:1901.06046v1 [cs.cv] 18 Jan 2019 Abstract Patch-based denoising algorithms like BM3D have achieved outstanding
More informationGuided Image Super-Resolution: A New Technique for Photogeometric Super-Resolution in Hybrid 3-D Range Imaging
Guided Image Super-Resolution: A New Technique for Photogeometric Super-Resolution in Hybrid 3-D Range Imaging Florin C. Ghesu 1, Thomas Köhler 1,2, Sven Haase 1, Joachim Hornegger 1,2 04.09.2014 1 Pattern
More informationAugmented Coupled Dictionary Learning for Image Super-Resolution
Augmented Coupled Dictionary Learning for Image Super-Resolution Muhammad Rushdi and Jeffrey Ho Computer and Information Science and Engineering University of Florida Gainesville, Florida, U.S.A. Email:
More informationMULTIVIEW 3D VIDEO DENOISING IN SLIDING 3D DCT DOMAIN
20th European Signal Processing Conference (EUSIPCO 2012) Bucharest, Romania, August 27-31, 2012 MULTIVIEW 3D VIDEO DENOISING IN SLIDING 3D DCT DOMAIN 1 Michal Joachimiak, 2 Dmytro Rusanovskyy 1 Dept.
More informationRobust Multimodal Dictionary Learning
Robust Multimodal Dictionary Learning Tian Cao 1, Vladimir Jojic 1, Shannon Modla 3, Debbie Powell 3, Kirk Czymmek 4, and Marc Niethammer 1,2 1 University of North Carolina at Chapel Hill, NC 2 Biomedical
More informationHIGH-QUALITY IMAGE INTERPOLATION VIA LOCAL AUTOREGRESSIVE AND NONLOCAL 3-D SPARSE REGULARIZATION
HIGH-QUALITY IMAGE INTERPOLATION VIA LOCAL AUTOREGRESSIVE AND NONLOCAL 3-D SPARSE REGULARIZATION Xinwei Gao, Jian Zhang, Feng Jiang, Xiaopeng Fan, Siwei Ma, Debin Zhao School of Computer Science and Technology,
More informationDiscriminative sparse model and dictionary learning for object category recognition
Discriative sparse model and dictionary learning for object category recognition Xiao Deng and Donghui Wang Institute of Artificial Intelligence, Zhejiang University Hangzhou, China, 31007 {yellowxiao,dhwang}@zju.edu.cn
More informationImage denoising using curvelet transform: an approach for edge preservation
Journal of Scientific & Industrial Research Vol. 3469, January 00, pp. 34-38 J SCI IN RES VOL 69 JANUARY 00 Image denoising using curvelet transform: an approach for edge preservation Anil A Patil * and
More informationBlind Image Deblurring Using Dark Channel Prior
Blind Image Deblurring Using Dark Channel Prior Jinshan Pan 1,2,3, Deqing Sun 2,4, Hanspeter Pfister 2, and Ming-Hsuan Yang 3 1 Dalian University of Technology 2 Harvard University 3 UC Merced 4 NVIDIA
More informationREJECTION-BASED CLASSIFICATION FOR ACTION RECOGNITION USING A SPATIO-TEMPORAL DICTIONARY. Stefen Chan Wai Tim, Michele Rombaut, Denis Pellerin
REJECTION-BASED CLASSIFICATION FOR ACTION RECOGNITION USING A SPATIO-TEMPORAL DICTIONARY Stefen Chan Wai Tim, Michele Rombaut, Denis Pellerin Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble, France ABSTRACT
More informationIMAGE SUPER-RESOLUTION BASED ON DICTIONARY LEARNING AND ANCHORED NEIGHBORHOOD REGRESSION WITH MUTUAL INCOHERENCE
IMAGE SUPER-RESOLUTION BASED ON DICTIONARY LEARNING AND ANCHORED NEIGHBORHOOD REGRESSION WITH MUTUAL INCOHERENCE Yulun Zhang 1, Kaiyu Gu 2, Yongbing Zhang 1, Jian Zhang 3, and Qionghai Dai 1,4 1 Shenzhen
More informationImage Denoising Based on Hybrid Fourier and Neighborhood Wavelet Coefficients Jun Cheng, Songli Lei
Image Denoising Based on Hybrid Fourier and Neighborhood Wavelet Coefficients Jun Cheng, Songli Lei College of Physical and Information Science, Hunan Normal University, Changsha, China Hunan Art Professional
More informationImage Inpainting by Patch Propagation Using Patch Sparsity Zongben Xu and Jian Sun
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 5, MAY 2010 1153 Image Inpainting by Patch Propagation Using Patch Sparsity Zongben Xu and Jian Sun Abstract This paper introduces a novel examplar-based
More informationLearning Data Terms for Non-blind Deblurring Supplemental Material
Learning Data Terms for Non-blind Deblurring Supplemental Material Jiangxin Dong 1, Jinshan Pan 2, Deqing Sun 3, Zhixun Su 1,4, and Ming-Hsuan Yang 5 1 Dalian University of Technology dongjxjx@gmail.com,
More informationSalt and pepper noise removal in surveillance video based on low-rank matrix recovery
Computational Visual Media DOI 10.1007/s41095-015-0005-5 Vol. 1, No. 1, March 2015, 59 68 Research Article Salt and pepper noise removal in surveillance video based on low-rank matrix recovery Yongxia
More informationImage Processing Via Pixel Permutations
Image Processing Via Pixel Permutations Michael Elad The Computer Science Department The Technion Israel Institute of technology Haifa 32000, Israel Joint work with Idan Ram Israel Cohen The Electrical
More informationCertain Explorations On Removal Of Rain Streaks Using Morphological Component Analysis
Certain Explorations On Removal Of Rain Streaks Using Morphological Component Analysis Jaina George 1, S.Bhavani 2, Dr.J.Jaya 3 1. PG Scholar, Sri.Shakthi Institute of Engineering and Technology, Coimbatore,
More informationA DEEP DICTIONARY MODEL FOR IMAGE SUPER-RESOLUTION. Jun-Jie Huang and Pier Luigi Dragotti
A DEEP DICTIONARY MODEL FOR IMAGE SUPER-RESOLUTION Jun-Jie Huang and Pier Luigi Dragotti Communications and Signal Processing Group CSP), Imperial College London, UK ABSTRACT Inspired by the recent success
More informationNTHU Rain Removal Project
People NTHU Rain Removal Project Networked Video Lab, National Tsing Hua University, Hsinchu, Taiwan Li-Wei Kang, Institute of Information Science, Academia Sinica, Taipei, Taiwan Chia-Wen Lin *, Department
More informationSparse & Redundant Representation Modeling of Images: Theory and Applications
Sparse & Redundant Representation Modeling of Images: Theory and Applications Michael Elad The Computer Science Department The Technion Haifa 3, Israel The research leading to these results has been received
More informationTRADITIONAL patch-based sparse coding has been
Bridge the Gap Between Group Sparse Coding and Rank Minimization via Adaptive Dictionary Learning Zhiyuan Zha, Xin Yuan, Senior Member, IEEE arxiv:709.03979v2 [cs.cv] 8 Nov 207 Abstract Both sparse coding
More informationSparse & Redundant Representations and Their Applications in Signal and Image Processing
Sparse & Redundant Representations and Their Applications in Signal and Image Processing Sparseland: An Estimation Point of View Michael Elad The Computer Science Department The Technion Israel Institute
More informationCID: Combined Image Denoising in Spatial and Frequency Domains Using Web Images
CID: Combined Image Denoising in Spatial and Frequency Domains Using Web Images Huanjing Yue 1, Xiaoyan Sun 2, Jingyu Yang 1, Feng Wu 3 1 Tianjin University, Tianjin, China. {dayueer,yjy}@tju.edu.cn 2
More informationMULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS. Yanghao Li, Jiaying Liu, Wenhan Yang, Zongming Guo
MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS Yanghao Li, Jiaying Liu, Wenhan Yang, Zongg Guo Institute of Computer Science and Technology, Peking University, Beijing, P.R.China,
More informationarxiv: v1 [cs.cv] 30 Oct 2018
Image Restoration using Total Variation Regularized Deep Image Prior Jiaming Liu 1, Yu Sun 2, Xiaojian Xu 2 and Ulugbek S. Kamilov 1,2 1 Department of Electrical and Systems Engineering, Washington University
More informationPatch Group based Bayesian Learning for Blind Image Denoising
Patch Group based Bayesian Learning for Blind Image Denoising Jun Xu 1, Dongwei Ren 1,2, Lei Zhang 1, David Zhang 1 1 Dept. of Computing, The Hong Kong Polytechnic University, Hong Kong, China 2 School
More informationRecovering Realistic Texture in Image Super-resolution by Deep Spatial Feature Transform. Xintao Wang Ke Yu Chao Dong Chen Change Loy
Recovering Realistic Texture in Image Super-resolution by Deep Spatial Feature Transform Xintao Wang Ke Yu Chao Dong Chen Change Loy Problem enlarge 4 times Low-resolution image High-resolution image Previous
More informationLEARNING COMPRESSED IMAGE CLASSIFICATION FEATURES. Qiang Qiu and Guillermo Sapiro. Duke University, Durham, NC 27708, USA
LEARNING COMPRESSED IMAGE CLASSIFICATION FEATURES Qiang Qiu and Guillermo Sapiro Duke University, Durham, NC 2778, USA ABSTRACT Learning a transformation-based dimension reduction, thereby compressive,
More informationIMAGE DENOISING BY TARGETED EXTERNAL DATABASES
IMAGE DENOISING BY TARGETED EXTERNAL DATABASES Enming Luo 1, Stanley H. Chan, and Truong Q. Nguyen 1 1 University of California, San Diego, Dept of ECE, 9500 Gilman Drive, La Jolla, CA 9093. Harvard School
More informationAUTOMATIC data summarization, which attempts to
748 JOURNAL OF SOFTWARE, VOL. 9, NO. 3, MARCH 2014 Sparse Affinity Propagation for Image Analysis Xue Zhang, Jian Cheng Lv Machine Intelligence Laboratory, College of Computer Science, Sichuan University,
More informationDenoising an Image by Denoising its Components in a Moving Frame
Denoising an Image by Denoising its Components in a Moving Frame Gabriela Ghimpețeanu 1, Thomas Batard 1, Marcelo Bertalmío 1, and Stacey Levine 2 1 Universitat Pompeu Fabra, Spain 2 Duquesne University,
More informationImage Denoising based on Adaptive BM3D and Singular Value
Image Denoising based on Adaptive BM3D and Singular Value Decomposition YouSai hang, ShuJin hu, YuanJiang Li Institute of Electronic and Information, Jiangsu University of Science and Technology, henjiang,
More informationLearning Dictionaries of Discriminative Image Patches
Downloaded from orbit.dtu.dk on: Nov 22, 2018 Learning Dictionaries of Discriminative Image Patches Dahl, Anders Bjorholm; Larsen, Rasmus Published in: Proceedings of the British Machine Vision Conference
More informationNon-Parametric Bayesian Dictionary Learning for Sparse Image Representations
Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations Mingyuan Zhou, Haojun Chen, John Paisley, Lu Ren, 1 Guillermo Sapiro and Lawrence Carin Department of Electrical and Computer
More informationAddress for Correspondence 1 Associate Professor, 2 Research Scholar, 3 Professor, Department of Electronics and Communication Engineering
Research Paper ITERATIVE NON LOCAL IMAGE RESTORATION USING INTERPOLATION OF UP AND DOWN SAMPLING 1 R.Jothi Chitra, 2 K.Sakthidasan @ Sankaran and 3 V.Nagarajan Address for Correspondence 1 Associate Professor,
More information