ECE 484 Digital Image Processing
Lec 17 - Part II Review & Final Project Topics
Zhu Li, Dept of CSEE, UMKC
Office: FH560E, Email: lizhu@umkc.edu, Ph: x 2346. http://l.web.umkc.edu/lizhu
Slides created with WPS Office Linux and EqualX equation editor
Z. Li, ECE 484 Digital Image Processing, 2018 p.1
Outline
Part II Summary & Exam 2
- Exam 2
- Image Dimension Reduction (SVD, PCA, LEM)
- Eigenface
- Fisherface
- Conv Neural Networks
- Training of CNN
Course Projects
Exam 2
Time & Venue: 11/29, in class.
Format: closed book, but you can bring an A4 cheat sheet; multiple choice and problem solving.
Coverage: only the material covered after Exam 1.
Relax, more conceptual than gory details.
SVD
Projection decomposition for a non-square matrix $A_{m\times n}$:
$A = U \Sigma V^T$, with $U$ ($m\times m$) and $V$ ($n\times n$) orthonormal, and $\Sigma$ ($m\times n$) diagonal with the singular values $\sigma_1 \ge \sigma_2 \ge \dots \ge 0$.
SVD as Signal Decomposition
$A_{m\times n} = U_{m\times m} S_{m\times n} V^T_{n\times n}$, i.e., $A = \sum_j \sigma_j u_j v_j^T$.
The 1st-order SVD approx. of $A$ is: $A \approx \sigma_1 u_1 v_1^T$.
SVD approximation of an image
Very easy:

  function [x]=svd_approx(x0, k)
  dbg=0;
  if dbg
    x0=fix(100*randn(4,6)); k=2;
  end
  [u, s, v]=svd(x0);
  [m, n]=size(s);
  x = zeros(m, n);
  sgm = diag(s);
  for j=1:k
    x = x + sgm(j)*u(:,j)*v(:,j)';
  end
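The same rank-k approximation can be sketched in numpy (a minimal port of the Matlab svd_approx above; the function name mirrors the slide, not an existing library API):

```python
import numpy as np

def svd_approx(x0, k):
    """Rank-k approximation of x0 by summing the first k rank-1 SVD terms."""
    u, s, vt = np.linalg.svd(x0, full_matrices=False)
    # Equivalent to sum_j sigma_j * u_j * v_j' for j = 1..k
    return (u[:, :k] * s[:k]) @ vt[:k, :]
```

For an image, small k already captures most of the energy, since the singular values decay quickly.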
Min Error Reconstruction: Derivation of PCA Algorithm
GOAL: find an orthonormal basis $\{w_i\}$ minimizing the reconstruction error
$J = \sum_k \left\| x_k - \sum_{i=1}^{kd} (w_i^T x_k)\, w_i \right\|^2$.
Justification of PCA Algorithm
The residual error lives in the remaining dimensions; note x is centered!
PCA Reconstruction Error Minimization
GOAL: equivalently, maximize the projected variance $w^T S w$ subject to $w^T w = 1$. Using a Lagrange multiplier for the constraint, the KKT condition gives the eigen equation $S w = \lambda w$.
Justification of PCA
PCA Algorithm
Center the data: X = X - repmat(mean(X), [n, 1]);
Principal component #1 points in the direction of the largest variance.
Each subsequent principal component is orthogonal to the previous ones, and points in the direction of the largest variance of the residual subspace.
Solved by finding the eigenvectors of the scatter/covariance matrix of the data:
S = cov(X); [A, eigv] = eig(S);
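The steps above can be sketched in numpy (a minimal sketch, assuming rows of X are samples; the function name pca is illustrative, not a library call):

```python
import numpy as np

def pca(X, kd):
    """PCA by eigen-decomposition of the covariance matrix.
    X: n x d data matrix (rows = samples). Returns (mean, d x kd basis A)."""
    mu = X.mean(axis=0)
    Xc = X - mu                        # center the data, as in the slide
    S = np.cov(Xc, rowvar=False)       # scatter/covariance matrix
    eigval, A = np.linalg.eigh(S)      # S is symmetric -> real eigenpairs
    order = np.argsort(eigval)[::-1]   # sort by descending variance
    return mu, A[:, order[:kd]]
```

The first column of A is principal component #1, the direction of largest variance.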
PCA & Fisher's Linear Discriminant
Between-class scatter: $S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T$
Within-class scatter: $S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T$
Total scatter: $S_T = S_B + S_W$
where $c$ is the number of classes, $\mu_i$ is the mean of class $i$, and $N_i$ is the number of samples of class $i$.
Eigen vs Fisher Projection
PCA (Eigenfaces): $W_{PCA} = \arg\max_W |W^T S_T W|$ maximizes the projected total scatter.
Fisher's Linear Discriminant: $W_{fld} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|}$ maximizes the ratio of projected between-class to projected within-class scatter, solved by the generalized eigen problem $S_B w = \lambda S_W w$.
Dealing with Singularity of $S_W$
$W_{PCA} = \arg\max_W |W^T S_T W|$,
$W_{fld} = \arg\max_W \frac{|W^T W_{PCA}^T S_B W_{PCA} W|}{|W^T W_{PCA}^T S_W W_{PCA} W|}$
Since $S_W$ is rank $N-c$, project the training set via PCA first to the subspace spanned by the first $N-c$ principal components of the training set; then apply FLD to the $N-c$ dimensional subspace, yielding a $c-1$ dimensional feature space.
Fisher's Linear Discriminant projects away the within-class variation (lighting, expressions) found in the training set, while preserving the separability of the classes.
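The scatter matrices and the generalized eigen problem can be sketched in numpy (a minimal sketch; it solves $S_B w = \lambda S_W w$ as eig of $S_W^{-1} S_B$, which assumes $S_W$ is invertible, e.g. after the PCA step above):

```python
import numpy as np

def fisher_lda(X, y, kd):
    """Fisher's linear discriminant: top-kd projection directions."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    S_B = np.zeros((d, d))
    S_W = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        diff = (mu_c - mu)[:, None]
        S_B += len(Xc) * diff @ diff.T        # between-class scatter
        S_W += (Xc - mu_c).T @ (Xc - mu_c)    # within-class scatter
    eigval, W = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(eigval.real)[::-1]
    return W[:, order[:kd]].real
```

For c classes, at most c-1 directions carry discriminative information, since $S_B$ has rank at most c-1.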
Subspace Learning for Face Recognition
Project face images to a subspace with basis A.
Matlab: x = faces * A(:, 1:kd);
E.g., a face is expressed as 10.9*eigf1 + 0.4*eigf2 + 4.7*eigf3 (coefficients on the first three eigenfaces).
Subspace/Transform Method
It is interesting to compare the Fisherface with the Eigenface basis: $y = A^T x$, with $x \in R^{w\times h}$ and $y \in R^d$.
CNN Processing Pipeline
We can cascade convolutions to generate successively higher levels of representation (notice: without padding, the feature maps shrink). This gives us low-level to high-level features; a deeper feature has a larger receptive field, i.e., more input pixels it derives from.
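Both effects above (shrinking without padding, growing receptive field) follow simple formulas; a small sketch (function names are illustrative):

```python
def conv_out_size(n, k, stride=1, pad=0):
    """Spatial size of a conv output: floor((n + 2*pad - k)/stride) + 1."""
    return (n + 2 * pad - k) // stride + 1

def receptive_field(kernels, strides=None):
    """Receptive field of a unit after a stack of conv layers:
    rf = 1 + sum_i (k_i - 1) * prod_{j<i} s_j."""
    strides = strides or [1] * len(kernels)
    rf, jump = 1, 1
    for k, s in zip(kernels, strides):
        rf += (k - 1) * jump   # each layer widens the field by (k-1) input steps
        jump *= s              # strides multiply the step size of later layers
    return rf
```

For example, a 32-wide input through an unpadded 5x5 conv gives a 28-wide map, and two stacked 3x3 convs see a 5x5 input region.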
LeNet
A landmark work: conv layers generate w x h x k feature maps; FC layers map features to vectors.
How is label prediction done from the final 4096-dimensional feature?
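The usual answer to the question above: a final linear classifier layer maps the feature to per-class scores, and a softmax turns scores into probabilities; argmax gives the label. A minimal numpy sketch (W and b are hypothetical learned weights, not the network's actual parameters):

```python
import numpy as np

def predict_label(feature, W, b):
    """Linear layer + softmax over the final FC feature vector."""
    scores = W @ feature + b
    scores = scores - scores.max()            # stabilize the exponentials
    p = np.exp(scores) / np.exp(scores).sum() # softmax probabilities
    return int(np.argmax(p)), p
```

Training minimizes the cross-entropy between p and the one-hot ground-truth label.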
Pixel-Level Loss Function
Given an image patch on the input side, the loss is computed at the pixel level: a bicubic-upsampled image is the prediction, and the residual to be learned is the difference between the ground truth {y_j} and the predicted image.
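The residual-learning loss above can be written as a one-liner (a numpy sketch of the loss only, not the training code; residual_loss is an illustrative name):

```python
import numpy as np

def residual_loss(y, x_bicubic, r_pred):
    """Pixel-level MSE for residual learning: the network predicts the
    residual r = y - bicubic(x), so the reconstruction is x_bicubic + r_pred
    and the loss compares it to the ground truth y."""
    return np.mean((y - (x_bicubic + r_pred)) ** 2)
```

Learning the residual rather than the full image is easier, since the residual is mostly high-frequency detail around zero.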
Outline
Part II Summary.
Course Projects:
- Denoising filtering (BM3D)
- Weighted Nuclear Norm Minimization with Application to Image Denoising (WNNM)
- Deep learning denoising - DnCNN
- Deep learning denoising - Universal Denoising Network
- Super-resolution - handcrafted (SR Forest)
- Super-resolution - deep learning (EDSR)
Papers are available at: https://umkc.box.com/s/lrek4ool84th4l3epjt5fi3sh53alc5f
BM3D
BM3D denoising filtering. Source code: http://www.cs.tut.fi/~foi/gcf-bm3d/bm3d.zip
WNNM
Weighted Nuclear Norm Minimization and Its Applications to Low Level Vision. Code: http://www4.comp.polyu.edu.hk/~cslzhang/code/WNNM_code.zip
DnCNN
Architecture
Universal Denoising Networks
Architecture; Results
SR Forest
Data-dependent local projection model for SR.
EDSR
Super-resolution with residual networks. Code: https://github.com/thstkdgus35/EDSR-PyTorch
Summary
Deep Learning in Denoising: just beginning to show advantages, room for innovation; combining handcrafted with deep would be the best.
Deep Learning in SR: EDSR-like residual learning gives the best results; task-linked SR has more room.