2. (a) See attached code for npls2_sps.m.
(b) See attached code and plot. Your MSE should be about 7.57 for NPLS and 5.11 for NPLS2. The MSE numbers can differ a little, but should be close to these.
(c) See subplot. MSE_NPLS = [22.4768 7.5772 8.8435 10.1979 10.6519] and MSE_NPLS2 = [10.4248 5.1109 4.5064 4.3954 4.3702] for beta = [4 16 64 256 1024]. The MSE numbers can vary a little from these. For NPLS the minimum MSE occurs at beta = 2^4, while for NPLS2 the minimum MSE occurs at beta = 2^10.
(d) NPLS2 (the modified approach) uses the 2nd-order difference as its penalty function. The noiseless signal is piecewise linear (locally a straight line), which incurs no penalty under the 2nd-order difference but does incur some penalty under the 1st-order difference. For this class of signal, the 2nd-order penalty models the signal better, so NPLS2 outperforms the NPLS method.

function xx = npls2_sps(yy, niter, beta, delta)
%function xx = npls2_sps(yy, niter, beta, delta)
% nonquadratic penalized least-squares de-noising of an image y
% using separable paraboloidal surrogates (SPS) algorithm
% yy      image to be "de-noised"
% niter   # of iterations
% beta    roughness penalty regularization parameter
% delta   roughness penalty parameter

if nargin < 2, niter = 80; end
if nargin < 3, beta = 2^4; end   % defaults appropriate for HW problem
if nargin < 4, delta = 0.5; end

[nx,ny] = size(yy);
C = buildc(nx,ny);   % create penalty matrix for 2nd-order differences

% \omega "curvature" function for Lange3 penalty
wt = inline(sprintf('1 ./ sqrt(1 + abs(x / %19.18e).^2)', delta));

denom = 1 + beta * abs(C)' * (abs(C) * ones(nx*ny,1));
xx = yy(:);   % initial guess: the noisy image, as a vector
for ii=1:niter
	Cx = C * xx;
	xx = xx + (yy(:) - xx - beta * (C' * (wt(Cx) .* Cx))) ./ denom;
end
xx = reshape(xx, size(yy));   % turn vector back into an image
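The SPS update above can be illustrated in one dimension with plain Python. This is a minimal sketch, not the assignment's MATLAB code: npls2_sps_1d and its helpers are hypothetical names, assuming the second-order difference penalty rows [-1 2 -1] and the Lange3 curvature omega(t) = 1/sqrt(1 + (t/delta)^2) used above.

```python
import math

def second_diff(x):
    # C x, where each row of C is [0 ... 0 -1 2 -1 0 ... 0]
    return [-x[i-1] + 2*x[i] - x[i+1] for i in range(1, len(x) - 1)]

def second_diff_T(d, n):
    # C' d for the same second-difference matrix C (shape (n-2) x n)
    out = [0.0] * n
    for i, di in enumerate(d):      # row i touches columns i, i+1, i+2
        out[i]     += -di
        out[i + 1] += 2 * di
        out[i + 2] += -di
    return out

def npls2_sps_1d(y, niter=100, beta=16.0, delta=0.5):
    """1-D SPS iteration for nonquadratic penalized LS de-noising (sketch)."""
    n = len(y)
    # denominator 1 + beta * |C|' |C| 1 -- precomputed; this is what makes
    # the separable surrogate valid (monotone cost decrease)
    denom = [1.0] * n
    for i in range(n - 2):          # each row of |C| sums to 1 + 2 + 1 = 4
        denom[i]     += beta * 4.0
        denom[i + 1] += beta * 8.0
        denom[i + 2] += beta * 4.0
    x = list(y)                     # initial guess: the noisy signal
    for _ in range(niter):
        Cx = second_diff(x)
        w = [1.0 / math.sqrt(1.0 + (c / delta) ** 2) for c in Cx]  # Lange3 curvature
        g = second_diff_T([wi * ci for wi, ci in zip(w, Cx)], n)
        x = [xi + (yi - xi - beta * gi) / di
             for xi, yi, gi, di in zip(x, y, g, denom)]
    return x

# usage: de-noise a noisy ramp (the 1-D analog of the pyramid xtrue)
truth = [0.5 * i for i in range(32)]
noise = [(-1.0) ** i for i in range(32)]          # deterministic "noise"
y = [t + e for t, e in zip(truth, noise)]
xhat = npls2_sps_1d(y, niter=200, beta=16.0, delta=0.5)
```

Because the underlying ramp incurs no 2nd-order penalty, the iteration removes the alternating noise while leaving the ramp nearly intact.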
%
% Build a sparse matrix that computes second-order differences
% along the horizontal and vertical directions of an image.
%
function C = buildc(nx, ny)
i = 1:(nx-2); j = 2:(nx-1);          % row and column indices
i = [i i i]; j = [j-1 j j+1];
s = ones(nx-2,1) * [-1 2 -1];        % make nonzero entries
Cx = sparse(i, j, s);                % matrix rows are [0 ... 0 -1 2 -1 0 ... 0]
i = 1:(ny-2); j = 2:(ny-1);
i = [i i i]; j = [j-1 j j+1];
s = ones(ny-2,1) * [-1 2 -1];        % make nonzero entries
Cy = sparse(i, j, s);                % matrix rows are [0 ... 0 -1 2 -1 0 ... 0]
% make it apply to each row of the image (respectively each column)
% and combine horizontal and vertical penalties
C = [kron(speye(ny), Cx); kron(Cy, speye(nx))];

% hw9 prob2 : apply two versions of NPLS denoising
nx = 64; ny = 50;
xtrue = zeros(nx,ny);
ix = -(nx-1)/2:(nx-1)/2;
iy = -(ny-1)/2:(ny-1)/2;
[ix, iy] = ndgrid(ix, iy);
xtrue = (1 - min(abs(ix/(nx/3)),1)) * 200;
xtrue = (1 - min(abs(iy/(ny/3)),1)) .* xtrue;
randn('seed', 0)   % OMITTED FROM TEMPLATE :-(
yy = xtrue + 10 * randn(size(xtrue));   % add gaussian noise

clf, pl = 330; colormap(gray(256))
subplot(pl+1), imagesc(xtrue'), axis xy, axis image
title 'x_{true}(n,m)', colorbar horiz
subplot(pl+2), imagesc(yy'), axis xy, axis image
title 'Noisy: y(n,m)', colorbar horiz

niter = 200; delta = 1;
beta1 = 2^4; beta2 = 2^4;
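As a cross-check of the penalty construction in buildc, here is a pure-Python sketch (hypothetical helper names, not part of the assignment code) of the 1-D second-difference matrix with rows [-1 2 -1]. It verifies that a straight line lies in the null space of the 2nd-order difference operator while still incurring a 1st-order difference penalty, which is the mechanism behind the answer to part (d).

```python
def second_diff_matrix(n):
    # (n-2) x n dense matrix; each row is [0 ... 0 -1 2 -1 0 ... 0]
    C = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        C[i][i], C[i][i + 1], C[i][i + 2] = -1.0, 2.0, -1.0
    return C

def matvec(C, x):
    # plain matrix-vector product
    return [sum(cij * xj for cij, xj in zip(row, x)) for row in C]

n = 8
line = [3.0 + 2.0 * i for i in range(n)]          # a straight line
d2 = matvec(second_diff_matrix(n), line)          # all zeros: no 2nd-order penalty
d1 = [line[i + 1] - line[i] for i in range(n - 1)]  # constant 2.0: 1st-order penalty > 0
```

The 2-D matrix in buildc applies this same 1-D operator to every row and every column of the image via Kronecker products with identity matrices.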
xhat1 = npls_sps(yy, niter, beta1, delta);
xhat2 = npls2_sps(yy, niter, beta2, delta);

beta_list = [2^2 2^4 2^6 2^8 2^10];
mse_npls1 = zeros(1, length(beta_list));
mse_npls2 = mse_npls1;
for i = 1:length(beta_list)
	xhat_temp = npls_sps(yy, niter, beta_list(i), delta);
	mse_npls1(i) = mean2((xhat_temp - xtrue).^2);
	xhat_temp = npls2_sps(yy, niter, beta_list(i), delta);
	mse_npls2(i) = mean2((xhat_temp - xtrue).^2);
end

subplot(pl+4), imagesc(xhat1'), axis xy, axis image
xlabel n, ylabel m, title 'NPLS1', colorbar horiz
subplot(pl+5), imagesc((xhat1-xtrue)'), axis xy, axis image
xlabel n, ylabel m, colorbar horiz
title(sprintf('npls1 error, MSE=%g', mean2((xhat1-xtrue).^2)))
subplot(pl+7), imagesc(xhat2'), axis xy, axis image
xlabel n, ylabel m, title 'NPLS2', colorbar horiz
subplot(pl+8), imagesc((xhat2-xtrue)'), axis xy, axis image
xlabel n, ylabel m, colorbar horiz
title(sprintf('npls2 error, MSE=%g', mean2((xhat2-xtrue).^2)))

subplot(pl+3)
plot(1:nx, xtrue(:,ny/2), 'r:', 1:nx, xhat1(:,ny/2), 'c--', ...
	1:nx, xhat2(:,ny/2), 'y-')
axis tight, legend('xtrue', 'npls1', 'npls2', 3), title 'Profile plot'
subplot(pl+6)
plot(1:5, mse_npls1, 'r:', 1:5, mse_npls2, 'c--')
axis tight, xlabel '0.5 log_2 \beta', ylabel 'MSE', legend('npls1', 'npls2', 2)
3. See attached code. From the support of the estimated R_y[n,m] (the 5x13 correlation of a rectangle with itself) we see that h[n,m] must be a 3x7 rectangle, so N = 1 and M = 3. Since Var{y[n,m]} = R_y[0,0] = c^2 (h ** h)[0,0], where ** denotes 2-D correlation, and (h ** h)[0,0] = sum_{n,m} |h[n,m]|^2 = 21, from the peak of the estimated R_y[n,m] we compute c ~= 13.2 or -13.2; the sign of c cannot be determined from the autocorrelation. (-5 pts for a trial-and-error approach.)

% acorr_find.m
K = 128;
h = ones(2*1+1, 2*3+1);   % N = 1 and M = 3
c = 13;
randn('state', 9)
y = c * conv2(randn(K,K), h, 'same');
subplot(221), imagesc(y), axis xy, axis image
colorbar, colormap gray, title 'y[n,m]'
r = xcorr2(y) / K^2;
ii = [-(K-1):(K-1)];
subplot(222), imagesc(ii, ii, r), axis xy, axis image
colorbar, title('r_y[n,m]')
ii = [-15:15];
subplot(223)
plot(ii, r(K+ii, K), '-o'), xlabel 'n', axis tight
subplot(224)
plot(ii, r(K, K+ii), '-o'), xlabel 'm', axis tight
c_hat = sqrt(r(K,K) / sum(h(:).^2))
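The estimation idea in problem 3 can be sketched in pure Python (hypothetical names, not the MATLAB above): filter seeded white Gaussian noise with the 3x7 all-ones kernel, then recover c from the sample variance, since Var{y} = R_y[0,0] = c^2 * sum |h|^2 = 21 c^2.

```python
import math
import random

def white_noise(K, seed=0):
    # K x K field of unit-variance Gaussian samples (seeded for repeatability)
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(K)] for _ in range(K)]

def filter2_valid(x, h):
    # 2-D FIR filtering, 'valid' region only (avoids border effects)
    hn, hm = len(h), len(h[0])
    K = len(x)
    return [[sum(h[a][b] * x[i + a][j + b]
                 for a in range(hn) for b in range(hm))
             for j in range(K - hm + 1)]
            for i in range(K - hn + 1)]

h = [[1.0] * 7 for _ in range(3)]   # (2N+1) x (2M+1) with N = 1, M = 3
c = 13.0
y = [[c * v for v in row] for row in filter2_valid(white_noise(128), h)]

# sample variance approximates R_y[0,0] = c^2 * sum |h|^2
flat = [v for row in y for v in row]
mean = sum(flat) / len(flat)
var = sum((v - mean) ** 2 for v in flat) / len(flat)
c_hat = math.sqrt(var / sum(hv ** 2 for row in h for hv in row))
```

Because neighboring samples of y are correlated over the kernel support, the effective number of independent samples is well below 128^2, so c_hat lands near but not exactly at 13, mirroring the 13.2 obtained from the MATLAB autocorrelation peak.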