Section 7.4 Notes (The SVD)

The Singular Value Decomposition: Let A be any m × n matrix. Then there are orthogonal matrices U, V and a diagonal matrix Σ such that

A = UΣV^T

Specifically:

The ordering of the vectors comes from the ordering of the singular values (largest to smallest).
The columns of U are the eigenvectors of AA^T.
The columns of V are the eigenvectors of A^T A.
The diagonal elements of Σ are the singular values, σ_i = √(λ_i).
There are relationships between v_i and u_i (with normalization):

Av_i = σ_i u_i        A^T u_i = σ_i v_i

The scaling factor comes from ‖Av_i‖ = σ_i = ‖A^T u_i‖.

If the matrix A is already symmetric (with non-negative eigenvalues), then U = V and we get the decomposition from Section 7.1.

Numerical Example (Exercise 11)

Find the SVD of the matrix

A = [-3 1; 6 -2; 6 -2]

First, we'll work with

A^T A = [81 -27; -27 9]

The eigenvalues are λ = 0, 90.

For λ = 0, the reduced matrix is [1 -1/3; 0 0], so v_2 = (1/√10)[1; 3].

For λ = 90, the reduced matrix is [1 3; 0 0], so v_1 = (1/√10)[-3; 1].

(The indices are assigned so that the singular values come out ordered largest to smallest: σ_1 = √90 = 3√10, σ_2 = 0.)

Now we can find the reduced SVD right away, since

u_1 = (1/σ_1) A v_1 = (1/3)[1; -2; -2]

We now need a basis for the null space of

AA^T = [10 -20 -20; -20 40 40; -20 40 40]

The null space is spanned by [2; 1; 0] and [2; 0; 1]. Each of these is orthogonal to u_1, but they are not orthogonal to each other, so we orthogonalize (Gram-Schmidt) and normalize to get

u_2 = (1/√5)[2; 1; 0]        u_3 = (1/(3√5))[2; -4; 5]

Now the full SVD is given by:

A = [-3 1; 6 -2; 6 -2] = [1/3 2/√5 2/(3√5); -2/3 1/√5 -4/(3√5); -2/3 0 5/(3√5)] [3√10 0; 0 0; 0 0] [-3/√10 1/√10; 1/√10 3/√10]^T

The SVD and the Four Subspaces

If the columns of V are ordered by the magnitude of the singular values (largest to smallest), and the rank of A is r, then

{v_(r+1), v_(r+2), ..., v_n}

is an orthonormal basis for the null space of A^T A (and thus also of A), so

Null(A) = span{v_(r+1), v_(r+2), ..., v_n}

Note that the eigenspace E_0 for A^T A is the null space of A.

The row space of A therefore has the basis:

Row(A) = span{v_1, v_2, ..., v_r}

The column space of A is therefore r dimensional:

Col(A) = span{u_1, u_2, ..., u_r}

The null space of A^T is given by:

Null(A^T) = span{u_(r+1), u_(r+2), ..., u_m}

The Reduced SVD

Again, let A be m × n with rank r. If we're not interested in getting the null spaces, we can get what's called the reduced SVD:

A = Û Σ̂ V̂^T

where Û is m × r with o.n. columns (ordered by the singular values), Σ̂ is r × r with the non-zero singular values along the diagonal, and V̂ is n × r with o.n. columns. When there is no room for confusion, the hats are sometimes left off.
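As a quick check of the numerical example (a minimal sketch; Matlab may flip the signs of some singular vector pairs, since each pair u_i, v_i is only determined up to a common sign):

% Verify the hand computation above.
A = [-3 1; 6 -2; 6 -2];
[U,S,V] = svd(A);        % full SVD: U is 3x3, S is 3x2, V is 2x2
diag(S)'                 % singular values: 3*sqrt(10) and 0
norm(A - U*S*V')         % zero, up to roundoff

% With r = rank(A), the columns of U and V give the four subspaces:
r = rank(A);             % here r = 1
ColA  = U(:,1:r);        % o.n. basis for Col(A)
NulAT = U(:,r+1:end);    % o.n. basis for Null(A^T)
RowA  = V(:,1:r);        % o.n. basis for Row(A)
NulA  = V(:,r+1:end);    % o.n. basis for Null(A)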

Going back to our previous example, notice that we could have found the reduced SVD faster, since we're ignoring the zero eigenspaces:

A = [-3 1; 6 -2; 6 -2] = [1/3; -2/3; -2/3] (3√10) [-3/√10 1/√10] = [1; -2; -2][-3 1]

A Third Way

The SVD can be full or reduced, and we also have a decomposition similar to the spectral decomposition:

A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... + σ_r u_r v_r^T

Since our last example had rank 1, we saw this factorization there.

The SVD and Matlab

The basic command is:

[U,S,V]=svd(A)

If we only want the reduced SVD:

[U,S,V]=svd(A,'econ')

If we want to compute the partial sum

σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... + σ_k u_k v_k^T

in Matlab this would be:

U(:,1:k)*S(1:k,1:k)*V(:,1:k)';

The SVD and Collections of Data

Suppose we have some data in a very high dimensional space; for example, 300 vectors in IR^100000. Because there are so few vectors, we don't need all 100000 basis vectors to represent the data. We may want to approximate the data using a smaller dimensional basis (just a few basis vectors). As it turns out, the SVD gives the best basis by using the first k columns of U (there is a caveat: we usually subtract the mean from the data first).

For example, suppose A is 100000 × 300. The mean of the 300 vectors is also a vector in IR^100000, so we compute it and subtract it from each column of A. In Matlab, this would be:

A=randn(100000,300);      %Create random data to work with
mA=mean(A,2);             %mA is a column vector of means
Ahat=A-repmat(mA,1,300);

Now we take the SVD of the mean-subtracted data. The reduced representation of the data is found by projecting the data onto the first k columns of U:

[U,S,V]=svd(Ahat,'econ');
Recon=U(:,1:k)*(U(:,1:k)'*Ahat);   %Projection

Of course, it's hard to visualize these large vectors, so after the next brief section, we'll try to work with some movie data.

The SVD and Solving Ax = b

The SVD gives us a way to solve a system of equations for the least squares solution; it is what Matlab's pinv (pseudoinverse) command is built on. Let's see how. Suppose we have the reduced SVD, and the equation below:

Ax = b
UΣV^T x = b
ΣV^T x = U^T b
V^T x = Σ^(-1) U^T b
VV^T x = VΣ^(-1)U^T b

If we want a least squares solution, the left side of the equation is x̂, and the pseudoinverse of the matrix A is:

A^+ = VΣ^(-1)U^T

Example in Matlab

% Make some test data:
x=10*rand(20,1);
x=sort(x);
y=3+4*x-2*x.^2+randn(size(x));
% Design matrix:
A=[ones(20,1) x x.^2];
[U,S,V]=svd(A,'econ');
% Least squares solution using the pseudoinverse
c=V*diag(1./diag(S))*U'*y
% Plot results using new data (in t)
t=linspace(0,10);
yout=c(1)+c(2)*t+c(3)*t.^2;
plot(x,y,'*',t,yout,'k-')
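As a sanity check (a sketch, not part of the original example; backslash solves least squares via QR rather than the SVD, but for a full rank A all of these agree):

% Two other ways to get the same least squares coefficients:
c_bs   = A\y;        % Matlab's backslash (QR-based for rectangular A)
c_pinv = pinv(A)*y;  % pinv builds the pseudoinverse from the SVD
norm(c-c_bs), norm(c-c_pinv)   % both should be near zero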

SVD over a Collection of Photos

Suppose we have a collection of photographs, each of which is 640 × 480 pixels. Then, by concatenating the columns of the image, each photo can be represented as a vector in IR^(640·480), or IR^307200. Given a collection of such images, we can translate them into a collection of vectors in this high dimensional space. Assuming that the faces occupy a very small region of this high dimensional space, we ought to be able to find a relatively small dimensional subspace W that captures most of the data. Such a subspace would be a low dimensional approximation to the column space of photos. We recall that the SVD gives us a way of constructing such a basis.

For example, if we have 200 photos that are 640 × 480, we can translate that data into a single array that is 307200 × 200, and so the left singular vectors in U would form a basis for the column space. Each of these vectors would themselves be vectors in IR^307200, so they could also be visualized as 640 × 480 photographs. That's what we'll do below, except that the collection is actually a short webcam video.

Download the data author.mat from the class website. The webcam video consists of 109 frames, where each frame is 120 × 160, or 19200 pixels. Thus, loading the data will give you an array that is 19200 × 109. Also, we should note that the data in the matrix, saved as Y, consists of integer values between 0 and 255 (representing 256 shades of gray, from black (0) to white (255)). This is important to mention: we'll be translating these images into vectors of real numbers instead of integers, which may impact how the coloring takes place. Discussing this in detail now would take us too far away from the discussion, but it is something to keep in mind.

Visualizing a vector as an array

When visualizing these high dimensional vectors, remember that any vector in IR^19200 can be visualized as a 120 × 160 array. In Matlab, this is performed by using the reshape command. We use imagesc, short for "image scaled", where Matlab will scale the values in the array to ready them for the color map. To see how the color mapping is performed, you can use the colorbar command. In this case, if u is a vector in IR^19200, then we can reshape it into an array and visualize it using a gray scale color scheme with the following commands, issued on one line:

imagesc(reshape(u,120,160)); colormap(gray); axis equal; colorbar

To watch the movie, you can use the following code, which just goes through the data column by column and shows each column as a frame. I won't use the color bar, and the pause command is there so we can actually see the result (otherwise, the frames go by way too fast!).

for j=1:109

    A=reshape(Y(:,j),120,160);
    imagesc(A); colormap(gray); axis equal; axis off;
    pause(0.1)
end

Figure 1: Three sample frames (Frames 23, 60 and 80) and the movie's mean.

A few sample frames and the movie's mean frame are shown in Figure 1. You might find it interesting that the mean is the background. Once we've found the mean, then we can proceed with the Singular Value Decomposition. Now consider the following code snippet:

X=Y-repmat(Ymean,1,109);
[U,S,V]=svd(X,'econ');

In the script file, we'll compare a rank 2 reconstruction to rank 10 and rank 22. This is done by using two, 10, then 22 basis vectors for the subspace in which the data lies.

First, let's examine the actual basis vectors. From our code snippet, the basis vectors of interest are the columns of U (the left singular vectors of X). Since each basis vector is in IR^19200, the basis vectors can also be visualized as arrays.
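For example, here is one way to view the first few basis vectors side by side (a small sketch, assuming the U computed in the snippet above is in the workspace; figure(5) is used to avoid clobbering the figures in the script below):

figure(5)
for k=1:3
    subplot(1,3,k)
    imagesc(reshape(U(:,k),120,160));
    colormap(gray); axis equal; axis off;
    title(sprintf('Basis vector %d',k));
end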

Sample Code

%% Sample script for the "author" data
% Some cleaning commands:
close all; clear; clc;

% Download "author.mat" from the class website, so it can be loaded here:
load author
Y=double(Y);   %Changes the data from "integers" to "floating point"
               %The data was saved as integers to save on memory.

% We can visualize the "movie"
figure(1)
for j=1:109
    A=reshape(Y(:,j),120,160);
    imagesc(A); colormap(gray); axis equal; axis off;
    pause(0.1)
end

%% Mean subtraction
Ymean=mean(Y,2);
figure(2)
imagesc(reshape(Ymean,120,160)); colormap(gray); axis equal; axis off;
title('The Movie Mean');

%% The SVD
% We'll mean subtract, but keep the original data intact. We'll put
% the mean subtracted data in matrix X.
X=Y-repmat(Ymean,1,109);
[U,S,V]=svd(X,'econ');

%% Projection to R^2
Xcoords=U(:,1:2)'*X;   %The matrix Xcoords is now 2 x 109
figure(3)
plot(Xcoords(1,:),Xcoords(2,:),'.');
title('The movie data presented in the best two dimensional basis');

%% The Reconstruction back into 19200 dim space
Xrecon=U(:,1:2)*Xcoords;   %Add back the mean if needed.

%% We can play back the reconstructed movie with the mean added back in:
figure(4)

for j=1:109
    A=reshape(Xrecon(:,j)+Ymean,120,160);
    imagesc(A); colormap(gray); axis equal; axis off;
    pause(0.1)
end

Homework

1. Exercises 1, 2, p. 48: Find the singular values of the matrices below:

[1 0; 0 -3]        [-5 0; 0 0]

2. Exercises 7, 9, p. 48: Construct the SVD of each matrix below (by hand):

[2 -1; 2 2]        [7 1; 0 0; 5 5]

3. Exercise 4, p. 48: Find a unit vector x at which Ax has maximum length.

4. Suppose A = UΣV^T is the (full) SVD.

(a) Suppose A is square and invertible. Find the SVD of the inverse.

(b) If A is square, show that |det(A)| is the product of the singular values.

(c) If A itself is symmetric (with non-negative eigenvalues), show that U = V, so that the SVD gives the eigenvalue-eigenvector factorization PDP^T.

5. Let A, b be as defined below. Use the SVD to write b = b̂ + z, where b̂ is in Col(A) and z is in Null(A^T).

A = [2; 3; 4]        b = [2; 3; 4]

Also write in Matlab how you would check to see if b̂ is actually in the column space of A (using the output of the SVD).

6. (Exercise 2, 6.5) Given the data below and the model equation

β_0 + β_1 ln(w) = p

form a linear system of equations to find the constants β_0, β_1, and find them by computing the pseudoinverse (using an appropriate SVD).

w   48   61   81   113   131
p   91   98   103  110   112

7. Consider the matrix A below. We have two algorithms that will produce a basis for the column space: Gram-Schmidt and the SVD. Use Matlab to get both, then compare and contrast the two bases. Hint: What vector comes first? Can we compare one, two or three dimensional subspaces?

A = 3 5 0 2 3 3

8. (Using the matrix A from the previous problem) Here is some data in IR^4 that we organize in an array (think of the columns as the data points):

X = [1; 2; 3; 4]

Find the projection of this data into the space spanned by the first two columns of the matrix U from the SVD of A. It is OK to write only two significant digits if you're copying the output by hand.

9. We want to find a matrix P so that, given a matrix A and a vector y, P y is the (orthogonal) projection of y into the column space of A. We know that we could do this with the SVD, but using the normal equations and assuming that A is full rank, show that the matrix is:

P = A(A^T A)^(-1) A^T

(Hint: Start by writing Ax = b and derive the normal equations.)