Robust Principal Component Analysis (RPCA)


Robust Principal Component Analysis (RPCA): matrix decomposition into low-rank and sparse components. Zhenfang Hu, 2010.4.1

reference
[1] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. Preprint, 2009.
[2] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In: NIPS 2009.
[3] X. Yuan and J. Yang. Sparse and low-rank matrix decomposition via alternating direction methods. Preprint, 2009.
[4] Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Mathematical Programming, submitted, 2009.
[5] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Submitted for publication, 2009.

research trends These results appeared in the latest literature, 2008-2009. The theory comes with guarantees and is still being refined; the numerical algorithms are practical for a 1000 × 1000 matrix (about 12 seconds) and still improving; applications have not yet been widely explored. Research background: the problem grows out of (1) the matrix completion problem and (2) L1-norm and nuclear-norm convex optimization.

outline
Part I: theory
Part II: numerical algorithm
Part III: applications

Part I: theory

PCA Given a data matrix $M$, assume $M = L_0 + N_0$, where $L_0$ is a low-rank matrix and $N_0$ is a small, i.i.d. Gaussian noise matrix. Classical PCA seeks the best (in an L2-norm sense) rank-$k$ estimate of $L_0$ by solving

$\min_L \|M - L\|_2 \quad \text{s.t.} \quad \mathrm{rank}(L) \le k$

It can be solved by SVD: truncate the SVD of $M$ to its $k$ largest singular values.
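As a quick illustration, here is a minimal NumPy sketch of this SVD-based solution (the function name is ours, not from the slides):

```python
import numpy as np

def pca_lowrank(M, k):
    """Best rank-k approximation of M in the least-squares sense,
    obtained by truncating the SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```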

PCA example When the noise is small and Gaussian, PCA does well.

Defect of PCA When the noise is not Gaussian but appears as spikes, i.e. the data contains outliers, PCA fails.

RPCA When the noise consists of sparse spikes, another, robust model (RPCA) should be built: $M = L_0 + S_0$, where $L_0$ is a low-rank matrix and $S_0$ is a sparse spike-noise matrix. Problem: we know $M$ is composed of a low-rank matrix plus a sparse matrix; we are given $M$ and asked to recover its two original components. It is purely a matrix decomposition problem.
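A toy instance of this setup, sketched in Python (the sizes, rank, and spike magnitude are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 200, 5, 0.05    # matrix size, rank of L0, fraction of spikes

# low-rank component L0 and sparse spike component S0
L0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S0 = np.where(rng.random((n, n)) < p, 10 * rng.standard_normal((n, n)), 0.0)

M = L0 + S0               # the only thing we observe
```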

ill-posed problem We only observe $M$; it is impossible to know which two matrices add up to it, so without further assumptions the problem cannot be solved:
1. If the low-rank matrix is itself sparse, e.g. $L = e_1 e_1^T$ (a single nonzero entry), another valid sparse-plus-low-rank decomposition might be $L' = 0$, $S' = L + S$. Thus, the low-rank matrix should be assumed to be not too sparse.
2. If $S = -v e_1^T$, with $v$ being the first column of $L$, then $S$ is itself rank-1. A reasonable sparse-plus-low-rank decomposition in this case might be $L' = L + S$ and $S' = 0$. Thus, the sparse matrix should be assumed to not be low-rank.

Assumptions about how L and S are generated 1. Low-rank matrix $L$: its singular vectors are spread out, i.e. incoherent with the standard basis, so that $L$ is not sparse. 2. Sparse matrix $S$: its support is chosen uniformly at random, so that $S$ is not low-rank.

Under what conditions can M be correctly decomposed? 1. Let the matrices with rank at most $r(L)$ and with either the same row space or column space as $L$ live in a matrix space denoted $T(L)$. 2. Let the matrices with the same support as $S$ and with no more nonzero entries than $S$ live in a matrix space denoted $O(S)$. Then, if $T(L) \cap O(S) = \{0\}$, $M$ can be correctly decomposed.

Detailed conditions Various works in 2009 proposed different detailed conditions. They improved on one another, becoming more and more relaxed. Under each of these conditions, it was proved that the matrix can be decomposed with high accuracy, or even exactly.

Conditions involving probability distributions For a matrix $B$ with rank $k$ smaller than $n$, exact recovery is possible with high probability even when the number $m$ of corrupted entries is super-linear in $n$.

the latest condition developed The works [1] and [2] are parallel; the latest one, [5], improved on both and yields the best condition to date.

Brief remarks In [5] it is proved that exact decomposition is feasible even if: 1. the rank of $L$ grows in proportion to $O(n / \log^2 n)$; 2. the number of corrupted entries in $S$ is of order $O(n^2)$.

Part II: numerical algorithm

Convex optimization In order to solve the original problem, it is reformulated as an optimization problem. A straightforward proposal is

$\min_{L,S}\ \mathrm{rank}(L) + \lambda \|S\|_0 \quad \text{s.t.} \quad L + S = M$

but it is not convex, and it is intractable. Recent advances in the understanding of the nuclear-norm heuristic for low-rank solutions and the L1 heuristic for sparse solutions suggest

$\min_{L,S}\ \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad L + S = M$

which is convex, so a global minimizer can be computed.
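At small scale, this convex program can be handed directly to a generic solver. A sketch using the cvxpy modeling library (cvxpy is our assumption, not mentioned in the talk), with the weight $\lambda = 1/\sqrt{\max(m,n)}$ suggested in [5]:

```python
import cvxpy as cp
import numpy as np

def rpca_convex(M, lam=None):
    """Solve min ||L||_* + lam*||S||_1  s.t.  L + S = M
    with a generic convex solver (only practical for small M)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # weight suggested in [5]
    L = cp.Variable((m, n))
    S = cp.Variable((m, n))
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S)))
    cp.Problem(objective, [L + S == M]).solve()
    return L.value, S.value
```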

numerical algorithms In just two years a series of algorithms has been proposed; [4] provides comparisons of them all, and most of the code is available at http://watt.csl.illinois.edu/~perceive/matrix-rank/sample_code.html They include:
1. interior point method [1]
2. iterative thresholding algorithm
3. Accelerated Proximal Gradient (APG) [2]
4. a dual approach [4]
5. (latest & best) Augmented Lagrange Multiplier (ALM) [3,4] or Alternating Directions Method (ADM) [3,5]

Problem & ADM The problem is

$\min_{A,B}\ \lambda \|A\|_1 + \|B\|_* \quad \text{s.t.} \quad A + B = M$

The corresponding augmented Lagrangian function is

$\mathcal{L}(A, B, Y) = \lambda \|A\|_1 + \|B\|_* + \langle Y,\, M - A - B \rangle + \frac{\beta}{2} \|M - A - B\|_F^2$

where $Y$ is the multiplier of the linear constraint and $\langle \cdot, \cdot \rangle$ is the trace inner product for matrices, $\langle X, Y \rangle = \mathrm{trace}(X^T Y)$. Then the iterative scheme of ADM is

$A_{k+1} = \arg\min_A \mathcal{L}(A, B_k, Y_k)$
$B_{k+1} = \arg\min_B \mathcal{L}(A_{k+1}, B, Y_k)$
$Y_{k+1} = Y_k + \beta (M - A_{k+1} - B_{k+1})$

Two established facts To carry out the optimization, two well-known facts are needed:
1. $\arg\min_X\ \tau \|X\|_1 + \frac{1}{2} \|X - W\|_F^2 = \mathcal{S}_\tau(W)$, where $\mathcal{S}_\tau(W) = \mathrm{sign}(W) \max(|W| - \tau, 0)$ is the soft-thresholding operator, applied entrywise.
2. $\arg\min_X\ \tau \|X\|_* + \frac{1}{2} \|X - W\|_F^2 = U \mathcal{S}_\tau(\Sigma) V^T$, where $U \Sigma V^T$ is the SVD of $W$ (singular value thresholding).
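These two operators are a few lines each in NumPy (a sketch; function names are ours):

```python
import numpy as np

def soft_threshold(W, tau):
    """Entrywise soft thresholding:
    solves min_X tau*||X||_1 + 0.5*||X - W||_F^2."""
    return np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)

def svd_threshold(W, tau):
    """Singular value thresholding:
    solves min_X tau*||X||_* + 0.5*||X - W||_F^2."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```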

Optimization solution Sparse $A$ pairs with the L1 norm; low-rank $B$ pairs with the nuclear norm. Completing the square in each ADM subproblem puts it in the form above, so the two facts apply directly:

$A_{k+1} = \mathcal{S}_{\lambda/\beta}\big(M - B_k + Y_k/\beta\big)$
$B_{k+1} = U \mathcal{S}_{1/\beta}(\Sigma) V^T$, where $U \Sigma V^T$ is the SVD of $M - A_{k+1} + Y_k/\beta$

Final algorithm of ADM Alternate the two closed-form updates above with the multiplier update $Y_{k+1} = Y_k + \beta (M - A_{k+1} - B_{k+1})$, stopping when the residual $M - A_k - B_k$ is sufficiently small.
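Putting the pieces together, a compact sketch of the full ADM loop, reusing soft_threshold and svd_threshold from above (the parameter and stopping heuristics are our assumptions, not from the slides):

```python
import numpy as np

def rpca_adm(M, lam=None, beta=None, tol=1e-7, max_iter=500):
    """ADM iteration for  min lam*||A||_1 + ||B||_*  s.t.  A + B = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # weight from [5]
    if beta is None:
        beta = 0.25 * m * n / np.abs(M).sum()   # a common heuristic
    A = np.zeros_like(M)    # sparse component
    B = np.zeros_like(M)    # low-rank component
    Y = np.zeros_like(M)    # Lagrange multiplier
    for _ in range(max_iter):
        # A-step: entrywise soft thresholding
        A = soft_threshold(M - B + Y / beta, lam / beta)
        # B-step: singular value thresholding
        B = svd_threshold(M - A + Y / beta, 1.0 / beta)
        # multiplier update on the residual
        R = M - A - B
        Y = Y + beta * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return B, A   # low-rank, sparse
```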

Part III: applications

Applications [5]
(1) background modeling from surveillance videos:
1. Airport video
2. Lobby video with varying illumination
(2) removing shadows and specularities from face images

Airport video A video of 200 frames (resolution 176 × 144 = 25344 pixels) has a static background but significant foreground variations. Reshape each frame into a column vector (25344 × 1) and stack the columns into a matrix M (25344 × 200). Objective: recover the low-rank and sparse components of M.
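In code, the stacking step looks like this (a sketch; `frames` is a hypothetical array of 200 grayscale frames of shape 144 × 176, loaded elsewhere, and rpca_adm is the ADM sketch from Part II):

```python
import numpy as np

# frames: hypothetical array of shape (200, 144, 176), loaded elsewhere
M = np.stack([f.reshape(-1) for f in frames], axis=1)  # 25344 x 200

L, S = rpca_adm(M)                        # ADM sketch from Part II
background = L[:, 0].reshape(144, 176)    # columns of L: static background
foreground = S[:, 0].reshape(144, 176)    # entries of S: moving objects
```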

Lobby video A video of 250 frames (resolution 168 × 120 = 20160 pixels) with several drastic illumination changes. Reshape each frame into a column vector (20160 × 1) and stack the columns into a matrix M (20160 × 250). Objective: recover the low-rank and sparse components of M.