Regularized Tensor Factorizations & Higher-Order Principal Components Analysis

1 Regularized Tensor Factorizations & Higher-Order Principal Components Analysis
Genevera I. Allen
Department of Statistics, Rice University; Department of Pediatrics-Neurology, Baylor College of Medicine; & Jan and Dan Duncan Neurological Research Institute, Texas Children's Hospital.
August 2, 2012

2 Outline
1. Introduction: Motivation; Background.
2. Sparse HOPCA: Algorithmic Methods (Sparse HOSVD, HOOI & CP-ALS); Deflation Approaches to Sparse HOPCA.
3. Results: Simulations; Example: StarPlus fMRI Data.

3 High-Dimensional Multi-way Data
Examples: neuroimaging, microscopy, hyperspectral imaging, remote sensing, bibliometrics, chemometrics, environmetrics, network data, Internet data.
[Figure: hyperspectral image of the 2010 Gulf oil spill (SpecTIR).]

4 Principal Component Analysis
PCA for matrix data: exploratory data analysis, pattern recognition, dimension reduction, data visualization.
For high-dimensional data, performance can be improved by regularization:
Sparse PCA (Jolliffe et al., 2003; Johnstone and Lu, 2004; Zou et al., 2006; Shen and Huang, 2008).
Functional PCA (Silverman, 1996; Ramsay, 2006).
Two-way Regularized PCA (Huang et al., 2009; Witten et al., 2010; Lee et al., 2011; Allen et al., 2011).

5 Functional MRIs
StarPlus Data, Subject 04847 (Mitchell et al., 2004):
20 tasks in which the sentence agrees with the image; 20 tasks in which the sentence opposes the image.
Each task lasted 27 seconds (55 time points).
Data set: 4,698 voxels × 40 tasks × 55 time points.

6 Functional MRIs
Classical PCA on the flattened tensor:
[Figure: classical PCA results on the flattened StarPlus tensor.]

7 Functional MRIs
Sparse Generalized PCA on the flattened tensor:
[Figure: Sparse Generalized PCA results on the flattened StarPlus tensor.]


9 Tensor Decompositions and Higher-Order PCA
Review of PCA for a matrix $X$:
maximize over $v_k$: $v_k^T X^T X v_k$, subject to $v_k^T v_k = 1$ and $v_k^T v_j = 0$ for all $j < k$.
Or:
minimize over $U, V, d$: $\frac{1}{2}\|X - U D V^T\|_F^2$, subject to $U^T U = I$, $V^T V = I$, and $d \geq 0$.
Solved via the Singular Value Decomposition (SVD).
How do we perform PCA without flattening a tensor (Higher-Order PCA)? There is no equivalent of the SVD in 3 or more dimensions!
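As a point of reference, here is a minimal numpy sketch (illustrative only, not the talk's Matlab code) of computing the first K principal components of a centered matrix via the truncated SVD:

```python
import numpy as np

def pca_via_svd(X, K):
    """First K principal components of a column-centered matrix X.

    Returns (U, d, V) with orthonormal U, V and singular values d,
    so that X is approximated by U @ np.diag(d) @ V.T.
    """
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :K], d[:K], Vt[:K, :].T

# The leading loading V[:, 0] maximizes v' X'X v over unit-norm v.
X = np.random.randn(100, 20)
X -= X.mean(axis=0)  # center columns before PCA
U, d, V = pca_via_svd(X, K=2)
```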

10 Tensor Decompositions and Higher-Order PCA
Tensor notation, for $\mathcal{X} \in \mathbb{R}^{n \times p \times q}$:
$\mathcal{X}$ a tensor, $X$ a matrix, $\mathbf{x}$ a vector, $x$ a scalar.
Flattened (or matricized) tensor: $\mathcal{X}_{(1)} \in \mathbb{R}^{n \times pq}$.
Tensor Frobenius norm: $\|\mathcal{X}\|_F = \sqrt{\sum_{i,j,l} \mathcal{X}_{ijl}^2}$.
Tensor mode multiplication: if $U \in \mathbb{R}^{n \times K}$, then $\mathcal{X} \times_1 U \in \mathbb{R}^{K \times p \times q}$.
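These operations are easy to state concretely. The numpy sketch below (my own illustrative helpers `unfold` and `mode_multiply`, not from the talk) implements the matricization and mode multiplication used throughout:

```python
import numpy as np

def unfold(X, mode):
    """Mode-m matricization X_(m): the mode-m fibers become the columns."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_multiply(X, U, mode):
    """Mode-m tensor-matrix product: U must have X.shape[mode] columns."""
    Y = U @ unfold(X, mode)
    new_shape = [U.shape[0]] + [s for i, s in enumerate(X.shape) if i != mode]
    return np.moveaxis(Y.reshape(new_shape), 0, mode)

X = np.random.randn(4, 5, 6)
U = np.random.randn(4, 3)
print(unfold(X, 0).shape)              # (4, 30): X_(1)
# In the slides' convention (U in R^{n x K}), X x_1 U corresponds to
# multiplying by U^T along mode 1, giving a K x p x q tensor:
print(mode_multiply(X, U.T, 0).shape)  # (3, 5, 6)
print(np.linalg.norm(X))               # tensor Frobenius norm ||X||_F
```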

11 Tensor Decompositions and Higher-Order PCA
CANDECOMP / PARAFAC (CP) Decomposition:
$$\mathcal{X} \approx \sum_{k=1}^{K} d_k \, u_k \circ v_k \circ w_k$$
CP values: $d_k \geq 0$.
CP factors: $u_k \in \mathbb{R}^{n \times 1}$ with $u_k^T u_k = 1$; $v_k \in \mathbb{R}^{p \times 1}$ with $v_k^T v_k = 1$; and $w_k \in \mathbb{R}^{q \times 1}$ with $w_k^T w_k = 1$.

12 Tensor Decompositions and Higher-Order PCA
Tucker Decomposition:
$$\mathcal{X} \approx \mathcal{D} \times_1 U \times_2 V \times_3 W$$
Tucker core: $\mathcal{D} \in \mathbb{R}^{K \times K \times K}$.
Tucker factors: $U \in \mathbb{R}^{n \times K}$ with $U^T U = I$; $V \in \mathbb{R}^{p \times K}$ with $V^T V = I$; and $W \in \mathbb{R}^{q \times K}$ with $W^T W = I$.
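To make the two models concrete, a small sketch (reusing `mode_multiply` from the earlier sketch; `cp_reconstruct` and `tucker_reconstruct` are my own illustrative names) that rebuilds a tensor from each set of factors:

```python
import numpy as np

def cp_reconstruct(d, U, V, W):
    """Sum of K rank-one tensors d_k (u_k o v_k o w_k)."""
    return np.einsum('k,ik,jk,lk->ijl', d, U, V, W)

def tucker_reconstruct(D, U, V, W):
    """D x_1 U x_2 V x_3 W, with mode_multiply from the earlier sketch."""
    X = mode_multiply(D, U, 0)
    X = mode_multiply(X, V, 1)
    return mode_multiply(X, W, 2)
```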

13 Objectives
Why regularization in Higher-Order PCA? Feature selection, smoothing, data compression, data visualization, ...
Existing work:
Non-Negative Tensor Factorizations: Cichocki et al., 2009; Hazan et al., 2005; Mørup et al., 2008; Lim and Comon, 2009; Liu et al., 2011; Chi and Kolda.
Sparse Non-Negative Tensor Factorizations: Ruiters and Klein, 2009; Pang et al., 2011; Cichocki et al.
Limited attention from the statistics community.

14 Objectives
Goal: Develop a mathematically sound, computationally attractive, and flexible method to incorporate regularization in each tensor factor, or HOPC.
In this talk we:
1. Formulate two algorithmic methods for Sparse HOPCA: Sparse Tucker-ALS & Sparse CP-ALS.
2. Develop a deflation-based Tensor Power Algorithm and the related Sparse CP-TPA method.
3. Propose extensions to Generalized HOPCA, Functional HOPCA & Non-negative HOPCA.
4. Present a case study on fMRI data.


16 Sparse HOOI
Higher-Order Orthogonal Iteration (HOOI), or Tucker-ALS:
1. Repeat until convergence:
   (a) $U \leftarrow$ first $K$ principal components of $(\mathcal{X} \times_2 V \times_3 W)_{(1)}$.
   (b) $V \leftarrow$ first $K$ principal components of $(\mathcal{X} \times_1 U \times_3 W)_{(2)}$.
   (c) $W \leftarrow$ first $K$ principal components of $(\mathcal{X} \times_1 U \times_2 V)_{(3)}$.
2. $\mathcal{D} \leftarrow \mathcal{X} \times_1 U \times_2 V \times_3 W$.
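A compact numpy sketch of HOOI under the conventions above (reusing `unfold` and `mode_multiply` from earlier; a fixed iteration count stands in for a convergence test, and mode products with a factor are implemented via its transpose):

```python
import numpy as np

def leading_pcs(Xm, K):
    """First K left singular vectors (principal components) of a matrix."""
    U, _, _ = np.linalg.svd(Xm, full_matrices=False)
    return U[:, :K]

def hooi(X, K, n_iter=25):
    """Higher-Order Orthogonal Iteration for a rank-(K,K,K) Tucker model."""
    # HOSVD initialization: principal components of each unfolding.
    U = leading_pcs(unfold(X, 0), K)
    V = leading_pcs(unfold(X, 1), K)
    W = leading_pcs(unfold(X, 2), K)
    for _ in range(n_iter):
        U = leading_pcs(unfold(mode_multiply(mode_multiply(X, V.T, 1), W.T, 2), 0), K)
        V = leading_pcs(unfold(mode_multiply(mode_multiply(X, U.T, 0), W.T, 2), 1), K)
        W = leading_pcs(unfold(mode_multiply(mode_multiply(X, U.T, 0), V.T, 1), 2), K)
    # Core: D = X x_1 U' x_2 V' x_3 W'.
    D = mode_multiply(mode_multiply(mode_multiply(X, U.T, 0), V.T, 1), W.T, 2)
    return D, U, V, W
```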

17 Sparse HOOI
Sparse Higher-Order Orthogonal Iteration (HOOI):
1. Repeat until convergence:
   (a) $U \leftarrow$ first $K$ sparse principal components of $(\mathcal{X} \times_2 V \times_3 W)_{(1)}$.
   (b) $V \leftarrow$ first $K$ sparse principal components of $(\mathcal{X} \times_1 U \times_3 W)_{(2)}$.
   (c) $W \leftarrow$ first $K$ sparse principal components of $(\mathcal{X} \times_1 U \times_2 V)_{(3)}$.
2. $\mathcal{D} \leftarrow \mathcal{X} \times_1 U \times_2 V \times_3 W$.
Questions: What optimization problem is this solving? Can we guarantee convergence? Is this computationally efficient?

18 Sparse CP-ALS
CP Alternating Least Squares (CP-ALS):
$$\frac{1}{2}\Big\|\mathcal{X} - \sum_{k=1}^{K} d_k \, u_k \circ v_k \circ w_k\Big\|_F^2 = \frac{1}{2}\big\|\mathcal{X}_{(1)} - U (V \odot W)^T\big\|_F^2$$
Algorithm:
1. Solve for each factor via simple least squares with the others fixed.
2. Normalize the factor so the columns have norm one.
3. Repeat until convergence.
(Khatri-Rao product: $V \odot W = [v_1 \otimes w_1 \; \cdots \; v_K \otimes w_K] \in \mathbb{R}^{pq \times K}$.)
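A minimal numpy sketch of CP-ALS under these conventions (reusing `unfold` from earlier; `khatri_rao` and the random initialization are my own illustrative choices):

```python
import numpy as np

def khatri_rao(A, B):
    """Columnwise Kronecker product: column k is kron(a_k, b_k)."""
    return np.einsum('ik,jk->ijk', A, B).reshape(-1, A.shape[1])

def cp_als(X, K, n_iter=100, seed=0):
    """CP alternating least squares for a 3-way tensor X (n x p x q)."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((X.shape[0], K))
    V = rng.standard_normal((X.shape[1], K))
    W = rng.standard_normal((X.shape[2], K))
    for _ in range(n_iter):
        # Least-squares update of each factor with the other two fixed,
        # e.g. X_(1) ~ U (V kr W)^T, then rescale columns to norm one.
        U = unfold(X, 0) @ np.linalg.pinv(khatri_rao(V, W)).T
        U /= np.linalg.norm(U, axis=0)
        V = unfold(X, 1) @ np.linalg.pinv(khatri_rao(U, W)).T
        V /= np.linalg.norm(V, axis=0)
        W = unfold(X, 2) @ np.linalg.pinv(khatri_rao(U, V)).T
        d = np.linalg.norm(W, axis=0)  # absorb the scale into d
        W /= d
    return d, U, V, W
```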

19 Sparse CP-ALS
Replace least squares with the LASSO ($\ell_1$-norm penalty):
$$\text{minimize}_{U} \;\; \frac{1}{2}\big\|\mathcal{X}_{(1)} - U (V \odot W)^T\big\|_F^2 + \lambda_u \|U\|_1$$
Employ the same algorithm structure as CP-ALS.
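The slides do not specify a LASSO solver; one simple possibility is a proximal-gradient (ISTA) update for the factor subproblem, sketched below as an assumption of mine rather than the talk's method (`soft_threshold` reappears later in the Sparse CP-TPA solution):

```python
import numpy as np

def soft_threshold(A, lam):
    """Elementwise soft-thresholding S(a, lam) = sign(a)(|a| - lam)_+."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def lasso_factor_update(Xm, B, lam, n_iter=100):
    """min_U 0.5 ||Xm - U B^T||_F^2 + lam ||U||_1 via proximal gradient.

    Xm is an unfolding, e.g. X_(1); B is the Khatri-Rao design, e.g. V kr W.
    """
    L = max(np.linalg.norm(B, 2) ** 2, 1e-12)  # Lipschitz constant of grad
    U = np.zeros((Xm.shape[0], B.shape[1]))
    for _ in range(n_iter):
        grad = (U @ B.T - Xm) @ B
        U = soft_threshold(U - grad / L, lam / L)
    return U
```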

20 Sparse CP-ALS
What optimization problem is this algorithm solving?
$$\text{minimize}_{U, V, W, d} \;\; \frac{1}{2}\Big\|\mathcal{X} - \sum_{k=1}^{K} d_k \, u_k \circ v_k \circ w_k\Big\|_F^2 + \lambda_u \|U\|_1 + \lambda_v \|V\|_1 + \lambda_w \|W\|_1?$$
NOT TRUE!


22 Tensor Power Algorithm
Rank-one CP optimization problem:
minimize over $u, v, w, d$: $\|\mathcal{X} - d \, u \circ v \circ w\|_F^2$, subject to $u^T u = 1$, $v^T v = 1$, $w^T w = 1$, and $d > 0$.
Equivalently:
maximize over $u, v, w$: $\mathcal{X} \times_1 u \times_2 v \times_3 w$, subject to $u^T u = 1$, $v^T v = 1$, and $w^T w = 1$.
Can update one factor at a time with the others fixed.

23 Tensor Power Algorithm
Algorithm (power method or deflation approach):
1. For $k = 1, \ldots, K$:
   (a) Repeat until convergence:
       $u_k \leftarrow \dfrac{\hat{\mathcal{X}} \times_2 v_k \times_3 w_k}{\|\hat{\mathcal{X}} \times_2 v_k \times_3 w_k\|_2}$, $\quad v_k \leftarrow \dfrac{\hat{\mathcal{X}} \times_1 u_k \times_3 w_k}{\|\hat{\mathcal{X}} \times_1 u_k \times_3 w_k\|_2}$, $\quad w_k \leftarrow \dfrac{\hat{\mathcal{X}} \times_1 u_k \times_2 v_k}{\|\hat{\mathcal{X}} \times_1 u_k \times_2 v_k\|_2}$.
   (b) $d_k \leftarrow \hat{\mathcal{X}} \times_1 u_k \times_2 v_k \times_3 w_k$.
   (c) $\hat{\mathcal{X}} \leftarrow \hat{\mathcal{X}} - d_k \, u_k \circ v_k \circ w_k$.
Greedy! Computes the rank-one solution on the residuals of the previous factorization.
Converges to a local optimum of the rank-one CP problem.
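A numpy sketch of this deflation scheme (illustrative; I use a simple deterministic initialization, while slide 26 notes that all HOPCA methods are sensitive to the starting point):

```python
import numpy as np

def tensor_power(X, K, n_iter=100):
    """Greedy rank-one CP fits with deflation (tensor power algorithm)."""
    Xhat = X.copy()
    n, p, q = X.shape
    d = np.zeros(K)
    U, V, W = np.zeros((n, K)), np.zeros((p, K)), np.zeros((q, K))
    for k in range(K):
        u = np.full(n, 1 / np.sqrt(n))
        v = np.full(p, 1 / np.sqrt(p))
        w = np.full(q, 1 / np.sqrt(q))
        for _ in range(n_iter):
            # Each update contracts Xhat against the other two factors.
            u = np.einsum('ijl,j,l->i', Xhat, v, w); u /= np.linalg.norm(u)
            v = np.einsum('ijl,i,l->j', Xhat, u, w); v /= np.linalg.norm(v)
            w = np.einsum('ijl,i,j->l', Xhat, u, v); w /= np.linalg.norm(w)
        d[k] = np.einsum('ijl,i,j,l->', Xhat, u, v, w)
        Xhat = Xhat - d[k] * np.einsum('i,j,l->ijl', u, v, w)  # deflate
        U[:, k], V[:, k], W[:, k] = u, v, w
    return d, U, V, W
```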

24 Sparse CP-TPA: Optimization Problem
Rank-one Sparse CP-TPA problem:
maximize over $u, v, w$: $\mathcal{X} \times_1 u \times_2 v \times_3 w - \lambda_u \|u\|_1 - \lambda_v \|v\|_1 - \lambda_w \|w\|_1$, subject to $u^T u \leq 1$, $v^T v \leq 1$, and $w^T w \leq 1$,
where $\lambda_u, \lambda_v, \lambda_w \geq 0$ are regularization parameters controlling the amount of sparsity.
Tri-concave relaxation of the rank-one CP optimization problem: concave in $u$ with $v$ and $w$ fixed, and vice versa.

25 Sparse CP-TPA: Solution & Algorithm
Solution:
$\hat{u} = S(\mathcal{X} \times_2 v \times_3 w, \lambda_u)$, $\quad \hat{v} = S(\mathcal{X} \times_1 u \times_3 w, \lambda_v)$, $\quad \hat{w} = S(\mathcal{X} \times_1 u \times_2 v, \lambda_w)$,
where $S(x, \lambda) = \mathrm{sign}(x)(|x| - \lambda)_+$ is the soft-thresholding operator.
Then the coordinate-wise solutions to the Sparse CP-TPA problem are
$$u = \begin{cases} \hat{u}/\|\hat{u}\|_2 & \|\hat{u}\|_2 > 0 \\ 0 & \text{otherwise,} \end{cases} \qquad v = \begin{cases} \hat{v}/\|\hat{v}\|_2 & \|\hat{v}\|_2 > 0 \\ 0 & \text{otherwise,} \end{cases} \qquad w = \begin{cases} \hat{w}/\|\hat{w}\|_2 & \|\hat{w}\|_2 > 0 \\ 0 & \text{otherwise,} \end{cases}$$
which, when repeated, monotonically increase the objective and converge to a local maximum.
Multiple factors: Tensor Power Algorithm deflation approach.
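Putting the updates together, a rank-one Sparse CP-TPA sketch in numpy (reusing `soft_threshold` from the LASSO sketch; the zero-norm guard follows the case statement above):

```python
import numpy as np

def normalize_or_zero(x):
    """Rescale to unit norm when nonzero, else return the zero vector."""
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x

def sparse_cp_tpa_rank1(X, lam_u, lam_v, lam_w, n_iter=100):
    """One rank-one factor of Sparse CP-TPA: soft-threshold, then rescale."""
    n, p, q = X.shape
    u = np.full(n, 1 / np.sqrt(n))
    v = np.full(p, 1 / np.sqrt(p))
    w = np.full(q, 1 / np.sqrt(q))
    for _ in range(n_iter):
        u = normalize_or_zero(soft_threshold(np.einsum('ijl,j,l->i', X, v, w), lam_u))
        v = normalize_or_zero(soft_threshold(np.einsum('ijl,i,l->j', X, u, w), lam_v))
        w = normalize_or_zero(soft_threshold(np.einsum('ijl,i,j->l', X, u, v), lam_w))
    d = np.einsum('ijl,i,j,l->', X, u, v, w)
    return d, u, v, w

# Multiple factors: apply to the residual X - d u o v o w, as in tensor_power.
```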

26 Advantages of Sparse CP-TPA
1. Convergence to a local optimum is assured.
2. Monotonic algorithm, so progress can be tracked.
3. The scale of the factors is directly controlled, giving a numerically stable algorithm and solution: the solution for each factor either has norm one or is exactly zero.
4. The solution for each factor has a simple analytical form, so the iterative updates are computationally inexpensive.
5. Requires less computer memory, since only the components of the final solution need to be computed.
6. The general framework yields many fruitful extensions.
Depends heavily on the initial starting point... as do all HOPCA and Sparse HOPCA methods.

27 Sparse HOPCA: Loose Ends
Selecting regularization parameters:
Cross-validation.
BIC: each sub-problem solves a penalized regression problem.
Stability selection.
Amount of variance explained:
Projection matrices: $P_k^{(U)} = U_k (U_k^T U_k)^{-1} U_k^T$, and $P_k^{(V)}$, $P_k^{(W)}$ analogously.
Cumulative proportion of variance explained by the first $k$ HOPCs:
$$\frac{\|\mathcal{X} \times_1 P_k^{(U)} \times_2 P_k^{(V)} \times_3 P_k^{(W)}\|_F^2}{\|\mathcal{X}\|_F^2}.$$
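In code, the cumulative proportion of variance explained is a few lines (reusing `mode_multiply` from the earlier sketch; `proj` is my own illustrative helper, with a pseudoinverse in case sparse factors are rank deficient):

```python
import numpy as np

def proj(A):
    """Projection onto the column space of A: A (A'A)^+ A'."""
    return A @ np.linalg.pinv(A.T @ A) @ A.T

def cum_prop_var_explained(X, U, V, W, k):
    """Cumulative proportion of variance explained by the first k HOPCs."""
    Xp = mode_multiply(X, proj(U[:, :k]), 0)
    Xp = mode_multiply(Xp, proj(V[:, :k]), 1)
    Xp = mode_multiply(Xp, proj(W[:, :k]), 2)
    return np.linalg.norm(Xp) ** 2 / np.linalg.norm(X) ** 2
```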

28 Extensions
Flexible modern multivariate methods (can be used in combination):
1. General penalties (Allen et al., 2011): penalties that are norms or semi-norms; re-scaled solution of a penalized regression problem. Examples: group norms, total variation norms, $\ell_q$ norms.
2. Non-negativity (Allen & Maletic-Savatic, 2011): the soft-thresholding operator is replaced by a positive thresholding operator.
3. Multi-way functional data (Huang et al., 2009): multi-way half-smoothing via the Tucker decomposition, or multi-way functional data penalties estimated via deflation. The two approaches are not equivalent.
4. Structured data: Generalized HOPCA.

29 Generalized HOPCA
Allen et al. (2011) introduced Generalized PCA for heteroscedastic errors in structured data (data in which the variables are associated with fixed locations).
Generalized Frobenius norm with generalizing operators, such as graph Laplacians or smoothing matrices, based on the known structure.
Rank-one Generalized CP:
minimize over $u, v, w, d$: $\frac{1}{2}\|\mathcal{X} - d \, u \circ v \circ w\|_{Q^{(1)}, Q^{(2)}, Q^{(3)}}^2$, subject to $u^T Q^{(1)} u = 1$, $v^T Q^{(2)} v = 1$, $w^T Q^{(3)} w = 1$, and $d > 0$.
Sparse Generalized CP: an extension of the Sparse CP framework; can be solved via the methods introduced in Allen et al. (2011).


31 Overview of Simulation Results
Sparse HOOI (best): best at both signal recovery & feature selection.
Sparse CP-TPA (comparable): comparable in all scenarios except when all modes have equal dimension.
Sparse HOSVD (good): can have erratic behavior for K > 1.
Sparse CP-ALS (not good): not optimal at feature selection.
Sparse CP-TPA is MUCH faster than Sparse HOOI.
Full simulation results available from gallen/.


33 Sparse GPCA on Flattened Tensor
StarPlus data: 4,698 voxels × 36 tasks × 55 time points.
Flatten the tensor to 4,698 voxels × 1,098 time-point tasks (only tasks with sentence-image agreement).
Spatial generalizing operator: graph Laplacian of the nearest-neighbor graph connecting the 3D grid of voxels.
Temporal generalizing operator: kernel smoother.

34 Sparse GPCA on Flattened Tensor
[Figure: Sparse GPCA results on the flattened StarPlus tensor.]

35 Sparse GHOPCA on Tensor Data
4,698 voxels × 36 tasks × 55 time points.
Spatial operator: graph Laplacian. Temporal operator: kernel smoother. Task operator: identity.

36 StarPlus Results
Tucker Decomposition:
[Figure: Tucker decomposition results on the StarPlus tensor.]

37 StarPlus Results
Sparse GCP-TPA:
[Figure: Sparse GCP-TPA results on the StarPlus tensor.]

38 StarPlus Results
Sparse GCP-TPA:
[Figure: Sparse GCP-TPA results on the StarPlus tensor (continued).]

39 Future Work
Statistics work:
1. All methods can be trivially extended to more than 3 dimensions.
2. Initializations for the algorithms.
3. How many factors, K?
4. Are HOPCA and Sparse HOPCA asymptotically (in)consistent?
5. Further multivariate methods for tensors.
6. Regression & classification with tensors.
Further applications: hyperspectral imaging, spatio-temporal data, multi-dimensional NMR spectroscopy, microscopy, remote sensing, and other neuroimaging data.

40 Software & Acknowledgments
Software: coming soon... a Sparse HOPCA Matlab Toolbox based on the Matlab Tensor Toolbox (Bader & Kolda).
Funding: National Science Foundation, Division of Mathematical Sciences.

41 References
G. I. Allen, "Regularized tensor decompositions and higher-order principal components analysis," arXiv preprint, Rice University Technical Report, 2012.
G. I. Allen, "Sparse higher-order principal components analysis," in Artificial Intelligence and Statistics (AISTATS), 2012.
G. I. Allen, L. Grosenick & J. Taylor, "A generalized least squares decomposition," arXiv preprint, Rice University Technical Report.
G. I. Allen & M. Maletic-Savatic, "Sparse non-negative generalized PCA with applications to metabolomics," Bioinformatics, 27(21), 2011.
F. D. Campbell & G. I. Allen, "Algorithms and approaches for analyzing massive structured data with Sparse Generalized PCA," in preparation.
T. G. Kolda & B. W. Bader, "Tensor decompositions and applications," SIAM Review, 51(3), 455-500, 2009.
