Quasi-Newton algorithm for best multilinear rank approximation of tensors


1 Quasi-Newton algorithm for best multilinear rank approximation of tensors. Berkant Savas and Lek-Heng Lim, Department of Mathematics, Linköpings Universitet. 6th International Congress on Industrial and Applied Mathematics, 2007.

2 Outline

3-4 Multidimensional arrays and multilinear tensor notation

scalar $\alpha \in \mathbb{R}$: zero order tensor
vector $a \in \mathbb{R}^{I}$: first order tensor
matrix $A \in \mathbb{R}^{I \times J}$: second order tensor
tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$: $N$th order tensor

For a third order tensor $\mathcal{A} \in \mathbb{R}^{I \times J \times K}$ the low multilinear rank approximation has the form
$$\mathcal{A} \approx (X, Y, Z) \cdot \mathcal{S}, \qquad \mathcal{S} \in \mathbb{R}^{r_1 \times r_2 \times r_3}, \quad X \in \mathbb{R}^{I \times r_1},\ Y \in \mathbb{R}^{J \times r_2},\ Z \in \mathbb{R}^{K \times r_3}.$$

5-7 scalar (inner) product: $\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i,j,k} a_{ijk} b_{ijk}$
orthogonality: $\langle \mathcal{A}, \mathcal{B} \rangle = 0$
Frobenius norm: $\|\mathcal{A}\|_F = \sqrt{\langle \mathcal{A}, \mathcal{A} \rangle}$

multilinear matrix multiplication by matrices $X, Y, Z$:
contravariant: $\mathcal{B} = (X, Y, Z) \cdot \mathcal{A}$, with $b_{ijk} = \sum_{\lambda,\mu,\nu} x_{i\lambda}\, y_{j\mu}\, z_{k\nu}\, a_{\lambda\mu\nu}$
covariant: $\mathcal{B} = \mathcal{A} \cdot (X, Y, Z)$, with $b_{ijk} = \sum_{\lambda,\mu,\nu} a_{\lambda\mu\nu}\, x_{\lambda i}\, y_{\mu j}\, z_{\nu k}$
relation: $(X^T, Y^T, Z^T) \cdot \mathcal{A} = \mathcal{A} \cdot (X, Y, Z)$

The matrix case: $(X, Y) \cdot A = X A Y^T$ (contravariant), $A \cdot (X, Y) = X^T A Y$ (covariant), and single-mode products such as $\mathcal{A} \cdot (I, I, Z) = \mathcal{A} \cdot (Z)_3$.
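
As a concrete illustration, here is a small numpy sketch of the two multilinear multiplications and the transpose relation between them (the function names mul_contra and mul_co are mine, not from the talk):

```python
import numpy as np

def mul_contra(A, X, Y, Z):
    # contravariant: b_ijk = sum_lmn x_il y_jm z_kn a_lmn
    return np.einsum('il,jm,kn,lmn->ijk', X, Y, Z, A)

def mul_co(A, X, Y, Z):
    # covariant: b_ijk = sum_lmn a_lmn x_li y_mj z_nk
    return np.einsum('lmn,li,mj,nk->ijk', A, X, Y, Z)

A = np.random.randn(4, 5, 6)
X, Y, Z = np.random.randn(4, 4), np.random.randn(5, 5), np.random.randn(6, 6)
# the relation (X^T, Y^T, Z^T) . A = A . (X, Y, Z)
assert np.allclose(mul_contra(A, X.T, Y.T, Z.T), mul_co(A, X, Y, Z))
```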

8 Notation for contracted products, for $\mathcal{A}$ and $\mathcal{B}$ of appropriate sizes:
$\mathcal{C} = \langle \mathcal{A}, \mathcal{B} \rangle_{1}$, $\; c_{ijkl} = \sum_{\lambda} a_{\lambda ij}\, b_{\lambda kl}$ (4-tensor)
$\mathcal{C} = \langle \mathcal{A}, \mathcal{B} \rangle_{1;2}$, $\; c_{ijkl} = \sum_{\lambda} a_{\lambda ij}\, b_{k \lambda l}$ (4-tensor)
$D = \langle \mathcal{A}, \mathcal{B} \rangle_{1:2}$, $\; d_{jk} = \sum_{\lambda,\mu} a_{\lambda\mu j}\, b_{\lambda\mu k}$ (2-tensor)
$D = \langle \mathcal{A}, \mathcal{B} \rangle_{2,3;3,1}$, $\; d_{jk} = \sum_{\lambda,\mu} a_{j\lambda\mu}\, b_{\mu k \lambda}$ (2-tensor)
$e = \langle \mathcal{A}, \mathcal{B} \rangle = \langle \mathcal{A}, \mathcal{B} \rangle_{1:3} = \sum_{\lambda,\mu,\nu} a_{\lambda\mu\nu}\, b_{\lambda\mu\nu}$ (scalar)
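
Each contracted product maps directly onto an einsum expression. A sketch, using cubical tensors so that every variant above conforms:

```python
import numpy as np

n = 3
A = np.random.randn(n, n, n)
B = np.random.randn(n, n, n)

C1  = np.einsum('lij,lkm->ijkm', A, B)   # <A,B>_1        : 4-tensor
C12 = np.einsum('lij,klm->ijkm', A, B)   # <A,B>_{1;2}    : 4-tensor
D1  = np.einsum('lmj,lmk->jk',   A, B)   # <A,B>_{1:2}    : 2-tensor
D2  = np.einsum('jlm,mkl->jk',   A, B)   # <A,B>_{2,3;3,1}: 2-tensor
e   = np.einsum('lmn,lmn->',     A, B)   # <A,B> = <A,B>_{1:3} : scalar
```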

9 Notation and properties
tensor (outer) product: for $\mathcal{A} \in \mathbb{R}^{I \times J \times K}$ and $B \in \mathbb{R}^{L \times M}$, $\; \mathcal{C} = \mathcal{A} \otimes B \in \mathbb{R}^{I \times J \times K \times L \times M}$ with $c_{ijklm} = a_{ijk} b_{lm}$.
Negative contraction subscripts are useful shorthand: contract over all modes except those listed, e.g. $\langle \mathcal{A}, \mathcal{B} \rangle_{-1} = \langle \mathcal{A}, \mathcal{B} \rangle_{2:3}$ and $\langle \mathcal{A}, \mathcal{B} \rangle_{-2} = \langle \mathcal{A}, \mathcal{B} \rangle_{(1,3)}$.
Lemma. Let the $N$-tensors $\mathcal{B}$ and $\mathcal{C}$ and the matrix $Q$ be of conforming dimensions. Then
$$\langle \mathcal{B} \cdot (Q)_i, \mathcal{C} \rangle = \langle Q, \langle \mathcal{B}, \mathcal{C} \rangle_{-i} \rangle, \qquad \langle \mathcal{B} \cdot (Q)_i, \mathcal{C} \rangle_{-i} = Q^T \langle \mathcal{B}, \mathcal{C} \rangle_{-i}.$$

10 Canonical matricization of 3-tensors (also called unfolding, or flattening):
$$\mathbb{R}^{I \times J \times K} \ni \mathcal{A} \mapsto A_{(1)} \in \mathbb{R}^{I \times JK}, \qquad \mathcal{A} \mapsto A_{(2)} \in \mathbb{R}^{J \times KI}, \qquad \mathcal{A} \mapsto A_{(3)} \in \mathbb{R}^{K \times IJ}.$$
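
In numpy the three cyclic unfoldings can be realized as below; the ordering of the columns inside each unfolding is a convention, and this particular choice is mine:

```python
import numpy as np

def unfold(A, mode):
    # put `mode` first and keep the other modes in cyclic order
    order = [(mode + s) % 3 for s in range(3)]
    return np.transpose(A, order).reshape(A.shape[mode], -1)

A = np.random.randn(2, 3, 4)                 # I, J, K = 2, 3, 4
print(unfold(A, 0).shape)                    # A_(1): (2, 12) = I x JK
print(unfold(A, 1).shape)                    # A_(2): (3, 8)  = J x KI
print(unfold(A, 2).shape)                    # A_(3): (4, 6)  = K x IJ
```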

11 Canonical tensor matricization: relation to multilinear matrix multiplication. Let $\mathcal{A}$ be a 5-tensor and consider $\mathcal{B} = \mathcal{A} \cdot (X, Y, Z, U, V)$:
$$B_{(2)} = Y^T A_{(2)} (X \otimes Z \otimes U \otimes V), \quad B_{(3,2)} = (Z \otimes Y)^T A_{(3,2)} (X \otimes U \otimes V), \quad B_{(3,2;5,4,1)} = (Z \otimes Y)^T A_{(3,2;5,4,1)} (V \otimes U \otimes X).$$
The matrix cases: $B = (X, Y) \cdot A = X A Y^T$, so $B_{(1)} = X A_{(1)} Y^T = B$ and $B_{(2)} = Y A_{(2)} X^T = B^T$.
Vectorization is possible as well: $B_{(2,1;\,)} = \operatorname{vec}(B) = (Y \otimes X) \operatorname{vec}(A) = (Y \otimes X) A_{(2,1;\,)}$.
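
The matrix case and the vec identity are easy to verify numerically. A quick check, using column-major vec, the standard convention that matches $\operatorname{vec}(XAY^T) = (Y \otimes X)\operatorname{vec}(A)$:

```python
import numpy as np

X, A, Y = np.random.randn(4, 3), np.random.randn(3, 5), np.random.randn(6, 5)
B = X @ A @ Y.T                                  # B = (X, Y) . A
vec = lambda M: M.reshape(-1, order='F')         # column-stacking vectorization
assert np.allclose(vec(B), np.kron(Y, X) @ vec(A))
assert np.allclose(B.T, Y @ A.T @ X.T)           # B_(2) = Y A_(2) X^T = B^T
```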

12 Two tensor rank concepts:
multilinear rank (Tucker model): $r_n = \operatorname{rank}_n(\mathcal{A}) = \operatorname{rank}\big(A_{(n)}\big)$, $\;\operatorname{rank}(\mathcal{A}) = (r_1, r_2, r_3)$.
outer product rank (Candecomp/Parafac): the minimal number of rank-1 tensors (outer products of vectors) whose sum equals $\mathcal{A}$.
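
A sketch computing the multilinear rank as the ranks of the three unfoldings, on a tensor constructed to have multilinear rank (2, 2, 2):

```python
import numpy as np

def multilinear_rank(A):
    ranks = []
    for mode in range(3):
        order = [(mode + s) % 3 for s in range(3)]
        A_n = np.transpose(A, order).reshape(A.shape[mode], -1)
        ranks.append(np.linalg.matrix_rank(A_n))
    return tuple(ranks)

S = np.random.randn(2, 2, 2)                     # core
X, Y, Z = np.random.randn(5, 2), np.random.randn(6, 2), np.random.randn(7, 2)
A = np.einsum('il,jm,kn,lmn->ijk', X, Y, Z, S)   # A = (X, Y, Z) . S
print(multilinear_rank(A))                       # (2, 2, 2) with probability one
```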

13 Higher order SVD (HOSVD). Every $\mathcal{A} \in \mathbb{R}^{I \times J \times K}$ can be written $\mathcal{A} = (U, V, W) \cdot \mathcal{C}$ with $U$, $V$, $W$ orthogonal and the core $\mathcal{C}$ all-orthogonal and ordered. Truncation of the HOSVD is not optimal! If $\operatorname{rank}(\mathcal{A}) = (r_1, r_2, r_3)$ then $\mathcal{C}(1{:}r_1, 1{:}r_2, 1{:}r_3) \neq 0$ and the rest of the core is zero.
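
A minimal HOSVD sketch: the orthogonal factors come from SVDs of the unfoldings, and the core is $\mathcal{C} = \mathcal{A} \cdot (U, V, W)$; the all-orthogonality of the core is a property of this construction.

```python
import numpy as np

def hosvd(A):
    Us = []
    for mode in range(3):
        order = [(mode + s) % 3 for s in range(3)]
        A_n = np.transpose(A, order).reshape(A.shape[mode], -1)
        Us.append(np.linalg.svd(A_n)[0])             # left singular vectors
    U, V, W = Us
    C = np.einsum('lmn,li,mj,nk->ijk', A, U, V, W)   # core: C = A . (U, V, W)
    return U, V, W, C

A = np.random.randn(4, 5, 6)
U, V, W, C = hosvd(A)
assert np.allclose(A, np.einsum('il,jm,kn,lmn->ijk', U, V, W, C))  # A = (U,V,W) . C
```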

14 The general problem: nonlinear manifold optimization. The problem is
$$\min_{\mathcal{S},\, X,\, Y,\, Z} \big\| \mathcal{A} - (X, Y, Z) \cdot \mathcal{S} \big\|_F.$$
This problem is overparameterized and equivalent to
$$\max \big\| (X^T, Y^T, Z^T) \cdot \mathcal{A} \big\| = \max \big\| \mathcal{A} \cdot (X, Y, Z) \big\|, \qquad X^T X = I, \quad Y^T Y = I, \quad Z^T Z = I.$$
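
The equivalence rests on the fact that for fixed orthonormal $X$, $Y$, $Z$ the optimal core is $\mathcal{S} = \mathcal{A} \cdot (X, Y, Z)$, with $\|\mathcal{A} - (X, Y, Z) \cdot \mathcal{S}\|_F^2 = \|\mathcal{A}\|_F^2 - \|\mathcal{A} \cdot (X, Y, Z)\|_F^2$. A quick numerical check of this identity:

```python
import numpy as np

A = np.random.randn(6, 7, 8)
X = np.linalg.qr(np.random.randn(6, 3))[0]       # orthonormal columns
Y = np.linalg.qr(np.random.randn(7, 3))[0]
Z = np.linalg.qr(np.random.randn(8, 3))[0]

S = np.einsum('lmn,li,mj,nk->ijk', A, X, Y, Z)   # optimal core S = A . (X, Y, Z)
R = A - np.einsum('il,jm,kn,lmn->ijk', X, Y, Z, S)
assert np.isclose(np.sum(R**2), np.sum(A**2) - np.sum(S**2))
```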

15 Nonlinear optimization on a product of Grassmann manifolds. Rewrite the objective function in matrix form; for orthogonal $U$, $V$, $W$:
$$f(X, Y, Z) = \|\mathcal{A} \cdot (X, Y, Z)\| = \|X^T A_{(1)} (Y \otimes Z)\| = \|U^T X^T A_{(1)} (Y \otimes Z)\| = \|\mathcal{A} \cdot (XU, YV, ZW)\|,$$
and similarly $\|Y^T A_{(2)} (Z \otimes X)\| = \|V^T Y^T A_{(2)} (Z \otimes X)\|$ and $\|Z^T A_{(3)} (X \otimes Y)\| = \|W^T Z^T A_{(3)} (X \otimes Y)\|$. Hence $f$ is invariant under $X \mapsto XU$, $Y \mapsto YV$, $Z \mapsto ZW$: only the subspaces spanned by $X$, $Y$, $Z$ matter!
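
The invariance can be checked numerically in a few lines (a sketch):

```python
import numpy as np

def f(A, X, Y, Z):   # f(X, Y, Z) = || A . (X, Y, Z) ||_F
    return np.linalg.norm(np.einsum('lmn,li,mj,nk->ijk', A, X, Y, Z))

A = np.random.randn(6, 7, 8)
X = np.linalg.qr(np.random.randn(6, 3))[0]
Y = np.linalg.qr(np.random.randn(7, 3))[0]
Z = np.linalg.qr(np.random.randn(8, 3))[0]
U, V, W = (np.linalg.qr(np.random.randn(3, 3))[0] for _ in range(3))
assert np.isclose(f(A, X, Y, Z), f(A, X @ U, Y @ V, Z @ W))
```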

16 Newton-type optimization on the manifold. Denote by $G(n, r)$ the Grassmann manifold and by $T_X$ the tangent space at $X \in G(n, r)$: $\Delta \in \mathbb{R}^{n \times r}$ with $X^T \Delta = 0$. Iterate until convergence:
1. compute the gradient $\nabla f$;
2. compute the Hessian, or update an approximation of it;
3. solve $H \Delta = -\nabla f$ for $\Delta \in T_X$;
4. update the iterate $X$ by moving along the geodesic in the direction $\Delta = U \Sigma V^T$ (thin SVD): $X(t) = X V \cos(\Sigma t) V^T + U \sin(\Sigma t) V^T$.
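
Step 4 in code, using the thin SVD of the tangent direction (a sketch; the projection used to construct a random tangent vector is for the demonstration only):

```python
import numpy as np

def geodesic_step(X, D, t):
    # X in G(n, r) with orthonormal columns, D in T_X (X^T D = 0), D = U S V^T
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return X @ Vt.T @ np.diag(np.cos(s * t)) @ Vt + U @ np.diag(np.sin(s * t)) @ Vt

n, r = 8, 3
X = np.linalg.qr(np.random.randn(n, r))[0]
G = np.random.randn(n, r)
D = G - X @ (X.T @ G)                     # project: now X^T D = 0, so D is tangent
Xt = geodesic_step(X, D, 0.1)
assert np.allclose(Xt.T @ Xt, np.eye(r))  # X(t) stays on the manifold
```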

17 The special case of $G(3, 1)$, i.e. $X \in \mathbb{R}^3$ with $X^T X = 1$. [Figure: a point $X$ on the manifold (the sphere), a vector in the tangent space at $X$, and the geodesic curve it generates.]

18 BFGS in Euclidean space. The update is
$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}, \qquad s_k = x_{k+1} - x_k = t_k p_k, \quad y_k = \nabla f_{k+1} - \nabla f_k.$$
On a Grassmann manifold these vectors are defined at different points and belong to different tangent spaces.
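
The Euclidean update as a function, with the secant condition $B_{k+1} s_k = y_k$ as a sanity check (a sketch):

```python
import numpy as np

def bfgs_update(B, s, y):
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

n = 5
B = np.eye(n)
s, y = np.random.randn(n), np.random.randn(n)
if y @ s > 0:                     # curvature condition keeps B positive definite
    B1 = bfgs_update(B, s, y)
    assert np.allclose(B1 @ s, y) # secant condition
```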

19-20 Different ways of parallel transporting vectors. Let $X \in G(n, r)$, $\Delta_1, \Delta_2 \in T_X$, and let $X(t)$ be the geodesic path along $\Delta_1$.
Parallel transport using global coordinates: $\Delta_2(t) = T_{\Delta_1}(t)\, \Delta_2$.
We also have $\Delta_1 = X_\perp D_1$ and $\Delta_2 = X_\perp D_2$, where $X_\perp$ is a basis for $T_X$. Let $X_\perp(t)$ be a basis for $T_{X(t)}$.
Parallel transport using local coordinates: $\Delta_2(t) = X_\perp(t)\, D_2$.

21 Parallel transport in local coordinates is in general more efficient: all transported vectors have the same coordinate representation in the basis $X_\perp(t)$ at all points on the path $X(t)$.
+ no need to transport the gradient or the Hessian;
- we need to compute $X_\perp(t)$.
In global coordinates we compute
$$s_k = t_k\, T_{p_k}(t_k)\, p_k, \qquad y_k = \nabla f_{k+1} - T_{p_k}(t_k)\, \nabla f_k, \qquad T_{p_k}(t_k)\, H_k\, T_{p_k}^{-1}(t_k) : T_{k+1} \to T_{k+1},$$
followed by the BFGS update
$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}.$$
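
A sketch of the global-coordinate transport $T_{\Delta_1}(t)$, using the standard Grassmann parallel-transport formula of Edelman, Arias and Smith for the geodesic generated by $\Delta_1 = U \Sigma V^T$ (an assumption on my part that this is the formula behind the slide); the transported vector is checked to be tangent at $X(t)$:

```python
import numpy as np

def geodesic(X, D1, t):
    U, s, Vt = np.linalg.svd(D1, full_matrices=False)
    return X @ Vt.T @ np.diag(np.cos(s * t)) @ Vt + U @ np.diag(np.sin(s * t)) @ Vt

def transport(X, D1, D2, t):
    # parallel transport of D2 in T_X along the geodesic generated by D1
    U, s, Vt = np.linalg.svd(D1, full_matrices=False)
    UtD2 = U.T @ D2
    return ((-X @ Vt.T) * np.sin(s * t) + U * np.cos(s * t)) @ UtD2 + D2 - U @ UtD2

n, r = 8, 3
X = np.linalg.qr(np.random.randn(n, r))[0]
proj = lambda G: G - X @ (X.T @ G)        # projector onto T_X
D1, D2 = proj(np.random.randn(n, r)), proj(np.random.randn(n, r))
t = 0.3
Xt, D2t = geodesic(X, D1, t), transport(X, D1, D2, t)
assert np.allclose(Xt.T @ D2t, 0, atol=1e-10)   # D2(t) lies in T_{X(t)}
```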

22 The objective function and its directional derivative:
$$f(X, Y, Z) = \tfrac{1}{2} \|\mathcal{A} \cdot (X, Y, Z)\|_F^2 = \tfrac{1}{2} \langle \mathcal{A} \cdot (X, Y, Z), \mathcal{A} \cdot (X, Y, Z) \rangle.$$
Differentiate along three tangent directions $\Delta_x$, $\Delta_y$, $\Delta_z$:
$$\frac{df}{dt} = \tfrac{1}{2} \frac{d}{dt} \langle \mathcal{A} \cdot (X, Y, Z), \mathcal{A} \cdot (X, Y, Z) \rangle$$
$$= \langle \mathcal{A} \cdot (\Delta_x, Y, Z), \mathcal{A} \cdot (X, Y, Z) \rangle + \langle \mathcal{A} \cdot (X, \Delta_y, Z), \mathcal{A} \cdot (X, Y, Z) \rangle + \langle \mathcal{A} \cdot (X, Y, \Delta_z), \mathcal{A} \cdot (X, Y, Z) \rangle$$
$$= \langle \Delta_x, \langle \mathcal{A} \cdot (I, Y, Z), \mathcal{A} \cdot (X, Y, Z) \rangle_{-1} \rangle + \langle \Delta_y, \langle \mathcal{A} \cdot (X, I, Z), \mathcal{A} \cdot (X, Y, Z) \rangle_{-2} \rangle + \langle \Delta_z, \langle \mathcal{A} \cdot (X, Y, I), \mathcal{A} \cdot (X, Y, Z) \rangle_{-3} \rangle.$$
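
The three gradient components are plain tensor contractions; here one of them is formed with einsum and checked against a finite-difference directional derivative (a sketch):

```python
import numpy as np

A = np.random.randn(5, 6, 7)
X, Y, Z = np.random.randn(5, 2), np.random.randn(6, 2), np.random.randn(7, 2)

F = lambda X_: np.einsum('lmn,li,mj,nk->ijk', A, X_, Y, Z)   # A . (X, Y, Z)
f = lambda X_: 0.5 * np.sum(F(X_)**2)

# <A.(I,Y,Z), A.(X,Y,Z)>_{-1}: contract over every mode except the first
Gx = np.einsum('lmn,mj,nk,ijk->li', A, Y, Z, F(X))

dX, h = np.random.randn(*X.shape), 1e-6
fd = (f(X + h * dX) - f(X - h * dX)) / (2 * h)
assert np.isclose(np.sum(Gx * dX), fd, rtol=1e-5)
```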

23-24 The Grassmann gradient. In global coordinates,
$$\nabla f = \begin{bmatrix} \langle \mathcal{A} \cdot (\Pi_x, Y, Z),\ \mathcal{A} \cdot (X, Y, Z) \rangle_{-1} \\ \langle \mathcal{A} \cdot (X, \Pi_y, Z),\ \mathcal{A} \cdot (X, Y, Z) \rangle_{-2} \\ \langle \mathcal{A} \cdot (X, Y, \Pi_z),\ \mathcal{A} \cdot (X, Y, Z) \rangle_{-3} \end{bmatrix}, \qquad \Pi_x = I - XX^T, \quad \Pi_y = I - YY^T, \quad \Pi_z = I - ZZ^T.$$
In local coordinates, with $X_\perp$, $Y_\perp$, $Z_\perp$ bases for the tangent spaces,
$$\nabla f = \begin{bmatrix} \langle \mathcal{A} \cdot (X_\perp, Y, Z),\ \mathcal{A} \cdot (X, Y, Z) \rangle_{-1} \\ \langle \mathcal{A} \cdot (X, Y_\perp, Z),\ \mathcal{A} \cdot (X, Y, Z) \rangle_{-2} \\ \langle \mathcal{A} \cdot (X, Y, Z_\perp),\ \mathcal{A} \cdot (X, Y, Z) \rangle_{-3} \end{bmatrix}.$$
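
In code, the global-coordinate gradient blocks are the Euclidean contractions of the previous sketch followed by the projections $\Pi$; each block then lies in the corresponding tangent space:

```python
import numpy as np

A = np.random.randn(5, 6, 7)
X = np.linalg.qr(np.random.randn(5, 2))[0]
Y = np.linalg.qr(np.random.randn(6, 2))[0]
Z = np.linalg.qr(np.random.randn(7, 2))[0]
Fc = np.einsum('lmn,li,mj,nk->ijk', A, X, Y, Z)   # F = A . (X, Y, Z)

Gx = (np.eye(5) - X @ X.T) @ np.einsum('lmn,mj,nk,ijk->li', A, Y, Z, Fc)
Gy = (np.eye(6) - Y @ Y.T) @ np.einsum('lmn,li,nk,ijk->mj', A, X, Z, Fc)
Gz = (np.eye(7) - Z @ Z.T) @ np.einsum('lmn,li,mj,ijk->nk', A, X, Y, Fc)
for Q, G in ((X, Gx), (Y, Gy), (Z, Gz)):
    assert np.allclose(Q.T @ G, 0, atol=1e-10)    # each block is tangent
```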

25 Interpretations of the gradient. The entries of the gradient are scalar products between slices from the illustrated tensor blocks; consider $\langle \mathcal{A} \cdot (X_\perp, Y, Z),\ \mathcal{A} \cdot (X, Y, Z) \rangle_{-1}$. [Figure: the tensor blocks $\mathcal{A} \cdot (X_\perp, Y, Z)$ and $\mathcal{A} \cdot (X, Y, Z)$ and their corresponding slices.]

26 The symmetric case:
$$f(X) = \tfrac{1}{2} \|\mathcal{A} \cdot (X, X, X)\|_F^2 = \tfrac{1}{2} \langle \mathcal{A} \cdot (X, X, X), \mathcal{A} \cdot (X, X, X) \rangle.$$
With $\Pi = I - XX^T$ the gradient becomes
$$\nabla f = \langle \mathcal{A} \cdot (\Pi, X, X), \mathcal{A} \cdot (X, X, X) \rangle_{-1} + \langle \mathcal{A} \cdot (X, \Pi, X), \mathcal{A} \cdot (X, X, X) \rangle_{-2} + \langle \mathcal{A} \cdot (X, X, \Pi), \mathcal{A} \cdot (X, X, X) \rangle_{-3}.$$
If $\mathcal{A}$ is symmetric then
$$\nabla f = 3\, \langle \mathcal{A} \cdot (\Pi, X, X), \mathcal{F} \rangle_{-1}, \qquad \mathcal{F} = \mathcal{A} \cdot (X, X, X).$$
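
A numeric check of the symmetric simplification: after symmetrizing a random tensor, the three gradient terms coincide, so the whole gradient is three times the first term (a sketch):

```python
import numpy as np
from itertools import permutations

n, r = 5, 2
B = np.random.randn(n, n, n)
A = sum(np.transpose(B, p) for p in permutations(range(3))) / 6   # symmetrize

X = np.linalg.qr(np.random.randn(n, r))[0]
Pi = np.eye(n) - X @ X.T
Fc = np.einsum('lmn,li,mj,nk->ijk', A, X, X, X)   # F = A . (X, X, X)

t1 = Pi @ np.einsum('lmn,mj,nk,ijk->li', A, X, X, Fc)
t2 = Pi @ np.einsum('lmn,li,nk,ijk->mj', A, X, X, Fc)
t3 = Pi @ np.einsum('lmn,li,mj,ijk->nk', A, X, X, Fc)
assert np.allclose(t1, t2) and np.allclose(t1, t3)   # grad f = 3 * t1
```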

27 Convergence plots and computation times: a random tensor approximated with a rank-(5,5,5) tensor. [Plot: relative norm of the gradient vs. iteration number for QNG, NG, CGG and HOOI, with computation times; two of the runs were stopped after 250 iterations, at relative gradient norms 5.6e-8 and 2e-10.]

28 Convergence plots and computation times: a random tensor approximated with a rank-(10,10,10) tensor. [Plot: relative norm of the gradient vs. iteration number for QNG, NG, CGG and HOOI, with computation times; two of the runs were stopped after 500 iterations, at relative gradient norms 1.5e-7 and 2.7e-9.]

29 Convergence plots and computation times: a symmetric random tensor approximated with a rank-(5,5,5) symmetric tensor. [Plot: relative norm of the gradient vs. iteration number for QNG, NG and HOOI, with computation times; one run was stopped after 500 iterations, at relative gradient norm 1e-6.]

30 Convergence plots and computation times: a symmetric random tensor approximated with a rank-(10,10,10) symmetric tensor. [Plot: relative norm of the gradient vs. iteration number for QNG, NG and HOOI, with computation times; one run was stopped after 500 iterations, at relative gradient norm 1e-7.]

31 A few remarks and questions:
Good starting points are needed; the truncated HOSVD is not always good, and not always available.
An efficient implementation of the algorithm is needed.
The better the initial Hessian, the better the convergence: a good Hessian, or rather inverse Hessian, is needed at the starting point.
What properties of a tensor determine the convergence rates of different algorithms?

32 Conclusions. Presented a brief introduction to tensors and multilinear algebra. Presented an algebraic framework for solving the best multilinear rank approximation problem of a tensor on a product of Grassmann manifolds. Both the general and the symmetric tensor problems were addressed. Compared different algorithms; the fast convergence of the QNG algorithm was verified.

33 For Further Reading
Lars Eldén and Berkant Savas. A Newton-Grassmann method for computing the best multilinear rank-$(r_1, r_2, r_3)$ approximation of a tensor. Submitted to SIAM Journal on Matrix Analysis and Applications, 2007.
Berkant Savas and Lek-Heng Lim. Best multilinear rank approximation with a quasi-Newton method on Grassmannians. Soon available as a technical report, 2007.

34 Thank you for your time. Questions?
Berkant Savas, PhD student, Department of Mathematics, Linköpings Universitet, SE Linköping. Web / Phone +46 (0) / Fax +46 (0)

A few multilinear algebraic definitions I
Inner product: $\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i,j,k} a_{ijk} b_{ijk}$
Frobenius norm: $\|\mathcal{A}\|_F = \sqrt{\langle \mathcal{A}, \mathcal{A} \rangle}$
Contracted tensor products: $C = \langle \mathcal{A}, \mathcal{B} \rangle_{2,3}$, $\; c_{ij} = \sum_{\lambda,\mu} a_{i\lambda\mu}\, b_{j\lambda\mu}$
