
1 Krylov-type methods and perturbation analysis Berkant Savas Department of Mathematics Linköping University Workshop on Tensor Approximation in High Dimension Hausdorff Institute for Mathematics, Universität Bonn August 2, 2011 Joint work with Lars Eldén, Linköping University

2 A few multilinear algebraic definitions I
Inner product: $\langle A, B \rangle = \sum_{i,j,k} a_{ijk} b_{ijk}$
Frobenius norm: $\|A\|_F = \sqrt{\langle A, A \rangle}$
Contracted tensor products: $C = \langle A, B \rangle_{2,3} = \langle A, B \rangle_{-1}$, with entries $c_{\alpha\beta} = \sum_{j,k} a_{\alpha jk} b_{\beta jk}$
Multilinear rank: $\operatorname{rank}(A) = (r_1, r_2, r_3)$, where $r_1 = \operatorname{rank}(A^{(1)})$, $r_2 = \operatorname{rank}(A^{(2)})$, $r_3 = \operatorname{rank}(A^{(3)})$
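A minimal NumPy sketch of these definitions (my own illustration, not from the slides; the function names are placeholders):

    import numpy as np

    def inner(A, B):
        # <A, B> = sum_{i,j,k} a_ijk * b_ijk
        return np.einsum('ijk,ijk->', A, B)

    def fro_norm(A):
        # ||A||_F = sqrt(<A, A>)
        return np.sqrt(inner(A, A))

    def contracted_product_23(A, B):
        # C = <A, B>_{2,3}: c_ab = sum_{j,k} a_ajk * b_bjk
        return np.einsum('ajk,bjk->ab', A, B)

    def multilinear_rank(A):
        # rank(A) = (rank of mode-1, mode-2, mode-3 unfolding)
        l, m, n = A.shape
        return (np.linalg.matrix_rank(A.reshape(l, m * n)),
                np.linalg.matrix_rank(np.moveaxis(A, 1, 0).reshape(m, l * n)),
                np.linalg.matrix_rank(np.moveaxis(A, 2, 0).reshape(n, l * m)))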

3 A few multilinear algebraic definitions II
Multilinear tensor-matrix multiplication, with U, V, W matrices (drawn on the slide as $A = U\,S\,V^T\,W^T$):
$A = (U, V, W)\cdot S$, elementwise $a_{ijk} = \sum_{\lambda,\mu,\nu} u_{i\lambda} v_{j\mu} w_{k\nu} s_{\lambda\mu\nu}$
Convention: $(U^T, V^T, W^T)\cdot A = A\cdot(U, V, W)$
Special cases: $(U, V)\cdot A = U A V^T$ for a matrix $A$, and $(I, I, W)\cdot A = (W)_3\cdot A$
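A short sketch of the multilinear multiplication with numpy.einsum (an illustration of mine, assuming dense arrays):

    import numpy as np

    def mlmul(S, U, V, W):
        # A = (U, V, W) . S :  a_ijk = sum_{l,m,n} u_il v_jm w_kn s_lmn
        return np.einsum('il,jm,kn,lmn->ijk', U, V, W, S)

    def mode3_mul(A, W):
        # Special case (I, I, W) . A = (W)_3 . A : multiply along mode 3 only
        return np.einsum('kn,ijn->ijk', W, A)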

4 Multilinear low rank approximation
$\min_{\operatorname{rank}(B) \le (r_1, r_2, r_3)} \|A - B\|$ with $B = (X, Y, Z)\cdot C$, where the core $C$ has dimensions $r_1 \times r_2 \times r_3$.
(Figure: $A \approx (X, Y, Z)\cdot C$, drawn with the factors X, $Y^T$, $Z^T$ around the core C.)

5 Multilinear low rank approximation
Tensor problem: $\min \|A - (X, Y, Z)\cdot C\|$; matrix analogue: $\min \|A - X C Y^T\|$.
The problem is overparameterized: C can be eliminated. The problem is equivalent to
$\max \|A\cdot(X, Y, Z)\|$ (tensor) and $\max \|X^T A Y\|$ (matrix), subject to $X^T X = I$, $Y^T Y = I$, $Z^T Z = I$.

6 Multilinear low rank approximation
Tensor problem: $\min \|A - (X, Y, Z)\cdot C\|$; matrix analogue: $\min \|A - X C Y^T\|$.
In addition to $X^T X = I$, $Y^T Y = I$, $Z^T Z = I$ we have the invariances
$\|A\cdot(X, Y, Z)\| = \|A\cdot(XU, YV, ZW)\|$ and $\|X^T A Y\| = \|U^T X^T A Y V\|$ for orthogonal U, V, W.
Grassmann manifold problem!

7 Computing the core C and the low rank approximation
Given X, Y, Z with orthonormal columns, solve $\min_C \|A - (X, Y, Z)\cdot C\|$ (tensor), $\min_C \|A - X C Y^T\|$ (matrix).
Solution: $C = A\cdot(X, Y, Z)$, respectively $C = X^T A Y$, and the low rank approximation is
$\hat A = (X, Y, Z)\cdot C = (XX^T, YY^T, ZZ^T)\cdot A$
$\hat A = X C Y^T = X X^T A Y Y^T = (XX^T, YY^T)\cdot A$
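A sketch of the core computation in NumPy (mine, not from the talk), using the convention $A\cdot(X, Y, Z) = (X^T, Y^T, Z^T)\cdot A$:

    import numpy as np

    def core_and_approx(A, X, Y, Z):
        # C = A . (X, Y, Z)   (X, Y, Z with orthonormal columns)
        C = np.einsum('il,jm,kn,ijk->lmn', X, Y, Z, A)
        # A_hat = (X, Y, Z) . C = (X X^T, Y Y^T, Z Z^T) . A
        A_hat = np.einsum('il,jm,kn,lmn->ijk', X, Y, Z, C)
        return C, A_hat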

8 HOSVD [De Lathauwer et al. 2000]
$A = (U, V, W)\cdot S$, elementwise $a_{ijk} = \sum_{\lambda,\mu,\nu} u_{i\lambda} v_{j\mu} w_{k\nu} s_{\lambda\mu\nu}$
U, V, W are orthogonal; S is all-orthogonal.
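A dense HOSVD sketch via SVDs of the three unfoldings (my own illustration; a practical implementation would truncate and exploit structure):

    import numpy as np

    def hosvd(A):
        l, m, n = A.shape
        # Left singular vectors of the mode-1, mode-2 and mode-3 unfoldings.
        U = np.linalg.svd(A.reshape(l, m * n), full_matrices=False)[0]
        V = np.linalg.svd(np.moveaxis(A, 1, 0).reshape(m, l * n), full_matrices=False)[0]
        W = np.linalg.svd(np.moveaxis(A, 2, 0).reshape(n, l * m), full_matrices=False)[0]
        # All-orthogonal core S = A . (U, V, W); then A = (U, V, W) . S
        S = np.einsum('il,jm,kn,ijk->lmn', U, V, W, A)
        return U, V, W, S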

9 Methods for best low rank approximation
Alternating minimization:
1. HOOI (Kroonenberg, De Lathauwer)
2. Trace maximization (Regalia)
Grassmann manifold approach:
1. Newton (Eldén, Savas)
2. Trust region/Newton (Ishteva, De Lathauwer et al.)
3. BFGS quasi-Newton (Savas, Lim)
4. Limited memory BFGS (Savas, Lim)
5. Symmetric tensors, without explicit representation (Morton)
In this talk we will consider Krylov-type tensor computations. Note: this approach does not solve the best low rank problem.

10 Krylov-type tensor computations

11 Krylov subspaces for matrices
Given $A \in R^{m\times m}$ and a starting vector $v \in R^m$,
$K_k(A, v) = \operatorname{span}\{v, Av, A^2 v, \dots, A^{k-1} v\}$
If $v_1 = v$, then $K_k(A, v) = \operatorname{span}\{v_1, v_2, v_3, \dots, v_k\}$ with $v_{i+1} = A v_i$, $i = 1, \dots, k-1$.
Specifically useful for large and sparse problems:
Systems of linear equations
Eigenvalues and eigenvectors
Singular values and singular vectors
Approximation of matrices and functions of matrices

12 The Arnoldi process (A a general square matrix)
for $k = 1, 2, \dots$ do
1. $h_k = U_k^T A u_k$
2. $v = A u_k - U_k h_k$
3. $\beta_k = h_{k+1,k} = \|v\|_2$
4. $u_{k+1} = v / \beta_k$
5. $\hat H_k = \begin{pmatrix} \hat H_{k-1} & h_k \\ 0 & h_{k+1,k} \end{pmatrix}$
end for
The Arnoldi decomposition: $A U_k = U_{k+1} \hat H_k$, with $\hat H_k$ a $(k+1)\times k$ Hessenberg matrix.
If A is symmetric this reduces to the Lanczos recurrence.
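A small NumPy sketch of the Arnoldi process above (mine; no restarting or reorthogonalization):

    import numpy as np

    def arnoldi(A, u1, k):
        # Returns U (m x (k+1)) and H ((k+1) x k) with A U[:, :k] = U H.
        m = A.shape[0]
        U = np.zeros((m, k + 1))
        H = np.zeros((k + 1, k))
        U[:, 0] = u1 / np.linalg.norm(u1)
        for j in range(k):
            v = A @ U[:, j]
            H[:j + 1, j] = U[:, :j + 1].T @ v          # h_j = U_j^T A u_j
            v = v - U[:, :j + 1] @ H[:j + 1, j]        # v = A u_j - U_j h_j
            H[j + 1, j] = np.linalg.norm(v)            # beta_j
            if H[j + 1, j] == 0:                       # invariant subspace found
                return U[:, :j + 1], H[:j + 2, :j + 1]
            U[:, j + 1] = v / H[j + 1, j]
        return U, H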

13 Golub-Kahan bidiagonalization ($A \in R^{m\times n}$)
$\beta_1 v_1$ given, $u_0 = 0$
for $j = 1, 2, \dots, k$ do
1. $\alpha_j u_j = A^T v_j - \beta_j u_{j-1}$
2. $\beta_{j+1} v_{j+1} = A u_j - \alpha_j v_j$
end for
$\alpha_j, \beta_j$ are chosen to normalize $u_j, v_j$. With $U_k = [u_1, \dots, u_k]$ and $V_{k+1} = [v_1, \dots, v_{k+1}]$ we have
$A U_k = V_{k+1} B_{k+1}$, $\quad V_{k+1}^T V_{k+1} = I$, $\quad U_k^T U_k = I$
where $B_{k+1}$ is bidiagonal; $U_k$ and $V_k$ are orthonormal bases for
$K_k(A^T A, u) = \operatorname{span}\{u, (A^T A)u, (A^T A)^2 u, \dots, (A^T A)^{k-1} u\}$
$K_k(A A^T, v) = \operatorname{span}\{v, (A A^T)v, (A A^T)^2 v, \dots, (A A^T)^{k-1} v\}$
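A runnable sketch of the recurrence (mine; no safeguards against breakdown):

    import numpy as np

    def golub_kahan(A, v1, k):
        # k steps: A U_k = V_{k+1} B, with B (k+1) x k lower bidiagonal.
        m, n = A.shape
        V = np.zeros((m, k + 1))
        U = np.zeros((n, k))
        alpha = np.zeros(k)
        beta = np.zeros(k + 1)
        V[:, 0] = v1 / np.linalg.norm(v1)
        for j in range(k):
            u = A.T @ V[:, j] - (beta[j] * U[:, j - 1] if j > 0 else 0.0)
            alpha[j] = np.linalg.norm(u)
            U[:, j] = u / alpha[j]
            v = A @ U[:, j] - alpha[j] * V[:, j]
            beta[j + 1] = np.linalg.norm(v)
            V[:, j + 1] = v / beta[j + 1]
        B = np.zeros((k + 1, k))
        B[np.arange(k), np.arange(k)] = alpha            # alpha_j on the diagonal
        B[np.arange(1, k + 1), np.arange(k)] = beta[1:]  # beta_{j+1} below it
        return U, V, B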

14 Matrix-vector and tensor-vector products
Matrix: $A \in R^{m\times n}$, $v \in R^n$: $Av = A\cdot(v)_2 \in R^m$.
Tensor: $A \in R^{l\times m\times n}$, $v \in R^m$, $w \in R^n$:
$A\cdot(w)_3 \in R^{l\times m}$, $\quad [A\cdot(w)_3]_{ij} = \sum_k a_{ijk} w_k$
$A\cdot(v, w)_{2,3} \in R^l$, $\quad [A\cdot(v, w)_{2,3}]_i = \sum_{j,k} a_{ijk} v_j w_k$
Similarly $A\cdot(u, w)_{1,3} \in R^m$ and $A\cdot(u, v)_{1,2} \in R^n$.
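The three tensor-vector-vector contractions in NumPy (my naming, not the talk's):

    import numpy as np

    def tvp_23(A, v, w):
        # A . (v, w)_{2,3} in R^l : sum_{j,k} a_ijk v_j w_k
        return np.einsum('ijk,j,k->i', A, v, w)

    def tvp_13(A, u, w):
        # A . (u, w)_{1,3} in R^m
        return np.einsum('ijk,i,k->j', A, u, w)

    def tvp_12(A, u, v):
        # A . (u, v)_{1,2} in R^n
        return np.einsum('ijk,i,j->k', A, u, v)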

15 Minimal Krylov method I
$A \in R^{l\times m\times n}$ and starting vectors with norm one: $u_1 \in R^l$, $v_1 \in R^m$, $w_1 \in R^n$.
$u_{i+1} = A\cdot(v_i, w_i)_{2,3}$, $\quad i = 1, \dots, k-1$
$v_{i+1} = A\cdot(u_i, w_i)_{1,3}$, $\quad i = 1, \dots, k-1$
$w_{i+1} = A\cdot(u_i, v_i)_{1,2}$, $\quad i = 1, \dots, k-1$
Set $U_k = [u_1\ u_2\ \dots\ u_k]$, $V_k = [v_1\ v_2\ \dots\ v_k]$, $W_k = [w_1\ w_2\ \dots\ w_k]$, and orthogonalize explicitly using Gram-Schmidt on each sequence.

16 Minimal Krylov method II
$A \in R^{l\times m\times n}$ and starting vectors with norm one: $u_1 \in R^l$, $v_1 \in R^m$, $w_1 \in R^n$.
$u_{i+1} = A\cdot(v_i, w_i)_{2,3}$, $\quad i = 1, \dots, k-1$
$v_{i+1} = A\cdot(u_{i+1}, w_i)_{1,3}$, $\quad i = 1, \dots, k-1$
$w_{i+1} = A\cdot(u_{i+1}, v_{i+1})_{1,2}$, $\quad i = 1, \dots, k-1$
Orthogonalize, set $U_k = [u_1\ u_2\ \dots\ u_k]$, $V_k = [v_1\ v_2\ \dots\ v_k]$, $W_k = [w_1\ w_2\ \dots\ w_k]$, and approximate
$A \approx (U_k U_k^T,\ V_k V_k^T,\ W_k W_k^T)\cdot A$
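A compact sketch of this second variant (my own code; the slides do not prescribe the orthogonalization details):

    import numpy as np

    def minimal_krylov(A, u1, v1, w1, k):
        U = [u1 / np.linalg.norm(u1)]
        V = [v1 / np.linalg.norm(v1)]
        W = [w1 / np.linalg.norm(w1)]
        def orth(basis, x):
            Q = np.column_stack(basis)
            x = x - Q @ (Q.T @ x)          # Gram-Schmidt against previous vectors
            return x / np.linalg.norm(x)
        for i in range(k - 1):
            U.append(orth(U, np.einsum('ijk,j,k->i', A, V[-1], W[-1])))  # A.(v_i, w_i)_{2,3}
            V.append(orth(V, np.einsum('ijk,i,k->j', A, U[-1], W[-1])))  # A.(u_{i+1}, w_i)_{1,3}
            W.append(orth(W, np.einsum('ijk,i,j->k', A, U[-1], V[-1])))  # A.(u_{i+1}, v_{i+1})_{1,2}
        return np.column_stack(U), np.column_stack(V), np.column_stack(W)

The low rank approximation is then $(U_k U_k^T, V_k V_k^T, W_k W_k^T)\cdot A$, or equivalently $(U_k, V_k, W_k)\cdot C$ with $C = A\cdot(U_k, V_k, W_k)$.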

17 Krylov method on a low rank matrix
Let $A \in R^{n\times n}$ with $\operatorname{rank}(A) = k$ and $A = U_k \Sigma_k V_k^T$. Then
$K_k(A, u) = K_{k+p}(A, u)$ for all $p \ge 1$,
so we only need to do k steps of Arnoldi.

18 Low rank tensor and minimal Krylov method
Theorem. Let $A \in R^{l\times m\times n}$ with $\operatorname{rank}(A) = (p, q, r)$, and assume $p \ge q \ge r$. Then $A = (X, Y, Z)\cdot C$ and, using a modified minimal Krylov recursion,
in p steps we get $U_p$ s.t. $\operatorname{span}(U_p) = \operatorname{span}(X)$,
in q steps we get $V_q$ s.t. $\operatorname{span}(V_q) = \operatorname{span}(Y)$,
in r steps we get $W_r$ s.t. $\operatorname{span}(W_r) = \operatorname{span}(Z)$.

19 Low rank tensors + noise and minimal Krylov method
Theorem. Let $A \in R^{l\times m\times n}$ with $\operatorname{rank}(A) = (p, q, r)$ and add noise $\rho E$; again assume $p \ge q \ge r$. Then $A = (X, Y, Z)\cdot C + \rho E$ and, using a modified minimal Krylov recursion,
in p steps we get $U_p$ s.t. $\operatorname{span}(U_p) \approx \operatorname{span}(X)$ within the level of noise,
in q steps we get $V_q$ s.t. $\operatorname{span}(V_q) \approx \operatorname{span}(Y)$ within the level of noise,
in r steps we get $W_r$ s.t. $\operatorname{span}(W_r) \approx \operatorname{span}(Z)$ within the level of noise.

20 Maximal Krylov method
Generate all possible combinations at each step:
$\{u_1\} \times \{v_1\} \rightarrow w_1$
$\{v_1\} \times \{w_1\} \rightarrow u_2$
$\{u_1, u_2\} \times \{w_1\} \rightarrow \{v_2, v_3\}$
$\{u_1, u_2\} \times \{v_1, v_2, v_3\} \rightarrow \{w_1, w_2, \dots, w_6\}$
$\{v_1, v_2, v_3\} \times \{w_1, w_2, \dots, w_6\} \rightarrow \{u_2, u_3, \dots, u_{19}\}$
$\{u_1, u_2, \dots, u_{19}\} \times \{w_1, w_2, \dots, w_6\} \rightarrow \{v_2, v_3, v_4, \dots, v_{115}\}$

21 Krylov factorization for maximal recursion
Theorem (Tensor Krylov factorizations).
After a complete u-loop: $A\cdot(V_k, W_l)_{2,3} = (U_j)_1\cdot H_{jkl}$.
After a complete v-loop: $A\cdot(U_j, W_l)_{1,3} = (V_m)_2\cdot H_{jml}$.
After a complete w-loop: $A\cdot(U_j, V_m)_{1,2} = (W_n)_3\cdot H_{jmn}$.

22 Example of a maximal Krylov method
1. $\{u_1\} \times \{v_1\} \rightarrow w_1$: $\ A\cdot(u_1, v_1)_{1,2} = (w_1)_3\cdot H$
2. $\{v_1\} \times \{w_1\} \rightarrow u_2$: $\ A\cdot(v_1, w_1)_{2,3} = ([u_1\ u_2])_1\cdot H$
3. $\{u_1, u_2\} \times \{w_1\} \rightarrow \{v_2, v_3\}$: $\ A\cdot([u_1\ u_2], w_1)_{1,3} = ([v_1\ v_2\ v_3])_2\cdot H$
4. $\{u_1, u_2\} \times \{v_1, v_2, v_3\} \rightarrow \{w_1, w_2, \dots, w_6\}$
5. ... $A\cdot([u_1\ u_2], [v_1\ v_2\ v_3])_{1,2} = ([w_1\ \dots\ w_6])_3\cdot H$
The bad: the dimensions of the subspaces explode.

23 Krylov subspaces of contracted tensor products
Recall: for a matrix $A \in R^{m\times n}$ we have $AA^T = \langle A, A\rangle_1 \in R^{m\times m}$ and $A^T A = \langle A, A\rangle_2 \in R^{n\times n}$.
Let $A \in R^{m\times n\times l}$, $u \in R^m$, $v \in R^n$, $w \in R^l$, and consider
$\langle A, A\rangle_1 = A^{(1)} (A^{(1)})^T \in R^{m\times m}$ and $K_k(\langle A, A\rangle_1, u)$
$\langle A, A\rangle_2 = A^{(2)} (A^{(2)})^T \in R^{n\times n}$ and $K_k(\langle A, A\rangle_2, v)$
$\langle A, A\rangle_3 = A^{(3)} (A^{(3)})^T \in R^{l\times l}$ and $K_k(\langle A, A\rangle_3, w)$
These are symmetric matrices: apply the Lanczos recurrence. All computations are implemented using $A\cdot(v, w)_{2,3}$, $A\cdot(u, w)_{1,3}$, $A\cdot(u, v)_{1,2}$.
Optimal subspaces give the truncated HOSVD.
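As a side note (my own sketch, not from the slides), the Gram matrices never need to be formed explicitly; a matrix-vector product with $\langle A, A\rangle_1$, which is all the Lanczos recurrence requires, can be evaluated by two tensor contractions:

    import numpy as np

    def gram1_matvec(A, u):
        # <A, A>_1 u = A_(1) (A_(1))^T u, without forming the m x m Gram matrix.
        T = np.einsum('ijk,i->jk', A, u)      # (A_(1))^T u, stored as an n x l array
        return np.einsum('ijk,jk->i', A, T)   # apply A_(1)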

24 Optimized tensor-Krylov approach
Let $U_i = [u_1 \dots u_i]$, $V_i = [v_1 \dots v_i]$, and $W_{i-1} = [w_1 \dots w_{i-1}]$. Find $\theta$ and $\eta$ that give the optimal $\hat w$:
$\hat w = A\cdot(U_i\theta, V_i\eta)_{1,2}$, $\quad \max_{\theta,\eta} \|\hat w\|$, s.t. $\hat w \perp W_{i-1}$, $\|\theta\| = \|\eta\| = 1$, $\theta, \eta \in R^i$.
Solution: the best rank-(1,1,1) approximation $(\theta, \eta, \omega)\cdot S_{111}$ of $A\cdot\big(U_i, V_i, I - W_{i-1}W_{i-1}^T\big)$.
[Goreinov, Oseledets and Savostyanov 2010] and [Savas and Eldén 2010]

25 Experiments: Minimal Krylov vs truncated HOSVD
(Figure: relative difference $(H_{\min} - H_{hosvd})/H_{hosvd}$ plotted over random runs.)
Difference between $\hat A_{\min}$ and $\hat A_{hosvd}$ of a tensor A. Rank of approximation is (10, 10, 10).

26 HOSVD using the minimal tensor Krylov method
Let A have exact low rank, $A = (X, Y, Z)\cdot C$, with HOSVD $A = (U, V, W)\cdot S$.
Alternative I
1. SVD of $A^{(1)}$ gives U
2. SVD of $A^{(2)}$ gives V
3. SVD of $A^{(3)}$ gives W
4. Compute $S = A\cdot(U, V, W)$
Alternative II
1. Minimal Krylov method on A gives $U_p$, $V_q$, and $W_r$
2. Compute $C = A\cdot(U_p, V_q, W_r)$
3. Compute the HOSVD of $C = (\bar U, \bar V, \bar W)\cdot \bar S$
4. Change basis: $U = U_p \bar U$, $V = V_q \bar V$, $W = W_r \bar W$
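A sketch of Alternative II (my own code, assuming the Krylov bases $U_p$, $V_q$, $W_r$ already span the exact factor spaces):

    import numpy as np

    def hosvd_via_krylov(A, Up, Vq, Wr):
        # Step 2: small core C = A . (U_p, V_q, W_r)
        C = np.einsum('il,jm,kn,ijk->lmn', Up, Vq, Wr, A)
        # Step 3: HOSVD of the small core, C = (Ubar, Vbar, Wbar) . Sbar
        p, q, r = C.shape
        Ubar = np.linalg.svd(C.reshape(p, q * r), full_matrices=False)[0]
        Vbar = np.linalg.svd(np.moveaxis(C, 1, 0).reshape(q, p * r), full_matrices=False)[0]
        Wbar = np.linalg.svd(np.moveaxis(C, 2, 0).reshape(r, p * q), full_matrices=False)[0]
        Sbar = np.einsum('il,jm,kn,ijk->lmn', Ubar, Vbar, Wbar, C)
        # Step 4: change basis back to the original spaces
        return Up @ Ubar, Vq @ Vbar, Wr @ Wbar, Sbar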

27 Experiments: applied to the Netflix data
Tensor with three modes:
User mode: number of users
Movie mode: number of movies
Time mode: 2243 days
Time for computing a rank-(100, 100, 100) approximation: 17 hours.
Bottleneck: $u = A\cdot(v, w)_{2,3}$. Efficient storage schemes are needed; do not compute $A\cdot(w)_3$, as it will be dense.

28 Optimality conditions

29 Recall: Multilinear low rank approximation
$\min \|A - (X, Y, Z)\cdot C\|$
The problem is overparameterized, nonlinear, and equivalent to
$\max f(X, Y, Z)$, s.t. $X^T X = I$, $Y^T Y = I$, $Z^T Z = I$, where $f(X, Y, Z) = \|A\cdot(X, Y, Z)\|_F^2$.

30 First order optimality conditions
Objective function $f(X, Y, Z) = \|A\cdot(X, Y, Z)\|_F^2$. At a stationary point $(X, Y, Z)$ we have $\nabla f = 0$:
$\nabla f_x = \langle A\cdot(X_\perp, Y, Z),\ A\cdot(X, Y, Z)\rangle_1 = 0$
$\nabla f_y = \langle A\cdot(X, Y_\perp, Z),\ A\cdot(X, Y, Z)\rangle_2 = 0$
$\nabla f_z = \langle A\cdot(X, Y, Z_\perp),\ A\cdot(X, Y, Z)\rangle_3 = 0$
where $[X\ X_\perp]$, $[Y\ Y_\perp]$, $[Z\ Z_\perp]$ are orthogonal.
For matrices we have $f(X, Y) = \|X^T A Y\|_F^2$ and
$\langle X_\perp^T A Y,\ X^T A Y\rangle_1 = X_\perp^T A Y\,(X^T A Y)^T = 0$
Let $[U_k\ U_\perp]$ and $[V_k\ V_\perp]$ contain the left and right singular vectors; then
$[U_k\ U_\perp]^T A\,[V_k\ V_\perp] = \begin{bmatrix} U_k^T A V_k & U_k^T A V_\perp \\ U_\perp^T A V_k & U_\perp^T A V_\perp \end{bmatrix} = \begin{bmatrix} \Sigma_k & 0 \\ 0 & \Sigma_\perp \end{bmatrix}$

31 (Figure: the blocks $A\cdot(X, Y, Z)$, $A\cdot(X_\perp, Y, Z)$, $A\cdot(X, Y_\perp, Z)$, $A\cdot(X, Y, Z_\perp)$, $A\cdot(X_\perp, Y_\perp, Z)$, ... of the transformed tensor.)
WOOW..., what is that?? It is the first order condition $\langle A\cdot(X_\perp, Y, Z),\ A\cdot(X, Y, Z)\rangle_1 = 0$.
The figure visualizes the generalization of
$[U_k\ U_\perp]^T A\,[V_k\ V_\perp] = \begin{bmatrix} U_k^T A V_k & U_k^T A V_\perp \\ U_\perp^T A V_k & U_\perp^T A V_\perp \end{bmatrix} = \begin{bmatrix} \Sigma_k & 0 \\ 0 & \Sigma_\perp \end{bmatrix}$

32 Second order optimality conditions: ordering
(X, Y, Z) is a local max of $f(X, Y, Z) = \|A\cdot(X, Y, Z)\|_F^2$ if the Hessian is negative definite.
Theorem. Let (X, Y, Z) be a local minimizer and $B = \begin{bmatrix} A\cdot(X, Y, Z) \\ A\cdot(X_\perp, Y, Z) \end{bmatrix}$; then
$\|B(1,:,:)\|, \dots, \|B(r_1,:,:)\| \;>\; \|B(r_1+1,:,:)\|, \dots, \|B(l,:,:)\|$
(Figure: the block decomposition of the transformed tensor, as on the previous slide.)

33 We can also get orthogonality
(X, Y, Z) is a local max of $f(X, Y, Z) = \|A\cdot(X, Y, Z)\|_F^2$ if the Hessian is negative definite.
Theorem. Let (X, Y, Z) be a local minimizer and $B = \begin{bmatrix} A\cdot(X, Y, Z) \\ A\cdot(X_\perp, Y, Z) \end{bmatrix}$; then
$\langle B(i,:,:),\ B(j,:,:)\rangle = 0$, $\quad i \ne j$
(Figure: the block decomposition of the transformed tensor, as on the previous slides.)

34 Compare with the HOSVD...
$A = (U, V, W)\cdot S$
U, V, W are orthogonal; S is all-orthogonal: $\langle S(:,:,i),\ S(:,:,j)\rangle = 0$ for $i \ne j$.
(Figure: the slices S(:,:,1), S(:,:,2), S(:,:,3).)
The properties hold in all three modes simultaneously.

35 Perturbation theory and concept of gap for tensors

36 Perturbation analysis: setup
(X, Y, Z) is a representative of a stationary point for $\|A - (X, Y, Z)\cdot C\|_F^2$.
Now perturb A with a small E: $\tilde A = A + E$. What is the first order perturbation $(\tilde X, \tilde Y, \tilde Z)$ of the stationary point?
$\tilde X = X + \delta X$, $\quad \tilde Y = Y + \delta Y$, $\quad \tilde Z = Z + \delta Z$
We want to bound $\delta X$, $\delta Y$, and $\delta Z$ in terms of some properties of A.

37 First order optimality condition
The perturbed point $(\tilde X, \tilde Y, \tilde Z)$ has to satisfy the first order optimality conditions, i.e. $\nabla f = (\nabla f_x, \nabla f_y, \nabla f_z) = 0$:
$\nabla f_x = \langle \tilde A\cdot(\tilde X_\perp, \tilde Y, \tilde Z),\ \tilde A\cdot(\tilde X, \tilde Y, \tilde Z)\rangle_1 = 0$
$\nabla f_y = \langle \tilde A\cdot(\tilde X, \tilde Y_\perp, \tilde Z),\ \tilde A\cdot(\tilde X, \tilde Y, \tilde Z)\rangle_2 = 0$
$\nabla f_z = \langle \tilde A\cdot(\tilde X, \tilde Y, \tilde Z_\perp),\ \tilde A\cdot(\tilde X, \tilde Y, \tilde Z)\rangle_3 = 0$

38 First order optimality condition: shake the gradient..., and we get the Hessian!
In operator form we have
$\begin{bmatrix} H_{xx} & H_{xy} & H_{xz} \\ H_{yx} & H_{yy} & H_{yz} \\ H_{zx} & H_{zy} & H_{zz} \end{bmatrix} \begin{bmatrix} \delta X \\ \delta Y \\ \delta Z \end{bmatrix}$
The Hessian on the LHS and some other things on the RHS:
$H_{xx}(\delta X) + H_{xy}(\delta Y) + H_{xz}(\delta Z) = -\big(\langle F_x, E\rangle + \langle E_x, F\rangle\big)_1$
$H_{yx}(\delta X) + H_{yy}(\delta Y) + H_{yz}(\delta Z) = -\big(\langle F_y, E\rangle + \langle E_y, F\rangle\big)_2$
$H_{zx}(\delta X) + H_{zy}(\delta Y) + H_{zz}(\delta Z) = -\big(\langle F_z, E\rangle + \langle E_z, F\rangle\big)_3$
This looks messy... BUT: these equations will give us the gap!

39 Example 1: Matrix case
1. $\begin{bmatrix} \Sigma_1^2 \otimes I & -\Sigma_1 \otimes \Sigma_2 \\ -\Sigma_1 \otimes \Sigma_2 & \Sigma_1^2 \otimes I \end{bmatrix} \begin{bmatrix} \operatorname{vec}(\delta X) \\ \operatorname{vec}(\delta Y) \end{bmatrix} = \begin{bmatrix} (\Sigma_1 \otimes I)\operatorname{vec}(E_x) \\ (\Sigma_1 \otimes I)\operatorname{vec}(E_y) \end{bmatrix}$
2. We can uncouple into $2\times 2$ block systems. The worst is
$\begin{bmatrix} \sigma_r^2 & -\sigma_r \sigma_{r+1} \\ -\sigma_r \sigma_{r+1} & \sigma_r^2 \end{bmatrix} \begin{bmatrix} \delta x \\ \delta y \end{bmatrix} = \begin{bmatrix} \sigma_r e_x \\ \sigma_r e_y \end{bmatrix}$
3. Solution:
$\begin{bmatrix} \delta x \\ \delta y \end{bmatrix} = \frac{1}{(\sigma_r - \sigma_{r+1})(\sigma_r + \sigma_{r+1})} \begin{bmatrix} \sigma_r e_x + \sigma_{r+1} e_y \\ \sigma_{r+1} e_x + \sigma_r e_y \end{bmatrix}$
4. Bounding gives:
$\|\delta x\| \le \frac{1}{\sigma_r - \sigma_{r+1}}\big(\|e_x\| + \|e_y\|\big)$, $\quad \|\delta y\| \le \frac{1}{\sigma_r - \sigma_{r+1}}\big(\|e_x\| + \|e_y\|\big)$
5. The quantity $\sigma_r - \sigma_{r+1}$ is called the gap!
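As a small numerical illustration (entirely my own, not from the talk) of what the matrix gap controls, one can perturb a random matrix and compare the rotation of its leading singular subspace with $\|E\|/(\sigma_r - \sigma_{r+1})$:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r = 30, 20, 5
    A = rng.standard_normal((m, n))
    E = 1e-6 * rng.standard_normal((m, n))

    U, s, _ = np.linalg.svd(A)
    Ut, _, _ = np.linalg.svd(A + E)

    # Sines of the principal angles between the leading r-dimensional left singular subspaces.
    sines = np.linalg.svd(U[:, r:].T @ Ut[:, :r], compute_uv=False)
    gap = s[r - 1] - s[r]
    print("max sin(angle):", sines.max())                  # observed subspace rotation
    print("||E||_2 / gap :", np.linalg.norm(E, 2) / gap)   # Wedin-type yardstick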

40 Example 2: Tensor case with rank-(1,1,1) approximation
Let (x, y, z) be a stationary point of $f(x, y, z) = \|A\cdot(x, y, z)\|_F^2$. With $c = A\cdot(x, y, z)$ the perturbation equations become
$\begin{bmatrix} c^2 I & c F_{xy} & c F_{xz} \\ c F_{yx} & c^2 I & c F_{yz} \\ c F_{zx} & c F_{zy} & c^2 I \end{bmatrix} \begin{bmatrix} \delta x \\ \delta y \\ \delta z \end{bmatrix} = \begin{bmatrix} c\,e_x \\ c\,e_y \\ c\,e_z \end{bmatrix}$
where $F_{xy} = A\cdot(X, Y, z)$, ... (the capital letters denoting orthonormal bases of the orthogonal complements of x, y, z).
(Figure: the blocks $A\cdot(x, y, z)$, $A\cdot(X, y, z)$, $A\cdot(x, Y, z)$, $A\cdot(x, y, Z)$, ...; due to orthogonality the blue vectors are zero.)

41 Example 2: Tensor case with rank-(1,1,1) approximation
The perturbations satisfy
$\begin{bmatrix} c^2 I & c F_{xy} & c F_{xz} \\ c F_{yx} & c^2 I & c F_{yz} \\ c F_{zx} & c F_{zy} & c^2 I \end{bmatrix} \begin{bmatrix} \delta x \\ \delta y \\ \delta z \end{bmatrix} = \begin{bmatrix} c\,e_x \\ c\,e_y \\ c\,e_z \end{bmatrix}$
We get the bound
$\left\|\begin{bmatrix} \delta x \\ \delta y \\ \delta z \end{bmatrix}\right\| \le \|G^{-1}\| \left\|\begin{bmatrix} e_x \\ e_y \\ e_z \end{bmatrix}\right\|$, $\quad G = c\begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix} + \begin{bmatrix} 0 & F_{xy} & F_{xz} \\ F_{yx} & 0 & F_{yz} \\ F_{zx} & F_{zy} & 0 \end{bmatrix}$
What can we say about $G^{-1}$?

42 Example 2: Tensor case with rank-(1,1,1) approximation
We have that $\|G^{-1}\| \le \dfrac{1}{\lambda_{\min}(G)}$, where
$G = c\begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix} + \begin{bmatrix} 0 & F_{xy} & F_{xz} \\ F_{yx} & 0 & F_{yz} \\ F_{zx} & F_{zy} & 0 \end{bmatrix} = cI + F$
The gap becomes $\lambda_{\min}(G) = c - \mu_{\max}$, where $\mu_{\max}$ is the largest eigenvalue of F.
Compare with the matrix case $\begin{bmatrix} \sigma_1 & 0 \\ 0 & \Sigma \end{bmatrix}$.

43 Summary
1. We considered the multilinear low rank approximation of a tensor, $A \approx (X, Y, Z)\cdot C$.
2. Presented several Krylov-type procedures that generate low rank approximations.
3. Interpreted the first and second order optimality conditions: ordering and orthogonality.
4. Generalized the gap from matrix sensitivity analysis to tensors.

44 Thank you for your time!
