A semi-separable approach to a tridiagonal hierarchy of matrices with application to image flow analysis


Patrick Dewilde, Klaus Diepold, Walter Bamberger

Abstract - Image flow analysis (e.g. as derived by Horn and Schunck) gives rise to matrices that are not semi-separable in themselves but have a hierarchical structure, such that at each level of the hierarchy block tridiagonal matrices appear whose block entries themselves contain block tridiagonal matrices. In this paper we explore how typical algorithms for the inversion of semi-separable systems can be adapted to that hierarchical situation. At first sight, the attractive property of non-hierarchical semi-separable inversion - linearity in the number of equations, given the state dimension - appears to get lost, due to a gradual increase of the semi-separable complexity at the lower levels of the hierarchy. We indicate how this increase can be kept in check. This leads to hierarchical algorithms for a more general class than is covered by the HSS class.

1 Introduction

We follow the notation of the book [2] on time-varying systems throughout (it should be clear that semi-separable systems of equations are nothing else but time-discrete and time-varying systems). In particular, we utilize non-uniform vectors characterized by a sequence of dimensions, where particular entries may disappear when they have dimension zero. Finite vectors may always be thought of as embedded in doubly infinite sequences by augmenting them with empty entries acting as placeholders. Likewise, all matrices considered are of block type, and entries may disappear when the corresponding entries in the vectors disappear as well - the only additional property needed for matrix calculus with disappearing entries being that the product of a matrix of dimensions m x 0 with a matrix of dimensions 0 x n is a matrix with zero entries of dimensions m x n.
Furthermore, special matrices such as block-diagonal matrices and the unilateral shift Z are utilized, whereby (uZ)_k = u_{k-1} or (Zu)_k = u_{k+1}: Z is a unit matrix that shifts indices in the vectors it is applied to as indicated, and Z^T will be its transpose. A semi-separable upper operator then has a representation

T = D + B Z (I - A Z)^{-1} C

where A, B, C, D are block-diagonal matrices (presumably of low dimension), while a semi-separable representation for a block lower matrix is similarly given by

T = D + B Z^T (I - A Z^T)^{-1} C.

(The authors are with Delft University of Technology and the Technical University of Munich.)
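As a quick sanity check on these representations (a numerical sketch with arbitrary scalar blocks; the concrete numbers are our own choice, not the paper's), the upper formula can be evaluated directly, with Z the superdiagonal shift matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# arbitrary scalar (1x1-block) diagonals A, B, C, D of a realization
A = np.diag(rng.uniform(-0.5, 0.5, n))
B = np.diag(rng.uniform(-0.5, 0.5, n))
C = np.diag(rng.uniform(-0.5, 0.5, n))
D = np.diag(rng.uniform(-0.5, 0.5, n))
Z = np.diag(np.ones(n - 1), 1)        # unilateral shift: (Zu)_k = u_{k+1}

# the semi-separable upper operator T = D + B Z (I - A Z)^{-1} C
T = D + B @ Z @ np.linalg.inv(np.eye(n) - A @ Z) @ C
print(np.allclose(T, np.triu(T)))     # T is indeed upper triangular
```

Since the state dimension is one here, every block T[:k, k:] above the diagonal also has rank at most one.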

The shift operator can equally well be applied to matrices, both left and right, resulting in a shift of rows or of columns respectively. In addition we will need a shift along diagonals, for which we will use the special notation A^{(1)} = Z^T A Z to represent a unit shift in the South-East direction on the matrix A (A^{(k)} will be a shift in the same direction over k positions). The sequence of dimensions of the so-called state transition operator A is called the degree of the realization (at each particular index); it is closely related to the computational complexity of the local operations needed to compute a matrix-vector product with the corresponding semi-separable operator - in many cases a general statement can be made on their size. A general semi-separable operator will be the sum of an upper and a lower operator, each coded with its own semi-separable representation, often called a realization of the operator because of the system-theoretic connotations. It is known that in many cases a high-degree semi-separable representation can be efficiently replaced by a low-degree one, given a certain accuracy requirement. There is a general theory available to derive such representations, known as model reduction theory for time-varying systems, originally due to [1]. In [6] an algorithm is presented for exact Hankel-norm approximation of a time-varying system, while in [3] an efficient approximation algorithm is given. A tridiagonal matrix T generically has degree two: one for each side of the representation. We indicate this by writing #T = (1, 1). If an invertible upper operator has degree n, then its inverse also has degree n, meaning that there is a realization for it of the same dimension as for the original. In particular, an invertible upper bidiagonal matrix has an inverse that is a full matrix but still has semi-separable degree one.
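The last statement is easy to verify numerically (a sketch with random data of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# an invertible upper bidiagonal matrix
T = np.diag(rng.uniform(1.0, 2.0, n)) + np.diag(rng.uniform(1.0, 2.0, n - 1), 1)
Tinv = np.linalg.inv(T)

# the inverse is a completely filled upper triangular matrix ...
full_upper = np.count_nonzero(np.triu(Tinv)) == n * (n + 1) // 2
# ... yet each Hankel block Tinv[:k, k:] has rank one: semi-separable degree one
degree = max(np.linalg.matrix_rank(Tinv[:k, k:]) for k in range(1, n))
print(full_upper, degree)
```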
A general invertible operator will have the same overall degree for its inverse, but the numbers will be redistributed over the upper and the lower part. The degree of the sum and of the product of two upper ss-operators is generically the sum of the degrees. Hence it pays to write the operators and their inverses down in ss form, resulting in more efficient calculations. The evaluation of the degree is indicative of (if not equal to) the complexity of the calculations.

2 Image flow analysis

Image flow analysis consists in comparing two image pixel maps (an original and a target) and determining for each pixel an optimal displacement vector such that the displaced original is optimally close (in the least-squares sense) to the target [4]. These vectors can then be used for optimal coding, as is done extensively by the MPEG standards. The Horn and Schunck method, as well as other methods for this problem, produces matrices that have a strongly pronounced hierarchical form: they are not simply semi-separable (in fact, they are not semi-separable of low degree at all), but at each level of the hierarchy they exhibit a strongly semi-separable structure - if the entries at that level were scalar, the system would be semi-separable. Image flow analysis is prototypical for a class of system modeling and system inversion problems that are governed by partial differential equations working on layered structures. They provide a simple but generic instance of a tough modeling problem for which efficient solution algorithms are very welcome.

3 Elementary inversion

In this section we show the basic ideas on an elementary LU-type factorization, executed on a (hierarchical) doubly tridiagonal matrix. This algorithm will only be applicable when the original can be LU-factorized without pivoting. A more general algorithm capable of computing the Moore-Penrose inverse will be proposed in the next section. Let the original operator be given as

T = Z^T L + M + U Z

in which L, M, U are conformal block-diagonal matrices consisting of tridiagonal blocks. Our goal is to factor T = (Z^T l + δ)(ε + u Z). Recall the notation A^{(1)} = Z^T A Z, which shifts blocks one notch in the South-East direction. The solution is of course classical:

L = l ε
M = δ ε + (l u)^{(1)}
U = δ u

This defines a recursion; all entries with index -1 are zero, and the first step gives:

STEP 1: M_0 = δ_0 ε_0, l_0 = L_0 ε_0^{-1}, u_0 = δ_0^{-1} U_0.

GENERIC STEP:
δ_k ε_k = M_k - l_{k-1} u_{k-1}
l_k = L_k ε_k^{-1}
u_k = δ_k^{-1} U_k

It should be clear that due to the recursion the ss degree will increase by an order 2 at each step, yielding degree n by the time the end is reached - an unacceptable (but mathematically correct) situation. The redemption comes from model reduction. It has to be executed at each step, in the first equation. This should keep the ss order of δ_k and ε_k within a given bound - say s. The model reduction step would require O(sn) operations. As there are n steps in the recursion, the overall LDU factorization complexity is O(sn^2) - of the order of the number of non-trivial blocks in the matrix, except for a constant the minimal number possible.

4 Application to the Horn and Schunck case

In the Horn and Schunck case [4], the general hierarchical appearance of the flow matrix is as follows:

T = [ D_{1;1}  R_{1;2} ]
    [ L_{1;2}  D_{1;2} ]        (1)

where D_{1;i} stands for a block tridiagonal matrix of dimension N^2 x N^2, assuming square images with N pixels per line:

D_{1;i} = [ *  *             ]
          [ *  *  *          ]
          [    *  *  *       ]
          [       ..  ..  .. ]        (2)

where each * stands for a normal tridiagonal matrix, and the off-diagonal blocks R_{1;2} and L_{1;2} are purely diagonal matrices (see the previous paper in these proceedings for more detailed information on these types of matrices). We shall play various games on these types of matrices to put them in a form that is amenable to efficient treatment. At this point we try to connect up with the elementary LU or Cholesky factorization theory as rudimentarily explained in the previous section. One attractive way of handling the off-diagonal blocks (which are very simple indeed) is to reorder the matrix by interleaving block column 1 with block column N+1, 2 with N+2, etc. This produces a block matrix of the form:

T = [ D_1  U_1  V_1                             ]
    [ L_1  D_2  U_2  V_2                        ]
    [ M_1  L_2  D_3  U_3  V_3                   ]
    [      ..   ..   ..   ..       ..           ]
    [        M_{N-3}  L_{N-2}  D_{N-1}  U_{N-1} ]
    [                 M_{N-2}  L_{N-1}  D_N     ]        (3)

The block entries do have special structure: the D_i, M_i and V_i are (scalar) tridiagonal of dimension N x N, the L_{2i+1} and U_{2i+1} are diagonal, and the L_{2i} and U_{2i} are simply zero. This special structure is to be exploited at the next hierarchical level down (i.e. at the matrix entry level). A direct extension of the LU or Cholesky factorization technique of the previous section (in the Horn and Schunck case the matrix is actually positive definite; here we only assume that all pivots are invertible, for a little more generality) provides the factorization:

T = [ δ_1               ] [ ε_1  u_1  v_1          ]
    [ l_1  δ_2          ] [      ε_2  u_2  v_2     ]
    [ m_1  l_2  δ_3     ] [           ε_3  u_3  .. ]
    [      ..   ..   .. ] [                ..   .. ]        (4)

Working the recursion out in the classical way gives:

δ_k ε_k = D_k - l_{k-1} u_{k-1} - m_{k-2} v_{k-2}
l_k = (L_k - m_{k-1} u_{k-1}) ε_k^{-1}
m_k = M_k ε_k^{-1}
u_k = δ_k^{-1} (U_k - l_{k-1} v_{k-1})
v_k = δ_k^{-1} V_k        (5)

It is easy to see that these recursions lead to an increase in complexity of the lower-level matrices involved at each step, when the operations are executed exactly. The recursion simply starts up with δ_1 ε_1 = D_1, and since D_1 is tridiagonal, we have δ_1 and ε_1 bidiagonal.
Next we find l_1 = L_1 ε_1^{-1}, which is semi-separable of degree 1 since L_1 is diagonal and ε_1^{-1} is the inverse of a band matrix of degree 1 (i.e. a diagonal plus an adjacent diagonal). The situation with m_1 is a little more dramatic: we have m_1 = M_1 ε_1^{-1}, which is the product of a degree-two semi-separable matrix with one of degree one, hence an ss-matrix of degree three. That means that in the next step δ_2 and ε_2 will be of degree three, and the resulting v_2 and m_2 will both be of degree six. This process keeps escalating linearly if unchecked. This means that at a certain point (e.g. when degree 8 or 10 is reached), the matrices δ_k and ε_k have to be trimmed in degree, using a Hankel rank reduction method as proposed e.g. in [2, 3]. As long as the degree is kept low this can be done efficiently.
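To make the factorization recursions concrete, here is a small numeric sketch of the elementary block tridiagonal recursion of section 3, executed exactly (no degree trimming); the unpivoted-LU helper and the diagonally dominant test blocks are our own assumptions:

```python
import numpy as np

def lu_nopivot(a):
    """Doolittle LU without pivoting: a = l @ u with unit lower-triangular l."""
    n = a.shape[0]
    l, u = np.eye(n), a.astype(float).copy()
    for k in range(n - 1):
        l[k + 1:, k] = u[k + 1:, k] / u[k, k]
        u[k + 1:, k:] -= np.outer(l[k + 1:, k], u[k, k:])
        u[k + 1:, k] = 0.0
    return l, u

rng = np.random.default_rng(1)
b, Nb = 3, 5                                   # block size, number of blocks
Md = [rng.standard_normal((b, b)) + 6 * np.eye(b) for _ in range(Nb)]
Ld = [rng.standard_normal((b, b)) for _ in range(Nb - 1)]
Ud = [rng.standard_normal((b, b)) for _ in range(Nb - 1)]

# delta_k eps_k = M_k - l_{k-1} u_{k-1};  l_k = L_k eps_k^{-1};  u_k = delta_k^{-1} U_k
delta, eps, low, upp = [], [], [], []
for k in range(Nb):
    S = Md[k] if k == 0 else Md[k] - low[-1] @ upp[-1]
    d, e = lu_nopivot(S)
    delta.append(d)
    eps.append(e)
    if k < Nb - 1:
        low.append(Ld[k] @ np.linalg.inv(e))
        upp.append(np.linalg.inv(d) @ Ud[k])

# assemble T, the lower factor (Z^T l + delta) and the upper factor (eps + u Z)
n = b * Nb
T, Fl, Fu = np.zeros((n, n)), np.zeros((n, n)), np.zeros((n, n))
for k in range(Nb):
    s = slice(k * b, (k + 1) * b)
    T[s, s], Fl[s, s], Fu[s, s] = Md[k], delta[k], eps[k]
    if k < Nb - 1:
        s1 = slice((k + 1) * b, (k + 2) * b)
        T[s1, s], T[s, s1] = Ld[k], Ud[k]
        Fl[s1, s], Fu[s, s1] = low[k], upp[k]

print(np.allclose(Fl @ Fu, T))
```

In the hierarchical algorithm the same recursion runs with the blocks themselves stored as (trimmed) semi-separable operators rather than dense arrays.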

An alternative approach for this example is to include a second level of (recursive) hierarchy. Let us define (note the redefined quantities!)

M_k = [ D_{2k-1}  U_{2k-1} ]    U_k = [ V_{2k-1}  0      ]    L_k = [ M_{2k-1}  0      ]
      [ L_{2k-1}  D_{2k}   ]          [ 0         V_{2k} ]          [ 0         M_{2k} ]

then

T = [ M_1  U_1          ]
    [ L_1  M_2  U_2     ]
    [      L_2  M_3  .. ]        (6)

Now the recursions of the previous section are directly applicable. However, it is not clear that this scheme is more advantageous than the previous five-block-diagonals approach.

5 The Moore-Penrose case

The algorithm given in [3], which is derived from [6], can be applied in an adapted form. In the block tridiagonal case the first step, conversion to upper form, is trivial: the main block diagonal is just displaced one block down. The crucial step is the inner-outer factorization for this case. It consists in a block square-root algorithm whose central (k-th) step is as follows:

Q_k [ Y_k A_k   Y_k C_k ]   [ Y_{k+1}   0       ]
    [ B_k       D_k     ] = [ B_{o,k}   D_{o,k} ]

Here Q_k is a unitary matrix which brings the matrix it operates on into row-independent form, i.e. ker(D_{o,k}^T) = 0 and ker(Y_{k+1}^T) = 0. Y_{k+1} is a connecting matrix that is passed from the k-th stage to the next, carrying the residue which might be involved further on in the Moore-Penrose inversion. It plays a crucial role in keeping the computational complexity down. In our case the {A_k, B_k, C_k, D_k} have a special form, namely (with the notation of section 3)

[ A_k  C_k ]   [ 0    0  | I   ]
[          ] = [ U_k  0  | M_k ]
[ B_k  D_k ]   [ 0    I  | L_k ]

The equation to be solved at the k-th step is the determination of an orthogonal transformation matrix Q_k such that (here we assume that the original system is invertible; if that is not the case, then D_{o,k} can be rectangular and we have to require that it is right invertible, zero rows have to be collected separately, and Y_{k+1} must be required to be right invertible as well - we skip the details, see [2])

Q_k [ Y_{k,1} + Y_{k,2} M_k ]   [ 0       ]
    [ L_k                   ] = [ D_{o,k} ]

followed by the update of Y:

[ Y_{k+1,1}   Y_{k+1,2} ] = Q_k [ Y_{k,2} U_k   0 ]

Given that the ss-degree is limited to s, these update operations necessitate a degree-s approximation of the resulting matrices Y_k and Q_k.
The latter will be automatic if the former is kept in check.
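The row compression effected by Q_k can be mimicked with an ordinary complete QR factorization; in this sketch the stacked blocks (the analogues of Y_{k,1} + Y_{k,2} M_k on top of L_k) are random stand-ins of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(5)
m = 4
top = rng.standard_normal((2, m))    # stand-in for Y_{k,1} + Y_{k,2} M_k
bot = rng.standard_normal((m, m))    # stand-in for L_k (invertible case)
X = np.vstack([top, bot])

# an orthogonal Q_k with Q_k X = [0; D_o], D_o having independent rows:
# take a complete QR of X and reverse the rows of Q^T
Q, R = np.linalg.qr(X, mode="complete")
Qk = Q.T[::-1]
compressed = Qk @ X

Do = compressed[-m:]                 # bottom block: square here, full rank
print(np.allclose(compressed[:2], 0), np.linalg.matrix_rank(Do))
```

In the algorithm proper, Q_k and the compressed blocks are of course kept in semi-separable form rather than as dense arrays.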

Because of the hierarchical nature of the matrices, the ss degree of the given data at a given recursive step is the ss degree of Y_k plus the ss degree of the base matrices {A_k, B_k, C_k, D_k}. In the QR factorization we know that we can keep the ss degree of the results the same; the degree is hence increased by two. Keeping it in check requires a model reduction either at each step or, preferably, after a number of steps. In the hierarchical case given by eq. (6), the realization specializes further to

[ A_k  C_k ]   [ 0    0  | I   ]
[          ] = [ U_k  0  | M_k ]        (7)
[ B_k  D_k ]   [ 0    I  | L_k ]

in which the significant entries are the 2 x 2 blocks defined in the previous section - their significant entries in turn are either tridiagonal or diagonal matrices of size N x N. The connecting matrix Y_k is a 2 x 2 block matrix. Given Y_k, the matrices Q_k and Y_{k+1} are recursively defined by

Q_k [ Y_{k;11} + Y_{k;12} M_k ]   [ 0       ]
    [ Y_{k;21} + Y_{k;22} M_k ] = [ 0       ]        (8)
    [ L_k                     ]   [ D_{o,k} ]

in which D_{o,k} is square non-singular (when the original matrix is invertible; right invertible when it is not), and

Y_{k+1} = Q_k [ Y_{k;12} U_k   0 ]
              [ Y_{k;22} U_k   0 ]        (9)

Here again we observe the increase in degree at each step; a degree reduction is called for at regular intervals.

6 The Horn-Schunck case revisited

In the special case of the Horn-Schunck algorithm, a further exploitation of the data is advantageous and leads to a simpler solution. We start out with the observation that the Horn-Schunck equations can conveniently be summarized as

( α^2 [ C 0 ; 0 C ] + [ I_1 ; I_2 ] [ I_1  I_2 ] ) [ x_1 ; x_2 ] = [ I_1 ; I_2 ] I_t        (10)

where

- C is a positive definite, block tridiagonal matrix with blocks that are themselves tridiagonal; in addition, C is Toeplitz block Toeplitz with fixed (data-independent) entries and represents the negative of the Laplace operator;
- α is a scalar constant;
- I_1 and I_2 are diagonal matrices with light intensities as their entries;
- the unknowns x_1 and x_2 are the sought-after components of the flow vector;

- I_t is also a matrix with intensities.

To invert the factor between round brackets we first extract the block diagonals of C, and utilize the inversion rule (I + ab^T)^{-1} = I - a(I + b^T a)^{-1} b^T to obtain

[ x_1 ; x_2 ] = α^{-2} [ C^{-1} I_1 ; C^{-1} I_2 ] ( I + α^{-2} ( I_1 C^{-1} I_1 + I_2 C^{-1} I_2 ) )^{-1} I_t        (11)

Hence the problem is reduced to the computation of C^{-1} I_1 and C^{-1} I_2, in which C is a constant Laplace matrix, and to the inversion of the bracketed middle term. These two problems are substantially different in nature. The first is basically a 2D inverse filtering problem on data that has been passed through a known direct filter, which has a doubly banded, sparse, Toeplitz block Toeplitz structure. While straight 2D filtering methods could be used here, we present a solution in which the inverse filter is realized as a low-degree, hierarchically semi-separable system. The matrix C has the form

C = [ M  U             ]
    [ U  M  U          ]
    [    U  M  U       ]
    [       ..  ..  .. ]
    [           U  M   ]

in which M and U indicate (equal) matrices that are all Toeplitz tridiagonal and positive definite. The inverse of this matrix is best characterized in terms of its Cholesky factors. If C = L L^T, then we have

L^{-1} = [  m_1^{-1}                                                       ]
         [ -m_2^{-1} U µ_1^{-1}              m_2^{-1}                      ]
         [  m_3^{-1} U µ_2^{-1} U µ_1^{-1}  -m_3^{-1} U µ_2^{-1}  m_3^{-1} ]
         [  ..                               ..                   ..       ]

in which the µ_k satisfy the (Cholesky-Riccati) recursion

µ_{k+1} = M - U µ_k^{-1} U^T

with starting value µ_1 = M, and µ_i = m_i m_i^T. L^{-1} and L exhibit a hierarchical semi-separable form which is to be exploited when computing their multiplication with a vector, and each of their entries will itself exhibit an orthodox semi-separable form, provided a semi-separable approximant can be obtained for the µ_i, either by Hankel-norm model reduction or by Schur approximation (see the next paragraph for the latter; the treatment of these approximations has been done in the literature cited).
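As a small experiment (our choice of test blocks: the negative 2D Laplace operator, with M = tridiag(-1, 4, -1) and U = -I; the paper's U is itself tridiagonal, but the recursion is unchanged), the Cholesky-Riccati recursion indeed settles quickly to a steady state:

```python
import numpy as np

N = 16
# representative Toeplitz tridiagonal blocks of the negative 2D Laplace operator
M = 4 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
U = -np.eye(N)

# Cholesky-Riccati recursion: mu_{k+1} = M - U mu_k^{-1} U^T, mu_1 = M
mu = M.copy()
for _ in range(500):
    mu_next = M - U @ np.linalg.solve(mu, U.T)
    done = np.linalg.norm(mu_next - mu) < 1e-13
    mu = mu_next
    if done:
        break

# at steady state mu solves the fixed-point equation
residual = np.linalg.norm(mu - (M - U @ np.linalg.solve(mu, U.T)))
print(residual < 1e-10)
```

The iteration contracts geometrically for these blocks, so only a modest number of steps is needed before the iterates agree to machine precision.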
A further step is taken by assuming that the Cholesky-Riccati recursion stabilizes quickly to a steady-state value (that will be the case in the present example); in that case all µ_k can be taken equal and invariant, and a low-complexity inverse filter will result which can be precomputed and burned into the architecture. For the inversion of the bracketed term the situation is quite different, as the entries of the matrix to be inverted are data dependent. There are two different avenues possible. The first has already been described in the previous sections, since the matrix has a triple band of blocks. The second consists

in using a hierarchical Schur interpolation method as proposed in [7, 8]. We briefly introduce this latter method. Each of the non-zero submatrices is sampled along its main diagonal and a few side diagonals, in a symmetric fashion, producing a banded pattern of sampled data (the pattern figure is not reproduced in this transcription). The hierarchical Schur interpolation will produce a tridiagonal block tridiagonal approximate inverse of this matrix, fitting exactly the same pattern and using only the sampled data to compute it. It is based on the observation that the Schur interpolation technique is additive on the inverse, and that superblocks of 2 x 2 adjacent blocks can be reordered to yield banded matrices; for details we refer to the literature cited.

7 Generalizations

It is an open question whether the algorithms presented here can be generalized to higher orders of hierarchy. We believe they can, provided a new notion of symbolic degree of semi-separability is defined.

References

[1] P. Dewilde and A.-J. van der Veen, On the Hankel-norm approximation of upper-triangular operators and matrices, Integral Equations and Operator Theory, 17(1):1-45, 1993.

[2] P. Dewilde and A.-J. van der Veen, Time-Varying Systems and Computations, Kluwer, 1998, 450 pp.

[3] S. Chandrasekaran, P. Dewilde, M. Gu, T. Pals, A.-J. van der Veen, Fast Stable Solvers for Sequentially Semi-separable Linear Systems of Equations, SIMAX, to appear.

[4] B.K.P. Horn and B.G. Schunck, Determining Optical Flow, Artificial Intelligence, 17:185-203, 1981.

[5] W. Hackbusch, A sparse algorithm based on H-matrices. Part I: Introduction to H-matrices, Computing, 62, 1999.

[6] P. Dewilde and A.-J. van der Veen, Inner-outer factorization and the inversion of locally finite systems of equations, Linear Algebra and its Applications, 313:53-100, 2000.

[7] H. Nelis, Sparse Approximations of Inverse Matrices, Ph.D. Thesis, Delft University of Technology, 1989.

[8] Inversion of partially specified positive definite matrices by inverse scattering, Operator Theory: Advances and Applications, 40, 1989.


More information

Vertex Magic Total Labelings of Complete Graphs

Vertex Magic Total Labelings of Complete Graphs AKCE J. Graphs. Combin., 6, No. 1 (2009), pp. 143-154 Vertex Magic Total Labelings of Complete Graphs H. K. Krishnappa, Kishore Kothapalli and V. Ch. Venkaiah Center for Security, Theory, and Algorithmic

More information

(Creating Arrays & Matrices) Applied Linear Algebra in Geoscience Using MATLAB

(Creating Arrays & Matrices) Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB (Creating Arrays & Matrices) Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 5: Sparse Linear Systems and Factorization Methods Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Sparse

More information

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited.

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited. page v Preface xiii I Basics 1 1 Optimization Models 3 1.1 Introduction... 3 1.2 Optimization: An Informal Introduction... 4 1.3 Linear Equations... 7 1.4 Linear Optimization... 10 Exercises... 12 1.5

More information

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material

More information

AM205: lecture 2. 1 These have been shifted to MD 323 for the rest of the semester.

AM205: lecture 2. 1 These have been shifted to MD 323 for the rest of the semester. AM205: lecture 2 Luna and Gary will hold a Python tutorial on Wednesday in 60 Oxford Street, Room 330 Assignment 1 will be posted this week Chris will hold office hours on Thursday (1:30pm 3:30pm, Pierce

More information

A METHOD TO MODELIZE THE OVERALL STIFFNESS OF A BUILDING IN A STICK MODEL FITTED TO A 3D MODEL

A METHOD TO MODELIZE THE OVERALL STIFFNESS OF A BUILDING IN A STICK MODEL FITTED TO A 3D MODEL A METHOD TO MODELIE THE OVERALL STIFFNESS OF A BUILDING IN A STICK MODEL FITTED TO A 3D MODEL Marc LEBELLE 1 SUMMARY The aseismic design of a building using the spectral analysis of a stick model presents

More information

Numerical Analysis I - Final Exam Matrikelnummer:

Numerical Analysis I - Final Exam Matrikelnummer: Dr. Behrens Center for Mathematical Sciences Technische Universität München Winter Term 2005/2006 Name: Numerical Analysis I - Final Exam Matrikelnummer: I agree to the publication of the results of this

More information

1. NUMBER SYSTEMS USED IN COMPUTING: THE BINARY NUMBER SYSTEM

1. NUMBER SYSTEMS USED IN COMPUTING: THE BINARY NUMBER SYSTEM 1. NUMBER SYSTEMS USED IN COMPUTING: THE BINARY NUMBER SYSTEM 1.1 Introduction Given that digital logic and memory devices are based on two electrical states (on and off), it is natural to use a number

More information

(Sparse) Linear Solvers

(Sparse) Linear Solvers (Sparse) Linear Solvers Ax = B Why? Many geometry processing applications boil down to: solve one or more linear systems Parameterization Editing Reconstruction Fairing Morphing 1 Don t you just invert

More information

A Formalization of Transition P Systems

A Formalization of Transition P Systems Fundamenta Informaticae 49 (2002) 261 272 261 IOS Press A Formalization of Transition P Systems Mario J. Pérez-Jiménez and Fernando Sancho-Caparrini Dpto. Ciencias de la Computación e Inteligencia Artificial

More information

On the Relationships between Zero Forcing Numbers and Certain Graph Coverings

On the Relationships between Zero Forcing Numbers and Certain Graph Coverings On the Relationships between Zero Forcing Numbers and Certain Graph Coverings Fatemeh Alinaghipour Taklimi, Shaun Fallat 1,, Karen Meagher 2 Department of Mathematics and Statistics, University of Regina,

More information

arxiv: v1 [math.na] 26 Jun 2014

arxiv: v1 [math.na] 26 Jun 2014 for spectrally accurate wave propagation Vladimir Druskin, Alexander V. Mamonov and Mikhail Zaslavsky, Schlumberger arxiv:406.6923v [math.na] 26 Jun 204 SUMMARY We develop a method for numerical time-domain

More information

A New Algorithm for Measuring and Optimizing the Manipulability Index

A New Algorithm for Measuring and Optimizing the Manipulability Index DOI 10.1007/s10846-009-9388-9 A New Algorithm for Measuring and Optimizing the Manipulability Index Ayssam Yehia Elkady Mohammed Mohammed Tarek Sobh Received: 16 September 2009 / Accepted: 27 October 2009

More information

What is a Graphon? Daniel Glasscock, June 2013

What is a Graphon? Daniel Glasscock, June 2013 What is a Graphon? Daniel Glasscock, June 2013 These notes complement a talk given for the What is...? seminar at the Ohio State University. The block images in this PDF should be sharp; if they appear

More information

METHOD. Warsaw, Poland. Abstractt. control. periodic. displacements of. nts. In this. Unauthenticated Download Date 1/24/18 12:08 PM

METHOD. Warsaw, Poland. Abstractt. control. periodic. displacements of. nts. In this. Unauthenticated Download Date 1/24/18 12:08 PM Reports on Geodesy and Geoinformatics vol. 98 /215; pages 72-84 DOI: 1.2478/rgg-215-7 IDENTIFICATION OF THE REFERENCE ASEE FOR HORIZONTAL DISPLACEMENTS Y ALL-PAIRS METHOD Mieczysław Kwaśniak Faculty of

More information

A.1 Numbers, Sets and Arithmetic

A.1 Numbers, Sets and Arithmetic 522 APPENDIX A. MATHEMATICS FOUNDATIONS A.1 Numbers, Sets and Arithmetic Numbers started as a conceptual way to quantify count objects. Later, numbers were used to measure quantities that were extensive,

More information

arxiv: v1 [math.co] 25 Sep 2015

arxiv: v1 [math.co] 25 Sep 2015 A BASIS FOR SLICING BIRKHOFF POLYTOPES TREVOR GLYNN arxiv:1509.07597v1 [math.co] 25 Sep 2015 Abstract. We present a change of basis that may allow more efficient calculation of the volumes of Birkhoff

More information

4 Integer Linear Programming (ILP)

4 Integer Linear Programming (ILP) TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that

More information

Algebraic Iterative Methods for Computed Tomography

Algebraic Iterative Methods for Computed Tomography Algebraic Iterative Methods for Computed Tomography Per Christian Hansen DTU Compute Department of Applied Mathematics and Computer Science Technical University of Denmark Per Christian Hansen Algebraic

More information

LAPACK. Linear Algebra PACKage. Janice Giudice David Knezevic 1

LAPACK. Linear Algebra PACKage. Janice Giudice David Knezevic 1 LAPACK Linear Algebra PACKage 1 Janice Giudice David Knezevic 1 Motivating Question Recalling from last week... Level 1 BLAS: vectors ops Level 2 BLAS: matrix-vectors ops 2 2 O( n ) flops on O( n ) data

More information

3.7 Denotational Semantics

3.7 Denotational Semantics 3.7 Denotational Semantics Denotational semantics, also known as fixed-point semantics, associates to each programming language construct a well-defined and rigorously understood mathematical object. These

More information

AH Matrices.notebook November 28, 2016

AH Matrices.notebook November 28, 2016 Matrices Numbers are put into arrays to help with multiplication, division etc. A Matrix (matrices pl.) is a rectangular array of numbers arranged in rows and columns. Matrices If there are m rows and

More information

How to perform HPL on CPU&GPU clusters. Dr.sc. Draško Tomić

How to perform HPL on CPU&GPU clusters. Dr.sc. Draško Tomić How to perform HPL on CPU&GPU clusters Dr.sc. Draško Tomić email: drasko.tomic@hp.com Forecasting is not so easy, HPL benchmarking could be even more difficult Agenda TOP500 GPU trends Some basics about

More information

Algorithms and Data Structures

Algorithms and Data Structures Charles A. Wuethrich Bauhaus-University Weimar - CogVis/MMC June 22, 2017 1/51 Introduction Matrix based Transitive hull All shortest paths Gaussian elimination Random numbers Interpolation and Approximation

More information

Downloaded 11/15/16 to Redistribution subject to SIAM license or copyright; see

Downloaded 11/15/16 to Redistribution subject to SIAM license or copyright; see SIAM J. SCI. COMPUT. Vol. 38, No. 6, pp. C63 C63 c 6 Society for Industrial and Applied Mathematics PRECONDITIONING OF LINEAR LEAST SQUARES BY ROBUST INCOMPLETE FACTORIZATION FOR IMPLICITLY HELD NORMAL

More information

3 Interior Point Method

3 Interior Point Method 3 Interior Point Method Linear programming (LP) is one of the most useful mathematical techniques. Recent advances in computer technology and algorithms have improved computational speed by several orders

More information

CS 598: Communication Cost Analysis of Algorithms Lecture 7: parallel algorithms for QR factorization

CS 598: Communication Cost Analysis of Algorithms Lecture 7: parallel algorithms for QR factorization CS 598: Communication Cost Analysis of Algorithms Lecture 7: parallel algorithms for QR factorization Edgar Solomonik University of Illinois at Urbana-Champaign September 14, 2016 Parallel Householder

More information

Using multifrontal hierarchically solver and HPC systems for 3D Helmholtz problem

Using multifrontal hierarchically solver and HPC systems for 3D Helmholtz problem Using multifrontal hierarchically solver and HPC systems for 3D Helmholtz problem Sergey Solovyev 1, Dmitry Vishnevsky 1, Hongwei Liu 2 Institute of Petroleum Geology and Geophysics SB RAS 1 EXPEC ARC,

More information

Natural Quartic Spline

Natural Quartic Spline Natural Quartic Spline Rafael E Banchs INTRODUCTION This report describes the natural quartic spline algorithm developed for the enhanced solution of the Time Harmonic Field Electric Logging problem As

More information

Dense Matrix Algorithms

Dense Matrix Algorithms Dense Matrix Algorithms Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar To accompany the text Introduction to Parallel Computing, Addison Wesley, 2003. Topic Overview Matrix-Vector Multiplication

More information

A linear algebra processor using Monte Carlo methods

A linear algebra processor using Monte Carlo methods A linear algebra processor using Monte Carlo methods Conference or Workshop Item Accepted Version Plaks, T. P., Megson, G. M., Cadenas Medina, J. O. and Alexandrov, V. N. (2003) A linear algebra processor

More information

Matrix Multiplication

Matrix Multiplication Matrix Multiplication CPS343 Parallel and High Performance Computing Spring 2013 CPS343 (Parallel and HPC) Matrix Multiplication Spring 2013 1 / 32 Outline 1 Matrix operations Importance Dense and sparse

More information

2017 SOLUTIONS (PRELIMINARY VERSION)

2017 SOLUTIONS (PRELIMINARY VERSION) SIMON MARAIS MATHEMATICS COMPETITION 07 SOLUTIONS (PRELIMINARY VERSION) This document will be updated to include alternative solutions provided by contestants, after the competition has been mared. Problem

More information

Chapter 1 A New Parallel Algorithm for Computing the Singular Value Decomposition

Chapter 1 A New Parallel Algorithm for Computing the Singular Value Decomposition Chapter 1 A New Parallel Algorithm for Computing the Singular Value Decomposition Nicholas J. Higham Pythagoras Papadimitriou Abstract A new method is described for computing the singular value decomposition

More information

Chapter 1 BACKGROUND

Chapter 1 BACKGROUND Chapter BACKGROUND. Introduction In many areas of mathematics and in applications of mathematics, it is often necessary to be able to infer information about some function using only a limited set of sample

More information

Tracking in image sequences

Tracking in image sequences CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Tracking in image sequences Lecture notes for the course Computer Vision Methods Tomáš Svoboda svobodat@fel.cvut.cz March 23, 2011 Lecture notes

More information

Alessandro Del Ponte, Weijia Ran PAD 637 Week 3 Summary January 31, Wasserman and Faust, Chapter 3: Notation for Social Network Data

Alessandro Del Ponte, Weijia Ran PAD 637 Week 3 Summary January 31, Wasserman and Faust, Chapter 3: Notation for Social Network Data Wasserman and Faust, Chapter 3: Notation for Social Network Data Three different network notational schemes Graph theoretic: the most useful for centrality and prestige methods, cohesive subgroup ideas,

More information

Solution of Rectangular Interval Games Using Graphical Method

Solution of Rectangular Interval Games Using Graphical Method Tamsui Oxford Journal of Mathematical Sciences 22(1 (2006 95-115 Aletheia University Solution of Rectangular Interval Games Using Graphical Method Prasun Kumar Nayak and Madhumangal Pal Department of Applied

More information

Methods of solving sparse linear systems. Soldatenko Oleg SPbSU, Department of Computational Physics

Methods of solving sparse linear systems. Soldatenko Oleg SPbSU, Department of Computational Physics Methods of solving sparse linear systems. Soldatenko Oleg SPbSU, Department of Computational Physics Outline Introduction Sherman-Morrison formula Woodbury formula Indexed storage of sparse matrices Types

More information

Finite Element Analysis Prof. Dr. B. N. Rao Department of Civil Engineering Indian Institute of Technology, Madras. Lecture - 36

Finite Element Analysis Prof. Dr. B. N. Rao Department of Civil Engineering Indian Institute of Technology, Madras. Lecture - 36 Finite Element Analysis Prof. Dr. B. N. Rao Department of Civil Engineering Indian Institute of Technology, Madras Lecture - 36 In last class, we have derived element equations for two d elasticity problems

More information

Optimum Array Processing

Optimum Array Processing Optimum Array Processing Part IV of Detection, Estimation, and Modulation Theory Harry L. Van Trees WILEY- INTERSCIENCE A JOHN WILEY & SONS, INC., PUBLICATION Preface xix 1 Introduction 1 1.1 Array Processing

More information

Approximate Linear Programming for Average-Cost Dynamic Programming

Approximate Linear Programming for Average-Cost Dynamic Programming Approximate Linear Programming for Average-Cost Dynamic Programming Daniela Pucci de Farias IBM Almaden Research Center 65 Harry Road, San Jose, CA 51 pucci@mitedu Benjamin Van Roy Department of Management

More information

Point Lattices in Computer Graphics and Visualization how signal processing may help computer graphics

Point Lattices in Computer Graphics and Visualization how signal processing may help computer graphics Point Lattices in Computer Graphics and Visualization how signal processing may help computer graphics Dimitri Van De Ville Ecole Polytechnique Fédérale de Lausanne Biomedical Imaging Group dimitri.vandeville@epfl.ch

More information

Generalized Network Flow Programming

Generalized Network Flow Programming Appendix C Page Generalized Network Flow Programming This chapter adapts the bounded variable primal simplex method to the generalized minimum cost flow problem. Generalized networks are far more useful

More information

MAT 275 Laboratory 2 Matrix Computations and Programming in MATLAB

MAT 275 Laboratory 2 Matrix Computations and Programming in MATLAB MATLAB sessions: Laboratory MAT 75 Laboratory Matrix Computations and Programming in MATLAB In this laboratory session we will learn how to. Create and manipulate matrices and vectors.. Write simple programs

More information

MA651 Topology. Lecture 4. Topological spaces 2

MA651 Topology. Lecture 4. Topological spaces 2 MA651 Topology. Lecture 4. Topological spaces 2 This text is based on the following books: Linear Algebra and Analysis by Marc Zamansky Topology by James Dugundgji Fundamental concepts of topology by Peter

More information

I. INTRODUCTION. 2 matrix, integral-equation-based methods, matrix inversion.

I. INTRODUCTION. 2 matrix, integral-equation-based methods, matrix inversion. 2404 IEEE TRANSACTIONS ON MICROWAVE THEORY AND TECHNIQUES, VOL. 59, NO. 10, OCTOBER 2011 Dense Matrix Inversion of Linear Complexity for Integral-Equation-Based Large-Scale 3-D Capacitance Extraction Wenwen

More information

Addition/Subtraction flops. ... k k + 1, n (n k)(n k) (n k)(n + 1 k) n 1 n, n (1)(1) (1)(2)

Addition/Subtraction flops. ... k k + 1, n (n k)(n k) (n k)(n + 1 k) n 1 n, n (1)(1) (1)(2) 1 CHAPTER 10 101 The flop counts for LU decomposition can be determined in a similar fashion as was done for Gauss elimination The major difference is that the elimination is only implemented for the left-hand

More information

Course Number 432/433 Title Algebra II (A & B) H Grade # of Days 120

Course Number 432/433 Title Algebra II (A & B) H Grade # of Days 120 Whitman-Hanson Regional High School provides all students with a high- quality education in order to develop reflective, concerned citizens and contributing members of the global community. Course Number

More information

arxiv: v1 [math.co] 17 Jan 2014

arxiv: v1 [math.co] 17 Jan 2014 Regular matchstick graphs Sascha Kurz Fakultät für Mathematik, Physik und Informatik, Universität Bayreuth, Germany Rom Pinchasi Mathematics Dept., Technion Israel Institute of Technology, Haifa 2000,

More information

Revision of the SolidWorks Variable Pressure Simulation Tutorial J.E. Akin, Rice University, Mechanical Engineering. Introduction

Revision of the SolidWorks Variable Pressure Simulation Tutorial J.E. Akin, Rice University, Mechanical Engineering. Introduction Revision of the SolidWorks Variable Pressure Simulation Tutorial J.E. Akin, Rice University, Mechanical Engineering Introduction A SolidWorks simulation tutorial is just intended to illustrate where to

More information

Geometric transformations assign a point to a point, so it is a point valued function of points. Geometric transformation may destroy the equation

Geometric transformations assign a point to a point, so it is a point valued function of points. Geometric transformation may destroy the equation Geometric transformations assign a point to a point, so it is a point valued function of points. Geometric transformation may destroy the equation and the type of an object. Even simple scaling turns a

More information

Matrices. A Matrix (This one has 2 Rows and 3 Columns) To add two matrices: add the numbers in the matching positions:

Matrices. A Matrix (This one has 2 Rows and 3 Columns) To add two matrices: add the numbers in the matching positions: Matrices A Matrix is an array of numbers: We talk about one matrix, or several matrices. There are many things we can do with them... Adding A Matrix (This one has 2 Rows and 3 Columns) To add two matrices:

More information

This chapter explains two techniques which are frequently used throughout

This chapter explains two techniques which are frequently used throughout Chapter 2 Basic Techniques This chapter explains two techniques which are frequently used throughout this thesis. First, we will introduce the concept of particle filters. A particle filter is a recursive

More information

Matrix Multiplication

Matrix Multiplication Matrix Multiplication CPS343 Parallel and High Performance Computing Spring 2018 CPS343 (Parallel and HPC) Matrix Multiplication Spring 2018 1 / 32 Outline 1 Matrix operations Importance Dense and sparse

More information

Real-Time Shape Editing using Radial Basis Functions

Real-Time Shape Editing using Radial Basis Functions Real-Time Shape Editing using Radial Basis Functions, Leif Kobbelt RWTH Aachen Boundary Constraint Modeling Prescribe irregular constraints Vertex positions Constrained energy minimization Optimal fairness

More information