COMPUTER OPTIMIZATION


Storage Optimization: Since the normal matrix is symmetric, store only half of it in a vector and develop an indexing scheme to map the upper or lower half to the vector.

Vector Storage (figure)

PLATE 22-1

INDEXING SCHEMES

Upper triangular matrix to vector:
    Index(i, j) = 1/2 [ j (j - 1) ] + i
Example, indexing the (2,3) element:
    Index(2, 3) = 1/2 [ 3 (3 - 1) ] + 2 = 5

Lower triangular matrix to vector:
    Index(i, j) = 1/2 [ i (i - 1) ] + j
Example, indexing the (3,2) element:
    Index(3, 2) = 1/2 [ 3 (3 - 1) ] + 2 = 5

PLATE 22-2

INDEXING SCHEMES

Use a MAPPING TABLE to reduce the computation of indices.

Upper triangular storage: VI is a mapping table giving the storage location immediately prior to the first element of each column.
Lower triangular storage: VI is a mapping table giving the storage location immediately prior to the first element of each row.

Computing a mapping table:
    VI(1) = 0
    for i going from 2 to the number of unknowns
        VI(i) = VI(i-1) + i - 1

PLATE 22-3

INDEXING USING A MAPPING TABLE

Upper triangular matrix:
    Index(i, j) = VI(j) + i
Example, indexing the (2,3) element:
    Index(2, 3) = VI(3) + 2 = 3 + 2 = 5

Lower triangular matrix:
    Index(i, j) = VI(i) + j
Example, indexing the (3,2) element:
    Index(3, 2) = VI(3) + 2 = 3 + 2 = 5

PLATE 22-4

TIME COMPARISONS

Method A: using Function A
    for i from 1 to 1000
        for j from i to 1000
            k = Index(i, j)
    Function A: Index = j*(j-1)/2 + i

Method B: using Function B
    for i from 1 to 1000
        for j from i to 1000
            k = Index(i, j)
    Function B: Index = VI(j) + i

Method C: direct access using the mapping table
    for i from 1 to 1000
        for j from i to 1000
            k = VI(j) + i

Method D: direct access to a full matrix element
    for i from 1 to 1000
        for j from 1 to 1000
            k = A[i, j]

Method   Time (sec)   Extra storage
  A          …          …
  B          …          … bytes per element
  C          …          … bytes per element
  D          …          full matrix required

Note: Method D uses direct access to a full matrix and thus is the fastest at accessing an element, but the most costly in storage. Methods A, B, and C require storage of only the upper triangle of the normal matrix, and use of a mapping table is almost as fast as direct access.

PLATE 22-5

PLATE 22-6 (figure)

DIRECT FORMATION OF THE UPPER-TRIANGULAR NORMAL MATRIX

Add a single row of the A matrix directly to the normal matrix and the constants matrix.

ADVANTAGES:
Avoids formation of the intermediate matrices A, A^T, W, L, and A^T W.
Faster times in development of the normal equations.

Steps:
1. Zero the normal and constants matrices.
2. Zero, then fill, a single row of the coefficient matrix.
3. Based on the values of that row, add the proper value to the appropriate normal and constants elements.
4. Repeat Steps 2 and 3 for all observations.

PLATE 22-7

COMPUTER CODE FOR MATRIX FORMATION (BASIC)

VI is a mapping table,
N is the normal matrix (A^T W A),
C is the constants matrix (A^T W L),
W is the weight matrix, and
unknown is the number of unknown parameters.

For i = 1 To unknown
    ix = VI(i): ixi = ix + i
    N(ixi) = N(ixi) + A(i) ^ 2 * W(i)
    C(i) = C(i) + A(i) * W(i) * L(i)
    For j = i + 1 To unknown
        N(VI(j) + i) = N(VI(j) + i) + A(i) * W(i) * A(j)
    Next j
Next i

PLATE 22-8

COMPUTER CODE FOR MATRIX FORMATION (C)

vi is a mapping table,
n is the normal matrix (A^T W A),
c is the constants matrix (A^T W L),
w is the weight matrix, and
unknown is the number of unknown parameters.

for (i = 1; i <= unknown; i++) {
    ix = vi[i];
    ixi = ix + i;
    n[ixi] += a[i] * a[i] * w[i];
    c[i] += a[i] * w[i] * l[i];
    for (j = i + 1; j <= unknown; j++)
        n[vi[j] + i] += a[i] * w[i] * a[j];
} // for i

PLATE 22-9

COMPUTER CODE FOR MATRIX FORMATION (FORTRAN)

VI is a mapping table,
N is the normal matrix (A^T W A),
C is the constants matrix (A^T W L),
W is the weight matrix, and
unknown is the number of unknown parameters.

      Do 100 i = 1, unknown
        ix = VI(i)
        ixi = ix + i
        N(ixi) = N(ixi) + A(i)**2 * W(i)
        C(i) = C(i) + A(i) * W(i) * L(i)
        Do 100 j = i+1, unknown
          N(VI(j)+i) = N(VI(j)+i) + A(i) * W(i) * A(j)
  100 Continue

PLATE 22-10

COMPUTER CODE FOR MATRIX FORMATION (PASCAL)

VI is a mapping table,
N is the normal matrix (A^T W A),
C is the constants matrix (A^T W L),
W is the weight matrix, and
unknown is the number of unknown parameters.

For i := 1 to unknown do
  begin
    ix := VI[i];
    ixi := ix + i;
    N[ixi] := N[ixi] + Sqr(A[i]) * W[i];
    C[i] := C[i] + A[i] * W[i] * L[i];
    For j := i+1 to unknown do
      N[VI[j]+i] := N[VI[j]+i] + A[i] * W[i] * A[j]
  End; { for i }

PLATE 22-11

CHOLESKY DECOMPOSITION

A method of solving a system of equations that requires fewer computations than a full inverse of the matrix.

Since the normal matrix is symmetric (Hermitian) and positive definite, it can be written as the product of lower (L) and upper (U) triangular matrices, where L = U^T:

    N = LU = L L^T = U^T U

The resultant decomposed matrix can overwrite the original normal matrix stored as a vector.

Pseudo code:
    for i = 1, 2, ..., number of unknowns
        l(i,i) = sqrt[ n(i,i) - sum(k = 1 to i-1) l(i,k)^2 ]
        for j = i+1, i+2, ..., number of unknowns
            l(j,i) = [ n(j,i) - sum(k = 1 to i-1) l(i,k) l(j,k) ] / l(i,i)

PLATE 22-12

CHOLESKY DECOMPOSITION (BASIC)

s is a summing variable.

FOR i = 1 TO unknown
    ix = VI(i): ixi = ix + i: s = 0#
    FOR k = 1 TO i - 1
        s = s + N(ix + k) ^ 2
    NEXT k
    N(ixi) = SQR(N(ixi) - s)
    FOR j = i + 1 TO unknown
        s = 0#: jx = VI(j)
        FOR k = 1 TO i - 1
            s = s + N(jx + k) * N(ix + k)
        NEXT k
        N(jx + i) = (N(jx + i) - s) / N(ixi)
    NEXT j
NEXT i

PLATE 22-13

CHOLESKY DECOMPOSITION (C)

s is a summing variable.

for (i = 1; i <= unknown; i++) {
    ix = vi[i];
    ixi = ix + i;
    s = 0.0;
    for (k = 1; k < i; k++)
        s += n[ix + k] * n[ix + k];
    n[ixi] = sqrt(n[ixi] - s);
    for (j = i + 1; j <= unknown; j++) {
        s = 0.0;
        jx = vi[j];
        for (k = 1; k < i; k++)
            s += n[jx + k] * n[ix + k];
        n[jx + i] = (n[jx + i] - s) / n[ixi];
    } // for j
} // for i

PLATE 22-14

CHOLESKY DECOMPOSITION (FORTRAN)

s is a summing variable.

      Do 30 i = 1, Unknown
        ix = VI(i)
        i1 = i - 1
        s = 0.0
        Do 10 k = 1, i1
   10     s = s + N(ix+k)**2
        N(ix+i) = Sqrt( N(ix+i) - s )
        Do 30 j = i+1, Unknown
          s = 0.0
          Do 20 k = 1, i1
   20       s = s + N( VI(j)+k ) * N( ix+k )
          N(VI(j)+i) = ( N(VI(j)+i) - s ) / N(ix+i)
   30 Continue

PLATE 22-15

CHOLESKY DECOMPOSITION (PASCAL)

s is a summing variable.

For i := 1 to unknown do
  Begin
    ix := VI[i];
    ixi := ix + i;
    S := 0;
    For k := 1 to Pred(i) do
      S := S + Sqr( N[ix+k] );
    N[ixi] := Sqrt( N[ixi] - S );
    For j := Succ(i) to unknown do
      Begin
        S := 0;
        jx := VI[j];
        For k := 1 to Pred(i) do
          S := S + N[jx+k] * N[ix+k];
        N[jx+i] := ( N[jx+i] - S ) / N[ixi]
      End; { for j }
  End; { for i }

PLATE 22-16

SOLUTION OF A TRIANGULAR MATRIX SYSTEM

With the normal matrix decomposed as N = L U (where U = L^T), the system N X = C becomes:

    L U X = C

This can be written as the pair:

    L Y = C    and    U X = Y

Solve for Y using forward substitution. Then solve for X using backward substitution.

PLATE 22-17

FORWARD SUBSTITUTION (Pseudo Code)

1. Solve for y(1) as:
       y(1) = c(1) / l(1,1)
2. Substitute this value into row 2, and compute y(2) as:
       y(2) = [ c(2) - l(2,1) y(1) ] / l(2,2)
3. Repeat this procedure until all values for y are found, using the algorithm:
       y(i) = [ c(i) - sum(k = 1 to i-1) l(i,k) y(k) ] / l(i,i)

PLATE 22-18

BACKWARD SUBSTITUTION (Pseudo Code)

1. Compute x(n) as:
       x(n) = y(n) / l(n,n)
2. Solve for x(n-1) as:
       x(n-1) = [ y(n-1) - l(n,n-1) x(n) ] / l(n-1,n-1)
3. Repeat this procedure until all unknowns are computed, using the algorithm:
       x(k) = [ y(k) - sum(j = k+1 to n) l(j,k) x(j) ] / l(k,k)

PLATE 22-19

SUBSTITUTION PROCESS (BASIC)

Rem Forward Substitution
For i = 1 To Unknown
    ix = VI(i)
    C(i) = C(i) / N(ix + i)
    For j = i + 1 To Unknown
        C(j) = C(j) - N(VI(j) + i) * C(i)
    Next j
Next i

Rem Backward Substitution
For i = Unknown To 1 Step -1
    ix = VI(i)
    For j = Unknown To i + 1 Step -1
        C(i) = C(i) - N(VI(j) + i) * C(j)
    Next j
    C(i) = C(i) / N(ix + i)
Next i

PLATE 22-20

SUBSTITUTION PROCESS (C)

// Forward substitution
for (i = 1; i <= unknown; i++) {
    ix = vi[i];
    c[i] = c[i] / n[ix + i];
    for (j = i + 1; j <= unknown; j++)
        c[j] -= n[vi[j] + i] * c[i];
} // for i

// Backward substitution
for (i = unknown; i >= 1; i--) {
    ix = vi[i];
    for (j = unknown; j >= i + 1; j--)
        c[i] -= n[vi[j] + i] * c[j];
    c[i] = c[i] / n[ix + i];
} // for i

PLATE 22-21

SUBSTITUTION PROCESS (FORTRAN)

C     Forward Substitution
      Do 100 i = 1, Unknown
        C(i) = C(i) / N(VI(i)+i)
        Do 100 j = i+1, Unknown
          C(j) = C(j) - N(VI(j)+i) * C(i)
  100 Continue

C     Backward Substitution
      Do 110 i = Unknown, 1, -1
        Do 120 j = Unknown, i+1, -1
  120     C(i) = C(i) - N(VI(j)+i) * C(j)
        C(i) = C(i) / N(VI(i)+i)
  110 Continue

PLATE 22-22

SUBSTITUTION PROCESS (PASCAL)

{Forward Substitution}
For i := 1 to Unknown Do
  Begin
    C[i] := C[i] / N[VI[i]+i];
    For j := i+1 to Unknown Do
      C[j] := C[j] - N[VI[j]+i] * C[i];
  End; { for i }

{Backward Substitution}
For i := Unknown DownTo 1 do
  Begin
    For j := Unknown DownTo i+1 Do
      C[i] := C[i] - N[VI[j]+i] * C[j];
    C[i] := C[i] / N[VI[i]+i];
  End; { for i }

PLATE 22-23

COMPUTING THE INVERSE MATRIX FROM A CHOLESKY FACTOR

The inverse matrix is necessary to compute post-adjustment statistics.

The inverse of the full matrix is found by:
1. Computing the inverse of the Cholesky factor, calling it S.
2. The inverse of the full matrix is then S S^T.

Pseudo code for computing the inverse of the Cholesky factor:
    for i going from the number of unknowns down to 1
        for k going from the number of unknowns down to i+1
            s = 0
            for j going from i+1 to k
                s = s + N(i,j) * N(j,k)
            N(i,k) = -s / N(i,i)
        N(i,i) = 1 / N(i,i)

PLATE 22-24

CODE FOR COMPUTING INVERSE (BASIC)

Rem Inverse of the Cholesky Factor
For i = Unknown To 1 Step -1
    ixi = VI(i) + i
    For k = Unknown To i + 1 Step -1
        S = 0!
        For j = i + 1 To k
            S = S + N(VI(j) + i) * N(VI(k) + j)
        Next j
        N(VI(k) + i) = -S / N(ixi)
    Next k
    N(ixi) = 1.0 / N(ixi)
Next i

Rem Inverse * Transpose of Inverse
For j = 1 To Unknown
    For k = j To Unknown
        S = 0!
        For i = k To Unknown
            S = S + N(VI(i) + k) * N(VI(i) + j)
        Next i
        N(VI(k) + j) = S
    Next k
Next j

PLATE 22-25

CODE FOR COMPUTING INVERSE (C)

// Inverse of the Cholesky factor
for (i = unknown; i >= 1; i--) {
    ixi = vi[i] + i;
    for (k = unknown; k >= i + 1; k--) {
        s = 0.0;
        for (j = i + 1; j <= k; j++)
            s += n[vi[j] + i] * n[vi[k] + j];
        n[vi[k] + i] = -s / n[ixi];
    } // for k
    n[ixi] = 1.0 / n[ixi];
} // for i

// Inverse * transpose of inverse
for (j = 1; j <= unknown; j++) {
    for (k = j; k <= unknown; k++) {
        s = 0.0;
        for (i = k; i <= unknown; i++)
            s += n[vi[i] + k] * n[vi[i] + j];
        n[vi[k] + j] = s;
    } // for k
} // for j

PLATE 22-26

CODE FOR COMPUTING INVERSE (FORTRAN)

C     Inverse of the Cholesky Factor
      Do 10 i = Unknown, 1, -1
        ixi = VI(i) + i
        Do 11 k = Unknown, i+1, -1
          S = 0.0
          Do 12 j = i+1, k
   12       S = S + N(VI(j)+i) * N(VI(k)+j)
          N(VI(k)+i) = -S / N(ixi)
   11   Continue
        N(ixi) = 1.0 / N(ixi)
   10 Continue

C     Inverse * Transpose of Inverse
      Do 20 j = 1, Unknown
        Do 21 k = j, Unknown
          S = 0.0
          Do 22 i = k, Unknown
   22       S = S + N(VI(i)+k) * N(VI(i)+j)
          N(VI(k)+j) = S
   21   Continue
   20 Continue

PLATE 22-27

CODE FOR COMPUTING INVERSE (PASCAL)

{Inverse of the Cholesky Factor}
For i := Unknown DownTo 1 do
  Begin
    ixi := VI[i] + i;
    For k := Unknown DownTo i+1 Do
      Begin
        S := 0.0;
        For j := i+1 To k Do
          S := S + N[VI[j]+i] * N[VI[k]+j];
        N[VI[k]+i] := -S / N[ixi];
      End; { For k }
    N[ixi] := 1.0 / N[ixi];
  End; { For i }

{Inverse * Transpose of Inverse}
For j := 1 to Unknown Do
  Begin
    For k := j to Unknown Do
      Begin
        S := 0.0;
        For i := k to Unknown Do
          S := S + N[VI[i]+k] * N[VI[i]+j];
        N[VI[k]+j] := S
      End; { For k }
  End; { For j }

PLATE 22-28

SPARSENESS AND OPTIMIZATION

EXAMPLE NETWORK (figure): All lines have measured lengths; angles are measured as shown. The accompanying diagram marks the non-zero elements in the normal matrix. (Note: the x and y elements of each station are combined.)

PLATE 22-29

SAME NORMAL MATRIX WITH THE STATIONS IN A DIFFERENT ORDER (figure)

Note that the known off-diagonal zero elements now lie to the left of the non-zero elements.

PLATE 22-30

OPTIMIZING CHOLESKY DECOMPOSITION TO TAKE ADVANTAGE OF REORDERING

Method by which Cholesky accesses the matrix (figure legend): not accessed; computed and accessed; currently accessed; yet to be accessed.

For column i = 5 of the reordered matrix:

Full operations:
1) s = n51^2 + n52^2 + ... + n54^2
2) n55 = square root of (n55 - s)
3) j = 6:  a) s = n61*n51 + n62*n52 + ... + n64*n54
           b) n65 = (n65 - s) / n55
   j = 7:  a) s = n71*n51 + n72*n52 + ... + n74*n54
           b) n75 = (n75 - s) / n55
   j = 8:  a) s = n81*n51 + n82*n52 + ... + n84*n54
           b) n85 = (n85 - s) / n55

Optimized operations (known zeros skipped):
1) s = n53^2 + n54^2
2) n55 = square root of (n55 - s)
3) j = 6: no operations necessary
   j = 7: no operations necessary
   j = 8:  a) s = n83*n53 + n84*n54
           b) n85 = (n85 - s) / n55

PLATE 22-31

REORDERING

Sta   Connectivity    Deg
 1    …,3,…            …
 2    …,3,…            …
 3    …,2,4,7,…        …
 4    …,5,6,7,…        …
 5    …,6,…            …
 6    …,5,…            …
 7    …,4,5,6,…        …
 8    …,2,3,4,…        …

Deg indicates the number of stations connected by observations to the station (Sta).

Steps:
1. Choose the station with the lowest degree.
2. Remove that station from the connectivity lists and adjust the degree of each remaining station.
3. Repeat Steps 1 and 2 until all stations are used.

PLATE 22-32


More information

and n is an even positive integer, then A n is a

and n is an even positive integer, then A n is a 1) If A is a skew symmetric matrix and n is an even positive integer, then A n is a a) Symmetric Matrix b) Skew Symmetric Matrix c) Diagonal Matrix d) Scalar a Matrix Correct answer is (a) a) 5 b) 3 c)

More information

AH Matrices.notebook November 28, 2016

AH Matrices.notebook November 28, 2016 Matrices Numbers are put into arrays to help with multiplication, division etc. A Matrix (matrices pl.) is a rectangular array of numbers arranged in rows and columns. Matrices If there are m rows and

More information

A Detailed Look into Forward and Inverse Kinematics

A Detailed Look into Forward and Inverse Kinematics A Detailed Look into Forward and Inverse Kinematics Kinematics = Study of movement, motion independent of the underlying forces that cause them September 19-26, 2016 Kinematics Preliminaries Preliminaries:

More information

ECE 158A - Data Networks

ECE 158A - Data Networks ECE 158A - Data Networks Homework 2 - due Tuesday Nov 5 in class Problem 1 - Clustering coefficient and diameter In this problem, we will compute the diameter and the clustering coefficient of a set of

More information

Chapter 8 Dense Matrix Algorithms

Chapter 8 Dense Matrix Algorithms Chapter 8 Dense Matrix Algorithms (Selected slides & additional slides) A. Grama, A. Gupta, G. Karypis, and V. Kumar To accompany the text Introduction to arallel Computing, Addison Wesley, 23. Topic Overview

More information

CS 770G - Parallel Algorithms in Scientific Computing

CS 770G - Parallel Algorithms in Scientific Computing CS 770G - Parallel lgorithms in Scientific Computing Dense Matrix Computation II: Solving inear Systems May 28, 2001 ecture 6 References Introduction to Parallel Computing Kumar, Grama, Gupta, Karypis,

More information

September, a 11 x 1 +a 12 x a 1n x n =b 1 a 21 x 1 +a 22 x a 2n x n =b 2.. (1) a n 1 x 1 +a n2 x a nn x n = b n.

September, a 11 x 1 +a 12 x a 1n x n =b 1 a 21 x 1 +a 22 x a 2n x n =b 2.. (1) a n 1 x 1 +a n2 x a nn x n = b n. September, 1998 PHY307F/407F - Computational Physics Background Material for the Exercise - Solving Systems of Linear Equations David Harrison This document discusses techniques to solve systems of linear

More information

Lecture 9 - Matrix Multiplication Equivalences and Spectral Graph Theory 1

Lecture 9 - Matrix Multiplication Equivalences and Spectral Graph Theory 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanfordedu) February 6, 2018 Lecture 9 - Matrix Multiplication Equivalences and Spectral Graph Theory 1 In the

More information

Parallelizing LU Factorization

Parallelizing LU Factorization Parallelizing LU Factorization Scott Ricketts December 3, 2006 Abstract Systems of linear equations can be represented by matrix equations of the form A x = b LU Factorization is a method for solving systems

More information

Numerical Algorithms

Numerical Algorithms Chapter 10 Slide 464 Numerical Algorithms Slide 465 Numerical Algorithms In textbook do: Matrix multiplication Solving a system of linear equations Slide 466 Matrices A Review An n m matrix Column a 0,0

More information

ME305: Introduction to System Dynamics

ME305: Introduction to System Dynamics ME305: Introduction to System Dynamics Using MATLAB MATLAB stands for MATrix LABoratory and is a powerful tool for general scientific and engineering computations. Combining with user-friendly graphics

More information

Algorithms: Dynamic Programming

Algorithms: Dynamic Programming Algorithms: Dynamic Programming Amotz Bar-Noy CUNY Spring 2012 Amotz Bar-Noy (CUNY) Dynamic Programming Spring 2012 1 / 58 Dynamic Programming General Strategy: Solve recursively the problem top-down based

More information

Module 5.5: nag sym bnd lin sys Symmetric Banded Systems of Linear Equations. Contents

Module 5.5: nag sym bnd lin sys Symmetric Banded Systems of Linear Equations. Contents Module Contents Module 5.5: nag sym bnd lin sys Symmetric Banded Systems of nag sym bnd lin sys provides a procedure for solving real symmetric or complex Hermitian banded systems of linear equations with

More information

BLAS: Basic Linear Algebra Subroutines I

BLAS: Basic Linear Algebra Subroutines I BLAS: Basic Linear Algebra Subroutines I Most numerical programs do similar operations 90% time is at 10% of the code If these 10% of the code is optimized, programs will be fast Frequently used subroutines

More information

Computational Methods CMSC/AMSC/MAPL 460. Linear Systems, LU Decomposition, Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Linear Systems, LU Decomposition, Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Linear Systems, LU Decomposition, Ramani Duraiswami, Dept. of Computer Science Matrix norms Can be defined using corresponding vector norms Two norm One norm Infinity

More information

Systems of Linear Equations and their Graphical Solution

Systems of Linear Equations and their Graphical Solution Proceedings of the World Congress on Engineering and Computer Science Vol I WCECS, - October,, San Francisco, USA Systems of Linear Equations and their Graphical Solution ISBN: 98-988-95-- ISSN: 8-958

More information

Natural Quartic Spline

Natural Quartic Spline Natural Quartic Spline Rafael E Banchs INTRODUCTION This report describes the natural quartic spline algorithm developed for the enhanced solution of the Time Harmonic Field Electric Logging problem As

More information

MATH 423 Linear Algebra II Lecture 17: Reduced row echelon form (continued). Determinant of a matrix.

MATH 423 Linear Algebra II Lecture 17: Reduced row echelon form (continued). Determinant of a matrix. MATH 423 Linear Algebra II Lecture 17: Reduced row echelon form (continued). Determinant of a matrix. Row echelon form A matrix is said to be in the row echelon form if the leading entries shift to the

More information

CS 775: Advanced Computer Graphics. Lecture 3 : Kinematics

CS 775: Advanced Computer Graphics. Lecture 3 : Kinematics CS 775: Advanced Computer Graphics Lecture 3 : Kinematics Traditional Cell Animation, hand drawn, 2D Lead Animator for keyframes http://animation.about.com/od/flashanimationtutorials/ss/flash31detanim2.htm

More information

Problem Set 4. Assigned: March 23, 2006 Due: April 17, (6.882) Belief Propagation for Segmentation

Problem Set 4. Assigned: March 23, 2006 Due: April 17, (6.882) Belief Propagation for Segmentation 6.098/6.882 Computational Photography 1 Problem Set 4 Assigned: March 23, 2006 Due: April 17, 2006 Problem 1 (6.882) Belief Propagation for Segmentation In this problem you will set-up a Markov Random

More information

CSC Design and Analysis of Algorithms

CSC Design and Analysis of Algorithms CSC : Lecture 7 CSC - Design and Analysis of Algorithms Lecture 7 Transform and Conquer I Algorithm Design Technique CSC : Lecture 7 Transform and Conquer This group of techniques solves a problem by a

More information

Parallel Numerical Algorithms

Parallel Numerical Algorithms Parallel Numerical Algorithms Chapter 3 Dense Linear Systems Section 3.3 Triangular Linear Systems Michael T. Heath and Edgar Solomonik Department of Computer Science University of Illinois at Urbana-Champaign

More information

SCIE 4101, Spring Math Review Packet #4 Algebra II (Part 1) Notes

SCIE 4101, Spring Math Review Packet #4 Algebra II (Part 1) Notes SCIE 4101, Spring 011 Miller Math Review Packet #4 Algebra II (Part 1) Notes Matrices A matrix is a rectangular arra of numbers. The order of a matrix refers to the number of rows and columns the matrix

More information

Independent systems consist of x

Independent systems consist of x 5.1 Simultaneous Linear Equations In consistent equations, *Find the solution to each system by graphing. 1. y Independent systems consist of x Three Cases: A. consistent and independent 2. y B. inconsistent

More information

COMP 558 lecture 19 Nov. 17, 2010

COMP 558 lecture 19 Nov. 17, 2010 COMP 558 lecture 9 Nov. 7, 2 Camera calibration To estimate the geometry of 3D scenes, it helps to know the camera parameters, both external and internal. The problem of finding all these parameters is

More information

CSC Design and Analysis of Algorithms. Lecture 7. Transform and Conquer I Algorithm Design Technique. Transform and Conquer

CSC Design and Analysis of Algorithms. Lecture 7. Transform and Conquer I Algorithm Design Technique. Transform and Conquer // CSC - Design and Analysis of Algorithms Lecture 7 Transform and Conquer I Algorithm Design Technique Transform and Conquer This group of techniques solves a problem by a transformation to a simpler/more

More information

OPPA European Social Fund Prague & EU: We invest in your future.

OPPA European Social Fund Prague & EU: We invest in your future. OPPA European Social Fund Prague & EU: We invest in your future. Choleski Decomposition for B. A. The most expensive computation in B. A. is solving the normal eqs: k ( k find d s such that L r νr(θs )

More information

Advanced Computer Graphics

Advanced Computer Graphics G22.2274 001, Fall 2010 Advanced Computer Graphics Project details and tools 1 Projects Details of each project are on the website under Projects Please review all the projects and come see me if you would

More information

CS420/520 Algorithm Analysis Spring 2009 Lecture 14

CS420/520 Algorithm Analysis Spring 2009 Lecture 14 CS420/520 Algorithm Analysis Spring 2009 Lecture 14 "A Computational Analysis of Alternative Algorithms for Labeling Techniques for Finding Shortest Path Trees", Dial, Glover, Karney, and Klingman, Networks

More information

CS222 Homework 4(Solution)

CS222 Homework 4(Solution) CS222 Homework 4(Solution) Dynamic Programming Exercises for Algorithm Design and Analysis by Li Jiang, 2018 Autumn Semester 1. Given an m n matrix of integers, you are to write a program that computes

More information

Dynamic Programming. Outline and Reading. Computing Fibonacci

Dynamic Programming. Outline and Reading. Computing Fibonacci Dynamic Programming Dynamic Programming version 1.2 1 Outline and Reading Matrix Chain-Product ( 5.3.1) The General Technique ( 5.3.2) -1 Knapsac Problem ( 5.3.3) Dynamic Programming version 1.2 2 Computing

More information

Chapter 4. Transform-and-conquer

Chapter 4. Transform-and-conquer Chapter 4 Transform-and-conquer 1 Outline Transform-and-conquer strategy Gaussian Elimination for solving system of linear equations Heaps and heapsort Horner s rule for polynomial evaluation String matching

More information

Lab #10 Multi-dimensional Arrays

Lab #10 Multi-dimensional Arrays Multi-dimensional Arrays Sheet s Owner Student ID Name Signature Group partner 1. Two-Dimensional Arrays Arrays that we have seen and used so far are one dimensional arrays, where each element is indexed

More information

Geometric Modeling Assignment 3: Discrete Differential Quantities

Geometric Modeling Assignment 3: Discrete Differential Quantities Geometric Modeling Assignment : Discrete Differential Quantities Acknowledgements: Julian Panetta, Olga Diamanti Assignment (Optional) Topic: Discrete Differential Quantities with libigl Vertex Normals,

More information

Synthesis of Constrained nr Planar Robots to Reach Five Task Positions

Synthesis of Constrained nr Planar Robots to Reach Five Task Positions Robotics: Science and Systems 007 Atlanta, GA, USA, June 7-30, 007 Synthesis of Constrained nr Planar Robots to Reach Five Task Positions Gim Song Soh Robotics and Automation Laboratory University of California

More information

BLAS: Basic Linear Algebra Subroutines I

BLAS: Basic Linear Algebra Subroutines I BLAS: Basic Linear Algebra Subroutines I Most numerical programs do similar operations 90% time is at 10% of the code If these 10% of the code is optimized, programs will be fast Frequently used subroutines

More information

******** Chapter-4 Dynamic programming

******** Chapter-4 Dynamic programming repeat end SHORT - PATHS Overall run time of algorithm is O ((n+ E ) log n) Example: ******** Chapter-4 Dynamic programming 4.1 The General Method Dynamic Programming: is an algorithm design method that

More information