
Project Report

Bernardo A. Gonzalez Torres

1 Abstract

The final term project consists of two parts: a Fortran implementation of a linear algebra solver and a Python implementation of a run setup, a run scheduler, and a data visualizer. In order to solve the matrix equation

Ax = b    (1)

where A ∈ R^(n×n) and x, b ∈ R^n, the Fortran implementation applies Gaussian elimination with or without partial pivoting to decompose the matrix A into a lower triangular matrix L and an upper triangular matrix U such that A = LU. Substituting this in equation (1):

Ax = LUx = b    (2)

Let y ∈ R^n be a vector such that Ux = y; then from equation (2):

LUx = Ly = b    (3)

Solving successively the matrix equations Ly = b and then Ux = y is straightforward because both L and U are triangular. The solution x obtained after this process is a valid solution of the matrix equation Ax = b.

2 Algorithms

2.1 Gaussian elimination without partial pivoting

Following the reading material provided by the instructor [1], the algorithm applied to compute the LU decomposition without pivoting is shown in Algorithm 1.

2.2 Gaussian elimination with partial pivoting

Unlike the method without pivoting, Gaussian elimination with partial pivoting applies successive row permutations to the matrix A in order to avoid diagonal entries a_kk of A being equal to zero. Gaussian elimination with partial pivoting solves the matrix equation Ax = b by decomposing A into a lower triangular matrix L and an upper triangular matrix U such that PA = LU, where P is a row permutation matrix. Multiplying equation (1) by P and substituting PA = LU:

PAx = Pb    (4)

PAx = LUx = Pb    (5)
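In both variants, once L and U are available the problem reduces to two triangular solves: Ly = b for the unpivoted factorization, or Ly = Pb when pivoting is used (see equation (6) below), followed by Ux = y. The following is a minimal Python/NumPy sketch of these two solves; it is only an illustration and not the project's Fortran code, and the Python function names are chosen here to mirror the Fortran module names described in Section 3, but are otherwise hypothetical.

import numpy as np

def forward_solve(L, b):
    # Forward substitution: solve L y = b for y, with L lower triangular.
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward_solve(U, y):
    # Back substitution: solve U x = y for x, with U upper triangular.
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x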

Algorithm 1: LU factorization by Gaussian elimination without pivoting

    for k = 1 to n-1 do                        [loop over columns]
        if a_kk = 0 then
            stop                               [stop if pivot (or divisor) is zero]
        end if
        for i = k+1 to n do
            l_ik = a_ik / a_kk                 [compute multipliers for each column]
        end for
        for j = k+1 to n do
            for i = k+1 to n do
                a_ij = a_ij - l_ik * a_kj      [transformation to remaining submatrix]
            end for
        end for
    end for

Similarly to the method without pivoting, let y ∈ R^n be a vector such that Ux = y; then from equation (5):

LUx = Ly = Pb    (6)

To calculate a solution x of the matrix equation Ax = b, we successively solve the matrix equations Ly = Pb and then Ux = y. Following [1], the algorithm applied to compute the LU decomposition with (partial) pivoting is shown in Algorithm 2, where I is the identity matrix.

Algorithm 2: LU factorization by Gaussian elimination with (partial) pivoting

    U = A, L = I
    for k = 1 to n-1 do
        select i ≥ k to maximize |u_ik|
        if i ≠ k then                          [interchange rows if needed]
            u_{k,k:n} ↔ u_{i,k:n}
            b_k ↔ b_i
            l_{k,1:k-1} ↔ l_{i,1:k-1}
        end if
        for j = k+1 to n do
            l_jk = u_jk / u_kk                 [compute multipliers for each column]
            u_{j,k:n} = u_{j,k:n} - l_jk * u_{k,k:n}   [transformation to remaining submatrix]
        end for
    end for
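To make the two listings concrete, the following Python/NumPy sketch implements both factorizations. It is an illustrative re-implementation under the same conventions as Algorithms 1 and 2, not the project's LU_decomp.f90 module, and the function names lu_nopivot and lu_partial_pivot are hypothetical.

import numpy as np

def lu_nopivot(A):
    # Algorithm 1: LU factorization without pivoting; returns unit lower L and upper U.
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        if U[k, k] == 0.0:
            raise ZeroDivisionError("zero pivot at step %d" % k)
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]        # multiplier for row i, column k
            U[i, k:] -= L[i, k] * U[k, k:]     # update remaining submatrix
    return L, U

def lu_partial_pivot(A):
    # Algorithm 2: LU factorization with partial pivoting; returns P, L, U with PA = LU.
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        i = k + int(np.argmax(np.abs(U[k:, k])))   # select i >= k maximizing |u_ik|
        if i != k:                                  # interchange rows if needed
            U[[k, i], k:] = U[[i, k], k:]
            L[[k, i], :k] = L[[i, k], :k]
            P[[k, i], :] = P[[i, k], :]
        for j in range(k + 1, n):
            L[j, k] = U[j, k] / U[k, k]             # multiplier for row j, column k
            U[j, k:] -= L[j, k] * U[k, k:]          # update remaining submatrix
    return P, L, U

Combined with the triangular solves sketched in Section 2.2, solving Ax = b with pivoting then amounts to P, L, U = lu_partial_pivot(A); y = forward_solve(L, P @ b); x = backward_solve(U, y).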

3 Fortran Implementation

The Fortran implementation imports the entries of the matrix A and the vector b from the input files A_i.dat and b_i.dat respectively, for each input i, and after reading them from file it writes A and b to the screen so that users can check whether the inputs are correct. The files A_i.dat and b_i.dat are initialized by the Python implementation. The implementation then applies Gaussian elimination with or without partial pivoting to decompose the matrix A into a lower triangular matrix L and an upper triangular matrix U such that A = LU. Once the matrices L and U have been computed, the implementation successively solves the matrix equations Ly = b and then Ux = y to obtain the solution x. This solution is written to the screen and to a file called x_i.dat, for each i.

All Fortran files are in the directory project/linalg. The implementation has one main driver routine called linear_solve, which calls the other necessary subroutines and functions. The structure of the code is the following:

linear_solve.f90 : main driver routine.
setup_module.f90 : module that reads in runtime parameters from the setup.init file.
read_data.f90 : module that reads in the matrix A and the vector b from files.
write_to_screen.f90 : module that writes the matrix A and the vector b to the screen for a sanity check.
LU_decomp.f90 : module that implements Gaussian elimination with or without partial pivoting (Algorithms 1 and 2).
forward_solve.f90 : module that solves Ly = b.
backward_solve.f90 : module that solves Ux = y.
write_data.f90 : module that outputs the solution vector x to the screen as well as to a file called x_i.dat, for each i.

4 Python Implementation

The Python implementation initializes three different input pairs (A, b) and writes them to the files A_i.dat and b_i.dat respectively, for each i. It then compiles the Fortran code in the directory project/linalg from within the Python directory project/pyrun. After compiling the Fortran code, the Python implementation runs three cases for different matrices A and vectors b. The three matrices A and vectors b are those given in equations (7), (8), and (9); their entries are not reproduced in this transcription.
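As an illustration of this file-based hand-off between the Python and Fortran parts, one pair (A, b) could be written to and read back from such files along the following lines. This is only a sketch: the actual on-disk format consumed by read_data.f90 is not specified in this report, so plain whitespace-separated text is assumed, and the helper names write_case and read_case are hypothetical.

import numpy as np

def write_case(i, A, b):
    # Write one test case to A_i.dat and b_i.dat (whitespace-separated text assumed).
    np.savetxt("A_%d.dat" % i, A)
    np.savetxt("b_%d.dat" % i, b)

def read_case(i):
    # Read the same case back, mirroring what the Fortran read step would consume.
    A = np.loadtxt("A_%d.dat" % i)
    b = np.loadtxt("b_%d.dat" % i)
    return A, b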

Once the solution x has been computed, the Python implementation checks the correctness of the Fortran solution by computing x with the numpy library. The implementation compares the Fortran and Python solutions and outputs Pass or Fail depending on the result. A threshold value is used to quantify the error between the two solutions; this threshold can be modified in the Python code. In both cases (Pass or Fail) the Fortran and Python solutions are written to the screen. If the result is Fail, the error is written to the screen too. Finally, the implementation produces three plots, one for each matrix A, and another three plots of the vectors x and b.

The file pyrun_LinAlg.py is in the directory project/pyrun. The structure of the code is the following:

make_make() : function that compiles the Fortran code.
matrix_init(input_number) : function that writes the A_i.dat and b_i.dat files, for each i, and the setup.init file. The argument input_number is the number of the matrix A and vector b (1, 2, or 3).
run_linearsolver() : function that runs the Fortran executable file.
check_solution(input_number, threshold) : function that compares the solution of the Fortran code with one computed with the numpy library (a sketch of such a comparison is shown below). The argument input_number is the number of the solution vector x (1, 2, or 3) and the argument threshold is the value used to compare the solutions.
plot_data(input_number) : function that plots the matrix A and vector b. The argument input_number is the number of the matrix A and vector b (1, 2, or 3).

5 User's manual

To run the code, you just need to type in the command line:

>> python pyrun_LinAlg.py

while you are in the directory project/pyrun. The only modifiable value in the code is the variable threshold, whose value is used to compare the solutions computed by Python and Fortran. Once the execution begins, you will be asked whether the input shown on the screen is correct. If your answer is no, the message SORRY, INPUT MATRIX A AND/OR VECTOR b ARE INCORRECT will be printed to the screen and the execution will halt with no results, as shown in figure 1.

Fig. 1: Incorrect input
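Internally, the Pass/Fail comparison described in Section 4 reduces to computing a reference solution with numpy and measuring the difference against threshold. The following is a hypothetical sketch of such a check, not the actual check_solution implementation in pyrun_LinAlg.py; the file format and the error measure are assumptions.

import numpy as np

def check_solution(input_number, threshold):
    # Hypothetical sketch: compare the Fortran solution with a numpy reference.
    A = np.loadtxt("A_%d.dat" % input_number)
    b = np.loadtxt("b_%d.dat" % input_number)
    x_fortran = np.loadtxt("x_%d.dat" % input_number)  # solution written by the Fortran code
    x_numpy = np.linalg.solve(A, b)                     # reference solution
    error = np.max(np.abs(x_fortran - x_numpy))         # assumed error measure
    print("Fortran solution:", x_fortran)
    print("numpy solution:  ", x_numpy)
    if error < threshold:
        print("Pass")
    else:
        print("Fail, error =", error)
    return error < threshold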

In a similar way, you will be asked whether you want to use the version with or without pivoting. If you answer anything other than zero or one, the message INVALID PIVOTING METHOD will be printed to the screen and the execution will halt with no results, as shown in figure 2.

Fig. 2: Incorrect input for pivoting method

A successful execution is shown in figure 3.

6 Results

The computed vectors x for the three cases are given in equations (10), (11), and (12); their entries are not reproduced in this transcription.

Fig. 3: Successful execution

Both Gaussian elimination with and without pivoting give the same results, which also agree with the Python solutions calculated using numpy. Figures 4 and 5 show plots of the matrix A and of the vectors x and b for one of the three cases. Due to the lack of space, the rest of the plots of the matrices A and vectors x and b are not shown in this report, but they can easily be found in the directory project/linalg.

Fig. 4: Matrix A

7 Conclusion

Gaussian elimination without pivoting was straightforward to implement using the reading material provided by the Professor. This method worked well on the test matrices provided for the project. However, it is known to fail when a diagonal element of the matrix becomes zero (a_kk = 0).

Fig. 5: Vectors x and b

On the other hand, the computation of the matrices L and U using Gaussian elimination with partial pivoting is more robust due to the row permutations. This method was not easy to implement using the material provided, but the algorithm and the explanation in [2] were very useful for its implementation. Although other methods such as complete pivoting exist, one advantage of partial pivoting is that the complexity does not increase, because it only searches for pivots within the current column, which is an O(n) search.

References

[1] Lee, Dongwook. AMS 209, Fall 2016. Final Project Type A, Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. edu/~dongwook/wp-content/uploads/6/ams9/lecturenote/_build/html/_downloads/gaussian_elimination_pivoting.pdf

[2] Lecture. Pivoting.
