
1 CE 601: Numerical Methods Lecture 5 Course Coordinator: Dr. Suresh A. Kartha, Associate Professor, Department of Civil Engineering, IIT Guwahati.

2 Elimination Methods For a system [A]{x} = {b}, where [A] is an n × n matrix, the number of operations in Gauss elimination = (2/3)n³ + (3/2)n² − (7/6)n. Note: this formula always yields integer values. Like Gauss elimination, you might have studied the Gauss-Jordan elimination method. The objective of the Gauss-Jordan scheme is to convert [A]{x} = {b} into [I]{x} = {b′}, where [I] is the identity matrix, through a systematic elimination process, so that the transformed right-hand side {b′} is directly the solution vector. We are not going to discuss this method in class; you are requested to work it out on your own, referring to the literature.
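As a quick check of the integer-value note above, the short Python snippet below (an illustrative sketch, not part of the original slides) evaluates the operation-count formula exactly for a few values of n:

```python
from fractions import Fraction

def gauss_ops(n):
    """Gauss elimination operation count: (2/3)n^3 + (3/2)n^2 - (7/6)n."""
    n = Fraction(n)
    return Fraction(2, 3) * n**3 + Fraction(3, 2) * n**2 - Fraction(7, 6) * n

for n in range(1, 6):
    print(n, gauss_ops(n))   # 1 -> 1, 2 -> 9, 3 -> 28, 4 -> 62, 5 -> 115
```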

3 The Gauss-Jordan scheme is computationally less efficient than the Gauss elimination scheme. The number of operations involved in the Gauss-Jordan scheme = n³ + n² − n. Like Gauss elimination, Gauss-Jordan, etc., you might also have studied matrix inverse methods to solve linear systems, i.e. for [A]{x} = {b}, {x} = [A]⁻¹{b}. There are two evaluations in this scheme: first, evaluate the inverse of the matrix [A]; second, evaluate the product [A]⁻¹{b}.

4 In both evaluations arithmetic operations are involved. Evaluating the inverse of [A] takes on the order of 2n³ operations, and forming the product [A]⁻¹{b} takes a further 2n² − n operations, so the total is roughly 2n³ operations. Based on the significant digits assigned to the variables or components, round-off errors may creep into the solution while using the Gauss elimination scheme. These errors can be minimized by performing partial pivoting or scaled partial pivoting.
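To make the pivoting remark concrete, here is a minimal Python/NumPy sketch of forward elimination with partial pivoting (the function name and structure are my own, not from the slides): at each step k the row with the largest |a_ik| below the diagonal is swapped into the pivot position.

```python
import numpy as np

def gauss_eliminate_partial_pivot(A, b):
    """Forward elimination with partial pivoting; works on copies of A and b."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest |a_ik| (i >= k) into the pivot row.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            l_ik = A[i, k] / A[k, k]
            A[i, k:] -= l_ik * A[k, k:]
            b[i] -= l_ik * b[k]
    return A, b   # A is now upper triangular; finish with back substitution
```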

5 LU Decomposition o We have discussed that a matrix can be factored, i.e. it can be written as the product of two different matrices: [A] = [B][C]. o In a similar tone, one can factorize [A] as the product of a lower triangular matrix [L] and an upper triangular matrix [U], i.e. [A] = [L][U]:

$$
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}
=
\begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & & & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix}
\begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix}
$$

o There can be many possible factorizations. However, if we specify the diagonal elements of either [L] or [U], then we obtain a unique factorization of [A]. The LU decomposition methods such as Doolittle and Crout work on this principle.

6 Property of LU decomposition o [A]{x} = {b} ⇒ [L][U]{x} = {b}. Multiplying both sides on the left by [L]⁻¹: [L]⁻¹[L][U]{x} = [L]⁻¹{b}, so [I][U]{x} = [L]⁻¹{b}, i.e. [U]{x} = [L]⁻¹{b}. o Let us define the system [U]{x} = {c}. So we get {c} = [L]⁻¹{b}, or [L]{c} = {b}.

7 o The meaning here is: the lower triangular matrix [L] transforms the RHS vector {b} into {c} through the relation [L]{c} = {b}. On obtaining {c}, we use the relation [U]{x} = {c} to obtain the solution vector. So, if we can devise a suitable algorithm, the processes involved in LU decomposition are: Factorize [A] = [L][U]; Forward substitution to evaluate {c}; Backward substitution to evaluate {x}. If the number of arithmetic operations in the factorization can be reduced, then LU decomposition becomes efficient.

8 The Advantage of LU Decomposition o Suppose there are many linear systems with the same coefficient matrix, i.e. [A]{x} = {b}, [A]{y} = {c}, [A]{z} = {d}. o If we adopt Gauss elimination, each system takes (2/3)n³ + (3/2)n² − (7/6)n operations, which quickly becomes expensive. o If we do LU decomposition, you may see that we need to decompose [A] = [L][U] only once. o After that, for each system we need to perform only the forward and backward substitutions (a sketch of this reuse follows below).
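As a hedged illustration of this reuse (not from the slides; the matrix is an arbitrary example), SciPy's lu_factor/lu_solve routines factor [A] once and then apply only the substitution steps for each new right-hand side:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])   # illustrative coefficient matrix

lu, piv = lu_factor(A)            # O(n^3) factorization, done once

for rhs in (np.array([1.0, 2.0, 3.0]),
            np.array([0.0, 1.0, 0.0]),
            np.array([5.0, 5.0, 5.0])):
    x = lu_solve((lu, piv), rhs)  # only forward/backward substitution, O(n^2)
    print(x)
```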

9 Doolittle LU Decomposition [A] = [L][U]. If we factorize in such a way that

$$
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \cdots & 0 \\ \vdots & & & \vdots \\ l_{n1} & l_{n2} & \cdots & 1 \end{bmatrix}
\begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix}
$$

i.e. the diagonal elements of [L] are 1, then the approach is Doolittle's method. If instead

$$
[A] =
\begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & & & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix}
\begin{bmatrix} 1 & u_{12} & \cdots & u_{1n} \\ 0 & 1 & \cdots & u_{2n} \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}
$$

i.e. the diagonal elements of [U] are 1, the approach is called Crout's method.

10 Algorithm for Doolittle's LU Decomposition The Doolittle algorithm is developed using knowledge of Gauss elimination. Recall Gauss elimination at any step k:

$$
a_{ij}^{(k)} = a_{ij}^{(k-1)} - l_{ik}\, a_{kj}^{(k-1)}, \qquad
l_{ik} = \frac{a_{ik}^{(k-1)}}{a_{kk}^{(k-1)}},
$$

for k = 1, 2, 3, ..., n−1; j = k, k+1, ..., n; i = k+1, k+2, ..., n. You can now write

$$
a_{ij}^{(k)} = a_{ij}^{(k-1)} - l_{ik}\, a_{kj}^{(k-1)}
            = a_{ij}^{(k-2)} - l_{i(k-1)}\, a_{(k-1)j}^{(k-2)} - l_{ik}\, a_{kj}^{(k-1)}.
$$

Extending this back to the original entries,

$$
a_{ij}^{(k)} = a_{ij} - \sum_{m=1}^{k} l_{im}\, a_{mj}^{(m-1)}, \qquad
i = k+1, k+2, \ldots, n; \quad j = k, k+1, \ldots, n.
$$

11 This is nothing but [A] = [L][U]: rearranging,

$$
a_{ij} = \sum_{m=1}^{k} l_{im}\, a_{mj}^{(m-1)} + a_{ij}^{(k)},
$$

where the l_{ij} are elements of a lower triangular matrix such that

$$
l_{ij} = \begin{cases} 1 & \text{for } i = j, \\ l_{ik} & \text{for } i > k,\; k = 1, 2, 3, \ldots, n-1, \\ 0 & \text{for } i < j. \end{cases}
$$

So you get

$$
[L] = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \cdots & 0 \\ \vdots & & & \vdots \\ l_{n1} & l_{n2} & \cdots & 1 \end{bmatrix}, \qquad
[U] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22}^{(1)} & \cdots & a_{2n}^{(1)} \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & a_{nn}^{(n-1)} \end{bmatrix},
$$

where l_{21} = a_{21}/a_{11}, l_{32} = a_{32}^{(1)}/a_{22}^{(1)}, etc.

12 So the steps involved are: o The number of steps is the same as in Gauss elimination: k = 1, 2, 3, ..., n−1. o At any k, for i = k+1, k+2, ..., n and j = k, k+1, ..., n:

$$
l_{kk} = 1, \qquad l_{ik} = 0 \ \text{for } i < k, \qquad
u_{ij} = a_{ij}^{(k)} = a_{ij}^{(k-1)} - l_{ik}\, a_{kj}^{(k-1)}, \qquad
u_{ij} = 0 \ \text{for } i > j, \qquad
l_{ik} = \frac{a_{ik}^{(k-1)}}{a_{kk}^{(k-1)}} \ \text{for } i > k.
$$
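The following Python/NumPy function is a minimal sketch of the Doolittle factorization described above (the name and structure are my own, not from the slides); it returns [L] with unit diagonal and [U] upper triangular, assuming no pivoting is needed.

```python
import numpy as np

def doolittle_lu(A):
    """Doolittle LU factorization (no pivoting): A = L @ U with diag(L) = 1."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()   # elimination leaves a_ij^(k) in place as u_ij
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # l_ik = a_ik^(k-1) / a_kk^(k-1)
            U[i, k:] -= L[i, k] * U[k, k:]   # a_ij^(k) = a_ij^(k-1) - l_ik a_kj^(k-1)
            U[i, k] = 0.0                    # exact zero below the diagonal
    return L, U
```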

13 o Forward substitution for {c}:

$$
c_1 = b_1, \qquad
c_i = b_i - \sum_{m=1}^{i-1} l_{im}\, c_m, \quad i = 2, 3, \ldots, n.
$$

o Back substitution for {x}:

$$
x_n = \frac{c_n}{u_{nn}}, \qquad
x_{n-1} = \frac{c_{n-1} - u_{(n-1)n}\, x_n}{u_{(n-1)(n-1)}}.
$$

In general,

$$
x_i = \frac{c_i - \sum_{j=i+1}^{n} u_{ij}\, x_j}{u_{ii}}, \quad i = n-1, n-2, \ldots, 2, 1.
$$
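A short Python sketch of these two substitution sweeps (again illustrative; intended to be used with the L, U returned by the doolittle_lu sketch above):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L c = b for c, with L unit lower triangular."""
    n = len(b)
    c = np.zeros(n)
    for i in range(n):
        c[i] = b[i] - L[i, :i] @ c[:i]             # c_i = b_i - sum_{m<i} l_im c_m
    return c

def back_substitution(U, c):
    """Solve U x = c for x, with U upper triangular."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```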

14 Example Solve

$$
\begin{aligned}
80x_1 - 20x_2 - 20x_3 &= 20 \\
-20x_1 + 40x_2 - 20x_3 &= 20 \\
-20x_1 - 20x_2 + 130x_3 &= 20
\end{aligned}
$$

Factorize [A] = [L][U] with

$$
[L] = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}, \qquad
[U] = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}.
$$

Then

$$
l_{21} = -1/4, \quad l_{31} = -1/4, \quad l_{32} = -5/7,
$$
$$
u_{11} = 80, \quad u_{12} = -20, \quad u_{13} = -20,
$$
$$
u_{22} = 40 - (-1/4)(-20) = 35, \qquad u_{23} = -20 - (-1/4)(-20) = -25,
$$
$$
u_{33} = a_{33} - \sum_{m=1}^{2} l_{3m}\, a_{m3}^{(m-1)} = 130 - (-1/4)(-20) - (-5/7)(-25) = 750/7.
$$

15 Forward Substitution:

$$
c_1 = b_1 = 20, \qquad
c_2 = b_2 - l_{21} c_1 = 20 + 20/4 = 25, \qquad
c_3 = b_3 - l_{31} c_1 - l_{32} c_2 = 20 + 20/4 + (5)(25)/7 = 300/7.
$$

Then [U]{x} = {c}, i.e.

$$
\begin{bmatrix} 80 & -20 & -20 \\ 0 & 35 & -25 \\ 0 & 0 & 750/7 \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}
=
\begin{Bmatrix} 20 \\ 25 \\ 300/7 \end{Bmatrix}.
$$

Now you can do back substitution easily.
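A brief numerical check of this worked example, using the illustrative functions sketched earlier (doolittle_lu, forward_substitution, back_substitution are my own names, not from the slides):

```python
import numpy as np

A = np.array([[ 80.0, -20.0, -20.0],
              [-20.0,  40.0, -20.0],
              [-20.0, -20.0, 130.0]])
b = np.array([20.0, 20.0, 20.0])

L, U = doolittle_lu(A)                 # L has 1s on the diagonal
c = forward_substitution(L, b)         # expected: [20, 25, 300/7]
x = back_substitution(U, c)            # expected: [0.6, 1.0, 0.4]

print(np.allclose(L @ U, A), c, x)     # True, then c and x as above
```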

16 Number of operations In Doolittle's algorithm, o to evaluate l_{ik} at any k-th step (for i > k), it takes (n−k) operations; o to evaluate u_{ij} = a_{ij}^{(k)}, it takes 2(n−k)² operations; o so, over the (n−1) elimination steps needed to form [L] and [U], the factorization takes

$$
\sum_{p=1}^{n-1} (n-p)\bigl[2(n-p) + 1\bigr] = \frac{n(n-1)(4n+1)}{6} = \frac{2}{3}n^3 - \frac{1}{2}n^2 - \frac{1}{6}n
$$

operations.
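A quick sanity check of this factorization count (an illustrative snippet, not part of the lecture), comparing the sum with its closed form for small n:

```python
from fractions import Fraction

def factor_ops_sum(n):
    """Direct sum over the (n-1) elimination steps."""
    return sum((n - p) * (2 * (n - p) + 1) for p in range(1, n))

def factor_ops_closed(n):
    """Closed form n(n-1)(4n+1)/6."""
    return Fraction(n * (n - 1) * (4 * n + 1), 6)

for n in range(2, 8):
    assert factor_ops_sum(n) == factor_ops_closed(n)
    print(n, factor_ops_sum(n))
```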

17 o To perform the forward substitution,

$$
c_1 = b_1 \ (\text{no operations required}), \qquad
c_i = b_i - \sum_{m=1}^{i-1} l_{im}\, c_m, \quad i = 2, 3, 4, \ldots, n,
$$

there are 2(i−1) operations for each i, so the number of operations = Σ_{i=2}^{n} 2(i−1) = n² − n. o To perform the backward substitution, it requires n² operations (as in Gauss elimination). o Total number of operations =

$$
\left(\frac{2}{3}n^3 - \frac{1}{2}n^2 - \frac{1}{6}n\right) + (n^2 - n) + n^2
= \frac{2}{3}n^3 + \frac{3}{2}n^2 - \frac{7}{6}n.
$$

18 The number of operations is therefore the same as in the Gauss elimination method. However, if there are many systems involving [A], the factorization is done only once, and you need to add only the cost of the forward and backward substitutions for each additional system.
