Computational Methods CMSC/AMSC/MAPL 460. Vectors, Matrices, Linear Systems, LU Decomposition, Ramani Duraiswami, Dept. of Computer Science


Gaussian Elimination
Zero the elements of the first column below the 1st row: multiply the 1st row by 0.3 and add it to the 2nd row; multiply the 1st row by -0.5 and add it to the 3rd row. Then zero the elements of the second column below the 2nd row: swap rows, then multiply the 2nd row by 0.04 and add it to the 3rd row.
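A NumPy transcript of these steps (a sketch, not the course's code; the 3x3 system below is an assumption, chosen because it reproduces the slide's multipliers 0.3, -0.5, and 0.04):

import numpy as np

A = np.array([[10., -7., 0.],
              [-3.,  2., 6.],
              [ 5., -1., 5.]])
b = np.array([7., 4., 6.])

# Zero the elements of the first column below the 1st row.
A[1] += 0.3 * A[0];  b[1] += 0.3 * b[0]    # row 2 <- row 2 + 0.3 * row 1
A[2] -= 0.5 * A[0];  b[2] -= 0.5 * b[0]    # row 3 <- row 3 - 0.5 * row 1
# A is now [[10, -7, 0], [0, -0.1, 6], [0, 2.5, 5]].

# Swap rows 2 and 3 so that the larger entry, 2.5, becomes the pivot.
A[[1, 2]] = A[[2, 1]];  b[[1, 2]] = b[[2, 1]]

# Zero the elements of the second column below the 2nd row.
A[2] += 0.04 * A[1];  b[2] += 0.04 * b[1]  # row 3 <- row 3 + 0.04 * row 2
# A is now upper triangular: [[10, -7, 0], [0, 2.5, 5], [0, 0, 6.2]].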

Pivoting
Every step involves dividing by the diagonal entry in the current row, and the algorithm should work for general data. Remember: whenever an algorithm calls for division, we need to check whether the entry being divided by can become zero (or almost zero). Consider a system whose current diagonal entry is zero: the system has a solution, yet division by zero would occur! Fix: rearrange the system. The element by which we divide is called the pivot element, and changing the pivot element is called pivoting (partial pivoting and full pivoting).
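For instance (a made-up 2x2 system, not necessarily the slide's), naive elimination fails even though the system is perfectly solvable:

import numpy as np

# x2 = 1 and x1 + x2 = 2 have the solution x = (1, 1),
# but the first pivot A[0, 0] is zero.
A = np.array([[0., 1.],
              [1., 1.]])
m = A[1, 0] / A[0, 0]   # division by zero: m becomes inf
# Swapping the two rows first gives a nonzero pivot and no such problem.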

Partial Pivoting
Interchange rows below the current row to ensure that the element largest in magnitude is in the current row. It is also possible to do column interchanges in addition to row interchanges (called full pivoting).
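A minimal sketch of forward elimination with partial pivoting (my own illustration, under the same conventions as above): before eliminating column k, bring the largest-magnitude entry of that column up to row k.

import numpy as np

def gauss_partial_pivot(A, b):
    # Forward elimination with partial pivoting; returns the upper
    # triangular U and the correspondingly transformed right-hand side.
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Pick the largest |entry| in column k among rows k..n-1.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:                         # interchange rows k and p
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):          # eliminate below the pivot
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    return A, b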

Solution by Back Substitution
Start from the last equation, which can be solved by a single division. Next substitute into the previous line, and continue upward. This describes the way to do the algorithm by hand. How do we represent it using matrices? Also, how do we solve another system that has the same matrix? The upper triangular matrix we end up with will be the same, but the sequence of operations on the r.h.s. needs to be repeated.
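In code, back substitution is one short loop (a sketch; U and c are the upper triangular matrix and transformed right-hand side produced by the elimination above):

import numpy as np

def back_substitute(U, c):
    # Solve U x = c for upper triangular U, last equation first.
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the already-computed unknowns, then divide by the pivot.
        x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# On the worked example above this returns x = [0, -1, 1].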

Gaussian Elimination: LU Matrix Decomposition
It turns out that Gaussian elimination corresponds to a particular matrix decomposition: a product of permutation, lower triangular, and upper triangular matrices. What is a permutation matrix? It rearranges a system of equations and changes the order: multiplying by it swaps the order of rows in a matrix. It is essentially a rearrangement of the identity. Nice property: its transpose is its inverse, P P^T = I.
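A quick NumPy check of these properties (illustrative values):

import numpy as np

P = np.eye(3)[[1, 0, 2]]   # the identity with its first two rows interchanged
A = np.arange(9.).reshape(3, 3)
print(P @ A)               # A with its first two rows swapped
print(P @ P.T)             # the identity: the transpose undoes the permutation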

Bookkeeping: the permutation matrix accounts for the row interchanges. (If we did full pivoting, we would need two permutation matrices, to account in addition for the column interchanges.)

LU Decomposition
What is an upper triangular matrix? One whose elements below the diagonal are zero. A lower triangular matrix has its elements above the diagonal equal to zero; a unit lower triangular matrix additionally has ones along the diagonal. The upper triangular part of Gaussian elimination is clear: it is the final matrix we end up with. What about the lower triangular and permutation parts?

LU = PA
Can we identify the elements of L and P? L holds the multipliers we used in the elimination steps; P is a record of the row swaps we did to avoid dividing by small numbers. In fact, we can write each step of Gaussian elimination in matrix form.

Solving a System with the LU Decomposition
Ax = b and LU = PA, so P^T L U x = b, i.e. L[Ux] = Pb.
Solve Ly = Pb, then Ux = y.
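In SciPy this pair of triangular solves is packaged as lu_factor / lu_solve; a sketch using the same assumed example system as above:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[10., -7., 0.], [-3., 2., 6.], [5., -1., 5.]])
b = np.array([7., 4., 6.])

lu, piv = lu_factor(A)        # factor once: PA = LU, stored compactly
x = lu_solve((lu, piv), b)    # solve Ly = Pb, then Ux = y; x = [0, -1, 1]

# The factorization can be reused for a different right-hand side,
# repeating only the two cheap triangular solves:
x2 = lu_solve((lu, piv), np.array([1., 0., 0.]))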

How good are the answers given by LU? In general we cannot determine the exact answer. Rather, we will compute an answer (possibly incorrect) and use it to find how poorly it does in predicting the right-hand side. The difference in the r.h.s., r = b - Ax for the computed x, is the residual, and it is a measure of the error. Comparing the computed solution with the true solution: with pivoting we get a small residual.

Recall backward error analysis: determine the region in problem space where the problem we actually solved lies. Here the problem we solved is Ax = b - r. Theorem: Gaussian elimination with partial pivoting produces small residuals. Next question: does a small residual mean a small forward error? Here it did.

Example 2
Compute the solution via GE with partial pivoting. The true solution, however, is [1.000, -1.000]^T. The residual was small, but the error is large! Why? Recall the condition number.

Condition Number of a Matrix
A measure of how close a matrix is to singular:
cond(A) = κ(A) = ‖A‖ ‖A⁻¹‖ = maximum stretch / maximum shrink = max_i |λ_i| / min_i |λ_i|
cond(I) = 1; cond(singular matrix) = ∞.
So even though the residual was small, the error was multiplied by the condition number, and was significant.
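This effect is easy to reproduce (an illustration: the famously ill-conditioned Hilbert matrix here is a stand-in, not the slide's Example 2 matrix):

import numpy as np
from scipy.linalg import hilbert, lu_factor, lu_solve

n = 12
A = hilbert(n)                       # cond(A) is about 1e16
x_true = np.ones(n)
b = A @ x_true

x = lu_solve(lu_factor(A), b)        # GE with partial pivoting
print(np.linalg.norm(b - A @ x))     # residual: tiny
print(np.linalg.norm(x - x_true))    # forward error: far larger
print(np.linalg.cond(A))             # the condition number explains the gap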