Computational Methods CMSC/AMSC/MAPL 460. Linear Systems, LU Decomposition, Ramani Duraiswami, Dept. of Computer Science


Matrix norms
Matrix norms can be defined using the corresponding vector norms: the two norm, the one norm, and the infinity norm. The two norm is hard to compute: we need to find the maximum singular value, which is related to the idea that a matrix acting on the unit sphere converts it into an ellipsoid. The Frobenius norm is defined just using the matrix elements.
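A minimal sketch of these norms, assuming NumPy (the matrix A is an arbitrary example, not one from the lecture):

    import numpy as np

    A = np.array([[4.0, -2.0, 1.0],
                  [3.0,  6.0, -4.0],
                  [2.0,  1.0,  8.0]])

    print(np.linalg.norm(A, 1))        # one norm: maximum absolute column sum
    print(np.linalg.norm(A, np.inf))   # infinity norm: maximum absolute row sum
    print(np.linalg.norm(A, 2))        # two norm: largest singular value
    print(np.linalg.norm(A, 'fro'))    # Frobenius norm: sqrt of sum of squared entries
    print(np.linalg.svd(A, compute_uv=False)[0])   # equals the two norm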

Condition Number of a Matrix
A measure of how close a matrix is to singular:
cond(A) = κ(A) = ||A|| ||A^-1|| = (maximum stretch) / (maximum shrink) = max_i |λ_i| / min_i |λ_i|
cond(I) = 1, cond(singular matrix) = ∞.
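A small NumPy sketch (same arbitrary example matrix as above); np.linalg.cond uses the two norm by default, so it equals the ratio of the largest to the smallest singular value:

    import numpy as np

    A = np.array([[4.0, -2.0, 1.0],
                  [3.0,  6.0, -4.0],
                  [2.0,  1.0,  8.0]])

    # cond(A) = ||A|| * ||A^-1||; with the two norm this is the ratio of the
    # largest to the smallest singular value.
    print(np.linalg.cond(A))
    print(np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2))   # same value

    print(np.linalg.cond(np.eye(3)))   # the identity has condition number 1
    # A singular matrix has condition number infinity (numerically, an enormous value).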

Solving Linear Systems
One idea: compute the inverse. This is not usually a good idea (unless the inverse is computable easily and accurately using some matrix property): it leads to increased errors and is usually more expensive.
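An illustrative NumPy comparison of the two approaches (arbitrary example data, not from the lecture); np.linalg.solve works via a factorization rather than forming the inverse:

    import numpy as np

    A = np.array([[4.0, -2.0, 1.0],
                  [3.0,  6.0, -4.0],
                  [2.0,  1.0,  8.0]])
    b = np.array([1.0, 2.0, 3.0])

    x_solve = np.linalg.solve(A, b)    # factorization-based solve (preferred)
    x_inv   = np.linalg.inv(A) @ b     # explicit inverse (usually avoid)

    print(np.linalg.norm(A @ x_solve - b))   # residuals: the solve is typically
    print(np.linalg.norm(A @ x_inv - b))     # at least as accurate, and cheaper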

Representing linear systems as matrix-vector equations
Represent the system as a matrix-vector equation Ax = b (a linear system). We will apply the familiar elimination technique, and then see its matrix equivalent.
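For concreteness, an invented three-equation example (not the one used in the lecture) written in matrix-vector form with NumPy:

    import numpy as np

    # The system
    #   4x - 2y +  z = 1
    #   3x + 6y - 4z = 2
    #   2x +  y + 8z = 3
    # in matrix-vector form A x = b:
    A = np.array([[4.0, -2.0, 1.0],
                  [3.0,  6.0, -4.0],
                  [2.0,  1.0,  8.0]])
    b = np.array([1.0, 2.0, 3.0])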

Cramer's Rule
A familiar algorithm, often taught in school. Compute the determinant of the matrix; the determinant is defined by recursion (cofactor expansion), and computing it that way has factorial complexity, though polynomial-cost algorithms exist. Then, for each variable, compute the determinant of the matrix with the column corresponding to that variable replaced by the right-hand side.

Cramer's Rule (contd.)
In 3D, x = det(A_x)/det(A), y = det(A_y)/det(A), z = det(A_z)/det(A), where A_x is A with the column for x replaced by the right-hand side (and similarly for A_y and A_z). In higher dimensions it generalizes similarly. Home reading: why does Cramer's rule work? (From the Cramer's rule article in Eric W. Weisstein's math encyclopedia, available online.)
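An illustrative NumPy sketch of the formula (cramer_solve is a hypothetical helper; note that np.linalg.det itself uses an LU factorization, so this demonstrates the column-replacement formula, not the factorial-cost recursion):

    import numpy as np

    def cramer_solve(A, b):
        """Solve Ax = b by Cramer's rule (illustration only; not an efficient method)."""
        n = len(b)
        detA = np.linalg.det(A)
        x = np.empty(n)
        for j in range(n):
            Aj = A.copy()
            Aj[:, j] = b                      # replace column j by the right-hand side
            x[j] = np.linalg.det(Aj) / detA
        return x

    A = np.array([[4.0, -2.0, 1.0],
                  [3.0,  6.0, -4.0],
                  [2.0,  1.0,  8.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(cramer_solve(A, b))
    print(np.linalg.solve(A, b))              # agrees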

Gaussian Elimination
Zero the elements of the first column below the 1st row: multiply the 1st row by 0.3 and add it to the 2nd row; multiply the 1st row by -0.5 and add it to the 3rd row. This results in a matrix whose first column is zero below the diagonal. Then zero the elements of the second column below the 2nd row: swap rows, then multiply the 2nd row by 0.04 and add it to the 3rd.
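The lecture's example matrix is not shown in the transcription, so here is a generic sketch of forward elimination with partial pivoting, assuming NumPy (forward_eliminate is a hypothetical helper, and the example data is invented):

    import numpy as np

    def forward_eliminate(A, b):
        """Reduce A to upper triangular form with partial pivoting,
        applying the same row operations to b."""
        A = A.astype(float)
        b = b.astype(float)
        n = len(b)
        for k in range(n - 1):
            # partial pivoting: bring up the row with the largest pivot
            # to avoid dividing by small numbers
            p = k + int(np.argmax(np.abs(A[k:, k])))
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]          # the multiplier (an entry of L)
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        return A, b

    A = np.array([[2.0, 1.0, -1.0],
                  [-3.0, -1.0, 2.0],
                  [-2.0, 1.0, 2.0]])
    b = np.array([8.0, -11.0, -3.0])
    U, c = forward_eliminate(A, b)
    print(U)   # upper triangular matrix
    print(c)   # transformed right-hand side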

Solution
Start from the last equation, which can be solved by a single division. Next, substitute into the previous equation and continue upward. This describes how to carry out the algorithm by hand. How do we represent it using matrices? Also, how do we solve another system that has the same matrix? The upper triangular matrix we end up with will be the same, but the sequence of operations on the right-hand side needs to be repeated.
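A back-substitution sketch under the same assumptions (NumPy; back_substitute is a hypothetical name, and the triangular system is an invented example), mirroring the divide-and-substitute-upward process described above:

    import numpy as np

    def back_substitute(U, c):
        """Solve Ux = c for upper triangular U: the last equation is a single
        division; each earlier one substitutes the values already found."""
        n = len(c)
        x = np.empty(n)
        for i in range(n - 1, -1, -1):
            x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    U = np.array([[2.0, 1.0, -1.0],
                  [0.0, 0.5,  0.5],
                  [0.0, 0.0, -1.0]])
    c = np.array([8.0, 1.0, 1.0])
    x = back_substitute(U, c)
    print(U @ x - c)   # residual is (numerically) zero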

Gaussian Elimination: LU Matrix Decomposition
It turns out that Gaussian elimination corresponds to a particular matrix decomposition: the matrix is a product of permutation, lower triangular, and upper triangular matrices. What is a permutation matrix? It rearranges a system of equations and changes their order; multiplying by it swaps the order of rows in a matrix. It is essentially a rearrangement of the identity. Nice property: its transpose is its inverse, PP^T = I.
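A small NumPy check of these permutation-matrix properties (the row ordering [1, 2, 0] and the matrix A are arbitrary choices for illustration):

    import numpy as np

    # A permutation matrix is a rearrangement of the identity; multiplying by it
    # reorders the rows, and its transpose is its inverse.
    P = np.eye(3)[[1, 2, 0]]            # rows of the identity in the order 1, 2, 0
    A = np.array([[4.0, -2.0, 1.0],
                  [3.0,  6.0, -4.0],
                  [2.0,  1.0,  8.0]])
    print(P @ A)                        # rows of A reordered
    print(P @ P.T)                      # the identity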

LU Decomposition
What is an upper triangular matrix? One whose elements below the diagonal are zero. A lower triangular matrix has its elements above the diagonal equal to zero. A unit lower triangular matrix additionally has ones along the diagonal. The upper triangular part of Gaussian elimination is clear: it is the final matrix we end up with. What about the lower triangular and permutation parts?

LU = PA
How do we identify the elements of L and P? L holds the multipliers we used in the elimination steps. P is a record of the row swaps we did to avoid dividing by small numbers. In fact, we can write each step of Gaussian elimination in matrix form.
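A hedged sketch using SciPy's LU routine (arbitrary example matrix); note that scipy.linalg.lu returns factors with A = P L U, so its P is the transpose of the P in the slide's LU = PA convention:

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[4.0, -2.0, 1.0],
                  [3.0,  6.0, -4.0],
                  [2.0,  1.0,  8.0]])

    # SciPy convention: A = P @ L @ U, i.e. L @ U = P.T @ A.
    P, L, U = lu(A)
    print(np.allclose(L @ U, P.T @ A))  # True
    print(L)                            # multipliers below a unit diagonal
    print(U)                            # the upper triangular result of elimination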

Solving a system with the LU decomposition
Ax = b with LU = PA gives P^T LUx = b, i.e. L[Ux] = Pb. Solve Ly = Pb, then solve Ux = y.
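A sketch of this two-triangular-solve procedure with SciPy (same convention caveat as above: SciPy's P is the transpose of the slide's P, so Ly = Pb appears here as L y = P^T b):

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    A = np.array([[4.0, -2.0, 1.0],
                  [3.0,  6.0, -4.0],
                  [2.0,  1.0,  8.0]])
    b = np.array([1.0, 2.0, 3.0])

    P, L, U = lu(A)                                  # A = P @ L @ U
    y = solve_triangular(L, P.T @ b, lower=True)     # solve L y = P^T b
    x = solve_triangular(U, y, lower=False)          # solve U x = y
    print(np.allclose(A @ x, b))                     # True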

Solving a triangular system
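The transcription does not include this slide's body, so here is a forward-substitution sketch for lower triangular systems, assuming NumPy (forward_substitute is a hypothetical name); it is the mirror image of the back substitution shown earlier, working from the first equation down:

    import numpy as np

    def forward_substitute(L, c):
        """Solve Lx = c for lower triangular L, from the first equation down."""
        n = len(c)
        x = np.empty(n)
        for i in range(n):
            x[i] = (c[i] - L[i, :i] @ x[:i]) / L[i, i]
        return x

    L = np.array([[2.0, 0.0, 0.0],
                  [1.0, 3.0, 0.0],
                  [-1.0, 2.0, 4.0]])
    c = np.array([2.0, 5.0, 7.0])
    x = forward_substitute(L, c)
    print(L @ x - c)   # residual is (numerically) zero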