Section 3.1 Gaussian Elimination Method (GEM)


Key terms: rectangular systems; consistent and inconsistent systems; rank; types of solution sets; RREF; upper triangular form and back substitution; nonsingular systems; pivots in row reduction; row operations; MATLAB routine reduce; pivoting strategy; operation counts; forward sweep followed by back substitution; Gaussian elimination vs. Gauss-Jordan elimination.

Basic Problems: Rectangular Linear Systems

Recall that the rank of a matrix is the number of nonzero rows in its reduced row echelon form (abbreviated RREF). The system will be inconsistent if the RREF of the augmented matrix contains a row of the form [0 0 ... 0 1]. If the system is consistent, then there is either a unique solution (exactly one solution vector) or there are infinitely many solutions. There will be infinitely many solutions when some of the unknowns can be chosen arbitrarily (sometimes referred to as free variables).
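As a sketch of the rank and consistency test just described, the pure-Python `rref` function below (an illustrative stand-in, not the MATLAB routine `reduce` mentioned in the key terms) reduces an augmented matrix and exposes an inconsistent row:

```python
def rref(M, tol=1e-12):
    """Return the reduced row echelon form of M (a list of row lists)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Pick the largest remaining entry in column c as the pivot.
        p = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) < tol:
            continue                      # no usable pivot in this column
        M[r], M[p] = M[p], M[r]           # row interchange
        M[r] = [v / M[r][c] for v in M[r]]   # make the pivot equal to 1
        for i in range(rows):             # eliminate above AND below
            if i != r:
                M[i] = [vi - M[i][c] * vr for vi, vr in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

# Inconsistent system: x + y = 1 and x + y = 2.
R = rref([[1.0, 1.0, 1.0], [1.0, 1.0, 2.0]])
print(R)   # the second row is [0, 0, 1]: no solution
```

The rank is the number of nonzero rows of the result; a row that is zero except for its last (augmented) entry signals inconsistency.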

(Square) Nonsingular Linear Systems

For an upper triangular system Ux = c with nonzero diagonal entries, back substitution gives

x_n = c_n / u_{nn},  x_{n-1} = (c_{n-1} - u_{n-1,n} x_n) / u_{n-1,n-1},  ...,  x_1 = (c_1 - Σ_{j=2}^{n} u_{1j} x_j) / u_{11}.

Gaussian elimination applied to a square linear system Ax = b is the systematic use of row operations on the augmented matrix [A b] to obtain an equivalent linear system [U c] in which the matrix U is upper triangular; back substitution is then applied to the upper triangular system Ux = c to solve for x. As we apply row operations in Gaussian elimination we choose certain matrix entries called pivots. A pivot serves as a reference location for organizing subsequent calculations: the goal is to replace each entry below the pivot, within the pivot column, with zero. This procedure is referred to as elimination.
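The forward sweep and back substitution just described can be sketched in pure Python (an illustration assuming every pivot encountered is nonzero; no pivoting strategy yet, and not the text's MATLAB routine `reduce`):

```python
def gaussian_elimination(A, b):
    """Solve Ax = b by reducing [A | b] to upper triangular [U | c],
    then back substituting. A is a list of row lists; b is a list."""
    n = len(A)
    # Form the augmented matrix [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    # Forward sweep: zero out each entry below the pivot M[k][k].
    for k in range(n):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]        # the pivot is a denominator here
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # Back substitution on the upper triangular system Ux = c.
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        s = sum(M[j][k] * x[k] for k in range(j + 1, n))
        x[j] = (M[j][n] - s) / M[j][j]
    return x

# Example: 2x + y = 5 and x + 3y = 10, whose solution is x = 1, y = 3.
print(gaussian_elimination([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```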

In numerical analysis we do not use this row operation. Any conjectures why we omit this type of operation?

The crucial issue is to be able to choose nonzero pivots as we proceed. In basic Gaussian elimination the pivots appear as diagonal entries, and we use row interchanges only to avoid zero pivots or small pivots. The reason for this is that the pivots become denominators of the multipliers used in the row operations as we move toward upper triangular form, and division by small values is a floating point arithmetic pitfall. The selective use of row interchanges to determine pivots is referred to as a PIVOTING STRATEGY. A pivoting strategy is any procedure used to determine the diagonal entries as we apply row operations.
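A small numerical experiment illustrates the pitfall: with a pivot of 1e-20, naive elimination in double precision destroys the answer, while interchanging the rows first (the partial pivoting idea: use the entry of largest magnitude in the pivot column) recovers it. The 2x2 solver below is an illustrative sketch; the true solution of the system is x ≈ y ≈ 1.

```python
def solve2(a11, a12, b1, a21, a22, b2):
    """Eliminate below the (1,1) pivot of a 2x2 system, then back substitute."""
    m = a21 / a11                 # the pivot a11 is the denominator
    a22p = a22 - m * a12          # updated (2,2) entry
    b2p = b2 - m * b1
    y = b2p / a22p
    x = (b1 - a12 * y) / a11
    return x, y

# System:  1e-20*x + y = 1
#               x + y = 2
print(solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0))   # tiny pivot: x is lost
print(solve2(1.0, 1.0, 2.0, 1e-20, 1.0, 1.0))   # rows interchanged: accurate
```

With the tiny pivot, the multiplier m = 1e20 swamps the other entries and the computed x is completely wrong; after the interchange the multiplier is 1e-20 and both components come out accurately.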

Before investigating pivoting strategies we will examine the cost of applying Gaussian elimination to solve square linear systems. By cost we mean a count of the number of floating point arithmetic operations needed to obtain the solution of the linear system. To this end, some standard summation facts will be useful.
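The facts in question are presumably the standard summation identities on which such operation counts rely:

```latex
\sum_{k=1}^{m} k = \frac{m(m+1)}{2},
\qquad
\sum_{k=1}^{m} k^{2} = \frac{m(m+1)(2m+1)}{6}.
```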

We will break down Gaussian elimination into THREE phases:
1. the forward sweep on the coefficient matrix to get to upper triangular form
2. the forward sweep on the augmented column, reflecting the row operations used to get to upper triangular form
3. the back substitution process on the equivalent upper triangular system

For large n, the total number of floating point operations is approximately 2n^3/3. The number of additions/subtractions is about the same as the number of multiplications/divisions, each being roughly n^3/3.
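The estimate can be checked by bookkeeping the operations of the three phases directly. The counter below is one reasonable accounting (an illustrative sketch, not the text's tabulated breakdown):

```python
def ge_flop_count(n):
    """Count multiplications/divisions and additions/subtractions for
    Gaussian elimination on an n x n system (no pivoting)."""
    muls = adds = 0
    # Forward sweep on [A | b]: for each pivot column k, eliminate below.
    for k in range(n):
        for i in range(k + 1, n):
            muls += 1               # one division for the multiplier
            muls += (n - k)         # scale entries beyond column k, plus b
            adds += (n - k)         # subtract them from row i
    # Back substitution on Ux = c.
    for j in range(n - 1, -1, -1):
        muls += (n - 1 - j) + 1     # products u[j][k]*x[k], final division
        adds += (n - 1 - j)         # subtractions from c[j]
    return muls, adds

for n in (5, 10, 100):
    muls, adds = ge_flop_count(n)
    print(n, muls + adds, round(2 * n**3 / 3))
```

Already at n = 100 the exact total is within a few percent of 2n^3/3, and the multiplication/division and addition/subtraction counts are nearly equal.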

An alternative to Gaussian elimination studied in linear algebra courses is Gauss-Jordan elimination, which eliminates both below and above the pivots and uses row operations to make the pivots all one. Thus [A b] is transformed into [I c], and the solution is x = c. Gauss-Jordan elimination gives a result identical to computing the reduced row echelon form of [A b]. Its floating point operation count is approximately n^3, compared with approximately 2n^3/3 for Gaussian elimination. Comparing the counts for Gaussian elimination (GE) and Gauss-Jordan elimination (GJ), GE clearly costs less and hence should be used.
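The comparison can be sketched by counting operations for both methods; the bookkeeping conventions below are illustrative assumptions, but the asymptotic ratio GJ/GE tends to 3/2, matching n^3 versus 2n^3/3.

```python
def ge_total_ops(n):
    """Combined mult/div + add/sub count for GE (forward sweep + back subst.)."""
    ops = 0
    for k in range(n):
        # Each of the (n-1-k) rows below pivot k needs one division for its
        # multiplier plus (n-k) multiply-subtract pairs.
        ops += (n - 1 - k) * (2 * (n - k) + 1)
    for j in range(n):
        ops += 2 * (n - 1 - j) + 1       # back substitution for x_j
    return ops

def gj_total_ops(n):
    """Combined count for GJ: transform [A | b] into [I | c]."""
    ops = 0
    for k in range(n):
        ops += (n - k)                   # divide pivot row (and b) by pivot
        ops += (n - 1) * 2 * (n - k)     # multiply-subtract in all other rows
    return ops

for n in (10, 100, 1000):
    print(n, ge_total_ops(n), gj_total_ops(n),
          round(gj_total_ops(n) / ge_total_ops(n), 2))
```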