Section 3.1 Gaussian Elimination Method (GEM)

Key terms: rectangular systems; consistent and inconsistent systems; rank; types of solution sets; RREF; upper triangular form and back substitution; nonsingular systems; pivots in row reduction; row operations; MATLAB routine reduce; pivoting strategy; operation counts; forward sweep followed by back substitution; Gaussian elimination vs. Gauss-Jordan elimination.
Basic Problems: Rectangular Linear Systems

Recall that the rank of a matrix is the number of nonzero rows in its reduced row echelon form (abbreviated RREF). The system is inconsistent if the RREF of the augmented matrix contains a row of the form [0 0 ... 0 | 1]. If the system is consistent, then there is either a unique solution (exactly one solution vector) or there are infinitely many solutions. There are infinitely many solutions when some of the unknowns can be chosen arbitrarily (such unknowns are called free variables).
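The rank criteria above can be checked numerically. The following sketch (my own illustration, not from the text) classifies a system by comparing the rank of A with the rank of the augmented matrix [A b]:

```python
# Hypothetical illustration: classify A x = b as inconsistent, unique,
# or infinitely many solutions using the rank criteria described above.
import numpy as np

def classify_system(A, b):
    """Return 'inconsistent', 'unique', or 'infinitely many' for A x = b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_A < rank_Ab:
        return "inconsistent"          # RREF has a row [0 ... 0 | 1]
    if rank_A == A.shape[1]:
        return "unique"                # no free variables
    return "infinitely many"           # free variables exist

print(classify_system([[1, 2], [2, 4]], [1, 3]))   # inconsistent
print(classify_system([[1, 0], [0, 1]], [2, 3]))   # unique
print(classify_system([[1, 2], [2, 4]], [1, 2]))   # infinitely many
```

Note that `matrix_rank` uses a singular value tolerance, so for badly scaled matrices the computed rank may differ from the exact rank.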
(Square) Nonsingular Linear Systems

Back substitution on the upper triangular system Ux = c solves for the unknowns from the bottom up:

\[
x_n = \frac{c_n}{u_{nn}}, \qquad
x_{n-1} = \frac{c_{n-1} - u_{n-1,n}\,x_n}{u_{n-1,n-1}}, \qquad \ldots, \qquad
x_1 = \frac{c_1 - \sum_{j=2}^{n} u_{1j}\,x_j}{u_{11}}
\]
Gaussian elimination applied to a square linear system Ax = b is the systematic use of row operations on the augmented matrix [A b] to obtain an equivalent linear system [U c], where the matrix U is upper triangular, followed by back substitution on the upper triangular system to solve Ux = c for x. As we apply row operations in Gaussian elimination we choose certain matrix entries called pivots. A pivot serves as a reference location for organizing subsequent calculations: the goal is to replace each entry below the pivot, within the pivot column, with zero. This procedure is referred to as elimination.
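The forward sweep and back substitution described above can be sketched as follows. This is a minimal illustration with names of my own choosing, not the MATLAB routine mentioned in the key terms, and it uses no row interchanges, so it assumes every diagonal pivot is nonzero:

```python
# Minimal sketch of basic Gaussian elimination (no pivoting) followed by
# back substitution, assuming all diagonal pivots are nonzero.
import numpy as np

def gaussian_eliminate(A, b):
    """Reduce [A | b] to upper triangular [U | c], then solve U x = c."""
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = M.shape[0]
    # Forward sweep: zero out each entry below the pivot M[j, j].
    for j in range(n - 1):
        for i in range(j + 1, n):
            k = M[i, j] / M[j, j]          # multiplier; pivot is the denominator
            M[i, j:] -= k * M[j, j:]
    # Back substitution on the equivalent upper triangular system.
    x = np.zeros(n)
    for j in range(n - 1, -1, -1):
        x[j] = (M[j, -1] - M[j, j + 1:n] @ x[j + 1:]) / M[j, j]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
print(gaussian_eliminate(A, b))   # agrees with np.linalg.solve(A, b)
```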
In numerical analysis we do not use this row operation. Any conjectures why we omit this type of operation?
The crucial issue is being able to choose nonzero pivots as we proceed. In basic Gaussian elimination the pivots appear as diagonal entries. We use row interchanges only to avoid zero pivots or small pivots. The reason is that the pivots become the denominators of the multipliers k in the row operations as we move toward upper triangular form, and division by small values is a floating point arithmetic pitfall. The selective use of row interchanges to determine pivots is referred to as a PIVOTING STRATEGY. A pivoting strategy is any procedure used to determine the diagonal entries as we apply row operations.
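One common pivoting strategy, assumed here for illustration (the text has not yet named a specific strategy), is partial pivoting: before eliminating in column j, interchange rows so that the entry of largest magnitude at or below the diagonal becomes the pivot, keeping small values out of the denominators.

```python
# Sketch of the forward sweep with partial pivoting: swap in the row whose
# column-j entry has the largest magnitude before eliminating below it.
import numpy as np

def forward_sweep_partial_pivot(A, b):
    """Return [U | c] produced by elimination with partial pivoting on [A | b]."""
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = M.shape[0]
    for j in range(n - 1):
        p = j + np.argmax(np.abs(M[j:, j]))   # row of largest pivot candidate
        if p != j:
            M[[j, p]] = M[[p, j]]             # row interchange
        for i in range(j + 1, n):
            M[i, j:] -= (M[i, j] / M[j, j]) * M[j, j:]
    return M

# Without the interchange, the tiny entry 1e-20 would be the first pivot
# and the multiplier 1/1e-20 would swamp the second row.
print(forward_sweep_partial_pivot([[1e-20, 1.0], [1.0, 1.0]], [1.0, 2.0]))
```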
Before investigating pivoting strategies we will investigate the cost of applying Gaussian elimination to solve square linear systems. By cost we mean a count of the number of floating point arithmetic operations needed to obtain the solution of the linear system. To aid in this regard we will find the following facts useful:

\[
\sum_{k=1}^{m} k = \frac{m(m+1)}{2}, \qquad
\sum_{k=1}^{m} k^2 = \frac{m(m+1)(2m+1)}{6}
\]
We will break Gaussian elimination down into THREE phases:
1. the forward sweep on the coefficient matrix to reach upper triangular form;
2. the forward sweep on the augmented column, applying the same row operations;
3. the back substitution process on the equivalent upper triangular system.
For large n, the total number of floating point operations is approximately 2n^3/3. The number of additions/subtractions is about the same as the number of multiplications/divisions.
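A breakdown for certain values of n can be tabulated from the standard operation counts for Gaussian elimination with back substitution (assumed here; the exact polynomials below are the usual textbook totals for the three phases combined):

```python
# Tabulate the cost of Gaussian elimination with back substitution,
# using the standard counts (assumed):
#   mult/div: n^3/3 + n^2 - n/3,   add/sub: n^3/3 + n^2/2 - 5n/6.
def ge_flops(n):
    mults = n**3 / 3 + n**2 - n / 3
    adds = n**3 / 3 + n**2 / 2 - 5 * n / 6
    return mults, adds

print(f"{'n':>6} {'mult/div':>14} {'add/sub':>14} {'2n^3/3':>14}")
for n in (10, 100, 1000):
    mults, adds = ge_flops(n)
    print(f"{n:>6} {mults:>14.0f} {adds:>14.0f} {2 * n**3 / 3:>14.0f}")
```

The table shows that both counts approach n^3/3 as n grows, so the total is close to 2n^3/3 and the two kinds of operations are indeed roughly equal in number.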
Among the alternatives to Gaussian elimination studied in linear algebra courses is Gauss-Jordan elimination, which eliminates entries both below and above the pivots and uses row operations to make every pivot equal to one. Thus [A b] is transformed into [I c], so the solution is x = c. Gauss-Jordan elimination gives a result identical to computing the reduced row echelon form of [A b]. Its cost is approximately n^3/2 multiplications/divisions and n^3/2 additions/subtractions, about n^3 floating point operations in total for large n. Comparing the two methods: Gaussian elimination (GE) costs about 2n^3/3 operations, while Gauss-Jordan elimination (GJ) costs about n^3, so GE uses roughly two-thirds as many operations. Gaussian elimination therefore costs less and should be preferred.