Chemical Engineering 541


Computer Aided Design Methods: Direct Solution of Linear Systems

Outline
- Gauss Elimination
- Pivoting
- Scaling
- Cost
- LU Decomposition
- Thomas Algorithm
- Iterative Improvement

Overview
Two types of solutions to linear systems of equations:
1. Direct methods: analytic (exact) solution of the system. Algorithms work to (1) reduce the number of operations and (2) minimize roundoff error.
2. Iterative methods: approximate solutions to some accuracy.

Direct Methods
Gauss Elimination (GE), LU Decomposition (LU), Thomas Algorithm (TA), QR Decomposition (QR).
Use when:
1. The number of equations is small (~100 or less). Cost scales as n^3, so minimize n; roundoff errors increase with the number of operations.
2. The matrix is dense. Dense matrices usually arise in smaller problems anyway.
3. The system is not diagonally dominant (DD). Practical, large systems are typically DD.
4. The system is ill-conditioned (still subject to roundoff error, but not to iterative convergence problems).

Gauss Elimination
1. Eliminate to upper triangular form.
2. Back substitute to get the solution.
Elimination: work through columns k = 1 to n-1, putting zeros in rows i = k+1 to n of column k by subtracting multiples of row k from row i. Operate on A and b (treat b as if its elements were in A: the augmented matrix).

G.E. Algorithm: Elimination

do k=1,n-1                          (loop over columns)
    do i=k+1,n                      (loop over rows)
        fac = a(i,k)/a(k,k)         (find factor)
        do j=k+1,n                  (loop over row elements)
            a(i,j) = a(i,j) - fac*a(k,j)
        end do
        b(i) = b(i) - fac*b(k)      (do b)
    end do
end do

Cost ~ (2/3)n^3
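The elimination loop above translates directly into Python. This is a minimal sketch with 0-based indexing and no pivoting, so it assumes every pivot A[k][k] is nonzero; the function name and list-of-lists representation are illustrative, not from the slides.

```python
def gauss_eliminate(A, b):
    """Reduce A to upper triangular form in place, updating b alongside.

    A is a list of row lists, b a list of floats. No pivoting: assumes
    a nonzero pivot A[k][k] at every step.
    """
    n = len(A)
    for k in range(n - 1):            # loop over columns
        for i in range(k + 1, n):     # loop over rows below the pivot
            fac = A[i][k] / A[k][k]   # find factor
            # include column k so the eliminated entry becomes exactly zero
            for j in range(k, n):
                A[i][j] -= fac * A[k][j]
            b[i] -= fac * b[k]        # do b
    return A, b
```

For A = [[2, 1], [1, 3]], b = [3, 4], one elimination step (fac = 0.5) leaves the upper triangular system [[2, 1], [0, 2.5]] with b = [3, 2.5].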

G.E. Algorithm: Back Substitution

x(n) = b(n)/a(n,n)              (do first)
do i=n-1,1,-1                   (bottom up)
    x(i) = b(i)
    do j=i+1,n                  (loop builds the numerator)
        x(i) = x(i) - a(i,j)*x(j)
    end do
    x(i) = x(i)/a(i,i)          (divide by the diagonal)
end do

Cost ~ n^2
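A matching Python sketch of the back substitution, again 0-based and with illustrative names:

```python
def back_substitute(U, y):
    """Solve U x = y for upper triangular U, working from the bottom row up."""
    n = len(U)
    x = [0.0] * n
    x[n - 1] = y[n - 1] / U[n - 1][n - 1]     # last unknown first
    for i in range(n - 2, -1, -1):            # bottom up
        s = y[i]
        for j in range(i + 1, n):             # subtract known terms (numerator)
            s -= U[i][j] * x[j]
        x[i] = s / U[i][i]                    # divide by the diagonal
    return x
```

Applied to the triangular system produced above, U = [[2, 1], [0, 2.5]] with y = [3, 2.5], it returns x = [1, 1].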

Pivoting
G.E. fails for a 0 on the pivot (diagonal): this causes division by zero.
Avoid this by interchanging rows: partial pivoting.
Pivoting is also used for numerical precision, to minimize roundoff errors. Don't divide by small numbers: in R2 - R2*a21/a11, if a11 is small then the multiple of R1 is large and precision is lost.
Swap rows to put the largest element in the pivot position.
Algorithmically, don't actually swap rows; use an index array: pos = (1 2 3 4) becomes (4 2 3 1). This avoids recopying memory and is easy to program.
Partial pivoting costs ~n^2; full pivoting costs ~n^3.
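The index-array trick can be sketched as follows: entries of pos are swapped instead of rows, and every row access goes through pos. The function name and data layout are illustrative assumptions.

```python
def eliminate_with_pivoting(A, b):
    """Gaussian elimination with partial pivoting via an index array.

    Rows are never physically swapped; pos[k] names the row used as
    the pivot row at step k. Returns the index array.
    """
    n = len(A)
    pos = list(range(n))                        # row index array
    for k in range(n - 1):
        # pick the remaining row with the largest |entry| in column k
        p = max(range(k, n), key=lambda r: abs(A[pos[r]][k]))
        pos[k], pos[p] = pos[p], pos[k]         # swap indices, not rows
        for i in range(k + 1, n):
            fac = A[pos[i]][k] / A[pos[k]][k]
            for j in range(k, n):
                A[pos[i]][j] -= fac * A[pos[k]][j]
            b[pos[i]] -= fac * b[pos[k]]
    return pos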

Scaling
Scaling is needed to select the pivot elements: to compare pivot candidates across rows, the rows must be of comparable size. Divide each possible pivot by the maximum element of its row, i.e., scale each row to have a maximum of one. Use scaling only to select the pivot, not in the elimination itself.
At first sight one row (say R1) looks like the best pivot row; with scaling, a different row (R2) may be selected instead.
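Scaled pivot selection can be sketched with a small helper (an illustrative function, not from the slides); the scaling is used only to choose the pivot row.

```python
def scaled_pivot_row(A, pos, k):
    """Choose the pivot row for column k by comparing each candidate
    entry divided by the largest element in its row. Rows are accessed
    through the index array pos; only rows k..n-1 are candidates."""
    n = len(A)

    def scaled(r):
        row = A[pos[r]]
        return abs(row[k]) / max(abs(v) for v in row)

    return max(range(k, n), key=scaled)
```

For A = [[2, 100000], [1, 1]], the unscaled rule picks row 0 (|2| > |1|), but scaling gives 2/100000 versus 1/1, so row 1 is selected, matching the "at first sight R1, with scaling R2" situation on the slide.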

LU Decomposition
Closely related to G.E.; in fact, G.E. produces the LU decomposition implicitly.
Useful if multiple b vectors are to be evaluated for the same A.
Ax = b, so LUx = b, i.e., L(Ux) = b. With y = Ux:
- Solve Ly = b for y by forward substitution (easy).
- Solve Ux = y for x by back substitution (easy).

LU Algorithm
The l's are the G.E. factors (fac); the u's come from the G.E. row operations.
The algorithm works "in place": the LU factors occupy the same storage as A.
Changes to the G.E. algorithm: don't touch b during the factorization, and store the factor l(i,k) in place of the eliminated a(i,k).
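The in-place factorization and the two-substitution solve can be sketched together. This is a Doolittle-form LU (unit diagonal in L) without pivoting; names and layout are illustrative.

```python
def lu_inplace(A):
    """Factor A = LU in place: afterwards A holds U on and above the
    diagonal and the multipliers l(i,k) below it. No pivoting."""
    n = len(A)
    for k in range(n - 1):
        for i in range(k + 1, n):
            fac = A[i][k] / A[k][k]
            A[i][k] = fac                     # store l(i,k) where the zero went
            for j in range(k + 1, n):
                A[i][j] -= fac * A[k][j]
    return A

def lu_solve(LU, b):
    """Solve Ax = b from the packed factors: forward then back substitution."""
    n = len(LU)
    y = list(b)
    for i in range(1, n):                     # Ly = b (L has unit diagonal)
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # Ux = y
        s = y[i]
        for j in range(i + 1, n):
            s -= LU[i][j] * x[j]
        x[i] = s / LU[i][i]
    return x
```

The payoff is exactly the slide's point: factor once with lu_inplace, then call lu_solve repeatedly for different b vectors at ~n^2 cost each.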

Thomas Algorithm
G.E. applied to a tridiagonal system; also called the TDMA (tridiagonal matrix algorithm).
Common in differential equations, especially diffusion problems.
No pivoting: pivoting would destroy the structure, and in practice there is no need to pivot because of the natural structure of these systems.
Cost ~ n: a very fast algorithm.

Thomas Algorithm (continued)
Store the matrix as three 1D arrays: the sub-diagonal (c), the diagonal (a), and the super-diagonal (d).
In the elimination, the c array becomes zeros, so it can be ignored; the d array does not change, because the entries above the diagonal are never touched, so it can be ignored too. Only the a array (and the right-hand side) changes.
The algorithm is then elimination followed by back substitution.
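Using the slide's three 1D arrays (c, a, d), the Thomas algorithm can be sketched as follows; the function signature and in-place updates are illustrative assumptions.

```python
def thomas(c, a, d, b):
    """Thomas algorithm (TDMA) for a tridiagonal system.

    c = sub-diagonal (c[0] unused), a = diagonal,
    d = super-diagonal (d[n-1] unused), b = right-hand side.
    Overwrites a and b; assumes no pivoting is needed.
    """
    n = len(a)
    # Elimination sweep: only the diagonal a and the rhs b actually change.
    for i in range(1, n):
        fac = c[i] / a[i - 1]
        a[i] -= fac * d[i - 1]
        b[i] -= fac * b[i - 1]
    # Back substitution.
    x = [0.0] * n
    x[n - 1] = b[n - 1] / a[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - d[i] * x[i + 1]) / a[i]
    return x
```

A typical 1D diffusion-style system, [[2, -1, 0], [-1, 2, -1], [0, -1, 2]] x = [1, 0, 1], solves to x = [1, 1, 1] in O(n) work.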