Uppsala University, Department of Information Technology. Hands-on 1: Ill-conditioning


Exercise 1 (Ill-conditioned linear systems)

Definition 1. A system of linear equations is said to be ill-conditioned when small perturbations in the system can produce relatively large changes in the exact solution. Otherwise, the system is said to be well-conditioned.

Consider the problem Ax = b, where

    Ax = [ 0.835  0.667 ] [ x1 ] = [ b1 ] = b.                (1)
         [ 0.333  0.266 ] [ x2 ]   [ b2 ]

Let the right-hand-side vector b be the result of an experiment, read from the dial of a test instrument. Assume we know that the tolerance within which the dial can be read is ±0.001, and that the values of the components are read as

    b0 = [ 0.168 ]
         [ 0.067 ].

Due to the small uncertainty, we have to expect that 0.167 ≤ b1 ≤ 0.169 and 0.066 ≤ b2 ≤ 0.068. Thus, the solutions for

    b1 = [ 0.167 ]    and    b2 = [ 0.169 ]
         [ 0.066 ]               [ 0.068 ]

should be expected to be as valid as that in the first case.

1. Find the exact solution of the system for each of the above vectors b0, b1, b2.
2. Is the observed instability due to some numerical procedure, the vector b, the matrix A, or a combination of these factors?
3. Try to quantify how ill-conditioned the problem is. How would you do this?

Remark 1. One can demonstrate the reason for a 2×2 system to be ill-conditioned as follows. Geometrically, two equations in two unknowns define two straight lines, and the point of intersection of those lines defines the solution. An ill-conditioned system represents two lines which are almost parallel. (It is advisable to plot the two lines corresponding to the considered system.)
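As a quick check of questions 1 and 3 — in Python/NumPy rather than the handout's Matlab — one can solve the system for all three right-hand sides and quantify the conditioning via the condition number. The coefficients below are the reconstructed values of system (1), which appears to be the classic example from Meyer's book cited in the literature list.

```python
import numpy as np

# Matrix of system (1); cond(A) quantifies the ill-conditioning.
A = np.array([[0.835, 0.667],
              [0.333, 0.266]])

# The measured right-hand side b0 and its two perturbations within +-0.001.
for b in ([0.168, 0.067], [0.167, 0.066], [0.169, 0.068]):
    x = np.linalg.solve(A, np.array(b))
    print(b, "->", x)

# det(A) is about -1e-6 and cond(A) about 1.3e6: perturbations of size
# 1e-3 in b can be amplified by roughly a factor cond(A) in the solution.
print("det(A)  =", np.linalg.det(A))
print("cond(A) =", np.linalg.cond(A))
```

The solution jumps from (1, -1) for b0 to roughly (-400, 501) and (402, -503) for the perturbed vectors, so the instability lies in the matrix A itself, and the amplification is captured by cond(A).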

Exercise 2

Consider the system in (1) once more,

    [ 0.835  0.667 ] [ x1 ] = [ 0.168 ]
    [ 0.333  0.266 ] [ x2 ]   [ 0.067 ],

and imagine that you somehow have computed the solution x = [-666, 834]. Try to check the error by substituting the solution into the equations and see what the error is. In other words, compute the residual r = Ax - b.

1. Is the residual small or big? Is the residual a reliable source of information on whether the computed solution is accurate or not, and in which cases?
2. How can we check a computed solution for accuracy?

Exercise 3

Determine the exact solution of the following system:

    8 2   x     2
    9 6   y  =  6                (2)
    9 8   z     0

Now change to in the first equation, and again solve the system with exact arithmetic. Is the system ill-conditioned?

Exercise 4 (Effect of matrix scaling)

Remark 2 (Practical scaling strategies).
1. When scaling, choose units which are natural to the underlying problem and which do not distort the proportions between the problem components, such as matrix entries and entries of the right-hand-side vector. Column scaling can cause such a distortion.
2. Row-scale the system Ax = b so that the coefficient of maximum magnitude in each row is equal to 1.
3. When the matrix is symmetric, it is better to scale symmetrically, for example as Ã = DAD, where D is the diagonal matrix D = diag{1/√(a_ii), i = 1, ..., n}.

Consider a system of equations with the coefficient matrix

    [ 1    1/2  1/3 ] [ x ]
    [ 1/2  1/3  1/4 ] [ y ]
    [ 1/3  1/4  1/5 ] [ z ].

A matrix of this type is referred to as a Hilbert matrix.

(a) First convert the coefficients to 3-digit floating-point numbers and then use 3-digit arithmetic with partial pivoting but no scaling to compute the solution.

(b) Again use 3-digit arithmetic, but row-scale the coefficients (after converting them to 3-digit floating-point numbers), and then use partial pivoting to compute the solution.

(c) Proceed as in (b), but this time row-scale the coefficients before each elimination step.

(d) Use exact arithmetic to determine the solution and compare the result with those obtained in (a), (b) and (c).

Hint: One should somehow simulate 3-digit arithmetic in Matlab. One limited possibility is to replace a value v by w = floor(v*10^(3))*10^(-3); for example, v = 0.23456 gives the approximation w = 0.234.

Exercise 5 (Feel the sparse-dense difference)

Create a 500 x 500 random matrix that has about 5% nonzero entries:

    S = sprand(500,500,0.05);

Convert it to a full matrix:

    F = full(S);

Then check:

    tic, S*S; toc
    tic, F*F; toc

Exercise 6 (The effect of fill-in)

MATLAB provides a nonsymmetric sparse demonstration matrix west0479 of dimension 479 x 479, which can be accessed after executing load west0479. Let us use the abbreviated name A = west0479;

1. Compute the sparse LU factorization, checking the time it takes to perform it, for example tic, [L,U,P] = lu(A); toc. Look at spy plots of A, P*A and L+U, and make note of the number of nonzero elements in the triangular factors.
2. Modify A by applying a random column permutation to it. Then repeat (1).

    % random column permutation
    col = randperm(479);
    AP = A(:,col);
    spy(AP)

3. Modify A by applying the column minimum degree ordering to the columns. Repeat (1).

4. Compare the above results.
5. Generate a random right-hand-side vector b = randn(479,1);
6. Solve the system as x = A\b;
7. Compare the solution time with that spent to factorize the matrix.

Figure 1: Geometry of the small bone tissue sample.

Exercise 7 (Hard-to-handle matrices)

Load the matrices Bone matr 269.mat and Bone matr 272.mat. These originate from a numerical model of trabecular bone tissue. A small sample of bone tissue is scanned using a high-resolution CT scanner, and the 3D geometry of the bone tissue is reconstructed. Then, using the linear elasticity model and the finite element method, we arrive at linear systems to be solved in order to determine whether or not the bone can withstand a certain load. The smaller matrix corresponds to a tiny piece of bone with a geometry as shown in Figure 1. The files contain the corresponding stiffness and mass matrices K and M.

Study the matrices K by all means you know: structure, spectrum, condition number, scaling, ... Create a right-hand-side (rhs) vector containing zeros except for its first element, which should be taken as 1. Try to solve with the matrices K and the above rhs (of suitable size) using a direct method and the conjugate gradient method, provided in Matlab as

    [sol_it,flag,relres,iter,resvec] = pcg(K,rhs,1e-6,10000);

Note 1: The file Bone matr 272.mat is large. The size of the matrices in this case is 27 2. This, however, is considered very small: bone problems of practical interest have sizes of several hundreds of millions. The largest problem of this type solved by now has over 500 000 000 degrees of freedom, on a machine with over 4000 processors.
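To see what an iteration such as pcg actually does, here is a minimal sketch of the textbook conjugate gradient method in Python/NumPy. Since the bone matrices are not reproduced here, it is applied to a hypothetical stand-in for the stiffness matrix K: a small symmetric positive definite 1D Laplacian, with the size n = 200 chosen arbitrarily; the right-hand side is built as in the exercise (zeros except a 1 in the first entry).

```python
import numpy as np

# Hypothetical SPD stand-in for the stiffness matrix K:
# a tridiagonal 1D Laplacian of arbitrary size n.
n = 200
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Right-hand side as in the exercise: zeros except a 1 in the first entry.
rhs = np.zeros(n)
rhs[0] = 1.0

def cg(A, b, tol=1e-6, maxiter=10000):
    """Plain (unpreconditioned) conjugate gradient for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for it in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length
        x += alpha * p
        r -= alpha * Ap              # updated residual
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            return x, it + 1
        p = r + (rs_new / rs) * p    # new A-conjugate direction
        rs = rs_new
    return x, maxiter

x, iters = cg(K, rhs)
print("iterations:", iters)
print("relative residual:", np.linalg.norm(rhs - K @ x) / np.linalg.norm(rhs))
```

For the well-conditioned stand-in the iteration stops after a modest number of steps; for the badly conditioned bone matrices K, the iteration counts (and the need for preconditioning) are exactly what this exercise asks you to observe.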

Used literature:

Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, 2000.
L. Trefethen, D. Bau, Numerical Linear Algebra, SIAM, 1997.
J. Leader, Numerical Analysis and Scientific Computation, Pearson Education, 2004.
D.S. Watkins, Fundamentals of Matrix Computations, J. Wiley & Sons Ltd, 2002.