A Study of Numerical Methods for Simultaneous Equations
Er. Chandan Krishna Mukherjee, B.Sc.Engg., M.E., M.B.A.
Assistant Professor (Mechanical), SSBT's College of Engineering & Technology, Jalgaon, Maharashtra

Abstract: Engineers are often confronted with the problem of finding a solution to a set of simultaneous equations in many variables. Traditional mathematical methods are not amenable to computerization and become too cumbersome when the number of variables is large. Numerical methods are programmable and can deliver solutions rapidly with reasonable accuracy, and hence provide a viable alternative for handling such problems. In this paper the working procedures of different numerical methods are described using a simple illustration, and the methods are compared.

Keywords: Gauss elimination, Gauss-Jordan, Doolittle, Crout, Cholesky, Jacobi, Gauss-Seidel methods

I. INTRODUCTION

A set of simultaneous equations comprises n algebraic equations in n unknown variables. Where n <= 3, traditional mathematical methods taught at the undergraduate level, such as elimination with re-substitution or Cramer's rule, may conveniently be used. However, these methods are not programmable and are tedious to apply when n > 3. Engineers are frequently confronted with the problem of solving simultaneous equations involving a large number of variables, for which finding solutions by traditional techniques is daunting and tedious. In such cases numerical methods, which are programmable, can be applied much more conveniently to find the solution rapidly. The available numerical methods may be classified by approach as follows:
a. Direct approach (i.e. Gauss elimination and Gauss-Jordan methods),
b. Factorization approach (i.e. LU decomposition and Cholesky methods), and
c. Iterative approach (i.e. Jacobi and Gauss-Seidel methods).
The working procedures of these methods are described and compared below.
II. DIRECT APPROACH METHODS

This approach may be considered a programmable version of the elimination and re-substitution method. The set of simultaneous equations is first represented as an augmented matrix. For example, the equations

X + Y = 6
X - Y = 4

may be represented by the augmented matrix

[ 1   1 | 6 ]
[ 1  -1 | 4 ]

Note that the left-hand elements of the n-th row are the coefficients of the n variables of the n-th equation, with the constant term placed on the right-hand side of the augmented matrix. This augmented matrix is then reduced, using elementary row operations, to either an upper triangular matrix (Gauss elimination method) or a unit matrix (Gauss-Jordan method). The working procedure of these two methods is described below using the above example.

Gauss elimination method [1],[2],[3]: On row 2 of the above augmented matrix, the operation R2 <- R2 - (a21/a11) R1 is performed to obtain the upper triangular matrix

[ 1   1 |  6 ]
[ 0  -2 | -2 ]

Note that if the pivot element a11 is zero the program will stall, and the situation has to be corrected by interchanging rows where necessary. From row 2 we get -2Y = -2, or Y = 1. Back-substituting the value of Y in row 1, we have X + 1 = 6, or X = 5.

Gauss-Jordan method [1],[2],[3]: On row 1 of the above upper triangular matrix, a further operation R1 <- R1 - (a12/a22) R2 is performed to obtain the matrix

[ 1   0 |  5 ]
[ 0  -2 | -2 ]

From row 1 we directly get X = 5; the additional computation of back substitution is not required. From row 2 we directly get Y = 1.

III. FACTORIZATION APPROACH METHODS

The general procedure in this approach involves a finite number of steps, described below.

Step 1: The system is represented as
[A][X] = [B]                                  ...(1)
where [A] is an n x n coefficient matrix, [B] is an n x 1 right-hand-side constant matrix, and [X] is an n x 1 matrix whose elements are the variables X1, X2, ..., Xn.

Step 2: The coefficient matrix [A] is decomposed into an n x n lower triangular matrix [L] and an n x n upper triangular matrix [U] such that
[L][U] = [A]                                  ...(2)
Equation (1) can hence be rewritten as [L][U][X] = [B], which can be further represented as
[L][V] = [B]                                  ...(3)
where the dummy variable matrix [V] is given by
[U][X] = [V]                                  ...(4)
The elements of [L] and [U] are determined from Eqn (2) using the elementary rules of matrix multiplication.

Step 3: Using the elementary rules of matrix multiplication on relation (3), the elements of the n x 1 dummy variable matrix [V] are determined.

Step 4: Using the elementary rules of matrix multiplication on relation (4), the elements of [X] are derived, which is the required solution.
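The direct-approach procedure described above can be sketched in Python. This is an illustrative sketch, not code from the paper; the function name `gauss_eliminate` is my own, and row interchange (partial pivoting) is included to handle the zero-pivot situation the text warns about.

```python
def gauss_eliminate(a, b):
    """Solve [A][X] = [B] by Gauss elimination with back substitution."""
    n = len(b)
    # Build the augmented matrix [A | B].
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    # Forward elimination: reduce to upper triangular form.
    for k in range(n - 1):
        # Partial pivoting: bring the largest |pivot| onto the diagonal.
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[p] = m[p], m[k]
        for i in range(k + 1, n):
            factor = m[i][k] / m[k][k]          # (a_ik / a_kk)
            for j in range(k, n + 1):
                m[i][j] -= factor * m[k][j]     # R_i <- R_i - factor * R_k
    # Back substitution, starting from the last row.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (m[i][n] - s) / m[i][i]
    return x

# The example from the text: X + Y = 6, X - Y = 4  ->  X = 5, Y = 1.
print(gauss_eliminate([[1.0, 1.0], [1.0, -1.0]], [6.0, 4.0]))  # [5.0, 1.0]
```

The same elimination loop, continued above the diagonal as well, would give the Gauss-Jordan variant.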
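Steps 1-4 of the factorization approach can likewise be sketched. This is a minimal Doolittle-style sketch (unit diagonal on [L]) with illustrative names, assuming no zero pivots arise; it is not the paper's own code.

```python
def lu_decompose(a):
    """Step 2: factor [A] into [L][U], with ones on the diagonal of [L]."""
    n = len(a)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):                       # row i of U
            U[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                   # column i of L
            L[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Step 3: forward substitution on [L][V] = [B].
    v = [0.0] * n
    for i in range(n):
        v[i] = (b[i] - sum(L[i][k] * v[k] for k in range(i))) / L[i][i]
    # Step 4: back substitution on [U][X] = [V].
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (v[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[1.0, 1.0], [1.0, -1.0]])
print(L)                            # [[1.0, 0.0], [1.0, 1.0]]
print(U)                            # [[1.0, 1.0], [0.0, -2.0]]
print(lu_solve(L, U, [6.0, 4.0]))   # [5.0, 1.0]
```

Once [L] and [U] are known, any number of right-hand sides [B] can be solved by repeating only steps 3 and 4, which is the main attraction of the factorization approach.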
Different forms of the lower and upper triangular matrices used give rise to different methods:
1. LU decomposition method: (a) Crout method [3], (b) Doolittle method [1],[2]
2. Cholesky method [1]
(In the convention of reference [1], the Doolittle form carries the unit diagonal on [L] and the Crout form carries it on [U]; that convention is followed below.)

Crout method: Here

[L] = [ L11   0  ]    and    [U] = [ 1  U12 ]
      [ L21  L22 ]                 [ 0   1  ]

In step 2, from Eqn (2), we have

[ L11   0  ] [ 1  U12 ]  =  [ 1   1 ]
[ L21  L22 ] [ 0   1  ]     [ 1  -1 ]

Row 1 of [L] x Column 1 of [U] yields the element a11 = 1, giving L11 = 1.
Row 1 of [L] x Column 2 of [U] yields L11 * U12 = 1, giving U12 = 1.
Row 2 of [L] x Column 1 of [U] yields L21 = 1.
Row 2 of [L] x Column 2 of [U] yields L21 * U12 + L22 = -1, giving L22 = -2.
In step 3, from Eqn (3), we have

[ 1   0 ] [ V1 ]  =  [ 6 ]
[ 1  -2 ] [ V2 ]     [ 4 ]

Row 1 of [L] x Column 1 of [V] yields V1 = 6, and Row 2 of [L] x Column 1 of [V] yields V1 - 2 V2 = 4, giving V2 = 1.
In step 4, from Eqn (4), we have

[ 1  1 ] [ X1 ]  =  [ 6 ]
[ 0  1 ] [ X2 ]     [ 1 ]

Row 2 of [U] x Column 1 of [X] yields X2 = 1.
Row 1 of [U] x Column 1 of [X] yields X1 + X2 = 6, giving X1 = 5.

Doolittle method: Here

[L] = [ 1    0 ]    and    [U] = [ U11  U12 ]
      [ L21  1 ]                 [ 0    U22 ]

In step 2, from Eqn (2), we have

[ 1    0 ] [ U11  U12 ]  =  [ 1   1 ]
[ L21  1 ] [ 0    U22 ]     [ 1  -1 ]

Row 1 of [L] x Column 1 of [U] yields U11 = 1.
Row 1 of [L] x Column 2 of [U] yields U12 = 1.
Row 2 of [L] x Column 1 of [U] yields L21 * U11 = 1, giving L21 = 1.
Row 2 of [L] x Column 2 of [U] yields L21 * U12 + U22 = -1, giving U22 = -2.
In step 3, from Eqn (3), we have

[ 1  0 ] [ V1 ]  =  [ 6 ]
[ 1  1 ] [ V2 ]     [ 4 ]
Row 1 of [L] x Column 1 of [V] gives V1 = 6.
Row 2 of [L] x Column 1 of [V] yields V1 + V2 = 4, giving V2 = -2.
In step 4, from Eqn (4), we have

[ 1   1 ] [ X1 ]  =  [  6 ]
[ 0  -2 ] [ X2 ]     [ -2 ]

Row 2 of [U] x Column 1 of [X] gives -2 X2 = -2, i.e. X2 = 1.
Row 1 of [U] x Column 1 of [X] yields X1 + X2 = 6, giving X1 = 5.

Cholesky method [1]: This method may be considered a special type of LU decomposition, applicable where the coefficient matrix [A] is a symmetric positive definite matrix. For a symmetric matrix, [A] = [A]^T, i.e. a_ij = a_ji for all i, j. For a positive definite matrix, [X]^T [A] [X] > 0 for all non-zero vectors [X]. The general procedure of the Cholesky method is similar to that of the factorization methods above, except that we take [U] = [L]^T, i.e. U_ij = L_ji for all i, j. The method is illustrated below.

For the equations X1 + X2 = 2 and X1 + 5 X2 = 6,

[A] = [ 1  1 ]
      [ 1  5 ]

which is symmetric. Here we take

[L] = [ L11   0  ]    and    [U] = [ L11  L21 ]
      [ L21  L22 ]                 [  0   L22 ]

In step 2, from Eqn (2), we have

[ L11   0  ] [ L11  L21 ]  =  [ 1  1 ]
[ L21  L22 ] [  0   L22 ]     [ 1  5 ]

Row 1 of [L] x Column 1 of [U] yields L11^2 = 1, or L11 = +/-1. Since [A] is positive definite, L11 will never be imaginary; either value may be taken.
Row 1 of [L] x Column 2 of [U] (or Row 2 of [L] x Column 1 of [U]) gives L11 * L21 = 1. If L11 is taken positive, L21 = 1; if L11 is taken negative, L21 = -1.
Row 2 of [L] x Column 2 of [U] yields L21^2 + L22^2 = 5, giving L22 = +/-2.
For {L11, L21, L22} the value sets {1, 1, 2}, {1, 1, -2}, {-1, -1, 2} and {-1, -1, -2} are obtainable, any of which may be used in the following steps, giving the same end result.
In step 3, from Eqn (3), using the fourth set of values, we have

[ -1   0 ] [ V1 ]  =  [ 2 ]
[ -1  -2 ] [ V2 ]     [ 6 ]

Row 1 of [L] x Column 1 of [V] gives -V1 = 2, or V1 = -2.
Row 2 of [L] x Column 1 of [V] yields -V1 - 2 V2 = 6, or V2 = -2.
In step 4, from Eqn (4), we have

[ -1  -1 ] [ X1 ]  =  [ -2 ]
[  0  -2 ] [ X2 ]     [ -2 ]

Row 2 of [U] x Column 1 of [X] gives -2 X2 = -2, or X2 = 1.
Row 1 of [U] x Column 1 of [X] yields -X1 - X2 = -2, or X1 = 1.

IV. ITERATIVE APPROACH METHODS

In an iterative approach, starting from an initial trial solution, a better solution is expected after each iteration, ultimately converging to a reasonably close approximate solution. To help ensure convergence, the equations are first rearranged so that the largest coefficient of each variable lies on the leading diagonal of the coefficient matrix [A]. Subsequently, the relation for finding each variable is obtained from the equation having the largest coefficient for that variable, as illustrated below [1],[2],[3].

For example, the equations
4X - 10Y + 3Z = -3
10X - 5Y - 2Z = 3
X + 6Y + 10Z = -3
are rearranged as
10X - 5Y - 2Z = 3
4X - 10Y + 3Z = -3
X + 6Y + 10Z = -3
A check of the coefficient matrix

[A] = [ 10   -5   -2 ]
      [  4  -10    3 ]
      [  1    6   10 ]

confirms that all the large values lie on the leading diagonal. The relations obtained are
10X - 5Y - 2Z = 3    ->   X = (3 + 5Y + 2Z) / 10        ...(4)
4X - 10Y + 3Z = -3   ->   Y = (3 + 4X + 3Z) / 10        ...(5)
X + 6Y + 10Z = -3    ->   Z = (-3 - X - 6Y) / 10        ...(6)

Jacobi method: The iteration is started with an initial solution X0 = Y0 = Z0 = 0 substituted in relations (4), (5) and (6) respectively to obtain X1 = 0.3, Y1 = 0.3, Z1 = -0.3, which are re-substituted in relations (4), (5) and (6) to obtain even better values X2 = 0.39, Y2 = 0.33, Z2 = -0.51, and so on. Iteration is continued until no significant improvement is achieved. After 9 iterations we have X9 = 0.342, Y9 = 0.285, Z9 = -0.505.

Gauss-Seidel method: The iteration is started with the initial solution Y0 = Z0 = 0 substituted in relation (4) to obtain X1 = 0.3. X1 and Z0 are substituted in relation (5) to obtain Y1 = 0.42. X1 and Y1 are substituted in relation (6) to obtain Z1 = -0.582. Iteration is continued in the same sequence, always using the most recently computed values, until no further improvement is achieved. After 5 iterations we have X5 = 0.342, Y5 = 0.285, Z5 = -0.505. Thus this method converges faster than the Jacobi method.
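The two iterative schemes above can be sketched as follows. This is illustrative code, not from the paper; the iteration counts match the worked example, and the diagonal entries are assumed non-zero after rearrangement.

```python
def jacobi(a, b, iterations=9):
    """Jacobi: every new value is computed from the *previous* sweep's values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
             for i in range(n)]
    return x

def gauss_seidel(a, b, iterations=5):
    """Gauss-Seidel: each new value is used immediately within the same sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
    return x

# The rearranged system from the text (large coefficients on the diagonal).
A = [[10.0, -5.0, -2.0],
     [4.0, -10.0, 3.0],
     [1.0, 6.0, 10.0]]
B = [3.0, -3.0, -3.0]

print([round(v, 3) for v in jacobi(A, B)])        # [0.342, 0.285, -0.505]
print([round(v, 3) for v in gauss_seidel(A, B)])  # [0.342, 0.285, -0.505]
```

Note the only difference between the two functions: Jacobi builds a whole new vector from the old one, while Gauss-Seidel updates in place, which is why it reaches the same accuracy in fewer sweeps here.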
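Returning to the factorization examples, the Cholesky method can also be sketched. This illustrative sketch (names are my own) takes the positive root at each step, i.e. the value set {1, 1, 2} from the discussion above, and then runs steps 3 and 4 on the same example.

```python
import math

def cholesky(a):
    """Factor a symmetric positive definite [A] as [L][L]^T (positive diagonal)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Positive definiteness guarantees the radicand is positive.
                L[i][i] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# The example from the text: X1 + X2 = 2, X1 + 5*X2 = 6.
L = cholesky([[1.0, 1.0], [1.0, 5.0]])
print(L)            # [[1.0, 0.0], [1.0, 2.0]]

# Step 3: forward substitution on [L][V] = [B].
b = [2.0, 6.0]
v = [0.0, 0.0]
v[0] = b[0] / L[0][0]
v[1] = (b[1] - L[1][0] * v[0]) / L[1][1]
# Step 4: back substitution on [L]^T [X] = [V].
x = [0.0, 0.0]
x[1] = v[1] / L[1][1]
x[0] = (v[0] - L[1][0] * x[1]) / L[0][0]
print(x)            # [1.0, 1.0]
```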
V. CONCLUSION

In this paper, the working procedures of the numerical methods have been described using simple illustrations, and the methods have been compared. It is found that iterative methods usually give fast, reasonably accurate results with minimal computational effort, but convergence may sometimes be slow or may fail to occur. Direct methods give more accurate results but are prone to division-by-zero situations and require more computational effort and memory space. Factorization methods give the most accurate results with less computational effort and memory space. All of the above methods are programmable and hence can be run on a computer. It is hoped that this paper will help generate interest in the subject as well as act as a ready reference for engineering students.
REFERENCES
1. S.C. Chapra & R.P. Canale, Numerical Methods for Engineers, Tata McGraw-Hill.
2. B.S. Grewal, Higher Engineering Mathematics (Numerical Methods), Khanna Publishers.
3. P. Kandasamy, K. Thilagavathy & K. Gunavathy, Numerical Methods, S. Chand.
ICRA 2016 Tutorial on SLAM Graph-Based SLAM and Sparsity Cyrill Stachniss 1 Graph-Based SLAM?? 2 Graph-Based SLAM?? SLAM = simultaneous localization and mapping 3 Graph-Based SLAM?? SLAM = simultaneous
More informationMultigrid solvers M. M. Sussman sussmanm@math.pitt.edu Office Hours: 11:10AM-12:10PM, Thack 622 May 12 June 19, 2014 1 / 43 Multigrid Geometrical multigrid Introduction Details of GMG Summary Algebraic
More informationAlternating Projections
Alternating Projections Stephen Boyd and Jon Dattorro EE392o, Stanford University Autumn, 2003 1 Alternating projection algorithm Alternating projections is a very simple algorithm for computing a point
More informationLeast-Squares Fitting of Data with B-Spline Curves
Least-Squares Fitting of Data with B-Spline Curves David Eberly, Geometric Tools, Redmond WA 98052 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International
More informationLinear Equations in Linear Algebra
1 Linear Equations in Linear Algebra 1.2 Row Reduction and Echelon Forms ECHELON FORM A rectangular matrix is in echelon form (or row echelon form) if it has the following three properties: 1. All nonzero
More informationCS 770G - Parallel Algorithms in Scientific Computing
CS 770G - Parallel lgorithms in Scientific Computing Dense Matrix Computation II: Solving inear Systems May 28, 2001 ecture 6 References Introduction to Parallel Computing Kumar, Grama, Gupta, Karypis,
More informationAim. Structure and matrix sparsity: Part 1 The simplex method: Exploiting sparsity. Structure and matrix sparsity: Overview
Aim Structure and matrix sparsity: Part 1 The simplex method: Exploiting sparsity Julian Hall School of Mathematics University of Edinburgh jajhall@ed.ac.uk What should a 2-hour PhD lecture on structure
More informationON SOME METHODS OF CONSTRUCTION OF BLOCK DESIGNS
ON SOME METHODS OF CONSTRUCTION OF BLOCK DESIGNS NURNABI MEHERUL ALAM M.Sc. (Agricultural Statistics), Roll No. I.A.S.R.I, Library Avenue, New Delhi- Chairperson: Dr. P.K. Batra Abstract: Block designs
More informationSOLVING SYSTEMS OF LINEAR INTERVAL EQUATIONS USING THE INTERVAL EXTENDED ZERO METHOD AND MULTIMEDIA EXTENSIONS
Please cite this article as: Mariusz Pilarek, Solving systems of linear interval equations using the "interval extended zero" method and multimedia extensions, Scientific Research of the Institute of Mathematics
More informationA numerical grid and grid less (Mesh less) techniques for the solution of 2D Laplace equation
Available online at www.pelagiaresearchlibrary.com Advances in Applied Science Research, 2014, 5(1):150-155 ISSN: 0976-8610 CODEN (USA): AASRFC A numerical grid and grid less (Mesh less) techniques for
More informationCOMP 558 lecture 19 Nov. 17, 2010
COMP 558 lecture 9 Nov. 7, 2 Camera calibration To estimate the geometry of 3D scenes, it helps to know the camera parameters, both external and internal. The problem of finding all these parameters is
More informationStudy and implementation of computational methods for Differential Equations in heterogeneous systems. Asimina Vouronikoy - Eleni Zisiou
Study and implementation of computational methods for Differential Equations in heterogeneous systems Asimina Vouronikoy - Eleni Zisiou Outline Introduction Review of related work Cyclic Reduction Algorithm
More informationNew parallel algorithms for finding determinants of NxN matrices
New parallel algorithms for finding determinants of NxN matrices Sami Almalki, Saeed Alzahrani, Abdullatif Alabdullatif College of Computer and Information Sciences King Saud University Riyadh, Saudi Arabia
More informationMATH 423 Linear Algebra II Lecture 17: Reduced row echelon form (continued). Determinant of a matrix.
MATH 423 Linear Algebra II Lecture 17: Reduced row echelon form (continued). Determinant of a matrix. Row echelon form A matrix is said to be in the row echelon form if the leading entries shift to the
More informationWhat is Multigrid? They have been extended to solve a wide variety of other problems, linear and nonlinear.
AMSC 600/CMSC 760 Fall 2007 Solution of Sparse Linear Systems Multigrid, Part 1 Dianne P. O Leary c 2006, 2007 What is Multigrid? Originally, multigrid algorithms were proposed as an iterative method to
More informationFinite Element Analysis Prof. Dr. B. N. Rao Department of Civil Engineering Indian Institute of Technology, Madras. Lecture - 24
Finite Element Analysis Prof. Dr. B. N. Rao Department of Civil Engineering Indian Institute of Technology, Madras Lecture - 24 So in today s class, we will look at quadrilateral elements; and we will
More informationAdvanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras
Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 16 Cutting Plane Algorithm We shall continue the discussion on integer programming,
More informationLevitin second pages 2002/9/9 12:52 p. 192 (chap06) Windfall Software, PCA ZzTeX v8.7
Levitin second pages /9/9 1:5 p. 19 (chap6) Windfall Software, PCA ZzTeX v8.7 6 Transform-and-Conquer That s the secret to life...replace one worry with another. Charles M. Schulz (19 ), American cartoonist,
More informationChap5 The Theory of the Simplex Method
College of Management, NCTU Operation Research I Fall, Chap The Theory of the Simplex Method Terminology Constraint oundary equation For any constraint (functional and nonnegativity), replace its,, sign
More informationPart 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm
In the name of God Part 4. 4.1. Dantzig-Wolf Decomposition Algorithm Spring 2010 Instructor: Dr. Masoud Yaghini Introduction Introduction Real world linear programs having thousands of rows and columns.
More informationA Poorly Conditioned System. Matrix Form
Possibilities for Linear Systems of Equations A Poorly Conditioned System A Poorly Conditioned System Results No solution (inconsistent) Unique solution (consistent) Infinite number of solutions (consistent)
More information1 Exercise: 1-D heat conduction with finite elements
1 Exercise: 1-D heat conduction with finite elements Reading This finite element example is based on Hughes (2000, sec. 1.1-1.15. 1.1 Implementation of the 1-D heat equation example In the previous two
More information