Algorithms for Reordering Power Systems Matrices to Minimal Band Form

P.S. Nagendra Rao and Pravin Kumar Gupta

Abstract - Efficient exploitation of matrix sparsity is critical to the successful implementation of many power system computations. This paper introduces a non-traditional approach to handling sparsity in power system computations. An attempt is made to provide evidence that minimum-band ordering of matrices can also be considered an option in this regard. This is done by showing that the y-bus matrix of power networks is amenable to minimum-band ordering. We propose several new algorithms for reordering symmetric matrices to minimal band form and demonstrate that these new algorithms are very effective in reducing the bandwidth not only of the y-bus matrices of power networks but also of matrices arising in other problems. In addition, the performance of the new algorithms is compared with that of several existing band reduction algorithms.

Index Terms - Linear systems of equations, sparse matrices, optimal ordering, minimal band form, finite element methods, spectral methods.

I. INTRODUCTION

The minimum degree ordering schemes proposed by Tinney in the late sixties are by and large the only optimal ordering methods in use in power system computations. However, there are many other ordering schemes that have not been explored in this context. In this paper we study an alternate approach to reordering power system matrices: reordering to minimize the bandwidth of the matrix. We propose several algorithms for reducing the bandwidth of sparse symmetric matrices and compare their performance with several popular approaches.

Reduced-bandwidth ordering is of interest for several reasons. There are many schemes for solving linear equations of the form Ax = b, where A is in banded form. These schemes are considerably simpler to implement than schemes for solving linear systems whose coefficient matrices have arbitrary sparsity.
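As a minimal illustration of such a scheme (a sketch of our own, not code from this paper; dense list-of-lists storage is used purely for clarity), the elimination and substitution loops below touch at most B rows and columns per pivot, which is where the O(NB^2) cost of band solvers comes from:

```python
def band_solve(A, b, B):
    """Solve A x = b by Gaussian elimination without pivoting,
    assuming A[i][j] == 0 whenever abs(i - j) > B (half-bandwidth B)."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # Forward elimination: each pivot touches at most B rows below it,
    # and at most B+1 columns in each of those rows.
    for k in range(n):
        for i in range(k + 1, min(k + B + 1, n)):
            m = A[i][k] / A[k][k]
            for j in range(k, min(k + B + 1, n)):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution: only B superdiagonals are nonzero.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, min(i + B + 1, n)))
        x[i] = s / A[i][i]
    return x
```

For a tridiagonal system (B = 1) the inner loops collapse to a constant number of operations per row, so the solve is linear in N.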
The execution time of "band solvers", for example, is O(NB^2) for large N and half-bandwidth B. It is therefore desirable to reduce B as much as possible: even when the bandwidth can be reduced only by a small margin, one achieves a significant reduction in computation. Bandwidth reduction is also of interest when solving, in parallel, large sets of linear equations whose coefficient matrices have arbitrary sparsity structure. A banded structure facilitates efficient assignment of threads to processors. It helps to reduce the amount of communication between processors, and in most cases this communication can be limited to neighbouring processors connected as a linear array. Besides this, a banded structure also reduces the complexity of the computation in each processor and, where the processors support vector instructions, lends itself to easy vectorisation.

Reordering large matrices to reduce bandwidth is not in wide use in power system applications because the network topology is not regularly structured, and it is believed that finding a reduced-bandwidth ordering is not easy for these systems. However, bandwidth reducing algorithms are widely used in civil and structural engineering applications in the context of finite element analysis, where the physical systems being analyzed are well structured. Most of the existing work on bandwidth reduction is motivated by the needs of these applications. During the past decade, a considerable amount of research effort has gone into evolving efficient methods for minimizing the bandwidth of sparse symmetric matrices. Some of the important algorithms are discussed briefly here. In the following discussion, we use the words row/column number and node interchangeably.

Prof. P.S. Nagendra Rao, Department of Electrical Engineering, Indian Institute of Science, Bangalore; e-mail: nagendra@ee.iisc.ernet.in. Pravin Kumar Gupta, former student, Indian Institute of Science, Bangalore.
INDIAN INSTITUTE OF TECHNOLOGY, KHARAGPUR, DECEMBER 27-29

The graph terminology used pertains to the graph representing the structure of the given symmetric matrix: each row/column is represented by a node, and each nonzero off-diagonal element is represented by an edge of this graph.

In the Cuthill-McKee (CM) algorithm [5,7], each node of low degree is considered as a potential starting node. From a starting node, a tree is built using a BFS, which partitions the nodes into several level sets. The nodes in each level set are numbered consecutively, numbering first those nodes whose parent node has a smaller number. Considering each potential starting node in turn, the particular ordering that gives the minimum bandwidth is chosen. The Gibbs-Poole-Stockmeyer (GPS) [8] algorithm differs from CM primarily in the selection of the starting node: GPS finds a pseudo-diameter of the graph and chooses one of its end points as the starting node. Thus, the number of BFS searches is reduced significantly. Puttonen [3] has proposed a renumbering method that systematically interchanges rows and columns of the connectivity matrix; in phase 1, it condenses the upper triangle and checks whether the bandwidth can be reduced by interchanging the highest-bandwidth row with a row below it. Other algorithms are due to Collins (RC) [2] (an element-dependent search), Rosen (RR) [12] (a row permutation scheme), and Akhras [10] (a row ponderation method). The Gibbs-King (GK) [4,11] algorithm differs from the GPS algorithm only in the last step (reordering the nodes as in CM); here, the nodes are reordered according to a different heuristic.

From a careful analysis of these algorithms, it is easy to see that they are all essentially heuristic search algorithms whose success depends on the correctness of the heuristic. Hence, no single algorithm performs uniformly well for all types of systems, and the performance of a given algorithm on a particular system can vary depending on the choice of the initial structure, data structure, etc. The major aim of this paper is to show that power system matrices can indeed be ordered to banded forms; this is established by considering several existing algorithms as well as by proposing new band reducing algorithms. The performance of the set of algorithms proposed here is consistent over a range of applications. The proposed algorithms are fundamentally different from the existing ones: they are not based on heuristic search but on numerical computation. We compare their performance with the existing algorithms on matrices obtained from power systems as well as from other engineering application areas.

II THE PROPOSED ALGORITHMS

The approach adopted here to develop the new bandwidth reduction algorithms is to start with a row numbering that is unique for a given matrix. One such numbering is possible for symmetric matrices: a unique value is assigned to each row/column of the matrix, and the rows/columns are ordered based on this value.
Such a unique value is obtained by computing the components of the eigenvector corresponding to the second eigenvalue of the connectivity matrix, which captures the structural information of the given matrix. The second eigenvalue is the lowest non-zero eigenvalue of the connectivity matrix. Such a numbering has been used very profitably in several applications related to matrix structure [1], such as finding bordered band forms and clusters in sparse matrices. The first of the new algorithms takes this ordering itself as the final banded solution; the other algorithms use it as an initial ordering and seek to reduce the bandwidth further through some additional simple search techniques.

A. Algorithm 1

Step 1: From the adjacency list, form the connectivity matrix C, where

C(i, j) = 1 if node i and node j are connected, for all i != j, i, j = 1, 2, ..., N

C(i, i) = - sum over j = 1, ..., N, j != i, of C(i, j)

Step 2: Find the eigenvalues of matrix C.
Step 3: Find the eigenvector corresponding to the second eigenvalue of C. Assign the N components of this eigenvector as values to the N rows/columns of the matrix.
Step 4: Renumber the rows/columns so that their values are in non-decreasing order.
Step 5: Stop.

B. Algorithm 2

Steps 1-3: Same as in Algorithm 1.
Step 4: Find the row/column that has the smallest eigenvector value. The level of this node is 1. Renumber all the nodes in this level by assigning the numbers 1, 2, 3, etc.
Step 5: Find the nodes in the next level, i.e., the nodes connected to the nodes at the present level that have not been accounted for so far. Order this set according to the increasing position of their parents. Ties are broken based on the eigenvector value (the node with the smaller eigenvector value is numbered first).
Step 6: Repeat Step 5 until all the nodes have been renumbered.
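Algorithm 1 can be sketched as follows (our own minimal illustration, not the authors' implementation: the graph is assumed to be an adjacency dict, and a shifted power iteration with the constant eigenvector projected out stands in for a full eigensolver; note that the eigenvectors of the connectivity matrix C above coincide with those of the graph Laplacian L = D - A, since C = -L):

```python
def fiedler_order(adj):
    """Renumber nodes by non-decreasing components of the eigenvector of
    the second-smallest eigenvalue of the Laplacian L = D - A."""
    nodes = sorted(adj)
    n = len(nodes)
    idx = {u: i for i, u in enumerate(nodes)}
    deg = [len(adj[u]) for u in nodes]
    c = 2 * max(deg) + 1.0          # shift so c*I - L is positive definite

    def mul(v):                     # w = (c*I - L) v
        w = [(c - deg[i]) * v[i] for i in range(n)]
        for u in nodes:
            for nb in adj[u]:
                w[idx[u]] += v[idx[nb]]
        return w

    v = [float(i + 1) for i in range(n)]
    for _ in range(500):            # power iteration on the complement of span(1)
        m = sum(v) / n
        v = [x - m for x in v]      # project out the constant eigenvector of L
        v = mul(v)
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    # Step 4: order nodes so the eigenvector components are non-decreasing.
    return sorted(nodes, key=lambda u: v[idx[u]])
```

On a path graph the resulting ordering runs monotonically along the path (in either direction, since an eigenvector's sign is arbitrary), which is exactly the bandwidth-1 ordering one would hope for.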
C. Algorithm 3

Same as Algorithm 2; the only difference is that instead of starting with the rows having the smallest eigenvector values, we start with those having the largest, and in case of a tie in Step 5, the node with the larger eigenvector value is numbered first.

D. Algorithm 4

This algorithm is a heuristic that tries to improve the bandwidth of any given ordering using some limited local searches. For the given matrix A, let left(i) and right(i) denote the column positions of the leftmost and rightmost nonzero entries of row i, respectively. Then

left(i) <= i <= right(i)

Hence the bandwidth can also be defined as

B = max over i of [ max( i - left(i), right(i) - i ) ]

Now let row i be the highest-bandwidth row, and let i - left(i) be the larger of the two values for row i. Then there is also another row j (j = left(i)) for which the value right(j) - j equals B. So, to reduce the bandwidth of matrix A, (a) either row i has to move upwards, or (b) row j (= left(i)) has to move downwards. This can be achieved by the following algorithm. If we move row i upwards by k rows, i.e., we interchange row i and row (i - k), the bandwidth due to the modified row (now row i - k) becomes

max( (i - k) - left(i), right(i) - (i - k) )

while previously it was i - left(i). So, to guarantee a reduction in the bandwidth of row i, the following two conditions must be satisfied:

1. i - left(i) > (i - k) - left(i), which is always true for any k > 0
2. i - left(i) > right(i) - (i - k)

or k < 2*i - (left(i) + right(i))

Similarly, the condition that guarantees a reduction in bandwidth while moving row j downwards by k rows is

k < left(j) + right(j) - 2*j

In finding a row l suitable for interchange with row i, where i is to be moved upwards, the bandwidth of the modified row l should not increase. So

i - left(l) < i - left(i), or left(l) > left(i)

Also, if row i is to be moved downwards by interchanging with some row l (l > i), then

right(l) < right(i)

After the interchange of row i with row l, there is no effect on left(m) and right(m) of a row m iff

1. neither node i nor node l is connected to node m, or
2. left(m) < min(i, l) and max(i, l) < right(m)

So, if either of the above two conditions is satisfied, there is no need to recalculate left(m) and right(m) for row m, m = 1, 2, ..., N.

Steps of Algorithm 4
Step 1: Find the set of bottleneck rows (rows whose bandwidth is equal to the bandwidth of the matrix).
Step 2: Try to interchange one of these rows (say i) with some suitable row l.
Step 3: If successful, recalculate right(m) and left(m) only for those rows m connected to either i or l.
Step 4: If there is success for even one row, go to Step 1.
Step 5: Stop.

III COMPARISON OF PERFORMANCE

We have applied our four algorithms to 24 test matrices obtained from different fields of engineering. The source of the data, the field to which it belongs, and the size of the network (matrix) for each example are given in Table 1. All the results are given in Table 2. For each example, Table 2 gives the minimum bandwidth achieved by eight different methods. The first column gives the results obtained by the most popular and widely used Reverse Cuthill-McKee (RCM) method. The next three columns give the results obtained by the first three algorithms of this paper.
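For concreteness, the left(i)/right(i) bookkeeping that drives Algorithm 4 can be sketched as follows (a minimal illustration of our own, with hypothetical helper names; rows are assumed to be given as sets of nonzero column indices, diagonal included):

```python
def spans(pattern):
    """pattern[i] is the set of column indices of nonzeros in row i;
    returns the per-row extents (left(i), right(i))."""
    return [(min(cols), max(cols)) for cols in pattern]

def bandwidth(pattern):
    """B = max over i of max(i - left(i), right(i) - i)."""
    return max(max(i - l, r - i) for i, (l, r) in enumerate(spans(pattern)))

def bottleneck_rows(pattern):
    """Step 1 of Algorithm 4: rows whose own bandwidth equals B."""
    B = bandwidth(pattern)
    return [i for i, (l, r) in enumerate(spans(pattern))
            if max(i - l, r - i) == B]
```

A tridiagonal pattern has bandwidth 1, while a pattern with a single far off-diagonal entry has its bandwidth set entirely by the two rows carrying that entry; those are the rows Algorithm 4 would try to interchange.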
As our fourth algorithm is basically a row interchange algorithm, we have applied it to the results of our previous three algorithms and also to the results obtained by the Reverse Cuthill-McKee (RCM) algorithm (MATLAB implementation). The next four sets of results correspond to this experiment. In Table 3, we also give the best results that others have been able to obtain for some of these examples. These results show that applying the fourth algorithm to the results of the first three algorithms yields a smaller bandwidth than that achievable by the other existing algorithms.

Table 1: Test Problems Description. [The 24 test matrices, with node counts ranging from 19 to 500, are drawn from power systems, graph problems, a water network, and random networks (sources [1], [3], [5], [8], [9]); the column alignment of the original table is not recoverable from this transcription.]

Five of the examples considered in Table 2 are taken from power systems. The 30-, 57-, and 118-node examples represent the topology of the IEEE standard test systems. A perusal of Table 2 shows that for all these examples the result indicated in column G is consistently better than that of all the other methods. This approach corresponds to using Algorithm 4 in combination with Algorithm 1 of this paper. It is interesting to note that all these systems can be reordered so that the bandwidth is quite small. This is a significant outcome, as it implies that this method can also be considered as a means of exploiting the sparsity of power system networks. It is true that the minimum-degree-based optimal ordering methods have matured, but the enormous simplicity of the banded form for solving linear equations should provide further incentive for investigations in this direction.

Table 2: Results of the bandwidth reduction algorithms. [The numeric entries of the original table are not recoverable from this transcription.]
NC - no change; * - smallest obtained
A - No. of nodes
B - Results for the RCM method
C - Results of Algorithm 1
D - Results of Algorithm 2
E - Results of Algorithm 3
F - Algorithm 4 applied to the results of RCM
G - Algorithm 4 applied to the results of Algorithm 1
H - Algorithm 4 applied to the results of Algorithm 2
I - Algorithm 4 applied to the results of Algorithm 3

IV CONCLUSIONS

We have proposed a new approach for reducing the bandwidth of sparse symmetric matrices. While all previous algorithms are heuristic methods, we have used spectral information to reduce the bandwidth, which is a numerical approach. The use of spectral information for reducing the bandwidth appears to capture the global network structure and hence results in extremely small bandwidths. The application of the new bandwidth-reducing algorithms to power system matrices shows that these matrices can also be reordered to forms with a reasonably small bandwidth. This is a very significant outcome, as it implies that we can consider alternate avenues for solving the large systems of linear equations encountered in many power system computations. This aspect needs further study.

Table 3: Results of existing bandwidth reduction algorithms. [The numeric entries of the original table are not recoverable from this transcription.]
* - smallest obtained
A - No. of nodes
B - Results of Puttonen [3]
C - Results of GK [4]
D - Results of RR [12]
E - Results of A&U [2]
F - Results of HG [2]
G - Results of RC [2]

V REFERENCES

[1] Mohan P., "Graph Partitioning Algorithms with Applications in Power Systems", M.Sc. Thesis, Dept.
of Electrical Engg., I.I.Sc., Bangalore, 1992.
[2] Collins R., "Bandwidth Reduction by Automatic Renumbering", Int. J. Num. Meth. Engg., 6 (1973), pp.
[3] Puttonen J., "Simple and Effective Bandwidth Reduction Algorithm", Int. J. Num. Meth. Engg., 19 (1983), pp.
[4] Lewis J.G., "Implementation of the Gibbs-Poole-Stockmeyer and Gibbs-King Algorithms", ACM Trans. Math. Soft., 8, 2 (June 1982), pp.
[5] Tewarson R.P., "Sparse Matrices", Academic Press, New York, 1973.
[6] Wallach Y., "Calculations and Programs for Power System Networks", Prentice Hall, Englewood Cliffs, 1986.
[7] Cuthill E. and McKee J., "Reducing the Bandwidth of Sparse Symmetric Matrices", Proc. ACM Nat. Conf., San Francisco, U.S.A., 1969, pp.
[8] Gibbs N.E., Poole W.G. and Stockmeyer P.K., "An Algorithm for Reducing the Bandwidth and Profile of a Sparse Matrix", SIAM J. Numer. Anal., 13, 2 (April 1976), pp.
[9] Everstine G.C., "A Comparison of Three Resequencing Algorithms for the Reduction of Matrix Profile and Wavefront", Int. J. Num. Meth. Engg., 14 (1979), pp.
[10] Akhras G. and Dhatt G., "An Automatic Node Relabelling Scheme for Minimizing a Matrix or Network Bandwidth", Int. J. Num. Meth. Engg., 10 (1976), pp.
[11] Gibbs N.E., "A Hybrid Profile Reduction Algorithm", ACM Trans. Math. Soft., 2, 4 (Dec 1976).

[12] Rosen R., "Matrix Bandwidth Minimisation", Proc. 23rd Nat. Conf., Association for Computing Machinery, Brandon Systems Press, Princeton, New Jersey, 1968.


More information

CSCE 689 : Special Topics in Sparse Matrix Algorithms Department of Computer Science and Engineering Spring 2015 syllabus

CSCE 689 : Special Topics in Sparse Matrix Algorithms Department of Computer Science and Engineering Spring 2015 syllabus CSCE 689 : Special Topics in Sparse Matrix Algorithms Department of Computer Science and Engineering Spring 2015 syllabus Tim Davis last modified September 23, 2014 1 Catalog Description CSCE 689. Special

More information

CS 140: Sparse Matrix-Vector Multiplication and Graph Partitioning

CS 140: Sparse Matrix-Vector Multiplication and Graph Partitioning CS 140: Sparse Matrix-Vector Multiplication and Graph Partitioning Parallel sparse matrix-vector product Lay out matrix and vectors by rows y(i) = sum(a(i,j)*x(j)) Only compute terms with A(i,j) 0 P0 P1

More information

A Image Comparative Study using DCT, Fast Fourier, Wavelet Transforms and Huffman Algorithm

A Image Comparative Study using DCT, Fast Fourier, Wavelet Transforms and Huffman Algorithm International Journal of Engineering Research and General Science Volume 3, Issue 4, July-August, 15 ISSN 91-2730 A Image Comparative Study using DCT, Fast Fourier, Wavelet Transforms and Huffman Algorithm

More information

ECEN 615 Methods of Electric Power Systems Analysis Lecture 11: Sparse Systems

ECEN 615 Methods of Electric Power Systems Analysis Lecture 11: Sparse Systems ECEN 615 Methods of Electric Power Systems Analysis Lecture 11: Sparse Systems Prof. Tom Overbye Dept. of Electrical and Computer Engineering Texas A&M University overbye@tamu.edu Announcements Homework

More information

Sparse Matrices. Mathematics In Science And Engineering Volume 99 READ ONLINE

Sparse Matrices. Mathematics In Science And Engineering Volume 99 READ ONLINE Sparse Matrices. Mathematics In Science And Engineering Volume 99 READ ONLINE If you are looking for a ebook Sparse Matrices. Mathematics in Science and Engineering Volume 99 in pdf form, in that case

More information

ON SOME METHODS OF CONSTRUCTION OF BLOCK DESIGNS

ON SOME METHODS OF CONSTRUCTION OF BLOCK DESIGNS ON SOME METHODS OF CONSTRUCTION OF BLOCK DESIGNS NURNABI MEHERUL ALAM M.Sc. (Agricultural Statistics), Roll No. I.A.S.R.I, Library Avenue, New Delhi- Chairperson: Dr. P.K. Batra Abstract: Block designs

More information

Ordering Algorithms for. Irreducible Sparse Linear Systems. R. Fletcher* and J.A.J.Hall** Abstract

Ordering Algorithms for. Irreducible Sparse Linear Systems. R. Fletcher* and J.A.J.Hall** Abstract Ordering Algorithms for Irreducible Sparse Linear Systems by R. Fletcher* and J.A.J.Hall** Abstract Ordering algorithms aim to pre-order a matrix in order to achieve a favourable structure for factorization.

More information

Lecture 10 Graph algorithms: testing graph properties

Lecture 10 Graph algorithms: testing graph properties Lecture 10 Graph algorithms: testing graph properties COMP 523: Advanced Algorithmic Techniques Lecturer: Dariusz Kowalski Lecture 10: Testing Graph Properties 1 Overview Previous lectures: Representation

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 21 Outline 1 Course

More information

Spectral Clustering and Community Detection in Labeled Graphs

Spectral Clustering and Community Detection in Labeled Graphs Spectral Clustering and Community Detection in Labeled Graphs Brandon Fain, Stavros Sintos, Nisarg Raval Machine Learning (CompSci 571D / STA 561D) December 7, 2015 {btfain, nisarg, ssintos} at cs.duke.edu

More information

Julian Hall School of Mathematics University of Edinburgh. June 15th Parallel matrix inversion for the revised simplex method - a study

Julian Hall School of Mathematics University of Edinburgh. June 15th Parallel matrix inversion for the revised simplex method - a study Parallel matrix inversion for the revised simplex method - A study Julian Hall School of Mathematics University of Edinburgh June 5th 006 Parallel matrix inversion for the revised simplex method - a study

More information

Genetic Hyper-Heuristics for Graph Layout Problems

Genetic Hyper-Heuristics for Graph Layout Problems Genetic Hyper-Heuristics for Graph Layout Problems Behrooz Koohestani A Thesis Submitted for the Degree of Doctor of Philosophy Department of Computer Science University of Essex January 15, 2013 Acknowledgments

More information

(Sparse) Linear Solvers

(Sparse) Linear Solvers (Sparse) Linear Solvers Ax = B Why? Many geometry processing applications boil down to: solve one or more linear systems Parameterization Editing Reconstruction Fairing Morphing 2 Don t you just invert

More information

A Connection between Network Coding and. Convolutional Codes

A Connection between Network Coding and. Convolutional Codes A Connection between Network Coding and 1 Convolutional Codes Christina Fragouli, Emina Soljanin christina.fragouli@epfl.ch, emina@lucent.com Abstract The min-cut, max-flow theorem states that a source

More information

Dense Matrix Algorithms

Dense Matrix Algorithms Dense Matrix Algorithms Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar To accompany the text Introduction to Parallel Computing, Addison Wesley, 2003. Topic Overview Matrix-Vector Multiplication

More information

(Sparse) Linear Solvers

(Sparse) Linear Solvers (Sparse) Linear Solvers Ax = B Why? Many geometry processing applications boil down to: solve one or more linear systems Parameterization Editing Reconstruction Fairing Morphing 1 Don t you just invert

More information

A substructure based parallel dynamic solution of large systems on homogeneous PC clusters

A substructure based parallel dynamic solution of large systems on homogeneous PC clusters CHALLENGE JOURNAL OF STRUCTURAL MECHANICS 1 (4) (2015) 156 160 A substructure based parallel dynamic solution of large systems on homogeneous PC clusters Semih Özmen, Tunç Bahçecioğlu, Özgür Kurç * Department

More information

QoS-Aware Hierarchical Multicast Routing on Next Generation Internetworks

QoS-Aware Hierarchical Multicast Routing on Next Generation Internetworks QoS-Aware Hierarchical Multicast Routing on Next Generation Internetworks Satyabrata Pradhan, Yi Li, and Muthucumaru Maheswaran Advanced Networking Research Laboratory Department of Computer Science University

More information

New Strategies for Filtering the Number Field Sieve Matrix

New Strategies for Filtering the Number Field Sieve Matrix New Strategies for Filtering the Number Field Sieve Matrix Shailesh Patil Department of CSA Indian Institute of Science Bangalore 560 012 India Email: shailesh.patil@gmail.com Gagan Garg Department of

More information

HPC Algorithms and Applications

HPC Algorithms and Applications HPC Algorithms and Applications Dwarf #5 Structured Grids Michael Bader Winter 2012/2013 Dwarf #5 Structured Grids, Winter 2012/2013 1 Dwarf #5 Structured Grids 1. dense linear algebra 2. sparse linear

More information

A linear algebra processor using Monte Carlo methods

A linear algebra processor using Monte Carlo methods A linear algebra processor using Monte Carlo methods Conference or Workshop Item Accepted Version Plaks, T. P., Megson, G. M., Cadenas Medina, J. O. and Alexandrov, V. N. (2003) A linear algebra processor

More information

DEGENERACY AND THE FUNDAMENTAL THEOREM

DEGENERACY AND THE FUNDAMENTAL THEOREM DEGENERACY AND THE FUNDAMENTAL THEOREM The Standard Simplex Method in Matrix Notation: we start with the standard form of the linear program in matrix notation: (SLP) m n we assume (SLP) is feasible, and

More information

Lecture 15: More Iterative Ideas

Lecture 15: More Iterative Ideas Lecture 15: More Iterative Ideas David Bindel 15 Mar 2010 Logistics HW 2 due! Some notes on HW 2. Where we are / where we re going More iterative ideas. Intro to HW 3. More HW 2 notes See solution code!

More information

An array is a collection of data that holds fixed number of values of same type. It is also known as a set. An array is a data type.

An array is a collection of data that holds fixed number of values of same type. It is also known as a set. An array is a data type. Data Structures Introduction An array is a collection of data that holds fixed number of values of same type. It is also known as a set. An array is a data type. Representation of a large number of homogeneous

More information

Sparse Matrix Reordering Algorithms for Cluster Identification

Sparse Matrix Reordering Algorithms for Cluster Identification Sparse Matrix Reordering Algorithms for Cluster Identification Chris Mueller For I532, Machine Learning in Bioinformatics December 17, 2004 Introduction The dot plot (Figure 1) is a technique for displaying

More information

Finite Element Analysis Prof. Dr. B. N. Rao Department of Civil Engineering Indian Institute of Technology, Madras. Lecture - 24

Finite Element Analysis Prof. Dr. B. N. Rao Department of Civil Engineering Indian Institute of Technology, Madras. Lecture - 24 Finite Element Analysis Prof. Dr. B. N. Rao Department of Civil Engineering Indian Institute of Technology, Madras Lecture - 24 So in today s class, we will look at quadrilateral elements; and we will

More information

Adaptive Quantization for Video Compression in Frequency Domain

Adaptive Quantization for Video Compression in Frequency Domain Adaptive Quantization for Video Compression in Frequency Domain *Aree A. Mohammed and **Alan A. Abdulla * Computer Science Department ** Mathematic Department University of Sulaimani P.O.Box: 334 Sulaimani

More information

Randomized Algorithms

Randomized Algorithms Randomized Algorithms Last time Network topologies Intro to MPI Matrix-matrix multiplication Today MPI I/O Randomized Algorithms Parallel k-select Graph coloring Assignment 2 Parallel I/O Goal of Parallel

More information

Triangulation and Convex Hull. 8th November 2018

Triangulation and Convex Hull. 8th November 2018 Triangulation and Convex Hull 8th November 2018 Agenda 1. Triangulation. No book, the slides are the curriculum 2. Finding the convex hull. Textbook, 8.6.2 2 Triangulation and terrain models Here we have

More information

Distributed Schur Complement Solvers for Real and Complex Block-Structured CFD Problems

Distributed Schur Complement Solvers for Real and Complex Block-Structured CFD Problems Distributed Schur Complement Solvers for Real and Complex Block-Structured CFD Problems Dr.-Ing. Achim Basermann, Dr. Hans-Peter Kersken German Aerospace Center (DLR) Simulation- and Software Technology

More information

Big Data Management and NoSQL Databases

Big Data Management and NoSQL Databases NDBI040 Big Data Management and NoSQL Databases Lecture 10. Graph databases Doc. RNDr. Irena Holubova, Ph.D. holubova@ksi.mff.cuni.cz http://www.ksi.mff.cuni.cz/~holubova/ndbi040/ Graph Databases Basic

More information

Fingerprint Image Compression

Fingerprint Image Compression Fingerprint Image Compression Ms.Mansi Kambli 1*,Ms.Shalini Bhatia 2 * Student 1*, Professor 2 * Thadomal Shahani Engineering College * 1,2 Abstract Modified Set Partitioning in Hierarchical Tree with

More information

Methods for Enhancing the Speed of Numerical Calculations for the Prediction of the Mechanical Behavior of Parts Made Using Additive Manufacturing

Methods for Enhancing the Speed of Numerical Calculations for the Prediction of the Mechanical Behavior of Parts Made Using Additive Manufacturing Methods for Enhancing the Speed of Numerical Calculations for the Prediction of the Mechanical Behavior of Parts Made Using Additive Manufacturing Mohammad Nikoukar, Nachiket Patil, Deepankar Pal, and

More information

3. Replace any row by the sum of that row and a constant multiple of any other row.

3. Replace any row by the sum of that row and a constant multiple of any other row. Math Section. Section.: Solving Systems of Linear Equations Using Matrices As you may recall from College Algebra or Section., you can solve a system of linear equations in two variables easily by applying

More information

PAPER Design of Optimal Array Processors for Two-Step Division-Free Gaussian Elimination

PAPER Design of Optimal Array Processors for Two-Step Division-Free Gaussian Elimination 1503 PAPER Design of Optimal Array Processors for Two-Step Division-Free Gaussian Elimination Shietung PENG and Stanislav G. SEDUKHIN Nonmembers SUMMARY The design of array processors for solving linear

More information

Linear Equations in Linear Algebra

Linear Equations in Linear Algebra 1 Linear Equations in Linear Algebra 1.2 Row Reduction and Echelon Forms ECHELON FORM A rectangular matrix is in echelon form (or row echelon form) if it has the following three properties: 1. All nonzero

More information

Efficient Image Compression of Medical Images Using the Wavelet Transform and Fuzzy c-means Clustering on Regions of Interest.

Efficient Image Compression of Medical Images Using the Wavelet Transform and Fuzzy c-means Clustering on Regions of Interest. Efficient Image Compression of Medical Images Using the Wavelet Transform and Fuzzy c-means Clustering on Regions of Interest. D.A. Karras, S.A. Karkanis and D. E. Maroulis University of Piraeus, Dept.

More information

Generic Topology Mapping Strategies for Large-scale Parallel Architectures

Generic Topology Mapping Strategies for Large-scale Parallel Architectures Generic Topology Mapping Strategies for Large-scale Parallel Architectures Torsten Hoefler and Marc Snir Scientific talk at ICS 11, Tucson, AZ, USA, June 1 st 2011, Hierarchical Sparse Networks are Ubiquitous

More information

Example Lecture 12: The Stiffness Method Prismatic Beams. Consider again the two span beam previously discussed and determine

Example Lecture 12: The Stiffness Method Prismatic Beams. Consider again the two span beam previously discussed and determine Example 1.1 Consider again the two span beam previously discussed and determine The shearing force M1 at end B of member B. The bending moment M at end B of member B. The shearing force M3 at end B of

More information

Parallelization of Reordering Algorithms for Bandwidth and Wavefront Reduction

Parallelization of Reordering Algorithms for Bandwidth and Wavefront Reduction Parallelization of Reordering Algorithms for Bandwidth and Wavefront Reduction Konstantinos I. Karantasis, Andrew Lenharth, Donald Nguyen, María J. Garzarán, Keshav Pingali, Department of Computer Science,

More information

Error Detecting and Correcting Code Using Orthogonal Latin Square Using Verilog HDL

Error Detecting and Correcting Code Using Orthogonal Latin Square Using Verilog HDL Error Detecting and Correcting Code Using Orthogonal Latin Square Using Verilog HDL Ch.Srujana M.Tech [EDT] srujanaxc@gmail.com SR Engineering College, Warangal. M.Sampath Reddy Assoc. Professor, Department

More information

Network Routing Protocol using Genetic Algorithms

Network Routing Protocol using Genetic Algorithms International Journal of Electrical & Computer Sciences IJECS-IJENS Vol:0 No:02 40 Network Routing Protocol using Genetic Algorithms Gihan Nagib and Wahied G. Ali Abstract This paper aims to develop a

More information

Artificial Intelligence

Artificial Intelligence University of Cagliari M.Sc. degree in Electronic Engineering Artificial Intelligence Academic Year: 07/08 Instructor: Giorgio Fumera Exercises on search algorithms. A -litre and a -litre water jugs are

More information

Optimized Implementation of Logic Functions

Optimized Implementation of Logic Functions June 25, 22 9:7 vra235_ch4 Sheet number Page number 49 black chapter 4 Optimized Implementation of Logic Functions 4. Nc3xe4, Nb8 d7 49 June 25, 22 9:7 vra235_ch4 Sheet number 2 Page number 5 black 5 CHAPTER

More information

AN EFFICIENT DESIGN OF VLSI ARCHITECTURE FOR FAULT DETECTION USING ORTHOGONAL LATIN SQUARES (OLS) CODES

AN EFFICIENT DESIGN OF VLSI ARCHITECTURE FOR FAULT DETECTION USING ORTHOGONAL LATIN SQUARES (OLS) CODES AN EFFICIENT DESIGN OF VLSI ARCHITECTURE FOR FAULT DETECTION USING ORTHOGONAL LATIN SQUARES (OLS) CODES S. SRINIVAS KUMAR *, R.BASAVARAJU ** * PG Scholar, Electronics and Communication Engineering, CRIT

More information

Systems of Linear Equations and their Graphical Solution

Systems of Linear Equations and their Graphical Solution Proceedings of the World Congress on Engineering and Computer Science Vol I WCECS, - October,, San Francisco, USA Systems of Linear Equations and their Graphical Solution ISBN: 98-988-95-- ISSN: 8-958

More information

Techniques for Optimizing FEM/MoM Codes

Techniques for Optimizing FEM/MoM Codes Techniques for Optimizing FEM/MoM Codes Y. Ji, T. H. Hubing, and H. Wang Electromagnetic Compatibility Laboratory Department of Electrical & Computer Engineering University of Missouri-Rolla Rolla, MO

More information

2. Use elementary row operations to rewrite the augmented matrix in a simpler form (i.e., one whose solutions are easy to find).

2. Use elementary row operations to rewrite the augmented matrix in a simpler form (i.e., one whose solutions are easy to find). Section. Gaussian Elimination Our main focus in this section is on a detailed discussion of a method for solving systems of equations. In the last section, we saw that the general procedure for solving

More information

Inverted Index for Fast Nearest Neighbour

Inverted Index for Fast Nearest Neighbour Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology ISSN 2320 088X IMPACT FACTOR: 5.258 IJCSMC,

More information

Contents. F10: Parallel Sparse Matrix Computations. Parallel algorithms for sparse systems Ax = b. Discretized domain a metal sheet

Contents. F10: Parallel Sparse Matrix Computations. Parallel algorithms for sparse systems Ax = b. Discretized domain a metal sheet Contents 2 F10: Parallel Sparse Matrix Computations Figures mainly from Kumar et. al. Introduction to Parallel Computing, 1st ed Chap. 11 Bo Kågström et al (RG, EE, MR) 2011-05-10 Sparse matrices and storage

More information

HYPERDRIVE IMPLEMENTATION AND ANALYSIS OF A PARALLEL, CONJUGATE GRADIENT LINEAR SOLVER PROF. BRYANT PROF. KAYVON 15618: PARALLEL COMPUTER ARCHITECTURE

HYPERDRIVE IMPLEMENTATION AND ANALYSIS OF A PARALLEL, CONJUGATE GRADIENT LINEAR SOLVER PROF. BRYANT PROF. KAYVON 15618: PARALLEL COMPUTER ARCHITECTURE HYPERDRIVE IMPLEMENTATION AND ANALYSIS OF A PARALLEL, CONJUGATE GRADIENT LINEAR SOLVER AVISHA DHISLE PRERIT RODNEY ADHISLE PRODNEY 15618: PARALLEL COMPUTER ARCHITECTURE PROF. BRYANT PROF. KAYVON LET S

More information

Application of Two-dimensional Periodic Cellular Automata in Image Processing

Application of Two-dimensional Periodic Cellular Automata in Image Processing International Journal of Computer, Mathematical Sciences and Applications Serials Publications Vol. 5, No. 1-2, January-June 2011, pp. 49 55 ISSN: 0973-6786 Application of Two-dimensional Periodic Cellular

More information

IN OUR LAST HOMEWORK, WE SOLVED LARGE

IN OUR LAST HOMEWORK, WE SOLVED LARGE Y OUR HOMEWORK H A SSIGNMENT Editor: Dianne P. O Leary, oleary@cs.umd.edu FAST SOLVERS AND SYLVESTER EQUATIONS: BOTH SIDES NOW By Dianne P. O Leary IN OUR LAST HOMEWORK, WE SOLVED LARGE SPARSE SYSTEMS

More information