Solvers for Linear Systems in Graph Laplacians


CS7540 Spectral Algorithms, Spring 2017: Lectures #22-#24
Solvers for Linear Systems in Graph Laplacians
Presenter: Richard Peng, March 2017

These notes are an attempt at summarizing the rather large literature on solving linear systems in graph Laplacians. They are designed to give a high-level overview of the various solver algorithms, while giving pointers to papers for the key proofs.

Richard's mile-away view of solvers is that there are really two types:

  Solver                      Flow/Cuts Based             Multiscale / ILU Based
  Base case                   Trees                       Expanders
  Intermediate step           Remove off-tree edges       Matrix operations
  Intermediate errors         Large                       Small
  Recursive call structure    W-cycle                     V-cycle
  Solution transfer           Tree data structures        Operator composition

  Figure 1: Comparison of High-Level Approaches to Solvers

Each of these ideas also has a variety of different instantiations. A quick list of them, in roughly chronological order, is below:

1. Graph based:

   (a) Congestion-dilation embeddings: the now (in)famous repeatedly rejected manuscript by Vaidya [Vai91], with the incorporation of low-stretch spanning trees by Boman and Hendrickson [BH01].

   (b) Routing based: Steiner tree preconditioners by Gremban [Gre96], and oblivious routing schemes by Maggs et al. [MMP+05].

   (c) The notion of ultra-sparsifiers and the incorporation of Loewner orderings: the $O(m^{1.31})$-time algorithm by Spielman and Teng [ST03], and subsequently the first nearly-linear time solver [ST14].

   (d) Completely numerical construction of ultra-sparsifiers by Koutis, Miller, and Peng [KMP10, KMP11].

   (e) Routines that act on one off-tree edge at a time, with connections to stochastic gradient descent and data structures, by Kelner, Orecchia, Sidford, and Zhu [KOSZ13], with extensions to accelerated methods by Lee and Sidford [LS13].

   (f) Analysis of expected preconditioners, combining analyses of numerical routines with the randomized construction of sparsifiers, by Cohen, Kyng, Miller, Pachocki, Peng, Rao, and Xu [CKM+14].

2. Motivated by multiscale methods and incomplete Cholesky factorization:

   (a) Repeated squaring by Peng and Spielman [PS14], with extensions to directed Laplacians by Cohen, Kelner, Peebles, Peng, Rao, and Vladu [CKP+16].

   (b) Sparsified block Cholesky factorization by Kyng, Lee, Peng, Sachdeva, and Spielman [KLP+16].

   (c) Incomplete Cholesky factorization, with the (randomized) error of every vertex elimination analyzed using matrix martingales, by Kyng and Sachdeva [KS16].

We will start with the second set of algorithms, as they have strictly fewer pieces, but are a bit harder to motivate.

1 Some Terminology and Tools

A key idea that underlies all these algorithms is the view of error as an integral part of computation, and the tracking of entire algorithms as linear operators that approximate some ideal algorithm. Matrix approximations need to be defined via the Loewner ordering: $A \preceq B$ if $B - A$ is positive semi-definite. We will say that $A$ $\kappa$-approximates $B$ if there exist $\lambda_{\min}$ and $\lambda_{\max}$ with $\lambda_{\max} \leq \kappa \lambda_{\min}$ such that

$$\lambda_{\min} A \preceq B \preceq \lambda_{\max} A.$$

The key fact from numerical analysis that we will use repeatedly is:

Fact 1.1. If $A$ $\kappa$-approximates $B$, then a linear system in $A$ can be solved by $O(\sqrt{\kappa})$ iterations of:

1. matrix-vector multiplication by $A$, and
2. solving a linear system in $B$.

This gives a very powerful way of removing errors. In particular, it says that to solve systems in $A$ to high accuracy, all we need is access to a linear operator $Z$ that 2-approximates $A$.

This idea that constant-factor errors suffice also motivates us to define small errors. We use $A \approx_{\epsilon} B$ to denote that $A$ $\exp(\epsilon)$-approximates $B$. Here we exponentiate $\epsilon$ because errors then simply add under composition. This notion of approximation is also useful because of sparsification by effective resistances, which can be abstracted as:

Fact 1.2. In any graph, sampling edges with probabilities exceeding $O(\log n)$ times weights times effective resistances produces an $O(1)$-approximation.
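To make Fact 1.1 concrete, here is a minimal NumPy sketch (not from the notes) of the preconditioned iteration it describes. The function precond_richardson and the toy matrices are made up for illustration; plain Richardson iteration needs on the order of $\kappa$ outer steps, and the $O(\sqrt{\kappa})$ count in Fact 1.1 comes from wrapping the same two primitives, a multiplication by $A$ and a solve in $B$, in Chebyshev or conjugate gradient acceleration.

```python
import numpy as np

def precond_richardson(A, solve_B, b, step=2.0 / 3.0, iters=200):
    # Each pass is one "outer" iteration of Fact 1.1: a matrix-vector
    # product with A followed by a solve with the preconditioner B.
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + step * solve_B(b - A @ x)
    return x

# Toy demo: B is a deliberately perturbed copy of A, so it only
# approximates A up to a constant factor, yet solving in B still
# drives the residual of the system in A down geometrically.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)          # a made-up SPD test matrix
B = A + 0.3 * np.diag(np.diag(A))      # crude constant-factor approximation of A
b = rng.standard_normal(50)
x = precond_richardson(A, lambda r: np.linalg.solve(B, r), b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

The recursive solvers below exploit exactly this structure: each "solve in B" is itself only carried out approximately, and Fact 1.1 is what lets those constant-factor errors be cleaned up at the next level.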

2 Sparsification Based Solvers

Sparsification based solvers use a sequence of reductions to make the matrix well-conditioned, while accumulating small errors at each of these steps. They can be further divided into:

1. Squaring based, which reduce to a well-conditioned matrix on the same set of original variables.

2. Schur complement based, which reduce the number of variables until none remain.

2.1 Repeated Squaring

By adding self loops and rescaling by degree, we can ensure that the graph Laplacian is represented as

$$I - X.$$

Furthermore, by some perturbation, let us assume that $I - X$ is full rank, so we can compute its inverse. This assumption can be removed by carefully juggling pseudoinverses, but that just complicates things.

The key observation here is

$$(I - X)^{-1} = \left( I - X^2 \right)^{-1} (I + X),$$

that is, solving a system in $I - X$ reduces to solving one in $I - X^2$ after a single matrix-vector multiplication involving $I + X$. Furthermore, as $X$ can be interpreted as a random walk here, $I - X^2$ can also be approximated by a sparse graph.

However, spectral approximations do not compose under one-sided multiplications: there are matrices $A$, $B$ and $C$ such that $A \approx_{0.1} B$ but $AC$ and $BC$ do not approximate each other. In fact, the only thing that we can use is

$$C^{T} A C \approx_{0.1} C^{T} B C,$$

that is, composing symmetrically by a matrix $C$ on both sides. This restriction may sound a bit arbitrary, but it ends up being the crux of the problem here. Once we accept it, and manipulate the algebra with it as the key criterion, we can produce the following identity instead:

$$(I - X)^{-1} = \frac{1}{2} \left( I + (I + X) \left( I - X^2 \right)^{-1} (I + X) \right).$$

Once again, this can be derived and verified using scalars, as $I$ and $X$ commute with each other.
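As a sanity check on this algebra (the code is not part of the notes), the sketch below applies the symmetric identity recursively with dense matrices: after enough squarings $X^{2^d}$ is negligible, so the innermost inverse can be replaced by the identity matrix. The actual solver of [PS14] additionally replaces each $I - X^{2^i}$ by a sparse approximation; the function name squaring_solve and the test matrix here are made up.

```python
import numpy as np

def squaring_solve(X, b, depth=8):
    # Approximate (I - X)^{-1} b via
    #   (I - X)^{-1} = 1/2 ( I + (I + X)(I - X^2)^{-1}(I + X) ),
    # recursing on I - X^2.  Since ||X|| < 1 here, X^(2^depth) is tiny,
    # so the deepest solve is approximated by doing nothing.
    n = len(b)
    if depth == 0:
        return b
    y = (np.eye(n) + X) @ b
    z = squaring_solve(X @ X, y, depth - 1)
    return 0.5 * (b + (np.eye(n) + X) @ z)

# sanity check on a small symmetric X with spectral radius < 1
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6)); M = (M + M.T) / 2
X = 0.9 * M / np.abs(np.linalg.eigvalsh(M)).max()
b = rng.standard_normal(6)
print(np.linalg.norm(squaring_solve(X, b) - np.linalg.solve(np.eye(6) - X, b)))
```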

2.2 Vertex Elimination

A close cousin of sparsified squaring is the family of block Cholesky elimination based methods. Such methods also have close connections with multigrid and multilevel methods. The main idea is to partition the vertices into two parts: $C$, the coarse grid that remains, and $F$, the fine grid that gets removed. We will use $[\cdot,\cdot]$ to denote blocks of the Laplacian corresponding to these vertex subsets; specifically, a Laplacian gets partitioned into

$$L = \begin{bmatrix} L_{[F,F]} & L_{[F,C]} \\ L_{[C,F]} & L_{[C,C]} \end{bmatrix},$$

and block Cholesky factorization is more or less Gaussian elimination done while pretending each of the blocks is a scalar. It produces the factorization

$$L = \begin{bmatrix} I & 0 \\ L_{[C,F]} L_{[F,F]}^{-1} & I \end{bmatrix} \begin{bmatrix} L_{[F,F]} & 0 \\ 0 & L_{[C,C]} - L_{[C,F]} L_{[F,F]}^{-1} L_{[F,C]} \end{bmatrix} \begin{bmatrix} I & L_{[F,F]}^{-1} L_{[F,C]} \\ 0 & I \end{bmatrix}.$$

Here the only important thing to notice is that the two outer terms are transposes of each other, so the whole factorization is symmetric. Furthermore, by inductively removing vertices, we can show that the Schur complement,

$$\mathrm{Sc}(L, C) \stackrel{\mathrm{def}}{=} L_{[C,C]} - L_{[C,F]} L_{[F,F]}^{-1} L_{[F,C]},$$

is another graph Laplacian. This means it can be sparsified, and we can recurse on it.

So the only obstacle is to find a subset $F$ such that $L_{[F,F]}$ is easy to invert. To this end, note that if $F$ is an independent set, then $L_{[F,F]}$ is just a diagonal matrix containing all the degrees (of vertices to $C$). Therefore what we are looking for is an almost independent set. This we can find via the following lemma:

Lemma 2.1. In any undirected weighted graph there is a set $F$ of at least $0.1n$ vertices such that each vertex in $F$ has at least $1/10$ of its weighted degree going to $C = V \setminus F$.

The construction is a simple (but somewhat subtle¹) use of randomization. Start by picking half of the vertices at random: each picked vertex has expected weighted degree to $C$ of at least half of its total weighted degree, so by a reverse Markov argument, with constant probability at least a $1/4$ fraction of its weight goes to $C$. Then note that removing vertices from $F$ can only improve the ones that remain, so after discarding the vertices that fail, a constant fraction of the $n$ vertices survive in expectation.

Such a set is useful because $L_{[F,F]}$ can now be written as a Laplacian plus a diagonal,

$$L_{[F,F]} = X + (D - A),$$

where $D - A$ is the Laplacian of the graph induced on $F$ and the diagonal matrix $X$ holds the excess degrees going to $C$. As $X$ is the excess degree, we have $X \succeq 0.1 D$, which coupled with $D - A \preceq 2D$ gives

$$X \preceq L_{[F,F]} \preceq 21 X,$$

or that $L_{[F,F]}$ is $O(1)$-approximated by the diagonal matrix $X$. This, plus iterative methods, then means it is easy to solve systems in $L_{[F,F]}$.

¹ You would be amazed at the messes that Richard arrived at before being shown this construction...
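To see the elimination step concretely, here is a small NumPy sketch (made-up data, not the actual routine from [KLP+16]) that forms the Schur complement of a toy weighted graph onto a subset $C$ and checks that the result is again a graph Laplacian: zero row sums and non-positive off-diagonal entries. The sparsification of the Schur complement and the choice of an almost independent $F$ are omitted, and the function names are hypothetical.

```python
import numpy as np

def laplacian(W):
    # graph Laplacian from a symmetric weighted adjacency matrix
    return np.diag(W.sum(axis=1)) - W

def schur_onto(L, C):
    # eliminate the vertices outside C (the set F) and return Sc(L, C)
    F = [i for i in range(L.shape[0]) if i not in set(C)]
    LFF, LFC = L[np.ix_(F, F)], L[np.ix_(F, C)]
    LCF, LCC = L[np.ix_(C, F)], L[np.ix_(C, C)]
    return LCC - LCF @ np.linalg.solve(LFF, LFC)

# a small made-up weighted graph: a 6-cycle plus one chord
W = np.zeros((6, 6))
for u, v, w in [(0, 1, 1.), (1, 2, 2.), (2, 3, 1.), (3, 4, 3.), (4, 5, 1.), (5, 0, 2.), (1, 4, 1.)]:
    W[u, v] = W[v, u] = w
S = schur_onto(laplacian(W), C=[0, 2, 3, 5])
print(np.allclose(S.sum(axis=1), 0))              # rows sum to zero
print(np.all(S - np.diag(np.diag(S)) <= 1e-12))   # off-diagonals are non-positive
```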

3 Graph Based Solvers

Graph based solvers, on the other hand, aim to reduce to the easy combinatorial case of a tree. They treat error as more of a computational resource, to be traded against combinatorial steps. They rely on sampling by upper bounds on effective resistances produced by stretches with respect to a tree. Formally,

$$\mathrm{str}_T(e) \stackrel{\mathrm{def}}{=} w_e \sum_{e' \in \mathrm{Path}_T(e)} \frac{1}{w_{e'}}.$$

Stretch is useful because:

1. If $T$ is a spanning tree of $G$, then $r_e \cdot w_e \leq \mathrm{str}_T(e)$, where $r_e$ is the effective resistance between the endpoints of $e$.

2. For any graph $G$, there exists a spanning tree $T$ such that the total stretch of all edges, $\sum_e \mathrm{str}_T(e)$, is at most $O(m \log n \log\log n)$.

Algebraic proofs of the effective resistance bounds also show a very interesting connection:

$$\mathrm{tr}\left[ L_G L_T^{+} \right] = \sum_e \mathrm{str}_T(e).$$

If we squint and think about the $m \approx n$ case, this means that on average the eigenvalues of $L_T^{+} L_G$ are about $\log n$, while $L_T \preceq L_G$ also gives that they are at least $1$. So on average an iterative method should work, except that convergence is governed by the worst approximation. Fast graph based solvers can be viewed as ways of dealing with these large eigenvalues through further combinatorial structures.

3.1 Ultra-Sparsifiers

Ultra-sparsifiers are, more or less, a tree plus $k$ edges. They are useful because:

Fact 3.1. A tree plus $k$ edges is more or less a graph on $O(k)$ vertices / edges.

This is proven by removing leaves, as well as intermediate vertices of degree 2. In both cases, the number of edges minus the number of vertices remains invariant, and this initial invariant can then be used to upper bound the final size.

The goal is then to construct ultra-sparsifiers with $m/k$ off-tree edges that also $O(k \log^2 n \log\log n)$-approximate $L_G$. This plus iterative methods leads to a running time recurrence of the form

$$T(m) = O\left( \sqrt{k \log^2 n \log\log n} \right) \cdot \left( T(m/k) + m \right),$$

which, for an appropriate choice of $k$ (roughly $\log^2 n \log\log n$), solves to about $O(m \log^2 n \log\log n)$. This bound can be further improved in a variety of ways.
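Below is a small sketch (not from the notes) that computes $\mathrm{str}_T(e)$ for every edge of a toy weighted graph with respect to a spanning tree, using networkx purely for graph bookkeeping. A minimum spanning tree stands in for a genuine low-stretch spanning tree, whose construction is considerably more involved; the function name and the example graph are made up.

```python
import networkx as nx

def stretches(G, T):
    # str_T(u, v) = w(u, v) * sum of 1/w(e') over the unique tree path u -> v;
    # tree edges get stretch exactly 1.
    out = {}
    for u, v, data in G.edges(data=True):
        path = nx.shortest_path(T, u, v)
        resist = sum(1.0 / T[a][b]['weight'] for a, b in zip(path, path[1:]))
        out[(u, v)] = data['weight'] * resist
    return out

# toy example: a weighted 5-cycle with one chord
G = nx.Graph()
for u, v, w in [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (4, 0, 1.0), (1, 3, 2.0)]:
    G.add_edge(u, v, weight=w)
T = nx.minimum_spanning_tree(G, weight='weight')
s = stretches(G, T)
print(sum(s.values()), max(s.values()))
```

The total $\sum_e \mathrm{str}_T(e)$ printed here is exactly the quantity that a low-stretch spanning tree keeps down to $O(m \log n \log\log n)$, and by the trace identity above it controls how well $L_T$ preconditions $L_G$ on average.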

It remains to give a construction for an ultra-sparsifier. Directly applying sampling by stretch gives an $O(1)$-approximation, but with $O(m \log^2 n \log\log n)$ off-tree edges. Our goal is to have fewer off-tree edges, at the cost of a worse approximation. A way to introduce error that also reduces off-tree stretch is to simply scale up the tree by a factor of $\kappa$. This reduces the total off-tree stretch to

$$O\left( \frac{m \log n \log\log n}{\kappa} \right),$$

which after some short calculations means that setting

$$\kappa \leftarrow O\left( k \log^2 n \log\log n \right)$$

gives the required bounds. (A toy sketch of this scale-and-sample step appears after the next subsection.)

3.2 Single Cycle Toggling

There are also single cycle toggling algorithms that can be viewed as variants of gradient descent. These were not covered in class due to time constraints; hopefully at some point Richard will go back and write notes about them.
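As promised above, here is a toy networkx sketch of the scale-and-sample step from Section 3.1, combined with Fact 1.2's rule of sampling proportionally to (upper bounds on) effective resistance. The function name ultrasparsify, the constant c, and the use of a minimum spanning tree in place of a genuine low-stretch tree are all placeholders; the actual constructions and their analyses are in [KMP10, KMP11].

```python
import math
import random
import networkx as nx

def ultrasparsify(G, T, kappa, c=1.0):
    # Keep the spanning tree T with its weights scaled up by kappa, and keep
    # each off-tree edge independently with probability proportional to
    # log(n) * str_T(e) / kappa (an upper bound on log(n) * w_e * r_e),
    # rescaling kept edges by 1/p so the expectation is preserved.
    n = G.number_of_nodes()
    H = nx.Graph()
    for u, v, data in T.edges(data=True):
        H.add_edge(u, v, weight=kappa * data['weight'])
    for u, v, data in G.edges(data=True):
        if T.has_edge(u, v):
            continue
        path = nx.shortest_path(T, u, v)
        resist = sum(1.0 / T[a][b]['weight'] for a, b in zip(path, path[1:]))
        stretch = data['weight'] * resist
        p = min(1.0, c * math.log(n) * stretch / kappa)
        if random.random() < p:
            H.add_edge(u, v, weight=data['weight'] / p)
    return H

# toy run on a small connected graph with unit weights; larger kappa
# keeps fewer off-tree edges at the cost of a worse approximation
random.seed(0)
G = nx.connected_watts_strogatz_graph(30, 6, 0.3, seed=0)
for u, v in G.edges():
    G[u][v]['weight'] = 1.0
T = nx.minimum_spanning_tree(G, weight='weight')
H = ultrasparsify(G, T, kappa=20.0)
print(G.number_of_edges(), H.number_of_edges())
```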

References

[BH01]   Erik Boman and Bruce Hendrickson. On spanning tree preconditioners. Manuscript, Sandia National Laboratories, 2001.

[CKM+14] Michael B. Cohen, Rasmus Kyng, Gary L. Miller, Jakub W. Pachocki, Richard Peng, Anup Rao, and Shen Chen Xu. Solving SDD linear systems in nearly m log^{1/2} n time. In STOC, 2014.

[CKP+16] Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup Rao, Aaron Sidford, and Adrian Vladu. Almost-linear-time algorithms for Markov chains and new spectral primitives for directed graphs. Accepted to STOC 2017.

[Gre96]  Keith D. Gremban. Combinatorial Preconditioners for Sparse, Symmetric, Diagonally Dominant Linear Systems. PhD thesis, Carnegie Mellon University, 1996.

[KLP+16] Rasmus Kyng, Yin Tat Lee, Richard Peng, Sushant Sachdeva, and Daniel A. Spielman. Sparsified Cholesky and multigrid solvers for connection Laplacians. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC), 2016.

[KMP10]  Ioannis Koutis, Gary L. Miller, and Richard Peng. Approaching optimality for solving SDD linear systems. In Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2010.

[KMP11]  Ioannis Koutis, Gary L. Miller, and Richard Peng. A nearly-m log n time solver for SDD linear systems. In Proceedings of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2011.

[KOSZ13] Jonathan A. Kelner, Lorenzo Orecchia, Aaron Sidford, and Zeyuan Allen Zhu. A simple, combinatorial algorithm for solving SDD systems in nearly-linear time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), 2013.

[KS16]   Rasmus Kyng and Sushant Sachdeva. Approximate Gaussian elimination for Laplacians: fast, sparse, and simple. CoRR, 2016.

[LS13]   Yin Tat Lee and Aaron Sidford. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2013.

[MMP+05] Bruce M. Maggs, Gary L. Miller, Ojas Parekh, R. Ravi, and Shan Leung Maverick Woo. Finding effective support-tree preconditioners. In Proceedings of the 17th Annual ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2005.

[PS14]   Richard Peng and Daniel A. Spielman. An efficient parallel solver for SDD linear systems. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC), 2014.

[ST03]   Daniel A. Spielman and Shang-Hua Teng. Solving sparse, symmetric, diagonally-dominant linear systems in time O(m^{1.31}). In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2003.

[ST14]   Daniel A. Spielman and Shang-Hua Teng. Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM Journal on Matrix Analysis and Applications, 35(3), 2014.

[Vai91]  Pravin M. Vaidya. Solving linear equations with symmetric diagonally dominant matrices by constructing good preconditioners. Unpublished manuscript; a talk based on this manuscript was presented at the IMA Workshop on Graph Theory and Sparse Matrix Computation, October 1991.
