Randomized Algorithms


Randomized Algorithms

Last time: network topologies, intro to MPI, matrix-matrix multiplication

Today: MPI I/O, randomized algorithms, parallel k-select, graph coloring, Assignment 2

Parallel I/O The goal of parallel I/O is to use parallelism to increase bandwidth. Parallel I/O can be hard to coordinate and optimize when working directly at the level of a distributed file system API such as Lustre, or at the POSIX I/O interface. Therefore, specialists implement a number of intermediate layers that coordinate data access and map from the application layer to the I/O layer. Application developers then only have to deal with a high-level interface built on top of a software stack that in turn sits on top of the underlying hardware.

Typical pattern: parallel programs doing sequential I/O. All processes send their data to a designated master process, which then writes the collected data to the file. This sequential I/O can limit the performance and scalability of many applications.

Desired pattern: parallel programs doing parallel I/O. Multiple processes participate in reading data from, or writing data to, a common file in parallel. This strategy improves performance and still provides a single file for storage and transfer purposes. Simple solution?

MPI for parallel I/O Reading and writing in parallel is like receiving and sending messages, so MPI-like machinery is a good setting for parallel I/O (think MPI communicators and MPI datatypes). MPI-I/O was introduced in MPI-2, released in 1997; it interoperates with the file system to enhance I/O performance for distributed-memory applications. With N processes, each process participates in reading or writing a portion of a common file.

Using MPI I/O Files are opened and closed with MPI_File_open and MPI_File_close. There are three ways of positioning where the read or write takes place for each process: use individual file pointers (e.g., MPI_File_seek / MPI_File_read), calculate explicit byte offsets (e.g., MPI_File_read_at), or access a shared file pointer (e.g., MPI_File_seek_shared, MPI_File_read_shared). Collective I/O forces all processes in the communicator to read/write data together and to wait for each other: MPI_File_read_all / MPI_File_read_at_all, MPI_File_write_all / MPI_File_write_at_all, etc.
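
Below is a minimal sketch (not from the lecture) of the desired pattern: each process writes its own contiguous block of a common file using an explicit byte offset and a collective call. The file name output.dat and the block size BLOCK are illustrative choices.

#include <mpi.h>

#define BLOCK 1024          /* illustrative: number of ints written per process */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process fills its own block of data. */
    int buf[BLOCK];
    for (int i = 0; i < BLOCK; i++)
        buf[i] = rank;

    /* All processes open the same file collectively. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Explicit byte offset: rank r writes at r * BLOCK * sizeof(int).
       The _all suffix makes the write collective. */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, BLOCK, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Each rank lands at a disjoint offset, so the processes write the shared file in parallel with no coordination beyond the collective call itself.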

Randomized Algorithms

Recall Two main strategies for parallelization: divide & conquer, and randomization. Randomization ensures that processors can make local decisions which, with high probability, add up to good global decisions. Example: sampling in quicksort and samplesort.

Randomization Uses include sampling, symmetry breaking (independent sets, today's topic), and load balancing.

Parallel k-select Given an array A with n elements, Select(A, k) returns the element of rank k in A (the k-th smallest). Select(A, n/2) or Select(A, (n+1)/2) is median selection; Select(A, 1) is the minimum; Select(A, n) is the maximum. Note that for samplesort we can perform (p - 1) k-selects simultaneously to obtain the p - 1 splitters: Select(A, n/p), Select(A, 2n/p), ..., Select(A, (p-1)n/p).

Parallel k-select Select splitter(s) randomly, say (k1, k2). Estimate the ranks of the splitters, (R1, R2), and partition A into three sets: a sequential ranking (count) on each process costs O(n/p), and a reduction to obtain the global ranking costs O(log p). Recurse on the appropriate set, depending on whether k lies in [0, R1), [R1, R2), or [R2, n). Recursion on a strictly smaller set will terminate. There is a tradeoff between the number of splitters at each iteration and the number of iterations needed to converge. Choosing p - 1 splitters is efficient for samplesort; what to do for quicksort?
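
The following is a minimal sketch (my own, assuming A is distributed as plain int arrays and a single splitter is used) of the ranking step: each process counts its local elements below the splitter in O(n/p), and an all-reduce produces the splitter's global rank in O(log p). The caller would compare k with this rank and recurse on the appropriate side.

#include <mpi.h>

/* Count how many elements of the local portion of A are smaller than
   the splitter; cost O(n/p) per process. */
static long local_rank_below(const int *a_local, long n_local, int splitter)
{
    long count = 0;
    for (long i = 0; i < n_local; i++)
        if (a_local[i] < splitter)
            count++;
    return count;
}

/* Global rank of the splitter: sum of the local counts, obtained with a
   reduction over all processes; cost O(log p). */
long global_rank(const int *a_local, long n_local, int splitter, MPI_Comm comm)
{
    long local = local_rank_below(a_local, n_local, splitter);
    long global = 0;
    MPI_Allreduce(&local, &global, 1, MPI_LONG, MPI_SUM, comm);
    return global;
}

/* The caller then compares the requested rank k with the splitter's global
   rank and recurses on the local elements below or above the splitter. */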

Graph Algorithms Will be covered in detail later. A graph G = (V, E) consists of vertices and edges. Matrices to graphs: the adjacency graph of a matrix A has an edge (i, j) iff A_ij != 0, and the edge weight W_ij can be the value A_ij. Graphs to matrices: the adjacency matrix of a weighted graph (default weight 1; a vertex value can be its in-degree). Symmetric matrices correspond to undirected graphs, unsymmetric matrices to directed graphs.
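
As a small illustration (an assumed example, not from the slides) of the matrix-to-graph direction, the sketch below scans a dense matrix and emits an edge (i, j) with weight A_ij wherever A_ij is nonzero; the 4x4 matrix is made up.

#include <stdio.h>

#define N 4   /* illustrative matrix size */

/* Print the adjacency graph of a dense matrix A:
   edge (i, j) exists iff A[i][j] != 0, with weight A[i][j]. */
void print_adjacency_graph(const double A[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (A[i][j] != 0.0)
                printf("edge (%d, %d) with weight %g\n", i, j, A[i][j]);
}

int main(void)
{
    /* An unsymmetric matrix, hence a directed adjacency graph. */
    const double A[N][N] = {
        {0, 2, 0, 0},
        {0, 0, 1, 0},
        {5, 0, 0, 3},
        {0, 0, 0, 0},
    };
    print_adjacency_graph(A);
    return 0;
}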

Graph Algorithms Topics: graph partitioning (an NP-hard problem, which we will cover in detail), coloring, the graph Laplacian and its eigenproblem, breadth-first search (BFS), depth-first search (DFS), connected components, and spanning trees.

Sequential Greedy Algorithm
n = |V|
Choose a random permutation p(1), ..., p(n) of the numbers 1, ..., n
U = V
for i = 1 to n
    v = p(i)
    S = {colors of all colored neighbors of v}
    c(v) = smallest color not in S
    U = U - {v}
Bounds?
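
A runnable version of the greedy algorithm above, as a sketch under assumed data structures (a small dense adjacency matrix and a Fisher-Yates shuffle for the random permutation):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NV 6   /* illustrative: number of vertices */

/* Greedy coloring: visit vertices in the order perm[0..NV-1]; give each
   the smallest color not used by an already-colored neighbor.
   adj[v][u] != 0 means edge (v, u); color[v] == -1 means uncolored. */
void greedy_color(const int adj[NV][NV], const int *perm, int *color)
{
    for (int v = 0; v < NV; v++)
        color[v] = -1;

    for (int i = 0; i < NV; i++) {
        int v = perm[i];
        int used[NV] = {0};                 /* colors seen on v's neighbors */
        for (int u = 0; u < NV; u++)
            if (adj[v][u] && color[u] >= 0)
                used[color[u]] = 1;
        int c = 0;
        while (used[c])                     /* smallest color not in S */
            c++;
        color[v] = c;
    }
}

int main(void)
{
    /* A small undirected example graph (a 5-cycle plus a pendant vertex). */
    const int adj[NV][NV] = {
        {0,1,0,0,1,1}, {1,0,1,0,0,0}, {0,1,0,1,0,0},
        {0,0,1,0,1,0}, {1,0,0,1,0,0}, {1,0,0,0,0,0},
    };

    /* Random permutation of 0..NV-1 (Fisher-Yates shuffle). */
    int perm[NV], color[NV];
    for (int i = 0; i < NV; i++) perm[i] = i;
    srand((unsigned)time(NULL));
    for (int i = NV - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }

    greedy_color(adj, perm, color);
    for (int v = 0; v < NV; v++)
        printf("vertex %d -> color %d\n", v, color[v]);
    return 0;
}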

(Maximal) Independent Set An independent set is a set of vertices in which no two vertices share a common edge; it is maximal if no further vertex can be added without breaking independence.

Parallel Graph Coloring Any independent set can be colored in parallel.
U = V
while |U| > 0 do
    in parallel: choose an independent set I from U
    color all vertices in I
    U = U - I
Optimal coloring: color using the smallest color. Balanced coloring: use all colors equally.

Maximal Independent Set (Luby) Find a maximal independent set of the graph, color all of its vertices with the same color, remove them from the graph, and recurse.
I = {}
V' = V
while |V'| > 0 do
    choose an independent set I' from V'
    I = I + I'
    X = I' + N(I')
    V' = V' - X

How to choose independent sets in parallel? Assign a random weight to each vertex and choose the vertices that are local maxima among their neighbors. This gives an O((c + 1) log |V|) algorithm for sparse graphs.
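
A sketch of one selection round (my own sequential simulation of the parallel step): every still-active vertex draws a random weight, and a vertex joins the independent set if its weight beats those of all active neighbors, with ties broken by vertex number. The row-major adjacency array and the function name are illustrative.

#include <stdlib.h>

/* One round of Luby-style independent-set selection, simulated sequentially.
   adj is an n*n 0/1 adjacency matrix (row-major); active[v] != 0 marks
   vertices still in the graph; on return, in_set[v] != 0 marks the vertices
   chosen for the independent set.  In a parallel implementation each vertex
   would run this test independently after exchanging weights with its
   neighbors. */
void select_independent_set(int n, const int *adj, const int *active,
                            int *in_set)
{
    double *w = malloc(n * sizeof(double));
    for (int v = 0; v < n; v++)
        w[v] = (double)rand() / RAND_MAX;       /* random weight per vertex */

    for (int v = 0; v < n; v++) {
        in_set[v] = 0;
        if (!active[v])
            continue;
        int local_max = 1;                      /* does v beat all active neighbors? */
        for (int u = 0; u < n; u++) {
            if (u == v || !adj[v * n + u] || !active[u])
                continue;
            if (w[u] > w[v] || (w[u] == w[v] && u > v))
                local_max = 0;                  /* ties broken by vertex number */
        }
        in_set[v] = local_max;
    }
    free(w);
}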

Luby's Algorithm (figure sequence) Vertices are initially assigned random numbers; find an MIS (the vertices that are local maxima among their remaining neighbors) and color all of its vertices with the same color; remove them, assign new random numbers to the remaining vertices, and repeat until the graph is empty.

Jones-Plassmann Coloring It is not necessary to create a new random permutation of vertices every time; the vertex numbers are used to resolve conflicts. It does not find an MIS at each step. Instead, it finds an independent set whose vertices are not all assigned the same color: each is colored individually using the smallest available color.

Jones-Plassmann Coloring
U = V
while |U| > 0 do
    for all vertices v in U do in parallel
        I = { v : w(v) > w(u) for all u in N(v) ∩ U }
    for all vertices v in I do in parallel
        S = { colors of N(v) }
        c(v) = minimum color not in S
    U = U - I
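
A sketch of Jones-Plassmann with the parallel rounds simulated sequentially (my own; the dense row-major adjacency representation is assumed for brevity). Weights are drawn once, up front; in each round, every uncolored vertex that beats all of its uncolored neighbors takes the smallest color not used by any neighbor.

#include <stdlib.h>

/* Jones-Plassmann coloring, parallel rounds simulated sequentially.
   adj is an n*n 0/1 adjacency matrix (row-major); on return every
   vertex v has color[v] >= 0. */
void jones_plassmann(int n, const int *adj, int *color)
{
    double *w = malloc(n * sizeof(double));
    int *in_set = malloc(n * sizeof(int));
    int *used = malloc(n * sizeof(int));
    int uncolored = n;

    for (int v = 0; v < n; v++) {
        color[v] = -1;
        w[v] = (double)rand() / RAND_MAX;   /* weights drawn once, up front */
    }

    while (uncolored > 0) {
        /* Phase 1: every uncolored vertex whose weight beats all of its
           uncolored neighbors (ties broken by vertex number) joins this
           round's independent set. */
        for (int v = 0; v < n; v++) {
            in_set[v] = 0;
            if (color[v] >= 0)
                continue;
            int local_max = 1;
            for (int u = 0; u < n; u++)
                if (u != v && adj[v * n + u] && color[u] < 0 &&
                    (w[u] > w[v] || (w[u] == w[v] && u > v)))
                    local_max = 0;
            in_set[v] = local_max;
        }
        /* Phase 2: vertices in the set are pairwise non-adjacent, so each
           independently takes the smallest color unused by its neighbors. */
        for (int v = 0; v < n; v++) {
            if (!in_set[v])
                continue;
            for (int c = 0; c < n; c++)
                used[c] = 0;
            for (int u = 0; u < n; u++)
                if (adj[v * n + u] && color[u] >= 0)
                    used[color[u]] = 1;
            int c = 0;
            while (used[c])
                c++;
            color[v] = c;
            uncolored--;
        }
    }
    free(used);
    free(in_set);
    free(w);
}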

Jones-Plassmann Coloring (figure sequence) Vertices are initially assigned random numbers; every vertex that is a local maximum among its uncolored neighbors is assigned the lowest available color; repeat, considering only the uncolored vertices, until every vertex is colored.