Parallelization Strategy
Transcription
1 COSC 335 Software Design: Parallel Design Patterns (II), Spring 2008

Parallelization Strategy
- Finding Concurrency: structure the problem to expose exploitable concurrency
- Algorithm Structure: structure the algorithm to take advantage of concurrency
- Supporting Structure: intermediate stage between Algorithm Structure and Implementation; program structuring and definition of shared data structures
- Implementation Mechanism: mapping of the higher-level patterns onto a programming environment
2 Finding concurrency - Overview

- Decomposition: Task Decomposition, Data Decomposition
- Dependency Analysis: Group Tasks, Order Tasks, Data Sharing
- Design Evaluation

Overview (II)
- Is the problem large enough to justify the effort of parallelizing it?
- Are the key features of the problem and the data elements within the problem well understood?
- Which parts of the problem are most computationally intensive?
3 Task decomposition

How can a problem be decomposed into tasks that can execute concurrently?
- Goal: a collection of (nearly) independent tasks
- Initially, try to find as many tasks as possible (they can be merged later to form larger tasks)
- A task can correspond to:
  - a function call
  - distinct iterations of loops within the algorithm (loop splitting)
  - any independent sections in the source code

Task decomposition - goals and constraints
- Flexibility: the design should be flexible enough to handle any number of processes; e.g. the number and the size of the tasks should be parameters of the task decomposition (see the loop-splitting sketch below)
- Efficiency: each task should include enough work to compensate for the overhead of generating tasks and managing their dependencies
- Simplicity: tasks should be defined such that debugging and maintenance stay easy
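As a concrete illustration of loop splitting (my addition, not from the slides), here is a minimal C sketch that decomposes the iterations of a loop into independent tasks, one contiguous block per task; work_on_element and the task count are hypothetical, and both the number and size of the tasks are parameters, as the flexibility constraint demands:

#include <stdio.h>

#define N 1000

/* Hypothetical per-element work; the iterations are independent,
 * so each block of iterations can become one task. */
static double work_on_element(int i) { return i * 0.5; }

int main(void)
{
    int num_tasks = 4;               /* number of tasks is a parameter */
    int chunk = N / num_tasks;       /* size of each task is a parameter */

    for (int t = 0; t < num_tasks; t++) {
        int lo = t * chunk;
        int hi = (t == num_tasks - 1) ? N : lo + chunk;
        /* Each (lo, hi) block is an independent task that could be
         * handed to a separate process or thread. */
        double sum = 0.0;
        for (int i = lo; i < hi; i++)
            sum += work_on_element(i);
        printf("task %d: partial result %f\n", t, sum);
    }
    return 0;
}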
4 Task decomposition (II)

Two tasks A and B with input sets I and output sets O are considered independent if
  I_A ∩ O_B = ∅,   I_B ∩ O_A = ∅,   O_A ∩ O_B = ∅

Task decomposition - example
Solving a system of linear equations using the Conjugate Gradient method. Given an initial guess x_0, the matrix A, and the right-hand side b:
  p_0 = r_0 := b - A x_0
  for i = 0, 1, 2, ...
    α = r_i^T p_i          \
    β = (A p_i)^T p_i       } can be executed in parallel
    λ = α / β
    x_{i+1} = x_i + λ p_i          \
    r_{i+1} = r_i - λ A p_i         } can be executed in parallel
    p_{i+1} = r_{i+1} - (((A r_{i+1})^T p_i) / β) p_i
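To make the independence conditions concrete, a small hedged C sketch (my own illustration, not slide code) of the first parallel pair above: the two scalar products read overlapping data but write disjoint outputs, so they satisfy the conditions and may run concurrently; q = A p_i is assumed to be precomputed:

/* Plain dot product over vectors of length n. */
double dot(const double *a, const double *b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

void cg_scalars(const double *r, const double *p, const double *q,
                int n, double *alpha, double *beta, double *lambda)
{
    /* task A: inputs {r, p}, output {*alpha} */
    *alpha = dot(r, p, n);
    /* task B: inputs {q, p}, output {*beta}; independent of task A */
    *beta = dot(q, p, n);
    /* λ depends on both, so it must wait until A and B have finished */
    *lambda = *alpha / *beta;
}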
5 Data decomposition

How can a problem's data be decomposed into units that can be operated on (relatively) independently?
- The most computationally intensive parts of a problem typically deal with large data structures
- Common mechanisms:
  - array-based decomposition (e.g. see the example on the next slides)
  - recursive data structures (e.g. decomposing the parallel update of a large tree data structure)

Data decomposition - goals and constraints
- Flexibility: size and number of data chunks should be flexible (granularity)
- Efficiency:
  - data chunks have to be large enough that the amount of work on a chunk compensates for the cost of managing dependencies
  - load balancing
- Simplicity:
  - complex data types are difficult to debug
  - a mapping of local indexes to global indexes is often required
6 Numerical differentiation - recap

Forward difference formula:
  f'(x) ≈ [f(x+h) - f(x)] / h

Central difference formula for the 1st derivative:
  f'(x) ≈ [f(x+h) - f(x-h)] / (2h)

Central difference formula for the 2nd derivative:
  f''(x) ≈ [f(x+h) - 2f(x) + f(x-h)] / h²

Finite Differences Approach for Solving Differential Equations
Idea: replace the derivatives in the DE by a corresponding approximation formula, typically central differences:
  y'(t) ≈ [y(t+h) - y(t-h)] / (2h)
  y''(t) ≈ [y(t+h) - 2y(t) + y(t-h)] / h²

Example: boundary value problem of an ordinary differential equation
  d²y/dx² = f(x, y, dy/dx),   a ≤ x ≤ b,   y(a) = α,   y(b) = β
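As a quick illustration (my addition, not from the slides), a minimal self-contained C sketch of the three difference formulas; f can be any smooth function supplied by the caller:

#include <math.h>
#include <stdio.h>

/* Forward difference: first-order accurate in h */
double fwd_diff(double (*f)(double), double x, double h)
{
    return (f(x + h) - f(x)) / h;
}

/* Central difference for f': second-order accurate in h */
double central_diff1(double (*f)(double), double x, double h)
{
    return (f(x + h) - f(x - h)) / (2.0 * h);
}

/* Central difference for f'': second-order accurate in h */
double central_diff2(double (*f)(double), double x, double h)
{
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h);
}

int main(void)
{
    /* Check against sin: expect roughly cos(1) and -sin(1). */
    printf("%f %f %f\n",
           fwd_diff(sin, 1.0, 1e-4),
           central_diff1(sin, 1.0, 1e-4),
           central_diff2(sin, 1.0, 1e-4));
    return 0;
}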
7 Finite Differences Approach (II)

For simplicity, let's assume the points are equally spaced:
  x_i = a + i·h,   i = 0, ..., n+1,   h = (b - a) / (n + 1)

A two-point boundary value problem then becomes
  y_0 = α,   y_{n+1} = β
  (y_{i+1} - 2y_i + y_{i-1}) / h² = f(x_i, y_i, (y_{i+1} - y_{i-1}) / (2h))     (*)

Equation (*) leads to a system of equations. Solving this system of linear equations gives the solution of the ODE at the distinct points x_0, x_1, ..., x_n, x_{n+1}.

Example (I)
Solve the following two-point boundary value problem using the finite difference method with h = 0.2:
  d²y/dx² + 2 dy/dx + 10x = 0,   0 ≤ x ≤ 1
  y(0) = 1,   y(1) = 2

Since h = 0.2, the mesh points are
  x_0 = 0, x_1 = 0.2, x_2 = 0.4, x_3 = 0.6, x_4 = 0.8, x_5 = 1.0
Thus y_0 = y(x_0) = 1 and y_5 = y(x_5) = 2 are known; y_1 - y_4 are unknown.
8 Example (II)

Discrete version of the ODE using central differences:
  (y_{i+1} - 2y_i + y_{i-1}) / h² + 2 (y_{i+1} - y_{i-1}) / (2h) + 10x_i = 0

With h = 0.2 (so 1/h² = 25 and 1/h = 5):
  25 (y_{i+1} - 2y_i + y_{i-1}) + 5 (y_{i+1} - y_{i-1}) + 10x_i = 0
  20y_{i-1} - 50y_i + 30y_{i+1} = -10x_i

Example (III)
  i=1:  20y_0 - 50y_1 + 30y_2 = -10x_1,  i.e.  -50y_1 + 30y_2 = -22
  i=2:  20y_1 - 50y_2 + 30y_3 = -4
  i=3:  20y_2 - 50y_3 + 30y_4 = -6
  i=4:  20y_3 - 50y_4 + 30y_5 = -10x_4,  i.e.  20y_3 - 50y_4 = -68

or, in matrix form A y = b:

  | -50  30   0   0 | | y_1 |   | -22 |
  |  20 -50  30   0 | | y_2 | = |  -4 |
  |   0  20 -50  30 | | y_3 |   |  -6 |
  |   0   0  20 -50 | | y_4 |   | -68 |
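To check the worked example numerically, here is a short self-contained C sketch (my addition, not slide code) that solves the tridiagonal system above with the standard Thomas algorithm:

#include <stdio.h>

/* Thomas algorithm for a tridiagonal system:
 * a = sub-diagonal, d = diagonal, c = super-diagonal, b = right-hand side.
 * Overwrites b with the solution. */
static void thomas(int n, double *a, double *d, double *c, double *b)
{
    for (int i = 1; i < n; i++) {          /* forward elimination */
        double m = a[i] / d[i - 1];
        d[i] -= m * c[i - 1];
        b[i] -= m * b[i - 1];
    }
    b[n - 1] /= d[n - 1];                  /* back substitution */
    for (int i = n - 2; i >= 0; i--)
        b[i] = (b[i] - c[i] * b[i + 1]) / d[i];
}

int main(void)
{
    double a[4] = {   0,  20,  20,  20 };  /* a[0] unused */
    double d[4] = { -50, -50, -50, -50 };
    double c[4] = {  30,  30,  30,   0 };  /* c[3] unused */
    double b[4] = { -22,  -4,  -6, -68 };
    thomas(4, a, d, c, b);
    for (int i = 0; i < 4; i++)
        printf("y_%d = %f\n", i + 1, b[i]);
    return 0;
}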
9 Solving Ay = b using Bi-CGSTAB

Given A, b and an initial guess y_0:
  r_0 = b - A y_0
  choose r̂_0 such that r̂_0^T r_0 ≠ 0
  ρ_0 = α = ω_0 = 1;   v_0 = p_0 = 0
  for i = 1, 2, 3, ...
    ρ_i = r̂_0^T r_{i-1}                        (scalar product)
    β = (ρ_i / ρ_{i-1}) (α / ω_{i-1})
    p_i = r_{i-1} + β (p_{i-1} - ω_{i-1} v_{i-1})
    v_i = A p_i                                 (matrix-vector multiplication)
    α = ρ_i / (r̂_0^T v_i)                      (scalar product)
    s = r_{i-1} - α v_i
    t = A s                                     (matrix-vector multiplication)
    ω_i = (t^T s) / (t^T t)                     (scalar products)
    y_i = y_{i-1} + α p_i + ω_i s
    r_i = s - ω_i t

Scalar product in parallel
  s = Σ_{i=0}^{N-1} a[i]·b[i]
    = Σ_{i=0}^{N/2-1} a[i]·b[i]  +  Σ_{i=N/2}^{N-1} a[i]·b[i]
    = Σ_{i=0}^{N/2-1} a_local[i]·b_local[i]  (rank 0)  +  Σ_{i=0}^{N/2-1} a_local[i]·b_local[i]  (rank 1)

Parallel algorithm: the process with rank 0 holds a(0 ... N/2-1) and b(0 ... N/2-1); the process with rank 1 holds a(N/2 ... N-1) and b(N/2 ... N-1). Each process computes its local partial sum; forming the global sum requires communication between the processes. A possible MPI realization is sketched below.
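A minimal MPI sketch of the parallel scalar product (my illustration; it assumes the vectors are already block-distributed and that N divides evenly across the processes):

#include <mpi.h>
#include <stdio.h>

/* Each process computes the dot product of its local blocks, then
 * MPI_Allreduce combines the partial sums on every process. */
double parallel_dot(const double *a_local, const double *b_local,
                    int n_local, MPI_Comm comm)
{
    double s_local = 0.0, s_global = 0.0;
    for (int i = 0; i < n_local; i++)
        s_local += a_local[i] * b_local[i];
    MPI_Allreduce(&s_local, &s_global, 1, MPI_DOUBLE, MPI_SUM, comm);
    return s_global;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double a_local[4], b_local[4];            /* hypothetical local blocks */
    for (int i = 0; i < 4; i++) { a_local[i] = 1.0; b_local[i] = rank + 1.0; }

    double s = parallel_dot(a_local, b_local, 4, MPI_COMM_WORLD);
    if (rank == 0) printf("global dot product = %f\n", s);

    MPI_Finalize();
    return 0;
}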
10 Matrix-vector product in parallel

Using the matrix from the example:

  | -50  30   0   0 | | x_1 |   | rhs_1 |
  |  20 -50  30   0 | | x_2 | = | rhs_2 |
  |   0  20 -50  30 | | x_3 |   | rhs_3 |
  |   0   0  20 -50 | | x_4 |   | rhs_4 |

Row-wise distribution over two processes:
  Process 0:  -50x_1 + 30x_2          = rhs_1
               20x_1 - 50x_2 + 30x_3  = rhs_2
  Process 1:   20x_2 - 50x_3 + 30x_4  = rhs_3
               20x_3 - 50x_4          = rhs_4

Process 0 needs x_3; Process 1 needs x_2.

Matrix-vector product in parallel (II)
Introduction of ghost cells:
  Process zero holds x_1, x_2 and a ghost copy of x_3
  Process one holds x_3, x_4 and a ghost copy of x_2

Looking at the source code, e.g.
  p_i = r_{i-1} + β (p_{i-1} - ω_{i-1} v_{i-1})
  v_i = A p_i
since the vector used in the matrix-vector multiplication changes every iteration, you always have to update the ghost cells before doing the calculation.
11 Matrix-vector product in parallel (III)

So the parallel algorithm for the same area is:
  p_i = r_{i-1} + β (p_{i-1} - ω_{i-1} v_{i-1})
  update the ghost cells of p, e.g.
    - Process 0 sends p(2) to Process 1
    - Process 1 sends p(3) to Process 0
  v_i = A p_i
A sketch of such a ghost-cell exchange in MPI follows below.

2-D Example
- Laplace equation
- Parallel domain decomposition
- Data exchange at the process boundaries is required
- Halo cells / ghost cells: a copy of the last row/column of data from the neighbor process
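A hedged MPI sketch of the 1-D ghost-cell update for the two-process example above (my illustration; the layout with one ghost entry at the block edge is an assumption, not slide code):

#include <mpi.h>

/* Each process stores its block of p plus one ghost entry.
 * Process 0: p_local = { p1, p2, ghost(p3) }
 * Process 1: p_local = { ghost(p2), p3, p4 }
 * MPI_Sendrecv swaps the boundary entries without deadlock. */
void update_ghost_cells(double *p_local, int rank, MPI_Comm comm)
{
    if (rank == 0) {
        /* send my last owned entry p(2); receive p(3) into the ghost slot */
        MPI_Sendrecv(&p_local[1], 1, MPI_DOUBLE, 1, 0,
                     &p_local[2], 1, MPI_DOUBLE, 1, 0,
                     comm, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        /* send my first owned entry p(3); receive p(2) into the ghost slot */
        MPI_Sendrecv(&p_local[1], 1, MPI_DOUBLE, 0, 0,
                     &p_local[0], 1, MPI_DOUBLE, 0, 0,
                     comm, MPI_STATUS_IGNORE);
    }
    /* only after this exchange may v = A*p be computed locally */
}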
12 Parallelization Strategy
- Finding Concurrency: structure the problem to expose exploitable concurrency
- Algorithm Structure: structure the algorithm to take advantage of concurrency
- Supporting Structure: intermediate stage between Algorithm Structure and Implementation; program structuring and definition of shared data structures
- Implementation Mechanism: mapping of the higher-level patterns onto a programming environment

Finding concurrency - Result
- A task decomposition that identifies tasks that can execute concurrently
- A data decomposition that identifies data local to each task
- A way of grouping tasks and ordering them according to temporal constraints
13 Algorithm structure

- Organize by tasks: Task Parallelism, Divide and Conquer
- Organize by data decomposition: Geometric Decomposition, Recursive Data
- Organize by flow of data: Pipeline, Event-based Coordination

Task parallelism (I)
- The problem can be decomposed into a collection of tasks that can execute concurrently
- Tasks can be completely independent (embarrassingly parallel) or can have dependencies among them
- All tasks might be known at the beginning, or might be generated dynamically
14 Task parallelism (II)

Tasks:
- There should be at least as many tasks as UEs (units of execution), typically many, many more
- The computation associated with each task should be large enough to offset the overhead associated with managing tasks and handling dependencies

Dependencies:
- Ordering constraints: sequential composition of task-parallel computations
- Shared-data dependencies: several tasks have to access the same data structure

Shared data dependencies
Shared data dependencies can be categorized as follows:
- Removable dependencies: an apparent dependency that can be removed by code transformation. In the following loop, ii and jj carry values from one iteration to the next:

int i, ii = 0, jj = 0;
for (i = 0; i < N; i++) {
    ii = ii + 1;                               /* ii depends on the previous iteration */
    d[ii] = big_time_consuming_work(ii);
    jj = jj + ii;                              /* jj depends on the previous iteration */
    a[jj] = other_big_time_consuming_work(jj);
}

Since ii and jj can be computed directly from the loop index i, the dependency can be removed and the iterations become independent:

for (i = 0; i < N; i++) {
    d[i] = big_time_consuming_work(i);
    a[(i*i+i)/2] = other_big_time_consuming_work((i*i+i)/2);
}
15 Shared data dependencies (II)

- Separable dependencies: replicate the shared data structure and combine the copies into a single structure at the end. (Remember the matrix-vector multiply using column-wise block distribution in the first MPI lecture?)
- Other dependencies: non-resolvable; they have to be followed

Task scheduling
- Schedule: the way in which tasks are assigned to UEs for execution
- Goal: load balance, i.e. minimize the overall execution time of all tasks
- Two classes of schedule:
  - Static schedule: the distribution of tasks to UEs is determined at the start of the computation and not changed afterwards
  - Dynamic schedule: the distribution of tasks to UEs changes as the computation proceeds
16 Task scheduling - example

(Diagram: six independent tasks A-F of different sizes are mapped to 4 UEs. A poor mapping groups the tasks so that some UEs finish long before the others; a good mapping balances the total work per UE so that all UEs finish at roughly the same time.)

Static schedule
- Tasks are grouped into blocks, and blocks are assigned to UEs
- Each UE should take approximately the same amount of time to complete its tasks
- A static schedule is usually used when:
  - the availability of computational resources is predictable (e.g. dedicated usage of nodes)
  - the UEs are identical (e.g. a homogeneous parallel computer)
  - the size of each task is nearly identical
A common static scheme is the block distribution of loop iterations sketched below.
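A minimal sketch (my addition, not slide code) of a static block schedule in MPI style: each of `size` UEs gets one contiguous block of N tasks, and the remainder handling shown is one common convention:

/* Static block schedule: rank processes tasks [start, end).
 * The first (N % size) ranks get one extra task so the load
 * stays balanced when N is not divisible by size. */
void block_range(int N, int rank, int size, int *start, int *end)
{
    int base = N / size;
    int rem  = N % size;
    *start = rank * base + (rank < rem ? rank : rem);
    *end   = *start + base + (rank < rem ? 1 : 0);
}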
17 Dynamic scheduling

Used when:
- the effort associated with each task varies widely / is unpredictable
- the capabilities of the UEs vary widely (heterogeneous parallel machine)

Common implementations:
- Task queues: when a UE finishes its current task, it removes the next task from the task queue (see the sketch below)
- Work stealing: each UE has its own work queue; once its queue is empty, a UE steals work from the task queue of another UE

Dynamic scheduling - trade-offs
- Fine-grained (= shorter, smaller) tasks allow for better load balance
- Fine-grained tasks have higher costs for task management and dependency management
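A minimal shared-memory sketch (my addition, not slide code) of the task-queue idea using POSIX threads: worker threads repeatedly pop the next task index from a mutex-protected counter until the queue is exhausted, so faster workers automatically take more tasks:

#include <pthread.h>
#include <stdio.h>

#define NUM_TASKS   16
#define NUM_WORKERS 4

/* The "queue" is just the next unclaimed task index, protected by a mutex. */
static int next_task = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void process_task(int t) { (void)t; /* hypothetical task body */ }

static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int t = (next_task < NUM_TASKS) ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (t < 0) break;                  /* queue empty: this worker is done */
        printf("worker %ld takes task %d\n", id, t);
        process_task(t);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];
    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}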
18 Divide and Conquer

(Diagram: the problem is split recursively into sub-problems; the leaves are handled by a base solve, and the sub-solutions are merged back up to the final solution.)

- A problem is split into a number of smaller sub-problems
- Each sub-problem is solved independently
- The sub-solutions of the sub-problems are merged into the solution of the final problem

Problems of Divide and Conquer for parallel computing:
- The amount of exploitable concurrency decreases over the lifetime of the computation
- Trivial parallel implementation: each function call to solve is a task of its own. For small problems, no new task should be generated; instead the base solve should be applied.
19 Divide and Conquer (II)

Implementation:
- On shared memory machines, a divide and conquer algorithm can easily be mapped to a fork/join model:
  - a new task is forked (= created)
  - after this task is done, it joins the original task (= is destroyed)
- On distributed memory machines: task queues, often implemented using the Master/Worker framework discussed later in this course

Sequential skeleton:

int solve ( Problem P )
{
    int i, solution;

    /* Check whether we can further partition the problem */
    if ( basecase(P) ) {
        solution = basesolve(P);           /* No, we can't */
    }
    else {                                 /* Yes, we can */
        Problem subproblems[N];
        int subsolutions[N];

        split ( P, subproblems );          /* Partition the problem */
        for ( i = 0; i < N; i++ ) {
            subsolutions[i] = solve ( subproblems[i] );
        }
        solution = merge ( subsolutions );
    }
    return ( solution );
}
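As a hedged illustration of the fork/join mapping (my addition; the slides do not show this code), here is a runnable C/OpenMP sketch that applies the same skeleton to summing an array: each recursive call on one half is forked as a task, and taskwait performs the join before merging:

#include <omp.h>
#include <stdio.h>

#define CUTOFF 1000   /* below this size, apply the base solve directly */

/* Divide and conquer sum: split the range in half, fork a task for
 * one half, recurse on the other, then join and merge (add). */
static long solve_sum(const int *data, int lo, int hi)
{
    if (hi - lo <= CUTOFF) {              /* base case: small problem */
        long s = 0;
        for (int i = lo; i < hi; i++)
            s += data[i];
        return s;
    }
    int mid = (lo + hi) / 2;              /* split */
    long left, right;
    #pragma omp task shared(left) firstprivate(lo, mid)   /* fork */
    left = solve_sum(data, lo, mid);
    right = solve_sum(data, mid, hi);
    #pragma omp taskwait                  /* join */
    return left + right;                  /* merge */
}

int main(void)
{
    static int data[100000];
    for (int i = 0; i < 100000; i++) data[i] = 1;

    long result;
    #pragma omp parallel
    #pragma omp single                    /* one thread starts the recursion */
    result = solve_sum(data, 0, 100000);

    printf("sum = %ld\n", result);        /* expect 100000 */
    return 0;
}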
20 Task Parallelism using the Master-Worker framework

(Diagram: a master process maintains a task queue and a result queue; worker processes 1 and 2 repeatedly take tasks from the task queue and put their results into the result queue.)

Task Parallelism using work stealing

(Diagram: worker processes 1 and 2 each have their own task queue; an idle worker steals tasks from the queue of the other worker.)
21 Geometric decomposition

- For all applications relying on data decomposition
- All processes should apply the same operations to different data items
- Key elements:
  - data decomposition
  - exchange and update operations
  - data distribution and task scheduling

Algorithm structure - Pipeline pattern
- The calculation can be viewed in terms of data flowing through a sequence of stages
- The computation is performed on many data sets
- Compare to pipelining in processors on the instruction level

(Diagram: four pipeline stages; data sets C1 ... C6 enter stage 1 and move one stage further per time step, so once the pipeline is filled, all four stages work on different data sets simultaneously.)
22 Pipeline pattern (II)

- The amount of concurrency is limited by the number of stages of the pipeline
- The pattern works best if the amount of work performed by the various stages is roughly equal
- Filling the pipeline: some stages will be idle
- Draining the pipeline: some stages will be idle
- Non-linear pipeline: the pattern allows for different execution paths for different data items (e.g. a data item passes through stage 3a or stage 3b between stages 2 and 4)

Pipeline pattern (III)
Implementation:
- Each stage is typically assigned to a process/thread
- A stage might be a data-parallel task itself
- The computation per task has to be large enough to compensate for the communication costs between the tasks
A minimal MPI realization is sketched below.
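A minimal MPI sketch of a linear pipeline (my illustration, not from the slides): each rank is one stage; it receives an item from the previous rank, applies its stage of the work, and forwards the result to the next rank, so different ranks process different items at the same time once the pipeline is filled:

#include <mpi.h>
#include <stdio.h>

#define NUM_ITEMS 6

/* Hypothetical per-stage work: each stage just adds its rank. */
static double stage_work(double x, int rank) { return x + rank; }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int item = 0; item < NUM_ITEMS; item++) {
        double x;
        if (rank == 0)
            x = (double)item;              /* first stage generates the data */
        else
            MPI_Recv(&x, 1, MPI_DOUBLE, rank - 1, item,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        x = stage_work(x, rank);

        if (rank < size - 1)
            MPI_Send(&x, 1, MPI_DOUBLE, rank + 1, item, MPI_COMM_WORLD);
        else
            printf("item %d leaves the pipeline with value %f\n", item, x);
    }

    MPI_Finalize();
    return 0;
}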