for Task-Based Parallelism
1 for Task-Based Parallelism. School of EEECS, Queen's University of Belfast, July
2 Table of Contents 2 / 24
3 Thread-Based Programming
- Hard for the programmer to reason about thread interleavings and concurrent memory accesses
- Error-prone: races, deadlocks, livelocks, all hard to reproduce
- Task-based parallelism offers a higher level of abstraction
- Early models (e.g. OpenMP, Cilk) were based on explicit synchronization
- New models based on automatic dependence analysis are emerging:
  - runtime dependence analysis based on the programmer's annotation of memory footprints (OmpSs, SvS)
  - dependence analysis based on the compiler's inference of memory footprints (DPJ)
  - hybrid static-dynamic schemes (this talk)
3 / 24
4 Opportunity: FFT

    void FFT_1D(long N, long N_SQRT, long FFT_BS, long TR_BS,
                double _Complex A[N_SQRT][N_SQRT])
    {
        const size_t rowsz  = N_SQRT * 2 * sizeof(double);
        const size_t longsz = sizeof(long);

        // Loop 1: Transpose
        for (long I = 0; I < N_SQRT; I += TR_BS) {
            void *tile_ii = &A[I][I];
            #pragma task \
                in(N[longsz], N_SQRT[longsz], TR_BS[longsz]) \
                inout(tile_ii[rowsz][TR_BS][TR_BS * 2 * sizeof(double)])
            trsp_blk(N, N_SQRT, TR_BS, tile_ii);

            for (long J = I + TR_BS; J < N_SQRT; J += TR_BS) {
                void *tile_ij = &A[I][J];
                void *tile_ji = &A[J][I];
                #pragma task \
                    in(N[longsz], N_SQRT[longsz], TR_BS[longsz]) \
                    inout(tile_ij[rowsz][TR_BS][TR_BS * 2 * sizeof(double)], \
                          tile_ji[rowsz][TR_BS][TR_BS * 2 * sizeof(double)])
                trsp_swap(N, N_SQRT, TR_BS, tile_ij, tile_ji);
            }
        }

        // Loop 2: First FFT round
        for (long J = 0; J < N_SQRT; J += FFT_BS) {
            void *tile = &A[J][0];
            #pragma task \
                in(N_SQRT[longsz], FFT_BS[longsz]) \
                inout(tile[FFT_BS * rowsz])
            FFT1D(N_SQRT, FFT_BS, &A[J][0]);
        }
    }
4 / 24
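The kernels trsp_blk, trsp_swap and FFT1D are not shown on the slide. Below is a minimal sketch of what the two transpose helpers plausibly do; only the names and argument lists come from the listing above, the bodies are assumptions:

    /* Transpose the TR_BS x TR_BS diagonal tile in place.  N is unused
       here; it is kept only so the signature matches the call above. */
    static void trsp_blk(long N, long N_SQRT, long TR_BS, void *tile)
    {
        (void)N;
        double _Complex (*t)[N_SQRT] = tile;  /* rows stride over the full matrix */
        for (long i = 0; i < TR_BS; i++)
            for (long j = i + 1; j < TR_BS; j++) {
                double _Complex tmp = t[i][j];
                t[i][j] = t[j][i];
                t[j][i] = tmp;
            }
    }

    /* Swap two off-diagonal tiles, transposing each as it moves. */
    static void trsp_swap(long N, long N_SQRT, long TR_BS,
                          void *tile_ij, void *tile_ji)
    {
        (void)N;
        double _Complex (*a)[N_SQRT] = tile_ij;
        double _Complex (*b)[N_SQRT] = tile_ji;
        for (long i = 0; i < TR_BS; i++)
            for (long j = 0; j < TR_BS; j++) {
                double _Complex tmp = a[i][j];
                a[i][j] = b[j][i];
                b[j][i] = tmp;
            }
    }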
5 Table of Contents 5 / 24
6 Whole-Object Analysis
- Dependence analysis of whole task arguments (objects)
- Simple: use the object's base address for detecting dependencies (see the sketch below)
- Assumes objects are contiguous in memory
- Cannot analyze dependencies between blocks of multi-dimensional arrays
- Cannot analyze dependencies between arbitrary collections of objects, static or dynamic
- Cannot detect partial overlaps
- May require data layout transformations (memory copies)
- Cannot be used to write parallel code operating on dynamic data structures
6 / 24
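To see why base addresses are too coarse, consider this minimal sketch of base-address matching applied to two overlapping array tiles; the helper and tile sizes are illustrative, not from the talk:

    #include <stdbool.h>

    /* Whole-object analysis keyed on base addresses: two arguments are
       taken to conflict only if they start at the same address. */
    static bool objects_conflict(void *a, void *b) { return a == b; }

    double A[64][64];

    int main(void)
    {
        void *tile0 = &A[0][0];   /* a 32x32 tile at the origin       */
        void *tile1 = &A[16][16]; /* a 32x32 tile overlapping tile0   */

        /* Different base addresses, so the conflict is missed even
           though the tiles share a 16x16 region: a partial overlap
           that base-address matching cannot detect. */
        return objects_conflict(tile0, tile1);  /* 0: "no conflict" */
    }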
7 Block-Level Analysis
- Intuition: partition virtual memory into fixed-size blocks
- Analyze dependencies between blocks instead of program objects
- Treat objects as arbitrary collections of memory blocks
- Can analyze dependencies between multi-dimensional array blocks and between arbitrary collections of objects, static or dynamic
- Can detect partial overlaps
- Programmability: easier to write parallel code operating on static or dynamic structures, array-based or linked
- Overhead: objects composed of N blocks need N analysis checks (see the sketch below)
7 / 24
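As a concrete illustration, here is a minimal sketch of registering a task argument's footprint block by block; the block size, helper names and data structures are assumptions, not the actual runtime:

    #include <stdint.h>
    #include <stddef.h>

    #define BLOCK_BITS 12                       /* 4 KiB analysis blocks (assumed) */
    #define BLOCK_OF(p) ((uintptr_t)(p) >> BLOCK_BITS)

    enum access_mode { IN, OUT, INOUT };

    struct task;                                /* task closure (see slide 8) */
    struct block_desc;                          /* per-block waiter queue     */

    /* Hypothetical helpers: a lookup keyed by block index, and an
       enqueue onto the block's queue of waiting tasks. */
    struct block_desc *lookup_block(uintptr_t block_index);
    void enqueue_waiter(struct block_desc *bd, struct task *t, enum access_mode m);

    /* Register one task argument by touching every fixed-size block its
       footprint covers; an object spanning N blocks costs N checks. */
    void register_footprint(struct task *t, void *base, size_t size,
                            enum access_mode m)
    {
        uintptr_t first = BLOCK_OF(base);
        uintptr_t last  = BLOCK_OF((char *)base + size - 1);
        for (uintptr_t b = first; b <= last; b++)
            enqueue_waiter(lookup_block(b), t, m);
    }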
8 Elements of the Runtime
- Task descriptor containing the closure of the task: its code, memory footprint, and dependent tasks (both descriptor kinds are sketched below)
- Block descriptors containing a queue of tasks waiting to access the block
  - include the access mode (in, out, inout)
  - use versioning (renaming) on writes; versions reveal parallelism
- Efficient implementation:
  - custom memory allocator, with task metadata embedded in the allocator's own metadata
  - O(1) lookups for all metadata
  - small memory footprint for low cache pollution
8 / 24
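A rough C sketch of the two descriptor kinds described above; the field names are illustrative, not the actual implementation:

    enum acc_mode { ACC_IN, ACC_OUT, ACC_INOUT };

    /* Task descriptor: the task's closure plus dependence bookkeeping.
       Embedding this in the allocator's own metadata is what gives the
       O(1) lookups mentioned above. */
    struct task_desc {
        void (*code)(void *args);       /* task body                         */
        void *args;                     /* captured arguments (footprint)    */
        int   pending_blocks;           /* footprint blocks not yet available */
        struct task_desc *dependents;   /* tasks waiting on this one         */
    };

    /* One waiter on a block's queue. */
    struct block_waiter {
        struct task_desc    *task;
        enum acc_mode        mode;
        struct block_waiter *next;
    };

    /* Block descriptor: queue of tasks waiting to access the block.
       Writes create a new version (renaming), so independent writers
       can proceed in parallel on private copies. */
    struct block_desc {
        unsigned             version;   /* bumped on every write */
        struct block_waiter *queue;
    };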
9–12 Block-Level Analysis by Example: figure-only slides stepping through the runtime analysis as Tasks 1–4 are issued (figures not transcribed) 9–12 / 24
13 Table of Contents 13 / 24
14 Opportunity: Tasks with Independent Arguments

    int a = 1, b = 2, c = 3;
    int *alias = &b;

    void set(int *x, int *y)   { *x = *y;  }
    void addto(int *x, int *y) { *x += *y; }

    int main() {
        #pragma task inout(&b) safe(&c)
        addto(&b, &c);

        #pragma task safe(&a) in(alias)
        set(&a, alias);

        #pragma wait all

        #pragma task safe(&a) safe(&c)
        set(&a, &c);
    }

(Fig. 3: (a) the OpenMP-like source; (b) IDDI task graphs showing run-time task execution and dependencies; graphs not transcribed.)

- To execute this program preserving the sequential semantics, the second task needs to wait until the value of b is produced by the first task, i.e., there is a dependence on memory location b.
- Dependence analysis is unnecessary on a and c in the first two tasks, and on a and c in the third task, because of the barrier: the third task cannot be spawned until the first two have completed.
- Such annotations are tedious for the programmer to manage, and best handled by the compiler.
14 / 24
15 λ: A Simple Task-Based Parallel Language (Fig. 4)
- Simply-typed lambda calculus
- Extensions: dynamic memory allocation, task creation, barrier synchronization

Locations ρ ∈ L, CFG points φ ∈ F, tasks π ∈ T

    Values         v ::= n | () | λx.e
    Expressions    e ::= v | x | e; e | e e | ref e | !e | e := e | task(e1, ..., en) {e} | barrier
    Types          τ ::= int | unit | (τ, φ) → (τ, φ) | ref^ρ(τ)
    Constraints    C ::= C ∧ C | τ ⊆ τ | ρ ⊆ ρ | φ ⊆ φ | ρ ∈ π | π ∥ π | φ : Barrier | φ : π
    Environments   Γ ::= ∅ | Γ, x : τ

The goal of the static analysis is to discover independent task arguments like a and c above: inferring that an argument does not need runtime dependence checks requires verifying that no other task can access it concurrently.
15 / 24
16 λ: Type System and Constraint Generation (Fig. 4, continued)
- The type system generates a set of constraints, solved by rewriting rules
- Inference labels φ and ρ generate the control-flow and points-to graphs
- Constraint forms cover data flow (ρ1 ⊆ ρ2), control flow (φ1 ⊆ φ2), task memory footprints (ρ ∈ π), can-happen-in-parallel facts (π1 ∥ π2), and barriers (φ : Barrier)
- The analysis infers independence in three steps: first, compute aliasing information for all memory locations in the program; second, compute which tasks can run in parallel; third, check whether two parallel tasks make conflicting accesses to the same argument
15 / 24
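As a hedged illustration (the exact constraint notation is assumed from the grammar above), these are the constraints the analysis might generate for the three tasks of the Fig. 3 example, writing π1, π2, π3 for the tasks and ρa, ρb, ρc for the locations of a, b, c:

    ρb ∈ π1, ρc ∈ π1     (task 1: addto(&b, &c))
    ρa ∈ π2, ρb ∈ π2     (task 2: points-to analysis resolves alias to ρb)
    π1 ∥ π2              (no barrier between tasks 1 and 2)
    φ : Barrier          (the wait all pragma)
    ρa ∈ π3, ρc ∈ π3     (task 3: set(&a, &c), after the barrier)

Since ρb occurs in both π1 and π2 and π1 ∥ π2, accesses to b must be checked at runtime; ρa and ρc never occur in two tasks that can happen in parallel, so a and c can be declared safe, exactly as annotated in the example.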
17 Table of Contents 16 / 24
18 Details
- Source-to-source compiler using CIL
- OpenMP- and OmpSs-like syntax supported
- Strided arguments, multi-dimensional array tiles, and dynamic regions (a sketch follows below)
- Locksmith engine for points-to analysis
- Scalable runtime support: concurrent queues, NUMA-aware data allocation and task scheduling
- Provable low bounds and determinism
17 / 24
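As an example of the strided-argument support, a 2D tile annotation might look roughly as follows. This is a sketch only: the region syntax is modeled on the FFT listing earlier in the talk and the function names are hypothetical:

    #include <stddef.h>

    /* Scale an M x K tile of a row-major array with N columns.  The
       inout region names the tile's starting address, the stride
       between consecutive tile rows (rowsz), the number of rows (M)
       and the bytes per tile row. */
    void scale_tile(long N, long M, long K, void *tile);

    void submit_tile_task(long N, long M, long K,
                          double A[N][N], long i, long j)
    {
        const size_t rowsz = N * sizeof(double);
        void *tile = &A[i][j];
        #pragma task in(N[sizeof(long)], M[sizeof(long)], K[sizeof(long)]) \
            inout(tile[rowsz][M][K * sizeof(double)])
        scale_tile(N, M, K, tile);
    }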
19 Experimental Platform: Cray XE6 compute node on HECToR, 32 GB DRAM, NUMA 18 / 24
20–22 Experimental Results: bar charts of execution time and speedup on Black-Scholes, Ferret, Cholesky, GMRES, HPL, Jacobi, SMPSs-FFT and SPLASH-FFT (the charts, axis labels and legends did not survive transcription) 19–21 / 24
23 Table of Contents 22 / 24
24 Lessons and Challenges
- Lesson: task-based parallelism achieves a good balance between productivity and performance
- Lesson: automatic dependence analysis uncovers more parallelism and enables further optimization
- Lesson: a new opportunity for parallelizing compilers
- Challenge: overhead is still present in some codes with low operational intensity (e.g. stencils)
- Challenge: points-to analysis is still limited (even in simple cases)
- Challenge: there is no magic recipe; expose analysis options as user knobs or via autotuners
23 / 24
25 Acknowledgments
- Collaborators: Angelos Bilas, Angelos Papatriantafyllou, Polyvios Pratikakis, George Tzenakis, Hans Vandierendonck
- Funding: EU FP7 (ENCORE, TEXT, I-CORES)
- Infrastructure: HECToR (EPCC), BSC
24 / 24