Hybrid MPI + OpenMP Approach to Improve the Scalability of a Phase-Field-Crystal Code
1 Hybrid MPI + OpenMP Approach to Improve the Scalability of a Phase-Field-Crystal Code
Reuben D. Budiardja, reubendb@utk.edu
ECSS Symposium, March 19th, 2013
2 Project Background
- Project team (University of Michigan): Katsuyo Thornton (P.I.), Victor Chan
- Phase-field-crystal (PFC) formulation to study the dynamics of various metal systems
- Original in-house code written in C++
- Has been run on 2D and 3D systems
- Each time step solves multiple Helmholtz equations, performs a reduction, then takes an explicit time step
3 Solving the Helmholtz Equations
- The Helmholtz equation: ∇²u + k²u = 0
- Originally used GMRES with an Algebraic Multigrid (AMG) preconditioner from HYPRE
- In 3D the discretization matrix is large and may become indefinite, making it difficult to solve and requiring many iterations
- Poor weak-scaling results; prohibitively long run times in the indefinite-matrix case
- Memory requirements grow with the iteration count
4-5 Goals
- Scale to solve larger problems
  - Weak scaling: maintain the time-to-solution with an increasing number of processes and a fixed problem size per process
- Decrease the time-to-solution to 1 sec / time step
  - Strong scaling: decrease the time-to-solution with an increasing number of processes and a fixed total problem size
- Exploit other parallelism (with OpenMP?)
- Investigate a better preconditioner
- Try a different method (library?) to solve the equations
6-7 Complex Iterative Jacobi Solver
- Hadley, G. R., "A complex Jacobi iterative method for the indefinite Helmholtz equation," J. Comput. Phys. 203 (2005)
- Replaced HYPRE
- A modification of the standard Jacobi method: the new iterate H^(n+1) is computed from H^n using complex iteration parameters Δl_i, with the second-difference operators δ_i² evaluated by centered differences
- Easily parallelized, with low memory requirements
- Convergence rate depends on resolution, but is roughly constant from problem to problem, so a larger problem (at similar resolution) should not increase the iteration count
- A draft version was quickly implemented by the project team (Victor Chan) and tested
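To make the structure of such a sweep concrete, here is a minimal sketch of one weighted Jacobi iteration for a centered-difference discretization of the 1D Helmholtz equation. It is illustrative only: the single complex relaxation weight omega is an assumption standing in for the complex step sizes Δl_i of Hadley's method, which this sketch does not reproduce exactly.

#include <complex>
#include <vector>

using cplx = std::complex<double>;

// One Jacobi-style sweep for u'' + k^2 u = 0 on a uniform grid.
// A complex relaxation weight (placeholder for Hadley's complex Delta-l)
// damps the modes that make the indefinite operator diverge.
void jacobi_sweep(std::vector<cplx>& u, double k, double dx, cplx omega) {
    const double inv_dx2 = 1.0 / (dx * dx);
    const cplx diag = -2.0 * inv_dx2 + k * k;   // diagonal of the discrete operator
    std::vector<cplx> u_new(u);
    for (std::size_t i = 1; i + 1 < u.size(); ++i) {
        // Apply (delta^2 + k^2) to u at cell i with centered differences
        cplx r = (u[i - 1] - 2.0 * u[i] + u[i + 1]) * inv_dx2 + k * k * u[i];
        // Weighted Jacobi update: move u[i] against the residual
        u_new[i] = u[i] - omega * r / diag;
    }
    u.swap(u_new);
}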
8-9 Profiling the Code with CrayPAT
- Measure before you optimize
- Can use sampling or tracing
- Using CrayPAT is simple: load the module, re-compile, build the instrumented code, re-run
- CrayPAT can trace only specified groups, e.g. mpi, io, heap, fftw, ...

> module load perftools
> make clean
> make
> pat_build -g mpi pfc_jacobi.exe
> aprun -n 48 pfc_jacobi.exe+pat
> pat_report -o profile.txt <output_data>.xf

That Should Have Worked!
10-11 CrayPAT Workaround
- Use the API for fine-grained instrumentation
- Add PAT_region_{begin/end} calls to most subroutines
- After narrowing down to a couple of major subroutines, split the labels into computation and communication
- The communication subroutine eventually dominates at a certain MPI size

#include <pat_api.h>
...
void Complex_Jacobi( ... ) {
  ...
  int PAT_ID, ierr;

  PAT_ID = 41;
  ierr = PAT_region_begin(PAT_ID, "communication");
  MPI_Internal_Communicate( ... );
  MPI_Boundary_Communicate( ... );
  ierr = PAT_region_end(PAT_ID);

  PAT_ID = 42;
  ierr = PAT_region_begin(PAT_ID, "computation");
  for (int i = 1; i < size.l1 + 1; i++) {
    for (int j = 1; j < size.l2 + 1; j++) {
      for (int k = 1; k < size.l3 + 1; k++) {
        residual(i,j,k) = (1.0/d) * ( ... );
      }
    }
  }
  ierr = PAT_region_end(PAT_ID);
}
12 Cell Update and MPI Communication
(Diagram: processes r, r+1, r+2 exchanging ghost-cell values across iterations.)
- Step n: compute differences and update cell values
- Step n: communicate the updated values to neighboring ghost cells (using MPI_Sendrecv( ))
- Step n+1: compute differences and update cell values
- There is a fixed communication cost in every iteration. Can we hide it?
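As a sketch of this blocking exchange (the 1D layout, the array name u, and the ghost-cell convention are illustrative assumptions, not the project's actual data structures):

#include <mpi.h>
#include <vector>

// Blocking ghost-cell exchange on a 1D decomposition.
// u[0] and u[nx+1] are ghost cells; u[1..nx] are owned cells.
void exchange_ghosts(std::vector<double>& u, int nx, MPI_Comm cart_comm) {
    int left, right;
    MPI_Cart_shift(cart_comm, 0, 1, &left, &right);  // neighbor ranks (MPI_PROC_NULL at edges)

    // Send my first owned cell left; receive my right ghost from the right neighbor.
    MPI_Sendrecv(&u[1],      1, MPI_DOUBLE, left,  0,
                 &u[nx + 1], 1, MPI_DOUBLE, right, 0,
                 cart_comm, MPI_STATUS_IGNORE);
    // Send my last owned cell right; receive my left ghost from the left neighbor.
    MPI_Sendrecv(&u[nx],     1, MPI_DOUBLE, right, 1,
                 &u[0],      1, MPI_DOUBLE, left,  1,
                 cart_comm, MPI_STATUS_IGNORE);
}

Each call blocks until both the send and the receive complete, which is what puts the fixed communication cost on the critical path of every iteration.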
13-14 Hiding Communication Cost
(Diagram: processes r, r+1, r+2 overlapping communication with computation across iterations.)
- Step n: post MPI_Irecv( ) for the ghost cells; compute differences and update the cell values on the surface cells
- Step n: send the surface cell values with MPI_Isend( ); compute differences and update the cell values on the inner cells
- Step n+1: compute differences and update the cell values on the surface cells; post MPI_Irecv( ) for the ghost cells
- The communication cost is hidden as long as there is enough work to do while the communication happens
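A minimal sketch of this overlapped pattern, assuming the same 1D layout as before; update_surface_cells and update_inner_cells are hypothetical helpers standing in for the centered-difference updates:

#include <mpi.h>
#include <vector>

void update_surface_cells(std::vector<double>& u, int nx);  // hypothetical, defined elsewhere
void update_inner_cells(std::vector<double>& u, int nx);    // hypothetical, defined elsewhere

// One overlapped iteration: ghost-cell messages are in flight
// while the inner cells (which need no ghost data) are updated.
void jacobi_iteration_overlapped(std::vector<double>& u, int nx, MPI_Comm cart_comm) {
    int left, right;
    MPI_Cart_shift(cart_comm, 0, 1, &left, &right);

    MPI_Request reqs[4];
    // 1. Post receives into the ghost cells first.
    MPI_Irecv(&u[0],      1, MPI_DOUBLE, left,  0, cart_comm, &reqs[0]);
    MPI_Irecv(&u[nx + 1], 1, MPI_DOUBLE, right, 1, cart_comm, &reqs[1]);

    // 2. Update the surface cells, then send them while the inner update runs.
    update_surface_cells(u, nx);
    MPI_Isend(&u[1],  1, MPI_DOUBLE, left,  1, cart_comm, &reqs[2]);
    MPI_Isend(&u[nx], 1, MPI_DOUBLE, right, 0, cart_comm, &reqs[3]);

    // 3. Inner cells need no ghost data, so this computation overlaps the messages.
    update_inner_cells(u, nx);

    // 4. Ghost cells must be valid before the next iteration's surface update.
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
}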
15-16 Different Parallelisms with OpenMP
Data parallelism (see the sketch after this list):
- Parallelize over the cell updates for each Helmholtz equation
- Use the do/for directive
- Need to modify every loop with OpenMP directives
- May incur a higher synchronization cost
- Only the master thread communicates

Task parallelism:
- Parallelize over the solving of the Helmholtz equations
- Use the sections directive
- Only need to modify the main( ) routine that calls the solver
- May have load imbalance among threads
- Each thread communicates, which requires thread-safe MPI
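For contrast with the task-parallel code shown on slide 18, here is a minimal sketch of the data-parallel option; the flattened-array layout and all names are illustrative assumptions, not the project's code:

#include <cstddef>
#include <vector>

// Data parallelism over the cell-update loop of one Helmholtz solve.
// No MPI calls appear inside the parallel region, so only the master
// thread communicates and MPI_THREAD_FUNNELED would suffice here.
void update_cells(std::vector<double>& residual, const std::vector<double>& u,
                  int n1, int n2, int n3, double d) {
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < n1; i++) {
        for (int j = 0; j < n2; j++) {
            for (int k = 0; k < n3; k++) {
                std::size_t idx = (static_cast<std::size_t>(i) * n2 + j) * n3 + k;
                residual[idx] = (1.0 / d) * u[idx];  // stencil body elided in the source
            }
        }
    }
}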
17 MPI Threading Support
The MPI-2 standard defines four levels of threading support:

- Single: only one thread is allowed. Advantage: portable, since every MPI implementation supports it. Disadvantage: limited flexibility.
- Funneled: only the master thread makes MPI calls. Advantage: simpler to program. Disadvantage: the master thread can get overloaded.
- Serialized: all threads can make MPI calls, but only one at a time. Advantage: freedom to communicate. Disadvantage: risk of too much cross-communication.
- Multiple: no restrictions. Advantage: completely thread-safe. Disadvantage: limited availability.

Our OpenMP implementation requires the Multiple level of threading support. On Kraken, we need to set the environment variable MPICH_MAX_THREAD_SAFETY=multiple.
18 This Should Have Worked...

ierr = MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &MPI_Thread_Provided);
// check that MPI_Thread_Provided == MPI_THREAD_MULTIPLE

#pragma omp parallel private(alpha, beta, w1, fit, bvalue, tag)
{
  #pragma omp sections
  {
    #pragma omp section
    {
      tag = 1;
      Complex_Jacobi( ..., tag, MPI_CART_COMM);
    }
    #pragma omp section
    {
      tag = 2;
      Complex_Jacobi( ..., tag, MPI_CART_COMM);
    }
  }
}

But this produced an MPI error on Kraken...
19 Workaround for MPI Thread Issue: Create a Separate MPI Communicator for Each Thread

ierr = MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &MPI_Thread_Provided);
// check that MPI_Thread_Provided == MPI_THREAD_MULTIPLE
ierr = MPI_Comm_dup(MPI_CART_COMM, &MPI_CART_COMM_1);
ierr = MPI_Comm_dup(MPI_CART_COMM, &MPI_CART_COMM_2);

#pragma omp parallel private(alpha, beta, w1, fit, bvalue, tag)
{
  #pragma omp sections
  {
    #pragma omp section
    {
      Complex_Jacobi( ..., MPI_CART_COMM_1);
    }
    #pragma omp section
    {
      Complex_Jacobi( ..., MPI_CART_COMM_2);
    }
  }
}
20 Other Minor Optimizations
- Only check for convergence (which requires an MPI reduction) every tenth iteration
- On Kraken, sacrificing one core (5 threads per socket) may in fact be beneficial (no NUMA effects, more memory bandwidth)
- -O2 instead of -O3 (-O3 is not always better)
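A minimal sketch of the amortized convergence test (function and parameter names are illustrative assumptions): the global reduction runs only when the iteration count is a multiple of ten.

#include <mpi.h>
#include <cmath>

// Perform the (expensive) global reduction only every 10th iteration.
// local_sq_norm is this rank's squared residual norm for the current iterate.
bool converged(int iter, double local_sq_norm, double tol, MPI_Comm comm) {
    if (iter % 10 != 0) return false;   // skip the MPI reduction 9 times out of 10
    double global_sq_norm = 0.0;
    MPI_Allreduce(&local_sq_norm, &global_sq_norm, 1, MPI_DOUBLE, MPI_SUM, comm);
    return std::sqrt(global_sq_norm) < tol;
}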
21 Results
(Figure: strong-scaling and efficiency plots for the system, annotated with a ~8X speedup.)
22 Conclusion
- HYPRE was replaced by an in-house complex iterative Jacobi solver
- MPI parallelization with non-blocking communication for the domain decomposition
- OpenMP task parallelism across the multiple Helmholtz equations
- The result is a scalable PFC code (weak and strong scaling), a major improvement over the original code
Future work:
- Simulate larger systems
- Implement OpenMP data parallelism (nested inside the task parallelism)
Lessons learned:
- Need a better collaborative framework for ECSS (e.g. a wiki, code revision control such as git or subversion)
- Estimating a reasonable goal can be difficult
- A custom solver may give you better flexibility, but it has a development cost
- Bugs can come from unexpected places