CAF versus MPI - Applicability of Coarray Fortran to a Flow Solver
Slide 1: CAF versus MPI - Applicability of Coarray Fortran to a Flow Solver
Manuel Hasert, Harald Klimach, Sabine Roller (m.hasert@grs-sim.de)
German Research School for Simulation Sciences GmbH, Aachen / RWTH Aachen
Applied Supercomputing in Engineering
Slide 2: Motivation
- We develop several CFD codes at our institute: Fortran 95/2003, parallelized with MPI (90%) and other paradigms (10%)
- How do we have to design our codes to perform well on future architectures without too much porting effort (or even a re-design)?
- This talk compares an MPI implementation against several CAF implementations with respect to performance and code complexity
Slide 3: Coarrays in Fortran (CAF)
- Language-inherent parallel extension to Fortran
- Partitioned Global Address Space (PGAS): direct access to remote memory locations is possible
- The compiler handles the network access
- GET or PUT access to remote locations:

    remote_copy = value1[2]    ! GET: read value1 from image 2
    value2[3]   = localvalue   ! PUT: write localvalue to value2 on image 3
    sync all                   ! barrier: synchronize all images

- Here, only GET access is investigated
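The snippet above is only a fragment; a minimal, self-contained sketch of the same GET/PUT/barrier pattern (variable names are illustrative, not taken from the talk) could look like this:

    program caf_minimal
      implicit none
      integer :: value1[*]             ! coarray: one instance per image
      integer :: value2[*]
      integer :: remote_copy, localvalue

      value1     = this_image()        ! every image defines its own copy
      value2     = 0
      localvalue = 10 * this_image()
      sync all                         ! value1 is now defined on all images

      if( this_image() == 1 .and. num_images() >= 3 ) then
        remote_copy = value1[2]        ! GET: read value1 from image 2
        value2[3]   = localvalue       ! PUT: write to value2 on image 3
      endif
      sync all                         ! barrier: remote accesses are complete

      if( this_image() == 3 ) print *, 'value2 on image 3:', value2
    end program caf_minimal

On a Cray system such a program is compiled with the Cray Fortran compiler (ftn); the coarray accesses are translated into network operations by the compiler, with no library calls in the source.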
Slide 4: Algorithmic background
The implementation of the Lattice Boltzmann Method
Slide 5: The Lattice Boltzmann Code LBC
- Solves weakly compressible fluid problems on a Cartesian, uniform grid
- 19 DOFs (degrees of freedom) per cell: f_i, i = 1,..,19
- Compute kernel: stream-collide algorithm; nearest-neighbor information is required
- Parallel execution: divide the domain into sub-cubes and use ghost cells (simple approach)

Main loop:

    do itime = 1, maxtime
      call communicate( fin, fout )
      call compute( fin, fout )
    enddo

Compute kernel (element loop):

    do k=1,nz
      do j=1,ny
        do i=1,nx
          ! streaming from offsets cx, cy, cz
          ftmp(:) = fin( :, i-cx(:), j-cy(:), k-cz(:) )
          ! collide
          fout(:,i,j,k) = ftmp(:) - (1-omega)*(ftmp(:) - feq(:))
        enddo
      enddo
    enddo
Slide 6: Data Alignment in Memory
- The Cartesian grid structure with 19 DOFs per cell fits naturally to a four-dimensional array: fin( l, i, j, k ), fout( l, i, j, k )
  - i, j, k: spatial position of the fluid cell
  - l = 1..19: DOFs of each cell
- Varying the position of the DOF index l changes the stride of an access to all 19 DOFs of a cell:
  - DOF-first: stride 1
  - DOF-last: stride = number of cells
- Considerations for parallel execution: data is collected at the border planes, and the smallest chunks occur in the x-direction:
  - DOF-first: 19 DOFs per cell
  - DOF-last: 1 entry
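As a minimal illustration (array names hypothetical), the two layouts differ only in the position of the DOF index l, but this changes the distance in memory between the 19 DOFs of one cell from 1 element to the total number of cells:

    integer, parameter :: nx = 64, ny = 64, nz = 64
    real :: f_dof_first( 19, nx, ny, nz )   ! DOF-first: fin(l,i,j,k), the 19 DOFs
                                            ! of a cell are contiguous (stride 1)
    real :: f_dof_last ( nx, ny, nz, 19 )   ! DOF-last: fin(i,j,k,l), consecutive
                                            ! DOFs lie nx*ny*nz elements apart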
Slide 7: Possible Parallelizations
From explicit to implicit implementations
Slide 8: Starting with two different parallelization schemes
1. Explicit communication: a separate communication routine fills ghost cells in a sender-receiver approach

    do itime = 1, maxtime
      call communicate( fin, fout )
      call compute( fin, fout )
    enddo

2. Implicit communication: remote data is accessed within the compute routine; no ghost cells

    do itime = 1, maxtime
      call compute( fin, fout )
    enddo

(Figure: ghost cells and fluid cells holding the same values; fluid cells performing coarray access vs. cells with purely local access.)
Five different implementations lie between these two schemes.
Slide 9: 1 - Traditional Parallelization Approach
- Domain decomposition with ghost cells: no computation takes place on these cells, they only provide data for the fluid cells
- MPI: irecv/isend, waitall, communication buffers
- A simple, non-optimized MPI communication is used to keep the variants comparable

Main loop:

    do itime = 1, maxtime
      call communicate( fin, fout )
      call compute( fin, fout )
    enddo

Compute kernel (element loop):

    do k=1,nz
      do j=1,ny
        do i=1,nx
          ! streaming from offsets cx, cy, cz
          ftmp(:) = fin( :, i-cx(:), j-cy(:), k-cz(:) )
          ! collide
          fout(:,i,j,k) = ftmp(:) - (1-omega)*(ftmp(:) - feq(:))
        enddo
      enddo
    enddo
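The talk does not show the communicate routine itself; the following is a hedged sketch of what a simple, non-optimized buffered exchange along the x-direction might look like (the routine name, buffer names, and neighbor ranks west/east are assumptions, not the talk's actual code):

    subroutine communicate_x( fin, nx, ny, nz, west, east, comm )
      use mpi
      implicit none
      integer, intent(in)    :: nx, ny, nz, west, east, comm
      real,    intent(inout) :: fin( 19, 0:nx+1, 0:ny+1, 0:nz+1 )
      real, allocatable :: sbuf_w(:), sbuf_e(:), rbuf_w(:), rbuf_e(:)
      integer :: n, req(4), ierr

      n = 19 * (ny+2) * (nz+2)            ! one yz border plane incl. halo rims
      allocate( sbuf_w(n), sbuf_e(n), rbuf_w(n), rbuf_e(n) )

      ! pack the border planes into contiguous buffers
      sbuf_w = reshape( fin(:, 1,:,:), [n] )
      sbuf_e = reshape( fin(:,nx,:,:), [n] )

      call MPI_Irecv( rbuf_w, n, MPI_REAL, west, 0, comm, req(1), ierr )
      call MPI_Irecv( rbuf_e, n, MPI_REAL, east, 1, comm, req(2), ierr )
      call MPI_Isend( sbuf_e, n, MPI_REAL, east, 0, comm, req(3), ierr )
      call MPI_Isend( sbuf_w, n, MPI_REAL, west, 1, comm, req(4), ierr )
      call MPI_Waitall( 4, req, MPI_STATUSES_IGNORE, ierr )

      ! unpack the received planes into the ghost layers
      fin(:,   0,:,:) = reshape( rbuf_w, [19, ny+2, nz+2] )
      fin(:,nx+1,:,:) = reshape( rbuf_e, [19, ny+2, nz+2] )
    end subroutine communicate_x

The y- and z-directions would be exchanged analogously; at physical boundaries the neighbor rank can be MPI_PROC_NULL, which turns the corresponding calls into no-ops.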
Slide 10: 2, 3 - Mimic the MPI Parallelization with Coarrays
- Computation kernel exactly like MPI; the communication is done with coarray GETs
- 2: Explicit buffered CAF: replace only the communication itself by coarrays, keep the communication buffers
- 3: Explicit direct CAF: omit the buffers and access the remote memory locations directly; the stride becomes important (see the sketch below)

Main loop:

    do itime = 1, maxtime
      call communicate( fin, fout )
      call compute( fin, fout )
    enddo
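For the explicit direct variant (3), the buffer packing disappears: the ghost layer is filled by a coarray GET straight from the neighbor image. A minimal sketch, under the assumption that fin is declared as a coarray and west/east hold the neighbor image numbers (names are illustrative):

    subroutine communicate_x_caf( fin, nx, ny, nz, west, east )
      implicit none
      integer, intent(in)    :: nx, ny, nz, west, east
      real,    intent(inout) :: fin( 19, 0:nx+1, 0:ny+1, 0:nz+1 )[*]

      sync all   ! neighbors must have finished updating their border planes

      ! GET the neighbors' border planes into the local ghost layers;
      ! this is where the stride of the chosen memory layout matters
      fin(:,   0,:,:) = fin(:,nx,:,:)[west]
      fin(:,nx+1,:,:) = fin(:, 1,:,:)[east]

      sync all   ! ghost layers are complete before compute() starts
    end subroutine communicate_x_caf

Because the dummy argument is a coarray, the routine needs an explicit interface (e.g. by placing it in a module); the two sync all statements provide the data-validity guarantee that MPI message passing gives implicitly.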
Slide 11: Parallelization following the Coarray Concept (naive CAF)
- Communication inside the kernel's element loop: remote neighbor and location identification, remote data accessed directly in the streaming step
- All fluid cells are accessed as coarrays

Main loop:

    do itime = 1, maxtime
      call compute
    enddo

Compute kernel (naive Coarray Fortran implementation):

    do k=1,nz
      do j=1,ny
        do i=1,nx
          ! streaming step (get values from neighbors)
          do l=1,nnod   ! loop over densities in each cell
            xpos  = mod( crd(1)*bnd(1) + i - cx(l,1) - 1, bnd(1) ) + 1
            xp(1) = ( crd(1)*bnd(1) + i - cx(l,1) - 1 ) / bnd(1) + 1
            ...           ! analogous for the other directions
            if( xp(1) .lt. 1 ) then
              ...         ! correct for physical boundaries
            endif
            nbp = image_index( caf_cart_comm, xp(1:3) )   ! get image number
            ftmp(l) = fin( l, xpos, ypos, zpos )[nbp]     ! coarray GET
          enddo
          ...             ! collision
        enddo
      enddo
    enddo
Slide 12: Improved Coarray Implementation (segmented CAF)
- Segmented approach: the inner fluid cells are processed by a loop without any coarray access; only the border cells use the kernel with coarray access
- This preserves the efficiency on purely local cells

Main loop:

    do itime = 1, maxtime
      call compute
    enddo

Compute kernel (segmented Coarray Fortran implementation; compute_caf uses coarray access on every cell, compute_purelylocal is the serial kernel):

    call compute_caf(fout,fin,1,nx,1,ny,1,1)
    do k=2,nz-1
      call compute_caf(fout,fin,1,nx,1,1,k,k)
      do j=2,ny-1
        call compute_caf(fout,fin,1,1,j,j,k,k)
        call compute_purelylocal(fout,fin,j,k)
        call compute_caf(fout,fin,nx,nx,j,j,k,k)
      end do
      call compute_caf(fout,fin,1,nx,ny,ny,k,k)
    end do
    call compute_caf(fout,fin,1,nx,1,ny,nz,nz)
Slide 13: Tested Implementations
We now compare the performance of these implementations:

    #  Name        Paradigm  Separate communication  Buffer usage  Halos
    1  MPI         MPI       x                       x             x
    2  expl buf    CAF       x                       x             x
    3  expl dir    CAF       x                                     x
    4  impl segm   CAF
    5  impl naive  CAF
Slide 14: Performance Results
Serial performance and scalability
Slide 15: Used Hardware
- Criteria: hardware and software support for PGAS
- Gemini has been optimized for high throughput of small messages, and potentially many small data packages have to be transferred
- Comparison between Seastar (XT5m) and Gemini (XE6):

                        XT5m            XE6
    CPU                 AMD Barcelona   AMD Magny-Cours 6128
    Cores               4               8
    Clock               2.4 GHz         2.0 GHz
    L2 cache            512 KB          512 KB
    L3 cache            2 MB            12 MB
    Sockets per node    2               2
    Memory              16 GB           32 GB
    ASIC                Seastar         Gemini
    Compiler version
    MPI version         Cray MPT        Cray MPT
Slide 16: Serial Performance and the Impact of the Memory Layout
- DOF-first: domain-size-dependent performance, especially when running in cache; the smallest communication chunk is 19 datums
- DOF-last: weaker impact of the cache, but cache thrashing; better for large problems; the smallest communication chunk is only 1 datum, which has a severe impact on the Seastar
Slide 17: Strong Scaling Results
- Total fluid cells: 200³ = 8 million
- Three-dimensional domain decomposition with the same number of processes in each direction (2³, 3³, ...)
- Drastic improvements from Seastar to Gemini
- (Plot annotations: 9³ cells/process and 17³ cells/process)
Slide 18: Weak Scaling Results
- 9³ = 729 cells per process: latency region
- One-dimensional domain decomposition for p < 8
- MPI outperforms CAF for large problem sizes
- Improvement from Seastar to Gemini
Slide 19: Programming Complexity
- Compute kernel:
  - MPI: same as the serial code
  - CAF: identification of the neighbor process and the remote address happens inside the kernel, which is slow and obscures the code; but there is potential for compiler optimizations
- Data structure and parallel infrastructure (see the sketch after this list):
  - MPI: derived-type buffers are flexible and fast; neighbor and position identification via the Cartesian communicator
  - CAF: regular coarrays are fast but inflexible; derived-type coarrays are flexible but slow on the XT
- Data validity:
  - MPI: given after the communication
  - CAF: has to be ensured by sync statements
- Restrictions (CAF): sync is slow; coarrays must have the same size on all images
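A sketch of the coarray data-structure trade-off named above (type and size names are illustrative): a regular coarray must have the same shape on every image, whereas a derived-type coarray with an allocatable component permits per-image sizes, at the price of slower remote access on the XT:

    program coarray_datastructures
      implicit none
      type :: field_t
        real, allocatable :: f(:,:,:,:)
      end type field_t

      real :: f_regular( 19, 8, 8, 8 )[*]   ! regular coarray: fast, but the
                                            ! shape is fixed and identical on
                                            ! all images
      type(field_t) :: state[*]             ! derived-type coarray: flexible,
                                            ! but slow on the XT

      ! each image may allocate a different local size
      allocate( state%f( 19, 8 + this_image(), 8, 8 ) )
      sync all

      ! remote access goes through the component: state[p]%f(...)
      if( this_image() == 1 .and. num_images() > 1 ) &
        print *, 'x-cells on image 2:', size( state[2]%f, 2 )
    end program coarray_datastructures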
Slide 20: Conclusion and Outlook
- Code complexity: a first parallelization is fairly easy to implement, but achieving higher performance requires complex code, and the kernel code gets obscured
- Performance: MPI yields the best performance in all tested cases; coarrays might become beneficial for very large numbers of cores
- Result: at the current stage there are more drawbacks than advantages for complex scientific codes
- Coarrays can be used on Gemini without severe implications; but although some tasks are hidden from the user, other tasks arise, and no detailed performance tuning is possible for the user
Slide 21: Thank you!