Peta-Scale Simulations with the HPC Software Framework walberla: Massively Parallel AMR for the Lattice Boltzmann Method

1 Peta-Scale Simulations with the HPC Software Framework walberla: Massively Parallel AMR for the Lattice Boltzmann Method SIAM PP 2016, Paris April 15, 2016 Florian Schornbaum, Christian Godenschwager, Martin Bauer, Ulrich Rüde Chair for System Simulation Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany

2 Outline
  Introduction: The walberla Simulation Framework, An Example Using the Lattice Boltzmann Method
  Parallelization Concepts: Domain Partitioning & Data Handling
  Dynamic Domain Repartitioning: AMR Challenges, Distributed Repartitioning Procedure, Dynamic Load Balancing, Benchmarks / Performance Evaluation
  Conclusion

3 Introduction: The walberla Simulation Framework, An Example Using the Lattice Boltzmann Method

4 Introduction
  walberla (widely applicable Lattice Boltzmann framework from Erlangen):
  - main focus on CFD (computational fluid dynamics) simulations based on the lattice Boltzmann method (LBM); now also implementations of other methods, e.g., phase field
  - at its very core designed as an HPC software framework: scales from laptops to current petascale supercomputers; largest simulation: 1,835,008 processes (IBM Blue Gene/Q JUQUEEN, Jülich); hybrid parallelization: MPI + OpenMP; vectorization of compute kernels
  - written in C++(11), growing Python interface
  - support for different platforms (Linux, Windows) and compilers (GCC, Intel XE, Visual Studio, llvm/clang, IBM XL)
  - automated build and test system

5 Introduction: AMR for the LBM, example (vocal fold phantom geometry): DNS (direct numerical simulation), Reynolds number: 2500 / D3Q27 TRT, 24,054,… to …,611,120 fluid cells / 1-5 grid levels; without refinement: 311 times more memory and 701 times the workload

6 Parallelization Concepts Domain Partitioning & Data Handling

7 Parallelization Concepts (figure: simulation domain only in the marked region): domain partitioning into blocks, static block-level refinement, empty blocks are discarded

8 Parallelization Concepts (figure: simulation domain only in the marked region): domain partitioning into blocks, octree partitioning within every block of the initial partitioning (→ forest of octrees), static block-level refinement, empty blocks are discarded

9 Parallelization Concepts: static block-level refinement (→ forest of octrees), static load balancing, allocation of block data (→ grids). Load balancing can be based on either space-filling curves (Morton or Hilbert order) using the underlying forest of octrees, or on graph partitioning (METIS, …), whatever best fits the needs of the simulation.
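As a concrete illustration of curve-based balancing, here is a minimal sketch, assuming integer block coordinates per level; the type and function names are hypothetical and not the walberla API. It computes a Morton index by bit interleaving, sorts the blocks along the curve, and assigns contiguous curve segments to processes.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical block descriptor: integer coordinates of a block on its level.
struct Block {
    std::uint32_t x, y, z;
    std::uint64_t mortonIndex = 0;
    int           targetProcess = -1;
};

// Interleave the bits of x, y, z (z-order / Morton order), 21 bits per coordinate.
std::uint64_t morton3D(std::uint32_t x, std::uint32_t y, std::uint32_t z) {
    auto spread = [](std::uint64_t v) {          // insert two 0-bits after each bit
        v &= 0x1fffffULL;
        v = (v | v << 32) & 0x1f00000000ffffULL;
        v = (v | v << 16) & 0x1f0000ff0000ffULL;
        v = (v | v <<  8) & 0x100f00f00f00f00fULL;
        v = (v | v <<  4) & 0x10c30c30c30c30c3ULL;
        v = (v | v <<  2) & 0x1249249249249249ULL;
        return v;
    };
    return spread(x) | (spread(y) << 1) | (spread(z) << 2);
}

// Assign blocks to 'numProcesses' ranks by cutting the space-filling curve
// into contiguous pieces of (nearly) equal size.
void balanceAlongCurve(std::vector<Block>& blocks, int numProcesses) {
    for (auto& b : blocks) b.mortonIndex = morton3D(b.x, b.y, b.z);
    std::sort(blocks.begin(), blocks.end(),
              [](const Block& a, const Block& b) { return a.mortonIndex < b.mortonIndex; });
    const std::size_t perProcess = (blocks.size() + numProcesses - 1) / numProcesses;
    for (std::size_t i = 0; i < blocks.size(); ++i)
        blocks[i].targetProcess = static_cast<int>(i / perProcess);
}
```

A Hilbert curve would only change the index computation; the cutting and assignment step stays the same.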

10 Parallelization Concepts: static block-level refinement (→ forest of octrees), static load balancing, compact (KiB/MiB) binary MPI IO to/from disk, allocation of block data (→ grids); separation of domain partitioning from simulation (optional)

11 Parallelization Concepts: static block-level refinement (→ forest of octrees), static load balancing, compact (KiB/MiB) binary MPI IO to/from disk, allocation of block data (→ grids); data & data structure are stored perfectly distributed, no replication of (meta)data! Separation of domain partitioning from simulation (optional).
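The "compact binary MPI IO" step could look roughly like the following sketch, which writes each process's block description to a single shared file with collective MPI IO; the file name and the byte layout are assumptions for illustration, not the actual walberla file format.

```cpp
#include <mpi.h>

#include <cstdint>
#include <vector>

// Sketch: every process writes its own compact, already-serialized block data
// ('localBytes') to one shared file, so no process ever holds global metadata.
void writePartition(const std::vector<std::uint8_t>& localBytes, MPI_Comm comm) {
    int rank = 0;
    MPI_Comm_rank(comm, &rank);

    // Compute this process's byte offset as the sum of all preceding sizes.
    std::uint64_t mySize = localBytes.size();
    std::uint64_t offset = 0;
    MPI_Exscan(&mySize, &offset, 1, MPI_UINT64_T, MPI_SUM, comm);
    if (rank == 0) offset = 0;   // MPI_Exscan leaves rank 0's result undefined

    MPI_File fh;
    MPI_File_open(comm, "partition.bin", MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, static_cast<MPI_Offset>(offset),
                          localBytes.data(), static_cast<int>(localBytes.size()),
                          MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
}
```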

12 Parallelization Concepts: all parts customizable via callback functions in order to adapt to the underlying simulation: 1) discarding of blocks, 2) (iterative) refinement of blocks, 3) load balancing, 4) block data allocation; support for an arbitrary number of block data items (each of arbitrary type). Pipeline as before: static block-level refinement (→ forest of octrees), static load balancing, compact (KiB/MiB) binary MPI IO, allocation of block data (→ grids), separation of domain partitioning from simulation (optional).
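A minimal sketch of what such a callback interface can look like; all type and member names here are illustrative placeholders, not the actual walberla classes.

```cpp
#include <functional>

// Hypothetical per-block descriptor handed to the callbacks.
struct BlockInfo { /* block ID, bounding box, level, ... */ };

// The four customization points listed on the slide, modeled as std::function members.
struct SetupCallbacks {
    std::function<bool(const BlockInfo&)>      discardBlock;      // 1) discarding of blocks
    std::function<bool(const BlockInfo&)>      refineBlock;       // 2) (iterative) refinement
    std::function<int(const BlockInfo&, int)>  assignProcess;     // 3) load balancing
    std::function<void(BlockInfo&)>            allocateBlockData; // 4) block data allocation
};

// Example registration: the simulation adapts the generic pipeline via lambdas.
inline SetupCallbacks makeExampleCallbacks() {
    SetupCallbacks cb;
    cb.discardBlock      = [](const BlockInfo&)        { return false; }; // keep every block
    cb.refineBlock       = [](const BlockInfo&)        { return false; }; // no extra refinement
    cb.assignProcess     = [](const BlockInfo&, int)   { return 0; };     // trivial: all on rank 0
    cb.allocateBlockData = [](BlockInfo&)              { /* allocate the LBM grid here */ };
    return cb;
}
```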

13 Parallelization Concepts: different views on / representations of the domain partitioning: (a) 2:1 balanced grid (used for the LBM on refined grids); (b) distributed graph: nodes = blocks, edges explicitly stored as <block ID, process rank> pairs; (c) forest of octrees: octrees are not explicitly stored, but implicitly defined via block IDs
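The remark that the octrees are only implicitly defined can be made concrete with a small sketch: if a block ID encodes the sequence of octant indices from the root, parent and child IDs follow from simple bit operations. The encoding below is an assumption for illustration, not the exact walberla block-ID layout.

```cpp
#include <cstdint>
#include <vector>

// Illustrative block ID: a marker bit followed by 3 bits per octree level,
// each triple being the child octant (0..7) chosen on the way down from the root.
using BlockId = std::uint64_t;

constexpr BlockId rootId() { return 1; }                 // marker bit only

constexpr BlockId childId(BlockId parent, unsigned octant) {
    return (parent << 3) | (octant & 7u);                // append one octant triple
}

constexpr BlockId parentId(BlockId id) { return id >> 3; }

inline unsigned level(BlockId id) {                      // number of appended triples
    unsigned lvl = 0;
    while (id > 1) { id >>= 3; ++lvl; }
    return lvl;
}

// All eight children of a block: the octree is never stored explicitly,
// it is fully reconstructible from the block IDs alone.
inline std::vector<BlockId> children(BlockId id) {
    std::vector<BlockId> c;
    for (unsigned o = 0; o < 8; ++o) c.push_back(childId(id, o));
    return c;
}
```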

14 Parallelization Concepts (same domain-partitioning views as on slide 13): our parallel implementation [1] of local grid refinement for the LBM, based on [2], shows excellent performance: simulations with in total close to one trillion cells, close to one trillion cells updated per second (with 1.8 million threads), strong scaling: more than 1000 time steps / sec. (→ 1 ms per time step).
[1] F. Schornbaum and U. Rüde, Massively Parallel Algorithms for the Lattice Boltzmann Method on Non-Uniform Grids, SIAM Journal on Scientific Computing (accepted for publication).
[2] M. Rohde, D. Kandhai, J. J. Derksen, and H. E. A. van den Akker, A generic, mass conservative local grid refinement technique for lattice-Boltzmann schemes, International Journal for Numerical Methods in Fluids.
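For the distributed-graph view, a sketch of a purely local storage scheme: every process keeps only its own blocks and their adjacency as <block ID, owner rank> pairs, so no process holds the full graph. This layout is illustrative only, not the walberla classes.

```cpp
#include <cstdint>
#include <vector>

using BlockId = std::uint64_t;

// One edge of the distributed graph: the neighbouring block and who owns it.
struct NeighborEdge {
    BlockId id;      // ID of the neighbouring block
    int     rank;    // process that currently owns it
};

// A block owned by this process, with its explicitly stored adjacency.
struct LocalBlock {
    BlockId                   id;
    std::vector<NeighborEdge> neighbors;
};

// The complete knowledge of one process: only its own blocks, nothing global.
struct LocalPartition {
    std::vector<LocalBlock> blocks;
};
```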

15 Dynamic Domain Repartitioning: AMR Challenges, Distributed Repartitioning Procedure, Dynamic Load Balancing, Benchmarks / Performance Evaluation

16 AMR Challenges: challenges because of the block-structured partitioning: only entire blocks split/merge (only few blocks per process) → sudden increase/decrease of memory consumption by a factor of 8 (in 3D) (→ octree partitioning & same number of cells for every block) → "split first, balance afterwards" probably won't work; for the LBM, all levels must be load-balanced separately; for good scalability, the entire pipeline should rely on perfectly distributed algorithms and data structures

17 AMR Challenges: challenges because of the block-structured partitioning: only entire blocks split/merge (only few blocks per process) → sudden increase/decrease of memory consumption by a factor of 8 (in 3D) (→ octree partitioning & same number of cells for every block) → "split first, balance afterwards" probably won't work; for the LBM, all levels must be load-balanced separately; for good scalability, the entire pipeline should rely on perfectly distributed algorithms and data structures → no replication of (meta)data of any sort!

18 Dynamic Domain Repartitioning (figure: different colors (green/blue) illustrate the process assignment; labels: split, merge, forced split to maintain 2:1 balance): 1) split/merge decision: callback function to determine which blocks must split and which blocks may merge; 2) skeleton data structure creation: lightweight blocks (few KiB) with no actual data, 2:1 balance is automatically preserved

19 Dynamic Domain Repartitioning (figure labels: split, merge, forced split to maintain 2:1 balance): 1) split/merge decision: callback function to determine which blocks must split and which blocks may merge; 2) skeleton data structure creation: lightweight blocks (few KiB) with no actual data, 2:1 balance is automatically preserved
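A hedged sketch of what the split/merge decision callback from step 1 could look like for an LBM simulation; the scalar per-block indicator and the thresholds are assumptions made for illustration, since the real callback is entirely application-defined.

```cpp
#include <cstdint>

// Possible outcomes of the per-block decision in step 1 of the refresh procedure.
enum class BlockAction { KeepAsIs, MustSplit, MayMerge };

// Hypothetical per-block state evaluated by the callback.
struct BlockState {
    double        indicator;   // e.g., maximum velocity gradient inside the block
    std::uint8_t  level;       // current refinement level
};

BlockAction splitMergeDecision(const BlockState& b,
                               double refineThreshold,
                               double coarsenThreshold,
                               std::uint8_t maxLevel) {
    if (b.indicator > refineThreshold && b.level < maxLevel)
        return BlockAction::MustSplit;      // block must be split into 8 children
    if (b.indicator < coarsenThreshold && b.level > 0)
        return BlockAction::MayMerge;       // block may merge with its 7 siblings
    return BlockAction::KeepAsIs;
}
```

Note that merging is only a "may": all eight siblings have to agree, and the forced splits that maintain 2:1 balance can still override the decision.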

20 Dynamic Domain Repartitioning: 3) load balancing: callback function to decide to which process blocks must migrate (skeleton blocks actually move to this process)

21 Dynamic Domain Repartitioning: 3) load balancing: lightweight skeleton blocks allow multiple migration steps to different processes (→ enables balancing based on diffusion)

22 Dynamic Domain Repartitioning: 3) load balancing: links between skeleton blocks and the corresponding real blocks are kept intact when skeleton blocks migrate

23 Dynamic Domain Repartitioning: 3) load balancing: for global load balancing algorithms, balance is achieved in one step → skeleton blocks immediately migrate to their final processes

24 Dynamic Domain Repartitioning (figure labels: refine, coarsen): 4) data migration: links between skeleton blocks and the corresponding real blocks are used to perform the actual data migration (includes refinement and coarsening of block data)

25 Dynamic Domain Repartitioning (figure labels: refine, coarsen): 4) data migration, implementation for grid data: coarsening → senders coarsen data before sending to the target process; refinement → receivers refine on the target process(es)
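A minimal sketch of the "senders coarsen before sending" step for a cell-centred field, assuming a block of n^3 cells with q values per cell (e.g., q = 19 LBM distributions) that is restricted by averaging each 2x2x2 group of fine cells. This is an illustrative restriction, not the actual walberla transfer code.

```cpp
#include <cstddef>
#include <vector>

// Restrict a fine block of n*n*n cells (q values each) to (n/2)^3 coarse cells
// by averaging every 2x2x2 group; the returned buffer is what would be sent.
std::vector<double> coarsenBlock(const std::vector<double>& fine,
                                 std::size_t n, std::size_t q) {
    const std::size_t nc = n / 2;                       // coarse cells per dimension
    std::vector<double> coarse(nc * nc * nc * q, 0.0);
    auto fIdx = [&](std::size_t x, std::size_t y, std::size_t z, std::size_t k) {
        return ((z * n + y) * n + x) * q + k;
    };
    auto cIdx = [&](std::size_t x, std::size_t y, std::size_t z, std::size_t k) {
        return ((z * nc + y) * nc + x) * q + k;
    };
    for (std::size_t z = 0; z < nc; ++z)
        for (std::size_t y = 0; y < nc; ++y)
            for (std::size_t x = 0; x < nc; ++x)
                for (std::size_t k = 0; k < q; ++k) {
                    double sum = 0.0;
                    for (std::size_t dz = 0; dz < 2; ++dz)   // average the 8 fine cells
                        for (std::size_t dy = 0; dy < 2; ++dy)
                            for (std::size_t dx = 0; dx < 2; ++dx)
                                sum += fine[fIdx(2 * x + dx, 2 * y + dy, 2 * z + dz, k)];
                    coarse[cIdx(x, y, z, k)] = sum / 8.0;
                }
    return coarse;
}
```

The mirror operation (refinement on the receiver) would interpolate each received coarse cell into its eight fine children after the data has arrived.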

26 Dynamic Domain Repartitioning: key parts customizable via callback functions in order to adapt to the underlying simulation: the decision which blocks split/merge, the dynamic load balancing, and the data migration; implementation for grid data (figure labels: refine, coarsen): coarsening → senders coarsen data before sending to the target process; refinement → receivers refine on the target process(es)

27 Dynamic Load Balancing
1) space-filling curves (Morton or Hilbert): every process needs global knowledge (→ all-gather) → scaling issues (even if it's just a few bytes from every process)
2) load balancing based on diffusion: iterative procedure (= repeat the following multiple times): communication with neighboring processes only, calculate a flow for every process-process connection, use this flow as a guideline in order to decide where blocks need to migrate to achieve balance → runtime & memory independent of the number of processes (true in practice? → benchmarks); useful extension (benefits outweigh the costs): all-reduce to check for early abort & to adapt the flow
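A minimal sketch of one diffusion iteration as outlined above, assuming each process already knows its neighbouring ranks: loads are exchanged with neighbours only, and a damped load difference is used as the flow per connection. How the flow is then translated into concrete block migrations is application-specific and omitted; the function name and the damping factor are assumptions for illustration.

```cpp
#include <mpi.h>

#include <cstddef>
#include <vector>

// One diffusion step: exchange the local load with all neighbour processes and
// derive a target "flow" of work across each connection (no global operation).
std::vector<double> diffusionFlow(double myLoad,
                                  const std::vector<int>& neighbourRanks,
                                  MPI_Comm comm,
                                  double damping = 0.5) {
    const std::size_t n = neighbourRanks.size();
    if (n == 0) return {};

    std::vector<double> neighbourLoad(n, 0.0);
    std::vector<MPI_Request> requests(2 * n);

    // communication with neighbouring processes only
    for (std::size_t i = 0; i < n; ++i) {
        MPI_Irecv(&neighbourLoad[i], 1, MPI_DOUBLE, neighbourRanks[i], 0, comm, &requests[i]);
        MPI_Isend(&myLoad,           1, MPI_DOUBLE, neighbourRanks[i], 0, comm, &requests[n + i]);
    }
    MPI_Waitall(static_cast<int>(2 * n), requests.data(), MPI_STATUSES_IGNORE);

    // positive flow[i]: this process should push work towards neighbour i
    std::vector<double> flow(n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        flow[i] = damping * (myLoad - neighbourLoad[i]);
    return flow;
}
```

Repeating this step several times, and optionally adding the inexpensive all-reduce mentioned on the slide for early abort, drives the per-process loads toward the average without any process ever seeing the global block distribution.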

28 LBM AMR - Performance. Benchmark environments: JUQUEEN (5.0 PFLOP/s): Blue Gene/Q, 459K cores, 1 GB/core, compiler: IBM XL / IBM MPI; SuperMUC (2.9 PFLOP/s): Intel Xeon, 147K cores, 2 GB/core, compiler: Intel XE / IBM MPI. Benchmark (LBM D3Q19 TRT): lid-driven cavity, 4 grid levels (figure: domain partitioning)

29-31 LBM AMR - Performance (same benchmark environments as slide 28). Benchmark (LBM D3Q19 TRT): figures illustrating the refresh steps: coarsen, refine, re-establish 2:1 balance

32 LBM AMR - Performance (same benchmark environments as slide 28). Benchmark (LBM D3Q19 TRT): during this refresh process, all cells on the finest level are coarsened and the same number of fine cells is created by splitting coarser cells → 72 % of all cells change their size

33 LBM AMR - Performance (same benchmark environments as slide 28). Benchmark (LBM D3Q19 TRT): avg. blocks/process (max. blocks/process):
  level   initially   after refresh   after load balance
  …       … (1)       … (1)           … (1)
  …       … (1)       … (9)           … (1)
  …       … (2)       … (11)          … (4)
  …       … (4)       … (16)          … (4)

34 LBM AMR - Performance, SuperMUC, space-filling curve: Morton (plot: time in seconds required for the entire refresh cycle (uphold 2:1 balance, dynamic load balancing, split/merge blocks, migrate data) vs. number of cores, for several cell-per-core counts)

35 LBM AMR - Performance, SuperMUC, space-filling curve: Morton (plot: refresh-cycle time in seconds vs. number of cores; curves correspond to total problem sizes of … billion, 64 billion, and 33 billion cells)

36 LBM AMR - Performance, SuperMUC, diffusion load balancing (plot: refresh-cycle time in seconds vs. number of cores; curves for … billion, 64 billion, and 33 billion cells): time almost independent of the number of processes!

37 LBM AMR - Performance, JUQUEEN, space-filling curve: Morton (plot: refresh-cycle time in seconds vs. number of cores, up to 458,752 cores; curves for … billion, … billion, and 14 billion cells); hybrid MPI+OpenMP version with SMP: 1 process ↔ 2 cores / 8 threads

38 LBM AMR - Performance, JUQUEEN, diffusion load balancing (plot: refresh-cycle time in seconds vs. number of cores, up to 458,752 cores; curves for … billion, … billion, and 14 billion cells): time almost independent of the number of processes!

39 LBM AMR - Performance, JUQUEEN, diffusion load balancing (plot: number of diffusion iterations until the load is perfectly balanced vs. number of cores, up to 458,752 cores)

40 LBM AMR - Performance: impact on performance / overhead of the entire dynamic repartitioning procedure? It depends on the number of cells per core, on the actual runtime of the compute kernels (D3Q19 vs. D3Q27, additional force models, etc.), and on how often dynamic repartitioning happens. Previous lid-driven cavity benchmark: overhead of 1 to 3 (diffusion) or 1.5 to 10 (curve) time steps. In practice, a lot of time is spent just to determine whether or not the grid must be adapted, i.e., whether or not refinement must take place → often this is the entire overhead of AMR.
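To illustrate why this check itself is not free, here is a sketch of a per-block adaptation check that sweeps over all cells of a block to compute a refinement indicator. The concrete criterion (a simple jump of the velocity magnitude between neighbouring cells along x) is an assumption for illustration only; the point is that the check touches every cell of every block, even in steps where nothing ends up being adapted.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Compute a scalar refinement indicator for one block of n*n*n cells, given the
// per-cell velocity magnitude 'velMag'; the caller compares the result against
// its refine/coarsen thresholds (see the split/merge decision sketch above).
double refinementIndicator(const std::vector<double>& velMag, std::size_t n) {
    auto idx = [&](std::size_t x, std::size_t y, std::size_t z) {
        return (z * n + y) * n + x;
    };
    double maxJump = 0.0;
    for (std::size_t z = 0; z < n; ++z)
        for (std::size_t y = 0; y < n; ++y)
            for (std::size_t x = 0; x + 1 < n; ++x)
                maxJump = std::max(maxJump,
                                   std::fabs(velMag[idx(x + 1, y, z)] - velMag[idx(x, y, z)]));
    return maxJump;
}
```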

41 LBM AMR - Performance: AMR for the LBM, example (vocal fold phantom geometry): DNS (direct numerical simulation), Reynolds number: 2500 / D3Q27 TRT, 24,054,… to …,611,120 fluid cells / 1-5 grid levels; processes: 3584 (on SuperMUC phase 2); runtime: c. 24 h (3 × c. 8 h)

42 LBM AMR - Performance: AMR for the LBM, example (vocal fold phantom geometry): load balancer: space-filling curve (Hilbert order); time steps: 180,000 / 2,880,000 (finest grid); refresh cycles: 537 (→ refresh every ≈ 335 time steps); without refinement: 311 times more memory and 701 times the workload

43 Conclusion

44 Conclusion & Outlook: the approach for massively parallel grid repartitioning, using a block-structured domain partitioning and employing a lightweight copy of the data structure during dynamic load balancing, is paying off and working extremely well: we can handle on the order of 10^12 cells (> 10^13 unknowns) with 10^7 blocks and 1.83 million threads

45 Conclusion & Outlook: the approach for massively parallel grid repartitioning, using a block-structured domain partitioning and employing a lightweight copy of the data structure during dynamic load balancing, is paying off and working extremely well: we can handle on the order of 10^12 cells (> 10^13 unknowns) with 10^7 blocks and 1.83 million threads. Outlook: resilience (using ULFM): store redundant, in-memory snapshots → if one or multiple process(es) fail → restore the data on different processes → perform dynamic repartitioning → continue :-)

46 THANK YOU FOR YOUR ATTENTION!

47 THANK YOU FOR YOUR ATTENTION! QUESTIONS?
