Introduction to HDF5


1 Introduction to parallel HDF5 Maison de la Simulation, Saclay, 6-7 March 2017, Parallel filesystems and parallel IO libraries

2 Evaluation form Please do not forget to fill in the evaluation form at

3 Outline HDF5 in the context of Input/Output (IO) HDF5 Application Programming Interface (API) Playing with dataspaces Hands-on session

4 Hardware/Software stack (from top to bottom)
- Application: data structures
- High-level I/O library
- I/O library / Standard library
- System: operating system, file system
- Hardware: hard drive

5 High-level I/O libraries The purpose of high-level I/O libraries is to provide the developer with a higher level of abstraction for manipulating computational modeling objects: meshes of various complexity (rectilinear, curvilinear, unstructured...), functions discretized on such meshes, materials... Until now, these libraries have mainly been used in the context of visualization.

6 Existing libraries
- Silo: wide range of objects, built on top of HDF5, native format for VisIt
- Exodus: focused on unstructured meshes and finite element representations, built on top of NetCDF, output format of famous/intensively used codes
- eXtensible Data Model and Format (XDMF)

7 I/O libraries Purpose of I/O libraries: efficient I/O, portable binary files, a higher level of abstraction for the developer. Two main existing libraries: Hierarchical Data Format (HDF5) and Network Common Data Form (NetCDF).

8 HDF5 library
- HDF5 file: a container for HDF5 groups and datasets
- HDF5 group: a grouping structure containing instances of zero or more groups or datasets
- HDF5 dataset: a multidimensional array of data elements
An HDF5 dataset (multidimensional array) has:
- Name
- Datatype (atomic, NATIVE, compound)
- Dataspace (rank, sizes, max sizes)
- Storage layout (contiguous, compact, chunked)

9 HDF5 High Level APIs
- Dimension Scale (H5DS): enables attaching dimension scales to datasets
- Lite (H5LT): enables writing a simple dataset in one call
- Image (H5IM): enables writing images in one call
- Table (H5TB): hides the compound types needed for writing tables
- Packet Table (H5PT): similar to H5TB but without record insertion/deletion; supports variable-length records...

10 HDF5 low level API
- H5F: file manipulation routines
- H5G: group manipulation routines
- H5S: dataspace manipulation routines
- H5D: dataset manipulation routines
- ...
Just have a look at the outstanding on-line reference manual for HDF5!

11 C order versus Fortran order

/* C language */
#define NX 4   /* illustrative size */
#define NY 3
int x, y;
int f[NY][NX];

for (y = 0; y < NY; y++)
  for (x = 0; x < NX; x++)
    f[y][x] = x + y;

! Fortran language
integer, parameter :: NX=4   ! illustrative size
integer, parameter :: NY=3
integer :: x, y
integer, dimension(NX,NY) :: f

do y = 1, NY
  do x = 1, NX
    f(x,y) = (x-1) + (y-1)
  enddo
enddo

The memory mapping is identical; only the language semantics differ!

12 HDF5 first example

#define NX 4   /* illustrative size */
#define NY 3
#define RANK 2

int main(void) {
    hid_t file, dataset, dataspace;
    hsize_t dimsf[2];
    herr_t status;
    int data[NY][NX];

    init(data);
    file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    dimsf[0] = NY;
    dimsf[1] = NX;

13 HDF5 first example cont.

    dataspace = H5Screate_simple(RANK, dimsf, NULL);
    dataset = H5Dcreate(file, "IntArray", H5T_NATIVE_INT, dataspace,
                        H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    status = H5Dwrite(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
                      H5P_DEFAULT, data);
    H5Sclose(dataspace);
    H5Dclose(dataset);
    H5Fclose(file);
    return 0;
}

14 HDF5 high level example cont.

    status = H5LTmake_dataset_int(file, "IntArray", RANK, dimsf, data);
    H5Fclose(file);
    return 0;
}

15 Variable C types

hid_t file, dataset, dataspace;
hsize_t dimsf[2];
herr_t status;

- hid_t: handle for any HDF5 object (file, group, dataset, dataspace, datatype...)
- hsize_t: C type used for the number of elements of a dataset (in each dimension)
- herr_t: C type used for the error status returned by HDF5 functions

16 File creation

file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

- "example.h5": file name
- H5F_ACC_TRUNC: create the file, truncating it if it already exists
- H5P_DEFAULT (1st): file creation property list
- H5P_DEFAULT (2nd): file access property list (needed for MPI-IO)

17 Dataspace creation

dimsf[0] = NY;
dimsf[1] = NX;
dataspace = H5Screate_simple(RANK, dimsf, NULL);

- RANK: dataset dimensionality
- dimsf: size of the dataspace in each dimension
- NULL: the maximum size of the dataset is fixed to its current size

18 Dataset creation

dataset = H5Dcreate(file, "IntArray", H5T_NATIVE_INT, dataspace,
                    H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

- file: HDF5 object in which to create the dataset; must be a file or a group
- "IntArray": dataset name
- H5T_NATIVE_INT: type of the data the dataset will contain
- dataspace: shape of the dataset
- H5P_DEFAULT: default options for the property lists

19 Datatypes Predefined datatypes: created by HDF5. Derived datatypes: created or derived from the predefined datatypes. There are two kinds of predefined datatypes:
- STANDARD: define standard ways of representing data. Ex: H5T_IEEE_F32BE is the IEEE representation of a 32-bit floating point number in big endian.
- NATIVE: aliases to standard datatypes according to the platform where the program is compiled. Ex: on an Intel-based PC, H5T_NATIVE_INT is aliased to the standard predefined type H5T_STD_I32LE.

20 Datatypes cont. A datatype can be:
- ATOMIC: cannot be decomposed into smaller datatype units at the API level. Ex: integer
- COMPOSITE: an aggregation of one or more datatypes. Ex: compound datatype, array, enumeration

21 Dataset writing

status = H5Dwrite(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
                  H5P_DEFAULT, data);

- dataset: HDF5 object representing the dataset to write
- H5T_NATIVE_INT: type of the data in memory
- H5S_ALL (1st): dataspace specifying the portion of memory that needs to be read (in order to be written)
- H5S_ALL (2nd): dataspace specifying the portion of the file dataset that needs to be written
- H5P_DEFAULT: default option for the transfer property list (needed for MPI-IO)
- data: buffer containing the data to write

22 Closing HDF5 objects

H5Sclose(dataspace);
H5Dclose(dataset);
H5Fclose(file);

Opened/created HDF5 objects must be closed.

23 Some comments

status = H5LTmake_dataset_int(file, "IntArray", RANK, dimsf, data);
H5Fclose(file);

This example is almost a fwrite, but:
- The generated file is portable
- The generated file can be accessed with the HDF5 tools
- Attributes can be added to datasets or groups
- The type of the data can be fixed
- The storage layout can be modified
- A portion of the dataset can be written
- ...

24 Concept of start, stride, count, block Considering an n-dimensional array, start, stride, count and block are arrays of size n that describe a subset of the original array:
- start: starting location of the hyperslab (default 0)
- stride: number of elements separating each element or block to be selected (default 1)
- count: number of elements or blocks to select along each dimension
- block: size of the block (default 1)

25 Conventions for the examples We consider:
- A 2D array f[Ny][Nx]
- Dimension x is the dimension contiguous in memory
- Graphically, the x dimension is represented horizontally
- The C language convention is used for indexing the dimensions: dimension y is index 0, dimension x is index 1

26 Graphical representation
(figure: grid with dimension x horizontal, dimension y vertical, showing the memory order)

int start[2], stride[2], count[2], block[2];
start[0] = 0;  start[1] = 0;
stride[0] = 1; stride[1] = 1;
block[0] = 1;  block[1] = 1;

27 Illustration for the count parameter
(figure: rows y=0, y=1, y=2 selected)

count[0] = 3; count[1] = …;

28 Illustration for the start parameter
(figure: selection shifted to start at y=1, x=2)

start[0] = 1; start[1] = 2;
count[0] = 3; count[1] = …;

29 Illustration for the stride parameter
(figure: selected rows separated by the stride along y)

start[0] = 1; start[1] = 2;
count[0] = 3; count[1] = …;
stride[0] = 3; stride[1] = 1;

30 Illustration for the stride parameter cont.
(figure: selected elements separated by the stride along both dimensions)

start[0] = 1; start[1] = 2;
count[0] = 3; count[1] = 2;
stride[0] = 3; stride[1] = 3;

31 Illustration for the block parameter
(figure: three 2x2 blocks along y, two along x)

start[0] = 1; start[1] = 2;
count[0] = 3; count[1] = 2;
stride[0] = 3; stride[1] = 3;
block[0] = 2; block[1] = 2;

32 Exercise 1 Please draw the elements selected by the start, stride, count, block set below
(figure: empty grid, dimension x horizontal, dimension y vertical)

start[0] = 2; start[1] = 1;
count[0] = …; count[1] = …;

33 Solution 1
(figure: grid with the selected elements marked)

start[0] = 2; start[1] = 1;
count[0] = …; count[1] = …;

34 Exercise 2 Please draw the elements selected by the start, stride, count, block set below
(figure: empty grid, dimension x horizontal, dimension y vertical)

start[0] = 2; start[1] = 1;
count[0] = 1; count[1] = 1;
block[0] = …; block[1] = …;

35 Solution 2
(figure: grid with the selected elements marked)

start[0] = 2; start[1] = 1;
count[0] = 1; count[1] = 1;
block[0] = …; block[1] = …;

36 Exercise 3 Please draw the elements selected by the start, stride, count, block set below
(figure: empty grid, dimension x horizontal, dimension y vertical)

start[0] = 2; start[1] = 1;
count[0] = 3; count[1] = 2;
stride[0] = 2; stride[1] = 2;
block[0] = 2; block[1] = 2;

37 Solution 3
(figure: grid with the selected elements marked)

start[0] = 2; start[1] = 1;
count[0] = 3; count[1] = 2;
stride[0] = 2; stride[1] = 2;
block[0] = 2; block[1] = 2;

38 What is a dataspace? Dataspace objects:
- Null dataspaces
- Scalar dataspaces
- Simple dataspaces: rank (number of dimensions), current size, maximum size (can be unlimited)
Dataspaces come into play:
- for performing partial IO
- to describe the shape of an HDF5 dataset

39 What is a dataspace for?
Figure: Access a sub-set of data with a hyperslab.
Figure: Build complex regions with hyperslab unions.
(Figures taken from the HDF5 website.)

40 What is a dataspace for?
Figure: Use hyperslabs to gather or scatter data.
(Figure taken from the HDF5 website.)

41 How to play with dataspaces

hid_t space_id;
hsize_t dims[2], start[2], count[2];
hsize_t *stride = NULL, *block = NULL;

dims[0] = ny;  dims[1] = nx;
start[0] = 2;  start[1] = 1;
count[0] = …;  count[1] = …;

space_id = H5Screate_simple(2, dims, NULL);
status = H5Sselect_hyperslab(space_id, H5S_SELECT_SET, start,
                             stride, count, block);

42 How to play with dataspaces
- space_id is modified by H5Sselect_hyperslab, so it must already exist
- The start, stride, count, block arrays must be at least the same size as the rank of the space_id dataspace
- H5S_SELECT_SET replaces the existing selection with the parameters from this call. Other operations: H5S_SELECT_OR, AND, XOR, NOTB and NOTA
- The stride and block arrays are treated as all 1s if NULL is passed

43 Using dataspaces during a partial IO

status = H5Sselect_hyperslab(space_id_mem, H5S_SELECT_SET,
                             start_mem, stride_mem, count_mem, block_mem);
status = H5Sselect_hyperslab(space_id_disk, H5S_SELECT_SET,
                             start_disk, stride_disk, count_disk, block_disk);
status = H5Dwrite(dataset, H5T_NATIVE_INT, space_id_mem,
                  space_id_disk, H5P_DEFAULT, data);

The two dataspaces can describe non-contiguous data and can be of different dimensionality, but the number of selected elements must match.

44 Parallel HDF5 implementation
(figure: a 2D domain decomposed among MPI ranks 0 to 3)

45 Parallel HDF5 implementation

INTEGER(HSIZE_T) :: array_size(2), array_subsize(2), array_start(2)
INTEGER(HID_T)   :: plist_id1, plist_id2, file_id, filespace, dset_id, memspace

array_size(1) = S
array_size(2) = S
array_subsize(1) = local_nx
array_subsize(2) = local_ny
array_start(1) = proc_x * array_subsize(1)
array_start(2) = proc_y * array_subsize(2)

! allocate and fill the tab array
CALL h5open_f(ierr)
CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id1, ierr)
CALL h5pset_fapl_mpio_f(plist_id1, MPI_COMM_WORLD, MPI_INFO_NULL, ierr)
CALL h5fcreate_f('res.h5', H5F_ACC_TRUNC_F, file_id, ierr, access_prp = plist_id1)

! Set collective call
CALL h5pcreate_f(H5P_DATASET_XFER_F, plist_id2, ierr)
CALL h5pset_dxpl_mpio_f(plist_id2, H5FD_MPIO_COLLECTIVE_F, ierr)

CALL h5screate_simple_f(2, array_size, filespace, ierr)
CALL h5screate_simple_f(2, array_subsize, memspace, ierr)
CALL h5dcreate_f(file_id, 'pi_array', H5T_NATIVE_REAL, filespace, dset_id, ierr)
CALL h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, array_start, array_subsize, ierr)
CALL h5dwrite_f(dset_id, H5T_NATIVE_REAL, tab, array_subsize, ierr, memspace, filespace, plist_id2)

! Close HDF5 objects

46 Hands on parallel HDF5
1 tar xvfz /gpfslocal/pub/patc-io/hdf5.tgz
2 cp -r HDF5/phdf5-1 phdf5
3 cd phdf5
- Examine the source code, compile and run it
- Examine the output files out*.h5 with the h5ls and h5dump command line tools
- Modify the program to write all the different datasets in a single file
- Modify the program to write all the data in a single dataset in a single file
- Modify the program to write the single dataset with MPI-IO

47 Hands on HDF5
1 cp -r HDF5/hdf5-1 hdf5
2 cd hdf5
3 Examine the source code, compile and run it
- Examine the output file example.h5 with the h5ls and h5dump command line tools
- Modify the program to write an additional dataset containing 3D data of size Nx*Ny*Nz
- Modify the program to write a single 2D slice of the 3D array instead of the whole array
- Modify the program to write the previous 2D slice as an extension of the original 2D dataset
- Implement parallel HDF5 output for the Jacobi solver instead of the MPI-IO one

48 Parallel HDF5 for the Jacobi solver
1 Implement parallel HDF5 output for the Jacobi solver instead of the MPI-IO one
(figure: MPI ranks 0 to 3 writing to a single file)


More information

CSC2/454 Programming Languages Design and Implementation Records and Arrays

CSC2/454 Programming Languages Design and Implementation Records and Arrays CSC2/454 Programming Languages Design and Implementation Records and Arrays Sreepathi Pai November 6, 2017 URCS Outline Scalar and Composite Types Arrays Records and Variants Outline Scalar and Composite

More information

The Message Passing Interface (MPI) TMA4280 Introduction to Supercomputing

The Message Passing Interface (MPI) TMA4280 Introduction to Supercomputing The Message Passing Interface (MPI) TMA4280 Introduction to Supercomputing NTNU, IMF January 16. 2017 1 Parallelism Decompose the execution into several tasks according to the work to be done: Function/Task

More information

Optimization of MPI Applications Rolf Rabenseifner

Optimization of MPI Applications Rolf Rabenseifner Optimization of MPI Applications Rolf Rabenseifner University of Stuttgart High-Performance Computing-Center Stuttgart (HLRS) www.hlrs.de Optimization of MPI Applications Slide 1 Optimization and Standardization

More information

MPI Casestudy: Parallel Image Processing

MPI Casestudy: Parallel Image Processing MPI Casestudy: Parallel Image Processing David Henty 1 Introduction The aim of this exercise is to write a complete MPI parallel program that does a very basic form of image processing. We will start by

More information

Benchmarking the CGNS I/O performance

Benchmarking the CGNS I/O performance 46th AIAA Aerospace Sciences Meeting and Exhibit 7-10 January 2008, Reno, Nevada AIAA 2008-479 Benchmarking the CGNS I/O performance Thomas Hauser I. Introduction Linux clusters can provide a viable and

More information

API and Usage of libhio on XC-40 Systems

API and Usage of libhio on XC-40 Systems API and Usage of libhio on XC-40 Systems May 24, 2018 Nathan Hjelm Cray Users Group May 24, 2018 Los Alamos National Laboratory LA-UR-18-24513 5/24/2018 1 Outline Background HIO Design HIO API HIO Configuration

More information

I/O in scientific applications

I/O in scientific applications COSC 4397 Parallel I/O (II) Access patterns Spring 2010 I/O in scientific applications Different classes of I/O operations Required I/O: reading input data and writing final results Checkpointing: data

More information

The netcdf- 4 data model and format. Russ Rew, UCAR Unidata NetCDF Workshop 25 October 2012

The netcdf- 4 data model and format. Russ Rew, UCAR Unidata NetCDF Workshop 25 October 2012 The netcdf- 4 data model and format Russ Rew, UCAR Unidata NetCDF Workshop 25 October 2012 NetCDF data models, formats, APIs Data models for scienbfic data and metadata - classic: simplest model - - dimensions,

More information

Parallel I/O Performance Study and Optimizations with HDF5, A Scientific Data Package

Parallel I/O Performance Study and Optimizations with HDF5, A Scientific Data Package Parallel I/O Performance Study and Optimizations with HDF5, A Scientific Data Package MuQun Yang, Christian Chilan, Albert Cheng, Quincey Koziol, Mike Folk, Leon Arber The HDF Group Champaign, IL 61820

More information

Index. object lifetimes, and ownership, use after change by an alias errors, use after drop errors, BTreeMap, 309

Index. object lifetimes, and ownership, use after change by an alias errors, use after drop errors, BTreeMap, 309 A Arithmetic operation floating-point arithmetic, 11 12 integer numbers, 9 11 Arrays, 97 copying, 59 60 creation, 48 elements, 48 empty arrays and vectors, 57 58 executable program, 49 expressions, 48

More information

Metadata for Component Optimisation

Metadata for Component Optimisation Metadata for Component Optimisation Olav Beckmann, Paul H J Kelly and John Darlington ob3@doc.ic.ac.uk Department of Computing, Imperial College London 80 Queen s Gate, London SW7 2BZ United Kingdom Metadata

More information

Parallel I/O: Not Your Job. Rob Latham and Rob Ross Mathematics and Computer Science Division Argonne National Laboratory {robl,

Parallel I/O: Not Your Job. Rob Latham and Rob Ross Mathematics and Computer Science Division Argonne National Laboratory {robl, Parallel I/O: Not Your Job Rob Latham and Rob Ross Mathematics and Computer Science Division Argonne National Laboratory {robl, rross}@mcs.anl.gov Computational Science! Use of computer simulation as a

More information

Parallel Programming Using Basic MPI. Presented by Timothy H. Kaiser, Ph.D. San Diego Supercomputer Center

Parallel Programming Using Basic MPI. Presented by Timothy H. Kaiser, Ph.D. San Diego Supercomputer Center 05 Parallel Programming Using Basic MPI Presented by Timothy H. Kaiser, Ph.D. San Diego Supercomputer Center Talk Overview Background on MPI Documentation Hello world in MPI Basic communications Simple

More information

DESY IT Seminar HDF5, Nexus, and what it is all about

DESY IT Seminar HDF5, Nexus, and what it is all about DESY IT Seminar HDF5, Nexus, and what it is all about Eugen Wintersberger HDF5 and Nexus DESY IT, 27.05.2013 Why should we care about Nexus and HDF5? Current state: Data is stored either as ASCII file

More information

A FRAMEWORK ARCHITECTURE FOR SHARED FILE POINTER OPERATIONS IN OPEN MPI

A FRAMEWORK ARCHITECTURE FOR SHARED FILE POINTER OPERATIONS IN OPEN MPI A FRAMEWORK ARCHITECTURE FOR SHARED FILE POINTER OPERATIONS IN OPEN MPI A Thesis Presented to the Faculty of the Department of Computer Science University of Houston In Partial Fulfillment of the Requirements

More information

Introduction to MPI, the Message Passing Library

Introduction to MPI, the Message Passing Library Chapter 3, p. 1/57 Basics of Basic Messages -To-? Introduction to, the Message Passing Library School of Engineering Sciences Computations for Large-Scale Problems I Chapter 3, p. 2/57 Outline Basics of

More information

15-440: Recitation 8

15-440: Recitation 8 15-440: Recitation 8 School of Computer Science Carnegie Mellon University, Qatar Fall 2013 Date: Oct 31, 2013 I- Intended Learning Outcome (ILO): The ILO of this recitation is: Apply parallel programs

More information

Holland Computing Center Kickstart MPI Intro

Holland Computing Center Kickstart MPI Intro Holland Computing Center Kickstart 2016 MPI Intro Message Passing Interface (MPI) MPI is a specification for message passing library that is standardized by MPI Forum Multiple vendor-specific implementations:

More information

0. Overview of this standard Design entities and configurations... 5

0. Overview of this standard Design entities and configurations... 5 Contents 0. Overview of this standard... 1 0.1 Intent and scope of this standard... 1 0.2 Structure and terminology of this standard... 1 0.2.1 Syntactic description... 2 0.2.2 Semantic description...

More information

ARMCI: A Portable Aggregate Remote Memory Copy Interface

ARMCI: A Portable Aggregate Remote Memory Copy Interface ARMCI: A Portable Aggregate Remote Memory Copy Interface Motivation and Background Jarek Nieplocha and Jialin Ju Pacific Northwest National Laboratory March 25, 1999 A portable lightweight remote memory

More information

HDF5 C++ User s Notes

HDF5 C++ User s Notes HDF5 C++ User s Notes This User s Note provides an overview of the structure, the availability, and the limitations of the C++ API of HDF5. It lists the classes and member functions included in the API

More information

Cornell Theory Center. Discussion: MPI Collective Communication I. Table of Contents. 1. Introduction

Cornell Theory Center. Discussion: MPI Collective Communication I. Table of Contents. 1. Introduction 1 of 18 11/1/2006 3:59 PM Cornell Theory Center Discussion: MPI Collective Communication I This is the in-depth discussion layer of a two-part module. For an explanation of the layers and how to navigate

More information

NetCDF-4: : Software Implementing an Enhanced Data Model for the Geosciences

NetCDF-4: : Software Implementing an Enhanced Data Model for the Geosciences NetCDF-4: : Software Implementing an Enhanced Data Model for the Geosciences Russ Rew, Ed Hartnett, and John Caron UCAR Unidata Program, Boulder 2006-01-31 Acknowledgments This work was supported by the

More information

Lustre Parallel Filesystem Best Practices

Lustre Parallel Filesystem Best Practices Lustre Parallel Filesystem Best Practices George Markomanolis Computational Scientist KAUST Supercomputing Laboratory georgios.markomanolis@kaust.edu.sa 7 November 2017 Outline Introduction to Parallel

More information

MPI: A Message-Passing Interface Standard

MPI: A Message-Passing Interface Standard MPI: A Message-Passing Interface Standard Version 2.1 Message Passing Interface Forum June 23, 2008 Contents Acknowledgments xvl1 1 Introduction to MPI 1 1.1 Overview and Goals 1 1.2 Background of MPI-1.0

More information

Advanced Message-Passing Interface (MPI)

Advanced Message-Passing Interface (MPI) Outline of the workshop 2 Advanced Message-Passing Interface (MPI) Bart Oldeman, Calcul Québec McGill HPC Bart.Oldeman@mcgill.ca Morning: Advanced MPI Revision More on Collectives More on Point-to-Point

More information

Lecture 33: More on MPI I/O. William Gropp

Lecture 33: More on MPI I/O. William Gropp Lecture 33: More on MPI I/O William Gropp www.cs.illinois.edu/~wgropp Today s Topics High level parallel I/O libraries Options for efficient I/O Example of I/O for a distributed array Understanding why

More information

The MPI Message-passing Standard Practical use and implementation (I) SPD Course 2/03/2010 Massimo Coppola

The MPI Message-passing Standard Practical use and implementation (I) SPD Course 2/03/2010 Massimo Coppola The MPI Message-passing Standard Practical use and implementation (I) SPD Course 2/03/2010 Massimo Coppola What is MPI MPI: Message Passing Interface a standard defining a communication library that allows

More information

Introduction to NetCDF

Introduction to NetCDF Introduction to NetCDF NetCDF is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. First released in 1989.

More information

CSCI 171 Chapter Outlines

CSCI 171 Chapter Outlines Contents CSCI 171 Chapter 1 Overview... 2 CSCI 171 Chapter 2 Programming Components... 3 CSCI 171 Chapter 3 (Sections 1 4) Selection Structures... 5 CSCI 171 Chapter 3 (Sections 5 & 6) Iteration Structures

More information

A flow chart is a graphical or symbolic representation of a process.

A flow chart is a graphical or symbolic representation of a process. Q1. Define Algorithm with example? Answer:- A sequential solution of any program that written in human language, called algorithm. Algorithm is first step of the solution process, after the analysis of

More information

Introduction to the Message Passing Interface (MPI)

Introduction to the Message Passing Interface (MPI) Introduction to the Message Passing Interface (MPI) CPS343 Parallel and High Performance Computing Spring 2018 CPS343 (Parallel and HPC) Introduction to the Message Passing Interface (MPI) Spring 2018

More information

Message-Passing and MPI Programming

Message-Passing and MPI Programming Message-Passing and MPI Programming 2.1 Transfer Procedures Datatypes and Collectives N.M. Maclaren Computing Service nmm1@cam.ac.uk ext. 34761 July 2010 These are the procedures that actually transfer

More information

Parallel I/O on JUQUEEN

Parallel I/O on JUQUEEN Parallel I/O on JUQUEEN 4. Februar 2014, JUQUEEN Porting and Tuning Workshop Mitglied der Helmholtz-Gemeinschaft Wolfgang Frings w.frings@fz-juelich.de Jülich Supercomputing Centre Overview Parallel I/O

More information

Supercomputing in Plain English Exercise #6: MPI Point to Point

Supercomputing in Plain English Exercise #6: MPI Point to Point Supercomputing in Plain English Exercise #6: MPI Point to Point In this exercise, we ll use the same conventions and commands as in Exercises #1, #2, #3, #4 and #5. You should refer back to the Exercise

More information

C-types: basic & constructed. C basic types: int, char, float, C constructed types: pointer, array, struct

C-types: basic & constructed. C basic types: int, char, float, C constructed types: pointer, array, struct C-types: basic & constructed C basic types: int, char, float, C constructed types: pointer, array, struct Memory Management Code Global variables in file (module) Local static variables in functions Dynamic

More information

DATA STRUCTURES CHAPTER 1

DATA STRUCTURES CHAPTER 1 DATA STRUCTURES CHAPTER 1 FOUNDATIONAL OF DATA STRUCTURES This unit introduces some basic concepts that the student needs to be familiar with before attempting to develop any software. It describes data

More information

Scalable I/O. Ed Karrels,

Scalable I/O. Ed Karrels, Scalable I/O Ed Karrels, edk@illinois.edu I/O performance overview Main factors in performance Know your I/O Striping Data layout Collective I/O 2 of 32 I/O performance Length of each basic operation High

More information

Introduction to MPI. May 20, Daniel J. Bodony Department of Aerospace Engineering University of Illinois at Urbana-Champaign

Introduction to MPI. May 20, Daniel J. Bodony Department of Aerospace Engineering University of Illinois at Urbana-Champaign Introduction to MPI May 20, 2013 Daniel J. Bodony Department of Aerospace Engineering University of Illinois at Urbana-Champaign Top500.org PERFORMANCE DEVELOPMENT 1 Eflop/s 162 Pflop/s PROJECTED 100 Pflop/s

More information