Supporting PGAS Models (UPC and OpenSHMEM) on MIC Clusters

1 Supporting PGAS Models (UPC and OpenSHMEM) on MIC Clusters. Presentation at IXPUG Meeting, July 2014, by Dhabaleswar K. (DK) Panda and Khaled Hamidouche, The Ohio State University.

2 Parallel Programming Models Overview. [Diagram: three classes of models - Shared Memory Model (DSM), Distributed Memory Model (MPI: Message Passing Interface), and Partitioned Global Address Space (PGAS: Global Arrays, UPC, Chapel, X10, CAF, ...).] Programming models provide abstract machine models. Models can be mapped onto different types of systems, e.g., Distributed Shared Memory (DSM), MPI within a node, etc. Additionally, OpenMP can be used to parallelize computation within a node. Each model has strengths and drawbacks and suits different problems or applications. IXPUG14-OSU-PGAS 2

3 Partitioned Global Address Space (PGAS) Models. Key features: simple shared memory abstractions, lightweight one-sided communication, and easier expression of irregular communication. Different approaches to PGAS: languages (Unified Parallel C (UPC), Co-Array Fortran (CAF), X10, Chapel) and libraries (OpenSHMEM, Global Arrays). IXPUG14-OSU-PGAS 3

4 SHMEM. SHMEM: Symmetric Hierarchical MEMory library, a one-sided communication library that has been around for a while. Similar to MPI, processes are called PEs; data movement is explicit through library calls. Provides globally addressable memory using symmetric memory objects (more in later slides). Library routines for symmetric object creation and management, one-sided data movement, atomics, collectives, and synchronization. IXPUG14-OSU-PGAS 4
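
To make these library routines concrete, here is a minimal OpenSHMEM sketch (not taken from the slides) using the v1.0-era API referenced later in this talk: a symmetric allocation with shmalloc(), a one-sided shmem_int_put() to a neighbor PE, and shmem_barrier_all() for synchronization.

    #include <stdio.h>
    #include <shmem.h>

    int main(void)
    {
        start_pes(0);                        /* initialize the library (SGI / OpenSHMEM 1.0 style) */
        int me   = _my_pe();                 /* this PE's rank */
        int npes = _num_pes();               /* total number of PEs */

        int *dst = (int *) shmalloc(sizeof(int));      /* symmetric object: same size on every PE */
        *dst = -1;
        shmem_barrier_all();

        int src = me;
        shmem_int_put(dst, &src, 1, (me + 1) % npes);  /* one-sided write into the neighbor's copy */

        shmem_barrier_all();                 /* ensure all puts have completed everywhere */
        printf("PE %d of %d received %d\n", me, npes, *dst);

        shfree(dst);
        return 0;
    }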

5 OpenSHMEM. SHMEM implementations: Cray SHMEM, SGI SHMEM, Quadrics SHMEM, HP SHMEM, GSHMEM. Subtle differences in API across versions; example: Initialization - SGI SHMEM: start_pes(0), Quadrics SHMEM: shmem_init, Cray SHMEM: start_pes; Process ID - SGI SHMEM: _my_pe, Quadrics SHMEM: my_pe, Cray SHMEM: shmem_my_pe. These differences made application codes non-portable. OpenSHMEM is an effort to address this: a new, open specification to consolidate the various extant SHMEM versions into a widely accepted standard. OpenSHMEM Specification v1.0 by University of Houston and Oak Ridge National Lab; SGI SHMEM is the baseline. IXPUG14-OSU-PGAS 5

6 Compiler-based: Unified Parallel C. UPC: a parallel extension to the C standard. UPC specifications and standards: Introduction to UPC and Language Specification, 1999; UPC Language Specifications v1.0, Feb 2001; v1.1.1, Sep 2004; v1.2, June 2005; v1.3, Nov 2013. UPC Consortium - academic institutions: GWU, MTU, UCB, U. Florida, U. Houston, U. Maryland; government institutions: ARSC, IDA, LBNL, SNL, US DOE; commercial institutions: HP, Cray, Intrepid Technology, IBM, and others. Supported by several UPC compilers - vendor-based commercial UPC compilers: HP UPC, Cray UPC, SGI UPC; open-source UPC compilers: Berkeley UPC, GCC UPC, Michigan Tech MuPC. Aims for high performance, coding efficiency, and irregular applications. IXPUG14-OSU-PGAS 6
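
As a small illustration of the language (not from the talk), the sketch below declares a shared array distributed across UPC threads and uses upc_forall so that each thread updates only the elements it has affinity to.

    #include <upc_relaxed.h>
    #include <stdio.h>

    #define N 64

    shared double a[N*THREADS];       /* global array, default cyclic layout over THREADS */

    int main(void)
    {
        int i;
        upc_forall (i = 0; i < N*THREADS; i++; &a[i])   /* owner computes: affinity expression &a[i] */
            a[i] = (double) i;

        upc_barrier;                  /* all threads finish their updates */

        if (MYTHREAD == 0)
            printf("%d threads, a[last] = %g\n", THREADS, (double) a[N*THREADS - 1]);
        return 0;
    }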

7 MPI+PGAS for Exascale Architectures and Applications. Hierarchical architectures with multiple address spaces suit a hybrid (MPI + PGAS) model: MPI across address spaces, PGAS within an address space. MPI is good at moving data between address spaces; within an address space, MPI can interoperate with other shared-memory programming models. Applications can have kernels with different communication patterns and can benefit from different models. Re-writing complete applications can be a huge effort; port critical kernels to the desired model instead. IXPUG14-OSU-PGAS 7

8 Hybrid (MPI+PGAS) Programming. Application sub-kernels can be re-written in MPI or PGAS based on their communication characteristics. Benefits: best of the distributed computing model and best of the shared memory computing model. Exascale Roadmap*: hybrid programming is a practical way to program exascale systems. [Figure: an HPC application whose kernels 1..N are written in MPI or PGAS as appropriate.] * The International Exascale Software Roadmap, J. Dongarra, P. Beckman, et al., International Journal of High Performance Computer Applications, Volume 25, Number 1, 2011. IXPUG14-OSU-PGAS 8
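
A minimal sketch of what such a kernel-by-kernel mix can look like in code (an illustration, not an excerpt from any benchmark shown here): one kernel uses an MPI collective, the next uses an OpenSHMEM one-sided put, in the same executable. It assumes a unified runtime such as MVAPICH2-X in which MPI ranks and OpenSHMEM PEs map one-to-one.

    #include <mpi.h>
    #include <shmem.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        start_pes(0);                          /* both models served by one runtime */

        int rank, npes = _num_pes();
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* "Kernel 1" in MPI: a collective reduction */
        int local = rank, sum = 0;
        MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        /* "Kernel 2" in PGAS: one-sided put into the next PE's symmetric buffer */
        int *sym = (int *) shmalloc(sizeof(int));
        shmem_int_put(sym, &sum, 1, (rank + 1) % npes);
        shmem_barrier_all();

        if (rank == 0)
            printf("MPI sum = %d, value received via OpenSHMEM = %d\n", sum, *sym);

        shfree(sym);
        MPI_Finalize();
        return 0;
    }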

9 MVAPICH2/MVAPICH2-X Software. High-performance open-source MPI library for InfiniBand, 10GigE/iWARP, and RDMA over Converged Enhanced Ethernet (RoCE). MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.0), available since 2002; MVAPICH2-X (MPI + PGAS), available since 2012; support for GPGPUs and MIC. Used by more than 2,150 organizations (HPC centers, industry, and universities) in 72 countries; more than 218,000 downloads from the OSU site directly. Empowering many TOP500 clusters: 7th-ranked 519,640-core cluster (Stampede) at TACC, 11th-ranked 74,358-core cluster (Tsubame 2.5) at Tokyo Institute of Technology, 16th-ranked 96,192-core cluster (Pleiades) at NASA, and many others. Available with software stacks of many IB, HSE, and server vendors, including Linux distros (RedHat and SuSE). Partner in the U.S. NSF-TACC Stampede system. IXPUG14-OSU-PGAS 9

10 MVAPICH2-X for Hybrid MPI + PGAS Applications. [Architecture: MPI, OpenSHMEM, UPC, and hybrid (MPI + PGAS) applications issue OpenSHMEM, UPC, and MPI calls to the unified MVAPICH2-X runtime over InfiniBand, RoCE, and iWARP.] Unified communication runtime for MPI, UPC, and OpenSHMEM, available with MVAPICH2-X 1.9 onwards. Feature highlights: supports MPI(+OpenMP), OpenSHMEM, UPC, MPI(+OpenMP) + OpenSHMEM, and MPI(+OpenMP) + UPC; MPI-3 compliant, OpenSHMEM v1.0 standard compliant, UPC v1.2 standard compliant (with initial support for UPC 1.3); scalable inter-node and intra-node communication, point-to-point and collectives. IXPUG14-OSU-PGAS 10

11 Outline: OpenSHMEM and UPC for Host; Hybrid MPI + PGAS (OpenSHMEM and UPC) for Host; OpenSHMEM for MIC; UPC for MIC. IXPUG14-OSU-PGAS 11

12 OpenSHMEM Design in MVAPICH2-X. [Software stack: OpenSHMEM API (memory management, data movement, atomics, collectives) over a minimal set of internal APIs (communication API, symmetric memory management API, active messages, one-sided operations, remote atomic ops, enhanced registration cache) on the MVAPICH2-X runtime over InfiniBand, RoCE, and iWARP.] OpenSHMEM stack based on the OpenSHMEM reference implementation; OpenSHMEM communication runs over the MVAPICH2-X runtime, using active messages, atomic and one-sided operations, and a remote registration cache. J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012. IXPUG14-OSU-PGAS 12

13 OpenSHMEM Collective Communication Performance. [Charts: latency (us) vs. message size / number of processes for Reduce (1,024 processes), Broadcast (1,024 processes), Collect (1,024 processes), and Barrier, comparing MV2X-SHMEM, Scalable-SHMEM, and OMPI-SHMEM; MV2X-SHMEM shows up to 180X improvement for Collect and 3X for Barrier.] IXPUG14-OSU-PGAS 13

14 OpenSHMEM Application Evaluation. [Charts: execution time (s) vs. number of processes for Heat Image and DAXPY, comparing UH-SHMEM, OMPI-SHMEM (FCA), Scalable-SHMEM (FCA), and MV2X-SHMEM.] Improved performance for OMPI-SHMEM and Scalable-SHMEM with FCA. Execution time for 2D Heat Image at 512 processes (sec): UH-SHMEM 523, OMPI-SHMEM 214, Scalable-SHMEM 193, MV2X-SHMEM 169. Execution time for DAXPY at 512 processes (sec): UH-SHMEM 57, OMPI-SHMEM 56, Scalable-SHMEM 9.2, MV2X-SHMEM 5.2. J. Jose, J. Zhang, A. Venkatesh, S. Potluri, and D. K. Panda, A Comprehensive Performance Evaluation of OpenSHMEM Libraries on InfiniBand Clusters, OpenSHMEM Workshop (OpenSHMEM '14), March 2014. IXPUG14-OSU-PGAS 14

15 MVAPICH2-X Support for Berkeley UPC Runtime. [Software stack: UPC applications and compiler-generated code sit on a compiler-specific runtime with extended and core APIs over GASNet; GASNet provides high-level operations such as remote memory access and various collective operations, heavily based on active messages directly on top of each individual network architecture, through SMP, IB, MPI, MXM, and MVAPICH2-X conduits.] GASNet (Global-Address Space Networking) is a language-independent, low-level networking layer that provides support for PGAS languages. It supports multiple networks through different conduits; the MVAPICH2-X conduit is available in the MVAPICH2-X release, which supports UPC/OpenMP/MPI on InfiniBand. IXPUG14-OSU-PGAS 15

16 UPC Collectives Performance. [Charts: latency vs. message size at 2,048 processes for Broadcast, Gather, Scatter, and Exchange, comparing UPC-GASNet and UPC-OSU; callouts show UPC-OSU improvements of up to 25X, with further 2X, 2X, and 35% gains across the collectives.] J. Jose, K. Hamidouche, J. Zhang, A. Venkatesh, and D. K. Panda, Optimizing Collective Communication in UPC, HiPS '14 (in association with IPDPS '14). IXPUG14-OSU-PGAS 16

17 Outline: OpenSHMEM and UPC for Host; Hybrid MPI + PGAS (OpenSHMEM and UPC) for Host; OpenSHMEM for MIC; UPC for MIC. IXPUG14-OSU-PGAS 17

18 Unified Runtime for Hybrid MPI + OpenSHMEM Applications. [Diagrams: with separate runtimes, hybrid (OpenSHMEM + MPI) applications issue OpenSHMEM calls to an OpenSHMEM runtime and MPI calls to an MPI runtime, each over InfiniBand, RoCE, iWARP; with MVAPICH2-X, MPI, OpenSHMEM, and hybrid (MPI + OpenSHMEM) applications issue both kinds of calls to a single MVAPICH2-X runtime over InfiniBand, RoCE, iWARP.] Benefits: optimal network resource usage, no deadlock because of the single runtime, and better performance. J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012. IXPUG14-OSU-PGAS 18

19 Hybrid MPI+OpenSHMEM Graph500 Design. [Charts: execution time (s) vs. number of processes (4K, 8K, 16K), plus strong- and weak-scalability plots in billions of traversed edges per second (TEPS), comparing MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM).] Performance of the hybrid (MPI+OpenSHMEM) Graph500 design: at 8,192 processes, 2.4X improvement over MPI-CSR and 7.6X improvement over MPI-Simple; at 16,384 processes, 1.5X improvement over MPI-CSR and 13X improvement over MPI-Simple. J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC '13), June 2013. J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012. IXPUG14-OSU-PGAS 19

20 Hybrid MPI+UPC NAS-FT. [Chart: time (s) for NAS problem size / system size combinations B-64, C-64, B-128, C-128, comparing UPC (GASNet), UPC (MV2-X), and MPI+UPC (MV2-X).] Modified the NAS FT UPC all-to-all pattern to use MPI_Alltoall, giving a truly hybrid program. For FT (Class C, 128 processes): 34% improvement over UPC (GASNet) and 30% improvement over UPC (MV2-X). J. Jose, M. Luo, S. Sur and D. K. Panda, Unifying UPC and MPI Runtimes: Experience with MVAPICH, Fourth Conference on Partitioned Global Address Space Programming Models (PGAS '10), October 2010. IXPUG14-OSU-PGAS 20
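
A hedged sketch of the pattern described above (illustrative only; not the NAS FT source, and the buffer sizes and names are invented): the data stays in blocked UPC shared arrays, but each thread casts its own block to ordinary local pointers and performs the exchange with MPI_Alltoall, assuming the MVAPICH2-X style of one MPI rank per UPC thread.

    #include <upc.h>
    #include <mpi.h>
    #include <stdio.h>

    #define PERTHREAD 1024     /* elements owned by each UPC thread (illustrative size) */

    /* Blocked shared arrays: PERTHREAD contiguous elements with affinity to each thread */
    shared [PERTHREAD] double src[PERTHREAD*THREADS];
    shared [PERTHREAD] double dst[PERTHREAD*THREADS];

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);            /* unified runtime: MPI rank i == UPC thread i (assumed) */

        /* Cast this thread's own block of the shared arrays to local pointers */
        double *my_src = (double *) &src[MYTHREAD * PERTHREAD];
        double *my_dst = (double *) &dst[MYTHREAD * PERTHREAD];

        int i, per_peer = PERTHREAD / THREADS;   /* assumes THREADS divides PERTHREAD */
        for (i = 0; i < PERTHREAD; i++)
            my_src[i] = MYTHREAD + 0.001 * i;

        upc_barrier;

        /* The all-to-all phase done with MPI instead of fine-grained UPC remote accesses */
        MPI_Alltoall(my_src, per_peer, MPI_DOUBLE,
                     my_dst, per_peer, MPI_DOUBLE, MPI_COMM_WORLD);

        upc_barrier;
        if (MYTHREAD == 0) printf("exchange done on %d threads\n", THREADS);

        MPI_Finalize();
        return 0;
    }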

21 Outline: OpenSHMEM and UPC for Host; Hybrid MPI + PGAS (OpenSHMEM and UPC) for Host; OpenSHMEM for MIC; UPC for MIC. IXPUG14-OSU-PGAS 21

22 HPC Applications on MIC Clusters. Flexibility in launching HPC jobs on Xeon Phi clusters. [Diagram: a spectrum from multi-core centric to many-core centric modes - Host-only (OpenSHMEM program on the Xeon), Offload / reverse offload (OpenSHMEM program on the Xeon with computation offloaded to the Xeon Phi), Symmetric (OpenSHMEM programs on both Xeon and Xeon Phi), and Coprocessor-only (OpenSHMEM program on the Xeon Phi).] IXPUG14-OSU-PGAS 22

23 Data Movement on Intel Xeon Phi Clusters. [Diagram: two nodes, each with CPUs connected by QPI, MICs attached over PCIe, and an IB HCA, with OpenSHMEM processes on both CPUs and MICs.] Communication paths: 1. Intra-Socket, 2. Inter-Socket, 3. Inter-Node, 4. Intra-MIC, 5. Intra-Socket MIC-MIC, 6. Inter-Socket MIC-MIC, 7. Inter-Node MIC-MIC, 8. Intra-Socket MIC-Host, 9. Inter-Socket MIC-Host, 10. Inter-Node MIC-Host, 11. Inter-Node MIC-MIC with IB on remote socket, and more. It is critical for runtimes to optimize data movement while hiding this complexity. IXPUG14-OSU-PGAS 23

24 Need for Non-Uniform Memory Allocation in OpenSHMEM. MIC cores have limited memory per core. OpenSHMEM relies on symmetric memory, allocated using shmalloc(); shmalloc() allocates the same amount of memory on all PEs. For applications running in symmetric mode, this limits the total heap size. Similar issues arise for applications (even host-only) with memory load imbalance (Graph500, out-of-core sort, etc.). How can different amounts of memory be allocated on host and MIC cores while still being able to communicate? IXPUG14-OSU-PGAS 24
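
As a small illustration of the constraint (standard OpenSHMEM behavior, not an extension): shmalloc() is symmetric and collective, so the size passed below must be identical on every PE, which means the most memory-constrained PE, here a MIC core, bounds what host PEs can allocate as well.

    #include <stddef.h>
    #include <shmem.h>

    int main(void)
    {
        start_pes(0);

        /* Must be the same on every PE; a MIC PE's small memory budget
           therefore caps the symmetric heap usable by host PEs too. */
        size_t nbytes = 64UL * 1024 * 1024;          /* illustrative size */
        double *work = (double *) shmalloc(nbytes);

        /* ... one-sided communication on the symmetric heap ... */

        shfree(work);
        return 0;
    }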

25 OpenSHMEM Design for MIC Clusters. [Figures: non-uniform memory allocation via team-based memory allocation (proposed extensions), and the address structure for non-uniform memory allocations.] IXPUG14-OSU-PGAS 25

26 Proxy-based Designs for OpenSHMEM. [Diagrams: OpenSHMEM Put using Active Proxy - (1) IB REQ, (2) SCIF Read, (2) IB Write, (3) IB FIN between MIC1/HOST1 and HOST2/MIC2 via the HCAs; OpenSHMEM Get using Active Proxy - (1) IB REQ, (2) IB Write, (2) SCIF Read, (3) IB FIN.] Current-generation architectures impose limitations on read bandwidth when the HCA reads from MIC memory, which impacts both put and get operation performance. Solution: pipelined data transfer by a proxy running on the host, using the IB and SCIF channels. Improves latency and bandwidth. IXPUG14-OSU-PGAS 26

27 OpenSHMEM Put/Get Performance. [Charts: OpenSHMEM Put and Get latency (us) vs. message size (up to 4M), comparing MV2X-Def and MV2X-Opt; the optimized design shows about a 4.6X improvement.] Proxy-based designs alleviate the hardware limitations. Put latency for a 4M message: default 3911us, optimized 838us. Get latency for a 4M message: default 3889us, optimized 837us. IXPUG14-OSU-PGAS 27

28 OpenSHMEM Collectives Performance. [Charts: Broadcast and Reduce latency (us) vs. message size at 1,024 processes, comparing MV2X-Def and MV2X-Opt; improvements of 2.6X and 1.9X.] Optimized designs for OpenSHMEM collective operations. Broadcast latency for a 256KB message at 1,024 processes: default 5955us, optimized 2268us. Reduce latency for a 256KB message at 1,024 processes: default 6581us, optimized 3294us. IXPUG14-OSU-PGAS 28

29 Performance Evaluations using Graph500. [Charts: execution time (s) vs. number of processes for Native Mode (8 procs/MIC) and Symmetric Mode (16 host + 16 MIC), comparing MV2X-Def and MV2X-Opt; up to 28% improvement.] Graph500 execution time (native mode): at 512 processes, default 5.17s, optimized 4.96s; improvement comes from the MIC-aware collectives design. Graph500 execution time (symmetric mode): at 1,024 processes, default 15.91s, optimized 12.41s; improvement comes from the MIC-aware collectives and proxy-based designs. IXPUG14-OSU-PGAS 29

30 Graph500 Evaluations with Extensions. [Chart: execution time (s) vs. number of processes for Host and Host+MIC (extensions); 26% improvement.] Redesigned Graph500 using the MIC to overlap computation and communication: data is transferred to MIC memory, MIC cores pre-process the received data, and host processes traverse vertices and send out new vertices. Graph500 execution time at 1,024 processes: host-only 0.33s, host+MIC with extensions 0.26s. Orders of magnitude improvement compared to the default symmetric mode: default symmetric mode 12.1s, host+MIC extensions 0.16s. J. Jose, K. Hamidouche, X. Lu, S. Potluri, J. Zhang, K. Tomko and D. K. Panda, High Performance OpenSHMEM for Intel MIC Clusters: Extensions, Runtime Designs and Application Co-Design, IEEE International Conference on Cluster Computing (CLUSTER '14) (Best Paper Nominee). IXPUG14-OSU-PGAS 30

31 Outline: OpenSHMEM and UPC for Host; Hybrid MPI + PGAS (OpenSHMEM and UPC) for Host; OpenSHMEM for MIC; UPC for MIC. IXPUG14-OSU-PGAS 31

32 HPC Applications on MIC Clusters. Flexibility in launching HPC jobs on Xeon Phi clusters. [Diagram: a spectrum from multi-core centric to many-core centric modes - Host-only (UPC program on the Xeon), Offload / reverse offload (UPC program on the Xeon with computation offloaded to the Xeon Phi), Symmetric (UPC programs on both Xeon and Xeon Phi), and Coprocessor-only (UPC program on the Xeon Phi).] IXPUG14-OSU-PGAS 32

33 UPC on MIC Clusters. Process model: communication via shared memory (single copy/double copy), one connection per process; drawbacks: communication overheads and the number of network connections. Thread model: direct access to the shared array in the same memory space, no extra copy, no shared memory mapping entries; drawback: sharing of network connections (our work on multi-endpoint addresses this issue). M. Luo, J. Jose, S. Sur and D. K. Panda, Multi-threaded UPC Runtime with Network Endpoints: Design Alternatives and Evaluation on Multi-core Architectures, Int'l Conference on High Performance Computing (HiPC '11), Dec. 2011. IXPUG14-OSU-PGAS 33

34 UPC Runtime on MIC: Remote Memory Access between Host and MIC. [Diagrams: original all-to-all connection mode vs. leader-to-all connection mode between host CPUs (Intel Xeon) and the global memory region on the MIC (Intel Xeon Phi coprocessor), where the leader pins down the whole global memory region.] Original connection mode: each UPC thread pins down the global memory region that has affinity to it, and all-to-all connections are established between every pair of UPC threads across MIC and host. Leader-to-all connection mode: only one UPC thread pins down the whole global memory space on the MIC, and all UPC threads on the host access it through the leader UPC thread. Connections: N_MIC x N_HOST -> N_MIC + N_HOST (e.g., with 16 host and 60 MIC threads, 960 connections reduce to 76). IXPUG14-OSU-PGAS 34

35 UPC Runtime on MIC: Remote Memory Access between Host and MIC. [Diagram: host CPUs (Intel Xeon) accessing the global memory region on the MIC (Intel Xeon Phi coprocessor).] Process-based runtime: the whole global memory region needs to be mapped as a shared memory region, e.g., via PSHM. Thread-based runtime: since the threads share the same memory space, the leader can access the other global memory regions on the MIC without mapping them to shared memory. IXPUG14-OSU-PGAS 35

36 Application Benchmark Evaluation for Native Mode. [Charts: native-host vs. native-mic performance for CG.B, EP.B, FT.B, IS.B, MG.B, RandomAccess, and UTS; callouts of 2.6X and 4X.] The host and the MIC are occupied with 16 and 60 UPC threads, respectively. The MIC achieves 80%, 67%, and 54% of the 16-CPU host's performance for MG, EP, and FT, respectively. For the communication-intensive benchmarks CG and IS, the MIC takes 7x and 3x the execution time. These are default applications run without modification, so the results do not represent optimal performance. M. Luo, M. Li, A. Venkatesh, X. Lu and D. K. Panda, UPC on MIC: Early Experiences with Native and Symmetric Modes, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013. IXPUG14-OSU-PGAS 36

37 Concluding Remarks. HPC systems are evolving to support the growing needs of exascale applications. Hybrid programming (MPI + PGAS) is a practical way to program exascale systems. Presented designs that demonstrate the performance potential of PGAS and hybrid MPI+PGAS models. MIC support for OpenSHMEM and UPC will be available in future MVAPICH2-X releases. IXPUG14-OSU-PGAS 37

38 Funding Acknowledgments Funding Support by Equipment Support by IXPUG14-OSU-PGAS 38

39 Personnel Acknowledgments Current Students S. Chakraborthy (Ph.D.) N. Islam (Ph.D.) J. Jose (Ph.D.) M. Li (Ph.D.) R. Rajachandrasekhar (Ph.D.) M. Rahman (Ph.D.) D. Shankar (Ph.D.) R. Shir (Ph.D.) A. Venkatesh (Ph.D.) J. Zhang (Ph.D.) Current Senior Research Associates H. Subramoni X. Lu Current Post-Docs Current Programmers K. Hamidouche M. Arnold J. Lin J. Perkins Past Students P. Balaji (Ph.D.) D. Buntinas (Ph.D.) S. Bhagvat (M.S.) L. Chai (Ph.D.) B. Chandrasekharan (M.S.) N. Dandapanthula (M.S.) V. Dhanraj (M.S.) T. Gangadharappa (M.S.) K. Gopalakrishnan (M.S.) W. Huang (Ph.D.) W. Jiang (M.S.) S. Kini (M.S.) M. Koop (Ph.D.) R. Kumar (M.S.) S. Krishnamoorthy (M.S.) K. Kandalla (Ph.D.) P. Lai (M.S.) J. Liu (Ph.D.) M. Luo (Ph.D.) A. Mamidala (Ph.D.) G. Marsh (M.S.) V. Meshram (M.S.) S. Naravula (Ph.D.) R. Noronha (Ph.D.) X. Ouyang (Ph.D.) S. Pai (M.S.) S. Potluri (Ph.D.) G. Santhanaraman (Ph.D.) A. Singh (Ph.D.) J. Sridhar (M.S.) S. Sur (Ph.D.) H. Subramoni (Ph.D.) K. Vaidyanathan (Ph.D.) A. Vishnu (Ph.D.) J. Wu (Ph.D.) W. Yu (Ph.D.) Past Post-Docs H. Wang X. Besseron H.-W. Jin E. Mancini S. Marcarelli J. Vienne Past Research Scientist S. Sur Past Programmers D. Bureddy M. Luo IXPUG14-OSU-PGAS 39

40 Web Pointers MVAPICH Web Page IXPUG14-OSU-PGAS 40
