MVAPICH2-GDR: Pushing the Frontier of MPI Libraries Enabling GPUDirect Technologies


1 MVAPICH2-GDR: Pushing the Frontier of MPI Libraries Enabling GPUDirect Technologies. GPU Technology Conference (GTC 2018), by Dhabaleswar K. (DK) Panda, The Ohio State University, and Hari Subramoni, The Ohio State University

2 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC 218 2

3 Overview of the MVAPICH2 Project: High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE). MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.1), started in 2001, first version available in 2002. MVAPICH2-X (MPI + PGAS), available since 2011. Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014. Support for Virtualization (MVAPICH2-Virt), available since 2015. Support for Energy-Awareness (MVAPICH2-EA), available since 2015. Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015. Used by more than 2,875 organizations in 86 countries. More than 460,000 (>0.46 million) downloads from the OSU site directly. Empowering many TOP500 clusters (Nov '17 ranking): 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China; 9th, 556,104 cores (Oakforest-PACS) in Japan; 12th, 368,928-core (Stampede2) at TACC; 17th, 241,108-core (Pleiades) at NASA; 48th, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology. Available with the software stacks of many vendors and Linux distros (RedHat and SuSE). Empowering Top500 systems for over a decade. GTC 2018 3

4 MVAPICH2 Release Timeline and Downloads. [Chart: cumulative number of downloads from Sep 2004 to Jan 2018, annotated with release points from MV 0.9.4 and MV2 0.9.0 through MV2 1.0-1.9, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-Virt 2.2, MV2-X 2.3b, MV2-GDR 2.3a, MV2 2.3rc1, and OSU INAM.] GTC 2018 4

5 MVAPICH2 Architecture. High Performance Parallel Programming Models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk). High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: point-to-point primitives, collectives algorithms, job startup, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, introspection & analysis. Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi (MIC, KNL*), NVIDIA GPGPU). Transport protocols: RC, XRC, UD, DC. Modern features: UMR, ODP*, SR-IOV, multi-rail. Transport mechanisms: shared memory, CMA, IVSHMEM. Modern features: MCDRAM*, NVLink*, CAPI*. (* Upcoming) GTC 2018 5

6 MVAPICH2 Software Family. High-Performance Parallel Programming Libraries:
MVAPICH2 - Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
MVAPICH2-X - Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime
MVAPICH2-GDR - Optimized MPI for clusters with NVIDIA GPUs
MVAPICH2-Virt - High-performance and scalable MPI for hypervisor- and container-based HPC cloud
MVAPICH2-EA - Energy-aware and high-performance MPI
MVAPICH2-MIC - Optimized MPI for clusters with Intel KNC
Microbenchmarks: OMB - Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
Tools: OSU INAM - Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration; OEMT - Utility to measure the energy consumption of MPI applications
GTC 2018 6

7 MVAPICH2-GDR: Optimizing MPI Data Movement on GPU Clusters Connected as PCIe devices Flexibility but Complexity PCIe GPU CPU GPU Node Node 1 QPI GPU CPU IB CPU GPU Memory buffers 1. Intra-GPU 2. Intra-Socket GPU-GPU 3. Inter-Socket GPU-GPU 4. Inter-Node GPU-GPU 5. Intra-Socket GPU-Host 6. Inter-Socket GPU-Host 7. Inter-Node GPU-Host 8. Inter-Node GPU-GPU with IB adapter on remote socket and more... NVLink is leading to more paths For each path different schemes: Shared_mem, IPC, GPUDirect RDMA, pipeline Critical for runtimes to optimize data movement while hiding the complexity GTC 218 7

8 GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU. Standard MPI interfaces used for unified data movement. Takes advantage of Unified Virtual Addressing (>= CUDA 4.0). Overlaps data movement from the GPU with RDMA transfers. At Sender: MPI_Send(s_devbuf, size, ...); At Receiver: MPI_Recv(r_devbuf, size, ...); the data movement is handled inside MVAPICH2. High Performance and High Productivity. GTC 2018 8
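A minimal sketch of this pattern is shown below, assuming a CUDA-aware MPI build such as MVAPICH2-GDR (run with MV2_USE_CUDA=1, as on the HOOMD-blue slide later) and two ranks, each with one GPU; the buffer size and tag are illustrative.

```c
/* Hedged sketch: CUDA-aware MPI point-to-point directly from device buffers. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t n = 1 << 20;               /* 1M floats (~4 MB), illustrative */
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));

    if (rank == 0) {
        /* Pass the device pointer directly to MPI; the library handles
         * GPUDirect RDMA / pipelining internally. */
        MPI_Send(d_buf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```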

9 CUDA-Aware MPI: MVAPICH2-GDR Releases. Support for MPI communication from NVIDIA GPU device memory. High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU). High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU). Takes advantage of CUDA IPC (available since CUDA 4.1) for intra-node communication with multiple GPU adapters/node. Optimized and tuned collectives for GPU device buffers. MPI datatype support for point-to-point and collective communication from GPU device buffers. Unified memory. GTC 2018 9

10 Using MVAPICH2-GDR. MVAPICH2-2.3 with GDR support can be downloaded from the MVAPICH site. System software requirements: Mellanox OFED 3.2 or later; a supported NVIDIA driver; NVIDIA CUDA Toolkit 7.5/8.0/9.0 or later; plugin for GPUDirect RDMA. Strongly recommended: GDRCOPY module from NVIDIA. Contact the MVAPICH help list with any questions related to the package: mvapich-help@cse.ohio-state.edu. GTC 2018 10

11 MVAPICH2-GDR 2.3a, released on 11/9/2017. Major Features and Enhancements: based on MVAPICH2 2.2; support for CUDA 9.0; added support for the Volta (V100) GPU; support for OpenPOWER with NVLink; efficient multiple CUDA stream-based IPC communication for multi-GPU systems with and without NVLink; enhanced performance of GPU-based point-to-point communication; leverages the Linux Cross Memory Attach (CMA) feature for enhanced host-based communication; enhanced performance of MPI_Allreduce for GPU-resident data; InfiniBand Multicast (IB-MCAST) based designs for GPU-based broadcast and streaming applications (basic support for IB-MCAST designs with GPUDirect RDMA, advanced support for zero-copy IB-MCAST designs with GPUDirect RDMA, advanced reliability support for IB-MCAST designs); efficient broadcast designs for Deep Learning applications; enhanced collective tuning on Xeon, OpenPOWER, and NVIDIA DGX-1 systems. GTC 2018 11

12 Optimized MVAPICH2-GDR Design. [Charts: GPU-GPU inter-node latency (1.88 us, ~10x better), bandwidth (~9x better), and bi-bandwidth (~11x better) for MV2-GDR 2.3a vs. MV2 without GDR across message sizes.] Platform: MVAPICH2-GDR 2.3a, Intel Haswell (3.1 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPUDirect RDMA. GTC 2018 12

13 RoCE and Optimized Collectives Support. RoCE V1 and V2 support; RDMA_CM connection support. CUDA-Aware Collective Tuning: point-to-point tuning (available since MVAPICH2-GDR 2.0), with tuned thresholds for the different communication patterns and features, depending on the system configuration (CPU, HCA and GPU models). Tuning framework for GPU-based collectives: selects the best algorithm depending on message size, system size and system configuration. Support for Bcast and Gather operations for different GDR-enabled systems, available since the MVAPICH2-GDR 2.2RC1 release. GTC 2018 13

14 Application-Level Evaluation (HOOMD-blue). [Charts: average time steps per second (TPS) vs. number of processes for two particle-count configurations; MV2+GDR delivers up to 2X over MV2.] Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB), HOOMD-blue version 1.0.5. GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT. GTC 2018 14

15 Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland. [Charts: normalized execution time on the Wilkes GPU cluster and the CSCS GPU cluster for Default, Callback-based, and Event-based designs vs. number of GPUs.] 2X improvement on 32 GPU nodes; 30% improvement on 96 GPU nodes (8 GPUs/node). Cosmo model: /tasks/operational/meteoswiss/. On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the Cosmo application. C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS '16. GTC 2018 15

16 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC

17 Multi-stream Communication using CUDA IPC on OpenPOWER and DGX-1. Up to 16% higher Device-to-Device (D2D) bandwidth on OpenPOWER with the NVLink interconnect; up to 30% higher D2D bandwidth on DGX-1 with NVLink. [Charts: point-to-point (D-D) bandwidth vs. message size, 1-stream vs. 4-streams, showing the benefits of the multi-stream CUDA IPC design.] Available with MVAPICH2-GDR 2.3a. GTC 2018 17

18 CMA-based Intra-node Host-to-Host Communication Support. Up to 30% lower Host-to-Host (H2H) latency and 30% higher H2H bandwidth. [Charts: intra-node point-to-point (H2H) latency and bandwidth vs. message size, MV2-GDR with and without CMA.] Platform: MVAPICH2-GDR 2.3a, Intel Broadwell node with 28 cores, NVIDIA Tesla K80 GPU, and Mellanox ConnectX-4 EDR HCA; CUDA 8.0, Mellanox OFED 4.0 with GPUDirect RDMA. GTC 2018 18

19 Non-contiguous Data Exchange Halo data exchange Multi-dimensional data Row based organization Contiguous on one dimension Non-contiguous on other dimensions Halo data exchange Duplicate the boundary Exchange the boundary in each iteration GTC

20 MPI Datatype Support in MVAPICH2. Datatype support in MPI: operate on customized datatypes to improve productivity and enable the MPI library to optimize non-contiguous data. At Sender: MPI_Type_vector(n_blocks, n_elements, stride, old_type, &new_type); MPI_Type_commit(&new_type); MPI_Send(s_buf, size, new_type, dest, tag, MPI_COMM_WORLD); Inside MVAPICH2: uses datatype-specific CUDA kernels to pack data in chunks and efficiently moves data between nodes using RDMA; in progress - currently optimizes vector and hindexed datatypes; transparent to the user. H. Wang, S. Potluri, D. Bureddy, C. Rosales and D. K. Panda, GPU-aware MPI on RDMA-Enabled Clusters: Design, Implementation and Evaluation, IEEE Transactions on Parallel and Distributed Systems, Vol. 25, No. 10, Oct 2014. GTC 2018 20
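As an illustration of the interface above, the hedged sketch below sends one column of a row-major matrix that resides in GPU memory using MPI_Type_vector; the function name send_column and the matrix layout are assumptions for the example, not part of MVAPICH2 itself.

```c
/* Hedged sketch: sending a strided (non-contiguous) region of a GPU buffer. */
#include <mpi.h>

void send_column(float *d_matrix, int rows, int cols, int col, int dest) {
    /* rows blocks of 1 element, stride of one row: describes a matrix column. */
    MPI_Datatype column_t;
    MPI_Type_vector(rows, 1, cols, MPI_FLOAT, &column_t);
    MPI_Type_commit(&column_t);

    /* With a CUDA-aware library, the non-contiguous device data is packed
     * internally (e.g., with CUDA kernels) before the RDMA transfer. */
    MPI_Send(d_matrix + col, 1, column_t, dest, 0, MPI_COMM_WORLD);

    MPI_Type_free(&column_t);
}
```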

21 GTC MPI Datatype Processing (Computation Optimization ) Comprehensive support Targeted kernels for regular datatypes - vector, subarray, indexed_block Generic kernels for all other irregular datatypes Separate non-blocking stream for kernels launched by MPI library Avoids stream conflicts with application kernels Flexible set of parameters for users to tune kernels Vector MV2_CUDA_KERNEL_VECTOR_TIDBLK_SIZE MV2_CUDA_KERNEL_VECTOR_YSIZE Subarray MV2_CUDA_KERNEL_SUBARR_TIDBLK_SIZE MV2_CUDA_KERNEL_SUBARR_XDIM MV2_CUDA_KERNEL_SUBARR_YDIM MV2_CUDA_KERNEL_SUBARR_ZDIM Indexed_block MV2_CUDA_KERNEL_IDXBLK_XDIM

22 MPI Datatype Processing (Communication Optimization). Common scenario: MPI_Isend(A, ..., datatype, ...); MPI_Isend(B, ..., datatype, ...); MPI_Isend(C, ..., datatype, ...); MPI_Isend(D, ..., datatype, ...); ... MPI_Waitall(...); (*A, B contain a non-contiguous MPI datatype.) Without careful design, this pattern wastes computing resources on the CPU and GPU. GTC 2018 22
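A hedged sketch of this common scenario is shown below; the buffer names (d_A..d_D), the datatype halo_t, and the neighbor ranks are illustrative placeholders rather than anything defined by the library.

```c
/* Hedged sketch of the halo-exchange pattern on the slide: several non-blocking
 * sends of datatype-described GPU data followed by MPI_Waitall. */
#include <mpi.h>

void post_halo_sends(float *d_A, float *d_B, float *d_C, float *d_D,
                     MPI_Datatype halo_t, const int nbr[4]) {
    MPI_Request req[4];
    MPI_Isend(d_A, 1, halo_t, nbr[0], 0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(d_B, 1, halo_t, nbr[1], 0, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(d_C, 1, halo_t, nbr[2], 0, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(d_D, 1, halo_t, nbr[3], 0, MPI_COMM_WORLD, &req[3]);

    /* With serialized datatype packing, CPU and GPU sit idle here; the
     * event/callback-based designs evaluated on the Cosmo slide overlap
     * packing kernels with the network transfers instead. */
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
}
```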

23 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC

24 Initial (Basic) Support for GPU Managed Memory. With CUDA 6.0, NVIDIA introduced CUDA Managed (or Unified) Memory, allowing a common memory allocation for GPU or CPU through the cudaMallocManaged() call. Significant productivity benefits due to abstraction of explicit allocation and cudaMemcpy(). Extended MVAPICH2 to perform communication directly from managed buffers (available since MVAPICH2-GDR 2.2b). OSU Micro-benchmarks extended to evaluate the performance of point-to-point and collective communication using managed buffers (available since OMB 5.2). D. S. Banerjee, K. Hamidouche, and D. K. Panda, Designing High Performance Communication Runtime for GPU Managed Memory: Early Experiences, GPGPU-9 Workshop, held in conjunction with PPoPP '16. [Chart: 2D stencil halo-exchange time for halo width 1, Device vs. Managed buffers, across total dimension sizes.] GTC 2018 24
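A minimal sketch of communicating directly from a managed allocation is shown below, assuming a CUDA-aware MPI library with managed-memory support as described above; the size, ranks, and function name are illustrative.

```c
/* Hedged sketch: MPI communication directly from CUDA managed memory. */
#include <mpi.h>
#include <cuda_runtime.h>

void exchange_managed(int rank, size_t n) {
    float *buf;
    /* One allocation usable from both host and device code. */
    cudaMallocManaged((void **)&buf, n * sizeof(float), cudaMemAttachGlobal);

    if (rank == 0)
        MPI_Send(buf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(buf);
}
```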

25 Enhanced Support for Intra-node Managed Memory. CUDA managed memory implies no memory pin-down, so there is no IPC support for intra-node communication and no GDR support for inter-node communication. Initial and basic support in MVAPICH2-GDR: for both intra- and inter-node transfers, pipeline through host memory. Enhanced intra-node managed memory to use IPC: a double-buffering, pair-wise IPC-based scheme brings IPC performance to managed memory. High performance and high productivity; 2.5X improvement in bandwidth. Available since MVAPICH2-GDR 2.2RC1. [Chart: bandwidth vs. message size, enhanced design vs. MV2-GDR 2.2b, showing the 2.5X gain.] GTC 2018 25

26 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC

27 Streaming Applications. Streaming applications on HPC systems: 1. Communication (MPI) - broadcast-type operations; 2. Computation (CUDA) - multiple GPU nodes as workers. HPC resources for real-time analytics. [Diagram: a data source performs real-time streaming to a sender, which issues data streaming-like broadcast operations to many worker nodes, each with a CPU and GPU.] GTC 2018 27
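The worker-side sketch below illustrates the broadcast-then-compute loop described above; the kernel wrapper process_batch, the iteration count, and the root rank are assumptions for the example.

```c
/* Hedged sketch: each GPU worker repeatedly receives streamed data via
 * MPI_Bcast into device memory and then computes on it with CUDA. */
#include <mpi.h>
#include <stddef.h>

/* Stand-in for the application's CUDA computation on the received batch. */
extern void process_batch(float *d_data, size_t n);

void streaming_worker(float *d_data, size_t n, int root, int iters) {
    for (int i = 0; i < iters; i++) {
        /* Broadcast from the data source into each worker's GPU buffer;
         * MVAPICH2-GDR can service this path with IB multicast + GDR. */
        MPI_Bcast(d_data, (int)n, MPI_FLOAT, root, MPI_COMM_WORLD);
        process_batch(d_data, n);
    }
}
```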

28 Hardware Multicast-based Broadcast. For GPU-resident data, using GPUDirect RDMA (GDR) and InfiniBand Hardware Multicast (IB-MCAST). Overheads: IB UD limit, GDR limit. [Diagram: source CPU/GPU with IB HCA; 1. IB Gather + GDR Read, 2. IB Hardware Multicast through the IB switch, 3. IB Scatter + GDR Write at destinations 1..N.] A. Venkatesh, H. Subramoni, K. Hamidouche, and D. K. Panda, A High Performance Broadcast Design with Hardware Multicast and GPUDirect RDMA for Streaming Applications on InfiniBand Clusters, HiPC 2014, Dec 2014. GTC 2018 28

29 Optimized Broadcast Send. Preparing an intermediate buffer (im_buf): page-locked (pinned) host buffer for fast device-host data movement, allocated at the initialization phase for low overhead. Streaming data through the host: fine-tuned chunked data, asynchronous copy operations, three-stage pipeline. [Diagram: MPI_Bcast(d_out, ...) at the source; 1. data preparation into im_buf, 2. IB Gather, 3. IB Hardware Multicast through the IB switch.] C.-H. Chu, X. Lu, A. A. Awan, H. Subramoni, J. Hashmi, B. Elton and D. K. Panda, "Efficient and Scalable Multi-Source Streaming Broadcast on GPU Clusters for Deep Learning," ICPP 2017, Aug 14-17, 2017. GTC 2018 29
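The sketch below illustrates the host-staging idea (pinned intermediate buffer, chunked asynchronous copies) in a simplified two-buffer form; it is not the actual MVAPICH2-GDR pipeline, and send_chunk stands in for the IB gather + hardware-multicast step.

```c
/* Hedged sketch: stage a GPU source buffer into a pinned host buffer in
 * chunks, overlapping the device-to-host copy of chunk i+1 with the
 * network send of chunk i (double buffering on a dedicated stream). */
#include <cuda_runtime.h>
#include <stddef.h>

extern void send_chunk(void *host_chunk, size_t bytes);   /* placeholder for IB gather + MCAST */

void staged_bcast_send(const char *d_out, size_t total, size_t chunk,
                       char *im_buf,         /* pinned, 2*chunk bytes, allocated at init */
                       cudaStream_t stream)  /* dedicated copy stream */
{
    size_t nchunks = (total + chunk - 1) / chunk;
    size_t first = (total < chunk) ? total : chunk;

    /* Stage the first chunk into half 0 of the pinned buffer. */
    cudaMemcpyAsync(im_buf, d_out, first, cudaMemcpyDeviceToHost, stream);

    for (size_t i = 0; i < nchunks; i++) {
        size_t off   = i * chunk;
        size_t bytes = (total - off < chunk) ? (total - off) : chunk;

        cudaStreamSynchronize(stream);            /* chunk i is now in host memory */

        /* Pre-stage chunk i+1 into the other half while chunk i is on the wire. */
        if (i + 1 < nchunks) {
            size_t noff   = (i + 1) * chunk;
            size_t nbytes = (total - noff < chunk) ? (total - noff) : chunk;
            cudaMemcpyAsync(im_buf + ((i + 1) % 2) * chunk, d_out + noff,
                            nbytes, cudaMemcpyDeviceToHost, stream);
        }

        send_chunk(im_buf + (i % 2) * chunk, bytes);  /* hand chunk i to the network layer */
    }
}
```

The double buffer is what turns the copy and send stages into a pipeline: while the CPU is busy handing chunk i to the network, the GPU copy engine is already filling the other half of the pinned buffer.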

30 Optimized Broadcast Receive. Zero-copy broadcast receive into a pre-posted user buffer (d_in): avoids additional data movement, leverages IB Scatter and GDR features, low latency, and frees up PCIe resources for applications. [Diagram: IB Hardware Multicast from the IB switch, then IB Scatter (GDR Write) directly into d_in at destinations 1..N; MPI_Bcast(d_in, ...).] C.-H. Chu, X. Lu, A. A. Awan, H. Subramoni, J. Hashmi, B. Elton and D. K. Panda, "Efficient and Scalable Multi-Source Streaming Broadcast on GPU Clusters for Deep Learning," ICPP 2017, Aug 14-17, 2017. GTC 2018 30

31 Broadcast on Multi-GPU Systems. Proposed intra-node topology-aware broadcast using CUDA Inter-Process Communication (IPC). Available in MVAPICH2-GDR 2.3a. [Diagram: multicast steps from the source through the IB switch to nodes 1..N, followed by cudaMemcpy (device-to-device) among GPUs 0..N within each node.] C.-H. Chu, K. Hamidouche, H. Subramoni, A. Venkatesh, B. Elton, and D. K. Panda, "Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters," SBAC-PAD'16, Oct 2016. GTC 2018 31

32 Streaming Benchmark at CSCS (88 GPUs). [Charts: latency (μs) vs. message size for MCAST-GDR-OPT vs. MCAST-GDR, small and large messages.] IB-MCAST + GDR + topology-aware IPC-based schemes deliver up to 58% and 79% latency reduction for small and large messages, respectively. C.-H. Chu, K. Hamidouche, H. Subramoni, A. Venkatesh, B. Elton, and D. K. Panda, "Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters," SBAC-PAD'16, Oct 2016. GTC 2018 32

33 Application-based Evaluation: CUDA-Aware CNTK. RI2 cluster, 16 GPUs, 1 GPU/node; CUDA-Aware Microsoft Cognitive Toolkit (CA-CNTK) [2]. [Chart: speedup of MV2-GDR-Knomial, MV2-GDR-Ring, and MV2-MCAST-GDR-Opt on 8 and 16 GPUs for AlexNet, VGG, and ResNet-50; higher is better.] Reduces latency by up to 24%, 16% and 18% for the AlexNet, VGG and ResNet models, respectively; higher improvement can be observed for larger system sizes. [1] C.-H. Chu, X. Lu, A. A. Awan, H. Subramoni, J. Hashmi, B. Elton, and D. K. Panda, Efficient and Scalable Multi-Source Streaming Broadcast on GPU Clusters for Deep Learning, ICPP '17. [2] D. S. Banerjee, K. Hamidouche, and D. K. Panda, Re-Designing CNTK Deep Learning Framework on Modern GPU Enabled Clusters, IEEE CloudCom '16. GTC 2018 33

34 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC

35 Deep Learning: New Challenges for MPI Runtimes Deep Learning frameworks are a different game altogether Unusually large message sizes (order of megabytes) Most communication based on GPU buffers Existing State-of-the-art cudnn, cublas, NCCL --> scale-up performance NCCL2, CUDA-Aware MPI --> scale-out performance For small and medium message sizes only! Proposed: Can we co-design the MPI runtime (MVAPICH2- GDR) and the DL framework (Caffe) to achieve both? Scale-up Performance NCCL cudnn cublas NCCL2 grpc Proposed Co- Designs MPI Efficient Overlap of Computation and Communication Efficient Large-Message Communication (Reductions) Hadoop What application co-designs are needed to exploit communication-runtime co-designs? Scale-out Performance A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17) GTC

36 Large-Message Optimized Collectives for Deep Learning. MV2-GDR provides optimized collectives for large message sizes: optimized Reduce, Allreduce, and Bcast, with good scaling to large numbers of GPUs. Available since MVAPICH2-GDR 2.2GA. [Charts: latency (ms) vs. message size for Reduce (192 GPUs), Allreduce (64 GPUs), and Bcast (64 GPUs), and latency vs. number of GPUs for Reduce (64 MB), Allreduce, and Bcast (128 MB).] GTC 2018 36
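For reference, the hedged sketch below shows the large-message pattern these collectives target: an in-place MPI_Allreduce on a GPU-resident gradient buffer, as used by data-parallel training; the function name and buffer are illustrative.

```c
/* Hedged sketch: large-message Allreduce on a GPU-resident gradient buffer,
 * the per-iteration reduction pattern of data-parallel DL frameworks. */
#include <mpi.h>

void allreduce_gradients(float *d_grad /* device pointer */, size_t count) {
    /* In-place sum across all ranks, reading and writing device memory;
     * the library selects a large-message algorithm internally. */
    MPI_Allreduce(MPI_IN_PLACE, d_grad, (int)count, MPI_FLOAT,
                  MPI_SUM, MPI_COMM_WORLD);
}
```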

37 MVAPICH2: Allreduce Comparison with Baidu and OpenMPI. Initial evaluation shows promising performance gains for MVAPICH2-GDR 2.3a*. 8 GPUs (2 nodes), MVAPICH2-GDR vs. Baidu-Allreduce and OpenMPI. [Charts: latency vs. message size for MVAPICH2, Baidu, and OpenMPI; MVAPICH2 is ~10X better for small/medium messages and ~3.5X better up to 4M.] For the largest messages, OpenMPI is ~4X slower than Baidu and MV2 is ~20% better than Baidu. *Available with MVAPICH2-GDR 2.3a. GTC 2018 37

38 MVAPICH2: Allreduce Comparison with Baidu and OpenMPI. 16 GPUs (4 nodes), MVAPICH2-GDR vs. Baidu-Allreduce and OpenMPI. [Charts: latency vs. message size; MVAPICH2 is ~3X better for small messages and ~10X/~4X better across 512K-4M.] For the largest messages, OpenMPI is ~5X slower than Baidu and MV2 is ~2X better than Baidu. *Available with MVAPICH2-GDR 2.3a. GTC 2018 38

39 OSU-Caffe: Scalable Deep Learning. Caffe: a flexible and layered Deep Learning framework. Benefits and weaknesses: multi-GPU training within a single node; performance degradation for GPUs across different sockets; limited scale-out. OSU-Caffe: MPI-based parallel training enabling scale-up (within a node) and scale-out (across multi-GPU nodes); scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset; scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset. OSU-Caffe is publicly available from the HiDL project site. [Chart: GoogLeNet (ImageNet) training time on up to 128 GPUs for Caffe, OSU-Caffe (batch size 1024), and OSU-Caffe (batch size 2048); running stock Caffe beyond a single node is an invalid use case.] Support on OpenPOWER will be available soon. GTC 2018 39

40 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC 218 4

41 Point-to-Point Host-level Performance on OpenPOWER. [Charts: MVAPICH2-GDR 2.3a intra-node latency (~0.5 μs), inter-node latency (~2.3 μs), intra-node bandwidth (~30 GB/s), inter-node bandwidth (~12 GB/s), intra-node bi-directional bandwidth (~60 GB/s), and inter-node bi-directional bandwidth (~24 GB/s) vs. message size.] Platform: OpenPOWER (Power8-ppc64le) CPU using Mellanox EDR (MT4115) HCA. GTC 2018 41

42 Device-to-Device Performance on OpenPOWER (NVLink + Pascal). [Charts: intra-node latency (small and large messages), intra-node bandwidth, inter-node latency (small and large messages), and inter-node bandwidth vs. message size, for intra-socket (NVLink) and inter-socket paths.] Intra-node latency: 14.6 us (without GPUDirect RDMA); intra-node bandwidth: 33.9 GB/sec (NVLink). Inter-node latency: 23.8 us (without GPUDirect RDMA); inter-node bandwidth: 11.9 GB/sec (EDR). Available in MVAPICH2-GDR 2.3a. Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Pascal P100-SXM GPUs, and EDR InfiniBand interconnect. GTC 2018 42

43 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC

44 Container Support. Increasing trend to provide container support for MPI libraries: ease of build, portability, reproducibility. MVAPICH2-GDR 2.3a provides container (Docker) support; more details are available in the MVAPICH2-GDR User Guide. Synergistic with the HPC-Container-Maker and hpccm efforts by NVIDIA. GTC 2018 44

45 MVAPICH2-GDR on Container with Negligible Overhead. [Charts: GPU-GPU inter-node latency, bandwidth, and bi-bandwidth vs. message size, Docker vs. Native, showing near-identical performance.] Platform: MVAPICH2-GDR 2.3a, Intel Haswell (3.1 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPUDirect RDMA. GTC 2018 45

46 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC

47 Scalable Host-based Collectives with CMA (Intra-node, Reduce & Alltoall), Nodes=1, PPN=20. [Charts: latency vs. message size for MVAPICH2-GDR-Next, SpectrumMPI-10.1.0, and OpenMPI-3.0.0, for Reduce and Alltoall, small and large messages.] Up to 5X and 3X performance improvement by MVAPICH2 for small and large messages, respectively. GTC 2018 47

48 Scalable Host-based Collectives with CMA (Intra-node, Gather & Scatter), Nodes=1, PPN=20. [Charts: latency vs. message size for MVAPICH2-GDR-Next, SpectrumMPI-10.1.0, and OpenMPI-3.0.0, for Gather and Scatter, small and large messages.] Up to 24X and 15X performance improvement by MVAPICH2 for medium to large messages, respectively. GTC 2018 48

49 Scalable Host-based Collectives with CMA (Multi-node, Reduce & Alltoall), Nodes=4, PPN=20. [Charts: latency vs. message size for MVAPICH2-GDR-Next and OpenMPI-3.0.0, for Reduce and Alltoall, small and large messages.] Up to 12.4X and 8.5X performance improvement by MVAPICH2 for small and large messages, respectively. GTC 2018 49

50 Scalable Host-based Collectives with CMA (Multi-node, Gather & Scatter), Nodes=4, PPN=20. [Charts: latency vs. message size for MVAPICH2-GDR-Next and OpenMPI-3.0.0, for Gather and Scatter, small and large messages.] Up to 17.8X and 3.9X improvement with MVAPICH2-GDR-Next for medium to large messages, respectively. GTC 2018 50

51 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC

52 Optimized All-Reduce with XPMEM (Nodes=1, PPN=20 and Nodes=2, PPN=20). [Charts: latency vs. message size (16K-2M) for MVAPICH2-GDR-Next, SpectrumMPI-10.1.0, and OpenMPI-3.0.0.] Optimized MPI All-Reduce design in MVAPICH2: up to 2X performance improvement over Spectrum MPI and 4X over OpenMPI for intra-node. Optimized runtime parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch. GTC 2018 52

53 Optimized All-Reduce with XPMEM (Nodes=3, PPN=20 and Nodes=4, PPN=20). [Charts: latency vs. message size (16K-2M) for MVAPICH2-GDR-Next and OpenMPI-3.0.0.] Optimized MPI All-Reduce design in MVAPICH2: up to 2X performance improvement over OpenMPI for inter-node. Optimized runtime parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch. GTC 2018 53

54 Optimized Reduce with XPMEM (Nodes=1, PPN=20 and Nodes=2, PPN=20). [Charts: latency vs. message size for MVAPICH2-GDR-Next, SpectrumMPI-10.1.0, and OpenMPI-3.0.0.] Optimized MPI Reduce design in MVAPICH2: up to 3.1X performance improvement over OpenMPI at scale, and up to 37% over Spectrum MPI on intra-node. Optimized runtime parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch. GTC 2018 54

55 Optimized Reduce with XPMEM (Nodes=3, PPN=20 and Nodes=4, PPN=20). [Charts: latency vs. message size for MVAPICH2-GDR-Next and OpenMPI-3.0.0.] Optimized MPI Reduce design in MVAPICH2: up to 7X performance improvement over OpenMPI at scale. Optimized runtime parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch. GTC 2018 55

56 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC

57 MVAPICH2-GDR vs. NCCL2 - Broadcast Operation. Optimized designs in MVAPICH2-GDR 2.3b* offer better/comparable performance for most cases. MPI_Bcast (MVAPICH2-GDR) vs. ncclBcast (NCCL2) on 16 K80 GPUs. [Charts: latency vs. message size for 1 GPU/node and 2 GPUs/node; MVAPICH2-GDR is ~10X and ~4X better, respectively.] *Will be available with the upcoming MVAPICH2-GDR 2.3b. Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, 2 K80 GPUs, and EDR InfiniBand interconnect. GTC 2018 57

58 MVAPICH2-GDR vs. NCCL2 - Reduce Operation. Optimized designs in MVAPICH2-GDR 2.3b* offer better/comparable performance for most cases. MPI_Reduce (MVAPICH2-GDR) vs. ncclReduce (NCCL2) on 16 GPUs. [Charts: latency vs. message size; MVAPICH2-GDR is ~2.5X better for small/medium messages and ~5X better for large messages.] *Will be available with the upcoming MVAPICH2-GDR 2.3b. Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, 1 K80 GPU per node, and EDR InfiniBand interconnect. GTC 2018 58

59 MVAPICH2-GDR vs. NCCL2 - Allreduce Operation. Optimized designs in MVAPICH2-GDR 2.3b* offer better/comparable performance for most cases. MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllReduce (NCCL2) on 16 GPUs. [Charts: latency vs. message size; MVAPICH2-GDR is ~3X better for small/medium messages and ~1.2X better for large messages.] *Will be available with the upcoming MVAPICH2-GDR 2.3b. Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, 1 K80 GPU per node, and EDR InfiniBand interconnect. GTC 2018 59

60 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC 218 6

61 Out-of-Core Deep Neural Network Training with Caffe. Large DNNs cannot be trained on GPUs due to memory limitations. ResNet-50 is the state-of-the-art DNN architecture for image recognition, but current frameworks can only go up to a small batch size of 45. Next-generation models like Neural Machine Translation (NMT) are extremely large, consist of billions of parameters, and require even more memory. Can we design out-of-core DNN training support using new software features in CUDA 8/9 and hardware mechanisms in Pascal/Volta GPUs? The general intuition is that managed allocations will be slow. The proposed framework, called OC-Caffe (Out-of-Core Caffe), shows the potential of managed-memory designs that provide performance with negligible/no overhead. In addition to out-of-core training support, productivity can be greatly enhanced in DL framework design by using the new Unified Memory features; a sketch of the underlying idea follows. Submission under review. GTC 2018 61
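The sketch below illustrates the managed-memory over-subscription idea in its simplest form; it is not the OC-Caffe implementation, and it assumes a Pascal/Volta GPU with CUDA 8/9 Unified Memory, an illustrative placeholder kernel (train_step), and an arbitrary chunking of the working set.

```c
/* Hedged sketch: training on a managed allocation that may exceed GPU memory.
 * Compile with nvcc; pages migrate on demand, and prefetch hides the latency. */
#include <cuda_runtime.h>
#include <stddef.h>

/* Stand-in kernel; a real framework would run its forward/backward layers here. */
__global__ void train_step(float *data, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 0.99f;   /* placeholder update */
}

void out_of_core_train(size_t n /* elements; may exceed GPU memory */,
                       size_t chunk, int device) {
    float *buf;
    /* Over-subscription: the managed allocation can be larger than GPU memory. */
    cudaMallocManaged((void **)&buf, n * sizeof(float), cudaMemAttachGlobal);

    for (size_t off = 0; off < n; off += chunk) {
        size_t cur = (n - off < chunk) ? (n - off) : chunk;
        /* Stage the next working set on the GPU ahead of the kernel that uses it. */
        cudaMemPrefetchAsync(buf + off, cur * sizeof(float), device, 0);
        train_step<<<(unsigned)((cur + 255) / 256), 256>>>(buf + off, cur);
    }
    cudaDeviceSynchronize();
    cudaFree(buf);
}
```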

62 Performance Trends for OC-Caffe. Comparable performance to Caffe-Default for in-memory batch sizes. OC-Caffe-Opt: up to 5X improvement over Intel-MKL-optimized CPU-based AlexNet training on a Volta V100 GPU with CUDA 9 and cuDNN 7. [Charts: trainable (in-memory) and out-of-core (over-subscription) cases.] OC-Caffe will be released by the HiDL team (hidl.cse.ohio-state.edu). Submission under review. GTC 2018 62

63 Performance Trends for OC-Caffe. OC-Caffe-Opt: up to 80% better than Intel-optimized CPU Caffe for ResNet-50 training on the Volta V100 GPU with CUDA 9 and cuDNN 7. OC-Caffe allows efficient scale-up on a DGX-1 system with Pascal P100 GPUs with CUDA 9 and cuDNN 7. [Charts: out-of-core (over-subscription) performance and scale-up on DGX-1.] Submission under review. GTC 2018 63

64 Can Unified Memory also Simplify Framework Design? Enhanced and simplified the Caffe framework: simplified the Layer class and all inherited classes (e.g., ConvolutionLayer); removed almost all of the memory allocation, movement, and state-management code in the SyncedMemory and Blob classes; thousands of lines of repetitive and error-prone code can be eliminated. Developers can add new inherited Layer classes in a much simpler manner, e.g., implement a single Forward() function instead of separate forward_gpu() and forward_cpu() functions. Existing design: class ConvolutionLayer { public: void cpu_data(); void cpu_diff(); void gpu_data(); void gpu_diff(); void mutable_cpu_data(); void mutable_cpu_diff(); void mutable_gpu_data(); void mutable_gpu_diff(); void Forward_cpu(); void Forward_gpu(); void forward_cpu_gemm(); void forward_gpu_gemm(); void forward_cpu_bias(); void forward_gpu_bias(); void Backward_cpu(); void Backward_gpu(); void backward_cpu_gemm(); void backward_gpu_gemm(); void backward_cpu_bias(); void backward_gpu_bias(); }. Proposed high-productivity design based on managed-memory allocation and data movement: class ConvolutionLayer { public: void data(); void diff(); void mutable_data(); void mutable_diff(); void Forward(); void forward_gemm(); void forward_bias(); void Backward(); void backward_gemm(); void backward_bias(); }. Submission under review. GTC 2018 64

65 Outline Overview of the MVAPICH2 Project MVAPICH2-GPU with GPUDirect-RDMA (GDR) Current Features Multi-stream Communication for IPC CMA-based Intra-node Host-to-Host Communication Support Maximal Overlap in MPI Datatype Processing Efficient Support for Managed Memory Streaming Support with InfiniBand Multicast and GDR Support for Deep Learning Support for OpenPOWER with NVLink Support for Container Upcoming Features CMA-based Intra-node Collective Communication Support XPMEM-based Collective Communication Support Optimized Collectives for Deep Learning Out-of-core processing for Deep Learning Conclusions GTC

66 Conclusions. The MVAPICH2-GDR MPI library optimizes MPI communication on InfiniBand and RoCE (V1 and V2) clusters with GPUs, on both x86 and OpenPOWER platforms. It provides optimized designs for point-to-point two-sided and one-sided communication, datatype processing, and collective operations. It takes advantage of CUDA features like IPC and the GPUDirect RDMA family of technologies. It allows flexible solutions for streaming applications with GPUs, and provides optimized solutions for High-Performance Deep Learning (HiDL) frameworks and applications. GTC 2018 66

67 Join Us for a Connect with the Experts Session. Session CE - Connect with the Experts: MVAPICH for GPU. Schedule: Wednesday, Mar 28, 3:00 PM - 4:00 PM, LL Pod A. Speakers: Dhabaleswar Panda - Professor and University Distinguished Scholar, OSU; Davide Rossetti - Senior Software Engineer, NVIDIA; Hari Subramoni - Research Scientist, OSU. Description: MVAPICH is a well-established MPI library supporting GPUDirect. Meet with the experts to discuss how you can optimize your HPC and AI applications using GPUDirect RDMA with MVAPICH-GDR, and learn about MVAPICH-GDS, the newest feature, supporting GPUDirect Async. GTC 2018 67

68 Personnel Acknowledgments Current Students (Graduate) A. Awan (Ph.D.) R. Biswas (M.S.) M. Bayatpour (Ph.D.) S. Chakraborthy (Ph.D.) C.-H. Chu (Ph.D.) S. Guganani (Ph.D.) Current Students (Undergraduate) J. Hashmi (Ph.D.) N. Sarkauskas (B.S.) H. Javed (Ph.D.) P. Kousha (Ph.D.) D. Shankar (Ph.D.) H. Shi (Ph.D.) J. Zhang (Ph.D.) Current Research Scientists X. Lu H. Subramoni Current Post-doc A. Ruhela Current Research Specialist J. Smith M. Arnold Past Students A. Augustine (M.S.) P. Balaji (Ph.D.) S. Bhagvat (M.S.) A. Bhat (M.S.) D. Buntinas (Ph.D.) L. Chai (Ph.D.) B. Chandrasekharan (M.S.) N. Dandapanthula (M.S.) V. Dhanraj (M.S.) T. Gangadharappa (M.S.) K. Gopalakrishnan (M.S.) W. Huang (Ph.D.) W. Jiang (M.S.) J. Jose (Ph.D.) S. Kini (M.S.) M. Koop (Ph.D.) K. Kulkarni (M.S.) R. Kumar (M.S.) S. Krishnamoorthy (M.S.) K. Kandalla (Ph.D.) M. Li (Ph.D.) P. Lai (M.S.) J. Liu (Ph.D.) M. Luo (Ph.D.) A. Mamidala (Ph.D.) G. Marsh (M.S.) V. Meshram (M.S.) A. Moody (M.S.) S. Naravula (Ph.D.) R. Noronha (Ph.D.) X. Ouyang (Ph.D.) S. Pai (M.S.) S. Potluri (Ph.D.) R. Rajachandrasekar (Ph.D.) G. Santhanaraman (Ph.D.) A. Singh (Ph.D.) J. Sridhar (M.S.) S. Sur (Ph.D.) H. Subramoni (Ph.D.) K. Vaidyanathan (Ph.D.) A. Vishnu (Ph.D.) J. Wu (Ph.D.) W. Yu (Ph.D.) Past Research Scientist K. Hamidouche S. Sur Past Programmers D. Bureddy J. Perkins Past Post-Docs D. Banerjee J. Lin S. Marcarelli X. Besseron M. Luo J. Vienne H.-W. Jin E. Mancini H. Wang GTC

69 GTC Thank You! Network-Based Computing Laboratory The High-Performance MPI/PGAS Project The High-Performance Deep Learning Project

MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning

MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning Talk at Mellanox booth (SC 218) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning

MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning Talk at Mellanox Theater (SC 16) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

Exploiting Full Potential of GPU Clusters with InfiniBand using MVAPICH2-GDR

Exploiting Full Potential of GPU Clusters with InfiniBand using MVAPICH2-GDR Exploiting Full Potential of GPU Clusters with InfiniBand using MVAPICH2-GDR Presentation at Mellanox Theater () Dhabaleswar K. (DK) Panda - The Ohio State University panda@cse.ohio-state.edu Outline Communication

More information

High-Performance MPI and Deep Learning on OpenPOWER Platform

High-Performance MPI and Deep Learning on OpenPOWER Platform High-Performance MPI and Deep Learning on OpenPOWER Platform Talk at OpenPOWER SUMMIT (Mar 18) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

Designing High-Performance MPI Collectives in MVAPICH2 for HPC and Deep Learning

Designing High-Performance MPI Collectives in MVAPICH2 for HPC and Deep Learning 5th ANNUAL WORKSHOP 209 Designing High-Performance MPI Collectives in MVAPICH2 for HPC and Deep Learning Hari Subramoni Dhabaleswar K. (DK) Panda The Ohio State University The Ohio State University E-mail:

More information

Efficient and Scalable Multi-Source Streaming Broadcast on GPU Clusters for Deep Learning

Efficient and Scalable Multi-Source Streaming Broadcast on GPU Clusters for Deep Learning Efficient and Scalable Multi-Source Streaming Broadcast on Clusters for Deep Learning Ching-Hsiang Chu 1, Xiaoyi Lu 1, Ammar A. Awan 1, Hari Subramoni 1, Jahanzeb Hashmi 1, Bracy Elton 2 and Dhabaleswar

More information

Exploiting InfiniBand and GPUDirect Technology for High Performance Collectives on GPU Clusters

Exploiting InfiniBand and GPUDirect Technology for High Performance Collectives on GPU Clusters Exploiting InfiniBand and Direct Technology for High Performance Collectives on Clusters Ching-Hsiang Chu chu.368@osu.edu Department of Computer Science and Engineering The Ohio State University OSU Booth

More information

High-Performance Broadcast for Streaming and Deep Learning

High-Performance Broadcast for Streaming and Deep Learning High-Performance Broadcast for Streaming and Deep Learning Ching-Hsiang Chu chu.368@osu.edu Department of Computer Science and Engineering The Ohio State University OSU Booth - SC17 2 Outline Introduction

More information

Performance of PGAS Models on KNL: A Comprehensive Study with MVAPICH2-X

Performance of PGAS Models on KNL: A Comprehensive Study with MVAPICH2-X Performance of PGAS Models on KNL: A Comprehensive Study with MVAPICH2-X Intel Nerve Center (SC 7) Presentation Dhabaleswar K (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu Parallel

More information

High-Performance Training for Deep Learning and Computer Vision HPC

High-Performance Training for Deep Learning and Computer Vision HPC High-Performance Training for Deep Learning and Computer Vision HPC Panel at CVPR-ECV 18 by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning

MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning GPU Technology Conference GTC 217 by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

MVAPICH2 MPI Libraries to Exploit Latest Networking and Accelerator Technologies

MVAPICH2 MPI Libraries to Exploit Latest Networking and Accelerator Technologies MVAPICH2 MPI Libraries to Exploit Latest Networking and Accelerator Technologies Talk at NRL booth (SC 216) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

Latest Advances in MVAPICH2 MPI Library for NVIDIA GPU Clusters with InfiniBand

Latest Advances in MVAPICH2 MPI Library for NVIDIA GPU Clusters with InfiniBand Latest Advances in MVAPICH2 MPI Library for NVIDIA GPU Clusters with InfiniBand Presentation at GTC 2014 by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning

Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning Ammar Ahmad Awan, Khaled Hamidouche, Akshay Venkatesh, and Dhabaleswar K. Panda Network-Based Computing Laboratory Department

More information

Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters

Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters Designing High Performance Heterogeneous Broadcast for Streaming Applications on Clusters 1 Ching-Hsiang Chu, 1 Khaled Hamidouche, 1 Hari Subramoni, 1 Akshay Venkatesh, 2 Bracy Elton and 1 Dhabaleswar

More information

High Performance MPI Support in MVAPICH2 for InfiniBand Clusters

High Performance MPI Support in MVAPICH2 for InfiniBand Clusters High Performance MPI Support in MVAPICH2 for InfiniBand Clusters A Talk at NVIDIA Booth (SC 11) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

How to Boost the Performance of Your MPI and PGAS Applications with MVAPICH2 Libraries

How to Boost the Performance of Your MPI and PGAS Applications with MVAPICH2 Libraries How to Boost the Performance of Your MPI and PGAS s with MVAPICH2 Libraries A Tutorial at the MVAPICH User Group (MUG) Meeting 18 by The MVAPICH Team The Ohio State University E-mail: panda@cse.ohio-state.edu

More information

Designing Shared Address Space MPI libraries in the Many-core Era

Designing Shared Address Space MPI libraries in the Many-core Era Designing Shared Address Space MPI libraries in the Many-core Era Jahanzeb Hashmi hashmi.29@osu.edu (NBCL) The Ohio State University Outline Introduction and Motivation Background Shared-memory Communication

More information

Accelerating MPI Message Matching and Reduction Collectives For Multi-/Many-core Architectures Mohammadreza Bayatpour, Hari Subramoni, D. K.

Accelerating MPI Message Matching and Reduction Collectives For Multi-/Many-core Architectures Mohammadreza Bayatpour, Hari Subramoni, D. K. Accelerating MPI Message Matching and Reduction Collectives For Multi-/Many-core Architectures Mohammadreza Bayatpour, Hari Subramoni, D. K. Panda Department of Computer Science and Engineering The Ohio

More information

Designing MPI and PGAS Libraries for Exascale Systems: The MVAPICH2 Approach

Designing MPI and PGAS Libraries for Exascale Systems: The MVAPICH2 Approach Designing MPI and PGAS Libraries for Exascale Systems: The MVAPICH2 Approach Talk at OpenFabrics Workshop (April 216) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu

More information

Designing High-Performance MPI Libraries for Multi-/Many-core Era

Designing High-Performance MPI Libraries for Multi-/Many-core Era Designing High-Performance MPI Libraries for Multi-/Many-core Era Talk at IXPUG-Fall Conference (September 18) J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni and D. K Panda The Ohio State University

More information

High-performance and Scalable MPI+X Library for Emerging HPC Clusters & Cloud Platforms

High-performance and Scalable MPI+X Library for Emerging HPC Clusters & Cloud Platforms High-performance and Scalable MPI+X Library for Emerging HPC Clusters & Cloud Platforms Talk at Intel HPC Developer Conference (SC 17) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu

More information

Optimized Non-contiguous MPI Datatype Communication for GPU Clusters: Design, Implementation and Evaluation with MVAPICH2

Optimized Non-contiguous MPI Datatype Communication for GPU Clusters: Design, Implementation and Evaluation with MVAPICH2 Optimized Non-contiguous MPI Datatype Communication for GPU Clusters: Design, Implementation and Evaluation with MVAPICH2 H. Wang, S. Potluri, M. Luo, A. K. Singh, X. Ouyang, S. Sur, D. K. Panda Network-Based

More information

Scalability and Performance of MVAPICH2 on OakForest-PACS

Scalability and Performance of MVAPICH2 on OakForest-PACS Scalability and Performance of MVAPICH2 on OakForest-PACS Talk at JCAHPC Booth (SC 17) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

CUDA Kernel based Collective Reduction Operations on Large-scale GPU Clusters

CUDA Kernel based Collective Reduction Operations on Large-scale GPU Clusters CUDA Kernel based Collective Reduction Operations on Large-scale GPU Clusters Ching-Hsiang Chu, Khaled Hamidouche, Akshay Venkatesh, Ammar Ahmad Awan and Dhabaleswar K. (DK) Panda Speaker: Sourav Chakraborty

More information

Performance of PGAS Models on KNL: A Comprehensive Study with MVAPICH2-X

Performance of PGAS Models on KNL: A Comprehensive Study with MVAPICH2-X Performance of PGAS Models on KNL: A Comprehensive Study with MVAPICH2-X IXPUG 7 PresentaNon J. Hashmi, M. Li, H. Subramoni and DK Panda The Ohio State University E-mail: {hashmi.29,li.292,subramoni.,panda.2}@osu.edu

More information

High-Performance MPI Library with SR-IOV and SLURM for Virtualized InfiniBand Clusters

High-Performance MPI Library with SR-IOV and SLURM for Virtualized InfiniBand Clusters High-Performance MPI Library with SR-IOV and SLURM for Virtualized InfiniBand Clusters Talk at OpenFabrics Workshop (April 2016) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu

More information

GPU- Aware Design, Implementation, and Evaluation of Non- blocking Collective Benchmarks

GPU- Aware Design, Implementation, and Evaluation of Non- blocking Collective Benchmarks GPU- Aware Design, Implementation, and Evaluation of Non- blocking Collective Benchmarks Presented By : Esthela Gallardo Ammar Ahmad Awan, Khaled Hamidouche, Akshay Venkatesh, Jonathan Perkins, Hari Subramoni,

More information

High-Performance and Scalable Designs of Programming Models for Exascale Systems

High-Performance and Scalable Designs of Programming Models for Exascale Systems High-Performance and Scalable Designs of Programming Models for Exascale Systems Keynote Talk at HPCAC, Switzerland (April 217) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu

More information

Support Hybrid MPI+PGAS (UPC/OpenSHMEM/CAF) Programming Models through a Unified Runtime: An MVAPICH2-X Approach

Support Hybrid MPI+PGAS (UPC/OpenSHMEM/CAF) Programming Models through a Unified Runtime: An MVAPICH2-X Approach Support Hybrid MPI+PGAS (UPC/OpenSHMEM/CAF) Programming Models through a Unified Runtime: An MVAPICH2-X Approach Talk at OSC theater (SC 15) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail:

More information

Designing Convergent HPC, Deep Learning and Big Data Analytics Software Stacks for Exascale Systems

Designing Convergent HPC, Deep Learning and Big Data Analytics Software Stacks for Exascale Systems Designing Convergent HPC, Deep Learning and Big Data Analytics Software Stacks for Exascale Systems Talk at HPCAC-AI Switzerland Conference (April 19) by Dhabaleswar K. (DK) Panda The Ohio State University

More information

Exploiting Computation and Communication Overlap in MVAPICH2 and MVAPICH2-GDR MPI Libraries

Exploiting Computation and Communication Overlap in MVAPICH2 and MVAPICH2-GDR MPI Libraries Exploiting Computation and Communication Overlap in MVAPICH2 and MVAPICH2-GDR MPI Libraries Talk at Overlapping Communication with Computation Symposium (April 18) by Dhabaleswar K. (DK) Panda The Ohio

More information

Support for GPUs with GPUDirect RDMA in MVAPICH2 SC 13 NVIDIA Booth

Support for GPUs with GPUDirect RDMA in MVAPICH2 SC 13 NVIDIA Booth Support for GPUs with GPUDirect RDMA in MVAPICH2 SC 13 NVIDIA Booth by D.K. Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda Outline Overview of MVAPICH2-GPU

More information

Supporting PGAS Models (UPC and OpenSHMEM) on MIC Clusters

Supporting PGAS Models (UPC and OpenSHMEM) on MIC Clusters Supporting PGAS Models (UPC and OpenSHMEM) on MIC Clusters Presentation at IXPUG Meeting, July 2014 by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

Designing MPI and PGAS Libraries for Exascale Systems: The MVAPICH2 Approach

Designing MPI and PGAS Libraries for Exascale Systems: The MVAPICH2 Approach Designing MPI and PGAS Libraries for Exascale Systems: The MVAPICH2 Approach Talk at OpenFabrics Workshop (March 217) by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu

More information

MPI Alltoall Personalized Exchange on GPGPU Clusters: Design Alternatives and Benefits

MPI Alltoall Personalized Exchange on GPGPU Clusters: Design Alternatives and Benefits MPI Alltoall Personalized Exchange on GPGPU Clusters: Design Alternatives and Benefits Ashish Kumar Singh, Sreeram Potluri, Hao Wang, Krishna Kandalla, Sayantan Sur, and Dhabaleswar K. Panda Network-Based

More information

Enabling Efficient Use of UPC and OpenSHMEM PGAS models on GPU Clusters

Enabling Efficient Use of UPC and OpenSHMEM PGAS models on GPU Clusters Enabling Efficient Use of UPC and OpenSHMEM PGAS models on GPU Clusters Presentation at GTC 2014 by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda

More information

Designing Efficient HPC, Big Data, Deep Learning, and Cloud Computing Middleware for Exascale Systems
Keynote Talk at the HPCAC China Conference by Dhabaleswar K. (DK) Panda, The Ohio State University

Designing Optimized MPI Broadcast and Allreduce for Many Integrated Core (MIC) InfiniBand Clusters
K. Kandalla, A. Venkatesh, K. Hamidouche, S. Potluri, D. Bureddy, and D. K. Panda; presented by Dr. Xiaoyi Lu

SR-IOV Support for Virtualization on InfiniBand Clusters: Early Experience
Jithin Jose, Mingzhe Li, Xiaoyi Lu, Krishna Kandalla, Mark Arnold, and Dhabaleswar K. (DK) Panda, Network-Based Computing Laboratory, The Ohio State University

Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems
Keynote Talk at the HPCAC Stanford Conference (Feb '18) by Dhabaleswar K. (DK) Panda, The Ohio State University

Coupling GPUDirect RDMA and InfiniBand Hardware Multicast Technologies for Streaming Applications
GPU Technology Conference (GTC 2016) by Dhabaleswar K. (DK) Panda, The Ohio State University

MVAPICH2: A High Performance MPI Library for NVIDIA GPU Clusters with InfiniBand
Presentation at GTC 2013 by Dhabaleswar K. (DK) Panda, The Ohio State University

Addressing Emerging Challenges in Designing HPC Runtimes: Energy-Awareness, Accelerators and Virtualization
Talk at HPCAC-Switzerland (Mar '16) by Dhabaleswar K. (DK) Panda, The Ohio State University

Overview of the MVAPICH Project: Latest Status and Future Roadmap
MVAPICH2 User Group (MUG) Meeting by Dhabaleswar K. (DK) Panda, The Ohio State University

Designing HPC, Big Data, Deep Learning, and Cloud Middleware for Exascale Systems: Challenges and Opportunities
Keynote Talk at the SEA Symposium (April '18) by Dhabaleswar K. (DK) Panda, The Ohio State University

Accelerating MPI Message Matching and Reduction Collectives For Multi-/Many-core Architectures
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Department of Computer Science and Engineering, The Ohio State University

Designing HPC, Big Data, and Deep Learning Middleware for Exascale Systems: Challenges and Opportunities
Dhabaleswar K. (DK) Panda, Professor and University Distinguished Scholar of Computer Science, The Ohio State University

Scaling with PGAS Languages
Panel Presentation at the OFA Developers Workshop (2013) by Dhabaleswar K. (DK) Panda, The Ohio State University

Designing and Building Efficient HPC Cloud with Modern Networking Technologies on Heterogeneous HPC Clusters
Jie Zhang (advisor: Dr. Dhabaleswar K. Panda), Department of Computer Science & Engineering, The Ohio State University

How to Tune and Extract Higher Performance with MVAPICH2 Libraries?
An XSEDE ECSS webinar by Dhabaleswar K. (DK) Panda, The Ohio State University

How to Boost the Performance of Your MPI and PGAS Applications with MVAPICH2 Libraries
A Tutorial at MVAPICH User Group 2017 by The MVAPICH Team, The Ohio State University

Intra-MIC MPI Communication using MVAPICH2: Early Experience
Sreeram Potluri, Karen Tomko, Devendar Bureddy, and Dhabaleswar K. Panda, Department of Computer Science and Engineering, The Ohio State University

Designing Power-Aware Collective Communication Algorithms for InfiniBand Clusters
Krishna Kandalla, Emilio P. Mancini, Sayantan Sur, and Dhabaleswar K. Panda, Department of Computer Science & Engineering, The Ohio State University

Optimizing MPI Communication on Multi-GPU Systems using CUDA Inter-Process Communication
Sreeram Potluri, Hao Wang, Devendar Bureddy, Ashish Kumar Singh, Carlos Rosales, and Dhabaleswar K. Panda

The MVAPICH2 Project: Latest Developments and Plans Towards Exascale Computing
Presentation at the Mellanox Theatre (SC '16) by Dhabaleswar K. (DK) Panda, The Ohio State University

MVAPICH2 Project Update and Big Data Acceleration
Presentation at the HPC Advisory Council European Conference 2012 by Dhabaleswar K. (DK) Panda, The Ohio State University

In the multi-core age, how do larger, faster, cheaper, and more responsive memory sub-systems affect data management?
Panel at ADMS 2011 by Dhabaleswar K. (DK) Panda, Network-Based Computing Laboratory, The Ohio State University

Designing High Performance Communication Middleware with Emerging Multi-core Architectures
Dhabaleswar K. (DK) Panda, Department of Computer Science and Engineering, The Ohio State University

How to Boost the Performance of your MPI and PGAS Applications with MVAPICH2 Libraries?
A Tutorial at the MVAPICH User Group Meeting 2016 by The MVAPICH Group, The Ohio State University

Big Data Meets HPC: Exploiting HPC Technologies for Accelerating Big Data Processing and Management
SigHPC BigData BoF (SC '17) by Dhabaleswar K. (DK) Panda, The Ohio State University

Improving Application Performance and Predictability using Multiple Virtual Lanes in Modern Multi-Core InfiniBand Clusters
Hari Subramoni, Ping Lai, Sayantan Sur, and Dhabaleswar K. Panda, Department of Computer Science and Engineering, The Ohio State University

High Performance Migration Framework for MPI Applications on HPC Cloud
Jie Zhang, Xiaoyi Lu, and Dhabaleswar K. Panda, Computer Science & Engineering Department, The Ohio State University

MVAPICH2 and MVAPICH2-MIC: Latest Status
Presentation at the IXPUG Meeting (July 2014) by Dhabaleswar K. (DK) Panda and Khaled Hamidouche, The Ohio State University

Job Startup at Exascale: Challenges and Solutions
Hari Subramoni, The Ohio State University

Unifying UPC and MPI Runtimes: Experience with MVAPICH
Jithin Jose, Miao Luo, Sayantan Sur, and D. K. Panda, Network-Based Computing Laboratory, Department of Computer Science and Engineering, The Ohio State University

Building the Most Efficient Machine Learning System
Mellanox, The Artificial Intelligence Interconnect Company (June 2017)

NCCL 2.0
Sylvain Jeaugey

Characterizing and Benchmarking Deep Learning Systems on Modern Data Center Architectures
Talk at Bench 2018 by Xiaoyi Lu, The Ohio State University

The MVAPICH2 Project: Pushing the Frontier of InfiniBand and RDMA Networking Technologies
Talk at the OSC/OH-TECH Booth (SC '15) by Dhabaleswar K. (DK) Panda, The Ohio State University

The MVAPICH2 Project: Latest Developments and Plans Towards Exascale Computing
Presentation at the OSU Booth (SC '18) by Hari Subramoni, The Ohio State University

Accelerating HPL on Heterogeneous GPU Clusters
Presentation at GTC 2014 by Dhabaleswar K. (DK) Panda, The Ohio State University

Exploiting GPUDirect RDMA in Designing High Performance OpenSHMEM for NVIDIA GPU Clusters
Khaled Hamidouche, Akshay Venkatesh, Ammar Ahmad Awan, et al., 2015 IEEE International Conference on Cluster Computing

RDMA over Ethernet - A Preliminary Study
Hari Subramoni, Miao Luo, Ping Lai, and Dhabaleswar K. Panda, Computer Science & Engineering Department, The Ohio State University (HPIDC '09)

Latest Advances in MVAPICH2 MPI Library for NVIDIA GPU Clusters with InfiniBand
Presented at GTC '15 by Dhabaleswar K. (DK) Panda, The Ohio State University

TECHNOLOGIES FOR IMPROVED SCALING ON GPU CLUSTERS
Jiri Kraus, Davide Rossetti, and Sreeram Potluri, June 23rd, 2016

Accelerating Big Data Processing with RDMA-Enhanced Apache Hadoop
Keynote Talk at BPOE-4, in conjunction with ASPLOS 14 (March 2014), by Dhabaleswar K. (DK) Panda, The Ohio State University

Enabling Efficient Use of UPC and OpenSHMEM PGAS Models on GPU Clusters
Presented at GTC '15 by Dhabaleswar K. (DK) Panda, The Ohio State University

Interconnect Your Future: Smart Interconnect for Next Generation HPC Platforms
Gilad Shainer, Mellanox, August 2016, 4th Annual MVAPICH User Group (MUG) Meeting

OPEN MPI WITH RDMA SUPPORT AND CUDA
Rolf vandeVaart, NVIDIA

CRFS: A Lightweight User-Level Filesystem for Generic Checkpoint/Restart
Xiangyong Ouyang, Raghunath Rajachandrasekar, Xavier Besseron, Hao Wang, Jian Huang, and Dhabaleswar K. Panda, Department of Computer Science and Engineering, The Ohio State University

Multi-Threaded UPC Runtime for GPU to GPU communication over InfiniBand
Miao Luo, Hao Wang, and D. K. Panda, Network-Based Computing Laboratory, Department of Computer Science and Engineering, The Ohio State University

Can Memory-Less Network Adapters Benefit Next-Generation InfiniBand Systems?
Sayantan Sur, Abhinav Vishnu, Hyun-Wook Jin, Wei Huang, and D. K. Panda, The Ohio State University

The Future of Supercomputer Software Libraries
Talk at the HPC Advisory Council Israel Supercomputing Conference by Dhabaleswar K. (DK) Panda, The Ohio State University

Bridging Neuroscience and HPC with MPI-LiFE
Shashank Gugnani, Network-Based Computing Laboratory, The Ohio State University (SC '17)

INAM²: InfiniBand Network Analysis and Monitoring with MPI
H. Subramoni, A. A. Mathews, M. Arnold, J. Perkins, X. Lu, K. Hamidouche, and D. K. Panda, Department of Computer Science and Engineering, The Ohio State University

Accelerate Big Data Processing (Hadoop, Spark, Memcached, & TensorFlow) with HPC Technologies
Talk at the Intel HPC Developer Conference 2017 (SC '17) by Dhabaleswar K. (DK) Panda, The Ohio State University

MELLANOX EDR UPDATE & GPUDIRECT
Mellanox Sr. SE 정연구

Performance Analysis and Evaluation of Mellanox ConnectX InfiniBand Architecture with Multi-Core Platforms
Sayantan Sur, Matt Koop, Lei Chai, and Dhabaleswar K. Panda, Network-Based Computing Lab, The Ohio State University

Unified Runtime for PGAS and MPI over OFED
D. K. Panda and Sayantan Sur, Network-Based Computing Laboratory, Department of Computer Science and Engineering, The Ohio State University, USA

Big Data Meets HPC: Exploiting HPC Technologies for Accelerating Big Data Processing
Talk at the HPCAC Stanford Conference (Feb '18) by Dhabaleswar K. (DK) Panda, The Ohio State University

Designing OpenSHMEM and Hybrid MPI+OpenSHMEM Libraries for Exascale Systems: MVAPICH2-X Experience
Talk at the OpenSHMEM Workshop (August 2016) by Dhabaleswar K. (DK) Panda, The Ohio State University

High-Performance and Scalable Designs of Programming Models for Exascale Systems
Talk at the HPC Advisory Council Stanford Conference (2015) by Dhabaleswar K. (DK) Panda, The Ohio State University

Advantages to Using MVAPICH2 on TACC HPC Clusters
Jérôme Vienne, Texas Advanced Computing Center (TACC), University of Texas at Austin, Wednesday, 27th August 2014

Efficient and Truly Passive MPI-3 RMA Synchronization Using InfiniBand Atomics
Mingzhe Li, Sreeram Potluri, Khaled Hamidouche, Jithin Jose, and Dhabaleswar K. Panda, Network-Based Computing Laboratory, Department of Computer Science and Engineering, The Ohio State University

S8688: INSIDE DGX-2
Glenn Dearth and Vyas Venkataraman, March 28, 2018