Performance of PGAS Models on KNL: A Comprehensive Study with MVAPICH2-X
1 Performance of PGAS Models on KNL: A Comprehensive Study with MVAPICH2-X. IXPUG '17 Presentation. J. Hashmi, M. Li, H. Subramoni and D. K. Panda, The Ohio State University. {hashmi.29, li.292, subramoni., panda.2}@osu.edu
2 Parallel Programming Models Overview. [Figure: three machine models: Shared Memory Model (SHMEM, DSM), Distributed Memory Model (MPI, Message Passing Interface), and Partitioned Global Address Space (PGAS) with logically shared memory over distributed memories (Global Arrays, UPC, Chapel, X10, CAF, ...).] Programming models provide abstract machine models. Models can be mapped onto different types of systems, e.g., Distributed Shared Memory (DSM), MPI within a node, etc. PGAS models and hybrid MPI+PGAS models are gradually gaining importance.
3 Partitioned Global Address Space (PGAS) Models. Key features: simple shared-memory abstractions, lightweight one-sided communication, and easier expression of irregular communication. Different approaches to PGAS: languages (Unified Parallel C (UPC), Co-Array Fortran (CAF), X10, Chapel) and libraries (OpenSHMEM, UPC++, Global Arrays).
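To make the lightweight one-sided model concrete, below is a minimal OpenSHMEM sketch (not taken from the talk) in which PE 0 writes directly into PE 1's symmetric memory with no matching receive; the buffer name and value are illustrative, and with MVAPICH2-X such a program would typically be built with an OpenSHMEM compiler wrapper such as oshcc/oshc++.

    // Minimal OpenSHMEM one-sided put: PE 0 deposits a value into PE 1's memory.
    // Illustrative sketch; buffer names and values are arbitrary.
    #include <shmem.h>
    #include <cstdio>

    int main() {
        shmem_init();
        int me = shmem_my_pe();
        int npes = shmem_n_pes();

        // Symmetric allocation: the same remotely accessible buffer exists on every PE.
        long *dst = (long *)shmem_malloc(sizeof(long));
        *dst = -1;
        shmem_barrier_all();

        // One-sided put: no receive call is issued on PE 1.
        if (me == 0 && npes > 1)
            shmem_long_p(dst, 42, 1);
        shmem_barrier_all();

        if (me == 1)
            printf("PE %d received %ld\n", me, *dst);

        shmem_free(dst);
        shmem_finalize();
        return 0;
    }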
4 Hybrid (MPI+PGAS) Programming. Application sub-kernels can be re-written in MPI or PGAS based on their communication characteristics. Benefits: best of the distributed-memory computing model and best of the shared-memory computing model. Cons: two different runtimes, need for great care while programming, and proneness to deadlock if not careful. [Figure: an HPC application composed of Kernel 1 (MPI), Kernel 2 (PGAS/MPI), Kernel 3 (MPI), ..., Kernel N (PGAS/MPI).]
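As an illustration of this hybrid style, here is a minimal sketch in which MPI handles a regular collective while OpenSHMEM handles an irregular one-sided update. It assumes MPI ranks and OpenSHMEM PEs map one-to-one and that the runtime (e.g., MVAPICH2-X) supports initializing both layers in one program; it is not taken from the Graph500 or Sort codes discussed below.

    // Hybrid MPI + OpenSHMEM sketch: MPI collectives plus PGAS-style one-sided updates.
    #include <mpi.h>
    #include <shmem.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        shmem_init();                      // OpenSHMEM layer started alongside MPI

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // PGAS part: irregular, one-sided updates to a counter owned by PE 0.
        int *counter = (int *)shmem_malloc(sizeof(int));
        *counter = 0;
        shmem_barrier_all();
        shmem_int_inc(counter, 0);         // every PE atomically bumps PE 0's counter
        shmem_barrier_all();

        // MPI part: a regular collective on the same set of processes.
        int one = 1, sum = 0;
        MPI_Allreduce(&one, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("shmem counter = %d, MPI allreduce sum = %d\n", *counter, sum);

        shmem_free(counter);
        shmem_finalize();
        MPI_Finalize();
        return 0;
    }

With a unified runtime such as MVAPICH2-X, both layers share one communication substrate, which is what avoids the deadlock and resource-duplication issues listed above.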
5 MVAPICH2 Software Family: high-performance parallel programming libraries, microbenchmarks, and tools.
- MVAPICH2: support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
- MVAPICH2-X: advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with a unified communication runtime
- MVAPICH2-GDR: optimized MPI for clusters with NVIDIA GPUs
- MVAPICH2-Virt: high-performance and scalable MPI for hypervisor- and container-based HPC clouds
- MVAPICH2-EA: energy-aware and high-performance MPI
- MVAPICH2-MIC: optimized MPI for clusters with Intel KNC
- OMB: microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
- OSU INAM: network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
- OEMT: utility to measure the energy consumption of MPI applications
6 MVAPICH2-X for Hybrid MPI + PGAS Applications. Current model: separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI, with possible deadlock if both runtimes are not progressed and higher network resource consumption. MVAPICH2-X provides a unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, and CAF. Available since 2012 (starting with MVAPICH2-X 1.9). http://mvapich.cse.ohio-state.edu
7 Application-Level Performance with Graph500 and Sort.
Graph500 execution time (MPI-Simple, MPI-CSC, MPI-CSR, Hybrid MPI+OpenSHMEM at 4K, 8K, and 16K processes): the hybrid (MPI+OpenSHMEM) Graph500 design shows, at 8,192 processes, 2.4X improvement over MPI-CSR and 7.6X improvement over MPI-Simple; at 16,384 processes, 1.5X improvement over MPI-CSR and 13X improvement over MPI-Simple.
Sort execution time (MPI vs. Hybrid; input data and process counts: 500GB-512, 1TB-1K, 2TB-2K, 4TB-4K): the hybrid (MPI+OpenSHMEM) Sort application at 4,096 processes with 4 TB input takes 1172 sec (0.36 TB/min) versus 2408 sec (0.16 TB/min) for MPI, a 51% improvement over the MPI design.
References: J. Jose, S. Potluri, K. Tomko and D. K. Panda, "Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models," International Supercomputing Conference (ISC '13), June 2013. J. Jose, K. Kandalla, M. Luo and D. K. Panda, "Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation," Int'l Conference on Parallel Processing (ICPP '12), September 2012. J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, "Optimizing Collective Communication in OpenSHMEM," Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013.
8 Performance of PGAS Models on KNL (outline): performance of Put and Get with OpenSHMEM, UPC, and UPC++; comparison of KNL and Broadwell for OpenSHMEM point-to-point, collective, and atomic operations; impact of AVX-512 vectorization and MCDRAM on OpenSHMEM application kernels; performance of UPC++ application kernels on the MVAPICH2-X communication runtime.
9 Performance of PGAS Models on KNL using MVAPICH2-X. [Charts: intra-node PUT and GET latency (us) vs. message size (up to 1M bytes) for shmem_put/shmem_get, upc_putmem/upc_getmem, and upcxx_async_put/upcxx_async_get.] Intra-node performance of one-sided Put/Get operations of PGAS libraries/languages using the MVAPICH2-X communication conduit. Near-native communication performance is observed on KNL.
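The latency curves come from OSU-microbenchmark-style loops. The sketch below shows the general shape of such a put-latency test for shmem_putmem; iteration counts, message sizes, and timing details are illustrative and this is not the actual OMB source.

    // Simplified put-latency loop in the spirit of the OSU microbenchmarks.
    #include <shmem.h>
    #include <sys/time.h>
    #include <cstdio>
    #include <cstring>

    static double now_us() {
        timeval tv;
        gettimeofday(&tv, nullptr);
        return tv.tv_sec * 1e6 + tv.tv_usec;
    }

    int main() {
        shmem_init();
        int me = shmem_my_pe();
        const int iters = 1000, skip = 100;    // warm-up plus timed iterations (illustrative)
        const size_t max_size = 1 << 20;       // message sizes up to 1 MB

        char *src = (char *)shmem_malloc(max_size);
        char *dst = (char *)shmem_malloc(max_size);
        memset(src, 'a', max_size);

        for (size_t size = 1; size <= max_size; size *= 2) {
            shmem_barrier_all();
            double start = 0.0;
            if (me == 0) {
                for (int i = 0; i < iters + skip; i++) {
                    if (i == skip) start = now_us();
                    shmem_putmem(dst, src, size, 1);   // one-sided put to PE 1
                    shmem_quiet();                     // wait for completion of the put
                }
                printf("%zu bytes: %.2f us\n", size, (now_us() - start) / iters);
            }
            shmem_barrier_all();
        }
        shmem_free(src);
        shmem_free(dst);
        shmem_finalize();
        return 0;
    }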
10 Performance of PGAS Models on KNL using MVAPICH2-X. [Charts: inter-node PUT and GET latency (us) vs. message size (up to 1M bytes) for shmem_put/shmem_get, upc_putmem/upc_getmem, and upcxx_async_put/upcxx_async_get.] Inter-node performance of one-sided Put/Get operations using the MVAPICH2-X communication conduit with an InfiniBand HCA (MT45). Native IB performance is observed for all three PGAS models.
11 Performance of PGAS Models on KNL (outline): performance of Put and Get with OpenSHMEM, UPC, and UPC++; comparison of KNL and Broadwell for OpenSHMEM point-to-point, collective, and atomic operations; impact of AVX-512 vectorization and MCDRAM on OpenSHMEM application kernels; performance of UPC++ application kernels on the MVAPICH2-X communication runtime.
12 Microbenchmark Evaluations (Intra-node Put/Get). [Charts: shmem_putmem and shmem_getmem latency (us) vs. message size (bytes), Broadwell vs. KNL.] Broadwell shows about 3X better performance than KNL for large messages. Multi-threaded memcpy routines on KNL could offset the degradation caused by the slower cores on basic Put/Get operations. J. Hashmi, M. Li, H. Subramoni, D. Panda, "Exploiting and Evaluating OpenSHMEM on KNL Architecture," Fourth Workshop on OpenSHMEM, Aug 2017.
13 Microbenchmark Evaluations (Inter-node Put/Get). [Charts: shmem_putmem and shmem_getmem latency (us) vs. message size (bytes), Broadwell vs. KNL.] Inter-node small-message latency is only 2X worse on KNL, while large-message performance is almost identical on KNL and Broadwell.
14 Microbenchmark Evaluations (Collectives). [Charts: shmem_reduce and shmem_broadcast latency (us) vs. message size (bytes) on 128 processes, Broadwell vs. KNL, showing roughly 4X and 2X gaps respectively.] Setup: 2 KNL nodes (64 ppn) and 8 Broadwell nodes (16 ppn). Up to 4X degradation is observed on KNL using the collective benchmarks. The basic point-to-point performance difference is reflected in the collectives as well.
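For reference, the OpenSHMEM 1.3-style invocation of the two collectives measured here looks roughly as follows, with an active set spanning all PEs and the required symmetric pSync/pWrk work arrays; this is a sketch, not the benchmark code itself.

    // OpenSHMEM 1.3-style reduce and broadcast across all PEs.
    #include <shmem.h>
    #include <cstdio>

    static long pSyncBcast[SHMEM_BCAST_SYNC_SIZE];
    static long pSyncReduce[SHMEM_REDUCE_SYNC_SIZE];
    static int  pWrk[SHMEM_REDUCE_MIN_WRKDATA_SIZE];
    static int  src, sum, bcast_buf;               // globals are symmetric

    int main() {
        for (int i = 0; i < SHMEM_BCAST_SYNC_SIZE; i++)  pSyncBcast[i]  = SHMEM_SYNC_VALUE;
        for (int i = 0; i < SHMEM_REDUCE_SYNC_SIZE; i++) pSyncReduce[i] = SHMEM_SYNC_VALUE;

        shmem_init();
        int me = shmem_my_pe();
        int npes = shmem_n_pes();

        src = me + 1;
        shmem_barrier_all();

        // Sum-reduce one int from every PE into sum on every PE (active set = all PEs).
        shmem_int_sum_to_all(&sum, &src, 1, 0, 0, npes, pWrk, pSyncReduce);
        shmem_barrier_all();

        // Broadcast 32-bit data from PE 0 to all other PEs.
        shmem_broadcast32(&bcast_buf, &sum, 1, 0, 0, 0, npes, pSyncBcast);

        if (me == npes - 1) printf("sum=%d, broadcast=%d\n", sum, bcast_buf);
        shmem_finalize();
        return 0;
    }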
15 Microbenchmark Evaluations (Atomics). [Chart: OpenSHMEM atomics latency (us) on 128 processes for fadd32, fadd64, cswap32, cswap64, inc32, and inc64, KNL vs. Broadwell.] Using multiple KNL nodes, atomic operations showed about 2.5X degradation on the compare-swap and increment atomics. Fetch-and-add (32-bit) showed up to 4X degradation on KNL.
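The six operations on the chart correspond to the OpenSHMEM 1.3 atomics API; a small usage sketch is shown below. The target PE and operand values are arbitrary and the code is illustrative, not the benchmark source.

    // OpenSHMEM 1.3-style atomics matching the operations on the slide (fadd, cswap, inc).
    #include <shmem.h>
    #include <cstdio>

    static int  ctr32;       // symmetric 32-bit counter
    static long ctr64;       // symmetric 64-bit counter

    int main() {
        shmem_init();
        int me = shmem_my_pe();
        ctr32 = 0;
        ctr64 = 0;
        shmem_barrier_all();

        int target = 0;                                    // all PEs hammer PE 0's counters
        int  old32 = shmem_int_fadd(&ctr32, 1, target);    // 32-bit fetch-and-add
        long old64 = shmem_long_fadd(&ctr64, 1, target);   // 64-bit fetch-and-add
        shmem_int_inc(&ctr32, target);                     // 32-bit increment (no fetch)
        int  prev  = shmem_int_cswap(&ctr32, -1, 0, target); // compare-and-swap (no-op unless -1)

        shmem_barrier_all();
        if (me == 0)
            printf("ctr32=%d ctr64=%ld (this PE saw old32=%d old64=%ld prev=%d)\n",
                   ctr32, ctr64, old32, old64, prev);
        shmem_finalize();
        return 0;
    }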
16 Performance of PGAS Models on KNL (outline): performance of Put and Get with OpenSHMEM, UPC, and UPC++; comparison of KNL and Broadwell for OpenSHMEM point-to-point, collective, and atomic operations; impact of AVX-512 vectorization and MCDRAM on OpenSHMEM application kernels; performance of UPC++ application kernels on the MVAPICH2-X communication runtime.
17 NAS Parallel Benchmark Evaluation. [Charts: execution time (s) vs. number of processes for NAS-BT (PDE solver, CLASS=B) and NAS-EP (RNG, CLASS=B) under KNL (Default), KNL (AVX-512), KNL (AVX-512+MCDRAM), and Broadwell.] AVX-512 vectorized execution of the BT kernel on KNL showed 30% improvement over the default execution, while the EP kernel didn't show any improvement. Broadwell showed 20% improvement over optimized KNL on BT and 4X improvement over all KNL executions on the EP kernel (random number generation).
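The "AVX-512+MCDRAM" configurations place data in KNL's high-bandwidth memory. In flat mode this is commonly done either with numactl-style binding or by allocating hot arrays through the memkind/hbwmalloc interface, as in the sketch below; this is illustrative and not the build setup used for these NAS runs.

    // Allocating a hot array from KNL MCDRAM (flat mode) with the memkind/hbwmalloc API.
    #include <hbwmalloc.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        const size_t n = 1 << 24;

        double *a = nullptr;
        bool hbw = (hbw_check_available() == 0);    // 0 means high-bandwidth memory is usable
        if (hbw)
            a = (double *)hbw_malloc(n * sizeof(double));   // placed in MCDRAM
        else
            a = (double *)malloc(n * sizeof(double));       // fall back to DDR
        if (!a) return 1;

        for (size_t i = 0; i < n; i++) a[i] = 1.0;          // bandwidth-bound initialization
        printf("a[0]=%f\n", a[0]);

        if (hbw) hbw_free(a); else free(a);
        return 0;
    }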
18 NAS Parallel Benchmark Evaluation (Cont'd). [Charts: execution time (s) vs. number of processes for NAS-SP (non-linear PDE, CLASS=B) and NAS-MG (MultiGrid solver, CLASS=B) under the same four configurations.] Similar performance trends are observed on the BT and MG kernels as well. On the SP kernel, MCDRAM-based execution showed up to 20% improvement over the default at 16 processes.
19 Application Kernels Evaluation. [Charts: execution time (s) vs. number of processes for the Heat-2D kernel (Jacobi method) and the Heat Image kernel under KNL (Default), KNL (AVX-512), KNL (AVX-512+MCDRAM), and Broadwell.] On the heat-diffusion-based kernels, AVX-512 vectorization showed better performance. MCDRAM showed significant benefits on the Heat-Image kernel for all process counts; combined with AVX-512 vectorization, it showed up to 4X improved performance.
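The Heat-2D kernel is a Jacobi-style stencil; the unit-stride inner loop below is the kind of compute step that the AVX-512 builds vectorize (for instance with the Intel compiler's -xMIC-AVX512 flag on KNL). This is a serial sketch of one process's update, not the kernel's actual source; grid sizes and boundary values are illustrative.

    // Jacobi-style 2D heat update: the contiguous inner loop is what AVX-512 vectorizes.
    #include <vector>
    #include <cstdio>

    int main() {
        const int nx = 1024, ny = 1024, iters = 100;
        std::vector<double> u(nx * ny, 0.0), unew(nx * ny, 0.0);
        for (int j = 0; j < ny; j++) u[j] = 100.0;          // hot top boundary

        for (int it = 0; it < iters; it++) {
            for (int i = 1; i < nx - 1; i++) {
                // Unit-stride inner loop: prime candidate for AVX-512 auto-vectorization.
                for (int j = 1; j < ny - 1; j++) {
                    unew[i * ny + j] = 0.25 * (u[(i - 1) * ny + j] + u[(i + 1) * ny + j] +
                                               u[i * ny + j - 1] + u[i * ny + j + 1]);
                }
            }
            u.swap(unew);
        }
        printf("u[center]=%f\n", u[(nx / 2) * ny + ny / 2]);
        return 0;
    }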
20 Application Kernels Evaluation (Cont'd). [Charts: execution time (s) vs. number of processes for the Matrix Multiplication kernel and the DAXPY kernel under the same configurations.] Vectorization helps in matrix multiplication and vector operations. Due to the heavily compute-bound nature of these kernels, MCDRAM didn't show any significant performance improvement.
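DAXPY is the simplest case of the vector operations discussed here. The sketch below adds an explicit SIMD hint; on KNL, 512-bit code is usually obtained with compiler flags such as -xMIC-AVX512 (Intel) or -mavx512f with -fopenmp-simd (GCC). The study's exact build flags are not stated on the slide, so this is only an illustration.

    // DAXPY with an explicit SIMD hint; illustrative sketch only.
    #include <vector>
    #include <cstdio>

    void daxpy(double a, const double *x, double *y, int n) {
        #pragma omp simd
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<double> x(n, 1.0), y(n, 2.0);
        daxpy(3.0, x.data(), y.data(), n);
        printf("y[0]=%f\n", y[0]);
        return 0;
    }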
21 Application Kernels Evaluation (Cont'd). Scalable Integer Sort kernel (ISx). [Chart: execution time (s) vs. number of processes for KNL (Default), KNL (AVX-512), KNL (AVX-512+MCDRAM), and Broadwell.] Up to 3X improvement over the un-optimized execution is observed on KNL. Broadwell showed up to 2X better performance in a core-by-core comparison.
22 Node-by-Node Evaluation using Application Kernels. [Chart: execution time (s) of application kernels on a single KNL node (68 cores) vs. a single Broadwell node (28 cores) for DHeat, HeatImg, MatMul, DAXPY, and ISx (strong scaling), with a +30% annotation.] A single node of KNL is evaluated against a single node of Broadwell using all the available physical cores. HeatImage showed better performance on KNL than on the Intel Xeon (Broadwell) node.
23 Performance of PGAS Models on KNL (outline): performance of Put and Get with OpenSHMEM, UPC, and UPC++; comparison of KNL and Broadwell for OpenSHMEM point-to-point, collective, and atomic operations; impact of AVX-512 vectorization and MCDRAM on OpenSHMEM application kernels; performance of UPC++ application kernels on the MVAPICH2-X communication runtime.
24 UPC++ Application Kernels Performance on KNL. Two application kernels were developed and used to evaluate the UPC++ model with MVAPICH2-X as the communication runtime: Sparse Matrix-Vector Multiplication (SpMV), and an Adaptive Mesh Refinement (AMR) kernel performing 2D heat conduction with the Jacobi iterative method.
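Both kernels rely on one-sided transfers into remotely allocated shared arrays. The sketch below expresses such a transfer in the current UPC++ 1.0 API (global_ptr/rput); the study used an earlier UPC++ release whose async_put-style calls appear on the latency slides, so the names here are illustrative rather than the code actually benchmarked.

    // One-sided UPC++ transfer of the kind the SpMV and 2D-Heat kernels rely on.
    #include <upcxx/upcxx.hpp>
    #include <cstdio>

    int main() {
        upcxx::init();
        int me = upcxx::rank_me();
        int n  = upcxx::rank_n();

        // Rank 1 allocates a landing buffer in its shared segment; the pointer is broadcast.
        upcxx::global_ptr<double> buf;
        if (me == 1) buf = upcxx::new_array<double>(1);
        buf = upcxx::broadcast(buf, 1).wait();

        if (me == 0 && n > 1) {
            double val = 3.14;
            upcxx::rput(&val, buf, 1).wait();      // one-sided put into rank 1's memory
        }
        upcxx::barrier();

        if (me == 1) {
            printf("rank 1 got %f\n", buf.local()[0]);
            upcxx::delete_array(buf);
        }
        upcxx::finalize();
        return 0;
    }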
25 Application Kernels Performance of UPC++ on MVAPICH2-X. [Charts: strong-scaling execution time (s) vs. number of processes for the UPC++ SpMV kernel (2K x 2K) and the UPC++ 2D-Heat kernel (512 x 512).] The SpMV and 2D-Heat kernels using MVAPICH2-X show good scalability with increasing numbers of KNL processes.
26 Performance Results Summary. [Summary charts comparing KNL (Default), KNL (AVX-512), KNL (AVX-512 + MCDRAM), and Broadwell on Put/Get and atomics performance, collectives performance, node-by-node application performance, and core-by-core application performance; closer to the center is better.]
27 Conclusion. Comprehensive performance evaluation of MVAPICH2-X-based OpenSHMEM, UPC, and UPC++ models on the KNL architecture. Significant performance gains were observed on application kernels when using AVX-512 vectorization, up to 2.5X in terms of execution time. MCDRAM benefits are not prominent on most of the application kernels due to a lack of memory-bound operations. KNL showed up to 3X worse performance than Broadwell in the core-by-core evaluation, but better or on-par performance on the Heat-Image and ISx kernels in the node-by-node evaluation. Runtime implementations need to take advantage of the concurrency of KNL cores.
28 Funding Acknowledgments. Funding support and equipment support acknowledged (sponsor and vendor logos on slide).
29 Personnel Acknowledgments.
Current Students: A. Awan (Ph.D.), M. Bayatpour (Ph.D.), S. Chakraborthy (Ph.D.), C.-H. Chu (Ph.D.), S. Guganani (Ph.D.), J. Hashmi (Ph.D.), N. Islam (Ph.D.), M. Li (Ph.D.), M. Rahman (Ph.D.), D. Shankar (Ph.D.), A. Venkatesh (Ph.D.), J. Zhang (Ph.D.)
Current Research Scientists: X. Lu, H. Subramoni
Current Post-doc: A. Ruhela
Current Research Specialists: J. Smith, M. Arnold
Past Students: A. Augustine (M.S.), P. Balaji (Ph.D.), S. Bhagvat (M.S.), A. Bhat (M.S.), D. Buntinas (Ph.D.), L. Chai (Ph.D.), B. Chandrasekharan (M.S.), N. Dandapanthula (M.S.), V. Dhanraj (M.S.), T. Gangadharappa (M.S.), W. Huang (Ph.D.), W. Jiang (M.S.), J. Jose (Ph.D.), S. Kini (M.S.), M. Koop (Ph.D.), K. Kulkarni (M.S.), R. Kumar (M.S.), S. Krishnamoorthy (M.S.), K. Kandalla (Ph.D.), P. Lai (M.S.), M. Luo (Ph.D.), A. Mamidala (Ph.D.), G. Marsh (M.S.), V. Meshram (M.S.), A. Moody (M.S.), S. Naravula (Ph.D.), R. Noronha (Ph.D.), X. Ouyang (Ph.D.), S. Pai (M.S.), S. Potluri (Ph.D.), R. Rajachandrasekar (Ph.D.), G. Santhanaraman (Ph.D.), A. Singh (Ph.D.), J. Sridhar (M.S.), S. Sur (Ph.D.), H. Subramoni (Ph.D.), K. Vaidyanathan (Ph.D.), A. Vishnu (Ph.D.), J. Wu (Ph.D.), W. Yu (Ph.D.)
Past Research Scientists: K. Hamidouche, S. Sur
Past Programmers: D. Bureddy, J. Perkins, K. Gopalakrishnan (M.S.), J. Liu (Ph.D.)
Past Post-Docs: D. Banerjee, J. Lin, S. Marcarelli, X. Besseron, M. Luo, J. Vienne, H.-W. Jin, E. Mancini, H. Wang
30 Thank You! panda@cse.ohio-state.edu. Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/. The High-Performance MPI/PGAS Project: http://mvapich.cse.ohio-state.edu/. The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/. The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/