Screencast: Basic Architecture and Tuning

Transcription

Slide 1: Screencast: Basic Architecture and Tuning
Jeff Squyres, May 2008

Slide 2: Open MPI Architecture
- Modular Component Architecture (MCA): the backbone plugin / component system
- Finds, loads, and parameterizes components
- Hierarchy:
  - MCA: the foundation
  - Framework: a functionality specification
  - Component: code for a specific functionality
  - Module: an instance of a component

Slide 3: Open MPI Architecture
[Diagram: the user application calls the MPI API, which is built on the Modular Component Architecture (MCA); the MCA hosts many frameworks side by side.]

Slide 4: Three Main Code Sections
- Open MPI layer (OMPI): the top-level MPI API and supporting logic
- Open Run-Time Environment (ORTE): the interface to the back-end run-time system
- Open Portable Access Layer (OPAL): OS and utility code (lists, reference counting, etc.)
- These are dependencies, not layers: OMPI depends on ORTE, which depends on OPAL

Slide 5: Three Main Code Sections
[Diagram: OMPI, ORTE, and OPAL stacked above the operating system.]

Slide 6: Three Main Code Sections
[Diagram: the user application and the MPI API shown atop the OMPI, ORTE, and OPAL sections, all above the operating system.]

Slide 7: Tuning

Slide 8: MCA Parameters
- Run-time tunable values: per layer, per framework, per component
- Change the behavior of code at run time; does not require recompiling or re-linking
- Simple example: choose which network to use for MPI communications

Slide 9: MCA Parameter Lookup Order
1. mpirun command line: mpirun --mca <name> <value>
2. Environment variable: export OMPI_MCA_<name>=<value>
3. File: $HOME/.openmpi/mca-params.conf, then $prefix/etc/openmpi-mca-params.conf (these locations are themselves tunable)
4. Default value
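For example, the btl parameter used later in this screencast can be set through any of these mechanisms. A minimal sketch, assuming a standard installation (mca-params.conf entries use "name = value" syntax):

    # 1. On the mpirun command line (highest precedence):
    mpirun --mca btl tcp,self -np 4 ring_c

    # 2. Via an environment variable:
    export OMPI_MCA_btl=tcp,self
    mpirun -np 4 ring_c

    # 3. In a per-user parameter file (consulted before built-in defaults):
    mkdir -p $HOME/.openmpi
    echo "btl = tcp,self" >> $HOME/.openmpi/mca-params.conf
    mpirun -np 4 ring_c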

Slide 10: So Much Information
- Open MPI has roughly 30 frameworks and more than 100 components
- Each component has run-time tunable parameters
- How do you know what to use, and how to use it?

Slide 11: The ompi_info Command
- Tells everything about an Open MPI installation
- Finds all components and all parameters; great for debugging
- Can look up a specific component: ompi_info --param <type> <plugin>
- Shows parameters and their current values
- The keyword "all" can be used for either argument
- The --parsable option produces machine-readable output
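A few concrete invocations built from the flags above (assuming a build that includes the tcp BTL component):

    # Show all parameters of the tcp BTL component:
    ompi_info --param btl tcp

    # Show every parameter of every component in every framework:
    ompi_info --param all all

    # Machine-readable output, one datum per line:
    ompi_info --param btl tcp --parsable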

Slide 12: Example: Specify BTL
- BTL: Byte Transfer Layer, the framework for MPI point-to-point communications
- Select which network to use for MPI communications:

    mpirun --mca btl tcp,self -np 4 ring_c

- btl is a framework-level MCA parameter: it specifies which components to load

Slide 13: Example: Specify the TCP BTL

    mpirun --mca btl tcp,self -np 4 ring_c

Components:
- tcp: TCP sockets
- self: process loopback (send-to-self)

Slide 14: Example: Specify the openib BTL

    mpirun --mca btl openib,self -np 4 ring_c

Components:
- openib: OpenFabrics verbs
- self: process loopback (send-to-self)

Slide 15: Example: Specify the sm and openib BTLs

    mpirun --mca btl sm,openib,self -np 4 ring_c

Components:
- sm: shared memory (on-host communication)
- openib: OpenFabrics verbs
- self: process loopback (send-to-self)

Slide 16: What Does This Do?

    mpirun -np 4 ring_c

Slide 17: What Does This Do?

    mpirun -np 4 ring_c

- Uses all available components: tcp, sm, openib, and so on
- TCP too? Yes and no: the tcp component automatically disables itself in the presence of low-latency components (e.g., openib)

Slide 18: What Does This Do?

    mpirun -np 4 ring_c

More specifically, Open MPI will:
- Open each BTL component
- Query whether it wants to be used
- Keep all components that say yes
- Rank them by their bandwidth and latency ratings
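To watch this open/query/keep/rank process happen, the BTL framework has a verbosity parameter; a sketch, assuming btl_base_verbose is available in your build (the exact output varies by version):

    # Print BTL component selection details at run time:
    mpirun --mca btl_base_verbose 100 -np 4 ring_c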

Slide 19: What Does This Do?

    mpirun -np 4 --mca btl ^tcp ring_c

Slide 20: What Does This Do?

    mpirun -np 4 --mca btl ^tcp ring_c

- The ^ prefix means "not": use all available components except tcp
- More specifically: open each BTL component except tcp, query whether it wants to be used, keep all that say yes, and rank by bandwidth and latency ratings
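The caret applies to the whole list, so several components can be excluded at once; note that (an assumption worth verifying against your version) Open MPI rejects values that mix the inclusive and exclusive forms:

    # Exclude both tcp and sm; all other BTL components remain eligible:
    mpirun -np 4 --mca btl ^tcp,sm ring_c

    # Not allowed: mixing "use only" and "use all except" in one value
    # mpirun -np 4 --mca btl openib,^tcp ring_c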

Slide 21: openib BTL Parameters

    ompi_info --param btl openib

- Shows all openib BTL MCA parameters (there are a lot!)
- Also try:

    ompi_info --param btl openib --parsable

- What do they all mean?
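Because --parsable emits one datum per line, standard text tools can narrow the list; a sketch (the exact key layout is version-dependent):

    # Keep only the parameter entries, then filter by keyword:
    ompi_info --param btl openib --parsable | grep param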
