OpenHPC: Project Overview and Updates


http://openhpc.community

OpenHPC: Project Overview and Updates

Karl W. Schulz, Ph.D.
Software and Services Group, Intel
Technical Project Lead, OpenHPC

5th Annual MVAPICH User Group (MUG) Meeting
August 16, 2017 - Columbus, Ohio

Outline

- Brief project overview
- New items/updates from last year
- Highlights from latest release

OpenHPC: Mission and Vision

Mission: to provide a reference collection of open-source HPC software components and best practices, lowering barriers to deployment, advancement, and use of modern HPC methods and tools.

Vision: OpenHPC components and best practices will enable and accelerate innovation and discoveries by broadening access to state-of-the-art, open-source HPC methods and tools in a consistent environment, supported by a collaborative, worldwide community of HPC users, developers, researchers, administrators, and vendors.

OpenHPC: a brief History

- June 2015: ISC'15 - BoF on the merits/interest in a community effort
- Nov 2015: SC'15 - seed initial 1.0 release; gather interested parties to work with the Linux Foundation
- June 2016: ISC'16 - v1.1.1 release; Linux Foundation announces technical leadership, founding members, and formal governance structure
- Aug 2016: MUG'16 - OpenHPC intro
- Nov 2016: SC'16 - v1.2 release, BoF
- June 2017: ISC'17 - v1.3.1 release, BoF

OpenHPC Project Members

A mixture of academics, labs, OEMs, and ISVs/OSVs, including Argonne National Laboratory, Indiana University, and the University of Cambridge, among others.

Interested in project member participation? Please contact Jeff ErnstFriedman: jernstfriedman@linuxfoundation.org

Governance: Technical Steering Committee (TSC) Role Overview

OpenHPC Technical Steering Committee (TSC) roles:
- Project Leader
- Integration Testing Coordinator(s)
- Upstream Component Development Representative(s)
- End-User / Site Representative(s)
- Maintainers

Note: We just completed election of TSC members for the 2017-2018 term; terms are for 1 year.

OpenHPC TSC Individual Members (several members are new for 2017-2018)

- Reese Baird, Intel (Maintainer)
- David Brayford, LRZ (Maintainer)
- Eric Coulter, Indiana University (End-User/Site Representative)
- Leonardo Fialho, ATOS (Maintainer)
- Todd Gamblin, LLNL (Component Development Representative)
- Craig Gardner, SUSE (Maintainer)
- Renato Golin, Linaro (Testing Coordinator)
- Jennifer Green, Los Alamos National Laboratory (Maintainer)
- Douglas Jacobsen, NERSC (End-User/Site Representative)
- Chulho Kim, Lenovo (Maintainer)
- Janet Lebens, Cray (Maintainer)
- Thomas Moschny, ParTec (Maintainer)
- Nam Pho, NYU Langone Medical Center (Maintainer)
- Cyrus Proctor, Texas Advanced Computing Center (Maintainer)
- Adrian Reber, Red Hat (Maintainer)
- Joseph Stanfield, Dell (Maintainer)
- Karl W. Schulz, Intel (Project Lead, Testing Coordinator)
- Jeff Schutkoske, Cray (Component Development Representative)
- Derek Simmel, Pittsburgh Supercomputing Center (End-User/Site Representative)
- Scott Suchyta, Altair (Maintainer)
- Nirmala Sundararajan, Dell (Maintainer)

https://github.com/openhpc/ohpc/wiki/governance-overview

Target system architecture

Basic cluster architecture: head node (SMS) + compute nodes
- Ethernet fabric for the management network
- Shared or separate out-of-band (BMC) network
- High-speed fabric (InfiniBand, Omni-Path)
- Parallel file system

[Diagram: head node with eth0/eth1 interfaces; the management Ethernet connects to the compute nodes' eth and BMC interfaces, while a separate high-speed network connects the compute nodes to the parallel file system]

OpenHPC: a building block repository

[Key takeaway] OpenHPC provides a collection of pre-built ingredients common in HPC environments; fundamentally it is a package repository.

The repository is published for use with Linux distro package managers (a minimal enablement sketch follows below):
- yum (CentOS/RHEL)
- zypper (SLES)

You can pick the relevant bits of interest for your site:
- if you prefer a resource manager that is not included, you can build that locally and still leverage the scientific libraries and development environment
- similarly, you might prefer to utilize a different provisioning system
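A minimal enablement sketch (not taken from the slides; the ohpc-release filename/version below is an assumption based on the v1.3 repo layout shown later, so check http://build.openhpc.community for the current path):

# Register the OpenHPC repository on a CentOS 7 head node
[sms]# yum -y install http://build.openhpc.community/OpenHPC:/1.3/CentOS_7/x86_64/ohpc-release-1.3-1.el7.x86_64.rpm

# Cherry-pick only the pieces of interest, e.g. the GNU 7.x toolchain and Lmod
[sms]# yum -y install gnu7-compilers-ohpc lmod-ohpc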

Newish Items/Updates

Changes and new items since we were last together at MUG'16.

Switched from ISOs -> distribution tarballs

For those who prefer to mirror a repo locally, we have historically provided an ISO that contained all the packages/repodata. Beginning with the v1.2 release, we switched to tarball-based distribution.

Distribution tarballs are available at: http://build.openhpc.community/dist

A make_repo.sh script is provided that will set up a locally hosted OpenHPC repository using the contents of the downloaded tarball.

Index of /dist/1.2.1:

Name                                      Last modified      Size
OpenHPC-1.2.1.CentOS_7.2_aarch64.tar      2017-01-24 12:43   1.3G
OpenHPC-1.2.1.CentOS_7.2_src.tar          2017-01-24 12:45   6.8G
OpenHPC-1.2.1.CentOS_7.2_x86_64.tar       2017-01-24 12:43   2.2G
OpenHPC-1.2.1.SLE_12_SP1_aarch64.tar      2017-01-24 12:40   1.1G
OpenHPC-1.2.1.SLE_12_SP1_src.tar          2017-01-24 12:42   6.2G
OpenHPC-1.2.1.SLE_12_SP1_x86_64.tar       2017-01-24 12:41   1.9G
OpenHPC-1.2.1.md5s                        2017-01-24 12:50   416

# tar xf OpenHPC-1.2.1.CentOS_7.2_x86_64.tar
# ./make_repo.sh
--> Creating OpenHPC.local.repo file in /etc/yum.repos.d
--> Local repodata stored in /root/repo
# yum repolist | grep OpenHPC
OpenHPC-local            OpenHPC-1.2 - Base
OpenHPC-local-updates    OpenHPC-1.2.1 - Updates
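An optional step before extracting is verifying the download against the published checksums; a sketch, assuming the .md5s file uses the standard md5sum format:

# grep x86_64 OpenHPC-1.2.1.md5s | md5sum -c -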

More Generic Repo Paths

Starting with the v1.3 release, we adopted more generic paths for the underlying distros:

[OpenHPC]
name=OpenHPC-1.3 - Base
baseurl=http://build.openhpc.community/OpenHPC:/1.3/CentOS_7
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OpenHPC-1

[OpenHPC-updates]
name=OpenHPC-1.3 - Updates
baseurl=http://build.openhpc.community/OpenHPC:/1.3/updates/CentOS_7
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-OpenHPC-1

A similar approach is used for the SLES12 repo config (CentOS_7 -> SLE_12).
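For SLES-based systems, the equivalent repositories can be added with zypper; a minimal sketch, assuming the SLE_12 paths mirror the CentOS_7 layout above:

# zypper ar -f http://build.openhpc.community/OpenHPC:/1.3/SLE_12 OpenHPC
# zypper ar -f http://build.openhpc.community/OpenHPC:/1.3/updates/SLE_12 OpenHPC-updates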

Release Roadmap Published

We have had some requests for a roadmap for future releases. A high-level roadmap is now maintained on the GitHub wiki:
https://github.com/openhpc/ohpc/wiki/release-history-and-roadmap

Target Release   Target Release Date   Expectations
1.3.2            August 2017           New component additions and version upgrades.
1.3.3            November 2017         New component additions and version upgrades.

Previous Releases

A history of previous OpenHPC releases is highlighted below. Clicking on the version string takes you to the release notes for more detailed information on the changes in a particular release.

Release   Date
1.3.1     June 16, 2017
1.3       March 31, 2017
1.2.1     January 24, 2017
1.2       November 12, 2016
1.1.1     June 21, 2016
1.1       April 18, 2016
1.0.1     February 05, 2016
1.0       November 12, 2015

Component Submission Site

A common question posed to the project has been how to request new software components. We now have a simple submission site for new requests:
- https://github.com/openhpc/submissions
- requests are reviewed on a rolling basis at roughly a quarterly cadence

Example software/recipes that have been added via this process:
- xCAT
- BeeGFS
- PLASMA, SLEPc
- clang/llvm
- MPICH
- PBS Professional
- Singularity
- Scalasca

A subset of the information requested during submission: software name, public URL, technical overview, latest stable version number, open-source license type, relationship to the component (contributing developer / user / other), and build system (autotools-based / CMake / other).

Opt-in System Registry Now Available

Interested users can now register their usage on a public system registry:
- helpful for us to have an idea as to who is potentially benefitting from this community effort
- accessible from the top-level GitHub page

The opt-in form asks for the name of your site/organization, the OS distribution you are using (CentOS/RHEL, SLES, or other), and an optional site or system URL.

Multiple Architecture Builds

Starting with the v1.2 release, we also include builds for aarch64:
- both SUSE and RHEL/CentOS now have aarch64 variants available for the latest versions (SLES 12 SP2, CentOS 7.3)

Recipes/packages are being made available as a Tech Preview:
- some additional work is required for provisioning
- a significant majority of development packages test OK, but there are a few known caveats
- please see https://github.com/openhpc/ohpc/wiki/arm-tech-preview for the latest info

v1.3.1 RPM counts:

Base OS       x86_64   aarch64   noarch
CentOS 7.3    424      265       32
SLES 12 SP2   429      274       32

Dev Environment Consistency

OpenHPC provides a consistent development environment to the end user across multiple architectures (x86_64 and aarch64).

End-user software addition tools

OpenHPC repositories include two additional tools that can be used to further extend a user's development environment: EasyBuild and Spack.
- leverages other community efforts for build reproducibility and best practices for configuration
- modules available for both after install

Example Spack session (an EasyBuild sketch follows below):

# module load gcc7
# module load spack
# spack compiler find
==> Added 1 new compiler to /root/.spack/linux/compilers.yaml
    gcc@7.1.0
# spack install darshan-runtime
...
# . /opt/ohpc/admin/spack/0.10.0/share/spack/setup-env.sh
# module avail

----- /opt/ohpc/admin/spack/0.10.0/share/spack/modules/linux-sles12-x86_64 -----
darshan-runtime-3.1.0-gcc-7.1.0-vhd5hhg   m4-1.4.17-gcc-7.1.0-7jd575i
hwloc-1.11.4-gcc-7.1.0-u3k6dok            ncurses-6.0-gcc-7.1.0-l3mdumo
libelf-0.8.13-gcc-7.1.0-tsgwr7j           openmpi-2.0.1-gcc-7.1.0-5imqlfb
libpciaccess-0.13.4-gcc-7.1.0-33gbduz     pkg-config-0.29.1-gcc-7.1.0-dhbpa2i
libsigsegv-2.10-gcc-7.1.0-lj5rntg         util-macros-1.19.0-gcc-7.1.0-vkdpa3t
libtool-2.4.6-gcc-7.1.0-ulicbkz           zlib-1.2.10-gcc-7.1.0-gy4dtna
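A companion EasyBuild sketch (not from the slides; the particular easyconfig name is illustrative, so use eb --search to find ones shipped with the installed EasyBuild version):

# module load EasyBuild
# eb --search HPL
# eb HPL-2.2-foss-2016b.eb -r

The -r (robot) flag resolves and builds any missing dependencies, and the resulting modules become available alongside the OpenHPC-provided ones.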

Variety of recipes now available ("Choose your own adventure")

Initially, we started off with a single recipe (OpenHPC v1.0.1 Cluster Building Recipes, CentOS 7.1 Base OS, Base Linux Edition) with the intent to expand.

The latest v1.3.1 release continues to expand with multiple resource managers, OSes, provisioners, and architectures:
- Install_guide-CentOS7-Warewulf-PBSPro-1.3.1-x86_64.pdf
- Install_guide-CentOS7-Warewulf-SLURM-1.3.1-aarch64.pdf
- Install_guide-CentOS7-Warewulf-SLURM-1.3.1-x86_64.pdf
- Install_guide-CentOS7-xCAT-SLURM-1.3.1-x86_64.pdf
- Install_guide-SLE_12-Warewulf-PBSPro-1.3.1-x86_64.pdf
- Install_guide-SLE_12-Warewulf-SLURM-1.3.1-aarch64.pdf
- Install_guide-SLE_12-Warewulf-SLURM-1.3.1-x86_64.pdf

An additional resource manager (PBS Professional) was added with v1.2, and an additional provisioner (xCAT) with v1.3.1.

Template scripts

Template recipe scripts are provided that encapsulate the commands presented in the guides:

# yum/zypper install docs-ohpc

# ls /opt/ohpc/pub/doc/recipes/*/*/*/*/recipe.sh
/opt/ohpc/pub/doc/recipes/centos7/aarch64/warewulf/slurm/recipe.sh
/opt/ohpc/pub/doc/recipes/centos7/x86_64/warewulf/pbspro/recipe.sh
/opt/ohpc/pub/doc/recipes/centos7/x86_64/warewulf/slurm/recipe.sh
/opt/ohpc/pub/doc/recipes/centos7/x86_64/xcat/slurm/recipe.sh
/opt/ohpc/pub/doc/recipes/sles12/aarch64/warewulf/slurm/recipe.sh
/opt/ohpc/pub/doc/recipes/sles12/x86_64/warewulf/pbspro/recipe.sh
/opt/ohpc/pub/doc/recipes/sles12/x86_64/warewulf/slurm/recipe.sh

# ls /opt/ohpc/pub/doc/recipes/*/input.local
/opt/ohpc/pub/doc/recipes/centos7/input.local
/opt/ohpc/pub/doc/recipes/sles12/input.local

Excerpt from input.local:

# compute hostnames
c_name[0]=c1
c_name[1]=c2

# compute node MAC addresses
c_mac[0]=00:1a:2b:3c:4f:56
c_mac[1]=00:1a:2b:3c:4f:56

input.local + recipe.sh == installed system (a usage sketch follows below)
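A minimal usage sketch (my illustration; the OHPC_INPUT_LOCAL variable name is an assumption drawn from the install guide appendices, so consult the guide for the exact convention):

[sms]# cp -r /opt/ohpc/pub/doc/recipes/centos7/x86_64/warewulf/slurm ~/recipe
[sms]# cp /opt/ohpc/pub/doc/recipes/centos7/input.local ~/recipe/
[sms]# vi ~/recipe/input.local    # set sms_name, compute hostnames/MACs, etc.
[sms]# cd ~/recipe
[sms]# export OHPC_INPUT_LOCAL=./input.local
[sms]# ./recipe.sh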

Test Suite

Initiated from discussion/requests at the SC'16 BoF, the OpenHPC test suite is now available as an installable RPM (introduced with the v1.3 release):

# yum/zypper install test-suite-ohpc

- creates/relies on an ohpc-test user to perform user-level testing (with the ability to run jobs through the resource manager)
- related discussion added to the recipes in Appendix C

[sms]# su - ohpc-test
[test@sms ~]$ cd tests
[test@sms ~]$ ./configure --disable-all --enable-fftw
checking for a BSD-compatible install... /bin/install -c
checking whether build environment is sane... yes
...
---------------------------------------------- SUMMARY ---------------------------------------------
Package version............ : test-suite-1.3.0
Build user................. : ohpc-test
Build host................. : sms001
Configure date............. : 2017-03-24 15:41
Build architecture......... : x86_64
Compiler Families.......... : gnu
MPI Families............... : mpich mvapich2 openmpi
Resource manager........... : SLURM
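The suite is autotools-based, so after configuration a run follows the usual flow (a sketch; which tests execute and where their logs land depends on the configure options chosen above):

[test@sms tests]$ make check

Enabled tests are built and submitted through the configured resource manager, with per-test results recorded as standard Automake-style logs.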

Project CI infrastructure

TACC is kindly hosting some CI infrastructure for the project (Austin, TX), used for build servers and a continuous integration (CI) testbed spanning x86_64 and aarch64:
http://test.openhpc.community:8080

Many thanks to TACC and the vendors for hardware donations: Intel, Cavium, Dell.

Community Test System for CI in use

http://test.openhpc.community:8080

All recipes are exercised in the CI system (starting with bare-metal installs + the integration test suite). Thanks to the Texas Advanced Computing Center (TACC) for hosting support and to Intel, Cavium, and Dell for hardware donations.

Name                                                               Last Success          Last Duration
(1.3 to 1.3.1) - (centos7.3,x86_64) - (warewulf+slurm)             2 days 12 hr (#41)    1 hr 12 min
(1.3.1) - (centos7.3,x86_64) - (warewulf+pbspro) - UEFI            2 days 7 hr (#390)    56 min
(1.3.1) - (centos7.3,x86_64) - (warewulf+slurm) - long cycle       2 days 3 hr (#892)    1 hr 2 min
(1.3.1) - (centos7.3,x86_64) - (warewulf+slurm) - tarball REPO     2 days 5 hr (#48)     1 hr 2 min
(1.3.1) - (centos7.3,x86_64) - (warewulf+slurm+psxe)               2 days 4 hr (#726)    2 hr 14 min
(1.3.1) - (centos7.3,x86_64) - (warewulf+slurm+psxe+opa)           2 days 7 hr (#80)     1 hr 51 min
(1.3.1) - (centos7.3,x86_64) - (xcat+slurm)                        2 days 13 hr (#271)   1 hr 1 min
(1.3.1) - (sles12sp2,x86_64) - (warewulf+pbspro) - tarball REPO    2 days 8 hr (#45)     44 min
(1.3.1) - (sles12sp2,x86_64) - (warewulf+pbspro) - UEFI            2 days 4 hr (#70)     45 min
(1.3.1) - (sles12sp2,x86_64) - (warewulf+slurm)                    2 days 8 hr (#780)    53 min
(1.3.1) - (sles12sp2,x86_64) - (warewulf+slurm+psxe) - long cycle  2 days 3 hr (#114)    1 hr 53 min
(1.3.1) - (sles12sp2,x86_64) - (warewulf+slurm+psxe+opa)           2 days 8 hr (#36)     1 hr 33 min

OpenHPC v1.3.1 Release - June 16, 2017

OpenHPC v1.3.1 - Current S/W components

Functional Area                   Components
Base OS                           CentOS 7.3, SLES12 SP2
Architecture                      x86_64, aarch64 (Tech Preview)
Administrative Tools              Conman, Ganglia, Lmod, LosF, Nagios, pdsh, pdsh-mod-slurm, prun, EasyBuild, ClusterShell, mrsh, Genders, Shine, Spack, test-suite
Provisioning                      Warewulf, xCAT
Resource Mgmt.                    SLURM, Munge, PBS Professional
Runtimes                          OpenMP, OCR, Singularity
I/O Services                      Lustre client (community version), BeeGFS client
Numerical/Scientific Libraries    Boost, GSL, FFTW, Metis, PETSc, Trilinos, Hypre, SuperLU, SuperLU_Dist, Mumps, OpenBLAS, Scalapack
I/O Libraries                     HDF5 (pHDF5), NetCDF (including C++ and Fortran interfaces), Adios
Compiler Families                 GNU (gcc, g++, gfortran)
MPI Families                      MVAPICH2, OpenMPI, MPICH
Development Tools                 Autotools (autoconf, automake, libtool), Valgrind, R, SciPy/NumPy, hwloc
Performance Tools                 PAPI, IMB, mpiP, pdtoolkit, TAU, Scalasca, Score-P, SIONlib

Several of the components above are highlighted in the slide as new with v1.3.1.

Notes:
- Additional dependencies that are not provided by the BaseOS or community repos (e.g. EPEL) are also included
- 3rd-party libraries are built for each compiler/MPI family
- The resulting repositories currently comprise ~450 RPMs
- Future additions approved for inclusion in the v1.3.2 release: PLASMA, SLEPc, pnetcdf, Scotch, Clang/LLVM

OpenHPC Component Updates

Part of the motivation for a community effort like OpenHPC is the rapidity of S/W updates in our space.

[Charts: rolling history of updates/additions for the v1.1 (Apr 2016), v1.2 (Nov 2016), v1.3 (Mar 2017), and v1.3.1 (June 2017) releases, breaking each release into updated components, new additions, and unchanged components; headline figures of 38.6%, 29.1%, 44%, and 63.3%, respectively]

Other new items for v1.3.1 Release

Meta RPM packages were introduced and adopted in the recipes; these replace the previous use of groups/patterns.
- the general convention remains: names that begin with ohpc-* are typically meta-packages
- intended to group related collections of RPMs by functionality
- some names have been updated for consistency during the switch-over
- an updated list is available in Appendix E

Table 2: Available OpenHPC Meta-packages

Group Name                          Description
ohpc-autotools                      Collection of GNU autotools packages.
ohpc-base                           Collection of base packages.
ohpc-base-compute                   Collection of compute node base packages.
ohpc-ganglia                        Collection of Ganglia monitoring and metrics packages.
ohpc-gnu7-io-libs                   Collection of IO library builds for use with GNU compiler toolchain.
ohpc-gnu7-mpich-parallel-libs       Collection of parallel library builds for use with GNU compiler toolchain and the MPICH runtime.
ohpc-gnu7-mvapich2-parallel-libs    Collection of parallel library builds for use with GNU compiler toolchain and the MVAPICH2 runtime.
ohpc-gnu7-openmpi-parallel-libs     Collection of parallel library builds for use with GNU compiler toolchain and the OpenMPI runtime.
ohpc-gnu7-parallel-libs             Collection of parallel library builds for use with GNU compiler toolchain.
ohpc-gnu7-perf-tools                Collection of performance tool builds for use with GNU compiler toolchain.
ohpc-gnu7-python-libs               Collection of python related library builds for use with GNU compiler toolchain.
ohpc-gnu7-runtimes                  Collection of runtimes for use with GNU compiler toolchain.
ohpc-gnu7-serial-libs               Collection of serial library builds for use with GNU compiler toolchain.
ohpc-intel-impi-parallel-libs       Collection of parallel library builds for use with Intel(R) Parallel Studio XE toolchain and the Intel(R) MPI Library.
ohpc-intel-io-libs                  Collection of IO library builds for use with Intel(R) Parallel Studio XE software suite.
ohpc-intel-mpich-parallel-libs      Collection of parallel library builds for use with Intel(R) Parallel Studio XE toolchain and the MPICH runtime.
ohpc-intel-mvapich2-parallel-libs   Collection of parallel library builds for use with Intel(R) Parallel Studio XE toolchain and the MVAPICH2 runtime.
ohpc-intel-openmpi-parallel-libs    Collection of parallel library builds for use with Intel(R) Parallel Studio XE toolchain and the OpenMPI runtime.
ohpc-intel-perf-tools               Collection of performance tool builds for use with Intel(R) Parallel Studio XE toolchain.
ohpc-intel-python-libs              Collection of python related library builds for use with Intel(R) Parallel Studio XE toolchain.
ohpc-intel-runtimes                 Collection of runtimes for use with Intel(R) Parallel Studio XE toolchain.
ohpc-intel-serial-libs              Collection of serial library builds for use with Intel(R) Parallel Studio XE toolchain.
ohpc-nagios                         Collection of Nagios monitoring and metrics packages.
ohpc-slurm-client                   Collection of client packages for SLURM.
ohpc-slurm-server                   Collection of server packages for SLURM.
ohpc-warewulf                       Collection of base packages for Warewulf provisioning.
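Since these are ordinary meta RPMs rather than yum groups/patterns, their contents can be inspected with standard package tooling; a quick sketch:

[sms]# yum deplist ohpc-gnu7-io-libs
[sms]# rpm -q --requires ohpc-gnu7-io-libs

The first command shows what the meta-package would pull in; the second lists its declared dependencies once it is installed.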

Other new items for v1.3.1 Release

A new compiler variant (gnu7) was introduced:
- in the case of a fresh install, recipes default to installing the new variant along with matching runtimes and libraries
- if upgrading a previously installed system, administrators can opt in to enable the gnu7 variant

The meta-packages for gnu7 provide a convenient mechanism to add on; the upgrade discussion in the recipes (Appendix B) was amended to highlight this workflow:

# Update default environment (note: you could skip this to leave the previous gnu toolchain as default)
[sms]# yum -y remove lmod-defaults-gnu-mvapich2-ohpc
[sms]# yum -y install lmod-defaults-gnu7-mvapich2-ohpc

# Install GCC 7.x-compiled meta-packages with dependencies
# (e.g. ohpc-gnu7-mpich-parallel-libs provides the parallel libs for gnu7/mpich)
[sms]# yum -y install ohpc-gnu7-perf-tools \
                      ohpc-gnu7-serial-libs \
                      ohpc-gnu7-io-libs \
                      ohpc-gnu7-python-libs \
                      ohpc-gnu7-runtimes \
                      ohpc-gnu7-mpich-parallel-libs \
                      ohpc-gnu7-openmpi-parallel-libs \
                      ohpc-gnu7-mvapich2-parallel-libs

Coexistence of multiple variants

Consider an example of a system originally installed from the 1.3 base release, with the gnu7 variant then added using the commands from the last slide. Everything under gnu7 comes from the 1.3.1 updates (add-on); the gnu/5.4.0 toolchain previously installed from the 1.3 release remains available.

$ module list

Currently Loaded Modules:
  1) autotools   2) prun/1.1   3) gnu7/7.1.0   4) mvapich2/2.2   5) ohpc

$ module avail

--------------------------- /opt/ohpc/pub/moduledeps/gnu7-mvapich2 -----------------------------
   adios/1.11.0    imb/4.1          netcdf-cxx/4.3.0       scalapack/2.0.2   sionlib/1.7.1
   boost/1.63.0    mpip/3.4.1       netcdf-fortran/4.4.4   scalasca/2.3.1    superlu_dist/4.2
   fftw/3.3.6      mumps/5.1.1      petsc/3.7.6            scipy/0.19.0      tau/2.26.1
   hypre/2.11.1    netcdf/4.4.1.1   phdf5/1.10.0           scorep/3.0        trilinos/12.10.1

------------------------------- /opt/ohpc/pub/moduledeps/gnu7 ----------------------------------
   R_base/3.3.3   metis/5.1.0        numpy/1.12.1      openmpi/1.10.7
   gsl/2.3        mpich/3.2          ocr/1.0.1         pdtoolkit/3.23
   hdf5/1.10.0    mvapich2/2.2 (L)   openblas/0.2.19   superlu/5.2.1

--------------------------------- /opt/ohpc/pub/modulefiles ------------------------------------
   EasyBuild/3.2.1       gnu/5.4.0        ohpc       (L)   singularity/2.3
   autotools      (L)    gnu7/7.1.0 (L)   papi/5.5.1       valgrind/3.12.0
   clustershell/1.7.3    hwloc/1.11.6     prun/1.1   (L)
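With both toolchains installed, Lmod's hierarchy lets a user move between variants on the fly; a minimal sketch using the module names from the listing above:

$ module swap gnu7 gnu
$ module swap mvapich2 openmpi
$ module avail

After a swap, the set of visible library modules follows the newly active compiler/MPI pairing, and Lmod attempts to reload any dependent modules automatically.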

Summary

- The Technical Steering Committee just completed its first year of operation; new membership selection for 2017-2018 is in place
- Provided a highlight of changes/evolutions that have occurred since MUG'16:
  - 4 releases since last MUG
  - architecture addition (ARM)
  - multiple provisioner/resource manager recipes
- A Getting Started tutorial was held at PEARC'17 last month; a more in-depth overview: https://goo.gl/nyidmr

Community Resources

http://openhpc.community (general info)
https://github.com/openhpc/ohpc (main GitHub site)
https://github.com/openhpc/submissions (new submissions)
https://build.openhpc.community (build system/repos)
http://www.openhpc.community/support/mail-lists/ (mailing lists)