DATARMOR: How to prepare for it? Tina Odaka

DATARMOR: How to prepare for it? Tina Odaka, 30.09.2016

OUTLINE: DATARMOR: a detailed explanation of the hardware; what you can do today to be ready for DATARMOR.

DATARMOR: naming convention: ClusterHPC, REF, SCRATCH, ClusterSMP, HOME, ClusterWEB + CacheWEB, DATAWORK

Global configuration: Datarmor
- Client access: 10 Gigabit Ethernet connected to the IFREMER and INFUSER networks (SHOM / IUEM / ENSTA Bretagne); NFS / CIFS / FTP
- Cluster HPC: 426 Tflops, 11 088 cores, login nodes; batch system: PBS Pro; InfiniBand FDR
- Cluster SMP: 8 Tflops, 240 cores, 5 TB RAM
- Cluster WEB + Cache WEB: 10 servers, 100 TB net
- Visualisation: 4 GPUs
- Lustre servers (behind an NFS/CIFS Lustre gateway): SCRATCH, Lustre, 500 TB net; REF, Lustre, 1.5 PB net
- DATAWORK: GPFS, 5 PB net (GPFS clients and an NFS/CIFS GPFS gateway)
- HOME: 32 TB net, NFS/CIFS

Cluster HPC: client access via 10 Gigabit Ethernet connected to the IFREMER and INFUSER networks (SHOM / IUEM / ENSTA Bretagne); login nodes. Cluster HPC: 426 Tflops, 11 088 cores, 396 nodes, 99 blades (lames), 3 racks. Batch system: PBS Pro. Interconnect: InfiniBand FDR.

Cluster HPC compute nodes: CPU Intel E5-2680 v4, 14 cores, 2.4 GHz. 1 node = 2 CPUs => 28 cores per node. 1 blade (lame) = 4 nodes. 1 IRU = 9 blades = 36 nodes. 1 E-rack = 4 IRUs = 144 nodes = 4032 cores (very dense!). 1 E-cell = 2 E-racks. DATARMOR has 1 E-cell and ¾ of an E-rack.
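
As a sanity check, the per-unit counts above reproduce the totals quoted on the global configuration slide (396 nodes, 11 088 cores). A minimal C sketch of that arithmetic, for illustration only (all figures are taken from the slides):

    #include <stdio.h>

    int main(void)
    {
        const int cores_per_node  = 2 * 14;   /* 2x Intel E5-2680 v4, 14 cores each */
        const int nodes_per_blade = 4;        /* 1 blade (lame) = 4 nodes */
        const int blades_per_iru  = 9;        /* 1 IRU = 9 blades */
        const int irus_per_erack  = 4;        /* 1 E-rack = 4 IRUs */
        const double eracks = 2.0 + 0.75;     /* 1 E-cell (2 E-racks) + 3/4 E-rack */

        double nodes = eracks * irus_per_erack * blades_per_iru * nodes_per_blade;
        printf("nodes: %.0f, cores: %.0f\n", nodes, nodes * cores_per_node);
        /* Prints "nodes: 396, cores: 11088", matching the Cluster HPC figures. */
        return 0;
    }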

Cluster HPC rack layout (diagram): cooling rack alongside the compute blades (lames).

Cluster HPC cooling: the ICE XA is connected to a Cooling Distribution Unit (CDU), which is connected to a dry cooler (free cooling). The dry cooler works up to a maximum outside temperature of 25 degrees; backup is provided by the cold-water system at IFREMER.

Infrastructure: more detail on the poster "DATARMOR aujourd'hui". Photos: IMA and RIC, IFREMER.

Cluster HPC memory: 128 GB per node, 50 TB in total. Network: dual InfiniBand 4x FDR, enhanced hypercube topology. Login nodes and available services: Intel Parallel Studio XE, VTune, DDT (256 cores), SGI MPI. Submitting jobs: PBS Pro (qsub).

Cluster HPC: access to data. The HPC cluster has access to REF, SCRATCH, DATAWORK and HOME.

Cluster SMP: client access via 10 Gigabit Ethernet connected to the IFREMER and INFUSER networks (SHOM / IUEM / ENSTA Bretagne); NFS / CIFS / FTP. Cluster SMP: 8 Tflops, 240 cores, 5 TB RAM. Batch system: PBS Pro. Interconnect: InfiniBand FDR.

UV3000 Cluster SMP: CPU Intel Xeon E5-4650 v3, 12 cores at 2.1 GHz, 30 MB L3 cache, 105 W. 1 node = 2 CPUs => 24 cores per node; 10 nodes in total => 240 cores per machine. Memory: 160 x 32 GB DDR4-2133 = 5.1 TB per machine. SMP: 10 nodes, but it can boot as 1 big node. Use PBS as on the Cluster HPC (same login nodes).

REF and SCRATCH: Lustre. Client access via 10 Gigabit Ethernet connected to the IFREMER and INFUSER networks (SHOM / IUEM / ENSTA Bretagne); NFS / CIFS / FTP. InfiniBand FDR; NFS/CIFS Lustre gateway; Lustre servers. Temporary HPC data (SCRATCH): Lustre, 500 TB net. Reference data (REF): Lustre, 1.5 PB net.

REF and SCRATCH: Lustre. Lustre is a parallel file system (like /temp and /work on CAPARMOR). Metadata and object data are separated on distinct physical servers. REF and SCRATCH are both hosted on Lustre; the hardware is the same, but they will be configured so that their performance differs. The Lustre gateway provides NFS (Linux) and CIFS (Windows) access.

DATAWORK: GPFS. Client access via 10 Gigabit Ethernet connected to the IFREMER and INFUSER networks (SHOM / IUEM / ENSTA Bretagne); NFS / CIFS / FTP. InfiniBand FDR; GPFS clients and an NFS/CIFS GPFS gateway. Work data (DATAWORK): GPFS, 5 PB net.

DATAWORK: GPFS. GPFS is a parallel file system (from IBM). Metadata and object data are separated, but can be mixed on the physical servers. It is new to PCIM, but GPFS is a stable and well-known product. DATAWORK is hosted on GPFS. The GPFS gateway provides NFS (Linux) and CIFS (Windows) access.

HOME: client access via 10 Gigabit Ethernet connected to the IFREMER and INFUSER networks (SHOM / IUEM / ENSTA Bretagne); NFS / CIFS / FTP. InfiniBand FDR. HOME: 32 TB net, NFS/CIFS.

HOME: user quotas as on CAPARMOR. Two redundant NFS servers. Same space as CAPARMOR's HOME. NFS, as on CAPARMOR; it is possible to mount it from Windows.

Cluster WEB and Cache WEB: client access via 10 Gigabit Ethernet connected to the IFREMER and INFUSER networks (SHOM / IUEM / ENSTA Bretagne); NFS / CIFS / FTP. Cluster WEB / Cache WEB: 10 servers, 100 TB net. InfiniBand FDR.

Cluster WEB and Cache WEB: built to host web servers, with fast access to small files; redundant, with FC disks; can host virtual machines. Half of the nodes have a direct connection to the InfiniBand network.

Visualization: 2 machines, each with 128 GB of memory (16 x 16 GB), 2 NVIDIA Quadro 6000 GPUs, 2 Intel E5-2630 v4 processors (10 cores, 2.2 GHz), InfiniBand FDR and dual-port 10 Gb. Enhanced possibilities for users to visualize pre- and post-processing of their computations.

Visualization: NVIDIA Quadro 6000: 448 CUDA cores, 1 Tflops (single precision), 0.5 Tflops (double precision), 6 GB GDDR5 memory, memory bandwidth 144 GB/s.

Schedule: from early 2017: opening to users (DATARMOR challenge included). End of March 2017: shutdown of CAPARMOR => a VERY SHORT transition, like NYMPHEA => CAPARMOR. NYMPHEA: Alpha -> CAPARMOR: Intel -> DATARMOR: Intel, BUT it should be easier than the last transition.

What you can prepare today, for CAPARMOR users: check your code with the new compiler (Intel compiler version 15). Test aggressive compiler optimizations to see whether you get wrong results or not. CAPARMOR vs DATARMOR: CPU Nehalem vs Broadwell; clock speed 2.8 GHz vs 2.4 GHz; flops/clock per core 4 vs 16.
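
These figures also explain the announced 426 Tflops. A small illustrative C sketch (values taken from the slides; reading the 16 flops/clock as Broadwell AVX2 with FMA is an assumption):

    #include <stdio.h>

    int main(void)
    {
        const double ghz = 2.4;               /* DATARMOR clock speed */
        const double flops_per_clock = 16.0;  /* per core, from the table above */
        const int cores_per_node = 28;
        const int nodes = 396;

        double node_peak_gflops = ghz * flops_per_clock * cores_per_node;
        printf("per node: %.1f Gflops, system: %.1f Tflops\n",
               node_peak_gflops, node_peak_gflops * nodes / 1000.0);
        /* Prints roughly 1075.2 Gflops per node and 425.8 Tflops overall,
         * i.e. the ~426 Tflops quoted for the Cluster HPC. */
        return 0;
    }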

What you can prepare today, for CAPARMOR users: HPC means parallel (multi-threaded) work. Be ready for 28 threads: OpenMP combined with MPI (hybrid), as in the sketch below. CAPARMOR vs DATARMOR: cores/node 8 vs 28; memory/node 24 GB vs 128 GB; memory/core 3 GB vs 4.5 GB.
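
A minimal hybrid MPI + OpenMP check, as a sketch of the kind of test worth running before the migration. The build and run lines are assumptions (the exact MPI wrapper and modules available on DATARMOR are not described here); the point is simply to confirm that each MPI rank can drive up to 28 OpenMP threads:

    /* Possible build line (assumption): mpicc -fopenmp hybrid_check.c -o hybrid_check
     * Possible run (assumption):        OMP_NUM_THREADS=28 mpirun -np 2 ./hybrid_check */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request a threading level compatible with OpenMP regions inside each rank. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        #pragma omp parallel
        {
            #pragma omp single
            printf("rank %d of %d runs %d OpenMP threads\n",
                   rank, nranks, omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

On a 28-core node, a typical layout would be a few MPI ranks per node, with OMP_NUM_THREADS set so that ranks times threads fills the available cores.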

What you can prepare today. REMINDER: think about your data now! How do you need to organize your data for DATARMOR? Separate scratch, home, workdata and ref. Think about the timeline of your data (which data are worth keeping for years, which data must have a backup, ...).

What you can do today, new users. New users who do not use Fortran (or C): do you use Docker? If so, contact us and we can try it on the UV2000 we have today, to test-submit your jobs. Visualisation? We will have a test environment in the autumn. Please contact us so that we can run tests, and do not forget to tell us which software you use, the size of the data you use, and which graphics card you use today.

Thank you. Poster: "DATARMOR aujourd'hui". Photos: IMA and RIC, IFREMER.