Status of optimisation work on the NEC SX8R at Météo-France


Status of optimisation work on the NEC SX8R at Météo-France
R. El Khatib, with the help of many: Damien Lambert, NEC, Dominique Puech, Sami Saarinen (ECMWF), Yann Seity, Eric Sevault, Filip Vana (CHMI)
Aladin workshop / Hirlam all-staff meeting, Brussels, 07-10/04/2008

Plan
INTRODUCTION
- Purposes of this work
- Overview of an SX8R node
- Tools at disposal for optimisation; -ftrace: example
- Methodology
OPTIMISATION WORK
- Main features: I/Os, vectorisation («NPROMA»), message passing, multitasking (Open-MP)
- Specific issues: ODB & observations preprocessing («Bator»), screening, minimisations, Arpege/Aladin - Arome, Fullpos
CONCLUSIONS & OUTLOOK

INTRODUCTION: Purposes of this work
- After porting: time to have a look at the profiles on this new machine (even if it is still a vector machine...)
- Make sure the code is optimised for higher truncations: especially for the now operational truncation T538, and then for the next increase of truncation (T799 on the SX9)
- Optimise AROME: investigations needed
- New technologies available at Météo-France (on site):
  - local vs. global file system: how to get the best out of them?
  - Open-MP: should we use it to improve performance?

INTRODUCTION: Overview of an SX8R node
- Processors: 8 processors/node (VPP5000: 1 processor/node); vector processors with 256 vector registers
- Memory: 128 Gb/node (VPP5000: 4 or 8 Gb/node); 4096 memory banks (VPP5000: 512)
- I/Os:
  - local disk: fast, but private to a node
  - global file system (GFS): slower but visible by all nodes; expected transfer rate (GFS): > 100 Mb/s
- CPU performance: peak performance 35 Gflops/processor. In practice we can expect about 20% of the peak performance from a vector machine => about 7 Gflops/processor

INTRODUCTION: Tools at disposal for optimisation
- Compiler option -ftrace, providing detailed information per subroutine, such as: frequency of calls, real time, Mflops, vector length, cache misses, bank conflicts... elapsed time, communication time, message lengths
- Environment variables providing:
  - general information per processor or for all processors, such as timings, Mflops, mean vector length,...
  - detailed information for each accessed file
- Plus environment variables to set miscellaneous parameters, especially the buffer sizes for reading files

INTRODUCTION : -ftrace : example (1)

INTRODUCTION : -ftrace : example (2)

INTRODUCTION: Methodology
- Scalability comparisons for various software configurations:
  - over 1, 2, 4, 8, 16 processors
  - MPI vs. Open-MP whenever possible
  - North-South vs. Square MPI distribution
  - local disk vs. global file system whenever possible
- Profiling of the best performing configuration, usually over 16 processors (i.e. 2 full nodes)
- Tunings from the best performing configuration (mainly: namelists)
- Unfortunately: not a full node was used for the tests over 1, 2 or 4 processors... but those jobs were run during summer!

Main features: I/Os distribution
[Charts: real time of I/Os (milliseconds) as a function of NSTRIN and NSTROUT, on local disk and on GFS, for 1 to 16 processors]
NSTRIN: number of processors for reading/decoding
NSTROUT: number of processors for encoding
- Using the local disks always seems better than the global file system
- It is useless or even detrimental to distribute the encoding work over many processors
- It seems beneficial to distribute the decoding work over many processors: influence of multi-nodes??
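For illustration only, the I/O task distribution above is driven by namelist variables; a minimal, hedged sketch follows (the namelist block name NAMPAR1 and the values are assumptions, only NSTRIN and NSTROUT themselves come from the slide):

&NAMPAR1
  NSTRIN=4,    ! MPI tasks used for reading/decoding input (worth spreading over nodes)
  NSTROUT=1,   ! MPI tasks used for encoding output (keep small)
/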

Influence of nodes on I/Os
[Chart: impact of the number of nodes on NSTRIN, time (milliseconds) for NPROC=8, with the 8 tasks spread over 1, 2, 4 or 8 nodes]
- Better to spread the I/O tasks over nodes
- But not too much: 2 nodes, or 4 to 6 MPI tasks, for the I/Os
=> The I/O distribution should be spread over nodes, not over MPI tasks
BEWARE OF CONCURRENT DISK ACCESSES!

I/Os: two examples of disk access issues
- Fullpos-Arpege (stretched): for a single file to post-process, the job is short but large matrix files must be read (~133 Mb/s) => scalability is mostly limited by the I/O part
- SURFEX (in the framework of AROME): an input file (~320 Mb) was read by every processor:
  - 16 tasks => 480 s (0.7 Kb/s)!
  - 32 tasks => 600 s (0.5 Kb/s)!!
  - 64 tasks => 800 s (0.4 Kb/s)!!!
  => Once read by a single processor and then distributed via MPI (*): 19 s (17 Mb/s) + 40 s of communications = 60 s
  (*) unfortunately still to be debugged...
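A minimal Fortran sketch of the read-once-then-distribute idea mentioned above (file name, buffer size and the use of MPI_BCAST are illustrative assumptions; the actual SURFEX fix may well differ):

PROGRAM read_once_bcast
! Sketch: task 0 reads the shared input file once, then broadcasts it,
! instead of every MPI task hammering the same file on GFS.
USE mpi
IMPLICIT NONE
INTEGER :: ierr, myrank, nbytes
CHARACTER(LEN=1), ALLOCATABLE :: buffer(:)

CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)

nbytes = 320*1024*1024            ! assumed size of the input file (~320 Mb)
ALLOCATE(buffer(nbytes))

IF (myrank == 0) THEN
  OPEN(UNIT=11, FILE='surfex_input.dat', FORM='UNFORMATTED', &
       ACCESS='STREAM', STATUS='OLD')
  READ(11) buffer                  ! single reader: one sequential scan of the file
  CLOSE(11)
END IF

! All other tasks receive the data through MPI instead of re-reading the file.
CALL MPI_BCAST(buffer, nbytes, MPI_CHARACTER, 0, MPI_COMM_WORLD, ierr)

CALL MPI_FINALIZE(ierr)
END PROGRAM read_once_bcast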

I/Os: file formats
- It is better to read binary files than ASCII files: see the case of the RTTOV coefficients file rtcoef_eos_2_airs.dat:
  - ASCII version: 60 Mb read in more than 18 s (3 Mb/s); the occurrence of 'BACKSPACE' prevents the use of a memory buffer
  - BINARY version: 30 Mb read in 2 s (14 Mb/s)
  - Now acknowledged by ECMWF: faster on IBM as well
- Is the «FA/LFI» file format (i.e. Fortran indexed sequential files) efficient enough? To be investigated. Apparently not so bad: the actual transfer rate for a historical file is a little under 120 Mb/s
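For illustration (subroutine name, file name and array layout are assumptions, not the RTTOV code), the gain discussed above is essentially that of replacing a formatted, line-by-line ASCII read with one unformatted bulk transfer:

SUBROUTINE read_coefs_binary(coefs, n)
! Sketch: read the coefficients from an unformatted (binary) file in one
! bulk record, instead of parsing a formatted ASCII file line by line
! (and without any BACKSPACE that would defeat read-ahead buffering).
IMPLICIT NONE
INTEGER, INTENT(IN)  :: n
REAL,    INTENT(OUT) :: coefs(n)
OPEN(UNIT=22, FILE='rtcoef_eos_2_airs.bin', FORM='UNFORMATTED', STATUS='OLD')
READ(22) coefs        ! single unformatted record: no text parsing
CLOSE(22)
END SUBROUTINE read_coefs_binary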

Main features: vectorisation (NPROMA)
[Charts, forecast H24 T538 over 4 processors: vector length, memory cost, real time, instruction & operand cache misses, bank conflicts and MFLOPS as a function of NPROMA (49 to 12475)]
- Best values: between 1500 and 3000
- Performance is chaotic for such values or bigger => further study
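For context, NPROMA is the length of the grid-point blocks over which the computational loops are organised; a schematic sketch of the blocking pattern it controls is given below (routine and variable names are illustrative, not the actual ARPEGE code, and the inner computation is a placeholder):

SUBROUTINE gp_compute_blocks(ngptot, nproma, nflevg, fields)
! Schematic NPROMA blocking: the NGPTOT grid points of a task are processed
! in chunks of (at most) NPROMA points, which sets the vector length seen
! by the innermost loops on a vector processor.
IMPLICIT NONE
INTEGER, INTENT(IN)    :: ngptot, nproma, nflevg
REAL,    INTENT(INOUT) :: fields(ngptot, nflevg)
INTEGER :: jkglo, jlev, jrof, iend

DO jkglo = 1, ngptot, nproma            ! loop over NPROMA-sized blocks
  iend = MIN(nproma, ngptot - jkglo + 1)
  DO jlev = 1, nflevg
    DO jrof = 1, iend                   ! inner loop: vectorised, length <= NPROMA
      fields(jkglo + jrof - 1, jlev) = fields(jkglo + jrof - 1, jlev) * 2.0
    END DO
  END DO
END DO
END SUBROUTINE gp_compute_blocks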

NPROMA: vector length aspect
[Charts: relative vector length (%) vs. NPROMA (1249 to 3839); zoom on the vector length vs. NPROMA; vector length for NPROMA = k*256-1 (255 to 4095)]
- The peaks («kings») show that optimal values must exist
- Note: there are 256 vector registers on a vector unit of an SX8R processor => the best NPROMA should be slightly less than a multiple of 256

NPROMA: bank conflicts aspect
- Inspired by an old recommendation from Fujitsu («a power of 2 minus 2»), tests on the multiples of 256 minus 2:
[Charts: bank conflicts (seconds) for NPROMA = multiples of 256 minus 1 (255 to 4095), and for multiples of 256 minus 2 (254 to 4094)]
=> Looks slightly better! Justification??

NPROMA: final recommendations & example
- A multiple of 256 minus 2
- Between 2000 and 3000
- Advice: either «NPROMA=-2046» (REK, after smoothing the graphics) or «NPROMA=-2814» (ES, after a global 4DVar on the SX8R «SUMO»)
- It can be worth (regularly re-)trying a set of values between 2000 and 3000:
[Chart: «NPROMA contest», time (seconds) for the T107 minimisation on 16 processors, for NPROMA values from 254 to 3583]
- Performance varies by 1% to 5%
- Here the best values are exactly 2046 and 2814
- Prime numbers: not the best
- The best NPROMA is bigger than NGPTOT (1105)!?
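A hedged example of applying the recommendation above (the namelist block name NAMDIM and the interpretation of the sign are assumptions; only the NPROMA values come from the slides):

&NAMDIM
  NPROMA=-2046,   ! or -2814: a multiple of 256 minus 2, between 2000 and 3000;
                  ! a negative value is commonly used to request the absolute
                  ! value exactly, without automatic adjustment (assumption)
/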

Main features: message passing (1/3)
For all models:
- Switch to LIMP_NOOLAP & LSLONDEM (LIMP_NOOLAP was not supported on the VPP5000)
- Communication buffer length tuned to a more «binary» value (NCOMBFLEN=3276800 instead of 1800000) => saves 0.5%
[Charts: time (seconds) of the low-resolution minimisation as a function of NCOMBFLEN, for «strange» values (450000 to 7200000) and for «nice» values (409600 to 13107200)]
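A sketch of the corresponding namelist tuning (the grouping into a single block and the block name NAMPAR1 are assumptions; the variables and values are those quoted above):

&NAMPAR1
  LIMP_NOOLAP=.TRUE.,   ! message-passing variant without overlap (not available on VPP5000)
  LSLONDEM=.TRUE.,      ! on-demand semi-Lagrangian communications
  NCOMBFLEN=3276800,    ! communication buffer length (instead of 1800000)
/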

Message passing (2/3)
- For the global model only: the NORTH-SOUTH distribution is always slightly faster than, or equivalent to, the SQUARE distribution; tested on the forecast with 4, 8, 16 & 32 processors, and on the screening & minimisation with 4, 8 & 16 processors.
- For LAM only: the SQUARE distribution is faster than the NORTH-SOUTH distribution (by 7% according to a test with AROME), because the coupling is then balanced between the western and eastern processors.
- For a better load balance, the Y-axis distribution should consider (C+I) if the physics is more CPU-consuming than the dynamics, but (C+I+E) if the dynamics (predictor-corrector) is more CPU-consuming than the physics => saves 6% according to a test with AROME.

Message passing (3/3)
- The GSTATS mystery: more MPI barriers make AROME faster!?
- Explanation:... not very clear. Is it because the «sends» are «non-blocking standard» while the «receives» are «blocking standard»?
- Note: barriers are already coded in mpobseq/mpobseqad.
=> In cycle 33, two new namelist variables:
- LSYNC_TRANS: to synchronise the MPI tasks in the spectral transforms before the communications (MPL_ALLTOALLV)
- LSYNC_SLCOM: to synchronise the MPI tasks before the semi-Lagrangian communications
=> Saves 10% CPU in AROME
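A minimal sketch of the synchronisation pattern that LSYNC_TRANS / LSYNC_SLCOM switch on (an illustration of the idea only, not the IFS code; MPL_ALLTOALLV is replaced here by the plain MPI call and all names are illustrative):

SUBROUTINE synced_alltoallv(nsend, sendbuf, sendcounts, sdispls, &
                            nrecv, recvbuf, recvcounts, rdispls, ntasks, comm, lsync)
! Sketch: optionally line up all tasks before a collective exchange, so that
! no task enters the all-to-all while the others are still computing.
USE mpi
IMPLICIT NONE
INTEGER, INTENT(IN)  :: nsend, nrecv, ntasks, comm
REAL,    INTENT(IN)  :: sendbuf(nsend)
REAL,    INTENT(OUT) :: recvbuf(nrecv)
INTEGER, INTENT(IN)  :: sendcounts(ntasks), sdispls(ntasks)
INTEGER, INTENT(IN)  :: recvcounts(ntasks), rdispls(ntasks)
LOGICAL, INTENT(IN)  :: lsync       ! plays the role of LSYNC_TRANS / LSYNC_SLCOM
INTEGER :: ierr

IF (lsync) CALL MPI_BARRIER(comm, ierr)   ! synchronise the tasks first

CALL MPI_ALLTOALLV(sendbuf, sendcounts, sdispls, MPI_REAL, &
                   recvbuf, recvcounts, rdispls, MPI_REAL, &
                   comm, ierr)
END SUBROUTINE synced_alltoallv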

Main features: multitasking (Open-MP)
- Works on ARPEGE conf 001 and Fullpos, but slightly less efficiently than MPI
- Partly working on the minimisations:
  - problems of interpretation of the Open-MP standard; remaining bugs in the compiler?
  - workarounds: a few OMP directives modified or switched off
  - still not working when mixing MPI and OMP with more than 2 MPI tasks
  - SL adjoint: a synchronisation lock on the threads can do worse than a single well-vectorising thread => still less efficient than a pure MPI distribution
[Charts: real time (seconds) vs. total number of processors, for the T538 forecast (1, 2, 4, 8 threads) and the T107 minimisation (MPI x 1, 2, 4 or 8 OMP threads)]
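For reference, Open-MP is typically layered on top of the NPROMA blocking shown earlier, with the threads sharing the blocks within an MPI task; a schematic sketch only, the directive placement in the actual code may differ:

SUBROUTINE gp_compute_blocks_omp(ngptot, nproma, nflevg, fields)
! Schematic: MPI distributes the grid points across tasks; within a task,
! Open-MP threads share the NPROMA-sized blocks. Names are illustrative
! and the inner computation is a placeholder.
IMPLICIT NONE
INTEGER, INTENT(IN)    :: ngptot, nproma, nflevg
REAL,    INTENT(INOUT) :: fields(ngptot, nflevg)
INTEGER :: jkglo, jlev, jrof, iend

!$OMP PARALLEL DO PRIVATE(jkglo, jlev, jrof, iend) SCHEDULE(DYNAMIC)
DO jkglo = 1, ngptot, nproma
  iend = MIN(nproma, ngptot - jkglo + 1)
  DO jlev = 1, nflevg
    DO jrof = 1, iend
      fields(jkglo + jrof - 1, jlev) = fields(jkglo + jrof - 1, jlev) * 2.0
    END DO
  END DO
END DO
!$OMP END PARALLEL DO
END SUBROUTINE gp_compute_blocks_omp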

Multitasking (Open-MP)
- Not yet working on the screening, even partly (crash)
- Not yet working on ALADIN (a problem probably located inside the spectral transforms)
- AROME is not ready for Open-MP: it needs SURFEX and the Meso-NH physics to support Open-MP
- Open-MP: a disappointing technique on NEC?

Specific issues: ODB & observations preprocessing («BATOR»)
ODB: the porting was partly missed:
- An #ifdef VPP was not translated => the computation went through a scalar subroutine (in ifsaux/xrd). Now fixed.
- Vectorisation directives inside the C code are not interpreted by the compiler unless they are at the top of the subroutine => need to use Sami Saarinen's wrapper for the sxcc compiler. Now active.
- Tuning of the ODB environment variables in progress.
- Local disk synchronisation tool: to be re-written (failures occurred)
BATOR:
- Profiling was done early, on top of the mis-ported ODB => to be traced again
- However some optimisations have already been made (D. Puech)
- Further optimisations in progress to enable distribution

Screening
[Chart: screening scalability, real time (seconds) vs. number of processors (1 to 16), on GFS (NS), local disk (NS) and GFS (Square)]
- Scalability looks good
- Better to use the local disks => needs inter-node synchronisations when more than 8 processors are used
- Bug if 1 processor only? (time limit systematically exceeded)
- Performance was limited by RRTM, which has since been better vectorised
TO DO: re-examine the profiles now that RRTM has been optimised and once the ODB environment variables are properly tuned

Minimisation, high resolution
- Performance limited by larcinbad (~500 Mflops, i.e. 7% of what could be expected on an SX8R!)
- NEC reported the poor performance to be due to «gather» operations. Now optimised via compiler directives (~1 Gflops)
«Top 10» in CPU for the T224 minimisation (16 processors)

Minimisation, low resolution
- CPU: the maximum number of grid points per processor can become too small for optimal vectorisation
[Charts: T107 minimisation vector length vs. number of MPI tasks (1 to 16); T107 and T48 minimisation scalability, real time (seconds) vs. number of MPI tasks (1 to 16)]
- ELAPSED: however, the communications drive the performance. Scalability remains unchanged with an even lower truncation (i.e. relatively more observations)

ARPEGE/ALADIN - AROME
ARPEGE/ALADIN:
- Performance was driven by RRTM => profiles to be re-examined now that it has been optimised
AROME:
- Performance is still driven by the communications on the SX8R (reference: AROME-France, NPROC=64 to run a 24-hour forecast in 30 minutes elapsed)
- Miscellaneous small improvements should still be possible
- Could Open-MP reduce the load imbalance caused by the E-zone? (tests with ALADIN at CHMI would not advocate it...)

«Top 10» CPU AROME profile (1)

«Top 10» ELAPSE AROME profile (2)

Performance* of AROME vs. ARPEGE
- AROME, 32 processors, over France**: NGPTOTG=307200; NFLEVG=41 => 2.85 GFlops/cpu
- AROME, 4 processors, over Austria: NGPTOTG=64800; NFLEVG=41 => 3.58 GFlops/cpu
- ARPEGE, 32 processors, T538: NGPTOTG=399180; NFLEVG=60 => 5.36 GFlops/cpu
- ARPEGE, 4 processors, T538: NGPTOTG=399180; NFLEVG=60 => 6.11 GFlops/cpu
* Before the optimisation of RRTM and without MPI synchronisation in AROME.
** With the predictor-corrector scheme, which may not be used.

Fullpos
- Post-processing (on a single file): performance driven by large I/Os (mostly the matrices, because of the stretching) => poor scalability
[Chart: Fullpos «BDAP» scalability, time (seconds) vs. number of processors (1 to 8), single node, local disk, MPI with NSTRIN=NSTROUT=1 or 2, and OMP]
- «927» (changing geometries): performance driven by the I/Os (mostly the intermediate file...) => poor scalability; better to use the local disk
[Chart: Fullpos «927» scalability, time (seconds) vs. number of processors (1 to 4), single or multi node, local disk or GFS, MPI or OMP]

Conclusions & Outlook (1)
Investigations on various configurations of ARPEGE and AROME on the NEC platform have led to several positive results:
- Better understanding of the I/O performance, improvements, and some ideas to go further
- Identification of the optimal NPROMA values
- Reduction of the load imbalance and speed-up of the communications in AROME
- Identification of the mis-porting of ODB
- Optimisation of RRTM, done by the NEC team at Météo-France; elapsed time: Arpege T358l46 on 12 processors = Arpege T538l60 on 16 processors (= CPU power x 2)
Some problems remain:
- Limitations in the search for higher performance in the minimisation
- Apparently poor performance of AROME compared to ARPEGE

Conclusions & Outlook (2)
What to do now?
- Priority #1 is now to optimise the post-processing: operations should switch to the in-line mode of Fullpos
- Finish the work on the miscellaneous identified weaknesses
- Get more benefit from the latest ECMWF benchmark (in progress)
- Update the profiles with the latest optimisations, and assess the next priorities
- Survey of RTTOV-9 (expected to be even better vectorised, however)
- Keep the profiles up to date along the code releases on the various platforms
And also:
- Investigate more configurations (3DVar Aladin/Arome, 601, CANARI...)
- Be prepared for optimising on the SX9 in early 2009
- Work on other identified problems:
  - Fullpos «927»: remove the intermediate file, thanks to the modularity of the spectral transforms
  - Further optimisation of the SURFEX I/Os
  - Support for Open-MP in AROME (at least for portability reasons)