Building Ensemble-Based Data Assimilation Systems for Coupled Models

1 Building Ensemble-Based Data Assimilation Systems for Coupled Models
Lars Nerger
Alfred Wegener Institute for Polar and Marine Research, Bremerhaven, Germany

2 Overview
How to simplify applying data assimilation?
1. Extend the model to integrate the ensemble
2. Add the analysis step to the model
3. Then focus on applying data assimilation

3 PDAF: A tool for data assimilation
PDAF - Parallel Data Assimilation Framework
a program library for ensemble data assimilation
provides support for parallel ensemble forecasts
provides fully implemented & parallelized filters and smoothers (EnKF, LETKF, NETF, EWPF; easy to add more)
easily usable with (probably) any numerical model (applied with NEMO, MITgcm, FESOM, HBM, TerrSysMP, ...)
runs from laptops to supercomputers (Fortran, MPI & OpenMP)
first public release in 2004; continued development
~200 registered users; community contributions
Open source: code, documentation & tutorials available online
L. Nerger, W. Hiller, Computers & Geosciences 55 (2013)

4 Application examples run with PDAF
FESOM: global ocean state estimation (Janjic et al., 2011, 2012)
NASA Ocean Biogeochemical Model: chlorophyll assimilation (Nerger & Gregg, 2007, 2008)
HBM-ERGOM: coastal assimilation of SST & ocean color (S. Losa et al., 2013, 2014)
MITgcm: sea-ice assimilation (Q. Yang et al., NMEFC Beijing)
(Figures: sea surface elevation, RMS error in surface temperature, sea ice thickness)
Plus external applications & users, e.g.:
Geodynamo (IPGP Paris, A. Fournier)
MPI-ESM (coupled ESM, IFM Hamburg, S. Brune) -> talk tomorrow
CMEMS BAL-MFC (Copernicus Marine Service Baltic Sea)
TerrSysMP-PDAF (hydrology, FZJ)

5 Ensemble filter analysis step
The analysis operates on state vectors (all fields in one vector).
From the model: ensemble of state vectors X
From case-specific call-back routines: vector of observations y, observation operator H(...), observation error covariance matrix R
Filter analysis: 1. update the mean state, 2. ensemble transformation
For localization: local ensemble, local observations
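Written out in the standard ensemble square-root filter notation (standard notation, not spelled out on the slide), the two operations of the analysis are

   mean update:              x_a = x_f + K ( y - H(x_f) )
   ensemble transformation:  X'_a = X'_f W

where x_f and x_a are the forecast and analysis ensemble means, X'_f and X'_a are the matrices of ensemble perturbations, K is the gain computed from the ensemble and R, and W is an ensemble transformation (weight) matrix.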

6 Logical separation of the assimilation system (single program)
Model: initialization, time integration, post-processing (modified parallelization; mesh data)
Ensemble filter (core of PDAF): initialization, analysis, ensemble transformation
Observations: quality control, obs. vector, obs. operator, obs. error
Exchange between the parts: explicit interface (state, time) or indirect exchange (module/common)
Nerger, L., Hiller, W. Software for ensemble-based DA systems - implementation and scalability. Computers and Geosciences 55 (2013)

7 Extending a Model for Data Assimilation
Original model (single or multiple executables; the coupler might be a separate program):
Start -> initialize parallelization -> initialize model (initialize coupler, initialize grid & fields) -> Do i=1, nsteps: time stepper (in-compartment step, coupling) -> post-processing -> Stop
With the extension for data assimilation (the revised parallelization enables the ensemble forecast):
Start -> initialize parallelization -> Init_parallel_PDAF -> initialize model (initialize coupler, initialize grid & fields) -> Init_PDAF -> Do i=1, nsteps: time stepper (in-compartment step, coupling), Assimilate_PDAF -> post-processing -> Stop
Plus: possible model-specific adaptation; possible adaptation of the coupler (e.g. OASIS3-MCT)
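As an illustration of this extension, the following minimal Fortran sketch shows a model driver with the three added calls. All called routines are empty stubs standing in for the real model and PDAF routines (the actual PDAF calls take arguments, as shown on the FESOM/ECHAM6/NEMO slides later); only the calling pattern is taken from the slide.

   ! Schematic driver: where the three PDAF calls are inserted into a model.
   ! All called routines are stubs; they stand in for the real model/PDAF code.
   PROGRAM model_with_da
      IMPLICIT NONE
      INTEGER :: istep, nsteps
      CALL initialize_parallel()   ! original: MPI setup of the model
      CALL init_parallel_pdaf()    ! added: split communicators for the ensemble tasks
      CALL initialize_model()      ! original: coupler, grid & fields
      CALL init_pdaf()             ! added: set DA parameters, initialize the ensemble
      nsteps = 10
      DO istep = 1, nsteps
         CALL time_step(istep)     ! original: in-compartment step + coupling
         CALL assimilate_pdaf()    ! added: analysis step when observations are due
      END DO
      CALL postprocess()           ! original: post-processing
   END PROGRAM model_with_da

   ! Empty stubs so that the sketch compiles as a single file.
   SUBROUTINE initialize_parallel()
   END SUBROUTINE initialize_parallel
   SUBROUTINE init_parallel_pdaf()
   END SUBROUTINE init_parallel_pdaf
   SUBROUTINE initialize_model()
   END SUBROUTINE initialize_model
   SUBROUTINE init_pdaf()
   END SUBROUTINE init_pdaf
   SUBROUTINE time_step(istep)
      INTEGER, INTENT(IN) :: istep
   END SUBROUTINE time_step
   SUBROUTINE assimilate_pdaf()
   END SUBROUTINE assimilate_pdaf
   SUBROUTINE postprocess()
   END SUBROUTINE postprocess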

8 Framework solution with generic filter implementation
Model with assimilation extension: Start -> init_parallel_da -> initialize model -> Init_DA -> Do i=1, nsteps: time stepper, Assimilate -> post-processing -> Stop
Coupling by subroutine calls or parallel communication - no files needed!
Core routines of the assimilation framework (generic):
PDAF_Init: set parameters, initialize ensemble
PDAF_Analysis: check time step, perform analysis, write results
Case-specific call-back routines (dependent on model and observations):
read ensemble from files, initialize the state vector from model fields, initialize the vector of observations, apply the observation operator to a state vector, multiply the R matrix with a matrix
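To make the role of a call-back routine concrete, here is a small, self-contained sketch of an "apply observation operator" routine. The interface and the index-based operator are illustrative assumptions, not the actual PDAF interface.

   ! Sketch of a case-specific call-back: apply an observation operator H
   ! to a state vector. Interface and operator are illustrative only.
   PROGRAM callback_sketch
      IMPLICIT NONE
      INTEGER, PARAMETER :: dim_state = 100, dim_obs = 10
      REAL :: state(dim_state), obs_state(dim_obs)
      INTEGER :: i
      state = [(REAL(i), i = 1, dim_state)]   ! dummy model state
      CALL obs_op_sketch(dim_state, dim_obs, state, obs_state)
      PRINT *, 'H(x) =', obs_state
   END PROGRAM callback_sketch

   ! H simply picks every tenth state element, as if observations were
   ! located at every tenth grid point of the model.
   SUBROUTINE obs_op_sketch(dim_state, dim_obs, state, obs_state)
      IMPLICIT NONE
      INTEGER, INTENT(IN)  :: dim_state, dim_obs
      REAL,    INTENT(IN)  :: state(dim_state)
      REAL,    INTENT(OUT) :: obs_state(dim_obs)
      INTEGER :: i
      DO i = 1, dim_obs
         obs_state(i) = state(1 + (i - 1) * 10)
      END DO
   END SUBROUTINE obs_op_sketch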

9 2-level Parallelism
Forecast (model tasks 1..3 run concurrently) -> Analysis (filter) -> Forecast (model tasks 1..3)
1. Multiple concurrent model tasks
2. Each model task can be parallelized
The analysis step is also parallelized.
Configured by MPI communicators.
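A minimal sketch of such a communicator layout is given below. This is not the actual init_parallel_pdaf code; the number of tasks and the choice that the processes of task 0 also act as filter processes are assumptions made for the example. In the systems shown later in the talk, setting up these communicators is the job of init_parallel_pdaf.

   ! Sketch of the 2-level communicator layout: MPI_COMM_WORLD is split into
   ! model-task communicators; the processes of task 0 also form the filter
   ! communicator used for the analysis step.
   PROGRAM comm_sketch
      USE mpi
      IMPLICIT NONE
      INTEGER, PARAMETER :: n_tasks = 4          ! number of concurrent model tasks
      INTEGER :: ierr, rank_world, size_world
      INTEGER :: task_id, comm_model, rank_model
      INTEGER :: color_filter, comm_filter

      CALL MPI_Init(ierr)
      CALL MPI_Comm_rank(MPI_COMM_WORLD, rank_world, ierr)
      CALL MPI_Comm_size(MPI_COMM_WORLD, size_world, ierr)

      ! 1st level: one communicator per model task (block distribution of ranks)
      task_id = (rank_world * n_tasks) / size_world
      CALL MPI_Comm_split(MPI_COMM_WORLD, task_id, rank_world, comm_model, ierr)
      CALL MPI_Comm_rank(comm_model, rank_model, ierr)

      ! 2nd level: the processes of model task 0 also act as filter processes
      color_filter = MPI_UNDEFINED
      IF (task_id == 0) color_filter = 1
      CALL MPI_Comm_split(MPI_COMM_WORLD, color_filter, rank_world, comm_filter, ierr)

      PRINT *, 'world rank', rank_world, '-> model task', task_id, &
               ', rank in task', rank_model
      CALL MPI_Finalize(ierr)
   END PROGRAM comm_sketch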

10 Building the Assimilation System
The problem reduces to:
1. Configuration of the parallelization (MPI communicators)
2. Implementation of compartment-specific user routines and linking them with the model code at compile time

11 Two-compartment system: strongly coupled DA
The compartments might be separate programs.
Forecast: each coupled model task consists of compartment 1, the coupler, and compartment 2 (Comp. 1 Task k - Cpl. k - Comp. 2 Task k).
Analysis: a single filter acts on both compartments, collecting the ensemble from all model tasks, before the next forecast phase.

12 Configure Parallelization: weakly coupled DA
Forecast: coupled model tasks (Comp. 1 Task k - Cpl. k - Comp. 2 Task k); Analysis: a separate filter per compartment (e.g. filter for Comp. 1).
Logical decomposition - a communicator for each:
coupled model task
compartment in each task (initialized by the coupler; the coupler might want to split MPI_COMM_WORLD)
filter for each compartment
connection for collecting the ensembles for filtering
Different compartments initialize distinct assimilation parameters and use distinct user routines.

13 Example: TerrSysMP-PDAF (Kurtz et al. 2016)
Atmosphere: COSMO; land surface: CLM; subsurface: ParFlow; coupled via OASIS-MCT, with PDAF in one main program (single executable).
The TerrSysMP-PDAF driver (i.e. the main program) controls the whole framework: initialization of MPI, of coupled TerrSysMP and of PDAF, and a wrapper that controls the model forward integration and the assimilation; the model libraries (CLM and ParFlow) are coupled via OASIS-MCT.
The user(-defined) functions are specifically adapted to the model and the desired assimilation scheme (EnKF); they include, for example, the definition of the state and observation vectors and of the observation error covariance matrix. These data are either provided by the model (state vector) or read from files or the command line (observations and observation errors). The generic filter part of PDAF is not modified for TerrSysMP-PDAF.
Each processor is assigned either to CLM or ParFlow depending on its rank within the model communicator; the first processors of a model communicator are assigned to CLM and the rest to ParFlow.
Tested using ... processor cores.
Figure 3: Components of TerrSysMP-PDAF. W. Kurtz et al., Geosci. Model Dev. 9 (2016) 1341

14 Example: ECHAM6-FESOM
Atmosphere: ECHAM6 (incl. JSBACH land); ocean: FESOM (includes sea ice); coupler library: OASIS3-MCT.
Separate executables for atmosphere and ocean.
Data assimilation (FESOM completed, ECHAM6 in progress):
Add 3 subroutine calls per compartment model
Replace MPI_COMM_WORLD in the OASIS coupler
Implement call-back routines
Model: D. Sidorenko et al., Clim. Dyn. 44 (2015) 757

15 Summary
The software framework simplifies building data assimilation systems.
Efficient online DA coupling with minimal changes to the model code.
Setup of data assimilation with a coupled model:
1. Configuration of communicators
2. Implementation of user routines for interfacing with the model code and for observation handling

16 References
Nerger, L., Hiller, W. (2013). Software for ensemble-based DA systems - implementation and scalability. Computers and Geosciences 55.
Nerger, L., Hiller, W., Schröter, J. (2005). PDAF - The Parallel Data Assimilation Framework: experiences with Kalman filtering. Proceedings of the Eleventh ECMWF Workshop on the Use of High Performance Computing in Meteorology, Reading, UK, October 2004.
Thank you! Lars.Nerger@awi.de

17 Changes to FESOM
Add to par_init (gen_partitioning.f90) after MPI_Init:
#ifdef USE_PDAF
  CALL init_parallel_pdaf(0, 1, MPI_COMM_FESOM)
#endif
Add to main (fesom_main.f90) just before the stepping loop:
  CALL init_pdaf()
Add to main (fesom_main.f90) just before END DO:
  CALL assimilate_pdaf()
OASIS3-MCT assumes that it splits MPI_COMM_WORLD in oasis_init_comp (mod_oasis_method.f90); it needs to split COMM_FESOM instead.
(Flowchart: Start -> init_parallel_pdaf -> initialize model: generate mesh, initialize fields -> init_pdaf -> Do i=1, nsteps: time stepper (consider BC, consider forcing), assimilate_pdaf -> post-processing -> Stop)

18 Changes to ECHAM6
Add to p_start (mo_mpi.f90) after MPI_Init:
#ifdef USE_PDAF
  CALL init_parallel_pdaf(0, 1, p_global_comm)
#endif
Add to control (control.f90) before the call to stepon:
  CALL init_pdaf()
Add to stepon (step.f90) before END DO:
  CALL assimilate_pdaf()
OASIS3-MCT assumes that it splits MPI_COMM_WORLD in oasis_init_comp (mod_oasis_method.f90); it needs to split p_global_comm instead.
(Flowchart as for FESOM: Start -> init_parallel_pdaf -> initialize -> init_pdaf -> time stepping with assimilate_pdaf -> post-processing -> Stop)

19 Minimal changes to NEMO
Add to mynode (lib_mpp.f90) just before the initialization of myrank:
#ifdef key_use_pdaf
  CALL init_parallel_pdaf(0, 1, mpi_comm_opa)
#endif
Add to nemo_init (nemogcm.f90) at the end of the routine:
  CALL init_pdaf()
Add to stp (step.f90) at the end of the routine:
  CALL assimilate_pdaf()
For an Euler time step after the analysis step, modify dyn_nxt (dynnxt.f90):
#ifdef key_use_pdaf
  IF ((neuler == 0 .AND. kt == nit000) .OR. assimilate) THEN
#else
  IF (neuler == 0 .AND. kt == nit000) THEN
#endif
(Flowchart as for FESOM: Start -> init_parallel_pdaf -> initialize -> init_pdaf -> time stepping with assimilate_pdaf -> post-processing -> Stop)
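The reason for this change is the leapfrog time stepping: after the analysis has modified the ocean state, the two time levels are no longer consistent, so a forward (Euler) step is taken to restart the integration, just as at the very first time step (kt == nit000). The toy program below (not NEMO code; it integrates dx/dt = -x) only illustrates this restart pattern; the assimilate flag is assumed to be set by the assimilation extension after an analysis.

   ! Toy illustration of an Euler restart after an analysis step in a
   ! leapfrog scheme (dx/dt = -x). Not NEMO code.
   PROGRAM euler_restart_sketch
      IMPLICIT NONE
      REAL    :: x_old, x_now, x_new, dt
      INTEGER :: kt
      LOGICAL :: assimilate
      dt = 0.01
      x_old = 1.0
      x_now = 1.0
      DO kt = 1, 100
         assimilate = (MOD(kt, 50) == 0)        ! pretend an analysis was done here
         IF (kt == 1 .OR. assimilate) THEN
            x_new = x_now - dt * x_now          ! forward Euler: one time level only
         ELSE
            x_new = x_old - 2.0 * dt * x_now    ! leapfrog: two time levels
         END IF
         x_old = x_now
         x_now = x_new
      END DO
      PRINT *, 'final x =', x_now
   END PROGRAM euler_restart_sketch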

20 Current algorithms in PDAF
PDAF originated from comparison studies of different filters.
Global filters: EnKF (Evensen, perturbed obs.), ETKF (Bishop et al., 2001), SEIK filter (Pham et al., 1998), SEEK filter (Pham et al., 1998), ESTKF (Nerger et al., 2012), EWPF, NETF
Localized filters: LETKF (Hunt et al., 2007), LSEIK filter (Nerger et al., 2006), LESTKF (Nerger et al., 2012)
Smoothers (global and local): for ETKF/LETKF, ESTKF/LESTKF, EnKF
Not yet released: serial EnSRF, particle filter

21 Parallel Performance

22 Parallel Performance
Runs on 64 to 4096 processor cores of the SGI Altix ICE cluster (HLRN-II); 94-99% of the computing time is spent in the model integrations.
Speedup (increase the number of processes per model task, fixed ensemble size): factor 6 for 8x the processes per model task; one reason: the time-stepping solver needs more iterations.
Scalability (increase the ensemble size, fixed number of processes per model task): time increases by ~7% from 512 to 4096 processes (8x the ensemble size); one reason: more communication on the network.
(Figures: speedup and time-increase factor, 64/512 and 512/4096 processors.)
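Expressed as parallel efficiencies (derived from the numbers above, not stated on the slide):

   strong scaling within a model task: speedup 6 on 8x the processes -> efficiency 6/8 = 75%
   weak scaling over the ensemble: ~7% longer runtime for 8x the ensemble -> efficiency 1/1.07, about 93%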

23 Very big test case
Simulate a model: choose an ensemble; state vector per processor: 10^7; observations per processor: ...; ensemble size: 25; 2 GB memory per processor.
Apply the analysis step for different processor numbers: very small increase in analysis time (~1%).
State dimension: 1.2e11; observation dimension: 2.4e9.
Didn't try to run a real ensemble of the largest state size (no model of that size yet).
(Figure: timing of the global SEIK analysis step [s] vs. number of processor cores, for N=50 and N=...)
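A quick consistency check of these numbers (assuming double precision, 8 bytes per value, which the slide does not state explicitly):

   memory per processor: 10^7 values x 25 members x 8 bytes = 2 x 10^9 bytes, i.e. about 2 GB
   cores implied by the largest state dimension: 1.2 x 10^11 / 10^7 = 12,000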

24 Application examples run with PDAF
FESOM: global ocean state estimation (Janjic et al., 2011, 2012)
NASA Ocean Biogeochemical Model: chlorophyll assimilation (Nerger & Gregg, 2007, 2008)
HBM: coastal assimilation of SST and in situ data (S. Losa et al., 2013, 2014)
MITgcm: sea-ice assimilation (Q. Yang et al., NMEFC Beijing)
(Figures: sea surface elevation, RMS error in surface temperature, STD of sea ice concentration)
Plus external applications & users, e.g.:
Geodynamo (IPGP Paris, A. Fournier)
MPI-ESM (coupled ESM, IFM Hamburg, S. Brune) -> talk tomorrow
CMEMS BAL-MFC (Copernicus Marine Service Baltic Sea)
TerrSysMP-PDAF (hydrology, FZJ)
