Enabling Legacy Applications on Service Grids. Asvija B, C-DAC, Bangalore


1 Enabling Legacy Applications on Service Grids
Asvija B, C-DAC, Bangalore

2 Legacy Applications
- Service enablement of existing legacy applications is difficult:
  - No source code available
  - Proprietary code fragments
  - Production applications
  - Effort wasted in rewriting
- A possible option is to provide service front-ends

3 Service Wrappers
- The easiest approach for enabling many legacy applications as Grid services
- However, not a panacea for all applications
- Each application requires detailed analysis and study before service enablement
- This talk outlines how to write service wrappers, with MM5 as the case-study application

4 What is MM5?
- MM5: Mesoscale Meteorological Model Version 5
- An atmospheric model to simulate or predict mesoscale atmospheric circulation
- Mesoscale study: typical spatial scales between 10 and 1000 km
- Examples of mesoscale phenomena: thunderstorms, gap winds, downslope windstorms, land-sea breezes, and squall lines


6 What is MM5?
- Developed at Penn State and NCAR (National Center for Atmospheric Research), USA, as a community mesoscale model with contributions from users worldwide
- Currently, the MM5 software is freely provided and supported by the Mesoscale Prediction Group in the Mesoscale and Microscale Meteorology Division, NCAR


8 Model Details
Input data required:
- Topography and landuse (in categories)
- Gridded atmospheric data with at least these variables: sea-level pressure, wind, temperature, relative humidity and geopotential height; at these pressure levels: surface, 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100 mb
- Observation data containing soundings and surface reports

9 Model Details
- Mostly Fortran programs (the major portion of the code), plus some code in C
- F77 and F90 code
- Requires Intel's ifort compiler on Linux machines

10 Why Does MM5 Require the Grid?
- Huge input and output data sizes (of the order of gigabytes)
- Long execution times (typically hours, even for smaller data sets)
- Real-time simulation and predictions are required
- Data sets are distributed worldwide and require online download and processing
- The application is ideally suited to be packaged as a Grid workflow

11 MM5 I/P and O/P File Sizes
Files: BDYOUT_DOMAIN1, BDYOUT_DOMAIN2, LOWBDY_DOMAIN1, LOWBDY_DOMAIN2, MMINPUT_DOMAIN1, MMINPUT_DOMAIN2

12 MM5 I/P and O/P File Sizes
Files: TERRAIN_DOMAIN1, TERRAIN_DOMAIN2, MMOUT_DOMAIN1, MMOUT_DOMAIN2

13 Steps in Building the Wrapper
Step A - Identify the actual components of the application:
- Select the different components of your application that you wish to make available as a service
- Identify the input/output of each component
- Identify the procedure for invoking each component
- Identify the sequence in which these components have to be invoked
- Draw a flow chart of the sequence of operations

14 Steps in Building the Wrapper
Step B - Implement the core workflow:
- From the abstract flowchart of the model, write shell scripts that perform the individual actions on an identified cluster system
- Also make use of the Grid-specific commands for doing data transfers from the workflow system

15 Our Planned Workflow
Input Data -> MM5 (MPI Version) -> GRAPH -> Visualization Utilities -> Visual Output

16 Planning Data Staging
Input data staging:
- Use the Grid file transfer commands to stage in data to the remote machine (see the sketch below)
- Multiple input files can be archived into a single tar archive, copied, and later expanded
Output data staging:
- Output data and the visualization data have to be staged back to the end user's submit node
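
As an illustration, a minimal staging sketch using GT4's globus-url-copy (GridFTP) is given below. The host names, paths, and archive name are hypothetical placeholders for this example, not values taken from the deployed service.

  # Hypothetical stage-in: copy the input tar archive to the remote working
  # directory, then expand it.
  globus-url-copy gsiftp://input.host.example/data/mm5_input.tar \
                  file:///home/griduser/work/mm5_input.tar
  tar -xf /home/griduser/work/mm5_input.tar -C /home/griduser/work

  # Hypothetical stage-out: copy the final PDF back to the user-specified location.
  globus-url-copy file:///home/griduser/work/gmeta.pdf \
                  gsiftp://output.host.example/results/gmeta.pdf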

17 Planning Core Workflow
- Step 1. Create a temporary working directory based on a random-number input and copy the executables to it
- Step 2. Do a Grid file transfer for staging in (copying) the input data to the current working directory and expand the tar archive
- Step 3. Run the MM5 model (MPI version)

18 Planning Core Workflow
- Step 4. Run the GRAPH utility
- Step 5. Run the CTRANS utility to get a PostScript file
- Step 6. Run the PS2PDF utility to convert it into a PDF file
- Step 7. Do a Grid file transfer for staging out (copying) the output PDF file to the user-specified location

19 Implementing Core Workflow
Write a shell script for the workflow (a fuller end-to-end sketch follows after this fragment):

  # Run the MM5 model under MPI
  /usr/local/mpich-intel/bin/mpirun -np $no_of_nodes -machinefile hostfile mm5.mpp
  # Run the GRAPH utility
  ./graph.csh 1 1 MMOUT_DOMAIN1 &> /dev/null
  ...
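
Putting steps 1-7 together, a hedged sketch of such a wrapper script might look as follows. The script path, working-directory layout, argument order, and file names are assumptions for illustration; only mpirun, graph.csh, ctrans, ps2pdf and globus-url-copy correspond to tools named in these slides.

  #!/bin/bash
  # Illustrative MM5 wrapper workflow (steps 1-7); paths and arguments are assumed.
  RAND=$1; INPUT_HOST=$2; INPUT_FILE=$3; OUTPUT_HOST=$4; OUTPUT_DIR=$5
  NO_OF_NODES=${6:-8}

  # Step 1: temporary working directory named from the random-number input
  WORKDIR=/tmp/mm5_run_$RAND
  mkdir -p $WORKDIR && cp /usr/local/mm5/bin/* $WORKDIR && cd $WORKDIR

  # Step 2: stage in the input tar archive (INPUT_FILE is an absolute path) and expand it
  globus-url-copy gsiftp://${INPUT_HOST}${INPUT_FILE} file://$WORKDIR/input.tar
  tar -xf input.tar

  # Step 3: run the MM5 model (MPI version)
  /usr/local/mpich-intel/bin/mpirun -np $NO_OF_NODES -machinefile hostfile mm5.mpp

  # Steps 4-6: post-process with GRAPH, CTRANS and ps2pdf
  ./graph.csh 1 1 MMOUT_DOMAIN1 &> /dev/null
  ctrans -d ps.color gmeta > gmeta.ps
  ps2pdf gmeta.ps gmeta.pdf

  # Step 7: stage the output PDF back to the user-specified location
  globus-url-copy file://$WORKDIR/gmeta.pdf gsiftp://${OUTPUT_HOST}${OUTPUT_DIR}/gmeta.pdf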

20 Service Wrapper Design
Inputs for the service:
- Input host name: name of the host where the input files are located
- Input data file name: absolute path of the input tar archive on the input host
- Output host name: name of the host where the output files are to be copied
- Output data dir: absolute path of the directory on the output host where the output data has to be copied

21 WSDL for the Input Type

  <xsd:element name="run">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="inputHost" type="xsd:string" />
        <xsd:element name="inputFile" type="xsd:string" />
        <xsd:element name="outputHost" type="xsd:string" />
        <xsd:element name="outputDir" type="xsd:string" />
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

22 Implementing the Server

  // Build the command line for the wrapper script, appending the
  // randomly generated run id and the four staging parameters.
  StringBuffer commandBuf = new StringBuffer();
  commandBuf.append("/usr/local/mm5/mm5.sh ");
  commandBuf.append(randInt);
  commandBuf.append(" ").append(inputHost);
  commandBuf.append(" ").append(inputFile);
  commandBuf.append(" ").append(outputHost);
  commandBuf.append(" ").append(outputDir);
  String command = commandBuf.toString();
  // Run the wrapper script as an external process.
  Process p = Runtime.getRuntime().exec(command);

23 Building & Deploying

  # Build the service
  ./globus-build-service.sh MM5
  # Deploy the generated GAR file into the GT4 container
  globus-deploy-gar in_cdac_mm5_services.gar
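
After a new GAR is deployed, the GT4 container normally has to be restarted so that the service is picked up. A minimal sketch, assuming the standard container scripts under $GLOBUS_LOCATION/bin, is:

  # Restart the standard GT4 container after deploying the GAR
  $GLOBUS_LOCATION/bin/globus-stop-container
  $GLOBUS_LOCATION/bin/globus-start-container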

24 Accessing the Service
The service can be accessed using a URL similar to this:
https://hyd01.hardware.cdac.ernet.in:<port>/wsrf/services/MM5/MM5Service
(hyd01.hardware.cdac.ernet.in is the server name)

25 How to Invoke Grid Services?

26 Deployed Service Model

27 Service Details
- Service URL: .../wsrf/services/MM5/MM5Service
- MM5 Distributed Memory (DM) parallel option (MPP)
- Runs using MPICH
- Deployed at resources in C-DAC Hyderabad: 8 nodes running Red Hat Enterprise Linux 4, Intel Xeon dual-core CPUs

28 Service Details
- Deployed as a WSRF service
- Deployed in the standard GT4 container
- StdOut and StdErr of the application are exposed as WSRF ResourceProperties
- Implements WS-Notification (WSN) for StdOut and StdErr changes
- Clients can subscribe for streaming StdOut and StdErr
- Clients can also subscribe for the program exit event
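
For clients that prefer polling over subscribing, the exposed ResourceProperties can also be read with the standard GT4 command-line tools. The sketch below is illustrative only: the property QName is an assumed example, the port placeholder must be filled in, and any resource key the service may require is ignored.

  # Hypothetical polling of the StdOut resource property with GT4 CLI tools;
  # replace <port> with the container's port and the QName with the real one.
  SERVICE="https://hyd01.hardware.cdac.ernet.in:<port>/wsrf/services/MM5/MM5Service"

  # Dump the service's resource property document
  wsrf-query -s "$SERVICE"

  # Fetch a single resource property (the namespace below is an assumed example)
  wsrf-get-property -s "$SERVICE" "{http://mm5.example/namespace}StdOut"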

29 Service Details
Service input:
- Number of nodes to use
- Input host and input file path
- Output host and output file path
I/P files:
- Model initial condition file(s): MMINPUT_DOMAINx
- Lateral and lower boundary condition files for the coarsest domain: BDYOUT_DOMAIN1, LOWBDY_DOMAINx
- Nest terrain file(s) from program TERRAIN: TERRAIN_DOMAIN2, 3, etc.
O/P files:
- MMOUT_DOMAIN files
- Post-processed file from GRAPH-CTRANS: gmeta.pdf

