Delft3d-FLOW Quick Start Manual


Michael Kliphuis, April 2

1. Introduction

Delft3D-FLOW is a multi-dimensional (2D or 3D) hydrodynamic (and transport) simulation program which calculates non-steady flow and transport phenomena resulting from tidal and meteorological forcing on a rectilinear or a curvilinear, boundary-fitted grid. Chapter 2 describes how to start a parallel Delft3D-FLOW run on a cluster like Cartesius, and chapter 3 describes how to plot the results of a run with MATLAB. We discuss specific scripts and a handy application called QUICKPLOT.

2. Start a Delft3d-FLOW run

This chapter describes how to quickly set up and start a Delft3D-FLOW run. As an example we take a low-resolution (approximately 200 m) run that simulates Lake Garda in Italy, on the dijkbio login on Cartesius.

2.1 Log in to Cartesius

ssh -X dijkbio@cartesius.surfsara.nl

with password: ****** (you know it; I'd better not mention it here because this document will be on my website).

All the files needed to start a run are on Cartesius* on the dijkbio login, in the directory:

/projects/0/samoc/henk/delft3d/garda/f2004/200m/scripts/run_template

If you want to start a new run to test parameters, e.g. let's call it lg_run1, then type:

cd /projects/0/samoc/henk/delft3d/garda/f2004/200m/scripts
cp -r run_template lg_run1
cd lg_run1

* If you want to run it on another login on Cartesius or on another machine, please contact Michael Kliphuis (m.kliphuis@uu.nl).
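These steps, plus the run.sh edit described next, can also be done in one go. A minimal sketch, assuming GNU sed is available (the sed command is my shortcut and not part of the template; you can of course also edit run.sh by hand):

# create a new run directory lg_run1 from the template
cd /projects/0/samoc/henk/delft3d/garda/f2004/200m/scripts
cp -r run_template lg_run1
cd lg_run1
# point the working-directory line in run.sh at the new directory (see below)
sed -i 's|scripts/run_template|scripts/lg_run1|' run.sh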

Then, in run.sh, change the line

cd /projects/0/samoc/henk/delft3d/garda/f2004/200m/scripts/run_template

into

cd /projects/0/samoc/henk/delft3d/garda/f2004/200m/scripts/lg_run1

If needed, change any of the parameters in the file g2004.mdf (check the meaning of the parameters in the manual). After that you are ready to start the run.

2.2 Start run (in the batch queue)

1. cd /projects/0/samoc/henk/delft3d/garda/f2004/200m/scripts/lg_run1

2. Open the MDF file g2004.mdf.

Determine the reference start date of the run, e.g. set it to 1 January 2004:

Itdate = # #

Set the restart file from which you want to restart. We did a spinup run of 5 model years and we want to start from its last restart file trim-y2008_21jun-31dec_f2004.dat, containing all model days from 21 Jun 2008 to 31 Dec 2008:

Restid = #trim-y2008_21jun-31dec_f2004#

Suppose you want to run for 1 extra model year. Then set the start time 'Tstart' of your run as follows. Since the value of 'Tunit' is #M#, which means minutes, set it to:

Tstart = 2630880

In this example we want to start at the last day of the spinup run, so at the last day of the restart file Restid = trim-y2008_21jun-31dec_f2004. The name is a bit misleading, but this is actually 1 Jan 2009 = day 1827 with respect to Itdate = 1 Jan 2004 (366*2 + 3*365 = 1827). So the start minute is Tstart = 1827*60*24 = 2630880.

Set the stop time 'Tstop' of your run 1 model year later. Since the value of 'Tunit' is #M#, which means minutes, set it to:

Tstop = 3156480

Tstop is the end of year 2009 = (1827 + 365)*60*24 = 3156480. Note that the years 2004 and 2008 are leap years.
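The minute values above can be cross-checked from calendar dates. A minimal sketch, assuming GNU date is available (this helper is my addition and not part of the run scripts; the dates follow the example above):

# Tstart/Tstop in minutes since Itdate, for Tunit = #M#
itdate="2004-01-01"
start="2009-01-01"   # first day after the spinup, i.e. day 1827
stop="2010-01-01"    # one model year (365 days) later
to_min () { echo $(( ( $(date -ud "$1" +%s) - $(date -ud "$itdate" +%s) ) / 60 )); }
echo "Tstart = $(to_min "$start")"   # 1827*24*60 = 2630880
echo "Tstop  = $(to_min "$stop")"    # 2192*24*60 = 3156480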

Set the frequency of writing 3D output, like temperature and velocities, to a so-called map file (in this case trim-g2004.dat), e.g. set it to:

Flmap =

which means: start writing at minute 0 and stop at minute 3156480 (Tstop), with a time interval of 1440 minutes. In other words, write to the trim file every model day.

Set the frequency of writing 2D output, like temperature and velocities in certain observation points (set in the file obspoints_unitn_62x222.obs, see the file table in section 2.3), to a so-called history file (in this case trih-g2004.dat), e.g. set it to:

Flhis =

which means: start writing at minute 0 and stop at minute 3156480, with a time interval of 60 minutes. In other words, write to the trih file every model hour.

Set the frequency of writing a restart file (in this case, for instance, trirst.g…), e.g. set it to:

Flrst =

which means: write a restart file every 43200 minutes (30 model days). In fact these restart files are not used; we use the trim*.dat files instead (also possible). Perhaps we can just leave this empty then???

These were the main keyword settings, but there are many more keywords you can set in the MDF file. For a complete list, go to section A.1.2 on page 417 of the user manual (Delft3D-FLOW_User_Manual.pdf).

3. Open the file run.sh.

Set the number of cores you want to run on, e.g. set it to 32:

#SBATCH -n 32

Set the name of the job, e.g. set it to 200m_garda:

#SBATCH -J 200m_garda

Reserve time for running this job in the batch queue, e.g. reserve 30 hours:

#SBATCH -t 30:00:00

Make sure this is long enough for the job to finish. If it finishes sooner, that is no problem, but the higher you set this value, the longer it can take for the job to get picked up.

4. Start the job by submitting it to the batch queue by typing:

sbatch run.sh
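For reference, the top of run.sh might then look roughly as follows. The #SBATCH lines follow step 3; the launch line is an assumption on my part (the actual run_template script may start d_hydro.exe differently), with the config file argument config_d_hydro.xml described in section 2.3:

#!/bin/bash
#SBATCH -n 32            # number of cores (step 3)
#SBATCH -J 200m_garda    # job name
#SBATCH -t 30:00:00      # reserved wallclock time

# go to the run directory (the line you changed in section 2.1)
cd /projects/0/samoc/henk/delft3d/garda/f2004/200m/scripts/lg_run1

# start Delft3D-FLOW; the launcher below is an assumed example
srun d_hydro.exe config_d_hydro.xml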

5. Check the status of the job by typing: mysqueue (this is an alias for squeue -u dijkbio, where -u denotes the login you work on, in this case dijkbio). You then see, for instance:

JOBID    PARTITION  NAME    USER     TIME_LIMIT  START_TIME           ST  TIME   NODES  NODELIST(REASON)
<jobid>  broadwell  200m_g  dijkbio  4-00:00:00  2017-05-22T14:28:34  R   10:50  1      tcn921

which means that the job has job id <jobid>, it is running on a broadwell node and has the name 200m_g. It is reserved to run for 96 wallclock hours (4 wallclock days), it started on 22 May 2017 at 14:28:34, it has status (ST) running (R), and it has already been running for 10m 50s on 1 node, namely node tcn921.

If you want to stop the run, type: scancel <jobid>

Check how far the run is by opening the file slurm-<jobid>.out. I usually type:

tail -f slurm-<jobid>.out

6. Finally, AND THIS IS VERY IMPORTANT! When the run is finished, rename the output files:

trih-g2004.dat
trih-g2004.def
trim-g2004.dat
trim-g2004.def

If you do not do this, then starting the run again or restarting it from another restart file will overwrite these files and you lose all data. I usually run for 1 model year, e.g. year 2009, and when the run is done I type:

mv trih-g2004.dat trih-y2009_f2004.dat
mv trih-g2004.def trih-y2009_f2004.def
mv trim-g2004.dat trim-y2009_f2004.dat
mv trim-g2004.def trim-y2009_f2004.def
cp g2004.mdf g2004.mdf_y2009

After that you can restart the run to simulate year 2010 by setting in g2004.mdf:

Tstart = 3156480
Tstop = 3682080
Restid = #trim-y2009_f2004#

(Tstart is the old Tstop; Tstop is one model year of 365 days later.) The next section describes all the files in the run directory that are needed for the run.
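The renaming of step 6 can be wrapped in a tiny script so that the model year only has to be typed once. A convenience sketch (the YEAR variable and the loop are my addition):

# rename the output files of a finished run so a restart cannot overwrite them
YEAR=2009   # the model year that was just simulated
cd /projects/0/samoc/henk/delft3d/garda/f2004/200m/scripts/lg_run1
for ext in dat def; do
    mv trih-g2004.${ext} trih-y${YEAR}_f2004.${ext}
    mv trim-g2004.${ext} trim-y${YEAR}_f2004.${ext}
done
cp g2004.mdf g2004.mdf_y${YEAR}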

2.3 More info on the needed files

The table below shows all the files (and information about them) in the directory run_template (and lg_run1). These files are needed in order to run this Delft3D-FLOW run:

g2004.mdf
    the so-called Master Definition Flow file (MDF-file), which contains all information to execute a flow simulation. It is the main input file in which a user sets the time step, the start and stop time, the parametrizations used, etc., but also the name of the grid file (.grd), the bathymetry file (.dep), the forcing files containing the atmospheric wind forcing (.amu and .amv) and all other files in this table. Here it is called g2004.mdf because it starts a run that is forced with year-2004 atmospheric forcing.
Gardagrid_unitn.grd
    grid file
Gardagrid_unitn.enc
    grid enclosure file
benaco_….dep
    bathymetry file
drypoints_unitn.dry
    indices of dry points file
obspoints_unitn_62x222.obs
    observation points file
…_interp.amc
    air cloudiness forcing file (7 years, all containing the forcing of year 2004)
…_interp.amp
    atmospheric pressure forcing file (7 years, all containing the forcing of year 2004)
…_interp.amr
    relative air humidity forcing file (7 years, all containing the forcing of year 2004)
…_interp.amt
    air temperature forcing file (7 years, all containing the forcing of year 2004)
…_interp.amu
    wind speed in east direction forcing file (7 years, all containing the forcing of year 2004)
…_interp.amv
    wind speed in north direction forcing file (7 years, all containing the forcing of year 2004)
…_interp.ams
    solar radiation forcing file (7 years, all containing the forcing of year 2004)
2004_interp.grd
    forcing grid file (set in the 7 forcing files above)
config_d_hydro.xml
    config file expected as argument by the Delft3D-FLOW executable d_hydro.exe; it contains the name of the MDF file (in this case g2004.mdf)
run.sh
    script to start the run in the batch queue on Cartesius

Making the grid (.grd) and enclosure (.enc) files is done with the Delft3D application RGFGRID; the bathymetry (.dep) file is made with the Delft3D application QUICKIN. How to do this is beyond the scope of this document; the RGFGRID and QUICKIN manuals show how.
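Before submitting a run it can be handy to check that the files from the table above are indeed present in the run directory. A small convenience sketch (the loop and the patterns are my addition; adjust the patterns if your file names differ):

# warn about missing input files in the run directory
cd /projects/0/samoc/henk/delft3d/garda/f2004/200m/scripts/lg_run1
for pattern in '*.mdf' '*.grd' '*.enc' '*.dep' '*.dry' '*.obs' \
               '*.amc' '*.amp' '*.amr' '*.amt' '*.amu' '*.amv' '*.ams' \
               'config_d_hydro.xml' 'run.sh'; do
    ls $pattern > /dev/null 2>&1 || echo "missing: $pattern"
done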

3. Analyse and plot the results

Running Delft3D-FLOW with the .mdf file above will generate 3D map files:

trim-g2004.dat
trim-g2004.def

and 2D history files:

trih-g2004.dat
trih-g2004.def

The .dat files contain the actual data; the .def files contain metadata such as the size and description of the variables. The files are in a specific format and can be read with MATLAB.

3.1 With QUICKPLOT

The easiest way to check the variables is with a MATLAB application called QUICKPLOT. You do this as follows:

1. cd /projects/0/samoc/henk/delft3d/garda/f2004/200m/matlab
2. type: module load matlab
3. type: matlab
4. In the MATLAB command window type: d3d_qp

Don't worry about all the WARNINGS you will get! The QUICKPLOT main window will appear.

5. Select an input file by clicking on File, then Open File, and select for instance the map output file one directory up in ../scripts/lg_run1, called trim-g2004.dat.

6. By default the selected variable is 'morphological grid'. Select one of the many other variables, for instance 'temperature'.

7. This variable is valid for all 100 depth layers. If you only want to see the temperature in the highest layer (100), deselect the 'all' box and set the value to 100.

8. Plot the 'temperature' by clicking the 'Quick View' button at the lower right. You then get a new window showing the plot.

You can step through other time frames by clicking on the < or > of the time slider at the lower left of the plot window. For more information about the use of QUICKPLOT, check its manual.

3.2 With myscripts

Another way to check the results is with some MATLAB scripts written by Marina, Sebastiano and Michael, which plot basic things like:

1. Timeseries of, for instance, the mean SST or the mean volume temperature.
2. X-Y grid plots of variables like the temperature anomaly with respect to the mean, with arrows in the background showing the magnitude and direction of the wind or water flow.
3. X-Z (or Y-Z) grid transect plots of variables like temperature and zonal velocity U (meridional velocity V) along a transect going from the west to the east (or south to north) side of the lake.

Below we analyse a 10-year-long 200 m resolution Lake Garda run with a realistic atmospheric forcing for the years 2004-2013. Usually a Delft3D run generates multiple output files. This particular run generated the following three so-called trim files in the directory /projects/0/samoc/henk/delft3d/garda/f…/200m/scripts/lg_run:

- trim-1jan2004-13mar2008_f….dat
- trim-13mar2008-1jan2011_f….dat
- trim-1jan2011-31dec2013_f….dat

For postprocessing it is not handy to have the data in multiple files. We therefore made a script data_extractor.m that extracts layers and transects from multiple output files, like the three above, and puts them in one file.

3.2.1 Extract layers and/or transects

1. type: module load matlab
2. type: cd /projects/0/samoc/henk/delft3d/garda/f…/200m/matlab/myscripts

Before you decide to extract a certain transect, make sure that the transect is a continuous line, without interruptions, going from the east to the west side or from the south to the north side of the lake. Remember that this 200 m resolution run is defined on M x N = 64 x 224 gridpoints. The south-to-north transects are so-called M transects (M ∈ [0,64]); the east-to-west transects are so-called N transects (N ∈ [0,224]).

3. With the script show_transect.m you can check whether a transect is continuous. Open this script and set for instance:

IDtransect = 'n13';
plotid = 'transect_n13_long_lg_run_y2013';

Then type:

matlab -nosplash -noawt < show_transect.m

This generates an image file transect_n13_long_lg_run_y2013_position.png (open it with the command: display).
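If you want to check several candidate transects in one go, the IDtransect and plotid lines can be rewritten with sed before each MATLAB call. A convenience sketch (the loop and the sed edits are my addition and not part of myscripts):

# generate position plots for a few candidate transects (run in the myscripts directory)
module load matlab
for t in n13 n100 m23; do
    sed -i "s/^IDtransect = .*/IDtransect = '${t}';/" show_transect.m
    sed -i "s/^plotid = .*/plotid = 'transect_${t}_long_lg_run_y2013';/" show_transect.m
    matlab -nosplash -noawt < show_transect.m
done
# inspect the resulting transect_*_position.png files with: display <file>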

As you can see, the line N=13 is not continuous. But for instance N=100 is OK, and transect M=23 is OK too.

4. Suppose we are interested in layers 99 and 100 and in transects M=23 and N=100. Then open the script data_extractor.m and set:

flag_grid = 1;
flag_transect = 1;
transectsm = [23];
transectsn = [100];
flag_layer = 1;
layer = [99 100];

5. Then type:

matlab -nosplash -noawt < data_extractor.m

This will extract the variables u, v, w, t, nuz, windu and windv on the requested layers and transects. The data are extracted for all model days of the simulation, in this case 3653 days, and put in the underlying directory extracted/. The script also extracts the grid (X, Y, Z, bathymetry) that was used for this run; most of the plot scripts mentioned below need this grid file. Extracting the grid and layers does not take long, but extracting just one transect can take up to 45 minutes! Fortunately many of the layers and transects are already extracted; just check in the directory extracted/ which ones you can use. If more or other variables are needed, the user can change the scripts f_extract_layer.m, f_extract_transectm.m and f_extract_transectn.m that are called by data_extractor.m.

3.2.2 Plot timeseries

6. Now plot the timeseries of the mean SST of the lake for all the generated years. For this purpose we need the file layer100_long_lg_run_y…_days….mat, which was generated in step 5. Then type:

matlab -nosplash -noawt < plot_tseries_sst.m

This generates an image file tseries_mean_sst_long_lg_run_y….png (open it with the command: display).
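Steps 5 and 6 as they would be typed in one shell session, just as a recap of the commands above (run in the myscripts directory of step 2):

module load matlab
matlab -nosplash -noawt < data_extractor.m     # extract the grid, layers 99/100 and transects m23/n100
ls extracted/                                  # see which layers and transects are now available
matlab -nosplash -noawt < plot_tseries_sst.m   # plot the mean-SST timeseries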

3.2.3 Plot layers

7. Now let us plot data on layer 100. More specifically, make an X-Y grid plot of the mean year-2013 (the last year of the simulation) temperature anomaly with respect to the lake-mean temperature on the surface layer 100, with arrows in the background showing the magnitude and direction of the wind or water flow. For this purpose we (again) need the file layer100_long_lg_run_y…_days….mat, which was generated in step 5. Then open the script plot_layer.m and set:

file = 'layer100_long_lg_run_y…_days….mat'
grid = 'grid_long_lg_run_y….mat'
plotid = 'wind_and_flow_layer100_long_lg_run_y2013'

Also, in order to average over year 2013, set:

% define the time window to average the data over
% order in datenum below = datenum(y,m,d)
date1=datenum([ ]);
date2=datenum([ ]); % for some reason the last day does not work

It is of course also possible to average over other years or, for instance, over the month March or September of a specific year.

8. Then type:

matlab -nosplash -noawt < plot_layer.m

This generates an image file wind_and_flow_layer100_long_lg_run_y2013.png (open it with the command: display).

3.2.4 Plot transects

9. Now let us make some X-Z (and Y-Z) grid transect plots of variables like temperature and zonal velocity U (meridional velocity V) along a transect going from the west to the east (or south to north) side of the lake. Again we want to plot mean values of year 2013 (the last year of the simulation). We first plot transect N=100, discussed in step 3. For this purpose we need the file n100_long_lg_run_y…_days….mat, which was generated in step 5. Then open the script plot_transectn.m and set:

file = 'n100_long_lg_run_y…_days….mat'
IDtransect = 'n100';

plotid = 'transect_n100_long_lg_run_y2013';
% define the time window to average the data over
% order in datenum below = datenum(y,m,d)
date1=datenum([ ]);
date2=datenum([ ]); % for some reason the last day does not work

10. Then type:

matlab -nosplash -noawt < plot_transectn.m

This generates 5 image files:

transect_n100_long_lg_run_y2013_position.png
transect_n100_long_lg_run_y2013_v.png
transect_n100_long_lg_run_y2013_t.png
transect_n100_long_lg_run_y2013_dt.png
transect_n100_long_lg_run_y2013_dtnorm.png

(open them with the command: display)

The first one is the same plot as can be made with show_transect.m, discussed in step 3; it shows the position of the transect. The plots are shown below.

And below, the anomaly with respect to the mean:

Next is the anomaly above, normalized relative to Tmean:

11. Finally, let us make the same plots for transect M=23, discussed in step 3. For this purpose we need the file m23_long_lg_run_y…_days….mat, which was generated in step 5. Then open the script plot_transectm.m and set:

file = 'm23_long_lg_run_y…_days….mat'
IDtransect = 'm23';
plotid = 'transect_m23_long_lg_run_y2013';
% define the time window to average the data over
% order in datenum below = datenum(y,m,d)
date1=datenum([ ]);
date2=datenum([ ]); % for some reason the last day does not work

12. Then type:

matlab -nosplash -noawt < plot_transectm.m

This generates 5 image files:

transect_m23_long_lg_run_y2013_position.png
transect_m23_long_lg_run_y2013_v.png
transect_m23_long_lg_run_y2013_t.png
transect_m23_long_lg_run_y2013_dt.png
transect_m23_long_lg_run_y2013_dtnorm.png

(open them with the command: display)

The first one is the same plot as can be made with show_transect.m, discussed in step 3; it shows the position of the transect. The plots are shown below.


And below, the anomaly with respect to the mean.

Next is the anomaly above, normalized relative to Tmean.
