repex Documentation Release 0.2 Antons Treikalis

October 13, 2015


Contents

1 Introduction
  1.1 What is RepEx?
  1.2 What can I do with it?
  1.3 Why should I use it?
2 Installation
3 Getting Started
  3.1 Invoking RepEx
  3.2 T-REMD example (peptide ala10) with Amber kernel
4 One-dimensional REMD simulations
  4.1 US-REMD example using Alanine Dipeptide system with Amber kernel
5 Multi-dimensional REMD simulations
  5.1 TUU-REMD example (alanine dipeptide) with Amber kernel
6 Replica Exchange Patterns
  6.1 Synchronous Replica Exchange Pattern
  6.2 Asynchronous Replica Exchange Pattern
7 Flexible execution modes
  7.1 Execution Strategy S1
  7.2 Execution Strategy S2
  7.3 Execution Strategy S3
8 Tutorial
  8.1 Running on Stampede
  8.2 Running on Archer
  8.3 T-REMD example (peptide ala10) with Amber kernel
  8.4 US-REMD example using Alanine Dipeptide system with Amber kernel
  8.5 TUU-REMD example (alanine dipeptide) with Amber kernel
9 Frequently Asked Questions
  9.1 Where are .mdout files?
  9.2 Where are .mdinfo files?
  9.3 How can I obtain information about accepted exchanges?
  9.4 How can I obtain information about attempted exchanges?

10 Indices and tables



CHAPTER 1

Introduction

1.1 What is RepEx?

RepEx is a new Replica-Exchange Molecular Dynamics (REMD) simulation package written in the Python programming language. RepEx supports Amber [1] and NAMD [2] as Molecular Dynamics application kernels and can easily be modified to support any conventional MD package. The main motivation behind RepEx is to enable efficient and scalable multidimensional REMD simulations on HPC systems, while separating execution details from the simulation setup specific to a given MD package. RepEx provides several Execution Patterns designed to meet the needs of its users. RepEx relies on the Pilot-Job concept to run RE simulations on HPC clusters; specifically, it uses the RADICAL-Pilot Pilot system to execute its workloads. RepEx takes advantage of task-level parallelism to run REMD simulations. RepEx is a modular, object-oriented code, designed to facilitate the development of extension modules by its users.

[1] -
[2] -

1.2 What can I do with it?

The following one-dimensional REMD simulations are currently supported: Temperature-Exchange (T-REMD), Umbrella Sampling (US-REMD), and Salt Concentration (S-REMD). The supported one-dimensional cases can be combined into multi-dimensional cases with arbitrary ordering and number of dimensions. This level of flexibility is not attainable with conventional MD software packages. RepEx can easily be used as a testing platform for new or unexplored REMD algorithms. Due to the relative simplicity of the code, development time is significantly reduced, enabling scientists to focus on their experiments rather than on software engineering.

1.3 Why should I use it?

While many MD software packages provide implementations of REMD algorithms, a number of implementation challenges exist. Although REMD algorithms are very well suited for parallelization, implementing dynamic pairwise communication between replicas is non-trivial.
This results in REMD implementations that are limited in the number of parameters that can be exchanged and rigid in their synchronization mechanisms. These challenges, together with limitations arising from design specifics, contribute to scalability barriers in some MD software packages. For many scientific problems, simulations with numbers of replicas on the order of thousands would substantially improve sampling quality. The main distinguishing features of RepEx are:

- a low barrier for the implementation of new REMD algorithms, facilitated by the separation of simulation execution details from the implementation specifics of the current MD package
- functionality to run multi-dimensional REMD simulations with arbitrary ordering of dimensions
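As background for the algorithms RepEx implements, the exchange decision at the heart of T-REMD is the standard Metropolis criterion for swapping two replicas at different temperatures. The sketch below is the textbook formula, not necessarily RepEx's exact code; the function name and the Boltzmann constant in kcal/(mol*K) are illustrative choices:

```python
import math
import random

# Boltzmann constant in kcal/(mol*K), matching Amber's energy units
KB = 0.0019872041

def t_remd_accept(e_i, e_j, t_i, t_j, rng=random.random):
    """Metropolis test for swapping configurations of replicas i and j.

    e_i, e_j: potential energies (kcal/mol); t_i, t_j: temperatures (K).
    Returns True if the exchange is accepted.
    """
    delta = (1.0 / (KB * t_i) - 1.0 / (KB * t_j)) * (e_j - e_i)
    # Downhill swaps (delta <= 0) are always accepted; uphill swaps
    # are accepted with probability exp(-delta).
    return delta <= 0.0 or rng() < math.exp(-delta)
```

When the hotter replica has found a lower-energy configuration, delta is negative and the swap is always accepted; otherwise acceptance is probabilistic.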

CHAPTER 2

Installation

This page describes the requirements and the procedure to install the RepEx package.

Note: Prerequisites. The following are the minimal requirements to install the RepEx package:

- python >= 2.7
- virtualenv >= 1.11
- pip >= 1.5
- password-less ssh login to the target cluster

The easiest way to install RepEx is to create a virtualenv. This way, RepEx and its dependencies can easily be installed in user-space without clashing with potentially incompatible system-wide packages.

Tip: If the virtualenv command is not available, try the following set of commands:

```
wget --no-check-certificate
tar xzf virtualenv-1.11.tar.gz
python virtualenv-1.11/virtualenv.py --system-site-packages $HOME/repex-env/
source $HOME/repex-env/bin/activate
```

Step 1: Create and activate the virtualenv:

```
virtualenv $HOME/repex-env/
source $HOME/repex-env/bin/activate
```

Step 2: Install RepEx:

```
git clone
cd radical.repex
python setup.py install
```

Now you should be able to print the installed version of RepEx:

```
repex-version
```

Installation is complete!


CHAPTER 3

Getting Started

In this section we briefly describe how RepEx is invoked and how input and resource configuration files should be used. We also introduce two concepts central to RepEx: Replica Exchange Patterns and Execution Strategies.

3.1 Invoking RepEx

To run RepEx, users invoke the command line tool corresponding to the MD package kernel they intend to use. For example, a user who wants to use Amber as the MD kernel would use the repex-amber command line tool. In addition to the appropriate command line tool, the user needs to specify a resource configuration file and a REMD simulation input file. The resulting invocation of RepEx is:

```
repex-amber --input=tsu_remd_ace_ala_nme.json --rconfig=stampede.json
```

where:
- --input= specifies the REMD simulation input file
- --rconfig= specifies the resource configuration file

Both the REMD simulation input file and the resource configuration file must conform to the JSON format.

Resource configuration file

The following parameters must be provided in the resource configuration file:
- resource - the name of the target machine. Currently supported machines are:
  - local.localhost - your local system
  - xsede.stampede - Stampede supercomputer at TACC
  - xsede.supermic - SuperMIC supercomputer at LSU
  - xsede.comet - Comet supercomputer at SDSC
  - xsede.gordon - Gordon supercomputer at SDSC
  - epsrc.archer - Archer supercomputer at EPCC
  - ncsa.bw_orte - Blue Waters supercomputer at NCSA
- username - your username on the target machine
- project - your allocation on the specified machine
- cores - the number of cores you would like to allocate

- runtime - how long to allocate the cores on the target machine (in minutes)

The following optional parameters may also be provided:
- queue - which queue to use for job submission. Values are machine specific.
- cleanup - whether files on the remote machine must be deleted. Possible values: True or False

An example resource configuration file for the Stampede supercomputer might look like this:

```json
{
    "target": {
        "resource" : "stampede.tacc.utexas.edu",
        "username" : "octocat",
        "project"  : "TG-XYZ123456",
        "queue"    : "development",
        "runtime"  : "30",
        "cleanup"  : "False",
        "cores"    : "16"
    }
}
```

REMD input file for the Amber kernel

For use with the Amber kernel, the following parameters must be provided in the REMD simulation input file:
- re_pattern - the Replica Exchange Pattern to use; options are: S (synchronous) and A (asynchronous)
- exchange - the type of REMD simulation; for 1D simulations the options are: T-REMD, S-REMD and US-REMD
- number_of_cycles - number of cycles for a given simulation
- number_of_replicas - number of replicas to use
- input_folder - path to the folder containing the simulation input files
- input_file_basename - base name of the generated input/output files
- amber_input - name of the input file template
- amber_parameters - name of the parameters file
- amber_coordinates - name of the coordinates file
- replica_mpi - whether sander or sander.MPI is used for the MD step. Options: True or False
- replica_cores - number of cores to use for the MD step of each replica; if replica_mpi is False this parameter must equal 1
- steps_per_cycle - number of simulation time-steps
- download_mdinfo - whether Amber .mdinfo files must be downloaded. Options: True or False. If omitted, defaults to True
- download_mdout - whether Amber .mdout files must be downloaded. Options: True or False. If omitted, defaults to True

Optional parameters are specific to each simulation type.

An example REMD simulation input file for a T-REMD simulation might look like this:

```json
{
    "remd.input": {
        "re_pattern": "S",
        "exchange": "T-REMD",
        "number_of_cycles": "4",
        "number_of_replicas": "16",
        "input_folder": "t_remd_inputs",
        "input_file_basename": "ace_ala_nme_remd",
        "amber_input": "ace_ala_nme.mdin",
        "amber_parameters": "ace_ala_nme.parm7",
        "amber_coordinates": "ace_ala_nme.inpcrd",
        "replica_mpi": "False",
        "replica_cores": "1",
        "min_temperature": "300",
        "max_temperature": "600",
        "steps_per_cycle": "1000",
        "download_mdinfo": "True",
        "download_mdout": "True"
    }
}
```

3.2 T-REMD example (peptide ala10) with Amber kernel

We will take a look at a Temperature-Exchange REMD example using the peptide ala10 system with the Amber simulation kernel. To run this example locally you must have Amber installed on your system. If you don't have Amber installed, please download and install it following the instructions on the Amber website.

This guide assumes that you have already cloned the RepEx repository during the installation. If you haven't, please do:

```
git clone
```

and cd into the repex examples directory where the input files reside:

```
cd radical.repex/examples/amber
```

Amongst other things, the following are present in this directory:
- t_remd_inputs - input files for T-REMD simulations
- t_remd_ala10.json - REMD input file for the Temperature-Exchange example using the peptide ala10 system
- local.json - resource configuration file to run on a local system (your laptop)

3.2.1 Run locally

To run this example locally you need to make appropriate changes to the local.json resource configuration file. Open this file in your favorite text editor (vim in this case):

```
vim local.json
```

By default this file looks like this:

```json
{
    "target": {
        "resource": "local.localhost",
        "username": "octocat",
        "runtime": "30",
        "cleanup": "False",
        "cores": "4"
    }
}
```

You need to modify only two parameters in this file:
- username - your username on your laptop
- cores - the number of cores supported by your laptop

Next you need to verify that the parameters specified in the t_remd_ala10.json REMD input file satisfy your requirements. By default the t_remd_ala10.json file looks like this:

```json
{
    "remd.input": {
        "re_pattern": "S",
        "exchange": "T-REMD",
        "number_of_cycles": "4",
        "number_of_replicas": "8",
        "input_folder": "t_remd_inputs",
        "input_file_basename": "ala10_remd",
        "amber_input": "ala10.mdin",
        "amber_parameters": "ala10.prmtop",
        "amber_coordinates": "ala10_minimized.inpcrd",
        "replica_mpi": "False",
        "replica_cores": "1",
        "exchange_mpi": "False",
        "min_temperature": "300",
        "max_temperature": "600",
        "steps_per_cycle": "4000",
        "download_mdinfo": "True",
        "download_mdout": "True"
    }
}
```

In comparison with the general REMD input file format discussed above, this input file contains some additional parameters:
- min_temperature - minimal temperature value to be assigned to replicas
- max_temperature - maximal temperature value to be assigned to replicas (a geometric progression is used for temperature assignment)
- exchange_mpi - whether the exchange step should use the MPI interface. Options: True or False

To run this example, all you need to do is specify the path to the sander executable on your laptop. To do that, please add the amber_path parameter under remd.input. For example:

```
"amber_path": "/home/octocat/amber/amber14/bin/sander"
```

To get notified about important events during the simulation, please run in your terminal:

```
export RADICAL_REPEX_VERBOSE=info
```

Now you can run this simulation with:

```
repex-amber --input=t_remd_ala10.json --rconfig=local.json
```
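The min_temperature/max_temperature pair is expanded into one temperature per replica using a geometric progression. The sketch below shows one common way such a ladder is computed; the exact spacing and rounding RepEx applies may differ:

```python
def temperature_ladder(t_min, t_max, n_replicas):
    """Return n_replicas temperatures in geometric progression.

    Consecutive temperatures share a constant ratio, which keeps the
    expected exchange acceptance roughly uniform across neighbor pairs.
    """
    if n_replicas == 1:
        return [float(t_min)]
    ratio = (t_max / t_min) ** (1.0 / (n_replicas - 1))
    return [t_min * ratio ** i for i in range(n_replicas)]
```

For the default t_remd_ala10.json values (300 K to 600 K over 8 replicas), each temperature is about 10.4% above the previous one.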

3.2.2 Verify output

If the simulation finished successfully, the last three lines of the terminal log should be similar to:

```
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Simulation successfully fi
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Please check output files
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Closing session.
```

You should see nine new directories in your current path:
- eight replica_x directories
- one shared_files directory

If you want to check which replicas exchanged configurations during each cycle, cd into the shared_files directory and inspect each of the four pairs_for_exchange_x.dat files. These files record the indexes of the replicas that exchanged configurations during each cycle. If you want to check .mdinfo or .mdout files for some replica, you can find them in the corresponding replica_x directory. The file name format is ala10_remd_i_c.mdinfo, where:
- i is the index of the replica
- c is the current cycle
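The pairs_for_exchange_x.dat files can be post-processed to tally exchanges. The documentation only says these files record replica index pairs, so the parsing below assumes a simple layout of two whitespace-separated indexes per line; check the assumption against your own output files:

```python
from pathlib import Path

def read_exchange_pairs(shared_dir, cycle):
    """Collect the replica index pairs recorded for one exchange cycle.

    Assumes each line of pairs_for_exchange_<cycle>.dat contains two
    whitespace-separated replica indexes (a hypothetical format).
    """
    path = Path(shared_dir) / "pairs_for_exchange_{0}.dat".format(cycle)
    pairs = []
    for line in path.read_text().splitlines():
        fields = line.split()
        if len(fields) == 2:
            pairs.append((int(fields[0]), int(fields[1])))
    return pairs
```

Summing len(read_exchange_pairs("shared_files", c)) over all cycles gives a rough count of accepted exchanges under this assumed format.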


CHAPTER 4

One-dimensional REMD simulations

In addition to T-REMD simulations, RepEx also supports Umbrella Sampling (biasing potentials) and Salt Concentration (ionic strength) one-dimensional REMD simulations with the Amber kernel. In this section we will take a look at an Umbrella Sampling (US-REMD) example.

4.1 US-REMD example using Alanine Dipeptide system with Amber kernel

For this example we will use the Alanine Dipeptide (Ace-Ala-Nme) system. To run this example locally you must have Amber installed on your system. If you don't have Amber installed, please download and install it following the instructions on the Amber website. This guide assumes that you have already run the example in the getting-started section and are currently in the amber directory; if not, please cd into this directory from the repex root directory:

```
cd examples/amber
```

Amongst other things, the following are present in this directory:
- us_remd_inputs - input files for US-REMD simulations
- us_remd_ace_ala_nme.json - REMD input file for the Umbrella Sampling REMD example using the Alanine Dipeptide system
- local.json - resource configuration file to run on a local system (your laptop)

Run locally

To run this example locally you need to make appropriate changes to the local.json resource configuration file. We assume that you have already done this in the getting-started section. Next you need to verify that the parameters specified in the us_remd_ace_ala_nme.json REMD input file satisfy your requirements. By default the us_remd_ace_ala_nme.json file looks like this:

```json
{
    "remd.input": {
        "re_pattern": "S",
        "exchange": "US-REMD",
        "number_of_cycles": "4",
        "number_of_replicas": "8",
        "input_folder": "us_remd_inputs",
        "input_file_basename": "ace_ala_nme_remd",
        "amber_input": "ace_ala_nme.mdin",
        "amber_parameters": "ace_ala_nme.parm7",
        "amber_coordinates_folder": "ace_ala_nme_coors",
        "same_coordinates": "True",
        "us_template": "ace_ala_nme_us.rst",
        "replica_mpi": "False",
        "replica_cores": "1",
        "us_start_param": "120",
        "us_end_param": "160",
        "init_temperature": "300.0",
        "steps_per_cycle": "2000",
        "exchange_mpi": "False",
        "download_mdinfo": "True",
        "download_mdout": "True"
    }
}
```

In comparison with the general REMD input file format discussed in the getting-started section, this input file contains some additional parameters:
- same_coordinates - whether each replica should use an individual coordinates file. Options: True or False. If True, coordinate files for each replica must be provided in amber_coordinates_folder. The coordinates file format is filename.inpcrd.x.y, where filename can be any valid python string, inpcrd is the required file extension, x is the index of the replica in the first dimension, and y is the index of the replica in the second dimension. For one-dimensional REMD, y = 0 must be provided
- us_template - name of the restraints template file
- us_start_param - starting value of the Umbrella interval
- us_end_param - ending value of the Umbrella interval
- init_temperature - initial temperature to use
- exchange_mpi - whether the exchange step should use the MPI interface. Options: True or False

To run this example, all you need to do is specify the path to the sander executable on your laptop. To do that, please add the amber_path parameter under remd.input. For example:

```
"amber_path": "/home/octocat/amber/amber14/bin/sander"
```

To get notified about important events during the simulation, please run in your terminal:

```
export RADICAL_REPEX_VERBOSE=info
```

Now you can run this simulation with:

```
repex-amber --input=us_remd_ace_ala_nme.json --rconfig=local.json
```

Verify output

If the simulation finished successfully, the last three lines of the terminal log should be similar to:

```
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Simulation successfully fi
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Please check output files
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Closing session.
```

You should see nine new directories in your current path:
- eight replica_x directories
- one shared_files directory

If you want to check which replicas exchanged configurations during each cycle, cd into the shared_files directory and inspect each of the four pairs_for_exchange_x.dat files. These files record the indexes of the replicas that exchanged configurations during each cycle. If you want to check .mdinfo or .mdout files for some replica, you can find them in the corresponding replica_x directory. The file name format is ace_ala_nme_remd_i_c.mdinfo, where:
- i is the index of the replica
- c is the current cycle
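The us_start_param/us_end_param pair (120 and 160 in the example above) spans the Umbrella restraint interval across the replicas. How RepEx distributes the restraint centers is not spelled out here; an even spacing over the interval is one plausible sketch:

```python
def umbrella_windows(us_start, us_end, n_replicas):
    """Evenly spaced restraint centers covering [us_start, us_end].

    This is an assumed mapping for illustration; RepEx's actual
    window placement may differ.
    """
    if n_replicas == 1:
        return [float(us_start)]
    step = (us_end - us_start) / (n_replicas - 1)
    return [us_start + i * step for i in range(n_replicas)]
```

For the example's 8 replicas this would place centers every 40/7 (about 5.7) units between 120 and 160.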


CHAPTER 5

Multi-dimensional REMD simulations

In addition to one-dimensional REMD simulations, RepEx also supports multi-dimensional REMD simulations. With the Amber kernel, two three-dimensional use cases are currently supported:
- TSU-REMD, with one Temperature, one Salt Concentration, and one Umbrella restraint dimension
- TUU-REMD, with one Temperature dimension and two Umbrella restraint dimensions

5.1 TUU-REMD example (alanine dipeptide) with Amber kernel

For this example we will use the Alanine Dipeptide (Ace-Ala-Nme) system. To run this example locally you must have Amber installed on your system. If you don't have Amber installed, please download and install it following the instructions on the Amber website. This guide assumes that you have already run the example in the getting-started section and are currently in the amber directory; if not, please cd into this directory from the repex root directory:

```
cd examples/amber
```

Amongst other things, the following are present in this directory:
- tuu_remd_inputs - input files for TUU-REMD simulations
- tuu_remd_ace_ala_nme.json - REMD input file for the TUU-REMD use case using the Alanine Dipeptide system
- local.json - resource configuration file to run on a local system (your laptop)

Run locally

To run this example locally you need to make appropriate changes to the local.json resource configuration file. We assume that you have already done this in the getting-started section. Next you need to verify that the parameters specified in the tuu_remd_ace_ala_nme.json REMD input file satisfy your requirements. By default the tuu_remd_ace_ala_nme.json file looks like this:

```json
{
    "remd.input": {
        "re_pattern": "S",
        "exchange": "TUU-REMD",
        "number_of_cycles": "4",
        "input_folder": "tuu_remd_inputs",
        "input_file_basename": "ace_ala_nme_remd",
        "amber_input": "ace_ala_nme.mdin",
        "amber_parameters": "ace_ala_nme.parm7",
        "amber_coordinates_folder": "ace_ala_nme_coors",
        "us_template": "ace_ala_nme_us.rst",
        "replica_mpi": "False",
        "replica_cores": "1",
        "steps_per_cycle": "6000",
        "dim.input": {
            "umbrella_sampling_1": {
                "number_of_replicas": "2",
                "us_start_param": "0",
                "us_end_param": "360"
            },
            "temperature_2": {
                "number_of_replicas": "2",
                "min_temperature": "300",
                "max_temperature": "600"
            },
            "umbrella_sampling_3": {
                "number_of_replicas": "2",
                "us_start_param": "0",
                "us_end_param": "360"
            }
        }
    }
}
```

In comparison to the REMD simulation input files used previously, this file has the following additional parameters:
- dim.input - under this key the parameters and names of the individual dimensions must be specified for all multi-dimensional REMD simulations
- umbrella_sampling_1 - indicates that the first dimension is an Umbrella potential
- temperature_2 - indicates that the second dimension is Temperature
- umbrella_sampling_3 - indicates that the third dimension is an Umbrella potential
- number_of_replicas - indicates the number of replicas in this dimension

To run this example, all you need to do is specify the path to the sander executable on your laptop. To do that, please add the amber_path parameter under remd.input. For example:

```
"amber_path": "/home/octocat/amber/amber14/bin/sander"
```

To get notified about important events during the simulation, please run in your terminal:

```
export RADICAL_REPEX_VERBOSE=info
```

Now you can run this simulation with:

```
repex-amber --input=tuu_remd_ace_ala_nme.json --rconfig=local.json
```

Verify output

If the simulation finished successfully, the last three lines of the terminal log should be similar to:

```
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Simulation successfully fi
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Please check output files
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Closing session.
```

You should see nine new directories in your current path:
- eight replica_x directories
- one shared_files directory

If you want to check which replicas exchanged configurations during each cycle, cd into the shared_files directory and inspect each of the four pairs_for_exchange_x.dat files. These files record the indexes of the replicas that exchanged configurations during each cycle. If you want to check .mdinfo or .mdout files for some replica, you can find them in the corresponding replica_x directory. The file name format is ace_ala_nme_remd_i_c.mdinfo, where:
- i is the index of the replica
- c is the current cycle
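With two replicas in each of the three dimensions, the TUU example runs 2 x 2 x 2 = 8 replicas, which matches the eight replica_x directories reported above. The total replica count is simply the product of number_of_replicas over the per-dimension entries; a small sketch (the key names follow the example's dim.input section, and the string-typed values follow the JSON convention used by the input files):

```python
def total_replicas(dim_input):
    """Product of per-dimension replica counts.

    dim_input: dict of dimension name -> parameter dict, where
    "number_of_replicas" holds a string, as in the input files.
    """
    total = 1
    for params in dim_input.values():
        total *= int(params["number_of_replicas"])
    return total
```

This is a convenient sanity check before submitting: the number of allocated cores should be chosen with this total in mind.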


CHAPTER 6

Replica Exchange Patterns

One of the distinctive features RepEx provides to its users is the ability to select a Replica Exchange Pattern. Replica Exchange Patterns differ in the synchronization modes between MD and Exchange steps. We define two types of Replica Exchange Patterns:

1. Synchronous Replica Exchange Pattern
2. Asynchronous Replica Exchange Pattern

6.1 Synchronous Replica Exchange Pattern

The Synchronous Pattern corresponds to the conventional way of running REMD simulations: all replicas propagate MD for a fixed period of simulation time (e.g. 2 ps), while the execution time of the replicas is not fixed. All replicas must finish the MD step before the Exchange step takes place; when all replicas have finished the MD step, the Exchange step is performed.

6.2 Asynchronous Replica Exchange Pattern

Contrary to the Synchronous Pattern, the Asynchronous Pattern has no global synchronization barrier: while some replicas are performing an MD step, others might be performing an Exchange step amongst a subset of replicas. In the current implementation of the Asynchronous Pattern, the MD step is defined as a fixed period of simulation time (e.g. 2 ps), but the execution time for the MD step is fixed as well (e.g. 30 secs). When the predefined execution time elapses, the Exchange step is performed amongst the replicas that have finished their MD step. In this pattern there is no synchronization between the MD and Exchange steps, hence the pattern is referred to as asynchronous.
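The defining property of the Synchronous Pattern, the global barrier between the MD and Exchange phases, can be sketched in a few lines. This is a toy model using plain Python threads, not RepEx's Pilot-based execution; md_step and exchange_step are placeholder callables:

```python
from concurrent.futures import ThreadPoolExecutor

def synchronous_remd(states, md_step, exchange_step, n_cycles):
    """Synchronous Pattern: a global barrier separates MD and Exchange.

    md_step(state) advances one replica; exchange_step(states) swaps
    configurations once every replica has finished its MD step.
    """
    with ThreadPoolExecutor() as pool:
        for _ in range(n_cycles):
            # pool.map yields results only after ALL replicas finish MD:
            # this list() call is the global synchronization barrier.
            states = list(pool.map(md_step, states))
            states = exchange_step(states)
    return states
```

An asynchronous variant would instead collect whichever replicas finished within a fixed wall-clock window and perform the exchange only among that subset, with no barrier over the full set.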


CHAPTER 7

Flexible execution modes

A REMD simulation corresponding to either of the two Replica Exchange Patterns can be executed in multiple ways. Execution Strategies specify the simulation execution details, in particular the resource management details. These strategies differ in:

1. MD simulation time definition: a fixed period of simulation time (e.g. 2 ps) for all replicas, or a fixed period of wall clock time (e.g. 2 minutes) for all replicas, meaning that after this time interval elapses all running replicas are stopped, regardless of how much simulation time was obtained
2. task submission modes (bulk submission vs. sequential submission)
3. task execution modes on the remote HPC system (order and level of concurrency)
4. number of Pilots used for a given simulation
5. number of target resources used concurrently for a given simulation

Next we introduce three Execution Strategies that can be used with the Synchronous Replica Exchange Pattern.

7.1 Execution Strategy S1

Synchronous Replica Exchange simulations may be executed using Execution Strategy S1. This strategy differs from a conventional one in the number of cores allocated on the target resource (bullet point 3). In this case the number of cores is 1/2 of the number of replicas; as a result, only half of the replicas can propagate an MD or Exchange step concurrently. In this strategy the MD simulation time is defined as a fixed period of simulation time (e.g. 2 ps) for all replicas, meaning that replicas which finish their simulation earlier have to wait for the other replicas before the exchange step can take place. This strategy demonstrates the advantage of a task-level-parallelism based approach: many MD packages lack the capability to use fewer cores than replicas.

7.2 Execution Strategy S2

Execution Strategy S2 differs from Strategy S1 in the MD simulation time definition. Here MD is specified as a fixed period of wall clock time (e.g. 2 minutes) for all replicas; replicas which do not finish their MD step within this time interval are stopped. In addition, Strategy S2 differs from Strategy S1 in the number of allocated cores: here the number of cores equals the number of replicas.

7.3 Execution Strategy S3

The last Execution Strategy we discuss in this section is Execution Strategy S3. In this strategy all replicas are run concurrently for a presumably indefinite period. At predefined intervals, exchanges are performed amongst all (or a subset) of the replicas on the resource, using data from checkpoint files. Any replicas that accept the exchange are reset and then restarted. Since only a small fraction of replicas (10-30%) will actually accept the exchange, the amount of time discarded by the exchange is assumed to be minimal. The differences between this strategy and a conventional one can be attributed to the bullet points above.
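The core-allocation rules stated for Strategies S1 and S2 can be summarized in a few lines (S3 is left out, since the text does not pin its allocation down; the function name is illustrative):

```python
def cores_for_strategy(strategy, n_replicas):
    """Cores implied by an Execution Strategy for a given replica count.

    S1: half as many cores as replicas, so only half the replicas
        can run concurrently.
    S2: one core per replica.
    """
    if strategy == "S1":
        return n_replicas // 2
    if strategy == "S2":
        return n_replicas
    raise ValueError("unknown strategy: %s" % strategy)
```

For example, a 16-replica simulation needs only 8 cores under S1, at the cost of replicas queueing for the available cores.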

CHAPTER 8

Tutorial

In this tutorial we will run several 1D-REMD and 3D-REMD examples on the Stampede and Archer supercomputers. This guide assumes that you have already installed RepEx and cloned the RepEx repository during the installation. If you haven't installed RepEx, please follow the steps in the Installation section of this user guide. If you can't find the location of the radical.repex directory, please clone the repository again:

```
git clone
```

and cd into the Amber examples directory where the input files reside:

```
cd radical.repex/examples/amber
```

To run the examples in this tutorial you will need to modify two resource configuration files: stampede.json and archer.json. Once you have these two files properly configured you can use them for all examples of this tutorial.

8.1 Running on Stampede

To run on Stampede you need to make appropriate changes to the stampede.json resource configuration file. Open this file in your favorite text editor (vim in this case):

```
vim stampede.json
```

By default this file looks like this:

```json
{
    "target": {
        "resource": "xsede.stampede",
        "username": "octocat",
        "project": "bigthings",
        "runtime": "30",
        "cleanup": "False",
        "cores": "16"
    }
}
```

You need to modify two parameters in this file:
- username - your username on Stampede
- project - your allocation on Stampede

8.2 Running on Archer

To run on Archer you need to make appropriate changes to the archer.json resource configuration file. Open this file in your favorite text editor (vim in this case):

```
vim archer.json
```

By default this file looks like this:

```json
{
    "target": {
        "resource": "epsrc.archer",
        "username": "octocat",
        "project": "bigthings",
        "runtime": "40",
        "cleanup": "False",
        "cores": "24"
    }
}
```

You need to modify two parameters in this file:
- username - your username on Archer
- project - your allocation on Archer

At this point you are done with the resource configuration files and are ready to run simulations.

8.3 T-REMD example (peptide ala10) with Amber kernel

First, we will take a look at the Temperature-Exchange REMD example using the peptide ala10 system with the Amber simulation kernel. You need to verify that the parameters specified in the t_remd_ala10.json REMD input file satisfy your requirements. By default the t_remd_ala10.json file looks like this:

```json
{
    "remd.input": {
        "re_pattern": "S",
        "exchange": "T-REMD",
        "number_of_cycles": "4",
        "number_of_replicas": "8",
        "input_folder": "t_remd_inputs",
        "input_file_basename": "ala10_remd",
        "amber_input": "ala10.mdin",
        "amber_parameters": "ala10.prmtop",
        "amber_coordinates": "ala10_minimized.inpcrd",
        "replica_mpi": "False",
        "replica_cores": "1",
        "exchange_mpi": "False",
        "min_temperature": "300",
        "max_temperature": "600",
        "steps_per_cycle": "4000",
        "download_mdinfo": "True",
        "download_mdout": "True"
    }
}
```

In comparison with the general REMD input file format discussed above, this input file contains some additional parameters:
- min_temperature - minimal temperature value to be assigned to replicas
- max_temperature - maximal temperature value to be assigned to replicas (a geometric progression is used for temperature assignment)
- exchange_mpi - whether the exchange step should use the MPI interface. Options: True or False

Since we are using a supercomputer to run the REMD simulation, we increase the number of replicas. Please set "number_of_replicas" to "16". To get notified about important events during the simulation, please run in your terminal:

```
export RADICAL_REPEX_VERBOSE=info
```

Now you are ready to run this simulation. If you want to run on Stampede, run in your terminal:

```
repex-amber --input=t_remd_ala10.json --rconfig=stampede.json
```

If you want to run on Archer, run in your terminal:

```
repex-amber --input=t_remd_ala10.json --rconfig=archer.json
```

Verify output

If the simulation finished successfully, the last three lines of the terminal log should be similar to:

```
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Simulation successfully fi
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Please check output files
2015:10:11 18:49: MainThread radical.repex.amber : [INFO ] Closing session.
```

You should see 17 new directories in your current path:
- sixteen replica_x directories
- one shared_files directory

If you want to check which replicas exchanged configurations during each cycle, cd into the shared_files directory and inspect each of the four pairs_for_exchange_x.dat files. These files record the indexes of the replicas that exchanged configurations during each cycle. If you want to check .mdinfo or .mdout files for some replica, you can find them in the corresponding replica_x directory. The file name format is ala10_remd_i_c.mdinfo, where:
- i is the index of the replica
- c is the current cycle

Simulation output can be verified similarly for all other examples in this tutorial.
8.4 US-REMD example using Alanine Dipeptide system with Amber kernel

For this example we will use the Alanine Dipeptide (Ace-Ala-Nme) system. The following are present in the examples/amber directory:
- us_remd_inputs - input files for US-REMD simulations
- us_remd_ace_ala_nme.json - REMD input file for the Umbrella Sampling REMD example using the Alanine Dipeptide system

To run this example you need to verify that the parameters specified in the us_remd_ace_ala_nme.json REMD input file satisfy your requirements. By default the us_remd_ace_ala_nme.json file looks like this:

```json
{
    "remd.input": {
        "re_pattern": "S",
        "exchange": "US-REMD",
        "number_of_cycles": "4",
        "number_of_replicas": "8",
        "input_folder": "us_remd_inputs",
        "input_file_basename": "ace_ala_nme_remd",
        "amber_input": "ace_ala_nme.mdin",
        "amber_parameters": "ace_ala_nme.parm7",
        "amber_coordinates_folder": "ace_ala_nme_coors",
        "same_coordinates": "True",
        "us_template": "ace_ala_nme_us.rst",
        "replica_mpi": "False",
        "replica_cores": "1",
        "us_start_param": "120",
        "us_end_param": "160",
        "init_temperature": "300.0",
        "steps_per_cycle": "2000",
        "exchange_mpi": "False",
        "download_mdinfo": "True",
        "download_mdout": "True"
    }
}
```

In comparison with the general REMD input file format discussed in the getting-started section, this input file contains some additional parameters:
- same_coordinates - whether each replica should use an individual coordinates file. Options: True or False. If True, coordinate files for each replica must be provided in amber_coordinates_folder. The coordinates file format is filename.inpcrd.x.y, where filename can be any valid python string, inpcrd is the required file extension, x is the index of the replica in the first dimension, and y is the index of the replica in the second dimension. For one-dimensional REMD, y = 0 must be provided
- us_template - name of the restraints template file
- us_start_param - starting value of the Umbrella interval
- us_end_param - ending value of the Umbrella interval
- init_temperature - initial temperature to use
- exchange_mpi - whether the exchange step should use the MPI interface. Options: True or False

Since we are using a supercomputer to run the REMD simulation, we increase the number of replicas. Please set "number_of_replicas" to "16". Now you are ready to run this simulation.
If you want to run on Stampede, run in a terminal:

    repex-amber --input=us_remd_ace_ala_nme.json --rconfig=stampede.json

If you want to run on Archer, run in a terminal:

    repex-amber --input=us_remd_ace_ala_nme.json --rconfig=archer.json

Output verification can be done in the same way as for the T-REMD example.
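Before submitting a run to the cluster, it can be worth sanity-checking the input file locally. The helper below is hypothetical (it is not part of RepEx); the key names are taken from the example file above:

```python
import json

# Hypothetical pre-flight check, not part of RepEx: load the REMD input
# file and verify that the parameters shown above are present.
REQUIRED = ("re_pattern", "exchange", "number_of_cycles", "number_of_replicas",
            "input_folder", "amber_input", "us_start_param", "us_end_param")

def check_remd_input(path):
    with open(path) as f:
        cfg = json.load(f)                      # raises on malformed JSON
    params = cfg["remd.input"]
    missing = [key for key in REQUIRED if key not in params]
    if missing:
        raise ValueError("missing parameters: " + ", ".join(missing))
    return params
```

For example, `check_remd_input("us_remd_ace_ala_nme.json")` would fail loudly on a malformed or incomplete file before any cluster time is spent.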

8.5 TUU-REMD example (alanine dipeptide) with Amber kernel

For this example we will also use the Alanine Dipeptide (Ace-Ala-Nme) system. The examples/amber directory contains:

- tuu_remd_inputs - input files for TUU-REMD simulations
- tuu_remd_ace_ala_nme.json - REMD input file for the TUU-REMD use case using the Alanine Dipeptide system

To run this example, verify that the parameters specified in the tuu_remd_ace_ala_nme.json REMD input file satisfy your requirements. By default the tuu_remd_ace_ala_nme.json file looks like this:

    {
        "input.md": {
            "re_pattern": "S",
            "exchange": "TUU-REMD",
            "number_of_cycles": "4",
            "input_folder": "tuu_remd_inputs",
            "input_file_basename": "ace_ala_nme_remd",
            "amber_input": "ace_ala_nme.mdin",
            "amber_parameters": "ace_ala_nme.parm7",
            "amber_coordinates_folder": "ace_ala_nme_coors",
            "us_template": "ace_ala_nme_us.rst",
            "replica_mpi": "False",
            "replica_cores": "1",
            "steps_per_cycle": "6000"
        },
        "input.dim": {
            "umbrella_sampling_1": {
                "number_of_replicas": "4",
                "us_start_param": "0",
                "us_end_param": "360"
            },
            "temperature_2": {
                "number_of_replicas": "4",
                "min_temperature": "300",
                "max_temperature": "600"
            },
            "umbrella_sampling_3": {
                "number_of_replicas": "4",
                "us_start_param": "0",
                "us_end_param": "360"
            }
        }
    }

In comparison to the general REMD simulation input file, this file has the following additional parameters:

input.dim - under this key the names and parameters of the individual dimensions must be specified for all multi-dimensional REMD simulations

umbrella_sampling_1 - indicates that the first dimension is an umbrella potential

temperature_2 - indicates that the second dimension is temperature

umbrella_sampling_3 - indicates that the third dimension is an umbrella potential

number_of_replicas - indicates the number of replicas in this dimension

Now you are ready to run this simulation. If you want to run on Stampede, run in a terminal:

    repex-amber --input=tuu_remd_ace_ala_nme.json --rconfig=stampede.json

If you want to run on Archer, run in a terminal:

    repex-amber --input=tuu_remd_ace_ala_nme.json --rconfig=archer.json

Output verification can be done in the same way as for the T-REMD example.
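Each of the three dimensions above holds 4 replicas, so a TUU-REMD run with this input file uses 4 x 4 x 4 = 64 replicas in total. The sketch below illustrates that count, together with one possible mapping from a flat replica index to per-dimension indices; this mapping is illustrative only and may not match RepEx's internal bookkeeping:

```python
import math

# Total replica count is the product of the per-dimension
# "number_of_replicas" values: 4 * 4 * 4 = 64 for the example above.
shape = (4, 4, 4)
total = math.prod(shape)

# Illustrative only -- one way to recover per-dimension indices
# (d1, d2, d3) from a flat replica index.
def to_multi_index(flat, shape):
    idx = []
    for size in reversed(shape):
        flat, r = divmod(flat, size)
        idx.append(r)
    return tuple(reversed(idx))

print(total)                       # -> 64
print(to_multi_index(63, shape))   # -> (3, 3, 3)
```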

CHAPTER 9 Frequently Asked Questions

9.1 Where are .mdout files?

todo

9.2 Where are .mdinfo files?

By default, Amber .mdinfo files reside in the respective replica directories on the target cluster.

9.3 How can I obtain information about accepted exchanges?

todo

9.4 How can I obtain information about attempted exchanges?

todo
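The tutorial input files set "download_mdinfo" and "download_mdout" to "True", so the corresponding files should also be fetched back to the machine where repex-amber was launched. The helper below is hypothetical and assumes only that the files end up somewhere under a local run directory; adjust the search root to your own setup:

```python
import glob
import os

# Hypothetical helper, not part of RepEx: recursively collect the .mdout
# and .mdinfo files a run has downloaded under run_dir. The directory
# layout is an assumption -- point run_dir at your run's output location.
def collect_outputs(run_dir):
    mdout = glob.glob(os.path.join(run_dir, "**", "*.mdout"), recursive=True)
    mdinfo = glob.glob(os.path.join(run_dir, "**", "*.mdinfo"), recursive=True)
    return sorted(mdout), sorted(mdinfo)
```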


CHAPTER 10 Indices and tables

- genindex
- modindex
- search


More information

Secure Web Appliance. Basic Usage Guide

Secure Web Appliance. Basic Usage Guide Secure Web Appliance Basic Usage Guide Table of Contents 1. Introduction... 1 1.1. About CYAN Secure Web Appliance... 1 1.2. About this Manual... 1 1.2.1. Document Conventions... 1 2. Description of the

More information

OBTAINING AN ACCOUNT:

OBTAINING AN ACCOUNT: HPC Usage Policies The IIA High Performance Computing (HPC) System is managed by the Computer Management Committee. The User Policies here were developed by the Committee. The user policies below aim to

More information

Fault-Tolerant Parallel Analysis of Millisecond-scale Molecular Dynamics Trajectories. Tiankai Tu D. E. Shaw Research

Fault-Tolerant Parallel Analysis of Millisecond-scale Molecular Dynamics Trajectories. Tiankai Tu D. E. Shaw Research Fault-Tolerant Parallel Analysis of Millisecond-scale Molecular Dynamics Trajectories Tiankai Tu D. E. Shaw Research Anton: A Special-Purpose Parallel Machine for MD Simulations 2 Routine Data Analysis

More information

Mutation Testing in Patterns Documentation

Mutation Testing in Patterns Documentation Mutation Testing in Patterns Documentation Release 1.0 Alexander Todorov Aug 18, 2016 Contents 1 Make sure your tools work 3 2 Make sure your tests work 5 3 Divide and conquer 7 4 Fail fast 9 5 Python:

More information

VPS SETUP: What is a VPS? A VPS is a cloud server, running on a virtual machine. You can t run a masternode on your computer itself.

VPS SETUP: What is a VPS? A VPS is a cloud server, running on a virtual machine. You can t run a masternode on your computer itself. Our guide makes it easy to set up your own masternode! BEFORE YOU BEGIN, YOU WILL NEED: 1. 1,000 SUPPO s 2. The latest SuppoCoin wallet, which can always be found here: https://www.suppocoin.io 3. Two

More information

Remote & Collaborative Visualization. Texas Advanced Computing Center

Remote & Collaborative Visualization. Texas Advanced Computing Center Remote & Collaborative Visualization Texas Advanced Computing Center TACC Remote Visualization Systems Longhorn NSF XD Dell Visualization Cluster 256 nodes, each 8 cores, 48 GB (or 144 GB) memory, 2 NVIDIA

More information

Océ Engineering Exec. Doc Exec Pro and Electronic Job Ticket for the Web

Océ Engineering Exec. Doc Exec Pro and Electronic Job Ticket for the Web Océ Engineering Exec Doc Exec Pro and Electronic Job Ticket for the Web Océ-Technologies B.V. Copyright 2004, Océ-Technologies B.V. Venlo, The Netherlands All rights reserved. No part of this work may

More information

Heckaton. SQL Server's Memory Optimized OLTP Engine

Heckaton. SQL Server's Memory Optimized OLTP Engine Heckaton SQL Server's Memory Optimized OLTP Engine Agenda Introduction to Hekaton Design Consideration High Level Architecture Storage and Indexing Query Processing Transaction Management Transaction Durability

More information

Pegasus. Automate, recover, and debug scientific computations. Rafael Ferreira da Silva.

Pegasus. Automate, recover, and debug scientific computations. Rafael Ferreira da Silva. Pegasus Automate, recover, and debug scientific computations. Rafael Ferreira da Silva http://pegasus.isi.edu Experiment Timeline Scientific Problem Earth Science, Astronomy, Neuroinformatics, Bioinformatics,

More information

VI-CENTER EXTENDED ENTERPRISE EDITION GETTING STARTED GUIDE. Version: 4.5

VI-CENTER EXTENDED ENTERPRISE EDITION GETTING STARTED GUIDE. Version: 4.5 VI-CENTER EXTENDED ENTERPRISE EDITION GETTING STARTED GUIDE This manual provides a quick introduction to Virtual Iron software, and explains how to use Virtual Iron VI-Center to configure and manage virtual

More information