FGI User Guide
Kimmo Mattila / CSC - IT Center for Science

Table of Contents

1. Preparatory steps
   1.1 Grid certificates
       1.1.1 Obtaining a grid certificate from TERENA
       1.1.2 Exporting the certificate from the browser
       1.1.3 Installing the certificate
       1.1.4 Managing certificates in the Scientist's User Interface
   1.2 Joining the fgi.csc.fi Virtual Organization
   1.3 The ARC middleware
       1.3.1 Using the ARC client at CSC
       1.3.2 Installing the ARC client to a local computer
       1.3.3 ARC settings on your local computer
2. Using FGI with ARC middleware
   2.1 Job description files
   2.2 Executing grid jobs with ARC commands
       2.2.1 Creating a proxy-certificate
       2.2.2 Job submission commands
       2.2.3 Running the sample job in the FGI environment
       2.2.4 Keeping the grid job status up to date
   2.3 Using software through runtime environments
   2.4 Running parallel applications in FGI
       2.4.1 Executing threads based parallel software in FGI
       2.4.2 Executing MPI based parallel programs in the FGI environment
   2.5 Using arcrunner to run large job sets in FGI
       2.5.1 Installing arcrunner
       2.5.2 Using arcrunner
       2.5.3 Arcrunner example
   2.6 Using storage elements for data transport in FGI
       2.6.1 Using storage elements with ARC commands
       2.6.2 Using storage elements in grid jobs
3. Grid Monitor

FGI User Guide

The FGI (Finnish Grid Infrastructure) is a distributed grid computing environment that consists of 11 computing clusters located on different university campuses in Finland. CSC administrates the grid usage of these clusters. The FGI service is available to all researchers doing non-profit research at Finnish universities. Researchers can use FGI directly from their personal computers or from the computing servers of CSC.

This guide provides the basic information about taking the FGI service into use and running simple jobs in the FGI grid. The first chapter describes the three mandatory preparatory steps for FGI usage:

1. Obtaining a grid certificate
2. Joining the fgi.csc.fi virtual organisation
3. Installing the ARC middleware client

These preparatory steps are done just once, when the user starts using FGI. The second chapter provides an introduction to the ARC middleware. It introduces the most commonly used ARC commands and shows examples that demonstrate running simple jobs in FGI. The last chapter provides a short introduction to the ARC Grid Monitor, which can be used to monitor the load of the clusters and the progress of grid jobs.

This guide provides a general introduction to the FGI grid environment. More detailed information, as well as tutorials and software specific instructions, can be found on the FGI web site:

1. Preparatory steps

1.1 Grid certificates

FGI, like most middleware based grid environments, uses personal X.509 certificates for user authentication. In this approach users don't need personal user accounts on the clusters they are using. This also means that a CSC user account is not necessary for FGI usage. Certificates are granted by a certification authority (CA), which acts as a trusted third party that checks that the certificate is based on valid identity information.

Finnish academic grid users can use TERENA (Trans-European Research and Education Networking Association) as the certification authority. The TERENA certificate service creates the grid certificate based on the information provided by the HAKA authentication system (typically, HAKA authentication means the user account in the local university network). Normally the grid certificate can be obtained from the TERENA web pages in a few minutes. Researchers from the University of Turku, however, must request a change to their HAKA account before they can use the TERENA service. The grid certificate provided by TERENA is valid for one year. If your certificate breaks or you lose your certificate, you can always request a new certificate from the TERENA service.

1.1.1 Obtaining a grid certificate from TERENA

Please ONLY use your personal computer for obtaining your grid certificate, as your grid certificate will be stored in the browser you are using to obtain it. Here are the step-by-step instructions for obtaining your own certificate:

1. Go to
2. Click Login, select Finland as your country, and after that select your institution.
3. Log in using your HAKA username and password (a HAKA account is created by your home organization, not by CSC; typically this is the user account you use to log in to the local university network).
4. Depending on your institution, you might be asked for permission to forward the information to the TCS portal.
5. Click "My Certificates" (on the left, under certificates).
6. Click "New Certificate", read the Acceptable Use Policy, and if you agree with it, proceed to the next step.
7. You should now be in the "Generate a CSR in the browser" menu. Click: next.
8. You should now have a drop-down menu on the left. If the key length is not already set to 2048 bits, ensure that it is set to 2048 bits.
9. Click: next. Your browser may ask you for your browser security password at this point.
10. Wait until you get the new certificate (2 minutes or less).
11. The last step in this part of the process is to click "Install to keystore" to install the certificate into your browser.

1.1.2 Exporting the certificate from the browser

After obtaining the certificate from TERENA, the certificate is initially stored only in the certificate repository of the web browser that was used for the certificate generation process. To use your certificate for grid jobs you need to export it to a certificate file. The location of the certificate repository and the commands that export the certificate to a file vary between browsers (even between different versions of the same browser). Your browser may contain several certificates, many of which are used to verify other web service providers. Normally you can recognize your personal TERENA certificate by the certificate name, which should contain your name or email address. Below are instructions for exporting the certificate from a few commonly used browsers.

Firefox:
1. Select: Edit -> Preferences (in Linux) or Tools -> Options (in Windows) or Firefox -> Preferences (in Mac)
2. Go to Advanced -> Encryption -> View Certificates
3. Select your certificate and click Backup
4. Save the certificate as "usercert.p12". The browser will ask you for your password, along with an export password. You MUST have a password here; you may not back up the certificate without a password!

Opera:
1. Select Menu -> Settings -> Preferences
2. Go to Advanced -> Security -> Manage Certificates
3. Select your certificate and click Export
4. Choose the PKCS #12 (with private key) file type, and save the certificate as "usercert.p12". The browser will ask you for your password, along with an export password. You MUST have a password here; you may not export the certificate without a password!

Chrome:
1. Open Settings (under the Wrench)
2. Click Show advanced settings
3. Click the Manage certificates button in the HTTPS/SSL section
4. Select the certificate to export
5. Click Export and save the certificate as "usercert.p12"
6. The browser will ask you for your password, along with an export password. You MUST have a password here; you may not export the certificate without a password!

1.1.3 Installing the certificate

Browsers normally store certificates in the PKCS12 format. For the ARC middleware, however, the certificate must be converted to the PEM format. The following commands do this conversion on Linux machines. If you will use the grid tools on a different machine than the one your browser is on, you can transfer the usercert.p12 file to that machine and run the following commands there. It is suggested that you use a secure tool like scp to do this. Optionally, you can use the My Certificates tool, introduced in section 1.1.4, to do the conversion and transport.

The PEM formatted certificate consists of two files: a private key file (userkey.pem) and a certificate file (usercert.pem). The certificate private key is created with the command:

openssl pkcs12 -nocerts -in usercert.p12 -out userkey.pem

When executed, this command will ask for the old and the new key passwords (they can be the same). The user certificate file is created with the command:

openssl pkcs12 -clcerts -nokeys -in usercert.p12 -out usercert.pem

The commands above should have created two files, usercert.pem and userkey.pem. To use the ARC middleware these two files should be moved into a .globus sub-directory under the user's home directory (note the dot as the first character of the directory name). If the .globus directory does not exist, it can be created with the command:

mkdir ~/.globus/

After this, the certificate files can be moved to the .globus directory with the commands:

cp usercert.pem ~/.globus/
cp userkey.pem ~/.globus/

Finally, make sure that the access permissions of the userkey.pem file are set up correctly. The command to ensure this is:

chmod 400 ~/.globus/userkey.pem
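The conversion and installation steps above can also be collected into a single small script. This is only a convenience sketch of the commands already shown, assuming the exported usercert.p12 is in the current directory:

#!/bin/sh
# Convert the PKCS12 certificate exported from the browser into the
# PEM files expected by the ARC client, and install them into ~/.globus.
set -e
mkdir -p ~/.globus
# Extract the private key (asks for the export password and a new key password)
openssl pkcs12 -nocerts -in usercert.p12 -out ~/.globus/userkey.pem
# Extract the certificate itself
openssl pkcs12 -clcerts -nokeys -in usercert.p12 -out ~/.globus/usercert.pem
# Grid tools require that the private key is readable only by its owner
chmod 400 ~/.globus/userkey.pem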

1.1.4 Managing certificates in the Scientist's User Interface

CSC's Scientist's User Interface (SUI) service contains a My Certificates tool that can be used to manage X.509 certificates. The My Certificates tool provides a machine independent certificate repository that can be used for backing up certificates and copying them from one machine to another. My Certificates can also do conversions between different certificate formats. The My Certificates tool can be found at

When you use this service for the first time you need to set a password that this certificate repository will use. This is done by right-clicking the repository and selecting Security Settings from the pop-up menu. This repository password is not technically linked to other passwords such as the CSC, SUI or certificate passwords.

Once the repository password has been defined, certificates can be imported to the repository. This is done by right-clicking the empty certificate menu and selecting Upload. The My Certificates tool can read in certificates in PEM and PKCS12 formats. For example, to import the usercert.p12 file created in chapter 1.1.2, the Select file format setting must be changed to PKCS12. The import process is then started by clicking OK. Clicking on the Upload button opens a file browser that can be used to select the certificate file (usercert.p12). When the certificate file is imported, SUI will ask for the certificate password. This is the password that was assigned to the certificate in the TERENA certificate portal (not the certificate repository password or the CSC password). After providing the password, check the certificate information and click "Save". The certificate should then be visible as one row in the Stored Certificates table.

Once the certificate is stored in the My Certificates repository, it can easily be exported to the computer you are currently working on by logging in to the service and using the Download command. You can also use this command to store the certificate in PEM format in the .globus directory of the CSC server Hippu. This is necessary to be able to submit grid jobs from Hippu.

1. Select the certificate from the list.
2. Right-click and select: Download
3. Then select:
   Download Destination: Globus directory ($HOME/.globus)
   File format: PEM

After entering the certificate password for the PEM certificate files, they will be stored in the .globus directory of your CSC home directory. This is the default location for the certificates that grid tools at CSC use.

1.2 Joining the fgi.csc.fi Virtual Organization

Use of the FGI computing environment is controlled through Virtual Organizations (VO). A VO refers to a group of users that utilise some grid resource according to a set of resource sharing rules and conditions. In turn, the grid resource providers use VOs to control the access and usage of the grid resources. At a practical level, membership in a VO grants permission to use the grid resource. Virtual Organizations are typically linked to a distinct set of grid resources that are provided to some specific branch of science and/or geographical region.

Currently all FGI usage is controlled through one VO, called fgi.csc.fi. In the future there may also be other, more focused VOs in FGI. The fgi.csc.fi VO is open to all academic university researchers working in Finland. To join the fgi.csc.fi VO, go to the FGI Virtual Organization web page:

This web page authenticates you using the TERENA certificate installed in your browser. Therefore it is preferred that you use the same machine and browser for obtaining the certificate and for joining the VO. On the FGI VO web page, fill in the form with your personal information, then read the Acceptable Use Policy and accept it. After that you will receive an email confirmation request in your mailbox. Please follow the instructions in that email. After finishing the VO membership application process it will take some time before the VO membership is activated. Normally the membership will be activated within one working day after the application process is finished.

1.3 The ARC middleware

In grid computing users don't log in directly to the computing clusters they are using. Instead, the

computing tasks are submitted via a job broker tool called middleware. FGI uses the ARC (Advanced Resource Connector) middleware, developed by the NorduGrid community. There are also several other commonly used grid middlewares, such as gLite and UNICORE. Currently it is not possible to use these middleware packages with FGI.

The ARC middleware consists of two parts: the ARC server, which runs on the computing servers of FGI, and the ARC client, which is used to send computing tasks to the ARC servers. To send jobs to FGI you must either install the ARC client on your local computer or use the ARC client available at CSC on the Hippu (hippu.csc.fi) server.

1.3.1 Using the ARC client at CSC

The ARC client is available at CSC on the Taito (taito.csc.fi) and Hippu (hippu.csc.fi) servers. To be able to use the ARC client on Taito or Hippu you must create a sub-directory called .globus in your CSC home directory (note that the first character of the directory name is a dot) and copy your usercert.pem and userkey.pem certificate files to this directory. As Hippu and Taito have separate home directories, you must create this directory on both machines if you plan to use FGI from both of these servers. You can copy the files from your own computer with, for example, the scp command, or you can use the My Certificates tool of the Scientist's User Interface (see chapter 1.1.4). Note also that the access permissions of the userkey.pem file should be set with the command:

chmod 400 ~/.globus/userkey.pem

Once the certificate and client.conf files are correctly installed, you can execute ARC commands as discussed in section 2 of this guide.

1.3.2 Installing the ARC client to a local computer

The ARC middleware client can be installed on Linux, Mac and Windows machines. There are several alternative ways to do the installation. In many Linux distributions you can install the repository version of the client using tools like yum or apt. This allows you to always use the latest release of the client. More information about the repository installation can be found on the ARC repository page:

The repository installation requires that you have administrator privileges on your computer (root or sudo access). If you don't have administrator privileges, you can obtain a pre-compiled version of ARC from the address:

At this URL, first select the correct operating system, version and processor architecture and press Download. For Linux systems the installation data set is a packed file that can be unpacked with the command:

tar zxvf installation_file

For example:

tar zxvf nordugrid-arc-standalone-el6.x86_64.tgz

The unpacked installation directory includes pre-compiled ARC commands and two set-up scripts: setup.sh for bash and sh command shells and setup.csh for csh and tcsh command shells. Whenever you log into your system and wish to start using ARC commands, you must first go to the ARC installation directory and execute one of these setup scripts (depending on the command shell you

are using). For example, in the case of the bash command shell:

cd nordugrid-arc-standalone
source setup.sh

Alternatively, you can set this up in your login script. Simply add the following lines to your .bashrc file:

location=`pwd`
cd /path/to/arc_installation/nordugrid-arc-standalone
source setup.sh
cd $location

1.3.3 ARC settings on your local computer

In addition to the ARC client installation you must make sure that a valid grid certificate is available in the .globus directory located in your home directory (see chapter 1.1.3). You can easily check this by running the command:

ls -l ~/.globus

The file listing printed by the command above should include the files usercert.pem and userkey.pem. In addition to the certificates you will also need an ARC configuration file called client.conf. This file defines a set of parameters that are needed to use the FGI clusters. The file can be downloaded from CSC and it should be placed in a sub-directory of your home directory called .arc (note: the dot is the first character of the directory name). You can do the installation with the commands:

cd ~
mkdir .arc
cd .arc
wget f
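To wrap up the preparatory steps, the following sketch checks in one go that the files described in this chapter are in place. It only restates the checks from this section, so adapt the paths if your installation differs:

#!/bin/bash
# Sanity check of the local ARC setup described in chapter 1.
# File locations are the defaults used in this guide.
for f in ~/.globus/usercert.pem ~/.globus/userkey.pem ~/.arc/client.conf; do
    if [ -f "$f" ]; then
        echo "OK:      $f"
    else
        echo "MISSING: $f"
    fi
done
# The private key must be readable only by its owner (stat -c works on Linux)
perms=$(stat -c %a ~/.globus/userkey.pem 2>/dev/null)
if [ "$perms" != "400" ] && [ "$perms" != "600" ]; then
    echo "WARNING: fix with: chmod 400 ~/.globus/userkey.pem"
fi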

2. Using FGI with ARC middleware

The FGI grid computing environment is used via the Advanced Resource Connector (ARC) middleware, which is produced by the NorduGrid community. All tasks and commands are submitted via the middleware, and the user never needs to log directly into the actual computing clusters. For this reason FGI can't be used to run programs interactively. Instead, the commands to be executed are collected into command files that are submitted to FGI using ARC commands and job description files. In this section we provide an introduction to the xrsl (Extended Resource Specification Language) job description file format and to the most frequently used ARC middleware commands. More detailed information about the ARC middleware and xrsl files can be found in the manuals provided by NorduGrid:

2.1 Job description files

Submitting computing tasks to FGI resembles submitting batch jobs to normal computing clusters. However, in the case of batch jobs the user just defines the commands to be executed, while in the case of grid usage the user must also define the (input) files that need to be transported with the job to the remote cluster and the resulting (output) files that are returned when the job finishes. In the case of the ARC middleware, grid jobs are defined using two files:

1. A job description file, which defines the resources needed (for example the required computing time, memory and number of processors) and the files that will be copied to and from the remote clusters. ARC can use the xrsl or JSDL job description file formats.
2. A command file containing the commands that will be executed when the job is run in the remote cluster. The command files are in most cases normal Linux command scripts. Linux scripting is not discussed in this guide; you can find more information about it in the CSC computing environment user's guide, chapter 2.7.

The xrsl formatted job description files are text files that define the resources and files that the grid job needs. The file starts with an & sign, followed by a list of attribute-value pairs in the format:

(attribute="value")

Table 1 lists the most frequently used job description attributes. You can create xrsl formatted job description files with normal text editors, or you can use the Batch Job Wizard tool in the Scientist's User Interface (see Figure 2).

Table 1. Most commonly used xrsl attributes

count
    Number of computing cores to be reserved.
    Example: (count=8)

cputime
    Computing time requested. For a multi-processor job, this is a sum over all requested processors.
    Example: (cputime="6 hours, 20 minutes")

executable
    Name of the command script file.
    Example: (executable=runhello.sh)

inputfiles
    Files that will be copied from the local computer to the remote cluster. The left value is the file name at the remote cluster and the right value is the file name on the local computer.
    Example: (inputfiles=("file1.txt" "file1.txt"))

jobname
    Name of the grid job.
    Example: (jobname="hello_fgi")

memory
    Memory requirement in megabytes.
    Example: (memory="4000")

notify
    Email will be sent to the given address at certain states of the job, e.g. when the job begins (b) or ends (e). Here "be" means that an email is sent for both states.
    Example: (notify="be kkayttajl@csc.fi")

outputfiles
    Files that will be copied from the remote cluster when the results are retrieved. The left value is the file name at the remote cluster and the right value is the file name on the local computer.
    Example: (outputfiles=("out.txt" "out.txt"))

runtimeenvironment
    Required runtime environment.
    Example: (runtimeenvironment="APPS/BIO/BOWTIE-2.0.0")

stderr
    File for standard error.
    Example: (stderr=std.err)

stdout
    File for standard output.
    Example: (stdout=std.out)

Figure 2. Batch Job Wizard in the Scientist's User Interface.

Below is a short command script that is used as a simplified example of a grid command file. The job prints the words "Hello FGI" to the standard output and then writes the number of lines in the files inputfile.txt and file2.txt to a new file called output.txt. The name of the command script in this example is runhello.sh.

#!/bin/sh
echo "Hello FGI"
wc -l inputfile.txt file2.txt > output.txt
exit

The runhello.sh script above can be executed in the FGI environment using the following job description file (called hello.xrsl):

&(executable=runhello.sh)
(jobname=hello_fgi)
(stdout=std.out)
(stderr=std.err)
(cputime="1 hours")
(memory="1000")

(inputfiles=
("inputfile.txt" "file1.txt")
("file2.txt" "")
)
(outputfiles=
("output.txt" "")
)

The first line of the job description file defines that the script runhello.sh will be copied to the remote cluster and executed. The following lines define the name of the grid job (hello_fgi) and the names of the standard output (std.out) and standard error (std.err) files. The computing time (1 h) and memory (1000 MB) requirements of the job are defined in the fifth and sixth rows. Defining these values is not mandatory, but it is recommended. Setting memory and time limits ensures that your job will be submitted to a cluster that has enough resources. Further, correctly set memory and time requirements ensure that in the remote cluster your job ends up in a queue that executes it most effectively.

The definition (inputfiles= starts the region that lists the files that will be copied to the cluster executing the job. In addition to the actual input files of your job, this notation can also be used to copy program files, such as pre-compiled executables, source codes or program scripts, to the remote cluster. The example above uses two alternative ways to define a file that will be copied to the remote cluster. The row:

("inputfile.txt" "file1.txt")

defines that the file called file1.txt will be copied so that in the remote cluster the name of the file will be inputfile.txt. The next row:

("file2.txt" "")

defines that the file file2.txt will be copied to the remote cluster under its own name. The same result could also be achieved with a row such as:

("file2.txt" "file2.txt")

The final closing bracket on a line by itself ends the input file defining region. A similar syntax is used to define the files that will be copied back from the remote cluster when the job results are retrieved. The output file defining region starts with the notation (outputfiles=. In this example we retrieve only one file, called output.txt. If you would like to retrieve all the files generated in the job execution directory in the remote cluster, you can use the notation:

(outputfiles=("/" ""))

When you define the output to be retrieved, it is good to remember that moving large files between the remote cluster and the local computer can take a long time. Thus, you should try to avoid unnecessary copying of large data-sets.
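Since job description files are plain text, they are also easy to generate from scripts when many similar jobs are needed (a theme we return to in section 2.5). Below is a minimal sketch that writes the hello.xrsl file shown above from a bash here-document; JOBNAME and CPUTIME are illustrative shell variables, not ARC features:

#!/bin/bash
# Generate the hello.xrsl job description from a template.
JOBNAME=hello_fgi
CPUTIME="1 hours"
cat > hello.xrsl <<EOF
&(executable=runhello.sh)
(jobname=$JOBNAME)
(stdout=std.out)
(stderr=std.err)
(cputime="$CPUTIME")
(memory="1000")
(inputfiles=
("inputfile.txt" "file1.txt")
("file2.txt" "")
)
(outputfiles=
("output.txt" "")
)
EOF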

2.2 Executing grid jobs with ARC commands

In this chapter we assume that the user has installed the personal grid certificate and the ARC middleware as described in chapter 1. Further, we assume that the user has set up the ARC environment with the commands:

cd nordugrid-arc-standalone
source setup.sh

or, if the hippu.csc.fi server is being used, with the command:

module load nordugrid-arc

2.2.1 Creating a proxy-certificate

Before you can submit grid jobs, you must create a temporary proxy-certificate. ARC uses this proxy-certificate to authenticate you and to check that you have permission to submit jobs to FGI. The proxy-certificate is created with the command:

arcproxy

The arcproxy command asks for the password you have set for your certificate file. Once the proxy-certificate is created you can start executing other ARC commands. By default the proxy-certificate is valid for 12 hours. After the certificate has expired you can't submit new grid jobs or retrieve results before creating a new proxy-certificate. However, please note that even though your proxy-certificate has expired, the grid jobs you have already submitted will continue running normally in the FGI environment. You can also refresh your proxy-certificate before the current one expires by running the arcproxy command again.

You can modify the validity time of the certificate with the option -c validityperiod. For example, the command below would create a proxy-certificate that is valid for 72 hours:

arcproxy -c validityperiod=72h

The status of your proxy-certificate can be checked with the command:

arcproxy -I

As jobs can easily be checked and retrieved after generating a new proxy-certificate, long validity periods are not recommended.

2.2.2 Job submission commands

If your proxy-certificate is valid, you can submit a job, defined with an xrsl file, with the command:

arcsub jobdescription.xrsl

If no other arcsub options are used, the command first checks which remote clusters have suitable resources for the job and then submits the job to one of these clusters. By default ARC randomly selects one of the suitable clusters. The option -b FastestQueue makes arcsub submit the job to the cluster where the number of queuing jobs is smallest:

arcsub -b FastestQueue jobdescription.xrsl

If you wish to submit the job to a certain FGI cluster, you can define the cluster name with the option -c. For example, the following command would send the job to the usva.fgi.csc.fi cluster:

arcsub -c usva.fgi.csc.fi jobdescription.xrsl

When arcsub has submitted the job, it prints out an identifier for the job (jobid). This identifier is used to monitor the progress of the job and to retrieve the results when the job has finished. The syntax of the job identifier is:

protocol://name_of_the_executing_cluster:2811/jobs/jobnumber

for example:

gsiftp://asterope-grid.abo.fi:2811/jobs/

The command arcstat is used to check the status of grid jobs. The status of a single job can be checked with the command:

arcstat jobid

You can see the status of all of your FGI jobs by using the option -a:

arcstat -a

The status of a grid job can be: Preparing, Queuing, Running, Finishing, Finished or Failed.

In addition to arcstat you can also use the command arccat to follow the progress of a grid job. Arccat prints out the standard output, or, if you use the option -e, the standard error that the job script has generated so far. The syntax of arccat is:

arccat jobid
arccat -e jobid

Once the job is in the state Finished or Failed you can use the command arcget to retrieve the results. The syntax of the command is:

arcget jobid

Arcget creates a new directory for your results on your local computer and copies there the output files defined in the job description file, as well as the standard output and standard error files produced by the grid job. By default this directory is named according to the number of the job (the random number in the grid job identifier). However, if you use the option -J, the result directory is named according to the grid job name defined in the job description file:

arcget -J jobid

If the arcget command runs successfully, it removes all the job related files from the FGI environment. This means that once arcget has downloaded the results, the job no longer exists in the FGI environment and it can't be accessed with arcstat or other ARC commands.

You can also cancel a job in FGI before the job is finished. This can be done with the command arckill. The command arcclean removes a finished or failed job from the grid environment without downloading the results to the local computer. The syntax of these commands is:

arckill jobid
arcclean jobid

You can cancel and clean all your grid jobs in the FGI environment by using the option -a with the commands above:

arckill -a
arcclean -a

Table 2. Essential ARC commands for running FGI jobs

arccat
    Check the standard output and standard error of a running or finished grid job.
arcclean
    Remove a finished or failed grid job without downloading the results.
arcget
    Retrieve the results of a finished grid job.
arckill
    Cancel an active grid job.
arcproxy
    Create a proxy certificate.
arcstat
    Check the status of grid jobs.
arcsub
    Submit a grid job.
arcsync
    Synchronise the grid job list of the local computer with the FGI environment.

2.2.3 Running the sample job in the FGI environment

Below we go through a session where the simple FGI job hello.xrsl, described in chapter 2.1, is executed in FGI. Both the commands and their output are shown. The character ">" represents the command prompt and marks the commands given by the user.

First we create a grid proxy certificate and check that all the files that the job uses (job description file, command script and input files) are present in the current working directory:

> arcproxy
Your identity: /DC=org/DC=terena/DC=tcs/C=FI/O=CSC/CN=Kalle Käyttäjä kkayttajl@csc.fi
Enter pass phrase for /home/kkayttaj/.globus/userkey.pem:
Proxy generation succeeded
Your proxy is valid until: :49:17
> ls
file1.txt file2.txt hello.xrsl runhello.sh

After this, the job defined in the file hello.xrsl is submitted with the command arcsub:

> arcsub hello.xrsl
ERROR: Conversion 3055
ERROR: Conversion failed: : SEVQLVNQRUMwNiBAIEDCoDEyLjIy
Job submitted with jobid: gsiftp://celaeno-grid.lut.fi:2811/jobs/

The output of the arcsub command includes two error messages, but they can be ignored. For future reference it is good to copy the jobid from the end of the arcsub output to a file.
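In scripts this can be automated by capturing the jobid directly from the arcsub output. A minimal sketch using only grep, assuming the jobid is the gsiftp:// URL shown above (myjobs.txt is just an illustrative file name):

# Submit the job and store the jobid for later use
jobid=$(arcsub hello.xrsl | grep -o 'gsiftp://[^ ]*')
echo "$jobid" >> myjobs.txt
arcstat "$jobid"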

Next, we follow the progress of the job with the commands arcstat and arccat:

> arcstat gsiftp://celaeno-grid.lut.fi:2811/jobs/
Job: gsiftp://celaeno-grid.lut.fi:2811/jobs/
Name: hello_fgi
State: Queuing (INLRMS:E)
> arcstat gsiftp://celaeno-grid.lut.fi:2811/jobs/
Job: gsiftp://celaeno-grid.lut.fi:2811/jobs/
Name: hello_fgi
State: Finished (FINISHED)
Exit Code: 0
> arccat gsiftp://celaeno-grid.lut.fi:2811/jobs/
Hello FGI
> arcget gsiftp://celaeno-grid.lut.fi:2811/jobs/
> ls
file1.txt file2.txt hello.xrsl runhello.sh
> cd /
> ls
output.txt std.err std.out

2.2.4 Keeping the grid job status up to date

When you submit a job with the arcsub command, information about the submitted job is written to the file .arc/jobs.xml, located in the home directory of the computer you are using. The arcstat command uses this local list to resolve the job names when checking the grid jobs. Thus arcstat does not by default see jobs that you have submitted from other machines. To add the jobs submitted from other machines to your local jobs.xml, run the command:

arcsync

Sometimes your local job list may also contain jobs that you have already retrieved using some other machine, or that have been inactive for several weeks and thus automatically cleaned away. In these cases arcstat -a gives error messages like:

WARNING: Job information not found in the information system: gsiftp://usva.fgi.csc.fi:2811/jobs/rtvkdmgycuhndj9egpfstmkpabfkdmaBFKDmjqKKDmABFKDm8VTkNn

To get rid of these messages, run the command:

arcsync -T

This command removes the old jobs.xml file and creates a new one based on the data it collects from the grid environment.

2.3 Using software through runtime environments

The FGI environment contains a set of pre-installed software tools, because installing complex software within a grid job can be difficult. The software installed on FGI is used by accessing its Run Time Environment (RTE). The RTE concept is analogous to the environment modules used on the computing servers of CSC: the RTE adds the commands of the selected

software to the command path and sets up the environment variables that the software uses. In addition, RTEs are used to tell the ARC middleware what software is available on the different clusters. The FGI clusters don't all contain the same RTEs. A list of the RTEs available in FGI, with usage examples, can be found on the FGI Runtime Environments pages:

An RTE is taken into use by adding the runtimeenvironment parameter to the job description file. For example, the RTE that is used for the Bowtie2 software is referred to as APPS/BIO/BOWTIE-2.0.0. To use Bowtie2 commands in a grid job you should add the following line to the job description file:

(runtimeenvironment="APPS/BIO/BOWTIE-2.0.0")

Bowtie2 can utilise OpenMP based parallelisation. For programs that are capable of parallel computing, that is, running on more than one core, the runtime environment can define the number of computing cores to be used. This arrangement allows the system to automatically set a suitable core number, which may differ between clusters, for parallel processing. The core number defined by the RTE is typically stored in an environment variable, which is then used to pass this information to the command to be executed. Below is an example of a job description file (bowtie2.xrsl) that uses the Bowtie2 RTE:

&(executable=run_bowtie2.sh)
(jobname=bowtie2)
(stdout=std.out)
(stderr=std.err)
(cputime="6 hours")
(memory=4000)
(runtimeenvironment="APPS/BIO/BOWTIE-2.0.0")
(inputfiles=
("chr_18.fa" "chr_18.fa")
("reads.fq" "reads.fq")
)
(outputfiles=
("output.sam" "output.sam")
)

In the command script file we use the environment variable $BOWTIE_NUM_CPUS, defined by the RTE, with the bowtie2 commands:

#!/bin/sh
echo "Hello Bowtie2"
bowtie2-build -p $BOWTIE_NUM_CPUS chr_18.fa chr_18
bowtie2 -p $BOWTIE_NUM_CPUS chr_18 reads.fq > output.sam
exitcode=$?
echo "Bye Bowtie2!"
exit $exitcode

Details about the parameters a certain RTE defines can be checked on the home page of the RTE. In the case of Bowtie2 the address of the page is:

2.4 Running parallel applications in FGI

In FGI you can utilize POSIX threads (OpenMP) and MPI based parallel computing. In the case of threads based parallel computing the number of parallel processes (threads) is limited by the structure of the hardware: all the processes must run in the same node. Thus, in the case of FGI, threads based programs can't use more than 12 computing cores. In MPI computing the parallel processes can be distributed over several computing nodes, so there is no technical limit to the number of cores that can be used. However, all parallel implementations benefit from parallel computing only up to a certain extent. Beyond some application and analysis dependent limit, using a larger number of cores is no longer worthwhile. Because of that, scaling tests, where the application is tested with different core counts, should be run before the actual production runs (a simple way to generate such a test series is sketched at the end of section 2.4.1).

2.4.1 Executing threads based parallel software in FGI

For many pre-installed threads utilising programs, the Runtime Environment of the program automatically sets up the parameters that parallel job execution requires. However, if you use your own software, you need to make some extra definitions in the job description file. In the following example we use a software package called SOAPdenovo to run a sequence assembly job in FGI. SOAPdenovo is not available as a Runtime Environment in FGI, but you can download pre-compiled Linux executables from the home page of SOAPdenovo. These executables can be copied to the remote cluster together with the other input files. In this example we use the executable SOAPdenovo-31mer, the job configuration file soap.config and the input dataset datape.fasta. SOAPdenovo produces a large set of result directories and files, so in this case it is handy to use the output definition ("/" ""), which defines that all the data will be retrieved from the execution directory.

&(executable=runsoapdenovo.sh)
(jobname=soapdenovo)
(stdout=std.out)
(stderr=std.err)
(gmlog=gridlog)
(walltime=24h)
(memory=2000)
(count=12)
(runtimeenvironment="ENV/ONENODE")
(inputfiles=
("SOAPdenovo-31mer" "SOAPdenovo-31mer")
("soap.config" "soap.config")
("datape.fasta" "datape.fasta")
)
(outputfiles=
("/" "")
)

The definition (runtimeenvironment="ENV/ONENODE") is essential for threads based parallel jobs. It ensures that all the cores that the job uses will be in the same computing node. In this case we use 12 computing cores (count=12), which is the maximum for thread based parallel

jobs in FGI. In the ARC environment the memory reservation is given per core. In this example (memory=2000) reserves 2 GB for each core, which means that the job requires a total of 24 GB of memory. When you change the number of cores to be used, you should always check the memory reservation too.

In the command script runsoapdenovo.sh below, we first need to use the chmod command to give execution permissions to the executable that is copied to the remote cluster. In the case of the SOAPdenovo-31mer command, the number of computing cores to be used is given with the option -p. At the end of the script the input files are deleted with rm commands. This is done to avoid unnecessary copying of the input files back from the grid environment.

#!/bin/bash
echo "Hello SOAPdenovo!"
chmod u+x SOAPdenovo-31mer
./SOAPdenovo-31mer all -s soap.config -K 23 -p 12 -o soap23
rm -f datape.fasta
rm -f SOAPdenovo-31mer
rm -f soap.config
echo "Bye SOAPdenovo!"

The sample job above can be executed using the normal arcsub, arcstat and arcget commands.
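The scaling tests recommended at the start of section 2.4 can be prepared mechanically. Below is a minimal sketch, assuming the job description above has been saved as soapdenovo.xrsl (an illustrative name); it only rewrites the count and memory attributes shown earlier:

#!/bin/bash
# Generate one job description per core count for a scaling test.
# Total memory is kept at 24 GB by scaling the per-core reservation.
# Remember to also change the -p 12 option inside runsoapdenovo.sh
# to match each core count.
for cores in 2 4 8 12; do
    mem=$((24000 / cores))
    sed -e "s/(count=12)/(count=$cores)/" \
        -e "s/(memory=2000)/(memory=$mem)/" \
        soapdenovo.xrsl > soapdenovo_${cores}cores.xrsl
done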

2.4.2 Executing MPI based parallel programs in the FGI environment

The way MPI based applications are launched in the FGI environment may differ between applications. For application specific details, please check the runtime environment page of the application on the FGI User pages. As MPI jobs can utilize several computing nodes, the ENV/ONENODE definition used with thread based parallel jobs is not needed in the job description file. However, just as with threads based parallel jobs, you should always remember to check the memory reservation when the number of computing cores is changed.

A simple Gromacs run is used here as an example of an MPI based parallel job. The job description file gromacs.xrsl below reserves 32 computing cores (count=32), 500 MB of memory per core (total memory 16 GB) and 24 hours of computing time. The pre-installed Gromacs is taken into use with the runtime environment definition (runtimeenvironment>="APPS/CHEM/GROMACS-4.5.5").

&(executable=rungromacs.sh)
(jobname=gromacs)
(stdout=std.out)
(stderr=std.err)
(runtimeenvironment>="APPS/CHEM/GROMACS-4.5.5")
(gmlog=gridlog_1)
(walltime="24 hour")
(memory=500)
(count=32)
(inputfiles=("topol.tpr" "topol.tpr"))
(outputfiles=("output.tar.gz" "output.tar.gz"))

In the command script, the MPI version of the Gromacs molecular dynamics engine, mdrun_mpi, is launched using the mpirun command. When the Gromacs run is ready, all the files in the remote execution directory are packed into a single gzip compressed tar file.

#!/bin/sh
echo "Hello GROMACS!"
mpirun mdrun_mpi -s topol.tpr
exitcode=$?
tar cf output.tar ./*
gzip output.tar
echo "Bye GROMACS!"
exit $exitcode

The sample job above can be executed using the normal arcsub, arcstat and arcget commands.
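Once the job has finished, the packed results can be fetched and unpacked locally. A short sketch using the arcget -J behaviour described in section 2.2.2 (jobid stands for the identifier printed by arcsub):

# Retrieve the results; -J names the result directory after the
# job name defined in the job description file (here: gromacs)
arcget -J jobid
cd gromacs
tar xzf output.tar.gz
ls -l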

2.5 Using arcrunner to run large job sets in FGI

Grid computing can be very effective in cases where the analysis task can be split into numerous independent sub-tasks. Such tasks are generally referred to as embarrassingly parallel computing tasks. Typical examples are cases where the same simulation is executed several times with different parameter settings. Another common embarrassingly parallel job type is the case where the same analysis is performed on a large set of input files.

Running embarrassingly parallel computing tasks in the grid environment is in principle straightforward: the user just creates the grid job files described in chapter 2.2, submits all the jobs to the grid and, once the jobs are ready, collects the results and merges them together. However, this seemingly straightforward approach is not always the most efficient one. In this chapter we describe a grid job manager tool, called arcrunner, that can be used to run large embarrassingly parallel computing tasks easily and effectively in the FGI environment. You can use arcrunner at CSC on the Hippu and Taito servers, or you can download it to your local Linux or MacOSX computer.

2.5.1 Installing arcrunner

Arcrunner is installed on Hippu and Taito, where it can be launched with the command:

arcrunner

To use arcrunner on your local computer, you need to have the ARC middleware client and Python installed. You can download the arcrunner tool from the FGI web site:

Once you have downloaded the installation file, unpack it with the command:

tar zxf arcrunner.tgz

Next, change to the arcrunner/bin directory:

cd arcrunner/bin

The next step is to modify the fifth row of the arcrunner file so that the jobmanagerpath variable corresponds to the location of your arcrunner installation. For example, if you have downloaded and unpacked the arcrunner installation package to the directory /opt/grid, the jobmanagerpath defining line should be:

set jobmanagerpath=("/opt/grid/arcrunner")

After this, the only thing left to do is to add the arcrunner/bin directory to your command path.

2.5.2 Using arcrunner

The minimum input for the arcrunner command is:

arcrunner -xrsl job_descriptionfile.xrsl

When arcrunner is launched, it first checks all the sub-directories of the current directory. If a job

description file, defined with the option -xrsl, is found in a sub-directory, arcrunner tries to execute that task in the FGI environment.

In cases where there is a large number of grid jobs to be executed, all jobs are not submitted at once. In these cases arcrunner tries to optimize the usage of the grid environment. It follows the number of jobs that are queuing in the clusters and sends more jobs only when there are free resources available. The command also keeps track of the executed grid jobs and preferentially sends more jobs to those clusters that execute the jobs most efficiently.

If you don't want to use all the FGI clusters, you can use a cluster list file and the option -R to define which clusters will be used. The maximum number of jobs waiting to be executed can be defined with the option -W. If a job stays in a queue for too long, it is withdrawn from that queue and submitted to another cluster. The maximum queuing time (in seconds) can be set with the option -Q.

Sometimes an FGI cluster may not work properly and jobs may fail for technical reasons. If this happens, the failed grid jobs are re-submitted to other clusters three times before they are considered failed sub-jobs. During execution arcrunner checks the status of the sub-jobs once a minute and prints the status of each active sub-job. Finally, it writes out a summary table of the sub-jobs. When a job finishes successfully, the job manager retrieves the result files from the grid to the grid job directory.

Table 3. arcrunner options

-xrsl file_name
    The common xrsl file name that defines the jobs.
-R file_name
    Text file containing the names of the clusters to be used.
-W integer
    Maximum number of jobs in the grid waiting to run (default 200).
-Q integer
    The maximum time a job stays in a queue before being resubmitted (default 3600 s).
-S integer
    The maximum time a job stays in a submitted state before being resubmitted (default 3600 s).
-J integer
    Maximum number of simultaneous jobs running in the grid (default 1000 jobs).
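As an illustration of how the options in Table 3 combine, the following invocation would restrict the run to the clusters listed in a file and tighten the submission limits. The option spelling follows Table 3; myclusters.txt is an illustrative file name, and average.xrsl is the job description used in the example that follows:

# myclusters.txt: one cluster name per line
arcrunner -xrsl average.xrsl -R myclusters.txt -W 100 -Q 1800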

2.5.3 Arcrunner example

The following simple example demonstrates the usage of the arcrunner command. Assume that we have a set of files that we wish to analyse: 100 files named file_1, file_2, file_3, ..., file_100, each containing 100 integer numbers in one column. We would like to calculate the average of the values in each file using FGI.

To run the analysis in FGI using arcrunner, we first need to create a sub-folder for each of the input files and copy the input files there. This could be done, for example, with a shell script like the following:

for number in `seq 1 100`
do
  mkdir subjob_$number
  mv file_$number subjob_$number/inputfile.txt
done

Now your directory should contain 100 subfolders, each containing one of the files to be analysed. Note that the name of the input file is now the same (inputfile.txt) in all the sub-job directories.

The average of the numbers in a file called inputfile.txt can be calculated with the following script. The script is created with a text editor and saved as the file calc_average.csh:

#!/bin/bash
awk '{ a = (a + $1)} END{ print a/NR }' inputfile.txt > output.txt

To run this script in FGI we need to create a job description file, here named average.xrsl. The content of the job description file is:

&(executable=calc_average.csh)
(jobname=arc_example)
(stdout=std.out)
(stderr=std.err)
(cputime="2 minutes")
(memory="1000")
(inputfiles=
("inputfile.txt" "inputfile.txt")
)
(outputfiles=
("output.txt" "")
)

Next we need to copy the command script and the job description file to all the sub-job folders. This is done with another small shell script containing the following loop:

for number in `seq 1 100`
do
  cp calc_average.csh subjob_$number/
  cp average.xrsl subjob_$number/
done

Now we have 100 sub-directories, each containing a grid job description file and the corresponding job script and input file. We can now launch the analysis task with arcrunner:

arcrunner -xrsl average.xrsl

When the command is executed, arcrunner starts sending the 100 jobs, one by one, to FGI. Sending the jobs will take some time. If some of the FGI servers are down, arcrunner will give error messages about unsuccessful job submissions. These messages can be ignored, as the jobs will be re-submitted during the next job status checking cycle. The job submission log that arcrunner prints to the standard output (i.e. your screen) looks like the following:

/home/csc/kkmattil/.arc/clusters_for_arcrunner
:50:26 INFO Job subjob_1 submitted with gid gsiftp://electra-grid.chem.jyu.fi:2811/jobs/vh2ldmdtkjgna2bavq8oapwnabfkdmabfkdmchnkdmabfkdmd56jwm
:50:26 INFO Job subjob_1 changing state from new to submitted
:50:48 INFO Job subjob_10 submitted with gid gsiftp://taygeta-grid.oulu.fi:2811/jobs/mbfldmztkjgnhwnbsqwni3hmabfkdmabfkdmv7kkdmabfkdmgprtin
:50:48 INFO Job subjob_10 changing state from new to submitted
:50:50 INFO Job subjob_100 submitted with gid gsiftp://maia-grid.uef.fi:2811/jobs/hg5kdm1tkjgn9novemgrhjgmabfkdmabfkdmebmkdmabfkdmt8qnto
:50:50 INFO Job subjob_100 changing state from new to submitted
:50:52 INFO Job subjob_11 submitted with gid gsiftp://aesyle-grid.fgi.csc.fi:2811/jobs/c9sldm3tkjgnkazegptluzemabfkdmabfkdmydnkdmabfkdmrs5hjn
:50:52 INFO Job subjob_11 changing state from new to submitted

When all the jobs are submitted, or the number of submitted jobs reaches the limit of simultaneously submitted jobs (default 200), arcrunner writes out a summary of the computing task it is executing. For example, the following summary tells that of the 100 sub-jobs, 71 have already finished, two are running, 25 are queuing and 2 are being submitted to the clusters:

:08:21 INFO host new submitted queuing running finished failed success failure
:08:21 INFO merope-grid.cc.tut.fi
:08:21 INFO asterope-grid.abo.fi
:08:21 INFO electra-grid.chem.jyu.fi
:08:21 INFO taygeta-grid.oulu.fi
:08:21 INFO maia-grid.uef.fi
:08:21 INFO grid.triton.aalto.fi
:08:21 INFO aesyle-grid.fgi.csc.fi
:08:21 INFO alcyone-grid.grid.helsinki.fi
:08:21 INFO celaeno-grid.lut.fi
:08:21 INFO usva.fgi.csc.fi
:08:21 INFO pleione-grid.utu.fi
:08:21 INFO TOTAL

The arcrunner command should be kept running until all the jobs have reached the state of success or failure and the command stops. For large analysis tasks this can mean that arcrunner will be running for several days. In such situations we recommend launching arcrunner inside a screen virtual terminal session. If the arcrunner command stops for some reason before all the sub-jobs are ready, you can continue the jobs by running the same arcrunner command again.

When all the sub-jobs are processed, each sub-job directory contains a new directory called results. This directory contains the output files defined in the job description file, along with the standard output and standard error files. In this case the results directory contains the files output.txt, std.err and std.out. All the results can now be collected into one file, for example, with the command:

cat subjob_*/results/output.txt > results.txt
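If a single overall mean of all the per-file averages is wanted, the merged results.txt can be reduced one step further; a small sketch with awk (exact here because every input file holds the same number of values):

# Reduce the 100 per-file averages collected above to one overall mean
awk '{ sum += $1 } END { print sum/NR }' results.txt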

2.6 Using storage elements for data transport in FGI

Moving large datasets between the local computer and the grid clusters is often one of the major bottlenecks in grid computing. The FGI environment includes a centralised data repository that can be used to avoid copying the same file repeatedly between the local computer and the FGI environment. This system, commonly referred to as a storage element, allows FGI users to load data into a central repository that can be accessed from the computing nodes. The storage element system of ARC has a cache system that in many cases significantly reduces the network load and speeds up the job submission process.

To illustrate the benefits of storage elements, let's assume that we are submitting one hundred grid jobs, all of which need the same large input file called bigdb.txt. If the input file is copied using the normal input file definitions in the job description files, the bigdb.txt file is transported from the local machine to the computing clusters one hundred times. A cleverer way is to first upload bigdb.txt to the grid storage element and then modify the job description file so that the input file list refers to the bigdb.txt file in the storage element rather than to the local copy. When we now launch the 100 grid jobs, the first job in each computing cluster copies the input file from the storage element to the cache disk area of that cluster. Subsequent jobs can then use the same file from the local data cache. Thus, if we send 100 jobs to ten computing clusters, the data needs to be transported from the storage element to the remote clusters only ten times.

The storage element of FGI is based on the SRM (Storage Resource Manager) protocol. The address of the storage element is:

srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/

User specific directories are not automatically created in the FGI storage element. Instead, users should create their own personal sub-directory when they use the FGI storage element for the first time.

The storage element system is intended to support running computing tasks in the FGI environment. It is not intended for storing data for longer periods. As the size of the storage element is rather limited, users must remove their unused data from it. When the storage element starts filling up, the oldest files will be removed automatically. It should also be noted that the security level of the storage element is very low: you should not use it for sensitive data. Files stored on the storage element can't be modified. If you wish to modify a file in the storage element, you must first download it to your computer, modify the local copy, remove the original file from the storage element, and then copy the modified file back to the storage element.

2.6.1 Using storage elements with ARC commands

The storage element can be used through a set of ARC commands: arcls, arccp and arcrm. The command arccp can be used to copy data between the local computer and the storage element. For example, the file bigdb.txt can be copied to the storage element with the command:

arccp bigdb.txt srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/bigdb.txt

Note that in the storage element file path we have added one extra folder level: my_username. When the command above is executed, two things happen: 1. a new directory called my_username is created (if it does not yet exist), and 2. the file bigdb.txt is copied there.
(Currently the ARC client does not have a separate command for creating new directories. To create a new directory you must use the arccp command as above.)

Copying a file from the storage element is also done with the arccp command. For example, the

command:

arccp srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/bigdb.txt bigdb_copy.txt

would copy the file bigdb.txt from the storage element to the local computer into the file bigdb_copy.txt.

The content of a directory in the storage element can be checked with the command arcls. For example, the content of the my_username directory can be checked with the commands:

arcls srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/

or

arcls -l srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/

The command arcrm is used to remove a file from the storage element. For example, to remove the file bigdb.txt from the storage element, you should use the command:

arcrm srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/bigdb.txt

2.6.2 Using storage elements in grid jobs

In grid jobs, storage element files are used via the job description files. There you can specify that a given input file is read from the storage element (instead of the local computer) or that a certain output file is transported to the storage element. For example, in the Bowtie2 runtime environment example in chapter 2.3, a chromosome sequence file, chr_18.fa, is used as one of the input files. We can copy the file to the storage element with the command:

arccp chr_18.fa srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/chr_18.fa

and then modify the input line defining the chr_18.fa file to:

("chr_18.fa" "srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/chr_18.fa")

The output file of the job, output.sam, can be automatically saved to the storage element in the same way by modifying the output definition to:

("output.sam" "srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/output.sam")

Thus a job description file using the storage element both for reading the input and for storing the results would look similar to the example below. Note that the command script (run_bowtie2.sh) needs no modifications.

&(executable=run_bowtie2.sh)
(jobname=bowtie2)
(stdout=std.out)
(stderr=std.err)
(cputime="6 hours")
(memory=4000)
(runtimeenvironment="APPS/BIO/BOWTIE-2.0.0")
(inputfiles=
("chr_18.fa" "srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/chr_18.fa")
("reads.fq" "reads.fq")
)
(outputfiles=
("output.sam" "srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/output.sam")
)

When the job is finished the output can be retrieved with the command:

arccp srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/output.sam ./output.sam
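When a job set, such as the arcrunner example in section 2.5, needs many shared input files, they can be uploaded with a simple loop. A minimal sketch, with my_username and the file name pattern as placeholders:

# Upload a set of input files to the personal directory in one go
for f in file_*.txt; do
    arccp "$f" srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/"$f"
done
# Check that the files arrived
arcls -l srm://srm.fgi.csc.fi/dpm/fgi.csc.fi/home/fgi.csc.fi/my_username/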

3. Grid Monitor

The Grid Monitor is a web based tool for getting information about the status of ARC based grids and the clusters running the jobs. The monitor is maintained by the NorduGrid community and it is located at the following URL address:

The main page of the Grid Monitor shows information about grid clusters from several countries. As we are here only interested in the FGI environment, we can alternatively use a URL address that shows just the Finnish ARC clusters:

Figure 3 shows the status page of the Finnish grid clusters.

Figure 3. ARC Grid Monitor.

The main view of the Grid Monitor shows a list of clusters and the load of each cluster. The relative load of each cluster is shown as a bar diagram, where the dark gray bar shows the total load of the cluster (including the jobs submitted by local users) and the green bar shows the number of grid jobs submitted through the ARC middleware. The main view of the Grid Monitor contains a large number of links. If you click a cluster name in the Site column, a window opens that shows the details of the cluster including, for example, the name of the machine, its operating system and the available runtime environments.

Clicking on one of the green bars in the Load diagram opens a new window showing the grid jobs currently running in the corresponding cluster (Figure 4), and if you click a number in the Queueing column you will get a similar view of the grid jobs queuing in the batch job system of that cluster. In the job lists you can further click the Job name to see the details of a specific job, or the Owner to see more information about the user and the status of all the FGI jobs of that user.

Figure 4. Job list of a cluster in the Grid Monitor.

A more detailed description of the Grid Monitor tool can be found in the Grid Monitor manual provided by the NorduGrid community.
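The web monitor is not the only way to check cluster and job status: the ARC client can provide similar information on the command line. The example below is a sketch that assumes an installed ARC client and a valid proxy certificate; the cluster hostname is a placeholder, not a real FGI frontend name (use the frontend name shown in the monitor's Site column).

   arcinfo -c grid.example.fi    # details of a single cluster (placeholder hostname)
   arcstat -a                    # status of all your own grid jobs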
