SAGA-Python Documentation
Release v0.29, The SAGA Project, July 13, 2015
Contents

- Contents:
  - Installation and Usage
  - Tutorial
  - Library Reference
  - Developer Documentation
- Indices and tables
- Python Module Index
SAGA-Python is a light-weight Python package that implements the OGF SAGA interface specification and provides adaptors for different distributed middleware systems and services. SAGA-Python focuses on usability, extensibility and simple deployment in real-world heterogeneous distributed computing environments and application scenarios.

Get involved or contact us:

- SAGA-Python on GitHub:
- SAGA-Python Mailing List:
Chapter 1: Contents

1.1 Installation and Usage

This part of the documentation is devoted to general information on the setup and configuration of SAGA and things that make working with SAGA easier.

Installation

Requirements

saga-python has the following requirements:

- Python 2.5 or newer

Installation via PyPI

saga-python is available for download via PyPI and can be installed using easy_install or pip (preferred). Both automatically download and install all dependencies required by saga-python if they cannot be found on your system:

```shell
pip install saga-python
```

or with easy_install:

```shell
easy_install saga-python
```

Using Virtualenv

If you don't want to (or can't) install SAGA-Python into your system's Python environment, there is a simple (and often preferred) way to create an alternative Python environment (e.g., in your home directory):

```shell
virtualenv --no-site-packages $HOME/sagaenv/
. $HOME/sagaenv/bin/activate
pip install saga-python
```

What if my system doesn't come with virtualenv, pip or easy_install?

There is a simple workaround for that using the "instant" version of virtualenv. It also installs easy_install and pip:

```shell
wget
python virtualenv.py $HOME/sagaenv/ --no-site-packages
. $HOME/sagaenv/bin/activate
pip install saga-python
```

Installing the Latest Development Version

Warning: Please keep in mind that the latest development version of SAGA-Python can be highly unstable or even completely broken. It is not recommended for use in a production environment.

You can install the latest development version of SAGA-Python directly from our Git repository using pip:

```shell
pip install -e git://github.com/saga-project/saga-python.git@devel#egg=saga-python
```

Configuration

Note: SAGA has been designed as a zero-configuration library. Unless you are experiencing problems with one of the default configuration settings, there is really no need to create a configuration file for SAGA.

SAGA and its individual middleware adaptors provide various optional Configuration Options. While SAGA tries to provide sensible default values for the majority of these options (zero-conf), it can sometimes be necessary to modify or extend SAGA's configuration. SAGA provides two ways to access and modify its configuration: via Configuration Files (recommended) and via the Configuration API (for advanced use-cases).

Configuration Files

If you need to make persistent changes to any of SAGA's Configuration Options, the simplest option is to create a configuration file. During startup, SAGA checks for the existence of a configuration file in $HOME/.saga.conf. If that configuration file is found, it is parsed by SAGA's configuration system. SAGA configuration files use a structure that looks like this:

```ini
[saga.engine]
option = value

[saga.logger]
option = value

[saga.adaptor.name]
option = value
```

Configuration Options

Warning: This should be generated automatically!
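The $HOME/.saga.conf structure above is standard INI syntax, so you can inspect such a file with Python's stdlib ConfigParser. The sketch below is purely illustrative (the helper name and sample text are not part of the SAGA API; the section/option names follow the example above):

```python
try:
    from configparser import ConfigParser              # Python 3
except ImportError:
    from ConfigParser import SafeConfigParser as ConfigParser  # Python 2

import io

# Sample text in the same shape as the $HOME/.saga.conf example.
SAMPLE = u"""
[saga.engine]
option = value

[saga.logger]
option = value
"""

def load_saga_conf(text):
    # Parse INI-style text into a plain {section: {option: value}} dict.
    parser = ConfigParser()
    try:
        parser.read_string(text)                       # Python 3
    except AttributeError:
        parser.readfp(io.StringIO(text))               # Python 2 fallback
    return dict((s, dict(parser.items(s))) for s in parser.sections())

conf = load_saga_conf(SAMPLE)
print(conf["saga.engine"]["option"])  # -> value
```

A real SAGA run would read the file itself; this only shows that the format is ordinary, tool-friendly INI.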
Configuration API

Module saga.utils.config

The config module provides classes and functions to introspect and modify SAGA's configuration. The getconfig() function is used to get the GlobalConfig object, which represents the current configuration of SAGA:

```python
from saga.utils.config import getconfig

sagaconf = getconfig()
print sagaconf.get_category('saga.utils.logger')
```

Logging System

In a distributed environment, unified error logging and reporting is a crucial capability for debugging and monitoring. SAGA has a configurable logging system that captures debug, info, warning and error messages across all of its middleware adaptors. The logging system can be controlled in two different ways: via Environment Variables, which should be sufficient in most scenarios, and via Application Level Logging, which provides programmatic access to the logging system for advanced use-cases.

Environment Variables

Several environment variables can be used to control SAGA's logging behavior from the command line. This can come in handy when debugging a problem with an existing SAGA application. Environment variables are set in the executing shell and evaluated by SAGA at program startup.

SAGA_VERBOSE controls the log level, i.e., the amount of output generated by the logging system. SAGA_VERBOSE expects either a numeric (0-4) value or a string (case insensitive) representing the log level:

Numeric Value | Log Level | Type of Messages Displayed
0 (default)   | CRITICAL  | Only fatal events that will cause SAGA to abort.
1             | ERROR     | Errors that will not necessarily cause SAGA to abort.
2             | WARNING   | Warnings that are generated by SAGA and its middleware adaptors.
3             | INFO      | Useful (?) runtime information that is generated by SAGA and its middleware adaptors.
4             | DEBUG     | Debug messages added to the code by the developers (lots of output).

For example, if you want to see the debug messages that SAGA generates during program execution, you would set SAGA_VERBOSE to DEBUG before you run your program:

```shell
SAGA_VERBOSE=DEBUG python mysagaprog.py
```

SAGA_LOG_FILTERS controls the message sources displayed. SAGA uses a hierarchical structure for its log sources. Starting with the root logger saga, several sub-loggers are defined for SAGA-internal logging events (saga.engine) and individual middleware adaptors (saga.adaptor.name). SAGA_LOG_FILTERS expects either a single source name or a comma-separated list of source names. Non-existing source names are ignored. For example, if you want to see only the debug messages generated by saga.engine and a specific middleware adaptor called xyz, you would set the following environment variables:
```shell
SAGA_VERBOSE=DEBUG SAGA_LOG_FILTERS=saga.engine,saga.adaptor.xyz python mysagaprog.py
```

SAGA_LOG_TARGETS controls where the log messages go. Multiple concurrent locations are supported. SAGA_LOG_TARGETS expects either a single location or a comma-separated list of locations, where a location can either be a path/filename or the STDOUT keyword (case insensitive) for logging to the console. For example, if you want to see debug messages on the console but also want to log them in a file for further analysis, you would set the following environment variables:

```shell
SAGA_VERBOSE=DEBUG SAGA_LOG_TARGETS=STDOUT,/tmp/mysaga.log python mysagaprog.py
```

Application Level Logging

The SAGA-Python logging utilities are a thin wrapper around Python's logging facilities, integrated into the SAGA-Python configuration facilities. To support seamless integration with application-level logging needs, saga.utils.logger.getlogger() can produce additional logger instances, which are native Python logging.Logger instances preconfigured according to the SAGA-Python logging configuration. Those instances can then be further customized as needed:

```python
from saga.utils.logger import getlogger, INFO

app_logger = getlogger('application.test')
app_logger.level = INFO
app_logger.info('application level log message on INFO level')
```

1.2 Tutorial

This tutorial explains the job and filesystem packages, arguably the most widely used capabilities in saga-python. It covers local as well as remote job submission and management (ssh, pbs, sge) and file operations (sftp).
Prerequisites:

- You are familiar with Linux or UNIX
- You can read and write Python code
- You can use SSH and understand how public and private keys work
- You understand the basic concepts of distributed computing

You will learn how to:

- Install SAGA on your own machine
- Write a program that runs a job locally on your machine
- Use the same program with a different plug-in to run the job on a remote site
- Add file transfer capabilities to the program to retrieve results
1.2.1 Part 1: Introduction

The SAGA-Python module provides an object-oriented programming interface for job submission and management, resource allocation, file handling, and coordination and communication: functionality that is required in the majority of distributed applications, frameworks and tools. SAGA encapsulates the complexity and heterogeneity of different distributed computing systems and cyberinfrastructures by providing a single, coherent API to the application developer. The so-called adaptor mechanism, which is transparent to the application, translates the API calls to the different middleware interfaces. A list of available adaptors can be found in chapter_adaptors.

In part 2 of this tutorial, we will start with the local (fork) job adaptor. In part 3, we use the ssh job adaptor to submit a job to a remote host. In part 4, we use one of the HPC adaptors (sge, slurm, pbs) to submit a job to an HPC cluster. Additionally, we introduce the sftp file adaptor to implement input and output file staging.

Installation

Warning: SAGA-Python requires Python >= 2.5. It won't work with an older version of Python.

Install Virtualenv

A small Python command-line tool called virtualenv allows you to create a local Python environment (sandbox) in user space, which allows you to install additional Python packages without having to be root. You can install virtualenv on most systems via apt-get or yum, etc. To create your local Python environment, run the following command:

```shell
virtualenv $HOME/tutorial
```

If you don't have virtualenv installed and you don't have root access on your machine, you can use the following script instead:

```shell
curl --insecure -s python - $HOME/tutor
```

Note: If you have multiple Python versions installed on your system, you can use the virtualenv --python=python_exe flag to force virtualenv to use a specific version.
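The "Python >= 2.5" requirement above can also be checked programmatically before attempting an install. A minimal sketch (the function name is illustrative, not part of SAGA):

```python
import sys

def meets_saga_requirement(version_info=None):
    # SAGA-Python (this release) requires Python 2.5 or newer.
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) >= (2, 5)

print(meets_saga_requirement())  # True on any Python 2.5+ interpreter
```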
Next, you need to activate your Python environment in order to make it work:

```shell
source $HOME/tutorial/bin/activate
```

Activating the virtualenv is very important. If you don't activate your virtualenv, the rest of this tutorial will not work. You can usually tell that your environment is activated properly if your bash command-line prompt starts with (tutorial).

Install SAGA-Python

The latest saga-python module is available via the Python Package Index (PyPI). PyPI packages are installed very similarly to Linux deb or rpm packages with a tool called pip (which stands for "pip installs packages"). Pip is installed by default in your virtualenv, so in order to install SAGA-Python, the only thing you have to do is this:
```shell
pip install saga-python
```

To make sure that your installation works, run the following command to check if the saga-python module can be imported by the interpreter (the output of the command below should be the version number of the saga-python module):

```shell
python -c "import saga; print saga.version"
```

Part 2: Local Job Submission

One of the most important features of SAGA-Python is the capability to submit jobs to local and remote queueing systems and resource managers. This first example explains how to define a SAGA job using the Job API and run it on your local machine.

If you are somewhat familiar with Python and the principles of distributed computing, the Hands-On code example is probably all you want to know. The code is relatively simple and pretty self-explanatory. If you have questions about the code or if you want to know in detail what's going on, read the Details and Discussion section further below.

Hands-On: Local Job Submission

Before we discuss the individual API calls in more detail, let's get down and dirty and run our first example: creating and running a SAGA job on your local machine.

Create a new file saga_example_local.py and paste the following code (or download it directly from here):

```python
__author__    = "Ole Weidner"
__copyright__ = "Copyright, The SAGA Project"
__license__   = "MIT"

import sys
import saga

def main():
    try:
        # Create a job service object that represents the local machine.
        # The keyword 'fork://' in the URL scheme triggers the 'shell' adaptor,
        # which can execute jobs on the local machine as well as on a remote
        # machine via "ssh://hostname".
        js = saga.job.Service("fork://localhost")

        # Next, we describe the job we want to run. A complete set of job
        # description attributes can be found in the API documentation.
        jd = saga.job.Description()
        jd.environment = {'MYOUTPUT': '"Hello from SAGA"'}
        jd.executable  = '/bin/echo'
        jd.arguments   = ['$MYOUTPUT']
        jd.output      = "mysagajob.stdout"
        jd.error       = "mysagajob.stderr"

        # Create a new job from the job description. The initial state of
        # the job is 'New'.
        myjob = js.create_job(jd)

        # Check our job's id and state
        print "Job ID    : %s" % (myjob.id)
        print "Job State : %s" % (myjob.state)

        print "\n...starting job...\n"

        # Now we can start our job.
        myjob.run()

        print "Job ID    : %s" % (myjob.id)
        print "Job State : %s" % (myjob.state)

        print "\n...waiting for job...\n"

        # wait for the job to either finish or fail
        myjob.wait()

        print "Job State : %s" % (myjob.state)
        print "Exitcode  : %s" % (myjob.exit_code)

        return 0

    except saga.SagaException, ex:
        # Catch all saga exceptions
        print "An exception occured: (%s) %s " % (ex.type, (str(ex)))
        # Trace back the exception. That can be helpful for debugging.
        print " \n*** Backtrace:\n %s" % ex.traceback
        return -1

if __name__ == "__main__":
    sys.exit(main())
```

Run the Code

Save the file and execute it (make sure your virtualenv is activated):

```shell
python saga_example_local.py
```

The output should look something like this:

```
Job ID    : [fork://localhost]-[None]
Job State : New

...starting job...

Job ID    : [fork://localhost]-[644240]
Job State : Pending

...waiting for job...

Job State : Done
Exitcode  : None
```
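Why does the job produce "Hello from SAGA"? The shell adaptor runs /bin/echo through a shell, so $MYOUTPUT is expanded from the job's environment. The effect can be sketched with the stdlib alone; this mimics what the adaptor does, it is not SAGA code (and it assumes a POSIX shell and /bin/echo):

```python
import subprocess

# Mimic jd.environment / jd.executable / jd.arguments from the example above:
# the shell expands $MYOUTPUT before /bin/echo ever sees it.
env = {'MYOUTPUT': '"Hello from SAGA"'}
out = subprocess.check_output('/bin/echo $MYOUTPUT', shell=True, env=env)
print(out)
```

Note that the surrounding double quotes are part of the variable's value, which is why they end up in mysagajob.stdout as well.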
Check the Output

Once the job has completed, you will find a file mysagajob.stdout in your current working directory. It should contain the line:

Hello from SAGA

A Quick Note on Logging and Debugging

Since working with distributed systems is inherently complex and much of the complexity is hidden within SAGA-Python, it is necessary to do a lot of internal logging. By default, logging output is disabled, but if something goes wrong or if you're just curious, you can enable logging output by setting the environment variable SAGA_VERBOSE to a value between 1 (print only critical messages) and 5 (print all messages). Give it a try with the above example:

```shell
SAGA_VERBOSE=5 python saga_example_local.py
```

Discussion

Now that we have successfully run our first job with saga-python, we will discuss some of the building blocks and details of the code.

The job submission and management capabilities of saga-python are packaged in the saga.job module (API Doc). Three classes are defined in this module:

- The job.Service class provides a handle to the resource manager, for example a remote PBS cluster.
- The job.Description class is used to describe the executable, arguments, environment and requirements (e.g., number of cores, etc.) of a new job.
- The job.Job class is a handle to a job associated with a job.Service. It is used to control (start, stop) the job and query its status (e.g., Running, Finished, etc.).

In order to use the SAGA Job API, we first need to import the saga-python module:

```python
import saga
```

Next, we create a job.Service object that represents the compute resource you want to use. The job service takes a single URL as parameter. The URL is a way to tell saga-python what type of resource or middleware you want to use and where it is. The URL parameter is passed to saga-python's plug-in selector and, based on the URL scheme, a plug-in is selected. In this case the Local job plug-in is selected for fork://.
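Plug-in selection by URL scheme can be pictured as a simple lookup table. The sketch below is an illustrative toy, not SAGA's actual selector (the real engine inspects its registered adaptors); the mapping only names adaptors mentioned in this tutorial:

```python
# Toy scheme -> adaptor mapping (illustrative only).
ADAPTORS = {
    'fork':    'shell (local) job adaptor',
    'ssh':     'shell job adaptor over SSH',
    'pbs+ssh': 'PBS adaptor over SSH',
    'sge+ssh': 'SGE adaptor over SSH',
}

def select_adaptor(url):
    # Split off the scheme part of the URL and look it up.
    scheme = url.split('://', 1)[0]
    return ADAPTORS.get(scheme, 'no adaptor for scheme %r' % scheme)

print(select_adaptor("fork://localhost"))   # shell (local) job adaptor
print(select_adaptor("pbs+ssh://host.net")) # PBS adaptor over SSH
```

The point of the design is that only the URL changes between parts 2, 3 and 4 of this tutorial; the rest of the code stays the same.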
The URL scheme to plug-in mapping is described in chapter_adaptors.

```python
js = saga.job.Service("fork://localhost")
```

To define a new job, a job.Description object needs to be created that contains information about the executable we want to run, its arguments, the environment that needs to be set and some other optional job requirements:

```python
jd = saga.job.Description()

# environment, executable & arguments
jd.environment = {'MYOUTPUT': '"Hello from SAGA"'}
jd.executable  = '/bin/echo'
jd.arguments   = ['$MYOUTPUT']

# output options
jd.output = "mysagajob.stdout"
jd.error  = "mysagajob.stderr"
```

Once the job.Service has been created and the job has been defined via the job.Description object, we can create a new instance of the job via the create_job method of the job.Service and use the resulting object to control (start, stop) and monitor the job:

```python
myjob = js.create_job(jd)  # create a new job instance
myjob.run()                # start the job instance

print "Initial Job ID    : %s" % (myjob.jobid)
print "Initial Job State : %s" % (myjob.get_state())

myjob.wait()  # wait for the job to reach either 'Done' or 'Failed' state

print "Final Job ID    : %s" % (myjob.jobid)
print "Final Job State : %s" % (myjob.get_state())
```

Part 3: Remote Job Submission

Next, we take the previous example and modify it so that our job is executed on a remote machine instead of localhost. This example shows one of the most important capabilities of SAGA: abstracting system heterogeneity. We can use the same code we used to run a job via fork, with minimal modifications, to run a job on a different resource, e.g., via ssh on another remote system or via pbs or sge on a remote cluster.

Prerequisites

This example assumes that you have SSH access to a remote resource, either a single host or an HPC cluster. It also assumes that you have a working public/private SSH key-pair and that you can log in to your remote resource of choice using those keys, i.e., your public key is in the ~/.ssh/authorized_keys file on the remote machine. If you are not sure how this works, you might want to read about SSH and GSISSH first.

Hands-On: Remote Job Submission

Copy the code from the previous example to a new file saga_example_remote.py. Add a saga.Context and saga.Session right before the job.Service object initialization.
Sessions and Contexts describe your SSH identity on the remote machine:

```python
ctx = saga.Context("ssh")
ctx.user_id = "oweidner"

session = saga.Session()
session.add_context(ctx)
```

To change the execution host for the job, change the URL in the job.Service constructor. If you want to use a remote SSH host, use an ssh:// URL. Note that the session is passed as an additional parameter to the Service constructor:

```python
js = saga.job.Service("ssh://remote.host.net", session=session)
```

Alternatively, if you have access to a PBS cluster, use a pbs+ssh://... URL:

```python
js = saga.job.Service("pbs+ssh://remote.hpchost.net", session=session)
```

There are more URL options. Have a look at the chapter_adaptors section for a complete list.

If you are submitting your job to a PBS cluster (pbs+ssh://), you will probably also have to make some modifications to your job.Description. Depending on the configuration of your cluster, you might have to put in the name of the queue you want to use or the allocation or project name that should be credited:

```python
jd = saga.job.Description()
jd.environment = {'MYOUTPUT': '"Hello from SAGA"'}
jd.executable  = '/bin/echo'
jd.arguments   = ['$MYOUTPUT']
jd.output      = "mysagajob.stdout"
jd.error       = "mysagajob.stderr"
jd.queue       = "short"       # Using a specific queue
jd.project     = "TG-XYZABCX"  # Example for an XSEDE/TeraGrid allocation
```

Run the Code

Save the file and execute it (make sure your virtualenv is activated):

```shell
python saga_example_remote.py
```

The output should look something like this:

```
Job ID    : None
Job State : New

...starting job...

Job ID    : [ssh://gw68.quarry.iu.teragrid.org]-[18533]
Job State : Done

...waiting for job...

Job State : Done
Exitcode  : 0
```

Values marked as None could not be fetched from the backend at that point.

Check the Output

As opposed to the previous local example, you won't find a mysagajob.stdout file in your working directory. This is because the file has been created on the remote host where your job was executed. In order to check its content, you would have to log in to the remote machine. We will address this issue in the next example.

Discussion

Besides changing the job.Service URL to trigger a different middleware plug-in, we have introduced another new aspect in this tutorial example: Contexts. Contexts are used to define security / log-in contexts for SAGA objects and are passed to the executing plug-in (e.g., the SSH plug-in). A context always has a type that matches the executing plug-in. The two most commonly used contexts in SAGA are ssh and gsissh:

```python
# Your ssh identity on the remote machine
ctx = saga.Context("ssh")
ctx.user_id = "oweidner"
```
A Context can't be used by itself; it has to be added to a saga.Session object. A session can have one or more Contexts. At runtime, SAGA-Python will iterate over all Contexts of a Session to see if any of them can be used to establish a connection.

```python
session = saga.Session()
session.add_context(ctx)
```

Finally, Sessions are passed as an extra parameter during object creation, otherwise they won't get considered:

```python
js = saga.job.Service("ssh://remote.host.net", session=session)
```

The complete API documentation for the Session and Context classes can be found in the Library Reference section of this manual.

Part 4: Adding File Transfer

In this fourth part of the tutorial, we again build on the previous example and add some code that copies our job's output file back to the local machine. This is done using the saga.filesystem API package.

Prerequisites

This example assumes that you have SFTP access to the remote resource that you used in the previous example. Again, it assumes that you have a working public/private SSH key-pair and that you can sftp into your remote resource using those keys, i.e., your public key is in the ~/.ssh/authorized_keys file on the remote machine. If you are not sure how this works, you might want to read about SSH and GSISSH first.

Hands-On: Remote Job Submission with File Staging

Copy the code from the previous example 3 to a new file saga_example_remote_staging.py. Add the following code after the last print, right before the except statement:

Note: Make sure that you adjust the paths to reflect your home directory on the remote machine.
```python
outfilesource = 'sftp://gw68.quarry.iu.teragrid.org/users/oweidner/mysagajob.stdout'
outfiletarget = 'file://localhost/tmp/'

out = saga.filesystem.File(outfilesource, session=session)
out.copy(outfiletarget)

print "Staged out %s to %s (size: %s bytes)" % (outfilesource, outfiletarget, out.get_size())
```

Run the Code

Save the file and execute it (make sure your virtualenv is activated):

```shell
python saga_example_remote_staging.py
```

The output should look something like this:

```
Job ID    : None
Job State : New

...starting job...

Job ID    : [ssh://gw68.quarry.iu.teragrid.org]-[18533]
Job State : Done

...waiting for job...

Job State : Done
Exitcode  : 0

Staged out sftp://gw68.quarry.iu.teragrid.org/users/oweidner/mysagajob.stdout to file://localhost/tmp/ (size: ...)
```

Check the Output

Your output file should now be in /tmp/mysagajob.stdout and contain the string:

Hello from SAGA

Part 5: A More Complex Example: Mandelbrot

Warning: If you want to run the Mandelbrot example on OSG with Condor, please refer to the OSG-specific instructions: tutorial_mandelbrot_osg.

In this example, we split up the calculation of a Mandelbrot set into several tiles, submit a job for each tile using the SAGA Job API, retrieve the tiles using the SAGA File API and stitch together the final image from the individual tiles. This example shows how SAGA can be used to create more complex application workflows that involve multiple aspects of the API.

Hands-On: Distributed Mandelbrot Fractals

In order for this example to work, we need to install an additional Python module, the Python Imaging Library (PIL). This is done via pip:

```shell
pip install PIL
```

Next, we need to download the Mandelbrot fractal generator itself as well as the shell wrapper script. It is really just a very simple Python script that, if invoked on the command line, outputs a full or partial Mandelbrot fractal as a PNG image. Download the scripts into your $HOME directory:

```shell
curl --insecure -Os
curl --insecure -Os
```

You can give mandelbrot.py a test-drive locally by calculating a single-tiled 1024x1024 Mandelbrot fractal:

```shell
python mandelbrot.py frac.gif
```

In your $HOME directory, open a new file saga_mandelbrot.py with your favorite editor and paste the following script (or download it directly from here):

```python
__author__    = "Ole Weidner"
__copyright__ = "Copyright, The SAGA Project"
__license__   = "MIT"

import os
import sys
import time

import saga
from PIL import Image

# ----------------------------------------------------------------------------
# Change REMOTE_HOST to the machine you want to run this on.
# You might have to change the URL scheme below for REMOTE_JOB_ENDPOINT
# accordingly.
REMOTE_HOST = "localhost"  # try this with different hosts

# This refers to your working directory on 'REMOTE_HOST'. If you use a
# cluster for 'REMOTE_HOST', make sure this points to a shared filesystem.
REMOTE_DIR = "/tmp/"  # change this to your home directory

# If you change 'REMOTE_HOST' above, you might have to change 'ssh://' to,
# e.g., 'pbs+ssh://' or 'sge+ssh://', depending on the type of service
# endpoint on that particular host.
REMOTE_JOB_ENDPOINT = "ssh://" + REMOTE_HOST

# At the moment saga-python only provides an sftp file adaptor, so changing
# the URL scheme here wouldn't make any sense.
REMOTE_FILE_ENDPOINT = "sftp://" + REMOTE_HOST + "/" + REMOTE_DIR

# the dimension (in pixels) of the whole fractal
imgx = 2048
imgy = 2048

# the number of tiles in X and Y direction
tilesx = 2
tilesy = 2

# ----------------------------------------------------------------------------
if __name__ == "__main__":

    try:
        # Your ssh identity on the remote machine
        ctx = saga.Context("ssh")
        # ctx.user_id = ""

        session = saga.Session()
        session.add_context(ctx)

        # list that holds the jobs
        jobs = []

        # create a working directory on the remote host
        dirname = '%s/mbrot/' % (REMOTE_FILE_ENDPOINT)
        workdir = saga.filesystem.Directory(dirname, saga.filesystem.CREATE,
                                            session=session)

        # copy the executable and wrapper script to the remote host
        mbwrapper = saga.filesystem.File('file://localhost/%s/mandelbrot.sh' % os.getcwd())
        mbwrapper.copy(workdir.get_url())
        mbexe = saga.filesystem.File('file://localhost/%s/mandelbrot.py' % os.getcwd())
        mbexe.copy(workdir.get_url())

        # the saga job service connects to and provides a handle to
        # a remote machine. In this case, it's your machine.
        # fork can be replaced with ssh here:
        jobservice = saga.job.Service(REMOTE_JOB_ENDPOINT, session=session)

        for x in range(0, tilesx):
            for y in range(0, tilesy):

                # describe a single Mandelbrot job. we're using the
                # directory created above as the job's working directory
                outputfile = 'tile_x%s_y%s.gif' % (x, y)

                jd = saga.job.Description()
                # jd.queue           = "development"
                jd.wall_time_limit   = 10
                jd.total_cpu_count   = 1
                jd.working_directory = workdir.get_url().path
                jd.executable        = 'sh'
                jd.arguments         = ['mandelbrot.sh', imgx, imgy,
                                        (imgx/tilesx*x), (imgx/tilesx*(x+1)),
                                        (imgy/tilesy*y), (imgy/tilesy*(y+1)),
                                        outputfile]

                # create the job from the description above,
                # launch it and add it to the list of jobs
                job = jobservice.create_job(jd)
                job.run()
                jobs.append(job)
                print ' * Submitted %s. Output will be written to: %s' % (job.id, outputfile)

        # wait for all jobs to finish
        while len(jobs) > 0:
            for job in jobs:
                jobstate = job.get_state()
                print ' * Job %s status: %s' % (job.id, jobstate)
                if jobstate in [saga.job.DONE, saga.job.FAILED]:
                    jobs.remove(job)
            print ""
            time.sleep(5)

        # copy image tiles back to our 'local' directory
        for image in workdir.list('*.gif'):
            print ' * Copying %s/%s back to %s' % (workdir.get_url(), image, os.getcwd())
            workdir.copy(image, 'file://localhost/%s/' % os.getcwd())

        # stitch together the final image
        fullimage = Image.new('RGB', (imgx, imgy), (255, 255, 255))
        print ' * Stitching together the whole fractal: mandelbrot_full.gif'
        for x in range(0, tilesx):
            for y in range(0, tilesy):
                partimage = Image.open('tile_x%s_y%s.gif' % (x, y))
                fullimage.paste(partimage,
                                (imgx/tilesx*x, imgy/tilesy*y,
                                 imgx/tilesx*(x+1), imgy/tilesy*(y+1)))
        fullimage.save("mandelbrot_full.gif", "GIF")

        sys.exit(0)

    except saga.SagaException, ex:
        # Catch all saga exceptions
        print "An exception occured: (%s) %s " % (ex.type, (str(ex)))
        # Trace back the exception. That can be helpful for debugging.
        print " \n*** Backtrace:\n %s" % ex.traceback
        sys.exit(-1)

    except KeyboardInterrupt:
        # ctrl-c caught: try to cancel our jobs before we exit
        # the program, otherwise we'll end up with lingering jobs.
        for job in jobs:
            job.cancel()
        sys.exit(-1)
```

Look at the code and change the constants at the very top accordingly. Then run it. The output should look something like this:

```
python saga_mandelbrot.py

 * Submitted [ssh://india.futuregrid.org]-[4073]. Output will be written to: tile_x0_y0.gif
 * Submitted [ssh://india.futuregrid.org]-[4094]. Output will be written to: tile_x0_y1.gif
 * Submitted [ssh://india.futuregrid.org]-[4116]. Output will be written to: tile_x1_y0.gif
 * Submitted [ssh://india.futuregrid.org]-[4144]. Output will be written to: tile_x1_y1.gif

 * Job [ssh://india.futuregrid.org]-[4073] status: Running
 * Job [ssh://india.futuregrid.org]-[4094] status: Running
 * Job [ssh://india.futuregrid.org]-[4116] status: Running
 * Job [ssh://india.futuregrid.org]-[4144] status: Running

 * Job [ssh://india.futuregrid.org]-[4073] status: Done
 * Job [ssh://india.futuregrid.org]-[4116] status: Running
 * Job [ssh://india.futuregrid.org]-[4144] status: Running

 * Job [ssh://india.futuregrid.org]-[4094] status: Done
 * Job [ssh://india.futuregrid.org]-[4144] status: Done
 * Job [ssh://india.futuregrid.org]-[4116] status: Done

 * Copying sftp://india.futuregrid.org//n/u/oweidner/mbrot//... back to ...
 * Copying sftp://india.futuregrid.org//n/u/oweidner/mbrot//... back to ...
 * Copying sftp://india.futuregrid.org//n/u/oweidner/mbrot//... back to ...
 * Copying sftp://india.futuregrid.org//n/u/oweidner/mbrot//... back to ...
 * Stitching together the whole fractal: mandelbrot_full.gif
```

Open mandelbrot_full.gif with your favorite image editor. The different tile*.gif files (open them if you want) were computed on REMOTE_HOST, transferred back and stitched together as the full image.

1.3 Library Reference

Intro library reference
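Throughout this reference, services and files are addressed by URL. As a rough stdlib analogy (this is plain Python, not the saga.Url class documented next), urlparse splits a URL into the same components:

```python
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

u = urlparse("scheme://user:pass@host:123/path?query#fragment")

# The attributes mirror <scheme>://<user>:<pass>@<host>:<port>/<path>?<query>#<fragment>
print(u.scheme)    # scheme
print(u.username)  # user
print(u.hostname)  # host
print(u.port)      # 123
print(u.path)      # /path
```

The saga.Url class below offers the same decomposition, plus setters that keep the assembled URL well formed.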
22 1.3.1 URLs Url saga.url class saga.url(*pargs, **pkwargs) Bases: radical.utils.url.url The SAGA Url class. URLs are used in several places in the SAGA API: to specify service endpoints for job submission or resource management, for file or directory locations, etc. The URL class is designed to simplify URL management for these purposes it allows to manipulate individual URL elements, while ensuring that the resulting URL is well formatted. Example: # create a URL from a string location = saga.url ("file://localhost/tmp/file.dat") d = saga.filesystem.directory(location) A URL consists of the following components (where one ore more can be None ): <scheme>://<user>:<pass>@<host>:<port>/<path>?<query>#<fragment> Each of these components can be accessed via its property or alternatively, via getter / setter methods. Example: url = saga.url ("scheme://pass:user@host:123/path?query#fragment") # modify the scheme url.scheme = "anotherscheme" # above is equivalent with url.set_scheme("anotherscheme") Job Submission and Control SAGA s job management module is central to the API. It represents an application/executable running under the management of a resource manager. A resrouce manager can be anything from the local machine to a remote HPC queing system to grid and cloud computing services. The basic usage of the job module is as follows: # A job.description object describes the executable/application and its requirements job_desc = saga.job.description() job_desc.executable = '/bin/sleep' job_desc.arguments = ['10'] job_desc.output = 'myjob.out' job_desc.error = 'myjob.err' # A job.service object represents the resource manager. In this example we use the 'local' adaptor to service = saga.job.service('local://localhost') # A job is created on a service (resource manager) using the job description job = service.create_job(job_desc) # Run the job and wait for it to finish job.run() print "Job ID : %s" % (job.job_id) job.wait() 18 Chapter 1. Contents:
    # Get some info about the job
    print "Job State : %s" % (job.state)
    print "Exitcode  : %s" % (job.exit_code)

    service.close()

See also: More examples can be found in the individual adaptor sections!

Like all SAGA modules, the job module relies on middleware adaptors to provide bindings to a specific resource manager. Adaptors are implicitly selected via the scheme part of the URL, e.g., local:// in the example above selects the local job adaptor. The Job Service -- saga.job.service (page 19) section explains this in more detail.

Note: A list of available adaptors and supported resource managers can be found in the Developer Documentation (page 60) part of this documentation.

The rest of this section is structured as follows:

Table of Contents
- Job Service -- saga.job.service (page 19)
- Job Description -- saga.job.description (page 22)
- Jobs -- saga.job.job (page 26)
  - Attributes (page 32)
  - States (page 33)
  - Metrics (page 33)
- Job Containers -- saga.job.container (page 34)

Job Service -- saga.job.service

class saga.job.service(*pargs, **pkwargs)

    Bases: saga.base.base, saga.async.async

    The job.service represents a resource management backend, and as such allows the creation, submission and management of jobs. A job.service represents anything which accepts job creation requests and which manages the saga.job.job (page 26) instances thus created. That can be a local shell, a remote ssh shell, a cluster queuing system, an IaaS backend -- you name it.

    The job.service is identified by a URL, which usually points to the contact endpoint for that service. Example:

        service = saga.job.service("fork://localhost")

        ids = service.list()
        for job_id in ids:
            print job_id

            j = service.get_job(job_id)
            if j.get_state() == saga.job.job.pending:
                print "pending"
            elif j.get_state() == saga.job.job.running:
                print "running"
            else:
                print "job is already final!"

        service.close()

__init__(rm, session)
    Create a new job.service instance.

    Parameters:
        rm (string or saga.url (page 18)) -- resource manager URL
        session (saga.session) -- an optional session object with security contexts

    Return type: saga.job.service (page 19)

close()
    Close the job service instance and disconnect from the (remote) job service if necessary. Any subsequent calls to a job service instance after close() was called will fail. Example:

        service = saga.job.service("fork://localhost")
        # do something with the 'service' object, create jobs, etc...
        service.close()
        service.list()  # this call will throw an exception

    Warning: While in principle the job service destructor calls close() automatically when a job service instance goes out of scope, you shouldn't rely on it. Python's garbage collection can be a bit odd at times, so you should always call close() explicitly. Especially in a multi-threaded program this will help to avoid random errors.

create_job(job_desc)
    Create a new job.job instance from a Description (page 22). The resulting job instance is in NEW (page 33) state.

    Parameters:
        job_desc (saga.job.description (page 22)) -- job description to create the job from
        ttype -- type of operation. Default (None) is synchronous.

    Return type: saga.job.job (page 26), or saga.task if the operation is asynchronous.

    create_job() accepts a job description, which describes the application instance to be created by the backend. The create_job() method does not actually attempt to run the job, but merely parses the job description for syntactic and semantic consistency. The returned job object is thus not in 'Pending' or 'Running' state, but rather in 'New' state. The actual submission is performed by calling run() on the job object.
    Example:

        # A job.description object describes the executable/application and its requirements
        job_desc = saga.job.description()
        job_desc.executable = '/bin/sleep'
        job_desc.arguments = ['10']
        job_desc.output = 'myjob.out'
        job_desc.error = 'myjob.err'

        service = saga.job.service('local://localhost')
        job = service.create_job(job_desc)

        # Run the job and wait for it to finish
        job.run()
        print "Job ID    : %s" % (job.job_id)
        job.wait()

        # Get some info about the job
        print "Job State : %s" % (job.state)
        print "Exitcode  : %s" % (job.exit_code)

        service.close()

run_job(cmd, host=None)

    Warning: CURRENTLY NOT IMPLEMENTED / SUPPORTED

list()
    Return a list of the jobs that are managed by this Service instance.

    See also: The jobs (page 21) property and the list() (page 21) method are semantically equivalent.

    ttype: Type of operation. Default (None) is synchronous.

    Return type: list of saga.job.job (page 26)

    As the job.service represents a job management backend, list() will return a list of job IDs for all jobs which are known to the backend and which can potentially be accessed and managed by the application. Example:

        service = saga.job.service("fork://localhost")
        ids = service.list()
        for job_id in ids:
            print job_id
        service.close()

jobs
    Return a list of the jobs that are managed by this Service instance.

    See also: The jobs (page 21) property and the list() (page 21) method are semantically equivalent.

    ttype: Type of operation. Default (None) is synchronous.

    Return type: list of saga.job.job (page 26)

    As the job.service represents a job management backend, this property returns a list of job IDs for all jobs which are known to the backend and which can potentially be accessed and managed by the application.
    Example:

        service = saga.job.service("fork://localhost")
        ids = service.list()
        for job_id in ids:
            print job_id
        service.close()

get_url()
    Return the URL this Service instance was created with.

    See also: The url (page 22) property and the get_url() (page 22) method are semantically equivalent and only duplicated for convenience.

url
    Return the URL this Service instance was created with.

    See also: The url (page 22) property and the get_url() (page 22) method are semantically equivalent and only duplicated for convenience.

get_job(job_id)
    Return the job object for a given job id.

    Parameters:
        job_id -- the id of the job to retrieve

    Return type: saga.job.job (page 26)

    Job objects are a local representation of a remote stateful entity. The job.service supports reconnecting to those remote entities:

        service = saga.job.service("fork://localhost")
        j = service.get_job(my_job_id)

        if j.get_state() == saga.job.job.pending:
            print "pending"
        elif j.get_state() == saga.job.job.running:
            print "running"
        else:
            print "job is already final!"

        service.close()

Job Description -- saga.job.description

Warning: There is no guarantee that all middleware adaptors implement all job description attributes. In case a specific attribute is not supported, create_job() (page 20) will throw an exception. Please refer to the Developer Documentation (page 60) for more details and adaptor-specific lists of supported attributes.

class saga.job.description(*pargs, **pkwargs)

    Bases: saga.attributes.attributes (page 56)

    The job description class.
    clone()
        Implements a deep copy. Unlike the default Python assignment (which copies the object reference), a deep copy creates a new object instance with the same state -- after a deep copy, a change on one instance will not affect the other.

SAGA defines the following constants as valid job description attributes:

saga.job.EXECUTABLE
    The executable to start once the job starts running:

        jd = saga.job.description()
        jd.executable = "/bin/sleep"

    Type: str

saga.job.executable
    Same as attribute EXECUTABLE (page 23).

saga.job.ARGUMENTS
    Arguments to pass to the EXECUTABLE (page 23):

        jd = saga.job.description()
        jd.arguments = ['--flag1', '--flag2']

    Type: list()

saga.job.arguments
    Same as attribute ARGUMENTS (page 23).

saga.job.ENVIRONMENT
    Environment variables to set in the job's context:

        jd = saga.job.description()
        jd.environment = {'FOO': 'BAR', 'FREE': 'BEER'}

    Type: dict()

saga.job.environment
    Same as attribute ENVIRONMENT (page 23).

saga.job.WORKING_DIRECTORY
    The working directory of the job:

        jd = saga.job.description()
        jd.working_directory = "/scratch/experiments/123/"

    Type: str()

saga.job.working_directory
    Same as attribute WORKING_DIRECTORY (page 23).

saga.job.OUTPUT
    Filename to capture the executable's STDOUT stream. If output is a relative filename, the file is relative to WORKING_DIRECTORY (page 23):

        jd = saga.job.description()
        jd.output = "myjob_stdout.txt"
    Type: str()

saga.job.output
    Same as attribute OUTPUT (page 23).

saga.job.ERROR
    Filename to capture the executable's STDERR stream. If error is a relative filename, the file is relative to WORKING_DIRECTORY (page 23):

        jd = saga.job.description()
        jd.error = "myjob_stderr.txt"

    Type: str()

saga.job.error
    Same as attribute ERROR (page 24).

saga.job.FILE_TRANSFER
    Files to stage in before the job starts running and to stage out once the job has finished running. The syntax is as follows:

        local_file OPERATOR remote_file

    OPERATOR can be one of the following:

        >  copies the local file to the remote file before the job starts. Overwrites the remote file if it exists.
        <  copies the remote file to the local file after the job finishes. Overwrites the local file if it exists.

    Example:

        jd = saga.job.description()
        jd.file_transfer = ["file://localhost/data/input/test.dat > test.dat",
                            "file://localhost/data/results/1/result.dat < result1.dat"]

    Type: list()

saga.job.file_transfer
    Same as attribute FILE_TRANSFER (page 24).

saga.job.QUEUE
    The name of the queue to submit the job to:

        jd = saga.job.description()
        jd.queue = "mpi_long"

    Type: str()

saga.job.queue
    Same as attribute QUEUE (page 24).

saga.job.PROJECT
    The name of the project / allocation to be charged for the job:

        jd = saga.job.description()
        jd.project = "TG-XYZ123456"

    Type: str()
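The local_file OPERATOR remote_file syntax of FILE_TRANSFER can be illustrated with a small parser sketch. Note that parse_staging_directive below is a hypothetical helper written only for this illustration -- it is not part of the SAGA API:

```python
def parse_staging_directive(directive):
    """Split a 'local_file OPERATOR remote_file' staging directive.

    Returns (local_file, operator, remote_file). The operator '>' stages
    the local file in before the job starts; '<' stages the remote file
    out after the job finishes.
    """
    for op in ('>', '<'):
        marker = ' %s ' % op
        if marker in directive:
            local_file, remote_file = directive.split(marker, 1)
            return local_file.strip(), op, remote_file.strip()
    raise ValueError('no staging operator found in %r' % directive)
```

For example, parse_staging_directive("file://localhost/data/input/test.dat > test.dat") yields the local URL, the '>' operator, and the remote filename "test.dat".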
saga.job.project
    Same as attribute PROJECT (page 24).

saga.job.SPMD_VARIATION
    Describe me!

    Type: str()

saga.job.spmd_variation
    (Property) Same as attribute SPMD_VARIATION (page 25).

saga.job.TOTAL_CPU_COUNT

saga.job.total_cpu_count
    (Property) Same as attribute TOTAL_CPU_COUNT (page 25).

    Type: int() or str()

saga.job.NUMBER_OF_PROCESSES

saga.job.number_of_processes
    (Property) Same as attribute NUMBER_OF_PROCESSES (page 25).

    Type: int() or str()

saga.job.PROCESSES_PER_HOST

saga.job.processes_per_host
    (Property) Same as attribute PROCESSES_PER_HOST (page 25).

    Type: int() or str()

saga.job.THREADS_PER_PROCESS

saga.job.threads_per_process
    (Property) Same as attribute THREADS_PER_PROCESS (page 25).

    Type: int() or str()

saga.job.INTERACTIVE
    Not implemented.

saga.job.CLEANUP

saga.job.cleanup
    (Property) Same as attribute CLEANUP (page 25).

    Type: bool()

saga.job.JOB_START_TIME
saga.job.job_start_time
    (Property) Same as attribute JOB_START_TIME (page 25).

    Type: UNIX timestamp

saga.job.WALL_TIME_LIMIT

saga.job.wall_time_limit
    (Property) Same as attribute WALL_TIME_LIMIT (page 26).

saga.job.TOTAL_PHYSICAL_MEMORY

saga.job.total_physical_memory
    (Property) Same as attribute TOTAL_PHYSICAL_MEMORY (page 26).

saga.job.CPU_ARCHITECTURE

saga.job.cpu_architecture
    (Property) Same as attribute CPU_ARCHITECTURE (page 26).

saga.job.OPERATING_SYSTEM_TYPE

saga.job.operating_system_type
    (Property) Same as attribute OPERATING_SYSTEM_TYPE (page 26).

saga.job.CANDIDATE_HOSTS

saga.job.candidate_hosts
    (Property) Same as attribute CANDIDATE_HOSTS (page 26).

saga.job.JOB_CONTACT

saga.job.job_contact
    (Property) Same as attribute JOB_CONTACT (page 26).

Jobs -- saga.job.job

class saga.job.job(*pargs, **pkwargs)

    Bases: saga.base.base, saga.task.task, saga.async.async

    Represents a SAGA job as defined in GFD.90.

    A job represents a running application instance, which may consist of one or more processes.
    Jobs are created by submitting a job description to a job submission system -- usually a queuing system, or some other service which spawns jobs on the user's behalf.
    Jobs have a unique ID (see get_job_id()) and are stateful entities -- their state attribute changes according to a well-defined state model:

    A job as returned by job.service.create(jd) is in 'New' state -- it is not yet submitted to the job submission backend. Once it has been submitted, via run(), it enters the 'Pending' state, where it waits to get actually executed by the backend (e.g., waiting in a queue). Once the job is actually executed, it enters the 'Running' state -- only in that state does the job actually consume resources (CPU, memory, ...).

    Jobs can leave the 'Running' state in three different ways: they finish successfully on their own ('Done'); they finish unsuccessfully on their own, or get canceled by the job management backend ('Failed'); or they get actively canceled by the user or the application ('Canceled').

    The methods defined on the job object serve two purposes: inspecting the job's state, and initiating job state transitions.

get_id()
    Return the job ID.

get_description()
    Return the job description this job was created from.

    The returned description can be used to inspect job properties (executable name, arguments, etc.). It can also be used to start identical job instances.

    The returned job description will in general reflect the actual state of the running job and is not necessarily a simple copy of the job description which was used to create the job instance. For example, the environment variables in the returned job description may reflect the actual environment of the running job instance. Example:

        service = saga.job.service("fork://localhost")

        jd = saga.job.description()
        jd.executable = '/bin/date'

        j1 = service.create_job(jd)
        j1.run()

        j2 = service.create_job(j1.get_description())
        j2.run()

        service.close()

get_stdout_string()
    Return the job's STDOUT as a string.

    ttype: saga.task.type enum
    ret:   string / saga.task

    THIS METHOD IS DEPRECATED AND WILL BE REMOVED IN A FUTURE RELEASE. USE job.get_stdout() INSTEAD.
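The state model described above can be summarized as a plain-Python transition table. This is an illustrative sketch only, not SAGA code; it models exactly the transitions named in the text:

```python
# Each state maps to the states reachable from it.
JOB_STATE_MODEL = {
    'New':      ['Pending'],                     # run() submits the job
    'Pending':  ['Running'],                     # backend starts execution
    'Running':  ['Done', 'Failed', 'Canceled'],  # three ways to leave Running
    'Done':     [],                              # final states have
    'Failed':   [],                              # no successors
    'Canceled': [],
}

def is_final(state):
    """A job in a final state no longer consumes resources."""
    return not JOB_STATE_MODEL[state]
```

A job object's state attribute always holds one of these six values, and only transitions along the arrows above can occur.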
get_stderr_string()
    Return the job's STDERR as a string.

    ttype: saga.task.type enum
    ret:   string / saga.task

    THIS METHOD IS DEPRECATED AND WILL BE REMOVED IN A FUTURE RELEASE. USE job.get_stderr() INSTEAD.

get_log(*pargs, **pkwargs)
    Return the job's log information, i.e. backend-specific log messages which have been collected during the job execution. Those messages also include stdout/stderr from the job's pre- and post-exec. The returned string generally contains one log message per line, but the format of the string is ultimately undefined.

    ttype: saga.task.type enum
    ret:   string / saga.task

get_log_string()
    Return the job's log information, i.e. backend-specific log messages which have been collected during the job execution. Those messages also include stdout/stderr from the job's pre- and post-exec. The returned string generally contains one log message per line, but the format of the string is ultimately undefined.

    ttype: saga.task.type enum
    ret:   string / saga.task

    THIS METHOD IS DEPRECATED AND WILL BE REMOVED IN A FUTURE RELEASE. USE job.get_log() INSTEAD.

signal(signum)
    Send a signal to the job.

    Parameters:
        signum (int) -- signal to send

id
get_id()
    Return the job ID.

description
get_description()
    Return the job description this job was created from.

    The returned description can be used to inspect job properties (executable name, arguments, etc.). It can also be used to start identical job instances.

    The returned job description will in general reflect the actual state of the running job and is not necessarily a simple copy of the job description which was used to create the job instance. For example, the environment variables in the returned job description may reflect the actual environment of the running job instance. Example:

        service = saga.job.service("fork://localhost")

        jd = saga.job.description()
        jd.executable = '/bin/date'

        j1 = service.create_job(jd)
        j1.run()

        j2 = service.create_job(j1.get_description())
        j2.run()

        service.close()

stdin
get_stdin()
    Return the job's STDIN as a string.

    ttype: saga.task.type enum
    ret:   string / saga.task

log
get_log_string()
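The signum argument of signal() above is a plain integer. On POSIX platforms the constants from Python's standard signal module can be used for readability; the sketch below is illustrative only (the job.signal() call is shown as a comment, since it needs a live saga.job.job instance):

```python
import signal

# Signal numbers are plain integers; the symbolic names below come from
# the standard library's signal module, not from SAGA.
SIGTERM = int(signal.SIGTERM)   # polite termination request
SIGKILL = int(signal.SIGKILL)   # forceful kill, cannot be caught

# With a previously created saga.job.job instance 'job', one would call:
#
#     job.signal(SIGTERM)
```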
More informationmanifold Documentation
manifold Documentation Release 0.0.1 Open Source Robotics Foundation Mar 04, 2017 Contents 1 What is Manifold? 3 2 Installation 5 2.1 Ubuntu Linux............................................... 5 2.2
More informationChapter 2: Operating-System Structures. Operating System Concepts 9 th Edit9on
Chapter 2: Operating-System Structures Operating System Concepts 9 th Edit9on Silberschatz, Galvin and Gagne 2013 Objectives To describe the services an operating system provides to users, processes, and
More informationXSEDE High Throughput Computing Use Cases
XSEDE High Throughput Computing Use Cases 31 May 2013 Version 0.3 XSEDE HTC Use Cases Page 1 XSEDE HTC Use Cases Page 2 Table of Contents A. Document History B. Document Scope C. High Throughput Computing
More information9.2 Linux Essentials Exam Objectives
9.2 Linux Essentials Exam Objectives This chapter will cover the topics for the following Linux Essentials exam objectives: Topic 3: The Power of the Command Line (weight: 10) 3.3: Turning Commands into
More informationg-pypi Documentation Release 0.3 Domen Kožar
g-pypi Documentation Release 0.3 Domen Kožar January 20, 2014 Contents i ii Author Domen Kožar Source code Github.com source browser Bug tracker Github.com issues Generated January 20,
More informationdh-virtualenv Documentation
dh-virtualenv Documentation Release 0.7 Spotify AB July 21, 2015 Contents 1 What is dh-virtualenv 3 2 Changelog 5 2.1 0.7 (unreleased)............................................. 5 2.2 0.6....................................................
More informationAn introduction to checkpointing. for scientific applications
damien.francois@uclouvain.be UCL/CISM - FNRS/CÉCI An introduction to checkpointing for scientific applications November 2013 CISM/CÉCI training session What is checkpointing? Without checkpointing: $./count
More informationflask-dynamo Documentation
flask-dynamo Documentation Release 0.1.2 Randall Degges January 22, 2018 Contents 1 User s Guide 3 1.1 Quickstart................................................ 3 1.2 Getting Help...............................................
More informationCreating a Shell or Command Interperter Program CSCI411 Lab
Creating a Shell or Command Interperter Program CSCI411 Lab Adapted from Linux Kernel Projects by Gary Nutt and Operating Systems by Tannenbaum Exercise Goal: You will learn how to write a LINUX shell
More informationHW 1: Shell. Contents CS 162. Due: September 18, Getting started 2. 2 Add support for cd and pwd 2. 3 Program execution 2. 4 Path resolution 3
CS 162 Due: September 18, 2017 Contents 1 Getting started 2 2 Add support for cd and pwd 2 3 Program execution 2 4 Path resolution 3 5 Input/Output Redirection 3 6 Signal Handling and Terminal Control
More informationRELEASE NOTES FOR THE Kinetic - Edge & Fog Processing Module (EFM) RELEASE 1.2.0
RELEASE NOTES FOR THE Kinetic - Edge & Fog Processing Module (EFM) RELEASE 1.2.0 Revised: November 30, 2017 These release notes provide a high-level product overview for the Cisco Kinetic - Edge & Fog
More informationargcomplete Documentation
argcomplete Documentation Release Andrey Kislyuk Nov 21, 2017 Contents 1 Installation 3 2 Synopsis 5 2.1 argcomplete.autocomplete(parser).................................... 5 3 Specifying completers
More informationKilling Zombies, Working, Sleeping, and Spawning Children
Killing Zombies, Working, Sleeping, and Spawning Children CS 333 Prof. Karavanic (c) 2015 Karen L. Karavanic 1 The Process Model The OS loads program code and starts each job. Then it cleans up afterwards,
More informationHello World! Computer Programming for Kids and Other Beginners. Chapter 1. by Warren Sande and Carter Sande. Copyright 2009 Manning Publications
Hello World! Computer Programming for Kids and Other Beginners by Warren Sande and Carter Sande Chapter 1 Copyright 2009 Manning Publications brief contents Preface xiii Acknowledgments xix About this
More informationAbout the Tutorial. Audience. Prerequisites. Copyright & Disclaimer. Gerrit
Gerrit About the Tutorial Gerrit is a web-based code review tool, which is integrated with Git and built on top of Git version control system (helps developers to work together and maintain the history
More informationIntel Manycore Testing Lab (MTL) - Linux Getting Started Guide
Intel Manycore Testing Lab (MTL) - Linux Getting Started Guide Introduction What are the intended uses of the MTL? The MTL is prioritized for supporting the Intel Academic Community for the testing, validation
More informationTable of Contents. Table of Contents Job Manager for remote execution of QuantumATK scripts. A single remote machine
Table of Contents Table of Contents Job Manager for remote execution of QuantumATK scripts A single remote machine Settings Environment Resources Notifications Diagnostics Save and test the new machine
More informationGetting Started. Excerpted from Hello World! Computer Programming for Kids and Other Beginners
Getting Started Excerpted from Hello World! Computer Programming for Kids and Other Beginners EARLY ACCESS EDITION Warren D. Sande and Carter Sande MEAP Release: May 2008 Softbound print: November 2008
More informationArchan. Release 2.0.1
Archan Release 2.0.1 Jul 30, 2018 Contents 1 Archan 1 1.1 Features.................................................. 1 1.2 Installation................................................ 1 1.3 Documentation..............................................
More informationRH033 Red Hat Linux Essentials
RH033 Red Hat Linux Essentials Version 3.5 QUESTION NO: 1 You work as a Network Administrator for McNeil Inc. The company has a Linux-based network. A printer is configured on the network. You want to
More informationExeco tutorial Grid 5000 school, Grenoble, January 2016
Execo tutorial Grid 5000 school, Grenoble, January 2016 Simon Delamare Matthieu Imbert Laurent Pouilloux INRIA/CNRS/LIP ENS-Lyon 03/02/2016 1/34 1 introduction 2 execo, core module 3 execo g5k, Grid 5000
More informationCS355 Hw 4. Interface. Due by the end of day Tuesday, March 20.
Due by the end of day Tuesday, March 20. CS355 Hw 4 User-level Threads You will write a library to support multiple threads within a single Linux process. This is a user-level thread library because the
More informationScientific Software Development with Eclipse
Scientific Software Development with Eclipse A Best Practices for HPC Developers Webinar Gregory R. Watson ORNL is managed by UT-Battelle for the US Department of Energy Contents Downloading and Installing
More informationDCLI User's Guide. Modified on 20 SEP 2018 Data Center Command-Line Interface
Modified on 20 SEP 2018 Data Center Command-Line Interface 2.10.0 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have comments about
More informationSupermann. Release 3.0.0
Supermann Release 3.0.0 May 27, 2015 Contents 1 Usage 3 1.1 What Supermann does.......................................... 3 1.2 supermann-from-file....................................... 3 2 Installation
More informationHTCondor Essentials. Index
HTCondor Essentials 31.10.2017 Index Login How to submit a job in the HTCondor pool Why the -name option? Submitting a job Checking status of submitted jobs Getting id and other info about a job
More informationCS370 Operating Systems
CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2016 Lecture 5 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 User Operating System Interface - CLI CLI
More informationSlurm basics. Summer Kickstart June slide 1 of 49
Slurm basics Summer Kickstart 2017 June 2017 slide 1 of 49 Triton layers Triton is a powerful but complex machine. You have to consider: Connecting (ssh) Data storage (filesystems and Lustre) Resource
More informationCOMS 6100 Class Notes 3
COMS 6100 Class Notes 3 Daniel Solus September 1, 2016 1 General Remarks The class was split into two main sections. We finished our introduction to Linux commands by reviewing Linux commands I and II
More informationCS 326: Operating Systems. Process Execution. Lecture 5
CS 326: Operating Systems Process Execution Lecture 5 Today s Schedule Process Creation Threads Limited Direct Execution Basic Scheduling 2/5/18 CS 326: Operating Systems 2 Today s Schedule Process Creation
More informationIntroduction Variables Helper commands Control Flow Constructs Basic Plumbing. Bash Scripting. Alessandro Barenghi
Bash Scripting Alessandro Barenghi Dipartimento di Elettronica, Informazione e Bioingegneria Politecnico di Milano alessandro.barenghi - at - polimi.it April 28, 2015 Introduction The bash command shell
More informationWeek - 01 Lecture - 04 Downloading and installing Python
Programming, Data Structures and Algorithms in Python Prof. Madhavan Mukund Department of Computer Science and Engineering Indian Institute of Technology, Madras Week - 01 Lecture - 04 Downloading and
More informationProject 1: Implementing a Shell
Assigned: August 28, 2015, 12:20am Due: September 21, 2015, 11:59:59pm Project 1: Implementing a Shell Purpose The purpose of this project is to familiarize you with the mechanics of process control through
More informationA shell can be used in one of two ways:
Shell Scripting 1 A shell can be used in one of two ways: A command interpreter, used interactively A programming language, to write shell scripts (your own custom commands) 2 If we have a set of commands
More informationUnix Shells and Other Basic Concepts
CSCI 2132: Software Development Unix Shells and Other Basic Concepts Norbert Zeh Faculty of Computer Science Dalhousie University Winter 2019 Shells Shell = program used by the user to interact with the
More informationLazarus Documentation
Lazarus Documentation Release 0.6.3 Lazarus Authors December 09, 2014 Contents 1 Lazarus 3 1.1 Features.................................................. 3 1.2 Examples.................................................
More informationGrid Compute Resources and Grid Job Management
Grid Compute Resources and Job Management March 24-25, 2007 Grid Job Management 1 Job and compute resource management! This module is about running jobs on remote compute resources March 24-25, 2007 Grid
More informationOpenACC Course. Office Hour #2 Q&A
OpenACC Course Office Hour #2 Q&A Q1: How many threads does each GPU core have? A: GPU cores execute arithmetic instructions. Each core can execute one single precision floating point instruction per cycle
More informationConnexion Documentation
Connexion Documentation Release 0.5 Zalando SE Nov 16, 2017 Contents 1 Quickstart 3 1.1 Prerequisites............................................... 3 1.2 Installing It................................................
More informationReducing Cluster Compatibility Mode (CCM) Complexity
Reducing Cluster Compatibility Mode (CCM) Complexity Marlys Kohnke Cray Inc. St. Paul, MN USA kohnke@cray.com Abstract Cluster Compatibility Mode (CCM) provides a suitable environment for running out of
More informationbistro Documentation Release dev Philippe Veber
bistro Documentation Release dev Philippe Veber Oct 10, 2018 Contents 1 Getting started 1 1.1 Installation................................................ 1 1.2 A simple example............................................
More informationPlatform Migrator Technical Report TR
Platform Migrator Technical Report TR2018-990 Munir Contractor mmc691@nyu.edu Christophe Pradal christophe.pradal@inria.fr Dennis Shasha shasha@cs.nyu.edu May 12, 2018 CONTENTS: 1 Abstract 4 2 Platform
More information