CERN LCG. LCG Short Demo. Markus Schulz. FZK 30 September 2003


1 LCG Short Demo Markus Schulz LCG FZK 30 September 2003

2 LCG-1 Demo Outline
- Monitoring tools and where to get documentation
- Getting started
- Running simple jobs
- Using the information system
- More on JDL
- Data management

3 LCG-1 Deployment Status
- Up-to-date status can be seen here: has links to maps with sites that are in operation
- Links to GridICE-based monitoring tool (history of VOs' jobs, etc.), using information provided by the information system
- Tables with deployment status
- Sites currently in LCG-1: PIC-Barcelona (RB), Budapest (RB), CNAF (RB), FermiLab (FNAL), FZK, Krakow, Moscow (RB), RAL (RB), Taipei (RB), Tokyo
- Sites expected to enter soon (by end of 2003): BNL, Prague, (Lyon), several Tier-2 centres in Italy and Spain
- Sites preparing to join: Pakistan, Sofia, Switzerland
- Total number of CPUs: ~120 WNs (the number of sites matters more)
Markus.Schulz@cern.ch


5 The Basics
- Get the LCG-1 Users Guide
- Get a certificate: go to the CA that is responsible for you and request a user certificate (a list of CAs can be found here)
- Follow the instructions on how to load the certificate into a web browser. Do this.
- Register with LCG and a VO of your choice
- In case your certificate is not in PEM format, convert it using openssl (ask your CA how to do this)
- Find a user interface machine; we use adc0014
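The PEM conversion mentioned above can be sketched as follows. This is a self-contained illustration, not the exact recipe a CA would give: the throwaway key/certificate and the file name mycert.p12 are assumptions standing in for the certificate exported from the browser, and the output goes to ./globus-demo rather than the real ~/.globus.

```shell
set -e
# For illustration only: create a throwaway key and self-signed cert,
# then bundle them as PKCS#12 to stand in for a browser-exported file.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
    -out demo.crt -days 1 -subj "/CN=Demo User"
openssl pkcs12 -export -in demo.crt -inkey demo.key \
    -out mycert.p12 -passout pass:demo

# The actual conversion step: split the PKCS#12 file into the PEM
# pair that the Globus tools expect (usercert.pem / userkey.pem).
mkdir -p ./globus-demo
openssl pkcs12 -in mycert.p12 -passin pass:demo -clcerts -nokeys \
    -out ./globus-demo/usercert.pem
openssl pkcs12 -in mycert.p12 -passin pass:demo -nocerts -nodes \
    -out ./globus-demo/userkey.pem
chmod 400 ./globus-demo/userkey.pem
chmod 444 ./globus-demo/usercert.pem
```

In real use the two `openssl pkcs12` extraction commands are the only ones needed, pointed at the file your browser exported and writing into ~/.globus.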

6 Get ready
- Check your certificate in ~/.globus:
  $ grid-cert-info
- Cert valid? Should return with an O.K.:
  $ openssl verify -CApath /etc/grid-security/certificates ~/.globus/usercert.pem
- Generate a proxy (valid for 12h):
  $ grid-proxy-init      (will ask for your pass phrase)
  $ grid-proxy-info      (to see details, like how many hours until t.o.d.)
  $ grid-proxy-destroy
- For long jobs, register a long-term credential with the proxy server:
  $ myproxy-init -s adc0024 -d -n      (creates a proxy with one week duration)
  $ myproxy-info -s adc0024 -d
  $ myproxy-destroy -s adc0024 -d

7 Job Submission
- Basic command: edg-job-submit test.jdl
- Many, many options; see the WLMS manual for details. Try the -help option (very useful: -o to get the job id in a file)
- Tiny JDL file:
  Executable = "testjob.sh";
  StdOutput = "testjob.out";
  StdError = "testjob.err";
  InputSandbox = {"./testjob.sh"};
  OutputSandbox = {"testjob.out","testjob.err"};
- Sample output:
  Connecting to host lxshare0380.cern.ch, port 7772
  Logging to host lxshare0380.cern.ch, port 9002
  ================================ edg-job-submit Success =====================================
  The job has been successfully submitted to the Network Server.
  Use edg-job-status command to check job current status. Your job identifier (edg_jobid) is:
  The edg_jobid has been saved in the following file: /afs/cern.ch/user/m/markusw/test/demo/out
  =============================================================================================
- Docs for WLMS
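The tiny JDL and its wrapper script can be reproduced like this. The hostname-printing payload of testjob.sh is an invented example, and the actual submission is only shown as a comment since edg-job-submit needs a grid user interface machine:

```shell
set -e
# Wrapper script that the JDL ships to the worker node in its InputSandbox.
cat > testjob.sh <<'EOF'
#!/bin/sh
echo "Hello from `hostname`"
EOF
chmod +x testjob.sh

# The tiny JDL from the slide, written to test.jdl.
cat > test.jdl <<'EOF'
Executable    = "testjob.sh";
StdOutput     = "testjob.out";
StdError      = "testjob.err";
InputSandbox  = {"./testjob.sh"};
OutputSandbox = {"testjob.out","testjob.err"};
EOF

# On an LCG-1 UI one would now submit it, saving the job id to a file:
#   edg-job-submit -o jobid.txt test.jdl
```

The -o file is what the later edg-job-status / edg-job-get-output commands take via their own -i/-o options.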

8 Work Load Management System
- Input Sandbox is what you take with you to the node; Output Sandbox is what you get back
- (Diagram.) Components on the RB node: Network Server, Match-Maker/Broker, Job Adapter, Workload Manager, Job Controller (CondorG), Logging & Bookkeeping, Log Monitor; the broker consults the Replica Catalog and the Information Service (CE and SE characteristics and status)
- Job status sequence: submitted (arrived on RB, UI sandbox) -> waiting -> ready (matching) -> scheduled (on CE) -> running -> done (processed, output back to user) -> cleared
- Failed jobs are resubmitted

9 Work Load Management System
- The services that bring the resources and the jobs together
- Live most of the time on a node called RB (Resource Broker)
- Keep track of the status of jobs (LBS: Logging and Bookkeeping Service)
- Talk to the Globus gatekeepers and resource managers (LRMS) on the remote sites (CE)
- Match jobs with sites where data and resources are available
- Re-submission if jobs fail
- Use almost all services: IS, RLS, GSI, ...
- Walking through a job might be instructive (see next slide)
- The user describes the job and its requirements using JDL (Job Description Language):
  [
  JobType = "Normal";
  Executable = "gridtest";
  StdError = "stderr.log";
  StdOutput = "stdout.log";
  InputSandbox = {"/home/joda/test/gridtest"};
  OutputSandbox = {"stderr.log", "stdout.log"};
  InputData = {"lfn:green", "guid:red"};
  DataAccessProtocol = "gridftp";
  Requirements = other.GlueHostOperatingSystemName == "LINUX" && other.GlueCEStateFreeCPUs >= 4;
  Rank = other.GlueCEPolicyMaxCPUTime;
  ]
- Docs for WLMS

10 Where to Run?
- Before submitting a job you might want to see where you can run: edg-job-list-match <jdl>
- Switching RBs: use the --config-vo <vo conf file> and --config <conf file> options (see User Guide)
- Find out which RBs you could use
Connecting to host lxshare0380.cern.ch, port 7772
***************************************************************************
COMPUTING ELEMENT IDs LIST
The following CE(s) matching your job requirements have been found:
*CEId*
adc0015.cern.ch:2119/jobmanager-lcgpbs-infinite
adc0015.cern.ch:2119/jobmanager-lcgpbs-long
adc0015.cern.ch:2119/jobmanager-lcgpbs-short
adc0018.cern.ch:2119/jobmanager-pbs-infinite
adc0018.cern.ch:2119/jobmanager-pbs-long
adc0018.cern.ch:2119/jobmanager-pbs-short
dgce0.icepp.s.u-tokyo.ac.jp:2119/jobmanager-lcgpbs-infinite
dgce0.icepp.s.u-tokyo.ac.jp:2119/jobmanager-lcgpbs-long
dgce0.icepp.s.u-tokyo.ac.jp:2119/jobmanager-lcgpbs-short
grid-w1.ifae.es:2119/jobmanager-lcgpbs-infinite
grid-w1.ifae.es:2119/jobmanager-lcgpbs-long
grid-w1.ifae.es:2119/jobmanager-lcgpbs-short
hik-lcg-ce.fzk.de:2119/jobmanager-lcgpbs-infinite
hik-lcg-ce.fzk.de:2119/jobmanager-lcgpbs-long
hik-lcg-ce.fzk.de:2119/jobmanager-lcgpbs-short
hotdog46.fnal.gov:2119/jobmanager-pbs-infinite
hotdog46.fnal.gov:2119/jobmanager-pbs-long
hotdog46.fnal.gov:2119/jobmanager-pbs-short
lcg00105.grid.sinica.edu.tw:2119/jobmanager-lcgpbs-infinite
lcg00105.grid.sinica.edu.tw:2119/jobmanager-lcgpbs-long
lcg00105.grid.sinica.edu.tw:2119/jobmanager-lcgpbs-short
lcgce01.gridpp.rl.ac.uk:2119/jobmanager-lcgpbs-infinite
lcgce01.gridpp.rl.ac.uk:2119/jobmanager-lcgpbs-long
lcgce01.gridpp.rl.ac.uk:2119/jobmanager-lcgpbs-short
lhc01.sinp.msu.ru:2119/jobmanager-lcgpbs-infinite
lhc01.sinp.msu.ru:2119/jobmanager-lcgpbs-long
lhc01.sinp.msu.ru:2119/jobmanager-lcgpbs-short
wn a.cr.cnaf.infn.it:2119/jobmanager-lcgpbs-infinite
wn a.cr.cnaf.infn.it:2119/jobmanager-lcgpbs-long
wn a.cr.cnaf.infn.it:2119/jobmanager-lcgpbs-short
zeus02.cyf-kr.edu.pl:2119/jobmanager-lcgpbs-infinite
zeus02.cyf-kr.edu.pl:2119/jobmanager-lcgpbs-long
zeus02.cyf-kr.edu.pl:2119/jobmanager-lcgpbs-short
***************************************************************************

11 And then?
- Check the status: edg-job-status -v <0|1|2> -o <file with id>
- Many options; play with it, do a -help. Use --noint for working with scripts
- In case of problems: edg-job-get-logging-info (shows a lot of information, controlled by the -v option)
- Get the output sandbox: edg-job-get-output (options work on collections of jobs). Output in /tmp/joboutput/1gmdxnfzed1o0b9bjfc3lw
- Remove the job: edg-job-cancel
- Getting the output cancels the job; canceling a canceled job is an error

12 Information System
- Have a look at the status page to find the BDII
- Query the BDII (use an LDAP browser, or the ldapsearch command). Sample: BDII at lxshare0222.cern.ch
- Have a look at the man pages and explore the BDII, Regional GIIS, CE and SE:
  BDII:
  ldapsearch -LLL -x -H ldap://lxshare0222.cern.ch:2170 -b "mds-vo-name=local,o=grid" "(objectclass=gluece)" dn
  Regional GIIS:
  ldapsearch -LLL -x -H ldap://adc0026.cern.ch:2135 -b "mds-vo-name=lcgeast,o=grid" "(objectclass=gluece)" dn
  CE:
  ldapsearch -LLL -x -H ldap://adc0018.cern.ch:2135 -b "mds-vo-name=local,o=grid"
  SE:
  ldapsearch -LLL -x -H ldap://adc0021.cern.ch:2135 -b "mds-vo-name=local,o=grid"
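ldapsearch prints its results as LDIF, which is easy to post-process with standard text tools. The filter below pulls one Glue attribute out of such output; the sample LDIF is invented for illustration (real values come from a live BDII), and the awk one-liner is a sketch, not an LCG-supplied tool:

```shell
# Stand-in for real ldapsearch output from a BDII (illustrative values).
cat > sample.ldif <<'EOF'
dn: GlueCEUniqueID=adc0015.cern.ch:2119/jobmanager-lcgpbs-short,mds-vo-name=local,o=grid
GlueCEStateFreeCPUs: 12
GlueCEStateStatus: production

dn: GlueCEUniqueID=adc0018.cern.ch:2119/jobmanager-pbs-long,mds-vo-name=local,o=grid
GlueCEStateFreeCPUs: 0
GlueCEStateStatus: production
EOF

# Print each CE's DN together with its number of free CPUs.
awk '/^dn: / {ce = $2}
     /^GlueCEStateFreeCPUs: / {print ce, $2}' sample.ldif
```

In practice one would pipe the ldapsearch command from the slide straight into the awk filter instead of going through a file.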

13 GLUE SCHEMA
- Appendix B in the LCG-1 User Guide
- Many categories; some attributes that are defined may still not be filled
- Describes CE, cluster, hosts, SE, batch system, etc.
- Too many for this presentation -> see the User Guide

14 GLUE SCHEMA
Attributes for the Computing Element
CE (objectclass GlueCE)
- GlueCEUniqueID: unique identifier for the CE
- GlueCEName: human-readable name of the service
Info (objectclass GlueCEInfo)
- GlueCEInfoLRMSType: name of the local batch system
- GlueCEInfoLRMSVersion: version of the local batch system
- GlueCEInfoGRAMVersion: version of GRAM
- GlueCEInfoHostName: fully qualified name of the host where the gatekeeper runs
- GlueCEInfoGateKeeperPort: port number for the gatekeeper
- GlueCEInfoTotalCPUs: number of CPUs in the cluster associated to the CE
Policy (objectclass GlueCEPolicy)
- GlueCEPolicyMaxWallClockTime: maximum wall clock time available to jobs submitted to the CE
- GlueCEPolicyMaxCPUTime: maximum CPU time available to jobs submitted to the CE
- GlueCEPolicyMaxTotalJobs: maximum allowed total number of jobs in the queue
- GlueCEPolicyMaxRunningJobs: maximum allowed number of running jobs in the queue
- GlueCEPolicyPriority: information about the service priority
State (objectclass GlueCEState)
- GlueCEStateRunningJobs: number of running jobs
- GlueCEStateWaitingJobs: number of jobs not running
- GlueCEStateTotalJobs: total number of jobs (running + waiting)
- GlueCEStateStatus: queue status: queueing (jobs are accepted but not run), production (jobs are accepted and run), closed (jobs are neither accepted nor run), draining (jobs are not accepted but those in the queue are run)
- GlueCEStateWorstResponseTime: worst possible time between the submission of a job and the start of its execution
- GlueCEStateEstimatedResponseTime: estimated time between the submission of a job and the start of its execution
- GlueCEStateFreeCPUs: number of CPUs available to the scheduler
Job (currently not filled; the Logging and Bookkeeping service can provide this information) (objectclass GlueCEJob)
- GlueCEJobLocalOwner: local user name of the job owner
- GlueCEJobGlobalOwner: GSI subject of the real job owner
- GlueCEJobLocalID: local job identifier
- GlueCEJobGlobalId: global job identifier
- GlueCEJobStatus: job status: SUBMITTED, WAITING, READY, SCHEDULED, RUNNING, ABORTED, DONE, CLEARED, CHECKPOINTED
- GlueCEJobSchedulerSpecific: any scheduler-specific information
Access control (objectclass GlueCEAccessControlBase)
- GlueCEAccessControlBaseRule: a rule defining any access restrictions to the CE. Current semantics: VO = a VO name, DENY = an X.509 user subject
Cluster (objectclass GlueCluster)
- GlueClusterUniqueID: unique identifier for the cluster
- GlueClusterName: human-readable name of the cluster
Subcluster (objectclass GlueSubCluster)
- GlueSubClusterUniqueID: unique identifier for the subcluster
- GlueSubClusterName: human-readable name of the subcluster

15 GLUE SCHEMA
Host (objectclass GlueHost)
- GlueHostUniqueId: unique identifier for the host
- GlueHostName: human-readable name of the host
Architecture (objectclass GlueHostArchitecture)
- GlueHostArchitecturePlatformType: platform description
- GlueHostArchitectureSMPSize: number of CPUs
Operating system (objectclass GlueHostOperatingSystem)
- GlueHostOperatingSystemOSName: OS name
- GlueHostOperatingSystemOSRelease: OS release
- GlueHostOperatingSystemOSVersion: OS or kernel version
Benchmark (objectclass GlueHostBenchmark)
- GlueHostBenchmarkSI00: SpecInt2000 benchmark
Application software (objectclass GlueHostApplicationSoftware)
- GlueHostApplicationSoftwareRunTimeEnvironment: list of software installed on this host
Processor (objectclass GlueHostProcessor)
- GlueHostProcessorVendor: name of the CPU vendor
- GlueHostProcessorModel: name of the CPU model
- GlueHostProcessorVersion: version of the CPU
- GlueHostProcessorOtherProcessorDescription: other description for the CPU
- GlueHostProcessorClockSpeed: clock speed of the CPU
- GlueHostProcessorInstructionSet: name of the instruction set architecture of the CPU
- GlueHostProcessorFeatures: list of optional features of the CPU
- GlueHostProcessorCacheL1: size of the unified L1 cache
- GlueHostProcessorCacheL1I: size of the instruction L1 cache
- GlueHostProcessorCacheL1D: size of the data L1 cache
- GlueHostProcessorCacheL2: size of the unified L2 cache
Main memory (objectclass GlueHostMainMemory)
- GlueHostMainMemoryRAMSize: physical RAM
- GlueHostMainMemoryRAMAvailable: unallocated RAM
- GlueHostMainMemoryVirtualSize: size of the configured virtual memory
- GlueHostMainMemoryVirtualAvailable: available virtual memory
Network adapter (objectclass GlueHostNetworkAdapter)
- GlueHostNetworkAdapterName: name of the network card
- GlueHostNetworkAdapterIPAddress: IP address of the network card
- GlueHostNetworkAdapterMTU: the MTU size for the LAN to which the network card is attached
- GlueHostNetworkAdapterOutboundIP: permission for outbound connectivity
- GlueHostNetworkAdapterInboundIP: permission for inbound connectivity
Processor load (objectclass GlueHostProcessorLoad)
- GlueHostProcessorLoadLast1Min: one-minute average processor availability for a single node
- GlueHostProcessorLoadLast5Min: 5-minute average processor availability for a single node
- GlueHostProcessorLoadLast15Min: 15-minute average processor availability for a single node

16 GLUE SCHEMA
SMP load (objectclass GlueHostSMPLoad)
- GlueHostSMPLoadLast1Min: one-minute average processor availability for a single node
- GlueHostSMPLoadLast5Min: 5-minute average processor availability for a single node
- GlueHostSMPLoadLast15Min: 15-minute average processor availability for a single node
Storage device (objectclass GlueHostStorageDevice)
- GlueHostStorageDeviceName: name of the storage device
- GlueHostStorageDeviceType: storage device type
- GlueHostStorageDeviceTransferRate: maximum transfer rate for the device
- GlueHostStorageDeviceSize: size of the device
- GlueHostStorageDeviceAvailableSpace: amount of free space
Local file system (objectclass GlueHostLocalFileSystem)
- GlueHostLocalFileSystemRoot: path name or other information defining the root of the file system
- GlueHostLocalFileSystemSize: size of the file system in bytes
- GlueHostLocalFileSystemAvailableSpace: amount of free space in bytes
- GlueHostLocalFileSystemReadOnly: true if the file system is read-only
- GlueHostLocalFileSystemType: file system type
- GlueHostLocalFileSystemName: the name for the file system
- GlueHostLocalFileSystemClient: host unique id of clients allowed to remotely access this file system
Remote file system (objectclass GlueHostRemoteFileSystem)
- GlueHostRemoteFileSystemRoot: path name or other information defining the root of the file system
- GlueHostRemoteFileSystemSize: size of the file system in bytes
- GlueHostRemoteFileSystemAvailableSpace: amount of free space in bytes
- GlueHostRemoteFileSystemReadOnly: true if the file system is read-only
- GlueHostRemoteFileSystemType: file system type
- GlueHostRemoteFileSystemName: the name for the file system
- GlueHostRemoteFileSystemServer: host unique id of the server which provides access to the file system
File (objectclass GlueHostFile)
- GlueHostFileName: name for the file
- GlueHostFileSize: file size in bytes
- GlueHostFileCreationDate: file creation date and time
- GlueHostFileLastModified: date and time of the last modification of the file
- GlueHostFileLastAccessed: date and time of the last access to the file
- GlueHostFileLatency: time taken to access the file in seconds
- GlueHostFileLifeTime: time for which the file will stay on the storage device
- GlueHostFileOwner: name of the owner of the file

17 GLUE SCHEMA
Attributes for the Storage Element
Storage Service (objectclass GlueSE)
- GlueSEUniqueId: unique identifier of the storage service (URI)
- GlueSEName: human-readable name for the service
- GlueSEPort: port number that the service listens on
- GlueSEHostingSL: unique identifier of the storage library hosting the service
Storage Service State (objectclass GlueSEState)
- GlueSEStateCurrentIOLoad: system load (for example, number of files in the queue)
Storage Service Access Protocol (objectclass GlueSEAccessProtocol)
- GlueSEAccessProtocolType: protocol type to access or transfer files
- GlueSEAccessProtocolPort: port number for the protocol
- GlueSEAccessProtocolVersion: protocol version
- GlueSEAccessProtocolAccessTime: time to access a file using this protocol
- GlueSEAccessProtocolSupportedSecurity: security features supported by the protocol
Storage Library (objectclass GlueSL)
- GlueSLName: human-readable name of the storage library
- GlueSLUniqueId: unique identifier of the machine providing the storage service
- GlueSLService: unique identifier for the provided storage service
Local File system (objectclass GlueSLLocalFileSystem)
- GlueSLLocalFileSystemRoot: path name (or other information) defining the root of the file system
- GlueSLLocalFileSystemName: name of the file system
- GlueSLLocalFileSystemType: file system type (e.g. NFS, AFS, etc.)
- GlueSLLocalFileSystemReadOnly: true if the file system is read-only
- GlueSLLocalFileSystemSize: total space assigned to this file system
- GlueSLLocalFileSystemAvailableSpace: total free space in this file system
- GlueSLLocalFileSystemClient: unique identifiers of clients allowed to access the file system remotely
- GlueSLLocalFileSystemServer: unique identifier of the server exporting this file system (only for remote file systems)
Remote File system (objectclass GlueSLRemoteFileSystem)
- GlueSLRemoteFileSystemRoot: path name (or other information) defining the root of the file system
- GlueSLRemoteFileSystemName: name of the file system
- GlueSLRemoteFileSystemType: file system type (e.g. NFS, AFS, etc.)
- GlueSLRemoteFileSystemReadOnly: true if the file system is read-only
- GlueSLRemoteFileSystemSize: total space assigned to this file system
- GlueSLRemoteFileSystemAvailableSpace: total free space in this file system
- GlueSLRemoteFileSystemServer: unique identifier of the server exporting this file system

18 GLUE SCHEMA
File Information (objectclass GlueSLFile)
- GlueSLFileName: file name
- GlueSLFileSize: file size
- GlueSLFileCreationDate: file creation date and time
- GlueSLFileLastModified: date and time of the last modification of the file
- GlueSLFileLastAccessed: date and time of the last access to the file
- GlueSLFileLatency: time needed to access the file
- GlueSLFileLifeTime: file lifetime
- GlueSLFilePath: file path
Directory Information (objectclass GlueSLDirectory)
- GlueSLDirectoryName: directory name
- GlueSLDirectorySize: directory size
- GlueSLDirectoryCreationDate: directory creation date and time
- GlueSLDirectoryLastModified: date and time of the last modification of the directory
- GlueSLDirectoryLastAccessed: date and time of the last access to the directory
- GlueSLDirectoryLatency: time needed to access the directory
- GlueSLDirectoryLifeTime: directory lifetime
- GlueSLDirectoryPath: directory path
Architecture (objectclass GlueSLDirectory)
- GlueSLDirectoryType: type of storage hardware (i.e. disk, RAID array, tape library, etc.)
Performance (objectclass GlueSLPerformance)
- GlueSLPerformanceMaxIOCapacity: maximum bandwidth between the service and the network
Storage Space (objectclass GlueSA)
- GlueSARoot: pathname of the directory containing the files of the storage space
Policy (objectclass GlueSAPolicy)
- GlueSAPolicyMaxFileSize: maximum file size
- GlueSAPolicyMinFileSize: minimum file size
- GlueSAPolicyMaxData: maximum allowed amount of data that a single job can store
- GlueSAPolicyMaxNumFiles: maximum allowed number of files that a single job can store
- GlueSAPolicyMaxPinDuration: maximum allowed lifetime for non-permanent files
- GlueSAPolicyQuota: total available space
- GlueSAPolicyFileLifeTime: lifetime policy for the contained files
Access Control Base (objectclass GlueSAAccessControlBase)
- GlueSAAccessControlBaseRule: list of the access control rules
State (objectclass GlueSAState)
- GlueSAStateAvailableSpace: total space available in the storage space
- GlueSAStateUsedSpace: used space in the storage space

19 More on JDL
- Based on Condor ClassAds syntax (parser very sensitive)
- Simple statements: attribute = value;
- Arguments = "wall"; (passes arguments to the executable)
- Input sandbox can handle wildcards like * and ?
- Environment = {"DTEAM_PATH=$HOME/dteam", "TEAM=dteam"};
- OutputSE = "adc0021.cern.ch"; (selects the job to run close to this SE)
  [
  InputSandbox = {"/home/joda/test/gridtest", "/tmp/test/*"};
  OutputSandbox = {"stderr.log", "stdout.log"};
  InputData = {"lfn:green", "guid:red"};
  DataAccessProtocol = {"file", "gridftp"};
  Requirements = other.GlueHostOperatingSystemName == "LINUX" && other.GlueCEStateFreeCPUs >= 4 && Member("alice3-4", other.GlueHostApplicationSoftwareRunTimeEnvironment);
  Rank = other.GlueCEStateFreeCPUs;
  MyProxyServer = "wn a.cr.cnaf.infn.it";
  RetryCount = 7;
  ]

20 More on JDL
- OutputData specifies where files should go
- If no LFN is specified, WP2 selects one
- If no SE is specified, the close SE is chosen
- At the end of the job the files are moved from the WN and registered
- A file with the result of this operation is created and added to the sandbox: DSUpload_<unique jobstring>.out
  OutputData = {
    [
    OutputFile = "toto.out";
    StorageElement = "adc0021.cern.ch";
    LogicalFileName = "thebesttotoever";
    ],
    [
    OutputFile = "toto2.out";
    StorageElement = "adc0021.cern.ch";
    LogicalFileName = "thebesttotoever2";
    ]
  };

21 Data
- Users should use high-level tools (references and details -> User Guide for LCG and WP2)
- Avoid globus-url-copy and the edg-gridftp-X tools, except maybe the following: X = exists, ls, mkdir
- The edg-replica-manager tools (edg-rm) allow you to:
  move files around (UI->SE, WN->SE),
  register files in the RLS,
  replicate them between SEs
- Many options: -help + documentation
- Move a file from UI to SE. Where?
  edg-rm --vo=dteam printInfo
  edg-rm --vo=dteam copyAndRegisterFile file:`pwd`/load -d srm://adc0021.cern.ch/flatfiles/se00/dteam/markus/t1 -l lfn:markus1
  guid:dc9760d7-f36a-11d7-864b-925f9e8966fe is returned
- Hostname is sufficient for -d (without -d the RM decides where to go)

22 Data
- Replicate a file to another SE (guid needed):
  edg-rm --vo=dteam replicateFile guid:dc9760d7-f36a-11d7-864b-925f9e8966fe -d wn a.cr.cnaf.infn.it
- To list replicas:
  edg-rm --vo=dteam listReplicas guid:dc9760d7-f36a-11d7-864b-925f9e8966fe
- To delete replicas use: deleteFile guid:xxx -s se.cern.ch
- To find all aliases of a file:
  First: edg-rm -i --vo=dteam listGUID lfn:mm2 -> guid
  Then: edg-rmc -i aliasesForGuid -h rlsdteam.cern.ch -p vo=dteam guid
- Listing an SE dir:
  edg-rm -i --vo=dteam list srm://adc0021.cern.ch/flatfiles/se00/dteam/markus/ (broken)
  use instead:
  edg-gridftp-ls --verbose gsiftp://adc0021.cern.ch/flatfiles/se00/dteam/markus

23 File access from a job
- The WLMS (RB) creates the brokerinfo file and moves it to the WN
- It is used to answer questions about the site you are on (uses .brokerinfo)
- Get the first input file name:
  infile=`edg-brokerinfo getInputData | cut -d ' ' -f 1`
- Get the first close SE:
  closeSE=`edg-brokerinfo getCloseSEs | cut -d ' ' -f 1`
- Get the TURL:
  TURL=`edg-rm --vo=dteam gbf $infile -d $closeSE -t file`
- Get the file name:
  Localfile=`echo $TURL | cut -d : -f 2`
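The last step above can be exercised without a grid job: given a file-protocol TURL of the kind edg-rm returns with -t file, the cut pipeline recovers the local path. The TURL value below is invented for illustration:

```shell
# A file-protocol TURL as edg-rm would return it (illustrative value).
TURL="file:/flatfiles/se00/dteam/markus/t1"

# Strip the "file:" protocol prefix; everything after the first colon
# is the local path the job can open directly.
Localfile=`echo $TURL | cut -d : -f 2`
echo $Localfile
# -> /flatfiles/se00/dteam/markus/t1
```

Note that this simple `cut -d : -f 2` only works for single-colon schemes like file:; a gsiftp:// or srm:// TURL contains further colons and would need a different split.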


More information

LHC COMPUTING GRID INSTALLING THE RELEASE. Document identifier: Date: April 6, Document status:

LHC COMPUTING GRID INSTALLING THE RELEASE. Document identifier: Date: April 6, Document status: LHC COMPUTING GRID INSTALLING THE RELEASE Document identifier: EDMS id: Version: n/a v2.4.0 Date: April 6, 2005 Section: Document status: gis final Author(s): GRID Deployment Group ()

More information

GRID COMPANION GUIDE

GRID COMPANION GUIDE Companion Subject: GRID COMPANION Author(s): Miguel Cárdenas Montes, Antonio Gómez Iglesias, Francisco Castejón, Adrian Jackson, Joachim Hein Distribution: Public 1.Introduction Here you will find the

More information

Grid Infrastructure For Collaborative High Performance Scientific Computing

Grid Infrastructure For Collaborative High Performance Scientific Computing Computing For Nation Development, February 08 09, 2008 Bharati Vidyapeeth s Institute of Computer Applications and Management, New Delhi Grid Infrastructure For Collaborative High Performance Scientific

More information

Troubleshooting Grid authentication from the client side

Troubleshooting Grid authentication from the client side Troubleshooting Grid authentication from the client side By Adriaan van der Zee RP1 presentation 2009-02-04 Contents The Grid @NIKHEF The project Grid components and interactions X.509 certificates, proxies

More information

MyProxy Server Installation

MyProxy Server Installation MyProxy Server Installation Emidio Giorgio INFN First Latin American Workshop for Grid Administrators 21-25 November 2005 www.eu-egee.org Outline Why MyProxy? Proxy Renewal mechanism. Remote authentication

More information

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager The University of Oxford campus grid, expansion and integrating new partners Dr. David Wallom Technical Manager Outline Overview of OxGrid Self designed components Users Resources, adding new local or

More information

AGATA Analysis on the GRID

AGATA Analysis on the GRID AGATA Analysis on the GRID R.M. Pérez-Vidal IFIC-CSIC For the e682 collaboration What is GRID? Grid technologies allow that computers share trough Internet or other telecommunication networks not only

More information

The DESY Grid Testbed

The DESY Grid Testbed The DESY Grid Testbed Andreas Gellrich * DESY IT Group IT-Seminar 27.01.2004 * e-mail: Andreas.Gellrich@desy.de Overview The Physics Case: LHC DESY The Grid Idea: Concepts Implementations Grid @ DESY:

More information

Grid Architectural Models

Grid Architectural Models Grid Architectural Models Computational Grids - A computational Grid aggregates the processing power from a distributed collection of systems - This type of Grid is primarily composed of low powered computers

More information

Status of KISTI Tier2 Center for ALICE

Status of KISTI Tier2 Center for ALICE APCTP 2009 LHC Physics Workshop at Korea Status of KISTI Tier2 Center for ALICE August 27, 2009 Soonwook Hwang KISTI e-science Division 1 Outline ALICE Computing Model KISTI ALICE Tier2 Center Future Plan

More information

OSGMM and ReSS Matchmaking on OSG

OSGMM and ReSS Matchmaking on OSG OSGMM and ReSS Matchmaking on OSG Condor Week 2008 Mats Rynge rynge@renci.org OSG Engagement VO Renaissance Computing Institute Chapel Hill, NC 1 Overview ReSS The information provider OSG Match Maker

More information

Troubleshooting Grid authentication from the client side

Troubleshooting Grid authentication from the client side System and Network Engineering RP1 Troubleshooting Grid authentication from the client side Adriaan van der Zee 2009-02-05 Abstract This report, the result of a four-week research project, discusses the

More information

glite/egee in Practice

glite/egee in Practice glite/egee in Practice Alex Villazon (DPS, Innsbruck) Markus Baumgartner (GUP, Linz) With material from www.eu-egee.org ISPDC 2007 5-8 July 2007 Hagenberg, Austria EGEE-II INFSO-RI-031688 Overview Introduction

More information

EGEE. Grid Middleware. Date: June 20, 2006

EGEE. Grid Middleware. Date: June 20, 2006 EGEE Grid Middleware HANDOUTS FOR STUDENTS Author(s): Fokke Dijkstra, Jeroen Engelberts, Sjors Grijpink, David Groep, Jeff Templon Abstract: These handouts are provided for people to learn how to use the

More information

Monitoring tools in EGEE

Monitoring tools in EGEE Monitoring tools in EGEE Piotr Nyczyk CERN IT/GD Joint OSG and EGEE Operations Workshop - 3 Abingdon, 27-29 September 2005 www.eu-egee.org Kaleidoscope of monitoring tools Monitoring for operations Covered

More information

Layered Architecture

Layered Architecture The Globus Toolkit : Introdution Dr Simon See Sun APSTC 09 June 2003 Jie Song, Grid Computing Specialist, Sun APSTC 2 Globus Toolkit TM An open source software toolkit addressing key technical problems

More information

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac. g-eclipse A Framework for Accessing Grid Infrastructures Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.cy) EGEE Training the Trainers May 6 th, 2009 Outline Grid Reality The Problem g-eclipse

More information

EGEE and Interoperation

EGEE and Interoperation EGEE and Interoperation Laurence Field CERN-IT-GD ISGC 2008 www.eu-egee.org EGEE and glite are registered trademarks Overview The grid problem definition GLite and EGEE The interoperability problem The

More information

FREE SCIENTIFIC COMPUTING

FREE SCIENTIFIC COMPUTING Institute of Physics, Belgrade Scientific Computing Laboratory FREE SCIENTIFIC COMPUTING GRID COMPUTING Branimir Acković March 4, 2007 Petnica Science Center Overview 1/2 escience Brief History of UNIX

More information

DataGRID EDG TUTORIAL. Document identifier: EDMS id: Date: April 4, Work package: Partner(s): Lead Partner: Document status: Version 2.6.

DataGRID EDG TUTORIAL. Document identifier: EDMS id: Date: April 4, Work package: Partner(s): Lead Partner: Document status: Version 2.6. DataGRID EDG TUTORIAL HANDOUTS FOR PARTICIPANTS FOR EDG RELEASE 1.4.X Document identifier: DataGrid-08-TUT-V2.6 EDMS id: Work package: Partner(s): Lead Partner: EDG Collaboration EDG Collaboration EDG

More information

EU DataGRID testbed management and support at CERN

EU DataGRID testbed management and support at CERN EU DataGRID testbed management and support at CERN E. Leonardi and M.W. Schulz CERN, Geneva, Switzerland In this paper we report on the first two years of running the CERN testbed site for the EU DataGRID

More information

ALHAD G. APTE, BARC 2nd GARUDA PARTNERS MEET ON 15th & 16th SEPT. 2006

ALHAD G. APTE, BARC 2nd GARUDA PARTNERS MEET ON 15th & 16th SEPT. 2006 GRID COMPUTING ACTIVITIES AT BARC ALHAD G. APTE, BARC 2nd GARUDA PARTNERS MEET ON 15th & 16th SEPT. 2006 Computing Grid at BARC Computing Grid system has been set up as a Test-Bed using existing Grid Technology

More information

OSG Lessons Learned and Best Practices. Steven Timm, Fermilab OSG Consortium August 21, 2006 Site and Fabric Parallel Session

OSG Lessons Learned and Best Practices. Steven Timm, Fermilab OSG Consortium August 21, 2006 Site and Fabric Parallel Session OSG Lessons Learned and Best Practices Steven Timm, Fermilab OSG Consortium August 21, 2006 Site and Fabric Parallel Session Introduction Ziggy wants his supper at 5:30 PM Users submit most jobs at 4:59

More information

SPGrid Efforts in Italy

SPGrid Efforts in Italy INFN - Ferrara BaBarGrid Meeting SPGrid Efforts in Italy BaBar Collaboration Meeting - SLAC December 11, 2002 Enrica Antonioli - Paolo Veronesi Topics Ferrara Farm Configuration First SP submissions through

More information

Edinburgh (ECDF) Update

Edinburgh (ECDF) Update Edinburgh (ECDF) Update Wahid Bhimji On behalf of the ECDF Team HepSysMan,10 th June 2010 Edinburgh Setup Hardware upgrades Progress in last year Current Issues June-10 Hepsysman Wahid Bhimji - ECDF 1

More information

The EU DataGrid Testbed

The EU DataGrid Testbed The EU DataGrid Testbed The European DataGrid Project Team http://www.eudatagrid.org DataGrid is a project funded by the European Union Grid Tutorial 4/3/2004 n 1 Contents User s Perspective of the Grid

More information

The Grid: Processing the Data from the World s Largest Scientific Machine

The Grid: Processing the Data from the World s Largest Scientific Machine The Grid: Processing the Data from the World s Largest Scientific Machine 10th Topical Seminar On Innovative Particle and Radiation Detectors Siena, 1-5 October 2006 Patricia Méndez Lorenzo (IT-PSS/ED),

More information

Introduction to Programming and Computing for Scientists

Introduction to Programming and Computing for Scientists Oxana Smirnova (Lund University) Programming for Scientists Tutorial 4b 1 / 44 Introduction to Programming and Computing for Scientists Oxana Smirnova Lund University Tutorial 4b: Grid certificates and

More information

Grid Interoperation and Regional Collaboration

Grid Interoperation and Regional Collaboration Grid Interoperation and Regional Collaboration Eric Yen ASGC Academia Sinica Taiwan 23 Jan. 2006 Dreams of Grid Computing Global collaboration across administrative domains by sharing of people, resources,

More information

I Tier-3 di CMS-Italia: stato e prospettive. Hassen Riahi Claudio Grandi Workshop CCR GRID 2011

I Tier-3 di CMS-Italia: stato e prospettive. Hassen Riahi Claudio Grandi Workshop CCR GRID 2011 I Tier-3 di CMS-Italia: stato e prospettive Claudio Grandi Workshop CCR GRID 2011 Outline INFN Perugia Tier-3 R&D Computing centre: activities, storage and batch system CMS services: bottlenecks and workarounds

More information

DIRAC Documentation. Release integration. DIRAC Project. 09:29 20/05/2016 UTC

DIRAC Documentation. Release integration. DIRAC Project. 09:29 20/05/2016 UTC DIRAC Documentation Release integration DIRAC Project. 09:29 20/05/2016 UTC Contents 1 User Guide 3 1.1 Getting Started.............................................. 3 1.2 Web Portal Reference..........................................

More information

The Grid. Processing the Data from the World s Largest Scientific Machine II Brazilian LHC Computing Workshop

The Grid. Processing the Data from the World s Largest Scientific Machine II Brazilian LHC Computing Workshop The Grid Processing the Data from the World s Largest Scientific Machine II Brazilian LHC Computing Workshop Patricia Méndez Lorenzo (IT-GS/EIS), CERN Abstract The world's largest scientific machine will

More information

Introduction to SRM. Riccardo Zappi 1

Introduction to SRM. Riccardo Zappi 1 Introduction to SRM Grid Storage Resource Manager Riccardo Zappi 1 1 INFN-CNAF, National Center of INFN (National Institute for Nuclear Physic) for Research and Development into the field of Information

More information

Dr. Giuliano Taffoni INAF - OATS

Dr. Giuliano Taffoni INAF - OATS Query Element Demo The Grid Query Element for glite Dr. Giuliano Taffoni INAF - OATS Overview What is a G-DSE? Use and Admin a DB: the Query Element; Upcoming Features; Working on QE People: Edgardo Ambrosi

More information

CineGrid GRID & Networking

CineGrid GRID & Networking CineGrid GRID & Networking Cees de Laat University of Amsterdam With grid slides thanks to David Groep (NIKHEF) CineGrid Mission To build an interdisciplinary community that is focused on the research,

More information

dcache Introduction Course

dcache Introduction Course GRIDKA SCHOOL 2013 KARLSRUHER INSTITUT FÜR TECHNOLOGIE KARLSRUHE August 29, 2013 dcache Introduction Course Overview Chapters I, II and Ⅴ Christoph Anton Mitterer christoph.anton.mitterer@lmu.de ⅤIII.

More information

Setup Desktop Grids and Bridges. Tutorial. Robert Lovas, MTA SZTAKI

Setup Desktop Grids and Bridges. Tutorial. Robert Lovas, MTA SZTAKI Setup Desktop Grids and Bridges Tutorial Robert Lovas, MTA SZTAKI Outline of the SZDG installation process 1. Installing the base operating system 2. Basic configuration of the operating system 3. Installing

More information

GROWL Scripts and Web Services

GROWL Scripts and Web Services GROWL Scripts and Web Services Grid Technology Group E-Science Centre r.j.allan@dl.ac.uk GROWL Collaborative project (JISC VRE I programme) between CCLRC Daresbury Laboratory and the Universities of Cambridge

More information

A Login Shell interface for INFN-GRID

A Login Shell interface for INFN-GRID A Login Shell interface for INFN-GRID S.Pardi2,3, E. Calloni1,2, R. De Rosa1,2, F. Garufi1,2, L. Milano1,2, G. Russo1,2 1Università degli Studi di Napoli Federico II, Dipartimento di Scienze Fisiche, Complesso

More information

Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY

Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY Journal of Physics: Conference Series OPEN ACCESS Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY To cite this article: Elena Bystritskaya et al 2014 J. Phys.: Conf.

More information

Agent Teamwork Research Assistant. Progress Report. Prepared by Solomon Lane

Agent Teamwork Research Assistant. Progress Report. Prepared by Solomon Lane Agent Teamwork Research Assistant Progress Report Prepared by Solomon Lane December 2006 Introduction... 3 Environment Overview... 3 Globus Grid...3 PBS Clusters... 3 Grid/Cluster Integration... 4 MPICH-G2...

More information

Tutorial for CMS Users: Data Analysis on the Grid with CRAB

Tutorial for CMS Users: Data Analysis on the Grid with CRAB Tutorial for CMS Users: Data Analysis on the Grid with CRAB Benedikt Mura, Hartmut Stadie Institut für Experimentalphysik, Universität Hamburg September 2nd, 2009 In this part you will learn... 1 how to

More information

Grid Documentation Documentation

Grid Documentation Documentation Grid Documentation Documentation Release 1.0 Grid Support Nov 06, 2018 Contents 1 General 3 2 Basics 9 3 Advanced topics 25 4 Best practices 81 5 Service implementation 115 6 Tutorials

More information

Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. Presented by Manfred Alef Contributions of Jos van Wezel, Andreas Heiss

Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. Presented by Manfred Alef Contributions of Jos van Wezel, Andreas Heiss Site Report Presented by Manfred Alef Contributions of Jos van Wezel, Andreas Heiss Grid Computing Centre Karlsruhe (GridKa) Forschungszentrum Karlsruhe Institute for Scientific Computing Hermann-von-Helmholtz-Platz

More information

Outline. ASP 2012 Grid School

Outline. ASP 2012 Grid School Distributed Storage Rob Quick Indiana University Slides courtesy of Derek Weitzel University of Nebraska Lincoln Outline Storage Patterns in Grid Applications Storage

More information

Resource Allocation in computational Grids

Resource Allocation in computational Grids Grid Computing Competence Center Resource Allocation in computational Grids Riccardo Murri Grid Computing Competence Center, Organisch-Chemisches Institut, University of Zurich Nov. 23, 21 Scheduling on

More information

The glite middleware. Presented by John White EGEE-II JRA1 Dep. Manager On behalf of JRA1 Enabling Grids for E-sciencE

The glite middleware. Presented by John White EGEE-II JRA1 Dep. Manager On behalf of JRA1 Enabling Grids for E-sciencE The glite middleware Presented by John White EGEE-II JRA1 Dep. Manager On behalf of JRA1 John.White@cern.ch www.eu-egee.org EGEE and glite are registered trademarks Outline glite distributions Software

More information

Cloud Computing. Up until now

Cloud Computing. Up until now Cloud Computing Lecture 4 and 5 Grid: 2012-2013 Introduction. Up until now Definition of Cloud Computing. Grid Computing: Schedulers: Condor SGE 1 Summary Core Grid: Toolkit Condor-G Grid: Conceptual Architecture

More information

ARC integration for CMS

ARC integration for CMS ARC integration for CMS ARC integration for CMS Erik Edelmann 2, Laurence Field 3, Jaime Frey 4, Michael Grønager 2, Kalle Happonen 1, Daniel Johansson 2, Josva Kleist 2, Jukka Klem 1, Jesper Koivumäki

More information

Implementing GRID interoperability

Implementing GRID interoperability AFS & Kerberos Best Practices Workshop University of Michigan, Ann Arbor June 12-16 2006 Implementing GRID interoperability G. Bracco, P. D'Angelo, L. Giammarino*, S.Migliori, A. Quintiliani, C. Scio**,

More information

Using the MyProxy Online Credential Repository

Using the MyProxy Online Credential Repository Using the MyProxy Online Credential Repository Jim Basney National Center for Supercomputing Applications University of Illinois jbasney@ncsa.uiuc.edu What is MyProxy? Independent Globus Toolkit add-on

More information

DESY. Andreas Gellrich DESY DESY,

DESY. Andreas Gellrich DESY DESY, Grid @ DESY Andreas Gellrich DESY DESY, Legacy Trivially, computing requirements must always be related to the technical abilities at a certain time Until not long ago: (at least in HEP ) Computing was

More information

Day 1 : August (Thursday) An overview of Globus Toolkit 2.4

Day 1 : August (Thursday) An overview of Globus Toolkit 2.4 An Overview of Grid Computing Workshop Day 1 : August 05 2004 (Thursday) An overview of Globus Toolkit 2.4 By CDAC Experts Contact :vcvrao@cdacindia.com; betatest@cdacindia.com URL : http://www.cs.umn.edu/~vcvrao

More information

Monitoring the Usage of the ZEUS Analysis Grid

Monitoring the Usage of the ZEUS Analysis Grid Monitoring the Usage of the ZEUS Analysis Grid Stefanos Leontsinis September 9, 2006 Summer Student Programme 2006 DESY Hamburg Supervisor Dr. Hartmut Stadie National Technical

More information

Programming the Grid with glite

Programming the Grid with glite Programming the Grid with glite E. Laure 1, C. Grandi 1, S. Fisher 2, A. Frohner 1, P. Kunszt 3, A. Krenek 4, O. Mulmo 5, F. Pacini 6, F. Prelz 7, J. White 1 M. Barroso 1, P. Buncic 1, R. Byrom 2, L. Cornwall

More information

GRID COMPUTING APPLIED TO OFF-LINE AGATA DATA PROCESSING. 2nd EGAN School, December 2012, GSI Darmstadt, Germany

GRID COMPUTING APPLIED TO OFF-LINE AGATA DATA PROCESSING. 2nd EGAN School, December 2012, GSI Darmstadt, Germany GRID COMPUTING APPLIED TO OFF-LINE AGATA DATA PROCESSING M. KACI mohammed.kaci@ific.uv.es 2nd EGAN School, 03-07 December 2012, GSI Darmstadt, Germany GRID COMPUTING TECHNOLOGY THE EUROPEAN GRID: HISTORY

More information

Deliverable D8.9 - First release of DM services

Deliverable D8.9 - First release of DM services GridLab - A Grid Application Toolkit and Testbed Deliverable D8.9 - First release of DM services Author(s): Document Filename: Work package: Partner(s): Lead Partner: Config ID: Document classification:

More information

The GridWay. approach for job Submission and Management on Grids. Outline. Motivation. The GridWay Framework. Resource Selection

The GridWay. approach for job Submission and Management on Grids. Outline. Motivation. The GridWay Framework. Resource Selection The GridWay approach for job Submission and Management on Grids Eduardo Huedo Rubén S. Montero Ignacio M. Llorente Laboratorio de Computación Avanzada Centro de Astrobiología (INTA - CSIC) Associated to

More information

An Example Grid Middleware - The Globus Toolkit. MCSN N. Tonellotto Complements of Distributed Enabling Platforms

An Example Grid Middleware - The Globus Toolkit. MCSN N. Tonellotto Complements of Distributed Enabling Platforms An Example Grid Middleware - The Globus Toolkit 1 Globus Toolkit A software toolkit addressing key technical problems in the development of Grid enabled tools, services, and applications Offer a modular

More information

Understanding StoRM: from introduction to internals

Understanding StoRM: from introduction to internals Understanding StoRM: from introduction to internals 13 November 2007 Outline Storage Resource Manager The StoRM service StoRM components and internals Deployment configuration Authorization and ACLs Conclusions.

More information

Data Grid Infrastructure for YBJ-ARGO Cosmic-Ray Project

Data Grid Infrastructure for YBJ-ARGO Cosmic-Ray Project Data Grid Infrastructure for YBJ-ARGO Cosmic-Ray Project Gang CHEN, Hongmei ZHANG - IHEP CODATA 06 24 October 2006, Beijing FP6 2004 Infrastructures 6-SSA-026634 http://www.euchinagrid.cn Extensive Air

More information

Installation of CMSSW in the Grid DESY Computing Seminar May 17th, 2010 Wolf Behrenhoff, Christoph Wissing

Installation of CMSSW in the Grid DESY Computing Seminar May 17th, 2010 Wolf Behrenhoff, Christoph Wissing Installation of CMSSW in the Grid DESY Computing Seminar May 17th, 2010 Wolf Behrenhoff, Christoph Wissing Wolf Behrenhoff, Christoph Wissing DESY Computing Seminar May 17th, 2010 Page 1 Installation of

More information

Grid Scheduling Architectures with Globus

Grid Scheduling Architectures with Globus Grid Scheduling Architectures with Workshop on Scheduling WS 07 Cetraro, Italy July 28, 2007 Ignacio Martin Llorente Distributed Systems Architecture Group Universidad Complutense de Madrid 1/38 Contents

More information

R-GMA (Relational Grid Monitoring Architecture) for monitoring applications

R-GMA (Relational Grid Monitoring Architecture) for monitoring applications R-GMA (Relational Grid Monitoring Architecture) for monitoring applications www.eu-egee.org egee EGEE-II INFSO-RI-031688 Acknowledgements Slides are taken/derived from the GILDA team Steve Fisher (RAL,

More information

MPI SUPPORT ON THE GRID. Kiril Dichev, Sven Stork, Rainer Keller. Enol Fernández

MPI SUPPORT ON THE GRID. Kiril Dichev, Sven Stork, Rainer Keller. Enol Fernández Computing and Informatics, Vol. 27, 2008, 213 222 MPI SUPPORT ON THE GRID Kiril Dichev, Sven Stork, Rainer Keller High Performance Computing Center University of Stuttgart Nobelstrasse 19 70569 Stuttgart,

More information

Computing in HEP. Andreas Gellrich. DESY IT Group - Physics Computing. DESY Summer Student Program 2005 Lectures in HEP,

Computing in HEP. Andreas Gellrich. DESY IT Group - Physics Computing. DESY Summer Student Program 2005 Lectures in HEP, Computing in HEP Andreas Gellrich DESY IT Group - Physics Computing DESY Summer Student Program 2005 Lectures in HEP, 11.08.2005 Program for Today Computing in HEP The DESY Computer Center Grid Computing

More information

WP3 Final Activity Report

WP3 Final Activity Report WP3 Final Activity Report Nicholas Loulloudes WP3 Representative On behalf of the g-eclipse Consortium Outline Work Package 3 Final Status Achievements Work Package 3 Goals and Benefits WP3.1 Grid Infrastructure

More information

How to use computing resources at Grid

How to use computing resources at Grid How to use computing resources at Grid Nikola Grkic ngrkic@ipb.ac.rs Scientific Computing Laboratory Institute of Physics Belgrade, Serbia Academic and Educat ional Gr id Init iat ive of S er bia Oct.

More information

NorduGrid Tutorial. Client Installation and Job Examples

NorduGrid Tutorial. Client Installation and Job Examples NorduGrid Tutorial Client Installation and Job Examples Linux Clusters for Super Computing Conference Linköping, Sweden October 18, 2004 Arto Teräs arto.teras@csc.fi Steps to Start Using NorduGrid 1) Install

More information

Michigan Grid Research and Infrastructure Development (MGRID)

Michigan Grid Research and Infrastructure Development (MGRID) Michigan Grid Research and Infrastructure Development (MGRID) Abhijit Bose MGRID and Dept. of Electrical Engineering and Computer Science The University of Michigan Ann Arbor, MI 48109 abose@umich.edu

More information