Monitoring the Usage of the ZEUS Analysis Grid

Stefanos Leontsinis
National Technical University of Athens

Summer Student Programme 2006, DESY Hamburg
Supervisor: Dr. Hartmut Stadie
September 9, 2006

Abstract

The Grid has already been used very successfully for Monte Carlo production for the ZEUS experiment. The next natural step is the submission of users' analysis jobs to the Grid. In the last two months a total of 3184 jobs were successfully submitted to various sites of the Grid, with a total runtime of 1162 days and 45637 MB of output. To give the users feedback, a flexible monitoring system is needed. This note describes the implementation of the display part of this system: a set of PHP scripts that display all the necessary information about the submitted jobs, allow the users to check the status of jobs submitted to the ZEUS Grid, and show information about the efficiency of the sites to which the jobs are submitted.

Contents

1 The Grid
  1.1 The Grid Structure
  1.2 Job Submission on the Grid
2 Running ZEUS Applications on the Grid
  2.1 ZEUS Grid Toolkit
  2.2 Layout of the Integrated Monte Carlo Production
  2.3 ZEUS Analysis on the Grid (ZAG)
3 Monitoring the Job Submissions on the ZEUS Grid
  3.1 Summary of the Job Submissions on ZEUS Grid
  3.2 Statistics of the Grid Use
4 Conclusions
5 Acknowledgments

1 The Grid

The idea of the Grid goes back to the 1990s and was based on the principle of the electric power grid. That grid supplies electric energy to its users in a fairly simple and standardised way: every user can use it by plugging in a device that is supplied with a specific voltage through a standardised plug. An equivalent model is used for computers. At this moment, millions of computers and storage devices are connected to the Internet worldwide. The only thing still required is an infrastructure and a standardised interface that provide transparent access to this computing power and storage space in a homogeneous (uniform) way. The Grid provides a service for the public use of this computing power and storage space on the Internet.

The Grid goes further than the usual connection among computers and aims at the transformation of the worldwide computer network into one great computational tool. The Grid differs from the World Wide Web in that it provides its users not only with information, but with processing power and storage space as well. The strength of the Grid shows in its applications. When a user wishes to run an application, the Grid locates the best available place for its execution and executes it there without any action required from the user. The Grid can also greatly facilitate the processing of large amounts of information coming from different computers: again, it locates the best available information source, without any action required by the user, and executes the respective process. Additionally, this analysis can be done in collaboration with partners around the world, as the Grid connects every user as if they were working in a local intranet. An impressive feature of the Grid is that the user does not need to know which computing resources are needed for a task or where they are located. The only requirement is that the Grid provides the computing power and the storage space through a standardised interface.

The framework (structure) of the Grid usually consists of layers, each serving a specific task. In general, the higher layers focus on the user, whereas the lower layers focus on the computers and networks. The middleware layer provides the tools for the participation of the different elements (servers, storage elements, networks) in the uniform Grid environment; it represents the Grid intelligence that unites the remaining elements and is often characterised as the brain of the Grid. At the base of the whole infrastructure sits the network that ensures the connectivity of the Grid resources. On top of the network one finds the resource layer, which consists of the components of the Grid, such as computers, storage elements, electronic data catalogues, and even detectors and telescopes that can be connected directly to the network. The highest layer of the Grid infrastructure is the application layer, which incorporates all the different user applications (physics, engineering, economics, etc.), portals and development tools that support these applications. This is the layer through which the Grid users cooperate with each other.

1.1 The Grid Structure

The components of the Grid are the following:

Resource Broker: This unit receives the commands of the users and examines the information catalogues for the appropriate resources.

BDII: The information catalogue that collects the information about the available resources. The catalogue will very probably reside on the same machine as the Resource Broker.

Replica Manager: It coordinates the copying of files on the testbed during their transfer from one storage element to another. This is useful for data redundancy, but also for moving data closer to the machines that will do the processing.

Replica Catalogue: Collects information about the data copies. A logical file can be matched with one or more physical files which are themselves copies of the same data; therefore, the logical file name can be related to one or more physical file names. The list containing the copied files can be located on the same machine as the Replica Manager.

Computing Element (CE): The Computing Element is the unit which collects the requirements for a specific task and then delivers these requirements to the Worker Nodes (WN), which afterwards carry out the task itself. The Computing Element provides an interface to the local batch system of a cluster. One Computing Element is able to control one or more Worker Nodes; additionally, a Worker Node can be set up on the same machine as the Computing Element.

Worker Node (WN): The Worker Node is the machine which processes the input data.

Storage Element (SE): The Storage Element is the machine which provides storage space to the testbed. It also provides a uniform interface to the different individual storage systems.

User Interface (UI): The machine which allows the users' access to the testbed.

1.2 Job Submission on the Grid

A job submission can be described in seven steps divided into two cycles. The first cycle involves the registration procedure and the second is responsible for the actual job submission. The registration procedure consists of three steps and has to be done only the first time one enters the Grid; afterwards it only has to be renewed once a year. The first step is to join a Virtual Organization (VO). Virtual Organizations are distributed communities: High Energy Physics (e.g. the LHC VOs), Earth Observation (e.g. the EO VO) and Biology (e.g. the Biomed VO) are examples of such communities, consisting of several institutions and individuals sharing the same interests and the same scientific goals. They benefit greatly from putting together their computing resources, data and scientific instruments. Joining a Virtual Organization provides access to the Grid facility. In order to prove your identity on the Grid facility, you need a certificate from one of the Certification Authorities. The request for a certificate from a Certification Authority can be made by filling in and submitting an online form. The Certification Authority will check your identity, issue the certificate and give it back to you via the Web. Once you have your certificate, you need to install it on the User Interface (the machine you will use to access the Grid facility). With this step the registration procedure ends and the actual job submission can take place.

Every time you start a session on the Grid, you need to create temporary credentials. This is necessary to avoid exposing your certificate to an insecure network. Proxy credentials have a default expiration time of 12 hours.

During this period you can work on the testbed. To run a job on the LCG/EGEE Grid facility, you have to describe it in the Job Description Language (JDL). The JDL specifies job characteristics such as the application to use, the input data, the required resources, etc. Once you have the JDL file for your job, you can submit it to the Resource Broker. At the same time, the Logging & Book-keeping service logs the job as submitted. Based on the information given in the JDL file, the Resource Broker queries the Information Service and the Replica Catalogue to check the resources. The Replica Catalogue and the Information Service hold information on the current status of all the sites. The Resource Broker uses this information to match the job to a suitable Computing Element. During this phase, the job is in the WAITING status. The Resource Broker then makes its choice: it has found a suitable Computing Element and the Storage Element with the necessary data. It informs the Logging & Book-keeping service of its decision and submits the job to the selected Computing Element. The Computing Element gets any necessary data from the Storage Element, and the job is eventually executed on the chosen Computing Element. During all this process, you can check the status of your job by contacting the Logging & Book-keeping service. When the execution of the job has completed on the Computing Element, the Computing Element transfers the output to the Resource Broker, and you can then retrieve your output from the Resource Broker. When finished, the book-keeping information is purged.

2 Running ZEUS Applications on the Grid

Around 70% of the ZEUS Monte Carlo production is done on the ZEUS Grid. In the past year more than 240 million jobs were submitted to various sites of the Grid, with a total output size of more than 14 TB. The Monte Carlo production package for ZEUS is an application built on top of the ZEUS Grid Toolkit [1].

2.1 ZEUS Grid Toolkit

The ZEUS Grid toolkit is the basic toolkit for the implementation of the new production system. It is written in object-oriented Perl and consists of a set of classes for basic data structures, job submission, data transfer and output logging and validation. The parts of the toolkit that use Grid client tools directly are encapsulated, because a variety of client tools to access Grid services exist at different sites and new projects are being developed. However, the main concepts for data handling and job submission do not change. Consequently, the ZEUS Grid toolkit implements the Strategy pattern and adds an additional layer of abstraction to the usage of Grid client tools. Abstract interfaces are defined for data handling and job operations, and the correct middleware implementation is chosen at run time based on a configuration file and the installed middleware packages. In addition, the encapsulation enables us to fix known deficiencies of the Grid tools. In earlier versions of the LCG middleware, failed or never-ending data transfers were the main cause of job failures. In the ZEUS Grid toolkit, all data transfer commands are therefore run with a timeout and are retried automatically a configurable number of times in case of failure. Furthermore, the size of the file and a checksum are used to validate the file after every transfer, to ensure the integrity of the data. These measures have been found to reduce the failure rate considerably. A sketch of this retry logic is shown below.
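
The toolkit itself implements this logic in object-oriented Perl. Purely as an illustration of the idea, the following minimal sketch (written in PHP, the language used for the monitoring scripts described later in this note) wraps a copy command with a timeout, retries it a configurable number of times and validates the result by file size and checksum. All function names, commands and parameters below are hypothetical.

<?php
// Illustrative sketch only: the real ZEUS Grid toolkit is written in
// object-oriented Perl. This fragment merely mirrors the idea of running a
// transfer command with a timeout, retrying a configurable number of times
// and validating the output by size and checksum. All names, commands and
// parameters below are hypothetical.
function transfer_with_retries(string $cmd, string $localFile,
                               int $expectedSize, string $expectedMd5,
                               int $timeoutSec = 600, int $maxRetries = 3): bool
{
    for ($attempt = 1; $attempt <= $maxRetries; $attempt++) {
        // GNU "timeout" kills the copy command if it never finishes.
        exec(sprintf('timeout %d %s', $timeoutSec, $cmd), $output, $rc);

        clearstatcache();
        if ($rc === 0
            && is_file($localFile)
            && filesize($localFile) === $expectedSize
            && md5_file($localFile) === $expectedMd5) {
            return true;              // transfer succeeded and the file is intact
        }
        @unlink($localFile);          // remove any partial output before retrying
    }
    return false;                      // give up after $maxRetries failed attempts
}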

2.2 Layout of the Integrated Monte Carlo Production

To include Grid resources in the Monte Carlo production system completely transparently for the users, the existing interfaces for the submission of a Monte Carlo request and for querying its state are reused. As can be seen in Fig. 1, a central scheduler distributes incoming Monte Carlo requests either to the traditional Monte Carlo sites or to a gateway to the Grid resources. This setup allows us to preserve the resources of the HERA-1 production system and to reuse most of the existing scheduler. One node, set up as an LCG User Interface, acts as a bridge between the production system and the Grid world. Cron jobs process the incoming requests and keep track of the individual Grid jobs. A database is used to store the state of the Monte Carlo requests and their associated Grid jobs. All the code is written in object-oriented Perl and uses the ZEUS Grid toolkit to submit jobs. Since the ZEUS Grid toolkit is able to support different middleware projects simultaneously, we were able to establish submission to a non-LCG site, which belongs to the University of Wisconsin in the USA and runs the Grid2003 middleware. Jobs are submitted directly to this site using an implementation of our job submission interface based on the Globus toolkit [2].

For any new request, a cron job translates this request into a set of Grid jobs and copies the input file to the storage element at DESY. The LCG jobs are submitted using the Resource Broker at DESY. Every Grid job processes between 1000 and 2000 events, which corresponds to a run time of around 3 hours. The status of the jobs is updated regularly and failed jobs are automatically resubmitted. When a Grid job has finished, its output sandbox is retrieved and the log file is checked for error conditions. The result of this check is stored in an additional database table. If the job has passed the check, the Monte Carlo output file is transferred from the DESY storage element to the final tape storage. As both systems use the dCache mass storage subsystem [3,4], this is a very fast operation. When all Grid jobs for one request have finished, the gateway returns the request to the production system with all necessary bookkeeping files and an archive containing all log files of the individual jobs.

Figure 1: Layout of the integrated Monte Carlo production

When a Grid job starts running on a worker node, it first copies two archives, containing all the run scripts for the Monte Carlo production and the calibration constants, and the three executables belonging to the requested Monte Carlo version from a Grid storage element to the local disk. This avoids the requirement of pre-installed software. As each Grid job only processes part of the input file, the events needed for this job are then extracted from the input file. The executables are run consecutively and their log files are checked after each run. If no errors are found, the output file is finally copied to the DESY storage element. The execution of data handling operations and all commands are logged to the standard output. The same is done with errors that occurred during the data validation or were spotted in the log files. This output is analysed by the production system after the job output has been retrieved. A crucial problem for diagnostics on the Grid is the fact that a job waiting for data for a long time or hitting an endless loop in an executable is generally killed by the local batch system and delivers no output. If this happens, one does not obtain any information about the cause of the failure. Therefore, it is important to guarantee that the job finishes within the queue time limit in order to identify these problems. This is achieved with the ZEUS Grid toolkit by imposing a timeout on all commands executed on the worker node.

2.3 ZEUS Analysis on the Grid (ZAG)

The next step for the ZEUS Grid is to extend the usage of Grid resources to users' analysis jobs. This is much more difficult, as it is not a central production and the users might submit many different jobs with different requirements. A job submission framework has been developed based on the experience gained from the Monte Carlo production. This system is now in a testing phase. In the last two months a total of 3184 jobs were successfully submitted to various sites of the Grid, with a total runtime of 1162 days and 45637 MB of output. To give the users feedback, a flexible monitoring system is needed.

3 Monitoring the Job Submissions on the ZEUS Grid

Every job submitted to the ZEUS Analysis on the Grid (ZAG) system is also filed in a MySQL database. So, in order to check the status of a job, you either type the command edg-job-status or go to the MySQL database and type select user, state from anajobs where user="<name of user>";. The status of a ZAG job can be Ready (state=1), Submitted (=2), Cleared (=3), Done (=4) or Completed (=5). The database also contains information about the site the job is submitted to. You can check the site that the job is submitted to with the command select user, site from submissions where user="<name of user>";.

3.1 Summary of the Job Submissions on ZEUS Grid

To make it easier to check the status of the submitted jobs on the ZEUS Grid, I built a page where all the users can be informed about the status of their jobs. The page contains a simple drop-down menu, which includes all the users from the analysis job database using MySQL, and an Update button which can be used to refresh the page and update the status of the jobs. In the menu there is also an "all" option, which includes every job from every user. A minimal sketch of this page is given below.

Figure 2: Monitoring the users' jobs

3.2 Statistics of the Grid Use

The ZAG database also contains information about the size, the runtime and the result (whether it was successful or not) of the jobs of each user. Using these data I made another page that monitors the total number of completed jobs, the Grid job success rate, the runtime and the output size for every user. This page also gives you the option to select a specific period of time from which you want to extract the information.
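
The following minimal sketch illustrates how the summary page of section 3.1 can be put together from the queries quoted above. The anajobs table, its user and state columns and the state codes come from this note; the database name, connection credentials, file layout and the use of mysqli are assumptions made purely for illustration.

<?php
// Sketch of the job-status page (sections 3 and 3.1).
// Table/column names (anajobs.user, anajobs.state) and the state codes come
// from this note; the database name and credentials are assumptions.
$states = [1 => 'Ready', 2 => 'Submitted', 3 => 'Cleared',
           4 => 'Done',  5 => 'Completed'];

$db = new mysqli('localhost', 'zagread', 'secret', 'zag');   // assumed connection

// Drop-down menu with every user from the analysis job database plus "all",
// and an Update button that simply reloads the page.
echo '<form method="get"><select name="user"><option value="all">all</option>';
$users = $db->query('SELECT DISTINCT user FROM anajobs ORDER BY user');
while ($u = $users->fetch_assoc()) {
    $name = htmlspecialchars($u['user']);
    echo "<option value=\"$name\">$name</option>";
}
echo '</select> <input type="submit" value="Update"></form>';

// Status table for the selected user (or for everybody).
$user = $_GET['user'] ?? 'all';
if ($user === 'all') {
    $result = $db->query('SELECT user, state FROM anajobs');
} else {
    $stmt = $db->prepare('SELECT user, state FROM anajobs WHERE user = ?');
    $stmt->bind_param('s', $user);
    $stmt->execute();
    $result = $stmt->get_result();
}

echo "<table><tr><th>User</th><th>Status</th></tr>\n";
while ($row = $result->fetch_assoc()) {
    printf("<tr><td>%s</td><td>%s</td></tr>\n",
           htmlspecialchars($row['user']),
           $states[(int) $row['state']] ?? 'Unknown');
}
echo '</table>';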

Figure 3: Statistics of the ZEUS Grid use

The third page displays information about the sites that the jobs are submitted to. It also features the option to select a specific period of time from which to extract the information. If the success rate of a site in the chosen period is under 50%, the percentage is shown on a grey background. Each site has a link to another page, which contains information about the errors that caused the jobs to fail. A sketch of the corresponding per-site query is given below.

Figure 4: Statistics of the sites used on the ZEUS Grid
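
A minimal sketch of how this per-site summary can be produced is shown below. Only the submissions table and its user and site columns are mentioned in this note; the result, runtime, outputsize and subtime columns, the page for error messages, the database name and the credentials are assumptions made purely for illustration.

<?php
// Sketch of the per-site statistics page (section 3.2 and Fig. 4).
// The "submissions" table and its "site" column appear in this note; the
// result, runtime, outputsize and subtime columns, the error page and the
// database credentials are hypothetical.
$db = new mysqli('localhost', 'zagread', 'secret', 'zag');

$from = $_GET['from'] ?? '2006-07-01';     // chosen period (defaults are arbitrary)
$to   = $_GET['to']   ?? '2006-09-09';

$stmt = $db->prepare(
    'SELECT site,
            COUNT(*)             AS total,
            SUM(result = 1)      AS ok,
            SUM(runtime) / 86400 AS runtime_days,
            SUM(outputsize)      AS output_mb
       FROM submissions
      WHERE subtime BETWEEN ? AND ?
      GROUP BY site
      ORDER BY site');
$stmt->bind_param('ss', $from, $to);
$stmt->execute();
$rows = $stmt->get_result();

echo "<table><tr><th>Site</th><th>Jobs</th><th>Success rate</th>"
   . "<th>Runtime [days]</th><th>Output [MB]</th></tr>\n";
while ($r = $rows->fetch_assoc()) {
    $rate = $r['total'] > 0 ? 100.0 * $r['ok'] / $r['total'] : 0.0;
    // Sites with a success rate below 50% are shown on a grey background,
    // and each site name links to the page with its error messages.
    $style = $rate < 50 ? ' style="background-color:grey"' : '';
    printf("<tr><td><a href=\"errors.php?site=%s\">%s</a></td>"
         . "<td>%d</td><td%s>%.1f%%</td><td>%.1f</td><td>%d</td></tr>\n",
           urlencode($r['site']), htmlspecialchars($r['site']),
           $r['total'], $style, $rate, $r['runtime_days'], $r['output_mb']);
}
echo '</table>';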

Figure 5: Error messages of the selected site

4 Conclusions

The ZEUS Grid Analysis database contains valuable data about the job submissions to the ZEUS Analysis on the Grid (ZAG) system. These data include information about the size, the runtime and the result of the jobs that each user submits. The presented display system offers a helpful and easy way to gain information about job submissions on the ZEUS Grid. In general, it replaces any need to type MySQL or Grid commands, so users do not have to be Grid experts to get information about their jobs. The system also provides information about the past use of the ZEUS Analysis Grid.

5 Acknowledgments

I would like to thank my supervisor, Hartmut Stadie, for helping me and for all the time he spent solving my problems. I would also like to thank my parents, Gregory and Christina, and my brother Stamatis. You really made my work easier...

References

[1] H. Stadie et al., Monte Carlo mass production for the ZEUS experiment on the Grid, Nuclear Instruments and Methods in Physics Research A 559 (2006).
[2] I. Foster, C. Kesselman, Globus: a metacomputing infrastructure toolkit, Int. J. Supercomput. Appl. 11 (2) (1997) 115.
[3] P. Fuhrmann, dCache, LCG storage element and enhanced use cases, in: Proceedings of the International Conference on Computing in High Energy Physics, 2004.
[4] dcache:
