Monitoring the Usage of the ZEUS Analysis Grid


Stefanos Leontsinis
National Technical University of Athens

Summer Student Programme 2006, DESY Hamburg
Supervisor: Dr. Hartmut Stadie
September 9, 2006

Abstract

The Grid has already been used very successfully for Monte Carlo production for the ZEUS experiment. The natural next step is the submission of users' analysis jobs to the Grid. In the last two months a total of 3184 jobs were successfully submitted to various Grid sites, with a total runtime of 1162 days and an output size of 45637 MB. To give the users feedback, a flexible monitoring system is needed. The implementation of the display part of this system is described in this note. I developed a set of PHP scripts that display all the necessary information about the submitted jobs, allow the users to check the status of jobs submitted to the ZEUS Grid, and show information about the efficiency of the sites that the jobs are submitted to.

Contents

1 The Grid
  1.1 The Grid Structure
  1.2 Job Submission on the Grid
2 Running ZEUS Applications on the Grid
  2.1 ZEUS Grid Toolkit
  2.2 Layout of the Integrated Monte Carlo Production
  2.3 ZEUS Analysis on the Grid (ZAG)
3 Monitoring the Job Submissions on the ZEUS Grid
  3.1 Summary of the Job Submissions on the ZEUS Grid
  3.2 Statistics of the Grid Use
4 Conclusions
5 Acknowledgments

1 The Grid

The idea of the Grid goes back to the 1990s and is based on the principle of the electrical power grid. The power grid supplies electrical energy to its users in a fairly simple and standardised way: every user can use the grid by plugging in a device that is supplied with a specific voltage through a standardised plug. An equivalent model is used for computers. At this moment, millions of computers and storage devices are connected to the Internet worldwide. The only thing required now is an infrastructure and a standardised interface that provide transparent access to this computing power and storage space in a homogeneous (uniform) way. The Grid provides a service for the public use of this computing power and storage space on the Internet.

The Grid goes further than the usual connection among computers and aims at turning the worldwide computer network into one great computational tool. The Grid differs from the World Wide Web in that it provides the users not only with information, but with processing power and storage space as well. The power of the Grid shows in its applications. When a user wishes to run an application, the Grid locates the best available place for its execution and executes it there without any action required from the user. The Grid can also greatly facilitate the processing of large amounts of data coming from different computers: again, it locates the best available data source, without any action required by the user, and performs the respective processing. Additionally, this analysis can be done in collaboration with partners around the world, as the Grid connects every user as if they were working in a local intranet. An impressive property of the Grid is that the user does not need to know which computing resources are needed for a task or where they are located. The only requirement is that the Grid provides the computing power and the storage space through a standardised interface.

The framework (structure) of the Grid usually consists of levels, each serving a specific task. In general, higher levels focus on the user, while lower levels focus on the computers and networks. The middleware level provides the tools for the participation of the different elements (servers, storage elements, networks) in the uniform Grid environment. The middleware represents the Grid intelligence that unites the remaining elements and is characterised as the Grid brain. At the base of the whole infrastructure sits the network that ensures the connection of the Grid resources. Above the network one finds the resource level, which consists of the components of the Grid such as computers, storage elements, electronic data catalogues, and even detectors and telescopes that can be connected directly to the network. The highest level of the Grid infrastructure is the application level, which incorporates all the different user applications (physics, engineering, economics, etc.), portals and development tools that support these applications. This is the level through which the Grid users cooperate with each other.

1.1 The Grid Structure

The Grid consists of the following components:

Resource Broker: This unit receives the commands of the users and examines the information catalogues for the appropriate resources.

BDII: This is the information catalogue that collects the information about the available resources. The catalogue is likely to reside on the same machine as the Resource Broker.

Replica Manager: This is used for coordinating the copying of files on the testbed during their transfer from one Storage Element to another. This is useful for data redundancy, but also for moving data closer to the machines that will do the processing.

Replica Catalogue: Collects information about the data copies. A logical file can be matched with one or more physical files which are themselves copies of the same data. Therefore, the logical file name can be related to one or more physical file names. The list containing the copied files can be located on the same machine as the Replica Manager.

Computing Element (CE): The Computing Element is the unit which collects the requirements for a specific task. The Computing Element then delivers these requirements to the Worker Nodes (WN), which afterwards execute the task itself. The Computing Element provides an interface to the local batch system of a cluster. One Computing Element is able to control one or more Worker Nodes. Additionally, a Worker Node can be set up on the same machine as the Computing Element.

Worker Node (WN): The Worker Node is the machine which processes the input data.

Storage Element (SE): The Storage Element is the machine which provides storage space to the testbed. It also provides a uniform interface to different storage systems.

User Interface (UI): This is the machine which gives the users access to the testbed.

1.2 Job Submission on the Grid

A job submission can be described in seven steps divided into two cycles. The first cycle involves the registration procedure and the second is responsible for the actual job submission. The registration procedure consists of three steps; it has to be done only the first time one enters the Grid and then only has to be renewed once a year. The first step is to join a Virtual Organization (VO). Virtual Organizations are distributed communities. High Energy Physics (e.g. the LHC VO), Earth Observation (e.g. the EO VO) and Biology (e.g. the Biomed VO) are examples of such communities, consisting of several institutions and individuals sharing the same interests and the same scientific goals. They benefit greatly from pooling their computing resources, data and scientific instruments. Joining a Virtual Organization provides access to the Grid facility. In order to prove your identity on the Grid facility, you need a certificate from one of the Certification Authorities. The request for a certificate from a Certification Authority can be made by filling in and submitting an online form. The Certification Authority will check your identity, issue the certificate and give it back to you via the Web. Once you have your certificate, you need to install it on the User Interface (the machine you will use to access the Grid facility). With this step the registration procedure ends and the actual job submission can take place. Every time you start a session on the Grid, you need to create temporary credentials. This is necessary to avoid exposing your certificate to an insecure network. Proxy credentials have a default lifetime of 12 hours.

During this period you can work on the testbed. To run a job on the LCG/EGEE Grid facility, you have to describe it in the Job Description Language (JDL). JDL specifies job characteristics such as the application to use, the input data, the required resources, etc. Once you have the JDL file for your job, you can submit it to the Resource Broker. At the same time, the Logging & Book-keeping service logs the job as submitted. Based on the information given in the JDL file, the Resource Broker queries the Information Service and the Replica Catalogue to check the available resources. The Replica Catalogue and the Information Service hold information on the current status of all the sites. The Resource Broker uses this information to match the job to a suitable Computing Element; during this phase, the job is in the WAITING status. Once the Resource Broker has made its choice, i.e. it has found a suitable Computing Element and the Storage Element with the necessary data, it informs the Logging & Book-keeping service of its decision and submits the job to the selected Computing Element. The Computing Element gets any necessary data from the Storage Element, and the job is eventually executed on the chosen Computing Element. During all of this, you can check the status of your job by contacting the Logging & Book-keeping service. When the execution of the job has completed on the Computing Element, the Computing Element transfers the output to the Resource Broker, and you can then retrieve your output from the Resource Broker. When you are finished, the book-keeping information is purged.

2 Running ZEUS Applications on the Grid

Around 70% of the ZEUS Monte Carlo production is done on the ZEUS Grid. In the past year more than 240 million events were produced at various Grid sites, with a total output size of more than 14 TB. The Monte Carlo production package for ZEUS is an application built on top of the ZEUS Grid Toolkit [1].

2.1 ZEUS Grid Toolkit

The ZEUS Grid toolkit is the basic toolkit for the implementation of the new production system. It is written in object-oriented Perl and consists of a set of classes for basic data structures, job submission, data transfer, and output logging and validation. The parts of the toolkit that use Grid client tools directly are encapsulated, because a variety of client tools to access Grid services exist at different sites and new projects are being developed. However, the main concepts for data handling and job submission do not change. Consequently, the ZEUS Grid toolkit implements the Strategy pattern and adds an additional layer of abstraction to the usage of Grid client tools. Abstract interfaces are defined for data handling and job operations, and the correct middleware implementation is chosen at run time based on a configuration file and the installed middleware packages. In addition, the encapsulation enables us to fix known deficiencies of the Grid tools. In earlier versions of the LCG middleware, failed or never-ending data transfers were the main cause of job failures. In the ZEUS Grid toolkit, all data transfer commands are therefore run with a timeout and are retried automatically a configurable number of times in case of failure. Furthermore, the size of the file and a checksum are used to validate the file after every transfer to ensure the integrity of the data. These measures have been found to reduce the failure rate considerably.
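The toolkit itself is written in object-oriented Perl and its actual interfaces are not reproduced here. The following sketch, written in PHP like the monitoring scripts of this note, only illustrates the retry-with-timeout and post-transfer validation idea described above; the command string, the use of the external timeout utility and the helper functions are assumptions, not the toolkit's API.

    <?php
    // Illustrative sketch only (the real ZEUS Grid toolkit is written in Perl).
    // Run a data transfer command with a timeout, retry it a configurable number
    // of times and validate the copied file by size and checksum afterwards.

    function runWithTimeout(string $command, int $timeoutSeconds): int
    {
        // Wrap the command with the external 'timeout' utility (an assumption)
        // so that a hanging transfer is killed instead of blocking the job.
        exec(sprintf('timeout %d %s', $timeoutSeconds, $command), $output, $exitCode);
        return $exitCode;
    }

    function transferAndValidate(string $command, string $localFile,
                                 int $expectedSize, string $expectedMd5,
                                 int $timeoutSeconds = 600, int $maxRetries = 3): bool
    {
        for ($attempt = 1; $attempt <= $maxRetries; $attempt++) {
            if (runWithTimeout($command, $timeoutSeconds) !== 0) {
                continue;   // transfer failed or timed out: try again
            }
            clearstatcache();
            // Validate size and checksum to ensure the integrity of the data.
            if (filesize($localFile) === $expectedSize
                && md5_file($localFile) === $expectedMd5) {
                return true;
            }
        }
        return false;       // give up after the configured number of retries
    }

A job script could then call transferAndValidate() for every file transfer and log a clear error message if it returns false.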

2.2 Layout of the Integrated Monte Carlo Production

To include Grid resources in the Monte Carlo production system completely transparently for the users, the existing interfaces for submitting a Monte Carlo request and querying its state are reused. As can be seen in Fig. 1, a central scheduler distributes incoming Monte Carlo requests either to the traditional Monte Carlo sites or to a gateway to the Grid resources. This setup allows us to preserve the resources of the HERA-I production system and to reuse most of the existing scheduler. One node set up as an LCG User Interface acts as a bridge between the production system and the Grid world. Cron jobs process the incoming requests and keep track of the individual Grid jobs. A database is used to store the state of the Monte Carlo requests and their associated Grid jobs. All the code is written in object-oriented Perl and uses the ZEUS Grid toolkit to submit jobs. Since the ZEUS Grid toolkit is able to support different middleware projects simultaneously, we were able to establish submission to a non-LCG site, which belongs to the University of Wisconsin in the USA and runs the Grid2003 middleware. Jobs are submitted directly to this site using an implementation of our job submission interface based on the Globus toolkit [2].

For any new request, a cron job translates this request into a set of Grid jobs and copies the input file to the storage element at DESY. The LCG jobs are submitted using the resource broker at DESY. Every Grid job processes between 1000 and 2000 events, which corresponds to a run time of around 3 h (a sketch of this splitting is shown below). The status of the jobs is updated regularly and failed jobs are automatically resubmitted. When a Grid job has finished, its output sandbox is retrieved and the log file is checked for error conditions. The result of this check is stored in an additional database table. If the job has passed the check, the Monte Carlo output file is transferred from the DESY storage element to the final tape storage. As both systems use the dCache mass storage subsystem [3,4], this is a very fast operation. When all Grid jobs for one request have finished, the gateway returns the request to the production system with all necessary bookkeeping files and an archive containing all log files of the individual jobs.
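The following PHP sketch illustrates how a request could be translated into Grid jobs of at most 2000 events each. It is not the gateway's actual (Perl) code; the function name, the chunk size and the example request size are hypothetical.

    <?php
    // Illustrative sketch only: translate a request for a given number of events
    // into Grid jobs of at most 2000 events each.

    function splitRequest(int $totalEvents, int $eventsPerJob = 2000): array
    {
        $jobs = [];
        for ($first = 1; $first <= $totalEvents; $first += $eventsPerJob) {
            $jobs[] = ['first_event' => $first,
                       'last_event'  => min($first + $eventsPerJob - 1, $totalEvents)];
        }
        return $jobs;
    }

    // A hypothetical request for 25000 events is split into 13 Grid jobs.
    echo count(splitRequest(25000)) . " jobs\n";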

Figure 1: Layout of the integrated Monte Carlo production.

When a Grid job starts running on a worker node, it first copies two archives, containing all the run scripts for the Monte Carlo production and the calibration constants, and the three executables belonging to the requested Monte Carlo version from a Grid storage element to the local disk. This avoids the requirement of pre-installed software. As each Grid job only processes part of the input file, the events needed for this job are then extracted from the input file. The executables are run consecutively and their log files are checked after each run. If no errors are found, the output file is finally copied to the DESY storage element. The execution of data handling operations and all commands are logged to the standard output. The same is done with errors that occurred during the data validation or were spotted in the log files. This output is analysed by the production system after the job output has been retrieved. A crucial problem for diagnostics on the Grid is the fact that a job waiting for data for a long time, or hitting an endless loop in an executable, is generally killed by the local batch system and delivers no output. If this happens, one does not obtain any information about the cause of the failure. Therefore, it is important to guarantee that the job finishes within the queue time limit so that these problems can be identified. This is achieved with the ZEUS Grid toolkit by imposing a timeout on all commands executed on the worker node.

2.3 ZEUS Analysis on the Grid (ZAG)

The next step for the ZEUS Grid is to extend the usage of Grid resources to users' analysis jobs. This is much more difficult, as it is not a central production and the users may submit many different jobs with different requirements. A job submission framework has been developed based on the experience gained from the Monte Carlo production. This system is now in a testing phase. In the last two months a total of 3184 jobs were successfully submitted to various Grid sites, with a total runtime of 1162 days and an output size of 45637 MB. To give the users feedback, a flexible monitoring system is needed.

3 Monitoring the Job Submissions on the ZEUS Grid

Every job submitted to the ZEUS Analysis on the Grid (ZAG) system is also filed in a MySQL database. So, in order to check the status of a job, you either type the command edg-job-status or go to the MySQL database and type select user, state from anajobs where user="<name of user>";. The status of a ZAG job can be Ready (state=1), Submitted (=2), Cleared (=3), Done (=4) or Completed (=5). The database also contains information about the site a job is submitted to. You can check the site that a job was submitted to with the command select user, site from submissions where user="<name of user>";.

3.1 Summary of the Job Submissions on the ZEUS Grid

In order to make it easier to check the status of the submitted jobs on the ZEUS Grid, I built a page where all the users can be informed about the status of their jobs. The page contains a simple drop-down menu, filled with all the users from the analysis job database using MySQL, and an Update button which can be used to refresh the page and update the status of the jobs. The menu also offers an "all" option, which includes every job from every user.

Figure 2: Monitoring the users' jobs.

3.2 Statistics of the Grid Use

The ZAG database also contains information about the size, the runtime and the result (whether it was successful or not) of the jobs of each user. Using these data I made another page that shows the total number of completed jobs, the Grid job success rate, the runtime and the output size for every user. This page also gives you the option to select a specific period of time from which to extract the information.
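As a minimal sketch of how such a summary could be produced (this is not the actual ZAG script), the following PHP code queries per-user totals for a chosen period. The table anajobs and its user and state columns are taken from the description above; the submitted, runtime and outputsize columns, the interpretation of state 5 (Completed) as a successful job, and the connection parameters are assumptions.

    <?php
    // Illustrative sketch only, not the actual ZAG monitoring script.
    // Summarise, per user, the jobs submitted in a chosen period.
    // Assumed: the "submitted", "runtime" and "outputsize" columns, the use of
    // state 5 ("Completed") as the definition of a successful job, and the
    // database connection parameters.

    $db = new mysqli('localhost', 'zagreader', 'secret', 'zag');

    $from = '2006-07-01';
    $to   = '2006-08-31';

    $stmt = $db->prepare(
        'SELECT user,
                COUNT(*)        AS jobs,
                SUM(state = 5)  AS completed,
                SUM(runtime)    AS total_runtime,
                SUM(outputsize) AS total_output
         FROM anajobs
         WHERE submitted BETWEEN ? AND ?
         GROUP BY user');
    $stmt->bind_param('ss', $from, $to);
    $stmt->execute();
    $result = $stmt->get_result();

    // One line per user: number of jobs, success rate, runtime and output size.
    while ($row = $result->fetch_assoc()) {
        $rate = $row['jobs'] > 0 ? 100.0 * $row['completed'] / $row['jobs'] : 0.0;
        printf("%-12s %5d jobs  %5.1f%% success  %8d s  %8d MB\n",
               $row['user'], $row['jobs'], $rate,
               $row['total_runtime'], $row['total_output']);
    }

A query of this kind could fill the per-user table of the statistics page shown in Fig. 3.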

Figure 3: Statistics of the ZEUS Grid use.

The third page displays information about the sites that the jobs are submitted to. It also offers the option to select a specific period of time from which to extract the information. If the success rate of a site in the chosen period is under 50%, the percentage is shown on a grey background (a minimal sketch of this highlighting follows Fig. 4). Each site has a link to another page, which contains information about the errors that caused the failed jobs.

Figure 4: Site statistics of the ZEUS Grid use.
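The grey highlighting can be implemented in a few lines of PHP. The following sketch renders one success-rate cell of the site table; the helper function and its inputs are hypothetical, and in the real pages the job counts come from the ZAG MySQL database.

    <?php
    // Illustrative sketch: render one cell of the site-statistics table and put
    // the success rate on a grey background when it is below 50%.

    function successRateCell(int $successfulJobs, int $totalJobs): string
    {
        $rate = $totalJobs > 0 ? 100.0 * $successfulJobs / $totalJobs : 0.0;
        $style = $rate < 50.0 ? ' style="background-color: grey"' : '';
        return sprintf('<td%s>%.1f%%</td>', $style, $rate);
    }

    // Example: a site with 12 successful jobs out of 30 submissions.
    echo successRateCell(12, 30) . "\n";   // <td style="background-color: grey">40.0%</td>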

Figure 5: Error messages of the selected site.

4 Conclusions

The ZEUS Grid analysis database contains valuable data about the job submissions to the ZEUS Analysis on the Grid (ZAG) system. These data include information about the size, the runtime and the result of the jobs that each user submits. The presented display system offers a helpful and easy way to obtain information about job submissions on the ZEUS Grid. In general, it replaces the need to type MySQL or Grid commands, so users do not have to be Grid experts to get information about their jobs. The system also provides information about the past use of the ZEUS Analysis Grid.

5 Acknowledgments

I would like to thank my supervisor, Hartmut Stadie, for helping me and for all the time he spent solving my problems. I would also like to thank my parents, Gregory and Christina, and my brother Stamatis. You really made my work easier...

References

[1] H. Stadie et al., Monte Carlo mass production for the ZEUS experiment on the Grid, Nuclear Instruments and Methods in Physics Research A 559 (2006).

[2] I. Foster, C. Kesselman, Globus: a metacomputing infrastructure toolkit, Int. J. Supercomput. Appl. 11 (2) (1997) 115.

[3] P. Fuhrmann, dCache, LCG storage element and enhanced use cases, in: Proceedings of the International Conference on Computing in High Energy Physics, 2004.

[4] dCache.
