Data Management for the World's Largest Machine


Sigve Haug (1), Farid Ould-Saada (2), Katarina Pajchel (2), and Alexander L. Read (2)

(1) Laboratory for High Energy Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland, sigve.haug@lhep.unibe.ch
(2) Department of Physics, University of Oslo, Postboks 1048 Blindern, NO-0316 Oslo, Norway, {farid.ould-saada, katarina.pajchel, a.l.read}@fys.uio.no

Abstract. The world's largest machine, the Large Hadron Collider, will have four detectors whose output is expected to answer fundamental questions about the universe. The ATLAS detector is expected to produce 3.2 PB of data per year, which will be distributed to storage elements all over the world. In 2008 the resource need is estimated to be 16.9 PB of tape, 25.4 PB of disk, and 50 MSI2k of CPU. Grids are used to simulate, access, and process the data. Sites in several European and non-European countries are connected with the Advanced Resource Connector (ARC) middleware of NorduGrid. In the first half of 2006 about 10^5 simulation jobs with 27 TB of distributed output, organized in some 10^5 files and 740 datasets, were performed on this grid. ARC's data management capabilities, the Globus Replica Location Service, and ATLAS software were combined to achieve a comprehensive distributed data management system.

1 Introduction

At the end of 2007 the Large Hadron Collider (LHC) in Geneva, often referred to as the world's largest machine, will start to operate [1]. Its four detectors aim to collect data which is expected to give some answers to fundamental questions about the universe, e.g. what is the origin of mass. The data acquisition system of one of these detectors, the ATLAS detector, will write the recorded information of the proton-proton collision events at a rate of 200 events per second [2]. Each event's information will require 1.6 MB of storage space [3]. Taking the operating time of the machine into account, this yields 3.2 PB of recorded data per year (a back-of-the-envelope estimate is sketched below). Simulated and reprocessed data come in addition. The estimated computing resource needs for 2008 are 16.9 PB of tape storage, 25.4 PB of disk storage and 50.6 MSI2k of CPU. The ATLAS experiment uses three grids to store, replicate, simulate, and process the data all over the planet: the LHC Computing Grid (LCG), the Open Science Grid (OSG), and NorduGrid [4,5,6].
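The yearly raw-data volume quoted above follows directly from the trigger rate and event size. A minimal sketch in Python (the implementation language of the production system described below); the figure of roughly 10^7 effective seconds of data taking per year is our own assumption, not a number from the text.

```python
# Rough cross-check of the 3.2 PB/year figure quoted above.
# Assumption: ~1e7 seconds of effective data taking per year
# (an order-of-magnitude guess, not stated in the paper).
event_rate_hz = 200          # events written per second [2]
event_size_mb = 1.6          # storage per event in MB [3]
live_seconds_per_year = 1e7  # assumed effective operating time

raw_data_mb = event_rate_hz * event_size_mb * live_seconds_per_year
raw_data_pb = raw_data_mb / 1e9  # 1 PB = 1e9 MB
print(f"Estimated raw data per year: {raw_data_pb:.1f} PB")  # ~3.2 PB
```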

Here we report on recent experience with the present distributed simulation and data management system used by the ATLAS experiment on NorduGrid. A geographical map of the sites connected by NorduGrid's middleware, the Advanced Resource Connector (ARC), is shown in Figure 1. The network of sites which also have the necessary ATLAS software installed, and which are thus capable of running ATLAS computing tasks, will in the following be called the ATLAS ARC Grid.

Fig. 1. Geographical snapshot of the sites connected with the ARC middleware (as of Dec. 2005). Many sites are also organized into national and/or organizational grids, e.g. Swegrid and the Swiss ATLAS Grid.

First, a description of the distributed simulation and data management system follows. Second, a report on the system performance in the period from November 2005 to June 2006 is presented. Then future usage, limitations, and needed improvements are commented on.

Finally, we recapitulate the performance of the ATLAS ARC Grid in this period and draw some conclusions.

2 The Simulation and Data Management System

The distributed simulation and data management system on the ATLAS ARC Grid can be divided into three main parts. First, there is the production database, which is used for the definition and tracking of the simulation tasks [7]. Second, there is the Supervisor-Executor instance, which pulls tasks from the production database and submits them to the ATLAS ARC Grid. Finally, there are the ATLAS data management databases, which collect the logical file names into datasets [8]. The Supervisor is common to all three grids. The Executor is unique to each grid and contains the code to submit, monitor, post-process and clean the grid jobs. In the case of the ATLAS ARC Grid, this simple structure relies on the full ARC grid infrastructure, in particular a Globus Replica Location Service (RLS) which maps logical to physical file names [9].

The production database is an Oracle instance where job definitions, job input locations and job output names are kept. Furthermore, the jobs' estimated resource needs, status, etc. are stored there. The Supervisor-Executor is a Python application which is run by a user whose grid certificate is accepted at all ATLAS ARC sites. The Supervisor communicates with the production database and passes simulation jobs to the Executor in XML format. The Executor then translates the job descriptions into ARC's extended resource specification language (XRSL); a schematic example is given at the end of this section. Job brokering is performed with attributes specified in the XRSL job description and information gathered from the computing clusters with the ARC information system. In particular, clusters have to have the required ATLAS run time environment installed. This is an experiment-specific software package of about 5 GB which is frequently released. When a suitable cluster is found, the job is submitted. The ARC grid manager on the front-end of the cluster downloads the input files, submits the jobs to the local batch system, monitors them to their completion, and uploads the output of successful jobs. In this process the RLS is used to index both input and output files. The physical storage element (SE) for an output file is provided automatically by a storage service which obtains a list of potential SEs indexed by the RLS. Thus neither the grid job executing on the batch node nor the Executor does any data movement, and neither needs to know explicitly where the physical inputs come from or where the physical outputs are stored. When the Executor finds a job finished, it registers the metadata of the job output files, e.g. a globally unique identifier and creation date, in the RLS. It sets the desired grid access control list (gacl) on the files and reports back to the Supervisor and the production database.

Finally, the production database is periodically queried for finished tasks. For these, the logical file names and their dataset affiliation are retrieved in order to register the available datasets, their file content, state and locations in the ATLAS dataset databases. Hence, datasets can subsequently be looked up for replication and analysis.
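As an illustration of the translation step mentioned above, the sketch below renders a simplified job description as an XRSL string, in Python, the Executor's implementation language. The attribute names (executable, arguments, inputFiles, outputFiles, runTimeEnvironment, cpuTime, jobName) are standard XRSL usage; the helper function, the job parameters and the RLS URLs are hypothetical and only meant to show the shape of the translation, not the actual Executor code.

```python
# Hypothetical, simplified rendition of the XML-to-XRSL translation step.
# Attribute names are standard XRSL; everything else is illustrative.

def to_xrsl(job: dict) -> str:
    """Render a simplified job description as an XRSL string."""
    input_files = "".join(
        f'("{name}" "{url}")' for name, url in job["inputs"].items()
    )
    # An rls:// destination lets the grid manager upload each output via a
    # storage service chosen from the SEs indexed in the RLS (see Sect. 2).
    output_files = "".join(
        f'("{name}" "{job["rls"]}/{name}")' for name in job["outputs"]
    )
    return (
        f'&(executable="{job["executable"]}")'
        f'(arguments="{job["arguments"]}")'
        f'(inputFiles={input_files})'
        f'(outputFiles={output_files})'
        f'(runTimeEnvironment="{job["rte"]}")'
        f'(cpuTime="{job["cputime_minutes"]}")'
        f'(jobName="{job["name"]}")'
    )

# Example (all values invented for illustration):
job = {
    "name": "csc.005001.simul._00042",
    "executable": "run_atlas.sh",
    "arguments": "simul 5001 42",
    "rte": "APPS/HEP/ATLAS-11.0.5",
    "cputime_minutes": 1440,
    "inputs": {"evgen.pool.root": "rls://atlasrls.nordugrid.org/evgen.pool.root"},
    "outputs": ["simul.pool.root"],
    "rls": "rls://atlasrls.nordugrid.org",
}
print(to_xrsl(job))
```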

The dataset catalogs provide the logical file names and the indexing service (from among the more than 20 index servers of the three grids of which the ATLAS computing grid is comprised) for the dataset to which a logical file is attached. The indexing service, i.e. the RLS on the ATLAS ARC Grid, then provides the physical file location.

In short, the production on the ATLAS ARC Grid is by design a fully automatic and lightweight system which takes advantage of the inherent job-brokering and data management capabilities of the ARC middleware (the RLS for mapping logical to physical file names and for storing metadata about files) and of the ATLAS distributed data management system (a set of catalogs allowing replication and analysis on a dataset basis). See References [10] and [11] for detailed descriptions of the ATLAS and ARC data management systems; a schematic sketch of the dataset-to-replica lookup is given after Table 1 below.

3 Recent System Performance on the ATLAS ARC Grid

The preparation for the ATLAS experiment relies on detailed simulations of the physics processes, from the proton-proton collision, via the particle propagation through the detector material, to the full reconstruction of the particles' tracks. To a large extent this has been achieved in carefully planned periods of operation, so-called Data Challenges. Many ARC sites have been providing resources for these large-scale production operations [12].

Table 1. ARC clusters which contributed to the ATLAS simulations in the period from November 2005 to June 2006. The number of jobs per site and the percentage of successful jobs are shown.

Cluster                        Number of jobs   Efficiency
ingrid.hpc2n.umu.se
benedict.grid.aau.dk
hive.unicc.chalmers.se
pikolit.ijs.si
bluesmoke.nsc.liu.se
hagrid.it.uu.se
grid00.unige.ch
morpheus.dcgc.dk
grid.uio.no
lheppc10.unibe.ch
hypatia.uio.no
sigrid.lunarc.lu.se
alice.grid.upjs.sk
norgrid.ntnu.no
grid01.unige.ch
norgrid.bccs.no
grid.tsl.uu.se
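The sketch referenced above spells out the two-step lookup described in Section 2: the dataset catalog returns the logical file names and the responsible index service, and the index service (the RLS on the ATLAS ARC Grid) maps each logical file name to its physical replicas. The catalog and RLS objects below are hypothetical stand-ins, not the actual DQ2 or RLS client interfaces.

```python
# Illustrative two-step resolution of a dataset to physical file locations:
# dataset catalog -> logical file names + index service,
# index service (RLS) -> physical file names.
# The catalog objects below are toy stand-ins, not real client APIs.

from typing import Dict, List


def resolve_dataset(dataset: str,
                    dataset_catalog: Dict[str, dict],
                    rls_index: Dict[str, List[str]]) -> Dict[str, List[str]]:
    """Return a mapping: logical file name -> list of physical replicas."""
    entry = dataset_catalog[dataset]   # step 1: dataset catalog lookup
    lfns = entry["lfns"]               # logical file names in the dataset
    # entry["index_service"] would name one of the >20 index servers;
    # on the ATLAS ARC Grid this is the RLS, modelled here as a dict.
    return {lfn: rls_index.get(lfn, []) for lfn in lfns}   # step 2: RLS


# Toy example (all names invented):
dataset_catalog = {
    "csc11.005001.simul": {
        "index_service": "rls://atlasrls.nordugrid.org",
        "lfns": ["simul._00001.pool.root", "simul._00002.pool.root"],
    }
}
rls_index = {
    "simul._00001.pool.root": ["gsiftp://grid.uio.no/atlas/simul._00001.pool.root"],
    "simul._00002.pool.root": ["gsiftp://sigrid.lunarc.lu.se/atlas/simul._00002.pool.root"],
}
print(resolve_dataset("csc11.005001.simul", dataset_catalog, rls_index))
```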

At the present time the third Data Challenge, or the Computing System Commissioning (CSC), is entering a phase of more or less constant production. As part of this constant production, about 10^5 simulation jobs were run on ATLAS-enabled ARC sites in the period from mid-November 2005 to mid-June 2006, where the end date simply reflects the time of this report. Up to 17 clusters comprising about 1000 CPUs were used as a single resource for these jobs. In Table 1 the clusters and their executed job shares are listed. Depending on their size, access policy, and competition with local users, the number of jobs per cluster varies. In this period six countries provided resources. The Slovenian cluster, pikolit.ijs.si, was the largest contributor, followed by the Swedish resources. The best clusters have efficiencies close to 90% (total ATLAS and grid middleware efficiency). This number reflects what can be expected in a heterogeneous grid environment where not only different jobs and evolving software are used, but the operational efficiency of the numerous computing clusters and storage services is also a significant factor.

Table 2. ARC storage elements and their contributions to the ATLAS Computing System Commissioning. The number of files stored by the ATLAS production in the period is shown in the third column; the fourth lists the total space occupied by these files. The numbers were extracted from the Replica Location Service rls://atlasrls.nordugrid.org.

Storage Element                     Location     Files   TB
ingrid.hpc2n.umu.se                 Umeaa
se1.hpc2n.umu.se                    Umeaa
ss2.hpc2n.umu.se                    Umeaa
ss1.hpc2n.umu.se                    Umeaa
hive-se2.unicc.chalmers.se          Goteborg
harry.hagrid.it.uu.se               Uppsala
hagrid.it.uu.se                     Uppsala
storage2.bluesmoke.nsc.liu.se       Linkoping
sigrid.lunarc.lu.se                 Lund
swelanka1.it.uu.se                  Sri Lanka    1       < 0.1
grid.uio.no                         Oslo         856     < 0.1
grid.ift.uib.no                     Bergen       1       < 0.1
morpheus.dcgc.dk                    Aalborg      252     < 0.1
benedict.grid.aau.dk                Aalborg
pikolit.ijs.si:2811                 Slovenia
pikolit.ijs.si                      Slovenia

In Table 2 the number of output files and their integrated sizes are listed according to storage elements and locations. About 3 x 10^5 files with a total of 27 TB were produced and stored on disks at 11 sites in five different countries.

This gives an average file size of 90 MB. The integrated storage contribution per country is shown in Figure 2 (this distribution is not representative of the previous data challenges).

Fig. 2. TB per country. The graph visualizes the numbers in Table 2. In the period from November 2005 to June 2006, Sweden and Slovenia were the largest storage contributors to the ATLAS Computing System Commissioning. Only ARC storage is considered.

In the ATLAS production of simulated data (future data analysis will produce a different and more chaotic pattern), simulation is done in three steps, and the input and output sizes vary from step to step. In the first step the physics of the proton-proton collisions is simulated, so-called event generation. These jobs have practically no input and produce about 0.1 GB of output per job. In the second step the detector response to the particle interactions is simulated. These jobs use the output from the first step as input and produce about 1 GB of output per job. This output is in turn used as input for the last step, in which the reconstruction of the detector response is performed. A reconstruction job takes about 10 GB of input in 10 files and produces an output of typically 1 GB (these nominal sizes are summarized in the sketch below). In order to minimize the number of files, it is foreseen to increase the file sizes (from 1 to 10 GB) as network capacity, disk sizes and tape systems evolve.

The outputs are normally replicated to at least one other storage element in one of the other grids and, in the case of reconstruction outputs (the starting point of most physics analyses), to all the other large computing sites spread throughout the ATLAS grid. The output remains on the storage elements until a central ATLAS decision is made about deletion, most probably after several years.
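The sketch referenced above summarizes the nominal per-job input and output sizes of the three production steps. Only the numbers come from the text; the data structure, the helper function and the example job mix are illustrative assumptions.

```python
# Nominal per-job input/output sizes of the three ATLAS production steps,
# as quoted in the text (all sizes in GB). The structure and the helper
# below are illustrative; only the numbers come from the paper.
STEPS = {
    "evgen": {"input_gb": 0.0,  "input_files": 0,  "output_gb": 0.1},
    "simul": {"input_gb": 0.1,  "input_files": 1,  "output_gb": 1.0},
    "recon": {"input_gb": 10.0, "input_files": 10, "output_gb": 1.0},
}

def total_output_gb(job_counts: dict) -> float:
    """Total primary output volume for a given mix of jobs per step."""
    return sum(n * STEPS[step]["output_gb"] for step, n in job_counts.items())

# An assumed (invented) job mix, just to exercise the numbers:
print(total_output_gb({"evgen": 10_000, "simul": 50_000, "recon": 5_000}))
```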

Finally, the output files were logically collected into datasets, the objects of analysis and replication. The ATLAS files produced in this period and stored on ARC storage elements belong to 739 datasets. The average number of files per dataset was roughly 400, with the actual numbers starting from about 50. Table 3 shows the categories of datasets and their respective shares of the totals. The numbers in the ARC column were collected with the ATLAS DQ2 client, the numbers in the Total column with the PANDA monitor. Since in the considered period the ATLAS ARC Grid's contribution to the total ATLAS Grid production is estimated to have been about 11 to 13%, the numbers indicate that rather short jobs, as opposed to average and long ones, were processed (the Nordic share of the ATLAS computing resources is 7.5%, according to a memorandum of understanding).

Table 3. ATLAS datasets on ARC storage elements.

Category   ARC   Total   ARC/Total   Description
All                                  CSC + CTB + MC
CSC                                  Computing System Commissioning
CTB                                  Combined Test Beam production
MC                                   MC production

4 Perspective, Limitations and Improvements

The limitations of the system must be considered in the context of its desired capabilities. At the moment the system manages some 10^3 jobs per day, where each job typically needs less than a day to finish. The number of output files is about three times larger. In order to provide the ATLAS experiment with a significant production grid, the ATLAS ARC Grid should aim to cope with job numbers another order of magnitude larger. In this perspective the ATLAS ARC Grid has no fundamental scaling limitations. However, in order to meet this ambition, several improvements are needed.

First, the available amount of resources must increase; the present operation almost exhausts the existing resources. Since the resources are shared and increasingly attractive to users, fair-sharing of the resources between local and grid use, and between different grid users, needs to be implemented. At the moment local users always have implicit first priority, and grid users are often mapped to a single local account, so that they are effectively treated on a first-come, first-served basis.

Second, the crucial Replica Location Service provides the desired functionality, with mapping from logical to physical file names, certificate authentication and bulk operations, and is expected to be able to handle the planned scaling-up of the system.

However, the lack of perfect stability is an important problem which remains to be solved. Meanwhile, the persons running the Supervisor-Executor instances should probably have some administration privileges, e.g. the possibility to restart the service.

Third, further development should aim at tolerating several hours of database unavailability. Both the production database and the data management databases now and then have some hours of downtime. This should not cause problems other than delays in database registrations.

Continuous improvements in the ARC middleware ease the operation. However, the ATLAS ARC Grid comprises many independent clusters which are in production mode and not dedicated to ATLAS, so it is impractical to negotiate frequent middleware upgrades on all of them. Hence, the future system should rely as much as possible on the present features.

5 Conclusions

As part of the preparations for the ATLAS experiment at the Large Hadron Collider, large amounts of data are simulated on grids. The ATLAS ARC Grid, i.e. the sites connected with NorduGrid's Advanced Resource Connector which have ATLAS software installed and configured for use by grid jobs, now contributes continuously to this global effort. In the period from November 2005 to June 2006 about 3 x 10^5 output files were produced on the ATLAS ARC Grid. Up to 17 sites in five different countries were used as a single batch facility to run about 10^5 jobs.

Compared to previous usage, another layer of organization was introduced in the data management system. This enabled the concept of datasets, i.e. conglomerations of files, which are used as objects for data analysis and replication. The 27 TB of output was collected into 740 datasets, with the physical output distributed over eight significant sites in four countries. Present experience shows that the system design can be expected to cope with the future load. Provided enough resources are available, one person should be able to supervise about 10^4 jobs per day, each with a few GB of input and output data.

The present implementation of the ATLAS ARC Grid lacks the ability to replicate ATLAS datasets to and from the other grids via the ATLAS distributed data management tools [8], and there is no support for tape-based storage elements. These shortcomings will be addressed in the near future.

Acknowledgments. The indispensable work of the system administrators of the contributing resources is highly appreciated.

References

1. The LHC Study Group: The Large Hadron Collider, Conceptual Design, CERN-AC (LHC) (1995)
2. ATLAS Collaboration: Detector and Physics Performance Technical Design Report, CERN-LHCC (1999)

3. ATLAS Collaboration: ATLAS Computing Technical Design Report, CERN-LHCC (2005)
4. Knobloch, J. (ed.): LHC Computing Grid - Technical Design Report, CERN-LHCC (2005)
5. Open Science Grid Homepage: http://www.opensciencegrid.org
6. NorduGrid Homepage: http://www.nordugrid.org
7. Goossens, L., et al.: ATLAS Production System in ATLAS Data Challenge 2, CHEP 2004, Interlaken (2004)
8. ATLAS Collaboration: ATLAS Computing Technical Design Report, CERN-LHCC, p. 115 (2005)
9. Nielsen, J., et al.: Experiences with Data Indexing Services supported by the NorduGrid Middleware, CHEP 2004, Interlaken (2004)
10. Konstantinov, A., et al.: Data management services of NorduGrid, CERN-2005-002, vol. 2, p. 765 (2005)
11. Branco, M.: Don Quijote - Data Management for the ATLAS Automatic Production System, CERN-2005-002, p. 661 (2005)
12. NorduGrid Collaboration: Performance of the NorduGrid ARC and the Dulcinea Executor in ATLAS Data Challenge 2, CERN-2005-002, vol. 2 (2005)
