Data Management for the World's Largest Machine

Sigve Haug 1, Farid Ould-Saada 2, Katarina Pajchel 2, and Alexander L. Read 2

1 Laboratory for High Energy Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland
2 Department of Physics, University of Oslo, Postboks 1048 Blindern, NO-0316 Oslo, Norway

Abstract. The world's largest machine, the Large Hadron Collider, will have four detectors whose output is expected to answer fundamental questions about the universe. The ATLAS detector is expected to produce 3.2 PB of data per year, which will be distributed to storage elements all over the world. In 2008 the resource need is estimated to be 16.9 PB of tape, 25.4 PB of disk, and 50 MSI2k of CPU. Grids are used to simulate, access, and process the data. Sites in several European and non-European countries are connected with the Advanced Resource Connector (ARC) middleware of NorduGrid. In the first half of 2006 about 10^5 simulation jobs with 27 TB of distributed output, organized in some 10^5 files and 740 datasets, were performed on this grid. ARC's data management capabilities, the Globus Replica Location Service, and ATLAS software were combined to achieve a comprehensive distributed data management system.

1 Introduction

At the end of 2007 the Large Hadron Collider (LHC) in Geneva, often referred to as the world's largest machine, will start to operate [1]. Its four detectors aim to collect data which is expected to give some answers to fundamental questions about the universe, e.g. what is the origin of mass. The data acquisition system of one of these detectors, the ATLAS detector, will write the recorded information of proton-proton collision events at a rate of 200 events per second [2]. Each event's information will require 1.6 MB of storage space [3]. Taking the operating time of the machine into account, this will yield 3.2 PB of recorded data per year. Simulated and reprocessed data come in addition. The estimated computing resource needs for 2008 are 16.9 PB of tape storage, 25.4 PB of disk storage and 50.6 MSI2k of CPU.

The ATLAS experiment uses three grids to store, replicate, simulate, and process the data all over the planet: the LHC Computing Grid (LCG), the Open Science Grid (OSG), and NorduGrid [4][5][6]. Here we report on the recent experience with the present distributed simulation and data management system used by the ATLAS experiment on NorduGrid. A geographical map of the sites connected by NorduGrid's middleware, the Advanced Resource Connector (ARC), is shown in Figure 1. The network of sites which also have the necessary ATLAS software installed, and thus are capable of running ATLAS computing tasks, will in the following be called the ATLAS ARC Grid.

Fig. 1. Geographical snapshot of sites connected with ARC middleware (as of Dec. 2005). Many sites are also organized into national and/or organizational grids, e.g. Swegrid and the Swiss ATLAS Grid.

First, a description of the distributed simulation and data management system follows. Second, a report on the system performance in the period from November 2005 to June 2006 is presented. Then future usage, limitations, and needed improvements are discussed. Finally, we recapitulate the performance of the ATLAS ARC Grid in this period and draw some conclusions.
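As a rough cross-check of the quoted rates, the short Python sketch below recomputes the annual raw-data volume from the 200 events per second and 1.6 MB per event given above. The figure of 10^7 seconds of data taking per year is an assumption introduced here for illustration only; it is not a number taken from this paper.

```python
# Back-of-envelope check of the ATLAS raw-data volume quoted in the text.
EVENT_RATE_HZ = 200            # events written per second [2]
EVENT_SIZE_MB = 1.6            # storage per event [3]
LIVE_SECONDS_PER_YEAR = 1e7    # assumed annual data-taking time (illustrative)

volume_mb = EVENT_RATE_HZ * EVENT_SIZE_MB * LIVE_SECONDS_PER_YEAR
print(f"Recorded data per year: {volume_mb / 1e9:.1f} PB")  # 1 PB = 1e9 MB -> 3.2 PB
```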

2 The Simulation and Data Management System

The distributed simulation and data management system on the ATLAS ARC Grid can be divided into three main parts. First, there is the production database, which is used for the definition and tracking of the simulation tasks [7]. Second, there is the Supervisor-Executor instance, which pulls tasks from the production database and submits them to the ATLAS ARC Grid. And finally, there are the ATLAS data management databases, which collect the logical file names into datasets [8]. The Supervisor is common to all three grids. The Executor is unique to each grid and contains the code to submit, monitor, post-process and clean the grid jobs. In the case of the ATLAS ARC Grid, this simple structure relies on the full ARC grid infrastructure, in particular also a Globus Replica Location Service (RLS) which maps logical to physical file names [9].

The production database is an Oracle instance where job definitions, job input locations and job output names are kept. Furthermore, the jobs' estimated resource needs, status, etc. are stored. The Supervisor-Executor is a Python application which is run by a user whose grid certificate is accepted at all ATLAS ARC sites. The Supervisor communicates with the production database and passes simulation jobs to the Executor in XML format. The Executor then translates the job descriptions into ARC's extended resource specification language (XRSL). Job brokering is performed with attributes specified in the XRSL job description and information gathered from the computing clusters with the ARC information system. In particular, clusters have to have the required ATLAS run time environment installed. This is an experiment-specific software package of about 5 GB which is frequently released. When a suitable cluster is found, the job is submitted. The ARC grid-manager on the front-end of the cluster downloads the input files, submits the jobs to the local batch system, monitors them to their completion, and uploads the output of successful jobs. In this process the RLS is used to index both input and output files. The physical storage element (SE) for an output file is provided automatically by a storage service which obtains a list of potential SEs indexed by the RLS. Thus neither the grid job executing on the batch node nor the Executor does any data movement, and neither needs to know explicitly where the physical inputs come from or where the physical outputs are stored. When the Executor finds a job finished, it registers the metadata of the job output files, e.g. a globally unique identifier and creation date, in the RLS. It sets the desired grid access control list (gacl) on the files and reports back to the Supervisor and the production database.

Finally, the production database is periodically queried for finished tasks. For these, the logical file names and their dataset affiliation are retrieved in order to register the available datasets, their file content, state and locations in the ATLAS dataset databases. Hence, datasets can subsequently be looked up for replication and analysis. The dataset catalogs provide the logical file names and the indexing service (from among the more than 20 index servers for the three grids of which the ATLAS computing grid is comprised) for the dataset to which the logical file is attached. The indexing service, i.e. the RLS on the ATLAS ARC Grid, provides the physical file location.

In short, the production on the ATLAS ARC Grid is by design a fully automatic and lightweight system which takes advantage of the inherent job-brokering and data management capabilities of the ARC middleware (the RLS for indexing logical to physical file names and storing metadata about files) and the ATLAS distributed data management system (a set of catalogs allowing replication and analysis on a dataset basis). See References [10] and [11] for detailed descriptions of the ATLAS and ARC data management systems.
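As an illustration of the Executor's translation step, the following minimal Python sketch turns a simplified job record into an XRSL job description. The record fields, file names and run time environment tag are hypothetical, and only a small subset of XRSL attributes is shown; the actual Executor and production-database schema are more elaborate.

```python
# Illustrative sketch only (not the production Executor code): build an XRSL
# job description from a simplified job record. The record fields and file
# names are hypothetical; only a handful of XRSL attributes are shown.

def to_xrsl(job):
    inputs = " ".join('("%s" "%s")' % pair for pair in job["inputs"])
    outputs = " ".join('("%s" "%s")' % pair for pair in job["outputs"])
    return (
        '&(jobName="%s")' % job["name"]
        + '(executable="%s")' % job["executable"]
        + "(inputFiles=%s)" % inputs
        + "(outputFiles=%s)" % outputs
        + '(runTimeEnvironment="%s")' % job["rte"]
        + '(cpuTime="%d")' % job["cpu_minutes"]
    )

job = {
    "name": "csc.simul.job.00042",         # hypothetical task name
    "executable": "run-atlas.sh",           # hypothetical wrapper script
    "inputs": [("evgen.pool.root",
                "rls://atlasrls.nordugrid.org/evgen.pool.root")],
    "outputs": [("simul.pool.root",
                 "rls://atlasrls.nordugrid.org/simul.pool.root")],
    "rte": "APPS/HEP/ATLAS-11.0.42",        # ATLAS run time environment (illustrative tag)
    "cpu_minutes": 1440,
}

print(to_xrsl(job))
```

In the real system the RLS references for the inputs are resolved by the grid-manager on the cluster front-end, so neither the job nor the Executor moves any data itself, as described above.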

3 Recent System Performance on the ATLAS ARC Grid

The preparation for the ATLAS experiment relies on detailed simulations of the physics processes, from the proton-proton collision, via the particle propagation through the detector material, to the full reconstruction of the particle tracks. To a large extent this has been achieved in carefully planned time periods of operation, so-called Data Challenges. Many ARC sites have been providing resources for these large-scale production operations [12]. At the present time the third Data Challenge, or the Computing System Commissioning (CSC), is entering a phase of more or less constant production.

As part of this constant production, about 10^5 simulation jobs were run on ATLAS-enabled ARC sites in the period from mid November 2005 to mid June 2006, where the end date simply reflects the time of this report. Up to 17 clusters comprising about 1000 CPUs were used as a single resource for these jobs. In Table 1 the clusters and their executed job shares are listed. Depending on their size, access policy, and competition with local users, the number of jobs per cluster varies. In this period six countries provided resources. The Slovenian cluster, pikolit.ijs.si, was the largest contributor, followed by the Swedish resources. The best clusters have efficiencies close to 90% (total ATLAS and grid middleware efficiency). This number reflects what can be expected in a heterogeneous grid environment where not only different jobs and evolving software are used, but also the operational efficiency of the numerous computing clusters and storage services is a significant factor.

Table 1. ARC clusters which contributed to the ATLAS simulations, with the number of jobs per site and the percentage of successful jobs, in the period from November 2005 to June 2006: ingrid.hpc2n.umu.se, benedict.grid.aau.dk, hive.unicc.chalmers.se, pikolit.ijs.si, bluesmoke.nsc.liu.se, hagrid.it.uu.se, grid00.unige.ch, morpheus.dcgc.dk, grid.uio.no, lheppc10.unibe.ch, hypatia.uio.no, sigrid.lunarc.lu.se, alice.grid.upjs.sk, norgrid.ntnu.no, grid01.unige.ch, norgrid.bccs.no, and grid.tsl.uu.se.
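The overall grid efficiency follows from the per-cluster columns of Table 1 as a job-weighted average, as sketched below in Python. The per-cluster numbers used here are illustrative placeholders, not the published values.

```python
# Sketch: combine per-cluster job counts and success fractions into an overall
# grid efficiency, as one would do with the columns of Table 1. The numbers
# below are illustrative placeholders, not the measured per-cluster values.
clusters = {
    "pikolit.ijs.si":       (30000, 0.88),   # (jobs run, fraction successful)
    "bluesmoke.nsc.liu.se": (15000, 0.90),
    "hagrid.it.uu.se":      (12000, 0.85),
    "grid.uio.no":          (8000, 0.80),
}

total_jobs = sum(jobs for jobs, _ in clusters.values())
successful = sum(jobs * eff for jobs, eff in clusters.values())
print(f"Overall efficiency: {successful / total_jobs:.1%} over {total_jobs} jobs")
```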

In Table 2 the number of output files and their integrated sizes are listed according to storage elements and locations. Some 10^5 files with a total of 27 TB were produced and stored on disks at 11 sites in five different countries. This gives an average file size of 90 MB. The integrated storage contribution per country is shown in Figure 2 (this distribution is not representative of the previous data challenges).

Table 2. ARC Storage Elements and their contributions to the ATLAS Computing System Commissioning: the number of files stored by the ATLAS production in the period and the total space occupied by these files, extracted from the Replica Location Service rls://atlasrls.nordugrid.org. The storage elements, with their locations, were: ingrid.hpc2n.umu.se, se1.hpc2n.umu.se, ss2.hpc2n.umu.se and ss1.hpc2n.umu.se (Umeå); hive-se2.unicc.chalmers.se (Göteborg); harry.hagrid.it.uu.se and hagrid.it.uu.se (Uppsala); storage2.bluesmoke.nsc.liu.se (Linköping); sigrid.lunarc.lu.se (Lund); swelanka1.it.uu.se (Sri Lanka, 1 file, < 0.1 TB); grid.uio.no (Oslo, 856 files, < 0.1 TB); grid.ift.uib.no (Bergen, 1 file, < 0.1 TB); morpheus.dcgc.dk (Aalborg, 252 files, < 0.1 TB); benedict.grid.aau.dk (Aalborg); pikolit.ijs.si:2811 and pikolit.ijs.si (Slovenia).

Fig. 2. TB per country. The graph visualizes the numbers in Table 2. In the period from November 2005 to June 2006, Sweden and Slovenia were the largest storage contributors to the ATLAS Computing System Commissioning. Only ARC storage is considered.
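The per-country totals plotted in Figure 2 are a straightforward aggregation of the per-storage-element entries in Table 2; a minimal Python sketch of that aggregation is shown below. The sizes used are illustrative placeholders rather than the published values.

```python
# Sketch: aggregate per-storage-element sizes (as in Table 2) into per-country
# totals (as in Figure 2). Sizes in TB are illustrative placeholders.
from collections import defaultdict

storage = [
    ("se1.hpc2n.umu.se", "Sweden", 6.0),
    ("hagrid.it.uu.se", "Sweden", 3.5),
    ("pikolit.ijs.si", "Slovenia", 8.0),
    ("benedict.grid.aau.dk", "Denmark", 1.2),
    ("grid.uio.no", "Norway", 0.1),
]

per_country = defaultdict(float)
for element, country, size_tb in storage:
    per_country[country] += size_tb

for country, total in sorted(per_country.items(), key=lambda kv: -kv[1]):
    print(f"{country:10s} {total:5.1f} TB")
```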

In the ATLAS production of simulated data (future data analysis will produce a different and more chaotic pattern), simulation is done in three steps, and for each step the input and output sizes vary. In the first step the physics of the proton-proton collisions is simulated, so-called event generation. These jobs have practically no input and an output of about 0.1 GB per job. In the second step the detector response to the particle interactions is simulated. These jobs use the output from the first step as input and produce about 1 GB of output per job. This output is in turn used as input for the last step, where the reconstruction of the detector response is performed. A reconstruction job takes about 10 GB of input in 10 files and produces an output of typically 1 GB. In order to minimize the number of files, it is foreseen to increase the file sizes (from 1 to 10 GB) as network capacity, disk sizes and tape systems evolve.

The outputs are normally replicated to at least one other storage element in one of the other grids and, in the case of reconstruction outputs (the starting point of most physics analyses), to all the other large computing sites spread throughout the ATLAS grid. The output remains on the storage elements until a central ATLAS decision is made about deletion, most probably after several years.
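As a small illustration of these numbers, the sketch below adds up the nominal input and output volumes of one generation-simulation-reconstruction chain using the per-step sizes quoted above. The assumption that ten simulation outputs feed one reconstruction job is an illustrative reading of the "10 GB of input in 10 files" figure, not a statement from the production bookkeeping.

```python
# Nominal data volumes for one reconstruction chain, using the per-step sizes
# quoted in the text: event generation ~0.1 GB out, simulation ~1 GB out,
# reconstruction ~10 GB in (10 files) and ~1 GB out. The ten-simulation-jobs-
# per-reconstruction-job structure is an illustrative assumption.
EVGEN_OUT_GB = 0.1
SIMUL_OUT_GB = 1.0
RECON_IN_FILES = 10
RECON_OUT_GB = 1.0

simul_jobs = RECON_IN_FILES                  # one simulation output per input file
recon_in_gb = simul_jobs * SIMUL_OUT_GB      # ~10 GB of reconstruction input
chain_out_gb = simul_jobs * (EVGEN_OUT_GB + SIMUL_OUT_GB) + RECON_OUT_GB

print(f"Reconstruction input: {recon_in_gb:.0f} GB in {RECON_IN_FILES} files")
print(f"Total stored output per chain: {chain_out_gb:.1f} GB")
```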

Finally, the output files were logically collected into datasets, the objects of analysis and replication. The ATLAS files produced in this period and stored on ARC storage elements belong to 739 datasets. The average number of files per dataset was roughly 400, with the actual numbers ranging from 50 upwards. Table 3 shows the categories of datasets and their respective shares of the totals. The numbers in the ARC column were collected with the ATLAS DQ2 client, the numbers in the Total column with the PANDA monitor. Since in the considered period the ATLAS ARC Grid's contribution to the total ATLAS Grid production is estimated to have been about 11 to 13%, the numbers indicate that the jobs processed were rather shorter than average. (The Nordic share of the ATLAS computing resources is 7.5%, according to a memorandum of understanding.)

Table 3. ATLAS datasets on ARC storage elements. The categories are: All (CSC + CTB + MC), CSC (Computing System Commissioning), CTB (Combined Test Beam production), and MC (MC production); for each category the table lists the number of datasets on ARC, the total number in ATLAS, and their ratio (ARC/Total).

4 Perspective, Limitations and Improvements

The limitations of the system must be considered in the context of its desired capabilities. At the moment the system manages some 10^3 jobs per day, where each job typically needs less than a day to finish. The number of output files is about three times larger. In order to provide the ATLAS experiment with a significant production grid, the ATLAS ARC Grid should aim to cope with a number of jobs another order of magnitude larger. In this perspective the ATLAS ARC Grid has no fundamental scaling limitations. However, in order to meet this ambition several improvements are needed.

First, the available amount of resources must increase; the present operation almost exhausts the existing ones. And since the resources are shared and increasingly attractive to users, fair sharing of the resources between local and grid users, and between different grid users, needs to be implemented. At the moment local users always have implicit first priority, and grid users are often mapped to a single local account, so that they are effectively treated first-come, first-served.

Second, the crucial Replica Location Service provides the desired functionality, with mapping from logical to physical file names, certificate authentication and bulk operations, and is expected to be able to handle the planned scaling-up of the system. However, its lack of perfect stability is an important problem which remains to be solved. Meanwhile, the persons running the Supervisor-Executor instances should probably have some administration privileges, e.g. the possibility to restart the service.
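To put the capacity target above in numbers, the sketch below scales the current load (about 10^3 jobs per day with roughly three output files per job) up by the one order of magnitude mentioned in the text and derives the resulting file-registration rate. It is a simple extrapolation for illustration, not a measurement.

```python
# Simple extrapolation of the load figures in Section 4: ~1e3 jobs/day today,
# about three output files per job, and a target one order of magnitude higher.
CURRENT_JOBS_PER_DAY = 1e3
FILES_PER_JOB = 3
SCALE_UP = 10                      # "another order of magnitude"

target_jobs = CURRENT_JOBS_PER_DAY * SCALE_UP
target_files = target_jobs * FILES_PER_JOB
print(f"Target: {target_jobs:.0f} jobs/day, {target_files:.0f} new files/day")
print(f"Average registration rate: {target_files / 86400:.2f} files/second")
```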

Third, further development should aim at a few hours of database independence. Both the production database and the data management databases now and then have some hours of down time. This should not cause problems other than delays in database registrations.

Continuous improvements in the ARC middleware ease the operation. However, the ATLAS ARC Grid comprises many independent clusters which are in production mode and not dedicated to ATLAS. It is thus impractical to negotiate frequent middleware upgrades on all of them, and the future system should rely as much as possible on the present features.

5 Conclusions

As part of the preparations for the ATLAS experiment at the Large Hadron Collider, large amounts of data are simulated on grids. The ATLAS ARC Grid, the sites connected with NorduGrid's Advanced Resource Connector which have ATLAS software installed and configured for use by grid jobs, now continuously contributes to this global effort.

In the period from November 2005 to June 2006 about 10^5 output files were produced on the ATLAS ARC Grid. Up to 17 sites in five different countries were used as a single batch facility to run about 10^5 jobs. Compared to previous usage, another layer of organization was introduced in the data management system. This enabled the concept of datasets, i.e. conglomerations of files, which are used as the objects of data analysis and replication. The 27 TB of output was collected into 740 datasets, with the physical output distributed over eight significant sites in four countries.

Present experience shows that the system design can be expected to cope with the future load. Provided enough available resources, one person should be able to supervise about 10^4 jobs per day with a few GB of input and output data per job. The present implementation of the ATLAS ARC Grid lacks the ability to replicate ATLAS datasets to and from other grids via the ATLAS distributed data management tools [8], and there is no support for tape-based storage elements. These shortcomings will be addressed in the near future.

Acknowledgments. The indispensable work of the contributing resources' system administrators is highly appreciated.

References

1. The LHC Study Group: The Large Hadron Collider, Conceptual Design, CERN-AC (LHC) (1995)
2. ATLAS Collaboration: Detector and Physics Performance Technical Design Report, CERN-LHCC (1999)
3. ATLAS Collaboration: ATLAS Computing Technical Design Report, CERN-LHCC (2005)
4. Knobloch, J. (ed.): LHC Computing Grid - Technical Design Report, CERN-LHCC (2005)
5. Open Science Grid Homepage: http://www.opensciencegrid.org
6. NorduGrid Homepage: http://www.nordugrid.org
7. Goossens, L., et al.: ATLAS Production System in ATLAS Data Challenge 2, CHEP 2004, Interlaken (2004)
8. ATLAS Collaboration: ATLAS Computing Technical Design Report, CERN-LHCC, p. 115 (2005)
9. Nielsen, J., et al.: Experiences with Data Indexing Services supported by the NorduGrid Middleware, CHEP 2004, Interlaken (2004)
10. Konstantinov, A., et al.: Data management services of NorduGrid, CERN, vol. 2, p. 765 (2005)
11. Branco, M.: Don Quijote - Data Management for the ATLAS Automatic Production System, CERN, p. 661 (2005)
12. NorduGrid Collaboration: Performance of the NorduGrid ARC and the Dulcinea Executor in ATLAS Data Challenge 2, CERN-2005-002, vol. 2 (2005)
