CMS Computing Model with Focus on German Tier1 Activities

1 CMS Computing Model with Focus on German Tier1 Activities. Seminar on Data Processing in High Energy Physics (Seminar über Datenverarbeitung in der Hochenergiephysik), DESY Hamburg

2 Overview: The Large Hadron Collider; The Compact Muon Solenoid; CMS Computing Model; CMS in Germany; CMS Workflow & Data Transfers; CMS Service Challenges; Monitoring; HappyFace Project

3 Large Hadron Collider: Proton-proton collider. Circumference: 27 km; beam energy: 7 TeV; depth below surface: 100 m; operating temperature: -271 °C; energy use: 1 TWh/a. 4 large experiments: CMS (general-purpose), ATLAS (general-purpose), LHCb (physics of b-quarks), ALICE (lead-ion collisions). 2 smaller experiments: LHCf and TOTEM.

4 Compact Muon Solenoid: Components: muon chambers, hadronic calorimetry, magnetic return yoke, silicon tracker, superconducting solenoid, electromagnetic calorimetry, forward calorimetry. Technical details: total weight: 12 500 t; diameter: 15 m; total length: 21.5 m; magnetic field: 4 Tesla (solenoid), 2 Tesla (yoke). Readout channels, e.g. tracker: 10 million. Collision rate: 40 MHz.

5 Pictures of CMS

6 CMS Collaboration: 38 nations, 182 institutions, > 2000 scientists & engineers.

7 Physics Motivation: Test of the Standard Model (at the TeV energy scale). Search for the Higgs boson. Physics beyond the SM (e.g. SUSY, extra dimensions, ...). In each proton-proton collision, more than 1000 particles are created; the decay products allow conclusions to be drawn about the underlying physics processes of the collision.

8 Trigger and Event Rate: Collision rate: 40 MHz; event size: 1.5 MB; resulting raw data rate: 60 TB/sec. Level 1 Trigger: reduction with ASICs (hardware) to 150 GB/sec. High Level Trigger: data reduction in software. Recorded events: 150 per second, i.e. 225 MB/sec to tape & HDD storage for offline analysis.
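
The rates on this slide follow directly from the collision rate, the event size and the number of recorded events; a minimal Python sketch of the arithmetic (figures taken from the slide above, the 100 kHz Level 1 output rate is the nominal design value and an assumption here):

    # Back-of-the-envelope CMS data rates (figures from the trigger slide).
    collision_rate_hz = 40e6        # 40 MHz bunch-crossing rate
    event_size_bytes = 1.5e6        # 1.5 MB per event
    l1_output_rate_hz = 100e3       # assumed nominal Level 1 accept rate
    recorded_events_per_s = 150     # events kept after the High Level Trigger

    raw_rate = collision_rate_hz * event_size_bytes         # before any trigger
    l1_rate = l1_output_rate_hz * event_size_bytes           # after the Level 1 Trigger
    stored_rate = recorded_events_per_s * event_size_bytes   # written to tape & disk

    print(f"raw detector rate: {raw_rate / 1e12:.0f} TB/sec")    # ~60 TB/sec
    print(f"after Level 1:     {l1_rate / 1e9:.0f} GB/sec")      # ~150 GB/sec
    print(f"rate to storage:   {stored_rate / 1e6:.0f} MB/sec")  # ~225 MB/sec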

9 Expected Resources: plots of the expected CPU [MSI2k], disk space [TB] and MSS space [TB] per year.

10 CMS Computing Model: The LHC experiments have decided to use distributed computing and storage resources: the Worldwide LHC Computing Grid (WLCG). Grid services are based on the gLite middleware. Computing centres are arranged in a four-tiered hierarchical structure. Availability and resources are regulated by an MoU (e.g. downtime per year, response time, etc.).

11 CMS Tier Structure

12 WLCG Resources: taken from the WLCG Real Time Monitor.

13 Main Tier Responsibilities: Tier0: storage of RAW detector data; first reconstruction of physics objects after data-taking. Tier1: hosts one dedicated copy of RAW and reconstructed data outside the Tier0; re-processing of stored data; skimming (creation of small sub data samples). Tier2: Monte Carlo production/simulation; calibration activities; resources for physics group analyses. Tier3: individual user analyses; interactive logins (for development & debugging).

14 German CMS Members (FSP-CMS, BMBF): Uni Hamburg, DESY Hamburg/Zeuthen, RWTH Aachen, Forschungszentrum Karlsruhe, Uni Karlsruhe.

15 Tier1 GridKa: Located at Forschungszentrum Karlsruhe (KIT, Campus Nord). Part of and operated by the Steinbuch Centre for Computing (SCC, KIT). Multi-VO Tier centre (supports 8 experiments): 4 LHC experiments (CMS, ATLAS, ALICE, LHCb) and 4 non-LHC experiments (CDF, D0, BaBar, Compass).

16 GridKa Resources (2008)
Network: 10 Gbit link CERN - GridKa; 3 x 10 Gbit links to 3 European Tier1s; 10 Gbit link to DFN/X-Win (e.g. Tier2 connections).
CPU: CMS resources: ~ 800 cores, 2 GB RAM per core; D-Grid resources: ~ 500 cores, 2 GB RAM per core.
Storage: based on dCache (developed at DESY and FNAL); CMS disk: ~ 650 TB + D-Grid: ~ 40 TB; CMS tape: ~ 900 TB.

17 Tier2/3 and NAF Resources: Besides the Tier1 resources, considerable CPU and disk capacities are available in Germany (August 2008):
Tier2 (German CMS Tier2 federation): DESY: 400 cores, 185 TB disk; RWTH Aachen: 360 cores, 99 TB disk.
Tier3: RWTH Aachen: 850 cores, 165 TB disk; Uni Karlsruhe: 500 cores, 150 TB disk; Uni Hamburg: 15 TB disk.
NAF (National Analysis Facility): DESY: 245 cores, 32 TB disk.
D-Grid resources, partially usable for FSP-CMS: RWTH Aachen: 600 cores, 231 TB disk; GridKa: 500 cores, 40 TB disk.

18 CMS Workflow (diagram of the workflow steps 1-3 and the data transfers between them).

19 PhEDEx Data Transfers: Physics Experiment Data Export (CMS data placement tool). Various agents/daemons are run at the tier sites (depending on the tier level): upload, download, MSS stage-in/out, DB, ... Provides automatic WAN data transfers based on subscriptions; automatic load-balancing; file transfer routing topology (FTS); automatic bookkeeping (database entries, logging); consistency checks (DB vs. filesystem); file integrity checks.
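
The consistency check mentioned above (database vs. filesystem) can be pictured with a minimal, hypothetical sketch; this is not PhEDEx code, and the catalogued file list and storage paths are invented for the example:

    import os

    # Hypothetical list of files the transfer database believes are stored at this site.
    catalogued_files = [
        "/pnfs/gridka.example.org/cms/store/data/Run2008A/Cosmics/RAW/file1.root",
        "/pnfs/gridka.example.org/cms/store/data/Run2008A/Cosmics/RAW/file2.root",
    ]

    def check_consistency(files):
        """Report files that are listed in the catalogue but missing on the storage system."""
        missing = [path for path in files if not os.path.exists(path)]
        for path in missing:
            print("MISSING on storage:", path)
        return missing

    if __name__ == "__main__":
        check_consistency(catalogued_files)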

20 Other Involved Components: Monte Carlo Production Agent (ProdAgent); CMS Remote Analysis Builder (CRAB); Dataset Bookkeeping System (DBS); Data Location Service (DLS); Trivial File Catalog (TFC); Grid middleware (gLite): SE, CE, RB, UI, ... DBS: relates datasets/blocks and logical file names (LFN). DLS: relates datasets/blocks and sites. TFC: relates LFNs and local file names. A very complex system that needs intense testing, debugging and optimisation.
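
The role of the Trivial File Catalog can be illustrated with a small rule-based sketch of the LFN-to-local-path mapping; the real TFC is a per-site XML rule file, and the regular expression and storage prefix below are assumptions for the example:

    import re

    # Hypothetical TFC-like rules: (LFN pattern, local path template).
    tfc_rules = [
        (r"^/store/(.*)", r"/pnfs/gridka.example.org/cms/store/\1"),
    ]

    def lfn_to_local(lfn):
        """Map a logical file name to a site-local file name using the first matching rule."""
        for pattern, target in tfc_rules:
            if re.match(pattern, lfn):
                return re.sub(pattern, target, lfn)
        raise ValueError("no TFC rule matches " + lfn)

    print(lfn_to_local("/store/data/Run2008A/Cosmics/RAW/file1.root"))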

21 WLCG Job Workflow (diagram): a job is submitted from the User Interface to the gLite WMS (WMProxy, Workload Manager, Task Queue); the Match Maker selects a suitable site using the Information System (Information Supermarket); the Job Controller / CondorG passes the job to a Computing Element for execution, with grid-enabled data transfer & access to the Storage Element.

22 CMS Service Challenges - 1: Computing, Software and Analysis (CSA) challenges test the readiness for data-taking. CSA06, CSA07 and CSA08 were performed to test the computing and network infrastructure and the computing resources at 25%, 50% and 100% of the level required for the LHC start-up. The whole CMS workflow was tested: production/simulation; prompt reconstruction at the Tier0; re-reconstruction at the Tier1s; data distribution to Tier1s and Tier2s; data analyses.

23 CMS Service Challenges - 2: Cumulative data volume transferred during CSA08: > 2 PetaByte in 4 weeks. Transfer rate from the Tier0 to the Tier1s: < 600 MB/sec. The goal was achieved only partially because of problems with the storage systems (CASTOR at CERN); the problems were identified and fixed afterwards.

24 CMS Service Challenges - 3: CMS Grid-job statistics, May to August 2008. The German centres GridKa, DESY and Aachen performed extremely well compared to all other CMS tier sites.

25 Results of CMS Challenges (goal / status per year):
Tier-0 reco rate: 2008: goal ... Hz, achieved; 2007: goal 100 Hz, only at bursts; 2006: goal 50 Hz, achieved.
Tier-0 to Tier-1 transfer rate: 2008: goal 600 MB/sec, achieved partially; 2007: goal 300 MB/sec, only at bursts; 2006: goal 150 MB/sec, achieved.
Tier-1 to Tier-2 transfer rate: 2008: goal ... MB/sec, achieved; 2007: goal ... MB/sec, achieved partially; 2006: goal ... MB/sec, achieved.
Tier-1 to Tier-1 transfer rate: 2008: goal 100 MB/sec, achieved; 2007: goal 50 MB/sec, achieved partially; 2006: N/A.
Tier-1 job submission: 2008: goal ... jobs/day, achieved; 2007: goal ... jobs/day, achieved; 2006: goal ... jobs/day, reached ... jobs/day.
Tier-2 job submission: 2008: goal ... jobs/day, achieved; 2007: goal ... jobs/day, reached ... jobs/day; 2006: goal ... jobs/day, achieved.
Monte Carlo simulation: 2008: goal 1.5 x 10^9 events/year, achieved; 2007: goal 50 x 10^6 events/month, achieved; 2006: N/A.

26 First Real CMS Data: Data imports to GridKa of cosmic muon samples at the level of 10-20 TB per day; stable running over large periods of time. First real beam event: LHC beam hitting the collimators near CMS (Sept. 10th, 2008). GridKa was the first CMS tier centre outside CERN with real beam data.

27 Monitoring - Overview: Distributed Grid resources are complex systems. To provide a stable and reliable service, monitoring of the different services and resources is indispensable; monitoring is used to discover failures before the actual services are affected. Different types of monitoring: central monitoring of all Tier centres (status of the high-level grid services and their interconnection, availability of the physical resources of the Tier centres); central monitoring of experiment-specific components; local monitoring at the different computing sites; local monitoring of experiment-specific components. Many different monitoring instances for different purposes!

28 Grid Monitoring: Central WLCG monitoring: Service Availability Monitoring (SAM). Standardised Grid jobs test the basic grid functionality of a Tier centre and include all Grid services of a site. They are centrally sent to the Tiers (once per hour); the output is available on a web page with history. Once a critical SAM test has failed on a site, no user jobs will be sent to it anymore.
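
The consequence of a failed critical test can be pictured with a small sketch; the test names, results and criticality flags are hypothetical, and this is not the actual SAM implementation:

    # Hypothetical SAM-like test results for one site: test name -> (result, is_critical)
    site_results = {
        "CE-job-submission":   ("ok",    True),
        "SRM-file-access":     ("error", True),
        "CMS-sw-installation": ("ok",    False),
    }

    def site_receives_jobs(results):
        """A site is excluded from job brokering as soon as any critical test fails."""
        return all(result == "ok" for result, critical in results.values() if critical)

    print("send user jobs to this site:", site_receives_jobs(site_results))  # False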

29 Experiment Monitoring: Besides the basic grid functionality, experiment-specific components and functionalities are tested. The access point to the results is the CMS ARDA Dashboard. SAM: also runs dedicated tests of the Tier centre functionalities requested by CMS. Job summary: type of the job and its exit code; history per centre and activity. PhEDEx data transfers: transfer quality for CMS datasets; collects error messages in case of failures; gives an overview of the transferred data samples available at the different sites.

30 Local Site Monitoring: Each computing centre has its own monitoring infrastructure (Nagios, Ganglia, ...), which offers monitoring for hardware, services and network connections and is adapted to the local particularities. Automated notification in case of failures by e-mail, instant messages, SMS, ... Experiment-specific local monitoring: Is the local software installation accessible? Are all required services properly configured?

31 USCHI - Ultra Sophisticated CMS Hardware Information System: Provides specific tests of the local infrastructure of a site, mainly driven by previously observed problems. Designed in a modular way to ease the implementation of new tests. Currently, the following tests run locally at GridKa: availability of the CMS production software area; sufficient free inodes on the file system; basic grid functionality (e.g. user interface commands); FTS transfer channels (are they enabled, working, etc.); remaining space of the disk-only area; status of the dCache tape-write buffers; functionality of basic VOBox commands; PhEDEx grid proxy validity. The output of the tests is provided in XML format. Developed in Karlsruhe by A. Scheurer and A. Trunov.
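
A minimal sketch of how such a modular test with XML output could look; this is an illustration under assumptions, not the actual USCHI code, and the checked path and threshold are invented:

    import os
    import xml.etree.ElementTree as ET

    class DiskOnlySpaceTest:
        """Example module: check the remaining space of a disk-only area (path and limit are assumptions)."""
        name = "disk_only_space"

        def run(self):
            usage = os.statvfs("/tmp")  # stand-in for the disk-only area
            free_gb = usage.f_bavail * usage.f_frsize / 1e9
            status = "ok" if free_gb > 10 else "warning"
            return status, f"{free_gb:.1f} GB free"

    def run_tests(tests):
        """Run all registered test modules and collect their results in an XML document."""
        root = ET.Element("uschi")
        for test in tests:
            status, message = test.run()
            ET.SubElement(root, "test", name=test.name, status=status).text = message
        return ET.tostring(root, encoding="unicode")

    print(run_tests([DiskOnlySpaceTest()]))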

32 Multi-Monitoring Drawbacks: The current monitoring infrastructure is characterised by a multitude of different sources that have to be queried for information about one single site. Limitations: the information is not clearly arranged (too much information not related to the site of interest; different site designs, history plots, formats and technologies; nearly impossible to correlate information from different sources) and uncomfortable to use (many different websites have to be accessed to get all needed information about one site; different web interfaces require parameters such as time range, site name, ...; downloading or updating plots often takes a long time, > 1 minute!). This increases the administrative effort and slows down the identification of errors. A meta-monitoring system can help to improve this situation!

33 Ideas for Meta-Monitoring: Only one website to access all relevant information for one site; simple warning system (with symbols); single point of access for all relevant information (+ their history); caching of important external plots for fast access; backend for the implementation of own tests; easy implementation of new tests; self-defined rules for service status and the triggering of warning alerts; correlation of data from different sources. Fast, simple and up to date: all monitoring information is renewed every 15 minutes; short access / loading times of the web page; easy to install and use, no complex database; access to every test result in less than 3 mouse clicks; links to the raw information at the data source. The meta-monitoring system HappyFace Project offers this functionality.
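
A small sketch of the kind of self-defined status rule such a meta-monitoring page could apply when aggregating cached test results; the module names, ratings and thresholds are assumptions, not the real HappyFace rules:

    # Hypothetical per-module ratings collected during one 15-minute update cycle,
    # on a scale from 0.0 (broken) to 1.0 (fine).
    module_ratings = {
        "uschi_local_tests": 1.0,
        "phedex_transfer_quality": 0.6,
        "sam_tests": 1.0,
        "dashboard_job_failures": 0.9,
    }

    def site_status(ratings, warn=0.8, crit=0.5):
        """Aggregate the module ratings into one traffic-light style status for the site page."""
        worst = min(ratings.values())
        if worst < crit:
            return "red"
        if worst < warn:
            return "yellow"
        return "green"

    print(site_status(module_ratings))  # "yellow": one module is degraded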

34 HappyFace Project

35 HappyFace Project - 2: Our HappyFace instance currently monitors the following quantities at GridKa: local CMS infrastructure (the USCHI test-set); incoming/outgoing PhEDEx transfers (general transfer quality; classification of error types; classification of destination and source errors); status of the WLCG production environment (queries SAM test results and displays any failed tests, including a hyperlink to the error report); CMS Dashboard (searches the Job Summary service for failed jobs and prints statistics of the exit codes); fair share of CMS (displays the fraction of running CMS jobs); caching of all plots necessary for monitoring.

36 HappyFace Project - 3: Instances of HappyFace are currently used by: Uni Karlsruhe, RWTH Aachen, Uni Hamburg, Uni Göttingen. Code and documentation available under: Developed in Karlsruhe by V. Buege, V. Mauch and A. Trunov. Future plans: platform to share HappyFace modules; provide standardised XML output of all HappyFace instances. Contact: Viktor Mauch: Volker Buege:

37 Summary: CMS uses Grid technology for data access and processing. Multiple service challenges have proven the readiness for first data. PhEDEx is used for reliable large-scale data management. Experience has shown that monitoring is indispensable for the operation of such a heterogeneous and complex system. A multitude of monitoring services is available, but they are uncomfortable to use. The meta-monitoring HappyFace Project is very useful for the daily administration of services, e.g. at GridKa.

38 Backup

39 GridKa Resource Requests. *Numbers for 2013 uncertain. **Numbers for 2014 unofficial.
