LHCb Computing Resources: 2019 requests and reassessment of 2018 requests


LHCb Public Note
Reference: LHCb-PUB
Issue: 0
Revision: 0
Created: 30th August 2017
Last modified: 5th September 2017
Prepared by: LHCb Computing Project, C. Bozzi (Editor)


Abstract

This document presents the computing resources needed by LHCb in 2019 and a reassessment of the 2018 requests, resulting from the current experience of Run 2 data taking and from minor changes in the LHCb computing model parameters.

Table of Contents

1. Introduction
2. The LHCb Computing Model and processing plans
3. Extrapolation of storage resources at the end of 2017
4. Resources needed in 2018 and 2019
5. Summary of requests

List of Tables

Table 2-1: Assumed LHC proton-proton and heavy ion running time in 2018.
Table 4-1: Estimated CPU work needed for the different activities (unchanged).
Table 4-2: Disk storage needed for the different categories of LHCb data.
Table 4-3: Tape storage needed for the different categories of LHCb data.
Table 5-1: CPU power requested at the different Tier levels.
Table 5-2: LHCb disk request for each Tier level. For countries hosting a Tier1, the Tier2 contribution could also be provided at the Tier1.
Table 5-3: LHCb tape request for each Tier level.

List of Figures

Figure 1: (blue) 2016 disk requests, disk occupancies on (red) April 1st 2017 and (green) September 1st 2017, for different storage classes. The 2016 pledge is shown in violet.
Figure 2: (blue) 2016 tape requests, tape occupancies on (red) April 1st 2017 and (green) September 1st 2017, for different storage classes. The 2016 pledge is shown in violet.
Figure 3: (top) increment of and (bottom) required disk space at the end of the 2017 WLCG year, following the three extrapolation criteria described in the text. The violet histogram in the Total column represents the 2017 disk pledge.
Figure 4: (top) increment of and (bottom) required tape space at the end of the 2017 WLCG year, following the three extrapolation criteria described in the text. The violet histogram in the Total column represents the 2017 tape pledge.


1. Introduction

This document presents the computing resources needed by LHCb in the 2019 WLCG year (1) and a reassessment of the 2018 requests. It is based on the latest measurements of the LHCb computing model parameters and on the latest updates of the LHC running plans. In particular, the LHC efficiency in the first months of 2017 was worse than the one used to estimate the requests in previous reports. The impact of this change on the 2018 and 2019 requests is assessed.

The LHCb computing model, its implementation, recent changes and processing plans are described in Section 2. The determination of the storage resources needed at the end of 2017 is given in Section 3, where the LHC performance in the first part of 2017 is extrapolated until the end of the 2017 data taking. Resource estimates for 2018 and 2019 are given in Section 4. A summary of the requests is given in Section 5.

(1) For the purpose of this document, a given year always refers to the period between April 1st of that year and March 31st of the following year.

2. The LHCb Computing Model and processing plans

A detailed description of the LHCb Computing Model is given elsewhere [LHCb-PUB and LHCb-PUB]. Subsequent reports [LHCb-PUB, LHCb-PUB, LHCb-PUB, LHCb-PUB, LHCb-PUB, LHCb-PUB, LHCb-PUB] discussed further changes and their impact on the required resources. The most relevant features of the LHCb Computing Model are reported below.

Data are received from the online system in several streams:

o A FULL stream, where RAW events are reconstructed offline and then filtered (stripping) according to selection criteria specific to given analyses (stripping lines).
o A TURBO stream, in which the output of the online reconstruction is stored, converted offline into a micro-DST format and made available to analysts.

RAW data of all streams are saved on tape at the Tier0 and at one selected Tier1. The output of the offline reconstruction (RDST) of the FULL stream is saved on tape; the stripping output is replicated and distributed on disk storage. The stripping output can be in either DST format, which contains the complete reconstructed event, or micro-DST format, in which only signal candidates and possibly additional information are included. Stripping lines are designed such that as many lines as possible are written in micro-DST format. The micro-DST files from the TURBO stream are replicated on disk storage. All datasets made available for analysis on disk are also archived on tape for data preservation.

The production of simulated events runs continuously, with the aim of producing signal and background samples for a total number of simulated (and reconstructed) events of the order of 15% of the total number of collected real data events.

Estimates of the resources required for 2018 and 2019 are re-computed by taking into account the following changes with respect to the previous report (LHCb-PUB):

The LHC efficiency for physics observed in the first part of the 2017 data taking is significantly lower than expected (40% instead of 60%); the extrapolations of the storage resources needed at the end of the 2017 WLCG year (Section 3) are therefore lower than previous estimates. This enables LHCb to use the available disk space in 2017 by relaxing two measures that had an impact on physics analysis but nevertheless had to be taken to cope with the anticipated shortage of disk space:

o The fraction of TURBO data (35%) parked on tape during the 2016 data taking will be replicated on disk. TURBO data is not being parked on tape during the 2017 data taking, and no parking is foreseen in 2018 either.
o A full stripping cycle will be performed on 2016 data in 2017.

The MDST.DST files that were saved on tape as a backup of all events written to micro-DST format are no longer saved, since the content of the micro-DST files is now well established. This reduces the tape requests by 6 PB. This master DST was introduced to enable fast regeneration of micro-DST files in case information was found to be missing during the validation phase of the analyses on Run 2 datasets. It is no longer needed.

Additional resources for analysis preservation activities have been included, amounting to 7 kHS06 of CPU power and 0.1 PB of disk space at the Tier0.

Assumptions that are unchanged with respect to the previous report (LHCb-PUB) include:

o Running time for proton collisions of 7.8 million seconds in 2018, corresponding to an efficiency for physics of about 60% (Table 2-1);
o A month of heavy ion collisions in 2018, with concurrent heavy ion-proton collisions in fixed-target configuration (Table 2-1);
o Data will be fully re-stripped in 2017 and incrementally stripped in 2018; a legacy re-stripping of all Run 2 data will take place during LS2 in 2019;
o Two copies of the most recent processing pass of both data and simulation are kept on disk. For the next-to-most-recent processing pass, the number of copies is two for data and one for simulation;
o Stripping output throughput of 195 MB per live second of the LHC in 2018;
o TURBO throughput of 118 MB per live second of the LHC in 2018;
o FULL stream trigger rate of 9.4 kHz in 2018.

Parameter                          2018
Proton physics
  LHC run days                     150
  LHC efficiency                   0.60
  Approx. running seconds          7.8 x 10^6
  Number of bunches                2448
Heavy ion physics
  Approx. running seconds

Table 2-1: Assumed LHC proton-proton and heavy ion running time in 2018.

3. Extrapolation of storage resources at the end of 2017

The LHC efficiency observed so far in 2017 is lower than previously foreseen. Therefore, an extrapolation has been made to compute the storage resources that will be used by the end of the 2017 WLCG year, by taking the storage occupancy at the end of the 2016 WLCG year as the starting point, measuring the occupancy on September 1st 2017, and taking into account the other changes mentioned in Section 2.

The requests presented in 2016 and the actual storage usage on April 1st and September 1st 2017 are shown in Figure 1 (Figure 2) for disk (tape), as well as the 2017 pledges.

[Figure: Disk requests vs occupancy (PB), per storage class (Data, MC, User, Buffers, Other, Total)]
Figure 1: (blue) 2016 disk requests, disk occupancies on (red) April 1st 2017 and (green) September 1st 2017, for different storage classes. The 2016 pledge is shown in violet.

[Figure: Tape requests vs occupancy (PB), per storage class (RAW, RDST, Archive, Total)]
Figure 2: (blue) 2016 tape requests, tape occupancies on (red) April 1st 2017 and (green) September 1st 2017, for different storage classes. The 2016 pledge is shown in violet.

The extrapolation of storage resources to the end of the 2017 WLCG year depends on the LHC performance from the time of this writing until the end of the 2017 data taking. Three scenarios have been considered, in which the LHC performs either at the same level observed until now (pessimistic scenario, 40% efficiency), or 50% better (baseline scenario, 60% efficiency), or 100% better (optimistic scenario, 80% efficiency).
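As a quick cross-check of the parameters above, the short Python sketch below converts the assumed 2018 running time into live seconds and the quoted per-live-second throughputs into approximate yearly data volumes. It is a minimal illustration: the decimal unit convention (1 PB = 10^9 MB) and the simple throughput times live-time estimate are assumptions of the example, and the resulting volumes refer to a single copy, before replication, excluding simulation and other contributions.

    # Illustrative conversion of the 2018 LHC parameters into live time and data volumes.
    SECONDS_PER_DAY = 86400
    PB_PER_MB = 1e-9                      # decimal convention: 1 PB = 1e9 MB (assumption)

    run_days, efficiency = 150, 0.60      # Table 2-1
    live_seconds = run_days * SECONDS_PER_DAY * efficiency
    print(f"live seconds in 2018: {live_seconds:.2e}")        # ~7.8e6, as quoted above

    stripping_mb_per_s = 195              # stripping output per live second
    turbo_mb_per_s = 118                  # TURBO output per live second
    full_rate_hz = 9.4e3                  # FULL stream trigger rate

    print(f"stripping output:   {stripping_mb_per_s * live_seconds * PB_PER_MB:.2f} PB")  # ~1.5 PB
    print(f"TURBO output:       {turbo_mb_per_s * live_seconds * PB_PER_MB:.2f} PB")      # ~0.9 PB
    print(f"FULL stream events: {full_rate_hz * live_seconds:.2e}")                       # ~7.3e10 events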
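To make the scenario definitions concrete, the minimal sketch below scales a remaining live time and an effective accumulation rate by the three efficiency assumptions. Only the 40%, 60% and 80% efficiencies and their reading as "same as observed", "50% better" and "100% better" come from the text; the two numerical placeholders (remaining live seconds and disk growth per live second) are hypothetical.

    # Minimal sketch of the three extrapolation scenarios (placeholders are hypothetical).
    observed_efficiency = 0.40                    # level observed so far in 2017
    scenarios = {"pessimistic": 0.40, "baseline": 0.60, "optimistic": 0.80}

    remaining_live_s_if_unchanged = 3.0e6         # hypothetical live seconds left at the observed efficiency
    disk_pb_per_live_s = 3.1e-7                   # hypothetical disk growth per live second (single copy, stripping + TURBO)

    for name, eff in scenarios.items():
        scale = eff / observed_efficiency         # 1.0, 1.5, 2.0 -> "same", "50% better", "100% better"
        live_s = remaining_live_s_if_unchanged * scale
        print(f"{name:11s}: efficiency {eff:.0%}, projected extra disk ~{live_s * disk_pb_per_live_s:.1f} PB")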

In addition, based on current experience, it is assumed that simulation continues to accumulate 0.125 PB of data per month on disk, and half of that on tape. This is significantly smaller than expected, owing to the increased usage of filters (based on the stripping of real data) that allow only the fraction of events passing analysis-specific criteria to be saved on storage. Since the MDST.DST files are no longer produced, the tape space that was previously foreseen for MDST.DST (6.8 PB) is reduced to the tape space (0.7 PB) currently occupied by MDST.DST. Moreover, 2 PB of disk space were recovered following dataset popularity studies.

The expected increments and total amounts of disk (tape) in these three scenarios are shown in Figure 3 (Figure 4). Taking as baseline the scenario in which the LHC runs at 60% efficiency in the remaining months of 2017, the disk (tape) increment will be 7.1 PB (13.7 PB), which, taking into account the current storage usage, leads to an expected total disk (tape) usage of 29.6 PB (54.2 PB), i.e. 15% (20%) less than the 2017 pledges.

[Figure: Disk increments Apr17-Apr18 (PB) and Disk Apr18 extrapolations (PB), per storage class (Data, MC, User, Buffers, Other, Total), for the Pessimistic, Baseline and Optimistic scenarios]
Figure 3: (top) increment of and (bottom) required disk space at the end of the 2017 WLCG year, following the three extrapolation criteria described in the text. The violet histogram in the Total column represents the 2017 disk pledge.

[Figure: Tape increments Apr17-Apr18 (PB) and Tape Apr18 extrapolations (PB), per storage class (RAW, RDST, Archive, Total), for the Pessimistic, Baseline and Optimistic scenarios]
Figure 4: (top) increment of and (bottom) required tape space at the end of the 2017 WLCG year, following the three extrapolation criteria described in the text. The violet histogram in the Total column represents the 2017 tape pledge.
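The consistency of the baseline figures quoted above can be checked with a line of arithmetic; in the sketch below the 2017 pledge values are back-calculated from the quoted totals and percentages, and are therefore only approximate.

    # Back-of-the-envelope check of the baseline numbers quoted above (PB).
    disk_total, tape_total = 29.6, 54.2           # expected usage at the end of the 2017 WLCG year
    disk_pledge = disk_total / (1 - 0.15)         # ~34.8 PB implied by "15% less than the 2017 pledge"
    tape_pledge = tape_total / (1 - 0.20)         # ~67.8 PB implied by "20% less than the 2017 pledge"
    print(f"implied 2017 pledges: disk ~{disk_pledge:.1f} PB, tape ~{tape_pledge:.1f} PB")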

4. Resources needed in 2018 and 2019

Table 4-1 presents, for the different activities, the CPU work estimates obtained when applying the model defined above. The only change with respect to the previous requests shown in LHCb-PUB is due to the resources added for analysis preservation (7 kHS06, a 1.4% effect).

CPU work in WLCG year (kHS06.years)       2018    2019
Prompt reconstruction                       49       0
First pass stripping                        20       0
Full restripping                             0      61
Incremental (re-)stripping
Processing of heavy ion collisions          38       0
Simulation
VoBoxes and other services                   4       4
User analysis
Analysis preservation                        7       7
Total work (kHS06.years)

Table 4-1: Estimated CPU work needed for the different activities (unchanged).

Table 4-2 presents, for the different data classes, the forecast total disk space usage at the end of the years 2018 and 2019 when applying the baseline model described in the previous section. Table 4-3 shows, for the different data classes, the forecast total tape usage at the end of the years 2018 and 2019.

The disk space in 2018 is 0.7 PB (1.6%) lower than the previous requests shown in LHCb-PUB, with small rearrangements between the stripping, TURBO and simulated data. The tape space in 2018 is 18.7 PB (19%) lower, due to the suppression of MDST.DST (6.1 PB less space than foreseen) and to the smaller space needed by RAW, RDST and ARCHIVE (4.9 PB less), as a direct consequence of the lower LHC efficiency in 2017. For 2019, the disk space is 1.8 PB (3.5%) lower than the previous requests shown in LHCb-PUB, while the tape requests are 19.2 PB (18%) lower.
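The relative changes quoted above also fix the approximate scale of the corresponding totals; the back-of-the-envelope conversion below is purely illustrative, and the derived values are approximate by construction.

    # Approximate totals implied by the quoted absolute and relative differences.
    cpu_total = 7 / 0.014          # ~500 kHS06.years: 7 kHS06 is quoted as a 1.4% effect
    disk_prev_2018 = 0.7 / 0.016   # ~44 PB: previous 2018 disk request (now 0.7 PB, i.e. 1.6%, lower)
    tape_prev_2018 = 18.7 / 0.19   # ~98 PB: previous 2018 tape request (now 18.7 PB, i.e. 19%, lower)
    disk_prev_2019 = 1.8 / 0.035   # ~51 PB: previous 2019 disk request
    tape_prev_2019 = 19.2 / 0.18   # ~107 PB: previous 2019 tape request
    print(cpu_total, disk_prev_2018, tape_prev_2018, disk_prev_2019, tape_prev_2019)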

Disk storage usage forecast (PB)          2018    2019
Stripped real data
TURBO data
Simulated data
User data
Heavy ion data
RAW and other buffers
Other
Analysis preservation
Total

Table 4-2: Disk storage needed for the different categories of LHCb data.

Tape storage usage forecast (PB)          2018    2019
Raw data
RDST
MDST.DST
Heavy ion data
Archive
Total

Table 4-3: Tape storage needed for the different categories of LHCb data.

5. Summary of requests

Table 5-1 shows the CPU requests at the various Tiers, as well as for the HLT farm and Yandex. We assume that the HLT and Yandex farms will provide the same level of computing power as in the past; their contributions are therefore subtracted from the requests made to WLCG. The required resources are apportioned between the different Tiers taking into account the capacities that are already installed.

The disk and tape estimates shown in the previous section have to be broken down into the fractions to be provided by the different Tiers, using the distribution policies described in LHCb-PUB. The results of this sharing are shown in Table 5-2 and Table 5-3. It should be noted that, although the total storage capacity is given globally for the 8 Tier1 sites pledging resources to LHCb, it is mandatory that the sharing of this storage between Tier1 sites remains very similar from one year to another: the annual increments are small, existing data are expected to remain where they are, runs assigned to each Tier1 are also expected to be reprocessed there, and the analysis data are stored at these same Tier1s. Some flexibility is offered when replicating the data to a second Tier1, taking into account the available space. However, a baseline increase is mandatory at all sites, since otherwise they can no longer be used for the placement of new data.

LHCb will be upgraded during the LHC shutdown of 2019-2020 and will resume data taking in 2021. There will be many changes in the computing activities in the upgrade era that will need to be prepared and properly tested before then. These activities are being planned in a Technical Design Report of software and computing for the LHCb upgrade, due by the end of 2017. The part more closely related to the computing model and the required computing resources will be finalized in a document to be released in mid-2018. The computing resources required in 2020 will depend on the outcome of the upgrade activities and on the detailed planning, which is not ready yet. Therefore, the LHCb computing requests for 2020 are not presented in this report.

CPU Power (kHS06)                         2018    2019
Tier0
Tier1
Tier2
Total WLCG
HLT farm
Yandex
Total non-WLCG
Grand total

Table 5-1: CPU power requested at the different Tier levels.
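As an illustration of how the rows of Table 5-1 fit together, the minimal sketch below reproduces the bookkeeping described above with purely hypothetical numbers: the HLT farm and Yandex contributions are assumed unchanged with respect to the past and are added to the WLCG request to obtain the grand total (equivalently, they are subtracted from the total need to obtain the request to WLCG).

    # Minimal sketch of the Table 5-1 bookkeeping; all numbers below are hypothetical placeholders.
    tier0, tier1, tier2 = 80.0, 250.0, 120.0   # kHS06 requested from WLCG at each Tier level
    hlt_farm, yandex = 50.0, 10.0              # kHS06 assumed from non-WLCG sites, as in the past

    total_wlcg = tier0 + tier1 + tier2
    total_non_wlcg = hlt_farm + yandex
    grand_total = total_wlcg + total_non_wlcg  # total CPU power needed by the experiment
    # Equivalently, the WLCG request is the total need minus the assumed non-WLCG contributions:
    assert total_wlcg == grand_total - total_non_wlcg
    print(f"WLCG: {total_wlcg} kHS06, non-WLCG: {total_non_wlcg} kHS06, total: {grand_total} kHS06")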

Disk (PB)          2018    2019
Tier0
Tier1
Tier2
Total

Table 5-2: LHCb disk request for each Tier level. For countries hosting a Tier1, the Tier2 contribution could also be provided at the Tier1.

Tape (PB)          2018    2019
Tier0
Tier1
Total

Table 5-3: LHCb tape request for each Tier level.
