LHCb Computing Resources: 2018 requests and preview of 2019 requests
LHCb Public Note
Issue: 0
Revision: 0
Reference: LHCb-PUB
Created: 23rd February 2017
Last modified: 28th February 2017
Prepared by: LHCb Computing Project, C. Bozzi (Editor)
Abstract

This document presents a reassessment of computing resources needed by LHCb in 2018 and a preview of computing requests for 2019, as resulting from the current experience of Run 2 data taking and recent changes in the LHCb computing model parameters.
Table of Contents

1. Introduction
2. The LHCb Computing Model
3. Processing plans for 2016 and beyond
   3.1. Simulation
   3.2. Data taking
   3.3. Processing model for Run 2
   3.4. Data distribution models
4. Resource estimates under various hypotheses
5. Resources needed in 2018 and 2019
6. Summary of requests
List of Tables

Table 3-1: Assumed LHC proton-proton and heavy ion running time for 2017 and 2018
Table 5-1: Estimated CPU work needed for the different activities
Table 5-2: Disk storage needed for the different categories of LHCb data
Table 5-3: Tape storage needed for the different categories of LHCb data
Table 6-1: CPU power requested at the different Tier levels
Table 6-2: LHCb disk request for each Tier level. For countries hosting a Tier1, the Tier2 contribution could also be provided at the Tier1
Table 6-3: LHCb tape request for each Tier level

List of Figures

Figure 4-1: Disk estimates under various hypotheses. The top plot shows the total request, the bottom plot the increment with respect to the previous year
Figure 4-2: Tape estimates under various hypotheses. The top plot shows the total request, the bottom plot the increment with respect to the previous year
1. Introduction

This document presents the computing resources needed by LHCb in the 2018 WLCG year [1] and a preview of the 2019 requests. It is based on the latest measurements of the LHCb computing model parameters and the latest updates of the LHC running plans.

[1] For the purpose of this document, a given year always refers to the period between April 1st of that year and March 31st of the following year.

This document is organized as follows. The LHCb computing model, its implementation and recent changes are described in section 2, while processing plans are described in section 3. Resource estimates in different scenarios are given in section 4. A summary of the requests is given in section 6.

2. The LHCb Computing Model

A detailed description of the LHCb Computing Model is given elsewhere [LHCb-PUB and LHCb-PUB]. Subsequent reports [LHCb-PUB, LHCb-PUB, LHCb-PUB, LHCb-PUB, LHCb-PUB] discussed further changes and their impact on the required resources.

The 2017 requests were reassessed in Autumn 2016, given that the expected LHC live times in both 2016 and 2017 were considerably higher than previously foreseen. In addition, some parameters of the computing model, measured on the 2016 datasets, were different from those used to compute the requests. Mitigation measures were put in place and the LHCb computing model was modified, as discussed in [LHCb-PUB], with the purpose of keeping the same resource envelope for 2016, limiting the increase in 2017 with respect to the previous requests, and staying within a reasonable budget in the following years.

The most relevant features of the LHCb Computing Model are reported below.

- Data are received from the online system in two streams:
  - a TURBO stream, where the output of the online reconstruction is stored on tape and subsequently resurrected in a micro-DST format and made available to analysts;
  - a FULL stream, where RAW events are reconstructed offline and then filtered according to selection criteria specific to given analyses (stripping lines). RAW data and the output of the offline reconstruction (RDST) are saved on tape; the stripping output is replicated and distributed on disk storage.
- The stripping output can be in either DST or micro-DST format: the complete reconstructed event is contained in the former, while only the signal candidates and possible additional information are included in the latter. Stripping lines are designed such that as many lines as possible are written in micro-DST format.
- The production of simulated events runs continuously, with the aim of producing signal and background samples for a total number of simulated (and reconstructed) events of the order of 15% of the total number of collected real data events.

In the previous reports, preliminary estimates of the resources required for 2018 and 2019 were computed with the following assumptions:
- Running time for proton collisions of 7.8 million seconds in 2017 and 7.8 million seconds in 2018, corresponding to an efficiency for physics of about 60%.
- A week of proton-argon collisions in fixed-target mode in 2017, assuming an efficiency of 30%. In 2018, a month of heavy-ion collisions, with concurrent heavy-ion proton collisions in fixed-target configuration, will also take place.
- Trigger output rates of 8 kHz and 4 kHz for the FULL and TURBO streams, respectively.
- Two copies of the most recent processing of both data and simulation are kept on disk. For the next-to-most-recent processing, the number of copies is two for data and one for simulation.
- The stripping process produces an output of 120 MB per live second of LHC.
- The event sizes for RAW, RDST, TURBO, DST and micro-DST data are 65, 50, 20, 120 and 10 kB, respectively.

3. Processing plans for 2016 and beyond

3.1. Simulation

Simulation will continue to produce samples for the LHCb Upgrade studies, the simulation of events according to the observed Run 2 data-taking conditions, and the implementation of the latest updates to generators and decay processes. The implementation of fast MC simulation options is also in progress, with either the simplification of time-consuming algorithms (e.g. the propagation and detection of photons in the RICH detectors) or a parameterized detector response, in order to mitigate the CPU impact of the full GEANT4 detector simulation. As an alternative, the "particle gun" approach, in which only the signal decay is generated, simulated and reconstructed, rather than the entire pp collision, is also available. A reduction factor of about 20 in both CPU and disk space is estimated with this technique. Other techniques have been deployed to reduce the storage requirements, by running the trigger and stripping steps in so-called "filtering" mode, in which generated MC events that fail the trigger or stripping criteria are thrown away.

Until very recently, simulation was saved in the DST format only. The latest simulation cycle, which started in June 2016, also includes the implementation of a micro-DST format for MC, where only the signal part of the event is saved, with considerable disk space savings.

3.2. Data taking

Table 3-1 shows the assumptions made concerning the availability of the LHC for physics running in 2017 and 2018. In 2016, a detailed investigation [2] optimized the trigger bandwidth division between FULL and TURBO at 8 kHz and 4 kHz, respectively. However, given that LHCb takes data at a constant average number of collisions per bunch crossing, for a given trigger configuration these rates scale with the number of bunches colliding in the LHC.

[2] and links therein.
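For orientation, the sketch below (a Python illustration of the arithmetic only, using the 8 kHz and 4 kHz rates above, the event sizes from the assumption list in section 2 and 7.8 million live seconds per year; replication, parking and deletion are ignored) converts these rates into throughputs and annual data volumes.

```python
# Back-of-envelope data-volume arithmetic from the previous-report assumptions.
# All parameter values are taken from the text above; the accounting of what
# goes to tape versus disk and of replication is deliberately simplified.

LIVE_SECONDS = 7.8e6          # proton-physics running seconds per year
FULL_RATE_HZ = 8e3            # FULL stream trigger rate
TURBO_RATE_HZ = 4e3           # TURBO stream trigger rate
RAW_KB, RDST_KB, TURBO_KB = 65.0, 50.0, 20.0   # event sizes in kB
STRIPPING_MB_PER_S = 120.0    # stripping output per live second (previous report)

def annual_pb(throughput_mb_per_s: float, seconds: float = LIVE_SECONDS) -> float:
    """Convert a throughput in MB per live second into PB per year."""
    return throughput_mb_per_s * seconds / 1e9

raw_mb_s = FULL_RATE_HZ * RAW_KB / 1e3       # 8 kHz * 65 kB = 520 MB/s to tape
rdst_mb_s = FULL_RATE_HZ * RDST_KB / 1e3     # 8 kHz * 50 kB = 400 MB/s to tape
turbo_mb_s = TURBO_RATE_HZ * TURBO_KB / 1e3  # 4 kHz * 20 kB =  80 MB/s

print(f"RAW to tape:      {annual_pb(raw_mb_s):.1f} PB/year")
print(f"RDST to tape:     {annual_pb(rdst_mb_s):.1f} PB/year")
print(f"TURBO stream:     {annual_pb(turbo_mb_s):.1f} PB/year")
print(f"Stripping output: {annual_pb(STRIPPING_MB_PER_S):.1f} PB/year (before replication)")
```

Under these simplified assumptions, the FULL-stream RAW and RDST outputs dominate the tape growth at roughly 4 and 3 PB per year, while the TURBO and stripping outputs amount to about 0.6 and 0.9 PB per year before replication.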
It is foreseen that in 2017 and 2018 this number will increase from 2076 to 2448, i.e. by a factor of about 1.18. The trigger rate is therefore scaled up by this factor.

The average event size in the TURBO stream went up from the originally planned 10 kB to 50 kB in 2016. While the contents of the TURBO format are being optimized, it was decided to park 35% of the TURBO stream on tape for the entire Run 2. A review of the various lines entering the TURBO stream led to the conclusion that the throughput on disk of 2016 TURBO data can be limited to 100 MB per live second of the LHC, to be compared with the 20 kB x 4 kHz = 80 MB/s foreseen in the previous report. This value, rescaled by the factor discussed above to take into account the increased number of bunches, is used to compute the requests.

Table 3-1 lists, for 2017 and 2018, the assumed LHC run days, LHC efficiency, approximate running seconds and number of bunches for proton physics, and the approximate running seconds for heavy-ion physics.

Table 3-1: Assumed LHC proton-proton and heavy ion running time for 2017 and 2018.

3.3. Processing model for Run 2

The data taking model exploited successfully so far in Run 2 will also continue in 2018. No offline reprocessing of RAW data is foreseen during Run 2. If necessary, this can be done during Long Shutdown 2; however, no resources are currently foreseen for this activity.

The processing scenario for proton physics in the previous report assumed a full re-stripping to be performed at the end of each year of data taking and another one a year later, with two incremental strippings performed in between. In all cases, the entire dataset accumulated up to that point was supposed to be involved. In the meantime, given the data volumes involved, the limited bandwidth for tape recalls at the Tier1 sites and the necessity of avoiding additional stripping campaigns running concurrently with data taking periods, it became clear that only the data accumulated in a given year can be re-stripped at the end of that year. Nevertheless, a full re-stripping of all Run 2 data will take place during LS2, in WLCG year 2019. As a temporary mitigation measure, at the end of 2016 data taking it was decided to perform a full re-stripping of the 2016 data and an incremental stripping of the 2015 data. The 2016 re-stripped data will completely replace the previous stripping cycle and avoid the incremental strippings that were to be performed in 2017. The same strategy is foreseen for the 2018 data taking.

The stripping throughput used in this report is 165 MB per live second of the LHC, as resulting from recent measurements in preparation for the upcoming stripping campaign. This is very similar to the 160 MB/s measured in the 2016 data taking, and significantly higher than the value of 120 MB/s used in the previous report.

3.4. Data distribution models

In the LHCb computing model used in the present document, the following policy is used for the distribution of proton physics data on disk at Tier0, Tier1 and Tier2-D sites [3]:
- For real data, two copies of each dataset are available to analysts for both the most recent and the previous version of the processing.
- For simulated data and the most recent processing, two copies of each dataset are kept on disk; for the previous processing version, there is only one disk copy.

Further reducing the number of copies would impact the efficiency of data analysis, as scheduled and unscheduled outages of computing centers, as well as the increase in the number of jobs in any single site, would slow down the time needed by analysts to process data.

[3] Tier2-D sites are a selection of Tier2 sites with disk that host part of the most recent version of official datasets for analysis, both for real and simulated data, and are therefore available for running user analysis jobs requiring those data. The ensemble of Tier2-D sites gives a total disk space equivalent to an average Tier1 site.

4. Resource estimates under various hypotheses

This section shows the evolution of the computing resources under different assumptions. Figure 4-1 and Figure 4-2 show the total and incremental disk and tape requirements for 2017, 2018 and 2019. The variations in CPU requirements are small and are not reported.

In each plot, the leftmost entry in the 2017, 2018 and 2019 requests corresponds to those reported in LHCb-PUB. The other entries show the effect (on top of each other) of:

- the reorganization of the stripping campaigns (see Section 3.3);
- the variation of the stripping throughput (120 → 165 MB/s, see Section 3.3);
- the variation of the TURBO throughput (80 → 100 MB/s, see Section 3.2);
- the rescaling of the FULL, stripping (165 → 195 MB/s) and TURBO (100 → 118 MB/s) throughputs due to the different number of bunches (2076 → 2448, see Section 3.2).

For 2017, the last entry represents the WLCG pledge, taken from REBUS.

The resources needed in 2018 and 2019 are computed using the last model, which implements the new scheme for stripping and takes into account the various increases in the stripping and TURBO throughputs, rescaled with the number of bunches expected in 2018 and 2019. The detailed requests are presented in the next section.
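The cumulative impact of these successive changes on the annual output volume can be illustrated with a short sketch (again only an illustration: the stripping and TURBO throughputs per live second are the values quoted above, 7.8 million proton-physics seconds per year are assumed, and replication, TURBO parking and heavy-ion data are ignored).

```python
# Effect of the successive model updates on the annual stripped and TURBO
# output, per live second of LHC and integrated over 7.8e6 proton-physics
# seconds.  Throughput values are those quoted in sections 3.2 and 3.3.

LIVE_SECONDS = 7.8e6
BUNCH_FACTOR = 2448 / 2076          # bunch increase from 2076 to 2448, about 1.18

scenarios = [
    ("previous report",          {"stripping": 120.0, "turbo": 80.0}),
    ("updated stripping",        {"stripping": 165.0, "turbo": 80.0}),
    ("updated TURBO",            {"stripping": 165.0, "turbo": 100.0}),
    ("rescaled by bunch number", {"stripping": 165.0 * BUNCH_FACTOR,   # ~195 MB/s
                                  "turbo": 100.0 * BUNCH_FACTOR}),     # ~118 MB/s
]

for name, rates in scenarios:
    total_mb_s = sum(rates.values())
    annual_pb = total_mb_s * LIVE_SECONDS / 1e9
    print(f"{name:26s} stripping {rates['stripping']:6.1f} MB/s, "
          f"TURBO {rates['turbo']:6.1f} MB/s -> {annual_pb:4.2f} PB/year")
```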
Figure 4-1: Disk estimates under various hypotheses. The top plot shows the total request, the bottom plot the increment with respect to the previous year.
Figure 4-2: Tape estimates under various hypotheses. The top plot shows the total request, the bottom plot the increment with respect to the previous year.
5. Resources needed in 2018 and 2019

Table 5-1 presents, for the different activities, the CPU work estimates when applying the baseline model defined above.

CPU work in WLCG year (kHS06.years)        2018    2019
  Prompt reconstruction                      49       0
  First-pass stripping                       20       0
  Full re-stripping                           0      61
  Incremental (re-)stripping
  Processing of heavy-ion collisions         38       0
  Simulation
  VoBoxes and other services                  4       4
  User analysis
  Total work (kHS06.years)

Table 5-1: Estimated CPU work needed for the different activities.

Table 5-2 presents, for the different data classes, the forecast total disk space usage at the end of the 2018 and 2019 WLCG years when applying the baseline model described in the previous section. Table 5-3 shows, for the different data classes, the forecast total tape usage at the end of the same years.

Disk storage usage forecast (PB), by category: stripped real data, TURBO data, simulated data, user data, heavy-ion data, RAW and other buffers, other, total.

Table 5-2: Disk storage needed for the different categories of LHCb data.

Tape storage usage forecast (PB), by category: RAW data, RDST, MDST.DST, heavy-ion data, archive, total.

Table 5-3: Tape storage needed for the different categories of LHCb data.
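As a guide to how the work figures of Table 5-1 relate to the power requests of the next section, the sketch below divides the total work (in kHS06.years) by the one-year accounting period and by an illustrative CPU efficiency. The per-activity values marked as coming from Table 5-1 are those quoted above; the simulation and user-analysis entries and the efficiency factor are placeholders introduced only for this illustration.

```python
# Relation between CPU work (kHS06.years) and the average power (kHS06) needed
# to deliver it within one WLCG year.  Values labelled "Table 5-1" are the 2018
# figures quoted in that table; the remaining values are hypothetical
# placeholders chosen only to make the sketch runnable, not figures from this note.

PLACEHOLDER_SIMULATION = 100.0      # kHS06.years, hypothetical
PLACEHOLDER_USER_ANALYSIS = 25.0    # kHS06.years, hypothetical
CPU_EFFICIENCY = 0.85               # illustrative assumption
ACCOUNTING_PERIOD_YEARS = 1.0       # one WLCG year

work_2018 = {
    "prompt reconstruction": 49.0,        # Table 5-1
    "first-pass stripping": 20.0,         # Table 5-1
    "heavy-ion processing": 38.0,         # Table 5-1
    "VoBoxes and other services": 4.0,    # Table 5-1
    "simulation": PLACEHOLDER_SIMULATION,
    "user analysis": PLACEHOLDER_USER_ANALYSIS,
}

total_work = sum(work_2018.values())                    # kHS06.years
average_power = total_work / ACCOUNTING_PERIOD_YEARS    # kHS06
provisioned_power = average_power / CPU_EFFICIENCY      # naive allowance for inefficiencies

print(f"Total 2018 work:   {total_work:.0f} kHS06.years")
print(f"Average power:     {average_power:.0f} kHS06")
print(f"With {CPU_EFFICIENCY:.0%} efficiency: {provisioned_power:.0f} kHS06 to be provisioned")
```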
6. Summary of requests

Table 6-1 shows the CPU requests at the various tiers, as well as for the HLT farm and Yandex. We assume that the HLT and Yandex farms will provide the same level of computing power as in the past; therefore we subtract the contributions of these two sites from our requests to WLCG. The required resources are apportioned between the different Tiers taking into account the capacities that are already installed. The disk and tape estimates shown in the previous section have to be broken down into fractions to be provided by the different Tiers, using the distribution policies described in LHCb-PUB. The results of this sharing are shown in Table 6-2 and Table 6-3.

CPU power (kHS06), by contribution: Tier0, Tier1, Tier2, Total WLCG, HLT farm, Yandex, Total non-WLCG, Grand total.

Table 6-1: CPU power requested at the different Tier levels.

Disk (PB): Tier0, Tier1, Tier2, Total.

Table 6-2: LHCb disk request for each Tier level. For countries hosting a Tier1, the Tier2 contribution could also be provided at the Tier1.

Tape (PB): Tier0, Tier1, Total.

Table 6-3: LHCb tape request for each Tier level.

There is a slight (2%) increase of the CPU resources with respect to the previously scrutinized requests. For disk, there is a small (6%) decrease in 2018. For tape, there is a 10% increase, mainly due to the increased trigger rate.
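The apportioning logic described in this section, subtracting the assumed HLT-farm and Yandex contributions from the total CPU need and sharing the remainder across the Tier levels, can be summarized in a few lines. Every number in the sketch is hypothetical; the actual figures are those of Table 6-1, and the real split follows the installed capacities and the distribution policies described in LHCb-PUB.

```python
# Sketch of the CPU apportioning described above: the HLT farm and Yandex are
# assumed to keep delivering their past capacity, their contribution is
# subtracted from the total need, and the remainder becomes the request to
# WLCG, shared across Tier levels.  All numbers are hypothetical placeholders.

TOTAL_CPU_NEEDED = 500.0   # kHS06, hypothetical total need
HLT_FARM = 50.0            # kHS06, hypothetical past-level contribution
YANDEX = 10.0              # kHS06, hypothetical past-level contribution
TIER_SHARES = {"Tier0": 0.25, "Tier1": 0.45, "Tier2": 0.30}  # illustrative split

wlcg_request = TOTAL_CPU_NEEDED - (HLT_FARM + YANDEX)
print(f"Request to WLCG: {wlcg_request:.0f} kHS06")
for tier, share in TIER_SHARES.items():
    print(f"  {tier}: {wlcg_request * share:.0f} kHS06")
```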