LHCb Computing Resource usage in 2017


LHCb Computing Resource usage in 2017

LHCb-PUB /03/2018
LHCb Public Note
Issue: First version
Revision: 0
Reference: LHCb-PUB
Created: 1st February 2018
Last modified: 12th April 2018
Prepared by: LHCb Computing Project, C. Bozzi (Editor)


Abstract

This document reports the usage of computing resources by the LHCb collaboration during the period January 1st - December 31st 2017. The data in the following sections have been compiled from the EGI Accounting portal. For LHCb-specific information, the data are taken from the DIRAC Accounting at the LHCb DIRAC Web portal.

Table of Contents

1. INTRODUCTION
2. COMPUTING ACTIVITIES DURING 2017
3. USAGE OF CPU RESOURCES
3.1. WLCG ACCOUNTING
3.2. LHCB DIRAC CPU ACCOUNTING
4. USAGE OF STORAGE RESOURCES
4.1. TAPE STORAGE
4.2. DISK AT TIER0, TIER1S AND TIER2S
5. DATA POPULARITY
6. THROUGHPUT RATES
7. SUMMARY
APPENDIX

List of Figures

Figure 3-1: Summary of LHCb computing activities (used CPU power) from Jan 1st to December 31st 2017. The line represents the 2017 pledge.
Figure 3-2: Summary of LHCb site usage (used CPU power) from Jan 1st until December 31st 2017.
Figure 3-3: Average monthly CPU power provided by the Tier1s (and Tier0) to LHCb from Jan 1st until December 31st 2017.
Figure 3-4: Average monthly CPU power provided by the Tier2s to LHCb from Jan 1st until December 31st 2017.
Figure 3-5: CPU time consumed by LHCb from Jan 1st until December 31st 2017 (in days), for WLCG sites, excluding the HLT farm (top, per country), and non-WLCG sites, including the HLT farm and the CERN cloud (bottom, per site).
Figure 3-6: Usage of LHCb Tier0/1s during Jan 1st - Dec 31st 2017. The top plot shows the used CPU power as a function of the different activities, while the bottom plot shows the contributions from the different sites.
Figure 3-7: Usage of resources in WLCG sites other than Tier0 and Tier1s, pledging to LHCb, during Jan 1st - Dec 31st 2017. The top plot shows the used CPU power as a function of the different activities, while the bottom plot shows the contributions from the different sites. User jobs (shown in magenta in the top plot) are further detailed in Figure 3-8.
Figure 3-8: Running user jobs on Tier0 and Tier1 sites (top), and outside (bottom) during Jan 1st - Dec 31st 2017.
Figure 3-9: CPU time as a function of the job final status for all jobs (top), and as a function of activity for stalled jobs (bottom).
Figure 4-1: Tape space occupancy for RAW data, January 1st - December 31st 2017.
Figure 4-2: Tape occupancy for RDST data, January 1st - December 31st 2017.
Figure 4-3: Tape occupancy for derived data, January 1st - December 31st 2017.
Figure 4-4: Usage of Disk resources at CERN and Tier1s from Jan 1st to December 31st 2017. Real data and simulated data are shown in the top and bottom figures.
Figure 4-5: Usage of buffer disk from January 1st to December 31st 2017. The large increase at the end of the year is due to pre-staging from tape in view of the re-stripping campaign.
Figure 4-6: Usage of disk resources reserved for Users from January 1st to December 31st 2017.
Figure 4-7: Usage of disk space at Tier-2D sites from Jan 1st to December 31st 2017, for real data (top) and simulated data (bottom).
Figure 5-1: Volume of accessed datasets as a function of the number of accesses in the last 13 weeks.
Figure 5-2: Volume of accessed datasets as a function of the number of accesses in the last 26 weeks.
Figure 5-3: Volume of accessed datasets as a function of the number of accesses in the last 52 weeks.
Figure 6-1: Throughput rates for exports of RAW data from the online storage to the CERN Tier0 (top) and from the Tier0 to Tier1s (bottom), which also includes pre-staging.

List of Tables

Table 1-1: LHCb estimated resource needs for 2017 (September 2016).
Table 1-2: Site 2017 pledges for LHCb.
Table 3-1: Average CPU power provided to LHCb during Jan-Dec 2017 (Tier0 + Tier1s).
Table 3-2: Average CPU power provided to LHCb from Jan 1st until December 31st 2017, in WLCG sites other than the Tier0 and Tier1s.
Table 4-1: Situation of Disk Storage resource usage as of February 8th 2018, available and installed capacity as reported by SRM, and 2017 pledge. The contribution of each Tier1 site is also reported. (*) Due to a prolonged site outage following an incident, the CNAF figures refer to November 9th 2017.
Table 4-2: Available and used disk resources at Tier-2D sites on February 8th 2018.
Table 5-1: Volume (as percentage of total data volume) of datasets accessed at least once in the last 13, 26, 52 weeks, in various periods as reported in this and previous documents.
Table A-1: Summary of the scrutinized, pledged, deployed and used resources in the 2017 WLCG year.


1. Introduction

As part of the WLCG resource review process, experiments are periodically asked to justify the usage of the computing resources that have been allocated to them. The requests for the 2017 period were presented in LHCb-PUB and, following the data taking in 2015, reassessed in LHCb-PUB. Table 1-1 shows the requests to WLCG. For CPU, an additional average power (over the year) of 10 kHS06 and 10 kHS06 was deemed to be available from the HLT and Yandex farms, respectively.

              CPU (kHS06)   Disk (PB)   Tape (PB)
Tier0
Tier1
Tier2                                   n/a
Total WLCG

Table 1-1: LHCb estimated resource needs for 2017 (September 2016).

The last update to the pledge of computing resources for LHC from the different sites can be found in:
The LHCb numbers for 2017 are summarized in Table 1-2.

              CPU (kHS06)   Disk (PB)   Tape (PB)
Tier0
Tier1
Tier2
Total WLCG
Non-WLCG      20

Table 1-2: Site 2017 pledges for LHCb.

The present document covers the resource usage by LHCb in the period January 1st - December 31st 2017. It is organized as follows: Section 2 presents the computing activities; Section 3 shows the usage of CPU resources; Section 4 reports on the usage of storage resources. Section 5 shows the results of a study on data popularity. Throughput rates are discussed in Section 6. Finally, everything is summarized in Section 7.

2. Computing activities during 2017

The usage of offline computing resources involved: (a) the production of simulated events, which runs continuously; (b) running user jobs, which is also continuous; (c) stripping cycles before and after the end of data taking; (d) processing (i.e. reconstruction and stripping of the FULL stream, µDST streaming of the TURBO stream) of data taken in 2017 in proton-proton collisions; (e) centralized production of ntuples for analysis working groups.

Activities related to the 2017 data taking were tested in May and started at the beginning of the LHC physics run in mid-June. Processing and export of the data transferred from the pit to offline proceeded steadily throughout.

We recall that LHCb implemented in Run2 a trigger strategy in which the high-level trigger is split into two parts. The first one (HLT1), synchronous with data taking, writes events at a 150 kHz output rate into a temporary disk buffer located on the HLT farm nodes. Real-time calibrations and alignments are then performed and used in the second high-level trigger stage (HLT2), where event reconstruction algorithms as close as possible to those run offline are applied and event selection takes place. Events passing the high-level trigger selections are sent offline, either via a FULL stream of RAW events, which are then reconstructed and processed as in Run1, or via a TURBO stream, which directly records the results of the online reconstruction on tape. TURBO data are subsequently reformatted into a µDST format that does not require further processing, are stored on disk and can be used right away for physics analysis. By default, an event of the TURBO stream contains only information related to signal candidates. In order to ease the migration of analyses that also use information from the rest of the event, it was allowed in 2016 data taking to persist the entire output of the online reconstruction (TURBO++). This temporary measure increased the average size of a TURBO event from 10 kB to about 50 kB, with obvious consequences on the required storage space. Efforts were made in 2017 to slim the TURBO content by enabling analysts to save only the information that is really needed, resulting in an average TURBO event size of 30 kB. Effort was also put into streaming the TURBO output, in order to optimize data access. Besides easing and optimizing the analysis of Run2 data, these efforts are also crucial in view of the Run3 upgrade where, due to offline computing limitations, the default data format for analysis will be based on the output of the TURBO stream.

In addition to the stripping activity concurrent with data taking, an incremental stripping cycle was run in March-April 2017 on data taken in 2015 and 2016. A full re-stripping of 2015, 2016 and 2017 data was started in autumn 2017.

Since November 9th 2017, the INFN-CNAF Tier1 center has no longer been available, due to a major incident. This outage had a significant impact on the LHCb operational activities. Other sites helped by providing more resources, so that the global CPU work stayed unchanged. Nevertheless, the stripping cycles that were started in November are not complete yet, as about 20% of the data located at CNAF still need to be processed.

As in previous years, LHCb continued to make use of opportunistic resources that are not pledged to WLCG, but that contributed significantly to the overall usage.
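As a rough illustration of how the TURBO event sizes quoted above (10 kB signal-only, about 50 kB for TURBO++, about 30 kB after the 2017 slimming) drive the required storage, the following minimal Python sketch computes the implied disk volume; the output rate and effective live-time are hypothetical placeholders and not LHCb figures.

```python
# Rough illustration (not from the note): disk volume implied by an average
# TURBO event size, for a hypothetical output rate and effective live-time.
ASSUMED_TURBO_RATE_HZ = 5_000      # hypothetical TURBO output rate (events/s)
ASSUMED_LIVE_SECONDS = 3.0e6       # hypothetical effective data-taking time (s)

def turbo_volume_pb(event_size_kb: float,
                    rate_hz: float = ASSUMED_TURBO_RATE_HZ,
                    live_s: float = ASSUMED_LIVE_SECONDS) -> float:
    """Disk volume in PB for a given average TURBO event size in kB."""
    return event_size_kb * 1e3 * rate_hz * live_s / 1e15

for size_kb, label in [(10, "signal-only"), (50, "TURBO++"), (30, "slimmed (2017)")]:
    print(f"{label:>14}: {turbo_volume_pb(size_kb):.2f} PB")
```

Whatever rate is assumed, the 50 kB to 30 kB slimming scales the TURBO storage need by the same factor of 0.6.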

3. Usage of CPU Resources

Computing activities (see Figure 3-1) were dominated by Monte Carlo production. Figure 3-1 also shows a continuous contribution from user jobs, and periods in which re-stripping was performed, in the spring and towards the end of the year. Reconstruction of collision data was performed while taking data, from June onwards. The stripping of 2017 proton data started concurrently with data taking and kept up until the end of data taking. It should be noted that CPU resources for real data reconstruction and stripping represent a very low fraction of the overall CPU usage (5.7%). The LHCbDirac system is flexible enough to allow real data processing to take place at sites other than the one holding the data (so-called mesh processing), which makes it possible, when necessary, to help a Tier1 by adding Tier2s or even other Tier1s to its processing capabilities.

Figure 3-1: Summary of LHCb computing activities (used CPU power) from Jan 1st to December 31st 2017. The line represents the 2017 pledge.

Thanks to the pilot job paradigm used in LHCb, jobs were executed in the various centers according to the computing resources available to LHCb (Figure 3-2). The Tier0 and the seven Tier1s contribute about 60%; the rest is due to Tier2s, with significant amounts from Tier-2D sites (Tier2s with disk) and from unpledged resources (among them the Yandex farm and commercial clouds).

The LHCb Online HLT farm was the site providing the largest amount of CPU, 22% of the total, with periods in which its computing power was equivalent to that of all the other sites together. Due to the preparation for the 2017 data taking, it was no longer available as of the end of May. It was massively used again during the technical stops (August and September) and after the end of the data taking. Starting in September, Monte Carlo production jobs were also executed on the HLT farm concurrently with data taking, although with lower priority, in order to use free CPU cycles when the HLT applications did not saturate the farm.

The CPU usage was significantly higher than the requests and the pledges. This is due to the steady demand for large simulated samples from the physics working groups, already observed in 2016. Since a significant fraction of these productions were filtered according to trigger and stripping selection criteria, there was no significant increase in the required storage resources.

Figure 3-2: Summary of LHCb site usage (used CPU power) from Jan 1st until December 31st 2017.

3.1. WLCG Accounting

The usage of WLCG CPU resources by LHCb is obtained from the different views provided by the EGI Accounting portal. The CPU usage is presented in Figure 3-3 for the Tier1s and in Figure 3-4 for the Tier2s (1). The same data are presented in tabular form in Table 3-1 and Table 3-2. The average power used at Tier0+Tier1 sites is about 19% higher than the pledges; the average power used at Tier2s is about 25% higher than the pledges. The average CPU power accounted for by WLCG (Tier0/1 + Tier2) amounts to 500 kHS06, to be compared to the 390 kHS06 estimated needs quoted in Table 1-1. Additional computing power was used at the LHCb HLT farm and at non-WLCG sites, for an estimated contribution of about 230 kHS06 on average (see Figure 3-5). At CERN, the CPU usage corresponded to the request and pledges. The Tier1 usage is generally higher than the pledges. In general, the LHCb computing model is flexible enough to use computing resources for all production workflows wherever they are available.

Figure 3-3: Average monthly CPU power provided by the Tier1s (and Tier0) to LHCb from Jan 1st until December 31st 2017.

1 Including WLCG Tier2 sites that are not pledging resources to LHCb but are accepting LHCb jobs.
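The percentage excesses quoted above follow from a simple used-over-pledged ratio; the short Python sketch below reproduces the WLCG-level comparison using only the figures quoted in the text, while the per-site line uses hypothetical placeholder values (the per-site numbers of Table 3-1 are not reproduced here).

```python
# Minimal sketch of the used-vs-pledged comparison described above.
def excess_over_reference(used_khs06: float, reference_khs06: float) -> float:
    """Fractional excess of used CPU power over a pledge or request (0.19 = 19%)."""
    return used_khs06 / reference_khs06 - 1.0

# WLCG total (Tier0/1 + Tier2): 500 kHS06 used vs 390 kHS06 estimated needs (from the text)
print(f"WLCG used vs estimated needs: {excess_over_reference(500, 390):+.0%}")

# Hypothetical per-site example (placeholder numbers, not taken from Table 3-1)
print(f"Example Tier1 used vs pledge: {excess_over_reference(119, 100):+.0%}")
```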

              <Power> Used (kHS06)   Pledge (kHS06)
CH-CERN
DE-KIT
ES-PIC
FR-CCIN2P3
IT-INFN-CNAF
NL-T1
RRC-KI-T1
UK-T1-RAL
Total

Table 3-1: Average CPU power provided to LHCb during Jan-Dec 2017 (Tier0 + Tier1s).

Figure 3-4: Average monthly CPU power provided by the Tier2s to LHCb from Jan 1st until December 31st 2017.

              <Power> Used (kHS06)   Pledge (kHS06)
Brazil
France
Germany
Israel
Italy
Poland
Romania
Russia
Spain
Switzerland
UK
Total

Table 3-2: Average CPU power provided to LHCb from Jan 1st until December 31st 2017, in WLCG sites other than the Tier0 and Tier1s.

3.2. LHCb DIRAC CPU Accounting

The sharing of CPU time (in days) is shown in Figure 3-5. The top chart reports the CPU time per country provided by WLCG sites, including those not providing a pledge for LHCb and excluding the HLT farm. The bottom chart shows the CPU time per site provided by non-WLCG sites and the CERN cloud. The HLT farm at CERN gives the dominant contribution; the Yandex farm and the CERN cloud (including the Oracle cloud contribution) come immediately next. Other farms (Zurich, Ohio Supercomputing Center and Sibir) provide sizeable CPU time. The contribution of volunteer computing (BOINC) is also clearly visible.

The CPU power used at Tier0/1s is detailed in Figure 3-6. As seen in the top figure, simulation jobs are dominant (75% of the total), with user jobs contributing about 10% to the total. The stripping cycles are also visible, as well as the offline reconstruction of 2017 data. The contributions from the Tier2s and other WLCG sites (including sites not pledging any resources to LHCb, but excluding the HLT farm) are shown in Figure 3-7. Simulation (92%) and user jobs (5%) dominate the CPU usage. A small fraction of stripping and data reconstruction is also visible in Figure 3-7.

Figure 3-8 shows the number of user jobs per site at Tier0 and Tier1s (top) and outside (bottom). Tier-2D sites contribute about 2/3 of the work outside the Tier0 and Tier1s.

Figure 3-9 shows pie charts of the CPU days used at all sites as a function of the final status of the job (top) and, for stalled jobs, as a function of the activity (bottom plot). About 93% of all jobs complete successfully. The major contributors to unsuccessful jobs are stalled jobs. The bottom plot of Figure 3-9 shows that stalled jobs (i.e. jobs killed by the batch system, mostly due to excessive CPU or memory usage, or to an abrupt failure of the worker nodes) are due to simulation jobs in 2/3 of the cases, the remainder being due to user jobs.

The majority of stalled jobs occurred in temporary bursts, due to a small number of misconfigured Monte Carlo productions. The remaining stalled jobs were typically due to failures in getting the proper size of a batch queue, or in evaluating the CPU work needed for an event (and thus overestimating the number of events the jobs can run).

Figure 3-5: CPU time consumed by LHCb from Jan 1st until December 31st 2017 (in days), for WLCG sites, excluding the HLT farm (top, per country), and non-WLCG sites, including the HLT farm and the CERN cloud (bottom, per site).

Figure 3-6: Usage of LHCb Tier0/1s during Jan 1st - Dec 31st 2017. The top plot shows the used CPU power as a function of the different activities, while the bottom plot shows the contributions from the different sites.

Figure 3-7: Usage of resources in WLCG sites other than Tier0 and Tier1s, pledging to LHCb, during Jan 1st - Dec 31st 2017. The top plot shows the used CPU power as a function of the different activities, while the bottom plot shows the contributions from the different sites. User jobs (shown in magenta in the top plot) are further detailed in Figure 3-8.

Figure 3-8: Running user jobs on Tier0 and Tier1 sites (top), and outside (bottom) during Jan 1st - Dec 31st 2017.

Figure 3-9: CPU time as a function of the job final status for all jobs (top), and as a function of activity for stalled jobs (bottom).

4. Usage of Storage resources

4.1. Tape storage

Tape storage grew by about 13.0 PB. Of these, 7.5 PB were due to RAW data taken since June 2017. The rest was due to RDST (2.1 PB) and ARCHIVE (3.4 PB), the latter due to the archival of Monte Carlo productions, the re-stripping of former real data, and new Run2 data. The total tape occupancy as of December 31st 2017 is 52.2 PB, of which 28.9 PB are used for RAW data, 10.7 PB for RDST and 12.6 PB for archived data. This is 20% lower than the original request (LHCb-PUB) of 67.2 PB, but quite in line with the revised estimate (LHCb-PUB) of 54.2 PB, which should be approached by the end of the WLCG year through an expected growth of 1 PB in the ARCHIVE space, due to the end of the stripping campaigns.

Figure 4-1: Tape space occupancy for RAW data, January 1st - December 31st 2017.
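As a quick consistency check of the tape figures quoted above, the following minimal Python sketch sums the per-category numbers (all values in PB, taken from the text):

```python
# Consistency check of the tape numbers quoted in the text (values in PB).
growth_2017 = {"RAW": 7.5, "RDST": 2.1, "ARCHIVE": 3.4}
occupancy_end_2017 = {"RAW": 28.9, "RDST": 10.7, "ARCHIVE": 12.6}

print(f"Growth in 2017     : {sum(growth_2017.values()):.1f} PB (text: ~13.0 PB)")
print(f"Occupancy, Dec 31st: {sum(occupancy_end_2017.values()):.1f} PB (text: 52.2 PB)")

# Remaining gap to the revised estimate of 54.2 PB quoted in the text
revised_estimate = 54.2
print(f"Gap to revised estimate: {revised_estimate - 52.2:.1f} PB")
```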

Figure 4-2: Tape occupancy for RDST data, January 1st - December 31st 2017.

Figure 4-3: Tape occupancy for derived data, January 1st - December 31st 2017.

4.2. Disk at Tier0, Tier1s and Tier2s

The evolution of disk usage during 2017 at the Tier0 and Tier1s is presented in Figure 4-4, separately for real data DST and MC DST, in Figure 4-5 for buffer space, and in Figure 4-6 for user disk. The buffer space is used for storing temporary files, such as RDST until they are stripped, or DST files before they are merged into large (5 GB) files. The increase of the buffer space in autumn 2017 corresponds to the staging of RDST and RAW files from tape in preparation for the 2015/16 re-stripping campaigns. The increase of the data volumes during data-taking periods results in a corresponding increase of the buffer space at the Tier0 and all Tier1s.

The real data DST space increased in April and May, due to the incremental stripping of previous Run2 data. An increase due to the 2017 data taking is visible starting from July. The decreases in February-March and June correspond to the removal of old datasets, following analyses of their popularity, for a total recovered space of about 2 PB. The space used for simulation also decreases in three steps due to the removal of unused datasets, which allowed about 1.8 PB of disk space to be recovered.

Table 4-1 shows the situation of disk storage resources at CERN and the Tier1s, as well as at each Tier1 site, as of February 8th 2018. The used space includes derived data, i.e. DST and micro-DST of both real and simulated data, and space reserved for users. The latter accounts for 0.8 PB in total.

Disk (PB)                 CERN   Tier1s   CNAF(*)   GRIDKA   IN2P3   PIC   RAL   RRCKI   SARA
LHCb accounting
SRM T0D1 used
SRM T0D1 free
SRM T1D0 (used+free)
SRM T0D1+T1D0 total (2)
Pledge

Table 4-1: Situation of Disk Storage resource usage as of February 8th 2018, available and installed capacity as reported by SRM, and 2017 pledge. The contribution of each Tier1 site is also reported. (*) Due to a prolonged site outage following an incident, the CNAF figures refer to November 9th 2017.

The SRM used and SRM free information concerns only permanent disk storage (T0D1). The first two lines show a good agreement between what the site reports and what the LHCb accounting (first line) reports. The sum of the Tier0 and Tier1 2017 pledges amounts to 31.8 PB. The used disk space is 26.6 PB in total, 20.5 PB of which are used to store real and simulated datasets and 0.8 PB for user data. The remainder is used as buffer.

The available disk at Tier-2D sites is now 4.48 PB, 3.11 PB of which is used. The disk pledge at Tier-2D sites is 3.3 PB. Table 4-2 shows the situation of the disk space at the Tier-2D sites. Figure 4-7 shows the disk usage at Tier-2D sites. A similar pattern to that observed for the Tier0 and Tier1 sites is seen.

2 This total includes disk visible in SRM for tape cache. It does not include invisible disk pools for dCache (stage and read pools).

In conclusion, LHCb is currently using 29.7 PB of disk space, 15% lower than the original request of 35.1 PB (LHCb-PUB) and in line with the revised estimate of 29.6 PB (LHCb-PUB).

Figure 4-4: Usage of Disk resources at CERN and Tier1s from Jan 1st to December 31st 2017. Real data and simulated data are shown in the top and bottom figures.
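The total disk figure quoted above follows directly from the Tier0/1 and Tier-2D numbers given earlier in this section; a minimal Python sketch of that bookkeeping (all inputs are values quoted in the text):

```python
# Bookkeeping of the disk-usage numbers quoted in Section 4.2 (values in PB).
t01_used_total = 26.6          # used disk at Tier0 + Tier1s
t01_datasets   = 20.5          # real + simulated datasets
t01_users      = 0.8           # space reserved for users
t01_buffer     = t01_used_total - t01_datasets - t01_users
print(f"Tier0/1 buffer space : {t01_buffer:.1f} PB")

t2d_used = 3.11                # used disk at Tier-2D sites
print(f"Total disk in use    : {t01_used_total + t2d_used:.1f} PB (text: 29.7 PB)")

original_request = 35.1
print(f"vs original request  : {1 - (t01_used_total + t2d_used) / original_request:.0%} lower")
```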

Figure 4-5: Usage of buffer disk from January 1st to December 31st 2017. The large increase at the end of the year is due to pre-staging from tape in view of the re-stripping campaign.

Figure 4-6: Usage of disk resources reserved for Users from January 1st to December 31st 2017.

Table 4-2: Available and used disk resources at Tier-2D sites on February 8th 2018.

Site                   SRM free disk (TB)   SRM used disk (TB)   SRM total disk (TB)
CBPF (Brazil)
CPPM (France)
CSCS (Switzerland)
Glasgow (UK)
IHEP (Russia)
LAL (France)
LPNHE (France)
Liverpool (UK)
Manchester (UK)
NCBJ (Poland)
NIPNE (Romania)
RAL-HEP (UK)
UKI-QMUL (UK)
UKI-IC (UK)
Total

Figure 4-7: Usage of disk space at Tier-2D sites from Jan 1st to December 31st 2017, for real data (top) and simulated data (bottom).

5. Data popularity

Dataset usage patterns, routinely monitored in LHCb for some years, are used to recover a significant amount of disk space. As already mentioned, about 3.8 PB of disk space were recovered in 2017 (Figure 4-4).

Figure 5-1, Figure 5-2 and Figure 5-3 show the physical volume on disk of the accessed datasets as a function of the number of accesses in the last 13, 26, and 52 weeks, as measured on February 8th 2018. The number of usages of a dataset over a period of time is defined as the ratio of the number of files used in that dataset to the total number of files in the dataset (see the short sketch at the end of this section). The first bin indicates the volume of data that were generated before and were not accessed during that period (e.g. 3.3 PB in the last 52 weeks). The last bin shows the total volume of data that was accessed (e.g. PB in the last 52 weeks) [3].

The total amount of disk space used by the derived LHCb data is 18.0 PB [4]. The datasets accessed at least once in the last 13, 26 and 52 weeks occupy 59%, 66% and 77% of the total disk space in use, respectively. There is a slight decrease of the fraction of used data with respect to the previous report (Table 5-1). This is due to a decrease in the usage of 2017 data, for which the stripping concurrent with data taking was affected by a software bug that made it necessary to prepare and run a re-stripping cycle of those data.

              LHCb-PUB     LHCb-PUB     LHCb-PUB     LHCb-PUB     LHCb-PUB     This document
              (end 2014)   (mid 2015)   (end 2015)   (mid 2016)   (end 2016)   (Feb. 2018)
13 weeks      42%          52%          46%          53%          63%          59%
26 weeks      57%          63%          58%          70%          71%          66%
52 weeks      72%          72%          70%          79%          80%          77%

Table 5-1: Volume (as percentage of total data volume) of datasets accessed at least once in the last 13, 26, 52 weeks, in various periods as reported in this and previous documents.

[3] The volume in each bin is not weighted by the number of accesses.
[4] Since this number is computed by explicitly excluding files that are not usable for physics analysis, e.g. RDST or temporary unmerged files, it is not directly comparable to the used resources mentioned in Section 4.
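The following minimal Python sketch illustrates the usage metric defined above. The dataset names and access counts are made-up examples, and counting repeated accesses of the same file multiple times (so that a dataset's usage can exceed 1) is an interpretation assumed here, not a statement of the DIRAC implementation.

```python
# Minimal sketch of the dataset "usage" metric described in the text:
# usage(dataset, window) = (file accesses in the window) / (files in the dataset).
# Repeated accesses of the same file are counted each time (an assumption);
# dataset names and numbers below are purely illustrative.
from collections import Counter

files_in_dataset = {"/lhcb/example/RealData.DST": 1000,
                    "/lhcb/example/MC.MDST": 400}

# file-access records collected over the chosen window (13/26/52 weeks)
access_log = ["/lhcb/example/RealData.DST"] * 2500 + ["/lhcb/example/MC.MDST"] * 100

accesses_per_dataset = Counter(access_log)
for dataset, n_files in files_in_dataset.items():
    usage = accesses_per_dataset[dataset] / n_files
    print(f"{dataset}: {usage:.2f} usages in the window")
```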

Figure 5-1: Volume of accessed datasets as a function of the number of accesses in the last 13 weeks.

Figure 5-2: Volume of accessed datasets as a function of the number of accesses in the last 26 weeks.

Figure 5-3: Volume of accessed datasets as a function of the number of accesses in the last 52 weeks.

6. Throughput rates

RAW data were exported from the pit to the CERN Tier0, and onwards to the Tier1s, at the design Run2 throughput rates (Figure 6-1). Transfers of up to 1.4 GB/s were observed. In order to optimize the workflow (reducing the number of files) while keeping the processing time within reasonable bounds, the size of the RAW files has been increased (5 GB for the FULL stream, 15 GB for the TURBO stream).

Figure 6-1: Throughput rates for exports of RAW data from the online storage to the CERN Tier0 (top) and from the Tier0 to Tier1s (bottom), which also includes pre-staging.
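To illustrate the file-count reduction bought by the larger RAW file sizes, here is a small Python sketch; the sustained-rate scenario is hypothetical, since only the 1.4 GB/s peak is quoted in the text.

```python
# Illustrative arithmetic: files handled per day for the quoted RAW file sizes,
# assuming (hypothetically) that the quoted 1.4 GB/s peak rate were sustained.
PEAK_RATE_GB_PER_S = 1.4
FILE_SIZE_GB = {"FULL": 5, "TURBO": 15}

exported_gb_per_day = PEAK_RATE_GB_PER_S * 86_400
print(f"Hypothetical sustained export: {exported_gb_per_day / 1000:.0f} TB/day")
for stream, size_gb in FILE_SIZE_GB.items():
    print(f"  {stream:>5} stream: ~{exported_gb_per_day / size_gb:,.0f} files/day at {size_gb} GB/file")
```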

7. Summary

The usage of computing resources in the 2017 calendar year has been quite smooth for LHCb. Simulation is the dominant activity in terms of CPU work. Additional unpledged resources, as well as clouds, on-demand and volunteer computing resources, were also successfully used; they were essential in providing CPU work during the outage of one Tier1 center. The utilization of storage resources is in line with the most recent reassessment.

Appendix

As requested by the C-RSG, the following table provides a summary of the scrutinized (C-RSG), pledged, deployed and used resources in the WLCG 2017 year (April 1st 2017 - March 31st 2018), as well as some useful ratios between these quantities.

Table A-1: Summary of the scrutinized, pledged, deployed and used resources in the 2017 WLCG year.
