Monte Carlo Production Management at CMS

G Boudoul 1, G Franzoni 2, A Norkus 2,3, A Pol 2, P Srimanobhas 4 and J-R Vlimant 5, for the Compact Muon Solenoid collaboration

1 U. C. Bernard-Lyon I, 43 boulevard du 11 Novembre 1918, 69622 Villeurbanne, France
2 CERN, PH CMG-CO, CH-1211 Geneva 23, Switzerland
3 University of Vilnius, Universiteto g. 3, Vilnius 01513, Lithuania
4 Chulalongkorn University, 254 Phayathai Road, Pathumwan, Bangkok 10330, Thailand
5 California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, United States

E-mail: giovanni.franzoni@cern.ch

Abstract. The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, to assure the bookkeeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of the service's functionalities, and the extension of its capability to monitor the status and advancement of the event production.

1. Introduction
The Compact Muon Solenoid (CMS) experiment [1] is a general-purpose detector operating at the Large Hadron Collider (LHC) [2] at CERN. The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the superconducting solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass/scintillator hadron calorimeter. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. In addition, the CMS detector has extensive forward calorimetry. The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events in a fixed time interval of less than 4 µs. The High Level Trigger (HLT) computer farm further decreases the event rate from around 100 kHz to around 0.5 kHz before data storage.

The analysis of the LHC data at the CMS experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector and LHC running conditions, such as multiple interactions per bunch crossing (pile-up), beam-spot shape and position, etc. [4].

Such a vast-scale production of simulated events has continued during the first LHC long shutdown (LS1): at first in preparation of the updated offline data simulation and reconstruction software, and later, from the autumn of 2014, to produce the simulated events which will be used in the analysis of the LHC Run II data. This paper describes the procedures and infrastructure used by CMS to produce simulated events, and is structured as follows: the roles involved in the procedures are outlined first, followed by a description of the web-based platform (McM) used to aggregate and submit the processing requests to the CMS computing infrastructure. The second part of the paper is dedicated to a novel monitoring infrastructure (pmp) and to the outlook.

2. Monte Carlo Management: Roles and McM Platform
The organisation of the CMS collaboration rests on working teams, the so-called level two groups, each charged with a set of goals and deliverables. Detector performance (e.g. the electromagnetic calorimeter, ECAL, detector performance group), physics object (e.g. the muon physics object group) and analysis (e.g. the exotica analysis group) teams are three examples of level two groups dealt with in this paper. Any such group needs simulated datasets, either to qualify the performance it is responsible for or to produce published results.

Several universities and research laboratories participating in CMS share part of their computing facilities with the collaboration, which uses them either for official data processing or to serve user jobs for analysis [3]. The production of simulated events employed by CMS for public results is carried out using the shared storage and processing resources available at the regional Tier-1 and Tier-2 centres around the world. The organisation of such production is the responsibility of the Monte Carlo Coordination Meeting (MCCM) team, part of the Physics Performance and Datasets coordination area. The MCCM team collects inputs from the Physics Coordination leaders and from all the level two groups, draws a plan to deliver the agreed datasets making the best use of the available computing and time resources, and submits detailed production requests to the operations team of the Computing coordination area. Three key roles take part in this process:

- generator contact: collects the needs for simulated datasets from within the detector or physics group she/he represents, proposes them to the MCCM for production, and composes and tests the configuration of the physics event generator to provide the desired collision type and final state;
- generator convener: scrutinises and approves the event generator configurations proposed by the generator contacts;
- request manager: defines and configures production campaigns and submits requests to the production infrastructure.

Since mid-2014, the Monte Carlo Management (McM) web-based platform has been the service used by the generator contacts, generator conveners and request managers to exchange information and submit the production requests. McM aggregates the needs for generation and processing of simulated events, holds the configuration of the processing jobs, provides their bookkeeping and prioritisation, and submits the event production requests to the computing resources. The McM front-end web interface is based on recent server infrastructure technology (CherryPy + AngularJS), while the back-end database uses CouchDB.
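
As an illustration of this architecture, the following minimal sketch shows how a CherryPy service backed by CouchDB could store and expose request documents. It is a simplified sketch only: the route names, document fields and database name are hypothetical and do not reproduce the actual McM code.

# Minimal sketch of a McM-like bookkeeping service: a CherryPy REST endpoint
# storing production-request documents in CouchDB. All names are illustrative.
import cherrypy
import couchdb


class RequestStore:
    def __init__(self, couch_url='http://localhost:5984/', db_name='requests'):
        # Assumes the CouchDB instance and the database already exist.
        self.db = couchdb.Server(couch_url)[db_name]

    @cherrypy.expose
    @cherrypy.tools.json_in()
    @cherrypy.tools.json_out()
    def save(self):
        # Store one request document (campaign, dataset name, priority, ...).
        doc = cherrypy.request.json
        doc_id, _rev = self.db.save(doc)
        return {'prepid': doc_id}

    @cherrypy.expose
    @cherrypy.tools.json_out()
    def get(self, prepid):
        # Return the stored document for a given request identifier.
        return dict(self.db.get(prepid) or {})


if __name__ == '__main__':
    cherrypy.quickstart(RequestStore(), '/requests')
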
As represented in figure 1, McM interacts with the following services:

- wmagent [5]: McM submits requests for event production to the request manager infrastructure of the CMS computing project, providing the CMSSW (the offline software framework used by CMS) job configuration and other key parameters, for instance the number of events to be processed, the alignment and calibration scenario, and the name of the dataset to be produced (a schematic example of such a payload is sketched further below). The wmagent framework acts on the processing requests delivered to the request manager, assigns them to a pool of CMS computing resources (at Tier-1 and Tier-2 centres), determines the splitting of events across the jobs, handles recovery attempts of processing failures, and publishes completed datasets to dbs [6];
- statsdb: a CouchDB-based database which, for any submitted processing request (MC or data alike), dynamically aggregates the status and growth of datasets, during and after their production stage;
- production Monitoring platform (pmp): pmp dynamically aggregates the status of processing requests and the size of the output datasets by importing information from McM and dbs/statsdb, and provides monitoring plots; its statistics can be browsed either for a single request or for a set of requests from the same chained campaign or the same campaign. A more detailed description of pmp is given in section 4.

Figure 1. The Monte Carlo Management (McM) platform and the services it interacts with (the request manager/wmagent, dbs, statsdb, pmp, and the Tier-1 and Tier-2 resources of the CMS computing infrastructure); see text for a detailed description.

3. Requests Definition and Handling
Processing requests for event simulation are split across campaigns and classified according to the simulation step they execute. Depending on the physics goal a given simulated dataset is intended to achieve, the production of events ready for analysis can comprise up to six different steps, see figure 2:

- wmlhe: simulation of the hard event by specialised event generator programs, resulting in events written in LHE format [7];
- generation: hadronization of the hard event, resulting in the kinematic description of the final-state particles which propagate from the collision point towards the CMS detector;
- simulation: interaction of the final-state particles with the sensitive volumes and material budget of the CMS apparatus, resulting in a collection of time-stamped energy deposits for every sensor;
- digitisation: emulation of the CMS front-end electronics, resulting in digital information formatted as if it were raw data acquired by the real experiment;
- reconstruction: unpacking of the raw data and construction of physically interpretable objects such as tracks, clustered calorimetric deposits and particle-flow candidates [8]; the sequence of algorithms of this step is the same for simulated and for real events;
- miniaod: reduction of the reconstruction output to a minimal set of variables, sufficient to carry out the majority of the physics analyses.
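
To make the hand-over to wmagent more concrete, the snippet below sketches what a single production request could look like when passed to the request manager. The field names, values and endpoint are hypothetical and do not correspond to the actual ReqMgr/wmagent schema.

# Illustrative payload for one event-production request of the kind McM hands
# over to the request manager. All field names and values are placeholders.
import requests

example_request = {
    'prepid': 'EXO-ExampleCampaign-00001',    # hypothetical request identifier
    'cmssw_release': 'CMSSW_7_4_X',           # offline software release
    'job_configuration': 'step1_GEN_SIM.py',  # CMSSW configuration to execute
    'total_events': 500000,                   # number of events to be processed
    'conditions': 'ExampleAlCa_V1',           # alignment and calibration scenario
    'output_dataset': '/ExampleSample/ExampleCampaign-v1/GEN-SIM',
    'priority': 85000,
}

def submit(request, reqmgr_url):
    # Sketch of the submission step: post the request document to a
    # (placeholder) request-manager endpoint.
    return requests.post(reqmgr_url + '/requests', json=request)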

Figure 2. Types of event production requests at CMS, and the sequence of steps needed to obtain simulated events ready for analysis (wmlhe: hard-interaction generation; GEN: event hadronization; SIM: particle-detector interaction simulation; DIGI: electronics emulation; RECO: event reconstruction; miniaod: analysis output). An example of a normal sequence (top) and of a task-chained sequence (bottom) are reported, together with the configuration, validation, approval, chaining and injection steps handled by McM. See the text for more details.

The capability of factoring the production steps offered by the CMSSW framework brings several key advantages, but comes at a cost in complexity. Among the advantages is the fact that the earlier steps of the chain (typically up to the simulation) can be executed ahead of time, while the subsequent steps (notably the reconstruction) are still being developed and validated; in addition, different decay channels (in the generation step) or different alignment and calibration scenarios (in the digitisation step) can be simulated branching from the same events into different dataset versions, with no need to repeat the previous common steps. The complexity arises from the need to define and submit to production multiple processing requests for any given dataset needed for analysis: as shown in figure 2 (top), when all six production steps are necessary, grouping is possible such that only three processing requests need to be placed.

The configuration of event processing requests in McM is structured around four main entities, which are at the basis of the architecture of the service and are intended to facilitate and logically organise large sets of interdependent requests:

- Campaign: a set of requests for the same physics analysis goal, sharing software release, beam energy and event processing configuration. Currently there are 83 campaigns defined in McM.
- Flow: connects a dataset produced in a campaign to the subsequent request which is part of the next campaign (e.g. the output of the simulation step is flown as input to the campaign executing digitisation and reconstruction); flows implement the modifications to the baseline processing configuration of a given campaign needed for a specific processing scenario, for instance the pile-up or the event content. Currently there are 178 flows defined in McM.
- Chained campaign: a sequence of campaigns connected by flows, determining the succession of processing steps and campaigns which are needed to deliver datasets for analysis. Currently there are 317 chained campaigns defined in McM.
- Chained request: a concrete set of processing requests, starting from a root request (e.g. a given generated event topology) and going through the steps of a chained campaign. Currently there are about 18000 chained requests defined in McM.

The request managers are in charge of constructing campaigns, flows and chained campaigns: these three constitute the backbone of the McM design.
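
The relationships among these four entities can be summarised with a schematic data model. The sketch below uses hypothetical Python classes and fields for illustration; it is not the actual McM schema.

# Schematic data model for the four McM entities; names are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Campaign:            # requests sharing release, beam energy, configuration
    name: str
    cmssw_release: str
    energy_tev: float


@dataclass
class Flow:                # connects the output of one campaign to the next one
    name: str
    source_campaign: str
    target_campaign: str
    overrides: Dict[str, str] = field(default_factory=dict)  # e.g. pile-up scenario


@dataclass
class ChainedCampaign:     # ordered campaigns connected by flows
    name: str
    campaigns: List[str]
    flows: List[str]


@dataclass
class ChainedRequest:      # concrete requests following a chained campaign
    root_request: str      # e.g. a given generated event topology
    chained_campaign: str
    requests: List[str] = field(default_factory=list)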

Once they are in place, McM automatically takes care of a large set of mechanical steps needed for the submission of a processing request, among which: the validation of the generator configuration (by triggering the execution of a test job on a limited number of events), the measurement of the generator efficiencies and of the CPU time per event, and the actual submission to the request manager; see figure 2 (top) for a detailed breakdown. The actual submission of processing requests to the computing operations group consists of one chained request per desired analysis dataset.

In 2014 a novel kind of processing request was made available in McM, allowing the submission to production of multiple requests in the same workflow, saving labour and shortening the time needed to deliver datasets. As shown in figure 2 (bottom), the current application of this approach has allowed the wmlhe, generation and simulation steps to be executed as a task-chained request at the same processing site. With the pile-up scenarios simulated for Run II, the digitisation step can be reliably executed only at the Tier-1 centres, which limits the scope of task-chaining to the sets of simulation steps which can be executed at the same site, namely: from wmlhe to simulation (which Tier-2s can handle) and from digitisation to miniaod (which can be processed only by Tier-1s). Thanks to the ongoing development of a new pile-up simulation technique (premixing, see [9]), which lowers the i/o requirements of the digitisation step by more than one order of magnitude, the major Tier-2 centres, besides the Tier-1s, will also become capable of executing this step. Once the premixing technique is usable in production, task-chaining multiple requests into a single workflow running at a single Tier-1 or Tier-2 site (in principle from wmlhe all the way to miniaod) will bring considerable benefits to the efficiency and delivery time of the CMS simulated event production.
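
The site constraint described above can be sketched as a simple grouping rule over the six production steps. The function below is purely illustrative; the step names and the Tier-1/Tier-2 capabilities follow the text, everything else is an assumption.

# Illustrative grouping of production steps into task chains that can run at a
# single site. Without premixing, digitisation onwards is restricted to Tier-1
# sites; with premixing, the whole chain could in principle run at one site.
STEPS = ['wmlhe', 'generation', 'simulation', 'digitisation', 'reconstruction', 'miniaod']

def task_chains(premixing=False):
    if premixing:
        # Premixed pile-up lowers the digitisation i/o by more than an order of
        # magnitude, so a single chain can be placed at a Tier-1 or a Tier-2.
        return [('T1 or T2', STEPS)]
    # Otherwise split at the digitisation boundary: Tier-2s handle the first
    # part of the chain, Tier-1s the pile-up-heavy second part.
    return [('T2', STEPS[:3]), ('T1', STEPS[3:])]

# Example: two workflows are needed without premixing, a single one with it.
for site, steps in task_chains(premixing=False):
    print(site, '->', ' + '.join(steps))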

4. Requests Monitoring: the pmp Platform
McM has offered a suite of monitoring plots from the start, which proved very useful and created the use case for their own expansion. The interactive production of these graphs used the same database back-end and processing resources needed by the McM service itself. To assure that the service availability would not be jeopardised by excessive traffic, access to the monitoring had to be restricted to the production managers, generator conveners and a few other users.

The production Monitoring platform (pmp) is a new service recently deployed in production at CMS to expand the capabilities for monitoring the advancement of the simulated event production and the MCCM operations. pmp has been designed to extend the suite of monitoring plots and to allow the generator contacts and any CMS user to follow the progress of requests and campaigns. The service is deployed on an independent server and uses a back-end database index (elasticsearch), independent from McM, which is updated at regular intervals by retrieving monitoring information from McM and statsdb; this structure provides increased search performance and assures the factorisation of the monitoring load from the McM operations.

Figure 3. pmp service: present snapshot (left) and historical view (right).

These are the key functionalities offered by pmp:

- it has access to the requests and campaign information from McM and to the historical advancement of all requests;
- it allows the monitoring of single requests, chained campaigns or full campaigns;
- it offers: 1) a present snapshot view, where the number of events/requests or the processing time can be shown in a large set of configurable histograms, and 2) a historical view, where the numbers of expected and produced events are plotted as a function of time; see figure 3;
- the present view can show static values as defined in McM or update them with live information as a request is completed;
- the entries displayed in either view can be filtered by variables such as the status of the processing request, the level two group which placed the request, or the priority.

5. Summary
In this paper we have described the procedures and tools used by the CMS collaboration to manage the production of simulated events. Three key roles are involved in the process of harvesting requirements from the level two groups of CMS and turning them into an organised set of fully specified processing requests to be submitted to the computing infrastructure: the generator contact, the generator convener and the request manager. The McM and pmp web-based services have been described, which take care, respectively, of structuring and bookkeeping large sets of processing requests and of monitoring the Monte Carlo production process.

References
[1] CMS Collaboration, The CMS experiment at the CERN LHC, 2008 JINST 3 S08004, doi:10.1088/1748-0221/3/08/S08004
[2] L Evans and P Bryant, LHC Machine, 2008 JINST 3 S08001, doi:10.1088/1748-0221/3/08/S08001
[3] CMS Collaboration, CMS Computing: Technical Design Report, 2005, CERN-LHCC-2005-023
[4] G Franzoni, Data preparation for the Compact Muon Solenoid experiment, 2013 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), doi:10.1109/NSSMIC.2013.6829574
[5] E Fajardo et al. (CMS collaboration), A new era for central processing and production in CMS, 2012 J. Phys.: Conf. Ser. 396 042018, doi:10.1088/1742-6596/396/4/042018
[6] V Kuznetsov et al. (CMS collaboration), The CMS DBS query language, 2010 J. Phys.: Conf. Ser. 219 042043, doi:10.1088/1742-6596/219/4/042043
[7] J Alwall et al., A standard format for Les Houches Event Files, 2007 Comput. Phys. Commun. 176 300-304, doi:10.1016/j.cpc.2006.11.010
[8] F Beaudette (CMS Collaboration), The CMS Particle Flow Algorithm, Proceedings of the CHEF2013 Conference, p. 295 (2013), ISBN 978-2-7302-1624-1
[9] M Hildreth, A New Pileup Mixing Framework for CMS, Proceedings of this conference (CHEP2015)