Data Quality Monitoring Display for ATLAS experiment

Y Ilchenko 1, C Cuenca Almenar 2, A Corso-Radu 2, H Hadavand 1, S Kolos 2, K Slagle 2, A Taffard 2

1 Southern Methodist University, Dept. of Physics, Dallas, TX 75205, USA
2 University of California, Dept. of Physics, Irvine, CA 92697, USA

Abstract. The start of collisions at the LHC brings a new era of particle physics and a much improved potential to observe signatures of new physics. Some of these may be evident from the very beginning of collisions. At this point in the experiment it is essential to be prepared to quickly and efficiently determine the quality of the incoming data. Easy visualization of the data for the shift crew and experts is one of the key factors in the data quality assessment process. This paper describes the design and implementation of the Data Quality Monitoring Display and discusses experience from its usage and performance during ATLAS commissioning with cosmic-ray and single-beam data.

1. Introduction
ATLAS is a particle physics experiment at the Large Hadron Collider (LHC) at CERN. The LHC will produce proton-proton head-on collisions with a center-of-mass energy of 14 TeV at a rate of 40 MHz. The detector is composed of more than 140 million electronic channels.

Data quality monitoring is an important and integral part of the data-taking process in High Energy Physics experiments. It is performed throughout the data acquisition (DAQ), the offline processing of real data in two computing tiers, and the production of Monte Carlo data. The assessment of the quality of the incoming data will be made during continuous monitoring and will be archived for retrieval at the physics analysis stage.

Due to the complexity of the ATLAS experiment, a framework for automatic data quality assessment of the incoming data and a visualization tool for easy identification of problems are essential. A highly scalable, distributed Data Quality Monitoring Framework (DQMF) has been developed and is being used to monitor the quality of the data as well as the operational conditions of the hardware and software elements of the detector, trigger, and data acquisition systems. Its purpose is to avoid taking faulty data in the online environment, to reject bad runs before offline processing, and to categorize good runs for the various physics groups. The monitoring algorithms identify problems automatically as they occur and alert the shift crew. The framework allows the quality of the data to be checked and archived, as well as re-assessed at a later stage.

The shift crew interacts with the data quality monitoring framework and other online services via the Data Quality Monitoring Display (DQMD), which serves as the platform to debug problems and perform the data quality assessment. The Display enables quick and efficient resolution of problems. It allows the shift crew to perform the data quality assessment by displaying the data quality status flags, the monitored histograms, etc. It also provides experts with the debugging tools needed to efficiently resolve issues and recover the system, either online or offline.

2. Data Quality Monitoring Framework
The DQMF executes specific analysis algorithms on various types of data (e.g. histograms, messages, etc.) and produces color-coded Results conveying the quality of the data according to a particular configuration specified by the user. A summary of these Results is displayed to the shifter and is also archived for future retrieval. The ATLAS data quality monitoring system, provided as part of the ATLAS trigger and data acquisition [1] software infrastructure, is depicted in Fig. 1. Incoming data from the detector, in the form of histograms, are processed by the DQMF, and the quality assessments it makes are published to the online Information Service, which stores the data quality results. These results are then used by the Data Quality Monitoring Display for visualization.

Figure 1. DQMF online regime.

The central part of the DQMF consists of the following core components, as described in [2]: Parameter, Region, Algorithm, and Result. A Parameter object represents an element of the DAQ system (a single physical element of the detector or a set of such elements) whose state can be assessed using a single piece of monitoring data (i.e. a histogram). The Parameter applies a predefined Algorithm to that data in order to obtain the assessment, which comes in the form of a Result. A Region object represents a certain subset of the DAQ system elements and contains the set of Parameters or lower-level Regions that correspond to these elements. An Algorithm is responsible for producing the Result object when applied to the monitoring data, in order to evaluate the status of the corresponding DAQ system elements. Result objects represent the status of DAQ system elements and contain a color-code value (Red, Yellow, Green or Grey) that shows the state of those elements.
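To make the relationships between these components concrete, the following is a minimal, self-contained C++ sketch of the component model described above. All class and function names, and the example check, are illustrative only and do not reproduce the actual DQMF interfaces.

// Minimal sketch of the DQMF component model described above (Parameter,
// Algorithm, Result). All names are illustrative and do not reproduce the
// actual DQMF interfaces.
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Color-code value carried by a Result (Red, Yellow, Green or Grey).
enum class Color { Grey, Green, Yellow, Red };

// Result: the assessed status of a DAQ system element.
struct Result {
    Color color = Color::Grey;
    std::map<std::string, double> values;  // optional numeric outputs of the check
};

// Stand-in for a monitored histogram (a real implementation would use a ROOT TH1).
struct Histogram {
    std::string name;
    std::vector<double> bins;
};

// Algorithm: applied to a single piece of monitoring data, produces a Result.
class Algorithm {
public:
    virtual ~Algorithm() = default;
    virtual Result execute(const Histogram& h) const = 0;
};

// Example check: flag the histogram if too large a fraction of its entries
// falls in the first bin (a purely illustrative quality criterion).
class FirstBinFractionCheck : public Algorithm {
public:
    FirstBinFractionCheck(double warn, double error) : warn_(warn), error_(error) {}
    Result execute(const Histogram& h) const override {
        double total = 0.0;
        for (double b : h.bins) total += b;
        double first = h.bins.empty() ? 0.0 : h.bins.front();
        double frac = total > 0.0 ? first / total : 0.0;
        Result r;
        r.values["first_bin_fraction"] = frac;
        r.color = frac > error_ ? Color::Red
                : frac > warn_  ? Color::Yellow
                                : Color::Green;
        return r;
    }
private:
    double warn_, error_;
};

// Parameter: a DAQ system element checked with a single histogram and algorithm.
struct Parameter {
    std::string name;
    Histogram histogram;
    std::shared_ptr<Algorithm> algorithm;
    Result assess() const { return algorithm->execute(histogram); }
};

int main() {
    Parameter p{"PixelOccupancy",
                {"occupancy", {50.0, 5.0, 5.0, 5.0}},
                std::make_shared<FirstBinFractionCheck>(0.3, 0.6)};
    Result r = p.assess();
    std::cout << p.name << " -> color code " << static_cast<int>(r.color) << "\n";
    return 0;
}

In the actual framework the checks operate on ROOT histograms and are selected and configured through the user-specified configuration mentioned above.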

3. Data Quality Monitoring Display
The display must support the efficient recording and processing of good-quality data. A software prototype was initially implemented in Java, using the JAIDA package [3] to display histograms. It provided good functionality and was used during 2007-2008. However, its performance did not scale, which led to a reimplementation of the display based on the QtROOT library [4]. The strengths of this implementation are that the performance scales well and that the complete functionality of ROOT histograms is readily available for an interactive display.

The Display allows easy navigation between Regions and Parameters, providing great flexibility for the visualization of the Results produced by the DQMF. For any particular data acquisition configuration, all the available Regions and Parameters are organized into a tree in which the Parameter leaves are colored with the status colors of the Results produced by the custom Algorithms applied to them. The status color of a Region is inferred from the Results of the underlying Parameters, taken with weight coefficients defined in the configuration that determine the importance of those Parameters for that Region. Each Region, being defined to represent a particular set of Parameters and/or Regions, appears in the tree as a branch node carrying a summary Result status for that set.

For any Parameter, DQMD displays the histograms used for the check (overlaid with the corresponding reference histograms, if applicable), the configuration parameters of the Algorithm used for the DQMF check, and a history graph of the data quality results. The Display also supports visualization of the Results in graphical form, i.e. a hardware view of the detector, to easily detect faulty channels or modules by eye. The graphical representation of the detector and its subsystems also provides navigation functionality. Finally, the display provides the shift crew with a checklist to complete before the final assessment of the data is saved to the database, a list of experts to contact in case of specific problems, and actions to perform in case of failure.
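As an illustration of how such a weighted summary could be formed, the sketch below combines the color-coded Results of a Region's Parameters using per-Parameter weights. The mapping of colors to severity scores and the thresholds are invented for illustration; the actual DQMF summary-making algorithms are configurable and may differ.

// Illustrative sketch of inferring a Region status color from the Results of
// its Parameters, using per-Parameter weight coefficients. The severities and
// thresholds below are assumptions, not the actual DQMF summary algorithm.
#include <iostream>
#include <string>
#include <vector>

enum class Color { Grey, Green, Yellow, Red };

struct WeightedResult {
    std::string parameter;  // name of the Parameter
    Color color;            // its current Result color
    double weight;          // importance of this Parameter within the Region
};

Color summarizeRegion(const std::vector<WeightedResult>& results) {
    double score = 0.0, total = 0.0;
    for (const auto& r : results) {
        if (r.color == Color::Grey) continue;  // undefined results are ignored
        double severity = r.color == Color::Red    ? 1.0
                        : r.color == Color::Yellow ? 0.5
                                                   : 0.0;
        score += r.weight * severity;
        total += r.weight;
    }
    if (total == 0.0) return Color::Grey;      // nothing defined to summarize
    double s = score / total;
    return s > 0.5 ? Color::Red : s > 0.0 ? Color::Yellow : Color::Green;
}

int main() {
    std::vector<WeightedResult> region = {
        {"ModuleA_occupancy", Color::Green,  1.0},
        {"ModuleB_occupancy", Color::Yellow, 1.0},
        {"ModuleC_noise",     Color::Red,    2.0},  // a more important Parameter
    };
    std::cout << "Region color code: "
              << static_cast<int>(summarizeRegion(region)) << "\n";
    return 0;
}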

4. The Display Graphical User Interface
The DQMD graphical user interface consists of two windows, the Summary panel and the Detailed panel, which implement its functionality. The Summary panel shows the overall status of the current Run and of the detector sub-systems, while the Detailed panel allows the shifter to see all the necessary details of the data quality assessment. The Display is also provided with a supplementary tool, the Data Quality Display Configurator, which greatly simplifies the development of graphical configurations.

4.1. Summary panel
The Summary panel (Fig. 2) shows Run Control conditions such as the name of the partition being used, the Run number, the event number, etc. It provides the overall status per sub-system (each sub-system being defined as a Region) in the form of buttons painted with the color of the corresponding Results. The buttons are grouped into sub-detector categories. The Summary panel also contains the log information for the current Run. Left-clicking on a sub-system button opens the corresponding tab of the Detailed panel, with the root element of the tree set to that sub-system's Region, while the Regions of the other sub-systems are hidden.

Figure 2. Summary panel window of the Display, with Run conditions and painted sub-system buttons.

4.2. Detailed panel
The Detailed panel organizes the underlying sub-system structure in a tree of Regions and Parameters to simplify navigation. It provides the essential information related to data quality by means of three tabs. Upon a left mouse click on a Region or Parameter in the tree, the Histogram tab (Fig. 3) shows the monitored and reference histograms, the current color status, the Result values, the Algorithm parameters, etc. Buttons on the toolbar allow Parameters to be hidden or shown according to the Result color assigned to them, providing greater flexibility in selecting only the Parameters of interest (e.g. one can choose to display only red, or red and yellow, Parameters).

Figure 3. Detailed panel window with the Histogram tab open. The tree with Parameters and Regions is painted with the corresponding Result colors; the Parameter histogram is shown with the reference histogram overlaid, and the active Algorithm configuration tab is displayed below the histogram.

The History tab (Fig. 4) provides graphs of the Result values and colors plotted versus time, so that the shifter can navigate back in time to see how the Results have been changing. The plots are updated automatically whenever a new Result is published, keeping the history graphs continuously up to date (a schematic sketch of this update mechanism is given at the end of this section).

Figure 4. Detailed panel window with the History tab open, showing the Result history plot with the color codes at the moments the Results were produced.

To provide an at-a-glance status summary of the detector sub-systems in a more convenient and natural way, the Detailed panel offers a Layout tab (Fig. 5), which shows a graphical representation with dive-in sub-level functionality. One can reach the underlying graphical representation for each element by left-clicking on it, until the necessary level of granularity is reached. The colors assigned to the elements are the Result colors of the corresponding Parameters. The Display builds the graphical representations on the fly, using user-specified configurations from the configuration database.

Figure 5. Detailed panel window with the Layout tab open, showing the graphical representation of a detector sub-system.
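The following sketch shows, in schematic form, the kind of update loop behind the history graphs: whenever a new Result arrives, a time-stamped point with its value and color is appended and the graph is refreshed. The subscription mechanism of the online Information Service is not reproduced here; the callback wiring and names are purely illustrative.

// Schematic sketch of keeping a Result history up to date as new Results are
// published. The real Display receives Results from the online Information
// Service; here their arrival is simply simulated in main().
#include <chrono>
#include <iostream>
#include <vector>

enum class Color { Grey, Green, Yellow, Red };

struct HistoryPoint {
    std::chrono::system_clock::time_point time;  // when the Result was produced
    double value;                                // numeric output of the check
    Color color;                                 // color code at that moment
};

class ResultHistory {
public:
    // Called for every newly published Result of the selected Parameter.
    void onNewResult(double value, Color color) {
        points_.push_back({std::chrono::system_clock::now(), value, color});
        redraw();  // keep the history graph up to date
    }
private:
    void redraw() const {
        std::cout << "history graph now has " << points_.size()
                  << " points, latest value " << points_.back().value << "\n";
    }
    std::vector<HistoryPoint> points_;
};

int main() {
    ResultHistory history;
    // Simulate two Results being published over time.
    history.onNewResult(0.12, Color::Green);
    history.onNewResult(0.47, Color::Yellow);
    return 0;
}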

4.3. Data Quality Display Configurator
The Data Quality Display Configurator (DQMC) (Fig. 6), provided with the Display, allows experts to create new configurations or modify the existing configuration database by specifying the various layouts and shape parameters of a detector sub-system until it reaches the desired look. The configuration database is written in the platform-independent Extensible Markup Language (XML). XML allows hierarchically structured data to be stored and provides the ability to validate it against a schema. Using the set of basic shapes and layouts, experts can easily create an advanced graphical representation of any detector sub-system, or any part of a sub-system, and then save it to the configuration database for future use in the Data Quality Monitoring Display.

Figure 6. Data Quality Display Configurator, showing the graphical representation of a selected Region with the layout panel used to modify its settings.
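As a schematic illustration of the kind of hierarchical layout description involved, the sketch below builds a small layout of shapes bound to DQMF elements and writes it out in an XML-like form. The element and attribute names are invented for illustration; the actual DQMD configuration schema is different and is managed through the Configurator.

// Illustrative sketch of a hierarchical layout description of the kind the
// Configurator could save as XML. The element/attribute names are invented;
// they do not reproduce the real DQMD configuration schema.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// A basic shape bound to a DQMF Region or Parameter whose Result color it shows.
struct Shape {
    std::string type;       // e.g. "box", "disc"
    std::string dqElement;  // Region/Parameter whose Result color is painted
    double x, y, w, h;      // position and size within the layout
};

struct Layout {
    std::string name;
    std::vector<Shape> shapes;
    std::vector<Layout> subLayouts;  // "dive-in" sub-levels
};

// Serialize a layout tree to an XML-like configuration string.
std::string toXml(const Layout& l, int indent = 0) {
    std::ostringstream out;
    std::string pad(indent, ' ');
    out << pad << "<layout name=\"" << l.name << "\">\n";
    for (const auto& s : l.shapes)
        out << pad << "  <shape type=\"" << s.type << "\" dq=\"" << s.dqElement
            << "\" x=\"" << s.x << "\" y=\"" << s.y
            << "\" w=\"" << s.w << "\" h=\"" << s.h << "\"/>\n";
    for (const auto& sub : l.subLayouts)
        out << toXml(sub, indent + 2);
    out << pad << "</layout>\n";
    return out.str();
}

int main() {
    Layout pixel{"PixelBarrel",
                 {{"box", "Pixel/Layer0", 0, 0, 10, 2},
                  {"box", "Pixel/Layer1", 0, 3, 10, 2}},
                 {}};
    std::cout << toXml(pixel);
    return 0;
}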

5. Results
The DQMD was developed and first used in cosmic-ray running in September 2008. It has been employed since then in all commissioning exercises using the online trigger and data acquisition systems, and it was also relied upon during the single-beam running in the fall of 2008. The standard mode of usage is by the overall ATLAS data quality shifter in the online control room, but also by the sub-detector systems for their own data quality assessment work. During the commissioning period, 3,173 Regions and 16,945 Parameters were successfully running with the Display. The Data Quality Monitoring Configurator has also been used by different ATLAS detector sub-groups for the development of graphical representations of the ATLAS sub-systems.

6. Conclusion
The DQMD is a visualization application for the ATLAS automated data quality monitoring assessment. It was designed to work with the Data Quality Monitoring Framework, allowing for both online and offline usage. Several panels present various pieces of information about the data quality evaluations, including the input histograms, the configuration used, and the results. Navigation is provided from the Results to their inputs, and sophisticated and flexible geographical and time-dependent presentations of these results are enabled. The performance meets the needs of the experiment, and the Display has been used extensively online across many subsystems since the summer of 2008.

References
[1] ATLAS Collaboration, ATLAS DAQ, EF, LVL2 and DCS, CERN/LHCC/98-16.
[2] A. Corso-Radu, H. Hadavand, M. Hauschild, R. Kehoe and S. Kolos, Data Quality Monitoring Framework for the ATLAS Experiment at the LHC, IEEE Trans. Nucl. Sci., vol. 55, no. 1, February 2008.
[3] V. Serbo, AIDA, JAIDA and AIDAJNI: Data Analysis using interfaces, talk at CHEP 2004.
[4] Cross-Platform Qt-Based Implementation of the Low Level GUI Layer of ROOT, ACAT 2002 Workshop, Nucl. Instrum. Methods Phys. Res. A, vol. 502, issues 2-3, 21 April 2003, pp. 681-683.