Performance quality monitoring system (PQM) for the Daya Bay experiment


Performance quality monitoring system (PQM) for the Daya Bay experiment. LIU Yingbiao, Institute of High Energy Physics, on behalf of the Daya Bay Collaboration. ACAT 2013, Beijing, May 16-21, 2013

The Daya Bay Experiment
Daya Bay is a reactor neutrino experiment designed to measure sin²2θ₁₃ to 0.01 at 90% CL:
- 6 reactor cores, 17.4 GW_th
- Relative measurement: 2 near sites, 1 far site
- Multiple detector modules
- Good cosmic shielding

The Daya Bay Detectors
Each antineutrino detector (AD) holds a 20 t target, 20 t of liquid scintillator (LS), and 40 t of mineral oil (MO), with reflectors and Automated Calibration Units (ACUs).
- Multiple AD modules reduce systematic errors (far: 4 modules, near: 2 modules).
- Multiple muon detectors reduce veto-efficiency uncertainties: a 2-layer water Cherenkov detector, plus RPCs (4 layers at the top + telescopes).

A Global Picture
[Diagram] Raw data flows from Daya Bay onsite to IHEP (lxslc5) and Berkeley (PDSF) via SPADE. Onsite data processing (the PQM) draws on the online DB and the onsite DB; keep-up production uses the offline DB and the DCS DB. Data-quality strip charts and the Data Quality DB are hosted at SJTU/IHEP; ODM runs at IHEP/PDSF.

Offline Computing Environment
[Diagram] The online DB holds raw data file info (from the DAQ) and hardware info (from the DCS); the offline DB holds raw data file info, calibration constants, etc. A Portable Batch System (PBS) allocates PQM jobs on the computing cluster. The other components are the online DAQ file server, the data transfer server, the PQM server, and the offline web server, connected by command and data-flow links.

Offline Computers at Daya Bay
Data volume: ~320 files/day, ~1 GB/file. Event rates: EH1 ~1.2 kHz, EH2 ~1.0 kHz, EH3 ~0.6 kHz. 11 servers with ~25 TB of offline disk serve 17 onsite users; 56 cores in total, 16 of them dedicated to the PQM.

Server roles:
- File Server: storage of raw data files
- Data Transfer Server: transferring raw data files from the DAQ to the File Server
- Offline Database Server: extracting info from the online DB and archiving calibration constants
- PQM Servers (pqm1-2): running PQM jobs, with pqm2 as backup
- User Farms (farm1-5): running user jobs
- Web Server: displaying figures produced by the PQM

CPUs (8 cores/server):
- pqm1, farm1-2: Intel(R) Xeon(R) E5506 @ 2.13 GHz
- pqm2, farm3-5: Intel(R) Xeon(R) E5420 @ 2.50 GHz
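A quick back-of-envelope check of what these numbers imply for onsite buffering (purely illustrative arithmetic; in reality the ~25 TB is split among servers and not all of it holds raw data):

```python
# Rough turnover of the onsite disk implied by the figures above:
# ~320 files/day at ~1 GB/file against ~25 TB of offline disk.
files_per_day = 320
gb_per_file = 1.0
disk_tb = 25

daily_gb = files_per_day * gb_per_file        # ~320 GB of raw data per day
retention_days = disk_tb * 1000 / daily_gb    # days until the disk would fill

print(f"~{daily_gb:.0f} GB/day -> ~{retention_days:.0f} days of buffering")
```

So raw data alone would fill the onsite disk in roughly 2.5 months, which is consistent with relying on prompt SPADE transfers to IHEP and PDSF rather than long-term onsite storage.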

Requirements
The PQM must:
- Provide high-level histograms for monitoring the sub-detectors and data quality
- Process data as soon as possible
- Handle multiple data streams (each of the 3 EHs has an independent data stream)

To meet these requirements, analysis modules were developed in the NuWa framework, many high-level histograms were defined for different purposes, and a control script was developed for running the PQM.

Offline Software
NuWa:
- Employs Gaudi's event data service
- Provides the full functionality for simulation, reconstruction, and physics analysis via job modules
- Reconstruction algorithms are based on the charge pattern of the PMTs
- Auto-building system using the Bitten plug-in of Trac

Analysis modules in the PQM (four in total):
- Two for histograms of electronics-channel info and calibrated PMT info of the ADs and the water shields
- One for reconstruction-level histograms
- One for the RPCs
- Flexible to add more modules (e.g. a supernova trigger analysis module)

Histograms are created dynamically, with a different configuration in each of the 3 EHs, reducing file volume and improving processing speed.
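The dynamic, per-hall histogram booking can be sketched roughly as follows. This is a hypothetical illustration, not the PQM code: the function and histogram names are invented, plain tuples stand in for ROOT TH1F objects, and the binning is made up; only the AD counts (4 far, 2 near) come from the slides.

```python
# Illustrative per-hall configurations: detector counts drive which
# histograms get booked, so a hall never carries empty histograms.
HALL_CONFIG = {
    "EH1": {"ads": 2, "rpc_layers": 4},  # near hall
    "EH2": {"ads": 2, "rpc_layers": 4},  # near hall
    "EH3": {"ads": 4, "rpc_layers": 4},  # far hall
}

def book_histograms(hall):
    """Create only the histograms that exist for this hall's detectors."""
    cfg = HALL_CONFIG[hall]
    histos = {}
    for ad in range(1, cfg["ads"] + 1):
        # One reconstructed-energy histogram per AD actually present;
        # the tuple (nbins, lo, hi) stands in for a ROOT TH1F.
        histos[f"{hall}_AD{ad}_recE"] = (100, 0.0, 12.0)
    for layer in range(1, cfg["rpc_layers"] + 1):
        histos[f"{hall}_RPC_layer{layer}_eff"] = (50, 0.0, 1.0)
    return histos
```

Booking from configuration rather than statically is what lets one module serve all three halls while keeping each hall's output ROOT file small.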

Histograms Produced by the PQM
[Figure: example monitoring histograms]

Data Flow of the PQM
The control script, written in Python, is in charge of the logic of the PQM running:
- Queries the offline DB (every ~10 s) and submits jobs to the PBS
- Sends a signal to the web server for web display
- Saves the ROOT files for each run to the File Server
- Uses the latest calibration constants for reconstruction

Figures are displayed in around 40 minutes.
[Diagram] Raw data → basic-level analysis → analysis modules (calibration, reconstruction, physics analysis) → histograms → ROOT files → histogram printer, with jobs scheduled through the PBS and bookkeeping in the offline database.
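A minimal sketch of such a control loop, assuming a standard PBS `qsub` front end; `query_new_files`, the job-script name, and the environment-variable names are hypothetical stand-ins, not the actual PQM interfaces:

```python
import subprocess
import time

def build_qsub_command(raw_file, hall, queue="pqm"):
    """Build a PBS submission command for one raw-data file."""
    # -v passes the file and hall to the job script via environment variables;
    # run_pqm_job.sh is a hypothetical wrapper around the NuWa job.
    return [
        "qsub", "-q", queue,
        "-N", f"pqm_{hall}",
        "-v", f"RAW_FILE={raw_file},HALL={hall}",
        "run_pqm_job.sh",
    ]

def control_loop(query_new_files, submit=subprocess.run, poll_seconds=10):
    """Poll the offline DB and submit one PBS job per newly closed file."""
    seen = set()
    while True:
        # query_new_files() would ask the offline DB which raw-data
        # files have been closed by the DAQ since the last poll.
        for raw_file, hall in query_new_files():
            if raw_file not in seen:
                submit(build_qsub_command(raw_file, hall))
                seen.add(raw_file)
        time.sleep(poll_seconds)
```

Injecting `query_new_files` and `submit` keeps the polling logic testable without a live database or batch system, and the `seen` set ensures a file is submitted only once per script lifetime.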

Web Display
[Figures: main page and real-time page]

Example Histograms
Figures produced by the PQM can be compared with standard ones defined by the Data Quality Working Group (DQWG); the shift crew reports possible problems to the DQWG.
- Fig. 1: Number of blocked triggers in one event vs. run time
- Fig. 2: Reconstructed energy distribution for all triggers in one AD
- Fig. 3: Layer efficiency distribution of the RPCs in one EH

Data Taking Status
- A: Two-detector comparison, Sep. 23, 2011 - Dec. 23, 2011; Nucl. Instrum. Meth. A 685, 78 (2012)
- B: First oscillation result, Dec. 24, 2011 - Feb. 17, 2012; Phys. Rev. Lett. 108, 171803 (2012)
- C: Updated analysis, Dec. 24, 2011 - May 11, 2012; Chinese Physics C 37, 011001 (2013)

Data volume: 40 TB. DAQ efficiency ~96%; efficiency for physics ~94%.

The PQM in Daya Bay's SNEWS
Collaborators from Tsinghua University are developing a supernova trigger for the Daya Bay experiment; its offline analysis will be implemented in the PQM. Thanks to Hanyu Wei for providing the diagram; more details are in his talk "Supernova Trigger in the Daya Bay Reactor Neutrino Experiment" in Track 2.

Summary
- The PQM has been developed for monitoring the sub-detectors and data quality.
- Data processing by the PQM is running smoothly at Daya Bay: analysis figures can be displayed about 40 minutes after the DAQ closes a raw data file.
- The PQM plays an important role in data taking.
- The supernova trigger will be implemented in the PQM.