Common Software for Controlling and Monitoring the Upgraded CMS Level-1 Trigger

Giuseppe Codispoti, Simone Bologna, Glenn Dirkx, Christos Lazaridis, Alessandro Thea, Tom Williams

TIPP2017: International Conference on Technology and Instrumentation in Particle Physics, 22-26 May 2017, Institute of High Energy Physics, CAS, Beijing (China)

The CMS Level-1 Trigger Upgrade

The CMS Level-1 Trigger selects 100 kHz of interesting events from 40 MHz pp collisions:
- decision within 3.8 μs, using coarse-resolution objects ("trigger primitives")
- full-resolution data held in pipeline memories

The LHC restarted in 2015 after Long Shutdown 1, with higher energy and luminosity; the trigger hardware was replaced in a very short time in order to maintain and improve performance.

[Diagram: trigger-primitive generators (ECAL, HF and HCAL energies; CSC, RPC and DT hits) feed the calorimeter chain (Layer-1 Calo, Layer-2 Calo, DeMux) and the muon chain (TwinMux, CPPF; barrel, overlap and end-cap track finders; Micro-Global Muon Trigger), converging on the Micro-Global Trigger.]

- 9 subsystems, with different algorithms and scales
- 6 different designs of processor boards
- O(100) boards, O(3000) links

The Online Software Challenge

The hardware was completely replaced, substituting a long, stably-running system:
- approximately 90% of the software had to be rewritten: control, monitoring, bookkeeping of the configuration setup
- risk of code duplication due to the large number of components
- time available to reach a fully working system limited to a few months

Strategy adopted: maximise common software by
- exploiting the partial standardisation of the hardware
- imposing a common hardware abstraction layer

Upgraded Level-1 Trigger Hardware

Hardware commonalities, based on μTCA (modern telecom standards):
- system-wide use of the latest FPGAs (Xilinx Virtex-7)
- high-speed serial optical links
- standard Timing, Controls and DAQ interface (AMC13)
- a common abstraction layer for processors and subsystems

[Diagram: generic processor model (optical input, algorithm, TTC clock, output, readout) replicated over the boards (P01, P02, ..., AMC13) of the crates of Systems A and B, all tied to the central TCDS clock and the DAQ.]

SWATCH: a Common Software Framework

SWATCH: SoftWare for Automating the control of Common Hardware
- design based on common processor/system models
- abstract C++ interfaces for controlling and monitoring hardware, specialised through inheritance

Subsystem-agnostic description of the hardware:
- each subsystem provides specialised implementations of processors, optical I/O ports, interconnections and their mapping in the crate(s)
- the framework builds the subsystem control software from the subsystem-specific classes

Generic API:
- implements multi-threading and thread safety
- ready-to-use solutions for tricky tasks such as monitoring threads

A minimal sketch of the specialisation pattern is shown below.
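The following is a minimal, illustrative sketch of the specialisation-through-inheritance idea described above, assuming a simplified processor model; the class and method names (Processor, CaloLayer2Processor, readLinkStatus, ...) are hypothetical, not the actual SWATCH API.

```cpp
// Illustrative sketch only: names do not correspond to the real SWATCH classes.
#include <cstddef>
#include <cstdint>
#include <string>

// Framework side: abstract processor model shared by all subsystems.
class Processor {
public:
  virtual ~Processor() = default;

  // Common control and monitoring hooks invoked by the framework.
  virtual void reset(const std::string& clockSource) = 0;
  virtual void configure() = 0;
  virtual uint64_t readLinkStatus(std::size_t linkId) const = 0;
};

// Subsystem side: one concrete class per processor-board design.
// The framework only ever talks to the abstract interface above.
class CaloLayer2Processor : public Processor {
public:
  void reset(const std::string& clockSource) override {
    // e.g. select the TTC clock and reset the board's optical links
  }
  void configure() override {
    // e.g. load the algorithm parameters into the FPGA registers
  }
  uint64_t readLinkStatus(std::size_t linkId) const override {
    return 0;  // would read the status register of optical link 'linkId'
  }
};
```

In this scheme the framework instantiates, controls and monitors any subsystem's boards without knowing their concrete types, which is what allows a single control layer to serve all nine subsystems.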

SWATCH: Control System

Code duplication is minimised through standardised access to the hardware across the different subsystems.

Commands: the building blocks of the board control system
- used for testing, as single actions or chained in sequences
- compose the transitions of the operational FSM
- subsystem-specific behaviours implemented through dedicated configuration parameters

The Gatekeeper builds the system from an XML-formatted description.

[Diagram: the Gatekeeper reads the XML file and controls the boards through the generic interface; Commands (stateless actions) are chained into Sequences, which compose the Finite State Machine (stateful actions).]

A sketch of the command/FSM pattern is given below.
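A compact sketch of the command/sequence/FSM relationship, under the same caveat that the type names are hypothetical rather than the real SWATCH classes:

```cpp
// Illustrative sketch: stateless commands chained into sequences that
// compose the transitions of the operational FSM.
#include <functional>
#include <map>
#include <string>
#include <vector>

using ParameterSet = std::map<std::string, std::string>;

// A Command is a named, stateless action on a board; its behaviour is
// specialised through the configuration parameters it receives.
struct Command {
  std::string name;
  std::function<void(const ParameterSet&)> run;
};

// An FSM transition is a sequence of commands plus the states it links.
struct Transition {
  std::string from;
  std::string to;
  std::vector<Command> sequence;
};

class FiniteStateMachine {
public:
  explicit FiniteStateMachine(std::string initial) : state_(std::move(initial)) {}

  void addTransition(Transition t) { transitions_.push_back(std::move(t)); }

  // Fire a transition by running its command sequence in order.
  bool fire(const std::string& target, const ParameterSet& params) {
    for (const auto& t : transitions_) {
      if (t.from == state_ && t.to == target) {
        for (const auto& cmd : t.sequence) cmd.run(params);
        state_ = target;
        return true;
      }
    }
    return false;  // no such transition from the current state
  }

private:
  std::string state_;
  std::vector<Transition> transitions_;
};
```

In this picture the Gatekeeper's role reduces to parsing the XML description and populating such structures with the subsystem-specific commands and parameter sets.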

SWATCH: Monitoring

The monitoring challenge: a standardised definition of monitoring data items across all the subsystems.

Metrics: the building blocks of the monitoring system
- standardise the registration, publication and storage of monitoring data
- carry the logic for error/warning levels

Thread safety between control and monitoring is ensured at the framework level.

[Diagram: hardware registers are read through the monitoring interface into Metrics (individual items of monitoring data), organised into Monitorable Objects (a multilevel tree of monitoring data) that feed error/warning reports and the GUI.]

A sketch of the metric idea is given below.
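The metric concept can be sketched as follows; again the names (Metric, Status, ...) are assumptions for illustration, not the framework's real types:

```cpp
// Illustrative sketch of a metric carrying its own error/warning logic.
#include <cstdint>
#include <functional>
#include <string>
#include <utility>

enum class Status { Good, Warning, Error };

template <typename T>
class Metric {
public:
  Metric(std::string name,
         std::function<T()> reader,
         std::function<Status(const T&)> classify)
      : name_(std::move(name)),
        reader_(std::move(reader)),
        classify_(std::move(classify)) {}

  // Called periodically by the framework's monitoring thread; thread
  // safety with respect to control actions is the framework's job.
  void update() {
    value_ = reader_();           // e.g. read a hardware register
    status_ = classify_(value_);  // apply the error/warning logic
  }

  const std::string& name() const { return name_; }
  Status status() const { return status_; }

private:
  std::string name_;
  std::function<T()> reader_;
  std::function<Status(const T&)> classify_;
  T value_{};
  Status status_{Status::Good};
};

// Usage sketch: flag a link in error as soon as CRC errors appear.
// Metric<uint32_t> crcErrors(
//     "crcErrors",
//     [] { return 0u; },  // would read the CRC-error counter register
//     [](const uint32_t& n) { return n == 0 ? Status::Good : Status::Error; });
```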

SWATCH Generic Web GUIs

Access to both common and subsystem-specific control/monitoring primitives:
- significantly reduced the required development manpower
- uniform interface for the operators

Graphical interface based on Google Polymer (HTML5, SCSS, JavaScript/ES6).

SWATCH Monitoring Panel

[Screenshot: processor view, showing the optical input, algorithm, TTC clock, output and readout/DAQ blocks of a single board.]

SWATCH Monitoring Panel

[Screenshot: system view, showing the crates (P01, P02, AMC13) of Systems A and B together with TCDS and DAQ.]

SWATCH Monitoring Panel

[Screenshot: metrics aggregation view.]

SWATCH Control Panel

[Screenshot: command execution panel, with board selection, choice of the parameter source (or, alternatively, manual parameter input) and display of the results.]

The Configuration Database

CMS needs to unambiguously identify and record the test and data-taking conditions in an Oracle configuration database:
- each subsystem's L1-Trigger configuration is split into logical blocks connected in a tree-like structure of database keys (i.e. foreign keys)
- XML format, which simplified the transition from commissioning to data-taking

A dedicated Database Configuration Editor was developed; it is an essential tool for dealing with the complexity of the database:
- easily tracks changes
- appropriate naming and versioning system
- automatic metadata insertion (date, user)
- comparison tool between different configurations
- built-in XML editor

A minimal sketch of the tree-of-keys idea is given below.
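A minimal sketch of the tree-of-keys idea, assuming a simplified in-memory representation (the real schema lives in Oracle and links the blocks through foreign keys):

```cpp
// Illustrative sketch: a top-level key resolves, through child keys,
// to the XML configuration blocks of the individual subsystems.
#include <string>
#include <vector>

struct ConfigKey {
  std::string id;                   // e.g. a versioned key name
  std::string xmlPayload;           // XML configuration block, may be empty
  std::vector<ConfigKey> children;  // sub-keys (foreign-key relations)
};

// Collect every XML block reachable from a top-level key, depth first.
void collect(const ConfigKey& key, std::vector<std::string>& blocks) {
  if (!key.xmlPayload.empty()) {
    blocks.push_back(key.xmlPayload);
  }
  for (const auto& child : key.children) {
    collect(child, blocks);
  }
}
```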

Database Configuration Editor

Technology choices driven by flexibility and maintainability:
- JavaScript: the same language for API and interface, native JSON communication, modern and well-supported standards
- REST API based on Node.js (restify, socket.io, passport, restify-json-hal, node-oracledb): a fully stateless interface, with solid user sessions handled at the API level via JWT
- HTML5 interface based on Google Polymer

[Diagram: the browser connects through CERN SSO (Nginx, l1ce.cern.ch) and Apache to the Node.js server, authenticated via CERN Kerberos; static HTML files are served from the file system, and node-oracledb connects to the development and production databases.]

Database Configuration Editor

[Screenshot: main view, with the valid/invalid key selector, file upload/download, expandable views, navigation of keys and versions, and the XML editor.]

The Trigger Level-1 Page

Central aggregator of trigger information:
- concise visual overview of the status of the system
- collector of warnings and alarms from the trigger subsystems
- links to applications and documentation
- based on the same architecture as the Database Editor

[Diagram: the L1 page combines the running conditions, the L1-Trigger configuration state and errors, and the L1-Trigger monitoring warnings and errors.]

The Trigger Level-1 Page

[Screenshot: useful links, status of the run, any hints associated with the alarms, subsystem status, alarms, and manual/automatic filtering of the shifter information.]

The Commissioning Challenge

[Timeline 2014-2017: SWATCH definition and implementation proceeded in parallel with the hardware installation while running with the legacy L1-Trigger; consolidation, new tools and new developments continued through commissioning and running with the upgraded L1-Trigger.]

- Upgraded Level-1 Trigger installed during the 2015/2016 winter shutdown
- Used as the CMS trigger on February 25th 2016, with the SWATCH configuration integrated in CMS Run Control
- Database integrated on March 21st; both configuration and critical monitoring were ready in time for the beam splashes on March 25th
- Collision time was used to develop the new tools deployed in 2017: the Configuration Editor and the L1 page

Summary and Outlook

The CMS Level-1 Trigger was successfully upgraded in 2016, and the Level-1 Trigger online software was almost completely rewritten: hardware control, monitoring, database, configuration editor, L1 page.

Good design and planning of the online software contributed substantially to the commissioning success and to the reliability of data taking. The current software infrastructure is ready to serve for the coming years, yet flexible enough to allow further developments and improvements.

Development continues, in order to:
- improve the usability of the system from the operator's point of view and reduce the possibility of errors
- simplify and improve the interfaces

BACKUPS

L1 Trigger Online Software Layers

Bi-directional SOAP communication among all the actors, except the hardware.

[Diagram: CMS Run Control drives the Trigger Supervisor, which drives one subsystem application per subsystem (x 9 subsystems).]

The SWATCH Cell

A SWATCH Cell is implemented for each subsystem:
- SWATCH provides the hardware access for the subsystem's specific processor(s)
- Trigger Supervisor libraries provide the GUI and the network communication, based on XDAQ, a platform for distributed data-acquisition systems, reusing or re-adapting general-purpose code from the legacy system

The SWATCH Cell provides generic implementations of services:
- network-controllable FSM
- publication of the monitoring state
- DB communication, with a common DB schema
- control/monitoring GUIs based on Google Polymer (HTML5, CSS3, JavaScript/ES6)

Configuration Database

The CMS configuration comprises:
- Hardware description: e.g. crate composition, number of boards, number of links
- Infrastructure (a.k.a. hardware configuration): purely online parameters (TRG: L1+HLT)
- Algorithm configuration: needed offline as well, for the analyses
- Run Settings: rapidly changing data-taking parameters, e.g. hardware masking, trigger prescales, etc.

[Diagram: configuration keys and run-settings (RS) keys point into the hardware, infrastructure and algorithm blocks, stored as XML containers.]

Monitoring Database

Two categories of monitoring information:
- Critical for data taking and offline data certification (e.g. rates and prescales): direct connection with the DB, high reliability; any problem stops data taking
- Hardware status for post-mortem analysis: data is pushed to an XDAQ application that takes care of saving it to the Oracle DB; failures do not affect data taking

[Diagram: each subsystem cell writes the critical data through a database proxy, while the generic monitoring flows through the XDAQ publication service and the generic XDAQ DB service, via a database proxy, to the database and CMS Web-Based Monitoring.]

Database Configuration Editor

[Screenshot: the editor highlights whether a newer version of a given key exists; file upload/download, expandable views, and the XML editor.]