The Fourth Level Trigger Online Reconstruction Farm of HERA-B (1)

A. Gellrich (2), I.C. Legrand, H. Leich, U. Schwanke, F. Sun, P. Wegner
DESY Zeuthen, D-15738 Zeuthen, Germany

S. Scharein
Humboldt-University, Berlin, Germany

29 October 1998
HERA-B FARM note 98-005 / HERA-B note 98-202

1. Talk presented at CHEP'98, Chicago, USA, 31 August 1998, Session D, Talk #38
2. E-mail: Andreas.Gellrich@desy.de

1 Introduction

The HERA-B experiment starts its physics program at the 920 GeV proton ring of HERA at DESY in Hamburg, Germany, in 1999 (before 1998 HERA operated its proton ring at 820 GeV). HERA-B is dedicated to B-physics [1,2]. Its primary goal is to measure CP violation in decays of neutral B-mesons in the golden channel, B0 -> J/Ψ K0s, with the subsequent decays J/Ψ -> µ+µ- and K0s -> π+π-. bb pairs are produced in collisions of protons from the halo of the HERA proton ring with a system of eight wire targets. At HERA energies, the bb cross section is estimated to be σ(bb) = 12 nb, whereas the inelastic cross section is σ(inel) = 40 mb. To obtain an adequate sample of reconstructed B0 decays of O(1000) per year in the golden channel, an event rate of 10 MHz, corresponding to the HERA bunch clock, with on average four overlaying interactions is needed; the target wires can be steered remotely accordingly. The rate of interesting physics is well below 1 Hz. The data acquisition and trigger system is designed to achieve a background reduction factor of 10^6, which results in a logging rate of 20 Hz. With an average event size of 100 kB, a data volume of 20 TB/year will be stored.

This paper describes the concept and implementation of a processor farm which performs full online event reconstruction [5] on the fourth level of HERA-B's data acquisition and trigger system.

2 Data Acquisition and Trigger System

HERA-B uses a sophisticated four-level DAQ and trigger system [3,4,5] to read out the 600000 channels of the detector. Two main characteristics are used to distinguish B-physics from background events: the high b-quark mass leads to high-pT tracks, and the long B lifetime causes detached secondary vertices. These properties are exploited in the subsequent trigger steps (table 1).

FLT: The First-Level-Trigger works on so-called Regions-of-Interest (RoI) which are defined by one of three pre-trigger sources: 3 pad chambers to identify high-pT tracks, the electromagnetic calorimeter to look for high-pT clusters, and track segments from the muon system. Using a simple tracking mechanism based on RoIs in the tracking chambers behind the magnet, J/Ψ candidates are searched for. The FLT is realized in hardware. An overall background suppression factor of 200 is obtained.

SLT: The Second-Level-Trigger is based on the RoIs defined by the FLT. Using a simplified Kalman filter algorithm, drift-time information from additional detector planes behind the magnet is used to re-define the RoIs. Tracks are then projected backwards through the magnet into the vertex detector.

The SLT trigger algorithms run on a multi-processor PC farm of 240 nodes which receives event data via high-speed links through a DSP-based switch (Digital Signal Processors from Analog Devices, SHARC) [5]. A reduction factor of 100 is obtained, mainly by rejecting ghost tracks and by applying vertex cuts.

TLT: The Third-Level-Trigger step is carried out on the same processor nodes following event building. Depending on the event type and the cuts already applied at the SLT, local pattern recognition in the vertex detector inside and outside the RoIs is planned. It could be shown that specially tailored algorithms can determine primary and secondary vertices in ~100 msec. In addition, information from the particle-identification detectors can be exploited. To allow for full online event reconstruction on the 4LT farm, a reduction factor of 10 must be obtained.

4LT: The Fourth-Level-Trigger is the subject of this paper.

Table 1: Input rates, timing, and background suppression factors of the trigger levels.

  Level  Input    Time       Supp.  Method
  1      10 MHz   10 µsec    200    Simple tracking, high pT, di-lepton mass
  2      50 kHz   <7 msec    100    Track re-fit, magnet tracking, vertexing
  3      500 Hz   ~100 msec  10     Further tracking, particle-id
  4      50 Hz    4 sec      2.5    Full event reconstruction

3 Design Criteria for the 4LT Farm

In order to reach its primary goal of measuring CP violation in the golden channel, HERA-B needs to fully reconstruct O(1000) B0 per year. Because of the vast amount of event data and the long event reconstruction time, immediate data analysis can only be ensured by performing event reconstruction online, before the event data are stored. In addition, event classification and a final event selection are foreseen. Although the HERA-B trigger system provides a background suppression of six orders of magnitude, less than 1 Hz of interesting B-physics events is contained in the logging rate of 20 Hz. Data derived during reconstruction which are relevant for alignment and calibration purposes are collected and monitored.

In the HERA-B environment, event reconstruction takes ~4 sec on modern high-end processors. The computing power to perform event reconstruction at a rate of 50 Hz can be provided by a multi-processor system (farm); a small consistency check of these numbers is sketched at the end of this section. The main building blocks are:
- high-performance processor nodes,
- a network which allows event and control data to be routed from the TLT to the 4LT system,
- a Unix-like operating system which provides an environment to run the software.

Basic aspects in setting up the 4LT farm are:
- scalability, to allow for an incremental set-up,
- flexibility, to handle variations in rates and processing times,
- costs, limited to ~2.5 kDM/node.

Requirements on hardware and software components are reliability, availability for the lifetime of the experiment, support, and maintainability.

The general goal is to give up the borderline between online and offline software. This allows HEP software such as reconstruction and analysis packages, which are usually developed in offline environments, to be used online without (major) modifications.
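The numbers above can be cross-checked with a few lines of code. The following minimal C sketch is purely illustrative and not part of the HERA-B software; the 10^7 seconds taken as an effective operational year is an assumed round number. It chains the suppression factors of table 1 to obtain the 20 Hz logging rate, multiplies by the 100 kB event size to reproduce the ~20 TB/year quoted in the introduction, and multiplies the 50 Hz 4LT input rate by the ~4 sec reconstruction time to arrive at the ~200 nodes needed.

    /* Consistency check of the trigger-chain numbers quoted in Table 1 and
     * Section 3.  Illustrative sketch only, not HERA-B software; the 1.0e7 s
     * "operational year" is an assumed round number.                        */
    #include <stdio.h>

    int main(void)
    {
        double rate = 10.0e6;                       /* FLT input: 10 MHz      */
        const double supp[4] = {200.0, 100.0, 10.0, 2.5};

        for (int i = 0; i < 4; i++) {
            rate /= supp[i];
            printf("after level %d: %10.1f Hz\n", i + 1, rate);
        }
        /* rate is now the logging rate of 20 Hz */

        double event_size = 100.0e3;                /* 100 kB per event       */
        double seconds_per_year = 1.0e7;            /* assumed effective year */
        printf("logging volume: %.0f TB/year\n",
               rate * event_size * seconds_per_year / 1.0e12);

        /* farm size: 50 Hz input times ~4 sec reconstruction per event */
        printf("nodes needed: %.0f\n", 50.0 * 4.0);
        return 0;
    }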

4 Implementation of the 4LT Farm

The HERA-B 4LT farm will be built exclusively from off-the-shelf products. Only standard components from the PC market will be used to realize the processing nodes and the networking. It is foreseen to set up the 4LT farm as modularly as possible by clustering small numbers of nodes in so-called mini-farms.

Linux was chosen as the operating system. This Unix-like system provides standard tools such as inter-process communication (shared memory, semaphores, message queues) and network services (UDP, TCP/IP) which support modular software packages running in a multi-process environment. Even more important, Linux has recently become a widely accepted development platform in HEP and is heavily used in the offline world.

Processor nodes

The rapidly growing PC market regularly provides customers with newer, faster high-end processors, while the price/performance ratio improves at the same time. For a first order of 20 nodes, PCs with Intel Pentium II / 400 MHz CPUs were chosen (table 2). Earlier, other solutions such as VME boards or workstations were discussed [5,6,7] but could not stand the PC competition.

Table 2: Hardware components of the 4LT farm nodes.

  Node hardware
    Motherboard       Asus P2B (100 MHz)
    Processor         PII / 400 MHz
    L2-cache          512 kB
    Memory            64 MB
    Hard disk         2.5 GB Quantum Fireball 2.5EL
    Graphics card     ELSA Winner 1000 (S3 Trio)
    Power supply      200 W
    Fan               high quality
    Housing           midi-tower (ATX)
  System software
    Operating system  Linux, Distribution S.u.S.E. 5.2, Kernel 2.0.33
    Network card      Fast-Ethernet (100 Mb/s), SMC Ultra 9432 (autosense)

To transport data to and from the 4LT nodes, a network is needed (figure 1).

In addition to the event data stream, which goes from the TLT nodes to the 4LT nodes and, after processing, to a central logger, control messages and monitoring data must be routed through the system. The total rate of the event data stream is 50 Hz * 100 kB/event = 5 MB/sec, with on average 100 kB/event. This results in an input of 25 kB/sec/node for 200 nodes. The same amount of data leaves the nodes again on its way to the logger. Control messages are much smaller but are exchanged more frequently. An additional rate of up to 10% of the event data rate may be produced by monitoring data.

10-20 nodes are clustered in so-called mini-farms. To connect up to 200 nodes in total, 5-10 mini-farms will be installed, and the total bandwidth is divided accordingly. The backbones of the mini-farms are based on switches to ensure full bandwidth to and from the nodes concurrently. By using manageable devices, performance monitoring and remote control of the network are possible. All mini-farms are connected to a central switch which must be capable of standing the full data rate. Event data appear twice in the central switch: first when being routed from TLT to 4LT nodes, and second when being sent to the central logger.

The logger is located close to the mass storage system, which consists of hard disks and a tape robot in the DESY computer center. Disks are used to cache data before they are stored on tape as Data Summary Tapes (DSTs), which contain entire events including the raw data. The full DST stream amounts to 20 TB/year, which cannot be kept on disk. A small fraction of events (~1%) will be selected for direct access. These data, as well as so-called MINIs, which contain only fractions of entire events, mainly reconstruction information, are provided to the users on disk.

Currently, the 4LT farm nodes are set up as workstations without tools for interactive work. Home directories, executables, and data are provided by a server (srv) via NFS and NIS (Network File System and Network Information Service). The network is built of standard Fast-Ethernet components and contains network cards and switches. Between single TLT and 4LT nodes, data rates of up to 6 MB/sec can be reached using a UDP-based message passing system (a minimal transfer sketch is given below, after figure 1). The connection to the computer center is realized by two dedicated FDDI links.

Figure 1: Network layout of the 4LT farm: k = 5-10 mini-farms of n = 10-20 PC nodes each, connected via switches to the Trigger Level 2/3 farm, the 4LT file and home-directory server (srv, NFS/NIS), the 4LT farm control machine (ctl), and the computer center with logger, disk, and tape.
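As a rough illustration of the UDP transport described above, the following C sketch sends one event buffer from a TLT node to a 4LT node in fixed-size datagrams. It is not the HERA-B message passing package; the chunk size, port number, and function names are assumptions made for the example only.

    /* Minimal sketch of a UDP-based event transfer of the kind described in
     * the text.  NOT the HERA-B message passing package; chunk size, port,
     * and framing are illustrative assumptions only.                        */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define CHUNK 8192          /* assumed datagram payload size             */
    #define PORT  4711          /* hypothetical 4LT receiver port            */

    /* Send one event buffer (~100 kB) to a 4LT node, chunk by chunk. */
    int send_event(const char *node_ip, const char *event, size_t size)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return -1; }

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(PORT);
        inet_pton(AF_INET, node_ip, &dst.sin_addr);

        for (size_t off = 0; off < size; off += CHUNK) {
            size_t len = (size - off < CHUNK) ? size - off : CHUNK;
            if (sendto(s, event + off, len, 0,
                       (struct sockaddr *)&dst, sizeof(dst)) < 0) {
                perror("sendto");
                close(s);
                return -1;
            }
        }
        close(s);
        return 0;
    }

A real transfer of ~100 kB events over UDP additionally needs framing, reassembly, and acknowledgements on the receiving node, as well as the free-node handshake described in the next section.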

Farm Control

The 4LT farm control scheme is depicted in figure 2. Its task is to control the event data flow from around 50 TLT nodes to up to 200 4LT nodes. The 4LT farm control comprises two processes which run on a dedicated machine (ctl). After an event has passed all trigger steps, a TLT node requests the ID (IP address) of a free 4LT node from the ID_Request process. This process looks up the Free_ID_Queue to get the first available free 4LT node ID; the associated IP address is stored in the ID_Table. Each 4LT node registers itself at start-up with the ID_Report process, which puts the node's ID into the Free_ID_Queue and stores its IP address in the ID_Table. After event processing, a 4LT node reports that it is free again and waits for the next event. The ID_Table is also used to monitor the 4LT farm nodes. (A minimal sketch of this bookkeeping is given after the node-process description below.)

Communication between the two control processes on the control machine is done by means of the inter-process communication tools shared memory and message queue. Message and data transfer between machines over the network is realized by a HERA-B-specific message passing system based on UDP.

Figure 2: Farm control: TLT nodes request the ID of a free 4LT node from the ID_Request process; 4LT nodes register and report back via the ID_Report process; the two processes communicate through the Free_ID_Queue (message queue) and the ID_Table (shared memory).

Node Processes

Figure 3 shows the process scheme of a 4LT farm node. Major tasks are housed in separate processes, which use the message passing system between nodes and inter-process communication within a node. The reconstruction, event classification, and selection packages are contained in a frame program called Arte. It provides memory management for event data stored in tables. Arte is also used for offline analysis, including Monte Carlo event generation and detector simulation. Arte contains interfaces to event data files for offline purposes as well as to shared memory segments for online usage on the 4LT farm. In the latter case, event data are received by a dedicated process, L4Recv, from a TLT node, initiated by the 4LT farm control. Output is done in a similar way: event data are sent to the central logger in the computer center.
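To make the farm control bookkeeping concrete, the following C sketch models the Free_ID_Queue as a System V message queue and the ID_Table as a shared memory segment, the two inter-process communication tools named above. It is a minimal illustration under assumed structure names, keys, and sizes, not the actual HERA-B control code.

    /* Minimal sketch of the ID bookkeeping described under "Farm Control".
     * Uses the System V IPC primitives named in the text (shared memory,
     * message queue); all structure names, keys and sizes are illustrative
     * assumptions and not the actual HERA-B code.                           */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <sys/shm.h>

    #define MAX_NODES 200                /* up to 200 4LT nodes               */

    struct id_table {                    /* ID_Table, kept in shared memory   */
        char ip[MAX_NODES][16];          /* IP address of each 4LT node       */
        int  busy[MAX_NODES];            /* 1 while the node processes data   */
    };

    struct id_msg {                      /* entry of the Free_ID_Queue        */
        long mtype;                      /* required by SysV message queues   */
        int  node;                       /* index into the ID_Table           */
    };

    /* ID_Report side: a 4LT node registers at start-up or reports "free". */
    void report_free(struct id_table *tab, int qid, int node, const char *ip)
    {
        strncpy(tab->ip[node], ip, sizeof(tab->ip[node]) - 1);
        tab->busy[node] = 0;
        struct id_msg m = { .mtype = 1, .node = node };
        msgsnd(qid, &m, sizeof(m.node), 0);
    }

    /* ID_Request side: hand the next free node's IP address to a TLT node. */
    const char *request_node(struct id_table *tab, int qid)
    {
        struct id_msg m;
        if (msgrcv(qid, &m, sizeof(m.node), 0, 0) < 0) return NULL;
        tab->busy[m.node] = 1;
        return tab->ip[m.node];
    }

    int main(void)                       /* tiny self-contained demonstration */
    {
        int shmid = shmget(IPC_PRIVATE, sizeof(struct id_table), 0600);
        int qid   = msgget(IPC_PRIVATE, 0600);
        struct id_table *tab = shmat(shmid, NULL, 0);

        report_free(tab, qid, 0, "192.168.1.10");   /* hypothetical node IP   */
        printf("next free node: %s\n", request_node(tab, qid));

        shmdt(tab);
        shmctl(shmid, IPC_RMID, NULL);
        msgctl(qid, IPC_RMID, NULL);
        return 0;
    }

In the real system the request and report messages arrive over the UDP-based message passing system from the TLT and 4LT nodes; the sketch only shows the local bookkeeping on the control machine (ctl).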

Figure 3: Process scheme of a 4LT farm node, showing the processes L3Send, L4IdReq, L4IdRep, L4Recv, buffer I/O, conversion & sparsification, DB access, Publisher / DB cache, the online Arte processes (L4Reco, L4Clas, L4Trig, L4Moni), rhp, Gatherer / Calibrator, and the sender towards logger, disk, and tape.

Monitoring

During the reconstruction procedure, data which are needed to align and calibrate the detector are derived. To make use of the large statistics of up to 200 nodes providing such data in parallel, a scheme was developed to collect the data in a central place (Gatherer). The gathered data are then used to compute alignment and calibration constants which, if needed, are updated in the central database. The updated database content is then published back to the 4LT farm (Publisher). For this purpose a so-called remote-histogramming package (rhp) was developed on the basis of the message passing system.

5 Summary and Outlook

The concept and implementation of the 4LT farm of the HERA-B experiment at HERA were presented. The system is based on standard off-the-shelf components and makes use of commodity hardware and software. The goal of giving up the borderline between online and offline software was achieved by providing an environment in which typical HEP programs written in an offline style can be used directly for online event reconstruction. Currently, the server (srv), the controller (ctl), and 7 nodes are installed and running. All software packages are written and tested. Commissioning is in progress.

HERA-B's plans for this year's running focus on J/Ψ physics with a di-electron trigger derived from the electromagnetic calorimeter. A small 4LT farm, which is about to be included in the full data path, will be sufficient. In 1999 a first attempt to measure CP violation in the golden channel will be started with an almost complete detector; around 50 4LT nodes will be installed. From the year 2000 on, the full physics program will be carried out. Assuming an input rate of 50 Hz and a processing time of 4 sec for the full event reconstruction, 200 4LT farm nodes are needed. The final purchase will be initiated in mid-1999.

Acknowledgements

We gratefully thank the electronics department of DESY Zeuthen and the computer centers of DESY Hamburg and DESY Zeuthen for their support.

References

[1] T. Lohse et al., Proposal, DESY-PRC 94/02 (1994).
[2] E. Hartouni et al., Design Report, DESY-PRC 95/01 (1995).
[3] M. Medinnis, HERA-B triggering, NIM A 368 (1995) 6.
[4] F. Sanchez, The HERA-B DAQ system, these proceedings, 1998.
[5] M. Dam, Second and third level trigger systems for the HERA-B experiment, these proceedings, 1998.
[6] R. Mankel, Online track reconstruction for HERA-B, NIM A 384 (1996) 20.
[7] A. Gellrich et al., The processor farm for online triggering and full event reconstruction of the HERA-B experiment at HERA, Proc. of CHEP'95, Rio de Janeiro, Brazil, 1995.
[8] A. Gellrich et al., A test system for the HERA-B online trigger and reconstruction farm, Proc. of DAQ'96, Osaka, Japan, 1996.
[9] A. Gellrich et al., A prototype system for the farm of the HERA-B experiment at HERA, Proc. of CHEP'97, Berlin, Germany, 1997.
[10] M. Dam et al., Higher level trigger systems for the HERA-B experiment, Proc. of IEEE Real-Time'97, Beaune, France, 1997.
[11] A. Gellrich and M. Medinnis, HERA-B higher-level triggers: architecture and software, NIM A 408 (1998) 73-80.