Proc. 12th Int. Conf. Low Energy Antiproton Physics (LEAP2016)
https://doi.org/10.7566/jpscp.18.011035

Data Acquisition and Online Event Selection for the PANDA Experiment

Wolfgang Kühn 1, Sören Lange 1, Yutie Liang 1, Zhen-An Liu 2, Simon Reiter 1, Milan Wagner 1, and Jingzhou Zhao 2 (PANDA Collaboration)

1 Justus-Liebig-Universität Gießen, II. Physikalisches Institut, Heinrich-Buff-Ring 16, 35392 Gießen, Germany
2 Experimental Physics Center, Institute of High Energy Physics, Beijing 100049, China

E-mail: w.kuehn@physik.uni-giessen.de

(Received June 14, 2016)

The PANDA experiment at FAIR features a comprehensive experimental program based on a high-quality antiproton beam and a state-of-the-art detector. Luminosities exceeding 10^32 cm^-2 s^-1, leading to event rates above 20 MHz, require a novel approach to data acquisition and real-time event selection. After digitisation with sampling ADCs, the data is pre-processed in front-end electronics performing feature extraction and data compression. After this stage, the raw data rate still exceeds 200 GB/s. Event filtering after full online reconstruction of the events will be performed in two stages, using dedicated FPGA-based hardware, also utilised in the Belle II experiment at KEK, and a server/GPU farm. Since most physics channels of interest have small cross sections compared to the total annihilation cross section, the aim is to reduce the raw data rate by three orders of magnitude. A complication arises from the large event rate, which leads to partial overlap of events. This is of particular importance for slow detectors, such as the straw tube tracker of PANDA.

KEYWORDS: PANDA experiment, data acquisition, event filter

1. Introduction

The PANDA experiment at the future FAIR facility, to be constructed at GSI Darmstadt, represents one of the four scientific pillars of FAIR. The central aim of PANDA is the exploration of multiple facets of quantum chromodynamics (QCD) in the non-perturbative regime.
QCD is an integral part of the standard model of particle physics. While well understood in the perturbative regime at large q^2 / small distances, the non-perturbative regime governing the field of hadron physics still presents significant challenges, both from the experimental and the theoretical point of view. PANDA (see Fig. 1) will be located at the HESR storage ring at the FAIR facility. The combination of high beam quality, high luminosity and a state-of-the-art detector with excellent resolution and particle identification capabilities will enable a highly competitive program in the key sectors of hadron physics: hadron spectroscopy, hadron structure and hadron interactions. More details about the experimental setup can be found in the contribution of Lars Schmitt to this conference. Antiproton annihilation as a tool to study hadron physics has distinct advantages compared to other methods. In contrast to e+e- annihilation, a state with any non-exotic quantum number can be formed. Combined with the excellent energy resolution of the cooled antiproton beam in the HESR, this enables the study of narrow charmonium-like XYZ states. Furthermore, the glue-rich environment in antiproton annihilation provides an excellent opportunity to search for states with valence gluons such as glueballs and hybrids. Nucleon form factors, GPDs and TDAs can be studied in the time-like region with unprecedented quality. In addition, PANDA as a fixed-target experiment enables studies with nuclear targets.

© 2017 The Author(s). This article is available under the terms of the Creative Commons Attribution 4.0 License. Any further distribution of this work must maintain attribution to the author(s) and the title of the article, journal citation, and DOI.

Here, the interaction of

charmonia and open charm mesons with cold nuclear matter can be studied in a unique way. Furthermore, the large cross sections for multi-strange baryons enable the production and high-resolution spectroscopy of S = -2 hypernuclei.

Fig. 1. PANDA experiment at the FAIR facility, Darmstadt. The detector comprises two spectrometers. The target spectrometer features tracking with a silicon micro-vertex detector, a straw tube tracker and GEM detectors, a high-resolution electromagnetic lead-tungstate calorimeter, and particle identification with DIRC-type imaging Cherenkov detectors. The magnetic field is provided by a superconducting solenoid with an instrumented flux return for muon detection. The forward spectrometer is built around a dipole magnet and consists of various tracking systems, particle identification devices and calorimeters. PANDA covers nearly 4π solid angle and can be operated with antiproton beams with momenta up to 15 GeV/c and hydrogen or nuclear targets.

2. PANDA Data Acquisition Challenges

In order to meet PANDA's physics challenges, operation at high interaction rates exceeding 10^7 events/s is required. This is a consequence of small cross sections, down to 1 nb for processes of significant interest, in comparison to the large total annihilation cross section of the order of 100 mb. This is illustrated in Fig. 2. The cross sections for most channels involving charmed quarks are below 100 nb. As a result, the data acquisition has to cope with data rates above 200 GB/s. Furthermore, for compatibility with the available mass storage, a reduction of the inclusive event rate by three orders of magnitude is necessary. Unfortunately, such an aggressive goal for event filtering is not easily reachable. Unlike in many particle physics experiments, simple hardware-based first-level trigger schemes, e.g.
triggering on missing transverse energy, are not feasible, since the topological signatures of the dominant annihilation channels are quite similar to those of many of the signal channels. As a consequence, event filtering has to be based on a complete reconstruction of the events, including tracking, particle identification and calorimetry. A further complication arises from the

requirement of operation at event rates beyond 20 MHz, leading to partial overlap of events in the slower sub-systems such as the straw tube tracker. Here, typical drift times are significantly longer than the average time between two events, leading to pileup which has to be treated properly in order to preserve efficiency and momentum resolution and to avoid excessive production of fake tracks. The details of the required reconstruction procedures are currently under study, using a time-based simulation and reconstruction framework [1].

Fig. 2. Cross section for various annihilation reactions as a function of CM energy.

In this high-rate environment a conventional event-based data acquisition approach is not feasible. Instead, all PANDA sub-systems use a freely streaming data acquisition concept in which the synchronization is performed using SODAnet [2, 3], a high-resolution global time distribution system. For event building, a two-stage approach is used. The first stage (burst building) utilizes a gap in the coasting HESR beam to collect the entire data stream in 2 µs intervals. The gap is sufficiently long to avoid crossover of data between two different bursts. Synchronization of the freely streaming system is performed by injecting time stamps for all sub-systems. In the second stage, feature extraction algorithms (tracking, vertex determination, EMC cluster evaluation, particle identification) are executed with the aim of identifying and reconstructing individual events, which are then analyzed and subjected to filtering.
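The first event-building stage described above can be sketched in a few lines. This is a hypothetical illustration, not the PANDA implementation: the hit layout and function name are assumptions; only the 2 µs burst-building interval and the use of global time stamps come from the text.

```python
# Sketch of burst building for a trigger-less, freely streaming DAQ:
# time-stamped hits are grouped into fixed 2 us intervals by integer
# division of their (SODAnet-style) time stamp. Hit format is assumed.
from collections import defaultdict

BURST_NS = 2_000  # 2 us burst-building interval, matched to the HESR gap


def build_bursts(hits):
    """Group time-stamped hits (t_ns, channel, payload) into bursts."""
    bursts = defaultdict(list)
    for t_ns, channel, payload in hits:
        bursts[t_ns // BURST_NS].append((t_ns, channel, payload))
    return dict(bursts)


# Hits at 150 ns and 1900 ns fall into burst 0; 2100 ns into burst 1;
# 5500 ns into burst 2 - later feature extraction then runs per burst.
stream = [(150, 3, 0xA), (1900, 7, 0xB), (2100, 3, 0xC), (5500, 1, 0xD)]
bursts = build_bursts(stream)
print(sorted(bursts))  # prints [0, 1, 2]
```

In the real system the beam gap guarantees that no hit cluster straddles two bursts, which is why a plain integer division suffices here instead of an overlap-resolving clustering step.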

3. Data Acquisition Architecture

A schematic view of the DAQ architecture is shown in Fig. 3. After digitization with sampling ADCs, local event processing and data compression, the data is collected in data concentrators and forwarded to the FPGA-based L1 network, where burst building and feature extraction are performed. Here, a certain fraction of the events can be rejected, depending on the selection of physics signal channels. Events surviving this stage are forwarded to the L2 network, where refined reconstruction algorithms are executed and the final event selection for mass storage is performed. Here, a general-purpose server farm with attached GPU hardware will be used. The hardware architecture uses FPGA-based compute nodes (CN) [4, 5], which are also employed for the readout and background suppression of the Belle II DEPFET pixel detector [6].

Fig. 3. Schematic view of the PANDA data acquisition and event filtering architecture. After digitization with sampling ADCs, local event processing and data compression, the data is collected in data concentrators and forwarded to an FPGA-based network where burst building and feature extraction are performed. Synchronization of the freely streaming system is performed using a precision time distribution system, injecting time stamps for all sub-systems.

The FPGA-based compute nodes with multi-Gbit/s bandwidth capability are implemented using the ATCA architecture and are designed to handle tasks such as event building, feature extraction and high-level trigger processing. Each CN consists of a carrier board communicating via a full-mesh ATCA backplane with up to 13 other CNs. High-speed serial data transfer is handled by a Virtex-4 FX60 FPGA. The carrier board can host up to 4 xFP cards, which are equipped with Virtex-5 FX70T FPGAs, up to four optical links, 4 GB of DDR2 RAM and Gbit Ethernet. The system is scalable and can be optimized for pipelined and parallel architectures.
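The rate figures quoted in Sections 2 and 3 can be tied together with a short back-of-the-envelope check. The 20 MHz event rate, the 200 GB/s raw rate and the overall 10^3 reduction goal are from the text; the split of that reduction between the L1 and L2 stages is an illustrative assumption.

```python
# Back-of-envelope check of the PANDA DAQ rates quoted in the text.
EVENT_RATE_HZ = 20e6   # annihilation rate (text)
RAW_RATE_B_S = 200e9   # raw data rate after front-end compression (text)
L1_KEEP = 1 / 20       # assumed fraction surviving the FPGA-based L1 stage
L2_KEEP = 1 / 50       # assumed fraction surviving the server/GPU L2 stage

avg_event_size = RAW_RATE_B_S / EVENT_RATE_HZ      # bytes per event
total_reduction = 1 / (L1_KEEP * L2_KEEP)          # combined rejection factor
storage_rate = RAW_RATE_B_S * L1_KEEP * L2_KEEP    # rate to mass storage

print(f"mean event size: {avg_event_size / 1e3:.0f} kB")   # 10 kB
print(f"total reduction: {total_reduction:.0f}x")          # 1000x
print(f"storage rate:    {storage_rate / 1e6:.0f} MB/s")   # 200 MB/s
```

Any L1/L2 split whose product is 10^-3 yields the same ~200 MB/s to mass storage; the split chosen in practice depends on which physics channels the FPGA stage can select efficiently.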
The FPGAs feature embedded PowerPCs running Linux operating systems for slow-control functionality. The large on-board RAM per FPGA is essential, allowing data to be stored for complex algorithms with large latency.
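The benefit of the large per-FPGA RAM can be made concrete with a simple estimate. The 4 GB of DDR2 per xFP card and the four optical links come from the text; the per-link rate of 6.25 Gbit/s is an illustrative assumption, not a specification of the actual hardware.

```python
# How long can one xFP card buffer its full input stream in on-board RAM?
RAM_BYTES = 4e9      # DDR2 RAM per xFP card (text)
LINK_GBIT_S = 6.25   # assumed optical link rate (illustrative)
N_LINKS = 4          # up to four optical links per xFP card (text)

input_rate = N_LINKS * LINK_GBIT_S * 1e9 / 8   # bytes/s into the card
buffer_time = RAM_BYTES / input_rate           # seconds of stream buffered

print(f"input rate : {input_rate / 1e9:.3f} GB/s")  # 3.125 GB/s
print(f"buffer time: {buffer_time:.2f} s")          # 1.28 s
```

Under these assumptions a fully loaded card can hold more than a second of its input stream, which is what permits reconstruction algorithms with latencies far beyond a single 2 µs burst.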

Fig. 4. ATCA-compliant FPGA-based compute node. Top: carrier board; bottom: xFP daughter board. The architecture is based on Xilinx Virtex FPGAs. A single CN can support up to 16 optical links and a total of 18 GB of DDR2 RAM. The FPGAs are equipped with embedded PowerPC CPUs running Linux operating systems for slow-control functions.

4. Summary and Outlook

We have presented an overview of the data acquisition and event filtering system for PANDA. Due to the high event rates, a trigger-less, freely streaming design is required. Event filtering is based on full reconstruction of events with a multi-stage approach, implementing algorithms both on FPGA-based hardware and on general-purpose CPU farms with attached GPU hardware. The FPGA platform has already been successfully used in prototype detector tests. For further development we are preparing an upgrade of the CN board to support the latest FPGA generation.

This work is supported in part by BMBF under contract number 05P2015 PANDA R&D and by the Helmholtz International Centre for FAIR.

References
[1] T. Stockmanns et al., Journal of Physics: Conference Series 664 (2015) 072046.
[2] I. Konorov et al., Nuclear Science Symposium Conference Record (NSS/MIC), p. 1863, 2009.
[3] M. Tiemens et al., Journal of Physics: Conference Series 587 (2015) 012025.
[4] W. Kühn et al., Proc. 16th IEEE-NPSS Real Time Conference, Beijing, China, May 2009.
[5] H. Xu et al., Physics Procedia 37 (2012) 1849-1854.
[6] D. Münchow et al., Journal of Instrumentation, Volume 9, August 2014.