High level trigger online calibration framework in ALICE

Journal of Physics: Conference Series. To cite this article: S R Bablok et al 2008 J. Phys.: Conf. Ser.

Sebastian Robert Bablok 1, Øystein Djuvsland 1, Kalliopi Kanaki 1, Joakim Nystrand 1, Matthias Richter 1, Dieter Röhrich 1, Kyrre Skjerdal 1, Kjetil Ullaland 1, Gaute Øvrebekk 1, Dag Larsen 1, Johan Alme 1, Torsten Alt 2, Volker Lindenstruth 2, Timm M. Steinbeck 2, Jochen Thäder 2, Udo Kebschull 2, Stefan Böttger 2, Sebastian Kalcher 2, Camilo Lara 2, Ralf Panse 2, Harald Appelshäuser 3, Mateusz Ploskon 3, Håvard Helstrup 4, Kristin F. Hetland 4, Øystein Haaland 4, Ketil Roed 4, Torstein Thingnæs 4, Kenneth Aamodt 5, Per Thomas Hille 5, Gunnar Lovhoiden 5, Bernhard Skaali 5, Trine Tveter 5, Indranil Das 6, Sukalyan Chattopadhyay 6, Bruce Becker 7, Corrado Cicalo 7, Davide Marras 7, Sabyasachi Siddhanta 7, Jean Cleymans 8, Artur Szostak 8,7, Roger Fearick 8, Gareth de Vaux 8, Zeblon Vilakazi 8

1 Department of Physics and Technology, University of Bergen, Norway
2 Kirchhoff Institute of Physics, Ruprecht-Karls-University Heidelberg, Germany
3 Institute for Nuclear Physics, University of Frankfurt, Germany
4 Faculty of Engineering, Bergen University College, Norway
5 Department of Physics, University of Oslo, Norway
6 Saha Institute of Nuclear Physics, Kolkata, India
7 I.N.F.N. Sezione di Cagliari, Cittadella Universitaria, Cagliari, Italy
8 UCT-CERN, Department of Physics, University of Cape Town, South Africa

E-mail: Sebastian.Bablok@uib.no

Abstract. The ALICE High Level Trigger (HLT) is designed to perform event analysis of heavy-ion and proton-proton collisions as well as calibration calculations online. A large PC farm, currently under installation, enables the analysis algorithms to process these computationally intensive tasks. The HLT receives event data from all major detectors in ALICE. Interfaces to the various other systems provide the analysis software with the required additional information, and processed results are sent back to the corresponding systems. To allow online performance monitoring of the detectors, an interface for visualizing these results has been developed.

1. Introduction of the ALICE High Level Trigger
ALICE is designed to study heavy-ion (Pb-Pb) and proton-proton (pp) collisions at an event rate of up to 1 kHz. In the Time Projection Chamber (TPC), the main tracking detector in ALICE with the largest data volume, the size of a single event is around 75 MByte. After a hierarchical selection by the Level 0, 1 and 2 triggers and combination with the data of the other relevant detectors, this sums up to a data rate of 25 GByte/s. To match this amount with the Data Acquisition (DAQ) archiving rate of about 1.25 GByte/s, the HLT performs online event analysis and data reduction.
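The quoted rates fix the overall reduction factor the HLT has to deliver. The following minimal C++ sketch is not part of any HLT software; it only restates the arithmetic of the paragraph above.

// Minimal sketch (not part of the HLT software): the data-reduction factor the
// HLT must achieve, using only the rates quoted in the text above.
#include <iostream>

int main() {
    const double hltInputRate   = 25.0;   // GByte/s after L0/L1/L2 selection
    const double daqArchiveRate = 1.25;   // GByte/s DAQ mass-storage bandwidth
    const double reductionFactor = hltInputRate / daqArchiveRate;

    // The HLT reaches this factor by a combination of event selection
    // (Level 3 trigger), Region-of-Interest readout and data compression.
    std::cout << "required combined reduction factor: "
              << reductionFactor << std::endl;   // prints 20
    return 0;
}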

Figure 1. ALICE systems structure and event data flow.

For this purpose, the HLT receives the raw event data of the Front-End Electronics (FEE), which are sent as direct copies of the event data by the DAQ ReadOut Receiver Cards (D-RORC) during the run. In return, the HLT provides a Level 3 trigger (event selection) and efficient event data compression (e.g. entropy coding). Additionally, the HLT allows for the selection of Regions of Interest (RoI) within an event, performance monitoring of the ALICE detectors and the calculation of new calibration data online. As shown in figure 1, the whole setup is steered and synchronized via the Experiment Control System (ECS) [1] [2].

To cope with the large processing requirements involved in these tasks, the HLT consists of a large computing farm with several hundred off-the-shelf PCs. These computers are dual-board machines equipped with AMD dual-core Opteron 2 GHz CPUs, 8 GByte of RAM, two Gigabit Ethernet connections and an InfiniBand backbone for high-throughput communication. An upgrade to quad-core CPUs is foreseen. HLT ReadOut Receiver Cards (H-RORC) inside dedicated Front-End Processor (FEP) nodes accept the raw event data and perform a first reconstruction step. Detector Algorithms (DA) on additional cluster nodes take over and accomplish the above-mentioned tasks. Detector Data Links (DDL), which transfer data over optical fibers, cover the transportation of the results back to DAQ, where they are stored together with the event data.

The layout of the cluster nodes, presented in figure 2, matches the structure of the ALICE detectors and the different event analysis steps involved. Dedicated infrastructure nodes are reserved for services and cluster maintenance (e.g. an 8 TByte AFS (Andrew File System) file server and two gateway machines). Portal nodes take care of the exchange with the other ALICE systems in the online and offline world. These portal nodes and their specialized software are the main focus of this article. The latest server version of Ubuntu Linux, currently 6.06 LTS (Long Term Support), serves as the operating system inside the cluster. An interweaved system of TaskManagers organizes the cluster and steers the tasks on each node internally. A dynamic data transport framework, designed after the publish/subscribe principle, takes care of the data flow [3] [4] [5]. Detector Algorithms (DA) of the AliRoot package, the analysis framework of ALICE, analyze the incoming raw event data and calculate new calibration settings [6]. The analysis software works independently of the transportation framework, which allows the DAs to run offline as well without any changes.
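The publish/subscribe principle of the data transport framework can be pictured with a minimal sketch. The class and method names below are illustrative assumptions and do not reflect the actual PublisherSubscriber framework API [3] [4] [5].

// Minimal, self-contained sketch of the publish/subscribe principle used by the
// HLT data transport framework. Names are illustrative, not the real API.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct DataBlock {              // stand-in for an event data block descriptor
    std::string origin;         // producing component (e.g. "TPC-clusterfinder")
    std::vector<char> payload;  // the actual data
};

class Publisher {
public:
    using Subscriber = std::function<void(const DataBlock&)>;
    void Subscribe(Subscriber s) { fSubscribers.push_back(std::move(s)); }
    // Announce a new block to all subscribed consumers (the next processing step).
    void Publish(const DataBlock& block) const {
        for (const auto& s : fSubscribers) s(block);
    }
private:
    std::vector<Subscriber> fSubscribers;
};

int main() {
    Publisher clusterFinderOutput;
    // A downstream component (e.g. a tracker) subscribes to the output.
    clusterFinderOutput.Subscribe([](const DataBlock& b) {
        std::cout << "received " << b.payload.size()
                  << " bytes from " << b.origin << '\n';
    });
    clusterFinderOutput.Publish({"TPC-clusterfinder", std::vector<char>(1024)});
    return 0;
}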

Figure 2. HLT architecture and data flow inside the HLT cluster. The cluster organization matches the structure of the ALICE detectors (TPC, Transition Radiation Detector (TRD), Inner Tracking System (ITS), ...) and their analysis steps from cluster finding to trigger decisions and data compression.

This enables result comparison later on [7]. The cluster itself is monitored by the SysMES framework (System Management for Networked Embedded Systems and Clusters) and Lemon (LHC Era Monitoring) [8]. Fail safety and the avoidance of single points of failure have been major issues in the design of the cluster.

2. HLT Interfaces

2.1. Overview
The HLT has redundant interfaces to the various other systems in ALICE. These include the ALICE online systems, like the ECS, FEE, DAQ and the Detector Control System (DCS), as well as the ALICE Offline system and the ALICE Event Monitoring (AliEve) framework. The latter allows ALICE to be monitored online in the ALICE Control Room (ACR). For receiving raw event data, 365 DDLs from the different detectors in ALICE are connected to the H-RORCs in the FEP nodes. The data are analyzed inside the HLT cluster. In detail, these tasks are: provision of trigger decisions, selection of RoIs (only the data of the relevant parts are streamed out) and lossless data compression (like entropy coding or vector quantization in the TPC data model) [9] [10]. These data are sent back to the DAQ Local Data Concentrators (DAQ-LDC) for permanent storage via 12 DDLs.

The interfaces to ECS, DCS, Offline and AliEve are described in the following subsections. In case of a failure of one portal, the backup node takes over the corresponding task. In the DCS and Offline case, the tasks for communication in the two exchange directions (receiving data and sending data) are separated into different applications with their own names. DCS data are fetched via the so-called Pendolino, while HLT data are sent to the DCS over the Front-End-Device (FED) API [11]. Offline can fetch data from the HLT using the Offline Shuttle mechanism, and data from Offline are retrieved over the HLT Taxi. A sketch of these interfaces is presented in figure 3.

2.2. ECS interface
The HLT, like all other ALICE systems, is controlled by the ECS. An ECS-proxy, consisting of a finite state machine, contacts the ECS and informs it about the current HLT state. Transition commands issued by the ECS trigger state changes and provide the initial settings of the upcoming run. This information includes the upcoming run number, the experiment type (Pb-Pb or pp), operating mode, trigger classes, DDL lists, etc. The ECS-proxy receives the current state from the Master TaskManagers, which control the HLT cluster internally. All state transition commands issued by the ECS are forwarded to the Master TaskManagers as well [12]. The proxy is implemented in SMI++ (State Management Interface), which communicates with the ALICE ECS system using DIM (Distributed Information Management), a communication framework developed at CERN [13]. The ECS is connected to all other ALICE systems. This allows for synchronizing the HLT with the other parts of ALICE.
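As an illustration of the finite-state-machine character of the ECS-proxy, the following minimal sketch models a few transitions driven by ECS commands. Apart from the INITIALIZE command and the run settings mentioned above, the state and command names are simplified assumptions and do not correspond one-to-one to the SMI++/DIM implementation.

// Minimal sketch of a finite state machine as used conceptually by the ECS-proxy.
// States and commands are simplified assumptions, not the real SMI++/DIM objects.
#include <iostream>
#include <map>
#include <string>
#include <utility>

enum class State { Off, Initialized, Configured, Ready, Running, Error };

struct RunSettings {            // subset of the settings transmitted by ECS
    int runNumber = 0;
    std::string beamType;       // "Pb-Pb" or "pp"
};

class EcsProxy {
public:
    // Apply a transition command received from ECS; returns false if illegal.
    bool Handle(const std::string& command, const RunSettings& settings = {}) {
        static const std::map<std::pair<State, std::string>, State> table = {
            {{State::Off,         "INITIALIZE"}, State::Initialized},
            {{State::Initialized, "CONFIGURE"},  State::Configured},
            {{State::Configured,  "ENGAGE"},     State::Ready},
            {{State::Ready,       "START"},      State::Running},   // SoR
            {{State::Running,     "STOP"},       State::Ready},     // EoR
            {{State::Ready,       "SHUTDOWN"},   State::Off},
        };
        auto it = table.find({fState, command});
        if (it == table.end()) return false;
        if (command == "INITIALIZE") fSettings = settings;   // keep run settings
        fState = it->second;
        return true;
    }
    State Current() const { return fState; }
private:
    State fState = State::Off;
    RunSettings fSettings;
};

int main() {
    EcsProxy proxy;
    proxy.Handle("INITIALIZE", {12345, "Pb-Pb"});
    proxy.Handle("CONFIGURE");
    std::cout << "configured: " << (proxy.Current() == State::Configured) << '\n';
    return 0;
}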

Figure 3. Overview of the HLT interfaces (FEP = Front-End Processor; DDL = Detector Data Link; HOMER = HLT Online Monitoring Environment including ROOT).

2.3. DCS interface

2.3.1. Pendolino
The DCS controls and configures the FEE of all detectors in ALICE. In addition, the current detector status is monitored; temperature, voltage and current values are measured. These run conditions are received in the PVSS (Process Visualization and Steering System) panels of the corresponding detectors and then stored as datapoints in the DCS Archive DB during the run. The DAs running in the HLT cluster require a well-defined subset of these values to calculate calibration settings and observables like the TPC drift velocity. Therefore a special HLT application, the Pendolino, contacts the DCS during the run and fetches the desired values. Since these values are constantly measured and can vary during the run, the Pendolino requests them frequently from an Amanda server (Alice MANager for Dcs Archives), which sits on top of the DCS Archive DB [14]. It is foreseen to have three different Pendolinos running, each with a different frequency and each requesting a different subset of datapoints. The datapoints are received as timestamp-value pairs. To allow the DAs to read the data regardless of whether they run online or offline, the pairs have to be preprocessed and enveloped into ROOT objects. Each detector providing DAs to the HLT has to implement its own preprocessing routine. This procedure is adapted from the Offline Shuttle mechanism, which is used to store DCS data into the Offline Condition DataBase (OCDB) [15].
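The Pendolino's fetch cycle can be pictured as a periodic request for a list of datapoint names, returning timestamp-value pairs for the interval since the last request. The function and type names in the sketch below are assumptions for illustration and not the actual Pendolino or Amanda interfaces.

// Sketch of the Pendolino fetch cycle (illustrative names, not the real interfaces).
#include <chrono>
#include <iostream>
#include <map>
#include <string>
#include <thread>
#include <vector>

using Clock = std::chrono::system_clock;
struct DcsValue {                       // one archived measurement
    Clock::time_point timestamp;
    double value;
};

// Placeholder for a request to the Amanda server on top of the DCS Archive DB:
// returns all values of 'datapoint' recorded since 'since'.
std::vector<DcsValue> FetchFromAmanda(const std::string& datapoint,
                                      Clock::time_point since) {
    return {{Clock::now(), 42.0}};      // dummy data for the sketch
}

int main() {
    const std::vector<std::string> datapoints = {"TPC_temperature", "TPC_HV"};
    const auto interval = std::chrono::seconds(60);   // per-Pendolino frequency
    auto lastFetch = Clock::now() - interval;

    for (int cycle = 0; cycle < 3; ++cycle) {         // run loop (bounded here)
        std::map<std::string, std::vector<DcsValue>> result;
        for (const auto& dp : datapoints)
            result[dp] = FetchFromAmanda(dp, lastFetch);
        lastFetch = Clock::now();
        // Hand the timestamp-value pairs to the detector preprocessing routines,
        // which envelope them into ROOT objects and update the HCDB (see below).
        std::cout << "cycle " << cycle << ": fetched "
                  << result.size() << " datapoints\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // stand-in
    }
    return 0;
}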

Figure 4. Deployment of the Pendolino for fetching DCS values from the DCS Archive DB and providing them as ROOT files to the DAs in the HLT cluster.

Since it can take up to two minutes until DCS data are shipped from the PVSS panels to the DCS Archive DB, the preprocessing routine for the HLT has to encode some prediction calculation for the retrieved values in the produced ROOT objects. This might be required for certain values in order to cope with online processing. The prediction encoding gives the routine its name: PredictionProcessor. The implementation of the prediction calculation is up to the detectors requiring the data. The produced ROOT file is stored in a file catalogue called the HLT Condition DataBase (HCDB). The file catalogue is distributed to the cluster nodes running the DAs and updated each time new data are available. Afterwards, a notification about new content in the HCDB is percolated through the analysis chain. The Pendolino procedure is visualized in figure 4.

2.3.2. FED-portal
To return data like the TPC drift velocity to the DCS system, the HLT uses the Front-End-Device (FED) API, which is common among all detectors integrated in the DCS. DCS-related data inside the HLT cluster are therefore collected by the FED-portal during the run. A DIM server implementing the FED API sends these data from the FED-portal to the corresponding PVSS panels on the DCS side. From there they are included automatically in the DCS Archive DB.

2.4. Offline interface

2.4.1. Taxi
Calibration and condition settings, either assumed or calculated in former runs, are stored as ROOT files in the OCDB [15]. The DAs require them in order to analyze events and calculate new calibration objects. A special application, called the Taxi, queries the OCDB for the latest available calibration settings in regular time intervals and synchronizes them with the local copy of the OCDB, the HCDB. To reduce traffic, the Taxi first checks whether the data are already available in the HCDB before fetching them from the OCDB. The whole procedure runs independently and asynchronously to any run. At the start of each run, the current version of the HCDB is fixed to avoid updates of calibration settings during the run. Then the HCDB is distributed to all cluster nodes running DAs. Access to the HCDB is granted through the AliCDB (AliRoot Conditions Database) access classes, which are also used in Offline to access the OCDB. This guarantees transparent access for the DAs, independent of running online or offline. The AliCDB access classes automatically return the latest version of the calibration settings valid for a given run number.
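The Taxi's synchronization step amounts to comparing the OCDB content with the local HCDB copy and fetching only what is missing or outdated. The catalogue layout and function names below are illustrative assumptions, not the actual Taxi implementation.

// Sketch of the Taxi synchronization: fetch from the OCDB only what the local
// HCDB copy does not already hold. Illustrative names, not the real Taxi code.
#include <iostream>
#include <map>
#include <string>

// Both catalogues map a calibration path to a version tag.
using Catalogue = std::map<std::string, std::string>;

// Copy into 'hcdb' every OCDB entry that is missing locally or has changed.
// Returns the number of objects actually transferred; everything already
// present with the same version is skipped, which is where traffic is saved.
int SyncTaxi(const Catalogue& ocdb, Catalogue& hcdb) {
    int fetched = 0;
    for (const auto& [path, version] : ocdb) {
        auto it = hcdb.find(path);
        if (it == hcdb.end() || it->second != version) {
            hcdb[path] = version;   // stand-in for fetching the ROOT file
            ++fetched;
        }
    }
    return fetched;
}

int main() {
    Catalogue ocdb = {{"TPC/Calib/Gain", "v2"}, {"TPC/Calib/Pedestals", "v1"}};
    Catalogue hcdb = {{"TPC/Calib/Gain", "v1"}};   // local copy, partly outdated
    std::cout << "objects fetched: " << SyncTaxi(ocdb, hcdb) << '\n';  // prints 2
    return 0;
}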

Figure 5. Deployment of the Shuttle portal and the Offline Shuttle mechanism for retrieving newly calculated calibration objects from the HLT cluster.

2.4.2. Shuttle portal
After each run the Shuttle portal collects all newly calculated calibration objects from the DAs. Transportation of the data is realized via dedicated components of the PublisherSubscriber framework. The calibration objects are stored in a File EXchange Server (FXS), while additional meta data for each file (like run number, detector, file ID, file size, checksum and timestamps) are stored in a MySQL DB. When all new objects are saved, the Shuttle portal notifies the ECS-proxy that the collection process has finished. Now the ECS can trigger the start of the Offline Shuttle. The Shuttle requests the meta data of the latest run for the new entries in the FXS from the MySQL DB. Then it fetches the corresponding files from the Shuttle portal non-interactively, using an ssh key. All new files are preprocessed by detector-specific ShuttlePreprocessors and enveloped in ROOT files, if this has not already been done inside the HLT cluster [15]. Afterwards the new entries are stored in the OCDB, where the Taxi can fetch them for the next run. The detour over the OCDB has been chosen to guarantee coherent version control of the calibration objects. The whole mechanism is sketched in figure 5.
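The per-file meta data kept by the Shuttle portal can be sketched as a simple record; the field names follow the list given in the text above, while the structure and query function are illustrative assumptions and not the actual MySQL schema.

// Sketch of the meta data the Shuttle portal stores per calibration file in the
// MySQL DB. Field names follow the text; the real schema may differ.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct FxsEntry {
    int runNumber;            // run the calibration object belongs to
    std::string detector;     // e.g. "TPC"
    std::string fileId;       // identifier of the object within the detector
    std::string filePath;     // location of the file in the File EXchange Server
    std::uint64_t fileSize;   // size in bytes
    std::string checksum;     // integrity check for the transfer to Offline
    std::int64_t createdAt;   // timestamp (seconds since epoch, illustrative)
};

// The Offline Shuttle asks for all entries of the latest run before fetching
// the files themselves from the FXS via ssh.
std::vector<FxsEntry> EntriesForRun(const std::vector<FxsEntry>& db, int run) {
    std::vector<FxsEntry> out;
    for (const auto& e : db)
        if (e.runNumber == run) out.push_back(e);
    return out;
}

int main() {
    std::vector<FxsEntry> db = {
        {12345, "TPC", "drift_velocity", "/fxs/12345/TPC/dv.root", 2048, "ab12", 0}};
    std::cout << "entries for run 12345: " << EntriesForRun(db, 12345).size() << '\n';
    return 0;
}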

2.5. AliEve interface
Since the HLT performs the tasks of event analysis and calculation of new calibration data online, observation of the results is also possible online. Therefore the HLT provides the HOMER (HLT Online Monitoring Environment including ROOT) interface, which offers a connection to the ALICE Event monitoring framework (AliEve). AliEve is part of the AliRoot package and includes 3D visualization as well as the display of ROOT structures and histograms [16]. HOMER can fetch produced results at any step of the HLT analysis chain and transport them to any AliEve application inside the CERN General Purpose Network (GPN). This enables the operators to display results directly in the ACR.

3. Time line sequence and synchronization
Each of the presented interfaces has a dedicated place in the usage sequence. As shown in figure 6, this sequence is divided into five different time periods:

Figure 6. Sequence diagram displaying the interplay of the different interfaces and portals participating in the calibration framework of the HLT. SoR (Start-of-Run) and EoR (End-of-Run) are special events triggered by the ECS to indicate the start and end of a run.

Independent from a run (asynchronous to the runs): Before a first run (and repeated in regular time intervals) the Taxi requests the latest calibration settings from the OCDB and caches them locally in the HCDB. This task is accomplished completely asynchronously to any run and can also be performed during a run.

Initialization period before a run: The ECS informs the HLT about an upcoming run with a first INITIALIZE command. In addition, several run settings (like run number, beam type, trigger classes, etc.) are transmitted. During the following configuration steps, the HLT freezes the current version of the HCDB and distributes it to the cluster nodes running the DAs (see the sketch after this list). In case the Taxi fetches new data from the OCDB during a run, the new settings are only stored in the HCDB version located on the Taxi portal node, but not updated on the DA nodes. This guarantees that the DAs use a coherent version during the complete run. The completion of the initialization procedure is signaled back to the ECS.

During a run: Every run starts with a special event triggered by the ECS, the Start-of-Run (SoR). After the SoR event, raw event data are received from the FEE on the FEP nodes. The data are processed and analyzed over several steps, and new calibration settings are calculated. For additional input, the Pendolino fetches the current environment and condition settings from the DCS Archive DB (like temperature, voltages, etc.). After preprocessing and enveloping them, they are available to the DAs via the HCDB. Analyzed events and trigger decisions are streamed out to DAQ for permanent storage. Freshly calculated, DCS-relevant data are sent through the FED-portal for monitoring and storage in the DCS. Online visualization of events and calibration data is enabled via the HOMER interface and allows the performance of the detectors to be monitored in the ACR. This is continuously repeated during the run, and a notification about new DCS data in the HCDB is percolated through the analysis chain after each update.

End of a run: At the end of a run, the ECS again issues a special event, called End-of-Run (EoR). The event is percolated through the analysis chain and notifies each component to terminate. This phase is called completing, because it can take some time until all events are worked off and the HLT is ready for the next run. During this time the Shuttle portal collects all freshly produced calibration objects, fills them into the FXS and stores the additional meta data in the MySQL DB. As soon as this is finished, the ECS-proxy signals to the ECS that the Offline Shuttle can start collecting the HLT data.

After the end: Finally, the Offline Shuttle can contact the MySQL DB and the FXS on the corresponding HLT portal node and fetch the new data for the OCDB. The HLT cluster can already be used for the next run, since the fetching does not require any actions from the HLT side.
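A minimal sketch of the HCDB handling around a run, assuming a simple key-value catalogue: the Taxi keeps updating its local copy asynchronously, while the DA nodes work on a snapshot frozen at initialization. Names and structure are illustrative only; the real HCDB is a file catalogue of ROOT objects.

// Sketch of the per-run HCDB handling: the Taxi copy is updated asynchronously,
// the DA nodes receive a frozen snapshot at run initialization.
#include <iostream>
#include <map>
#include <string>

using Catalogue = std::map<std::string, std::string>;  // path -> object version

struct HltCalibrationDb {
    Catalogue taxiCopy;    // kept up to date by the Taxi, independent of runs
    Catalogue runSnapshot; // what the DA nodes see during the current run

    void TaxiUpdate(const std::string& path, const std::string& version) {
        taxiCopy[path] = version;          // may happen at any time, even mid-run
    }
    void FreezeForRun() {                  // done during the INITIALIZE phase
        runSnapshot = taxiCopy;            // then distributed to all DA nodes
    }
};

int main() {
    HltCalibrationDb hcdb;
    hcdb.TaxiUpdate("TPC/Calib/Gain", "v1");
    hcdb.FreezeForRun();                      // run starts: DAs work on v1
    hcdb.TaxiUpdate("TPC/Calib/Gain", "v2");  // arrives during the run
    std::cout << "DA nodes see: " << hcdb.runSnapshot["TPC/Calib/Gain"]
              << ", Taxi copy has: " << hcdb.taxiCopy["TPC/Calib/Gain"] << '\n';
    return 0;
}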

4. Status and Performance
These interfaces are in different stages of development. Most of them have been implemented and are in the test phase, which leads to an ongoing optimization and fine tuning of the different interfaces. The ECS-proxy was implemented over a year ago and its functionality has been tested widely in tests with the ECS and during the TPC commissioning in June. The Shuttle portal and the Taxi are written and now deployed for performance tests and enhancements. First measurements indicate that they will do their job according to the requirements. The Pendolino is implemented without the PredictionProcessor and is currently undergoing performance tests as well. At the moment the Pendolino takes 9 seconds to fetch 250 different datapoints, using all 250 datapoint names in one request. An upcoming upgrade of the Amanda server, which avoids a detour of the request over PVSS, will bring further speed enhancements. The PredictionProcessor interface is in final discussion and a first prototype using the TPC PredictionProcessor is about to be implemented. The FED API of the FED-portal is implemented and waiting to be tested in the PubSub (PublisherSubscriber) framework. Inclusion in the corresponding PVSS panels is pending. HOMER was implemented a while ago and has been widely tested, most recently in the setup of the TPC commissioning. The HLT has been able to monitor the TPC online in the ACR during the commissioning, and the results are very promising. A combined test with all interfaces is pending, but scheduled for the full dress rehearsal at the beginning of November.

5. Summary
The ALICE HLT consists of a large computing farm with approximately 1000 computing units. Fast connections guarantee high-performance data throughput. The layout of the cluster matches the structure of the ALICE detectors and their analysis steps. Interfaces to other parts of ALICE allow for data exchange with the online and offline systems. Current run conditions are read from the DCS, and calibration settings are fetched from Offline. Connections in the opposite direction allow new data to be fed back. An interface to AliEve allows processed events to be visualized online. External cluster control and synchronization is achieved via the ECS-proxy. The framework presented in this article enables the HLT to perform detector performance measurements and physics monitoring, as well as calibration calculations online. The HLT will be able to provide all required data for the analysis software performing first physics in ALICE.

Acknowledgments
The development of these HLT interfaces has been accompanied by very good and fruitful cooperation with the collaborations of the connected systems in ALICE. The ALICE HLT project has been supported by the Norwegian Research Council (NFR).

References
[1] ALICE Collaboration, ALICE Technical Proposal for A Large Ion Collider Experiment at the CERN LHC, CERN/LHCC (1998)
[2] ALICE Collaboration, ALICE Technical Design Report of the Trigger, Data Acquisition, High-Level Trigger, and Control System, ALICE-TDR-10, CERN-LHCC (2003)
[3] Steinbeck T M et al, New experiences with the ALICE High Level Trigger Data Transport Framework, Proc. Computing in High Energy Physics Conf. (CHEP04), Interlaken, Switzerland

[4] Steinbeck T M et al 2002, An object-oriented network-transparent data transportation framework, IEEE Trans. Nucl. Sci. 49 (2002)
[5] Steinbeck T M et al 2002, A Framework for Building Distributed Data Flow Chains in Clusters, Proc. 6th International Conference PARA 2002, Espoo, Finland, June 2002, Lecture Notes in Computer Science LNCS 2367, Springer-Verlag Heidelberg
[6] ALICE Off-line project
[7] Richter M et al, High Level Trigger applications for the ALICE experiment, submitted to IEEE Trans. Nucl. Sci.
[8] Lara C, The SysMES architecture: System management for networked embedded systems and clusters, DATE 2007 PhD Forum, Nice, France (2007)
[9] Röhrich D and Vestbø A, Efficient TPC data compression by track and cluster modeling, Nucl. Instrum. Meth. A 566 (2006)
[10] Lindenstruth V et al, Real time TPC analysis with the ALICE High Level Trigger, Nucl. Instrum. Meth. A 534 (2004)
[11]
[12] Bablok S et al, ALICE HLT interfaces and data organisation, Proc. Computing in High Energy and Nuclear Physics Conf. (CHEP 2006), Mumbai, India, ed Banerjee S, vol 1, Macmillan India Ltd (2007)
[13] Gaspar C et al, An architecture and a framework for the design and implementation of large control systems, Proc. ICALEPCS 1999, Trieste, Italy
[14] ALICE DCS Amanda project
[15] Colla A and Grosse-Oetringhaus J F, ALICE internal note describing the Offline Shuttle mechanism (about to be published)
[16] Tadel M and Mrak-Tadel A, AliEVE - ALICE Event Visualization Environment, Proc. Computing in High Energy and Nuclear Physics Conf. (CHEP 2006), Mumbai, India, ed Banerjee S, vol 1, Macmillan India Ltd (2007)
