
Available on CMS information server    CMS CR-2009/065

The Compact Muon Solenoid Experiment

Conference Report

Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland

08 May 2009

The CMS event builder and storage system

Gerry Bauer 7), Barbara Beccati 3), Ulf Behrens 2), Kurt Biery 6), Angela Brett 6), James Branson 5), Eric Cano 3), Harry Cheung 6), Marek Ciganek 3), Sergio Cittolin 3), Jose Antonio Coarasa 3,5), Christian Deldicque 3), Elizabeth Dusinberre 5), Samim Erhan 4), Fabiana Fortes Rodrigues 1), Dominique Gigi 3), Frank Glege 3), Robert Gomez-Reino 3), Johannes Gutleber 3), Derek Hatton 2), Markus Klute 7), Jean-Francois Laurens 3), Constantin Loizides 7), Juan Antonio Lopez Perez 3,6), Frans Meijers 3), Emilio Meschi 3), Andreas Meyer 2,3), Remigius K. Mommsen 6), Roland Moser 3)a), Vivian O'Dell 6), Alexander Oh 3)b), Luciano Orsini 3), Vaios Patras 3), Christoph Paus 7), Andrea Petrucci 5), Marco Pieri 5), Attila Racz 3), Hannes Sakulin 3), Matteo Sani 5), Philipp Schieferdecker 3)c), Christoph Schwick 3), Josep Francesc Serrano Margaleff 7), Dennis Shpakov 6), Sean Simon 5), Konstanty Sumorok 7), Marco Zanetti 3)

Abstract

The CMS event builder assembles events accepted by the first-level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of 2 stages: an initial pre-assembly reducing the number of fragments by one order of magnitude and a final assembly by several independent readout builder (RU-builder) slices. The RU-builder is based on 3 separate services: the buffering of event fragments during the assembly, the event assembly, and the data flow manager. A further component is responsible for handling events accepted by the high-level trigger: the storage manager (SM) temporarily stores the events on disk at a peak rate of 2 GB/s until they are permanently archived offline. In addition, events and data-quality histograms are served by the SM to online monitoring clients. We discuss the operational experience from the first months of reading out cosmic-ray data with the complete CMS detector.

1) Centro Federal de Educação Tecnológica Celso Suckow da Fonseca, Rio de Janeiro, Brazil
2) DESY, Hamburg, Germany
3) CERN, Geneva, Switzerland
4) University of California, Los Angeles, Los Angeles, California, USA
5) University of California, San Diego, San Diego, California, USA
6) FNAL, Chicago, Illinois, USA
7) Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
a) Also at Technical University of Vienna, Vienna, Austria
b) Now at University of Manchester, Manchester, United Kingdom
c) Now at University of Karlsruhe, Karlsruhe, Germany

Presented at the 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2009), Prague, Czech Republic, 21-27 March 2009

The CMS event builder and storage system

G Bauer 7, B Beccati 3, U Behrens 2, K Biery 6, A Brett 6, J Branson 5, E Cano 3, H Cheung 6, M Ciganek 3, S Cittolin 3, JA Coarasa 3,5, C Deldicque 3, E Dusinberre 5, S Erhan 4, F F Rodrigues 1, D Gigi 3, F Glege 3, R Gomez-Reino 3, J Gutleber 3, D Hatton 2, M Klute 7, J-F Laurens 3, C Loizides 7, JA Lopez Perez 3,6, F Meijers 3, E Meschi 3, A Meyer 2,3, R K Mommsen 6, R Moser 3,a, V O'Dell 6, A Oh 3,b, L Orsini 3, V Patras 3, C Paus 7, A Petrucci 5, M Pieri 5, A Racz 3, H Sakulin 3, M Sani 5, P Schieferdecker 3,c, C Schwick 3, J F Serrano Margaleff 7, D Shpakov 6, S Simon 5, K Sumorok 7 and M Zanetti 3

1 Centro Federal de Educação Tecnológica Celso Suckow da Fonseca, Rio de Janeiro, Brazil
2 DESY, Hamburg, Germany
3 CERN, Geneva, Switzerland
4 University of California, Los Angeles, California, USA
5 University of California, San Diego, California, USA
6 FNAL, Chicago, Illinois, USA
7 Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
a Also at Technical University of Vienna, Vienna, Austria
b Now at University of Manchester, Manchester, United Kingdom
c Now at University of Karlsruhe, Karlsruhe, Germany

E-mail: remigius.mommsen@cern.ch

Abstract. The CMS event builder assembles events accepted by the first-level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of 2 stages: an initial pre-assembly reducing the number of fragments by one order of magnitude and a final assembly by several independent readout builder (RU-builder) slices. The RU-builder is based on 3 separate services: the buffering of event fragments during the assembly, the event assembly, and the data flow manager. A further component is responsible for handling events accepted by the high-level trigger: the storage manager (SM) temporarily stores the events on disk at a peak rate of 2 GB/s until they are permanently archived offline. In addition, events and data-quality histograms are served by the SM to online monitoring clients. We discuss the operational experience from the first months of reading out cosmic-ray data with the complete CMS detector.

1. Introduction

The CMS event builder collects event fragments from approximately 500 front-end readout modules (FEDs) located close to the underground detector and assembles them into complete events at a maximum rate of 100 kHz (see figure 1). The complete events are handed to the high-level trigger (HLT) processors located on the surface and running offline-style algorithms to select interesting events. Accepted events are transferred to the storage manager (SM), which temporarily stores the events on disk at a peak rate of 2 GB/s until they are permanently archived offline. In addition, events and data-quality histograms are served by the storage manager application to online monitoring clients. The design of the CMS Data Acquisition System and of the High Level Trigger is described in detail in the Technical Design Report [1].

Figure 1. Overview of the CMS DAQ system consisting of detector front-end readout modules (FEDs) feeding up to 8 DAQ slices consisting of independent event builders and high-level trigger (HLT) farms. Key parameters shown in the figure: data to surface: average event size 1 MB, ~700 detector front-ends, 512 front-end readout links, 72 FED-builder 8x8 switches (Myrinet); per DAQ slice (1/8 of the full system): level-1 maximum trigger rate 12.5 kHz, throughput 1/8 Terabit/s, fragment size 16 kB, 72x288 readout-builder switch (Gigabit Ethernet); data to disk: HLT accept rate ~100 Hz, peak rate 2 GB/s, filter power ~0.5 TFlop, transfer to Tier 0 at 2x10 Gbit/s.

2. CMS Event Building in 2 Stages

CMS employs only 2 trigger levels: a first-level trigger based on custom hardware, and a high-level trigger (HLT) using commodity PCs. The assembly of event fragments into complete events takes place at a level-1 accept rate of 100 kHz. In order to cope with the aggregated bandwidth of 100 GB/s, and to provide a scalable system, a 2-tier approach for the event building was chosen: an initial pre-assembly by the FED builder and a final assembly and event selection by several independent DAQ slices.
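The numbers above fit together in a simple back-of-the-envelope check. The following sketch is illustrative only; the constants are taken from figure 1 and the text, not from any CMS software, and it merely reproduces the aggregate throughput, the per-slice rate and the average super-fragment size:

# Back-of-the-envelope check of the CMS event-builder design parameters
# (illustrative only; values are taken from figure 1 and the surrounding text).

LEVEL1_RATE_HZ = 100e3        # maximum level-1 accept rate
EVENT_SIZE_MB = 1.0           # average event size
N_FRONTEND_LINKS = 512        # front-end readout links feeding the FED builders
FED_BUILDER_FANIN = 8         # each FED builder merges 8 links into one super-fragment
N_DAQ_SLICES = 8              # independent readout-builder slices

aggregate_throughput_gb_s = LEVEL1_RATE_HZ * EVENT_SIZE_MB / 1e3   # MB/s -> GB/s
super_fragments = N_FRONTEND_LINKS / FED_BUILDER_FANIN             # sources after pre-assembly
slice_rate_khz = LEVEL1_RATE_HZ / N_DAQ_SLICES / 1e3               # level-1 rate seen by one slice
super_fragment_kb = EVENT_SIZE_MB * 1e3 / super_fragments          # average super-fragment size

print(f"aggregate throughput       : {aggregate_throughput_gb_s:.0f} GB/s")
print(f"sources after FED builder  : {super_fragments:.0f} super-fragments per event")
print(f"rate per DAQ slice         : {slice_rate_khz:.1f} kHz")
print(f"average super-fragment size: {super_fragment_kb:.0f} kB")

Running the sketch gives 100 GB/s, 64 super-fragments per event (one order of magnitude fewer than the ~500 input fragments), 12.5 kHz per slice and ~16 kB super-fragments, consistent with the design figures quoted above.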

2.1. FED Builder

Each FED builder assembles data from 8 front-end links into one super-fragment using redundant Myrinet [2] switches. The super-fragment is delivered to one of up to 8 independent readout slices on the surface, where it is buffered on commodity PCs, the readout units (RUs). For a detailed discussion of the FED builder, see [3].

2.2. DAQ Slices

The DAQ slices are mutually independent systems which are in charge of building full events out of super-fragments, performing the physics selection, and forwarding the selected events to mass storage via the storage manager (SM). The independence of each DAQ slice has several advantages:

- Relaxed requirements: each slice needs to handle a reduced event rate of 12.5 kHz instead of 100 kHz.
- Scalability: slices can be added or removed as needed.
- Robustness: in case one slice fails, data taking does not stall, as the other slices continue to process events.
- Reduced interconnects: fewer interconnected components and fewer internal couplings.
- Technology independence: different technologies can be used for the FED builder and for the DAQ slices, or the technology can even differ between DAQ slices.

Each readout unit (RU) is connected with two or three links ("rails") via a large switch to builder units (BUs). The existing system employs the TCP/IP protocol over Gigabit Ethernet [4]. In order to achieve the necessary performance, a specific configuration of the source and destination VLANs in the network is required. The event building in each slice is controlled by one event manager (EVM), which receives the level-1 trigger information. The RUs, BUs, and the EVM make up the RU-builder. The BUs store the complete events until the filter units (FUs) running the high-level trigger (HLT) algorithms have either rejected or accepted them. Accepted events are sent to the storage manager (SM), which writes the events to disk.

2.3. RU-Builder Protocol: Token Passing

The RU-builder consists of 3 separate services (see figure 2): the readout unit (RU) buffers the event fragments for the assembly, the builder unit (BU) assembles the events, and the event manager (EVM) interfaces with the trigger and orchestrates the data flow. The applications exchange messages over the network and use first-in-first-out queues (FIFOs) to keep track of requests, trigger data, and event data.

The BU can build several events concurrently (see figure 3). An event is composed of one super-fragment from the trigger and one super-fragment from each RU that takes part in the event building. If the BU has resources available to build another N events, it sends the EVM N tokens representing freed event IDs (1). The EVM sends the trigger supervisor N new trigger credits. If the trigger has credits available, it will send the trigger information of new events to the EVM. The EVM pairs the trigger information with an available token and notifies the RUs to read out the event's data (2). Several tokens together with the corresponding trigger data are sent to the BU requesting new events (3). The BU adds the trigger data as the first super-fragment of the event and requests the RUs to send the rest of the event's super-fragments (4). Each RU responds by retrieving the super-fragment from its internal fragment look-up table and sending it to the BU (5). The BU builds the super-fragments that it receives from the RUs into a whole event. Filter units (FUs) can ask the BU to allocate events to them. If the BU has a complete event, it assigns it to a FU (6).

Figure 2. Schematic view of the readout builder (RU-builder): event fragments from the detector front-end readout modules are buffered in the readout units (RUs). The fragments are transferred through a large switching fabric to builder units (BUs), where they are assembled into complete events. The event manager (EVM) orchestrates the event building.

When a FU has finished processing an event, either rejecting it or sending it to the SM to be recorded, it tells the BU to discard it (7). When N events have been discarded, the BU returns the N tokens corresponding to the discarded events to the EVM (8), and the cycle continues.

2.4. XDAQ Framework

The RU-builder and the storage manager applications use the XDAQ framework. XDAQ is middleware that eases the development of distributed data-acquisition systems. It provides services for data transport, configuration, monitoring, and error reporting. The framework has been developed at CERN and builds upon industrial standards, open protocols and libraries [5].

3. Storing Data on Disk: The Storage Manager

The storage manager (SM) is located at the end of the online data chain. It receives the events accepted by the high-level trigger (HLT) running on the filter units (FUs) (see figure 4). The accepted events are split over several I2O binary messages (fragments) and sent over a dedicated Gigabit-Ethernet network to the SM. Each DAQ slice has one or two SM applications. The SM reassembles the fragments into complete events and writes them to disk. Events are stored in one or multiple files, depending on the triggers that fired for the event (the trigger bits) and the HLT process that selected the event (the trigger path). These files are buffered on a local NexSan SATABeast disk array (RAID-6) and subsequently copied to the central computing center (Tier 0), which repacks the events into larger files. These files are then fed to the offline event-reconstruction farms.

A subset of the events is sent to online consumers, using an HTTP request/response loop.
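As an illustration of the stream bookkeeping described above, the following sketch shows how an accepted event could be routed to one or more stream files according to the HLT paths that selected it. The stream and path names are hypothetical and do not correspond to the actual storage manager configuration:

# Hypothetical sketch of routing an accepted event to one or more stream files
# based on the HLT paths (trigger paths) that selected it.
# Stream names and path names are invented for illustration only.

STREAM_DEFINITIONS = {
    "StreamA": {"HLT_Mu", "HLT_Ele"},          # physics stream selecting on two HLT paths
    "StreamCalibration": {"HLT_Calibration"},  # calibration stream
    "StreamExpress": {"HLT_Mu"},               # an event may belong to several streams
}

def streams_for_event(accepted_paths):
    """Return every stream whose path set overlaps the paths that fired."""
    return [name for name, paths in STREAM_DEFINITIONS.items()
            if paths & set(accepted_paths)]

# Example: an event accepted by HLT_Mu is written to both StreamA and StreamExpress.
print(streams_for_event({"HLT_Mu"}))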

Figure 3. The event-building protocol uses a token-passing scheme (see text). The messages exchanged are: (1) BU to EVM: "I have N free resources"; (2) broadcast of the identifier association; (3) trigger information for events id 1, id 2, ..., id N; (4) "send me the fragments for events id 1, id 2, ..., id N"; (5) super-fragment data; (6) allocation of events to filter units; (7) discard of events; (8) release of N identifiers.

Figure 4. The storage manager writes accepted events to disk. It serves a subset of the events and data-quality (DQM) histograms either directly or via a proxy server to online consumers. Each DAQ slice feeds its builder and filter units (BuFUs) into one or more SM applications writing to output disks on a SATABeast disk array; from there the data are transferred to Tier 0 for repacking, while the SM proxy server forwards events and DQM histograms to DQM clients.
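The token-passing message sequence listed in the caption of figure 3 can be summarized in a small, self-contained simulation of one build cycle. This is a sketch of the protocol logic only; the class and method names are invented and do not correspond to the actual XDAQ applications (step 2, the broadcast of the identifier association to the RUs, is elided for brevity):

# Minimal, illustrative simulation of the RU-builder token-passing cycle
# (steps 1-8 of figure 3). All names are invented for this example.
from collections import deque

class EventManager:                      # EVM: pairs trigger data with free event IDs
    def __init__(self):
        self.free_tokens = deque()
        self.trigger_fifo = deque()

    def release_tokens(self, tokens):    # (1)/(8) BU announces free resources
        self.free_tokens.extend(tokens)

    def on_trigger(self, trigger_data):  # trigger supervisor sends trigger information
        self.trigger_fifo.append(trigger_data)

    def assign_events(self):             # (3) pair tokens with trigger data for the BU
        assigned = []
        while self.free_tokens and self.trigger_fifo:
            assigned.append((self.free_tokens.popleft(), self.trigger_fifo.popleft()))
        return assigned

class ReadoutUnit:                       # RU: serves super-fragments per event ID
    def __init__(self, name):
        self.name = name

    def send_super_fragment(self, event_id):          # (4)/(5)
        return f"{self.name}-superfragment-{event_id}"

class BuilderUnit:                       # BU: assembles events and serves filter units
    def __init__(self, evm, rus, n_resources=4):
        self.evm, self.rus = evm, rus
        self.evm.release_tokens(range(n_resources))    # (1) initial tokens
        self.complete_events = deque()

    def build(self):
        for token, trigger_data in self.evm.assign_events():             # (3)
            fragments = [trigger_data]                                    # trigger data first
            fragments += [ru.send_super_fragment(token) for ru in self.rus]  # (4)/(5)
            self.complete_events.append((token, fragments))

    def serve_filter_unit(self):         # (6) hand a complete event to a filter unit
        return self.complete_events.popleft() if self.complete_events else None

    def discard(self, token):            # (7)/(8) FU is done; return the token to the EVM
        self.evm.release_tokens([token])

evm = EventManager()
bu = BuilderUnit(evm, [ReadoutUnit("RU0"), ReadoutUnit("RU1")])
evm.on_trigger("trigger-info-0")
bu.build()
token, event = bu.serve_filter_unit()
bu.discard(token)                        # the cycle continues with the freed token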

Consumers either connect directly to one SM, or they connect to a proxy server which aggregates the data from all SMs in all DAQ slices. The consumers use this data for online data-quality monitoring (DQM), for calibration purposes, or for displaying the events.

In addition, the FUs produce a set of histograms to monitor the data quality and the trigger efficiencies. These histograms include events which are only available prior to or during the event selection. The histograms are sent to the SM at each luminosity-section boundary (every 93 seconds). The SM sums the histograms that it receives from all FUs in the slice and forwards the summed histograms to the proxy server. The proxy server receives the histograms from all SMs in all DAQ slices and sums them up again. The CMS data-quality application retrieves the summed set of histograms for monitoring.

3.1. Redesign of the Storage Manager Application

The current implementation of the storage manager (SM) exposed some weaknesses during commissioning runs of the entire CMS detector recording cosmic rays. The main issue is that most of the work is done in a single thread. This caused dead time in the system each time the SM was summing the DQM histograms for a luminosity section, i.e. every 93 seconds. In addition, the code evolved over several years and had been adapted to changing requirements. This resulted in a code base that was difficult to understand and hard to maintain. Therefore, the opportunity of the delayed LHC start-up was taken for a redesign and re-factoring of the existing code.

The new design (see figure 5) uses individual threads for the different tasks of the storage manager. The events are pushed into separate queues which are handled by dedicated threads. Care is taken that the main task of receiving events from the high-level trigger and writing them to the appropriate files (streams) is not blocked by the CPU-intensive task of summing DQM histograms, or by the rather unpredictable requests from consumers of events or DQM histograms.
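The pattern behind this redesign, dedicated queues consumed by dedicated threads so that the disk writer is never blocked by DQM summing, can be sketched as follows. The thread names, queue layout and data representations are illustrative and are not taken from the storage manager code:

# Illustrative sketch of the multithreaded storage-manager pattern of section 3.1:
# each task owns its queue and thread, so summing DQM histograms cannot block
# the thread that writes events to disk. All names are invented for this example.
import queue
import threading
from collections import Counter

event_queue = queue.Queue()      # complete events waiting to be written to stream files
dqm_queue = queue.Queue()        # per-FU histogram sets waiting to be summed

def disk_writer():
    """Write events to their stream files; never touches DQM work."""
    while True:
        event = event_queue.get()
        if event is None:                      # sentinel: shut down
            break
        # ... append the event to the appropriate stream file here ...
        event_queue.task_done()

def dqm_processor(summed):
    """Sum the histograms received from all filter units for a luminosity section."""
    while True:
        histos = dqm_queue.get()
        if histos is None:
            break
        summed.update(histos)                  # CPU-intensive summing, isolated here
        dqm_queue.task_done()

summed_histograms = Counter()
threads = [threading.Thread(target=disk_writer),
           threading.Thread(target=dqm_processor, args=(summed_histograms,))]
for t in threads:
    t.start()

# A fragment-processor thread would push work onto the queues, for example:
event_queue.put({"run": 1, "event": 42, "stream": "StreamA"})
dqm_queue.put(Counter({"muon_pt_bin_3": 7}))

for q in (event_queue, dqm_queue):
    q.put(None)                                # stop the workers
for t in threads:
    t.join()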
4. Hardware

Figure 6 shows the location and the interconnection of the various hardware components used for the data-acquisition system. The heart of the FED builder is a set of six Myrinet switches installed underground, close to the detector. The data is transported to the surface using 1536 optical links operating at 2 Gb/s. Another set of 72 8x8 Myrinet switches is located close to the event builder and the high-level trigger (HLT) filter farm. The RU-builder switching fabric consists of 8 Force-10 E1200 switches [6] (one for each DAQ slice). Each switch has 6 90-port line cards, with each port supporting 1 Gb/s duplex traffic. Each port is 2 times oversubscribed. The cabling is done such that full throughput is available on all ports, exploiting the fact that the traffic is basically uni-directional. The RU-builder and HLT farm consist of 640 Dell PE 2850 PCs (32 racks; dual dual-core, 2 GHz, 4 GB memory) used as RUs, and 720 Dell PE 1950 PCs (24 racks; dual quad-core, 2.6 GHz, 16 GB memory) used as BUs and filter units. A Force-10 Gigabit Ethernet switch connects the storage manager to the HLT processors. A separate Gigabit Ethernet switch is used for the transfer to Tier 0. The storage manager's hardware consists of 2 racks, each with 8 SM nodes, 2 Fibre Channel switches (QLogic SANbox 5600) providing redundant pathways with automatic fail-over, and 4 NexSan SATABeasts (RAID-6 disk arrays). The latter provide a data buffer of 300 TB, which is equivalent to several days of data taking.

5. Operational Experience

The readout system has been successfully used to commission the detector, to collect more than 370 million cosmic-ray events, and to record the first LHC beam events in September 2008.

Figure 5. The storage manager application is being redesigned to improve robustness and performance by moving from a mostly single-threaded design to a multithreaded one. The figure shows the new thread model and the queues used to communicate between the threads; a shaded box marks the tasks performed in a single thread by the old SM application. The threads are: an I2O callback thread (event data, DQM histogram data, error events, init messages), an HTTP callback thread (state-machine commands via SOAP, consumer requests, monitoring web page), a fragment-processor thread (processes state-machine commands, assembles I2O messages into complete events, flags events for the different consumers and distributes them to the appropriate queues), a DQM-processor thread (sums the histograms from all FUs for each luminosity section and optionally writes them to disk), a monitoring thread (collects and updates statistics and the InfoSpace) and a disk-writer thread (stores events into the correct stream files and provides information for the transfer to Tier 0).

The event builder performs to the design specification of 100 GB/s and the system has proven to be very flexible. The event builder can be subdivided into multiple independent readout systems that can be used concurrently to read out subsets of the detector components. The system can be configured to focus on performance and reliability issues related to:

- high data throughput,
- a large number of input channels,
- high CPU power in the high-level trigger,
- a high data rate to local disk.

In addition, data payloads can be injected at various points in the system to test its functionality and performance.

The 3-month-long running of the experiment in fall 2008 highlighted several areas of improvement:

- The event builder needs to be tolerant against missing or corrupted front-end data.
- The error and alarm handling should be based on a uniform and system-wide approach.
- The storage manager application should handle CPU-intensive tasks in independent threads to ensure that they do not block the readout (see section 3.1).

These improvements are being deployed and tested in the full system and will be available for the first physics run in fall 2009.

Figure 6. Schema of the hardware used for the data acquisition.

6. Conclusions

We discussed the architecture of the CMS two-stage event builder and the storage of accepted events. The system has proven to be scalable and to out-perform the requirements for reading out the complete CMS detector. A few areas for improvement have been identified and are being addressed in time for the restart of LHC beam operations planned for the fall of 2009.

References
[1] CMS Collaboration 2002 CERN-LHCC-2002-026
[2] Myricom, see http://www.myri.com
[3] Sakulin H et al. 2008 IEEE Trans. Nucl. Sci. 55 190-197
[4] Pieri M et al. 2006 Proc. International Conference on Computing in High Energy and Nuclear Physics (CHEP 2006), Mumbai, Maharashtra, India, 13-17 Feb 2006
[5] Gutleber J et al. 2009 Proc. International Conference on Computing in High Energy and Nuclear Physics (CHEP 2009), Prague, Czech Republic, 21-27 Mar 2009
[6] Force10, see http://www.force10networks.com

Figure 7. 72 crates holding the 8x8 Myrinet switches used for the FED builder, located close to the event builder and high-level trigger farm.

Figure 8. 2 of the 8 Force-10 E1200 switches used by the RU-builder.

Figure 9. Racks of Dell PE 2850 and PE 1950 computers used for the event building and for the high-level triggers.

Figure 10. Storage manager nodes connected through Fibre Channel switches (QLogic SANbox 5600) to the NexSan SATABeasts (RAID-6 disk arrays).