A generic firmware core to drive the Front-End GBT-SCAs for the LHCb upgrade

F. Alessio 1, C. Caplan 2, C. Gaspar 1, R. Jacobsson 1, K. Wyllie 1
1 CERN, CH-1211 Geneva 23, Switzerland
2 CBPF, Rio de Janeiro, Brazil
Corresponding e-mail: Federico.Alessio@cern.ch

ABSTRACT: The LHCb experiment has proposed an upgrade towards a full 40 MHz readout system in order to run at between five and ten times its initial design luminosity. The entire Front-End electronics will be upgraded in order to cope with higher sub-detector occupancy, higher data rate and to work in a completely trigger-less fashion. In this paper, we describe a novel way to transmit slow control information to the Front-End electronics, by profiting from bidirectional optical connections and the capabilities of the GBT and GBT-SCA chipset. The implementation and preliminary validation tests are shown as well.

KEYWORDS: LHCb; LHC; Upgrade; trigger-less electronics; GBT; GBT-SCA; TFC; ECS.

Contents
1. The upgrade of the LHCb experiment
2. The upgrade of the LHCb readout architecture
3. Fast and slow control to FE via the TFC system
4. The SOL40-SCA core
4.1 ECS Interface Layer
4.2 ECS Packets Buffers Layer
4.3 Protocol Layer
4.4 MAC Layer
4.5 Link Layer
5. Conclusion

1. The upgrade of the LHCb experiment

The LHCb experiment [1] is a high-precision experiment at the LHC devoted to the search for New Physics by precisely measuring its effects in CP violation and rare decays. By applying an indirect approach, LHCb is able to probe effects which are strongly suppressed in the Standard Model, such as those mediated by loop diagrams and involving flavour-changing neutral currents. In proton-proton collision mode, the LHC is to a large extent a heavy-flavour factory, producing tens of thousands of bb-pairs every second at the nominal LHCb design luminosity of 2 x 10^32 cm^-2 s^-1. Given that bb-pairs are predominantly produced in the forward or backward direction, the LHCb detector was designed as a forward spectrometer with the detector elements installed along the main LHC beam line, covering a pseudo-rapidity range of 2 < eta < 5, well complementing the ranges of the other LHC detectors.

LHCb has shown excellent performance in terms of data taking [2] and detector performance over the period 2010-2012, accumulating ~3 fb^-1 of data, and it is foreseen to accumulate a further ~5 fb^-1 over the period 2015-2018. Given the foreseen improved performance of the LHC accelerator, the prospect of augmenting the physics yield of the LHCb dataset is very attractive. However, the LHCb detector is limited by design in terms of data bandwidth - 1 MHz instead of the LHC bunch crossing frequency of 40 MHz - and in physics yield for hadronic channels at the hardware trigger. Therefore, a Letter of Intent [3], a Framework TDR [4] and a Trigger and Online TDR [5] document the plans for an upgraded detector which will enable LHCb to increase its physics yield in decays with muons by a factor of 10 and the yield for hadronic channels by a factor of 20, and to collect ~50 fb^-1 at a levelled constant luminosity of 1-2 x 10^33 cm^-2 s^-1. This corresponds to ten times the current design luminosity, with a corresponding increase in event complexity (pileup).

2. The upgrade of the LHCb readout architecture

In order to remove the main design limitations of the current LHCb detector, the strategy for the upgrade of the LHCb experiment essentially consists of removing the first-level hardware trigger (L0 trigger) entirely, hence running the detector fully trigger-less. By removing the L0 trigger, LHC events are recorded and transmitted from the Front-End electronics (FE) to the readout network at the full LHC bunch crossing rate of 40 MHz, resulting in a DAQ network carrying several tens of Tb/s. All events will therefore be available at the processing farm, where a fully flexible software trigger will perform the event selection, with an overall output of about 20 kHz of events to disk. This will allow signal efficiencies to be maximized at high event rates.

The direct consequences of this approach are that some of the LHCb sub-detectors will need to be completely redesigned to cope with an average luminosity of 2 x 10^33 cm^-2 s^-1, and the whole LHCb detector will be equipped with completely new trigger-less FE electronics. In addition, the entire readout architecture must be redesigned in order to cope with the upgraded multi-Tb/s bandwidth and a full 40 MHz dataflow [6]. Figure 1 illustrates the upgraded LHCb readout architecture. It should be noted that although the final system will ultimately be fully trigger-less, a first-level trigger based on the current L0 trigger will be maintained in software.
This is commonly referred to as the Software LLT (Low Level Trigger) and its main purpose is to allow a staged installation of the DAQ network, gradually increasing the readout rate from the current 1 MHz to the full and ultimate 40 MHz. This however will not change the rate of events recorded at the FE, which will run fully trigger-less regardless of the DAQ output rate.

In order to keep synchronicity across the readout system, to control the FE electronics and to distribute the clock and synchronous information to the whole readout system, a centralized Timing and Fast Control system (TFC, highlighted in Figure 1) has been envisaged as an upgrade of the current TFC system [7]. The upgraded TFC system will then be interfaced to all elements in the readout architecture by heavily profiting from the bidirectional capability of

optical links and FPGA transceivers and from a high level of interconnectivity. In particular, the TFC system will heavily profit from the capabilities of the GigaBit Transceiver chipset (GBT) [8], currently being developed at CERN, for its communication with the FE electronics. In addition, the TFC system will also be responsible for transmitting slow control (ECS) information to the FE, by means of FPGA-based electronics cards interfaced to the global LHCb ECS.

Figure 1: The upgraded LHCb readout architecture.

3. Fast and slow control to FE via the TFC system

Figure 2 illustrates in detail the logical architecture of the upgraded TFC system. A pool of Readout Supervisors (commonly referred to as S-ODIN) centrally manages the readout of events, by generating synchronous and asynchronous commands, by distributing the LHC clock and by managing the dispatching of events. Each S-ODIN is associated with a sub-detector partition, which effectively is a cluster of Readout Boards (TELL40) and Interface Boards (SOL40). While the TELL40s are dedicated to reading out event fragments from the FE and sending them to the DAQ for software processing, the SOL40 boards are dedicated to distributing fast and slow control to the FE, by relaying timing information and the clock onto the optical link to the FE, and by appending ECS information onto the same data frame. By profiting from the characteristics of the GBT chipset [8], fast commands, clock and slow control are therefore transmitted on the same bidirectional optical link. This is a major novelty with respect to the current LHCb experiment, where fast control and slow control are sent over different networks.

At the FE, the synchronous fast control information is decoded and fanned out by a GBT Master per FE board, which is also responsible for recovering and distributing the clock in a deterministic way. The slow control information is relayed to the GBT-SCA chips via the GBT Master. The GBT-SCA chip is capable of efficiently distributing ECS configuration data to the FE chips by means of a complete set of buses and interfaces, in a generic way [9]. Monitoring data is sent back on the uplink of the same optical link by following the return path, from the GBT-SCA to the Master GBT to the corresponding SOL40.

The hardware backbone of the entire readout architecture is a PCIe Gen3 electronics card hosted in a commercial PC. The same hardware is used for the TELL40, the SOL40 and the S-ODIN boards; only the firmware changes the flavour of the board. The board will be equipped with up to 48 bidirectional optical links, an Altera Arria 10 FPGA and a 16x PCIe Gen3 bus interfaced to a multi-core PC.

Figure 2: Logical architecture of the upgraded TFC system.

Figure 3 shows schematically the implementation, in the firmware of the SOL40 board, of the merging of fast and slow control information onto the same optical link to the FE electronics []. A TFC Relay and Alignment block extracts a subset of bits out of the full TFC word transmitted by S-ODIN, which encodes the fast commands, timing information and various resets. These bits are then relayed onto the GBT link to be transmitted to the FE. The word is generated at 40 MHz and transmitted with constant latency. The TFC word from S-ODIN is also used to reconstruct the clock locally in the FPGA, which is then used to drive the logic in the firmware.

Figure 3: Schematic view of the algorithm to merge TFC and ECS information on the GBT link towards the FE electronics in the SOL40 firmware.
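As an illustration of the merging described above, the following Python sketch packs a hypothetical GBT user frame from a relayed TFC field and per-SCA two-bit ECS slices, refreshed every 40 MHz clock cycle. The field widths (24 TFC bits, 80-bit user frame, one 2-bit slice per SCA) are assumptions chosen for illustration and do not reproduce the actual SOL40 bit allocation.

```python
# Minimal sketch of merging TFC and ECS information into one GBT user frame.
# Assumed layout (illustrative only): 80-bit GBT user field, TFC word relayed
# in the most significant 24 bits, remaining bits carrying 2-bit ECS/SCA
# e-link slices, one pair per GBT-SCA, refreshed every 25 ns (40 MHz).

TFC_BITS = 24            # assumed width of the relayed TFC field
FRAME_BITS = 80          # GBT user bandwidth per 40 MHz clock cycle
SCA_SLOTS = (FRAME_BITS - TFC_BITS) // 2   # hypothetical number of 2-bit slices

def build_gbt_frame(tfc_word: int, sca_bit_pairs: list[int]) -> int:
    """Assemble one GBT user frame: TFC field in the MSBs, ECS pairs below."""
    assert 0 <= tfc_word < (1 << TFC_BITS)
    assert len(sca_bit_pairs) <= SCA_SLOTS
    frame = tfc_word << (FRAME_BITS - TFC_BITS)
    for slot, pair in enumerate(sca_bit_pairs):
        assert 0 <= pair < 4                      # two bits per SCA per clock
        frame |= pair << (2 * slot)               # slot 0 sits in the LSBs
    return frame

def split_gbt_frame(frame: int) -> tuple[int, list[int]]:
    """Inverse operation, i.e. what the Master GBT at the FE would see."""
    tfc_word = frame >> (FRAME_BITS - TFC_BITS)
    pairs = [(frame >> (2 * s)) & 0b11 for s in range(SCA_SLOTS)]
    return tfc_word, pairs

if __name__ == "__main__":
    frame = build_gbt_frame(0xAB_CDEF, [0b10, 0b01, 0b11])
    tfc, pairs = split_gbt_frame(frame)
    print(hex(frame), hex(tfc), pairs[:3])
```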

Regarding the slow control part, LHCb has developed a firmware core, commonly referred to as SOL40-SCA, in order to generically drive each GBT-SCA chip at the FE, covering all of its functionality and protocols. Its location within the SOL40 firmware is highlighted in Figure 3. This is achieved by developing the firmware in a completely configurable way, i.e. the chosen SCA protocol can be selected in real time via commands issued by the LHCb ECS system [11] together with the configuration data. The destination of such data can be selected via a configurable mask. The core is designed to cover a full GBT link, with as many GBT-SCAs as can be connected to it. It can then be replicated as many times as needed to cover all GBT links connected to a SOL40 board. In total, the same firmware will allow the entire LHCb upgraded FE electronics to be driven generically over the full complement of duplex optical control links and SOL40 boards of the experiment. It is technology independent, developed in HDL, does not make use of any technology-specific element and is completely agnostic of the content of the data field. It is basically a generic SCA driver over optical links for the BE electronics at the LHC.

4. The SOL40-SCA core

The core provides a way to control many FE chips, with high parallelism and flexibility, via the GBT-SCA interfaces through GBT links. Its main functionalities can be listed as follows:

- Provide a generic hardware interface (FPGA) between the ECS system and the FE electronics
- Build and encode/decode GBT-SCA compliant packets
- Serialize and de-serialize command packets in the command word sent to the FE electronics according to the GBT-SCA specifications
- Support for all GBT-SCA protocols (SPI, I²C, JTAG, GPIO and ADC+DAC)
- Support for all GBT-SCA commands and channels
- Support for many GBT-SCAs per GBT link and many GBT links per FPGA
- Possibility of re-transmission of packets and transmission monitoring
- Modularity, i.e. components can be removed if not needed
- Robustness, reliability, programmability, flexibility

Figure 4: Top view of the SOL40-SCA core architecture.
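As a rough textual companion to Figure 4, the sketch below strings the layers together as plain Python callables to show how a command flows downstream through the core. The class and method names are invented for illustration and do not correspond to actual entity names in the SOL40-SCA firmware.

```python
# Illustrative only: the SOL40-SCA layers as a downstream pipeline.
# An ECS command travels ECS Interface -> Buffers -> Protocol -> MAC -> Link;
# replies travel the same chain in reverse. All names are hypothetical.

class EcsInterfaceLayer:
    def receive(self, words):             # 32-bit words written over Avalon MM
        return {"raw": words}

class EcsPacketsBuffersLayer:
    def __init__(self):
        self.command_fifo = []             # single command FIFO per GBT link
    def push(self, command):
        self.command_fifo.append(command)
    def pop(self):
        return self.command_fifo.pop(0) if self.command_fifo else None

class ProtocolLayer:
    def build_sca_packet(self, command):   # select SPI/I2C/JTAG/GPIO/ADC/DAC driver
        return {"channel": command["raw"][0] & 0xFF, "payload": command["raw"][1:]}

class MacLayer:
    def encapsulate(self, sca_packet):      # HDLC framing + 2-bit serialization
        return bytes(sca_packet["payload"])

class LinkLayer:
    def route(self, frame, elink):          # programmable e-link routing matrix
        return (elink, frame)

def send(words, elink=0):
    ecs, buf = EcsInterfaceLayer(), EcsPacketsBuffersLayer()
    proto, mac, link = ProtocolLayer(), MacLayer(), LinkLayer()
    buf.push(ecs.receive(words))
    cmd = buf.pop()
    return link.route(mac.encapsulate(proto.build_sca_packet(cmd)), elink)

print(send([0x03, 0xDE, 0xAD], elink=5))
```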

The core is essentially composed of a series of layers, as illustrated in Figure 4. Their main roles are to:

- store the ECS configuration packets and decode them as commands, and vice versa, in the ECS Interface and ECS Packets Buffers Layers
- build the corresponding GBT-SCA packets with the selected protocol in the Protocol Layer
- encode them in the specified communication protocol (HDLC [12]) in the MAC Layer
- serialize and route the packets to the selected GBT-SCA connected to a GBT at the FE in the Link Layer

In practice, the ECS generates a command which is transmitted to the FPGA via the PCIe bus. This command contains an extended addressing scheme which tells the core where and how to route the configuration packet, and a command code scheme which tells the core what actions to perform (i.e. read/write, or wait for response/do not wait). In addition, it may contain the configuration data to be sent to the FE in the case of a write operation. In the FPGA, the command is stored in a buffer in order to be picked up by the Protocol Layer when not busy. The ECS command is then decoded and the SCA-specific protocol packets are built accordingly. The information about which protocol to build is contained in the ECS command and is completely generic, that is, the core is able to build any SCA packet at run time simply based on the content of the command. Finally, the packet is encapsulated in the HDLC protocol and routed to the corresponding bit field in the GBT word to be sent through the optical link to the corresponding Master GBT at the FE. The corresponding bit field is selected based on the connections at the FE. In order to be as generic as possible, this too is a configurable parameter, so that the core can be used with any FE configuration. The core also features the possibility of packet retransmission in case a particular transaction fails.

4.1 ECS Interface Layer

In order to access the PC through the PCIe bus, the SOL40 board internally uses an Altera Avalon MM bus, which is mapped onto one of the PCIe BARs (BAR 0). Hence, an Avalon MM Slave Interface is used in the ECS Interface Layer to perform read and write operations to and from the control PC.

Figure 5: Current ECS command format as transmitted through the Avalon MM Bus.

Figure 5 illustrates the structure of the generic ECS command, which is built by the control system via dedicated graphical interfaces and scripts. This command is transmitted to the firmware core and contains all the relevant information so that the core can generically and flexibly build GBT-SCA compliant packets. In the first field, an extended addressing scheme is implemented: the addresses of the GBT link, of the GBT-SCA and of its channels are included. In addition, there is an ECS Command field dedicated to specific commands from the ECS (for example, Read or Write). In the second field, the length of the ECS command in number of bytes, for frame boundary definitions, and a protocol-specific field are inserted. This is followed by Data packets if a write operation is requested. All fields are 32-bit aligned so that the ECS system can transmit a full command as a table of 32-bit words. The same ECS command format is generated by the firmware core in response to polling by the ECS.
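To make the command structure concrete, here is a minimal Python sketch of packing and parsing such a command as a table of 32-bit words. The exact field widths and ordering (GBT link, SCA and channel addresses, ECS command code, length, protocol-specific field) are assumptions chosen for illustration and do not reproduce the real format of Figure 5.

```python
# Hypothetical layout of the generic ECS command, illustrative only.
# Word 0: GBT link (8b) | SCA address (8b) | SCA channel (8b) | ECS command code (8b)
# Word 1: length in bytes (16b) | protocol-specific field (16b)
# Words 2..N: configuration data (present for write operations), 32-bit aligned.

def pack_ecs_command(link, sca, channel, cmd_code, proto_field, data=b""):
    """Build the 32-bit word table the ECS would write through the slave interface."""
    word0 = (link << 24) | (sca << 16) | (channel << 8) | cmd_code
    word1 = (len(data) << 16) | proto_field
    words = [word0, word1]
    padded = data + b"\x00" * (-len(data) % 4)          # keep everything 32-bit aligned
    words += [int.from_bytes(padded[i:i + 4], "big") for i in range(0, len(padded), 4)]
    return words

def unpack_ecs_command(words):
    """What the firmware core would decode before building the SCA packet."""
    word0, word1 = words[0], words[1]
    header = {
        "link":    (word0 >> 24) & 0xFF,
        "sca":     (word0 >> 16) & 0xFF,
        "channel": (word0 >> 8) & 0xFF,
        "cmd_code": word0 & 0xFF,
        "length":  (word1 >> 16) & 0xFFFF,
        "proto":    word1 & 0xFFFF,
    }
    data = b"".join(w.to_bytes(4, "big") for w in words[2:])[: header["length"]]
    return header, data

if __name__ == "__main__":
    words = pack_ecs_command(link=3, sca=1, channel=0x03, cmd_code=0x01,
                             proto_field=0x0000, data=b"\xCA\xFE\xBA")
    print([hex(w) for w in words])
    print(unpack_ecs_command(words))
```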

4.2 ECS Packets Buffers Layer

The ECS commands are then stored in a FIFO. This is because the Avalon MM Slave Interface runs at a much higher clock frequency, with a fixed data width, whereas the output bandwidth per GBT-SCA is only 80 Mb/s, i.e. two bits every 40 MHz clock cycle. Considering that an ECS command can span several 32-bit words, the ECS command must be buffered and stored in order to allow the corresponding SCA packets to be built and transmitted via the corresponding pair of bits through the GBT link.

An ECS Command FIFO is dedicated to storing ECS command packets. A single FIFO structure per GBT link was chosen, as the ECS data stream comes as a single thread of many commands. However, the commands are dispatched asynchronously to their associated channels and GBT-SCAs. The FIFO is therefore a simple way to implement back-pressure control and avoid congestion while building packets and transmitting them. This also means that the ECS can send a table of commands in one continuous write operation and the firmware will take care of reading and decoding the commands on a per-channel basis.

An ECS Reply Memory is dedicated to storing the replies to a specific ECS command. It is designed as a RAM structure rather than a FIFO, so that the software can access the memory following a mapping of the extended addressing scheme. The ECS can therefore poll a specific reply based on the previously generated command.

4.3 Protocol Layer

The GBT-SCA chip supports a large variety of buses that can be interfaced to FE chips. In the Protocol Layer, each ECS command is transformed into an SCA command in which the right protocol for a particular SCA channel is built. This allows the user to flexibly select whichever bus they need to drive, by simply indicating it in the command code scheme shown in Figure 5. In this way the same firmware can be used for all possible combinations at the FE, without depending on the sub-detectors' choices at the FE. A further important feature is that the Protocol Layer keeps information about the SCA commands generated, for packet retransmission; it manages the reading of ECS commands from the ECS Command FIFO based on a busy state or a non-functional state (for example, when the wrong SCA was selected); and, when the packet is ready, it transmits it to the MAC Layer. This is managed by two arbiter modules, one dedicated to arbitrating the reading/writing of ECS commands and one dedicated to arbitrating the transmission/reception of SCA commands. Figure 6 shows an example of an operation in the Protocol Layer, where the ECS PC sends an I²C write command to a certain I²C device on the FE.

Figure 6: Example of I²C write operation as managed in the Protocol Layer (reception and buffering of the ECS command, check that the addressed SCA and channel are active, translation into a batch of SCA I²C commands sent through the MAC Layer, and assembly of the ECS reply in the ECS Reply Memory once all SCA replies have been received).
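As an illustration of the kind of translation Figure 6 depicts, the Python sketch below expands a single ECS I²C write into the batch of SCA channel commands it roughly corresponds to: a control-register write with the byte count and bus speed, up to four data-register writes, then a multi-byte write command. The mnemonics, the 100 kHz default speed and the field packing are simplified assumptions, not the exact GBT-SCA opcodes.

```python
# Illustrative translation of one ECS "I2C write" into a batch of SCA commands.
# Mnemonics (I2C_WRT_CTRL, I2C_WRT_TX0..TX3, I2C_MULTI_WRITE) and packing are
# simplified stand-ins for the real GBT-SCA I2C channel command set.

def ecs_i2c_write_to_sca_batch(i2c_addr, data, speed_khz=100):
    """Expand an ECS I2C write request into an ordered list of SCA commands."""
    assert 1 <= len(data) <= 16, "one transaction fills at most four 32-bit data registers"
    batch = [("I2C_WRT_CTRL", {"nbytes": len(data), "speed_khz": speed_khz})]
    # Fill the transmit data registers in 32-bit chunks (TX0..TX3).
    for reg, offset in enumerate(range(0, len(data), 4)):
        chunk = data[offset:offset + 4].ljust(4, b"\x00")
        batch.append((f"I2C_WRT_TX{reg}", {"value": int.from_bytes(chunk, "big")}))
    # Finally trigger the multi-byte write towards the addressed I2C device.
    batch.append(("I2C_MULTI_WRITE", {"i2c_addr": i2c_addr}))
    return batch

if __name__ == "__main__":
    for cmd, args in ecs_i2c_write_to_sca_batch(0x50, b"\x12\x34\x56\x78\x9A"):
        print(cmd, args)
```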

4.4 MAC Layer

The MAC Layer is mostly responsible for encapsulating the SCA payload packet into the HDLC protocol [12] and for serializing it in pairs of bits, to be then slotted into each GBT word on the optical link. In addition, it de-serializes the data stream and extracts the payload when a reply is received. It also features link reset, connection and test operations, and error detection capabilities. The heart of the MAC Layer is a block called the FPGA E-Port. This is based on the original E-Port IP core [] but with some key differences: the block is made in a device-independent way, it does not have a backup connection, and it is designed without Triple Modular Redundancy, as it is to be used in an environment safe from radiation.

An additional feature is the possibility of retransmitting a packet if the transmission of a previous command failed. This is done in the MAC Layer because the full packet, including the communication protocol encapsulation, has already been built at that stage. A programmable expiration time is used while waiting for the response from the corresponding SCA, and a programmable bit transmitted within the ECS command tells the core whether to re-transmit a packet or instead simply signal a flag to the ECS without re-transmitting it. This can be done on a per-command basis at run time.

Another additional feature is the possibility of waiting for a response from the corresponding GBT-SCA. Another specific bit in the ECS command tells the core whether to wait for the GBT-SCA to acknowledge the response, or to simply send the next packet a programmable time later, without caring about the acknowledge. It is however necessary to wait a minimum time before transmitting the following packet, because the GBT-SCA must receive the previous packet in its entirety. This can also be done on a per-command basis at run time.
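To make the role of the MAC Layer concrete, here is a minimal software sketch of HDLC-style framing (flag byte and bit stuffing after five consecutive ones) followed by serialization into the 2-bit slices slotted into successive GBT words. It is a simplification for illustration only (no HDLC address/control fields and no FCS) and does not reproduce the FPGA E-Port logic.

```python
# Minimal sketch of the MAC Layer idea: HDLC-style framing of an SCA payload
# and serialization into 2-bit slices, one slice per 40 MHz GBT word.
# Simplified for illustration: no HDLC address/control fields and no FCS.

FLAG = [0, 1, 1, 1, 1, 1, 1, 0]      # 0x7E opening/closing flag

def to_bits(payload: bytes):
    return [(byte >> i) & 1 for byte in payload for i in range(8)]   # LSB first

def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s (HDLC bit stuffing)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)
            run = 0
    return out

def hdlc_frame(payload: bytes):
    return FLAG + bit_stuff(to_bits(payload)) + FLAG

def serialize_two_bits_per_clock(bits):
    """Group the frame into the 2-bit e-link slices carried by successive GBT words."""
    if len(bits) % 2:
        bits = bits + [1]                 # pad with an idle bit, illustrative choice
    return [(bits[i], bits[i + 1]) for i in range(0, len(bits), 2)]

if __name__ == "__main__":
    frame = hdlc_frame(b"\x7e\xff")       # payload containing flag-like bytes
    print(len(frame), "bits ->", len(serialize_two_bits_per_clock(frame)), "GBT words")
```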

4.5 Link Layer

Finally, the last layer between the core and the GBT link is the Link Layer. It is a simple layer whose only purpose is to provide generic and programmable logical routing, so that the SCA packet can reach the right GBT-SCA over the corresponding GBT link. This is done by an E-Link Router whose configuration is a matrix loaded in a configurable register, changeable at run time.

5. Conclusion

Within its upgrade, the LHCb experiment has developed a generic firmware core to drive any GBT-SCA within the upgrade of the experiment. This is achieved by implementing HDL-based code, capable of driving any protocol of any GBT-SCA over any GBT link, programmable at run time. The core is so generic that it can be used in any FE environment featuring the presence of the GBT chipset. The firmware core will be ready by the beginning of 2015, in time to allow its usage by sub-detectors in test benches and test beams, and to commission the FE electronics for the upgrade of LHCb. An extensive testing campaign together with the very first GBT-SCA chips will be performed in order to test robustness, reliability and compatibility.

Acknowledgements

We acknowledge the help of S. Bonacini, A. Caratelli and K. Kloukinas (all with CERN) in developing the HDLC protocol driver in the FPGA.

References

[1] LHCb Collaboration, The LHCb Detector at the LHC, JINST 3 (2008) S08005
[2] R. Jacobsson, Performance of the LHCb Detector during the LHCb Proton Runs 2010-2012, Proceedings of the 2012 IEEE NSS/MIC
[3] LHCb Collaboration, Letter of Intent for the LHCb Upgrade, CERN-LHCC-2011-001
[4] LHCb Collaboration, Framework TDR for the LHCb Upgrade, CERN-LHCC-2012-007
[5] LHCb Collaboration, LHCb Trigger and Online Upgrade Technical Design Report, CERN-LHCC-2014-016
[6] F. Alessio et al., Trigger-less readout architecture for the upgrade of the LHCb experiment at CERN, JINST 8 (2013)
[7] F. Alessio and R. Jacobsson, Timing and Fast Control for the Upgraded Readout Architecture of the LHCb experiment at CERN, IEEE Trans. Nucl. Sci., vol. 60 (2013)
[8] P. Moreira et al., The GBT Ser-Des ASIC prototype, JINST 5 (2010)
[9] A. Caratelli et al., The GBT-SCA, a Radiation Tolerant ASIC for Detector Control and Monitoring Applications in HEP Experiments, proceedings of TWEPP 2014
[10] F. Alessio and R. Jacobsson, A New Readout Control system for the LHCb upgrade at CERN, JINST 7 (2012)
[11] C. Gaspar et al., The LHCb Experiment Control System: on the path to full automation, proceedings of the 13th ICALEPCS conference, Grenoble, October 2011
[12] International Organization for Standardization, Information technology - Telecommunications and information exchange between systems - High-level data link control (HDLC) procedures, ISO/IEC 13239:2002