L1Calo Joint Meeting. Hub Status & Plans

L1Calo Joint Meeting: Hub Status & Plans
Dan Edmunds, Yuri Ermoline, Brian Ferguson, Wade Fisher, Philippe Laurens, Pawel Plucinski
3 November 2016, CERN

HUB HW PCB development

The HUB PCB is a 22-layer IPC* class 2 card, 3.05 mm thick.
(*IPC, the Association Connecting Electronics Industries; initial name: Institute for Printed Circuits)
Selected the same vendors that did CMX (assembly + PWB).
- 10 of the layers are 1/2 oz ground planes
- 8 of the layers are 1/2 oz signal routing
- 2 of the layers are 1 oz mixed signal/power fills
- 2 of the layers are 1 oz power fills/routing

HUB HW PCB development (board layout)

[Figure: HUB board layout showing the FEX data fan-out, ROD mezzanine, UltraScale FPGA, and GbE switch. The PCB stack-up text repeats the previous slide.]

HUB PCB details

- There are 51 power fills in the card.
- There are about 8277 connections in the HUB PCB (not counting GND/power).
- There are currently 2793 components on the HUB.
- Requiring high-speed differential pairs on all 10 routing layers is one of the "issues" with the HUB design.
- There are about 378 differential signal pairs in the design; all of the high-speed FEX MGT signals are routed as strip-lines on the 8 internal signal layers, with no vias.
- The Gbit Ethernet differential pairs are routed on all 10 layers and are kept physically separated from the high-speed MGT differential signals.
- The 74-channel FEX MGT data fan-out is located immediately adjacent to where the FEX signals enter the HUB module from the Zone 2 backplane connectors.
- To achieve the required fan-out channel density, the fan-out chips are placed on both sides of the HUB PCB.

HUB FW/SW

Firmware development:
- Interface to FEX/ROD data: UltraScale tests with Aurora 8b10b over GTY. The Aurora 8b10b protocol was run successfully over the GTY transceivers, and Xilinx appears to have signed off on the implementation.
- IPbus interface / GbE switch control: moving to UltraScale. Exercise with the Xilinx VCU108 development board (not the same PHY chip and not the GMII interface, but it provides design experience).

Software:
- Low-level IPbus software on a Linux PC to communicate with IPbus slaves, from the Cactus project (a minimal usage sketch follows this slide).
- Looking into a basic test program implementation.
- Setting up the SW environment at MSU.
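As an illustration of the low-level IPbus software mentioned above, the following is a minimal sketch using the uHAL Python bindings from the Cactus / ipbus-software project. The device URI, address-table file, and register names are placeholders for illustration, not the actual Hub address map.

import uhal  # uHAL Python bindings from the Cactus / ipbus-software project

# Placeholder IP address, port and address table - not the real Hub setup.
hw = uhal.getDevice("hub",
                    "ipbusudp-2.0://192.168.0.10:50001",
                    "file://hub_address_table.xml")

status = hw.getNode("STATUS").read()   # queue a register read
hw.getNode("CTRL").write(0x1)          # queue a register write
hw.dispatch()                          # send the queued IPbus transactions

print("STATUS = 0x%08x" % status.value())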

Preparation for tests: @MSU

A preliminary work plan for the tests at MSU has been worked out:
- List of hardware test stages (not necessarily sequential).
- Lower-priority items (e.g. IPMI) and high-priority topics (e.g. switch, MGT).
- One-HUB / two-HUB tests (at MSU).
Local tests will be limited by the available interfaces:
- To do: work out a schedule to obtain/borrow N FTM/FEX modules (N > 1?).
- Hub-to-Hub communication & GbE tests can cover partial ground.
- To do: set up a GBT demonstrator on a VC709 dev board with GBT FW.
- To do: establish a GbE test bench (see the sketch after this slide).
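A very basic starting point for such a GbE test bench could be a connectivity sweep over the crate's Ethernet network before any IPbus traffic tests are run. The sketch below is illustrative only; the module names and IP addresses are placeholders, not the actual MSU network plan.

#!/usr/bin/env python3
# GbE test-bench sketch (illustrative): check that each module on the crate's
# GbE network answers ping. Host names and IPs below are placeholders.
import subprocess

MODULES = {
    "hub1": "192.168.0.10",
    "hub2": "192.168.0.11",
    "ftm":  "192.168.0.20",
}

def ping(host):
    """Return True if the host answers one ICMP echo request (1 s timeout)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

for name, ip in MODULES.items():
    print("%-5s %-15s %s" % (name, ip, "ok" if ping(ip) else "UNREACHABLE"))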

Preparation for tests: @UK/CERN

First meeting of ROD and HUB at Cambridge or MSU:
- We assume we'll send a Hub (+ Dan + Pawel) to Cambridge.
- After that the HUB stays at Cambridge and the ROD comes to MSU.
- If we make good progress in the ROD/Hub mating step, we could go on to ROD + Hub + FEX/FTM tests at that time (at RAL, Cambridge?).
At CERN:
- CERN will likely be where the largest number of FTMs and FEXs come together in a single crate backplane.
- To do: establish the bandwidth spec for the shelf at CERN (> 10 Gbps?).
- To do: establish the number of available FEX/FTMs and work out a bandwidth test strategy:
  - HUB + one FEX + GBT
  - one or two HUB+ROD + many FEXs + GBT (optional)

Full bandwidth test

Early tests:
- Few FEX/FTM cards available to test backplane links.
- Borrow a V1 FTM or prototype FEX for MSU-based tests.
- Sufficient to test individual links (1 slot = 6 links at a time), moving the source card slot-to-slot.
More comprehensive tests:
- We will require full tests of all 74 high-speed backplane links to fully qualify the prototype Hub modules (see the aggregate-bandwidth sketch after this slide).
- FDR and PRR requirements for such tests need to be decided soon.
- Proceeding to preproduction without comprehensive tests represents a significant risk.
Test of production Hub modules:
- Eventually, we will need to characterize/verify the bandwidth performance of all Hubs to be installed in USA15.
- Need to work out plans for a 12-FEX/FTM test bench at CERN.
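For rough orientation, the short sketch below works out the aggregate bandwidth involved in a full 74-link test using the link counts quoted on these slides. The per-lane line rate is only the "> 10 Gbps?" placeholder from the previous slide, not a confirmed specification.

# Aggregate backplane bandwidth sketch using the numbers quoted on these slides.
# The per-lane rate is an assumption (the "> 10 Gbps?" figure), not a spec.
LINKS_PER_FEX_SLOT = 6         # "1 slot = 6 links at a time"
TOTAL_BACKPLANE_LINKS = 74     # high-speed FEX MGT links into one Hub
ASSUMED_LANE_RATE_GBPS = 10.0  # placeholder per-lane line rate

per_slot_gbps = LINKS_PER_FEX_SLOT * ASSUMED_LANE_RATE_GBPS        # 60 Gb/s
aggregate_gbps = TOTAL_BACKPLANE_LINKS * ASSUMED_LANE_RATE_GBPS    # 740 Gb/s

print("per FEX slot :", per_slot_gbps, "Gb/s")
print("all 74 links :", aggregate_gbps, "Gb/s")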

Hub Test Options

We have identified a potential problem with the test plan for the prototype Hub modules. To fully qualify (and quantify) the Hub design, we will need a shelf with 12 blades, each capable of populating 6 transmit lanes of the Zone-2 fabric interface. We therefore need to identify potential solutions.

Option 1: Identify commercial ATCA cards that have the non-standard Zone-2 interface we are using.
- Pro: Simple (?)
- Con: Unlikely to find a COTS card with a custom Zone-2 interface.

Option 2: Negotiate with the FEX/FTM community to build sufficient modules on the timescale of the tests we need.
- Pro: Closest to the actual implementation; best test of bandwidth limitations.
- Con: Potentially costly and may not fit our partners' available person-power/schedule.

Option 3: Design our own simple "FEX lookalike" that closely emulates how a FEX transmitter/receiver would appear over the backplane.
- Pro: Could be cheaper in hardware cost (though not in engineering time).
- Con: Differences with respect to a real FEX could be significant; large impact on the schedule.