100Gb/s Backplane/PCB Ethernet Two Channel Model and Two PHY Proposal

IEEE P802.3bj 100 Gb/s Backplane and Copper Cable Task Force, Newport Beach
Rich Mellitz, Intel Corporation
Kent Lusted, Intel Corporation

Supporters and Contributors
Matt Brown, AppliedMicro
Dariush Dabiri, AppliedMicro
Beth Kochuparambil, Cisco
Joel Goergen, Cisco
Brad Booth, Dell
Howard Frazier, Broadcom
Vasudevan Parthasarathy, Broadcom
William Bliss, Broadcom
Sudeep Bhoja, Broadcom
Hamid Rategh, Inphi
Sanjay Kasturia, Inphi
Adee Ran, Intel
David Chalupsky, Intel
Ilango Ganga, Intel
Bob Grow, Intel

100G Copper Interconnects Span a Complex Roadmap
- High-end system interconnects demand performance today. Examples: edge/core routers, switches, HPC compute nodes.
- Mid-level and low-level systems demand cost-effective solutions. Examples: ATCA, x86 blade servers, switches, single-board computers, highly integrated silicon products.
- Chip- and board-level ingredients are trending toward desktop/laptop collateral.
- ATCA and x86 blade servers have seen year-over-year growth and presently represent a significant market presence. Compare this to when KR was originally proposed in 2003.
- Two PHYs provide better coverage over all types of medium while still allowing optimization for a particular market: an optimized evolution of cost reduction in board material vs. silicon cost.

Backplane landscape at a glance illustrates a variety of loss boundaries

System Class   Line Card                        Backplane               Loss      Freq.
Hi End         Meg6_LowSR Wide                  Meg6_LowSR Wide         < 35 dB   12.9 GHz
Mid Level      ImpFR4_HighSR Narrow & FR4-MC    ImpFR4_HighSR Narrow    < 33 dB   7 GHz
Low Level      FR4-MC                           ImpFR4_HighSR Narrow    < 33 dB   7 GHz

In the near future, low-end system performance will exceed today's high-end server performance. (A simple check against these boundaries is sketched below.)
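As a rough illustration (not part of the original deck), the two loss/frequency boundaries in the table can be captured in a few lines of Python. Only the < 35 dB @ 12.9 GHz and < 33 dB @ 7 GHz limits come from the slide; the class names and helper function are hypothetical.

# Hypothetical helper illustrating the two suggested channel boundaries above.
# Only the limit values come from the slide; the names are illustrative.
CHANNEL_CLASSES = {
    "hi_end_nrz":   {"ref_freq_ghz": 12.9, "max_loss_db": 35.0},
    "low_mid_pam4": {"ref_freq_ghz": 7.0,  "max_loss_db": 33.0},
}

def within_boundary(loss_db, class_name):
    """True if the insertion loss magnitude (dB), measured at the class's
    reference frequency, falls inside the suggested limit."""
    return loss_db < CHANNEL_CLASSES[class_name]["max_loss_db"]

print(within_boundary(31.0, "low_mid_pam4"))  # True: fits the low/mid boundary
print(within_boundary(36.5, "hi_end_nrz"))    # False: exceeds 35 dB at 12.9 GHz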

[Chart: insertion loss in dB/in by PCB material, from kochuparambil_01_0112.pdf. Traditional FR4 is added as a moderately controlled ("managed loss") material, labeled FR4-MC. Plotted values range from roughly 0.27 to 2.88 dB/in.]

Low-loss channels required for high-end systems can use NRZ
- Backplane channels can be constructed with < 35 dB loss at ~12.9 GHz. [1]
- Such channels use newer connectors, low-roughness copper foil, and low-loss PCB materials (e.g., Meg6, LowSR-HC).

[1] Vasu Parthasarathy and Howard Frazier, "Channel Analysis at ~12.5GHz," IEEE 802.3 100 Gb/s Backplane and Copper Cable Task Force.

Low/Mid Level Systems: Can't Reach the Hi-End Channel Bar, but Can Deliver 25 Gb/s Performance with PAM4
- Backplane channels can be constructed with < 33 dB loss at ~7 GHz (the arithmetic behind the ~7 GHz figure is sketched after this slide).
- These boards are designed by outsourced ODM resources, with higher channel variation and less sophisticated manufacturing. This can be managed with IPC-TM-650, Method 2.5.5.12 (2011 IPC PCB loss test method).
- The question is not "Is PAM4 better than NRZ?" The question is "What designs facilitate the low/mid systems as well as the hi-end system?"
- PAM4 also facilitates backplane reuse with other interfaces designed for 10 Gb/s signaling per lane: 10GBASE-KR, 40GBASE-KR4, and future low/mid level platforms.
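A minimal sketch (not part of the original slides) of why PAM4 lets the same per-lane bit rate be carried at roughly half the Nyquist frequency. The 25.78125 Gb/s lane rate is the KR4 NRZ signaling rate; the comment notes that the actual 802.3bj PAM4 signaling rate is somewhat higher because of coding/FEC overhead.

# Sketch: Nyquist frequency = (bit rate / bits per symbol) / 2.
def nyquist_ghz(bit_rate_gbps, bits_per_symbol):
    symbol_rate_gbd = bit_rate_gbps / bits_per_symbol
    return symbol_rate_gbd / 2.0

lane_rate = 25.78125  # Gb/s per lane (NRZ signaling rate for 100GBASE-KR4)
print(nyquist_ghz(lane_rate, 1))  # NRZ:  ~12.9 GHz
print(nyquist_ghz(lane_rate, 2))  # PAM4: ~6.4 GHz; extra coding/FEC overhead in
                                  # the PAM4 PHY pushes this toward the ~7 GHz
                                  # figure used in these slides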

The suggested insertion loss limit for low/mid level channels enables today's volume and future low-end products
- The low and mid level system IL budgets shown later are within these limits. [1]

[1] Rich Mellitz and Vasu Parthasarathy, "Rough channel targets for 4 x 25 Gb/s operation on existing backplanes," IEEE 802.3 100 Gb/s Backplane and Copper Cable Study Group, May 2011.

Server Platform is Optimized for High Volume
[Chart: cores per CPU and cores per board, 1995-2015, showing multi-core moving into low and mid level systems and memory moving onto the CPU. Block diagram: CPU, South Bridge, QPI, PCIe for NIC, HBA, and CNAs. Source: "Server Bandwidth Scenarios - Signposts for 40G/100G Server Connections," presented by Kimball Brown, kimball@lightcounting.]
- There is a strong desire for friendly Ethernet integration on low and mid level platforms.

Low and Mid Level PCBs are very cost sensitive
- PCB technology is still standard FR4-class materials; loss is starting to be managed.
- Typical server motherboards are large: 130-150 sq. inches, 12-16 layers, 62 to 125 mils thick.
- They will not use 802.3ap-spec'd improved FR4 materials.
- Most volume server designs are outsourced.
- Other interfaces drive PCB requirements; none are 100 ohm differential, and most have +/- 15% impedance tolerance or worse.
- Hyper market segmentation is creating a large number of system form factors.

BGA vias are important to consider for low/mid level systems
[Diagram: BGA via model, top and bottom views, each port terminated to 50 ohms in the sub-circuit. Drill diameter = 10 mils, pad diameter = 20 mils, antipad diameter = 28 mils, pitch = 40 mils, coupling length = 40 mils.]

Loss of a typical thru via at the BGA is high when using only the bottom two stripline layers; BGA crosstalk is also a trade-off with the via IL
[Plots: via insertion loss (dB) and FEXT (dB) vs. frequency, 0-20 GHz, with markers at 6.5 GHz and 12.9 GHz, for a 93 mil thick 14-layer board (layer 10 and layer 12 exits) and a 62 mil thick 8-layer board (layer 4 and layer 6 exits). Simulated in Ansoft.]
- Back drilling in the BGA region is problematic for mid and low-end systems.

Simplified typical loss budgets for low/mid level systems exhibit a non-log-linear loss after 7 GHz

Segment (end to end)                       Loss @ 7 GHz   Loss @ 12.9 GHz
BGA / board vias                           0.5 dB         2.7 dB
Line card board, 4.5 in, FR4-MC            7 dB           13 dB
Connector                                  0.5 dB         2.3 dB
Backplane, 20 in, ImpFR4_LowSR Narrow      17 dB          29 dB
Connector                                  0.5 dB         2.3 dB
Line card board, 4.5 in, FR4-MC            7 dB           13 dB
BGA / board vias                           0.5 dB         2.7 dB
Total                                      ~33 dB         ~65 dB

Losses computed using the DkDf Algebraic Model Tool, http://www.ieee802.org/3/bj/public/tools.html (the totals are summed in the sketch below).
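A minimal sketch (not part of the original slides) that simply sums the per-segment allocations above. The values are read off the slide, which computed them with the DkDf Algebraic Model Tool.

# Per-segment budget read from the slide, end to end:
# BGA vias, line card, connector, backplane, connector, line card, BGA vias
budget_at_7ghz    = [0.5, 7.0, 0.5, 17.0, 0.5, 7.0, 0.5]    # dB
budget_at_12_9ghz = [2.7, 13.0, 2.3, 29.0, 2.3, 13.0, 2.7]  # dB

print(sum(budget_at_7ghz))     # 33.0 -> ~33 dB at 7 GHz
print(sum(budget_at_12_9ghz))  # 65.0 -> ~65 dB at 12.9 GHz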

Summary
- 100G copper interconnects span a variety of loss boundaries; channel loss varies widely from high end to low end due to design and manufacturing choices.
- Mid and low-end server platforms are high volume, enabled by large-scale integration.
- The low/mid level channels have a non-log-linear loss behavior after 7 GHz.
- Two PHYs provide an optimized solution for: low-loss channels in high-end systems, and the manufacturing and volume channel requirements of mid/low level systems.

Thank You!