OTN Support Update P802.3bs 400 Gb/s Ethernet Task Force. Steve Trowbridge Alcatel-Lucent

OTN Support Update P802.3bs 400 Gb/s Ethernet Task Force Steve Trowbridge Alcatel-Lucent 1

Key Elements of OTN Support See "OTN Support: What is it and why is it important?", July 2013. A new rate of Ethernet (e.g., 400 Gb/s) fits into the corresponding rate OTN transport signal. All Ethernet PHYs of a given rate are mapped the same way and can be interconnected over the OTN (e.g., the same PCS for all 100 Gb/s PHYs gives a single canonical format, the "characteristic information" in ITU-T terminology, that can be mapped into OTN). Optical modules for Ethernet can be reused for OTN IrDI/client interfaces at the corresponding rate. 2

A new rate of Ethernet (e.g., 400 Gb/s) fits into the corresponding rate OTN transport signal. Assumption: the OTN mapper/demapper will terminate and regenerate any Ethernet FEC code, correcting errors at the ingress, since the FEC is chosen to correct single-link errors but not double-link errors. Assumption: the OTN mapper/demapper may trans-decode/trans-encode back to 64B/66B to avoid MTTFPA reduction for the transported signal. Based on these assumptions, the encoded data rate of the OTN-mapped 400 Gb/s Ethernet would be no more than 400 Gb/s × 66/64 = 412.5 Gb/s ±100 ppm. Since the 400 Gb/s OTN container would presumably be designed to also transport four lower-order ODU4s, there should be no concern that it is large enough to carry 400 Gb/s Ethernet, based on the assumption that the canonical form is near this rate. Any Ethernet bits in excess of this rate are likely to be part of a FEC that is not carried over OTN. 3
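
A quick worked check of the rate bound quoted above (simple arithmetic spelled out here for clarity):

\[
400\ \text{Gb/s} \times \tfrac{66}{64} = 412.5\ \text{Gb/s},
\qquad
412.5\ \text{Gb/s} \times \left(1 \pm 100 \times 10^{-6}\right) \approx 412.459\ \text{to}\ 412.541\ \text{Gb/s}.
\]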

Discussion in Architecture and Logic ad hoc groups of 400GbE transport over OTN. ITU-T mapped 100GbE over ODU4 as a set of deskewed and serialized PCS lanes, which included the PCS lane BIP in the alignment markers, a format common to every 100GbE PMD. Good consensus so far that there should be an OTN reference point that identifies the exact information expected to be transported over OTN. Early architecture discussion is that we shouldn't require a common logical lane architecture across all 400GbE PMDs. A consequence, if we still put BIP in the lane alignment markers, is that it would be segment by segment instead of end-to-end (see further slides). If we need BIP (or any other overhead, e.g., a management channel) to be end-to-end, we will have to find somewhere besides the lane alignment markers to put it. 4

BIP Considerations P802.3ba introduced a BIP to detect link degradation below the level that would enter the high BER state. Since all P802.3ba interfaces use the same logical lane striping, this allows the BIP to be carried in the lane alignment marker. Two key use cases to be considered: Service Assurance is best supported by an end-to-end BIP. You can see whether the connection is experiencing errors at the Rx, but in a multi-segment link, it is not simple to determine where the errors were introduced. Fault Isolation is best supported by a segment-by-segment BIP. You can detect whether errors are introduced in a particular segment just adjacent to that segment, but you don't have a consolidated end-to-end view at the Ethernet Rx. 5
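
For background, a minimal sketch of a generic BIP-8 calculation (bit-interleaved even parity over eight bit positions, which for an octet-aligned stream reduces to an XOR of all bytes). This is illustrative only and does not reproduce the exact per-lane bit ranges that 802.3ba assigns to the BIP field in the alignment markers.

```python
# Minimal sketch: generic BIP-8 parity over an octet-aligned stream (illustrative only;
# 802.3ba defines the exact bit ranges covered by each PCS lane's BIP field).
from functools import reduce

def bip8(data: bytes) -> int:
    """Bit i of the result is even parity over bit i of every byte, i.e. XOR of all bytes."""
    return reduce(lambda acc, b: acc ^ b, data, 0)

tx_block = bytes(range(64))
rx_block = bytearray(tx_block)
rx_block[10] ^= 0x04                                  # inject a single bit error
print(hex(bip8(tx_block) ^ bip8(bytes(rx_block))))    # non-zero -> mismatch detected
```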

BIP Considerations - continued Transport networks (e.g., OTN) tend to support multiple layers of error monitoring: path monitoring checks the connection end-to-end; section monitoring checks an individual link segment; one or more layers of tandem connection monitoring can monitor between pre-determined points along the path, e.g., if the link traverses multiple operator networks, the section across each operator domain can be monitored independently, or the scope of a protection domain can be monitored for working and protection paths. It is unlikely that Ethernet would decide to support multiple layers of monitoring, as most Ethernet links are a single segment. 6

Fault Isolation Scenario - Per-segment BIP (one direction of transmission). [Figure: an end-to-end path between two Enterprise Ethernet Switches crossing the Operator A and Operator B domains, with path monitor and TCM A / TCM B; BIP is generated and detected on every segment, so every segment has its own monitoring.] You know where the faults are, but the Ethernet Rx may not see upstream errors. 7

Service Assurance Scenario - End-to-end BIP (one direction of transmission). [Figure: the same path across Operator A and Operator B, with path monitor and TCM A / TCM B; the Ethernet BIP is generated at the source Enterprise Ethernet Switch, passed through transparently by the transport network, and detected at the destination Enterprise Ethernet Switch.] The Ethernet Rx knows there are errors, but not where they occur. 8

Fault Isolation Scenario - Using end-to-end BIP and non-intrusive monitoring (NIM) (one direction of transmission). [Figure: the same path, with the Ethernet BIP generated at the source switch and detected at the destination switch, plus Ethernet BIP NIM points within the Operator A and Operator B domains.] Detectors are in different administrative domains and not accessible in a centralized NMS which can correlate the errors. 9

Fault Isolation Scenario - Using end-to-end BIP and non-intrusive monitoring (NIM) (one direction of transmission), continued. [Figure: as on the previous slide; every segment has its own monitoring, and operators must ask each other "Are you seeing errors?" to localize a fault.] Detectors are in different administrative domains and not accessible in a centralized NMS which can correlate the errors. 10

BIP Considerations - continued The scenario of the last slide is adequate for a persistent level of errors, but what about error bursts? Telco equipment historically collects error count information and stores it in 15-minute and 24-hour bins. Will Ethernet equipment do such a thing? So you come in on Monday morning, see that you had an error burst over the weekend, ring up those with access to the other monitoring points, and ask whether they observed errors between 3:15pm and 3:30pm on Saturday afternoon. 11
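
A minimal sketch of the kind of 15-minute error-count binning alluded to above; the class and method names are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: telco-style 15-minute error-count bins for answering
# "did we see errors between 3:15pm and 3:30pm on Saturday?"
from collections import defaultdict

BIN_SECONDS = 15 * 60

def bin_start(timestamp: float) -> int:
    """Map a timestamp (seconds since epoch) to the start of its 15-minute bin."""
    return int(timestamp) // BIN_SECONDS * BIN_SECONDS

class ErrorHistory:
    def __init__(self) -> None:
        self.bins = defaultdict(int)          # bin start time -> error count

    def record(self, timestamp: float, errors: int) -> None:
        self.bins[bin_start(timestamp)] += errors

    def errors_between(self, start: float, end: float) -> int:
        return sum(n for t, n in self.bins.items() if start <= t < end)

hist = ErrorHistory()
hist.record(1_000_000, 12)                       # an error burst
print(hist.errors_between(999_000, 1_000_900))   # -> 12
```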

BIP Recommendation From discussion so far, the primary use case seems to be fault isolation rather than service assurance. Segment-by-segment BIP provides the simplest fault isolation. End-to-end BIP provides service assurance, and CAN do fault isolation at a coarse level by comparing error counts seen across what are likely different administrative domains. While doing end-to-end BIP can meet the needs of both service assurance and fault isolation, supporting the secondary use case makes the primary use case enormously more cumbersome. Therefore, if we do BIP, it should be segment-by-segment. This would also allow BIP to remain in the alignment markers even if different PMDs are striped differently. Another idea (credit to Dave Ofelt): if every PMD has FEC, do we even need BIP, since you will get better information from the FEC corrected/uncorrectable error counts? This would also be segment-by-segment if not every PMD uses the same FEC. 12

Module Reuse Module reuse was facilitated by the fact that nothing below a CAUI chip-to-module interface cared about or manipulated the bit values on the lanes; as long as OTN was striped into the same number of logical lanes as Ethernet, everything would work. The following likely can be preserved: no idle insertion/deletion occurs below a CDAUI chip-to-module interface. The following are possibly not precluded by the 400GbE architecture: logical to physical lane multiplexing in a module may be on a block or FEC symbol basis rather than a bit basis; one (possibly Ethernet frame format dependent) FEC code may be replaced with another. 13

Options for Module Reuse Option 1: Preserve the 802.3ba rule that no sublayers below the MAC care about bit values or manipulate the bit values on logical lanes (bit multiplexing only). Any FEC is done on the host board above a CDAUI. OTN may use a different FEC than Ethernet if it needs a stronger FEC to compensate for the higher bit-rate. Option 2: Every FEC is client independent and not locked to the Ethernet (e.g., 66B block or alignment marker position) or OTN FAS position. Use the same FEC for Ethernet and OTN. A module can remove one FEC and apply another, and can combine logical into physical lanes, but cannot restripe the signal in a different way, since OTN will stripe its signal into the same number of logical lanes. Option 3 (most general, described in Norfolk): encode the OTN frame as 66B blocks (all data) and use whatever striping and FEC encoding mechanisms are used for Ethernet. OTN and Ethernet use the same FEC. 14

Option #2: Bit-rates using this scheme (working assumptions):
OTUC4 bit-rate without FEC: 422.904 Gb/s
Add FEC, e.g. RS(528,514): 434.423 Gb/s
Logical lane rate (well within CEI-28G): 27.151 Gb/s
Ethernet nominal bit-rate: 412.5 Gb/s
400G increase in bit-rate: 5.31%
100G increase in bit-rate: 8.42%
Smaller increase for 400G than for 100G, mainly due to RS(528,514) FEC rather than RS(255,239) FEC. 15
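
A short script reproducing the Option #2 numbers above (assuming 16 logical lanes, consistent with the CEI-28G lane rate shown); values match the slide after rounding.

```python
# Re-derive the Option #2 bit-rates (assumption: 16 logical lanes).
otuc4_no_fec = 422.904                       # Gb/s, OTUC4 bit-rate without FEC
with_fec = otuc4_no_fec * 528 / 514          # add RS(528,514) FEC
lane_rate = with_fec / 16                    # per logical lane
eth_400g = 412.5                             # 400GbE nominal encoded bit-rate
print(f"{with_fec:.3f} Gb/s with FEC")                         # ~434.423
print(f"{lane_rate:.3f} Gb/s per logical lane")                # ~27.151
print(f"{with_fec / eth_400g - 1:.2%} increase over 400GbE")   # ~5.31%
```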

Option #3 Amplification PCS is a logical serial stream of 66B blocks; only physical instantiations are striped over physical or logical lanes. Maintain the principle, as in 802.3ba, that idle insertion/deletion is not done below the PCS. Since any physical instantiation will need to be striped with lane markers, do idle insert/delete at the PCS only, so the logical stream will be at the nominal MAC rate × 66/64 × (1 − 1/16384), so that any physical instantiation has room to insert lane markers as needed without idle insert/delete. 16
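
Working that expression through for 400 Gb/s Ethernet (the 1/16384 term being the alignment-marker allowance assumed on this slide):

\[
400\ \text{Gb/s} \times \tfrac{66}{64} \times \left(1 - \tfrac{1}{16384}\right) \approx 412.475\ \text{Gb/s}.
\]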

Option #3 Amplification continued An example physical instantiation could be exactly the format of Idea #1: produced by transcoding 64B/66B to 256B/257B, striping first into 100G groups, striping within each 100G group into 4 logical lanes on 10-bit symbol boundaries, inserting alignment markers on each lane, and applying an RS(528,514) code based on 10-bit symbols, with alignment markers appearing in the first of each group of 4096 Reed-Solomon code blocks. 17
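
A minimal sketch of the lane-striping step described above, distributing 10-bit symbols round-robin across the 4 logical lanes of one 100G group; this is a simplified illustration, not the exact P802.3bs distribution, alignment marker, or FEC rules.

```python
# Simplified sketch: stripe a bit-stream into 4 logical lanes on 10-bit symbol
# boundaries (round-robin), as described for one 100G group on this slide.
def stripe_10bit_symbols(bitstream: str, num_lanes: int = 4, symbol_bits: int = 10):
    """Distribute consecutive 10-bit symbols round-robin across num_lanes lanes."""
    lanes = [[] for _ in range(num_lanes)]
    symbols = [bitstream[i:i + symbol_bits]
               for i in range(0, len(bitstream) - symbol_bits + 1, symbol_bits)]
    for idx, sym in enumerate(symbols):
        lanes[idx % num_lanes].append(sym)
    return lanes

demo = "01" * 200                                            # 400 bits = 40 ten-bit symbols
print([len(lane) for lane in stripe_10bit_symbols(demo)])    # -> [10, 10, 10, 10]
```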

Option #3 Implications for OTN Likely only possible if the same FEC code can be used for OTN applications as for Ethernet applications at about 6% higher bit-rate. Would need to make OTN look like 66B blocks. The easiest way to do this and not lose any information in transcoding is to insert a 01 sync header after every 64 bits (all data). Since this is just part of the logical frame format, this doesn't waste as many bits as it appears: 8 sync header bits are added to every 256 data bits in the logical frame format, but 7 of those bits are immediately recovered in 256B/257B transcoding and reused for the FEC code. So 0.39% net is added to the OTN frame to make it look like 66B blocks, then 2.724% overhead RS FEC is added. 18
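
A minimal sketch of the two steps described above: prefixing a 01 sync header to every 64 OTN bits, then 256B/257B transcoding. The transcoded block is modeled simply as a leading "1" (all-data) flag followed by the 256 payload bits, so the detailed Clause 119 header semantics are not reproduced.

```python
# Sketch: make OTN payload look like all-data 66B blocks, then transcode to 257 bits.
def otn_to_66b_blocks(otn_bits: str):
    """Prefix a '01' sync header to every 64 OTN bits (all-data 66B blocks)."""
    return ["01" + otn_bits[i:i + 64] for i in range(0, len(otn_bits), 64)]

def transcode_4x66b_to_257b(blocks):
    """Four all-data 66B blocks -> one 257-bit block (simplified all-data case)."""
    assert len(blocks) == 4 and all(b.startswith("01") for b in blocks)
    return "1" + "".join(b[2:] for b in blocks)           # 1 + 4*64 = 257 bits

payload = "0" * 256                          # 256 OTN payload bits
blocks = otn_to_66b_blocks(payload)          # 4 x 66 = 264 bits
tblock = transcode_4x66b_to_257b(blocks)     # 257 bits
print(len("".join(blocks)) - len(payload))   # 8 sync bits added per 256 data bits
print(len("".join(blocks)) - len(tblock))    # 7 of them recovered by transcoding
print(f"net added: {(len(tblock) - 256) / 256:.2%}")      # -> 0.39%
```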

Option #3 - Illustration of turning the OTN frame into 64B/66B blocks. [Figure: the OTN frame is divided into 64-bit words, each encoded as a 64B/66B block by prefixing a 01 sync header; the resulting block stream is carried through the Ethernet stack (Scramble, RS-FEC, PMA, PMD, MDI, MEDIUM) to stripe and FEC encode the OTN frame when carrying OTN over an Ethernet module for an OTN IrDI or client interface.] Could be frame aligned, as an OTUC4 frame without FEC is exactly 7648 × 64 bits, but this is not essential with scrambling. 19
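
A quick check of the frame-alignment remark above, assuming the standard G.709 OTUC structure of 4 rows × 3824 columns of bytes per OTUC instance (FEC columns excluded) and four interleaved instances in an OTUC4:

\[
4 \times 4 \times 3824 \times 8 = 489{,}472\ \text{bits} = 7648 \times 64\ \text{bits}.
\]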

Option #3: Bit-rates using this scheme (working assumptions):
OTUC4 bit-rate without FEC: 422.904 Gb/s
64B/66B encoded: 436.120 Gb/s
256B/257B transcoded: 424.556 Gb/s
Insert lane markers: 424.582 Gb/s
Add RS(528,514) FEC: 436.146 Gb/s
Logical lane rate (well within CEI-28G): 27.259 Gb/s
Ethernet nominal bit-rate: 412.5 Gb/s
400G increase in bit-rate: 5.73%
100G increase in bit-rate: 8.42%
Smaller increase for 400G than for 100G, mainly due to RS(528,514) FEC rather than RS(255,239) FEC. 20
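
A short script reproducing the Option #3 rate chain above (again assuming 16 logical lanes); values match the slide after rounding.

```python
# Re-derive the Option #3 bit-rates (assumption: 16 logical lanes).
otuc4_no_fec = 422.904                        # Gb/s, OTUC4 bit-rate without FEC
enc_66b    = otuc4_no_fec * 66 / 64           # 64B/66B encoded          ~436.120
transcoded = enc_66b * 257 / 264              # 256B/257B transcoded     ~424.556
with_ams   = transcoded * 16384 / 16383       # insert lane markers      ~424.582
with_fec   = with_ams * 528 / 514             # add RS(528,514) FEC      ~436.146
lane_rate  = with_fec / 16                    # per logical lane         ~27.259
print(f"{with_fec:.3f} Gb/s with FEC, {lane_rate:.3f} Gb/s per lane")
print(f"{with_fec / 412.5 - 1:.2%} increase over 400GbE")    # ~5.73%
```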

Option #3 - The module reuse aspect of OTN Support is satisfied if the following are true: There is an Ethernet sublayer reference point, such as the PCS, that is logically a serial stream of 64B/66B blocks. No idle insertion/deletion occurs below the PCS (the serial stream of 64B/66B blocks), and hence the rest of the stack can deal with a constant-bit-rate (CBR) bitstream that is effectively an infinite-length packet. Note that any logical to physical lane interleaving that works for Ethernet also works for OTN, since they are encoded the same way. The link parameters and FEC coding gain have sufficient margin to meet the error performance target when running at approximately 5.73% higher bit-rate than necessary for 400G Ethernet. This is more likely to be true if all P802.3bs interfaces have FEC. 21

Option #3 - Details Enabling 400 Gb/s Ethernet Module Reuse for 400 Gb/s OTN IrDI/client interfaces: No idle insertion/deletion below the PCS. The OTN frame is adapted to a CDAUI-like format by encoding as 64B/66B data blocks and using the Ethernet LLS, RS-FEC, and FLS sub-layers at 5.73% higher bit-rate. All 400 Gb/s Ethernet PHYs are assumed to use FEC. The FEC code is selected to have sufficient margin to meet the error performance target at 5.73% above the Ethernet bit-rate. 22

Module Reuse Recommendations It seems unlikely that the architecture will preclude a module that may apply a different FEC or restripe the signal for an initial or subsequent generation PMD, so Option 1 likely does not cover all cases. Option 2 brings the OTN and Ethernet bit rates slightly closer together than Option 3, but at the expense of precluding any restriping in the module and additional constraints on the choice of FEC. Option 3 is the most general, and its bit-rate difference over Ethernet of 5.73% does not seem to be enough greater than the Option 2 difference of 5.31% to justify the additional restrictions. Recommend Option 3 as a module reuse architecture. 23

THANKS! 24