System Vendor Requirements Document


Contribution Number: oif2004

Working Group: Physical Layer Users Group, CEI

TITLE: System Vendor Requirements for Common Electrical I/O (CEI) Electrical and Jitter Interoperability Agreement, 6G+ Gb/s Long Reach Clause

Source: OIF Physical Layer Users Group

Mary Mandich, Document Technical Editor
Lucent Technologies
600 Mountain Avenue, Murray Hill, NJ, USA
Email: mandich@lucent.com

Karl Gass, Working Group Chair
Sandia National Laboratories
PO Box 5800, Albuquerque, NM, USA
Email: kgass@sandia.gov

Document Contributors: See attached list on the following page.
Member Companies: See attached list on the following page.

Date: September 2004

Abstract: The purpose of this document is to establish a common set of system vendor requirements for the Common Electrical I/O Implementation Agreement, specifically the CEI long reach Clause, which addresses electrical interfaces for payloads of 4.976 to 6.375 Gb/s per lane over 0-1 m and through up to 2 connectors. This document will be used to further develop that Clause and/or add additional Clauses to the CEI document for 6G+ long reach interfaces. This contribution is submitted to the Physical Layer Users Group (PLUG) for their discussion and consideration.

Notice: This contribution has been created to assist the Optical Internetworking Forum (OIF). This document is offered to the OIF solely as a basis for discussion and is not a binding proposal on the companies listed as resources above. Each company in the source list, and the OIF, reserves the right at any time to add, amend, or withdraw statements contained herein.

This Working Text represents a work in progress by the OIF, and must not be construed as an official OIF Technical Report. Nothing in this document is in any way binding on the OIF or any of its members. The document is offered as a basis for discussion and communication both within and without the OIF.

For additional information contact:
The Optical Internetworking Forum, 39355 California Street, Suite 307, Fremont, CA 94538, USA
info@oiforum.com

List of Document Contributors

Tad Hofmeister, Ciena Corporation
Jim Hamstra, Flextronics
Joel Goergen, Force10 Networks
Hans-Joachim Goetz, Lucent Technologies
Bryan Parlor, Nortel Networks
John D'Ambrosia, Tyco Electronics

Member Companies of the OIF Physical Layer Users Group

Alcatel
Ciena Corporation
Cisco Systems
Flextronics
Force10 Networks
Huawei Technologies
Intel
Lucent Technologies
Marconi Communications
Nortel Networks
Siemens
Tellabs

Document Revision History

May 2004        First draft, based on earlier contributions and the Motion at the January 2004 OIF meeting.
May 2004        Abstract updated.
July 2004       Section added; other wording changes based on PLUG conference calls.
September 2004  Proposed changes incorporated, including changes made at the July 2004 meeting of the PLUG group.

Common Electrical I/O (CEI) Electrical and Jitter Interoperability Agreement 6G+ Gb/s Long Reach Clause

Document Contents

Title, Source, and Abstract
Revision History
Table of Contents
List of Figures
List of Tables

1. Introduction
1.1 Overview
1.2 Objectives
1.3 Legacy versus non-legacy applications
1.4 Attributes of CEI Interfaces in Different Classes of Applications

2. System Architecture Examples
2.1 Introduction
2.2 Fabric Topology - Star
2.3 Fabric Topology - Dual Star
2.4 Fabric Topology - Multi-Star
2.5 Fabric Topology - Full Mesh

3. Electrical Interface Functional Requirements
3.1 Signal characteristics
3.1.1 Selection of 1.2V for Fixed DC Coupled Operation
3.1.2 Location of possible signal vias in the transmission architecture
3.1.3 Signal via characteristics
3.1.4 Channel characteristics
3.1.5 Signal crosstalk
3.1.6 Design Considerations to Minimize Signal Crosstalk
3.2 System characteristics
3.2.1 Additional connector requirements
3.2.2 Additional interface card requirements

4. Link and Overall System Metrics
4.1 Link Metrics
4.1.1 Power consumption of a reference 3.125 Gb/s link
4.1.2 Power consumption of a 6G LR link
4.1.3 Asynchronous versus synchronous links
4.1.4 Backchannels for Equalization Control Information
4.2 System Metrics

5. Protocol Requirements
5.1 Link Protocol Requirements
5.2 Requirements for training patterns and training states

6. System Performance Requirements
6.1 Metrics for Nonlegacy and Legacy Dual Star and Full Mesh Fabrics
6.2 System Upgrades for Legacy and Nonlegacy Applications

7. Electrical subsystem design requirements specific to legacy applications
7.1 Design node assumptions for legacy applications
7.2 Preserved versus upgraded electrical transmission subsystems in legacy applications
7.3 Preservation of trace routing and connector pin definitions: potential impact on crosstalk in legacy applications
7.4 Upgrade scenario considered to be non-legacy application

Appendix 1: Additional Specifications/Issues to be addressed

LIST OF FIGURES

Figure 1 Star Fabric Topology
Figure 2 Dual Star Fabric Topology
Figure 3 Multi-Star Fabric Topology
Figure 4 Full Mesh Fabric Topology
Figure 5 Four Port Differential Transmission Line with a Common Ground Plane
Figure 6 Channel in a Typical System Application
Figure 7 Informative and Normative Transfer Functions for Allowable Channels in Legacy and Nonlegacy Applications
Figure 8 Crosstalk Pulse Definition
Figure 9 Consecutive Identical Digit (CID) Jitter Tolerance Test Pattern
Figure 10 Example of worst-case design for Tx and Rx channels in a connector pin field

LIST OF TABLES

Table 1 Signal Characteristics for Nonlegacy and Legacy Dual Star Fabrics
Table 2 Signal Characteristics for Nonlegacy and Legacy Multi-Star Fabrics
Table 3 Signal Characteristics for Nonlegacy and Legacy Full Mesh Fabrics
Table 4 Types and characteristics of vias used in printed circuit boards
Table 5 Minimum Permissible Stub Resonance Frequencies for NRZ Data
Table 6 System Characteristics for Nonlegacy and Legacy Dual Star Fabrics
Table 7 System Characteristics for Nonlegacy and Legacy Multi-Star Fabrics
Table 8 System Characteristics for Nonlegacy and Legacy Full Mesh Fabrics
Table 9 Link Metrics for Nonlegacy and Legacy Dual Star Fabrics
Table 10 Link Metrics for Nonlegacy and Legacy Multi-Star Fabrics
Table 11 Link Metrics for Nonlegacy and Legacy Full Mesh Fabrics
Table 12 System Metrics for Nonlegacy and Legacy Dual Star Fabrics
Table 13 System Metrics for Nonlegacy and Legacy Multi-Star Fabrics
Table 14 System Metrics for Nonlegacy and Legacy Full Mesh Fabrics
Table 15 Link Protocol Requirements for Nonlegacy and Legacy Dual Star Fabrics
Table 16 Link Protocol Requirements for Nonlegacy and Legacy Multi-Star Fabrics
Table 17 Link Protocol Requirements for Nonlegacy and Legacy Full Mesh Fabrics
Table 18 System Performance Requirements for Nonlegacy and Legacy Dual Star Fabrics
Table 19 System Performance Requirements for Nonlegacy and Legacy Multi-Star Fabrics
Table 20 System Performance Requirements for Nonlegacy and Legacy Full Mesh Fabrics
Table 21 Design Criteria Assumed for Legacy Applications

Common Electrical I/O (CEI) Electrical and Jitter Interoperability Agreement 6G+ Gb/s Long Reach Clause

1. Introduction

1.1 Overview

This document is developed to establish a common set of system vendor requirements for the Common Electrical I/O Implementation Agreement (referred to below as CEI), covering faster electrical interfaces for payloads of 4.976 to 6.375 Gb/s per lane over 0-1 m and through up to 2 connectors. These interfaces include the SERDES to Framer Interface (SFI), System Packet Interface (SPI), and TDM-Fabric to Framer Interface (TFI). These interfaces may also need to accommodate IEEE 802.3 XAUI compliant backplanes for legacy system applications.

As of September 2004, the CEI Implementation Agreement drafts address 6G+ and 11G+ short and long reach interfaces. These drafts are contained in two documents entitled:

1) "Common Electrical I/O (CEI) Electrical and Jitter Interoperability agreements for 6G+ bps and 11G+ bps I/O", dated August 2004. This document will be referred to as the Aug04 6G/11G CEI document herein.

2) "Common Electrical I/O (CEI) Electrical and Jitter Interoperability agreement for CEI-6G-LR", dated September 2004. This document will be referred to as the Sept04 6G LR CEI document herein.

In the current version of CEI, the 6G LR Clause addresses such interfaces and is described in a separate OIF contribution. This document will be used to further develop that Clause and/or add additional Clauses to the CEI document for 6G+ LR (long reach) interfaces.

The creation of this document follows a Motion requesting system vendor input for the development of CEI at the January 2004 San Diego meeting of the CEI Physical and Link Layer working group of the Optical Internetworking Forum. As referenced in this Motion, the outline for this document is:

OUTLINE
1. System architecture examples
2. Electrical interface functional requirements
   Signal characteristics (e.g., rate, reach, voltages)
   System characteristics (e.g., density)
3. Overall system metrics (e.g., cost, power, EMC)
4. Protocol requirements (e.g., FEC, framing, error detection)
5. System performance (e.g., BER, forward/backward compatibility)

1.2 Objectives

The objective of this document is to describe system vendor requirements for the transmission architectures of particular TDM (time division multiplexing) and packet based system architectures and applications. In this context, transmission architectures refer to the framework for board-to-board electrical interconnections, typically over a common backplane, within a given network element. This will provide a system user's view for defining common 6 Gb/s long reach (LR) electrical interface needs at the manufactured interface channel and PHY levels.

System vendor requirements include, but are not limited to:

1) Electrical interface functional requirements for signal coding, rate, reach, voltages, BER, and density
2) Overall system metrics of cost, power, and EMC
3) Data protocol requirements for error correction codes, error detection, and framing
4) System and/or subsystem performance requirements for BER and forward/backward compatibility. In this context, subsystem refers to modules, backplanes, child boards, etc. which contain multiple components and functionality
5) Support for legacy and/or non-legacy applications. The definitions of legacy and non-legacy in this context are discussed in 1.3 below.

This document deals only with the high speed electrical interfaces needed for transport. There are typically other types of internal interconnects needed for control, timing, and signal overhead processing. It is assumed that these other functions run at data rates lower than 1 Gb/s, so they are outside the scope of this document.

This document does not provide recommendations for specific boards, connectors, ASICs, or other components needed to realize system vendor requirements. It also does not provide market segmentation or other economic data regarding the various system architectures described.

1.3 Legacy versus non-legacy applications

System vendors often seek to preserve or reuse subsystems when developing new generations of existing products. In the applications addressed in this document, it is important to specify where a system vendor requires preservation of the existing electrical interface subsystems when scaling from a lower electrical transmission data rate, e.g. 2.5-3.125 Gb/s, to rates of 4.976-6.375 Gb/s. Existing electrical interface subsystems include high speed connectors, backplane boards, and child boards. Reuse of these existing subsystems also preserves key designs and layouts for the electrical interfaces, such as transmission lines and vias, which can impact higher rate data transmission. A system

vendor may also want to reuse other subsystems such as non-high speed connectors, interface and switching/router blades, cooling systems, and power modules. These are not part of any high speed electrical transmission subsystem per se, but they can heavily impact the suitability of different high speed technologies for a hardware upgrade.

In the development of the 6G LR Clause of CEI, it has become apparent that many system vendors expect that deployment of 6G+ long reach electrical interfaces will require designing new electrical transmission subsystems and any associated subsystems. These applications are termed non-legacy and are the primary focus of the 6G LR CEI Clause implementation agreement. Some non-legacy applications may require an upgrade path for an existing system, but this is not presumed to be a general requirement of non-legacy deployments.

In the development of this current document, it has also become apparent that some system vendors require preservation of their existing electrical transmission subsystems and associated subsystems during an upgrade to deploy 6G+ long reach electrical interfaces. These applications are termed legacy and inherently must include a viable upgrade path for an existing system. Legacy applications will be treated separately in order to determine which, if any, alternative technical specifications will be required in addition to what is detailed in the 6G LR Clause.

It is recognized that some applications may partially reuse some, but not all, electrical transmission subsystems. These applications will be termed legacy if they reuse large subsystems, such as an existing backplane and, possibly, child boards, that have been designed to support data rates and reaches below an NRZ binary serial data rate of 4.976 to 6.375 Gb/s through two connectors over distances up to 1 m. It is recognized that there is not a sharp delineation between legacy and non-legacy applications. The focus of this document is to sufficiently distinguish these applications so that their unique requirements can be clearly identified.

This document does not provide guidelines as to which particular subsystems can or cannot be reused to support a legacy application. Development of new 6G+ electrical specifications for CEI may, however, lead to direct or indirect requirements for these subsystems in the CEI Implementation Agreement document.

1.4 Attributes of CEI Interfaces in Different Classes of Applications

The implementation agreement for 6G+ CEI interfaces must meet the needs of multiple classes of applications, including the SERDES to Framer Interface (SFI), System Packet Interface (SPI), TDM-Fabric to Framer Interface (TFI), and Ethernet-like packet interfaces. Each of these applications already has standards coverage, as shown in Figure X and Table Z.

Figure X. Standards Coverage of Various Interface Applications
[Figure X plots Channel Length (cm) on the horizontal axis against Number of Connectors on the vertical axis, showing the regions covered by Chip to Module, Chip to Chip, and over-backplane (CEI) applications.]

From a system vendor perspective, CEI must provide a superset of common electrical specifications which simultaneously should:

1) provide the basis for the next generations of SFI, SPI, TFI and Ethernet-like packet interfaces
2) remain compatible with the electrical specification aspects of standards covering these interfaces
3) remain compatible with the electrical specification aspects of related standards such as XFP.

Application specific requirements such as protocols, coding, and management functions are outside the scope of CEI. System vendors expect that interoperability and conformance testing procedures defined for CEI will leverage those developed for the other interface standards. Likewise, newly developed algorithms and methods for simulation and testing within the CEI implementation agreement should be extendable to the existing standards for SFI, SPI, TFI, and Ethernet-like packet interfaces.

Table Z. Comparisons of Attributes of CEI Channels in Different Classes of Applications

| Attribute | SPI | SFI | TFI | Ethernet-like packet interfaces |
| Backplane Reach | 0 for legacy; up to (mm) for serialized SPI | 0 (does not go over a backplane) | (mm) is specified in TFI; extended to (mm) in some applications | (mm) |
| Total Reach | (0-0mm) for legacy; up to 0 for serialized SPI | (0mm) | 0 (mm) is specified in TFI; extended to 0 (m) in some applications | 0 (m) |
| Maximum Number of Connectors | | | | 2+ (Note 1) |
| Coding Requirements | Scrambled | Scrambled or optical line code (which may be scrambled) | Scrambled; may be SONET/SDH, OTN | Application specific control codes |
| Error Tolerance | Error free (weak error checking) | Electrical BER significantly less than optical BER | TFI specifies a BER target; extended to error free in some applications | Application specific (from error free to BER < 10^-12) |
| Flow Control | Explicit | None | None | Proprietary |
| Link Connectivity | Unidirectional, bidirectional | Logically unidirectional (Tx and Rx are only self-aware) | Bidirectional | Bidirectional |
| Synchronization | Asynchronous | Either synchronous or plesiochronous | Synchronous | Plesiochronous |

Note 1. Two connectors are required. Some applications may include more than 2 connectors.

2. System Architecture Examples

2.1 Introduction

System vendors require that 6 Gb/s LR electrical interfaces support data transport over the printed circuit boards used for interface boards and backplanes in network equipment for data and voice communications. The 6 Gb/s LR specifications should provide additional backplane and interface capacity over what is currently possible using 3.125 Gb/s electrical channels. Moreover, the particular technology choices in the 6 Gb/s LR specifications should meet the evolving needs of communications network infrastructure. This infrastructure consists of edge, core, packet, and transport network equipment. Despite their different functions, these different network elements share common architectural topologies that influence the design of the backplane and associated electrical channels.

This document deals with transmission architecture, which is a subset of the system architecture for individual and coupled network elements. This focus has guided our choices and descriptions of the architectural examples described in this section. These examples include Star, Dual Star, Multi-Star, and Full Mesh fabric topologies. Note that any of the interfaces described above (SFI, SPI, TFI, and Ethernet-like packet interfaces) may be used in each of these topologies.

2.2 Fabric Topology - Star

Figure 1 Star Fabric Topology

The Star topology has a Central (Hub) Switch that is linked point-to-point to multiple nodes. The Central Switch resides in a dedicated system slot and is typically linked to the nodes over a common backplane. The centralized single switch in this topology reduces the complexity and number of switch-node interconnections. The application target of the Star topology is non-carrier grade equipment.

2.3 Fabric Topology - Dual Star

Figure 2 Dual Star Fabric Topology

The Dual Star fabric topology has two redundant Central Switches that are linked point-to-point to multiple nodes. Each node is linked to both switches, but the nodes are not linked to each other. The Central Switches reside in two dedicated system slots and are typically linked to the nodes over a common backplane. The Central Switches are also linked together for protection and coordination management purposes.

In comparison to the (single) Star fabric topology, the redundant centralized switches in the Dual Star topology roughly double the complexity and number of switch-node interconnections. In return, the Dual Star topology reduces the probability of switching failure, thereby increasing system availability. The application target of the Dual Star topology is carrier grade equipment. Note that the Dual Star data transport topology can also be found in the Advanced TCA PICMG 3.0 specification.

There are also related fabric topologies termed Dual-Dual Star and Multi-Dual Star. These are not described here since the additional star fabrics are dedicated to the control and/or timing architecture, etc., which is outside the scope of this document.

2.4 Fabric Topology - Multi-Star

Figure 3 Multi-Star Fabric Topology

The Multi-Star fabric topology has two or more switch cards which house a distributed single-stage fabric or the center stage of a multi-stage fabric. Each node is linked to each switch card, but the nodes are not linked to each other. Switch cards are linked to nodes over a common backplane but are not linked to one another for the high-speed data path. Protection of the switch cards is typically M:(N-M), where N is the total number of switch cards and M is the number of protection switch cards. Backplane interconnect requirements for the nodes are between those of the single star and dual star. Backplane interconnect requirements for the switch cards are less than required for either single or dual star topologies of the same switching capacity. The Multi-Star topology would typically be used in systems with larger capacities than a star or mesh. These systems will often span multiple shelves and/or occupy a full bay and therefore have trace lengths of up to 1 m (including backplane and two node cards).

2.5 Fabric Topology - Full Mesh

Figure 4 Full Mesh Fabric Topology
[Figure 4 shows multiple identical system slots, each directly connected to every other slot.]

The Full Mesh topology has multiple identical system slots. Each slot is directly connected point-to-point to every other slot, typically over a common backplane. There are no dedicated system slots for the switching fabrics, which reside on each of the system slots. Thus, the switching fabrics and other functions in each slot are inherently redundant and distributed across all system slots. In the Full Mesh topology, the data throughput capacity scales with each added slot. Switching services and management functions are distributed across all of the system slots.

In comparison to the Star and Dual Star fabric topologies, the Full Mesh distributed fabric topology is more complex and requires higher numbers of slot-to-slot interconnections. In return, the Full Mesh topology is highly scalable to larger switching capacities and is highly redundant. The application target for the Full Mesh topology is carrier grade equipment with large data throughput requirements, such as routers. This topology is applicable both to simple layer 2 switches and to higher level services. Note that the Full Mesh data transport topology can also be found in the Advanced TCA PICMG 3.0 specification.

Different approaches using mesh topologies are possible. One example is where the mesh topology consists of connected clusters and is not 100% completely interconnected. A second example is the replicated mesh architecture, where the mesh fabric is fully or partially replicated in the system slots. A third example is a mesh architecture where not all system slots are identical. Here, the system slots may differ in the size and function of their switching fabrics. In practical terms, these variations present similar transmission and backplane architectures. Thus, these will all be classified as mesh architectures in this document.
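The scaling difference between these topologies can be made concrete by counting the point-to-point backplane links each one requires. The short Python sketch below is illustrative only; the slot counts and the dual-star formula (2N links plus one inter-switch protection link) are assumptions consistent with the descriptions above, not values from this document.

```python
def star_links(nodes: int) -> int:
    # One point-to-point link from the central switch to each node.
    return nodes

def dual_star_links(nodes: int) -> int:
    # Each node links to both central switches, plus one
    # switch-to-switch link for protection/coordination.
    return 2 * nodes + 1

def full_mesh_links(slots: int) -> int:
    # Every slot links directly to every other slot.
    return slots * (slots - 1) // 2

for n in (8, 14, 16):
    print(f"{n} slots: star={star_links(n)}, "
          f"dual star={dual_star_links(n)}, "
          f"full mesh={full_mesh_links(n)}")
```

The quadratic growth of the full mesh link count is what drives its higher backplane interconnect density requirements relative to the star variants.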

3. Electrical Interface Functional Requirements

3.1 Signal characteristics (e.g., rate, reach, voltages, vias)

Table 1 Signal Characteristics for Nonlegacy and Legacy Dual Star Fabrics

| Characteristic | Example 1 | Example 2 |
| System Architecture Type | Dual Star Fabric | Dual Star Fabric |
| Legacy/Nonlegacy | Nonlegacy | Legacy |
| Rate | 6 and 10 Gbps | 6 and 10 Gbps |
| Reach (includes backplane, child boards, connectors) | 0-600 mm (NOTE 1) | 1000 mm (NOTE 2) |
| Signal coding | NRZ | tbd |
| Differential pair | yes | yes |
| Signaling direction | unidirectional | unidirectional |
| I/O driver cell voltage | 1.2V (other fixed voltages could be chosen) (NOTE 3) | 1.2V (other fixed voltages could be chosen) (NOTE 3) |
| DC, AC coupling | Both AC and DC; DC preferred at 1.2V | Both AC and DC |
| Differential impedance | 100 Ohm ± 10% PCB trace | tbd |
| Specification of signal vias | Specifications TBD depending on measurement method (e.g. S-parameter, TDR) | tbd |
| Impedance discontinuities due to connectors, etc. | Specifications TBD depending on measurement method (e.g. S-parameter, TDR) | Specifications TBD depending on measurement method (e.g. S-parameter, TDR) |

Note 1. A maximum trace length of 600 mm is chosen for the dual star architecture as it is feasible for centrally located switch cards in a standard size shelf. Using trace lengths of 600 mm or less, rather than up to 1 m, optimizes the link power budget, which is one of the primary objectives of the 6G+ LR CEI implementation agreement for non-legacy applications per the Aug04 6G/11G CEI draft document.
Note 2. A maximum trace length of 1000 mm is chosen for the legacy dual star architectures in order to support existing system designs.
Note 3. Refer to Section 3.1.1.

Table 2 Signal Characteristics for Nonlegacy and Legacy Multi-Star Fabrics

| Characteristic | Example 1 | Example 2 |
| System Architecture Type | Multi-Star Fabric | Multi-Star Fabric |
| Legacy/Nonlegacy | Nonlegacy | Legacy |
| Rate | 6 and 10 Gbps | 6 Gbps |
| Reach (includes backplane, child boards, connectors) | 0-1000 mm | 0-1000 mm |
| Signal coding | tbd | tbd (NRZ to 3.125G for backward compatibility) |
| Differential pair | yes | yes |
| Signaling direction | unidirectional | unidirectional |
| I/O driver cell voltage | 1.2V (other fixed voltages could be chosen) (NOTE 1) | 1.2V (other fixed voltages could be chosen) (NOTE 1) |
| DC, AC coupling | tbd | AC |
| Differential impedance | 100 Ohm ± 10% PCB trace | 100 Ohm ± 10% PCB trace |
| Specification of signal vias | tbd | tbd |
| Impedance discontinuities due to connectors, etc. | Specifications TBD depending on measurement method (e.g. S-parameter, TDR) | Specifications TBD depending on measurement method (e.g. S-parameter, TDR) |

Note 1. Refer to Section 3.1.1.

Table 3 Signal Characteristics for Nonlegacy and Legacy Full Mesh Fabrics

| Characteristic | Example 1 | Example 2 |
| System Architecture Type | Full Mesh Fabric | Full Mesh Fabric |
| Legacy/Nonlegacy | Nonlegacy | Legacy |
| Rate | 6 and 10 Gbps | 6 and 10 Gbps |
| Reach (includes backplane, child boards, connectors) | 0-1000 mm | tbd |
| Signal coding | NRZ | tbd |
| Differential pair | yes | yes |
| Signaling direction | unidirectional | unidirectional |
| I/O driver cell voltage | 1.2V (other fixed voltages could be chosen) (NOTE 1) | 1.2V (other fixed voltages could be chosen) (NOTE 1) |
| DC, AC coupling | Both AC and DC; DC preferred at 1.2V (NOTE 1) | Both AC and DC |
| Differential impedance | 100 Ohm ± 10% PCB trace | tbd |
| Specification of signal vias | Specifications TBD depending on measurement method (e.g. S-parameter, TDR) | tbd |
| Impedance discontinuities due to connectors, etc. | Specifications TBD depending on measurement method (e.g. S-parameter, TDR) | Specifications TBD depending on measurement method (e.g. S-parameter, TDR) |

Note 1. Refer to Section 3.1.1.

3.1.1 Selection of 1.2V for Fixed DC Coupled Operation

AC coupled operation is clearly advantageous for legacy systems because it allows interoperability between components operating from different supply voltages. However, AC coupled operation has the disadvantage of requiring additional capacitors. On dense router and switch cards with many complex ASICs, there may not be sufficient surface area on the board for the large number of capacitors needed. Moreover, deploying these capacitors adds additional vias, and possibly even board layers, in order to accommodate the added routing area. Finally, these added capacitors and vias contribute more signal distortion, which must be included both in the overall link budget and in the resulting EMC. Thus, DC coupled operation has many advantages when a common voltage can be specified.

In the development of 6G+ and 11G+ CEI, an attempt was made to support an AC-like DC operation using DC coupled wide common mode receivers. However, analyses (see, for example, earlier OIF contributions) have shown that workable solutions for this approach are limited in their ability to support multiple voltages.

Given the advantages of DC coupled operation, it has been included as an option in 6G+ long reach CEI for non-legacy systems. This DC coupling option only supports a single supply voltage on the transmit and receive I/O driver cells. This I/O cell voltage was set at a level of 1.2V in order to build in forward and backward compatibility. This level was chosen since it matches the voltage levels set in SxI-5 and TFI-5 for transmit and receive. Specification of 1.2V in other standards indicates that there will be support for this level in the future. If these other standard interfaces migrate to a new supply voltage, the DC voltage level in CEI can be migrated at the same time. Note that setting the termination voltage at 1.2V does not mean that this voltage must be used for the core ASIC. Indeed, development of ASICs which use lower core voltages is highly desirable for high speed devices such as SERDES and framers, since they use less power.

3.1.2 Location of possible signal vias in the transmission architecture

The list below assumes a point-to-point link between two child boards connected to a common backplane. Note that Vias 2-3 are designated either as near (N) or far (F), depending on whether they are located on the transmit side or receive side of the link.

Via 1) Breakout signal via exiting the source device
Via 2N) Signal via connecting to the input pad of the DC blocking capacitor
Via 3N) Signal via connecting to the output pad of the DC blocking capacitor
Via 4) Signal via into the first child board connector
Via 5) Signal via from the first backplane connector into the backplane board
Via 6) Signal via from the backplane board into the second backplane connector
Via 7) Signal via from the second child board connector
Via 2F) Signal via connecting to the input pad of the DC blocking capacitor
Via 3F) Signal via connecting to the output pad of the DC blocking capacitor
Via 8) Signal via entering the load device

3.1.3 Signal via characteristics

Signal vias are a necessary component of the transmission architectures addressed in this document. The location and description of these vias is given in 3.1.2. AC coupled systems will have all eight Vias. DC coupled systems will have Via 1 and Vias 4-8.

Via design and fabrication is an important aspect of the transmission characteristics of electrical interconnections. Currently, the most widely used via design is the plated through hole (PTH). Alternative via designs are backdrilled PTHs, microvias, blind vias, and buried vias. Board fabrication techniques are compatible with using a mixture of different via types on a single board. The choice of via design impacts fabrication complexity, system cost and, possibly, system reliability. For these reasons, different system vendors have different requirements for their use. Table 4 below lists the various types of vias, and their associated characteristics, that are used in printed circuit boards.

The role of signal vias in determining the overall transmission characteristics of signals at 6+ Gb/s is well known. In addition to the vias themselves, the designs of other related features such as anti-pads and surface pads also contribute to channel distortion. Given the range of design options for all of these features, it would be a difficult task to attempt to specify the distortion of individual elements contributing to the overall via transmission characteristics.

Table 4 Types and characteristics of vias used in printed circuit boards

| Type | Via application (ref. 3.1.2) | Aspect Ratio, and diameter range where specified (NOTE 1) | Connector Compatibility | System Vendor Acceptance | Additional Comments |
| Plated through hole; no backdrilling | all | 10:1 | Press fit and surface mount | all | Introduces additional resonances depending on board thickness. Widely used by all system vendors. |
| Plated through hole with backdrilling | all | 10:1 | Press fit and surface mount | Some | Used to eliminate resonances; cost adder |
| Microvia | All, but can only be used on the top two metal layers | 1:1 aspect ratio; overall diameter 100-150 µm | Surface mount | Some | Commonly used for ASIC packages on child boards; much less common on backplanes |
| Blind via (standard process) | all | 10:1 | Surface mount; limited application for press fit | Some | Used to eliminate resonances or increase routing density. High cost |
| Blind via (controlled depth drilling) | all | 10:1 | Surface mount; limited application for press fit | Some | Used to eliminate resonances or increase routing density. High cost; quality control issues |
| Buried via | Vias 2-3 only, for buried capacitors | 10:1 | Not applicable | Some | Used to eliminate resonances or increase routing density. High cost |

NOTE 1. Aspect ratio refers to the ratio of the length of the signal via barrel to the via barrel diameter as drilled before plating. Note that higher aspect ratios are possible than those listed here. These, however, will require advanced fabrication capabilities that are not widely available.
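The resonance behavior that backdrilling removes can be estimated with a simple quarter-wave model: an unused via stub of length L resonates near f = c / (4 L sqrt(eps_eff)). The Python sketch below is an illustrative calculation; the stub lengths and the effective dielectric constant are assumed values, not figures from this document.

```python
import math

C_MM_PER_S = 2.998e11  # speed of light in mm/s

def stub_resonance_ghz(stub_len_mm: float, eps_eff: float = 3.7) -> float:
    """Approximate quarter-wave resonance of an open via stub."""
    f_hz = C_MM_PER_S / (4.0 * stub_len_mm * math.sqrt(eps_eff))
    return f_hz / 1e9

# A thick backplane can leave several mm of unused via stub;
# backdrilling shortens the stub and pushes the notch up in frequency.
for stub_mm in (5.0, 2.5, 0.5):
    print(f"{stub_mm:>4} mm stub -> notch near "
          f"{stub_resonance_ghz(stub_mm):.1f} GHz")
```

Under these assumptions a full-thickness 5 mm stub resonates near 7.8 GHz, uncomfortably close to the signal band of a 6.375 Gb/s NRZ stream, while a backdrilled 0.5 mm stub moves the notch far above it.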

3.1.4 Channel characteristics

Each 6G+ board to board electrical link is a differential transmission line consisting of two conductors and a ground plane. The transfer function which describes the characteristics of each link can be defined in terms of 4-port S-parameters referenced to a common ground plane.

Figure 5 Four Port Differential Transmission Line with a Common Ground Plane
[Figure 5 shows Conductor 1 between Ports 1 and 2 and Conductor 2 between Ports 3 and 4, above a common ground plane.]

The differential S-parameters which describe this link are given by a 2x2 matrix with complex elements. The off-diagonal elements of this matrix, SDD12 and SDD21, consist of the transmission transfer functions, and the diagonal elements, SDD11 and SDD22, consist of the reflection transfer functions.

The channel which makes up this link, as shown in the figure below, consists of various components, each of which can be described (and measured) in terms of individual S-parameters, as described, e.g., in earlier OIF contributions. In principle, the S-parameters can be determined for each connector, board, via, etc. In practice, however, it is more useful to determine the S-parameters for major subsystems. In the application below, these would be the interface card, backplane, and switch fabric card, each of which would have associated transmission lines, PCBs, vias and connectors.

Figure 6 Channel in a Typical System Application
[Figure 6 shows an interface card and a switch fabric card, each with a differential transmission line, joined through connectors to a common backplane.]
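In matrix form, the differential two-port description above (a conventional rendering using the mixed-mode SDD notation this document adopts) characterizes the link as:

$$
\begin{pmatrix} b_{D1} \\ b_{D2} \end{pmatrix}
=
\begin{pmatrix} S_{DD11} & S_{DD12} \\ S_{DD21} & S_{DD22} \end{pmatrix}
\begin{pmatrix} a_{D1} \\ a_{D2} \end{pmatrix}
$$

where $a_{Di}$ and $b_{Di}$ are the incident and reflected differential-mode waves at end $i$: $S_{DD21}$ is the forward transmission (insertion loss), and $S_{DD11}$, $S_{DD22}$ are the reflections (return loss).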

Specification of the channel loss requires a channel model with appropriate fitting parameters that are realistic for channels in legacy and nonlegacy line cards, backplanes and switch fabric cards. Following the Aug04 6G/11G CEI draft document, a method using a curve fit equation for the differential insertion loss, SDD21 (in dB), as a function of frequency is used.
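A representative form for such a curve fit (shown here as an illustrative assumption; the exact equation and coefficients are defined in the Aug04 CEI draft) combines a skin-effect term in the square root of frequency with a dielectric-loss term linear in frequency:

$$
|S_{DD21}(f)|_{\mathrm{dB}} = -\left(a_0 + a_1\sqrt{f} + a_2 f\right)
$$

where $a_0$ captures fixed losses (connectors, vias), the $a_1\sqrt{f}$ term models conductor (skin-effect) loss, and the $a_2 f$ term models dielectric loss; the coefficients are fit to measured or simulated channels.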

Specifications for the overall loss will follow the form in Figure 7. It is expected that the normative transfer functions will be different for legacy versus nonlegacy applications. It is also expected that system vendors will want to allow for additional margin beyond that shown in the normative channel model specification. Recommendations for added margin would be shown in an informative transfer function, as shown in Figure 7.

Figure 7 Informative and Normative Transfer Functions for Allowable Channels in Legacy and Nonlegacy Applications
[Figure 7 plots SDD21 (dB, arbitrary units) versus frequency (GHz) on a logarithmic frequency axis, with a normative limit curve and a more conservative informative curve.]

Additional specifications are also needed for the maximum allowable ripple on the overall loss curve. It is expected that the fitting parameters will differ for legacy and non-legacy applications. One significant source of ripple is stub resonances. Ripple in the magnitude or phase of the forward transmission, SDD21, results in waveform distortions and intersymbol interference, and hence in eye closure. The relative amount of eye closure for a given amount of ripple in the transfer function depends on the input waveform into the backplane channel (e.g. rise and fall times) and on the bandwidth and filtering characteristics of the backplane channel itself. Nevertheless, a range of minimum permissible stub resonance frequencies for an acceptable amount of eye closure due to stub effects can be calculated using the channel model. For the example of NRZ data, these are given in Table 5 below, assuming the model described by the above equation and following an earlier OIF contribution.

Table 5 Minimum Permissible Stub Resonance Frequencies for NRZ Data

| Relative eye closure due to spectral ripple of stubs and other resonances | Minimum permissible stub resonance frequency relative to bit rate |
| 0% | 0-% |
| 0% | -% |
| 0% | 0-0% |
| 0% | -0% |

3.1.5 Signal crosstalk

Crosstalk between adjacent channels in connectors, via fields, backplane boards, etc. is a major contributor to the overall signal integrity of 6G LR transmission links in an actual system. Migration to 6G LR will require that system vendors address possible link performance impairments caused by induced differential crosstalk. In the current CEI implementation agreement, Appendix B suggests the termination and port definitions to use when measuring the forward channel and the NEXT/FEXT crosstalk aggressors.

The system vendor requirements for the measurement and reporting of signal crosstalk information are, minimally:

1. Determine SDD NEXT and FEXT
2. Determine both single aggressor and multi-aggressor NEXT and FEXT
3. Report crosstalk data in terms of SDD for both:
   single aggressor, and
   multi-aggressor, in the form of a set of independent individual single aggressor measurements

The system vendors request that the PLL CEI group determine a methodology for combining a set of independent individual single aggressor measurements into a multi-aggressor result for the StatEye model. This methodology should be incorporated into the final CEI document. Furthermore, the system vendors request that the PLL CEI group produce an informative recommendation for the crosstalk measurement methodology, including the equipment set-up and test procedure. These recommendations should include the set-up parameters for vector network analyzer equipment, the frequency range of interest, and the form of the single and multiple aggressor signals (data, single frequency, swept frequency, etc.).
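One common way to combine independent single-aggressor measurements into a multi-aggressor figure is a power sum: the individual aggressor couplings are assumed uncorrelated and are summed in linear power at each frequency point. The sketch below illustrates this as one plausible methodology of the kind requested above, not the combining rule the PLL CEI group adopted; the example crosstalk values are assumptions.

```python
import numpy as np

def power_sum_xtalk(xtalk_db_curves):
    """Combine single-aggressor crosstalk magnitudes (dB, sampled on a
    common frequency grid) into a multi-aggressor power-sum result (dB)."""
    curves = np.asarray(xtalk_db_curves)    # shape: (n_aggressors, n_freqs)
    linear_power = 10.0 ** (curves / 10.0)  # dB -> linear power
    return 10.0 * np.log10(linear_power.sum(axis=0))

# Example: three aggressors measured at the same frequency points.
next1 = np.array([-40.0, -38.0, -35.0])
next2 = np.array([-45.0, -42.0, -40.0])
fext1 = np.array([-50.0, -47.0, -44.0])
print(power_sum_xtalk([next1, next2, fext1]))
```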

3.1.6 Design Considerations to Minimize Signal Crosstalk

Appendix A in the CEI implementation agreement describes design practices to minimize crosstalk between adjacent channels. These include, for example, grouping Tx and Rx pins at ICs and connectors. Such design rules can be most readily implemented in non-legacy applications. However, a legacy design for transmission speeds of 3.125 Gb/s and lower may not have been optimized following these guidelines. See Section 7.3.

3.2 System characteristics (e.g., density, connector requirements, short and long reach interoperability)

Table 6 System Characteristics for Nonlegacy and Legacy Dual Star Fabrics

| Characteristic | Example 1 | Example 2 |
| System Architecture Type | Dual Star Fabric | Dual Star Fabric |
| Legacy/Nonlegacy | Nonlegacy | Legacy |
| Rate | 6 and 10 Gbps | 6 and 10 Gbps |
| Board area allocated to channel ASICs | Include signal conditioning in system ASICs. Do not want to dictate system packaging, but silicon package area should not increase over that at XAUI/3.125G | Include signal conditioning in system ASICs. Do not want to dictate system packaging, but silicon package area should not increase over that at XAUI/3.125G |
| Interconnect board-to-board density | Support up to 1000+ differential pairs (dps) | Support up to 1000+ differential pairs (dps) |
| Connector density | Support card edge density of at least 2 dps/linear mm (50 dps/linear inch) (NOTE 1) | Support card edge density of at least 2 dps/linear mm (50 dps/linear inch) (NOTE 1) |
| Live card insertion/removal requirements | No errors, no damage | No errors, no damage |
| Requirement for Long Reach ASIC driver to be interoperable for Short Reach usage | No damage of Short Reach receiver | No damage of Short Reach receiver |

Note 1. This does not exclude the use of lower density connectors.

Table 7 System Characteristics for Nonlegacy and Legacy Multi-Star Fabrics

| Characteristic | Example 1 | Example 2 |
| System Architecture Type | Multi-Star Fabric | Multi-Star Fabric |
| Legacy/Nonlegacy | Nonlegacy | Legacy |
| Rate | 6 and 10 Gbps | 6 Gbps |
| Board area allocated to channel ASICs | Include signal conditioning in system ASICs. | Include signal conditioning in system ASICs. |
| Interconnect board-to-board density | Support up to 00+ differential pairs (dps) | Support up to 0+ differential pairs (dps) |
| Connector density | Support card edge density of at least 2 dps/linear mm (50 dps/linear inch) (NOTE 1) | Support card edge density of at least 2 dps/linear mm (50 dps/linear inch) (NOTE 1) |
| Live card insertion/removal requirements | No errors, no damage | No errors, no damage |
| Requirement for Long Reach ASIC driver to be interoperable for Short Reach usage | No damage of Short Reach receiver | No damage of Short Reach receiver |

Note 1. This does not exclude the use of lower density connectors.

Table 8 System Characteristics for Nonlegacy and Legacy Full Mesh Fabrics

| Characteristic | Example 1 | Example 2 |
| System Architecture Type | Full Mesh Fabric | Full Mesh Fabric |
| Legacy/Nonlegacy | Nonlegacy | Legacy |
| Rate | 6 and 10 Gbps | 6 and 10 Gbps |
| Board area allocated to channel ASICs | Include signal conditioning in system ASICs. Do not want to dictate system packaging, but silicon package area should not increase over that at XAUI/3.125G | Include signal conditioning in system ASICs. Do not want to dictate system packaging, but silicon package area should not increase over that at XAUI/3.125G |
| Interconnect board-to-board density | Support up to 1000+ differential pairs (dps) | Support up to 1000+ differential pairs (dps) |
| Connector density | Support card edge density of at least 2 dps/linear mm (50 dps/linear inch) (NOTE 1) | Support card edge density of at least 2 dps/linear mm (50 dps/linear inch) (NOTE 1) |
| Live card insertion/removal requirements | No errors, no damage | No errors, no damage |
| Requirement for Long Reach ASIC driver to be interoperable for Short Reach usage | No damage of Short Reach receiver | No damage of Short Reach receiver |

Note 1. This does not exclude the use of lower density connectors.

3.2.1 Additional Connector Requirements

The 6G LR Clause in the current CEI Implementation Agreement must include the possibility of using non-legacy high speed connectors to improve channel performance. Non-legacy connectors in this context refer to any board-to-board connector designed to support binary electrical signaling at 6+ Gb/s and/or 11+ Gb/s.

Some system vendors require high speed multi-row connectors that support the maximum density listed in 3.2 for high capacity cards such as switch and router cards. Additionally, these connectors must support lower density for interface and line cards. Lower density connectors (less than 2 dps/linear mm) are also used in some systems and should be supported. Note that various higher and lower density connectors will be considered to be either legacy or non-legacy, depending on individual design nodes.

High speed multi-row connectors must allow combinations with non-high speed connectors, such as those for power and medium speed signaling. These connectors must not restrict backplane/child board thickness.

3.2.2 Additional Interface Card Requirements

Interface cards that are inserted but not powered should not be damaged by transmission signals sent to the card. Subsequent power-up of these inserted cards should return them to normal operation and should not cause any latch-up. Multiple power up/power down cycles for inserted cards should be supported without damage or latch-up.

4. Link and Overall System Metrics

4.1 Link Metrics (e.g. cost, power, EMC, BER, testing)

Please note the differentiation here between the link metrics and the system metrics below.

Table 9 Link Metrics for Nonlegacy and Legacy Dual Star Fabrics

| Characteristic | Example 1 | Example 2 |
| System Architecture Type | Dual Star Fabric | Dual Star Fabric |
| Legacy/non-legacy | Non-legacy | Legacy |
| Rate | 6 and 10 Gbps | 6 and 10 Gbps |
| Cost relative to FR4-based backplanes + connectors for 3.125 Gbps | <1.5X (NOTE 1) | 1X (NOTE 2) |
| Power relative to legacy based backplanes + connectors for 3.125 Gbps | <1.5X | tbd |
| Thermal dissipation relative to legacy based backplanes + connectors for 3.125 Gbps | <1.5X | tbd |
| BER | 10^-12 with objective of 10^-15 | 10^-12 with objective of 10^-15 |
| PRBS | 2^31-1 | 2^31-1 |

Note 1. Includes the ASICs, boards and connectors.
Note 2. Does not include the ASICs.

Table 10 Link Metrics for Nonlegacy and Legacy Multi-Star Fabrics

| Characteristic | Example 1 | Example 2 |
| System Architecture Type | Multi-Star Fabric | Multi-Star Fabric |
| Legacy/non-legacy | Non-legacy | Legacy |
| Rate | 6 and 10 Gbps | 6 Gbps |
| Cost relative to FR4-based backplanes + connectors for 3.125 Gbps | <1.5X (NOTE 1) | 1X (NOTE 2) |
| Power relative to legacy based backplanes + connectors for 3.125 Gbps | tbd | <1.5X |
| Thermal dissipation relative to legacy based backplanes + connectors for 3.125 Gbps | tbd | <1.5X |
| BER | 10^-12 with objective of 10^-15 | 10^-12 with objective of 10^-15 |
| PRBS | 2^31-1 | 2^31-1, and 8b/10b for legacy support |

Note 1. Includes the ASICs, boards and connectors.
Note 2. Does not include the ASICs.

Table 11 Link Metrics for Nonlegacy and Legacy Full Mesh Fabrics

| Characteristic | Example 1 | Example 2 |
| System Architecture Type | Full Mesh Fabric | Full Mesh Fabric |
| Legacy/non-legacy | Non-legacy | Legacy |
| Rate | 6 and 10 Gbps | 6 and 10 Gbps |
| Cost relative to FR4-based backplanes + connectors for 3.125 Gbps | <1.5X (NOTE 1) | 1X (NOTE 2) |
| Power relative to legacy based backplanes + connectors for 3.125 Gbps | <1.5X | TBD |
| Thermal dissipation relative to legacy based backplanes + connectors for 3.125 Gbps | <1.5X | TBD |
| BER | TBD | TBD |
| PRBS | TBD | TBD |

Note 1. Includes the ASICs, boards and connectors.
Note 2. Does not include the ASICs.
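The gap between a BER requirement and a much lower BER objective matters for testability, because verifying very low error rates takes dramatically longer. As a rough illustration (the rates and the confidence rule-of-thumb below are assumptions, not from this document), observing about 3/BER error-free bits gives roughly 95% confidence that the true BER is below target:

```python
def test_time_seconds(target_ber: float, rate_bps: float,
                      confidence_factor: float = 3.0) -> float:
    # ~95% confidence of BER < target requires observing roughly
    # confidence_factor / target_ber bits with zero errors.
    bits_needed = confidence_factor / target_ber
    return bits_needed / rate_bps

for ber in (1e-12, 1e-15):
    secs = test_time_seconds(ber, 6.375e9)
    print(f"BER {ber:g} at 6.375 Gb/s: {secs:.0f} s ({secs/3600:.1f} h)")
```

Under these assumptions a 10^-12 requirement can be verified in minutes per link, while a 10^-15 objective would require days of error-free observation, which is why the lower figure is an objective rather than a test requirement.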

4.1.1 Power consumption of a reference 3.125 Gb/s link

One objective of the CEI 6G LR implementation agreement is that it will be optimized for overall cost-effective system performance, including total power dissipation. This is critical from a system vendor perspective, since power consumption has become an increasingly important issue as data and communication network elements scale in capacity and complexity. Tables 9-11 contain metrics for the power and thermal dissipation desired for 6G LR links, referenced to 3.125 Gb/s technology. Therefore, it is necessary to establish a reference power consumption for 3.125 Gb/s transmission links.

The power consumption per bi-directional 3.125 Gb/s link is usually defined by ASIC vendors to be the power of the SERDES macro as calculated for a single receiver/transmitter channel at nominal supply voltage and a maximum frequency of 3.125 Gb/s, including the I/O buffer to MUX/DEMUX, with the PLL (phase lock loop) power divided by the maximum possible number of channels per PLL if the PLL is shared by multiple channels.

An average value for the power of a 3.125 Gb/s transmission link as defined above is 150 mW per channel. Average in this sense implies a value that lies approximately midway between the best and worst in class values. Average does not imply a precise arithmetic mean or median for the available commercial ASICs.

4.1.2 Power consumption of a 6G LR link

The system vendors require that the stated power consumption of a 6G LR link be normalized so that a comparison with 3.125 Gb/s technology can be clearly delineated. The power consumption of any additional encoding, equalization, emphasis, synthesized clocks, etc. that are added to the SERDES macro in order to implement the 6G LR transmission must also be included in the power consumption metric. For non-NRZ signaling, the data conversion/reconversion macros must also be included.

4.1.3 Asynchronous versus synchronous links

In SONET/SDH system architectures, high speed data is transmitted synchronously in the sense of having a master system clock. However, the degree of synchronicity between individual data links depends on their physical design. For example, parallel links that are transmitted synchronously will arrive asynchronously at their respective receivers if these links have different trace lengths. In order to avoid having to design a complex system to be continuously synchronous throughout, system vendors require that the high speed electrical I/Os include clock recovery at the receive end in order to support asynchronous operation at the individual link level. This clock recovery must be of sufficient quality to ensure overall system performance with a large number of data links, per Section 4.2.
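A convenient normalization for the comparison required in 4.1.2 is energy per bit: mW divided by Gb/s gives pJ/bit. The sketch below works through this arithmetic with assumed example values; the reference channel power and the 6G link power are illustrative assumptions, not figures from this document.

```python
def pj_per_bit(power_mw: float, rate_gbps: float) -> float:
    # mW / (Gb/s) = pJ/bit
    return power_mw / rate_gbps

# Assumed reference: 3.125 Gb/s SERDES channel (I/O buffer to
# MUX/DEMUX plus the per-channel share of the PLL).
ref = pj_per_bit(150.0, 3.125)   # 48 pJ/bit

# Assumed 6G LR link: SERDES macro plus added equalization,
# emphasis, and control logic, all included per 4.1.2.
lr6 = pj_per_bit(220.0, 6.375)   # ~34.5 pJ/bit

print(f"reference: {ref:.1f} pJ/bit, 6G LR: {lr6:.1f} pJ/bit")
print(f"relative power per channel: {220.0 / 150.0:.2f}X")
```

Note how a link can stay within a <1.5X per-channel power envelope while improving energy per bit, since the data rate roughly doubles.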

4.1.4 Backchannels for Equalization Control Information

In the design of devices to drive 6+ Gb/s electrical signaling, multiple equalization options can be used in order to achieve a given target bit error rate. One subset of these schemes relies on adaptive equalization at the transmitter and/or receiver. By definition, adaptive equalization optimizes the ASIC performance based on feedback information obtained from the transmission of previous bits. The design of the feedback communication channel has the potential to impact the overall complexity of a system. Generally, system vendors require that the feedback communication not require an additional backchannel for each high speed I/O link or link pair. The reason for this requirement is that backchannel implementations require much additional logic, which adds to the link power budget. From this perspective, feedback communication schemes which process link performance data via the central shelf logic controller are acceptable. However, this latter scheme will increase the latency between the time that link performance degradation is detected and when a corrective action can be taken. Thus, adaptive equalization solutions which do not require feedback communication between the transmitter and receiver are most useful.

4.1.5 DC Balance Requirements for 6G Links

The signals a system vendor transmits over 6G links should not be required to guarantee DC balance; the 6G link receivers should have to be able to deal with this. Usually, methods like scrambling are used to guarantee a certain transition density, which leads to a statistical probability that too many consecutive identical digits (CID) are avoided, as defined in 4.1.6. However, scrambling alone cannot guarantee DC balance. This would only be possible with methods like 8b/10b coding, which has an associated penalty of 25% frequency overhead. In the current CEI 6G+ LR document, the maximum serial data rate is 6.375 Gb/s, which is insufficient to accommodate this required overhead. Moreover, given the hardware implications of running 10 Gb/s interfaces at 12.5+ Gb/s data rates, deploying 8b/10b coding simply to achieve DC balance is of questionable benefit.

4.1.6 Robustness of BER and Crosstalk to Data Pattern Content and Length

Analysis of SONET/SDH traffic errors has shown that PRBS traffic is not the worst case for backplane applications. In particular, frame structures with long regions of consecutive identical digits (CID) introduce a greater possibility of bit error. The first source of error occurs because CID can introduce DC wander. Even when DC balance is assured, the timing recovery circuitry may not be able to accommodate regions of data containing very little timing information in the form of data transitions.
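The distinction drawn above between transition density and DC balance can be seen directly by tracking running disparity: a scrambled stream keeps transitions frequent, but its running disparity performs an unbounded random walk, while 8b/10b holds it within a fixed band by construction. A minimal sketch follows (illustrative only; a uniformly random bit stream stands in for scrambled data):

```python
import random

def worst_running_disparity(bits) -> int:
    """Track cumulative (ones - zeros); DC wander follows this value."""
    rd, worst = 0, 0
    for b in bits:
        rd += 1 if b else -1
        worst = max(worst, abs(rd))
    return worst

random.seed(1)
scrambled_like = [random.getrandbits(1) for _ in range(100_000)]
print("worst-case running disparity, scrambled-like stream:",
      worst_running_disparity(scrambled_like))
# 8b/10b bounds running disparity to a few bits by construction,
# at the cost of the 25% rate overhead (10 coded bits per 8 data bits)
# discussed in 4.1.5.
```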