Data Center architecture trends and their impact on PMD requirements

Data Center architecture trends and their impact on PMD requirements. Mark Nowell, Matt Traverso (Cisco); Kapil Shrikhande (Dell). IEEE 802.3 NG100GE Optics Study Group, March 2012.

Scott Kipp (Brocade), David Warren (Hewlett-Packard), Gary Nicholl (Cisco), Jeff Maki (Juniper), Pete Anslow (Ciena), Shimon Muller (Oracle)

- How Data Center architectures are changing and how that impacts technology requirements
- Implications for the NG100G Optics Study Group
- Recommendations

Enterprise Data Centers:
- Lower port counts (than MSDC)
- Network provides workload mobility
- L2-L3 forwarding agnostic

MSDC Clouds:
- High port count: 30K-100K
- Single tenant; server virtualization not used, or hidden from the network; L3 forwarding only
- POD sub-unit of 30K-100K ports
- Variants: multi-tenancy (hundreds of thousands of tenants); workload and VM mobility; L3 forwarding only or L2-L3 forwarding agnostic
- Architectures are similar
- Fastest growing market

[Figure slides. Source: James Hamilton, Amazon. Internet Scale Infrastructure Innovation, Open Compute Summit 2011. http://mvdirona.com/jrh/talksandpapers/jameshamiltonocp%20summitfinal.pdf]

- Need to achieve higher scalability
- Need for better high availability and lower fate sharing
- Need to accommodate diverse workloads concurrently
- Need flexibility on workload mobility
- Need to further simplify operational models
- Need for lower and/or predictable latency / response time
- Need physical facilities to evolve with technology
- Need lower-cost connectivity to support large environments and trends in traffic, bandwidth and speed

Past: 3-tier architecture
- Oversubscription between Access, Aggregation and Core
- Tree-based architecture optimized for north-south (N-S) traffic
- Over-subscription in access, aggregation and core
- Lower port counts in Agg-Core

CY2010+: non-blocking Data Center fabric
- Oversubscription only in Access; large MSDCs are doing this on new network builds
- Meshed architecture better suited for N-S and east-west (E-W) traffic
- Over-subscription only in access, 1:1 in aggregation and core (see the oversubscription sketch below)
- Higher port counts in Agg-Core

Key changes are at the connectivity layer.
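As a rough illustration of the oversubscription contrast above, the sketch below computes an access-layer oversubscription ratio (server-facing bandwidth divided by uplink bandwidth). The port counts and speeds are illustrative assumptions, not figures from the presentation.

```python
def oversubscription_ratio(server_ports, server_gbps, uplink_ports, uplink_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth at a switch tier."""
    downlink = server_ports * server_gbps
    uplink = uplink_ports * uplink_gbps
    return downlink / uplink

# Illustrative 3-tier access switch: 48 x 10G down, 4 x 40G up -> 3:1 oversubscribed.
print(oversubscription_ratio(48, 10, 4, 40))   # 3.0

# Non-blocking fabric target: uplink capacity matches downlink capacity (1:1).
print(oversubscription_ratio(48, 10, 12, 40))  # 1.0
```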

[Animated slide: leaf-spine diagram showing Spine, Leaf, Servers and Pods, with a 2nd spine tier if needed to connect PODs]
- Pod: East-West communication is equidistant across a 2-tier topology
- HA model: N+1 on spine and paths vs. 1+1 on the classic model
- All switches within a tier provide equal port density
- A pod's max density: number of spine switches x switch port density (worked example below)
- Max number of spine switches: half the port density of a leaf switch
- Larger than single-pod capacity requires an additional tier
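A minimal sketch of the pod-sizing arithmetic on this slide, assuming leaf and spine switches of equal radix (as the slide states for switches within a tier); the 64-port radix is an illustrative value, not from the presentation.

```python
def max_pod_density(leaf_radix, spine_radix):
    """Pod max density per the slide: # spine switches x switch port density,
    where the max # of spine switches is half the port density of a leaf switch."""
    max_spines = leaf_radix // 2
    return max_spines * spine_radix

# Illustrative example: 64-port leaf and spine switches.
# Max spines = 32; pod max density = 32 x 64 = 2048 ports.
print(max_pod_density(64, 64))  # 2048
```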

[Schematic floor plan of a data-center facility showing DB/Storage, MDF and Database areas, with 100m and 150m runs marked. Schematic representation only; fiber runs up to 2 km corner to corner. Actual deployment is dependent on numerous factors such as facility constraints and scale requirements.]

- Most DC architectures are built around 1-10GE MMF reach (1-300 meters inside the DC for MMF)
- MSDC environments: PODs sized at 100-150m, with interconnect requirements beyond 150m on MMF optics; MSDC inter-POD reach > 2 km
- 40/100GE MMF reach challenges compared to 10G: the reach challenges with 40G/100G MMF drive the need for lower-cost single-mode optics
- Highly meshed interconnect drives the need for high port density on equipment
- When using ribbon fiber, are there ribbon TAPs? Cable management. Automated patch panels: SMF is needed to enable them.

Boiling this down to PMD requirements that the Study Group needs to consider:
- System port density is critical (size/power challenge on the PMD)
- Economics is critical (cost challenge on the PMD)

- This project is likely to complete in the 2014 timeframe, so cost optimization should be targeted within 2-3 years
- Coincides with the forecasted emergence of the 100G server market (Source: CFI_01_1110.pdf)
- Market transition to 25G SerDes technology is taking place: IEEE 802.3bj Task Force; multiple announcements and developments within CMOS; during the 802.3ba timeframe, 25G SerDes relied upon SiGe

Links:
- Altera: Demonstrating 25-Gbps Transceivers in Programmable Logic, Sept 2010
- Xilinx: World's First Single-FPGA Solution for 400G Communications Line Cards, Nov 2010
- Inphi: Samples chips to power 100G ports, Sept 2011
- Avago Technologies: Demonstrates Industry's First 28-nm 25-Gbps Long-Reach-Compliant ASIC SerDes, Feb 2012

4x25G (aka SR4) is the only proposal. It has definite port-density advantages over 100GBASE-SR10:
- The optical lane rate will align with the electrical lane rate; no gearbox or reverse gearbox needed
- Cable reduction greatly reduces infrastructure cost (see the fiber-count sketch below)

Reach is unable to meet true DC needs (i.e., to be compatible with the reaches supported by 10GBASE-SR):
- 100m is definitely needed; the cost/power/reach tradeoff above 100m needs to be understood
- Is a second (shorter) reach required? What reach? Can an AOC address it? How would the standard address this case? Further study to define?
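To put a number on the cable reduction: per 100G link, SR10 uses 10 fibers in each direction while a 4x25G SR4-style PMD uses 4. The sketch below simply totals fiber strands for a batch of links; the link count is an arbitrary illustrative value.

```python
# Fibers per 100G link (Tx + Rx strands).
FIBERS_PER_LINK = {"100GBASE-SR10": 10 + 10, "4x25G SR4 (proposed)": 4 + 4}

def total_fibers(links, pmd):
    """Total fiber strands needed for a given number of 100G links."""
    return links * FIBERS_PER_LINK[pmd]

# e.g. 512 inter-switch 100G links in a pod (arbitrary illustrative count).
for pmd in FIBERS_PER_LINK:
    print(f"{pmd}: {total_fibers(512, pmd)} fibers")
# 100GBASE-SR10: 10240 fibers
# 4x25G SR4 (proposed): 4096 fibers
```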

Built today, an SR10 interface is optimized to work with system chipsets using 10G SerDes technology. Next-gen ASIC technology with 25G SerDes will need a Reverse Gearbox (RGB) block function to interface to SR10; the RGB adds cost and power to the system.

The proposed SR4 solution would offer a path to the lowest-component-count interface, but care must be taken in defining the electrical I/O. Next-gen ASIC technology with 25G SerDes will need a Reverse Gearbox block function to interface to SR10, and the RGB adds cost and power to the system. The CDR may be pulled into the ASIC, especially for server and low-port-count implementations.
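To make the gearbox argument concrete, here is a small sketch, under illustrative assumptions, that checks whether an optical PMD's lane structure matches the host ASIC's electrical lanes for a 100G port; when the lane structures differ, a gearbox or reverse gearbox stage is implied between the SerDes and the PMD.

```python
def needs_gearbox(electrical_lanes, electrical_gbps, optical_lanes, optical_gbps):
    """True if electrical and optical lane structures differ, implying a
    (reverse) gearbox between the host SerDes and the optical PMD."""
    assert electrical_lanes * electrical_gbps == optical_lanes * optical_gbps, \
        "both sides must carry the same aggregate rate"
    return (electrical_lanes, electrical_gbps) != (optical_lanes, optical_gbps)

# 25G-SerDes host (4 x 25G electrical) driving 100GBASE-SR10 (10 x 10G optical):
print(needs_gearbox(4, 25, 10, 10))   # True  -> reverse gearbox, extra cost and power
# Same host driving a 4 x 25G SR4-style PMD: lanes align, no gearbox needed.
print(needs_gearbox(4, 25, 4, 25))    # False
```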

- Broad Market Potential: meeting DC requirements addresses BMP; system port density is critical to achieving those requirements. However, two PMDs complicate the BMP response.
- Economic Feasibility: the SG has data already; more cannot hurt.
- Technical Feasibility: solid data establishing feasibility; extra work is needed to justify two PMDs.
- Distinct Identity: two 4x25G MMF PMDs could complicate the response.

Three proposals are under consideration in the SG: 4x25 parallel; PAMn; do nothing (economies of scale best).
- 40G/100G MMF reach limitations are heightening the pressure on SMF to meet DC requirements
- Architecture trends demand high-port-count, low-cost interface solutions
- Reach: DC scale requires reaches up to 2 km, but 300-500m should be the optimization point for the SG
- The parallel proposal has increased cable costs as reach increases (see the sketch below)
- Monitor taps and automated patch panels require duplex SMF
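A hedged illustration of why parallel-fiber cable cost grows with reach faster than a duplex approach: the fiber-length penalty of extra strands scales with link length. The fiber counts per link are assumptions for an SR4-style parallel link versus duplex SMF, not study-group data.

```python
def cable_fiber_km(reach_m, fibers_per_link):
    """Total fiber length (in km) deployed for one link of the given reach."""
    return reach_m / 1000.0 * fibers_per_link

PARALLEL_4X25 = 8   # 4 Tx + 4 Rx strands (assumed parallel SR4-style cabling)
DUPLEX_WDM = 2      # 1 Tx + 1 Rx strand (duplex SMF)

for reach in (100, 500, 2000):
    parallel = cable_fiber_km(reach, PARALLEL_4X25)
    duplex = cable_fiber_km(reach, DUPLEX_WDM)
    print(f"{reach:>5} m link: parallel {parallel:.1f} fiber-km vs duplex {duplex:.1f} fiber-km")
```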

- Broad Market Potential: meeting DC requirements addresses BMP.
- Economic Feasibility: the SG has data already; more cannot hurt.
- Technical Feasibility: the key focus for the SG this meeting and next!
- Distinct Identity: should limit to only one PMD, not both.

Note: this is NOT proposed language for objectives, rather guidance.
- MMF objective: define a PHY supporting 100m of MMF; fiber type to be defined in the TF. More study is needed on the impact of shorter-reach differences (power/size/cost) relative to the 100m option.
- SMF objective: define a single PHY supporting a reach of >= 300m; final reach to be determined in the TF after detailed analysis of technology breakpoints.
- Discussion point to consider: should the reach objective be defined as a minimum, a maximum or a range?

Backup

Mainstream adoption (ToR racks of 40 servers with uplinks to a non-blocking core; worked check below):
- 40x 1G servers -> 40G rack; uplinks 2-4x 10G; non-blocking 10G core
- 10x increase: 40x 10G servers -> 400G rack; uplinks 2-10x 40G or 2-4x 100G; non-blocking 40/100G core
- 4x increase: 40x 40G servers -> 1.6T rack; uplinks 2-16x 100G or 2-4x 400G; non-blocking 400G/1Tb core
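A small worked check of the rack arithmetic above, assuming 40 servers per ToR and a non-blocking target where uplink capacity equals server-facing capacity; server and uplink speeds come from the slide, the rest is simple arithmetic.

```python
def rack_bandwidth_gbps(servers, server_gbps):
    """Server-facing bandwidth of one ToR rack."""
    return servers * server_gbps

def uplinks_for_nonblocking(rack_gbps, uplink_gbps):
    """Uplink count needed so uplink capacity matches server-facing capacity."""
    return -(-rack_gbps // uplink_gbps)  # ceiling division

# 40 x 10G servers -> 400G rack; with 100G uplinks, 4 uplinks make it non-blocking.
rack = rack_bandwidth_gbps(40, 10)
print(rack, uplinks_for_nonblocking(rack, 100))                                   # 400 4

# 40 x 40G servers -> 1.6T rack; 4 x 400G uplinks, or 16 x 100G uplinks.
rack = rack_bandwidth_gbps(40, 40)
print(rack, uplinks_for_nonblocking(rack, 400), uplinks_for_nonblocking(rack, 100))  # 1600 4 16
```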