400G: Deployment at a National Lab

400G: Deployment at a National Lab. Chris Tracy (ESnet), *Jason R. Lee (NERSC). June 30, 2016

Concept

Concept: Use case. This work originally began as a white paper in December 2013, in which ESnet was exploring new technologies to support rates above 100G. One use case in particular was the problem of linking two disparate data centers: NERSC was in the planning phase of a move from the Oakland Scientific Facility (OSF) in Oakland, CA [1] to Berkeley Lab's Shyh Wang Hall (CRT) in Berkeley, CA [2].

Concept: Proposal. By February 2014, interest in these new technologies had grown, leading to a draft proposal submitted to the DOE. In collaboration with NERSC and Ciena, ESnet proposed a field trial of 400G technology on BayExpress, ESnet's Bay Area production dark fiber ring. ESnet5 BayExpress: the production system serving Bay Area laboratories between Sacramento and Sunnyvale.

400G

400G: Plan. The BayExpress ring is 450 km in length. The National Energy Research Scientific Computing Center (NERSC) was moving to a new building only 11.5 km from the current site, the short way around the ring. NERSC needed to stay up and running, serving the large, diverse scientific community it supports: ~6,000 scientists and ~900 projects in 46 countries around the world. There is no time when the center is lightly loaded; as of June 27th there was a 10-day backlog of jobs to run.

400G: Plan. We would create two alien waves, each carrying 200G, and combine them to form a super-channel with 400G of total bandwidth. Wavelength-selective switches are in the path, but they are limited to 50 GHz granularity, so in the production circuit the super-channel occupied 100 GHz of spectral bandwidth (2 x 50 GHz channels).
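A minimal arithmetic sketch of that channel plan (Python; the 200G and 50 GHz figures are taken straight from the slide, nothing here is measured):

```python
# Channel plan for the production 400G super-channel described above.
# Assumed figures are the ones quoted on the slide.

waves = 2                    # two alien waves
rate_per_wave_gbps = 200     # 200G per wave (DP-16QAM)
slot_width_ghz = 50          # WSS granularity on the production ring

total_rate_gbps = waves * rate_per_wave_gbps   # 400G super-channel
spectrum_ghz = waves * slot_width_ghz          # 100 GHz occupied

print(f"capacity            : {total_rate_gbps} Gb/s")
print(f"spectrum occupied   : {spectrum_ghz} GHz")
print(f"spectral efficiency : {total_rate_gbps / spectrum_ghz:.1f} b/s/Hz")
# -> 400 Gb/s in 100 GHz, i.e. 4.0 b/s/Hz
```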

400G: Execution. A network-wide upgrade had to be performed for the new hardware and the optical control plane. 4 x 100 GigE circuits were brought up, fully production quality, between OSF and CRT. On the line (DWDM) side, the service was provisioned on BayExpress as two adjacent 50 GHz channels. Each 50 GHz channel carries one DP-16QAM signal (2 x 100 GigE payload); the DP-16QAM signal line rate is 275.75 Gbit/s (including G.709/FEC overhead).
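For a rough sense of what that line rate implies, the sketch below assumes the payload is simply 2 x 100 GigE = 200 Gb/s of client traffic and ignores OTN mapping details, lumping framing and FEC into a single overhead figure:

```python
# Per-channel figures quoted above: 275.75 Gb/s line rate per 50 GHz slot,
# carrying 2 x 100 GigE of client traffic.

line_rate_gbps = 275.75     # incl. G.709/FEC overhead (from the slide)
client_rate_gbps = 2 * 100  # 2 x 100 GigE clients per channel
slot_width_ghz = 50

overhead = line_rate_gbps / client_rate_gbps - 1
line_side_density = line_rate_gbps / slot_width_ghz

print(f"framing + FEC overhead     : {overhead:.1%}")          # ~37.9%
print(f"line-side spectral density : {line_side_density:.2f} b/s/Hz")
```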

NERSC

NERSC: Physical topology (diagram of the fiber path between CRT and OSF)

NERSC: Synchronizing file systems. The goal was to sync file systems between sites while keeping jobs running on the supercomputers: about ~10 PB of file system data in total, migrated via GPFS restripe with both sites live. We achieved a sustained rate of ~250 Gbps over the link, limited by the number of sinks/sources we could allocate to the transfer; we did push 400 Gbps during acceptance of the link. The data path was: Disk -> 10G Ethernet -> 400G super-channel -> Ethernet/InfiniBand routers -> Disk. All the disk at CRT was InfiniBand-connected.
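As an illustrative back-of-the-envelope check (not a measurement), the slide's own numbers imply only a few days of pure transfer time; the real migration took longer because of restriping, metadata work, and contention from live jobs:

```python
# Ideal bulk-transfer time implied by the figures above:
# ~10 PB of GPFS data at a sustained ~250 Gb/s.

data_pb = 10                    # ~10 PB of file system data
sustained_gbps = 250            # sustained rate observed over the link

data_bits = data_pb * 1e15 * 8  # decimal petabytes -> bits
seconds = data_bits / (sustained_gbps * 1e9)

print(f"ideal transfer time: {seconds / 86400:.1f} days")   # ~3.7 days
```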

NERSC: File system transfers (plot of transfer rates between OSF and CRT)

NERSC: WAN. Key component: the 200G 16QAM transponder.

400G: Production (Sept 2015). 400G service termination point (4 x 100G Ethernet clients) at the LBNL Ciena node, Berkeley, CA.

Testbed

Testbed: Sept 2015. Demonstrated a 400G super-channel in the lab at LBNL with 37.5 GHz spacing over an 80 km fiber spool, giving better utilization of the spectral bandwidth, and using Raman amplification with integrated OTDR. This validates next-generation ROADM technology: flexible (gridless), colorless mux/demux. Level 3's acquisition of TW Telecom last year caused some delay in bringing up the dark fiber for this project. The goal is to characterize the next-generation ROADM architecture in the real world and gain operational experience.
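To put the 37.5 GHz spacing in context against the 50 GHz grid used in production, a small comparison, again using only the figures quoted on the slides:

```python
# Spectrum use for a 400G super-channel: production fixed grid vs. the
# testbed's 37.5 GHz flex-grid spacing.

rate_gbps = 400
fixed_grid_ghz = 2 * 50.0   # production: two 50 GHz channels
flex_grid_ghz = 2 * 37.5    # testbed: two carriers at 37.5 GHz spacing

for label, spectrum in (("50 GHz fixed grid", fixed_grid_ghz),
                        ("37.5 GHz flex grid", flex_grid_ghz)):
    print(f"{label}: {spectrum:5.1f} GHz -> {rate_gbps / spectrum:.2f} b/s/Hz")

# The flex-grid plan frees 25 GHz per super-channel, i.e. ~33% better
# spectral efficiency for the same 400G.
```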

Testbed: 400G (May 2016)

Testbed: Industry Partner. The industry partner provided hands-on technical assistance, loaned four 40 km single-mode fiber (SMF-28) spools, and donated equipment: two colorless mux/demux, two Raman amplifiers, and two switchable line amplifiers. (Photos: four 40 km SMF-28 fiber spools, a colorless mux/demux, a Raman amp, a switchable line amp.)

Final Thoughts

Summary: Project Timeline
2013 Dec: White paper on Moving ESnet Beyond 100G.
2014 Feb: Draft proposal.
2014 May: FWP submitted to PAMS. Ciena presents the SC13 400G super-channel at TNC2014 [4].
2014 Sept: Receive FY14 guidance.
2015 Jan: CR ends. Receive FY15 guidance; project kick-off. Ciena and Brocade equipment procurement.
2015 Feb: ALU equipment procurement. Level 3 and ESnet complete the Ciena code upgrade.
2015 Mar: Level 3 splicing procurement.
2015 May: 400G testbed running in the lab; super-channel proof of concept with Ciena spools (80 km). Second Ciena equipment procurement.
2015 Jul: 400G link across BayExpress (11.5 km) put into production, ready for the upcoming NERSC relocation to Berkeley.
2015 Nov: Press releases (right before SC15); Shyh Wang Hall building dedication.
2015 Dec: NERSC relocates to the Berkeley facility. Level 3 splicing complete; dark fiber delivered.
2016 May: Field trial: 400G super-channel across 93.3 km (dark fiber plus spools).

Summary: Filesystem transfers (plot)

Summary: Final Thoughts. The project took almost two years to deploy. It worked almost flawlessly for the 11 km length; it does not work around the entire 450 km ring (insufficient OSNR). It took less than a month to move all the data from OSF to CRT, with no apparent downtime for users, and about an hour per file system to remount after a final sync. The link is still in production today as NERSC moves out of OSF.

Thank you!

Contact Info: PI: Chris Tracy, ctracy@es.net. Co-PI: Jason Lee, jason@lbl.gov.

National Energy Research Scientific Computing Center

NERSC: WAN Topology

WAN: Topology (continued)

NERSC: WAN Fiber. The fiber provider loaned the 11.5 km dark fiber (BCXN6956) between Oakland and Berkeley and supported the Ciena code upgrade needed for the new hardware from this project. ESnet contributed funds for the fiber splicing work. The fiber path shown is approximate.