Extending dynamic Layer-2 services to campuses


Extending dynamic Layer-2 services to campuses

Scott Tepsuporn and Malathi Veeraraghavan, University of Virginia (UVA), mvee@virginia.edu
Brian Cashman, Internet2, bsc@internet2.edu

April 1, 2015, FTW Intl. OpenFlow/SDN Testbeds, Miami, FL

Thanks to A. J. Ragusa and Luke Fowler (IU), Chin Guok (ESnet), and T. Lehman and X. Yang (MAX), coauthors on a submitted paper.

Thanks also to Ezra Kissel (Indiana U.), Dale Carder and Jerry Robaidek (U. Wisconsin), Ivan Seskar and Steve Decker (Rutgers U.), R. D. Russell and P. MacArthur (U. New Hampshire), Conan Moore (U. Colorado), Ryan Harden (U. Chicago), Ron Withers (U. Virginia), John Lawson (MARIA), Eric Boyd (Internet2), the GRNOC, and several regional REN providers for their support.

Thanks to NSF for grants CNS-1116081, OCI-1127340, ACI-1340910, CNS-1405171, and ACI-0958998, and to DOE for grant DE-SC0011358C.

Outline
- What was done?
- How was it done?
- Why do this? Long-term vision
- Contributions to community
- Control-plane models
- International component

What was done?
- Configured DYNES in 8 campuses: a WAN multi-domain testbed
- What is DYNES?
  - Eric Boyd, Shawn McKee, Harvey Newman, Paul Sheldon: PIs of an NSF MRI project
  - Per site: File Data Transfer (FDT) host + switch (OpenFlow) + SDN controller (IDC) + perfSONAR host
  - 40 universities and 11 regionals
- Dynamically created inter-domain L2 paths via the OESS GUI (running OSCARS on most DYNES IDCs)
- Configured the FDT: vconfig, ifconfig, Linux tc
- Tested nuttcp and GridFTP: 0 loss?
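The FDT configuration named above might be sketched as follows with the tools the slide lists; the VLAN ID, interface name, subnet, and 3 Gbps rate are assumed illustrative values, not taken from the deck.

```shell
# Hypothetical FDT host setup (VLAN 3001, interface eth2, and the
# 10.10.3.0/24 subnet are assumed values for illustration).
vconfig add eth2 3001                                  # tagged sub-interface for the L2 circuit
ifconfig eth2.3001 10.10.3.1 netmask 255.255.255.0 up  # address it inside the circuit's subnet
tc qdisc add dev eth2.3001 root handle 1: htb default 10      # attach an HTB shaper
tc class add dev eth2.3001 parent 1: classid 1:10 htb rate 3gbit  # shape to the reserved rate
```

These commands require root and a host with the 8021q module loaded; on modern systems `ip link add link eth2 name eth2.3001 type vlan id 3001` replaces `vconfig`.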

Campuses involved
- CU (UCAR), UWisc, UNH, I2Lab, U. Chicago, Rutgers, IU, MAX, UVA, VTech, UTD, UH
- Regionals: VA: MARIA; Rutgers: MAGPI; UChicago, UWisc: CIC; IU: Indiana GigaPoP; UNH: NOX; CU: FRGP
(Map legend: DYNES sites in use; new sites)

How was it done? Method
(Brian Cashman: significant help!)
For each campus:
- Requested logins on the FDT with sudo access
- Assisted the campus admin to install, configure, and run OSCARS and OESS
- Assisted the campus admin to organize static VLANs through the campus network and regionals
- Provisioned inter-domain circuits automatically
- Provisioned the FDTs at each end of the circuit manually
- Ran nuttcp and GridFTP with HTCP/Reno and tc rate shaping, via cron jobs, to measure loss and throughput
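A cron-driven probe along these lines might look like the sketch below; the host name, schedule, paths, and durations are my assumptions, not details from the slides.

```shell
# Hypothetical hourly probe toward the far-end FDT host.
# crontab entry (assumed path and schedule):
#   0 * * * * /usr/local/bin/circuit-probe.sh >> /var/log/circuit-probe.log 2>&1
# circuit-probe.sh body: a 30-second nuttcp run bracketed by TCP
# retransmission counters, whose difference estimates loss for the run.
netstat -s | grep -i retrans
nuttcp -T30 -i5 fdt.remote.example.edu
netstat -s | grep -i retrans
```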

[Diagram: multi-domain deployment. University and regional domains each run an IDC (OSCARS + OESS); circuits cross Internet2 ION/AL2S and ESnet to reach DYNES FDT and perfSONAR hosts at the universities and DOE sites.]

Examples: end-to-end L2 paths between UVA and IU, and between UVA and UWisc.

Lessons learned
- OSCARS and OESS software works well, but when something goes wrong the error messages are cryptic; error reporting needs community help to improve
- Topology approach: does it scale? Use DNS?
- Tools are required for debugging multi-domain L2 paths
- Providers may police rate-guaranteed paths
  - Need to set the tc ceiling (ceil) option; throughput was higher when shaping at 45 Mbps than at 50 Mbps on a 50 Mbps circuit through ION
- It was good to have ION to gain this experience
- AuthN/AuthZ: add DYNES to a Shibboleth single-sign-on service, or a GlobusOnline-type service: which is more scalable?
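The ceil workaround above can be sketched as follows; the interface name is an assumed placeholder, and the 45 Mbps figure is the shaping rate the slide reports working well on the 50 Mbps ION circuit.

```shell
# Sketch: shape below a policed circuit rate (interface name assumed).
# For a 50 Mbps ION circuit, capping the host at 45 Mbps with rate == ceil
# makes bursts queue at the host instead of being dropped by the policer.
tc qdisc replace dev eth2.3001 root handle 1: htb default 10
tc class replace dev eth2.3001 parent 1: classid 1:10 htb rate 45mbit ceil 45mbit
```

With HTB, `ceil` bounds how much a class may borrow beyond its guaranteed `rate`; setting them equal removes borrowing entirely, which is the conservative choice against an in-network policer.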

Why pursue this course?
- Rate-guaranteed circuits offer a solution to the TCP throughput problem: 0.0046% loss on high-BDP paths causes throughput to crash (ESnet SC13 paper)
- Dynamic L1 circuits: rates have reached levels where WDM optical circuits are economically viable
  - L1 now has colorless, directionless, contentionless ROADMs, allowing rate-guaranteed 100 Gbps DTN-to-DTN circuits
- Dynamic circuits: a solution to the rare big-dataset movement needs of the scientific community
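The loss sensitivity cited above can be illustrated with the well-known Mathis TCP model (rate ≈ (MSS/RTT) · 1.22/√p); the model, the 1460-byte MSS, and the 26 ms RTT (the UVA-to-IU RTT from the backup slides) are my illustrative choices, not part of the deck.

```shell
# Mathis et al. steady-state TCP estimate at the slide's 0.0046% loss rate.
# p = 0.000046, MSS = 1460 B, RTT = 26 ms (assumed illustrative values).
awk 'BEGIN {
  mss_bits = 1460 * 8; rtt = 0.026; p = 0.000046
  printf "%.0f Mbps\n", (mss_bits / rtt) * 1.22 / sqrt(p) / 1e6
}'
```

This works out to roughly 81 Mbps, i.e. under 1% of a 10 Gbps path, which is the sense in which a tiny loss rate makes throughput "crash" on high-BDP paths.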

Visions of ARPAnet-like growth!
(Picture taken in the LBJ Library, Austin, Texas, '60s exhibit, Oct. 2014)

Contributions to community
- Extending dynamic L2 service to campuses by having engineers/students gain experience with OSCARS/OESS setup and usage
- End-host configuration: use of tc; Circuit TCP to avoid HTCP cwnd changes
- Develop: applications for end-to-end L2 paths
- FCAPS (Fault, Configuration, Accounting, Performance, and Security management): management-plane help to improve OSCARS and OESS (error reporting, autoconfiguration)
- CC-NIE awards (Science DMZ): many campus deployments; grow this service

Control-plane models
- Daisy-chain vs. tree model: research literature and IETF PCE work
- To avoid lockup of resources, daisy-chaining requires:
  - Limited resource allocation on the forward signaling path
  - Multiple start-time options to increase the chance of success
  - Fast processing
- Tree model: what are its AuthN needs?
- Global PSTN: in the daisy-chain model, no customer-provider relationships are required with providers more than two hops away; not so in the tree model
- Testbed view (GENI) vs. ARPAnet growth view

International component
- Added Keio University, Yokohama, Japan
- OSCARS successfully set up an L2 circuit
- ping didn't work: need to create a trouble ticket

Requesting your feedback
- First approach: grow this deployment to CC-NIE/other DYNES sites; create a virtual organization of individuals to develop diagnostic tools, improve OSCARS and OESS, and develop applications
- Second approach: add an Aggregate Manager and contribute this testbed to GENI for networking researchers
- Third approach: develop an L1 (WDM optical) SDN testbed

Backup slides

UVA DYNES data-plane with an example VLAN.

nuttcp throughput for paths through AL2S (blue) and ION (red). AL2S path rates are in Gbps; the ION circuits (I2Lab, UWisc) were rate-limited in Mbps, consistent with the 50 Mbps ION circuit on the Lessons learned slide. Throughput columns are in Mbps.

Path UVA-to-   Path rate   tc   Min.   Mean   Max.   IQR
IU             4 Gbps      R    2933   3856   3927   39
MAX            4 Gbps      R    3695   4070   4105   27
MAX            3 Gbps      R    2938   3218   3262   27
MAX            3 Gbps      C    3132   3221   3248   17
MAX            3 Gbps      B    3124   3221   3250   19
IU             3 Gbps      B    609    2973   3132   32
I2Lab          50 Mbps     R    42.1   45.1   47.1   0.89
I2Lab (Reno)   50 Mbps     R    25.9   37.2   41.4   2.01
UWisc          45 Mbps     R    44.1   44.4   44.6   0.19
UWisc          45 Mbps     C    44.1   44.4   44.6   0.17
UWisc          50 Mbps     R    36.5   39.7   40.9   0.61
UWisc          50 Mbps     C    37.3   39.6   41.0   0.8
UWisc          50 Mbps     B    37.5   39.7   40.7   0.58

Path UVA-to-   RTT (ms)
IU             26
MAX            4.4
I2Lab          27.5
UWisc          24.1

Packet retransmission statistics (path rates as in the previous table):

Path UVA-to-   Path rate   tc   Mean pkt retx rate   Mean # retx, first 2 s   % runs w/ retx in later secs
IU             4 Gbps      R    0.00075              4.8                      13
MAX            4 Gbps      R    0.00085              47.8                     12
MAX            3 Gbps      R    0.0007               58.7                     4
MAX            3 Gbps      C    4e-05                3.3                      6
MAX            3 Gbps      B    4e-05                3.5                      6
IU             3 Gbps      B    0.006                95                       7
I2Lab          50 Mbps     R    0.07                 20.5                     97
I2Lab (Reno)   50 Mbps     R    0.02                 19.7                     30
UWisc          45 Mbps     R    0.002                0.83                     60
UWisc          45 Mbps     C    0.0015               0.73                     48
UWisc          50 Mbps     R    0.15                 17.8                     100
UWisc          50 Mbps     C    0.148                17.4                     96
UWisc          50 Mbps     B    0.149                17.3                     96

R: max rate guaranteed by tc; C: ceiling (ceil) limits the max sending rate.

GridFTP tests
Disk-to-disk transfers: 20 GiB × 1 file for the single-file case; 20 MiB × 1024 files for LOSF (Lots of Small Files). tc=C at 3 Gbps; -fast, -pp, and -cc 16 were used.

Path UVA-to-   Type     Min.   Mean   Max.   IQR
MAX            Single   3179   3230   3246   15
MAX            LOSF     1485   2035   2246   549
IU             Single   1247   2181   2455   150
IU             LOSF     1519   2025   2178   39

GridFTP-reported throughput (Mbps) for paths through AL2S.
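An invocation using the flags listed above might look like the following sketch with globus-url-copy (the usual GridFTP client); the host name and paths are assumed placeholders, not details from the deck.

```shell
# Hypothetical LOSF transfer with the slide's flags (host/paths assumed):
#   -fast  reuse data channels across files
#   -pp    pipeline commands to hide per-file round trips
#   -cc 16 run 16 concurrent FTP connections
globus-url-copy -r -fast -pp -cc 16 \
  file:///data/losf/ gsiftp://fdt.remote.example.edu/data/losf/
```

Pipelining and concurrency are what narrow the LOSF-vs-single-file gap the table shows, since per-file control-channel round trips otherwise dominate for many small files.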