Programmable Information Highway (with no Traffic Jams)

Inder Monga, Energy Sciences Network, Scientific Networking Division, Lawrence Berkeley National Lab

Exponential Growth
[Chart: ESnet accepted traffic, Jan 1990 - Aug 2012, log scale, in petabytes per month, showing ~70% CAGR and a projection toward 2015. Sources: Wikipedia, NERSC]
Is there an impedance mismatch?
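
To make the 70% CAGR concrete, here is a minimal sketch of how quickly traffic compounds at that rate. The starting volume is an assumption for the example, not a figure from the chart.

```python
# Illustrative only: compound growth at the ~70% CAGR quoted on the slide.
# The starting volume (10 PB/month) is an assumption for the example.
import math

cagr = 0.70                      # 70% compound annual growth rate
start_pb_per_month = 10.0        # assumed starting traffic volume

doubling_time_years = math.log(2) / math.log(1 + cagr)
print(f"Doubling time: {doubling_time_years:.2f} years")   # ~1.3 years

for years in (1, 3, 5, 10):
    volume = start_pb_per_month * (1 + cagr) ** years
    print(f"After {years:2d} years: {volume:8.1f} PB/month "
          f"({(1 + cagr) ** years:6.1f}x growth)")
```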

HEP as a Prototype for Data-Intensive Science. Data courtesy of Harvey Newman (Caltech), Richard Mount (SLAC), and the Belle II CHEP 2012 presentation.

Moving data is going to be a way of life

Network: an integral part of the application workflow
[ESnet site map: SEAT, PNNL, LBNL, JGI, ANL, AMES, SUNN, Salt Lake, SNLL, LLNL, LOSA, SDSC, LASV, ALBU, LANL, SNLA, STAR, EQCH, CLEV, PPPL, GFDL, PU Physics, JLAB, BNL]

A Network-Centric View of the LHC

CERN to Tier 1 distances (source: Bill Johnston):
  France            350 miles     565 km
  Italy             570 miles     920 km
  UK                625 miles  ~1,000 km
  Netherlands       625 miles  ~1,000 km
  Germany           700 miles   1,185 km
  Spain             850 miles   1,400 km
  Nordic          1,300 miles  ~2,100 km
  USA - New York  3,900 miles   6,300 km
  USA - Chicago   4,400 miles  ~7,100 km
  Canada - BC     5,200 miles   8,400 km
  Taiwan         ~6,100 miles   9,850 km

The LHC Open Network Environment (LHCONE) and the LHC Optical Private Network (LHCOPN) connect the tiers. The detector produces about 1 PB/s; the Level 1 and 2 triggers and the Level 3 trigger reduce this to ~50 Gb/s (25 Gb/s ATLAS, 25 Gb/s CMS) into the CERN Computer Center (LHC Tier 0), over distances ranging from O(1) meter at the detector to O(1) km on site. Tier 0 keeps a deep archive and sends data roughly 500-10,000 km to the LHC Tier 1 Data Centers, which in turn feed the LHC Tier 2 Analysis Centers.
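
To put these scales in perspective, a small illustrative calculation of how long moving a large dataset takes at different sustained rates. The 1 PB dataset size and the list of link rates are assumptions for the example, not figures from the slide.

```python
# Illustrative only: time to move a large dataset at various sustained rates.
# The 1 PB dataset size and the link speeds are assumptions for the example.
dataset_bytes = 1e15               # 1 PB
rates_gbps = [1, 10, 50, 100]      # sustained end-to-end throughput in Gb/s

for rate in rates_gbps:
    seconds = dataset_bytes * 8 / (rate * 1e9)
    print(f"{rate:4d} Gb/s sustained: {seconds / 3600:8.1f} hours "
          f"({seconds / 86400:5.1f} days)")
```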

A Network-Centric View of the SKA (source: Bill Johnston; hypothetical, based on the LHC experience)
Receptors/sensors, an average of ~200 km away, deliver ~15,000 Tb/s aggregate to the correlator / data processor, which sends a 400 Tb/s aggregate from SKA RFI across ~25,000 km (Perth to London via the USA) or ~13,000 km (South Africa to London) to a European distribution point. From there, data fans out to national tier 1 supercomputer centers at 0.1 Tb/s (100 Gb/s) aggregate, with one fiber data path per tier 1 data center at 0.03 Tb/s (30 Gb/s) each, and on to national astronomy communities.

New thinking changes the language of interaction

Network as infrastructure:
- Provides best-effort IP dialtone
- Average end-to-end performance; packet loss is fine
- "How much bandwidth do you need? (1G/10G/100G)"
- "Ping works, you are all set, go away"

Network as instrument:
- Adapts to the requirements of the experiment, the science, and the end-to-end flow
- Highly calibrated, zero packet loss end-to-end
- "What's your sustained end-to-end performance in bits/sec?"
- "Can I get the same performance anytime?"
- Tuned to meet the application's workflow needs, across network domains

Adjectives for the Network as an Instrument (NaaI): End-to-End, Programmable, Simple, Predictable

Science DMZ: remove roadblocks to end-to-end performance

The Science DMZ is a well-configured location for high-performance, WAN-facing science services:
- Located at or near the site perimeter on dedicated infrastructure
- Dedicated, high-performance data movers
- Highly capable network devices (wire-speed, deep queues)
- Virtual circuit connectivity
- Security policy and enforcement specific to science workflows
- perfSONAR
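
One way to check that a Science DMZ path actually delivers loss-free, high-rate transfers is to test between the dedicated data transfer nodes. A minimal sketch, assuming iperf3 is installed locally and an iperf3 server is already running on the remote DTN; the hostname and test parameters are placeholders.

```python
# Illustrative sketch: measure sustained TCP throughput between two data
# transfer nodes using iperf3 (assumes "iperf3 -s" is running on the remote
# host and iperf3 is installed locally; the hostname below is a placeholder).
import json
import subprocess

REMOTE_DTN = "dtn.example.org"   # placeholder hostname

result = subprocess.run(
    ["iperf3", "-c", REMOTE_DTN, "-t", "30", "-P", "4", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

sent = report["end"]["sum_sent"]
gbps = sent["bits_per_second"] / 1e9
retransmits = sent.get("retransmits", 0)

print(f"Sustained throughput: {gbps:.2f} Gb/s")
print(f"TCP retransmits:      {retransmits}")
if retransmits > 0:
    print("Retransmits suggest packet loss somewhere on the path.")
```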

Programmable networks (1): Service Interfaces
Intelligent network services (reservation, scheduling) exposed through a standard service interface: NSI (OGF).
Slide courtesy: ARCHSTONE project
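
To illustrate the idea of a standard service interface for scheduled bandwidth reservations, here is a hypothetical sketch. This is not the actual NSI (OGF) schema; the class, field names, and endpoint identifiers are invented to show the shape of such an interface.

```python
# Hypothetical illustration of a scheduled bandwidth-reservation request.
# This is NOT the real NSI (OGF) message format; field names and values
# are invented to show the shape of such a service interface.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class CircuitReservation:
    source_endpoint: str        # network service termination point (placeholder)
    dest_endpoint: str
    bandwidth_mbps: int         # requested capacity
    start_time: datetime        # scheduled activation
    end_time: datetime          # scheduled teardown

    def validate(self) -> None:
        if self.end_time <= self.start_time:
            raise ValueError("end_time must be after start_time")
        if self.bandwidth_mbps <= 0:
            raise ValueError("bandwidth must be positive")


# Example: reserve 10 Gb/s between two placeholder endpoints for 4 hours.
request = CircuitReservation(
    source_endpoint="urn:example:lbl:dtn-1",
    dest_endpoint="urn:example:bnl:dtn-2",
    bandwidth_mbps=10_000,
    start_time=datetime(2013, 6, 1, 20, 0),
    end_time=datetime(2013, 6, 1, 20, 0) + timedelta(hours=4),
)
request.validate()
print(request)
```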

Programmable networks (2): Software-Defined Networking (SDN)
Flexible and programmable, with separation of flows. Demonstrated at Joint Techs 2011.

Programmable networks (3): New protocols for high-speed data transfer
Eric Pouyoul, Inder Monga, Brian Tierney (ESnet), Martin Swany (Indiana) and Ezra Kissel (U. of Delaware)

Bridging end-site dynamic flows with WAN dynamic tunnels:
- Zero-configuration virtual circuit from end-host to end-host
- Automated discovery of circuit end-points
- Fully automated, end-to-end, dynamically stitched virtual connection: OpenFlow @ SC11 (Seattle), OSCARS/ESnet4, OpenFlow @ BNL

Cross-country RDMA-over-Ethernet high-performance data transfers (SC11 demonstration):
- 9.8 Gbps on a 10 Gbps, 78 ms RTT link between Brookhaven, NY and Seattle, WA
- <4% CPU load, compared to a single TCP stream at ~80% utilization
- No special host hardware other than a NIC with RoCE support
- CPU core utilization: 70-90% for 1-stream TCP vs. 3-4% for RDMA
[Chart: RDMA vs. TCP throughput, Mb/s over time (s)]
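
A quick back-of-the-envelope check, sketched from the figures quoted above, of what sustaining ~10 Gb/s over a 78 ms RTT path demands of the transport: the bandwidth-delay product, i.e. the amount of data that must be kept in flight.

```python
# Bandwidth-delay product for the link quoted above: ~10 Gb/s at 78 ms RTT.
# This is the amount of unacknowledged data that must be in flight
# (e.g. the TCP window) to keep the pipe full.
link_gbps = 10.0
rtt_seconds = 0.078

bdp_bits = link_gbps * 1e9 * rtt_seconds
bdp_mbytes = bdp_bits / 8 / 1e6
print(f"Bandwidth-delay product: {bdp_mbytes:.0f} MB in flight")  # ~98 MB
```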

Simple abstractions
How can storage leverage this simple network abstraction?
[Diagram: SDN controllers program flows onto a virtual switch that abstracts the wide area network]
SRS demo with Ciena, in Ciena's booth 2437
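
As a purely hypothetical sketch of what "program flows" might look like from a storage workflow's point of view: the controller URL, REST path, and JSON fields below are invented for illustration and do not correspond to any real controller's API.

```python
# Hypothetical illustration: ask an SDN controller to install a flow that
# steers a storage transfer onto the virtual switch abstraction of the WAN.
# The controller URL, REST path, and JSON fields are invented for this
# example and do not correspond to a specific controller's API.
import json
import urllib.request

CONTROLLER_URL = "http://sdn-controller.example.org:8080/flows"  # placeholder

flow_request = {
    "match": {
        "src_host": "dtn-storage.example.org",   # local data transfer node
        "dst_host": "dtn-remote.example.org",
        "dst_port": 5000,                         # transfer application port
    },
    "action": {
        "forward_via": "wan-virtual-switch",      # the WAN abstraction
        "guaranteed_mbps": 10_000,
    },
}

req = urllib.request.Request(
    CONTROLLER_URL,
    data=json.dumps(flow_request).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit it; omitted here because the
# controller URL is a placeholder. We only show the shape of the request.
print("Would POST to:", req.full_url)
print(json.dumps(flow_request, indent=2))
```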

Conclusion
- Moving data fast(er) is a 21st-century reality: distributed science collaborations, large instruments, cloud computing
- The network is not an infrastructure but an instrument
- Think different: do not set your expectations of the network as merely your traffic highway
- Simple, programmable network abstractions with a service interface
- How will storage workflows leverage that?

Inder Monga imonga@es.net www.es.net, fasterdata.es.net Thank You!