THOUGHTS ON SDN IN DATA INTENSIVE SCIENCE APPLICATIONS

THOUGHTS ON SDN IN DATA INTENSIVE SCIENCE APPLICATIONS
Artur Barczyk / Caltech
Internet2 Technology Exchange, Indianapolis, October 30th, 2014
Artur.Barczyk@cern.ch

HEP context for this talk
- LHC experiments now moving 100+ PB per year
- Main driver of R&E network bandwidth utilization over the past decade
- LHC restart in Spring 2015, expect traffic to grow
[Figure: ATLAS data transfer rates, monthly average transfer throughput (MB/s), 2009-01-01 00:00 to 2014-02-08 00:00 UTC, LHCONE VRF, by destination: CA, CERN, DE, ES, FR, IT, ND, NL, RU, TW, UK, US]

New data movement and access patterns
- Change in Computing Models towards:
  - more flexibility (source/destination pairings)
  - more dynamic behaviour (popularity-based caching)
  - remote access
- Different flow characteristics and patterns: data production, data processing; cached vs. remote access
- One constant: computing will remain massively distributed, so WAN performance will be key to success!
[Diagram: Tier-0/CAF, Tier-1, and Tier-2 site hierarchy]

New capability through SDN: an example
- Leveraging provider diversity in LHCONE: many NRENs, providing multiple paths with multiple ...
- A standard network forwards all packets for a given subnet on the same path
- Multipath configurations are not trivial, especially when coupled with multiple administrative domains
- An SDN approach allows building a flexible yet robust and deterministic forwarding scheme (see the sketch below)
- Similarities with the (general) SDX concept
- Slides from the LHCONE workshop, February 2013
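To make the contrast with per-subnet forwarding concrete, here is a minimal, self-contained sketch (my illustration, not from the talk) of deterministic per-flow path selection across several provider paths; the PATHS list, the Flow type, and select_path() are hypothetical names used only for this example.

```python
# Minimal sketch (not from the talk): per-flow path selection across several
# provider paths, in contrast to per-subnet forwarding where every packet
# toward a destination prefix takes the same path. PATHS and select_path()
# are hypothetical illustration, not an existing API.
import hashlib
from dataclasses import dataclass

# Assumed set of inter-domain paths offered by different NRENs (hypothetical).
PATHS = ["GEANT", "ESnet", "Internet2", "NORDUnet"]

@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str

def select_path(flow: Flow) -> str:
    """Deterministically map a 5-tuple to one of the available paths.

    Hashing the full 5-tuple (rather than just the destination subnet)
    lets different flows to the same site use different providers,
    while each individual flow stays on a single, predictable path.
    """
    key = f"{flow.src_ip}:{flow.src_port}-{flow.dst_ip}:{flow.dst_port}-{flow.proto}"
    digest = hashlib.sha256(key.encode()).digest()
    return PATHS[digest[0] % len(PATHS)]

if __name__ == "__main__":
    f1 = Flow("192.0.2.10", "198.51.100.20", 40001, 443, "tcp")
    f2 = Flow("192.0.2.11", "198.51.100.20", 40002, 443, "tcp")
    print(select_path(f1), select_path(f2))  # may differ even for the same destination
```

Because the mapping is a pure function of the 5-tuple, forwarding stays deterministic for each flow while flows to the same subnet can be spread over different NRENs.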

OpenFlow + Dynamic Circuits
- Provision capacity between OpenFlow switches based on real-time requirements
- Approach in the OLiMPS project: flow management in the OpenFlow controller triggers circuit requests to the OSCARS controller, i.e. it creates a topology that optimizes the load distribution in the network (a rough sketch follows)
- Future: couple with the data transfer application
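As a rough illustration of the idea of flow management triggering circuit requests (my simplification, not the actual OLiMPS code), the sketch below adds capacity between two OpenFlow switches when observed demand approaches what has been provisioned; request_circuit() stands in for an OSCARS-like reservation call, and the 1 Gb/s headroom threshold and 10 Gb/s increment are invented for the example.

```python
# Minimal sketch of the load-triggered idea, not the actual OLiMPS code:
# when demand between two OpenFlow switches approaches the capacity already
# provisioned, ask a circuit controller (OSCARS-like, represented here by the
# hypothetical request_circuit()) for more capacity.
from collections import defaultdict

PROVISIONED_GBPS = defaultdict(float)   # capacity already set up per switch pair
DEMAND_GBPS = defaultdict(float)        # observed demand per switch pair

def request_circuit(src_switch: str, dst_switch: str, gbps: float) -> None:
    """Placeholder for a call to a dynamic-circuit controller (e.g. an
    OSCARS-style reservation API); here it only records the new capacity."""
    PROVISIONED_GBPS[(src_switch, dst_switch)] += gbps
    print(f"circuit request: {src_switch} -> {dst_switch}, {gbps} Gb/s")

def on_flow_stats(src_switch: str, dst_switch: str, observed_gbps: float) -> None:
    """Called by the flow-management logic with per-pair traffic estimates."""
    pair = (src_switch, dst_switch)
    DEMAND_GBPS[pair] = observed_gbps
    headroom = PROVISIONED_GBPS[pair] - observed_gbps
    if headroom < 1.0:  # hypothetical threshold: keep ~1 Gb/s of headroom
        request_circuit(src_switch, dst_switch, gbps=10.0)

if __name__ == "__main__":
    on_flow_stats("caltech-of1", "cern-of1", observed_gbps=9.5)
```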

Another SDN Value Proposition
- Fact: not all flows, users, or application requirements are the same
- A (the?) key value proposition of SDN in R&D networks: it allows providing not only differentiated but tailored services, customized to the particular needs of a research community
- This can be done with a minimal amount of effort (once the system is built)
- HEP: one example of a vision could be to differentiate between data production and transfer flows ("elephants") and analysis data flows in remote-access scenarios ("mice"); see the classification sketch below. E.g.:
  - old model: not the same sites (the Tier-1/2/3 definition!); new model: roles defined by temporally varying requirements
  - not the same capacity and access latency requirements
  - not the same resiliency requirements
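A minimal sketch of one way such differentiation could be implemented, assuming flows are classified purely by observed volume; the 1 GiB threshold and the two service names are hypothetical choices made only for this example.

```python
# Minimal sketch (my illustration, not from the talk): classify flows as
# "elephants" (bulk data production/transfer) or "mice" (interactive analysis /
# remote access) from observed byte counts, and give each class a different
# service. The threshold and the service names are hypothetical.
from collections import defaultdict

ELEPHANT_BYTES = 1 << 30           # hypothetical: >1 GiB observed => elephant
flow_bytes = defaultdict(int)      # cumulative bytes per flow 5-tuple

def account(flow_id: tuple, nbytes: int) -> str:
    """Update byte counters and return the service class for this flow."""
    flow_bytes[flow_id] += nbytes
    if flow_bytes[flow_id] >= ELEPHANT_BYTES:
        # e.g. steer onto a dedicated high-capacity path or queue
        return "bulk-transfer-service"
    # e.g. keep on a low-latency, lightly loaded default path
    return "low-latency-service"

if __name__ == "__main__":
    xrootd_read = ("t2-host", "t1-host", 51000, 1094, "tcp")
    for _ in range(3):
        print(account(xrootd_read, 600 * 1024 * 1024))
```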

How can HEP profit from SDN?
- At the computing sites: network virtualization; scalability, flexibility, robustness, security (think Science DMZ here)
- In the WAN: flexible services
- But also through a programmatic interface (TBD) between the application and the network:
  - tight integration; a direct feedback loop -> a reactive system
  - increased predictability; reduced distribution tails
  - dynamic workflow optimization
  - increased efficiency
  (one hypothetical shape of such an interface is sketched below)
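The interface itself is left TBD in the talk; as one hypothetical shape it could take, the sketch below lets a transfer application state volume and deadline, and the network side answer with a committed rate and a time estimate. TransferRequest, NetworkOffer, and NetworkService are invented names for this example, not an existing API.

```python
# One possible shape of an application-to-network interface, sketched as a
# minimal abstraction (entirely hypothetical; the talk leaves the interface
# "TBD"). The application states what it needs for a transfer, the network
# side answers with what it can commit to, and the application can adapt.
from dataclasses import dataclass

@dataclass
class TransferRequest:
    src_site: str
    dst_site: str
    volume_tb: float        # data volume to move, in terabytes
    deadline_hours: float   # when the transfer should be finished

@dataclass
class NetworkOffer:
    granted_gbps: float
    estimated_hours: float

class NetworkService:
    """Stand-in for the network-side controller exposing the interface."""
    def __init__(self, available_gbps: float = 40.0):
        self.available_gbps = available_gbps

    def negotiate(self, req: TransferRequest) -> NetworkOffer:
        # 1 TB = 8000 Gb; rate needed to finish by the deadline
        needed_gbps = req.volume_tb * 8e3 / (req.deadline_hours * 3600)
        granted = min(needed_gbps, self.available_gbps)
        hours = req.volume_tb * 8e3 / (granted * 3600)
        return NetworkOffer(granted_gbps=round(granted, 2),
                            estimated_hours=round(hours, 1))

if __name__ == "__main__":
    offer = NetworkService().negotiate(
        TransferRequest("FNAL", "CERN", volume_tb=500, deadline_hours=48))
    print(offer)  # the workflow system can re-plan if estimated_hours > deadline
```

A direct feedback loop of this kind is what would make the system reactive: if the offered rate cannot meet the deadline, the workflow manager can reschedule or re-source the transfer before it starts.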

SDN and Bandwidth on Demand
- BoD, a.k.a. Dynamic Circuit Networks, a.k.a. Lightpaths, a.k.a. ..., is a form of Software Defined Networking: circuits are configured by a controller, typically implemented as a (central) domain controller
- A proof-of-concept experiment is being built by LHCONE in collaboration with the GLIF AutoGOLE efforts:
  - interconnect a small number of sites initially
  - interface into CMS and ATLAS data movement and workflow management
- But really, it will be a first step towards tight integration between the network and applications through broader SDN concepts (a reservation-lifecycle sketch follows)
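For orientation, the following sketch models a bandwidth-on-demand reservation lifecycle as a small state machine, loosely inspired by the reserve/commit/provision phases used by NSI-style circuit services; it is a simplification for illustration, not the NSI Connection Service API or the LHCONE/AutoGOLE implementation.

```python
# Minimal sketch of a bandwidth-on-demand reservation lifecycle, loosely
# inspired by reserve / commit / provision phases of NSI-style circuit
# interfaces. My simplification for illustration only.
from enum import Enum, auto

class State(Enum):
    INITIAL = auto()
    RESERVED = auto()
    COMMITTED = auto()
    PROVISIONED = auto()
    RELEASED = auto()

class CircuitReservation:
    def __init__(self, src: str, dst: str, gbps: float):
        self.src, self.dst, self.gbps = src, dst, gbps
        self.state = State.INITIAL

    def _advance(self, expected: State, new: State) -> None:
        if self.state is not expected:
            raise RuntimeError(f"cannot go to {new.name} from {self.state.name}")
        self.state = new

    def reserve(self):   self._advance(State.INITIAL, State.RESERVED)
    def commit(self):    self._advance(State.RESERVED, State.COMMITTED)
    def provision(self): self._advance(State.COMMITTED, State.PROVISIONED)
    def release(self):   self._advance(State.PROVISIONED, State.RELEASED)

if __name__ == "__main__":
    c = CircuitReservation("US-T1", "CERN", gbps=20)
    for step in (c.reserve, c.commit, c.provision, c.release):
        step()
        print(c.state.name)
```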

Automated GOLE Fabric

Conclusions
- For data intensive science applications, SDN has the potential to enable:
  - better application performance
  - more determinism in workflows
  - resource optimization
- In order to harness the full potential of SDN, we need:
  - a good application-network interface
  - multi-domain capability
  - support in the R&E networks, end to end
  - a coordinated effort between the service providers and the scientists developing their applications

THANK YOU!
Artur.Barczyk@cern.ch