
Scalable and Efficient Self-configuring Networks Changhoon Kim chkim@cs.princeton.edu

Today's Networks Face New Challenges

- Networks growing rapidly in size: up to tens of thousands to millions of hosts
- Networking environments getting highly dynamic: mobility, traffic volatility, frequent re-adjustment
- Networks increasingly deployed in non-IT industries and developing regions, with limited support for management

Networks must be scalable and yet easy to build and manage!

Pains In Large Dynamic Networks

- Control-plane overhead to store and disseminate host state: millions of host-info entries, while commodity switches/routers can store only ~32K/300K entries; frequent state updates, each disseminated to thousands of switches/routers
- Status quo: hierarchical addressing and routing
- Victimizes ease of management: leads to a complex, high-maintenance, fragile, hard-to-debug, and brittle network

More Pains

- Limited data-plane capacity: Tbps of traffic workload on core links, while the fastest links today are only 10 ~ 40 Gbps; highly volatile traffic patterns
- Status quo: over-subscription
- Victimizes efficiency (performance): lowers server utilization and end-to-end performance; trying to mitigate this via traffic engineering can make management harder

All We Need Is Just A Huge L2 Switch, or An Abstraction of One!

- Scalability: host up to millions of hosts
- Self-configuration: obviate manual configuration
- Efficiency (performance): ensure high capacity and small end-to-end latency

[Figure: a network of routers (R) and switches (S) presented as a single huge L2 switch]

Research Strategy and Goal

- Emphasis on architectural solutions: redesign underlying networks
- Focus on edge networks: enterprise, campus, and data-center networks; virtual private networks (VPNs)

[Research Goal] Design, build, and deploy architectural solutions enabling scalable, efficient, self-configuring edge networks

Why Is This Particularly Difficult?

- Scalability, self-configuration, and efficiency can conflict!
- Self-configuration can significantly increase the amount of state and make traffic forwarding inflexible
- Examples: Ethernet, VPN routing

This talk presents solutions to resolve this impasse.

Universal Hammers (for self-configuration, scalability, and efficiency)

- Flat Addressing: utilize location-independent names to identify hosts; let the network self-learn hosts' info
- Traffic Indirection: deliver traffic through intermediaries (chosen systematically or randomly)
- Usage-driven Optimization: populate routing and host state only when and where needed

Solutions For Various Edge Networks

Enterprise/campus network:
- Self-configuring semantics: config-free addressing and routing
- Main scalability bottleneck: huge overhead to store and disseminate individual hosts' info
- Key technique in a nutshell: partition hosts' info over switches
- Solution: SEATTLE [SIGCOMM '08]

Data-center network:
- Self-configuring semantics: config-free addressing, routing, and traffic engineering
- Main scalability bottleneck: limited server-to-server capacity
- Key technique in a nutshell: spread traffic randomly over multiple paths
- Solution: VL2 [SIGCOMM '09]

Virtual private network:
- Self-configuring semantics: config-free site-level addressing and routing
- Main scalability bottleneck: expensive router memory to store customers' info
- Key technique in a nutshell: keep routing info only when it's needed
- Solution: Relaying [SIGMETRICS '08]

Strategic Approach for Practical Solutions

A spectrum from clean slate to backwards compatible; the goal is a workable solution with sufficient improvement.

- SEATTLE (enterprise/campus networks). Technical obstacle: heterogeneity of end-host environments. Approach: clean slate on the network. Evaluation: several independent prototypes.
- VL2 (data-center networks). Technical obstacle: lack of programmability at switches and routers. Approach: clean slate on end hosts. Evaluation: real-world deployment.
- Relaying (virtual private networks). Technical obstacles: immediate deployment, transparency to customers. Approach: backwards compatible. Evaluation: pre-deployment evaluation.

SEATTLE: A Scalable Ethernet Architecture for Large Enterprises Work with Matthew Caesar and Jennifer Rexford

Current Practice in Enterprises

A hybrid architecture comprised of several small Ethernet-based IP subnets interconnected by routers. IP subnet == Ethernet broadcast domain (LAN or VLAN).

- Loss of self-configuring capability
- Complexity in implementing policies
- Limited mobility support
- Inflexible route selection

Need a protocol that combines only the best of IP and Ethernet.

Objectives and Solutions

1. Avoid flooding. Approach: resolve host info via unicast. Solution: network-layer one-hop DHT.
2. Restrain broadcasting. Approach: bootstrap hosts via unicast. Solution: network-layer one-hop DHT.
3. Reduce routing state. Approach: populate host info only when and where needed. Solution: traffic-driven resolution with caching.
4. Enable shortest-path forwarding. Approach: allow switches to learn the topology. Solution: L2 link-state routing maintaining only the switch-level topology.

* Meanwhile, avoid modifying end hosts.

Network-layer One-hop DHT

- Switches maintain <key, value> pairs by commonly using a hash function F, a consistent hash mapping a key to a switch
- Link-state routing ensures each switch knows about all the other live switches, enabling one-hop DHT operations
- Unique benefits: fast, efficient, and accurate reaction to churn; reliability and capacity naturally grow with network size
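
As a concrete illustration, here is a minimal Python sketch of the consistent-hashing step; the class name, switch IDs, and the choice of SHA-1 are illustrative assumptions, not details from the talk. Each switch hashes a MAC address onto a ring of live switches learned from link-state routing, and the first switch clockwise from that point becomes the resolver.

```python
import hashlib
from bisect import bisect_right

def ring_hash(value: str) -> int:
    # Map a string to a point on the hash ring (SHA-1 is an assumption).
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

class OneHopDHT:
    """Toy one-hop DHT: link-state routing lets every switch know all
    live switches, so any key resolves in a single hop."""

    def __init__(self, switch_ids):
        # Consistent-hash ring: each switch occupies one point on the ring.
        self.ring = sorted((ring_hash(s), s) for s in switch_ids)
        self.points = [p for p, _ in self.ring]

    def resolver_for(self, mac: str) -> str:
        # F(MAC): the first switch clockwise from the key's hash point.
        i = bisect_right(self.points, ring_hash(mac)) % len(self.ring)
        return self.ring[i][1]

dht = OneHopDHT(["A", "B", "C", "D", "E"])
print(dht.resolver_for("00:1a:2b:3c:4d:5e"))  # some switch, e.g. "B"
```

A consistent hash keeps the reaction to churn small: when one switch joins or leaves, only the keys adjacent to its ring point move.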

Location Resolution (<key, val> = <MAC addr, location>)

[Figure: host x attaches at switch A. On host discovery, A hashes F(MACx) = B and publishes <MACx, A> to resolver B, which stores the entry. When host y (attached to switch D) sends a frame to MACx, D hashes F(MACx) = B and forwards the frame to resolver B, which tunnels it on to A and notifies D of <MACx, A>. Control messages and data traffic flow among switches A ~ E; end hosts are unmodified.]
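
The publication and resolution steps of the figure could then look roughly like the sketch below, which continues the toy DHT above; the plain dictionaries standing in for per-switch state are hypothetical simplifications.

```python
resolver_state = {}    # <MAC, location> pairs held at each resolver
ingress_cache = {}     # traffic-driven cache at each ingress switch

def publish(dht, mac, attach_switch):
    # Host discovery: the attachment switch publishes <MAC, location>
    # via unicast to the resolver chosen by the hash function F.
    resolver = dht.resolver_for(mac)
    resolver_state.setdefault(resolver, {})[mac] = attach_switch

def resolve(dht, ingress, mac):
    # The ingress switch resolves an unknown MAC via unicast to the
    # resolver, then caches the answer for subsequent frames.
    if (ingress, mac) not in ingress_cache:
        resolver = dht.resolver_for(mac)
        ingress_cache[(ingress, mac)] = resolver_state[resolver][mac]
    return ingress_cache[(ingress, mac)]

publish(dht, "00:1a:2b:3c:4d:5e", "A")         # host x attaches at switch A
print(resolve(dht, "D", "00:1a:2b:3c:4d:5e"))  # ingress D learns x is at A
```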

Handling Host Dynamics

A host's location, MAC address, or IP address can change.

[Figure: dealing with host mobility. When host x moves from switch A (old location) to switch D (new location), the resolver replaces <x, A> with <x, D>, and switches caching the stale <x, A> entry (e.g., the switch serving a host y talking with x) are updated to <x, D>.]

A MAC- or IP-address change can be handled similarly.

Further Enhancements Implemented

- Handling network dynamics: host-info re-publication and purging
- Isolating host groups (e.g., VLANs): group-aware resolution
- Supporting link-local broadcast: per-group multicast
- Dealing with switch-level heterogeneity: virtual switches
- Ensuring a highly-available resolution service: replicating host information
- Dividing administrative control: multi-level one-hop DHT

Prototype Implementation

- Link-state routing: XORP OSPF daemon
- Host-info management and traffic forwarding: Click

[Figure: the XORP OSPF daemon exchanges link-state advertisements and feeds a network map, via a Click interface, to the user/kernel Click-based SeattleSwitch; its ring manager and host-info manager maintain the switch table, exchange host-info publication and resolution messages, and forward data frames.]

Amount of Routing State

[Plot: routing-table size vs. number of hosts h, with curve annotations y = 34.6h for Ethernet, y = 2.4h for SEATTLE without caching, and y = 1.6h for SEATTLE with caching.]

SEATTLE reduces the amount of routing state by more than an order of magnitude.

Cache Size vs. Stretch

Stretch = actual path length / shortest path length (in latency).

[Plot: stretch vs. cache size, comparing SEATTLE against ROFL [SIGCOMM '06].]

SEATTLE offers near-optimal stretch with a very small amount of routing state.

SEATTLE Conclusion

- Enterprises need a huge L2 switch: config-free addressing and routing, support for mobility, and efficient use of links
- Key lessons: coupling a DHT with link-state routing offers huge benefits; reactive resolution and caching ensure scalability
- Universal hammers: Flat Addressing: MAC-address-based routing; Traffic Indirection: forwarding through resolvers; Usage-driven Optimization: ingress caching, reactive cache update

Further Questions

- What other kinds of networks need SEATTLE?
- What about other configuration tasks?
- What if we were allowed to modify hosts?

These questions motivate my next work, for data centers.

VL2: A Scalable and Flexible Data-Center Network Work with Albert Greenberg, Navendu Jain, Srikanth Kandula, Dave Maltz, Parveen Patel, and Sudipta Sengupta

Data Centers

- Increasingly used for non-customer-facing decision-support computations
- Many of them will soon be outsourced to cloud-service providers
- Demand for large-scale, high-performance, cost-efficient DCs is growing very fast

Tenets of Cloud-Service Data Centers

- Scaling out: use large pools of commodity resources; achieves reliability, performance, and low cost
- Agility: assign any resources to any services; increases efficiency (statistical multiplexing gain)
- Low management complexity: self-configuring; reduces operational expenses and avoids errors

A conventional DC network ensures none of these: self-configuration, scalability, or efficiency.

Status Quo: Conventional DC Network

[Figure: the Internet connects through core routers (CR) at DC Layer 3 to access routers (AR), then through load balancers (LB) and L2 switches (S) at DC Layer 2 down to racks of application servers (A); roughly 1,000 servers per pod, with each pod being one IP subnet. Reference: "Data Center: Load Balancing Data Center Services," Cisco, 2004.]

Problems: limited server-to-server capacity; poor utilization and reliability; fragmentation of resources; dependence on proprietary, high-cost solutions.

Status Quo: Traffic Patterns in a Cloud

- Instrumented a cluster used for data mining, then computed representative traffic matrices (TMs)
- Traffic patterns are highly divergent: a large number (100+) of representative TMs is needed to cover a day's traffic
- Traffic patterns are unpredictable: no representative TM accounts for more than a few hundred seconds of traffic
- Optimization approaches might cause more trouble than benefit

Objectives and Solutions

1. Ensure layer-2 semantics. Approach: employ flat addressing. Solution: name-location separation and a resolution service.
2. Offer uniform high capacity between servers. Approach: guarantee bandwidth for hose-model traffic.
3. Avoid hot spots without frequent reconfiguration. Approach: use randomization to cope with volatility.

Solution to 2 and 3: random traffic indirection (VLB) over a topology with huge aggregate capacity.

* Embrace end systems!

Addressing and Routing: Name-Location Separation

Cope with host churn with very little overhead.

- Switches run link-state routing and maintain only the switch-level topology
- No LSA flooding for host info; no broadcast messages to update all hosts; no host or switch reconfiguration
- Hosts use flat names

[Figure: a resolution service maps flat names to locations (e.g., x at ToR 2, y at ToR 3, z at ToR 4; entries are updated when hosts move); the sender's ToR looks up the destination's current ToR and tunnels the payload there.]

Example Topology: Clos Network

Offers huge aggregate capacity at modest cost.

[Figure: D/2 intermediate switches connect to K aggregation switches; each aggregation switch has D ports, D/2 x 10G up and D/2 x 10G down; DK/4 ToR switches each attach with 2 x 10G uplinks and serve 20 servers, for 20(DK/4) servers in total.]
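
Plugging illustrative numbers into the slide's formulas shows how the capacity scales; D = K = 48 is an assumed example, not a configuration from the talk.

```python
def clos_capacity(D: int, K: int):
    # Formulas from the slide: DK/4 ToRs, 20 servers per ToR, and
    # (D/2) x K aggregation-to-intermediate links at 10G each.
    tors = D * K // 4
    servers = 20 * tors                 # 20(DK/4) servers in total
    aggregate_gbps = (D // 2) * K * 10  # capacity of the Aggr-to-Int stage
    return servers, aggregate_gbps

servers, capacity = clos_capacity(D=48, K=48)
print(servers, "servers,", capacity, "Gbps aggregate")
# -> 11520 servers, 11520 Gbps: roughly 1 Gbps per server, without
#    oversubscription at the aggregation-to-intermediate stage
```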

Traffic Forwarding: Random Indirection

Cope with arbitrary TMs with very little overhead: [IP anycast + flow-based ECMP].

[Figure: ToRs T1 ~ T6 send traffic over up-path links to any intermediate switch (I1, I2, I3), which delivers it over down-path links to the destination ToR; e.g., a flow from x to z is tunneled via a randomly chosen intermediate.]

- Harness huge bisection bandwidth
- Obviate esoteric traffic engineering or optimization
- Ensure robustness to failures
- Work with switch mechanisms available today
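
A rough Python rendering of the forwarding decision may help. In the real design the random choice comes for free from IP anycast plus flow-based ECMP in the switches; this sketch only emulates that with a per-flow hash, and the switch names and 5-tuple are invented.

```python
import hashlib

INTERMEDIATES = ["I1", "I2", "I3"]   # illustrative intermediate switches

def vlb_path(flow_5tuple: tuple, dst_tor: str) -> list:
    # Flow-based ECMP: hash the 5-tuple so every packet of a flow takes
    # the same intermediate (no reordering) while flows spread randomly.
    h = int(hashlib.md5(repr(flow_5tuple).encode()).hexdigest(), 16)
    intermediate = INTERMEDIATES[h % len(INTERMEDIATES)]
    # Two encapsulation headers: outer to the intermediate, inner to the
    # destination ToR learned from the resolution service.
    return [intermediate, dst_tor]

print(vlb_path(("10.0.0.1", "10.0.1.2", 6, 4242, 80), "ToR3"))
```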

Implementation

- Switches: commodity Ethernet ASICs, with custom settings for line-rate decapsulation, default buffer-size settings, and no QoS or flow control
- Directory service: replicated state-machine (RSM) servers and lookup proxies, using various distributed-systems techniques
- App servers: custom Windows kernel for encapsulation and directory-service access

Data-Plane Evaluation

Ensures uniform high capacity: offered various TCP traffic patterns, then measured overall and per-flow goodput.

- Goodput efficiency: VL2 93+%; fat tree 75+% (w/o opt), 95+% (with opt); DCell 40 ~ 60%
- Fairness between flows: VL2 0.995 (Jain's fairness index, defined as (Σ x_i)² / (n Σ x_i²)); n/a for fat tree and DCell

[Plot: Jain's fairness index of Aggr-to-Int link utilization over time (0 ~ 500 s), staying near 1.0.]

Works nicely with real traffic as well.
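
Jain's fairness index from the slide is easy to compute directly; a small sketch:

```python
def jain_fairness(xs):
    """Jain's fairness index (sum x_i)^2 / (n * sum x_i^2):
    1.0 means perfectly equal shares, 1/n means one flow gets everything."""
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

print(jain_fairness([10, 10, 10, 10]))  # 1.0: perfectly fair
print(jain_fairness([40, 0, 0, 0]))     # 0.25 = 1/n: maximally unfair
```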

VL2 Conclusion

- Cloud-service DCs need a huge L2 switch: uniform high capacity, oblivious traffic engineering, L2 semantics
- Key lessons: it's hard to outsmart haphazardness, so tame it with dice; recipe for success: intelligent hosts + a rock-solid network built with proven technologies
- Universal hammers: Flat Addressing: name-location separation; Traffic Indirection: random traffic spreading (VLB + ECMP); Usage-driven Optimization: utilizing ARP, reactive cache update

Relaying: A Scalable VPN Routing Architecture Work with Alex Gerber, Shubho Sen, Dan Pei, and Carsten Lund

Virtual Private Network

- Logically-isolated communication channels for corporate customers, overlaid on a provider backbone
- Direct any-to-any reachability among sites
- Customers can avoid full-meshing by outsourcing routing

[Figure: customer sites 1, 2, and 3 attach to provider-edge (PE) routers on the provider backbone.]

VPN Routing and Its Consequence

Site-level flat addressing: virtual PEs (VPEs) self-learn and maintain full routing state in the VPN (i.e., routes to every address block used in each site).

[Figure: each PE (PE1 ~ PE4) hosts VPEs for VPNs X, Y, and Z; the memory footprint of a VPE (its forwarding-table size) multiplies across VPEs and consumes PE memory.]

Mismatch in Usage of Router Resources

[Figure: a PE hosting VPEs X, Y, and Z, with most ports (network interfaces) unused.]

- Memory is full, whereas lots of ports are still unused
- Revenue is proportional to provisioned bandwidth
- A large VPN with a thin connection per site is the worst case; unfortunately, there are many such worst cases
- Providers are seriously pinched

Key Questions

- What can we do better with existing resources and capabilities only, while maintaining complete transparency to customers?
- Do we really need to provide direct reachability for every pair of sites, even when most (84%) PEs communicate only with a small number (~10%) of popular PEs?

Relaying Saves Memory

Each VPN has two different types of PEs:

- Hubs keep the full routing state of the VPN
- Spokes keep only local routes and a single default route to a hub

A spoke uses a hub consistently for all non-local traffic (indirect forwarding).

[Figure: spokes on PE1 ~ PE3 forward non-local traffic for VPNs X, Y, and Z through the corresponding hubs.]
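
A minimal sketch of the two resulting forwarding-table shapes may make this concrete; the prefixes and PE names are invented, and exact-match lookup stands in for longest-prefix match.

```python
# Hypothetical FIBs for one VPN: a hub keeps the full routing state,
# a spoke keeps only its local routes plus one default route to its hub.
hub_fib = {
    "10.1.0.0/16": "Site1",
    "10.2.0.0/16": "Site2",
    "10.3.0.0/16": "Site3",
}
spoke_fib = {
    "10.2.0.0/16": "local",   # the spoke's own site
    "default": "hub-PE1",     # all non-local traffic relays via the hub
}

def next_hop(fib, prefix):
    # Exact match as a stand-in for longest-prefix match: anything the
    # spoke does not know locally falls through to the default route.
    return fib.get(prefix, fib.get("default"))

print(next_hop(spoke_fib, "10.3.0.0/16"))  # -> hub-PE1 (indirect forwarding)
```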

Real-World Deployment Requires a Management-Support Tool

Two operational problems to solve:

- Hub selection: which PEs should be hubs?
- Hub assignment: which hub should a given spoke use?

Constraint: the stretch penalty must be bounded to keep SLAs.

Solve the problems individually for each VPN: the hub selection and assignment decision for a VPN is totally independent of that of other VPNs, which ensures both simplicity and flexibility.

Latency-Constrained Relaying (LCR)

Notation:
- PE set: P = {1, 2, ..., n}
- Hub set: H ⊆ P
- The hub of PE i: hub(i) ∈ H
- Usage-based conversation matrix: C = [c_{i,j}]
- Latency matrix: L = [l_{i,j}]

Formulation: choose as few hubs as possible, while limiting the additional distance due to Relaying:

minimize |H|
subject to: for all s, d ∈ P with c_{s,d} ≥ 1,
l_{s,hub(s)} + l_{hub(s),d} ≤ l_{s,d} + θ (θ: parameter)
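
One plausible way to attack this formulation is a greedy, set-cover-style heuristic, sketched below; this is an illustrative toy under the stated constraint, not necessarily the algorithm used in the Relaying work. A hub trivially serves itself by keeping full state, so the loop always terminates.

```python
def can_serve(h, s, pes, l, c, theta):
    # Relaying via hub h keeps the detour within theta for every
    # destination d that spoke s actually talks to (c[s][d] >= 1).
    return all(l[s][h] + l[h][d] <= l[s][d] + theta
               for d in pes if d != s and c[s][d] >= 1)

def lcr_greedy(pes, l, c, theta):
    uncovered, hubs, assignment = set(pes), set(), {}
    while uncovered:
        # Pick the PE that, as a hub, can serve the most uncovered PEs.
        best = max(pes, key=lambda h: sum(
            1 for s in uncovered
            if s == h or can_serve(h, s, pes, l, c, theta)))
        hubs.add(best)
        for s in list(uncovered):
            if s == best or can_serve(best, s, pes, l, c, theta):
                assignment[s] = best   # hub(s) := best
                uncovered.remove(s)
    return hubs, assignment
```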

Memory Saving and Cost of LCR

Based on the entire traffic in 100+ VPNs for a week in May 2007.

[Plot: percentage (%) vs. the latency-penalty threshold θ (msec); gain is shown as the fraction of routes removed, cost as the fraction of traffic relayed and the increase of backbone load, with annotations at ~2.5 msec and ~11.5 msec.]

LCR can save ~90% of memory with very small path inflation.

Deployment and Operation

- Oblivious optimization also leads to significant benefits, and can be implemented via a minor routing-protocol configuration change at PEs
- Performance degrades very little over time: cost curves are fairly robust; with weekly/monthly adjustment, 94%/91% of hubs remain hubs
- Can ensure high availability: need more than one hub, located at different cities; 98.3% of VPNs spanning 10+ PEs have at least 2 hubs anyway; enforcing |H| > 1 reduces the memory saving by only 0.5%

Relaying Conclusion

- VPN providers need a huge L2-like switch: site-level plug-and-play networking, any-to-any reachability, and scalability
- Key lessons: traffic locality is our good friend; presenting an immediately-deployable solution requires more than just designing an architecture
- Universal hammers: Flat Addressing: hierarchy-free site addressing; Traffic Indirection: forwarding through a hub; Usage-driven Optimization: popularity-driven hub selection

Goals Attained

SEATTLE:
- Self-configuration: eliminates addressing and routing configuration
- Scalability: significantly reduces control overhead and memory consumption
- Efficiency: improves link utilization and convergence

VL2:
- Self-configuration: additionally eliminates configuration for traffic engineering
- Scalability: allows a DC to host over 100K servers w/o oversubscription
- Efficiency: boosts DC-server utilization by enabling agility

Relaying:
- Self-configuration: retains self-configuring semantics for VPN customers
- Scalability: allows existing routers to serve an order of magnitude more VPNs
- Efficiency: only slightly increases end-to-end latency and traffic workload

Summary and Future Work

- Designed, built, and deployed huge L2-like switches for various networks
- The universal hammers are applicable in various situations

Future work: other universal hammers? Self-configuration at Internet scale? Architectures for distributed mini data centers?