NETWORK OVERLAYS: AN INTRODUCTION


Network overlays dramatically increase the number of virtual subnets that can be created on a physical network, which in turn supports multitenancy and virtualization features such as VM mobility, and can speed configuration of new or existing services. We'll look at how network overlays work and examine their pros and cons.

While network overlays are not a new concept, they have come back into the limelight, thanks to drivers brought on by large-scale virtualization. Several standards have been proposed to enable virtual networks to be layered over a physical network infrastructure: VXLAN, NVGRE, and STT. While each proposed standard uses different encapsulation techniques to solve current network limitations, they share some similarities. Let's look at how network overlays work in general.

Many advanced virtualization features require Layer 2 adjacency, which is the ability to exist in the same Ethernet broadcast domain. This requirement can cause broadcast domains to grow to unmanageable sizes. Prior to virtualization, network designs emphasized shrinking broadcast domains as much as possible and routing to the edge wherever possible. That's because routing is extremely scalable, and routing to the edge can improve path utilization and alleviate dependence on Spanning Tree for loop prevention.

Now virtualization is forcing broadcast domains to grow, in part to enable features such as VM mobility. One way to do this is through the use of VLANs. The 802.1Q standard defines the VLAN tag as a 12-bit space, providing for a maximum of 4,096 VLANs (actual implementation mileage will vary). This is an easily reachable ceiling in multitenant environments where multiple internal or external customers will request multiple subnets.

All three proposed network overlay standards solve the scale issue by providing a much larger Virtual Network ID (VNID) space in the encapsulating packet. NVGRE and VXLAN are designed to be implemented in hardware and use a 24-bit VNID tag, which allows for 16 million virtual networks. STT uses a larger 32-bit ID. This provides more space but would be more expensive to implement in hardware, where increased address size incurs additional cost in silicon.
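To put those identifier spaces in perspective, the short Python sketch below compares the ID ranges and packs a VXLAN header with its 24-bit VNI field as laid out in RFC 7348. The field layout is the only protocol detail assumed here; the helper function name is our own.

import struct

# Identifier space comparison: 12-bit 802.1Q VLAN IDs vs. the larger
# overlay VNID spaces discussed above.
print(f"802.1Q VLAN (12-bit) : {2**12:>13,} segments")
print(f"VXLAN/NVGRE (24-bit) : {2**24:>13,} segments")
print(f"STT (32-bit)         : {2**32:>13,} segments")


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348: an 8-bit flags field
    (the 'I' bit marks the VNI as valid), 24 reserved bits, the 24-bit
    VNI, and a final 8 reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag set: VNI field is valid
    # Two big-endian 32-bit words: flags + reserved, then the VNI shifted
    # left past the trailing reserved byte.
    return struct.pack("!II", flags << 24, vni << 8)


print(vxlan_header(5001).hex())  # -> '0800000000138900' (VNI 5001 = 0x001389)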

Aiming for Flexibility

A need for flexibility in the data center also opens the door to network overlays. That is, the data center network needs to be flexible enough to support workloads that can move from one host to another on short notice and to allow new services to be deployed rapidly.

VMs in a data center can migrate across physical servers for a variety of reasons, including a host failure or the need to distribute workloads. These moves traditionally require identical configuration of all network devices attached to clustered hosts. There is also a requirement for common configuration of upstream connecting switches in the form of VLAN trunking and so on.

Network engineers and administrators face the same problem whether they are deploying new services or updating old ones--namely, the need to configure the network. Much of this work is manual, which limits scalability and flexibility and increases administrative overhead. Overlay tunneling techniques alleviate this problem by providing Layer 2 connectivity independent of physical locality or underlying network design. By encapsulating traffic inside IP packets, that traffic can cross Layer 3 boundaries, removing the need for preconfigured VLANs and VLAN trunking.

These techniques provide massively scalable virtual network overlays on top of existing IP infrastructures. One of the keys to the technique is the removal of the dependence on underlying infrastructure configuration; as long as IP connectivity is available, the virtual networks operate. Additionally, all three techniques are transparent to the workload itself; the encapsulation is done behind the scenes, so it is application independent.

How It Works

From a high-level perspective, all three proposed standards operate in the same way. Endpoints are assigned to a virtual network via a Virtual Network ID (VNID). These endpoints belong to that virtual network regardless of their location on the underlying physical IP network.

In Diagram 1 there are four virtual hosts connected via an IP network. Each host contains a Virtual End Point (VEP), which is a virtual switch capable of acting as the encapsulation/de-encapsulation point for the virtual networks (VNIDs). Each host has two or more VNIDs operating on it, and each workload assigned to a given VNID can communicate with other workloads in the same VNID while maintaining separation from workloads in other VNIDs on the same or other hosts. Depending on the chosen encapsulation and configuration method, hosts that do not contain a given VNID will either never see packets destined for that VNID, or will see them and drop them at ingress. This ensures the separation of tenant traffic.

Diagram 1 focuses on virtual workloads running in VMs. The same concept would apply if using a physical switch with the VEP functionality. This would allow physical devices to be connected to the overlay network, as pictured in Diagram 2 below.
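As a minimal, purely illustrative sketch of that membership model (the class and method names below are our own, not taken from any product or standard), a VEP can be thought of as a table mapping VNIDs to local endpoints that drops anything outside a tenant's segment:

from collections import defaultdict


class VirtualEndPoint:
    """Toy VEP: a table of VNIDs to local endpoints that only delivers
    frames for segments it actually hosts."""

    def __init__(self, host_name: str):
        self.host_name = host_name
        self.members = defaultdict(set)  # VNID -> set of local workload names

    def attach(self, vnid: int, workload: str) -> None:
        self.members[vnid].add(workload)

    def receive(self, vnid: int, dst_workload: str, payload: bytes) -> bool:
        # A host with no workloads in this VNID drops the frame at ingress.
        if dst_workload not in self.members.get(vnid, set()):
            print(f"{self.host_name}: dropping frame for VNID {vnid}")
            return False
        print(f"{self.host_name}: delivering {len(payload)} bytes "
              f"to {dst_workload} on VNID {vnid}")
        return True


host_a = VirtualEndPoint("host-a")
host_b = VirtualEndPoint("host-b")
host_a.attach(5001, "web-vm")
host_b.attach(5002, "db-vm")

host_b.receive(5001, "web-vm", b"hello")  # dropped: host-b has no VNID 5001 workloads
host_a.receive(5001, "web-vm", b"hello")  # delivered within the tenant's segment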

With a physical switch capable of acting as the tunnel endpoint, you can add both physical servers and appliances (firewalls, load balancers, and so on) to the overlay. This model is key to a cohesive deployment in the mixed workload environments common in today's data centers.

Encapsulation techniques are not without drawbacks, including overhead, complications with load balancing, and interoperability issues with devices like firewalls. The overhead with any overlay can come in two forms: encapsulation overhead added to the frame and processing overhead on the server from the inability to use NIC offload functionality. Both NVGRE and VXLAN suffer from the second problem because the encapsulation in IP is done within the soft switch. STT skirts the processing overhead problem by using a TCP hack to gain Large Segment Offload (LSO) and Large Receive Offload (LRO) capabilities from the NIC. All three proposals suffer from the first problem of encapsulation overhead: with any encapsulation technique you are adding headers to the standard frame, as shown in Diagram 3.
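As a back-of-the-envelope illustration (assuming an IPv4 underlay, no outer 802.1Q tag, and the commonly cited VXLAN header sizes), the extra headers work out to roughly 50 bytes on the wire, which is why the underlay MTU typically needs to grow to about 1550 bytes to carry a full-size tenant frame:

# Back-of-the-envelope VXLAN framing overhead, assuming an IPv4 underlay
# and no 802.1Q tag on the outer frame.
OUTER_ETHERNET = 14  # outer MAC header
OUTER_IPV4 = 20      # outer IP header
OUTER_UDP = 8        # outer UDP header
VXLAN = 8            # VXLAN header carrying the VNI

overhead_on_wire = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN  # 50 bytes

inner_frame = 1514  # full-size inner Ethernet frame (1500-byte tenant MTU + 14-byte header)
underlay_ip_mtu_needed = inner_frame + OUTER_IPV4 + OUTER_UDP + VXLAN  # 1550 bytes

print(f"Extra bytes on the wire per frame: {overhead_on_wire}")
print(f"Underlay IP MTU needed for a 1500-byte tenant MTU: {underlay_ip_mtu_needed}")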

With modern networks, the actual overhead of a few additional bytes is negligible. Where it does come into play is the size of the frame on the wire: adding headers requires either jumbo frame support or more fragmentation of data to fit standard frame sizes. The three standards proposals handle this differently. VXLAN is intended to be used within a data center, where jumbo frame support is nearly ubiquitous; therefore, VXLAN assumes support and uses a larger frame size. NVGRE has provisions in the proposal for Path Maximum Transmission Unit (MTU) detection in order to use jumbo frames when possible and standard frame sizes where required. STT traffic is segmented by the NIC and relies on NIC settings for frame size.

Load balancing spreads traffic across available links to maximize network throughput. It is typically done on a flow basis--that is, by device-to-device conversation. With encapsulation techniques, the inner header information becomes opaque to devices that are not hardware-capable of recognizing the encapsulation. This means that the data normally used to provide load balancing disappears and all communication appears as a single "flow." VXLAN handles this issue by using a hash of the inner payload's header information as the UDP source port of the encapsulated packet, which allows for efficient load balancing in systems relying on 5-tuple algorithms.
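The sketch below illustrates that entropy trick: the sending tunnel endpoint derives the outer UDP source port from a hash of the inner 5-tuple, so underlay routers hashing on the outer 5-tuple still see distinct flows per tenant conversation. The hash function and the choice of the dynamic port range here are illustrative assumptions, not mandated values.

import hashlib


def outer_source_port(src_ip: str, dst_ip: str, proto: int,
                      src_port: int, dst_port: int) -> int:
    """Derive the outer UDP source port from a hash of the inner 5-tuple,
    so underlay devices doing 5-tuple hashing see a distinct outer 'flow'
    per tenant conversation. Hash choice and port range are illustrative."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:2], "big")
    # Confine the result to the dynamic/private port range (49152-65535).
    return 49152 + digest % (65536 - 49152)


# Two different inner conversations hash to different outer source ports,
# so equal-cost paths in the underlay can both be used.
print(outer_source_port("10.0.0.1", "10.0.0.2", 6, 33000, 80))
print(outer_source_port("10.0.0.1", "10.0.0.3", 6, 45000, 443))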

STT and NVGRE do not provide as elegant a solution and offer separate possibilities for providing some level of flow distribution. Without a granular method of distributing flows, network traffic will bottleneck and lead to congestion that can be detrimental to the network as a whole. This will become more apparent as traffic scales up and increases the demand on network pipes.

In Diagram 4 we see all traffic from the VMs on both hosts traversing the same path, even though two are available. The same would be the case if the links were bonded, such as with LACP--one physical link in the bond would always be used. This problem leaves an available link unused and can result in performance problems if traffic overwhelms the one link being used.

The last drawback is the challenge posed by devices such as firewalls. These devices use header information to enforce policies and rules. Because they expect a specific packet format, they may be stymied by encapsulated frames. In designs where firewalls sit in the path of encapsulated traffic, administrators will have to configure specific rules, which may be looser than in a traditional design.

Network overlays provide virtualized multitenant networks on shared IP infrastructure. This allows for a more scalable design, from 4,096 virtual networks to 16 million or more. In addition, a network overlay enables the flexibility and rapid provisioning required by today's business demands. Using overlays, services can be added, moved, and expanded without the need for manual configuration of the underlying network infrastructure.

Source: http://www.networkcomputing.com/networking/network-overlays-anintroduction/d/d-id/1234011?