An Analysis and Empirical Study of Container Networks

An Analysis and Empirical Study of Container Networks
Kun Suo*, Yong Zhao*, Wei Chen, Jia Rao*
*University of Texas at Arlington, University of Colorado, Colorado Springs
INFOCOM 2018, Hawaii, USA

The Rise of Containers
Containers are a lightweight alternative to virtual machines for application packaging.
Key benefits of containers:
- Rapid deployment
- Portability
- Isolation
- Lightweight, efficient, and dense
Increasingly and widely adopted in data centers:
- Google Search launches about 7,000 containers every second.

Container Networks in the Cloud
Networks on a single host, by cloud:
- Amazon EC2: Bridge (default), None, Host
- Docker Cloud: Bridge (default), None, Container, Host
- Microsoft Azure: NAT (default), Transparent, Overlay, L2Bridge, L2Tunnel
Networks on multiple hosts, by cloud:
- Amazon EC2: NAT (default), Overlay, third-party solutions
- Docker Cloud: Overlay (default)
- Microsoft Azure: NAT (default), Transparent, Overlay, L2Bridge, L2Tunnel
Typical use case: containers running in VMs.
It is challenging to select an appropriate container network.

Container Networking Projects

Container networks provide connectivity among isolated, sandboxed applications.
A qualitative comparison:
- Applicable scenarios
- Security isolation
An empirical study:
- Throughput / latency
- Scalability
- Overhead / start-up cost

Container Networks on a Single Host
None:
- A closed network stack and namespace
- High security isolation
Host mode:
- Shares the network stack and namespace of the host OS
- Low security isolation
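As an illustration (not part of the original slides), these two single-host modes map directly onto Docker's --network flag. A minimal Python sketch that shells out to the standard Docker CLI; the alpine image and the run() helper are placeholders for illustration:

    import subprocess

    def run(cmd):
        """Run a docker CLI command and return its stdout."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # "None" mode: the container gets a closed network namespace with only
    # a loopback interface (strongest isolation, no connectivity).
    print(run(["docker", "run", "--rm", "--network", "none", "alpine", "ip", "addr"]))

    # Host mode: the container shares the host's network stack, so it sees
    # the host's interfaces and port space (no network isolation).
    print(run(["docker", "run", "--rm", "--network", "host", "alpine", "ip", "addr"]))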

Container Networks on a Single Host
Bridge mode:
- The default network setting of Docker
- An isolated network namespace and an IP address for each container
- Moderate security isolation
Container mode:
- A group of containers share one network namespace and IP address
- Low isolation within the same group and moderate isolation across groups
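A hedged sketch of the two modes with the standard Docker CLI (container names and the nginx/alpine images are placeholders): bridge mode gives each container its own namespace behind docker0, while container mode attaches a second container to the first one's namespace.

    import subprocess

    def run(cmd):
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Bridge mode (Docker's default): the container gets its own namespace
    # and a private IP on the docker0 bridge.
    run(["docker", "run", "-d", "--name", "web", "--network", "bridge", "nginx"])

    # Container mode: this container joins the network namespace of "web",
    # sharing its IP address and port space (low isolation within the group).
    print(run(["docker", "run", "--rm", "--network", "container:web",
               "alpine", "ip", "addr"]))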

Container Networks on a Single Host (summary)
- None: intra-machine /; inter-machine /; external access /; namespace: independent, isolated; security: high
- Bridge: intra-machine via the docker0 bridge; inter-machine /; external access via host port binding and NAT; namespace: independent, isolated; security: moderate
- Container: intra-machine via inter-process communication; inter-machine /; external access via port binding and NAT; namespace: shared with the group leader; security: medium
- Host: intra-machine, inter-machine, and external access via the host network stack; namespace: shared with the host OS; security: low

Container Networks on Multiple Hosts
Host mode:
- Communicates through the host network stack and host IP
- Pros: near-native performance
- Cons: no security isolation
Network address translation (NAT):
- Binds a private container IP to the host's public IP and a port number; the docker0 bridge translates between the private and public IP addresses
- Pros: easy configuration
- Cons: IP translation overhead; inflexible due to host IP binding and port conflicts
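Concretely, cross-host NAT in Docker is port publishing: a container port is bound to a port on the host's public IP, and peers on other hosts address host-ip:host-port. An illustrative sketch (image name and port numbers are placeholders):

    import subprocess

    def run(cmd):
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Publish container port 80 on host port 8080. Remote hosts reach the
    # service at http://<host-public-ip>:8080; docker0/iptables performs the
    # translation between the private container IP and the host IP.
    run(["docker", "run", "-d", "--name", "web-nat", "-p", "8080:80", "nginx"])

    # Inspect the resulting mapping (container port -> host address).
    print(run(["docker", "port", "web-nat"]))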

Container Networks on Multiple Hosts
Overlay network:
- A virtual network built on top of another network through packet encapsulation
- Examples: IPIP, VXLAN, VPN, etc.
- Pros: isolation, easy to manage, resilient to network topology changes
- Cons: overhead due to packet encapsulation and decapsulation; difficult to monitor
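As a concrete example with Docker's built-in VXLAN-based overlay driver (a sketch, assuming the host is not already part of a swarm; network and container names are placeholders):

    import subprocess

    def run(cmd):
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Docker's built-in overlay driver requires swarm mode (or an external
    # key-value store in older releases).
    run(["docker", "swarm", "init"])

    # Create an attachable VXLAN-backed overlay network; containers on any
    # swarm node that attach to "demo-overlay" can reach each other by name.
    run(["docker", "network", "create", "-d", "overlay", "--attachable",
         "demo-overlay"])

    # Attach a container on this host; repeat on another swarm node to get
    # cross-host connectivity over the encapsulated overlay.
    run(["docker", "run", "-d", "--name", "peer1", "--network", "demo-overlay",
         "alpine", "sleep", "infinity"])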

Container Networks on Multiple Hosts
Routing:
- A network-layer solution based on BGP routing
- Pros: high performance
- Cons: BGP is not widely supported in datacenter networks; limited scalability; not suitable for highly dynamic networks or short-lived containers

Container Networks on Multiple Hosts (summary)
- Host: shares the host network stack and namespace; protocols: all; K/V store: no; encryption: no
- NAT: host network port binding and mapping; protocols: all; K/V store: no; encryption: no
- Overlay: VXLAN, UDP, or IPIP encapsulation; protocols: depends; K/V store: depends; encryption supported
- Routing: Border Gateway Protocol (BGP); protocols: depends; K/V store: yes; encryption supported

An Empirical Study
Containers in a single VM:
- How much are the overheads of single-host networking modes?
Containers in multiple VMs on the same PM:
- How much are the overheads of cross-host networking?
Containers on multiple PMs vs. containers in multiple VMs on different PMs:
- What is the interplay between the VM network and container networks?
Impact of packet size and protocol.
Scalability and startup cost.

Experiment Settings
Hardware:
- Two Dell PowerEdge T430 servers, each with dual ten-core Intel Xeon E5-2640 2.6 GHz processors, 64 GB memory, a 2 TB 7200 RPM SATA hard disk, and Gigabit Ethernet
Software:
- Ubuntu 16.10, Linux kernel 4.9.5, KVM 2.6.1 as the hypervisor, Docker CE 1.12, rtl8139 NIC drivers
- etcd 2.2.5, Weave 1.9.3, Flannel 0.5.5, and Calico 2.1
Benchmarks:
- Netperf 2.7, Sockperf 2.8, Sparkyfish 1.2, OSU benchmarks 5.3.2
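For reference, measurements like these are typically scripted along the following lines (a sketch, not the authors' exact harness): netperf for TCP bulk throughput and sockperf for request latency against a container's IP. The target address, port, message size, and duration below are placeholders.

    import subprocess

    TARGET = "172.17.0.2"   # placeholder: IP of the container/VM under test

    def run(cmd):
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # TCP bulk throughput with netperf (TCP_STREAM test, 64-byte sends).
    print(run(["netperf", "-H", TARGET, "-t", "TCP_STREAM", "--", "-m", "64"]))

    # Round-trip latency with sockperf ping-pong, against a sockperf server
    # started in the container with: sockperf server -p 11111
    print(run(["sockperf", "ping-pong", "-i", TARGET, "-p", "11111",
               "--msg-size", "64", "-t", "10"]))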

Container Networking in a Single VM
[Figures: Sparkyfish upload/download throughput relative to the no-container baseline, Sockperf TCP/UDP throughput (MBps), and OSU benchmark bandwidth and latency for packet sizes from 1 B to 4 KB, comparing w/o container, bridge, container, and host modes.]
The container mode and host mode achieved performance close to the baseline, while the bridge mode incurred significant performance loss.

Diagnosis of Bridge-based Container Networking
[Figure: packet path without a container vs. in bridge mode.]
- Overhead on bridges inside the network stack
- Longer critical path of packet processing due to the centralized docker0 bridge
- Higher CPU usage and possible queuing delays
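One way to see the bridge on the critical path is to list the veth endpoints enslaved to docker0 on the host. A small illustrative sketch using standard iproute2/bridge tools (not from the original study):

    import subprocess

    def run(cmd):
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Every bridge-mode container adds a veth pair whose host end is attached
    # to docker0, so all of its traffic traverses that single bridge.
    print(run(["ip", "-brief", "link", "show", "master", "docker0"]))

    # Per-port forwarding state of the bridge.
    print(run(["bridge", "link", "show"]))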

Container Networking across Multiple VMs
[Figure: TCP throughput and average latency relative to the no-container baseline for w/o container, host mode, NAT, default overlay, Weave, Flannel, Calico (IPIP), and Calico (BGP). Annotated overheads: address translation for NAT; packet encapsulation and decapsulation, additional bridge processing, and prolonged processing inside the kernel for overlays; packet routing for BGP.]
All overlay networks consumed much more CPU, mostly in softirq.

Network          CPU (c)   CPU (s)   S.dem (c)   S.dem (s)
w/o container    33.37     18.41     75.466      41.626
Host mode        34.19     22.04     82.403      53.127
Def overlay      38.75     42.91     95.867      106.145
Weave            54.92     47.56     150.678     130.478
Flannel          42.96     40.44     127.118     119.659
Calico (IPIP)    38.53     40.53     107.854     113.465
Calico (BGP)     37.72     36.92     99.665      95.035
(c = client, s = server; S.dem = service demand)
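To put the table in perspective, a small script can normalize the service demands against the no-container baseline (the numbers are copied from the table above):

    # Service demand (client, server) from the table above, normalized
    # against the no-container baseline to show per-unit-of-work CPU cost.
    sdem = {
        "w/o container": (75.466, 41.626),
        "Host mode":     (82.403, 53.127),
        "Def overlay":   (95.867, 106.145),
        "Weave":         (150.678, 130.478),
        "Flannel":       (127.118, 119.659),
        "Calico (IPIP)": (107.854, 113.465),
        "Calico (BGP)":  (99.665, 95.035),
    }

    base_c, base_s = sdem["w/o container"]
    for net, (c, s) in sdem.items():
        print(f"{net:15s} client x{c / base_c:4.2f}  server x{s / base_s:4.2f}")
    # e.g. Weave costs roughly 2x the client-side and 3x the server-side CPU
    # per unit of work compared with the bare VM network.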

Diagnosis of Overlay Networks
[Figure: packet path without a container vs. a container in an overlay network.]
- Overhead on VXLAN processing and on bridges
- A much longer critical path inside the kernel
- More CPU usage and more soft interrupt (softirq) processing
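Part of this cost is simply the extra headers: VXLAN wraps each inner frame in an outer Ethernet/IP/UDP/VXLAN header of roughly 50 bytes (assuming no options), which hurts small packets disproportionately. A back-of-the-envelope sketch:

    # Rough VXLAN encapsulation overhead per packet (assumed header sizes).
    OUTER_ETH, OUTER_IP, OUTER_UDP, VXLAN = 14, 20, 8, 8   # bytes
    OVERHEAD = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN     # ~50 bytes

    for payload in (64, 256, 1024, 1450):
        on_wire = payload + OVERHEAD
        print(f"{payload:5d} B payload -> {on_wire:5d} B on the wire "
              f"({100 * OVERHEAD / on_wire:4.1f}% overhead)")
    # The bandwidth cost shrinks with packet size, but the per-packet CPU cost
    # of encapsulation/decapsulation remains, which is why small-packet
    # workloads suffer most in the overlay measurements above.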

Impact of Packet Size and Protocol
[Figures: TCP and UDP throughput (MBps) vs. packet size from 16 B to 1024 B for w/o container, host mode, NAT, default overlay, Weave, Flannel, Calico (IPIP), and Calico (BGP).]
- Under TCP, overlay networks do not scale as the packet size increases because they are bottlenecked by the packet processing rate.
- All networks scale better under UDP than under TCP, though the actual throughput is much lower.
- At a fixed packet rate, throughput should scale with packet size.
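The last point is just arithmetic: if the stack can only sustain a roughly fixed packet-processing rate, throughput grows linearly with packet size until a bandwidth or CPU limit is hit. A tiny model (the 100 kpps rate is an illustrative assumption, not a measured value):

    # Throughput under a fixed packet-processing rate (illustrative numbers).
    PACKET_RATE = 100_000            # packets/s the stack can process (assumed)
    LINK_BYTES_PER_S = 125_000_000   # 1 Gbps link

    for size in (16, 64, 256, 1024):
        throughput = min(PACKET_RATE * size, LINK_BYTES_PER_S)
        print(f"{size:5d} B packets -> {throughput / 1e6:7.1f} MB/s")
    # If encapsulation lowers the sustainable packet rate, the whole curve
    # shifts down, which matches the overlay results in the figures.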

Interplays between VM Network Virtualization and Container Networks
[Figure: TCP throughput (MBps) with 16 B and 1024 B packets for containers on physical machines (PM) vs. containers in VMs on different PMs, across the networking modes, with VM network overheads and container network overheads annotated.]
- The two-layer virtualization induces additional degradation on top of the VM virtualization overheads.
- Overlay networks suffer the most degradation.

Scalability
[Figures: MPI alltoall latency (μs) vs. the number of containers or processes (2 to 8), with and without containers, for (a) containers on a single VM using bridge mode, (b) containers on multiple VMs (1:N-1) using the Docker overlay, and (c) containers on multiple VMs (N/2:N/2) using the Docker overlay.]
- The docker0 bridge limits the scalability of the container network on a single node.
- Communication over the overlay network is a major bottleneck for container networks across hosts.
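For reference, this kind of scalability curve can be produced with the OSU micro-benchmarks listed in the experiment settings; a hedged sketch of the invocation (paths and the host list are placeholders, and in the containerized runs each MPI rank is placed in its own container attached to the network under test):

    import subprocess

    # Run the OSU alltoall latency benchmark with an increasing number of
    # MPI ranks (hostfile entries are placeholders).
    for nprocs in range(2, 9):
        out = subprocess.run(
            ["mpirun", "-np", str(nprocs), "--hostfile", "hosts.txt",
             "./osu_alltoall"],
            check=True, capture_output=True, text=True).stdout
        print(f"--- {nprocs} processes ---\n{out}")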

Network Startup Time
Network setup time is a significant part of the Docker container startup time, which matters for event-driven functions, data and cloud services, and serverless applications.
- Attaching to an existing network in the container mode requires the least setup time.
- Overlay and routing-based networks require much longer startup time.

Single host    Launch time    Multiple hosts    Launch time
None           539.6 ms       Host              497.4 ms
Bridge         663.1 ms       NAT               674.5 ms
Container      239.4 ms       Docker overlay    10,979.8 ms
Host           497.4 ms       Weave             2,365.2 ms
/              /              Flannel           3,970.3 ms
/              /              Calico (IPIP)     11,373.1 ms
/              /              Calico (BGP)      11,335.2 ms

The startup time was measured as the time from when a container create command is issued until the container responds to a network ping request.
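A hedged sketch of that measurement methodology (not the authors' exact script): start a container on the network under test, then ping its address until it answers. The image, network name, container name, and polling interval are placeholders.

    import subprocess
    import time

    NETWORK = "bridge"   # network under test (placeholder)
    IMAGE = "alpine"     # placeholder image

    start = time.time()
    subprocess.run(["docker", "run", "-d", "--name", "probe",
                    "--network", NETWORK, IMAGE, "sleep", "60"], check=True)

    # Resolve the container's IP on the chosen network.
    ip = subprocess.run(
        ["docker", "inspect", "-f",
         "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}", "probe"],
        check=True, capture_output=True, text=True).stdout.strip()

    # Poll with ping until the container's network is responsive.
    while subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                         capture_output=True).returncode != 0:
        time.sleep(0.01)

    print(f"network-ready after {time.time() - start:.3f} s")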

Insights & Takeaways
It is challenging to determine an appropriate network for containerized applications:
- Performance vs. security vs. flexibility
- Small packets vs. large packets
- TCP vs. UDP
Bridging is a major bottleneck:
- The Linux bridge and OVS have similar issues
- Avoid bridge mode if containers do not need to access external networks
Overlay networks are the most convenient but the most expensive:
- The existing network stack is inefficient in handling packet encapsulation and decapsulation
Optimizing container networks:
- Streamline the asynchronous operations in the network stack
- Make the network stack aware of overlay-network packets
- Coordinate VM-level network virtualization and container networks

Thank you! Questions?