OpenStack Networking: Where to Next?


INTRODUCTION

In April 2016, the OpenStack Foundation released a very enlightening user survey focused on networking features in OpenStack. The following key metrics (as percentages of respondents) came to the forefront:

- Usage: full operational use, in production - 65%; on-premise, private cloud deployment - 65%; Neutron networking - 90%
- Important emerging technologies: SDN/NFV - 52%; containers - 70%; bare metal - 50%
- Workloads and frameworks: private and public cloud products and services - 49%; NFV - 29%
- Preferred hypervisor: KVM - 93%; VMware - 8%
- Neutron drivers in use: OVS - 80%; ML2 - 31%; Linux Bridge - 21%; Contrail - 14%; SR-IOV - 2%
- Neutron features actively used or of interest: distributed across many features; those at 20% or higher were software load balancing, routing, VPN, accelerated vSwitch, QoS and software firewalling

What is striking is the pervasive use of Open vSwitch (OVS) and, among Neutron features, the strong interest in software-based networking on the server, or server-based networking. Emerging technologies such as SDN/NFV and the use of containers in bare metal server environments further justify the need for server-based networking. This trend, starkly visible in the survey report, is in line with how the largest data centers in the world (such as Microsoft Azure) have opted for server-based networking as the most flexible and scalable form of networking.

With increasing bandwidth requirements and the adoption of 10GbE and higher-speed technologies, it is well understood today that accomplishing server-based networking tasks on general-purpose CPUs is highly inefficient. Microsoft has described using SmartNICs to scale its server-based networking infrastructure more efficiently; SmartNICs are used to offload and accelerate the server-based networking datapath. Ericsson and Netronome have demonstrated up to 6X lower TCO using Netronome Agilio hardware-accelerated server-based networking incorporated in Ericsson's OpenStack-based Cloud SDN platform. To achieve such benefits, enhancements to the OpenStack networking plug-in architecture are needed. Let's start by exploring what is feasible today.
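As a quick way to see which of the drivers above a particular deployment is actually running, the Neutron agent list reflects the active mechanism on each host. The following is a minimal sketch, assuming the openstacksdk Python library and a clouds.yaml entry named "mycloud" (a placeholder name):

```python
import openstack

# Credentials are read from clouds.yaml; "mycloud" is a placeholder name.
conn = openstack.connect(cloud="mycloud")

# Each networking agent reports a type such as "Open vSwitch agent",
# "Linux bridge agent" or "NIC Switch agent" (the SR-IOV agent).
for agent in conn.network.agents():
    print(f"{agent.host:30} {agent.agent_type}")
```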

CURRENT STATE OF ACCELERATED OPENSTACK NETWORKING

At the OpenStack Summit 2015 in Vancouver, Mirantis presented the state of hardware-accelerated networking. The discussion below is based on excerpts from that comprehensive presentation.

Today, SR-IOV plug-ins used with SR-IOV-capable server NICs are the only feasible form of networking hardware acceleration. When this is implemented in OpenStack Neutron using traditional NICs, bandwidth delivered to VMs improves significantly, latency for VM-to-VM traffic is reduced, and CPU usage for networking tasks drops greatly. However, the number of server-based networking features available at such high performance is severely limited.

Figures 1a and 1b show the packet datapath from a network port on the server to a VM. With the datapath shown in Figure 1a, the high-bandwidth and low-latency benefits are available only for the very limited feature set shown in Table 1, making the solution feasible for a narrow set of applications. When more features are needed, one must fall back to the datapath shown in Figure 1b, resulting in poor performance and high CPU usage. Note that SR-IOV adoption in the recent survey is only 2%, and this could be a reason.

Figure 1a / Figure 1b: packet datapath from a server network port to a VM.

Table 1 - OpenStack networking plug-in: SR-IOV with traditional NICs
  Performance: Medium
  Latency: Low
  CPU Core Saving: High
  Net Virtualization & Other: VLAN only
  Feature rows also compared: Anti-Spoofing ACL Security, (Stateless) Firewall Security, VM Migration, Per Flow Analytics
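For orientation, this is roughly how a tenant asks Neutron for the SR-IOV datapath of Figure 1a today: the port is created with vnic_type "direct" so the SR-IOV mechanism driver backs it with a virtual function on the NIC. Below is a minimal sketch using the openstacksdk library; the cloud entry "mycloud" and the network name "private" are placeholders:

```python
import openstack

# Credentials from clouds.yaml; "mycloud" is a placeholder cloud name.
conn = openstack.connect(cloud="mycloud")

# Look up an existing tenant network ("private" is a placeholder name).
network = conn.network.find_network("private")

# vnic_type "direct" requests an SR-IOV virtual function for this port,
# bypassing the host software switch (the Figure 1a datapath).
port = conn.network.create_port(
    network_id=network.id,
    name="sriov-port0",
    binding_vnic_type="direct",
)

# The port can then be attached to a VM at boot time, e.g.:
#   openstack server create --nic port-id=<port id> ... my-vm
print(port.id, port.binding_vnic_type)
```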

Some of the above challenges can be addressed using DPDK, where the server-based networking datapath (as in OVS or the Contrail vRouter) is kept in software but moved to user space to reduce the overhead of user-space-to-kernel-space context switching. This approach is shown in Figure 2 below. It delivers additional server-based networking features (compared to SR-IOV) and improved performance (compared to a kernel-based networking datapath). It comes, however, at the cost of high latency and high CPU usage, and with larger numbers of flows in the networking datapath (as needed in security, load balancing and analytics applications) performance can be severely affected. As such, the DPDK approach has not found widespread adoption either.

Figure 2: DPDK-based user-space networking datapath.

Table 2 - OpenStack networking plug-in: DPDK with other vendor NICs
  Performance: Med
  Latency: High
  CPU Core Saving: Low
  Net Virtualization & Other: VLAN, VxLAN, GRE
  Feature rows also compared: Anti-Spoofing ACL Security, (Stateless) Firewall Security, VM Migration, Per Flow Analytics

This is the current state of accelerated OpenStack networking, as aptly summarized by the presentation at the OpenStack Summit 2015 in Vancouver. The April 2016 survey sheds new light on what is needed from OpenStack networking, which leads us to the next question.

WHERE DO WE GO FROM HERE?

At the OpenStack Summit 2016 in Austin, Netronome demonstrated how OpenStack networking can be extended to deliver hardware-accelerated server-based networking.

Netronome's Agilio server networking platform has been used with OpenStack, OVS, Contrail vRouter and Linux Conntrack technologies to showcase an extended set of features made possible at high performance (high bandwidth and low latency) and high efficiency (low server CPU usage, freeing cores up for more VMs). This is enabled by offloading the OVS, Contrail vRouter or Linux Conntrack datapaths into the Agilio SmartNIC, as shown in Figure 3. This approach, highlighted in Table 3, shows that features needed in a broad array of workloads and deployment frameworks can be enabled at high performance and efficiency. This includes infrastructure as a service (IaaS), telco cloud SDN and NFV, as well as traditional IT and private/hybrid cloud workloads.

Figure 3: OVS, Contrail vRouter or Linux Conntrack datapath offloaded to the Agilio SmartNIC.

Table 3 - OpenStack networking plug-in comparison (values per column: SR-IOV with other vendor NICs* / DPDK with other vendor NICs* / enhancements based on Netronome Agilio platform)
  Performance: Med / Med / High
  Latency: Low / High / Low
  CPU Core Saving: High / Low / High
  Net Virtualization & Other: VLAN only / VLAN, VxLAN, GRE / VLAN, VxLAN, GRE, MPLS, NSH
  Markets: NFV POP Clusters / On-Premise NFV POP / IaaS & Enterprise DC, NFV
  Feature rows also compared: Anti-Spoofing ACL Security, (Stateless) Firewall Security, VM Migration, Per Flow Analytics

*Source: https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/high-performance-neutron-using-sr-iov-and-intel-82599
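For context on what "offloading the datapath" looks like from the host's point of view, the sketch below flips the generic hardware-offload switch that upstream Open vSwitch exposes for offload-capable NICs. It is only an illustrative, vendor-neutral approximation (wrapping the standard ovs-vsctl CLI from Python), not the Agilio-specific plug-in described in this paper, and the service name in the final step is an assumption that varies by distribution.

```python
import subprocess

def enable_ovs_hw_offload() -> None:
    """Ask Open vSwitch to push eligible flows down to NIC hardware."""
    # Generic upstream OVS knob for hardware offload of the datapath.
    subprocess.run(
        ["ovs-vsctl", "set", "Open_vSwitch", ".",
         "other_config:hw-offload=true"],
        check=True,
    )
    # ovs-vswitchd must be restarted for the setting to take effect;
    # the service name here is distribution-specific (assumption).
    subprocess.run(["systemctl", "restart", "openvswitch-switch"], check=True)

if __name__ == "__main__":
    enable_ovs_hw_offload()
```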

This model of delivering server-based networking using SmartNIC hardware, such as Netronome's Agilio platform, is also ideally suited for the bare metal, server-based container deployments of the future. Features such as network virtualization, service chaining, load balancing, security and analytics can be implemented and provisioned from outside the domain of the operating system running on the server. In this case, control plane orchestration using an SDN controller or OpenStack is implemented directly with the SmartNIC. This concept is shown in Figure 4 below.

Figure 4: Bare metal server running containers, with a SmartNIC providing OVS, vRouter or Conntrack datapath offload.

CONCLUSION

For all of this to become a reality, the OpenStack networking plug-in specification will have to be enhanced beyond the SR-IOV-based capabilities that exist today. Netronome is taking a leadership role in the development of these enhancements with industry leaders such as Mirantis, Ericsson, Juniper Networks and others.

2903 Bunker Hill Lane, Suite 150, Santa Clara, CA 95054
Tel: 408.496.0022 | Fax: 408.586.0002 | www.netronome.com
© 2017 Netronome. All rights reserved. Netronome is a registered trademark and the Netronome Logo is a trademark of Netronome. All other trademarks are the property of their respective owners. WP-OS-NETWORK-1/2017