Data Path acceleration techniques in an NFV world

Mohanraj Venkatachalam, Purnendu Ghosh

Abstract

NFV is a revolutionary approach offering greater flexibility and scalability in the deployment of network functions, and it provides significant advantages over the current vendor-locked infrastructure built on physical network functions. The NFV architecture has three building blocks: (a) Infrastructure, (b) Network Functions, and (c) Orchestrator. Telcos have stringent SLAs on latency, throughput, packets-per-second and bits-per-second to meet both QoE and QoS targets and to guarantee bandwidth per user subscription. Therefore, an NFV system has to match the data path capabilities of actual Physical Network Functions. A lot of research is ongoing in the development of data path acceleration techniques for virtualized infrastructure using hypervisors (e.g. Xen, KVM, VMware, Hyper-V), in the following possible ways: (a) Device Emulation, (b) Para-virtualization, (c) PCI Pass-through. Let us walk through some of the standard hardware and software techniques available to bring the performance of a VNF (Virtual Network Function) running in a virtualized environment on par with a PNF (Physical Network Function) used in legacy network architectures. The critical aspects in all of these approaches are to:

- Increase packet throughput from the physical NIC to the virtual NICs
- Reduce the number of copies made before a packet is processed by the application residing in the VM
- Reduce the number of CPU interrupts before a packet is delivered to a VM

VMDq (Virtual Machine Device Queues)

These are specialized network controllers capable of sorting data packets into multiple queues. A distinct queue -> vNIC -> CPU core mapping ensures independent parallel I/O streams for each VM workload. This solution offloads the overhead of data packet sorting from the hypervisor switch to the network silicon and improves network I/O throughput by over 100% compared with a traditional solution. A VMDq solution consists of (Intel-specific) improvements to the hypervisor switch (e.g. VMware ESX). This arrangement ensures that Rx interrupts are directed to the CPU core of the target VM, thereby reducing the number of copies required. However, with VMDq there is still a performance penalty due to the number of interrupts for VM-directed data-plane traffic. An SR-IOV based solution brings considerable improvement in this aspect, as described in the next section.

SR-IOV

SR-IOV, or Single Root I/O Virtualization, is a specification which allows a single PCIe device to be configured as multiple virtual devices that appear as independent network functions to the VMs running on a hypervisor. These devices expose VFs (Virtual Functions), light-weight PCIe functions that can be configured for data traffic movement, and PFs (Physical Functions), which contain full PCIe functionality and are used to configure the SR-IOV enabled device.

Figure 1: Single Root I/O Virtualization and Sharing (SR-IOV) - the PF and its driver configure the Intel Ethernet controller, Intel VT-d sits between the device and the VMs, each VF is seen as a PCI Express Ethernet controller in the guest OS with its own VF driver, and a hardware Virtual Ethernet Bridge (VEB) handles LAN traffic for the SR-IOV defined Virtual Functions.
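As an illustration of how a PF is used to provision VFs, the sketch below uses the standard Linux sysfs interface for SR-IOV to request a number of Virtual Functions on a PF from the host. The interface name and the VF count are illustrative assumptions, not taken from the text; in real deployments this step is typically performed by the hypervisor or the orchestration layer.

/*
 * Minimal sketch: enabling SR-IOV Virtual Functions from the host by
 * writing to the PF's sysfs "sriov_numvfs" attribute.
 * The netdev name "enp3s0f0" and the VF count are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical PF netdev name; replace with the actual SR-IOV capable NIC. */
    const char *path = "/sys/class/net/enp3s0f0/device/sriov_numvfs";
    int num_vfs = 4;    /* number of light-weight VFs to expose to VMs */

    FILE *fp = fopen(path, "w");
    if (fp == NULL) {
        perror("fopen sriov_numvfs");
        return EXIT_FAILURE;
    }

    /* The kernel creates the VFs when the count is written; each VF then
     * appears as an independent PCIe function that can be passed to a VM. */
    if (fprintf(fp, "%d\n", num_vfs) < 0) {
        perror("write sriov_numvfs");
        fclose(fp);
        return EXIT_FAILURE;
    }

    fclose(fp);
    printf("Requested %d VFs via %s\n", num_vfs, path);
    return 0;
}

Each VF created this way can then be attached to a guest (for example via PCI pass-through), after which the guest's VF driver exchanges data buffers directly with the device.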

Figure 2: SR-IOV and VMDq comparison - in the normal and VMDq cases packets pass through the Virtual Machine Manager's layer-2 software vSwitch, while with SR-IOV the VF drivers in the VMs receive packets that the NIC PF has classified and queued in hardware.

Such PCIe devices require BIOS and hardware support along with appropriate drivers at the hypervisor level. Each of the VFs and PFs appears as an independent PCIe device controller to the virtual machines, and they are configured so that data buffers are copied directly from the device to the guest machines. SR-IOV offers a marked improvement over VMDq by reducing the number of times data buffers are copied before a packet is delivered to the VM. Placement of VNFs in an SR-IOV environment is driven by the overall technical solution and a deep understanding of inter-VNF dependencies, which are important points to consider when selecting SR-IOV. During inter-VM communication, latency is introduced as packets need to traverse back to the PCIe device before reaching the neighbouring VM. VM portability may also become a problem: since SR-IOV requires a VM to interact directly with a PCIe device, seamless movement of workloads between SR-IOV and non-SR-IOV platforms is not possible.

DPDK

The DPDK approach is an alternative to the Linux kernel packet processing path: it bypasses the Linux network stack and delivers packets directly to the applications. In traditional Linux packet forwarding, a user-to-kernel context switch takes place every time a packet-receive interrupt fires, and DPDK aims to remove this source of latency. The fundamental concept in DPDK is the configuration of Rx and Tx queues on each port, with poll-mode drivers copying buffers from these queues to pre-allocated hugepages in user space. DPDK uses hugepages of 2 MB or 1 GB, as compared to the 4 KB pages of the Linux kernel, which reduces TLB misses; this arrangement improves overall application performance and reduces latency. Enhanced and efficient scheduling with user-space libraries implementing thread affinity, CPU pinning and lock-less queues provides further significant improvement.
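To make the Rx/Tx queue and poll-mode driver description concrete, here is a minimal DPDK sketch in C, assuming DPDK is installed, hugepages are configured and port 0 is bound to a DPDK-compatible driver. It sets up one Rx and one Tx queue backed by a hugepage mbuf pool and then busy-polls the Rx queue in bursts, echoing packets back out; it is a simplified skeleton, not the L3 forwarding application behind the Figure 4 numbers.

/*
 * Minimal DPDK poll-mode sketch: one port, one Rx/Tx queue pair,
 * hugepage-backed mbuf pool, busy-poll receive loop.
 */
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_debug.h>

#define NUM_MBUFS   8191
#define MBUF_CACHE  250
#define RX_RING     1024
#define TX_RING     1024
#define BURST_SIZE  32

int main(int argc, char **argv)
{
    uint16_t port_id = 0;                   /* first DPDK-bound port */
    struct rte_eth_conf port_conf = {0};    /* default port configuration */

    /* EAL init claims hugepages and probes the DPDK-bound devices. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Packet buffers live in hugepage-backed memory shared with the NIC. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("mbuf_pool", NUM_MBUFS,
            MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    /* One Rx and one Tx queue on the port, as described in the text. */
    if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) != 0 ||
        rte_eth_rx_queue_setup(port_id, 0, RX_RING, rte_socket_id(), NULL, pool) != 0 ||
        rte_eth_tx_queue_setup(port_id, 0, TX_RING, rte_socket_id(), NULL) != 0 ||
        rte_eth_dev_start(port_id) != 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    /* Poll-mode loop: no interrupts, no per-packet kernel context switch. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);

        if (nb_rx == 0)
            continue;

        /* Application processing would happen here; simply echo packets back. */
        uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);

        /* Free any packets the Tx queue could not accept. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }

    return 0;
}

Because the loop polls rather than waiting for interrupts, a core is dedicated to the port; in practice that core is pinned and isolated from the kernel scheduler, in line with the thread affinity and CPU pinning noted above.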

Figure 3: DPDK packet path - user-space application and DPDK ring buffers above, UIO driver in kernel space, NIC Rx/Tx queues below.

Figure 4: Performance comparison, Linux vs. Intel DPDK - per-core L3 forwarding performance of 1.1 Mpps for native Linux versus 28.5 Mpps for Intel DPDK.

Conclusion

Current generation x86-based COTS systems offer a great deal of cheap processing power, which has opened up multiple ways to speed up the transport of packets for processing or application consumption. An NFV system will have to consider the appropriate solution for its use cases, e.g. east-west versus north-south traffic, and select an acceleration technique accordingly.

References

https://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/sr-iov-nfv-tech-brief.pdf
http://dpdk.org/
https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_mobile_networks_using_network_functions_virtualization/performance_and_optimization

Authors

Purnendu Ghosh is responsible for developing solutions and products around SDN & NFV technologies for global customers. He has been instrumental in leading research and development and in building proofs of concept in this space for Happiest Minds. He has over 13 years of experience and has worked extensively on products, features and solutions in the GSM, 3G, LTE, and Routing & Switching domains.

Mohanraj Venkatachalam has more than five years of extensive experience in network product development. He primarily focuses on developing fast path applications and also contributes actively to proofs of concept in emerging SDN and NFV technologies. Some of his notable contributions are in the areas of SDWAN and virtual packet broker solutions.

Happiest Minds has been continuously developing products in data path acceleration for SDN/NFV and traditional networking. Our focused expertise includes developing accelerated transport layers for virtual applications, taking advantage of DPDK, SR-IOV and PCI Pass-through technologies for cloud (OpenStack, AWS) and on-premise deployments. We are also part of some of the latest disruption in programmable ASICs such as P4, and have developed a P4-based L4 load balancer using Barefoot's latest Tofino SDK, orchestrated using the ONOS SDN Controller.

Happiest Minds enables Digital Transformation for enterprises and technology providers by delivering seamless customer experience, business efficiency and actionable insights through an integrated set of disruptive technologies: Big Data Analytics, Internet of Things, SDN & NFV, Mobility, Cloud, Security, Unified Communications, etc. Happiest Minds offers domain-centric solutions applying skills, IPs and functional expertise in IT Services, Product Engineering, Infrastructure Management and Security. These services have applicability across industry sectors such as Retail, Consumer Packaged Goods, E-commerce, Banking, Insurance, Hi-tech, Engineering R&D, Manufacturing, Automotive and Travel/Transportation/Hospitality. Headquartered in Bangalore, India, Happiest Minds has operations in the US, UK, Singapore and Australia, and has secured $63 million in Series-A funding. Its investors are JPMorgan Private Equity Group, Intel Capital and Ashok Soota.