Open vSwitch Hardware Offload over DPDK. DPDK Summit 2017


Agenda
- ASAP2-Flex for vSwitch/vRouter acceleration
- HW classification offload concept
- OVS-DPDK using HW classification offload
- RFC: OVS-DPDK using HW classification offload
- VXLAN in OVS-DPDK
- Multi-table
- VXLAN HW offload concept
- rte_flow groups: multi-table

ASAP2-Flex for vSwitch/vRouter acceleration
- Offload some elements of the data path to the NIC, but not the entire data path; data still flows via the vSwitch
- Para-virtualized (not SR-IOV)
- Offload examples:
  - Classification offload: the application provides a flow spec and a flow ID; classification is done in HW, which attaches the flow ID in case of a match; the vSwitch then classifies based on the flow ID rather than the full flow spec; rte_flow is used to configure the classification
  - VXLAN encap/decap
  - VLAN add/remove
  - QoS
[Diagram: para-virtualized VMs attached to the OVS data path in the hypervisor; TC / DPDK offload programs the eSwitch of the ConnectX-4 via the PF]

HW classification offload concept
- For every OVS flow, the dpif should use the DPDK filter (or TC) to classify in HW, with an action of tag (report an ID) or drop
- On receive, use the tag ID instead of classifying the packet
- For example:
  - OVS sets action Y for flow X: add a HW flow that tags flow X with ID 0x1234, and configure the datapath to perform action Y when mbuf->fdir.id == 0x1234
  - OVS sets action drop for flow Z: use the DPDK filter to drop and count flow Z, and use the DPDK filter to get the flow statistics
[Diagram: OVS-vswitchd and dpif-dpdk configure flows in the NIC flow director through the PMD; a packet arriving tagged with ID 0x1234 skips datapath classification and OVS performs action Y directly]
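A minimal sketch of this tag-and-match concept through the rte_flow API, using DPDK 17.x-era names (in modern DPDK, PKT_RX_FDIR_ID is RTE_MBUF_F_RX_FDIR_ID); the matched flow spec, queue index and action are illustrative, not from the talk:

    /* Config path: program the NIC to tag "flow X" with FLOW_X_ID. */
    #include <rte_byteorder.h>
    #include <rte_ethdev.h>
    #include <rte_flow.h>
    #include <rte_mbuf.h>

    #define FLOW_X_ID 0x1234

    static struct rte_flow *
    offload_flow_x(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_attr attr = { .ingress = 1 };

        /* Flow X here is "IPv4 dst == 10.0.0.1" (illustrative spec). */
        struct rte_flow_item_ipv4 spec = { .hdr.dst_addr = rte_cpu_to_be_32(0x0a000001) };
        struct rte_flow_item_ipv4 mask = { .hdr.dst_addr = rte_cpu_to_be_32(0xffffffff) };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &spec, .mask = &mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        /* MARK attaches the flow ID; QUEUE delivers the packet as usual. */
        struct rte_flow_action_mark mark = { .id = FLOW_X_ID };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }

    /* Datapath: act on the HW-reported ID instead of reclassifying. */
    static void
    handle_rx(struct rte_mbuf *m)
    {
        if ((m->ol_flags & PKT_RX_FDIR_ID) && m->hash.fdir.hi == FLOW_X_ID) {
            /* NIC already classified the packet: run action Y directly. */
        } else {
            /* No (known) flow ID: fall back to software classification. */
        }
    }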

RFC: OVS-DPDK using HW classification offload
- For every datapath rule, add a rte_flow with a flow ID
- The flow ID cache contains the megaflow rules
- When a packet is received with a flow ID, there is no need to classify the packet to get the rule
[Diagram: a flow ID cache in front of the OVS datapath]
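A sketch of how such a flow ID cache could short-circuit classification; struct megaflow, MAX_FLOW_ID and classify() are hypothetical names for illustration, not the actual OVS-DPDK structures:

    #include <rte_mbuf.h>

    #define MAX_FLOW_ID 4096

    struct megaflow;                                    /* datapath rule */
    extern struct megaflow *flow_id_cache[MAX_FLOW_ID]; /* indexed by HW mark */
    extern struct megaflow *classify(struct rte_mbuf *m);

    static struct megaflow *
    megaflow_lookup(struct rte_mbuf *m)
    {
        /* PKT_RX_FDIR_ID (17.x naming): the HW attached a valid flow ID. */
        if ((m->ol_flags & PKT_RX_FDIR_ID) && m->hash.fdir.hi < MAX_FLOW_ID) {
            struct megaflow *mf = flow_id_cache[m->hash.fdir.hi];
            if (mf != NULL)
                return mf;   /* O(1) hit: no packet classification needed */
        }
        return classify(m);  /* miss: full megaflow lookup */
    }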

RFC performance
Single core for each PMD, single queue. Code submitted by Yuanhan Liu.

Case            #Flows  Base (Mpps)  Offload (Mpps)  Improvement
Wire to virtio  1       5.8          8.7             50%
Wire to wire    1       6.9          11.7            70%
Wire to wire    512     4.2          11.2            267%

VXLAN in OVS-DPDK
- There are two levels of switches, cascaded: br-int and br-phy1
- The HW classification accelerates only the lower switch (br-phy1)
- br-phy1 carries the kernel interface for VXLAN and holds the tunnel IP address
- The OVS datapath is still required to classify the inner packet
[Diagram: VF 1..n and vports attached to br-int; a vxlan port and tap connect br-int to br-phy1, which holds the IP address and the uplink/PF]

Multi-table
- The action of a rule can be to go to another table; this can be used to chain classifications

Table 0                   Table 1
Match A -> flow ID 1      Match E -> flow ID 2
Match B -> drop           Match F -> flow ID 3
Match C -> Table 1        Default -> flow ID 4
Match D -> Table 1
Default -> no flow ID

VXLAN HW offload concept
- If the action is to forward to an internal interface, add a HW rule that points to a table named after the internal interface
- If the in-port of the rule is an internal port (like vxlan), add the rule to the table named after that interface, with a flow ID
- When a packet is received with a flow ID, use the rule even if the in-port is an internal port
- A packet tagged with a flow ID is a packet that arrived on a physical port and was classified according to both the outer and the inner headers, as sketched after the tables below

Table 0                         Table 1 (all rules whose src port is the vxlan interface)
Match A -> flow ID 1            Match E -> flow ID 2
Match B -> drop + count         Match F -> flow ID 3
Match C -> Table 1 + count      Default -> flow ID 4
Match D -> Table 1 + count
Default -> no flow ID

[Diagram: br-int with tap and vxlan ports above br-phy1, which holds the IP address and the uplink/PF]
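A sketch of the two tables above in rte_flow terms. The deck proposes a new "go to group" action; upstream DPDK later standardized this as RTE_FLOW_ACTION_TYPE_JUMP (18.05), which the sketch uses; the VNI, flow IDs and queue index are illustrative:

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Table 0: match the outer VXLAN header, count, and go to table 1. */
    static struct rte_flow *
    add_outer_rule(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_attr attr = { .group = 0, .ingress = 1 };

        struct rte_flow_item_vxlan vxlan = { .vni = { 0x00, 0x00, 0x2a } };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_UDP },
            { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        struct rte_flow_action_count count = { 0 };
        struct rte_flow_action_jump jump = { .group = 1 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
            { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }

    /* Table 1: match the inner packet ("Match E") and report flow ID 2. */
    static struct rte_flow *
    add_inner_rule(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_attr attr = { .group = 1, .ingress = 1 };

        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        struct rte_flow_action_mark mark = { .id = 2 };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }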

VXLAN HW offload
- If the in-port is a HW port, add the rule to the HW; the action can be a flow ID, or a jump to the table corresponding to the port the rule forwards to
- If the in-port is an internal port (like vxlan), add the rule to all the HW ports with a flow ID action (because the traffic can come from any external/HW port)
- The flow ID needs to be unique
[Diagram: the same two-table layout as on the previous slide, with table 1 holding all rules whose src port is the vxlan interface]

rte_flow groups: multi-table
- rte_flow_create(): groups are used in order to add a rule to a table
- A new "go to group" action needs to be added (RTE_FLOW_ACTION_TYPE_GROUP)
- Table/group creation is implicit
- The user/app needs to add a default (lowest-priority) rule to steer the traffic to a queue, so it does not continue to the next group; see the sketch below
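A sketch of that default rule, assuming group 1 from the earlier tables ("Default -> flow ID 4"); the priority value and queue index are illustrative, and note that in rte_flow a higher priority number means lower priority:

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static struct rte_flow *
    add_group_default_rule(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_attr attr = {
            .group = 1,
            .priority = 0xffff,   /* lowest priority within the group */
            .ingress = 1,
        };

        /* Empty pattern: match everything that reaches this group. */
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        /* Tag with the default flow ID and steer to a queue so traffic
         * does not continue to the next group. */
        struct rte_flow_action_mark mark = { .id = 4 };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }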

Thank You