
WHITE PAPER

Accelerating 4G Network Performance
OFFLOADING VIRTUALIZED EPC TRAFFIC ON AN OVS-ENABLED NETRONOME SMARTNIC

NETRONOME AGILIO SMARTNICS PROVIDE A 5X INCREASE IN vEPC BANDWIDTH ON THE SAME NUMBER OF CPU CORES BY USING THE OVS OFFLOAD AND SR-IOV TECHNOLOGIES

CONTENTS
INTRODUCTION
4G WIRELESS NETWORK ARCHITECTURE
EVOLVED PACKET CORE IN 4G NETWORKS
VIRTUALIZED EVOLVED PACKET CORE (vEPC)
vEPC PACKET FLOW AND TEST PARAMETERS
PACKET FLOW IN THE vEPC
CPU CORE MAPPING FOR NETRONOME AGILIO CX
BENCHMARKS FOR THE vEPC
CONCLUSION

INTRODUCTION

The virtualized Evolved Packet Core (vEPC) is the core of the 3G and 4G mobile network, used to provision services for mobile users. The main function of the EPC is to act as the interface between the 3G and 4G radio access networks and public IP networks; the services it carries involve the transfer of voice, data and video from a mobile device to an IP network. A vEPC implements these functions as virtual machines (VMs) on commercial off-the-shelf (COTS) servers. Netronome Agilio SmartNICs provide a 5X increase in vEPC bandwidth on the same number of CPU cores by using OVS offload and SR-IOV technologies.

The Evolved Packet Core (EPC) has been deployed for many years on platforms that use dedicated servers for the mobile traffic workloads within the core network. Because mobile traffic is growing exponentially, mobile operators are addressing this demand with investments in network and compute appliances. In the EPC, the processing of voice, data and video is integrated into a single IP domain. Advances in virtualization and network functions virtualization (NFV) technologies have made it possible to implement the EPC components as virtual machines on x86 servers; an EPC implemented as virtual network functions (VNFs) is termed a vEPC. In this white paper we discuss the advantages of implementing the vEPC on x86 servers equipped with Netronome Agilio SmartNICs.

4G WIRELESS NETWORK ARCHITECTURE

Figure 1. 4G Wireless Network Architecture (mobile device, LTE-Uu, Cloud RAN eNodeB, S1-U/S1-MME, MME, S11, Serving Gateway, PDN Gateway, SGi, IP network)

Figure 1 shows the components of the 4G LTE network. The main components are:

1. Mobile Device: any wireless device capable of transmitting and receiving data over a radio network.
2. eNodeB: the radio access node (base station) that communicates with the mobile device and connects to the serving gateway (S-GW) on the other side using the GTP-U protocol.
3. MME (Mobility Management Entity): controls the signaling in the 4G LTE network and is part of the control plane.
4. S-GW (Serving Gateway): acts as a router, forwarding data between the base station and the PDN gateway (P-GW).
5. P-GW (Packet Data Network Gateway): the termination point of the packet data interface towards the IP network. The P-GW is responsible for policy rule enforcement, packet filtering, deep packet inspection (DPI), etc.
6. IP Network: the P-GW connects to the provider's IP services, which are part of the IMS (IP multimedia services: voice, data and video).

EVOLVED PACKET CORE IN 4G NETWORKS

The EPC provides all-IP service delivery for the 4G LTE network. It achieves fast network performance by separating the control and data planes in the LTE network, and it flattens the hierarchy of mobile network elements because, in the EPC architecture, the data channels are established directly through the EPC gateways (S-GW and P-GW). The major advantages of the EPC architecture in a 4G network are:

1. Backward compatibility and interworking with the 3G mobile network architecture.
2. Voice, data and video services are built on the IP network.
3. Scalability in terms of bandwidth, number of subscribers and terminal mobility.
4. High reliability of the services.

VIRTUALIZED EVOLVED PACKET CORE (vEPC)

The components of the EPC can be implemented on dedicated servers. With the emergence of cloud-based server architectures built on NFV and SDN technologies, the EPC components can instead be implemented as VNFs on x86 servers while still meeting high availability and scaling requirements. The advantages of implementing the EPC in VMs are:

- Independence from vendor hardware
- Scalability (more VNFs can be instantiated on the same hardware as the number of connections or the bandwidth requirement grows)

In this paper, we describe the advantages of using Netronome Agilio SmartNICs over traditional network adapters in a server. The test setup for the vEPC performance measurements is shown in Figure 2 and Figure 3. Each box represents a physical Dell R730 server with 24 physical cores (48 hyperthreaded vCPUs across two NUMA nodes). The setup simulates a vEPC network, specifically GPRS Tunneling Protocol user plane (GTP-U) data traffic such as is carried in a 4G mobile network. The vEPC benchmarks cover three parameters: network throughput, packet latency and CPU utilization.

For the performance comparison, a traditional network interface card (NIC) and an Agilio CX (the Netronome 1x40G Agilio SmartNIC) are used on the eNodeB/MME and gateway (S/P-GW) servers. The User Equipment (UE) and Public Data Network (PDN) servers use the traditional NIC, since they only act as the packet source and sink for the mobile traffic. Both the Agilio CX and the traditional NIC are used with a 1x40Gb/s to 4x10Gb/s breakout cable, with two 10G fiber cables connected between adjacent servers to provide 20 Gb/s of connectivity, so the total end-to-end network bandwidth from the UE to the echo server is 20 Gb/s (of the 40 Gb/s available).

Figure 2. vEPC test setup with a traditional network interface card: UE packet generator, eNodeB/MME VM and S-GW/P-GW VM each behind kernel OVS, and a PDN packet sink (echo server), connected by 20G links.

Figure 3. vEPC test setup with the Netronome Agilio CX: UE packet generator, eNodeB/MME VM and S-GW/P-GW VM each behind OVS offloaded to an Agilio CX, and a PDN packet sink (echo server), connected by 20G links.

vEPC PACKET FLOW AND TEST PARAMETERS

The MME component is bundled with an eNodeB emulator, which emulates the termination of the radio interface and the endpoint of the GTP-U tunnel. The S-GW and P-GW are bundled together in a single VM instance on a separate server. As many VM instances as needed can be created to meet the bandwidth requirement. Control path (SCTP) traffic was not an objective of the testing in this project. Each VM (VNF) runs the following application software:

TIP - Tieto IP Stack (User Space Implementation)
The Tieto IP stack (TIP) has a long history as a preferred IP stack in telecom applications, from handsets to various nodes in 2G and 3G mobile networks. TIP is a portable, telecom-grade, RFC-compliant stack supporting the TCP/IP family of protocols (IPv4/IPv6) in single or dual mode. TIP has a modular software architecture, which allows easy interoperation with the standard stack. TIP is used in a poll mode configuration on top of the network interface.

vEPC Application Software
The MME establishes UE sessions by sending requests over the S1-U interface to the PDN. Once a session is established via the MME emulator, all subsequent traffic is sent from the UE via the eNodeB and S-GW/P-GW to the PDN, and back through the same path. The PDN gateway provides connectivity from the UE to external packet data networks by acting as the point of entry and exit for traffic to and from the PDN. The application is configured with IPv4 addresses for each of the following interfaces:

- SGW-S11 interface: control plane; the interface between the MME and S-GW, used to support mobility and bearer management.
- SGW-S5 interface: S-GW side of the control plane, providing tunnel management between the Serving GW (S-GW) and the PDN GW (P-GW).
- PGW-S5 interface: P-GW side of the control plane, providing tunnel management between the Serving GW (S-GW) and the PDN GW (P-GW).
- S1/S5/S8 user plane tunnel: eNB (S1 interface) to S-GW (S5/S8 interface) to P-GW.
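To make the user plane encapsulation carried over this tunnel concrete, the sketch below builds the minimal 8-byte GTP-U header (3GPP TS 29.281) that the eNodeB emulator and the S/P-GW endpoints add to and strip from each packet. This is not the Tieto vEPC code; the function name and the example TEID are purely illustrative.

```python
import struct

GTPU_UDP_PORT = 2152      # registered UDP port for GTP-U
GTPU_VERSION_PT = 0x30    # version=1, protocol type=GTP, no optional fields
GTPU_MSG_GPDU = 0xFF      # G-PDU: carries a user (T-PDU) packet

def gtpu_encapsulate(teid: int, inner_packet: bytes) -> bytes:
    """Prepend a minimal 8-byte GTP-U header to an inner IP packet."""
    header = struct.pack("!BBHI",
                         GTPU_VERSION_PT,
                         GTPU_MSG_GPDU,
                         len(inner_packet),   # length of payload after the 8-byte header
                         teid)                # tunnel endpoint identifier of the bearer
    return header + inner_packet

# Example: wrap a 40-byte placeholder inner packet for a hypothetical TEID
encapsulated = gtpu_encapsulate(0x1234, b"\x45" + b"\x00" * 39)
assert len(encapsulated) == 8 + 40
```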

PACKET FLOW IN THE vEPC

The test system consists of four servers:

- StressMe-UE node: traffic generator and test driver
- eNB-MME node: evolved Node B and Mobility Management Entity
- SGW-PGW node: Serving Gateway and Packet Data Network Gateway
- EchoServer-PDN node: Packet Data Network simulator

The performance tests are run over a number of separate, parallel channels that extend end-to-end between the StressMe-UE and EchoServer-PDN nodes, passing through the eNB-MME and SGW-PGW nodes, so that each channel is independent. Each vEPC instance runs in its own VM with its own memory space, virtual cores mapped to physical cores on the server, and L1 and L2 cache. All VMs are started before the test is run. Traffic is TCP, generated with iperf3 in client mode on the UE node and terminated by iperf3 in server mode on the PDN node.

CPU CORE MAPPING FOR NETRONOME AGILIO CX

To ensure maximal VM performance, the CPU-intensive threads belonging to a VM have to be placed on a dedicated core with no hyperthread sibling in use, or at least a sibling with very little load. Otherwise the other hyperthread contends for the core and the throughput of the VM drops. The EPC application and TIP stack run as one monolith inside the VM and access the interface in poll mode (PMD). With the Agilio CX, each VM is assigned two virtual cores, one used by the guest OS and the other by the TIP/EPC application. The former is hardly loaded and can be placed on the hyperthread sibling of the core running the TIP/EPC thread.

Figure 4. CPU core mapping for Netronome Agilio CX: each VM's two vCPUs are pinned to a physical core and its hyperthread sibling (for example CPU0/CPU24), with the VMs spread across both CPU sockets and a few cores reserved for the host.
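The placement rule shown in Figure 4 can be expressed in a few lines. The sketch below assumes a 24-core/48-thread host on which logical CPU n and CPU n+24 are hyperthread siblings, as in Figure 4; the helper names and the libvirt-style vcpupin output are illustrative only and not taken from the actual test scripts.

```python
# Pin each VM's busy TIP/EPC vCPU to a dedicated physical core and its
# lightly loaded guest-OS vCPU to that core's hyperthread sibling.

PHYSICAL_CORES = 24

def sibling(cpu: int) -> int:
    """Hyperthread sibling of a logical CPU on this topology."""
    return cpu + PHYSICAL_CORES if cpu < PHYSICAL_CORES else cpu - PHYSICAL_CORES

def placement_for_vm(vm_index: int) -> dict:
    """One physical core per VM; the OS vCPU shares it via the sibling thread."""
    core = vm_index % PHYSICAL_CORES
    return {"epc_vcpu": core, "os_vcpu": sibling(core)}

for vm in range(4):
    p = placement_for_vm(vm)
    print(f"VM#{vm}: <vcpupin vcpu='0' cpuset='{p['epc_vcpu']}'/>"
          f" <vcpupin vcpu='1' cpuset='{p['os_vcpu']}'/>")
```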

In the case of the traditional NIC, traffic is passed to and from the VMs through Virtio, using supplementary vhost processes, one for each interface. Both consume a considerable amount of CPU power. In addition, traffic is delivered from the NIC using IRQs, which also consume about 50% of a core. The VM with the traditional NIC therefore requires 3-4 dedicated cores to run efficiently without contending with other VMs for the same CPU resources.

BENCHMARKS FOR THE vEPC

In the test setup described above, the throughput, latency and CPU utilization benchmarks are measured while running the simulated 4G traffic through the emulated eNodeB and vEPC mobile core. For the comparison, the traditional NIC and the Netronome Agilio CX are used on the eNodeB and vEPC servers in two separate measurement runs with the same traffic patterns.

Throughput Benchmarks
Throughput is measured in millions of packets per second (Mpps) for different packet sizes. Throughput increases with the number of VMs on the eNB and vEPC servers, since each VM can process only a limited number of packets. With the traditional NIC on the eNB and vEPC servers, each VM needs 3.3 cores because it uses Virtio. These 3.3 cores are used for the following functions:

- The application thread inside the VM, which is mapped to one of the VM threads on the host
- Vhost 1, relaying traffic between one port on the NIC and the hypervisor
- Vhost 2, relaying traffic between the other port on the NIC and the hypervisor
- Interrupt requests (IRQs) arriving from the NIC

Since 3.3 cores are consumed by a single VM, only 7 VMs can be created on a dual-socket, 24-core Xeon server.

Agilio CX
Because the Agilio CX runs the Agilio OVS offload software and uses PCI SR-IOV, it does not incur the vhost and interrupt-processing overhead of the traditional NIC. A VM on the Dell server with the Agilio CX takes only about 1.1 CPU cores. Therefore, on a dual-socket, 24-core Dell R730 server, up to 22 VMs can be implemented.

Figure 5. Throughput for the traditional NIC and the Agilio CX
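For reference, the bandwidth and VM-count figures discussed below can be reproduced from the Mpps rates and per-VM core budgets with a short calculation. The sketch assumes 128-byte Ethernet frames and 20 bytes of per-frame wire overhead (preamble, SFD and inter-frame gap); the overhead figure is an assumption, not something stated in the test report.

```python
WIRE_OVERHEAD_BYTES = 20

def mpps_to_gbps(mpps: float, frame_bytes: int = 128) -> float:
    """Convert a packet rate in Mpps to wire bandwidth in Gb/s."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return mpps * 1e6 * bits_per_frame / 1e9

print(mpps_to_gbps(2.5))    # traditional NIC peak: ~2.96 Gb/s (quoted as 2.9 Gb/s)
print(mpps_to_gbps(13.0))   # Agilio CX peak:      ~15.4 Gb/s (quoted as ~15.5 Gb/s)

# VM density follows from the per-VM core budget on a 24-core server:
print(24 / 3.3)             # traditional NIC (Virtio + vhost + IRQs): ~7 VMs
print(24 / 1.1)             # Agilio CX (OVS offload + SR-IOV): ~22 VMs
```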

As seen in Figure 5, for 128B packets with the maximum number of VMs (7), the traditional NIC achieves 2.5 Mpps (2.9 Gb/s). The Agilio CX SmartNIC reaches 13 Mpps (15.5 Gb/s for 128B packets) with up to 22 VMs on the same server. With the Agilio CX, the throughput growth rate flattens after the 16th VM and stays at that level even as more VMs are added. This indicates a bandwidth bottleneck; one possible cause is the relatively slow QPI bus between the NUMA nodes, which congests the traffic.

Latency Benchmarks
Latency is measured end-to-end (UE server to PDN server), because in the current setup it is not possible to break the latency down into the individual components of the data path, such as NIC latency, application software latency, etc. The cumulative latency is the latency distribution integrated over the range of latencies. This is a common way of presenting distributions; it is both compact and informative, and it makes it easy to read off the median and percentiles. The cumulative frequency represents the percentage of all packets whose latency is at or below the value on the X-axis. As shown in Figure 6 below, for the traditional NIC, 90% of the packets of all sizes have a latency below 1.5 milliseconds.

Agilio CX SmartNIC
With the Agilio CX, the observations are somewhat different but match the expected behavior. All packet sizes except 128B saturate the available 20G bandwidth, so packet sizes greater than 128B see higher latency as packets queue up in the transmit queue.

Figure 6. Latency for the traditional NIC and the Agilio CX

For the 128B packet size, more than 90% of packets see a round-trip latency of less than 1 ms, which is about 50% better than that observed with the traditional NIC.
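The way the cumulative latency curves are read can be illustrated with a few lines of code. The sketch below uses synthetic latency samples (the raw per-packet measurements are not published with this paper) only to show how the median, the percentiles and the "share of packets under X ms" figures quoted above are obtained from a latency distribution.

```python
import numpy as np

rtt_ms = np.random.exponential(scale=0.4, size=100_000)   # synthetic RTT samples

def cumulative_frequency(samples: np.ndarray, x_ms: float) -> float:
    """Fraction of packets with RTT <= x_ms (the Y-axis of Figure 6)."""
    return float(np.count_nonzero(samples <= x_ms)) / samples.size

print(f"median RTT:         {np.percentile(rtt_ms, 50):.2f} ms")
print(f"90th percentile:    {np.percentile(rtt_ms, 90):.2f} ms")
print(f"share under 1.0 ms: {cumulative_frequency(rtt_ms, 1.0):.1%}")
```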

CPU Load/Utilization Benchmarks
CPU load benchmarking on these servers is tricky: the CPU cores running the VM with the TIP stack and vEPC application software are almost fully loaded, while the cores handling the side-band processing and interrupt processing are lightly loaded. For the CPU load benchmarking, we collected the loads of the 24 CPUs and averaged them to plot the average CPU load on the left Y-axis. On the right Y-axis, the relative load is shown; the relative load is the CPU load per Mpps.

The load average (LA) is a number reported alongside the per-core CPU load, and it serves as a single, global measure of the load on, and contention for, the CPUs. A core that is fully loaded has a load average of 1, an underloaded core an LA < 1, and an overloaded core an LA > 1. An LA > 1 means that the core is fully occupied while other processes are also waiting for a CPU time slice, so the ideal LA is just below 1 per core. The Dell R730 servers used for the benchmarking have 24 cores and 48 vCPUs, so a fully loaded server would have an LA of 24. The hyperthreads are counted in the LA total as well, but they cannot be used, because doing so would degrade the per-VM performance and make the VMs interdependent. The LA is normalized as a percentage of the total, so a reported LA of 6 translates into 25% loaded (6/24*100). The relative LA is simply the LA divided by the throughput, so it represents the CPU load per Mpps of throughput.

Figure 7. CPU load for the traditional NIC and the Agilio CX

Figure 7 shows the CPU load on the vEPC server. As seen in the left-hand graph (traditional NIC), for 128B packets (solid red line) the load average at the highest rate achieved (2.5 Mpps) is about 45%, and the relative load is 18% per Mpps.

Agilio CX SmartNIC
CPU loads for the Agilio CX SmartNIC are shown in the right-hand graph. Looking at the CPU load for 128B packets (solid red line), the load average at the peak achieved rate (13 Mpps, or 15 Gb/s) is 52%, and the relative load average at the peak rate is 4% (load per Mpps). For a comparison with the traditional NIC, consider the 2.5 Mpps throughput point for the Agilio CX in Figure 7: at that rate the load average is 12% and the relative load average is 4% (CPU load per Mpps).
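The load-average bookkeeping described above is simple enough to restate as code. The sketch below assumes the 24-core Dell R730 host used in the tests and reuses the 128B-packet figures quoted in this section.

```python
PHYSICAL_CORES = 24

def normalized_load_pct(load_average: float) -> float:
    """Express a reported load average as a percentage of all physical cores."""
    return load_average / PHYSICAL_CORES * 100.0

def relative_load_pct_per_mpps(load_average: float, mpps: float) -> float:
    """CPU load per Mpps of throughput (the right-hand Y-axis of Figure 7)."""
    return normalized_load_pct(load_average) / mpps

print(normalized_load_pct(6.0))                     # LA of 6 -> 25%, the example from the text
print(relative_load_pct_per_mpps(0.45 * 24, 2.5))   # traditional NIC: ~18% per Mpps at 2.5 Mpps
print(relative_load_pct_per_mpps(0.52 * 24, 13.0))  # Agilio CX: ~4% per Mpps at 13 Mpps
```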

CONCLUSION

The vEPC measurement results lead to the following conclusions:

- Traditional NIC: the peak rate achievable for the vEPC load is 2.5 Mpps (2.9 Gb/s).
- Agilio CX: the peak rate achievable for the vEPC load is 13 Mpps (15 Gb/s).

Comparative results:

- Throughput: a server with the Agilio CX achieves 5X the throughput of the traditional NIC (15 vs. 2.9 Gb/s).
- Number of VMs: a server with the Agilio CX can host 3X more VMs than one with the traditional NIC (22 vs. 7).
- Latency: round-trip latencies are about 50% better than with the traditional NIC (1.0 ms vs. 1.5 ms).
- CPU load: at the same line rate (2.9 Gb/s), the observed CPU load is roughly 4X lower than with the traditional NIC (45% vs. 12% average load and 18% vs. 4% relative load).

The performance improvements of the Agilio CX over the traditional NIC are compared side by side in Figure 8.

PARAMETER                                            TRADITIONAL NIC    AGILIO CX
Traffic rate, 128B packets                           2.9 Gb/s           15 Gb/s
Peak throughput at SGW-U instance (GTP), 128B        2.5 Mpps           13 Mpps
Peak throughput at PGW-U instance (GTP), 128B        2.5 Mpps           13 Mpps
Number of user plane (UP) VM instances               7                  22
Relative CPU utilization (on the UP server), 128B    18% / Mpps         4% / Mpps
Latency UE-PDN, RTT, 128B                            1.5 ms             1.0 ms

Note: The parameters are measured at a 20G line rate (of the total 40G available) for both the traditional NIC and the Agilio CX.

Figure 8. Performance parameters for the traditional NIC and the Agilio CX

From these results it can be seen that the Netronome Agilio SmartNIC, together with Agilio software using SR-IOV and OVS offload, provides more than a 5X increase in vEPC performance. Agilio SmartNICs take advantage of OVS data path offload with SR-IOV support for direct communication from the VMs to the Agilio CX hardware. This saves considerable CPU resources and enables 3X more VMs per server compared to a traditional NIC, which translates into significantly lower data center server capital expenditure.

Netronome | 2903 Bunker Hill Lane, Suite 150, Santa Clara, CA 95054 | Tel: 408.496.0022 | Fax: 408.586.0002 | www.netronome.com

2016 Netronome. All rights reserved. Netronome is a registered trademark and the Netronome Logo is a trademark of Netronome. All other trademarks are the property of their respective owners. WP-vEPC-1/2017