Intel vcmts Reference Dataplane v Install Guide


Install Guide
Copyright © 2018 Intel Corporation. All Rights Reserved.
Revision: 001

Revision History

Revision Number | Description | Revision Date
1 | Install guide for vcmts reference dataplane release v | Dec 19th

Introduction

Related Information

This install guide relates to v of the Intel vcmts reference dataplane package, which may be downloaded from Intel 01.org.

Legal Information

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling Intel, or by visiting Intel's Web site.

© 2018 Intel Corporation. All rights reserved. Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

Contents

1 Introduction
  1.1 System overview
    1.1.1 vcmts Reference Dataplane Pipeline
    1.1.2 vcmts Reference Dataplane NFV Stack
    1.1.3 CPU Core Management
  1.2 Network Interface Configuration
  1.3 Memory Module Configuration
  1.4 System Configuration
  1.5 Release Package Contents
2 Installation Guide
  2.1 vcmts dataplane server preparation
    2.1.1 Configure system BIOS settings
    2.1.2 Check Fortville NIC Firmware and Driver versions
    2.1.3 Check disk space settings
    2.1.4 Check OS and Linux kernel versions
    2.1.5 Disable automatic Linux package updates
    2.1.6 Configure Linux GRUB settings
    2.1.7 Install public key for remote access to vcmts traffic-generator server
    2.1.8 Load the vcmts reference dataplane package
    2.1.9 Configure proxy servers
    2.1.10 Install Linux packages
    2.1.11 Configure Docker
    2.1.12 Install the vcmts dataplane control tool
  2.2 vcmts traffic-generator server preparation
    2.2.1 Configure system BIOS settings
    2.2.2 Check Fortville NIC Firmware and Driver versions
    2.2.3 Check disk space settings
    2.2.4 Check OS and Linux kernel versions
    2.2.5 Disable automatic Linux package updates
    2.2.6 Configure Linux GRUB settings
    2.2.7 Install public key for remote access to vcmts dataplane server
    2.2.8 Load the vcmts reference dataplane package
    2.2.9 Configure proxy servers
    2.2.10 Install Linux packages
    2.2.11 Configure Docker
    2.2.12 Install vcmts dataplane configuration and control tools
  2.3 Configure vcmts platform settings
    2.3.1 vcmts dataplane platform configuration
    2.3.2 vcmts traffic-generator platform configuration

  2.4 Install Kubernetes
    2.4.1 Install Kubernetes master and node on the vcmts traffic-generator server
    2.4.2 Install Kubernetes node on the vcmts dataplane server
    2.4.3 Verify Kubernetes installation
  2.5 Install vcmts dataplane node software
    2.5.1 Install QAT drivers
    2.5.2 Install DPDK
    2.5.3 Install CMK
    2.5.4 Build Kubernetes infrastructure components on the vcmts dataplane server
    2.5.5 Build telemetry components on the vcmts dataplane server
    2.5.6 Build vcmts dataplane application container
  2.6 Install vcmts traffic-generator node software
    2.6.1 Install DPDK
    2.6.2 Install Helm
    2.6.3 Build Kubernetes infrastructure components on the traffic-generator server
    2.6.4 Build Pktgen application container
  2.7 Configure vcmts dataplane
    2.7.1 Configure vcmts dataplane service-group options
    2.7.2 Configure vcmts dataplane power profiles
  2.8 Run vcmts dataplane and traffic-generator software
    2.8.1 Start infrastructure and application instances
    2.8.2 How to stop infrastructure and application instances
    2.8.3 How to restart the statistics daemonset
  2.9 System verification
    2.9.1 Check Kubernetes dashboard
    2.9.2 Query service-group options
    2.9.3 Start dataplane traffic
    2.9.4 Check the Grafana dashboard
  2.10 System reboot
3 Upgrade Procedure
  3.1 vcmts traffic-generator server preparation
    3.1.1 Update Linux GRUB settings
    3.1.2 Stop all Kubernetes-orchestrated components
    3.1.3 Backup current installation on vcmts traffic-generation server
    3.1.4 Load the new vcmts reference dataplane package
    3.1.5 Restore proxy server settings
    3.1.6 Upgrade vcmts dataplane configuration and control tools
  3.2 vcmts dataplane server preparation
    3.2.1 Update Linux GRUB settings
    3.2.2 Backup current installation on vcmts dataplane server
    3.2.3 Load the new vcmts reference dataplane package
    3.2.4 Restore proxy server settings

    3.2.5 Upgrade vcmts dataplane control tool
  3.3 Re-configure vcmts platform settings
    3.3.1 vcmts dataplane platform configuration
    3.3.2 vcmts traffic-generator platform configuration
  3.4 Apply security updates for Kubernetes
    3.4.1 Kubernetes updates on vcmts traffic-generation server
    3.4.2 Kubernetes updates on vcmts dataplane server
  3.5 Upgrade vcmts traffic-generator node software
    3.5.1 Upgrade DPDK
    3.5.2 Re-build Kubernetes infrastructure components on the traffic-generator server
    3.5.3 Re-build Pktgen application container
  3.6 Upgrade vcmts dataplane node software
    3.6.1 Upgrade DPDK
    3.6.2 Re-build Kubernetes infrastructure components on the vcmts dataplane server
    3.6.3 Re-build telemetry components on the vcmts dataplane server
    3.6.4 Re-build vcmts dataplane application container
  3.7 Re-configure vcmts dataplane service-group options
  3.8 Run vcmts dataplane and traffic-generation software
4 Appendix
  4.1 Sample platform configurations
    4.1.1 vcmts common platform configuration
    4.1.2 vcmts dataplane platform configuration
    4.1.3 vcmts traffic-generator platform configuration
  4.2 Sample Kubernetes infrastructure configuration file
  4.3 Sample Helm chart for vcmts PODs
    4.3.1 vcmts dataplane helm chart for 16 service-groups
    4.3.2 vcmts pktgen helm chart for 16 service-groups
  4.4 Sample CMK configuration file

List of Figures

Figure 1 Intel vcmts Reference Dataplane Sample Platform for 16 service-groups at max-load
Figure 2 Intel vcmts reference dataplane - packet-processing pipeline
Figure 3 Intel vcmts reference dataplane NFV stack
Figure 4 Intel vcmts Dataplane Reference Platform core layout for 16 service-groups at max-load
Figure 5 Intel vcmts Dataplane Reference Platform core layout for 24 service-groups at max-load
Figure 6 Intel vcmts Dataplane Reference Platform core layout for 32 service-groups at low-speed
Figure 7 Intel vcmts Dataplane Reference Platform Power Management Architecture

List of Tables

Table 1 System BIOS settings - vcmts Dataplane server
Table 2 Linux GRUB settings - vcmts Dataplane server
Table 3 System BIOS settings - vcmts Traffic-generator server
Table 4 Linux GRUB settings - vcmts Traffic-generator server
Table 5 vcmts dataplane configurable service-group options
Table 6 vcmts dataplane fixed service-group options
Table 7 Service-group bandwidth guide
Table 8 Service-group configuration tool - command-line options

1 Introduction

This document describes, step by step, how to install and run the Intel vcmts reference dataplane application in a Kubernetes-orchestrated Linux Container environment. This includes a DPDK Pktgen-based cable traffic-generation system for upstream and downstream traffic simulation.

1.1 System overview

The Intel vcmts reference dataplane environment consists of a vcmts dataplane node and a traffic-generation node. The reference platform for both of these nodes, as presented in this guide, is a 2U server with dual Intel Xeon Gold 6148 processors and four 25G Intel Ethernet Network Adapter XXV710-DA2 dual-port network interface cards (NICs). The vcmts dataplane node described in this document also includes an Intel QuickAssist Technology (QAT) 8970 PCIe card. Note that Intel 10G quad-port X710-DA4 NICs may optionally be used instead of 25G NICs, and if enough PCI slots are available on the platform, up to 3 NICs per CPU may be used.

Note also that the system described in this document should serve as a sample system configuration. Servers based on other Intel Xeon scalable processors with different core-counts are also supported. More NICs may be added to the system for a greater amount of I/O as required, and the system can be configured with or without Intel QuickAssist cards.

On the vcmts dataplane node, multiple Docker* containers host DPDK-based DOCSIS MAC upstream and downstream dataplane processing for individual cable service-groups (SGs). On the vcmts traffic-generation node, Docker* containers host DPDK Pktgen-based traffic-generation instances which simulate DOCSIS traffic into corresponding vcmts dataplane instances. The entire system is orchestrated by Kubernetes*, under which each vcmts dataplane POD represents a service-group and has separate containers for upstream and downstream DOCSIS MAC dataplane processing. The DOCSIS control-plane is simulated through a JSON configuration file containing subscriber cable-modem information. Upstream scheduling is simulated by synthetically generated cable-modem DOCSIS stream segments.

Telemetry functions run in Docker* containers as a DaemonSet (a singleton POD) under Kubernetes*. A comprehensive set of vcmts dataplane statistics and platform KPIs is gathered by the open-source collectd* daemon and stored in an InfluxDB time-series database. A Grafana* dashboard is provided for visualization of these metrics based on InfluxDB queries.

The sample platform configuration shown in Figure 1 supports vcmts dataplane processing for 16 service-groups. Such a system can be used to measure maximum traffic load per service-group, with up to 6 OFDM channels (at 1.89 Gbps bandwidth each) allocated per service-group.

Figure 1 Intel vcmts Reference Dataplane Sample Platform for 16 service-groups at max-load

NOTE: The above shows a system configuration with dual-port 25G NICs. In the case of 10G quad-port NICs, there would be 4 PFs per NIC and 2 VFs per PF.

In the case of a low-speed configuration with 1 OFDM channel per service-group, twice the number of service-groups could be configured. In this case there would be 8 VFs per PF for 25G NICs, and 4 VFs per PF for 10G NICs. It is assumed that upstream traffic rates are 10% of downstream traffic rates.

In the above sample platform there are 20 cores per CPU, and each CPU core contains two hyperthreads. For the max-load configuration, an entire physical CPU core is allocated for downstream data processing of a single service-group. For upstream data processing, a single physical core handles upstream traffic for two service-groups. For a low-speed configuration (with 1 OFDM channel per service-group), downstream data processing for two service-groups runs on the same core, and a single physical core handles upstream traffic for four service-groups. See section 1.1.3 for CPU core allocation details.

1.1.1 vcmts Reference Dataplane Pipeline

The core component of the Intel vcmts reference dataplane release package is a reference implementation of a DOCSIS MAC dataplane, also known as the vcmts dataplane. Intel has developed a DOCSIS MAC dataplane compliant with DOCSIS 3.1 specifications (notably the MULPI, DEPI, UEPI and SEC specifications) and based on the DPDK packet-processing framework. The key purpose of this development is to provide a tool for characterization of vcmts dataplane packet-processing performance and power-consumption on Intel Xeon platforms.

The vcmts upstream and downstream packet-processing pipelines implemented by Intel are shown in Figure 2 below. Both upstream and downstream are implemented as a two-stage pipeline of upper-MAC and lower-MAC processing. The DPDK API used for each significant DOCSIS MAC dataplane function is also shown below.

Figure 2 Intel vcmts reference dataplane - packet-processing pipeline

Each of the Kubernetes PODs shown in the reference system in Figure 1 contains an instantiation of the above upstream and downstream pipelines in separate Linux containers. Each POD thus handles all subscriber traffic for a specific service-group, which covers a group of cable subscribers in a particular geographical area.

1.1.2 vcmts Reference Dataplane NFV Stack

The Intel vcmts reference dataplane runs within an industry-standard Linux Container based NFV stack as shown in Figure 3 below. vcmts upstream and downstream dataplane processing for individual cable service-groups runs in Docker* containers on Ubuntu OS, allowing them to be instantiated and scaled independently. The entire system, including applications, telemetry, power-management and infrastructure management, is orchestrated by Kubernetes*, with Intel-developed plugins being used for resource management functions such as CPU core management and assignment of SR-IOV interfaces for NICs and QAT devices.

Figure 3 Intel vcmts reference dataplane NFV stack

1.1.3 CPU Core Management

CPU core allocation is managed by CMK (CPU Manager for Kubernetes), which is an open-source component provided by Intel. CMK manages two types of core-pool: a shared core-pool for containers that share physical cores (such as for upstream dataplane processing), and an exclusive core-pool for containers which require exclusive use of a physical core (such as for downstream dataplane processing at max-load).

In the sample configuration shown below, CPU cores are allocated separately for upstream and downstream dataplane for each service-group. CPU cores for processing of downstream traffic are allocated from the exclusive core-pool, and cores for processing of upstream traffic are allocated from the shared core-pool, with 2 upstream dataplane instances per core. The detailed CPU core layout for such a 16 service-group system is shown in Figure 4 below.

Figure 4 Intel vcmts Dataplane Reference Platform core layout for 16 service-groups at max-load

NOTE: For the above CPU core configuration, unused cores are reserved for compute resources that would be used by DOCSIS MAC components not running on Intel's reference vcmts platform, such as the Upstream scheduler, Control-plane, MAC management and standby dataplane instances for High-Availability.

If there are enough PCIe slots available on the system, two additional NICs may be added for more dataplane I/O, and the unused cores in Figure 4 may be used to handle this extra dataplane traffic. In this case up to 24 service-groups may be deployed to handle max traffic load per service-group. The detailed CPU core layout for such a 24 service-group system is shown in Figure 5 below.

Figure 5 Intel vcmts Dataplane Reference Platform core layout for 24 service-groups at max-load

NOTE: For the above CPU core configuration, all of the available 40 cores on the system are allocated, so there are no cores reserved for DOCSIS MAC components not running on Intel's reference vcmts platform, such as the Upstream scheduler, Control-plane, MAC management and HA. If using a 22 or 28 core CPU package, some cores may be reserved for such components.

For initial vcmts deployments there may be no more than a single OFDM channel available per service-group, so an entire CPU core may not be necessary to handle downstream traffic for a service-group. In this case it is desirable for 2 such service-groups running at low traffic speed to share a core for downstream traffic processing. In such a low-speed configuration, CPU cores for upstream and downstream dataplane traffic processing are both allocated from the shared core-pool, with 2 downstream instances per core and 4 upstream instances per core. The detailed CPU core layout for such a 32 service-group system is shown in Figure 6 below.

Figure 6 Intel vcmts Dataplane Reference Platform core layout for 32 service-groups at low-speed

NOTE: For the above CPU core configuration, unused cores are reserved for DOCSIS MAC components not running on Intel's reference vcmts platform, such as the Upstream scheduler, Control-plane, MAC management and HA. If there are enough PCIe slots available for two additional NICs, all of the available 40 cores on the system may be allocated so that there are no cores reserved for such components. In this case, it is possible to deploy up to 48 low-speed service-groups.
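The shared and exclusive pools described above are consumed at container launch time through CMK's isolate subcommand. The sketch below is illustrative only: the binary paths and worker names are assumptions, not taken from this release package.

# Pin a downstream worker to a core from the exclusive pool (illustrative paths)
/opt/bin/cmk isolate --conf-dir=/etc/cmk --pool=exclusive /opt/vcmtsd/ds-worker

# Run an upstream worker on the shared pool, where each core hosts two instances
/opt/bin/cmk isolate --conf-dir=/etc/cmk --pool=shared /opt/vcmtsd/us-worker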

1.2 Network Interface Configuration

The network interface ports of the vcmts dataplane and traffic-generation nodes shown in Figure 1 should be inter-connected by optical fiber cables via an Ethernet switch. Traffic should be routed between correlating Pktgen and vcmts dataplane instances by MAC learning in the switch, as the MAC addresses of vcmts dataplane and Pktgen ports are based on service-group IDs. Switch configuration is not covered in this document.

If an Ethernet switch is not available, the NIC ports of the vcmts dataplane and traffic-generation nodes may be connected directly by optical fiber cables. However, in this case care must be taken to connect the physical NIC ports of correlating Pktgen and vcmts dataplane application instances. When vcmts dataplane traffic is started as described later in this guide, an ARP request is sent from each traffic-generator instance to establish a link with its correlating vcmts dataplane instance.

Please note that NICs must be installed in appropriate CPU-affinitized PCI slots for balanced I/O (which for the sample configuration in Figure 1 means two NICs per CPU socket). The NIC layout for the system can be checked by running the command shown below. Note that for the example below only the 25G XXV710 NICs are used for vcmts dataplane traffic. Device ports with their most significant address bit unset (18:00.0, 18:00.1, 1a:00.0, 1a:00.1) are affinitized to CPU socket 0, while those with their most significant address bit set (86:00.0, 86:00.1, b7:00.0, b7:00.1) are affinitized to CPU socket 1. This system configuration is an example of balanced I/O.

lspci | grep Ethernet
18:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
18:00.1 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
1a:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
1a:00.1 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
3d:00.0 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 09)
3d:00.1 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 09)
81:00.0 Ethernet controller: Intel Corporation 82572EI Gigabit Ethernet Controller (Copper) (rev 06)
86:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
86:00.1 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
b7:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
b7:00.1 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
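The socket affinity of each port can also be confirmed directly from sysfs. This check is not part of the release package, just a standard Linux facility; the PCI addresses below are taken from the lspci example above.

# Print the NUMA node of a NIC port; expect 0 for socket-0 slots and 1 for socket-1 slots
cat /sys/bus/pci/devices/0000:18:00.0/numa_node
cat /sys/bus/pci/devices/0000:b7:00.0/numa_node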

1.3 Memory Module Configuration

It is very important to ensure that DRAM modules are installed correctly in the server so that all memory channels are utilized. For example, for the Intel Xeon Gold 6148 scalable processor there are six memory channels per CPU socket. In this case a minimum of 12 DRAM modules is required to utilize all memory channels. Furthermore, modules should be installed in the correct color-coded slots for optimum memory-channel utilization. The DRAM module layout on the system can be checked by running the command below.

lshw -class memory

Correct memory-channel utilization can also be verified using the pcm-memory tool, which is provided by Intel. Below is the layout of the report for a correctly configured system using all 12 memory channels; every channel should show non-zero read and write throughput (the measured values are omitted here).

/opt/intel/pcm/pcm-memory.x
Socket 0 - Memory Channel Monitoring        Socket 1 - Memory Channel Monitoring
Mem Ch 0: Reads (MB/s) / Writes (MB/s)      Mem Ch 0: Reads (MB/s) / Writes (MB/s)
Mem Ch 1: Reads (MB/s) / Writes (MB/s)      Mem Ch 1: Reads (MB/s) / Writes (MB/s)
Mem Ch 2: Reads (MB/s) / Writes (MB/s)      Mem Ch 2: Reads (MB/s) / Writes (MB/s)
Mem Ch 3: Reads (MB/s) / Writes (MB/s)      Mem Ch 3: Reads (MB/s) / Writes (MB/s)
Mem Ch 4: Reads (MB/s) / Writes (MB/s)      Mem Ch 4: Reads (MB/s) / Writes (MB/s)
Mem Ch 5: Reads (MB/s) / Writes (MB/s)      Mem Ch 5: Reads (MB/s) / Writes (MB/s)
NODE 0 Mem Read (MB/s)                      NODE 1 Mem Read (MB/s)
NODE 0 Mem Write (MB/s)                     NODE 1 Mem Write (MB/s)
NODE 0 P. Write (T/s)                       NODE 1 P. Write (T/s)
NODE 0 Memory (MB/s)                        NODE 1 Memory (MB/s)
System Read Throughput (MB/s)
System Write Throughput (MB/s)
System Memory Throughput (MB/s)
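The populated DIMM slots can also be listed with dmidecode, a standard Linux utility that is not part of this release package; the filter below simply hides empty slots.

# Show the size and slot locator of each populated DIMM; 12 entries are expected
# for the 12 x 8GB configuration described above
dmidecode -t memory | grep -i "size" | grep -v "No Module Installed"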

1.4 System Configuration

The following is a sample system configuration for the Intel vcmts reference dataplane.

vcmts Node

Hardware
  CPU: Intel Xeon Gold 6148 Dual Processor, 2.4 GHz, 20 Cores
  Memory: 12 x 8GB DDR4
  Hard Drive: Intel SSD DC S3520 Series (480G)
  Network Interface Card: 4 x Intel Ethernet Network Adapter XXV710-DA2 25GbE, or 4 x Intel Ethernet Network Adapter X710-DA4 10GbE (NOTE: 6 NICs may be used if sufficient PCIe slots are available)
  Crypto Acceleration Card: Intel QuickAssist Adapter 8970 Card (50Gbps)

Software
  Host OS: Ubuntu* 18.04, Linux* Kernel v4.15.x
  DPDK: DPDK v18.11, intel-ipsec-mb v0.51
  Linux Container: Docker* v18.06
  Container Orchestrator: Kubernetes* v1.11, CMK v1.3.0
  Statistics: Collectd* v5.8.0 (+) (NOTE: commit id 67d1044ac1307b1bb385d13d0f0bccaadd5a63fe for pmu patch)

vcmts Traffic Generator Node

Hardware
  CPU: Intel Xeon Gold 6148 Dual Processor, 2.4 GHz, 20 Cores
  Memory: 12 x 8GB DDR4
  Hard Drive: Intel SSD DC S3520 Series (480G)
  Network Interface Card: 4 x Intel Ethernet Network Adapter XXV710-DA2 25GbE, or 4 x Intel Ethernet Network Adapter X710-DA4 10GbE (NOTE: 6 NICs may be used if sufficient PCIe slots are available)

Software
  Host OS: Ubuntu* 18.04, Linux* Kernel v4.15.x
  DPDK: DPDK v18.11
  Linux Container: Docker* v18.06
  Traffic Generator: DPDK Pktgen v

1.5 Release Package Contents

The directory tree of the Intel vcmts Reference Dataplane release package is shown below.

`-- vcmts
    |-- kubernetes
    |   |-- cmk
    |   |-- dashboard
    |   |-- docker
    |   |   |-- docker-image-cloud-init
    |   |   |-- docker-image-pktgen-init
    |   |   |-- docker-image-power-mgr
    |   |   `-- docker-image-sriov-dp
    |   |-- helm
    |   |   |-- pktgen
    |   |   |   |-- resources
    |   |   |   `-- templates
    |   |   |-- vcmts-infra
    |   |   |   |-- resources
    |   |   |   `-- templates
    |   |   `-- vcmtsd
    |   |       |-- resources
    |   |       `-- templates
    |   `-- install
    |       |-- config
    |       `-- services
    |-- pktgen
    |   |-- docker
    |   |   `-- docker-image-pktgen
    |   |       `-- config
    |   `-- patches
    |-- stats
    |   |-- collectd
    |   `-- docker
    |       |-- docker-image-collectd
    |       `-- docker-image-grafana
    |           `-- dashboards
    |-- tools
    |   |-- setup
    |   |-- vcmtsd-config
    |   `-- vcmtsd-ctl
    |-- traffic-profiles
    `-- vcmtsd
        |-- cycle-capture
        |-- docker
        |   `-- docker-image-vcmtsd
        |       `-- config
        |-- patches
        `-- stats

The following is a description of the main directories of the release package and their contents.

"vcmts" (top-level) directory: Contains the following release documentation:
  README : release package details
  ReleaseNotes.txt : release notes relevant to the package

"vcmtsd" subdirectory: Contains C source-code modules for the vcmts dataplane application, and the following sub-directories:
  stats : contains C source-code modules for statistics collection
  cycle-capture : contains C source-code modules for CPU cycle measurement
  docker : contains files to build the docker image for the vcmts dataplane application
  patches : contains the vcmts dataplane patch file for the DPDK scheduler library

"tools" subdirectory: Contains tools to configure the vcmts reference dataplane and traffic-generation environments, in the following sub-directories:
  setup : contains scripts to configure the vcmts dataplane and traffic-generation environments
  vcmtsd-config : contains Python source modules for a tool to configure vcmts dataplane and traffic-generator platform environment settings and vcmts service-group options
  vcmtsd-ctl : contains Python source modules for a tool to query vcmts service-group configuration and to control traffic simulation

"stats" subdirectory: Contains files to build the telemetry components of the vcmts reference dataplane platform, in the following sub-directories:
  docker : contains files to build docker images for Collectd, InfluxDB and Grafana
  collectd : contains configuration files for Collectd

"pktgen" subdirectory: Contains files to build the traffic-simulation components of the vcmts reference dataplane platform, in the following sub-directories:
  docker : contains files to build the docker image for the vcmts traffic generator (based on the DPDK Pktgen application)
  patches : contains patches for DPDK Pktgen which are required for vcmts traffic simulation

20 "traffic-profiles" subdirectory: Contains a zip package with control-plane configuration files for a number of vcmts service-group scenarios and correlating PCAP traffic files for upstream and downstream traffic simulation. "kubernetes" subdirectory: Contains files to build and install Kubernetes infrastructure required for the Intel(R) vcmts reference dataplane system, in the following sub-directories: install : contains files required for Kubernetes installation cmk : contains files required for CMK (CPU Manager for Kubernetes) helm : contains files required for Helm, which is used for configuration of vcmts dataplane and Pktgen Kubernetes POD's dashboard : contains files required for Kubernetes WebUI dashboard docker : contains files to build docker images for Kubernetes infrastructure components 20

2 Installation Guide

2.1 vcmts dataplane server preparation

A number of steps are required to prepare the vcmts dataplane server for software installation. Please note: you must log in as root user to the vcmts dataplane server to perform these steps.

2.1.1 Configure system BIOS settings

The following system BIOS settings are required on the vcmts dataplane server.

Table 1 System BIOS settings - vcmts Dataplane server

BIOS Setup Menu | BIOS Option | Setting
Advanced->Processor Configuration | Intel(R) Hyper-Threading Tech | Enabled
Advanced->Integrated IO Configuration | Intel(R) VT for Directed I/O | Enabled
Advanced->Power & Performance | CPU Power and Performance profile | Balanced Performance
Advanced->Power & Performance->CPU P State Control | Intel(R) Turbo Boost Technology | Disabled
Advanced->Power & Performance->CPU P State Control | Energy Efficient Turbo | Disabled
Advanced->Power & Performance->CPU P State Control | Enhanced Intel SpeedStep Technology | Enabled
Advanced->Power & Performance->CPU C State Control | C1E | Enabled

2.1.2 Check Fortville NIC Firmware and Driver versions

Check the Fortville NIC i40e base driver and firmware versions by running ethtool as follows for a dataplane network interface, e.g. for network interface enp26s0f2:

ethtool -i enp26s0f2
driver: i40e
version: <driver-version>
firmware-version: <firmware-version>
expansion-rom-version:
bus-info: 0000:1a:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

The driver version should be or later. The firmware version should be 6.01 or later. If an update is needed, the required Fortville NIC driver and firmware can be downloaded at the links below.

Fortville NIC Driver
Ethernet-Network-Connections-Under-Linux-?product=24586
Fortville NIC Firmware

2.1.3 Check disk space settings

At least 400GB of disk space should be available on the vcmts dataplane server for the installation of the vcmts reference dataplane software and infrastructure components. See typical disk info below for a fully installed vcmts dataplane server.

df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
udev         11G     0    11G    0%  /dev
tmpfs       9.4G  875M   8.5G   10%  /run
/dev/sdb1   440G  188G   229G   46%  /
tmpfs        47G     0    47G    0%  /dev/shm
tmpfs       5.0M     0   5.0M    0%  /run/lock
tmpfs        47G     0    47G    0%  /sys/fs/cgroup

It is recommended to disable swap space, as described below. First check if swap space is enabled by running the following command.

blkid
/dev/sda1: UUID=" b8-45a5-891d-e76167ee876d" TYPE="ext4" PARTUUID="ee39ea61-01"
/dev/sda5: UUID="34c0a b-4b28-abcb-8ac33c30b819" TYPE="swap" PARTUUID="ee39ea61-05"

If there is an entry such as TYPE="swap" in the above, this needs to be disabled by running the following command.

swapoff -a

Furthermore, swap space should be permanently disabled by commenting out any swap entry in the /etc/fstab file.

sed -i.bk '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

On reboot, the disabling of swap space can be verified by running the following command, which should display the output below.

free -h
      total  used  free  shared  buff/cache  available
Mem:    62G   57G  239M    2.4M        4.7G       4.4G
Swap:    0B    0B    0B

2.1.4 Check OS and Linux kernel versions

The recommended OS distribution is Ubuntu 18.04 LTS, and the recommended Linux kernel version is 4.15.x or later. Run the following command to check the OS distribution version.

lsb_release -a

The following output is expected.

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic

Run the following command to check the kernel version of the system:

uname -r

The following type of output is expected:

<kernel-version>-generic

If a kernel version older than 4.15.x is displayed, the required kernel can be installed by running the following:

apt-get update

2.1.5 Disable automatic Linux package updates

Automatic package updates must be disabled by editing the system files /etc/apt/apt.conf.d/10periodic and /etc/apt/apt.conf.d/20auto-upgrades as follows, to set the appropriate APT configuration option.

APT::Periodic::Update-Package-Lists "0";

If the 20auto-upgrades file does not exist, it can be created by running the following command.

dpkg-reconfigure --priority=low unattended-upgrades

Select "Yes" to create the file, then modify it as described above.

2.1.6 Configure Linux GRUB settings

The following Linux GRUB settings are required on the vcmts dataplane server.

Table 2 Linux GRUB settings - vcmts Dataplane server

Setting | Description
default_hugepagesz=1G hugepagesz=1G hugepages=72 | Huge-page memory size and number of pages reserved for DPDK applications
intel_pstate=disable | Disable hardware control of P-states
intel_iommu=on iommu=pt | Enable SR-IOV
isolcpus=2-19,22-39,42-59,62-79 | Isolate vcmts dataplane cores from the Linux kernel task scheduler (NOTE: based on a dual 20-core CPU)
nr_cpus=80 | Total number of logical cores on the system (aka hyperthreads)

NOTE: The sample isolcpus and nr_cpus settings shown here are for a 20-core dual-processor CPU package. If not using a 20-core dual-processor CPU, the nr_cpus and isolcpus settings need to be adapted to the core layout of the CPU package being used, as shown in the instructions that follow below. Note also that for the isolcpus setting, all except the first 2 cores on each CPU in the package must be isolated from the Linux kernel task scheduler.

Based on the above table, for a 20-core dual-processor CPU package the Linux kernel GRUB file /etc/default/grub should be edited as follows to set the appropriate GRUB_CMDLINE_LINUX options.

GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=72 intel_pstate=disable intel_iommu=on iommu=pt isolcpus=2-19,22-39,42-59,62-79 nr_cpus=80"

It is very important to verify the core layout of the CPU package being used on the target system and to adapt the isolcpus and nr_cpus GRUB settings based on this. The core layout of the CPU package can be verified by running the lscpu command as shown below. The output shown here is for a 20-core dual-processor CPU package.

lscpu | grep "CPU(s):"
CPU(s): 80
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79

Below is the output for a 22-core dual-processor CPU package. In this case the isolcpus and nr_cpus settings would be: isolcpus=2-21,24-43,46-65,68-87 nr_cpus=88

lscpu | grep "CPU(s):"
CPU(s): 88
NUMA node0 CPU(s): 0-21,44-65
NUMA node1 CPU(s): 22-43,66-87

Once editing of the Linux kernel GRUB file is complete, run the commands below to compile the GRUB configuration and reboot the server for the updated settings to take effect:

update-grub
reboot
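After the reboot, the kernel command line and huge-page reservation can be spot-checked with standard Linux commands; this verification step is an addition to the procedure, not part of the release package.

# Confirm the GRUB options were applied to the running kernel
cat /proc/cmdline

# Confirm that the 72 huge-pages are reserved
grep HugePages_Total /proc/meminfo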

2.1.7 Install public key for remote access to vcmts traffic-generator server

A public key must be installed to allow remote access from the vcmts dataplane server to the traffic-generator server. Replace the hostname entry below with the actual vcmts traffic-generator server hostname, and select the default options at each prompt.

ssh-keygen -b <key-size> -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@trafficgen-hostname

2.1.8 Load the vcmts reference dataplane package

Next, download the vcmts reference dataplane package from 01.org to the vcmts dataplane server and extract it into the root directory of the installation, which is assumed to be /opt below.

cd /opt
wget <package-url>
tar -zxvf intel-vcmtsd-<version>.tar.gz
cd vcmts
ls -lr

NOTE: If downloading directly is unsuccessful, visit the Intel Access Network Dataplanes site on 01.org and get the package from the Downloads section.

All files from the vcmts reference dataplane package, as described in section 1.5, "Release Package Contents", should now have been extracted to the /opt/vcmts directory.

Set the MYHOME and VCMTSD_HOST environment variables as follows (assuming the vcmts reference dataplane package will be installed in the /opt directory).

export VCMTSD_HOST=y
export MYHOME=/opt

These should also be added as environment settings to the root bashrc file ~/.bashrc (again assuming that the vcmts reference dataplane release package has been installed into /opt).

export VCMTSD_HOST=y
export MYHOME=/opt

2.1.9 Configure proxy servers

Please note: proxy configuration is different for each installation environment, so care should be taken to understand the required proxy settings for a particular environment, as misconfiguration may greatly disrupt this installation. The settings below need to be configured if the vcmts dataplane server is behind a proxy. Note that proxy settings must also be applied as part of the Docker installation steps.

Firstly, configure the HTTP proxy servers and the no_proxy setting in the $MYHOME/vcmts/tools/setup/proxy.sh file. The example entries below should be replaced with the actual proxy address and port number and the actual hostname of the vcmts dataplane server.

export http_proxy=http://<proxy-address>:<port>
export https_proxy=http://<proxy-address>:<port>
export no_proxy=localhost,<server-ip>,vcmtsd-hostname

For Linux package installations, HTTP proxy server entries must also be added to the /etc/apt/apt.conf file.

Acquire::http::Proxy "http://<proxy-address>:<port>";
Acquire::https::Proxy "http://<proxy-address>:<port>";

2.1.10 Install Linux packages

A number of Linux packages which are required for the vcmts reference dataplane runtime environment must be installed on the vcmts dataplane server. Run the environment function as shown below to install the Linux packages required for the vcmts dataplane server.

source $MYHOME/vcmts/tools/setup/env.sh
install_base_ubuntu_pkgs
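Before moving on to Docker configuration, the proxy settings from section 2.1.9 can be sanity-checked in the current shell; this quick check is an addition to the procedure, using only standard commands and the proxy.sh file named above.

# Load the proxy settings and confirm they are exported as expected
source $MYHOME/vcmts/tools/setup/proxy.sh
env | grep -i _proxy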

2.1.11 Configure Docker

The community edition of Docker should already have been installed in the previous section, "Install Linux packages". Some further steps are required to configure Docker to run behind a proxy server and to configure a DNS server address.

The following steps are required to configure Docker to run behind an HTTP or HTTPS proxy server. Create a systemd directory for the docker service.

mkdir -p /etc/systemd/system/docker.service.d/

Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf and replace the entries below with the actual proxy address and port number applicable to the system.

[Service]
Environment="HTTP_PROXY=http://<proxy-address>:<port>" "HTTPS_PROXY=http://<proxy-address>:<port>" "NO_PROXY=localhost,<server-ip>"

TCP port 2375 needs to be enabled to allow control of dockerized applications, e.g. by the vcmtsd control tool (see section 2.1.12). Add TCP port 2375 to the ExecStart entry in the file /lib/systemd/system/docker.service as shown below.

..
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H fd://
..

Please note: enabling the above TCP port should only be done in a secure environment, such as a private lab network.

If the installation environment uses a domain name server, Docker may need to perform domain name resolution when installing packages during container builds. By default, Docker uses Google's domain name server (8.8.8.8); however, if a local domain name server exists, this should be specified as described below.

Create a file called /etc/docker/daemon.json with the content shown below, specifying the local DNS server IP address applicable to the installation environment.

{
    "dns": ["<local-dns-server-ip-address>", "8.8.8.8"]
}

Now start Docker by running the following commands.

systemctl daemon-reload
systemctl start docker
systemctl show --property=environment docker

If it is necessary to restart Docker at any point, the following command should be used.

systemctl restart docker

2.1.12 Install the vcmts dataplane control tool

The following two tools are provided with the vcmts reference dataplane release package.

vcmtsd-config :
- configure vcmts dataplane platform environment settings
- configure vcmts traffic-generator platform environment settings
- configure vcmts service-group options

vcmtsd-ctl :
- ARP the links between traffic-generator and vcmts dataplane instances
- control traffic simulation
- query vcmts service-group options

The vcmtsd configuration tool should only be run from the Kubernetes master (which also acts as the vcmts traffic-generator server); see section 2.2.12. However, the vcmtsd control tool may be run from the vcmts dataplane server, and the following commands are required for installation (assuming the vcmts dataplane package is installed in the /opt directory).

pip install virtualenv
cd /opt/vcmts/tools/vcmtsd-ctl
python -m virtualenv env
source env/bin/activate
pip install -e .
deactivate

The vcmts dataplane control tool is now installed.
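As a quick smoke test, the tool's entry point can be invoked from its virtualenv; the --help flag is an assumption shown for illustration only, not a documented option of this package.

cd /opt/vcmts/tools/vcmtsd-ctl
source env/bin/activate
vcmtsd-ctl --help
deactivate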

2.2 vcmts traffic-generator server preparation

A number of steps are required to prepare the vcmts traffic-generator server for software installation. Please note: you must log in as root user to the vcmts traffic-generator server to perform these steps.

2.2.1 Configure system BIOS settings

The following system BIOS settings are required on the vcmts traffic-generator server.

Table 3 System BIOS settings - vcmts Traffic-generator server

BIOS Setup Menu | BIOS Option | Setting
Advanced->Processor Configuration | Intel(R) Hyper-Threading Tech | Enabled
Advanced->Integrated IO Configuration | Intel(R) VT for Directed I/O | Enabled
Advanced->Power & Performance | CPU Power and Performance profile | Balanced Performance
Advanced->Power & Performance->CPU P State Control | Intel(R) Turbo Boost Technology | Disabled
Advanced->Power & Performance->CPU P State Control | Energy Efficient Turbo | Disabled
Advanced->Power & Performance->CPU P State Control | Enhanced Intel SpeedStep Technology | Enabled

2.2.2 Check Fortville NIC Firmware and Driver versions

Check the Fortville NIC i40e base driver and firmware versions by running ethtool as follows for a dataplane network interface, e.g.:

ethtool -i enp26s0f2
driver: i40e
version: <driver-version>
firmware-version: <firmware-version>
expansion-rom-version:
bus-info: 0000:1a:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

The driver version should be or later. The firmware version should be 6.01 or later. If an update is needed, the required Fortville NIC driver and firmware can be downloaded at the links below.

Fortville NIC Driver
Ethernet-Network-Connections-Under-Linux-?product=24586
Fortville NIC Firmware

2.2.3 Check disk space settings

At least 400GB of disk space should be available on the vcmts traffic-generator server for the installation of the vcmts reference dataplane software and related components. See typical disk info below for a fully installed vcmts traffic-generator server.

df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
udev         12G     0    12G    0%  /dev
tmpfs       9.5G  1.1G   8.5G   11%  /run
/dev/sdb1   330G  150G   164G   48%  /
tmpfs        48G     0    48G    0%  /dev/shm
tmpfs       5.0M     0   5.0M    0%  /run/lock
tmpfs        48G     0    48G    0%  /sys/fs/cgroup

It is recommended to disable swap space, as described below. First check if swap space is enabled by running the blkid command, e.g.:

blkid
/dev/sda1: UUID=" b8-45a5-891d-e76167ee876d" TYPE="ext4" PARTUUID="ee39ea61-01"
/dev/sda5: UUID="34c0a b-4b28-abcb-8ac33c30b819" TYPE="swap" PARTUUID="ee39ea61-05"

If there is an entry such as TYPE="swap" in the above, this needs to be disabled by running the following command.

swapoff -a

Furthermore, swap space should be permanently disabled by commenting out any swap entry in the /etc/fstab file.

sed -i.bk '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

On reboot, the disabling of swap space can be verified by running the following command, which should display the output below.

free -h
      total  used  free  shared  buff/cache  available
Mem:    62G   57G  239M    2.4M        4.7G       4.4G
Swap:    0B    0B    0B

2.2.4 Check OS and Linux kernel versions

The recommended Linux kernel version is 4.15.x or later, and the recommended OS distribution is Ubuntu 18.04 LTS. Run the following command to check the OS distribution version.

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic

Run the following command to check the kernel version of the system:

uname -r

The following type of output is expected:

<kernel-version>-generic

If a kernel version older than 4.15.x is displayed, the required kernel can be installed by running the following:

apt-get update

2.2.5 Disable automatic Linux package updates

Automatic package updates must be disabled by editing the system files /etc/apt/apt.conf.d/10periodic and /etc/apt/apt.conf.d/20auto-upgrades as follows, to set the appropriate APT configuration option.

APT::Periodic::Update-Package-Lists "0";

If the 20auto-upgrades file does not exist, it can be created by running the following command.

dpkg-reconfigure --priority=low unattended-upgrades

Select "Yes" to create the file, then modify it as described above.

2.2.6 Configure Linux GRUB settings

The following Linux GRUB settings are required on the vcmts traffic-generator server.

Table 4 Linux GRUB settings - vcmts Traffic-generator server

Setting | Description
default_hugepagesz=1G hugepagesz=1G hugepages=72 | Huge-page memory size and number of pages reserved for DPDK applications
intel_iommu=on iommu=pt | Enable SR-IOV
isolcpus=2-19,22-39,42-59,62-79 | Isolate vcmts dataplane cores from the Linux kernel task scheduler (NOTE: based on a dual 20-core CPU)
nr_cpus=80 | Total number of logical cores on the system (aka hyperthreads)

NOTE: The sample isolcpus and nr_cpus settings shown here are for a 20-core dual-processor CPU package. If not using a 20-core dual-processor CPU, the nr_cpus and isolcpus settings need to be adapted to the core layout of the CPU package being used, as shown in the instructions that follow below. Note also that for the isolcpus setting, all except the first 2 cores on each CPU in the package must be isolated from the Linux kernel task scheduler.

Based on the above table, for a 20-core dual-processor CPU package the Linux kernel GRUB file /etc/default/grub should be edited as follows to set the appropriate GRUB_CMDLINE_LINUX options.

GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=72 intel_pstate=disable intel_iommu=on iommu=pt isolcpus=2-19,22-39,42-59,62-79 nr_cpus=80"

It is very important to verify the core layout of the CPU package being used on the target system and to adapt the isolcpus and nr_cpus GRUB settings based on this. The core layout of the CPU package can be verified by running the lscpu command as shown below. The output shown here is for a 20-core dual-processor CPU package.

lscpu | grep "CPU(s):"
CPU(s): 80
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79

Below is the output for a 22-core dual-processor CPU package. In this case the isolcpus and nr_cpus settings would be: isolcpus=2-21,24-43,46-65,68-87 nr_cpus=88

lscpu | grep "CPU(s):"
CPU(s): 88
NUMA node0 CPU(s): 0-21,44-65
NUMA node1 CPU(s): 22-43,66-87

Once editing of the Linux kernel GRUB file is complete, run the commands below to compile the GRUB configuration and reboot the server for the updated settings to take effect:

update-grub
reboot

2.2.7 Install public key for remote access to vcmts dataplane server

A public key must be installed to allow remote access from the vcmts traffic-generator server to the dataplane server. Replace the hostname entry below with the actual vcmts dataplane server hostname, and select the default options at each prompt.

ssh-keygen -b <key-size> -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@vcmtsd-hostname

2.2.8 Load the vcmts reference dataplane package

Next, download the vcmts reference dataplane package from 01.org to the vcmts traffic-generator server and extract it into the root directory of the installation, assumed to be /opt below.

cd /opt
wget <package-url>
tar -zxvf intel-vcmtsd-<version>.tar.gz
cd vcmts
ls -lr

NOTE: If downloading directly is unsuccessful, visit the Intel Access Network Dataplanes site on 01.org and get the package from the Downloads section.

All files from the vcmts reference dataplane package, as described in section 1.5, "Release Package Contents", should now have been extracted to the /opt/vcmts directory.

The vcmts reference dataplane release package contains imix style traffic-profiles by default. For RFC 2544 style benchmarking of a range of fixed packet sizes, an additional traffic-profile package may be downloaded as follows.

cd /opt/vcmts/traffic-profiles
wget <traffic-profile-package-url>

Set the MYHOME and PKTGEN_HOST environment variables as follows (assuming the vcmts reference dataplane package is installed in the /opt directory).

export PKTGEN_HOST=y
export MYHOME=/opt

These must also be added as environment settings to the root bashrc file ~/.bashrc (again assuming that the vcmts reference dataplane release package has been installed into /opt).

export PKTGEN_HOST=y
export MYHOME=/opt

2.2.9 Configure proxy servers

Please note: proxy servers are different for every installation environment, so care should be taken to understand the required proxy settings for a particular environment, as misconfiguration may greatly disrupt this installation. The settings below need to be configured if the vcmts traffic-generator server is behind a proxy. Note that proxy settings must also be applied as part of the Docker and CMK installation steps.

Firstly, configure the HTTP proxy servers and the no_proxy setting in the $MYHOME/vcmts/tools/setup/proxy.sh file (assuming installation in /opt). The example entries below should be replaced with the actual proxy address and port number and the actual hostname of the vcmts traffic-generator server.

export http_proxy=http://<proxy-address>:<port>
export https_proxy=http://<proxy-address>:<port>
export no_proxy=localhost,<server-ip>,trafficgen-hostname

For Linux package installations, HTTP proxy server entries must also be added to the /etc/apt/apt.conf file.

Acquire::http::Proxy "http://<proxy-address>:<port>";
Acquire::https::Proxy "http://<proxy-address>:<port>";

2.2.10 Install Linux packages

A number of Linux packages which are required for the vcmts reference dataplane runtime environment must be installed on the vcmts traffic-generator server. Run the environment function as shown below to install the Linux packages required for the vcmts traffic-generator server (assuming installation in /opt).

source $MYHOME/vcmts/tools/setup/env.sh
install_base_ubuntu_pkgs

2.2.11 Configure Docker

The community edition of Docker should already have been installed in the previous section, "Install Linux packages". Some further steps are required to configure Docker to run behind a proxy server and to configure a DNS server address.

The following steps are required for Docker to run behind an HTTP or HTTPS proxy server. Create a systemd directory for the docker service.

mkdir -p /etc/systemd/system/docker.service.d/

Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf and replace the entries below with the actual proxy address and port number applicable to the system.

[Service]
Environment="HTTP_PROXY=http://<proxy-address>:<port>" "HTTPS_PROXY=http://<proxy-address>:<port>" "NO_PROXY=localhost,<server-ip>"

TCP port 2375 needs to be enabled to allow control of dockerized applications, e.g. by the vcmtsd control tool (see section 2.2.12). Add TCP port 2375 to the ExecStart entry in the file /lib/systemd/system/docker.service as shown below.

..
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H fd://
..

Please note: enabling the above TCP port should only be done in a secure environment, such as a private lab network.

If the installation environment uses a domain name server, Docker may need to perform domain name resolution when installing packages during container builds. By default, Docker uses Google's domain name server (8.8.8.8); however, if a local domain name server exists, this should be specified as described below.

Create a file called /etc/docker/daemon.json with the content shown below, specifying the local DNS server IP address applicable to the installation environment.

{
    "dns": ["<local-dns-server-ip-address>", "8.8.8.8"]
}

Now start Docker by running the following commands.

systemctl daemon-reload
systemctl start docker
systemctl show --property=environment docker

If it is necessary to restart Docker at any point, the following command should be used.

systemctl restart docker

2.2.12 Install vcmts dataplane configuration and control tools

The following two tools are provided with the vcmts reference dataplane release package.

vcmtsd-config :
- configure vcmts dataplane platform environment settings
- configure vcmts traffic-generator platform environment settings
- configure vcmts service-group options

vcmtsd-ctl :
- ARP links between traffic-generator and vcmts dataplane instances
- control traffic simulation
- query vcmts service-group options

Both of these tools should be run from the vcmts traffic-generator server, which also acts as the Kubernetes master, and the following steps are required for installation.

First, install the Python virtual environment package, which is required to run these tools.

pip install virtualenv

Next, install the vcmtsd-config tool by running the following commands.

cd /opt/vcmts/tools/vcmtsd-config
python -m virtualenv env
source env/bin/activate
pip install -e .
deactivate

Next, install the vcmtsd-ctl tool by running the following commands.

cd /opt/vcmts/tools/vcmtsd-ctl
python -m virtualenv env
source env/bin/activate
pip install -e .
deactivate

Both tools are now installed.

Please note: the vcmtsd configuration tool should only be run from the Kubernetes master (vcmts traffic-generator server).

2.3 Configure vcmts platform settings

Platform configurations must be generated for the vcmts dataplane and traffic-generator servers in order for the Intel vcmts reference dataplane runtime environment to operate correctly. Please note: both platform configurations must be generated from the vcmts traffic-generation server (also the Kubernetes master), and you must log in as root user to perform the steps described below. Follow the instructions in the next two sections to configure the runtime environments for the vcmts dataplane and traffic-generator platforms, using the configuration tool provided in the vcmts reference dataplane release package.

2.3.1 vcmts dataplane platform configuration

Firstly, the vcmts dataplane platform environment configuration is performed. While the configuration tool is run on the vcmts traffic-generation server (also the Kubernetes master), the actual platform details are gathered remotely from the vcmts dataplane platform. The following platform details need to be gathered for the vcmts dataplane server runtime environment.

- Fully qualified domain-name of the vcmts dataplane server
- CPU information - number of CPUs and cores per CPU (isolated and not isolated)
- NIC information - NIC physical functions per NUMA node and their PCI addresses
- QAT information - QAT physical functions per NUMA node and their PCI addresses

Run the vcmtsd-config tool provided in the vcmts reference dataplane release package as follows to specify the settings for the vcmts dataplane platform configuration.

cd $MYHOME/vcmts/tools/vcmtsd-config
source env/bin/activate
vcmtsd-config platform vcmtsd
deactivate

Follow the menu and prompted instructions to generate the vcmts dataplane platform configuration. Please note: you must specify the fully qualified domain-name for server addresses, e.g. myhost.example.com, as hostname alone is not sufficient.

NOTE: Once completed, it can be verified that the settings for the vcmts dataplane platform runtime environment have been configured correctly in the files below on the vcmts dataplane server:

- the fully qualified domain-name of the vcmts dataplane server should be specified in the common platform configuration file $MYHOME/vcmts/tools/setup/common-host-config.sh (see the Appendix for an example)
- the vcmts dataplane server PCI address settings should be specified in the vcmts dataplane host configuration file $MYHOME/vcmts/tools/setup/vcmtsd-host-config.sh (see the Appendix for an example)

The vcmtsd-host-config.sh file is not installed onto the Kubernetes master (also the traffic-generator/pktgen server) at this point. This will happen later in the installation process when the "vcmtsd-config service-groups" command is run (see section 2.7.1).

2.3.2 vcmts traffic-generator platform configuration

Next, the vcmts traffic-generator platform environment configuration is performed. The following platform details need to be gathered for the vcmts traffic-generator (also known as Pktgen) server runtime environment.

- Fully qualified domain-name of the vcmts traffic-generation server
- CPU information - number of CPUs, cores per CPU (isolated and not isolated) and Pktgen application instance mappings
- NIC information - NIC physical functions per NUMA node and their PCI addresses

NOTE: CMK (CPU Manager for Kubernetes) is not used on the traffic-generator platform, so core mappings to Pktgen application instances are assigned by the configuration tool in this case.

Run the vcmtsd-config tool provided in the vcmts reference dataplane release package as follows to specify the settings for the vcmts traffic-generator (pktgen) platform configuration.

cd $MYHOME/vcmts/tools/vcmtsd-config
source env/bin/activate
vcmtsd-config platform pktgen
deactivate

Follow the menu and prompted instructions to generate the vcmts traffic-generator (Pktgen) platform configuration. Please note: you must specify the fully qualified domain-name for server addresses, e.g. myhost.example.com, as hostname alone is not sufficient.

Once completed, it can be verified that the settings for the vcmts traffic-generator platform runtime environment have been configured correctly in the files below on the vcmts traffic-generator server:

- the vcmts traffic-generator fully qualified domain-name should be specified in the common configuration file $MYHOME/vcmts/tools/setup/common-host-config.sh (see the Appendix for an example)
- the vcmts traffic-generator PCI address settings and core-mappings should be specified in the pktgen host configuration file $MYHOME/vcmts/tools/setup/pktgen-host-config.sh (see the Appendix for an example)
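Both generated files can be reviewed quickly from the shell on the traffic-generator server; this spot-check is an addition to the procedure, using the file paths listed above.

# Review the generated platform configuration files
cat $MYHOME/vcmts/tools/setup/common-host-config.sh
cat $MYHOME/vcmts/tools/setup/pktgen-host-config.sh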

2.4 Install Kubernetes

vcmts dataplane and pktgen (traffic-generator) application instances are orchestrated using Kubernetes. Some Kubernetes add-on components are installed later for CPU core management (CMK) and management of QuickAssist resources (the QAT device plugin). Helm is also installed later for configuration of vcmts dataplane and pktgen PODs.

As shown in Figure 1, the Kubernetes master installation must be performed on the vcmts traffic-generator server. Kubernetes node installation must be performed on both the vcmts dataplane and traffic-generator servers. The following sections cover the steps for Kubernetes installation on the vcmts reference dataplane system. Note that the installation is performed through environment functions and scripts which are provided in the vcmts reference dataplane package.

Please note: you must log in as root user to the respective vcmts traffic-generator server (also the Kubernetes master) and vcmts dataplane node to perform these steps.

2.4.1 Install Kubernetes master and node on the vcmts traffic-generator server

The vcmts traffic-generator server acts as both Kubernetes master and node. To install Kubernetes master and node on the vcmts traffic-generator server, log in to it as root user. After logging in, first set the vcmts reference dataplane environment by running the following command.

source $MYHOME/vcmts/tools/setup/env.sh

To install Kubernetes master and node, run the environment function provided in the release package, as follows, and follow the instructions when prompted.

install_kubernetes_master

Note that the Kubernetes installation takes a long time and should NOT be interrupted. Once this is complete, the Kubernetes master and node are installed and running on the vcmts traffic-generator server.

2.4.2 Install Kubernetes node on the vcmts dataplane server

The vcmts dataplane server acts as a Kubernetes node. To install the Kubernetes node on the vcmts dataplane server, log in to it as root user. After logging in, first set the vcmts reference dataplane environment by running the following command.

source $MYHOME/vcmts/tools/setup/env.sh

To install the Kubernetes node, run the environment function provided in the release package, as follows, and follow the instructions when prompted.

install_kubernetes_node

Note that the Kubernetes installation takes a long time and should NOT be interrupted. The Kubernetes node is now installed and running on the vcmts dataplane server.

2.4.3 Verify Kubernetes installation

A successful Kubernetes installation may be verified by running the following command on the Kubernetes master (traffic-generator server).

kubectl get nodes

The output of this command should indicate both nodes, traffic-generator (pktgen) and vcmts dataplane, as Ready.

Each Kubernetes node must be labelled as a pktgen or vcmts node. Label the nodes by running the commands below on the Kubernetes master. It is important that the node names used for pktgen_node and vcmts_node are the names returned by the kubectl get nodes command above.

kubectl label nodes pktgen_node vcmtspktgen=true
kubectl label nodes vcmts_node vcmts=true
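For example, assuming hypothetically that the two nodes registered with the names pktgen-server.example.com and vcmtsd-server.example.com, the labelling commands would be:

kubectl label nodes pktgen-server.example.com vcmtspktgen=true
kubectl label nodes vcmtsd-server.example.com vcmts=true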

For a more detailed verification of the installation, run the check_kubernetes environment function on the Kubernetes master and node. On the Kubernetes master, output such as that shown below is expected.

source $MYHOME/vcmts/tools/setup/env.sh
check_kubernetes

etcd.service - Etcd Server
   Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :02:11 IST; 2h 13min ago
kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :02:26 IST; 2h 13min ago
kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :02:36 IST; 2h 12min ago
kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :02:37 IST; 2h 12min ago
docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Drop-In: /etc/systemd/system/docker.service.d http-proxy.conf
   Active: active (running) since Tue :02:41 IST; 2h 12min ago
kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :02:41 IST; 2h 12min ago
flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/lib/systemd/system/flanneld.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :02:41 IST; 2h 12min ago
kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/lib/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :02:41 IST; 2h 12min ago

On the Kubernetes vcmtsd node, output such as that shown below is expected.

export VCMTSD_HOST=y
export MYHOME=/opt
source $MYHOME/vcmts/tools/setup/env.sh
check_kubernetes

docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Drop-In: /etc/systemd/system/docker.service.d http-proxy.conf
   Active: active (running) since Tue :14:31 IST; 1h 46min ago
kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :14:31 IST; 1h 46min ago
flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/lib/systemd/system/flanneld.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :14:30 IST; 1h 46min ago
kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/lib/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue :14:07 IST; 1h 47min ago

If a service has failed, it is recommended to attempt to restart the service as follows:

systemctl status <service>
systemctl stop <service>
systemctl start <service>
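For example, if the kubelet service is reported as failed on a node:

systemctl status kubelet.service
systemctl stop kubelet.service
systemctl start kubelet.service

Then re-run check_kubernetes to confirm the service is active (running).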

2.5 Install vcmts dataplane node software

The following sections describe installation of the software components required on the vcmts dataplane node. These steps should be performed on the vcmts dataplane server after logging in as root user.

After logging in to the vcmts dataplane server, first set the vcmts reference dataplane environment by running the following command.

source $MYHOME/vcmts/tools/setup/env.sh

2.5.1 Install QAT drivers

Install the QAT drivers on the vcmts dataplane node host OS by running the following commands and follow the prompts, accepting the default options.

cd $MYHOME
install_qat_drivers

Select option 2 "Install Acceleration" and select the defaults when prompted. The PCI addresses of the QAT cards on the system can be checked by running the command shown below.

lspci | grep 37c8
8a:00.0 Co-processor: Intel Corporation Device 37c8 (rev 04)
8c:00.0 Co-processor: Intel Corporation Device 37c8 (rev 04)
8e:00.0 Co-processor: Intel Corporation Device 37c8 (rev 04)

2.5.2 Install DPDK

Install DPDK on the vcmts dataplane node host OS by running the following commands.

cd $MYHOME
install_dpdk
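DPDK relies on the hugepages reserved via the GRUB settings configured earlier in the server preparation. A quick generic Linux check (not a release-package function) that hugepage memory is provisioned:

grep Huge /proc/meminfo

A non-zero HugePages_Total, or a populated reserved pool for the configured hugepage size, indicates that hugepage memory is available to the dataplane.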

2.5.3 Install CMK

CMK (CPU Manager for Kubernetes) is a key component which manages the allocation of cores to PODs under Kubernetes control. It is installed on the vcmts dataplane node only.

On the vcmts dataplane node, run the following commands to download and build CMK. Additional commands are included below to update the downloaded CMK Dockerfile with local proxy settings and to remove the tox unit-test entries.

cd $MYHOME
git clone https://github.com/intel/CPU-Manager-for-Kubernetes
cd CPU-Manager-for-Kubernetes
git checkout 45103cfd39f73439f67613fdcba5543fc9d351dc
sed -i "16i\ENV http_proxy=$http_proxy" Dockerfile
sed -i "16i\ENV https_proxy=$https_proxy" Dockerfile
sed -i "s/RUN tox.*/\#RUN tox/" Dockerfile
make

NOTE: The version of CMK built here contains some updates to CMK v1.3.0 which are required for the Intel vcmts reference dataplane platform.

The CMK docker image should have been built and tagged as v1.3.0. To verify, run the following command to display the docker images on the system.

docker images

This should display cmk:v1.3.0 for CMK.

2.5.4 Build Kubernetes infrastructure components on the vcmts dataplane server

Run the following commands to build docker container images for the Kubernetes infrastructure components required on the vcmts dataplane server for platform initialization, power-management, and QAT and NIC resource management.

build_docker_cloud_init
build_docker_power_mgr
build_docker_sriov_dp
build_docker_qat

2.5.5 Build telemetry components on the vcmts dataplane server

The following telemetry components are used by the vcmts reference dataplane system:

- collectd: used to collect vcmts dataplane and platform statistics
- InfluxDB: used to store vcmts dataplane and platform statistics
- Grafana: used to visualize vcmts dataplane and platform statistics

Run the following commands to build docker container images for the telemetry components listed above.

build_docker_collectd
build_docker_influxdb
build_docker_grafana

NOTE: The collectd docker build includes download and build of the intel_pmu and intel_rdt collectd plugins. A sample collectd.conf is provided which loads all of the plugins required on the vcmts dataplane node. Also, vcmts dataplane statistic types are installed in the collectd types DB, based on the vcmts.types.db file provided in the vcmts reference dataplane release package.

2.5.6 Build vcmts dataplane application container

Two types of vcmts dataplane docker images are created and tagged as follows:

- feat: maximum statistics, including cycle-count captures
- perf: minimum statistics, with no cycle-count captures

One of the two modes may be selected as a service-group option (see section 2.7.1 Configure vcmts dataplane service-group options).

Build the vcmts dataplane feat docker image by running the following command.

build_docker_vcmtsd_feat

Build the vcmts dataplane perf docker image by running the following command.

build_docker_vcmtsd_perf
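Both builds can be confirmed in the local docker registry; for example (assuming, as the feat/perf tags above suggest, that both images share a vcmtsd image name):

docker images | grep vcmtsd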

2.6 Install vcmts traffic-generator node software

The following sections describe installation of the software components required on the vcmts traffic-generator node. These steps should be performed after logging in as root user to the vcmts traffic-generator server (which also acts as the Kubernetes master).

After logging in to the vcmts traffic-generator node, set the vcmts reference dataplane environment by running the following command.

source $MYHOME/vcmts/tools/setup/env.sh

2.6.1 Install DPDK

Install DPDK on the vcmts traffic-generator node host OS by running the following commands.

cd $MYHOME
install_dpdk

2.6.2 Install Helm

Helm is used by the Kubernetes master to configure service-group information for vcmts dataplane and traffic-generator instances. Run the following commands to install Helm on the vcmts traffic-generator node, which also acts as the Kubernetes master managing all application containers on the system.

cd $MYHOME
wget <helm-release-url>/helm-v<version>-rc.1-linux-amd64.tar.gz
tar -zxvf helm-v<version>-rc.1-linux-amd64.tar.gz
cp linux-amd64/helm /usr/bin/.

If the installation is being done behind a proxy, an empty repository file must be created for Helm, as follows:

mkdir -p /root/.helm/repository
touch /root/.helm/repository/repositories.yaml
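As a quick sanity check, the Helm v2 client reports its version without contacting the cluster when the --client flag is used:

helm version --client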

2.6.3 Build Kubernetes infrastructure components on the traffic-generator server

Run the following commands to build docker container images for the Kubernetes infrastructure components required on the vcmts traffic-generator server for platform initialization and NIC resource management.

build_docker_pktgen_init
build_docker_sriov_dp

2.6.4 Build Pktgen application container

Run the following command to build the DPDK Pktgen docker image.

build_docker_pktgen

2.7 Configure vcmts dataplane

Each vcmts dataplane (Kubernetes) POD handles upstream and downstream traffic for a single service-group, which includes a number of cable subscribers, typically hundreds. The following service-group options may be configured.

Table 5 vcmts dataplane configurable service-group options

Service-group mode
    One of two settings:
    Max-load: exclusive core for downstream, shared core for upstream (2 x SGs)
    Low-speed: shared cores for downstream (2 x SGs) and upstream (4 x SGs)
    Default: Max-load

Number of Service-Groups
    Number of service-groups.
    Default: maximum based on mode, and the number of CPU cores and NW ports

Number of Subscribers
    100, 300, 500 or 1000 subscribers per service-group.
    Default: 300

Channel Configuration
    One of the following channel configurations:
    1 x OFDM, 32 x SC-QAM
    2 x OFDM, 32 x SC-QAM
    4 x OFDM, 0 x SC-QAM
    6 x OFDM, 0 x SC-QAM
    Default: 4 x OFDM

Percentage of DES CMs
    0, 5 or 10 percent of cable-modems using DES encryption.
    NOTE: 5 and 10% are only applicable for the 1 x OFDM channel configuration.
    Default: 0% DES

Downstream CRC re-generation
    Enable or Disable CRC re-generation for downstream DOCSIS frames.
    Default: Enable CRC

Downstream Crypto Offload
    Enable or Disable QuickAssist offload for downstream encryption.
    Default: Disable QAT

Traffic type
    Select imix or fixed-size packets for cable traffic.
    imix1: Upstream 65%: 84B, 18%: 256B, 17%: 1280B
           Downstream 15%: 84B, 10%: 256B, 75%: 1280B

    imix2: Upstream 81.5%: 70B, 1%: 940B, 17.5%: 1470B
           Downstream 3%: 68B, 1%: 932B, 96%: 1520B
    Fixed-size packets: 64B, 256B, 512B, 640B, 768B, 1024B, 1280B or 1536B
    Default: imix2

The following settings are applied to all service-groups and are not configurable.

Table 6 vcmts dataplane fixed service-group options

Subscriber Lookup
    4 IP addresses per subscriber

DOCSIS Filtering
    16 filters per cable-modem
    10% matched (permit rule), 90% unmatched (default action - permit)

DOCSIS Classification
    16 IPv4 classifiers per cable-modem
    10% matched, 90% unmatched (default service-flow queue)

Downstream Service-Flow Scheduling
    8 service-flow queues per cable-modem (4 active)

Downstream Channel Scheduling
    42.24 Mbps bandwidth per SC-QAM channel
    1.89 Gbps bandwidth per OFDM channel
    99% profile density ratio for OFDM channels

Downstream Channel Bonding Groups
    1 x OFDM, 32 x SC-QAM
        BG1: Channels 0-23 (SC-QAM), Channel 32 (OFDM)
        BG2: Channels 0-15 (SC-QAM), Channel 32 (OFDM)
        BG3: Channels 0-7 (SC-QAM), Channel 32 (OFDM)
        BG4: Channels 8-31 (SC-QAM), Channel 32 (OFDM)
    2 x OFDM, 32 x SC-QAM
        BG1: Channels 0-23 (SC-QAM), Channels 32,33 (OFDM)
        BG2: Channels 0-15 (SC-QAM), Channels 32,33 (OFDM)
        BG3: Channels 0-7 (SC-QAM), Channels 32,33 (OFDM)
        BG4: Channels 8-31 (SC-QAM), Channels 32,33 (OFDM)
    4 x OFDM
        BG1: Channels 0,1 (OFDM)

        BG2: Channels 2,3 (OFDM)
        BG3: Channels 0,2 (OFDM)
        BG4: Channels 1,3 (OFDM)
    6 x OFDM
        BG1: Channels 0,1 (OFDM)
        BG2: Channels 2,3 (OFDM)
        BG3: Channels 4,5 (OFDM)

NOTE: Bonding groups are distributed evenly across cable-modems.

The maximum amount of downstream traffic that needs to be handled for a service-group is determined by its number of OFDM and SC-QAM channels, as shown in the table below. It is assumed that upstream traffic is 10% of downstream traffic.

Table 7 Service-group bandwidth guide

Number of OFDM Channels (1.89 Gbps) | Additional SC-QAM Channels (42.24 Mbps) | Total Downstream Bandwidth per SG * (Gbps) | Max Total BW (US = 10% DS) (Gbps)
1 | 32 |  3.24 |  3.57
2 | 32 |  5.13 |  5.65
4 |  0 |  7.56 |  8.32
6 |  0 | 11.34 | 12.47

* Actual downstream channel bandwidth is reduced by DOCSIS MAC and Phy overhead.

NOTE: Key points to consider when selecting service-group options:

- DES cable-modems are only supported for the 1 x OFDM channel configuration.
- The total available QAT crypto bandwidth should be taken into account when selecting the number of service-groups for the QAT crypto offload option (noting that QAT offload is only used for downstream traffic).
- When using 10G NICs with the 6 x OFDM channel configuration, the Ethernet port bandwidth limit may be reached before the downstream channel bandwidth limit.

The required service-group settings may be configured on the system using the vcmts dataplane configuration tool provided in the vcmts dataplane release package, as described in the next section.
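As a cross-check of Table 7, the bandwidth values follow directly from the per-channel rates quoted above (nominal figures, before the DOCSIS MAC and Phy overhead noted in the footnote). For example, for the 4 x OFDM configuration:

    Downstream: 4 x 1.89 Gbps = 7.56 Gbps
    Upstream (10% of downstream): 0.76 Gbps
    Max total: 7.56 + 0.76 = 8.32 Gbps

and for 1 x OFDM with 32 SC-QAM channels: 1.89 + (32 x 0.04224) = 3.24 Gbps downstream, or 3.57 Gbps including upstream.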

2.7.1 Configure vcmts dataplane service-group options

Follow the steps below to specify service-group options for the vcmts dataplane PODs on the system. These steps should be performed as root user on the Kubernetes master (also the traffic-generator/pktgen server).

After logging in to the Kubernetes master, set the environment for running the configuration tool.

source $MYHOME/vcmts/tools/setup/env.sh
cd $MYHOME/vcmts/tools/vcmtsd-config
source env/bin/activate

Then, run the vcmtsd-config tool provided in the vcmts dataplane release package as follows to specify the required service-group options for the vcmts dataplane environment.

vcmtsd-config service-groups
deactivate

Follow the prompts to select service-group settings and generate Helm charts for the Kubernetes cluster. The CMK cluster-init yaml file for core-pool configuration is also updated based on the number of service-groups selected.

Service-group options may also be selected by passing command-line arguments to the vcmtsd configuration tool, as described below. For example, to configure 16 service-groups at max-load with 4 OFDM channels, 300 subscribers, no DES CMs, CRC enabled, SGs 8 to 11 with QAT offload and imix2 traffic, run the following commands:

vcmtsd-config service-groups -u -m max-load -p packed -n 16 -c 4ofdm -s 300 -d 0 -v enabled -q "8,9,10,11" -t imix2
deactivate

A full description of the vcmtsd-config command-line arguments for service-group configuration is provided below.

Table 8 Service-group configuration tool - command-line options

vcmtsd-config service-groups -u -m <mode> -n <num-service-groups> -p <cpu-socket-mode> -c <channel-config> -s <num-subs> -d <percent-des-subs> -v <crc> -q <qat-sgs> -f <feature-level-stats-sgs> -t <traffic-type>

mode
    Service-group mode.
    values:
        max-load - exclusive dataplane cores for downstream

        low-speed - shared dataplane cores for downstream
    default: max-load

num-service-groups
    Number of service-groups. Integer value.
    NOTE: service-groups are evenly divided between CPUs; the number of service-groups per CPU is limited by the number of NIC PFs and available isolated CPU cores.
    default: maximum service-groups based on mode, number of NIC PFs and number of isolated CPU cores

cpu-socket-mode
    Order of service-group assignment to CPU sockets.
    values:
        packed: service-groups assigned to the first CPU socket until full, then assigned to the second
        spread: service-groups assigned to alternating CPU sockets
    default: packed

channel-config
    Channel configuration.
    values:
        1ofdm: 1 OFDM channel, 32 SC-QAM
        2ofdm: 2 OFDM channels, 32 SC-QAM
        4ofdm: 4 OFDM channels, no SC-QAM
        6ofdm: 6 OFDM channels, no SC-QAM
    default: 4ofdm

num-subs
    Number of subscribers (cable-modems).
    values: 100, 300, 500 or 1000
    default: 300

percent-des-subs
    Percentage of DES crypto subscribers (cable-modems).
    NOTE: only applies for the 1 OFDM channel configuration.
    values: 0, 5, or 10
    default: 0

crc
    CRC enabled/disabled.
    values: enabled, disabled
    default: enabled

qat-sgs
    Comma-separated list of service-group ID integer values with QAT offload.
    NOTE: the comma-separated list must be enclosed in quotes.
    default: none

feature-level-stats-sgs
    Comma-separated list of service-group ID integer values with feature-level stats.

    NOTE: the comma-separated list must be enclosed in quotes.
    default: none

traffic-type
    Traffic type.
    values: imix1, imix2, 64B, 256B, 512B, 640B, 768B, 1024B, 1280B or 1536B
    default: imix2

Once service-group configuration has been completed, it can be verified that the service-group settings have been correctly applied in the files below on the Kubernetes master (also the Pktgen server):

- Selected service-group settings should have been applied to the vcmts dataplane Helm chart in the yaml file below: $MYHOME/vcmts/kubernetes/helm/vcmtsd/values.yaml (see Appendix section for an example)
- Selected service-group settings should have been applied to the vcmts pktgen Helm chart in the yaml file below: $MYHOME/vcmts/kubernetes/helm/pktgen/values.yaml (see Appendix section for an example)
- Power management settings for the vcmts dataplane server should be specified in the Helm file below for the vcmts dataplane infrastructure: $MYHOME/vcmts/kubernetes/helm/vcmts-infra/values.yaml (see Appendix section 4.1 for an example)
- The fully qualified domain-name of the vcmts dataplane server, and the vcmts dataplane core-pool configuration based on the selected number of service-groups, should have been applied to the CMK cluster initialization yaml file below: $MYHOME/vcmts/kubernetes/cmk/cmk-cluster-init-pod.yaml (see Appendix section 4.4 for an example)

The following platform environment configuration files on the Kubernetes master should also be updated with final settings after running the service-group configuration command:

- the fully qualified domain-names of the vcmts dataplane and traffic-generator servers should be specified in the common platform configuration file: $MYHOME/vcmts/tools/setup/common-host-config.sh (see Appendix section for an example)
- the vcmts dataplane server PCI address settings should be specified in the vcmts dataplane host configuration file: $MYHOME/vcmts/tools/setup/vcmtsd-host-config.sh (see Appendix section for an example)
- the vcmts traffic-generator PCI address settings and core-mappings should be specified in the pktgen host configuration file: $MYHOME/vcmts/tools/setup/pktgen-host-config.sh (see Appendix section for an example)

2.7.2 Configure vcmts dataplane power profiles

Power-profiles may be configured for automated power management of the vcmts system. A power profile may be configured for each vcmts service-group POD, which is sent to the Power-Manager daemonset (a singleton POD) at POD startup, as shown in the architecture diagram below.

Figure 7 Intel vcmts Dataplane Reference Platform Power Management Architecture

These power-profiles determine how dataplane CPU cores are managed by the DPDK Power-Manager. For quiet hours during the day when network traffic is low, individual dataplane CPU cores associated with a service-group may be set to a power-state with a lower frequency, as determined from its power-profile.

Power profile settings for the vcmts reference dataplane system may be configured in the file $MYHOME/vcmts/kubernetes/helm/vcmtsd/resources/power_policy.cfg as shown below. The default configuration shown below specifies no quiet hours, but may be updated as required. Power profiles are set at POD initialization by Kubernetes, so vcmts PODs must be restarted to effect a change (see section 2.8).

{"policy": {
    "name": "policy_name",
    "command": "command_type",
    "policy_type": "TIME",
    "busy_hours":[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 ],
    "quiet_hours":[ ],
    "core_list":[ cores_to_manage ]
}}

Note that the power-profile file shown above controls the power profiles for all service-groups on the system. This is provided to illustrate the concept of automated power-management. In order to manage a separate power profile per service-group, some modification is required to the vcmts reference dataplane configuration tool and Helm chart schema.

The Power-Consumption Grafana dashboard below illustrates the benefit of automated power management, reducing power consumption of the system during quiet periods of the day when traffic is low.
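As an illustration of the policy format above, a configuration that designates the early-morning hours 0 to 5 as quiet hours (hour values chosen purely for illustration) would look as follows:

{"policy": {
    "name": "policy_name",
    "command": "command_type",
    "policy_type": "TIME",
    "busy_hours":[ 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 ],
    "quiet_hours":[ 0, 1, 2, 3, 4, 5 ],
    "core_list":[ cores_to_manage ]
}}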

2.8 Run vcmts dataplane and traffic-generator software

The following sections describe how to start, stop and re-start the vcmts reference dataplane and traffic-generator software, and also how to verify that the software is running correctly.

These steps should be performed after logging in as root user to the Kubernetes master (also the traffic-generator/pktgen server). After logging in to the Kubernetes master, set the vcmts reference dataplane environment by running the following command.

source $MYHOME/vcmts/tools/setup/env.sh

2.8.1 Start infrastructure and application instances

Run the following environment functions on the Kubernetes master to start the vcmts dataplane and Pktgen application instances, as well as the supporting infrastructure and telemetry components.

start_cmk
start_tiller
start_init
start_dns
start_dash
start_vcmtsd
start_pktgen

Verify that all components started correctly by running the following Kubernetes command.

kubectl get all -a

The Kubernetes log function can be used to diagnose vcmtsd application issues. For example, the following commands may be run to check the logs of the upstream and downstream containers for vcmtsd pod number 0 (i.e. service-group ID 0).

kubectl logs pod/vcmtsd-0 vcmtsd-0-ds
kubectl logs pod/vcmtsd-0 vcmtsd-0-us
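To focus on the application PODs alone, standard kubectl filtering may also be used; for example (assuming the vcmtsd-<N> and pktgen pod naming shown above):

kubectl get pods -o wide | grep -E 'vcmtsd|pktgen'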

If any vcmtsd or pktgen instances failed to start, this can be resolved by re-starting the specific vcmtsd or pktgen POD using the commands below.

restart_vcmtsd_pod <pod-number>
restart_pktgen_pod <pod-number>

2.8.2 How to stop infrastructure and application instances

To stop the vcmts dataplane and Pktgen application instances, as well as the supporting infrastructure and telemetry components, run the following commands.

stop_pktgen
stop_vcmtsd
stop_dash
stop_dns
stop_init
stop_tiller
stop_cmk

2.8.3 How to restart the statistics daemonset

Note that to restart the statistics daemonset, possibly for a configuration change to collectd or InfluxDB, all vcmts dataplane and pktgen instances must be restarted, as shown below.

stop_pktgen
stop_vcmtsd
restart_stats_daemonset
start_vcmtsd
start_pktgen

2.9 System verification

The following sections describe how to verify that the system is running correctly.

2.9.1 Check the Kubernetes dashboard

Kubernetes infrastructure and application POD status may be checked on the Kubernetes WebUI dashboard. First, set the vcmts reference dataplane environment by running the following command.

source $MYHOME/vcmts/tools/setup/env.sh

Next, run the following command to display the Kubernetes WebUI dashboard URI and login details.

show_kubernetes_dash

NOTE: Open a web-browser at the URI displayed and use the token to sign in to the Kubernetes WebUI dashboard. Once signed in, the Kubernetes WebUI dashboard should be displayed as shown below; all vcmtsd and pktgen pods should be displayed as running.

If accessing from a web-browser on a remote client, an SSH tunnel to the WebUI dashboard service port on the Kubernetes master may be required.
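A local SSH port-forward is one way to set this up (a sketch only; the port values are placeholders, with the dashboard port taken from the show_kubernetes_dash output):

ssh -L <local-port>:localhost:<dashboard-port> root@<kubernetes-master-fqdn>

The web-browser on the remote client can then be pointed at https://localhost:<local-port>.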

2.9.2 Query service-group options

The vcmts control tool may be used as shown below to check that the correct service-group options have been applied. Follow the steps below to query service-group options for the vcmts dataplane PODs on the system. These steps should be performed as root user on the vcmts dataplane server.

After logging in to the vcmts dataplane server, set the environment for running the control tool.

source $MYHOME/vcmts/tools/setup/env.sh
cd $MYHOME/vcmts/tools/vcmtsd-ctl
source env/bin/activate

Then, run the vcmtsd-ctl tool provided in the vcmts dataplane release package as follows to query the configuration settings for all service-groups.

vcmtsd-ctl query -i 0 usds -q all
deactivate

NOTE: Configuration settings for each vcmts service-group should be displayed. It may take up to 1 minute to gather the information for all service-groups.

2.9.3 Start dataplane traffic

This section covers how to generate simulated cable traffic into the vcmts dataplane server. Upstream and downstream DOCSIS dataplane traffic may be simulated by running the vcmtsd-ctl tool provided in the vcmts dataplane release package. These steps should be performed as root user on the Kubernetes master (also the traffic-generator/pktgen server).

After logging in to the Kubernetes master, set the environment for running the vcmtsd control tool.

source $MYHOME/vcmts/tools/setup/env.sh
cd $MYHOME/vcmts/tools/vcmtsd-ctl
source env/bin/activate

Next, start traffic. Note that because dataplane NW interface VFs are dynamically allocated to PODs by the Kubernetes SR-IOV device plugin, an ARP request-response handshake is required between correlating Pktgen and vcmts application instances for correct routing of traffic. This is performed implicitly, before traffic starts to flow, when the Pktgen start command is sent below.

For example, to start traffic at 5 Gbps (20% of 25G line-rate) for downstream and 0.5 Gbps (2% of 25G line-rate) for upstream for 16 vcmts dataplane instances (0 to 15), run the following commands.

vcmtsd-ctl pktgen-rate -i 0-15 us -r 2
vcmtsd-ctl pktgen-rate -i 0-15 ds -r 20
vcmtsd-ctl pktgen-start -i 0-15 usds

To reduce the traffic-rate to 2.5 Gbps (10% of 25G line-rate) for downstream and 0.25 Gbps (1% of 25G line-rate) for upstream for 16 vcmts dataplane instances (0 to 15), run the following commands.

vcmtsd-ctl pktgen-rate -i 0-15 us -r 1
vcmtsd-ctl pktgen-rate -i 0-15 ds -r 10
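Since the -r argument is a percentage of line-rate, a target rate converts as r = (target Gbps / line-rate Gbps) x 100. For example, to run 10 Gbps downstream on 25G NICs, r = (10 / 25) x 100 = 40:

vcmtsd-ctl pktgen-rate -i 0-15 ds -r 40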

To stop traffic, run the following command.

vcmtsd-ctl pktgen-stop -i 0-15 usds

NOTE: If using 10G NICs, the pktgen-rate values above are percentages of 10G line-rate instead of 25G.

An RFC 2544 throughput measurement test may also be run. For example, to measure downstream throughput for service-group 0, run the following command.

vcmtsd-ctl measure -i 0 ds -m rx

2.9.4 Check the Grafana dashboard

vcmts dataplane and platform metrics may be checked on a Grafana dashboard. First, log in to the vcmts dataplane node as root user. After logging in, set the vcmts reference dataplane environment by running the following command.

source $MYHOME/vcmts/tools/setup/env.sh

Next, run the following command to display the Grafana dashboard details.

show_grafana_dash

NOTE: Open a web-browser at the displayed URI and use the login credentials to log in to the Grafana dashboard as shown below. If accessing from a web-browser on a remote client, an SSH tunnel to the Grafana dashboard service port on the vcmts dataplane node may be required.

Once signed in, select the required vcmts dataplane dashboard by clicking on Dashboards > Home. For example, the System Summary dashboard is shown below.

2.9.5 How to modify the Grafana platform-metrics dashboard in case of a non-default CPU core count

The Grafana "Platform Metrics" dashboard in the release package has been configured for a 20-core dual-processor platform by default. If the CPU core-count is different, the dashboard will show incorrect platform metrics. To rectify this, the "Platform Metrics" dashboard queries must be updated as described below.

The queries in each individual graph on the Platform Metrics page must be updated by selecting "Edit" from the drop-down menu, as shown below. Then, update each individual graph as shown below to ensure that a query is present for each logical core (i.e. hyperthread) on the system.

Once the required updates have been made, save the dashboard by clicking the "Save" icon and selecting "Save JSON to file".


More information

KVM as The NFV Hypervisor

KVM as The NFV Hypervisor KVM as The NFV Hypervisor Jun Nakajima Contributors: Mesut Ergin, Yunhong Jiang, Krishna Murthy, James Tsai, Wei Wang, Huawei Xie, Yang Zhang 1 Legal Disclaimer INFORMATION IN THIS DOCUMENT IS PROVIDED

More information

Maximize Performance and Scalability of RADIOSS* Structural Analysis Software on Intel Xeon Processor E7 v2 Family-Based Platforms

Maximize Performance and Scalability of RADIOSS* Structural Analysis Software on Intel Xeon Processor E7 v2 Family-Based Platforms Maximize Performance and Scalability of RADIOSS* Structural Analysis Software on Family-Based Platforms Executive Summary Complex simulations of structural and systems performance, such as car crash simulations,

More information

OpenCL* and Microsoft DirectX* Video Acceleration Surface Sharing

OpenCL* and Microsoft DirectX* Video Acceleration Surface Sharing OpenCL* and Microsoft DirectX* Video Acceleration Surface Sharing Intel SDK for OpenCL* Applications Sample Documentation Copyright 2010 2012 Intel Corporation All Rights Reserved Document Number: 327281-001US

More information

Intel Galileo Firmware Updater Tool

Intel Galileo Firmware Updater Tool User Guide August 2017 Revision 002 Document Number: 332076-002 Notice: This document contains information on products in the design phase of development. The information here is subject to change without

More information

Intel Stereo 3D SDK Developer s Guide. Alpha Release

Intel Stereo 3D SDK Developer s Guide. Alpha Release Intel Stereo 3D SDK Developer s Guide Alpha Release Contents Why Intel Stereo 3D SDK?... 3 HW and SW requirements... 3 Intel Stereo 3D SDK samples... 3 Developing Intel Stereo 3D SDK Applications... 4

More information

Intel vpro Technology Virtual Seminar 2010

Intel vpro Technology Virtual Seminar 2010 Intel Software Network Connecting Developers. Building Community. Intel vpro Technology Virtual Seminar 2010 Getting to know Intel Active Management Technology 6.0 Intel Active Management Technology (AMT)

More information

Intel Simple Network Management Protocol (SNMP) Subagent v8.0

Intel Simple Network Management Protocol (SNMP) Subagent v8.0 Intel Simple Network Management Protocol (SNMP) Subagent v8.0 User Guide June 2017 Legal Information INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED,

More information

Theory and Practice of the Low-Power SATA Spec DevSleep

Theory and Practice of the Low-Power SATA Spec DevSleep Theory and Practice of the Low-Power SATA Spec DevSleep Steven Wells Principal Engineer NVM Solutions Group, Intel August 2013 1 Legal Disclaimer INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION

More information

Intel G31/P31 Express Chipset

Intel G31/P31 Express Chipset Intel G31/P31 Express Chipset Specification Update For the Intel 82G31 Graphics and Memory Controller Hub (GMCH) and Intel 82GP31 Memory Controller Hub (MCH) February 2008 Notice: The Intel G31/P31 Express

More information