IBM POWER8 100 GigE Adapter Best Practices


Introduction

With higher network speeds in new network adapters, achieving peak performance requires careful tuning of the adapters and of the workloads using them. IBM POWER8 servers now support 100 GigE adapters, and this guide will help you understand what performance to expect and how to maximize adapter utilization. Achieving 100 Gb/s of bandwidth takes careful tuning, and traditional methods of measuring network performance may not show the full potential of the adapters. In addition, the latest adapters are capable of handling a very high number of network packets, depending on the application and tuning used. All measurements and tuning below are for TCP/IP traffic. All the tuning recommendations apply to Power Systems Scale-Out S812 and Power Systems Scale-Out S82x systems running AIX. In addition to AIX, we also show some results on Bare Metal Linux (Ubuntu) for comparison. In the sections below, we cover peak performance, the impact of the number of TCP sockets and message sizes on performance, recommended tuning, and finally how to measure peak performance.

Section 1. Peak performance, single adapter

The following section shows the peak performance of a single adapter port.

Bandwidth

The following are measurement results taken on Power Systems Scale-Out S822, S824, and S822L servers. Measurements on your own system can vary depending on the number of CPUs, the CPU frequency of the machine, and the number of memory DIMMs installed. These measurements were done on machines where all memory DIMM slots were populated, which ensures peak memory performance. All measurements were made where the partition (LPAR) had direct native adapter access. These results do not apply to virtualization of the adapter to multiple LPARs using VIOS/SEA. However, where noted, we did use PowerVM, which introduces some virtualization overhead. These measurements were also taken under ideal laboratory conditions; your results may vary depending on how the system software and application behave. See Section 2 for more detail on how your application characteristics can affect actual performance.

When using native Linux without PowerVM, we are able to demonstrate link-limited bandwidth. Since our measurements only count actual data transferred, the rate is slightly lower than the 100 GigE speed of the adapter port; the difference is consumed by headers and other data on the cable that support the Ethernet protocol.

Peak performance, single adapter, bare metal environments (BML: Ubuntu on Power Systems Scale-Out S822LC Servers):

MTU 1500 bandwidth
  Receive:  94 Gb/s
  Transmit: 94 Gb/s
  Duplex:   171 Gb/s

MTU 9000 bandwidth
  Receive:  98 Gb/s
  Transmit: 98 Gb/s
  Duplex:   188 Gb/s

When measuring performance on a virtualized system using PowerVM, the 100 GigE adapter does not achieve link-limited bandwidth due to the impact of virtualization in the POWER8 hardware. This is not seen on slower adapters because the peak bandwidth of those adapter ports is below the single-port virtualization limit; it only shows up when trying to sustain close to 100 Gb/s. The following are results of running AIX on PowerVM with 100 GigE adapters dedicated to the LPAR. Note that the peak bandwidth is slightly lower than the peaks achieved with Bare Metal Linux.

Peak performance, single adapter, virtualized environments (AIX 7.2 on PowerVM):

MTU 1500 bandwidth
  Receive:  88 Gb/s
  Transmit: 85 Gb/s
  Duplex:   90 Gb/s

MTU 9000 bandwidth
  Receive:  97 Gb/s
  Transmit: 93 Gb/s
  Duplex:   128 Gb/s

When using virtualization such as PowerVM, we currently see lower bandwidth. With interrupt affinitization, AIX reaches around 88 Gb/s at MTU 1500 (RHEL reaches 91 Gb/s), and up to 97 Gb/s at MTU 9000.

Latency

The following is the half round trip latency for the 100 GigE adapter when using a 1-byte message. The difference between bare metal Ubuntu and AIX is that for AIX we are using PowerVM virtualization, which introduces overhead; there are also differences between the TCP/IP implementations and the features supported. Latency (usec) was measured for two configurations: Ubuntu BML on Power Systems Scale-Out S822LC Servers, and AIX 7.2 with PowerVM on Power Systems Scale-Out S824 Servers.

Small message rate

The following are the current small request/response (RR) message rates for the 100 GigE adapter. These were measured using 150 concurrent TCP sockets, each passing 1-byte data payloads (messages) back and forth, for the same two configurations: Ubuntu BML on Power Systems Scale-Out S822LC Servers, and AIX 7.2 with PowerVM on Power Systems Scale-Out S824 Servers. The higher small message rates on AIX are because of differences in the implementation of TCP and the device drivers.

Multiple ports per adapter speedup limitation

Using both ports on the adapter will not give double the speed of a single port. Two ports in use on the same adapter cannot exceed the speed of the underlying PCI bus the adapter is plugged into, so the performance limitation comes from the PCI bus, not the adapter.

The limiting factor is the 128 Gb/s PCI bus limit. This limit also applies if you EtherChannel both ports on the same adapter (EtherChannel is also known as port bonding and/or LACP on Linux).

When running multiple 100 GigE adapters, make sure that you have PCIe gen3 x16 slots available. The adapter will not physically plug into slower x8 and other PCI slots in the machine, so if x16 slots are not available you may have to move adapters around to free up the higher speed slots. Most slower adapters do not need the higher performance x16 slot. In addition, make sure that the PCIe slot is enabled for HDDW (Huge Dynamic DMA Window) addressing. For FSP-based systems, you can check this setting from the FSP GUI, the Advanced System Management (ASM) interface.

When utilizing multiple high speed adapters, you need to ensure that there are enough system resources to support the adapter traffic. Some systems will not support more than two adapters at full speed. In addition, you may run out of CPU if the application you are using consumes a lot of other CPU cycles, leaving too little for the network traffic. See more on CPU requirements later in this article.
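To confirm that an adapter is actually sitting in a PCIe gen3 x16 slot, the slot and negotiated link can be checked from the operating system. The commands below are a minimal sketch: the PCI address, device names, and slot numbering are placeholders for your own system, and the lspci check applies to Linux only.

    # Linux: find the adapter's PCI address, then check the negotiated link speed and width
    lspci | grep -i ethernet
    lspci -vv -s 0001:01:00.0 | grep -E 'LnkCap|LnkSta'    # expect Speed 8GT/s, Width x16 for gen3 x16

    # AIX: list the PCI slots and their descriptions, and match against the adapter's location code
    lsslot -c pci
    lscfg -vl ent0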

Section 2. Traffic characteristics affecting peak performance

If an application or workload does not use the same number of TCP sockets or the same message sizes used in our measurements, the performance will be lower. The following section shows the impact of fewer sockets and smaller message sizes on actual performance.

As seen in the graph, peak receive bandwidth is not obtained until there are 40 TCP sockets all receiving at the same time. If only 8 sockets are receiving, the bandwidth drops to about 26 Gb/s. For a single TCP socket, for things like FTP, the bandwidth drops to about 4 Gb/s.

The following graph shows small message rates. This measurement is the number of messages exchanged between two machines using multiple TCP sockets when each TCP socket has only one packet in transit. To get the peak small message rate of 550K messages a second, you need 100 or more TCP sockets active at any one time. With only 20 TCP sockets active the rate drops to 276K per second, and with 1 socket to 19K per second.
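The socket-count scaling described above can be reproduced with any multi-stream TCP benchmark. The sketch below uses iperf, one of the tools named in Section 5; the server name and run length are placeholders, and the options shown are the ones shared by iperf2 and iperf3.

    # on the server
    iperf -s

    # on the client: sweep the number of concurrent TCP sockets
    for p in 1 8 20 40; do
        iperf -c server01 -P $p -t 30
    done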

The following graph shows bandwidth at different TCP socket write sizes. An application or workload has to write to the TCP socket at least 32K at a time to achieve peak bandwidth. If only 4K at a time is written, the peak utilization of the adapter will be around 25 Gb/s.
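The write size a benchmark uses per socket call is normally configurable. With iperf, for example, the -l option sets the read/write buffer length, so the effect of small application writes can be approximated as shown below (server name is a placeholder).

    iperf -c server01 -P 40 -l 4K -t 30     # 4 KB writes: expect well below link rate
    iperf -c server01 -P 40 -l 32K -t 30    # 32 KB writes: needed to approach peak bandwidth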

Section 3. Adapter tuning options

The following are the recommended tuning changes to apply on AIX for peak bandwidth performance (a worked example follows at the end of this section).

Adapter tuning options:

Attribute name  Default value  Recommended value  Description
queues_rx       8              20                 Number of receive queues used by the network adapter for incoming network traffic.
queues_tx       2              12                 Number of transmit queues used by the adapter for outbound network traffic.
rx_max_pktx                                       Receive queue maximum packet count.
tx_send_cnt     8              16                 Number of transmit packets chained for adapter processing.

1) To display the current values, use: lsattr -El entX, where X is the number of the network adapter.
2) To list the allowed values of an attribute, use: lsattr -Rl entX -a <attribute>
3) To change the current value, use: chdev -l entX -a <attribute>=<value>

The benefit of using more receive queues is that each queue has a unique MSI-X interrupt, so packets are spread across more queues and thus across more CPU threads, ensuring the CPU is not the bottleneck. This can also help reduce latency for latency-sensitive workloads. However, do not increase the number of queues beyond what is needed for good performance. Increasing the queues has the negative impact of:

1. Consuming more memory for receive buffers, as each queue has to have a receive buffer pool.
2. Spreading the interrupts across more queues, which results in lower interrupt coalescing (i.e. fewer packets per interrupt) and thus higher interrupt overhead.

Single-threaded workloads, like a single FTP transfer, will only use a single transmit or receive queue due to how TCP connections are hashed to a queue. Multiple queues are needed to ensure good performance as more TCP connections are in use and more CPU threads (applications) are active. However, above some point, more queues just consume more memory and increase system interrupt overhead with no increase in throughput.
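As a worked example, the recommended queue settings from the table above could be applied with chdev. This is a minimal sketch: ent0 is a placeholder for the 100 GigE adapter, and depending on the device driver these attributes may only be changeable while the interface is not in use, or deferred with -P until the next reboot.

    lsattr -El ent0 | grep -E 'queues_rx|queues_tx|tx_send_cnt'         # check the current values
    chdev -l ent0 -a queues_rx=20 -a queues_tx=12 -a tx_send_cnt=16 -P  # record the change, applied at next boot
    shutdown -Fr                                                        # reboot so the -P change takes effect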

Section 4. Interrupt affinitization

To get the best performance out of any high speed adapter, POWER8 servers need to be configured so that incoming interrupt processing is handled on cores as close as possible to the PCIe bus the adapter sits on. This lowers latency and improves response time for adapter events. The following sections describe how to find the location of an adapter in the system and how to affinitize its interrupt handling to cores close to the PCI bus. Not affinitizing the interrupt handling can cause the peak bandwidth to drop by over 30%.

Determining adapter location

To determine the location code of the adapter, you can either check the LPAR configuration in the HMC or ask the OS. On AIX, to find the hardware location of the adapter, use:

lscfg -vl entX

where X is the number of the adapter. On Linux, you can use:

iface=ethX; cat /proc/device-tree/`cat /sys/class/net/$iface/device/devspec`/ibm\,loc-code ; echo

where ethX is the interface name for the adapter from ifconfig.

Determining which CPUs are local to the adapter

Each POWER8 system model has a different PCI bus numbering scheme. The following tables list which CPU socket a given PCI bus is attached to. Once you have the location of the adapter, you can check the tables to determine which range of CPUs to affinitize the interrupts to. Each system can have a different CPU range even if it has the same number of cores. The number of CPUs seen by the operating system is the number of cores multiplied by the SMT level: a system with 16 cores in ST mode will show only 16 CPUs, while a system with 32 cores and SMT4 will show 128 CPUs.

Here are the CPU ranges to assign interrupts on for various systems:

Power Systems Scale-Out S812 Servers
Any CPU will do, because the PCI bus is local to the single CPU socket in the system.

Power Systems Scale-Out S822, S824, and S822L Servers

Slot location ending in:   CPU range to use for interrupts
C6 or C7                   First half of the CPUs in the system
C3 or C5                   Last half of the CPUs in the system

P850
Slot location ending in:   CPU range to use for interrupts
C10 or C12                 First quarter of the CPUs in the system
C8 or C9                   Second quarter of the CPUs in the system
C3 or C4                   Third quarter of the CPUs in the system
C1 or C2                   Last quarter of the CPUs in the system

E870/E880
For E870/E880 systems you will need assistance from IBM.

Multiple LPAR location determination

If multiple LPARs are configured, it is possible that the bus with the adapter is not local to the resources of the LPAR, in which case no affinitization may be possible. Consult IBM for further information. If you are using DLPAR, no affinitization is possible.

Finding the interrupt numbers for an adapter

entstat -d entX will list the interrupt numbers for the transmit and receive queues.

Binding interrupts: AIX

On AIX the bindintcpu command is used to bind interrupts to CPUs.

CAUTION: Changing the rx and tx queue counts changes the interrupt numbers; the output listed in entstat will change if the number of transmit and receive queues is changed on the adapter. A reboot may also change the interrupt numbers: after a reboot, entstat may not report the same interrupt numbers as before. If bindintcpu is used in a script, the values would need to be updated accordingly.
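Putting the pieces together, a typical affinitization pass on AIX might look like the sketch below. The adapter name, interrupt level, and CPU number are illustrative only: the real interrupt levels come from your own entstat output, and the CPU range comes from the tables above.

    lscfg -vl ent2       # physical location code, e.g. one ending in C7
    entstat -d ent2      # note the interrupt levels of the transmit and receive queues
    bindintcpu 280 8     # bind interrupt level 280 to CPU 8; repeat for each queue interrupt,
                         # spreading them across the CPU range local to the adapter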

Binding interrupts: Linux

On Linux, the device driver automatically tries to affinitize interrupts to cores local to the adapter when the system is initialized.
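To verify where the Linux driver placed the queue interrupts, the standard procfs interfaces can be consulted. A minimal sketch, assuming an interface named eth0 and a placeholder IRQ number; the naming of the MSI-X vectors in /proc/interrupts varies by driver, so the grep pattern may need adjusting.

    grep -iE 'eth0|mlx' /proc/interrupts   # list the adapter's vectors and their per-CPU counts
    cat /proc/irq/145/smp_affinity_list    # show which CPUs IRQ 145 is allowed to run on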

Section 5. How to measure peak performance

To measure the peak bandwidth of the 100 GigE adapter, most multi-socket network benchmarks will work; typically iperf or uperf can be used. As seen earlier, up to 40 TCP sockets may be needed to achieve peak bandwidth. For BML, since there is no overhead from a virtualization layer, you can demonstrate link-limited bandwidth with as few as 8 TCP sockets. To achieve peak bandwidth under PowerVM, however, it may take from 24 to 40 TCP sockets.

When setting up the traffic profile, use a TCP socket write size of at least 32K. Smaller TCP socket write sizes, used as the default in many network benchmarks, may not show link-limited bandwidth because of the overhead of processing smaller buffers from the socket layer.

On AIX, you will need to increase the TCP send and receive space sizes to 768K or larger; this larger TCP window size is needed to keep data flowing between the two systems at 100 Gb/s (see the example at the end of this section). On Linux, the default settings, with a 4 MB upper limit on tcp_wmem and tcp_rmem, will suffice.

CPU requirements

Driving 100 Gb/s of network traffic requires a lot of CPU. For 10 GigE adapters we usually recommend between 0.7 and 1 core on AIX; since 100 GigE is 10 times faster, we recommend that at least 7 cores of CPU be available to measure link-limited bandwidth. Linux will require less CPU, in the 5 to 6 core range depending on CPU frequency. These estimates do not cover using SEA/VIOS, or KVM using vhost_net.

Multiple client performance

Using more than one client machine generally shows more consistent, and sometimes slightly better, performance. At the high speed of 100 GigE, with only one client the result is limited by the slower of two otherwise identically configured machines. Using multiple clients removes the lowest common speed machine from the results, so you end up measuring the server performance. Using multiple clients requires a 100 GigE switch; see the switch tuning tips below.

Apply latest PTFs

Before making performance measurements, make sure you have applied the latest updates or code releases to pick up any recent improvements.
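The AIX TCP window increase mentioned above can be set either system-wide with the no command or per interface with interface-specific network options (ISNO). A minimal sketch, assuming en0 is the interface over the 100 GigE adapter and using the 768K value from the text; the rfc1323 option (TCP window scaling) is added here because windows larger than 64 KB require it, although it is not called out above.

    no -o tcp_sendspace=786432 -o tcp_recvspace=786432 -o rfc1323=1              # system-wide
    chdev -l en0 -a tcp_sendspace=786432 -a tcp_recvspace=786432 -a rfc1323=1    # per-interface (ISNO)

When ISNO is enabled (the default), the per-interface values take precedence over the system-wide no settings for connections using that interface.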

Ethernet switch tuning

With higher network speeds, tuning of the network and switches becomes more important for achieving peak rates. With 100 GigE, our measurements found that performance can suffer if flow control through the network is not set up correctly. Because 100 GigE networks are 2.5 to 10 times faster than current networks, any stall or delay in network traffic has a much bigger negative impact on throughput. At this bandwidth you cannot take flow control through the network for granted, so check that Global Pause is turned on in all switch ports and adapters used.
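On the adapter side, the pause (flow control) settings can be checked from the OS. A minimal sketch for Linux using ethtool, with the interface name as a placeholder; on AIX, look for a flow control attribute in the lsattr -El output for the adapter, since the exact attribute name depends on the device driver. The switch-side Global Pause setting is configured in the switch's own CLI or GUI and is vendor specific.

    ethtool -a eth0                # show the current pause settings
    ethtool -A eth0 rx on tx on    # enable receive and transmit pause frames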
