Optimizing Performance: Intel Network Adapters User Guide

Network Optimization Types

When optimizing network adapter (NIC) parameters, the user typically considers one of the following three conditions for the imaging system. These NIC parameter options, as related to the Genie TS, are described in more detail in this document.

Optimized for Quick Response and Low Latency
- Disable Jumbo Packets
- Minimize or disable the Interrupt Moderation Rate
- Disable TCP Segmentation Offload
- Increase Transmit Descriptors
- Increase Receive Descriptors

Optimized for Throughput
- Enable Jumbo Packets
- Increase Transmit Descriptors
- Increase Receive Descriptors
- Enable Receive Side Scaling
- Reduce the Interrupt Moderation Rate

Optimized for CPU Utilization
- Maximize the Interrupt Moderation Rate
- Keep the default number of Receive Descriptors; avoid setting large numbers of Receive Descriptors
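On current Windows systems these advanced adapter properties can also be set from a script rather than the Properties dialog. The sketch below applies the throughput-oriented profile above through PowerShell's Set-NetAdapterAdvancedProperty cmdlet; the adapter name and the DisplayName/DisplayValue strings are assumptions that vary with the Intel driver version, so confirm them with Get-NetAdapterAdvancedProperty before use.

```python
# Sketch: applying the "Optimized for Throughput" profile above on Windows via
# PowerShell's Set-NetAdapterAdvancedProperty cmdlet. The adapter name
# ("Ethernet") and the DisplayName/DisplayValue strings are assumptions --
# they vary with the Intel driver version.
import subprocess

ADAPTER = "Ethernet"  # hypothetical adapter name

THROUGHPUT_PROFILE = {
    "Jumbo Packet": "9014 Bytes",           # enable jumbo packets
    "Transmit Buffers": "2048",             # increase transmit descriptors
    "Receive Buffers": "2048",              # increase receive descriptors
    "Receive Side Scaling": "Enabled",
    "Interrupt Moderation Rate": "Low",     # reduce the moderation rate
}

def apply_profile(adapter, profile):
    """Set each advanced adapter property through PowerShell."""
    for display_name, display_value in profile.items():
        subprocess.run(
            ["powershell", "-Command",
             f"Set-NetAdapterAdvancedProperty -Name '{adapter}' "
             f"-DisplayName '{display_name}' -DisplayValue '{display_value}'"],
            check=True,
        )

if __name__ == "__main__":
    apply_profile(ADAPTER, THROUGHPUT_PROFILE)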

About Jumbo Packets: Packet Size/MTU (Maximum Transmission Unit)

The menu below shows the network adapter settings for the Jumbo Packet parameter. By default, the adapter driver disables jumbo packets.

Third Party Technical Reference

The paper published by the Ethernet Alliance provides a technical resource when determining the need for jumbo packets:
http://www.ethernetalliance.org/wp-content/uploads/2011/10/ea-ethernet-jumbo-framesv0-1.pdf

The Pros of Using Jumbo Packets (larger MTU)

Networks with good Ethernet connections, which minimize the conditions that cause packet resends, are good candidates for jumbo packets (a larger MTU). Jumbo packets transmit data more efficiently because each frame carries more user data for the same fixed protocol overhead and underlying packet delays. Using jumbo packets means fewer frames are sent across the network, and fewer frames require less host CPU processing, which gives greater system throughput. Overall, a quality network will support a higher continuous camera frame rate.
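As a rough worked example of why fewer, larger frames help, the sketch below estimates the number of GigE Vision stream packets needed per image frame at a standard versus a jumbo MTU. The frame size and the per-packet header overhead are illustrative assumptions, not values from this guide.

```python
# Sketch: estimating GigE Vision stream packets per image frame at a standard
# versus a jumbo MTU. The 2048 x 1088 8-bit frame and the 36 bytes of
# per-packet overhead (20 IP + 8 UDP + 8 GVSP header) are illustrative
# assumptions, not values taken from this guide.
import math

def packets_per_frame(frame_bytes, mtu, header_bytes=36):
    payload = mtu - header_bytes          # image data carried per packet
    return math.ceil(frame_bytes / payload)

frame_bytes = 2048 * 1088                 # one 8-bit monochrome frame
for mtu in (1500, 9000):
    print(f"MTU {mtu}: {packets_per_frame(frame_bytes, mtu)} packets per frame")
# MTU 1500 -> ~1523 packets/frame, MTU 9000 -> ~249 packets/frame:
# roughly six times fewer frames (and interrupts) for the host to process.
```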

The Cons of Using Jumbo Packets (larger MTU)

While large frames provide greater efficiency, they also add lag time and latency on the network, and any packet resend has a larger impact on average throughput. Large frames may also fill buffer queues within the network equipment between the camera and the host computer. Multiple cameras connected to an Ethernet switch typically require the switch to support PAUSE frame flow control, which in turn implies that the network equipment has sufficient buffering to maintain continuous camera acquisitions.

Performance Optimization Controls

Select Performance Options, then click Properties to configure the NIC optimization settings. A short description follows the screen images.
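The settings shown in the Properties dialog can also be listed from a script, which is useful for recording a known-good configuration. A minimal sketch follows, assuming an adapter named "Ethernet"; Get-NetAdapterAdvancedProperty is a standard PowerShell cmdlet.

```python
# Sketch: listing the adapter's advanced properties (the same items shown in
# the Properties dialog) so a working configuration can be recorded. The
# adapter name "Ethernet" is an assumption.
import subprocess

result = subprocess.run(
    ["powershell", "-Command",
     "Get-NetAdapterAdvancedProperty -Name 'Ethernet' | "
     "Select-Object DisplayName, DisplayValue | Format-Table -AutoSize"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # e.g. Jumbo Packet / Disabled, Interrupt Moderation Rate / Adaptive, ...
```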

Adaptive Inter-Frame Spacing

Adaptive Inter-Frame Spacing (IFS) enables the network adapter to adapt dynamically to network traffic conditions. The inter-frame spacing is the required gap between successive Ethernet frames. A shorter gap increases the number of frames sent in a given time period, but raises the possibility of frames arriving so close together that they are not detected as individual frames, forcing a packet resend. Adaptive Inter-Frame Spacing is disabled by default. On gigabit networks the performance gain is minimal, especially with buffered Ethernet equipment.

Flow Control

Enable flow control in the NIC when the imaging network is comprised of multiple cameras connected to a switch that supports PAUSE frames to manage network traffic.

The NIC or the Ethernet switch can become overloaded if incoming frames (from multiple cameras) arrive faster than the device can process them. The flow control mechanism eliminates the risk of lost frames. The downside of managed network traffic is that PAUSE frame flow control reduces the absolute maximum transfer bandwidth possible on the network.
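The sketch below illustrates the overload case with assumed numbers: three cameras whose aggregate rate exceeds the 1 Gb/s link to the host, and a switch with a small shared packet buffer. Without PAUSE frames the buffer overflows within tens of milliseconds; none of these figures come from this guide.

```python
# Sketch: why PAUSE frames and switch buffering matter when several cameras
# share one GigE link to the host. The camera data rates and the 2 MB switch
# buffer are illustrative assumptions.
LINK_BYTES_PER_S = 125_000_000      # 1 Gb/s downlink to the NIC
CAMERA_BYTES_PER_S = 50_000_000     # each camera streaming ~50 MB/s
NUM_CAMERAS = 3
SWITCH_BUFFER_BYTES = 2_000_000     # shared packet buffer in the switch

aggregate = NUM_CAMERAS * CAMERA_BYTES_PER_S       # 150 MB/s offered load
overflow_rate = aggregate - LINK_BYTES_PER_S        # 25 MB/s excess
if overflow_rate > 0:
    # Without PAUSE frames the switch buffer absorbs the excess until it is
    # full, after which frames are silently dropped.
    seconds_to_overflow = SWITCH_BUFFER_BYTES / overflow_rate
    print(f"Buffer overflows after ~{seconds_to_overflow * 1000:.0f} ms")  # ~80 ms
else:
    print("Link can sustain the aggregate camera rate")
```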

Interrupt Moderation Rate

The Interrupt Moderation Rate parameter is used to manage the rate of CPU interrupts. By default the NIC driver sets this to Adaptive, where the interrupt rate automatically balances packet transmission interrupts and host CPU performance. In most cases no manual optimization of the Interrupt Moderation Rate parameter is required.

In some conditions, video frames from the GigE Vision camera may be transferred to the host display or memory buffer as data bursts instead of a smooth continuous stream. The NIC may be over-moderating acquisition interrupts to avoid overloading the host CPU with interrupts. If priority is required for acquisition transfers (i.e. a more real-time system response to the camera transfer), reduce the moderation rate by manually adjusting the NIC parameter.

Interrupt Moderation Rate options are:
- Adaptive (ITR = -1): no interrupts/sec defined, dynamically changed
- Off (ITR = 0): no limit
- Minimal (ITR = 200 interrupts/sec)
- Low (ITR = 400 interrupts/sec)
- Medium (ITR = 950 interrupts/sec)
- High (ITR = 2000 interrupts/sec)
- Extreme (ITR = 3600 interrupts/sec)
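To give a feel for what moderation does, the sketch below estimates the interrupt load for a continuous acquisition, reusing the packets-per-frame figure from the earlier jumbo packet example; the 30 fps camera rate is an illustrative assumption.

```python
# Sketch: rough interrupt load with and without moderation, using the
# packets-per-frame figure from the jumbo packet example above. The 30 fps
# camera rate is an illustrative assumption.
FPS = 30
PACKETS_PER_FRAME_MTU1500 = 1523      # from the earlier sketch (2048 x 1088, MTU 1500)

packets_per_second = FPS * PACKETS_PER_FRAME_MTU1500   # ~45,700 packets/s

# With Interrupt Moderation Rate = Off (ITR = 0) the NIC may raise up to one
# interrupt per received packet; with a fixed ITR, interrupts are capped and
# each one services a batch of packets.
for name, itr in [("Off", 0), ("Medium", 950), ("Extreme", 3600)]:
    interrupts = packets_per_second if itr == 0 else min(itr, packets_per_second)
    print(f"{name:8s}: ~{interrupts:6d} interrupts/s, "
          f"~{packets_per_second / interrupts:.0f} packets per interrupt")
```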

Receive Buffers

Receive Buffers are allocated in system memory by the NIC driver. When the host computer CPU is busy with tasks other than the imaging application, incoming image packets remain in the PC memory buffers allocated to store packets instead of immediately being copied into the image application buffers. By increasing the NIC host buffers, more incoming image packets can be stored by the NIC before it must start discarding them. This provides more time for the PC to switch tasks and move image packets to the image buffers.
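As a rough sense of scale, the sketch below estimates how much time a given number of receive buffers can cover while the CPU is busy elsewhere, assuming roughly one standard-MTU packet per buffer and a fully loaded gigabit link; both figures are illustrative assumptions.

```python
# Sketch: time margin before packet loss for different Receive Buffers
# (descriptor) counts. The descriptor counts and the worst-case 125 MB/s
# line rate are illustrative assumptions.
LINE_RATE_BYTES_PER_S = 125_000_000   # fully loaded 1 Gb/s link
PACKET_BYTES = 1_500                  # roughly one standard-MTU packet per descriptor

for descriptors in (256, 2048):       # default-ish versus increased
    buffered_bytes = descriptors * PACKET_BYTES
    margin_ms = buffered_bytes / LINE_RATE_BYTES_PER_S * 1000
    print(f"{descriptors} receive buffers -> ~{margin_ms:.1f} ms of slack "
          "before the NIC must discard packets")
# 256 buffers give only ~3 ms of slack; 2048 give ~25 ms for the CPU to
# return to the imaging application.
```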

Wait for Link

The Wait for Link parameter defines whether the driver waits for Auto Negotiation to complete before reporting the link state. Auto Detect is recommended for camera applications. The Intel NIC adapter options are:
- Off: the driver does not wait for Auto Negotiation.
- On: the driver waits for Auto Negotiation. If the speed is not set to Auto Negotiation, the driver waits for a short time for the link to complete and then reports the link state.
- Auto Detect: automatically set to On or Off, depending on speed and adapter type, when the driver is installed.

Receive Side Scaling

Receive Side Scaling enables multiple CPU cores (multi-core CPUs are common in current host systems) to handle received Ethernet frames, which improves system cache utilization. If your NIC supports Receive Side Scaling, enable the feature and select 2 queues, which provides good throughput with low CPU utilization.
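If you prefer to script this setting, Receive Side Scaling can be enabled from PowerShell as sketched below. Enable-NetAdapterRss is a standard cmdlet, while the queue-count property name and the adapter name "Ethernet" are driver-dependent assumptions.

```python
# Sketch: enabling Receive Side Scaling and requesting two RSS queues from
# PowerShell instead of the adapter's Properties dialog. The
# "Maximum Number of RSS Queues" DisplayName and the adapter name are
# assumptions that depend on the Intel driver.
import subprocess

cmds = [
    "Enable-NetAdapterRss -Name 'Ethernet'",
    "Set-NetAdapterAdvancedProperty -Name 'Ethernet' "
    "-DisplayName 'Maximum Number of RSS Queues' -DisplayValue '2 Queues'",
]
for cmd in cmds:
    subprocess.run(["powershell", "-Command", cmd], check=True)
```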

TCP/IP Offloading Options

The UDP Checksum Offloading option must remain disabled for compatibility with the Teledyne DALSA Filter Driver.
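A minimal sketch of enforcing this from a script follows; the exact DisplayName strings and the adapter name are driver-dependent assumptions, so verify them with Get-NetAdapterAdvancedProperty before use.

```python
# Sketch: keeping UDP checksum offload disabled, as required above for the
# Teledyne DALSA Filter Driver. The DisplayName strings and adapter name are
# driver-dependent assumptions.
import subprocess

for prop in ("UDP Checksum Offload (IPv4)", "UDP Checksum Offload (IPv6)"):
    subprocess.run(
        ["powershell", "-Command",
         f"Set-NetAdapterAdvancedProperty -Name 'Ethernet' "
         f"-DisplayName '{prop}' -DisplayValue 'Disabled'"],
        check=True,
    )
```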