The NE010 iwarp Adapter
Gary Montry, Senior Scientist
+1-512-493-3241
GMontry@NetEffect.com
Today's Data Center
Separate fabrics, each with its own adapter:
- Networking: LAN / NAS over Ethernet
- Block storage: Fibre Channel SAN
- Clustering: Myrinet, Quadrics, InfiniBand, etc.
[Figure: users and applications connected through three separate adapters, one per fabric]
Overhead & Latency in Networking server software software hardware Sources The iwarp* Solution user kernel I/O adapter application I/O library OS device driver Packet Intermediate Processing Command Buffer Context CopiesSwitches context TCP/IP TCP/IP app buffer OS buffer driver buffer adapter buffer standard Ethernet TCP/IP packet I/O cmd I/O cmd I/O cmd I/O cmd I/O cmd I/O cmd 100% 60% 40% % CPU Overhead application to OS context es 40% Intermediate application buffer to OS copies 20% context es 40% application to OS transport processing Intermediate context es 40% buffer copies 20% Transport (TCP) offload RDMA / DDP * iwarp: IETF ratified extensions to the TCP/IP Ethernet Standard User-Level Direct Access / OS Bypass 3
Tomorrow's Data Center
Still separate fabrics for networking, storage, and clustering, but each carried over iwarp Ethernet:
- Networking: LAN / NAS moves from standard Ethernet to iwarp Ethernet
- Block storage: SAN moves from Fibre Channel to iwarp Ethernet
- Clustering: Myrinet, Quadrics, InfiniBand, etc. move to iwarp Ethernet
[Figure: users and applications still using a separate adapter per fabric, with iwarp Ethernet replacing each fabric technology]
Single Adapter for All Traffic
A converged iwarp Ethernet fabric for networking, storage, and clustering:
- Smaller footprint
- Lower complexity
- Higher bandwidth
- Lower power
- Lower heat dissipation
[Figure: users and applications reaching LAN, NAS, and SAN through a single converged iwarp Ethernet adapter]
NetEffect's NE010 iwarp Ethernet Channel Adapter
High Bandwidth Storage/Clustering Fabrics
[Figure: bandwidth tests, NE010 vs. InfiniBand and a 10 GbE NIC; bandwidth in MB/s (0-1000) vs. message size (128-16384 bytes); series: NE010, 10 GbE NIC, InfiniBand PCIe, InfiniBand PCI-X]
Source: InfiniBand - OSU website; 10 GbE NIC and NE010 - NetEffect measured
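For reference on how point-to-point bandwidth curves like this are commonly produced, the sketch below follows the usual OSU-style pattern: one rank streams a window of non-blocking sends per message size, the peer acknowledges the window, and bandwidth is bytes moved divided by elapsed time. It is a generic MPI sketch with illustrative parameters (window size, iteration count), not the specific harness behind these measurements.

```c
/* Generic streaming bandwidth sketch (OSU-style), MPI point-to-point.
 * Rank 0 streams WINDOW messages per iteration; rank 1 replies with a
 * short ack. WINDOW and ITERS are illustrative assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define WINDOW 64
#define ITERS  100

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(16384);
    char ack;
    MPI_Request reqs[WINDOW];

    for (int size = 128; size <= 16384; size *= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                for (int w = 0; w < WINDOW; w++)
                    MPI_Isend(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &reqs[w]);
                MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
                MPI_Recv(&ack, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                for (int w = 0; w < WINDOW; w++)
                    MPI_Irecv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &reqs[w]);
                MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
                MPI_Send(&ack, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
            }
        }

        double t1 = MPI_Wtime();
        if (rank == 0)  /* MB/s = total bytes streamed / elapsed seconds */
            printf("%6d bytes: %8.1f MB/s\n", size,
                   (double)size * WINDOW * ITERS / (t1 - t0) / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```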
Multiple Connections: Realistic Data Center Loads
[Figure: multiple-QP comparison, NE010 vs. InfiniBand; bandwidth in MB/s (0-1000) vs. message size (128-16384 bytes); series: NE010 with 1 QP, NE010 with 2048 QPs, InfiniBand with 1 QP, InfiniBand with 2048 QPs]
Source: InfiniBand - Mellanox website; NE010 - NetEffect measured
NE010 Processor Off-load
[Figure: bandwidth and CPU utilization, NE010 vs. 10 GbE NIC; bandwidth in MB/s (0-1000) and CPU utilization (0-100%) vs. message size (128-16384 bytes)]
Low Latency Required for Clustering Fabrics
[Figure: application latency, NE010 vs. InfiniBand and a 10 GbE NIC; latency in usec (0-160) vs. message size (128-16384 bytes)]
Source: InfiniBand - OSU website; 10 GbE NIC and NE010 - NetEffect measured
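Application latency figures of this kind are conventionally reported as half the average round-trip time of a ping-pong exchange. The sketch below shows that pattern with MPI; the iteration count and the absence of a separate warm-up phase are illustrative simplifications, not the exact procedure behind the chart.

```c
/* Generic ping-pong latency sketch: one-way latency is taken as half the
 * round-trip time, averaged over many iterations, for each message size. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 1000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char *buf = malloc(16384);

    for (int size = 128; size <= 16384; size *= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)  /* one-way latency = half the average round trip */
            printf("%6d bytes: %7.2f usec\n", size,
                   (t1 - t0) * 1e6 / ITERS / 2.0);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```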
Low Latency Required for Clustering Fabrics
[Figure: application latency measurements from D. Dalessandro, OSC, May 2006]
Summary & Contact Information iwarp Enhanced TCP/IP Ethernet Next Generation of Ethernet for All Data Center Fabrics Full Implementation Necessary to Meet the Most Demanding Requirement Clustering & Grid True Converged I/O Fabric for Large Clusters NetEffect Delivers Best of Breed Performance Latency Directly Competitive with Proprietary Clustering Fabrics Superior Performance under Load for Throughput and Bandwidth Dramatically Lower CPU Utilization For more information contact: Gary Montry Senior Scientist +1-512-493-3241 GMontry@NetEffect.com www.neteffect.com 12