Wide-Area InfiniBand RDMA: Experimental Evaluation


1 Wide-Area InfiniBand RDMA: Experimental Evaluation Nagi Rao, Steve Poole, Paul Newman, Susan Hicks Oak Ridge National Laboratory High-Performance Interconnects Workshop August 31, 2009, New Orleans, LA Research Sponsored by Department of Defense

2 Outline Motivation Test Environment Wide-Area InfiniBand Longbow Measurements ABEx Measurements USN and Anue Connections Voltaire and Mellanox HCAs TCP over 10GigE Conclusions

3 ORNL and NLR Collaboration On-Going Activities: Long-distance network testing and measurement analysis since Supercomputing in Dallas Future Planned Activities: 40/100/1000 Gbps Wide-Area Network for Large Datasets IB and other technologies over SONET, 10GigE, L2TP, IP and other connections Experimental Design and Measurements Performance analysis and comparison NLR: [figure]

4 Wide-Area High-Performance Transport Main Question: Best ways to support high-throughput data transport between remote high-performance computing, storage and analysis systems Technical Topic Addressed: Wide-area data transfers at 10Gbps rates over thousands of miles: robust end-to-end solutions, physical- through transport-layers Significant Technical Challenges: Two different options A. Performance of conventional TCP/IP (network-centric) solutions is unclear: 1. Many TCP variants: complex to implement, analyze and test 2. Enormous in-situ efforts are needed on a per-connection basis to reach multiple Gbps rates B. InfiniBand (enterprise-centric) solutions seem promising but need to be studied: 1. Very limited wide-area results are available 2. Objective side-by-side comparisons with TCP/IP are not available 3. Capable test environments are required

5 Approach and Results Specific Topics Discussed: 1. High-performance testbed: collection of high-end hosts, storage and edge systems; flexible 10Gbps, 8600-mile network core 2. Throughput performance of IB over wide-area connections (SONET, 10GigE) 3. Throughput performance of 10GigE + TCP high-performance variants Overall Summary: Scalability and plug-in capability of IB 4x SDR demonstrated over 8600 miles: two brands of IB WAN devices (Longbow XR and ABEx 2020), two brands of HCAs (Voltaire and Mellanox), two types of 10Gbps connections (Anue emulated and USN testbed) Technical Results Summary: In a nutshell, throughput decrease over the 8600-mile connection: IB over SONET/10GigE: 0.02 Mbps/mile; 10GigE-HTCP: 1.3 Mbps/mile

6 Conventional I/O Architecture for WAN Wide-Area Transport Traditionally Relies on TCP/IP: Requires conversion from enterprise I/O to TCP/IP data format and protocol, plus efficient TCP/IP transport [Diagram: CPUs, system memory and system controller on the host bus; storage controller and NIC on the PCI bus; the NIC connects to the wide-area TCP/IP network] Issues: Storage-to-WAN conversion needs careful impedance matching TCP/IP wide-area tuning is still complex and connection-specific

7 InfiniBand Architecture Provides Efficient High-Performance I/O: 4x SDR offers 8Gbps throughput out of the box Primarily designed for system/enterprise-level I/O - few-millisecond latencies Question: Are these bandwidths achievable in the wide area? Benefits: Extends natively high-performance connectivity between supercomputers and storage systems without a TCP transition

8 IB Over Wide-Area IB can be supported over wide-area: over SONET OC192 or 10GigE LAN-PHY or WAN-PHY infrastructure Capability: Storage across the WAN will simply appear as local IB WAN extension device: frame conversion; terminates/re-starts IB flow control [Diagram: host (CPUs, system memory, system controller, HCA) and storage attach to an IB switch; an IB WAN extension device bridges to the wide-area SONET/10GigE link]

9 Longbow XR extends IB to WAN Performs link-level buffer credit extension to span up to 150,000km WAN distances Full bandwidth of the 4X SDR link (1Gbyte/second full-duplex) Protocol transparent; complete interoperability
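Why credit extension matters here is essentially a bandwidth-delay-product argument: to keep a 4x SDR link full, the WAN device must buffer roughly one round trip's worth of data. The sketch below is an illustrative back-of-the-envelope calculation (not from the slides), assuming fiber propagation at about two-thirds the speed of light; the exact buffering in the Longbow hardware is not stated in this material.

```python
# Rough, illustrative calculation: credit buffering needed to keep a 4x SDR
# (8 Gbps) link full over the loop lengths used in this study.  Assumes
# propagation at ~2/3 c in fiber; real hardware figures may differ.

MILE_KM = 1.609
C_FIBER_KM_S = 2.0e5          # ~2/3 of the speed of light, in km/s

def buffer_bytes_needed(distance_miles, rate_gbps=8.0):
    """Bandwidth-delay product for a round trip over the given distance."""
    rtt_s = 2 * distance_miles * MILE_KM / C_FIBER_KM_S
    bdp_bits = rate_gbps * 1e9 * rtt_s
    return rtt_s, bdp_bits / 8

for miles in (0.2, 1400, 6600, 8600):
    rtt, buf = buffer_bytes_needed(miles)
    print(f"{miles:7.1f} miles: RTT ~{rtt*1e3:6.1f} ms, "
          f"~{buf/2**20:6.1f} MiB of buffering to keep the pipe full")
```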

10 ABEx 2020 IB WAN Extension Devices ABEx 2020 platform supports InfiniBand over OC192, 10GigE and L2TP (Layer-2 Tunneling Protocol)

11 UltraScience Net: Experimental network research testbed for advanced networking and associated technologies for high-performance applications, in operation since 2005 Features: End-to-end guaranteed bandwidth channels Dynamic, in-advance reservation and provisioning of fractional/full lambdas Secure control-plane for signaling Peering with ESnet, National Science Foundation CHEETAH, and other networks Sponsored by the Department of Defense

12 USN Architecture: Separate Data- and Control-Planes No data-plane continuity: can be partitioned into islands - necessitated an out-of-band control plane Secure control-plane with: encryption, authentication and authorization; on-demand and in-advance provisioning (GMPLS in IP is not secure enough) Dual OC192 backbone: SONET-switched in the backbone, with Ethernet-SONET conversion

13 USN data-plane: Node configuration In the core: two OC192s switched by Ciena CDCIs At the edge: 10/1 GigE provisioning using Force10 E300s [Node configuration diagram: Linux hosts attach to the E300 over 10 GigE/GigE; the E300 connects to the CDCI over 10 GigE WAN-PHY; the CDCI carries OC192 to Seattle and connections to CalTech and ESnet] Data-plane user connections: direct connections to core switches (SONET and 1 GigE), edge switches (Ethernet channels), or the UltraScience Net hosts

14 InfiniBand Over SONET: Obsidian Longbows RDMA throughput measurements over USN [Diagram: quad-core dual-socket Linux hosts connect over IB 4x to Longbow IB/S devices at ORNL; the ORNL CDCI reaches the Chicago, Seattle and Sunnyvale CDCIs over OC192 segments of 700, 3300 and 4300 miles] Measured loops: ORNL loop, 0.2 mile: 7.48Gbps; ORNL-Chicago loop, 1400 miles: 7.47Gbps; ORNL-Chicago-Seattle loop, 6600 miles: 7.37Gbps; ORNL-Chicago-Seattle-Sunnyvale loop, 8600 miles: 7.34Gbps Reference: IB 4x: 8Gbps (full speed); host-to-host over local switch: 7.5Gbps Hosts: dual-socket quad-core 2GHz AMD Opteron, 4GB memory, 8-lane PCI-Express slot, dual-port Voltaire 4x SDR HCA (Supercomputing 2008)

15 Performance Profiles IB RDMA Throughputs Throughput Distance Profile: plot throughput $T_B(d,s)$ as a function of connection length $d$ and message size $s$, for $B \in \{\text{SONET}, \text{WAN-PHY}\}$ Throughput Stability Profile: plot throughput as a function of connection length and repetition number for a fixed message size Average throughput over 10 iterations with 8M message size: $\hat{T}_B(d)$ Throughput Decrease Per Mile (DPM) at connection length $d_i$: $\hat{D}_B(d_i) = \dfrac{\hat{T}_B(d_0) - \hat{T}_B(d_i)}{d_i - d_0}$
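As a concrete illustration of these definitions, the sketch below reduces repeated 8M-message measurements at each connection length to the averaged profile and its spread. The data layout and repetition lists are assumptions for illustration; only the mean values, taken from the slide-14 results, come from this material.

```python
# Minimal sketch of the profile definitions above, assuming a hypothetical
# layout: throughput_gbps[d] holds repeated 8M-message measurements (Gbps)
# at connection length d (miles).  Mean values follow slide 14.

from statistics import mean, stdev

throughput_gbps = {
    0.2: [7.48] * 10, 1400: [7.47] * 10, 6600: [7.37] * 10, 8600: [7.34] * 10,
}

# Distance profile reduced to the average throughput T^_B(d) per length,
# plus the spread across repetitions (a summary of the stability profile).
profile = {d: (mean(v), stdev(v)) for d, v in throughput_gbps.items()}

for d, (t_hat, sd) in sorted(profile.items()):
    print(f"{d:7.1f} mi: T^_B = {t_hat:.2f} Gbps, std-dev = {sd*1e3:.1f} Mbps")
```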

16 Distance and Stability Profiles of IB over SONET Measurements using ib_rdma_bw, which uses IB CM for connection setup and management [Plots: distance profiles $\hat{T}_B(d)$ and $T_B(d,s)$, and stability profile at 8M message size] [Table: connection length $d_i$ (miles), 8M-message throughput (Gbps), std-dev (Mbps), DPM $\hat{D}_B(d_i)$ (Mbps)]
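A measurement driver of roughly the following shape can sweep message sizes with ib_rdma_bw. The flag set (-n iterations, -s message size, -i 2, -p port) mirrors the command line shown on a later slide; option names vary across perftest versions and the peer host name is a placeholder, so treat this as an illustrative sketch rather than a verified invocation.

```python
# Hedged sketch of a driver around ib_rdma_bw; flags follow the command line
# quoted on slide 25 and may differ on other perftest versions.

import subprocess

SERVER = "usn-host2"                      # hypothetical peer (runs the server side)
SIZES = [2**k for k in range(12, 24)]     # 4 KB ... 8 MB messages

def run_client(size, iters=1000, port=18515):
    cmd = ["ib_rdma_bw", "-n", str(iters), "-s", str(size),
           "-i", "2", "-p", str(port), SERVER]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout                     # parse peak/average bandwidth from here

for s in SIZES:
    print(f"--- message size {s} bytes ---")
    print(run_client(s))
```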

17 Wide-Area Connections: SONET OC192 and 10GigE WAN/LAN-PHY SONET: Widely deployed over long-haul optical backbone networks OC192 over DWDM: 9.6Gbps over a single wavelength - robust, well-understood technology Utilizes Time-Division Multiplexing to provide well-separated sub-lambdas: 4*OC48 or 64*OC3 10Gig Ethernet: Relatively recent technology with high potential for wide deployment WAN-PHY: Ethernet frames are packed into SONET OC192c payload: peak 9.6 Gbps LAN-PHY: Natively transports Ethernet packets: full 10Gbps A Comparison: SONET is widely deployed but needs more expensive infrastructure Sub-channel separation is more robust in SONET than 10GigE 10GigE transitions more naturally between LAN and WAN environments

18 IB over 10GigE LAN-PHY and WAN-PHY [Diagram: quad-core dual-socket hosts connect over IB 4x to Longbow IB/S devices at ORNL; the ORNL E300 (switched in software) feeds the ORNL CDCI and then the Chicago, Seattle and Sunnyvale CDCIs/E300s over 700-, 3300- and 4300-mile segments, with optional WAN-PHY switching; link legend: IB 4x, OC192, 10GigE WAN-PHY, 10GigE LAN-PHY, 1GigE; loops: ORNL 0.2 mile, ORNL-Chicago 1400 miles, ORNL-Chicago-Seattle 6600 miles, ORNL-Chicago-Seattle-Sunnyvale 8600 miles]

19 Performance Profiles of IB Over 10GigE WAN-PHY [Plots: distance profile $T_B(d,s)$, peak distance profile, average distance profile] Results are almost the same as in the SONET case [Table: connection length $d_i$ (miles), 8M-message throughput (Gbps), std-dev (Mbps), DPM $\hat{D}_B(d_i)$ (Mbps)]

20 Cross-Traffic Generation [Diagram: quad-core dual-socket hosts 1 and 2 run the IB RDMA test through the ORNL Longbow IB/S pair, while dual-core single-socket USN hosts 1 and 2 with Neterion 10GigE NICs inject cross-traffic through the ORNL E300; same CDCI/E300 backbone and loops as before (0.2, 1400, 6600 and 8600 miles over 700-, 3300- and 4300-mile segments); link legend: IB 4x, OC192, 10GigE WAN-PHY, 10GigE LAN-PHY, 1GigE]
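The competing traffic described on the next slide is constant-rate UDP on the WAN path. A hedged sketch of how such streams could be generated with iperf (v2-style flags: -u for UDP, -b for target rate, -t for duration, -P for parallel senders) is shown below; the host name is a placeholder and the sink is assumed to run `iperf -s -u`.

```python
# Hedged sketch of UDP cross-traffic generation with iperf; not the exact
# generator used in the study, just an illustration of the traffic levels.

import subprocess

SINK = "usn-host2"                  # hypothetical cross-traffic receiver

def udp_cross_traffic(rate_gbps, duration_s=120, streams=2):
    """Launch parallel UDP senders whose aggregate target is rate_gbps."""
    per_stream_mbps = int(rate_gbps * 1000 / streams)
    cmd = ["iperf", "-c", SINK, "-u", "-b", f"{per_stream_mbps}M",
           "-t", str(duration_s), "-P", str(streams)]
    return subprocess.Popen(cmd)

for gbps in (1, 2, 3, 4):           # one cross-traffic level at a time
    p = udp_cross_traffic(gbps)
    # ... run the IB RDMA distance-profile measurement here ...
    p.wait()
```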

21 Cross-Traffic Effect on IB over 10GigE WAN-PHY [Plot: average throughput for 8M message size vs. connection length at cross-traffic levels of 0-4 Gbps] Competing traffic: UDP streams on the WAN at 1, 2, 3, 4 Gbps Distance profiles are unaffected for cross-traffic levels of up to 1Gbps IB throughput was drastically affected at a cross-traffic level of 4 Gbps The effect of cross-traffic is greater on large message sizes

22 ABEx Measurements Mellanox HCA [Plots: LAN-PHY, WAN-PHY, L2TP] Measurements: quite similar to Longbow

23 Anue WAN Emulator Replicates the delays and bandwidth of OC192 and 10GigE wide-area connections Various connection lengths can be emulated, some not realizable on USN Objective: compare IB measurements on emulated and USN connections, with potential interpolation and extrapolation to other connection lengths [Diagram: test devices (hosts, Longbows, ABEx, KGs, etc.) on each side of the Anue-emulated OC192 / 10GigE WAN]

24 Anue Emulated OC192 and 10GigE Connections [Diagrams: quad-core dual-socket hosts connect through Longbow IB/S pairs at ORNL; one setup runs over the Fujitsu 10GigE switch (switched in software) and ORNL E300 to the Anue 10GigE emulator, the other directly to the Anue OC192 emulator; link legend: IB 4x, OC192, 10GigE WAN-PHY, 10GigE LAN-PHY, 1GigE]

25 USN and Anue Emulated Connections For average throughput: SONET: USN has higher IB throughput for long connections LAN-PHY: Anue has higher IB throughput for long connections WAN-PHY: both throughputs are the same at all lengths Peak IB throughputs are the same for all lengths [Plots: Longbow Mellanox tests (WAN-PHY, POS, LAN-PHY) with ib_rdma_bw -n ... -s ... -i 2 -p ...; throughput (MB/s) vs. latency (ms), curves for ANUE and USN average (A) and peak (P)]

26 Anue-USN Measurements - ABEx [Plots: LAN-PHY, WAN-PHY, L2TP] Performance differences are fairly small

27 Anue-USN Measurements: Longbow with Mellanox HCA [Plots: LAN-PHY, SONET, WAN-PHY] Anue results are slightly more optimistic

28 Mellanox and Voltaire HCAs - ABEx [Plots: LAN-PHY, WAN-PHY] Performance differences are fairly small

29 Longbow and ABEx with Mellanox HCAs [Plots: LAN-PHY, WAN-PHY] Performance differences are fairly small - more detailed experimentation and analysis are needed

30 10GigE Connections [Diagram: quad-core dual-socket hosts with Myrinet 10GigE NICs over PCI-Express connect through the ORNL E300 and CDCI to the Chicago, Seattle and Sunnyvale CDCIs/E300s over 700-, 3300- and 4300-mile segments; link legend: 10GigE WAN-PHY, 10GigE LAN-PHY, OC192; loops: ORNL 0.2 mile, ORNL-Chicago 1400 miles, ORNL-Chicago-Seattle 6600 miles, ORNL-Chicago-Seattle-Sunnyvale 8600 miles]

31 Challenges of Assessing TCP Transport TCP or a similar transport method is needed to ensure reliable data delivery over 10GigE provisioned connections Numerous TCP variants for high performance: 1. Standard slow-start and Additive-Increase Multiplicative-Decrease (AIMD) are not effective for high-performance operation 2. A variety of TCP variations have been proposed for high-performance operation - difficult to implement, analyze and test 3. All of them require multiple streams and a significant amount of tuning to provide multi-Gbps throughputs Testing TCP variants: 1. Congestion control dynamics are very complex and not amenable to simple analytical treatment - complementary experiments are needed 2. Dynamically loadable congestion control modules in Linux kernels: BIC, CUBIC, Hamilton TCP (HTCP), Scalable TCP, Highspeed TCP, TCP Vegas; auto-tuning is effective in buffer management; max buffer sizes are set to the highest allowed on the hosts through sysctl 3. Measurements show high variance - robust measurement and analysis methods are needed
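A minimal sketch of this kind of host tuning follows: select a pluggable congestion-control module and raise the socket-buffer limits that Linux auto-tuning works within. The buffer values are illustrative assumptions (the slides do not give the exact settings); this is equivalent to `sysctl -w` and requires root, with the chosen module available in the kernel.

```python
# Minimal sketch (assumed values): pick a congestion-control module and raise
# buffer limits so auto-tuning can grow windows toward wide-area
# bandwidth-delay products.  Requires root.

SETTINGS = {
    "/proc/sys/net/ipv4/tcp_congestion_control": "htcp",
    "/proc/sys/net/core/rmem_max": str(256 * 1024 * 1024),
    "/proc/sys/net/core/wmem_max": str(256 * 1024 * 1024),
    "/proc/sys/net/ipv4/tcp_rmem": "4096 87380 268435456",
    "/proc/sys/net/ipv4/tcp_wmem": "4096 65536 268435456",
}

for path, value in SETTINGS.items():
    with open(path, "w") as f:       # same effect as `sysctl -w key=value`
        f.write(value)
    print(f"{path} -> {value}")
```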

32 Performance Profiles TCP Throughputs BIC and Hamilton TCP pluggable Linux modules Throughput Distance Profile: plot throughput $T_A(d,n)$ as a function of connection length $d$ and number of streams $n$, for $A \in \{\text{BIC}, \text{HTCP}\}$ Throughput Stability Profile: plot throughput as a function of connection length, repetition and number of streams Average throughput over repetitions and the range of number of streams: $\hat{T}_A(d)$ Throughput Decrease Per Mile: $\hat{D}_A(d_i) = \dfrac{\hat{T}_A(d_0) - \hat{T}_A(d_i)}{d_i - d_0}$

33 Performance of TCP over 10GigE BIC with Linux auto-tuning [Table: connection length $d_i$ (miles), 8M-msg throughput (Gbps), std-dev (Mbps), DPM $\hat{D}_A(d_i)$ (Mbps)] Notes: high deviations; better than IB for local connections

34 Performance of TCP over 10GigE Hamilton TCP with Linux auto-tuning [Table: connection length $d_i$ (miles), 8M-msg throughput (Gbps), std-dev (Mbps), DPM $\hat{D}_A(d_i)$ (Mbps)]

35 Comparative Performance of BIC and Hamilton TCP [Plots: 1400 miles and 8600 miles, BIC vs. HTCP] Multiple streams are needed to get high throughputs We also tested Highspeed TCP, Scalable TCP and TCP Vegas - not as good as BIC and HTCP
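A hedged sketch of the multi-stream sweep behind such profiles is shown below: run iperf with an increasing number of parallel TCP streams and record the aggregate result (the [SUM] line in iperf's output). The server host name and durations are placeholders, not values from the study.

```python
# Hedged sketch: aggregate TCP throughput vs. number of parallel streams.

import subprocess

PEER = "usn-host2"                               # hypothetical iperf server

def aggregate_tcp(streams, duration_s=60):
    cmd = ["iperf", "-c", PEER, "-P", str(streams),
           "-t", str(duration_s), "-f", "g"]     # report in Gbits/sec
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout                            # aggregate is in the [SUM] line

for n in (1, 2, 4, 8, 16):
    print(f"=== {n} streams ===")
    print(aggregate_tcp(n))
```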

36 Decrease Per Mile Empirical Results Throughput Decrease Per Mile (DPM) at connection length $d_i$: $\hat{D}_B(d_i) = \dfrac{\hat{T}_B(d_0) - \hat{T}_B(d_i)}{d_i - d_0}$ Average over connection lengths: $\hat{D}_C = \dfrac{1}{k}\sum_{i=1}^{k} \hat{D}_B(d_i)$ IB over SONET/10GigE: 0.02 Mbps/mile; 10GigE-HTCP: 1.3 Mbps/mile The DPM estimate is of the derivative of a random quantity: a systematic way to assess its robustness is needed
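A small follow-on sketch of the DPM arithmetic, using the IB-over-SONET averages from slide 14 as example inputs; the per-repetition data behind the published figures is not reproduced in this material.

```python
# DPM per connection length and its average over lengths (D^_C), using the
# slide-14 IB-over-SONET averages as example inputs.

T_hat = {0.2: 7.48, 1400: 7.47, 6600: 7.37, 8600: 7.34}   # Gbps
d0 = min(T_hat)                                            # local-loop reference

dpm = {d: (T_hat[d0] - t) * 1e3 / (d - d0)                 # Mbps per mile
       for d, t in T_hat.items() if d != d0}

for d, v in sorted(dpm.items()):
    print(f"D^_B({d:.0f} mi) = {v:.4f} Mbps/mile")
print(f"D^_C = {sum(dpm.values()) / len(dpm):.4f} Mbps/mile")
```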

37 Decrease Per Mile Estimation Measurements involve random components For a fixed connection, measurements depend on the properties of (i) the connection, (ii) edge devices, (iii) host configurations, and (iv) measurement software: $T_C(d, a)$, with $d$ fixed and $a$ random Connection lengths are decided by which ones are possible on the testbed: $d_1, d_2, \ldots, d_n$ Composite distribution: complex and hard to characterize due to the interaction of multiple factors; $T$ is distributed according to $P_{T|d}$ and $d$ according to $P_d$, giving $P_{T,d} = P_{T|d}\,P_d$ Expected throughput: $\bar{T}_C(d) = E[T_C(d, a)] = \int_{D_C} T_C(d, a)\, dP_a$ DPM is a derivative of a random quantity, and the robustness of its estimator must be explicitly established

38 Preliminary Robustness Result of DPM Estimator How good is $\hat{D}_C$ as an estimator of $D_C$? Applying the Vapnik-Chervonenkis result for monotonic regression functions gives an exponential bound, roughly of the form $P\left\{ \left| \dfrac{\hat{T}_C(d_0) - \hat{T}_C(d_k)}{d_k - d_0} - \dfrac{\bar{T}_C(d_0) - \bar{T}_C(d_k)}{d_k - d_0} \right| > \epsilon \right\} \le A e^{-a \epsilon^2 k}$, guaranteed irrespective of the underlying distributions Roughly speaking, the estimator $\hat{D}_C$ is probabilistically close to $D_C$ and gets better with well-separated connection lengths - finer results appear to be possible

39 10Gbps Crypto Devices: Test Configuration [Diagram: quad-core dual-socket hosts 3 and 4 at ORNL connect through 10Gbps crypto devices and the Fujitsu 10GigE switch to the ORNL E300 and the USN CDCI backbone (Chicago, Seattle, Sunnyvale) over 700-, 3300- and 4300-mile segments; link legend: OC192, 10GigE WAN-PHY, 10GigE LAN-PHY; loops: ORNL 0.2 mile, ORNL-Chicago 1400 miles, ORNL-Chicago-Seattle 6600 miles, ORNL-Chicago-Seattle-Sunnyvale 8600 miles]

40 host1-host2 Connections and host3-host4 Connections through 10Gbps Devices [Diagram: at ORNL, hosts 1 and 2 connect directly over VLANs through the Fujitsu 10GigE switch and ORNL E300 to the USN ORNL CDCI, while hosts 3 and 4 connect through the 10Gbps crypto devices on the same VLAN path; link legend: OC192, 10GigE WAN-PHY, 10GigE LAN-PHY]

41 TCP Profiles: Before and After MTU Alignment host3-4 encrypted connection: file transfer Fiber loop between 10Gbps devices: 9 Gbps TCP throughput When connected to the E300: 9Gbps throughput locally MTU size is modified on the E300 so that the IP segment/datagram size matches the device MTU [Plots: 1400-byte MTU vs. 8950-byte MTU]
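A hedged sketch of this MTU-alignment step on a Linux host is shown below: set the interface MTU to match the encryptor/switch setting, then verify the path with DF-bit pings just under the limit. The interface and peer names are placeholders; only the 8950-byte MTU value comes from the slides.

```python
# Hedged sketch of MTU alignment and path verification on Linux.

import subprocess

IFACE, PEER, MTU = "eth2", "host4", 8950          # interface/peer are placeholders

# Set the host interface MTU to match the device setting.
subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", str(MTU)], check=True)

# ICMP payload = MTU - 20 (IP header) - 8 (ICMP header); -M do sets the DF bit,
# so fragmentation anywhere along the path makes the ping fail.
payload = MTU - 28
subprocess.run(["ping", "-M", "do", "-s", str(payload), "-c", "3", PEER], check=True)
```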

42 TCP Profiles Comparison: Plain and Encrypted Connections Fiber loop between 10Gbps devices: 9 Gbps TCP throughput Chicago loop: host3-4 connection achieved 8Gbps Sunnyvale loop: host3-4 connection achieved 1.5 times higher throughput Observations - compared to plain connections, for encrypted connections: High throughput is achieved with fewer streams Higher throughput is achieved at longer distances

43 UDP File Transfer Profiles: host3-4 connection (plain and encrypted), after MTU alignment Iperf UDP modules exchanged measurement information; server results were received at the client These throughputs are higher than UDT4's, possibly due to UDT4's congestion control

44 UDT4 vs. Modified UDT4: host3-4 Encrypted Connection MTU is set to 8950 bytes and congestion control is partially commented out Achieved throughputs more than doubled, with approximately the same variation across measurements File transfer throughput doubled; the variance under repetition did not depend on distance

45 Modified UDT4 Partial Results: Plain and Encrypted Connections Plain throughputs are 1Gbps better than encrypted throughputs

46 UDT4: CWND and RTT Estimates Host3-4 uses essentially the same RTT for all connections up to Chicago - much larger than ping values; host1-2 uses different RTTs for different connections (plain and encrypted) The host3-4 encrypted connection uses larger windows for the 8600-mile loop and essentially the same window size for all other connections; host1-2 uses different window sizes for different connections (plain and encrypted)

47 Conclusions and Results We conducted structured experiments to assess: 1. Throughput performance of IB RDMA over wide-area connections (SONET and 10GigE): two brands of IB WAN devices (Longbow XR and ABEx 2020), two brands of HCAs (Voltaire and Mellanox), two types of 10Gbps connections (Anue emulated and USN testbed) 2. Throughput performance of 10GigE + TCP high-performance versions Experimental Results: 1. Scalability and plug-in capability of IB: ~7.3 Gbps over 8600 miles 2. Performance testing of recent high-performance TCP over Ethernet: best performance is ~2Gbps at 8600 miles, but above 9Gbps locally 3. Side-by-side comparison over the 8600-mile connection: IB over SONET/10GigE: 0.02Mbps/mile; 10GigE-HTCP: 1.3Mbps/mile 4. Cross-traffic effects: IB performance is robust with up to 1Gbps cross-traffic, either on the WAN connection or using their own 1GigE ports. TCP is less prone to such effects, but more testing is needed. 5. Infrastructure: SONET or 10GigE can support both solutions; the IB WAN devices needed for IB are significantly more expensive

48 Acknowledgements This work was supported by the United States Department of Defense & used resources of the Extreme Scale Systems Center at Oak Ridge National Laboratory.

49 Thank you
