Experiences with 40G/100G Applications


1 Experiences with 40G/100G Applications
Brian L. Tierney, ESnet
Internet2 Global Summit, April 2014

2 Outline
- Review of packet loss
- Overview of SC13 high-bandwidth demos
- ESnet's 100G testbed
- Sample of results
- Challenges

3 The Punch Line
- Getting stable flows above 10G is hard! Significant tuning is required.
- Devices are designed for 1000s of small flows, not 10s of large flows (packet loss is typical).

4 Remember: Avoiding Packet Loss is Key

5 A small amount of packet loss makes a huge difference in TCP performance
[Chart: TCP throughput vs. round-trip distance (Local/LAN, Metro Area, Regional, Continental, International) for Measured TCP Reno, Measured HTCP, Theoretical TCP Reno, and Measured with no loss. With loss, high performance beyond metro distances is essentially impossible.]
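The scale of this effect follows from the well-known Mathis et al. model of TCP Reno throughput; a back-of-the-envelope sketch (the MSS, RTT, and loss-rate values below are illustrative, not taken from the chart):

  \text{rate} \le \frac{MSS}{RTT} \cdot \frac{1}{\sqrt{p}}

With MSS = 1460 bytes, a continental RTT of 100 ms, and a loss rate p = 0.001 (0.1%):

  \frac{1460 \times 8\ \text{bits}}{0.1\ \text{s}} \times \frac{1}{\sqrt{0.001}} \approx 117\ \text{kbps} \times 31.6 \approx 3.7\ \text{Mbps}

so a single TCP Reno flow over a continental path is capped at a few Mbps by 0.1% loss, regardless of link capacity.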

6 SC13 Hero Experiments
(thanks to Azher Mughal at Caltech and Bill Fink at NASA Goddard for slide material)

7 ESnet 100G Testbed and SC13
[Diagram: SC13 SCinet connected to the ESnet 100G testbed (at NERSC and Starlight), the ESnet5 production network, MANLAN, and the ANA link to Europe (KIT, CERN, AMS), plus the Starlight fabric, a GENI rack, the Open Science Data Cloud 40GE testbed, BNL, and FNAL. Demo paths shown:
- NRL demo northern path (20G) and southern path (20G)
- NASA demo production path (50G) and testbed path (50G)
- OpenFlow/SDN demo ANA path (100G)
- Caltech demo ANA path (100G), FNAL path (60G), and NERSC testbed path (100G)
Diagram: Eli Dart, ESnet, 11/14/2013]

8 WAN Network Layout

9 Caltech Terabit Demo
- 7 x 100G links
- 8 x 40G links

10 Caltech Demo SC13 Results
- DE-KIT: 75Gbps from disk to disk (servers at KIT, two servers at SC13)
- BNL over ESnet: 80Gbps over two pairs of hosts, memory to memory
- NERSC to SC13 over ESnet: lots of packet loss at first; after removing a Mellanox switch from the path, the path was clean. Consistent 90Gbps reading from two SSD hosts sending to a single host in the booth.
- FNAL over ESnet: lots of packet loss; TCP maxed out around 5Gbps, but UDP could do 15Gbps per flow. Used 'tc' to pace TCP (see the sketch after this slide), after which single-stream TCP behaved well up to 8Gbps. Multiple streams were still a problem, indicating something in the path with too-small buffers, but the issue was never identified.
- Pasadena over Internet2: 80Gbps reading from the disks and writing on the servers (disk-to-memory transfer). The link was lossy in the other direction.
- CERN over ESnet: about 75Gbps memory to memory; disk about 40Gbps.
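For reference, the kind of 'tc' pacing mentioned above can be done with an HTB shaper; a minimal sketch (the interface name eth2 and the 8 Gbit/s rate are illustrative, not the demo's actual settings):

  # Shape all traffic leaving eth2 to 8 Gbit/s with a single HTB class
  tc qdisc add dev eth2 root handle 1: htb default 10
  tc class add dev eth2 parent 1: classid 1:10 htb rate 8gbit

This smooths out the sender's bursts, so an under-buffered device in the path is less likely to drop packets.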

11 Recent Caltech testing to CERN over ANA link

12 NASA Goddard SC13 Demo

13 ESnet's 100G Testbed

14 ESnet 100G Testbed
[Diagram: NERSC test hosts and StarLight test hosts connected across the testbed via MAN LAN.]

15 ESnet 100G Testbed
[Diagram: a dedicated 100G network linking testbed hosts at NERSC and StarLight over two Alcatel-Lucent SR7750 100G routers, attached to the ESnet production network at the aofa-cr5 and star-cr5 core routers, with the MANLAN switch at AofA connecting onward to Europe (ANA link).
- NERSC hosts: nersc-diskpt-1, -2, -3 (4x10G Myricom NICs, plus 1x10G Mellanox on -1 and -3); nersc-diskpt-6 and -7 (2x40G Mellanox + 1x40G Chelsio); nersc-ssdpt-1 (2x40G Mellanox); nersc-ssdpt-2 (4x40G Mellanox); plus an exogeni rack and the NERSC site router.
- StarLight hosts: star-mempt-1, -2, -3 (4x10G Myricom NICs, plus 1x10G Mellanox on -1 and -3); plus an exogeni rack and the StarLight 100G switch.
- VLANs: 4012 (all hosts); 4020 (loop from NERSC to Chicago and back, all NERSC hosts).]

16 100G Testbed Capabilities
This testbed is designed to support research in high-performance data transfer protocols and tools. Capabilities:
- Bare-metal access to very high performance hosts: up to 100Gbps memory to memory, and 70Gbps disk to disk
- Each project gets its own disk image with root access: custom kernels, custom network protocols, etc.
- Only one experiment runs on the testbed at a time, scheduled via a shared Google calendar

17 Testbed Access
- Proposal process to gain access described here:
- Testbed is available to anyone: DOE researchers, other government agencies, industry
- Must submit a short proposal to ESnet (2 pages)
- Review criteria: project readiness; could the experiment easily be done elsewhere?

18 Extreme TCP

19 40G Lessons Learned
- Single flow (TCP/UDP): Gbps, CPU limited
- Multiple flows: easily fill a 40G NIC
- Tuning for 40G is not just 4x the tuning for 10G (see the buffer-tuning sketch after this slide)
- Some of the conventional wisdom for 10G networking is not true at 40Gbps, e.g. parallel streams are more likely to hurt than help
- Sandy Bridge architectures require extra tuning as well
- Details at
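Much of that tuning is raising the kernel's TCP buffer ceilings far beyond the defaults so a single flow can fill a large bandwidth-delay product. A minimal sketch of fasterdata-style settings (the values are illustrative; see fasterdata.es.net for current recommendations):

  # Maximum socket buffer sizes (256 MB)
  sysctl -w net.core.rmem_max=268435456
  sysctl -w net.core.wmem_max=268435456
  # TCP autotuning limits: min, default, max (max = 128 MB)
  sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"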

20 New Test Tool: iperf3
iperf3 is a new implementation from scratch, with the goal of a smaller, simpler code base and a library version of the functionality that can be used in other programs. Some new features in iperf3 include:
- reports the number of TCP packets that were retransmitted and the congestion window
- reports the average CPU utilization of the client and server (-V flag)
- support for zero-copy TCP (-Z flag)
- JSON output format (-J flag)
- omit flag: ignore the first N seconds in the results
iperf-and-iperf3/
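A usage sketch combining the flags above (the host name is a placeholder):

  iperf3 -c testhost.example.net -t 30 -O 2 -V -Z -J > result.json

This runs a 30-second zero-copy test, omits the first 2 seconds of slow start from the results, and records retransmit counts and CPU utilization in the JSON output.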

21 Sample iperf3 output

iperf3 -c <host>
Connecting to host <host>, port 5201
[  4] local <addr> port <port> connected to <addr> port 5201
[ ID] Interval         Transfer     Bandwidth        Retr  Cwnd
[  4]  0.00-1.00  sec  2.50 MBytes  19.2 Mbits/sec   ...   ... KBytes
[  4]  1.00-2.00  sec  18.8 MBytes   157 Mbits/sec   ...   ... MBytes
[  4]  2.00-3.00  sec   325 MBytes  2.73 Gbits/sec   ...   ... MBytes
[  4]  3.00-4.00  sec   430 MBytes  3.60 Gbits/sec   ...   ... MBytes
[  4]  4.00-5.00  sec   419 MBytes  3.52 Gbits/sec   ...   ... MBytes
[  4]  5.00-6.00  sec   434 MBytes  3.64 Gbits/sec   ...   ... MBytes
[  4]  6.00-7.00  sec   454 MBytes  3.80 Gbits/sec   ...   ... MBytes
[  4]  7.00-8.00  sec   444 MBytes  3.73 Gbits/sec   ...   ... MBytes
[  4]  8.00-9.00  sec   477 MBytes  4.00 Gbits/sec   ...   ... MBytes
[  4]  9.00-10.00 sec   468 MBytes  3.93 Gbits/sec   ...   ... MBytes
[ ID] Interval         Transfer     Bandwidth        Retr
[  4]  0.00-10.00 sec  3.39 GBytes  2.89 Gbits/sec  1510  sender
[  4]  0.00-10.00 sec  3.32 GBytes  2.83 Gbits/sec        receiver

22 Sample results: TCP single vs. parallel streams, 40G to 40G

1 stream: iperf3 -c <host>
[ ID] Interval  Transfer     Bandwidth       Retransmits
[  4]  ... sec  3.19 GBytes  27.4 Gbits/sec  0
[  4]  ... sec  3.35 GBytes  28.8 Gbits/sec  0
[  4]  ... sec  3.35 GBytes  28.8 Gbits/sec  0
[  4]  ... sec  3.35 GBytes  28.8 Gbits/sec  0
[  4]  ... sec  3.35 GBytes  28.8 Gbits/sec  0

2 streams: iperf3 -c <host> -P2
[ ID] Interval  Transfer     Bandwidth       Retransmits
[  4]  ... sec  1.37 GBytes  11.8 Gbits/sec  7
[  6]  ... sec  1.38 GBytes  11.8 Gbits/sec  11
[SUM]  ... sec  2.75 GBytes  23.6 Gbits/sec  18
[  4]  ... sec  1.43 GBytes  12.3 Gbits/sec  4
[  6]  ... sec  1.43 GBytes  12.3 Gbits/sec  6
[SUM]  ... sec  2.86 GBytes  24.6 Gbits/sec  10
[ ID] Interval  Transfer     Bandwidth       Retransmits
[  4]  ... sec  13.8 GBytes  11.9 Gbits/sec  78
[  6]  ... sec  13.8 GBytes  11.9 Gbits/sec  95
[SUM]  ... sec  27.6 GBytes  23.7 Gbits/sec  173

23 40G sender to 10G receiver: parallel streams decrease throughput even more

Single stream: iperf3 -c <host> -w 128M
[ ID] Interval  Transfer     Bandwidth       Retr
[  4]  ... sec   238 MBytes  1.90 Gbits/sec  0
[  4]  ... sec   404 MBytes  3.38 Gbits/sec  0
[  4]  ... sec  1.06 GBytes  9.10 Gbits/sec  404
[  4]  ... sec  1.08 GBytes  9.30 Gbits/sec  0
[  4]  ... sec  1.14 GBytes  9.78 Gbits/sec  0
[  4]  ... sec  1.15 GBytes  9.89 Gbits/sec  0

2 parallel streams: iperf3 -c <host> -P2
[ ID] Interval  Transfer     Bandwidth       Retr
[  4]  ... sec  2.69 GBytes  2.30 Gbits/sec  311
[  6]  ... sec  2.69 GBytes  2.30 Gbits/sec  613
[SUM]  ... sec  5.38 GBytes  4.60 Gbits/sec  924

24 Intel Sandy/Ivy Bridge

25 Sample results: TCP on Intel Sandy Bridge motherboards
30% improvement using the right core!

nuttcp -i1 <host>
  ... MB / 1.00 sec = ... Mbps  0 retrans
  ... MB / 1.00 sec = ... Mbps  0 retrans
  ... MB / 1.00 sec = ... Mbps  0 retrans
  ... MB / 1.00 sec = ... Mbps  0 retrans
  ... MB / 1.00 sec = ... Mbps  0 retrans

nuttcp -i1 -xc 2/ <host>
  ... MB / 1.00 sec = ... Mbps  0 retrans
  ... MB / 1.00 sec = ... Mbps  0 retrans
  ... MB / 1.00 sec = ... Mbps  0 retrans
  ... MB / 1.00 sec = ... Mbps  0 retrans
  ... MB / 1.00 sec = ... Mbps  0 retrans

nuttcp:

26 Sample results: TCP on Intel Sandy Bridge motherboards, fast host to slower host

nuttcp -i1 <host>
  ... MB / 1.00 sec = ... Mbps    0 retrans
  ... MB / 1.00 sec = ... Mbps    0 retrans
  ... MB / 1.00 sec = ... Mbps  350 retrans
  ... MB / 1.00 sec = ... Mbps    0 retrans
  ... MB / 1.00 sec = ... Mbps  179 retrans

Reverse direction: nuttcp -i1 <host>
  ... MB / 1.00 sec = ... Mbps    0 retrans
  ... MB / 1.00 sec = ... Mbps    0 retrans
  ... MB / 1.00 sec = ... Mbps    0 retrans
  ... MB / 1.00 sec = ... Mbps    0 retrans

27 Speed Mismatch Issues
- More and more often we are seeing problems sending from a faster host to a slower host
- This can look like a network problem (lots of TCP retransmits), but the network is rarely the bottleneck anymore for many sites
- This may be true for: 10G host to 1G host; 10G host to a 2G circuit; 40G host to 10G host; any fast host to a slower host
- Pacing at the application level does not help
- The Linux tc command does help, but only up to speeds of 8Gbps (see the sketch after this slide)
- Will this just mask problems with under-buffered switches?
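On kernels with the fq qdisc (3.12 and later), per-flow pacing is an alternative to HTB shaping; a sketch (the interface name and rate are illustrative):

  # Pace every flow leaving eth2 at no more than 9 Gbit/s
  tc qdisc replace dev eth2 root fq maxrate 9gbit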

28 10G to 1G tests, packets dropped

29 But 10G to 1G can work just fine too.

30 Compare tcpdumps: kans-pt1.es.net (10G) to eqx-chi-pt1.es.net (1G)

31 Compare tcpdumps: kans-pt1.es.net (10G) to uct2-net4.uchicago.edu (1G)

32 Compare tcpdumps: kans-pt1.es.net (10G) to uct2-net4.uchicago.edu (1G), with pacing

33 Summary
- At the high end, network flows are often much less than the available network capacity
- Requires ZERO packet loss
- Requires considerable host tuning
- Work at the annual Supercomputing conference and on the ESnet 100G testbed is teaching us a lot about issues at the high end

34 More Information

35 Extra Slides

36 Intel Sandy/Ivy Bridge

37 Single flow 40G Results

Tool                 Protocol      Gbps  Send CPU  Recv CPU
netperf              TCP           ...   100%      87%
netperf              TCP-sendfile  ...   34%       94%
netperf              UDP           ...   100%      95%
xfer_test (IU tool)  TCP           ...   100%      91%
xfer_test (IU tool)  TCP-splice    ...   43%       91%
xfer_test (IU tool)  RoCE          ...   2%        1%
GridFTP              TCP           ...   100%      94%
GridFTP              RoCE          ...   100%      150%

38 Interrupt Affinity
- Interrupts are triggered by I/O cards (storage, network). High performance means a lot of interrupts per second.
- Interrupt handlers are executed on a core.
- Depending on the scheduler, either core 0 gets all the interrupts, or interrupts are dispatched in a round-robin fashion among the cores. Both are bad for performance:
- Core 0 gets all interrupts: with very fast I/O, the core is overwhelmed and becomes a bottleneck.
- Round-robin dispatch: very likely the core that executes the interrupt handler will not have the code in its L1 cache, and two different I/O channels may end up on the same core.

39 A simple solution: interrupt binding
- Each interrupt is statically bound to a given core (network -> core 1, disk -> core 2).
- Works well, but can become a headache and does not fully solve the problem: one very fast card can still overwhelm the core.
- Need to bind the application to the same cores for best optimization: what about multi-threaded applications, for which we want one thread = one core?
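A minimal sketch of static interrupt binding as described above (the IRQ number is hypothetical; look up your NIC's actual IRQs in /proc/interrupts):

  # Stop irqbalance so manual bindings are not overwritten
  service irqbalance stop
  # Bind NIC IRQ 123 to core 1 (smp_affinity is a hex CPU bitmask; 0x2 = CPU 1)
  echo 2 > /proc/irq/123/smp_affinity
  # Pin the application to a nearby core so it shares the cache domain
  taskset -c 2 iperf3 -c testhost.example.net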

40 Interrupt Binding Considerations
- On a multi-processor host, your process might run on one processor but your I/O interrupts on another. When this happens there will be a huge performance hit: a single 10G NIC can saturate the QPI link that connects the two processors.
- You may want to consider getting a single-CPU system, and avoid dual-processor motherboards if you only need 3 PCIe slots.
- If you need to optimize for a small number of very fast (> 6Gbps) flows, you will need to manage IRQ bindings by hand. If you are optimizing for many 500Mbps-1Gbps flows, this will be less of an issue.
- You may be better off doing 4x10GE instead of 1x40GE, as you will have more control mapping IRQs to processors.
- The impact of this is even greater with Sandy/Ivy Bridge hosts, as the PCIe slots are connected directly to a processor.
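One way to check which socket a PCIe NIC is attached to, and to keep the application on that socket (eth2 is a placeholder interface name):

  # numa_node prints the socket the slot is wired to (-1 means no affinity reported)
  cat /sys/class/net/eth2/device/numa_node
  # Keep CPU and memory on that socket (here node 0) for the transfer tool
  numactl --cpunodebind=0 --membind=0 iperf3 -s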

41 Experience With 100G Equipment
ESnet experiences:
- Advanced Networking Initiative (ANI)
- ESnet5 production 100G network
- Helping other people debug their stuff
Important takeaways:
- R&E requirements are outside the design spec for most gear. This results in platform limitations that sometimes can't be fixed. You need to be able to identify those limitations before you buy.
- R&E requirements are outside the test scenarios for most vendors. Bugs show up when an R&E workload is applied. You need to be able to troubleshoot those scenarios.

42 Platform Limitations
We have seen significant limitations in 100G equipment from all vendors with a major presence in R&E:
- 100G single flow not supported (channelized forwarding plane)
- Unexplained limitations; sometimes the senior sales engineers don't know!
- Non-determinism in the forwarding plane: performance depends on features used (i.e. config-dependent)
- Packet loss that doesn't show up in counters anywhere
If you can't find it, nobody will tell you about it: vendors don't know or won't say, so watch how you write your procurements. Second-generation equipment seems to be much better, and vendors have been responsive in rolling new code to fix problems.

43 They Don't Test For This Stuff
- Most sales engineers and support engineers don't have access to 100G test equipment: it's expensive, and setup of scenarios is time-consuming.
- The R&E traffic profile is different from their standard model. IMIX (Internet Mix) traffic is the normal test profile: aggregate web browsers, email, YouTube, Netflix, etc., i.e. large flow count, low per-flow bandwidth.
- This is to be expected: that's where the market is.
- R&E shops are the ones that get the testing done for the R&E profile. SCinet at the Supercomputing conference provides huge value. But, in the end, it's up to us, the R&E community.

44 New Technology, New Bugs
Bugs happen:
- Data integrity (traffic forwarded, but with altered data payload)
- Packet loss
- Interface wedge
- Optics flaps
Monitoring systems are indispensable. Finding and fixing issues is sometimes hard: a rough guess is that the difficulty is exponential in the degrees of freedom (vendors/platforms, administrative domains, time zones). Takeaway: don't skimp on test gear (at least maintain your perfSONAR boxes).
