
INFO334 / TELE302 Assignment 3 Solutions 2/10/2012

The WAN topology to be adopted for the enterprise WAN is shown in Figure 1.

Figure 1: NZAM WAN topology.

1 Task 1: Reliability Analysis (4 marks)

1. What is the end-to-end reliability between Dunedin and Auckland?

We have a chain structure DUD-CHC-WLG-AKL to work on. As explained in the lectures, we prefer to work with small numbers (of unavailability): for example, 99% availability corresponds to an unavailability of 0.01. The unavailability of a chain is simply the sum of the unavailabilities of the individual links (provided they are small enough). Precisely, the equivalent unavailability of DUD-WLG is 0.01 + 0.001 - 0.01 × 0.001 ≈ 0.01 + 0.001 = 0.011. Going further with the WLG-AKL link, the unavailability of DUD-AKL is then 0.011 + 0.01 - 0.011 × 0.01 ≈ 0.011 + 0.01 = 0.021, which means the availability of DUD-AKL is 1 - 0.021 = 97.9%.
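The chain calculation above can be sketched in a few lines of Python; the link unavailabilities are those given in the task (0.01 for DUD-CHC and WLG-AKL, 0.001 for CHC-WLG), and the function is a minimal sketch, not part of the assignment code:

```python
def series_unavailability(unavailabilities):
    """Combine links in a chain pairwise: u12 = u1 + u2 - u1*u2
    (equivalently, the availabilities multiply)."""
    u = 0.0
    for u_i in unavailabilities:
        u = u + u_i - u * u_i
    return u

# DUD-CHC (99%), CHC-WLG (99.9%), WLG-AKL (99%)
u_dud_akl = series_unavailability([0.01, 0.001, 0.01])
print(round(u_dud_akl, 3))      # close to the 0.021 derived above
print(round(1 - u_dud_akl, 3))  # availability, about 0.979
```

The exact value (0.02088) confirms that summing the small unavailabilities is a good approximation here.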

2. Using short and low-reliability links (99%), can we improve the DUD-AKL reliability? Find a cheap solution.

To enhance the reliability at the two ends (DUD-CHC and WLG-AKL), we can establish two extra 99% links, DUD-CHC and WLG-PMR, as shown in Figure 2.

Figure 2: NZAM WAN topology.

The following works out the unavailabilities:

DUD-CHC equivalent: 0.01 × 0.01 = 0.0001;
CHC-WLG: 0.001;
WLG-AKL equivalent: 0.01 × (0.01 + 0.01 - 0.01 × 0.01) ≈ 0.0002;
DUD-AKL equivalent, summing up: 0.0001 + 0.001 + 0.0002 = 0.0013.

Therefore the new topology gives a DUD-AKL availability of 99.87%. Note that adding a direct WLG-AKL link instead would give the same result as a WLG-PMR link, but the latter is cheaper, and gives better reliability to PMR as well.

3. Where is the reliability bottleneck, and to what level should it be upgraded so that the DUD-AKL reliability reaches at least 99.9%?

From the above analysis the bottleneck clearly lies at the CHC-WLG link, and we need to reduce its unavailability to 0.0007 (so that the sum becomes 0.001), i.e., the link's availability should be upgraded to 99.93%.
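The parallel/series combination above can be checked numerically with a short Python sketch (two parallel links are down together only if both fail, so their unavailabilities multiply):

```python
def parallel_unavailability(u1, u2):
    # Redundant links fail jointly: both must be down at once.
    return u1 * u2

def series_unavailability(unavailabilities):
    # Links in a chain: combine pairwise with u12 = u1 + u2 - u1*u2.
    u = 0.0
    for u_i in unavailabilities:
        u = u + u_i - u * u_i
    return u

# Doubled DUD-CHC link: two 99% links in parallel.
u_dud_chc = parallel_unavailability(0.01, 0.01)   # 0.0001
# WLG-AKL: direct 99% link in parallel with the WLG-PMR-AKL chain.
u_wlg_akl = parallel_unavailability(0.01, series_unavailability([0.01, 0.01]))
# End-to-end: the two equivalents in series with CHC-WLG (99.9%).
u_total = series_unavailability([u_dud_chc, 0.001, u_wlg_akl])
print(round(u_total, 4))  # about 0.0013, i.e. availability about 99.87%
```

This reproduces the 0.0013 unavailability (99.87% availability) worked out above.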

2 Task 2: Simulation (4 marks)

Following the simple.tcl example, it is probably easier to lay the topology out horizontally, so it looks like the following:

############## topology ################
#  H5   H4        H1
#    \    \      /
#  R5   R4   R3   R1   H2
#    /               \
#  H3                 R2
########################################

From the nam display shown in Figure 3 (unfortunately the naming of the nodes is a bit out of control here, so hopefully you have managed to cope with that), it is obvious that the bottleneck is at Link R4-R3, where basically all packets get dropped. This effectively relieves Link R3-R1, and there is no further dropping from there!

Figure 3: Simulation animation snapshot showing the WAN topology and packet drops.

To deal with the CBR traffic that causes trouble, we tried alternative queueing disciplines for Link R4-R3, including RED and SFQ. This results in reduced packet drops and improved throughput during the rush hours when the CBR traffic is active.

2.1 Packet drop comparison

Depending on the queueing discipline chosen for Link R4-R3, the number of CBR and TCP packet drops varies, as summarized in Table 1. While SFQ achieves the lowest drop count for TCP traffic, its CBR dropping is rather serious (in fact, almost one third of the CBR packets are lost). DropTail drops the fewest CBR packets, but a significant number of TCP packets. RED produces moderate CBR drops, which may make it a good choice for maintaining the CBR quality.

Table 1: Number of packet drops.

Discipline   TCP drops   CBR drops
DropTail         43          22
RED              59         167
SFQ              24         569

2.2 Throughput comparison

To assess the true TCP performance, we need to verify the TCP throughput (in terms of the number of TCP packets received at H1 per second). This can be visualized using the following command line:

awk '$1=="r" && $4=="5" && $5=="tcp" {print int($2)}' out.tr | uniq -c | awk '{print $2, $1}' | xgraph

Figure 4 compares the throughput obtained when using DropTail, RED, and SFQ. Interestingly, during the rush hour SFQ performs the best, while RED also slightly outperforms DropTail. For an explanation we can check the congestion window of the TCP flows. In the simulation script, the monitoring of cwnd can be enabled, e.g., on Flow H3-H1, by using the following lines:

set trf [open cwnd.tr w]
$tcp1 attach $trf
$tcp1 trace cwnd

Figure 5 reveals what happens with the TCP congestion window maintained for the H3-H1 flow. Despite more packet drops, RED copes with congestion better than DropTail does.

Figure 4: TCP packets received by H1 over time (x-axis: time in seconds; y-axis: # of TCP packets received) using the three disciplines DropTail, RED, and SFQ on the R4-R3 link respectively.
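The per-second count produced by the awk pipeline can equally be sketched in Python. The trace lines below are made up for illustration only, following the usual ns-2 trace layout assumed by the pipeline: field 1 is the event ("r" for receive), field 2 the timestamp, field 4 the receiving node (here node 5, i.e. H1), and field 5 the packet type:

```python
from collections import Counter

# Hypothetical ns-2 trace excerpt (event, time, from-node, to-node, type, size).
trace = """\
r 0.35 3 5 tcp 1000
r 0.72 3 5 tcp 1000
r 0.80 3 5 cbr 500
r 1.10 3 5 tcp 1000
d 1.20 4 3 tcp 1000
r 2.05 3 5 tcp 1000
"""

# Count TCP packets received at node 5 per whole second, mirroring:
#   awk '$1=="r" && $4=="5" && $5=="tcp" {print int($2)}' out.tr | uniq -c
per_second = Counter()
for line in trace.splitlines():
    fields = line.split()
    if fields[0] == "r" and fields[3] == "5" and fields[4] == "tcp":
        per_second[int(float(fields[1]))] += 1

print(sorted(per_second.items()))  # [(0, 2), (1, 1), (2, 1)]
```

Feeding the resulting (second, count) pairs to a plotter gives the same curves as the xgraph pipeline.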

Figure 5: cwnd variation of the H3-H1 TCP flow (x-axis: time in seconds; y-axis: cwnd) under DropTail, RED, and SFQ used by the Link R4-R3 queue.

Other queueing disciplines can also be considered, such as FQ and DRR (deficit round robin), both of which further improve TCP throughput but let down the CBR traffic.

2.3 Recommendation

Based on the simulation analysis presented above, we recommend using RED to obtain satisfactory performance for both the CBR and TCP traffic. Even though this may compromise the off-peak TCP performance, the network can maintain sustainable performance during rush hours, when the CBR traffic challenges the capacity of the WAN. On the other hand, we would like to acknowledge the limitations of queueing optimization. To solve the bottleneck problem, sooner or later we have to consider increasing the WAN link capacities. Indeed, further simulations have revealed that the congestion issue eases off simply by upgrading the R4-R3 link capacity to 3Mbps.
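The sawtooth pattern visible in the cwnd traces can be illustrated with a toy additive-increase/multiplicative-decrease (AIMD) model. This is a deliberate simplification for intuition, not what ns-2's TCP agents actually implement (it ignores slow start, timeouts, and ack clocking):

```python
def aimd_trace(rounds, drop_rounds, start=1, halve=0.5):
    """Toy AIMD congestion window: +1 per round, halved on a drop."""
    cwnd, trace = float(start), []
    for r in range(rounds):
        if r in drop_rounds:
            cwnd = max(1.0, cwnd * halve)  # multiplicative decrease
        else:
            cwnd += 1.0                    # additive increase
        trace.append(cwnd)
    return trace

# Drops at rounds 5 and 10 produce the familiar sawtooth.
print(aimd_trace(12, {5, 10}))
```

Early, well-spaced drops (as RED tends to induce) keep the window oscillating near the link capacity, whereas bursts of drops from a full DropTail queue can collapse it, which is consistent with the behaviour seen in Figure 5.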