Simulation and Exploration of RCP in the networks


EE384Y Course Project, Spring 2003

Simulation and Exploration of RCP in the Networks

Changhua He, changhua@stanford.edu, June 06, 2003

Abstract. RCP (Rate Control Protocol) is a rate-based congestion control algorithm that can reduce the flow duration for small-size files compared with the TCP slow-start mechanism. Before transmission, the sender and the routers interact to arrive at an optimal rate, in order to minimize the flow duration. We simulate and explore RCP in networks with M/Pareto traffic. The results show that the optimal rate depends on the background traffic load and the incoming traffic load of the bottleneck link. Furthermore, RCP works very well because some error in the optimal rate does not influence the flow duration greatly. Theoretically, we work out an expression for the queue size of a single queue with M/General traffic when the transmission rate is greater than the link capacity, and we describe a method to calculate an upper bound on the queue size when the transmission rate is less than the link capacity. Based on this, we explain the simulation results when the M/General process is specialized to M/Pareto.

1. Introduction

Congestion control plays a very important role in the current Internet in providing good performance. Network capacity has increased significantly due to the deployment of optical links, but the number of users and applications has also grown greatly. Thus congestion control schemes are still necessary: to limit flow rates so as to avoid congestion in the routers, to use the network resources efficiently so as to minimize flow durations, and to ensure fairness in resource allocation among flows. The existing congestion control algorithms are mostly feedback schemes based on TCP [Wid01, Floy01]. These schemes are window-based, and their basis lies in Additive Increase Multiplicative Decrease (AIMD) [Chiu87, Jaco88]. The key principle of AIMD is halving the congestion window for every window containing a packet loss (i.e., a congestion indication in wired networks), and increasing the congestion window by roughly one segment per RTT otherwise.
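The AIMD principle just described can be sketched in a few lines (a minimal illustration of the window rule, not a full TCP implementation; the helper name and parameters are mine):

```python
# Minimal sketch of the AIMD window rule (not a full TCP state machine).
def aimd_update(cwnd, loss, mss=1.0, min_cwnd=1.0):
    """Return the congestion window after one RTT."""
    if loss:
        return max(cwnd / 2.0, min_cwnd)  # multiplicative decrease: halve
    return cwnd + mss                     # additive increase: ~1 segment/RTT

# Grow for four loss-free RTTs, then one lossy window halves it:
w = 10.0
for loss in (False, False, False, False, True):
    w = aimd_update(w, loss)
# w is now 7.0: (10 + 4) / 2
```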
The second fundamental component of TCP congestion control is the Retransmit Timer, including the exponential backoff of the retransmit timer when a retransmitted packet is itself dropped. The third fundamental component is the Slow-Start mechanism for the initial probing for available bandwidth, instead of initially sending at a high rate that might not be supported by the network. Within this general congestion control framework of Slow-Start, AIMD, and Retransmit Timers, there is a wide range of possible behavior dynamics. Furthermore, some researchers have tried to develop congestion control schemes based on flow rate to provide TCP-friendly behavior for multimedia traffic [Jain89, Sisa98, Reja99]. The common objective of these schemes is to find a suitable way of controlling the traffic load according to the state of the network. Some strategies constrain the traffic by using formulas [Ban01], or build on a receiver-based mechanism [Hand02], the so-called TCP Friendly Rate Control (TFRC). TFRC is a receiver-based mechanism, with the calculation of the congestion control information (i.e., the loss event rate) performed in the data receiver rather than in the data sender. This is well suited to an application where the sender is a large server handling many concurrent connections, and the receiver has more memory and CPU cycles available for computation. In addition, the receiver-based mechanism is more suitable as a building block for multicast congestion control.

However, the protocols investigated above are all based on TCP, and they are actually inefficient when transferring small-size flows. Fraleigh measured the Sprint network and found the average flow size to be less than 10,000 bytes, on average 10-15 packets (including the SYN packet). With a Round Trip Time (RTT) of 100 ms and a link capacity greater than 100,000 bytes/sec, which can almost always be satisfied in current networks, such a flow can be finished in only one RTT given the appropriate rate, while under TCP it lasts about 500 ms due to Slow Start and the average rate is only about 20 Kb/s [Fral02]. This means TCP stretches flows in such cases. Hence another protocol, RCP (Rate Control Protocol), is needed to provide better performance. In this protocol, proposed by Rui Zhang-Shen and Nandita Dukkipati, the flow rate is determined during the handshake process before data transmission, by interaction between the routers and the end-host, instead of by TCP probing. The flow then transmits packets at the given rate directly, which reduces the flow duration. Using this scheme, each router tries to find the optimal rate that minimizes the average flow duration of the active flows, according to its own traffic situation.

2. Rate Control Protocol (RCP)

Intuitively, when a router assigns high rates to the incoming flows, the flows can finish in a short time, but packets might stay in the queue for a long time because of the growing queue lengths caused by the high-rate flows. On the other hand, when a router assigns lower rates, the average queue length decreases, lowering the packet delay, but the time to finish a flow increases because of the low transmission rate. This is a tradeoff. The basic idea of RCP is that the router assigns some amount of capacity to the active flows based on the traffic situation of the network, so as to minimize the average flow duration. Before a flow starts to transmit data, it first initiates a handshake process to determine the transmission rate.
Each router that receives the request grants some amount of capacity according to its own traffic situation and forwards the handshake message to the next router along the route. The receiver sends an acknowledgement back to the sender indicating the rate suitable for all the routers, and the sender then starts transmission at that rate directly. This is different from TCP: after its handshake, TCP starts transmission with one packet and increases the rate by Slow Start, which stretches small-size flows. The handshake is drawn in Figure 1.

Figure 1. Hand-shaking process of RCP
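The handshake described above can be sketched as follows. This is an illustration of the idea only, not the RCP message format; the function names are hypothetical, and the duration formula T = L/R_k + D anticipates the problem statement in Section 3:

```python
# Illustrative sketch of the RCP handshake idea (hypothetical names,
# not the protocol's wire format).
def rcp_handshake(granted_rates):
    """Each router along the path grants a rate; the acknowledged
    transmission rate is the minimum grant (the bottleneck's)."""
    return min(granted_rates)

def flow_duration(L, granted_rates, packet_delay):
    """T = L / R_k + D: transmission time at the bottleneck rate R_k
    plus the delay D of the last packet."""
    R_k = rcp_handshake(granted_rates)
    return L / R_k + packet_delay

# A 10,000-byte flow over routers granting 2e6, 1e6, 4e6 bytes/sec:
# the bottleneck grants 1e6 B/s, so with 0.05 s packet delay the
# duration is 10_000 / 1e6 + 0.05 = 0.06 s.
```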

3. Problem Statement

We can consider each router in the network as a queue. Flow i has size L_i. Each router j along the route from the sender to the receiver grants a rate R_j to the flow, and after the handshake the flow uses the minimum rate R_k = min{R_1, R_2, ..., R_n} to transmit data. The router k that assigns this minimum rate is called the bottleneck link for the flow. The flow duration is defined as the time from when the flow sends its first packet until the receiver receives its last packet:

T_i = T_E,i - T_S,i,

where T_i is the duration of flow i in the network, and T_S,i and T_E,i are the send time of the first packet and the receive time of the last packet, respectively. We then have the relationship

T_i = L_i / R_k + D_i,

where D_i is the packet delay (the delay of the last packet) and the first term, L_i / R_k, is the transmission time of the flow. Each router assigns the same rate to all of its active flows. The objective of RCP is to determine the rate that minimizes the average flow duration; once the optimal rate is determined, the router grants it to the flows and tries to operate at this optimal point. In the single-link case, the router's choice of rate fixes the operating point and hence the average flow duration, so we only need to explore how the background traffic influences the optimal operating point. In a network with many routers along the path of a flow, each router assigns rates according to its own traffic situation, which is more complex because of the diversity of router capacities and traffic situations. This case can be considered as several queues in series along the route of the flows, as shown in Figure 2.

Figure 2. Queue models during the flow transmission

The objective of this project is to explore RCP dynamics over networks: to find out how well RCP works, how the flow duration relates to the traffic parameters, and how to determine the optimal transmission rate that minimizes the flow duration.

4. Simulation
First, we run enough simulations to build some intuition about the algorithm and its performance. In real networks, background traffic exists at every router and obviously affects the router's behavior. Thus in the simulations we consider the traffic at each router to consist of background flows and interested flows; the queues along the route of the interested flows are our object of study. The relationship among the background flows in different queues is quite complicated, depending on the topology and traffic pattern, so for simplicity we assume the background flows in different queues are independent. Traffic in the current Internet has been shown to be heavy-tailed (Long Range Dependence) [Lela94, Paxs95, Feld98], and the M/Pareto process is suitable for modeling and simulating such

traffic [Neam99]. Hence we use this traffic model to generate both background flows and interested flows: the flow arrivals form a Poisson process with arrival rate λ, the flow size L is Pareto distributed, and each flow has a transmission rate R. We set the capacity of each router to C, so the average load of the router is

ρ = λE[L] / C.

In the following simulations we always assume admissible traffic for each router, i.e., ρ < 1. Among all the queues along the route of the interested flows, the one that assigns the minimal rate to the flows is considered the bottleneck link for these flows. Because a router assigns the same rate to all the flows passing through it, we can conclude that at the bottleneck link the background flows have a rate no greater than the interested flows; otherwise there would be another link that assigns a lower rate to the interested flows and becomes the bottleneck link. Rui and Nandita have obtained some results for the single-link case and developed the RCP algorithm based on observations of those results. However, background traffic might change the behavior of the router, so as a first step we simulate the single-link case with background flows. Throughout the simulations, we keep the capacity of the link at 100 Mbps, the delay of the link at 100 ms, the Pareto shape at 1.1, and the mean flow size at 10 packets, where each packet has a size of 1000 bytes.

Case 1: Single link with background traffic. The link has 20000 interested flows incoming, which cause an average load ρ_I, and each flow has a transmission rate r_I. There are also 30000 background flows with an average load ρ_B; each background flow should have a very low transmission rate, which we set to r_B = 0.02 and keep unchanged. We can calculate ρ_I and ρ_B from the flow information using the formula above. First, we consider the link under a relatively high load of background traffic, ρ_B = 0.497, and vary ρ_I and r_I; we get the average flow duration and queue length shown in Figure 3.
(The transmission rate in the figures is normalized by the link capacity.)

Figure 3. Flow duration and queue length vs. rate of interested flows (Case 1: high load)
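The M/Pareto source model used in these simulations (Poisson flow arrivals, Pareto-distributed sizes) can be sketched as below. The function names are mine, and the inverse-CDF sampling assumes shape > 1 so the mean is finite (shape 1.1, as above, gives a finite mean but infinite variance):

```python
import random

def m_pareto_flows(lam, shape, mean_size, horizon, rng=None):
    """Generate (arrival_time, size) pairs: Poisson(lam) flow arrivals,
    Pareto sizes with the given shape and mean (requires shape > 1)."""
    rng = rng or random.Random(1)
    x_m = mean_size * (shape - 1.0) / shape      # Pareto scale from mean
    t, flows = 0.0, []
    while True:
        t += rng.expovariate(lam)                # exponential inter-arrival
        if t > horizon:
            return flows
        size = x_m / rng.random() ** (1.0 / shape)  # inverse-CDF sample
        flows.append((t, size))

def load(lam, mean_size, capacity):
    """Offered load rho = lam * E[L] / C; admissible when rho < 1."""
    return lam * mean_size / capacity

# e.g. 500 flows/s of mean size 10,000 bytes into C = 12.5e6 B/s
# (100 Mb/s) gives rho = 0.4, comparable to the loads in Case 1.
```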

Intuitively, when the rate is low, the transmission delay is large and the packet delay is small, while when the rate is high, the transmission delay is small but the packet delay is large. Considering this tradeoff, there should be some optimal transmission rate that minimizes the flow duration. Figure 3 verifies that this remains true with background traffic present on the link. We can also see that the curve is rather flat around the optimal point, which means the optimum is not very sensitive to changes in the rate: for example, for the red curve with ρ_I = 0.3358, there is no large variation in flow duration when the rate changes between 0.1 and 0.2. Furthermore, when ρ_I increases (from 0.067 to 0.3358), the optimal point moves toward the left, which means the router should assign a lower rate to each flow when the load of interested flows increases. Next we examine how the background traffic influences the flow duration. The transmission rate of the background flows is generally very low, so we keep it at the previous value but change ρ_B from high load (0.497) to low load (0.1657), with all other parameters unchanged.

Figure 4. Flow duration and queue length vs. rate of interested flows (Case 1: low load)

In Figure 4 the shape of the curve is the same as before, and again there is an optimal rate with a rather flat region around it. The difference is that, for the same ρ_I, the optimal point appears at a higher rate: for ρ_I = 0.3358 the optimal rate is around 0.2 in Figure 3, while it appears around 0.5 in Figure 4. That means that when the load of background traffic is lower, we should assign a higher rate to each flow to reduce the transmission delay, and thus the flow duration, because the packet delay will not have much effect. Having simulated a single link with background traffic, we now consider more complex cases: multiple links in series along the route of the interested flows.

Case 2: Two links (the second link is the bottleneck). In this case, we set the second link with ρ_B = 0.497 and r_B = 0.02,
which is the same as the high-load situation in the single-link case. We then add a non-bottleneck link before it. The added link also carries 30000 background flows, but with a lower average load, and we are interested in the

situation where the background traffic has a relatively high transmission rate, so that it can influence the flow duration substantially. We set ρ_1 = 0.1471 and r_1 = 0.2. We use the same sets of interested flows, fed into the non-bottleneck link first and then into the bottleneck link. The results are shown in Figure 5.

Figure 5. Flow duration and queue length vs. rate of interested flows (Case 2)

Compared with Case 1, the curve is almost a version of the one in Figure 3 shifted upward. That means the added first link only introduces a constant delay to the flows and does not change the shape of the curve; again there is a rather flat region around the optimal rate. From this we can see the system is scalable and the optimal rate that minimizes the flow duration is determined by the bottleneck link. We do not need to add and simulate more non-bottleneck links, because they would similarly just add a constant delay. In Case 2 the interested flows are fed to the non-bottleneck link first; after being shaped by this link they enter the bottleneck link. We are still interested in whether things change if the interested flows are fed to the bottleneck link directly, so we set up Case 3 to test whether the relative position of the bottleneck link has any influence on the result.

Case 3: Two links (the first link is the bottleneck). In this case we keep everything the same as in Case 2 except that the positions of the two links are exchanged: now the interested flows are fed into the bottleneck link directly and then into the non-bottleneck link. The results are shown in Figure 6. Compared with Figure 5, there are only minor differences when the load of interested flows is high (0.3358), and no differences when the load of interested flows is low. That is because, with a high load of interested flows fed into the bottleneck link directly, the total load becomes rather high, the queue grows and causes a high packet delay, and the bottleneck link should assign a lower rate to the incoming flows.
In general, though, when the total load is not too high, the flow duration and the optimal point are not very sensitive to the relative position of the bottleneck link.
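Case 2's finding, that a non-bottleneck link mostly adds a constant delay, can be illustrated with a toy additive-delay model (my simplification with a made-up convex queueing penalty, not the report's simulator): a constant shift leaves the minimizing rate unchanged, so only the attained duration moves up.

```python
# Toy additive-delay model (not the report's simulator): duration =
# transmission time + a made-up convex queueing penalty at the
# bottleneck; an upstream link contributes only a constant delay.
L = 10_000.0                                   # flow size, bytes
queue_delay = lambda R: 0.5 * (R / 1e6) ** 2   # illustrative penalty, s
single = lambda R: L / R + queue_delay(R)      # bottleneck link alone
tandem = lambda R: single(R) + 0.04            # + constant upstream delay

def argmin_rate(duration, rates):
    """Rate (bytes/sec) minimizing the duration curve over a grid."""
    return min(rates, key=duration)

rates = [r * 1e5 for r in range(1, 51)]
# The whole curve shifts up by 0.04 s, so the optimal rate is the same
# for `single` and `tandem`, matching the Case 2 observation.
```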

Figure 6. Flow duration and queue length vs. rate of interested flows (Case 3)

Summary of observations. Based on the above simulations, we have the following findings: (1) For heavy-tailed traffic, which is what real networks carry, an optimal rate for flow transmission exists, and the curve is rather flat around the optimal value, which means the rate assignment need not be very precise. (2) Each router can determine its optimal rate from both the background traffic load and the incoming traffic load: the higher the load, the lower the optimal rate. (3) RCP can work very well over a series of links in the network, because it is scalable and the optimal rate is determined only by the traffic situation of the bottleneck link. In practice, each router decides the rate based on its own traffic situation, i.e., the load of background (ongoing) flows and the load of interested (incoming) flows. Fortunately, both parameters can be observed easily by the router: in the simplest scheme, the average queue length indicates the background traffic load and the queue growth rate indicates the incoming traffic load. Hence each router can periodically assign a suitable rate to the flows based on its observations of the traffic.

5. Analysis & Modeling

The simulations have given some useful results; theoretical analysis would provide better guidance. As discussed in Section 2, we can divide the flow duration into the transmission delay and the packet delay. The packet delay is very difficult to calculate, so we focus on the queue length and approximate the packet delay by the queue length, although we do not know the exact relationship between them. For simplicity, the analysis considers only a single link without background traffic, drawn in Figure 7.

Figure 7. Queue model of a single link without background traffic
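The observation rule summarized at the end of Section 4 (average queue length indicating background load, queue growth indicating incoming load) could be turned into a periodic update along the following lines. This is a hypothetical sketch only; the gains, the smoothing, and the linear mapping from queue observations to spare capacity are my assumptions, not the RCP rate equation:

```python
# Hypothetical periodic rate update based on queue observations
# (illustration only -- gains, smoothing and the linear mapping are
# assumptions, not the actual RCP controller).
def update_rate(capacity, avg_queue, queue_growth, current_rate,
                k_q=0.5, k_g=0.5):
    """Lower the granted per-flow rate when the queue is long
    (background load) or growing (incoming load); raise it otherwise."""
    congestion = k_q * avg_queue + k_g * max(queue_growth, 0.0)
    target = max(capacity - congestion, 0.05 * capacity)  # keep rate > 0
    return 0.5 * current_rate + 0.5 * target              # smooth update

# An empty, shrinking queue pushes the granted rate up toward capacity;
# a long, growing queue pushes it down.
```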

Assume the queue length can be infinite, and restate the problem as follows. The link capacity is C, and the server serves packets in the queue in packet-based FCFS order, i.e., the first packet to arrive is served first. Flows arrive according to a Poisson process with arrival rate λ, the flow size L follows a general distribution, and each flow has a transmission rate R; our objective is to calculate the average queue length (in bytes). When the flow size L is exponentially distributed, Pan et al. [Pan91] and Kosten [Kost74] have solved the problem, but that is not the case in real networks. Here we consider a flow size with a general heavy-tailed distribution. Because we cannot get a general closed-form expression for the queue length in this situation, we instead analyze some special values of R, namely R = ∞, R = C, R > C, R = C/n, and R → 0, and try to verify the shape of the curve.

5.1 R = ∞

When the transmission rate is infinite, each flow arrives as a whole at a single time point; this is exactly an M/G/1 queue, and the Pollaczek-Khinchine formula gives

E[W] = λE[S^2] / (2(1 - λE[S])),

where E[W] is the waiting time, and E[S] and E[S^2] are the first and second moments of the flow service time S = L/C. Because the flows arrive according to a Poisson process, by PASTA the time-average queue size equals the average seen by flow arrivals, and we have

E[Q] = C·E[W] = λE[S^2]·C / (2(1 - λE[S])) = λE[L^2] / (2C(1 - ρ)),

where E[Q] is the time-average queue size (in bytes).

5.2 R = C

In this case we consider the following flow-based FCFS model: the flow arrivals are exactly as in Figure 7, but the service discipline is different. The service is flow-based FCFS rather than the packet-based FCFS of our system; that is, once the first packet of a flow is served, the server continues to serve all the packets of that flow until it is completed, whether or not packets of other flows arrive. Let E[Q] be the queue length in our system and E[Q_F] the queue length in the flow-based FCFS system.
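The M/G/1 expression in Section 5.1 can be evaluated directly. The function below is a sketch (names mine); deterministic 5-byte flows give a hand-checkable case, and an infinite second moment, as with Pareto shape between 1 and 2, drives the mean queue to infinity:

```python
def mg1_mean_queue_bytes(lam, EL, EL2, C):
    """Pollaczek-Khinchine mean queue size in bytes for Poisson(lam)
    flow arrivals with size moments E[L] = EL, E[L^2] = EL2, served at
    capacity C: E[Q] = lam * E[L^2] / (2 * C * (1 - rho)), rho < 1."""
    rho = lam * EL / C
    assert rho < 1.0, "queue is unstable"
    return lam * EL2 / (2.0 * C * (1.0 - rho))

# Deterministic 5-byte flows, lam = 1/s, C = 10 B/s: rho = 0.5 and
# E[Q] = 1 * 25 / (2 * 10 * 0.5) = 2.5 bytes.  With E[L^2] infinite
# (Pareto shape between 1 and 2), E[Q] is infinite, as in Section 5.3.
```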
These two systems have the same queue length, E[Q] = E[Q_F], because at any time the arrivals are the same and the amounts of packets served (no matter which flow the packets come from) are also the same. On the other hand, comparing the flow-based FCFS model with the M/G/1 queue, the departures are exactly the same (packet by packet), so the only source of the difference in queue size is the arrivals. In general,

E[Q] = lim_{t→∞} (1/t) ∫_0^t [A(τ) - D(τ)] dτ,

where A(t) is the total size of packets arrived from 0 to t, D(t) is the total size of packets that have left the queue from 0 to t, and the queue is assumed empty at t = 0. We can draw the arrival processes of the M/G/1 queue and the flow-based FCFS system as in Figure 8: the dark arrows represent the arrivals of the M/G/1 queue and the light lines the arrivals of the flow-based FCFS system.

Figure 8. Arrival processes of the M/G/1 queue and the flow-based FCFS system

The queue-size difference between the two systems is determined by the difference of their arrival processes, i.e., by the area of the triangles in Figure 8: each flow of size L arrives spread over L/R seconds instead of instantaneously, contributing a triangle of area L^2/(2R). Assuming N(t) flows arrive in total from 0 to t,

Q_Δ = lim_{t→∞} (1/t) ∫_0^t [A_{M/G/1}(τ) - A_F(τ)] dτ = lim_{t→∞} (N(t)/t) · (1/N(t)) Σ_{i=1}^{N(t)} L_i^2/(2R) = λE[L^2] / (2R),

so with R = C,

E[Q] = E[Q_F] = E[Q_{M/G/1}] - Q_Δ = λE[L^2]/(2C(1-ρ)) - λE[L^2]/(2C) = ρλE[L^2] / (2C(1-ρ)).

5.3 R > C

Using the same method as in Section 5.2, we have Q_Δ = λE[L^2]/(2R) and

E[Q] = E[Q_{M/G/1}] - Q_Δ = (λE[L^2]/2) · (1/(C(1-ρ)) - 1/R).

From this result we can see that when the flow size is Pareto distributed (heavy-tailed) with shape between 1 and 2, the queue size is infinite for R ≥ C because E[L^2] = ∞, which drives the packet delay to infinity.

5.4 R < C

We consider only the special points R = C/n, where n ≥ 1 is an integer. Similarly to before, we construct another system with n parallel flow-based FCFS queues, each with capacity C/n, drawn in Figure 9. Assume the scheduler is intelligent and knows the size of each flow: it always

allocates the incoming flows to the sub-queue with the least unfinished work (in bytes). Each sub-queue is flow-based FCFS as discussed in Section 5.2. We account the queue size of this system as the sum of the queue sizes of all sub-queues.

Figure 9. n parallel flow-based FCFS queues

First of all, unlike the analogy in Section 5.2, the queue size of the system shown in Figure 9 is not exactly the same as in our system. Whenever queues build up (arrivals exceed service), in our system the backlog is later drained by all available capacity, while in the n parallel queues the backlog is distributed among the separate sub-queues, and each part can be drained only by the single server of its sub-queue when that server is free. However, this queue size is an upper bound on the queue size of our system, because the arrivals are the same and the cumulative departures are smaller in this system; such an upper bound is also useful in practice. To calculate it, we similarly compare with an M/G/n queue: the departures are the same as in an M/G/n queue, so we can repeat the argument of Section 5.2 to calculate the average queue size. However, we cannot expect a closed-form expression here, because to date there are no direct closed-form results for the M/G/n queue. Other methods could be used to approximate or analyze this queue, which we do not discuss here.

5.5 R → 0

Here we use the system drawn in Figure 9 to show that E[Q] → 0 as R → 0. Instead of the intelligent scheduler, which allocates incoming flows with a least-unfinished-work-first policy, consider a stupid scheduler that allocates each incoming flow to a sub-queue independently and uniformly at random. Obviously E[Q] ≤ E[Q_{F,LWF}] ≤ E[Q_{F,rand}], where E[Q] is the queue size of our system, E[Q_{F,LWF}] is the queue size of the system with the intelligent scheduler, and E[Q_{F,rand}] is the queue size of the system with the stupid scheduler.
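The two schedulers can be compared in a toy discrete-time simulation (my simplification: unit-size flows, at most one arrival per step, total capacity 1 split evenly across sub-queues; not the report's model): least-unfinished-work assignment should leave no more time-averaged backlog than random assignment on the same arrival stream.

```python
import random

def avg_total_backlog(n, arrivals, least_work, assign_seed=7):
    """Discrete-time toy of the n parallel flow-based FCFS sub-queues:
    each True in `arrivals` is a unit-size flow assigned to one
    sub-queue; every step each sub-queue drains 1/n (total capacity 1).
    Returns the time-averaged total backlog."""
    rng = random.Random(assign_seed)
    q = [0.0] * n
    acc = 0.0
    for arrived in arrivals:
        if arrived:
            if least_work:
                i = min(range(n), key=q.__getitem__)  # intelligent
            else:
                i = rng.randrange(n)                  # "stupid" random
            q[i] += 1.0
        q = [max(x - 1.0 / n, 0.0) for x in q]
        acc += sum(q)
    return acc / len(arrivals)

rng = random.Random(1)
arrivals = [rng.random() < 0.9 for _ in range(20_000)]  # load ~0.9
# Random assignment idles some servers while others are overloaded,
# so its time-averaged backlog exceeds the least-work scheduler's.
```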
For each sub-queue in the stupid-scheduler system, the arrival process is Poisson with rate λ/n, so

Pr(a queue builds up in this sub-queue at time t)
= Pr(arrival rate greater than service rate at time t)
= Pr(the first flow is still transmitting at time t, and one or more further flows arrive before t)
= Pr(L > Rt) · {1 - Pr(no flows arrive before t)}
= Pr(L > Rt) · (1 - e^{-λt/n}) → 0 as n → ∞, R = C/n → 0.

It follows that E[Q_{F,rand}] → 0, and hence E[Q] → 0, as R → 0.

5.6 Explanation

The analysis above shows that as R → 0 the queue size goes to zero, and thus the packet delay goes approximately to zero. In real networks, where the flow size is Pareto distributed (heavy-tailed) with shape between 1 and 2, the queue size is infinite for R ≥ C because E[L^2] = ∞, and thus the packet delay is infinite. On the other hand, the transmission delay decreases monotonically from infinity to zero as R increases from zero to infinity. Hence, adding the two to obtain the flow duration, we get the curve shape seen in our simulations (though we do not prove it rigorously), and the optimal rate exists somewhere between 0 and C. In the simulation results the queue size and flow duration are never large enough to be considered infinite; that is because the simulation time is finite and the maximal flow size is limited by the computer, so we cannot generate a true Pareto distribution with infinite variance in a short running time.

6. Conclusion

In this project we ran simulations and, based on the data, found that RCP can work very well in networks: the optimal rate minimizes the flow duration, and the flow duration does not change much under some error in the optimal value (the curve is flat around the optimal rate). The optimal rate is determined by the bottleneck link; the scheme is scalable and not sensitive to the relative position of the bottleneck link. Each router can determine the rate depending only on its own traffic situation, i.e., the load of background (ongoing) flows and the load of interested (incoming) flows. Furthermore, we attempted a theoretical analysis of the single queue without background traffic.
We obtain a closed-form expression for the queue size when R = ∞, describe a method to calculate an upper bound on the queue size when R < ∞, and prove that E[Q] → 0 as R → 0. Also, when the flow size is Pareto distributed with shape parameter between 1 and 2, we give a rough explanation of why the curve of flow duration vs. rate should have the shape seen in our simulations, with the optimal rate somewhere between 0 and ∞.

7. Future Work

So far we have achieved fairly good results in both simulations and theoretical analysis; however, much work remains. One open issue is that the analysis only considers the queue size of the system and approximates the packet delay by the queue size. But we do not know

how the packet delay relates to the queue size exactly, so it would be very interesting to do some research on the packet delay. Furthermore, we could not get a closed-form expression for the queue size when R < ∞, because the M/G/n problem has no known solution; still, it would be valuable to analyze how flat the curve is around the optimal rate, since that tells us how much error is tolerable when a router assigns the rate to the flows.

8. Acknowledgement

Special acknowledgements go to Rui Zhang-Shen and Nandita Dukkipati for their continuous assistance on the project. We are also grateful to Isaac Keslassy for his kind help in discussing and handling the project. Finally, acknowledgement goes to Prof. Nick McKeown for useful comments on the scope and depth of the project.

References:

[Ban01] D. Bansal, H. Balakrishnan, "Binomial Congestion Control Algorithms," IEEE INFOCOM 2001, Apr. 2001.
[Chiu87] D. M. Chiu, R. Jain, "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks," DEC-TR-509, Aug. 1987.
[Feld98] A. Feldmann, A. C. Gilbert, W. Willinger, and T. G. Kurtz, "The Changing Nature of Network Traffic: Scaling Phenomena," ACM Computer Communication Review, vol. 28, pp. 5-29, Apr. 1998.
[Floy01] S. Floyd, "A report on recent developments in TCP congestion control," IEEE Communications Magazine, pp. 84-90, Apr. 2001.
[Fral02] Charles J. Fraleigh, "Provisioning Internet Backbone Networks To Support Latency Sensitive Applications," PhD dissertation, May 2002.
[Hand02] M. Handley, J. Padhye, S. Floyd, J. Widmer, "TCP Friendly Rate Control (TFRC): Protocol Specification," draft-ietf-tsvwg-tfrc-03.ps, July 2001, exp. Jan. 2002.
[Jaco88] V. Jacobson, "Congestion Avoidance and Control," ACM SIGCOMM '88, pp. 314-329.
[Jain89] R. Jain, "A Delay-Based Approach for Congestion Avoidance in Interconnected Heterogeneous Computer Networks," DEC-TR-566, Apr. 1989.
[Kost74] L. Kosten, "Stochastic Theory of a Multi-Entry Buffer (1)," Delft Progress Report, Series F: Mathematical Engineering, Mathematics and Information Engineering, pp. 10-18, 1974.
[Lela94] Will Leland, Murad Taqqu, Walter Willinger, and Daniel Wilson, "On the Self-Similar Nature of Ethernet Traffic (Extended Version)," IEEE/ACM Transactions on Networking, Vol. 2, No. 1, pp. 1-15, Feb. 1994.
[Neam99] T. D. Neame, M. Zukerman and R. G. Addie, "Application of the M/Pareto Process to Modeling Broadband Traffic Streams," Proc. of ICON '99, pp. 53-58, Brisbane, Queensland, Australia, 28 September 1999.
[Pan91] Huanxu Pan, Hiroyuki Okazaki and Issei Kino, "Analysis of a Gradual Input Model for Bursty Traffic in ATM," TELETRAFFIC AND DATATRAFFIC, 1991.
[Paxs95] V. Paxson and S. Floyd, "Wide-area traffic: The failure of Poisson modeling," IEEE/ACM Trans. on Networking, 3(3):226-244, June 1995.
[Reja99] R. Rejaie, M. Handley, D. Estrin, "RAP: An end-to-end rate-based congestion control mechanism for real-time streams in the Internet," in Proc. of IEEE INFOCOM '99, Vol. 3, pp. 1337-1345, Mar. 1999.
[Sisa98] D. Sisalem, H. Schulzrinne, "The Loss-Delay Based Adjustment Algorithm: A TCP-Friendly Adaptation Scheme," in Proc. of NOSSDAV '98, Cambridge, England, July 1998.
[Wid01] J. Widmer, R. Denda, M. Mauve, "A survey on TCP-friendly congestion control," IEEE Network, pp. 28-37, May/June 2001.