GTP: Group Transport Protocol for Lambda-Grids


Ryan X. Wu and Andrew A. Chien
Department of Computer Science and Engineering
University of California, San Diego
{xwu,

Abstract

The notion of lambda-grids posits plentiful collections of computing and storage resources richly interconnected by dedicated dense wavelength division multiplexing (DWDM) optical paths. In lambda-grids, the DWDM links form a network with plentiful bandwidth, pushing contention and sharing bottlenecks to the end systems (or their network links) and motivating the Group Transport Protocol (GTP). GTP features a request-response data transfer model, rate-based explicit flow control, and, more importantly, receiver-centric max-min fair rate allocation across multiple flows to support multipoint-to-point data movement. Our studies show that GTP performs as well as other UDP-based aggressive transport protocols (e.g. RBUDP, SABUL) for single flows, and when converging flows (from multiple senders to one receiver) are introduced, GTP achieves both high throughput and much lower loss rates than the others. This superior performance is due to new techniques in GTP for managing end-system contention.

1. Introduction

Geometric increases in semiconductor chip capacity predicted by Moore's Law have produced a revolution in computing over the past 40 years. Even more rapid advances in optical networking are producing even greater bandwidth increases. The OptIPuter project [1] and other efforts such as CANARIE [2] are exploring the implications of the new lambda-grid environments (low-cost, plentiful wide-area bandwidth, plentiful storage and computing) that this revolution enables. The efforts described here are a part of the OptIPuter project. Circuit-switched lambdas can provide transparent end-to-end optical light paths available at low cost and delivering huge dedicated bandwidth. Networks of such connections form a lambda-grid (sometimes called a distributed virtual computer) in which the geographically distributed elements can be tightly coupled.
Compared to shared, packet-switched IP networks, lambda-grids have fewer endpoints (e.g. 10^3, not 10^8) and dedicated high-speed links (1Gig, 10Gig, etc.) between endpoints, which produce an environment within the lambda-grid with no internal network congestion but significant endpoint congestion. In addition, because lambda-grids are likely to connect small numbers of closely interacting resources, our perspective has evolved from a point-to-point model (e.g. data transfer from a single server to a client) to a collection of endpoints which engage in multipoint-to-point and multipoint-to-multipoint communication patterns at high speed. For example, a distributed scientific computation in a data grid might engage in coordinated communication across a number of data servers (endpoints in a group), fetching large quantities of data (e.g. 1GB) from ten distinct servers to feed a local computation or visualization. These and other similar scenarios pose a new set of research topics for data communication in lambda-grids. Delivering communication performance in high bandwidth-delay product networks is a long-standing research challenge even for point-to-point data transfer. Traditional TCP and its variants (e.g. [3], [4], [5]) were developed for shared networks in which the bandwidth on internal links is a critical and limited resource. As such, their congestion control techniques manage internal network contention, providing a reasonable balance of non-aggressive competition and end-to-end performance. As a result, slow-start causes TCP to take a long time to reach full bandwidth when the RTT is large, and its AIMD control law makes recovery from packet loss slow. Figure 1 shows TCP throughput for a 1MB data transmission across a 1Gbps link under different round-trip delays. We focus on the challenge of achieving high performance for the complex network structures and communication patterns in lambda-grids. First, in a network setting where internal network congestion is rare, the focus of rate and congestion control shifts to endpoints (or their access links), and the need to manage them is critical.
Second, for multipoint-to-point communication patterns, where flows may have various bandwidths and delays, it is important to achieve a fair rate allocation among flows.
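To give a feel for the slow-start and AIMD penalties mentioned above, the following back-of-envelope calculation (ours, not from the paper; it ignores delayed ACKs and assumes a 1460-byte MSS) estimates how long standard TCP needs to open and to recover its window on a 1 Gbps path:

```python
# Illustrative arithmetic only (not a TCP simulation): RTTs for slow-start
# to reach a 1 Gbps path, and time for AIMD to regrow a halved window.

MSS = 1460 * 8          # segment size in bits (assumed, typical Ethernet MSS)
LINK = 1e9              # 1 Gbps path

def slow_start_rtts(rtt):
    """RTTs for the congestion window to grow from 1 MSS (doubling
    each RTT) until it covers the bandwidth-delay product."""
    bdp_segments = LINK * rtt / MSS
    rtts, cwnd = 0, 1.0
    while cwnd < bdp_segments:
        cwnd *= 2
        rtts += 1
    return rtts

def aimd_recovery_seconds(rtt):
    """Time to grow back from half the BDP window to the full window,
    adding one segment per RTT (standard congestion avoidance)."""
    bdp_segments = LINK * rtt / MSS
    return (bdp_segments / 2) * rtt

for rtt in (0.0005, 0.06):   # 0.5 ms and 60 ms, the range used in Figure 1
    print(rtt, slow_start_rtts(rtt), round(aimd_recovery_seconds(rtt), 2))
```

At 60ms RTT, recovering from a single loss takes on the order of minutes, which is why Figure 1 shows TCP throughput collapsing as delay grows.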

Figure 1: TCP throughput (average rate in Mbps) of transferring 1MB of data under different round-trip delays. Dummynet [6] is used to emulate link delays from 0.5ms to 60ms in this measurement.

These two key issues motivate our work on the Group Transport Protocol (GTP), a receiver-driven transport protocol that exploits information across multiple flows to manage receiver contention and fairness. The key novel features of GTP include 1) a request-response based reliable data transfer model with flow capacity estimation schemes, 2) receiver-oriented flow co-scheduling and max-min fair [7] rate allocation, and 3) explicit flow transition management. Measurements from an implementation of GTP show that for the point-to-point single-flow case, GTP performs as well as other UDP-based aggressive transport protocols (e.g. RBUDP [8], SABUL [9]), achieving dramatically higher performance than TCP, with low loss rates. Results also show that for the multipoint-to-point case, GTP still achieves high throughput with 2 to 10 times lower loss rates than the other aggressive rate-based protocols. In addition, simulation results show that unlike TCP, which is unfair to flows with different RTTs, GTP responds to flow dynamics and converges to a max-min fair rate allocation quickly. The remainder of the paper is organized as follows. We describe the multipoint-to-point communication problem of lambda-grids in Section 2. We provide an overview of GTP in Section 3, and present the details of the rate and flow control mechanisms of GTP in Section 4. In Section 5 we illustrate the performance of GTP through both ns-2 simulations and real implementation measurements, followed by a summary and discussion of future work.

2. The Problem

2.1 Modeling Lambda-Grid Communications

Dense wavelength division multiplexing (DWDM) allows optical fibers to carry hundreds of wavelengths of 2.5 to 10Gbps each, for a total of terabits per second of capacity per fiber.
A lambda-grid is a set of distributed resources directly connected by DWDM links (see Figure 2), in which network bandwidth is no longer the key performance limiter for communication. Compared to shared, packet-switched IP networks, the key distinguishing characteristics of lambda-grid networks are:

- Very high speed (1Gig, 10Gig, etc.) dedicated links using one or multiple lambdas (through optical packet-switching or multiple optical network interfaces), connecting a small number of endpoints (e.g. 10^3, not 10^8), with possibly long delays (e.g. 60ms RTT from SDSC to NCSA) between sites.
- End-to-end network bandwidth which matches or exceeds the data processing (computing/I/O) speeds of the attached systems. For example, this supports a model employing geographically distributed storage, allowing data to be fetched from multiple remote storage sites to feed real-time, local data-computation needs.
- Internal network capacities which support private end-to-end connections for a large number of lambda-grids simultaneously, enabling numerous lambda-grids to have the high-speed dedicated bandwidth described above.

We can then view the optical links between end-system pairs as virtual dedicated connections, in contrast to the commonly shared links of traditional IP networks.

Figure 2: Lambda-grids: grid resources interconnected by DWDM links.

We model the set of end-to-end connections and resources which form a lambda-grid as follows. Lambda-grid endpoints and underlying wavelengths form a connection graph, whose vertices represent distributed sites on lambda-grids and whose edges denote dedicated links between endpoints. Each edge has a capacity (bandwidth), which is the allocated wavelength capacity between two destinations; that capacity is dedicated to the lambda-grid (not shared with others). This is a different model from shared IP networks, where two endpoints may be connected through intermediate nodes (e.g. routers) and shared

links. We illustrate this in Figure 3, which shows that contention is likely to be shifted from the internal network (see Figure 3a) to the endpoints (or their access links, where multiple dedicated connections terminate; see Figure 3b).

Figure 3: Receiver R's connection view with three senders S1, S2, S3. (a) Shared IP connection: senders connect with the receiver via shared links and intermediate nodes. (b) Dedicated lambda connection: dedicated capacity between each sender/receiver pair.

2.2 Shift of Management from Network to End System

Recently, communication patterns of lambda-grids have evolved from a point-to-point dedicated connection model in a more sophisticated direction where multipoint-to-point and multipoint-to-multipoint communications happen. High-speed dedicated wavelength connections will be used to access large distributed data collections (petabytes of online storage), increasing the need to fetch data from multiple sites concurrently to support local computation. This model already exists in content delivery networks (CDNs) such as Kazaa [10] and BitTorrent [11], where data is stored on multiple replicated servers and clients access multiple servers simultaneously to obtain the desired data. In such a multipoint-to-point setting, where multiple dedicated lambda connections come together, the aggregate capacity of the multiple connections is far greater than the data handling speed of the end system. As a result, the critical contention occurs at endpoints, not within the network. This motivates a fundamental change in the focus of transport protocols toward management of congestion at endpoints.

2.3 Multipoint-to-point Communication Challenges

We formulate the multipoint-to-point group communication problem as follows. Suppose that there are N senders and one receiver on the lambda-grid. Let C_r denote the receiver access bandwidth and x_i the throughput of the connection between sender i (1 <= i <= N) and the receiver.
The goal is to maximize the aggregate throughput (in the best case, C_r) of these N connections while achieving low loss rates and fairness across flows (with various RTTs). Besides sharing the challenges of point-to-point high bandwidth-delay product transmission, we identify the following research challenges around this multipoint-to-point communication problem:

- Efficient low-loss transmission: The solution should be aggressive enough to utilize all of the receiver's communication capacity with multiple connections in a short time period (e.g. several RTTs), while maintaining a low average loss rate.
- Convergence property: If all N flows are long-lived, the rate allocation vector (x_1, x_2, ..., x_N) should converge to a fixed rate allocation (x_1*, x_2*, ..., x_N*) regardless of the initial state, flow arrival sequence, or other temporal details.
- Fairness among flows: The allocation of rates across flows should meet a range of fairness criteria, especially fairness for flows with different RTT values.
- Quick reaction to flow dynamics: The solution should react quickly to flow joins and terminations, and converge to a fair allocation from a transition state.

3. GTP Overview

The Group Transport Protocol (GTP) is designed to provide efficient multipoint-to-point data transfer while achieving low loss and max-min fairness among network flows. In this section we give an overview of GTP, including its design rationale, framework, and major features.

3.1 Design Rationale

As described in Section 2, lambda-grids shift traffic management from internal network links to end systems. This is especially true for the multipoint-to-point transfer pattern, where multiple wavelengths terminate at a receiver, aggregating a much higher capacity than the receiver can handle. In a sender-oriented scheme (e.g. TCP), this problem is more severe because the high bandwidth-delay product of the network makes it difficult for senders to react to congestion in a timely and accurate manner.
To address this problem, GTP employs receiver-based flow management, which locates most of the transmission control at the receiver side, close to where packet loss is detected, and where it actually happens in lambda-grids because of endpoint congestion. Moreover, the receiver-controlled rate-based scheme in GTP, where each receiver explicitly tells senders the rate they should follow, allows flows to be adjusted as quickly as possible in response to detected packet loss. To support multi-flow management and enable efficient and fair utilization of the receiver capacity, GTP uses a receiver-driven centralized rate allocation scheme. In this approach, receivers actively measure the progress (and loss) of each flow, estimate the actual capacity of each flow, and then allocate the available receiver capacity fairly across the flows. Because GTP's receiver-centric rate-based approach can manage all senders of a receiver, it enables rapid adaptation to flow dynamics, adjusting seamlessly when flows join or terminate. We describe the features of GTP in more detail in the following subsections.
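As an illustration of this receiver-driven division of labor, the sketch below mimics the request-response exchange: the receiver asks for data at a rate it chooses and re-requests anything lost, while the sender simply obeys. All type and field names here are our invention for illustration, not GTP's actual wire format:

```python
# Minimal, illustrative sketch of a receiver-driven exchange (names are
# hypothetical, not GTP's wire format). The receiver issues data/rate
# requests; the sender performs no loss recovery of its own.

from dataclasses import dataclass

@dataclass
class RateRequest:          # control packet: receiver -> sender (over TCP)
    first_udb: int          # first unit data block requested
    last_udb: int           # last unit data block requested (inclusive)
    rate_mbps: float        # rate the sender should not exceed

@dataclass
class DataPacket:           # data packet: sender -> receiver (over UDP)
    seq: int                # per-connection sequence number (loss detection)
    udb: bytes              # one unit data block

def sender_respond(req: RateRequest, store: dict) -> list:
    """Sender side: emit the requested UDBs (pacing omitted here)."""
    return [DataPacket(seq=i, udb=store[i])
            for i in range(req.first_udb, req.last_udb + 1)]

def receiver_check(packets: list, req: RateRequest) -> list:
    """Receiver side: detect gaps in sequence numbers; missing UDBs are
    simply re-requested in a later RateRequest."""
    got = {p.seq for p in packets}
    return [i for i in range(req.first_udb, req.last_udb + 1) if i not in got]

store = {i: bytes([i]) for i in range(8)}
req = RateRequest(0, 7, rate_mbps=500.0)
packets = sender_respond(req, store)
lost = packets[:2] + packets[3:]          # drop seq 2 to mimic endpoint loss
print(receiver_check(lost, req))          # -> [2]
```

Note that all retransmission state lives at the receiver, which matches the design rationale above: loss is detected where it occurs.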

3.2 Protocol Framework

GTP is a receiver-driven request-response protocol. As with a range of other experimental data transfer protocols, GTP uses lightweight UDP (with an additional loss retransmission mechanism) for bulk data transfer and a TCP connection for exchanging control information reliably. The sender-side design is simple: send the requested data to the receiver at the receiver-specified rate (if that rate can be achieved by the sender). Most of the management is at the receiver side, including a Single Flow Controller (SFC) and Single Flow Monitor (SFM) for each individual flow, and a Capacity Estimator (CE) and Max-min Fairness Scheduler (MFS) for centralized control across flows. The GTP protocol architecture at the receiver side is depicted in Figure 4.

Figure 4: GTP framework (receiver). Each GTP flow has its own Single Flow Controller (SFC) and Single Flow Monitor (SFM); a centralized scheduler comprising the Capacity Estimator and Max-min Fairness Scheduler sits between the applications and UDP (data flow) / TCP (control flow) over IP.

3.3 Receiver-oriented Request-Response Model

GTP uses two packet types, data and control packets. Each data packet contains a header (including a per-connection packet sequence number) and a unit data block (UDB). Control packets are used by receivers to send data/rate requests and exchange information with the sender. In the connection setup stage (initiated by either sender or receiver), both ends exchange resource availability, RTT, and sender access bandwidth. The receiver sends data requests and allowed rates to the sender via control packets. Within each data request, one or a range of UDBs can be requested. The receiver may adjust the sender's rate by sending an updated rate request; however, in a smoothly running system, this should rarely happen.

3.4 Rate Control in GTP

There are two levels of rate control at the receiver: per-flow rate control and centralized rate allocation across flows.
At the per-flow level, the Single Flow Controller (SFC) manages the sending of data packet requests and chooses the data-request sending rate for each RTT according to measured flow statistics provided by the Single Flow Monitor (SFM). The goal is to achieve the allocated/target rate set by the central scheduler while avoiding congestion. The SFC also manages receiver buffer requirements by limiting the number of outstanding UDB requests. At the centralized scheduler level, for each control interval (typically several RTTs), the Capacity Estimator (CE) estimates the flow capacity of each individual flow based on the flow statistics provided by the SFM. Flow statistics include the initially allocated rate, the achieved transmission rate in the past control interval, the packet loss rate, an updated RTT estimate, etc. Based on a receiver's set of flow statistics, the central scheduler allocates receiver bandwidth to each flow and updates each flow's target rate. This bandwidth allocation algorithm achieves max-min fairness across flows. The updated allowed (target) rate for each flow is then fed to each SFC. For more detail on the flow control and centralized scheduling schemes, see Section 4.

3.5 Reliable Transmission

Unlike sender-centric protocols, GTP senders are not responsible for loss retransmission. Lost UDBs are requested again by the receiver. Because packet delivery is expected to be in-order, we employ a per-connection data packet sequence number (embedded in the header of each data packet) to diagnose packet loss, as well as to calculate transmission and loss rates. If needed, GTP can be augmented to handle out-of-order delivery.

3.6 Interaction with TCP

As previously mentioned, TCP is not efficient on links with high delay-bandwidth products. To perform well on such links, GTP is much more aggressive than TCP. When considering the coexistence of GTP and TCP, there are two mechanisms which support graceful interaction. GTP may adjust its total allocatable bandwidth (distributed to flows by the centralized rate scheduler) to reserve a certain share of the receiver capacity for TCP and other traffic.
For instance, GTP may be assigned to utilize up to 80% of the bandwidth and leave 20% for TCP flows. In future work, we expect to dynamically adjust the allocatable GTP traffic by monitoring and estimating TCP traffic, so as to achieve max-min fairness for all flows, both TCP and GTP.

4. Flow Control and Rate Allocation

In this section we describe the rate control and allocation mechanisms in GTP. In GTP there are two levels of flow control. First, the SFC conducts per-flow control and adjusts the flow rate each RTT. Second, the central scheduler reallocates rates to each flow for each centralized control interval (typically a couple of times

more than the maximum RTT of the individual flows). GTP employs a centralized scheduling algorithm to solve this flow rate allocation problem, which we formally define as a max-min fair rate allocation problem with flow capacity estimation constraints. The contribution of our scheduling scheme is two-fold. First, by scheduling across multiple connections, we make efficient utilization of receiver bandwidth while keeping packet loss low. Second, the scheme guarantees max-min fairness among flows and achieves system convergence. The necessary notation is defined in Table 1.

Table 1: Notation
  N           Total number of flows
  C_r         Receiver access link capacity
  T           Centralized control interval
  R_i,App     Expected (satisfactory) rate of flow i specified by the application
  R_i,Send    Max. possible bandwidth allocated by the sender for flow i
  r_i,target  Target rate (set by the centralized scheduler) of flow i
  r_i,req     Rate requested by the receiver for flow i
  r_i,curr    Received rate of flow i observed by the receiver over T
  r_i,loss    Loss rate of flow i observed by the receiver over T
  r_i,total   Observed total rate of flow i over T: r_i,total = r_i,curr + r_i,loss
  loss_i      Loss ratio of flow i: loss_i = r_i,loss / r_i,total
  C_i         Link capacity of flow i
  RTT_i       Round-trip time of flow i
  RTT_max     Maximum RTT among all the flows
  r̂_i         Estimated capacity of flow i
  Δr̂_i        Allowable increment of r̂_i

4.1 Single Flow Controller (SFC)

The SFC has two functions. First, it provides per-flow data packet request management and limits the number of outstanding data requests. This limits the usage of receiver buffers for each flow and prevents receiver flooding when there is congestion. Second, the SFC provides per-flow rate adaptation in response to the loss rate: it updates the flow rate and sends a new rate request to the sender every RTT. This enables a response to any congestion while trying to achieve the allocated rate set by the central scheduler. The SFC uses a loss-proportional-decrease and proportional-increase scheme for rate adaptation, which works as follows. For each RTT, based on the current flow loss ratio loss_i, the SFC decreases the requested rate in proportion to this loss ratio.
We also set an upper bound (12.5%) on the decrease in rate. If there is no packet loss, the SFC proportionally increases the requested rate by a small step size (2% per RTT); however, the new rate should be no more than the allocated/target rate set by the central scheduler. We define the per-flow rate update rule as follows:

  r_i,req = r_i,req * (1 - min{0.5 * loss_i, 0.125})   if loss_i > 0;
  r_i,req = min{r_i,req * (1 + 0.02), r_i,target}       if loss_i = 0.

We are also exploring an efficient delay-based single-flow rate control mechanism, which will be reflected in future work.

4.2 Flow Capacity Estimator (CE)

The Capacity Estimator (CE) provides an estimate of the achievable transmission rate of each flow based on its history as provided by the SFM. At the end of each centralized control interval (by default, three times RTT_max), the CE estimates the capacity of each flow, which is used as the upper bound for that flow during the centralized rate allocation phase. The desired rate estimation scheme needs two characteristics. When there is continuously no packet loss and the achieved throughput is close to the target rate, the CE needs to increase the flow's estimate faster. When a flow incurs packet loss, the estimated rate should be reduced according to the loss ratio. We use an Exponential Increment and Loss Proportional Decrement (EIPD) scheme for estimation. The idea behind the increment/decrement adjustment (Δr̂_i) is shown in Figure 5 and described as follows.

Figure 5: Flow rate estimation scheme. Depending on whether there is packet loss, whether the loss ratio is below a threshold, and whether r_i,curr is close to r_i,target, the estimate is adjusted by exponential increase, proportional increase, or loss-proportional decrease.

First, when there is no packet loss and the achieved rate is close to the previous estimate, we increase the allowable increment of the estimate exponentially:

  If r_i,loss = 0 and r_i,total >= 0.95 * r_i,target, then
    if (Δr̂_i = 0) Δr̂_i = 0.02 * r_i,total; else Δr̂_i = 2 * Δr̂_i.

When there is no packet loss but the achieved rate is not close to the previous estimate, we increase the allowable increment proportionally:

  If r_i,loss = 0 and r_i,total < 0.95 * r_i,target, then Δr̂_i = 0.02 * r_i,total.
When there is packet loss but the loss ratio is below a certain threshold (0.5%), we still increase the estimate:

  If 0 < loss_i < 0.5%, then Δr̂_i = 0.02 * r_i,total.

Otherwise (the loss ratio is 0.5% or higher), we reduce the estimate in proportion to the packet loss, bounded by 2%; note that Δr̂_i is negative in this case:

  If loss_i >= 0.5%, then Δr̂_i = - min{0.5 * loss_i, 0.02} * r_i,total.
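The two adaptation rules above can be sketched in code as follows. The constants (0.125, 0.02, 0.95, 0.5%) follow our reading of the (garbled) source text and should be treated as that, not as authoritative protocol values:

```python
# Sketch of GTP's two rate-adaptation rules as reconstructed above;
# constants are our reading of the source, rates are in Mbps.

def sfc_update(r_req, r_target, loss_ratio):
    """Per-flow SFC rule, applied once per RTT: loss-proportional
    decrease (capped at 12.5%), else a 2% increase toward the target."""
    if loss_ratio > 0:
        return r_req * (1 - min(0.5 * loss_ratio, 0.125))
    return min(r_req * 1.02, r_target)

def eipd_increment(prev_delta, r_total, r_target, loss_ratio):
    """EIPD: exponential increment while loss-free and near target,
    proportional increment for no or negligible (<0.5%) loss,
    loss-proportional decrement (capped at 2% of r_total) otherwise."""
    if loss_ratio == 0 and r_total >= 0.95 * r_target:
        return 0.02 * r_total if prev_delta == 0 else 2 * prev_delta
    if loss_ratio < 0.005:
        return 0.02 * r_total
    return -min(0.5 * loss_ratio, 0.02) * r_total

def capacity_estimate(r_total, delta, link_cap, r_app, r_send):
    """r̂_i = min{r_i,total + Δr̂_i, C_i, R_i,App, R_i,Send}."""
    return min(r_total + delta, link_cap, r_app, r_send)

# A loss-free flow below its 1000 Mbps target creeps up by 2%; a flow
# with 10% loss is cut back; a near-target flow doubles its increment.
print(round(sfc_update(800.0, 1000.0, 0.0), 6))    # -> 816.0
print(round(sfc_update(800.0, 1000.0, 0.10), 6))   # -> 760.0
print(eipd_increment(20.0, 960.0, 1000.0, 0.0))    # -> 40.0
```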

After obtaining the allowable increment of the flow rate estimate (Δr̂_i), we set the capacity estimate of each flow as follows:

  r̂_i = min{r_i,total + Δr̂_i, C_i, R_i,App, R_i,Send},

where C_i is the link capacity of flow i, R_i,App is the expected bandwidth requirement specified by the application, and R_i,Send is the allocatable rate specified by the sender, which is optional. In future work, senders may be able to explicitly specify the bandwidth R_i,Send that is available for flow i.

4.3 Max-min Fair Rate Allocation

For centralized scheduling, an important problem is how to allocate the receiver capacity (or its access link bandwidth) to multiple flows (with various RTTs) in a fair manner. Max-min fairness [7] is a widely used criterion for bandwidth sharing along single or multiple bottlenecks. In the single-bottleneck case, under a max-min fair rate allocation, the rate of one flow can be increased only by decreasing the rate of another flow with a lower or equal rate. Different from the standard max-min fairness problem, here we need to take into consideration the estimated flow rate, which provides an upper bound for rate allocation. We formally define this fairness criterion as follows.

Definition: Let C_r be the capacity available to all GTP traffic. A rate allocation (x_1, ..., x_N) under constraint (x̂_1, ..., x̂_N) is feasible if sum_{i=1}^{N} x_i <= C_r and, for every i, x_i <= x̂_i. We call a rate allocation (x_1, ..., x_N) max-min fair if it is impossible to increase the rate of any flow i without losing feasibility or reducing the rate of another flow j with rate x_j <= x_i.

For example, when four flows with different capacities (10, 20, 50, 50) share a link with capacity 100, the max-min fair rate allocation is (10, 20, 35, 35). This example shows that under max-min fairness, flows with lower achievable rates are given higher priority. To achieve max-min fairness, we need to allocate the bandwidth resource C_r to the rate allocation (r_1,target, ..., r_N,target) in a max-min fair way under the constraint (r̂_1, ..., r̂_N). We use a max-min fair rate allocation scheme which gives higher priority to flows with lower estimates.
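A runnable sketch of this allocation (the algorithm of Figure 6), using the example numbers as we read them from the garbled source:

```python
def max_min_allocate(c_r, estimates):
    """Max-min fair allocation under per-flow caps: flows whose estimate
    is below the running fair share are granted their estimate; the
    remaining capacity is split evenly among the rest."""
    n = len(estimates)
    targets = [0.0] * n
    unscheduled = sorted(range(n), key=lambda i: estimates[i])
    c_curr = float(c_r)
    while unscheduled:
        fair_share = c_curr / len(unscheduled)
        j = unscheduled[0]                 # smallest remaining estimate
        if estimates[j] >= fair_share:
            break                          # everyone left gets an even split
        targets[j] = float(estimates[j])
        c_curr -= estimates[j]
        unscheduled.pop(0)
    for i in unscheduled:
        targets[i] = c_curr / len(unscheduled)
    return targets

# Estimates (10, 20, 50, 50) sharing capacity 100 yield the max-min
# fair allocation (10, 20, 35, 35), matching the example in the text.
print(max_min_allocate(100, [10, 20, 50, 50]))   # -> [10.0, 20.0, 35.0, 35.0]
```

Sorting by estimate makes "find the unscheduled flow with minimum estimate" a constant-time step, but the result is the same as the iterative search in Figure 6.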
To be more specific, our scheme schedules the flows with estimates smaller than the fair share first, and then evenly distributes the remaining bandwidth to those with estimates higher than the average fair share. We describe this scheduling algorithm in Figure 6. We will show the effectiveness of our scheme through simulations in the next section; however, we leave a formal proof of the convergence properties of our two-level rate control mechanism to future work. Let C_curr denote the currently available bandwidth and n the number of flows that have been scheduled.

  1  C_curr = C_r; n = 0;
  2  FairShare = C_curr / (N - n);
  3  Find unscheduled flow j with minimum rate estimate r̂_j;
  4  If r̂_j < FairShare then
  5    r_j,target = r̂_j; C_curr = C_curr - r_j,target; n++;
  6    Mark flow j as scheduled;
  7  Repeat from 2 until n = N or r̂_j >= FairShare;
  8  Schedule all flows that are not yet scheduled as:
  9    r_i,target = C_curr / (N - n).

Figure 6: Max-min rate allocation algorithm.

4.4 Transition Management

Transitions happen when new flows join or existing flows terminate. When a new flow starts, we set its estimate to the minimum of its physical link capacity, the application-specified rate, and the sender-specified rate (if any):

  r̂_i = min{C_i, R_i,App, R_i,Send}.

By doing so, we can treat new flows just like old ones and apply the same rate allocation algorithm without any changes. When a flow terminates, we proportionally increase the estimates of all remaining flows. Let C_l denote the aggregate rate of all the flows that finished within the last control interval. Then we increase the capacity estimate of each remaining flow by its proportional share of C_l:

  r̂_i = r̂_i + (r̂_i / sum_j r̂_j) * C_l.

Again, this allows us to use the same centralized scheduling algorithm to conduct rate allocation. Transitions also happen when we adjust flows' rates, and the skew between flows with different RTTs may cause serious endpoint congestion. Consider two flows with RTTs of 50 and 5ms. At the centralized rate allocation stage, we may decrease the rate of flow 1 and increase that of flow 2 for the sake of achieving max-min fairness.
However, since flow 2 has a smaller RTT, it responds to rate changes much faster than flow 1. It may thus happen that the increased flow 2 traffic arrives at the receiver while packets from flow 1 are still arriving at their original, higher rate. This causes congestion at the receiver. To solve this problem, we may introduce a delay for rate adjustment to coordinate among the flows; the delay for flow i is defined as

  Delay_i = RTT_max - RTT_i,

where RTT_max and RTT_i are the maximum round-trip time over all the flows and the round-trip time of flow i, respectively. This additional delay will be fed back

together with the new target rate update to the SFC. Each single flow controller then delays updating to the new target rate accordingly. By doing so, we coordinate the rate update among flows and reduce the receiver-side congestion caused by the transition of rate changes.

5. Experiments

In this section, we explore the performance of GTP through extensive experiments across three experimental environments. First, we run simulations with the packet-level simulator ns2 [12] to study the best-case performance and packet-level dynamics of GTP; ns2 provides ideal network performance in terms of achievable full link bandwidth, and omits end-system overhead. Second, we use a local Dummynet environment (slower end nodes, lossy links with various emulated RTTs) to perform a more realistic set of experiments. Finally, we conduct real implementation measurements comparing GTP with TCP, RBUDP [8], and SABUL [9] on the TeraGrid [13] (fixed RTT, fast end nodes, almost loss-free network links). The multipoint-to-point connection topology is the same as in Figure 3b: multiple senders are connected with the receiver (through a gigabit switch) via dedicated links with the same 1Gbps bandwidth but various delays.

5.1 ns2 Simulation Results

In this subsection we report ns2 simulation results on GTP's dynamics under flow changes, its convergence and fairness properties, and its interaction with TCP.

5.1.1 Dynamics of GTP

First, we illustrate the dynamics of GTP by increasing the number of GTP flows arriving at a receiver. Figure 7 shows the flow rate trajectories when four flows with different RTTs (20, 40, 60, and 80ms) start at times t = 0, 20, 30, and 40 seconds. We see that whenever a new flow starts, each active flow achieves an identical fair share of the bandwidth. To compare with TCP fairness across flows, we let four TCP flows with the same set of RTTs start at time t = 0. Figure 8 shows the trajectories of each flow's throughput, from which we see that TCP flow 1 (with the shortest RTT) achieves much higher throughput than the others.
Besides using the max-min fairness criterion, for multiple competing flows with the same link conditions (in which case each flow must obtain the same rate to achieve fairness), we quantitatively characterize the long-term fairness across multiple flows using a commonly accepted fairness measure, defined as:

  f(x_1, x_2, ..., x_n) = (sum_{i=1}^{n} x_i)^2 / (n * sum_{i=1}^{n} x_i^2),

where the value of f is between 0 and 1, and x_i is the throughput of flow i. The higher the index f, the fairer the rate allocation. For the GTP flows in Figure 7a, the long-term (after flow 4 starts) fairness index is 1. The fairness index of the TCP flows in Figure 8 is only 0.67 (the throughput ratio of the four TCP flows is approximately 6:2:2:1).

Figure 7: Fairness and convergence of GTP in a multipoint-to-point setting. Four GTP flows with RTTs of 20, 40, 60, and 80ms start at times 0, 20, 30, and 40s; (a) shows per-flow throughput, (b) the aggregate link throughput.

Figure 8: Unfairness over four TCP flows with different RTTs (20, 40, 60, and 80ms), all starting at time t = 0.

We now demonstrate the ability of GTP to probe for the remaining bandwidth share while achieving max-min fairness. We let GTP flow 1 (with 20ms RTT) start at time t = 0, and GTP flow 2 start after 20 seconds. Flow 2 is not able to reach the allocated fair share (500Mbps), and only achieves 300Mbps, which may occur if the sender is the bottleneck (e.g. slow disk I/O, or the sender/server serves multiple receivers at the same

time). From Figure 9 we see that flow 2 remains at 300Mbps while flow 1 increases quickly and reaches 700Mbps, which is the remaining bandwidth. This rate allocation is also max-min fair.

Figure 9: GTP's ability to quickly probe the available bandwidth; (a) per-flow and (b) aggregate throughput. GTP flow 2 starts at time t = 20s, and its maximum transmission rate is 300Mbps.

5.1.2 Interaction with TCP

We illustrate the interaction between GTP and TCP by letting three parallel TCP flows compete with one GTP flow. Figure 10a shows the trajectories of the single GTP flow's throughput and the aggregate throughput of the three TCP flows, in which case the GTP capacity upper bound C_r is equal to the full receiver access bandwidth (1000Mbps). Since GTP is very aggressive, it does not let most of the TCP traffic through (see Figure 10a). As suggested in Section 3.6, one possible mechanism for bandwidth sharing between GTP and TCP flows is to limit GTP's total allocatable capacity. Figure 10b shows the result when the GTP flow capacity is set to 850Mbps, where TCP traffic can make use of the remaining bandwidth. As future work, we would like to enable the GTP centralized scheduler to support dynamic resource estimation and allocation for both TCP and GTP traffic.

5.2 Dummynet Emulation Results

In our local cluster environment, we configured one cluster node as a Dummynet router, which routes packets while inducing various delays for different flows. The maximum achieved throughput measured by Iperf [14] on an emulated 1Gbps link with 60ms RTT is 954Mbps, with 0.3% packet loss. The relatively high packet loss is due to the processing limits of the Dummynet router and the end nodes, our 2GHz Xeon machines.

Figure 10: Interaction between GTP and TCP. (a) Three parallel TCP flows (with 10ms RTT) start at t = 0, and a GTP flow joins after 10 seconds with a pre-set maximum bandwidth (1Gbps). (b) The case where GTP's maximum bandwidth utilization is set to 85%.
(c) The aggregate throughput of all TCP and GTP flows in case (b).

We first compare GTP's behavior on Dummynet with the ideal case (provided by the ns2 simulation) and with TCP, in a simple two-senders/one-receiver case. Flow 1 with a 25ms RTT starts at time t = 0, and flow 2 with a 50ms RTT joins at t = 10s. Figure 11 shows the trajectories of these two flows when they are GTP flows on Dummynet, ideal GTP flows, or TCP flows. We see that for both flows, GTP's performance on Dummynet is close to the ideal-case result from the ns2 simulation. The fairness index of the two GTP flows is 0.99 (from 10s to 33s), while the fairness index of the TCP flows is much lower.

Figure 11: Two flows with different RTTs (25 and 50ms), run as (1) ideal (simulated) GTP flows (the ns2 simulation result); (2) real GTP flows on Dummynet; (3) TCP flows (with a tuned large window size). Flow 1 starts at t = 0; flow 2 starts at t = 10s.
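The fairness indices quoted here and in Section 5.1 are computed with the measure defined in Section 5.1.1 (Jain's index); a minimal sketch:

```python
def fairness_index(rates):
    """Jain's fairness index: f = (sum x_i)^2 / (n * sum x_i^2);
    1.0 means a perfectly even allocation."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(x * x for x in rates))

# Equal GTP shares score 1.0; the roughly 6:2:2:1 split reported for
# the four TCP flows in Figure 8 scores about 0.67, matching the text.
print(fairness_index([250, 250, 250, 250]))        # -> 1.0
print(round(fairness_index([6, 2, 2, 1]), 2))      # -> 0.67
```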

Figure 12: Aggregate throughput and average loss rate of parallel GTP flows. The RTT between sender and receiver is around 25 ms. (Panel (a) plots aggregate throughput and panel (b) the loss ratio (%) against the number of flows.)

We now illustrate GTP's performance over different numbers of parallel connections; four cluster nodes are set up as senders and another as the receiver. We vary the number of GTP flows from 1 to 8 (distributed across the four senders) and present the results in Figure 12. We see that GTP maintains a high aggregate throughput as the number of parallel flows increases (Figure 12a). We observe high packet loss in the Dummynet environment (compared with the TeraGrid results in the next subsection), which may be due to the limitations of both the Dummynet router and the end nodes. However, we also notice that the loss rate does not always increase with the number of connections, and is bounded by 3%. This loss rate is much lower than that of other rate-based protocols in the same setting, as documented below.

5.3 TeraGrid Experiments: Comparing Rate-based Protocols

Methodology

In this subsection, we compare GTP with two other point-to-point rate-based high performance data transfer protocols, RBUDP and SABUL, as well as with untuned standard TCP. Reliable Blast UDP (RBUDP) [8] targets reliable data transfer on dedicated or QoS-enabled high speed links. It assumes users have explicit knowledge of the link capacity, requiring the sender to specify an initial rate and to start and maintain its transfer at that rate. SABUL [15] is designed for data-intensive applications in high bandwidth-delay product networks; it starts senders at a fixed high initial rate and adjusts the rate based on experienced loss. The newest version of SABUL (UDT) [9] uses delay-based rate adaptation to reduce the packet loss caused by its aggressiveness. Throughout our experiments, we use the latest available version of each protocol (RBUDP v0.2, SABUL/UDT v1.0, and our GTP prototype). Our experiments are conducted both on Dummynet and on the TeraGrid [13] (including the SDSC and NCSA/UIUC sites).
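Rate-based protocols of this kind sustain a specified rate by pacing: each fixed-size UDP packet is sent a fixed interval after the previous one. The following sketch is illustrative only (the function name and the 1472-byte payload assumption are ours, not taken from the RBUDP or SABUL implementations):

```python
def packet_interval(rate_mbps, payload_bytes=1472):
    """Seconds between packet transmissions needed to sustain the target
    rate, assuming fixed-size UDP payloads (1472 B fills a 1500 B MTU)."""
    bits_per_packet = payload_bytes * 8
    return bits_per_packet / (rate_mbps * 1e6)

# At 900 Mbps a sender must emit a 1472-byte packet roughly every 13
# microseconds, far below typical sleep() granularity, which is why
# high-rate senders usually busy-wait on a fine-grained clock instead.
print(f"{packet_interval(900) * 1e6:.1f} us")
```

This also makes clear why a fixed-rate sender like RBUDP overruns a contended receiver: the interval depends only on the configured rate, not on feedback.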
The achievable bandwidth between SDSC and NCSA on each connection is 1 Gbps (the NIC speed limit). The following performance metrics are used in our experiments:
- Sustained throughput on a 1 GB transfer (point-to-point and multipoint-to-point) and average loss rate;
- Fairness for multipoint-to-point transmission;
- Rate allocation convergence properties.

TeraGrid Results

Scenario 1: Point-to-point: transfer 1 GB of data from SDSC to NCSA (1 Gbps link with 58 ms RTT).

TABLE 2: Point-to-point, from SDSC to NCSA.
                TCP           RBUDP      SABUL      GTP
  Time (s)      -             -          -          -
  Avg. Rate     4.88 Mbps     881 Mbps   898 Mbps   896 Mbps
  Loss Rate     unknown (1)   1.7%       0.1%       0.2%

Our results show that, for single point-to-point high bandwidth-delay product paths, the three rate-based protocols all achieve much higher throughput than traditional TCP while maintaining low loss rates.

Scenario 2: Point-to-point, parallel flows: transfer 1 GB of data from SDSC to NCSA on the same 1 Gbps link with three parallel connections.

TABLE 3: Parallel flows, from SDSC to NCSA.
                         TCP           RBUDP (2)   SABUL   GTP
  Aggregate Rate (Mbps)  -             -           -       -
  Avg. Loss              unknown (1)   2.1%        0.1%    0.3%
  System stability       Yes           Yes         Yes     Yes
  Fairness               Fair          Fair        Fair    Fair

In this scenario, all three rate-based protocols perform well with parallel flows between sender and receiver, and all achieve fairness among flows. RBUDP achieves the highest throughput as well as the highest loss rate. While RBUDP and SABUL achieve slightly higher throughput than GTP through their aggressiveness, they do so at the expense of a much higher packet loss rate. GTP's end-point control scheme achieves high throughput with a low loss rate.

Scenario 3: Multipoint, convergent flows: transfer 1 GB of data; one receiver at SDSC and three senders, two at NCSA and one at SDSC. Each sender-receiver connection has a 1 Gbps dedicated link. In this case (Table 4), all three rate-based protocols achieve high throughput, but the loss rates vary over a 100x range, with GTP having the lowest loss rate by a large margin. GTP also achieves fairness among flows and system stability in this case, while the others are not fair for flows with different RTTs. Note that we assume each RBUDP flow has a capacity estimate of 1 Gbps; the loss rate may be reduced if we set each flow's estimated throughput to less than 1 Gbps (see below).

(1) We are not able to measure the instantaneous TCP loss rate, due to the lack of root privileges on the TeraGrid.
(2) We assume each flow has no knowledge of the others, and starts at the rate of the full bandwidth.

TABLE 4: Multipoint, convergent flows.
                             TCP           RBUDP   SABUL   GTP
  Aggregate Rate (Mbps) (3)  -             -       -       -
  Avg. Loss                  unknown (1)   53.3%   8.7%    0.6%
  System stability           Yes           No      No      Yes
  Fairness                   No            No      No      Yes

Scenario 4: Two senders and one receiver on Dummynet.

TABLE 5: Two points to one (RTTs of 25 and 50 ms), on Dummynet. RBUDP2 is the case where we manually set the transmission rate of each RBUDP flow to half of the link bandwidth.
                                         RBUDP   RBUDP2   SABUL     GTP
  Throughput (Mbps)/loss rate, flow 1    -/-     -/5.2%   -/18.9%   -/2.6%
  Throughput (Mbps)/loss rate, flow 2    -/-     -/-      -/-       -/-
  Aggregate throughput (Mbps)            -       -        -         -

In this last scenario, running on Dummynet where high packet loss occurs, GTP still outperforms the other rate-based protocols. To summarize the TeraGrid evaluation results: for multipoint-to-point data transmission, GTP significantly reduces the packet loss that is generally caused by the aggressiveness of rate-based protocols.

6. Related Work

The earliest examples of receiver-centric reliable rate-based protocols include NETBLT [16] and PROMPT [17]. Recently, the high performance computing community has proposed a range of rate-based point-to-point reliable transport protocols for high bandwidth-delay product networks [8, 9, 15, 18-21]. RBUDP and SABUL are the two with real implementations and measurements on high delay-bandwidth product links. For such links, parallel TCP flows [22, 23] are also used to improve the performance of current TCP. However, the protocol overhead increases with the number of parallel connections, the optimal number of parallel flows is unclear, and parallel connections may be unfair to single TCP connections sharing the same bottleneck. A key aspect of our research focuses on receiver-based flow contention management.
(3) Aggregate rate and loss rate vary for RBUDP and SABUL; the numbers listed are the averages of several measurements.

Receiver-based multipoint-to-point transmission has been proposed for web traffic [24] and for content delivery networks [25]. Several research projects share the same idea of receiver-based management across flows. In [26], a receiver-side integrated congestion management architecture is proposed that targets managing traffic across various protocols for real-time traffic; however, detailed rate allocation schemes and fairness among flows are not considered. In [27], the authors allocate receiver access bandwidth among TCP flows according to their pre-set priorities. In practice, receiver capability may be under-utilized because each TCP flow may not always achieve and maintain its allocated rate, and fairness among flows is not guaranteed under their priority scheduling scheme. Examples of receiver-centric approaches also include [28] for wireless networks. In nearly all cases, these studies focus on networks that are slow relative to the nodes attached to them; in lambda-grids, the situation is the opposite.

Another focus of our work is achieving max-min fairness among flows. Max-min connection fairness has been studied in both the ATM [29, 30] and Internet [31] domains. For fairness criteria other than max-min fairness, we refer to the single-link fairness index [32] and, most recently, proportional fairness [33].

7. Summary and Future Work

Recent advances in DWDM networks have fundamentally changed the communication requirements for future lambda-grids, where there is sufficient network bandwidth but limited end system capacity. This motivates our work of shifting transmission management from the network into the receiver end system. We propose GTP, a group transport protocol, together with a receiver-based rate allocation scheme, to manage multipoint-to-point transmissions. We design a centralized scheduling algorithm that allocates rates to multiple GTP flows with max-min fairness guarantees.
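A max-min fair allocation with per-flow rate caps can be computed by the classic progressive-filling procedure (see Bertsekas and Gallager [7]). The sketch below is ours, for illustration only, and is not the actual GTP scheduler; the 1000 Mbps receiver capacity and the 300 Mbps cap mirror the Figure 9 scenario:

```python
def max_min_allocate(caps, capacity):
    """Max-min fair split of `capacity` among flows whose rates cannot
    exceed their entries in `caps` (progressive filling)."""
    alloc = [0.0] * len(caps)
    active = set(range(len(caps)))
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)
        # Flows whose remaining headroom is below the equal share are
        # frozen at their cap; the leftover is redistributed.
        bounded = [i for i in active if caps[i] - alloc[i] <= share]
        if not bounded:
            for i in active:
                alloc[i] += share
            break
        for i in bounded:
            remaining -= caps[i] - alloc[i]
            alloc[i] = caps[i]
            active.remove(i)
    return alloc

# Figure 9's scenario: 1000 Mbps receiver capacity, flow 2 capped at 300 Mbps.
print(max_min_allocate([1000.0, 300.0], 1000.0))   # -> [700.0, 300.0]
```

Limiting GTP's total allocatable capacity for TCP coexistence, as discussed in Section 5, amounts to calling such an allocator with `capacity` set below the access bandwidth (e.g. 85% of it).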
Early results from ns2 simulation, emulation studies on Dummynet, and real measurements on the TeraGrid show that GTP achieves high throughput and low loss on high bandwidth-delay product links. In addition, the results show that GTP outperforms other point-to-point protocols for multipoint-to-point transmission and achieves fast convergence to a max-min fair rate allocation across multiple flows.

We identify the following future work. First, we are interested in improving the per-flow control scheme to react more efficiently to irregular network traffic (e.g. bursty traffic). Second, we are studying how to introduce TCP traffic management into our centralized control scheme; this includes open questions such as how to estimate a TCP flow's capability and how to

efficiently tune TCP parameters to achieve a target rate. Third, we are working on formal proofs of the convergence properties of our two-level flow control scheme. Finally, we expect to integrate GTP into a Distributed Virtual Computer (DVC) [34], a simple grid program execution environment being developed for lambda-grids as part of the OptIPuter project. Within a DVC, GTP will provide high speed communication and data transfer services to applications.

Acknowledgements

Supported in part by the National Science Foundation under awards NSF EIA (GrADS), NSF Cooperative Agreement ANI (OptIPuter), NSF CCR (VGrADS), NSF NGS-3539, and an NSF Research Infrastructure Grant (EIA). Support from Hewlett-Packard, BigBangwidth, Microsoft, and Intel is also gratefully acknowledged.

8. References

[1] L. Smarr, A. Chien, T. DeFanti, J. Leigh and P. Papadopoulos, The OptIPuter. Communications of the Association for Computing Machinery, 47(11).
[2] CANARIE.
[3] V. Jacobson, Congestion Avoidance and Control. Computer Communication Review, vol. 18, no. 4, August 1988.
[4] L. Brakmo and L. Peterson, TCP Vegas: End to End Congestion Avoidance on a Global Internet. IEEE Journal on Selected Areas in Communications, 13(8).
[5] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, TCP Selective Acknowledgement Options. RFC 2018, Internet Engineering Task Force (IETF), October 1996.
[6] L. Rizzo, Dummynet: a Simple Approach to the Evaluation of Network Protocols. Computer Communication Review, January 1997.
[7] D. P. Bertsekas and R. Gallager, Data Networks, Second Edition. Prentice-Hall, New Jersey, 1992.
[8] E. He, J. Leigh, O. Yu and T. DeFanti, Reliable Blast UDP: Predictable High Performance Bulk Data Transfer. IEEE Cluster Computing, 2002.
[9] Y. Gu, X. Hong, M. Mazzucco and R. L. Grossman, SABUL: A High Performance Data Transfer Protocol. Submitted for publication.
[10] Kazaa.
[11] BitTorrent.
[12] Network Simulator ns2.
[13] D. A. Reed, Grids, the TeraGrid, and Beyond. IEEE Computer, 2003.
[14] Iperf tool.
[15] H. Sivakumar, R. Grossman, M. Mazzucco, Y. Pan and Q. Zhang, Simple Available Bandwidth Utilization Library for High-Speed Wide Area Networks. To appear in Journal of Supercomputing, 2003.
[16] D. D. Clark, M. Lambert and L. Zhang, NETBLT: A High Throughput Transport Protocol. Proceedings of ACM SIGCOMM '88.
[17] T. S. Balraj and Y. Yemini, PROMPT: A Destination Oriented Protocol for High Speed Networks. IFIP WG 6.1/WG 6.4, Palo Alto, CA.
[18] P. M. Dickens and W. Gropp, An Evaluation of a User-Level Data Transfer Mechanism for High Performance Networks. Proceedings of the 12th High Performance Distributed Computing (HPDC-12) conference, 2003.
[19] Tsunami.
[20] A. Feng, A. Kapadia, W. Feng and G. Belford, Packet Spacing: An Enabling Mechanism for the Delivery of Multimedia Content.
[21] P. Dickens, FOBS: A Lightweight Communication Protocol for Grid Computing. Proceedings of Euro-Par 2003.
[22] B. Allcock, J. Bester, J. Bresnahan, A. L. Chervenak, I. Foster, C. Kesselman, S. Meder, V. Nefedova, D. Quesnel and S. Tuecke, Data management and transfer in high-performance computational grid environments. Parallel Computing, 28(5).
[23] H. Sivakumar, S. Bailey and R. L. Grossman, PSockets: The Case for Application-level Network Striping for Data Intensive Applications using High Speed Wide Area Networks. Proceedings of Supercomputing 2000.
[24] R. Gupta, M. Chen, S. McCanne and J. Walrand, WebTP: A Receiver-Driven Web Transport Protocol.
[25] P. Rodriguez and E. W. Biersack, Dynamic Parallel Access to Replicated Content in the Internet. IEEE/ACM Transactions on Networking, 10(4), 2002.
[26] H. Balakrishnan, H. S. Rahul and S. Seshan, An Integrated Congestion Management Architecture for Internet Hosts. Proceedings of ACM SIGCOMM 1999.
[27] P. Mehra, A. Zakhor and C. De Vleeschouwer, Receiver-Driven Bandwidth Sharing for TCP. Proceedings of IEEE INFOCOM 2003.
[28] H.-Y. Hsieh, K.-H. Kim, Y. Zhu and R. Sivakumar, A receiver-centric transport protocol for mobile hosts with heterogeneous wireless interfaces. Proceedings of the 9th Annual International Conference on Mobile Computing and Networking, 2003.
[29] Traffic Management Specification, Version 4.1. ATM Forum, af-tm-0121.000.
[30] A. Charny, D. D. Clark and R. Jain, Congestion Control with Explicit Rate Indication. Proceedings of ICC '95, June 1995.
[31] P. Karbhari, E. Zegura and M. Ammar, Multipoint-to-Point Session Fairness in the Internet. Proceedings of IEEE INFOCOM 2003.
[32] D.-M. Chiu and R. Jain, Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks. Journal of Computer Networks and ISDN Systems, 17(1).
[33] F. Kelly, A. Maulloo and D. Tan, Rate Control for Communication Networks: Shadow Prices, Proportional Fairness and Stability. Journal of the Operational Research Society, 49, 1998.
[34] N. Taesombut and A. A. Chien, Distributed Virtual Computer (DVC): Simplifying the Development of High Performance Grid Applications. To appear in Proceedings of the Workshop on Grids and Advanced Networks (GAN '04), 2004.


More information

Determining the Optimal Bandwidth Based on Multi-criterion Fusion

Determining the Optimal Bandwidth Based on Multi-criterion Fusion Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn

More information

ABRC: An End-to-End Rate Adaptation Scheme for Multimedia Streaming over Wireless LAN*

ABRC: An End-to-End Rate Adaptation Scheme for Multimedia Streaming over Wireless LAN* ARC: An End-to-End Rate Adaptaton Scheme for Multmeda Streamng over Wreless LAN We Wang Soung C Lew Jack Y Lee Department of Informaton Engneerng he Chnese Unversty of Hong Kong Shatn N Hong Kong {wwang2

More information

Evaluation of an Enhanced Scheme for High-level Nested Network Mobility

Evaluation of an Enhanced Scheme for High-level Nested Network Mobility IJCSNS Internatonal Journal of Computer Scence and Network Securty, VOL.15 No.10, October 2015 1 Evaluaton of an Enhanced Scheme for Hgh-level Nested Network Moblty Mohammed Babker Al Mohammed, Asha Hassan.

More information

A STUDY ON THE PERFORMANCE OF TRANSPORT PROTOCOLS COMBINING EXPLICIT ROUTER FEEDBACK WITH WINDOW CONTROL ALGORITHMS AARTHI HARNA TRIVESALOOR NARAYANAN

A STUDY ON THE PERFORMANCE OF TRANSPORT PROTOCOLS COMBINING EXPLICIT ROUTER FEEDBACK WITH WINDOW CONTROL ALGORITHMS AARTHI HARNA TRIVESALOOR NARAYANAN A STUDY ON THE PERFORMANCE OF TRANSPORT PROTOCOLS COMBINING EXPLICIT ROUTER FEEDBACK WITH WINDOW CONTROL ALGORITHMS By AARTHI HARNA TRIVESALOOR NARAYANAN Master of Scence n Computer Scence Oklahoma State

More information

Overview. Basic Setup [9] Motivation and Tasks. Modularization 2008/2/20 IMPROVED COVERAGE CONTROL USING ONLY LOCAL INFORMATION

Overview. Basic Setup [9] Motivation and Tasks. Modularization 2008/2/20 IMPROVED COVERAGE CONTROL USING ONLY LOCAL INFORMATION Overvew 2 IMPROVED COVERAGE CONTROL USING ONLY LOCAL INFORMATION Introducton Mult- Smulator MASIM Theoretcal Work and Smulaton Results Concluson Jay Wagenpfel, Adran Trachte Motvaton and Tasks Basc Setup

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

HIERARCHICAL SCHEDULING WITH ADAPTIVE WEIGHTS FOR W-ATM *

HIERARCHICAL SCHEDULING WITH ADAPTIVE WEIGHTS FOR W-ATM * Copyrght Notce c 1999 IEEE. Personal use of ths materal s permtted. However, permsson to reprnt/republsh ths materal for advertsng or promotonal purposes or for creatng new collectve wors for resale or

More information

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices Steps for Computng the Dssmlarty, Entropy, Herfndahl-Hrschman and Accessblty (Gravty wth Competton) Indces I. Dssmlarty Index Measurement: The followng formula can be used to measure the evenness between

More information

A High-Performance Router: Using Fair-Dropping Policy

A High-Performance Router: Using Fair-Dropping Policy Internatonal Journal of Computer Scence and Telecommuncatons [Volume 5, Issue 4, Aprl 2014] A Hgh-Performance Router: Usng Far-Droppng Polcy ISSN 2047-3338 Seyyed Nasser Seyyed Hashem 1, Shahram Jamal

More information

Solution Brief: Creating a Secure Base in a Virtual World

Solution Brief: Creating a Secure Base in a Virtual World Soluton Bref: Creatng a Secure Base n a Vrtual World Soluton Bref: Creatng a Secure Base n a Vrtual World Abstract The adopton rate of Vrtual Machnes has exploded at most organzatons, drven by the mproved

More information

Burst Round Robin as a Proportional-Share Scheduling Algorithm

Burst Round Robin as a Proportional-Share Scheduling Algorithm Burst Round Robn as a Proportonal-Share Schedulng Algorthm Tarek Helmy * Abdelkader Dekdouk ** * College of Computer Scence & Engneerng, Kng Fahd Unversty of Petroleum and Mnerals, Dhahran 31261, Saud

More information

A Model Based on Multi-agent for Dynamic Bandwidth Allocation in Networks Guang LU, Jian-Wen QI

A Model Based on Multi-agent for Dynamic Bandwidth Allocation in Networks Guang LU, Jian-Wen QI 216 Jont Internatonal Conference on Artfcal Intellgence and Computer Engneerng (AICE 216) and Internatonal Conference on etwork and Communcaton Securty (CS 216) ISB: 978-1-6595-362-5 A Model Based on Mult-agent

More information

Research Article A Novel Adaptation Method for HTTP Streaming of VBR Videos over Mobile Networks

Research Article A Novel Adaptation Method for HTTP Streaming of VBR Videos over Mobile Networks Moble Informaton Systems Volume 216, Artcle ID 29285, 11 pages http://dx.do.org/1.1155/216/29285 Research Artcle A Novel Adaptaton Method for HTTP Streamng of VBR Vdeos over Moble Networks Hung T. Le,

More information

Harfoush, Bestavros, and Byers, Robust Identfcaton of Shared Losses Usng End-to-End Uncast Probes 2 Introducton One of the defnng prncples of the netw

Harfoush, Bestavros, and Byers, Robust Identfcaton of Shared Losses Usng End-to-End Uncast Probes 2 Introducton One of the defnng prncples of the netw Robust Identfcaton of Shared Losses Usng End-to-End Uncast Probes Λ Khaled Harfoush Azer Bestavros John Byers harfoush@cs.bu.edu best@cs.bu.edu byers@cs.bu.edu Computer Scence Department Boston Unversty

More information

Channel-Quality Dependent Earliest Deadline Due Fair Scheduling Schemes for Wireless Multimedia Networks

Channel-Quality Dependent Earliest Deadline Due Fair Scheduling Schemes for Wireless Multimedia Networks Channel-Qualty Dependent Earlest Deadlne Due Far Schedulng Schemes for Wreless Multmeda Networks Ahmed K. F. Khattab Khaled M. F. Elsayed ahmedkhattab@eng.cu.edu.eg khaled@eee.org Department of Electroncs

More information

Intelligent Traffic Conditioners for Assured Forwarding Based Differentiated Services Networks 1

Intelligent Traffic Conditioners for Assured Forwarding Based Differentiated Services Networks 1 Intellgent Traffc Condtoners for Assured Forwardng Based Dfferentated Servces Networks B. Nandy, N. Seddgh, P. Peda, J. Ethrdge Nortel Networks, Ottawa, Canada Emal:{bnandy, nseddgh, ppeda, jethrdg}@nortelnetworks.com

More information

Delay Variation Optimized Traffic Allocation Based on Network Calculus for Multi-path Routing in Wireless Mesh Networks

Delay Variation Optimized Traffic Allocation Based on Network Calculus for Multi-path Routing in Wireless Mesh Networks Appl. Math. Inf. Sc. 7, No. 2L, 467-474 2013) 467 Appled Mathematcs & Informaton Scences An Internatonal Journal http://dx.do.org/10.12785/ams/072l13 Delay Varaton Optmzed Traffc Allocaton Based on Network

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

Concurrent Apriori Data Mining Algorithms

Concurrent Apriori Data Mining Algorithms Concurrent Apror Data Mnng Algorthms Vassl Halatchev Department of Electrcal Engneerng and Computer Scence York Unversty, Toronto October 8, 2015 Outlne Why t s mportant Introducton to Assocaton Rule Mnng

More information

Constructing Minimum Connected Dominating Set: Algorithmic approach

Constructing Minimum Connected Dominating Set: Algorithmic approach Constructng Mnmum Connected Domnatng Set: Algorthmc approach G.N. Puroht and Usha Sharma Centre for Mathematcal Scences, Banasthal Unversty, Rajasthan 304022 usha.sharma94@yahoo.com Abstract: Connected

More information

Game Based Virtual Bandwidth Allocation for Virtual Networks in Data Centers

Game Based Virtual Bandwidth Allocation for Virtual Networks in Data Centers Avaable onlne at www.scencedrect.com Proceda Engneerng 23 (20) 780 785 Power Electroncs and Engneerng Applcaton, 20 Game Based Vrtual Bandwdth Allocaton for Vrtual Networks n Data Centers Cu-rong Wang,

More information

BAIMD: A Responsive Rate Control for TCP over Optical Burst Switched (OBS) Networks

BAIMD: A Responsive Rate Control for TCP over Optical Burst Switched (OBS) Networks AIMD: A Responsve Rate Control for TCP over Optcal urst Swtched (OS) Networks asem Shhada 1, Pn-Han Ho 1,2, Fen Hou 2 School of Computer Scence 1, Electrcal & Computer Engneerng 2 U. of Waterloo, Waterloo,

More information

Resource and Virtual Function Status Monitoring in Network Function Virtualization Environment

Resource and Virtual Function Status Monitoring in Network Function Virtualization Environment Journal of Physcs: Conference Seres PAPER OPEN ACCESS Resource and Vrtual Functon Status Montorng n Network Functon Vrtualzaton Envronment To cte ths artcle: MS Ha et al 2018 J. Phys.: Conf. Ser. 1087

More information

AADL : about scheduling analysis

AADL : about scheduling analysis AADL : about schedulng analyss Schedulng analyss, what s t? Embedded real-tme crtcal systems have temporal constrants to meet (e.g. deadlne). Many systems are bult wth operatng systems provdng multtaskng

More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

Dynamic Bandwidth Provisioning with Fairness and Revenue Considerations for Broadband Wireless Communication

Dynamic Bandwidth Provisioning with Fairness and Revenue Considerations for Broadband Wireless Communication Ths full text paper was peer revewed at the drecton of IEEE Communcatons Socety subject matter experts for publcaton n the ICC 008 proceedngs. Dynamc Bandwdth Provsonng wth Farness and Revenue Consderatons

More information

Experimentations with TCP Selective Acknowledgment

Experimentations with TCP Selective Acknowledgment Expermentatons wth TCP Selectve Acknowledgment Renaud Bruyeron, Bruno Hemon, Lxa Zhang UCLA Computer Scence Department {bruyeron, bruno, lxa}@cs.ucla.edu Abstract Ths paper reports our expermentaton results

More information

Instantaneous Fairness of TCP in Heterogeneous Traffic Wireless LAN Environments

Instantaneous Fairness of TCP in Heterogeneous Traffic Wireless LAN Environments KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS VOL. 10, NO. 8, Aug. 2016 3753 Copyrght c2016 KSII Instantaneous Farness of TCP n Heterogeneous Traffc Wreless LAN Envronments Young-Jn Jung 1 and

More information

Efficient QoS Provisioning at the MAC Layer in Heterogeneous Wireless Sensor Networks

Efficient QoS Provisioning at the MAC Layer in Heterogeneous Wireless Sensor Networks Effcent QoS Provsonng at the MAC Layer n Heterogeneous Wreless Sensor Networks M.Soul a,, A.Bouabdallah a, A.E.Kamal b a UMR CNRS 7253 HeuDaSyC, Unversté de Technologe de Compègne, Compègne Cedex F-625,

More information

WIRELESS communication technology has gained widespread

WIRELESS communication technology has gained widespread 616 IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 4, NO. 6, NOVEMBER/DECEMBER 2005 Dstrbuted Far Schedulng n a Wreless LAN Ntn Vadya, Senor Member, IEEE, Anurag Dugar, Seema Gupta, and Paramvr Bahl, Senor

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications Effcent Loa-Balance IP Routng Scheme Base on Shortest Paths n Hose Moel E Ok May 28, 2009 The Unversty of Electro-Communcatons Ok Lab. Semnar, May 28, 2009 1 Outlne Backgroun on IP routng IP routng strategy

More information

Adaptive Network Resource Management in IEEE Wireless Random Access MAC

Adaptive Network Resource Management in IEEE Wireless Random Access MAC Adaptve Network Resource Management n IEEE 802.11 Wreless Random Access MAC Hao Wang, Changcheng Huang, James Yan Department of Systems and Computer Engneerng Carleton Unversty, Ottawa, ON, Canada Abstract

More information

DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT

DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT Bran J. Wolf, Joseph L. Hammond, and Harlan B. Russell Dept. of Electrcal and Computer Engneerng, Clemson Unversty,

More information

ELEC 377 Operating Systems. Week 6 Class 3

ELEC 377 Operating Systems. Week 6 Class 3 ELEC 377 Operatng Systems Week 6 Class 3 Last Class Memory Management Memory Pagng Pagng Structure ELEC 377 Operatng Systems Today Pagng Szes Vrtual Memory Concept Demand Pagng ELEC 377 Operatng Systems

More information

Regional Load Balancing Circuitous Bandwidth Allocation Method Based on Dynamic Auction Mechanism

Regional Load Balancing Circuitous Bandwidth Allocation Method Based on Dynamic Auction Mechanism ATEC Web of Conferences 76, (8) IFID 8 https://do.org/./matecconf/876 Regonal Load Balancng Crcutous Bandwdth Allocaton ethod Based on Dynamc Aucton echansm Wang Chao, Zhang Dalong, Ran Xaomn atonal Dgtal

More information

Why Congestion Control. Congestion Control and Active Queue Management. TCP Congestion Control Behavior. Generic TCP CC Behavior: Additive Increase

Why Congestion Control. Congestion Control and Active Queue Management. TCP Congestion Control Behavior. Generic TCP CC Behavior: Additive Increase Congeston Control and Actve Queue Management Congeston Control, Effcency and Farness Analyss of TCP Congeston Control A smple TCP throughput formula RED and Actve Queue Management How RED works Flud model

More information

A KIND OF ROUTING MODEL IN PEER-TO-PEER NETWORK BASED ON SUCCESSFUL ACCESSING RATE

A KIND OF ROUTING MODEL IN PEER-TO-PEER NETWORK BASED ON SUCCESSFUL ACCESSING RATE A KIND OF ROUTING MODEL IN PEER-TO-PEER NETWORK BASED ON SUCCESSFUL ACCESSING RATE 1 TAO LIU, 2 JI-JUN XU 1 College of Informaton Scence and Technology, Zhengzhou Normal Unversty, Chna 2 School of Mathematcs

More information

Online Policies for Opportunistic Virtual MISO Routing in Wireless Ad Hoc Networks

Online Policies for Opportunistic Virtual MISO Routing in Wireless Ad Hoc Networks 12 IEEE Wreless Communcatons and Networkng Conference: Moble and Wreless Networks Onlne Polces for Opportunstc Vrtual MISO Routng n Wreless Ad Hoc Networks Crstano Tapparello, Stefano Tomasn and Mchele

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

Design of the Application-Level Protocol for Synchronized Multimedia Sessions

Design of the Application-Level Protocol for Synchronized Multimedia Sessions Desgn of the Applcaton-Level Protocol for Synchronzed Multmeda Sessons Chun-Chuan Yang Multmeda and Communcatons Laboratory Department of Computer Scence and Informaton Engneerng Natonal Ch Nan Unversty,

More information

Network-Driven Layered Multicast with IPv6

Network-Driven Layered Multicast with IPv6 Network-Drven Layered Multcast wth IPv6 Ho-pong Sze and Soung C. Lew Department of Informaton Engneerng, The Chnese Unversty of Hong Kong, Shatn, N.T., Hong Kong {hpsze8, soung}@e.cuhk.edu.hk Abstract.

More information

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z.

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z. TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS Muradalyev AZ Azerbajan Scentfc-Research and Desgn-Prospectng Insttute of Energetc AZ1012, Ave HZardab-94 E-mal:aydn_murad@yahoocom Importance of

More information