Experimentations with TCP Selective Acknowledgment

Renaud Bruyeron, Bruno Hemon, Lixia Zhang
UCLA Computer Science Department
{bruyeron, bruno, lixia}@cs.ucla.edu

Abstract

This paper reports our experimentation results with TCP Selective Acknowledgments (TCP-Sack), which recently became an Internet Proposed Standard protocol. To understand the performance impact of TCP-Sack deployment, we examined the following issues in this study:

- How much performance improvement TCP-Sack may bring over the current TCP implementation, TCP-Reno. We conducted experiments both on a lab testbed and over the Internet, under various conditions of link delay and losses.
- In particular, how well TCP-Sack may perform over long delay paths that include satellite links.
- What the performance impact of TCP connections with Sack options is on those connections without Sack, if any, when the two types run in parallel.

1 Introduction

Up until now, the common TCP implementation with the latest performance tuning has been TCP-Reno, which uses cumulative acknowledgments. Recently, the TCP Selective Acknowledgment option (TCP-Sack) became an Internet Proposed Standard protocol, described in RFC 2018 [8]. TCP-Sack aims at improving TCP's ability to recover from multiple losses within the same congestion control window. With the current TCP implementation (TCP-Reno), the sender can quickly recover from an isolated packet loss using the fast retransmit and fast recovery mechanisms [13]. When multiple packets are lost, however, the sender times out in most cases and enters the slow-start phase ([7], [13]). To minimize false alarms, the retransmission timeout is often long, and when combined with the following slow-start phase, which reduces the congestion window size to one data segment, it can be particularly damaging to TCP's throughput.

To allow incremental deployment, the two ends of a TCP connection perform a Sack negotiation during the connection setup. A TCP implementation that supports Sack appends a SackOK option to its SYN packet. If both ends advertise the Sack option, then Sack will be used for data transmission in both directions. If either end does not advertise Sack, the other end shall not attempt to use Sack.

To understand how TCP-Sack works, let us look at the receiver's end first. If a packet is dropped, it creates a hole in the receiver's window of incoming data. When the receiver receives the data segment right after the hole, it sends a duplicate ACK for the segment before the hole, as TCP-Reno does. In addition, TCP-Sack also adds to the ACK packet an option field that contains a pair of sequence numbers describing the block of data that was received out of order. Given the maximum size of the TCP option field (40 bytes), an ACK packet can carry at most 4 Sack blocks. This means that 4 holes in the same congestion window can be advertised in one ACK packet.

Practically, because of the increasing use of the timestamp option described in [6], the number of Sack blocks carried by each ACK is limited to 3.

A TCP sender uses the Sack options to build a table of all the correctly received data segments, so it knows exactly which parts are missing and can retransmit them together during a recovery phase. In TCP-Reno, a duplicate ACK means that the receiver received data out of order and is sending ACKs for the left edge of the window. In TCP-Sack, a duplicate ACK carries the same information, but in addition it also carries information on what other data segments have been received out of order. The sender starts the data recovery mechanism after receiving 3 duplicate ACKs (as suggested in the RFC [8]). A sketch of this bookkeeping appears at the end of this section.

1.1 Previous work

TCP Sack was first described in RFC 1072, and became an Internet Proposed Standard protocol in RFC 2018. In [4], Sally Floyd addressed several issues in the behavior and performance of TCP-Sack. In [3], Kevin Fall and Sally Floyd used simulation to compare the performance of TCP-Tahoe, TCP-Reno, and TCP-Sack. Our work on TCP-Sack is an extension of the above: we performed real experiments on our testbed and over the Internet to verify and confirm previous results.

1.2 Performance Issues

Our first goal was to measure the throughput improvement TCP-Sack may bring over TCP-Reno. We conducted test runs over both our own testbed in UCLA's Internet Research Lab and two multi-hop paths over the Internet. Our second goal was to evaluate the benefits of TCP-Sack over long delay links such as a satellite link (typically with a round-trip time of 500 ms). In the case of long delay links, the time-out and slow-start phases prove especially harmful to TCP's throughput. Our aim was to measure TCP-Sack throughput under controlled packet losses, and compare the results with the throughput of TCP-Reno, as well as with the theoretical maximum achievable throughput. Our last, but not least, goal was to understand the impact of TCP-Sack on competing TCP connections without Sack. Since TCP-Sack connections achieve higher throughput than TCP-Reno connections, one concern is whether TCP-Sack makes TCP-Reno connections starve when the latter compete for bandwidth against the former.

1.3 Implementation of TCP-Sack

We used a TCP-Sack implementation by Elliot Yan of USC, which in turn is ported to FreeBSD from PSC's Sack code for NetBSD, which itself is a port of Sally Floyd's Sack code for the NS simulator. Yan's implementation contains several flavors of TCP that can be selected for each connection at run-time using a socket option, making it easy to run TCP connections with or without Sack at the same time on the same host.

A certain coupling and interaction exist between TCP's congestion control algorithms and the Selective Acknowledgment processing and generation, which must be compliant with the specification [8]. How much of the current congestion algorithms should be altered to benefit from Selective Acknowledgments remains an open research question. Several modifications to TCP's congestion control algorithm have been proposed recently ([9], [2], [1]). The congestion control algorithm in the FreeBSD implementation is described in [3]. It features the 'pipe' algorithm from Sally Floyd's NS code, basically an adaptation of TCP-Reno to the new Sack capability; therefore, it is considered fairly conservative. More aggressive congestion control algorithms have been proposed, most notably Forward Acknowledgment in [9] (which is explicitly designed to work together with Sack) and TCP-Vegas ([2]). In this study, we used TCP-Sack with the pipe algorithm.
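To make the scoreboard bookkeeping above concrete, here is a minimal sketch of how a sender can merge the Sack blocks it receives and derive the holes to retransmit. This is our illustration of the mechanism described in [8], not code from the FreeBSD implementation; the class, its interface and the toy sequence numbers are assumptions.

    # Toy model of the sender-side Sack scoreboard described above.
    # An illustrative sketch, not the FreeBSD/PSC implementation.

    class SackScoreboard:
        def __init__(self):
            self.sacked = []  # disjoint, sorted [start, end) blocks reported by the receiver

        def record(self, block):
            """Merge one (start, end) Sack block into the scoreboard."""
            start, end = block
            merged = []
            for s, e in self.sacked:
                if e < start or end < s:          # no overlap: keep as-is
                    merged.append((s, e))
                else:                             # overlap: grow the new block
                    start, end = min(start, s), max(end, e)
            merged.append((start, end))
            self.sacked = sorted(merged)

        def holes(self, snd_una, snd_nxt):
            """Sequence ranges in [snd_una, snd_nxt) never reported as received."""
            missing, cursor = [], snd_una
            for s, e in self.sacked:
                if s > cursor:
                    missing.append((cursor, s))
                cursor = max(cursor, e)
            if cursor < snd_nxt:
                missing.append((cursor, snd_nxt))
            return missing

    # After 3 duplicate ACKs carrying Sack blocks, the sender can retransmit
    # every hole at once instead of one segment per round trip:
    board = SackScoreboard()
    for blk in [(3000, 4000), (5000, 6000)]:      # blocks received out of order
        board.record(blk)
    print(board.holes(snd_una=2000, snd_nxt=6000))  # -> [(2000, 3000), (4000, 5000)]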

[Figure 1: Testbed - Tunneling to emulate delay. Two FreeBSD hosts (tristan.cs.ucla.edu and morgana.cs.ucla.edu) run TCP/IP over /dev/tun tunneling processes, connected through the router delaying code on boris.cs.ucla.edu.]

1.4 Testbed

The testbed consisted of two PCs running FreeBSD connected by a Sun UltraSparc workstation which behaves as a controllable IP router that can introduce defined link delay and packet loss patterns. To avoid modifying the Solaris kernel, we implemented the above router functions in user space on the UltraSparc, and to make this user-level manipulation transparent to the end-to-end TCP connections we used IP-in-IP tunneling. The FreeBSD kernel supports a generic tunneling interface, which is used in our testbed implementation to build an IP tunnel between the UltraSparc router and each of the two end PCs.

IP Tunneling in FreeBSD

The tunnel device in FreeBSD is a pseudo-device that can be bound to a source and a destination address (see Fig. 2). When IP sends a packet to the destination address, the kernel captures the packet and hands it to the tunnel device instead of the network interface. The packets are queued in the tunnel device, and can be dequeued by a user process. Similarly, a user process can write a packet into the tunnel device. If the packet bears the source address of the tunnel, the kernel will dequeue it and move it to the IP layer, as if the packet were an incoming packet.

Delay Emulation

Fig. 1 shows how the tunneling comes into play in our testbed. Our user process listens both to the tunnel device and to a prearranged UDP port. Whatever is read from the tunnel device is sent to the router through the UDP socket (IP-in-UDP encapsulation), and whatever comes from the router through the UDP port is written to the tunnel device. The trick is invisible to the end-to-end TCP connections.
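The forwarding loop just described is small enough to sketch. The following is our illustration of the client side, assuming a select()-based loop; the device name, addresses and port numbers are invented for the example and the real testbed code surely differs.

    # Sketch of the client-side tunneling process described above: whatever is
    # read from /dev/tun goes to the router over UDP (IP-in-UDP encapsulation),
    # and whatever arrives from the router is written back to /dev/tun.
    # Device path, addresses and port are illustrative assumptions.

    import os
    import select
    import socket

    TUN_DEV = "/dev/tun0"            # FreeBSD tunnel pseudo-device (assumed name)
    ROUTER = ("10.0.0.1", 5001)      # delaying server on the UltraSparc (assumed)

    def forward_loop():
        tun = os.open(TUN_DEV, os.O_RDWR)        # requires the tun device to exist
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.bind(("0.0.0.0", 5001))
        while True:
            ready, _, _ = select.select([tun, udp], [], [])
            if tun in ready:                     # outbound: tunnel -> router
                packet = os.read(tun, 2048)
                udp.sendto(packet, ROUTER)
            if udp in ready:                     # inbound: router -> tunnel
                packet, _ = udp.recvfrom(2048)
                os.write(tun, packet)            # kernel treats it as an incoming packet

    if __name__ == "__main__":
        forward_loop()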

The client program runs on each of the PCs and can handle several tunnel interfaces, i.e. several tunnel destinations. The server program runs on the UltraSparc and listens to a prearranged UDP port. All incoming packets are queued in a FIFO queue. The delay in the queue is defined by a command-line parameter. We added the possibility to have a separate queue for each direction. We also added a leaky-bucket algorithm to limit the available bandwidth: packets entering the FIFO queue are time-stamped, and the output of the queue is limited to X bytes/sec by waiting P/X seconds before sending the next packet, where P is the size of the previous packet (a minimal sketch of this queue appears at the end of Section 1.5).

Each end host 'sees' a virtual link between the two tunnel endpoints. The characteristics of this virtual link depend on the settings of the FIFO queue in the server process on the router, and on the settings of the user processes on both hosts. The server takes care of the delay and the bandwidth limitations, while the artificial losses were introduced in the user processes.

[Figure 2: Tunneling in FreeBSD - user process above the kernel TCP/UDP/IP stack, the tunnel pseudo-device, and the Ethernet interface]

1.5 Measurements, instrumentation

To gather data on our experiments, we added instrumentation code in the FreeBSD kernel to plot the values of cwnd and ssthresh for a given connection. These plots are very interesting in that they show congestion events and slow-starts. We also patched LBNL's tcpdump to make it compliant with the new specifications of TCP-Sack described in [8] (tcpdump is available at ftp.ee.lbl.gov; our patch can be found at ...). To process the output of tcpdump, we used both Shawn Ostermann's tcptrace [11] and our own AWK scripts.
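The server's per-direction queue combines the fixed delay with the pacing rule described above (wait P/X seconds after sending a packet of size P). Here is a minimal sketch of that queue; the class, its interface and the sample parameters are our assumptions, not the actual server code.

    # Sketch of the delaying server's logic: a FIFO adds a fixed one-way delay,
    # and the leaky bucket paces output to X bytes/sec by waiting P/X seconds
    # after sending a packet of size P. Illustrative only; the real server is
    # a user process on the UltraSparc with one such queue per direction.

    import time
    from collections import deque

    class DelayedLeakyQueue:
        def __init__(self, delay_s, rate_bps):
            self.delay = delay_s          # one-way link delay to emulate
            self.rate = rate_bps          # X, in bytes per second
            self.fifo = deque()           # (arrival timestamp, packet)
            self.next_send = 0.0          # earliest time the bucket allows a send

        def enqueue(self, packet):
            self.fifo.append((time.monotonic(), packet))

        def dequeue_ready(self):
            """Return packets whose delay has elapsed and which the bucket allows."""
            out, now = [], time.monotonic()
            while (self.fifo and now - self.fifo[0][0] >= self.delay
                   and now >= self.next_send):
                _, packet = self.fifo.popleft()
                out.append(packet)
                self.next_send = now + len(packet) / self.rate  # wait P/X before the next one
            return out

    q = DelayedLeakyQueue(delay_s=0.025, rate_bps=250_000)  # 25 ms, 2 Mb/s
    q.enqueue(b"x" * 1400)
    time.sleep(0.03)
    print(len(q.dequeue_ready()))   # -> 1 once the 25 ms delay has elapsed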

1.6 Tests on the real Internet

In order to confirm the results from our testbed, we ran similar experiments on the real Internet. We had two partners for this project, the Pittsburgh Supercomputing Center and NASA's Goddard Space Flight Center. PSC provided us with a NetBSD machine running an experimental kernel with PSC's implementation of TCP-Sack. At NASA, we used a host running an experimental implementation of TCP-Sack on SunOS.

2 Comparison of TCP-Sack and TCP-Reno

In the first phase of our project, we studied the behavior of TCP-Sack and compared it to TCP-Reno. We focused on the congestion window and the throughput. We started with some tests in our lab and then moved on to the Internet to confirm our results.

2.1 Over the Testbed

2.1.1 Description of the tests

During these experiments, we studied the behavior of TCP-Sack or TCP-Reno, alone, facing the same delay, bandwidth and loss conditions. Each test involved an FTP-like transfer of data lasting typically 5 minutes. Using tcpdump and our instrumentation code, we gathered data on the congestion window size and the throughput. The link delay was 25 ms and the bandwidth was limited to 2 Mb/s. We used different packet loss probabilities. The buffer space at the sender and at the receiver was set to allow a maximum window size of 16 KB. The packets were 1400 bytes long. One advantage of using our tunnel was that the conditions were perfectly reproducible between runs.

There were two sets of experiments, with two different loss patterns: isolated drops or bursty drops. For isolated losses, a single packet is dropped with a given probability (between 0% and 9%). For bursty losses, a burst of packets is dropped in a row, with a given burst loss probability. The number of packets in the burst is constant, typically three. Therefore, the packet loss probability is equal to the burst loss probability multiplied by the number of packets in the burst. In the following sections, we will refer to P as the packet loss probability, P' as the burst loss probability, and b as the number of packets in a burst.

2.1.2 Isolated losses

We present in this section typical results from our tests with isolated packet losses. The following table (Fig. 3) shows the throughput of TCP-Reno and TCP-Sack when the packet loss probability is between 0% and 9%.

Figure 3: TCP-Reno and TCP-Sack with isolated drops - separate runs of 5 minutes (50 ms RTT, 2 Mb/s)

             TCP-Reno    TCP-Sack    Sack/Reno
    P = 0%   184 KB/s    184 KB/s    1.00
    P = 1%   132 KB/s    133 KB/s    1.01
    P = 3%   60 KB/s     67 KB/s     1.12
    P = 9%   19 KB/s     19 KB/s     1.00

First, if there is no loss (P=0%), TCP-Reno and TCP-Sack have exactly the same behavior and the same throughput. The congestion window opens during the initial slow-start until it reaches its maximum and then remains there. The throughput of 184 KB/s is the maximum achievable with our test parameters. If the loss probability is low (P=1%) and the losses are isolated, only one packet loss per window is likely to occur. In this case, TCP-Reno recovers using fast retransmit and fast recovery, and TCP-Sack does not bring any improvement.

If the loss probability is P=3%, the probability of having multiple lost packets per window is greater. TCP-Sack can recover most of the time from multiple losses within the same window, while TCP-Reno very often experiences a time-out followed by a slow-start. Fig. 4 and 5 show plots of the congestion window size (cwnd) and the slow-start threshold (ssthresh) for TCP-Reno and TCP-Sack during an interval of 50 seconds in the P=3%, 50 ms RTT and 2 Mb/s case. A slow-start event is characterized by a 'falling edge' on the congestion window plot (cwnd is suddenly set to 1). On Fig. 4 and Fig. 5, each of these 'falling edges' represents a slow-start (this is confirmed by the numeric data from the kernel logs).

[Figure 4: TCP-Reno - Congestion window - P=3%, 50 ms RTT, 2 Mb/s]

[Figure 5: TCP-Sack - Congestion window - P=3%, 50 ms RTT, 2 Mb/s]

When comparing the two plots, one can notice that TCP-Sack does not show as many of these events as TCP-Reno (5 against 16). As a result, the average congestion window size is larger with TCP-Sack and the throughput of TCP-Sack is better. See Fig. 6 for the throughput plot for TCP-Reno and TCP-Sack during this typical experiment. Note that even though the tests were run one after another, the plots are on the same figure. We see in this case that TCP-Sack improved the throughput by 12%.

[Figure 6: Compared throughput of TCP-Sack and TCP-Reno - P=3%, 50 ms RTT, 2 Mb/s]

Finally, if the loss probability is P=9%, both TCP-Reno and TCP-Sack show poor performance. The main reason is that, according to RFC 2018 [8], TCP-Sack has to time out when a retransmitted packet is lost again. TCP-Sack will also have to time out if a lot of acknowledgments are lost. Therefore, if the loss probability is high, retransmitted packets get dropped and TCP-Sack cannot avoid a time-out and slow-start. The result is that both TCP-Reno and TCP-Sack have a low throughput in case of high loss probability.
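For reference, the two loss patterns of Section 2.1.1 are simple to express in code. The sketch below is ours (in the actual testbed the drops were injected in the tunneling user processes); the filter interface and the seeded generator, used here to keep runs repeatable, are assumptions.

    # Sketch of the two loss patterns used in Section 2.1: isolated drops with
    # probability P, and drop events with probability P' that remove b packets
    # in a row (so the packet loss rate is b * P'). Interface is an assumption.

    import random

    def isolated_loss_filter(packets, P, seed=0):
        rng = random.Random(seed)             # seeded so a run can be repeated
        return [pkt for pkt in packets if rng.random() >= P]

    def bursty_loss_filter(packets, P_event, b, seed=0):
        rng, kept, to_drop = random.Random(seed), [], 0
        for pkt in packets:
            if to_drop == 0 and rng.random() < P_event:
                to_drop = b                   # a drop event removes b packets in a row
            if to_drop > 0:
                to_drop -= 1
            else:
                kept.append(pkt)
        return kept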

2.1.3 Bursty losses

When congestion occurs in the real Internet, several packets are likely to be lost. Therefore, we decided to repeat the previous experiments with 'bursty losses'. The loss distribution is very simple: a 'drop event' happens with a given probability, and at each drop event a fixed number of packets is dropped. In the following sections, we will write P' for the probability of a drop event and b for the number of packets in a burst. In this section, b = 3. The following table (Fig. 7) shows the throughput of TCP-Reno and TCP-Sack, with b = 3 and P' set to 1%, 2% or 3%.

Figure 7: Comparison of TCP-Reno and TCP-Sack with bursty losses

                     TCP-Reno    TCP-Sack    Sack/Reno
    P' = 1%, b = 3   60 KB/s     98 KB/s     1.63
    P' = 2%, b = 3   29 KB/s     50 KB/s     1.72
    P' = 3%, b = 3   23 KB/s     24 KB/s     1.04

[Figure 8: cwnd and ssthresh for TCP-Reno - P'=1%, b=3, 50 ms RTT, 2 Mb/s]

The first obvious result is that TCP-Sack improved the throughput by 60% to 70% when the drop event probability is 1% or 2%. In the next paragraphs, we review the results for a drop event probability of 1%; the same comments apply for a burst loss probability of 2%. See Fig. 8 and 9 for the cwnd and ssthresh plots for TCP-Reno and TCP-Sack during an interval of 50 seconds, and Fig. 10 for the throughput plots.

We can see on the cwnd plots that the number of time-outs and slow-starts is higher with TCP-Reno (see Section 2.1.2 for what a slow-start looks like on these plots). While TCP-Reno times out and slow-starts often and regularly (approximately 3 times every 5 seconds, Fig. 8), TCP-Sack is able to keep its window open for 30 seconds without any slow-start (Fig. 9).

[Figure 9: cwnd and ssthresh for TCP-Sack - P'=1%, b=3, 50 ms RTT, 2 Mb/s]

Fig. 10 shows the difference in terms of throughput between TCP-Reno and TCP-Sack.

The final throughput of TCP-Sack is 63% higher than the throughput of TCP-Reno.

[Figure 10: Time-sequence diagram - P'=1%, b=3]

We now illustrate the different behaviors of TCP-Reno and TCP-Sack after a drop event (b = 3). Fig. 11 and 12 represent the packet sequence numbers and the acknowledgment sequence numbers with respect to time, for TCP-Reno and TCP-Sack respectively. Both plots represent a time interval of 1.5 seconds centered on the drop event.

For TCP-Reno (Fig. 11), just after the loss, several duplicate acknowledgments are received. After the third duplicate acknowledgment has been received, the first missing packet is retransmitted using the fast retransmit mechanism. After the fast retransmit, if the sending window is not exhausted, more packets can be sent, and we indeed see a new packet being sent. Up to this point, TCP-Reno has fast-retransmitted one missing packet and sent new packets until the sending window is exhausted. But now the sending window is exhausted, no more acknowledgments are received, and some packets are still missing. TCP-Reno has to time out and enter a slow-start phase. The second missing packet is then retransmitted; when an acknowledgment is received, the window size becomes 2 packets and the next two packets are retransmitted: the first one was the third missing packet and the second one an unnecessary retransmission. Once an acknowledgment for all the sent data is received, the window size is three packets and TCP-Reno can send new data. Recovering from this drop event involved 900 ms of idle time and a slow-start.

For TCP-Sack (Fig. 12), after the loss, several duplicate acknowledgments are received. After the third duplicate acknowledgment, the three missing packets are retransmitted together using fast retransmit. After this fast retransmit, if the sending window is not exhausted, more data can be sent; in this case the sending window was exhausted. An acknowledgment for all the sent data is then received and new data can be sent. In this case, no time-out or slow-start occurred: all the data was correctly recovered using fast retransmit.

The two previous scenarios clearly illustrate the benefit of using TCP-Sack when the losses are bursty. We can note the interesting fact that TCP-Reno has the same throughput (60 KB/s) for isolated losses with a packet loss probability of 3% and for bursty losses with a burst loss probability of 1% and a burst size of three packets. This means that for TCP-Reno, three consecutive drops and three separate drops look and feel the same. For TCP-Sack, on the other hand, the size of the drop event does matter to the recovery mechanism.

When the burst loss probability is higher (P'=3%), both TCP-Reno and TCP-Sack have the same poor performance. The reason is the same as in the isolated losses case: when the loss probability is too high, the window size remains small and some time-outs and slow-starts cannot be avoided, even with TCP-Sack.
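The two traces above suggest a crude rule of thumb, sketched below as a toy model rather than a TCP implementation: a cumulative-ACK sender repairs at best one hole per RTT (and, as the Reno trace shows, often times out instead), while a Sack sender repairs every reported hole in a single RTT.

    # Toy model of the recovery scenarios above: given the number of holes in
    # one window, count the round trips of recovery for a Reno-like sender
    # (one hole per RTT at best) and a Sack-like sender (all reported holes at
    # once). This is a deliberate simplification of the traces, not TCP.

    def recovery_rtts(holes, sack):
        if holes == 0:
            return 0
        return 1 if sack else holes   # Reno's best case; in practice it often times out

    for holes in (1, 3):
        print(f"{holes} hole(s): Reno >= {recovery_rtts(holes, False)} RTT, "
              f"Sack ~ {recovery_rtts(holes, True)} RTT")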

[Figure 11: Close-up on a bursty drop event - TCP-Reno - P'=1%, b=3]

2.1.4 Conclusions

We can draw some conclusions from these experiments. First, it was observed that TCP-Sack is useful for a given range of packet loss probabilities. If there is no loss or if the loss probability is very low, TCP-Reno can recover using fast retransmit and fast recovery. If the loss probability is very high, even TCP-Sack is subject to some time-outs and slow-starts and cannot keep a large window. In both cases, there is no real benefit in using TCP-Sack. Then, it was observed that for a drop rate between 2% and 4%, TCP-Sack improved the throughput by a significant amount. This improvement is especially important when the packet losses are bursty. We illustrated in a detailed analysis the difference in behavior of TCP-Reno and TCP-Sack after the loss of a burst of packets. While TCP-Reno can only recover from an isolated loss using fast retransmit, TCP-Sack can recover from a burst loss without any time-out or slow-start. As a result, we observed a throughput improvement of 60% to 70% with TCP-Sack when the packet losses were bursty. The results presented in this section are typical of the results we got during our numerous experiments.

2.2 Experiments over the Internet

[Figure 12: Close-up on a bursty drop event - TCP-Sack - P'=1%, b=3]

There are two main reasons to try to validate our results by running experiments on a real network. First, the topology of our testbed is very simple (one hop): it is interesting and useful to know what the picture is on a multi-hop path. The second reason is that we used fixed loss distributions to get the bottom line. On a real multi-hop path, the congestion patterns are very complex and difficult to model: we want to know what the results are in this case. The main problem with real-life experiments is that the network conditions are not reproducible from one test to another.

2.2.1 Between UCLA and the Pittsburgh SuperComputing Center

In this set of experiments, we used one host in our lab at UCLA and one host at the Pittsburgh SuperComputing Center. The path is 14 hops long, with an average round-trip time of 65 ms and a bottleneck link of 6 Mb/s (we used Van Jacobson's pathchar to measure the path). The packet size is 496 bytes.

The buffer space at the sender and at the receiver was set to 65536 bytes (approximately 132 packets). One important difference with our testbed is that, depending on the time of day, the traffic conditions change dramatically. Therefore, we show here three typical experiments, with typical (average) results. For each of our tests, we established an ftp connection between UCLA and PSC for 2 minutes, first with TCP-Reno and then with TCP-Sack. For each test, three plots were generated. The first plot is the usual time-sequence diagram for TCP-Reno and TCP-Sack; even though the two connections were run one after another, the throughputs of TCP-Reno and TCP-Sack appear on the same graph for easy comparison. The second and third plots show the congestion window size for TCP-Reno and TCP-Sack respectively.

The first experiment (Fig. 13 and 14) was performed at 12 pm (Pacific time), when the Internet traffic is heavy. The second experiment (Fig. 15 and 16) was performed at 4 pm (Pacific time), when the traffic is 'average'. The third test (Fig. 17 and 18) was performed at 8 pm (Pacific time), when the Internet traffic is light.

The first interesting thing to notice is the behavior of the congestion window (Fig. 14, 16 and 18). Every time a time-out followed by a slow-start occurs, the congestion window is reduced to 1 (a falling edge on the diagram). We can estimate the number of time-outs and slow-starts by counting the number of falling edges. For example, we can see on Fig. 14 that TCP-Reno timed out approximately 50 times in 2 minutes while TCP-Sack timed out only twice. In all our tests, TCP-Reno timed out more often than TCP-Sack. Since TCP-Sack is able to avoid some time-outs and slow-starts, it can keep a larger average congestion window. This is especially true when the Internet traffic is heavy. We can see on Fig. 14 and 16 that TCP-Sack timed out respectively 2 and 3 times in 2 minutes, while TCP-Reno timed out 50 and 25 times. The average congestion window is therefore larger with TCP-Sack. As a result, the throughput of TCP-Sack is higher than the throughput of TCP-Reno. When the Internet traffic is light, TCP-Reno can recover from congestion using fast retransmit and fast recovery, and there is less benefit in using TCP-Sack.

Looking at the tcpdump traces of our tests, we were able to figure out what the loss pattern is during the experiments. As we expected, the losses are very bursty. We observed that the average burst length is 20 to 30 packets. The maximum window size is set to 65536 bytes and the packets are 496 bytes long; therefore, the maximum window size is approximately 132 packets. This very bursty loss pattern explains why TCP-Sack is more efficient than TCP-Reno. The following table (Fig. 19) shows the throughput of TCP-Reno and TCP-Sack at these three typical times of the day.

Figure 19: UCLA to PSC - Throughput comparison

                Fig. 13 & 14    Fig. 15 & 16    Fig. 17 & 18
    Reno        63 KB/s         104 KB/s        221 KB/s
    Sack        81 KB/s         132 KB/s        257 KB/s
    Sack/Reno   1.29            1.27            1.16

The first comment is that there is more benefit in using TCP-Sack when the Internet traffic is average or heavy. This was expected, since TCP-Reno is able to recover from one single isolated loss; in this case, TCP-Sack does not do better than TCP-Reno. We found a throughput improvement of 15% with TCP-Sack during a period of light Internet traffic, and an improvement of approximately 30% during periods of average or heavy Internet traffic. We think that such an improvement of 30% is significant.

There are two interesting conclusions to these experiments. First, the bursty loss pattern observed during our tests over the real Internet was what we expected. The behavior of both TCP-Reno and TCP-Sack in the face of congestion was also what we expected: while TCP-Reno had to time out and slow-start after congestion, TCP-Sack was able to recover smoothly. The second interesting point is that we observed a throughput improvement of 30% with TCP-Sack when facing heavy or average Internet traffic.
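Returning to the burst-length measurement mentioned above: once the lost packets in a trace have been identified, measuring burst lengths reduces to grouping consecutive losses. A sketch of that grouping step; identifying losses from tcpdump output is left out, and the input list is made up.

    # Sketch of the burst-length measurement: given the ordinal numbers of the
    # packets that were lost in a trace, group consecutive losses into bursts
    # and average their lengths. Only the grouping step is shown here.

    def burst_lengths(lost_packet_numbers):
        bursts, run, previous = [], 0, None
        for n in sorted(lost_packet_numbers):
            if previous is not None and n == previous + 1:
                run += 1                  # still inside the same burst
            else:
                if run:
                    bursts.append(run)
                run = 1                   # a new burst starts here
            previous = n
        if run:
            bursts.append(run)
        return bursts

    losses = [10, 11, 12, 40, 41, 42, 43, 90]   # invented example input
    b = burst_lengths(losses)                   # -> [3, 4, 1]
    print(b, sum(b) / len(b))                   # bursts and average burst length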

[Figure 13: UCLA to PSC - Heavy Traffic Conditions (12 pm p.t.) - Time-sequence diagram]

[Figure 14: UCLA to PSC - Heavy Traffic Conditions (12 pm p.t.) - cwnd]

[Figure 15: UCLA to PSC - Average Traffic Conditions (4 pm p.t.) - Time-sequence diagram]

[Figure 16: UCLA to PSC - Average Traffic Conditions (4 pm p.t.) - cwnd]

[Figure 17: UCLA to PSC - Light Traffic Conditions (8 pm p.t.) - Time-sequence diagram]

[Figure 18: UCLA to PSC - Light Traffic Conditions (8 pm p.t.) - cwnd]

2.2.2 Between UCLA and NASA's Goddard Space Flight Center

We tried to run the same experiments between UCLA and NASA's Goddard Space Flight Center. Due to technical problems with the experimental SunOS kernel we used at GSFC, we did not apply the same methodology and could not run as many tests between UCLA and the NASA lab as in the previous section. Here, we show some typical results. The path between the two hosts is 16 hops long. Using pathchar, we measured a round-trip time of 80 ms and a bottleneck of 8.4 Mb/s. We could not use our own traffic generator for these experiments; we used instead a regular FTP program to transfer one big file. The file was 29 MB long and the duration of the transfer was between 2 and 3 minutes.

See Fig. 20 and 21 for the results of one typical test. Fig. 20 shows the time-sequence diagram of TCP-Reno and TCP-Sack; the two tests were run one after the other, but we show both results on the same plot. Fig. 21 is a table which compares the throughput of TCP-Reno and TCP-Sack. Because of technical problems, we could not use the instrumentation code to plot cwnd and ssthresh. Still, it is possible to see on the time-sequence diagrams that more time-outs and slow-starts occur with TCP-Reno. A time-out is an accident in the time-sequence diagram, like a bump (see the close-up in Fig. 20). The number of time-outs and slow-starts with TCP-Sack is smaller, and the transfer appears to be much smoother and steadier. We found a throughput improvement of 46% when using TCP-Sack. This improvement is a typical value that was confirmed by our other tests. Almost 50% of improvement for an FTP transfer is very significant.

[Figure 20: NASA to UCLA - 29 MB FTP - close-up on a time-out event]

2.2.3 Conclusion

Thanks to our two partners, the Pittsburgh SuperComputing Center and NASA's Goddard Space Flight Center, we have been able to compare the performance of TCP-Reno and TCP-Sack over a real path on the Internet. All our tests confirmed the same conclusions. We first observed that the real loss patterns are bursty, as we suspected. Then we had confirmation that TCP-Sack is able to avoid most time-outs and slow-starts, keeping a larger congestion window than TCP-Reno. Finally, we observed some important throughput improvement with TCP-Sack, especially when facing heavy traffic. We found that TCP-Sack could improve the throughput by 30% to 45% over two different multi-hop paths over the real Internet. What is interesting is that these experiments were done in real conditions. We believe that such a big throughput improvement meets the expectations placed on TCP-Sack. Our last conclusion is that these experiments confirm the results on our testbed.

Figure 21: NASA to UCLA - Throughput comparison

                Fig. 20
    Reno        183 KB/s
    Sack        268 KB/s
    Sack/Reno   1.46

3 Long delay links

One goal of our project was to study the behavior of TCP-Sack over long delay links, when used together with the Window Scale option defined in [6]. Our focus was mainly on the 500 ms range, since this is a typical value for a geosynchronous satellite. We ran several experiments with different loss rates and different scenarios. We set the buffer sizes to 256 KB on both sides of the virtual link, to take advantage of the large delay-bandwidth product. The bandwidth setting varied between 2 Mb/s and full 10BT bandwidth (10 Mb/s). Note that when limiting the bandwidth using the leaky bucket algorithm, the average delay is impacted by the modified behavior of the FIFO queue in the delaying server (see Section 1.4).

3.1 Preliminary results

Using the full bandwidth available, and with a long RTT, we are able to generate some congestion on the virtual link. Indeed, with buffers set to 256 KB on both PCs and a rate of 10 Mb/s, the link can get congested in a realistic way (several drops per window, bursty drops, etc.). The congestion is caused by buffer overflows on the router (the maximum UDP buffer size on Solaris is limited to 64 KB, hence this problem). The delay was 80 ms, and Sack does 100% better than Reno. The time-sequence diagram can be seen in Fig. 22, and the cwnd and ssthresh plots in Fig. 23. This experiment gives the bottom line for further refinement of the experiments. The striking difference between the congestion behavior of Reno and Sack is very obvious when looking at the time-cwnd diagrams. In this case (80 ms, full 10BT bandwidth, congestion loss), Sack is able to avoid almost all the slow-starts, hence the huge throughput increase.

3.2 Artificial losses

In this experiment, we set the round-trip time to 500 ms and we limited the bandwidth to 2 Mb/s. All the drops are artificially introduced. The drop rate is P' = 0.18% with b = 3. This case is extremely favorable to TCP-Sack. The time-sequence diagram is at Fig. 25.

Figure 24: Throughput - 0.15% drop rate, burstiness 3

    TCP-Reno    32 KB/s
    TCP-Sack    55 KB/s
    Sack/Reno   1.72

[Figure 25: Time-sequence diagram - Sack vs. Reno - Link: 500 ms RTT, 2 Mb/s, 0.18% drop rate, burstiness 3]

3.3 Sack and Reno vs. Theoretical bandwidth

Next, we focused on the link utilization.

[Figure 22: Preliminary test: Sack & Reno with a congestion-like type of loss - time-sequence diagram]

[Figure 23: Preliminary test: time-cwnd and time-ssthresh diagrams]

In [10], Mathis et al. defined a macroscopic behavior of the congestion avoidance algorithm. It starts from the hypothesis that the current congestion algorithm (halving the congestion window when there is a congestion signal, e.g. a drop) is necessary in any TCP implementation. Given this hypothesis, the model gives an ideal throughput that can be achieved under a given (moderate) drop rate and a given delay. The assumptions behind the formula are that the connection does not experience any time-outs, and that the effects of the start-up phase are diluted over a very long connection. The formula for the achievable throughput is

    BW = (MSS / RTT) * (C / sqrt(p))        (1)

where MSS is the segment size, RTT is the round-trip time, and p is the drop rate. C is a constant that depends on the ACK strategy and several other factors.
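Equation (1) is straightforward to evaluate. The sketch below computes the model ceiling for a long-delay setting like the ones in this section; C = sqrt(3/2) is the non-delayed-ACK constant from [10] (also the value used later in this section), and the sample numbers are assumptions for illustration.

    # Evaluating the model of equation (1): BW = (MSS / RTT) * (C / sqrt(p)).
    # C = sqrt(3/2) is the constant [10] derives for non-delayed ACKs; the
    # sample parameter values below are chosen to resemble our settings.

    from math import sqrt

    def mathis_bw(mss_bytes, rtt_s, p, C=sqrt(3.0 / 2.0)):
        """Ideal congestion-avoidance throughput in bytes/sec, no time-outs."""
        return (mss_bytes / rtt_s) * (C / sqrt(p))

    # 1400-byte segments, 500 ms RTT, 0.5% packet loss rate:
    bw = mathis_bw(1400, 0.5, 0.005)
    print(f"{bw / 1024:.0f} KB/s")   # model ceiling; the measured Sack and Reno
                                     # throughputs reach only a fraction of it (Fig. 26)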

As one can see from this equation, the achievable throughput is proportional to the inverse of RTT. That is, the longer the delay, the smaller the throughput. Why? Isn't the wscale option supposed to address the issue of long delays? The reason is that with a long delay, the linear expansion of a large cwnd takes very long. Therefore, reaching the optimum cwnd (i.e. the optimum throughput) takes a long time. This is why one can expect the link utilization to be low for long delays.

We compared the performance of TCP-Sack and TCP-Reno against the model, and the results back the conclusion of [10] that TCP-Sack is indeed closer to the ideal congestion behavior. We also focused on the throughput increase brought by TCP-Sack when the burstiness of the drops is higher than one. In this experiment, we set the link propagation delay to 250 ms (RTT=500 ms). We used large windows (256 KB of available queue length) and artificial losses introduced at the end points. For the model, we took C = sqrt(3/2), since the ACKs in our implementation are not delayed (see [10]).

Figure 26: Sack & Reno vs. Theoretical Throughput

    Drop rate    Sack    Reno
    0.05         73 %    69 %
    ...          ...     77 %
    ...          ...     70.5 %
    ...          ...     66.5 %
    ...          ...     62 %
    ...          ...     73 %
    ...          ...     66 %
    0.5          85 %    68 %
    0.8          86 %    68 %
    ...          ...     64.5 %

In Fig. 26, we show the performance of Sack and Reno compared to the throughput predicted by the model from [10]. 100% would mean that TCP achieved the best possible throughput using the congestion algorithm. The reason why Sack and Reno do not achieve 100% is time-outs and the start-up phase. TCP-Sack is closer to the model because there are fewer time-outs. The plot of this table can be seen in Fig. 28.

Figure 27: Link Utilization for Sack and Reno

    Drop rate    Sack     Reno
    ...          ...      34 %
    ...          ...      25 %
    ...          ...      20.5 %
    ...          ...      17.7 %
    ...          ...      15.6 %
    ...          ...      15 %
    ...          ...      11.7 %
    ...          ...      10.7 %
    ...          ...      8.5 %
    1            9.7 %    7 %

In Fig. 27, one can see the link utilization achieved by TCP-Sack and TCP-Reno under the same conditions. It of course depends strongly on the state of the link, and it is very low. The reason is that every time a drop occurs, cwnd is divided by 2; therefore, at this point, the throughput is divided by 2. Then, linearly expanding cwnd back to a large value takes a long time, because of the delay and because of the large size of the window (see Section 3.4). The key to the better behavior of TCP-Sack is its ability to avoid slow-starts. Slow-starts are especially damaging to the throughput over long-delay links. To time out, TCP-Sack must face heavy congestion: lose a string of ACKs, or lose a retransmitted packet.

3.4 Issue of initial ssthresh setting

When we first had a look at Fig. 28, we did not understand the shape of the plot 'Throughput improvement of Sack over Reno', and especially why the plot for burstiness = 3 has this shape: why is the improvement of Sack at its maximum for p = ...%, and why is it so low for small drop rates?

[Figure 28: TCP-Sack and TCP-Reno against the Congestion Avoidance Model - on the left, how close to the model TCP-Sack and TCP-Reno are; on the right, performance improvement of TCP-Sack over TCP-Reno, as a function of P' and b]

The culprit seems to be the default initial setting of ssthresh when a TCP connection is started. In most implementations of TCP, ssthresh is initially set to the maximum window size. If the wscale option is used, the maximum possible window size is 4 GB. This initial value lets TCP start with a slow-start that can be interrupted only by congestion signals, at which point ssthresh is set to a more reasonable value (cwnd/2 at the time the congestion signal occurs). Unfortunately, if several congestion signals occur, cwnd is divided by two several times. If the congestion signal is very bursty, then ssthresh is likely to be set to a very small value (a few packets). This is extremely costly in the case of long delays (the linear increase of cwnd takes a long time) and large windows. The large delay and high bandwidth cause the slow-start phase to shoot up very high, and then the connection usually experiences a burst of losses, which resets ssthresh to a very low value (sometimes a few packets).
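A toy simulation makes the failure mode visible: with a huge initial ssthresh, slow-start overshoots the pipe, the resulting burst of drops halves cwnd several times, and ssthresh settles at a few packets. The capacity, the burst size and the one-halving-per-drop rule below are simplifying assumptions of ours, not measurements of the testbed.

    # Toy illustration of the start-up problem described above. With ssthresh
    # initially huge, slow-start doubles cwnd until the pipe overflows, and
    # the burst of drops then halves cwnd repeatedly, leaving ssthresh tiny.

    MSS = 1400
    CAPACITY = 130 * MSS            # rough pipe + queue size (assumed)

    def start_up(init_ssthresh, drops_in_burst=4):
        cwnd, ssthresh = MSS, init_ssthresh
        while cwnd < ssthresh and cwnd <= CAPACITY:
            cwnd *= 2               # slow-start: one doubling per RTT
        if cwnd > CAPACITY:         # overshoot: a burst of congestion drops
            for _ in range(drops_in_burst):
                cwnd = max(MSS, cwnd // 2)
            ssthresh = cwnd
        return cwnd, ssthresh

    print(start_up(init_ssthresh=2**30))      # huge default: ssthresh collapses to ~16 MSS
    print(start_up(init_ssthresh=64 * MSS))   # sane initial value: no collapse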

To illustrate this effect, we show in Fig. 29 a plot of cwnd/ssthresh for a connection over the virtual link with the following settings: bandwidth = 5 Mb/s, RTT = 500 ms, buffers = 512 KB, no artificial losses. The drops are caused by congestion at the UDP interface in the router (the UDP buffers on the UltraSparc were set to 64 KB, the maximum value allowed by Solaris).

[Figure 29: Effect of a high initial setting of ssthresh on cwnd expansion over long delay with a large window - cwnd collapses to a very low value after 8s because of a slow-start phase that lasts too long]

On this plot, we see the slow-start phase shoot up very high, then a burst of drops, and ssthresh collapses to a low value (around 15 KB). We can see that the equilibrium for the window size seems to be around 130 KB, and it takes about 85 s for cwnd to expand to this value. Therefore, the start-up behavior of TCP in this case resulted in a poor estimate of ssthresh. This affects both TCP-Sack and TCP-Reno, hence the lower improvements of Sack over Reno with small drop rates. Why does it affect the results with small drop rates? Because with a higher drop rate, an artificial drop is more likely to occur during slow-start and interrupt it before it shoots too high and generates a burst of congestion drops. With higher drop rates, the slow-start phase results in a better estimate of ssthresh. This problem is described in [5], and several solutions are being reviewed.

3.5 Conclusions over long delay performance

The key to high performance when facing high delay-bandwidth product links is the use of the wscale option: being able to 'fill the pipe' during the transfer is what really matters. Yet the long delay makes the slow-start algorithm extremely costly, since it takes several RTTs to get the ACK-based clock running again. What TCP-Sack does much better than TCP-Reno is recover from multiple losses within one window of data. TCP-Reno needs one RTT per packet drop to recover, while TCP-Sack is able to recover from several losses in one single RTT: the most obvious benefit, as one can see in Fig. 23, is that TCP-Sack avoids some of the slow-starts.

4 Negative impact of TCP-Sack on TCP-Reno

TCP-Sack is now a proposed Internet standard and is soon to be deployed over the Internet. This deployment will be incremental. TCP-Sack is expected to be in most cases more efficient than TCP-Reno, and one key issue is to know what the behavior of TCP-Reno will be when it is competing against the more efficient TCP-Sack. It would not be fair to have all the bandwidth taken by TCP-Sack while TCP-Reno connections are starving. The purpose of the last part of our experiments is to study the impact of TCP-Sack on TCP-Reno. In the following tests, we are not interested in the benefit of TCP-Sack but in knowing what happens to a TCP-Reno connection when it is competing against a TCP-Sack connection. One typical test features 2 ftp connections at the same time, one with TCP-Sack and one with TCP-Reno. Then the same experiment is performed with 2 ftp connections both using TCP-Reno. The point is to compare the performance of the lone TCP-Reno of the first test with the two TCP-Renos of the second run.

4.1 Experiments with a few flows

4.1.1 First experiments

This experiment is over our testbed. First, 2 TCP-Reno connections share the link for 5 minutes. Then we run, under the same conditions, 1 TCP-Reno and 1 TCP-Sack connection at the same time. The link delay is set to 20 ms, the burst loss probability is 1% and the burst size is 3 packets (b = 3). We saw earlier that TCP-Sack significantly outperforms TCP-Reno under these conditions. The question is: would the throughput of TCP-Reno drop when it is competing against TCP-Sack? See Fig. 30 for the time-sequence plots of these 2 tests, and the following table (Fig. 31) for a summary of the throughput of each connection.

[Figure 30: link delay=20 ms, burst loss probability=1%, burst length=3 - Comparison of TCP-Sack vs. TCP-Reno and TCP-Reno vs. TCP-Reno]

Figure 31: Throughput table for Fig. 30

    TCP-Reno vs. TCP-Reno     TCP-Sack vs. TCP-Reno
    110 kB/s (Reno)           202 kB/s (Sack)
    101 kB/s (Reno)           95.5 kB/s (Reno)

In the first test, the 2 TCP-Reno connections have approximately the same throughput (10% difference). We will refer to the one with the highest throughput as the best TCP-Reno. The difference may seem significant between two identical algorithms, but it is actually a constant feature of long tests (the odds of an unlucky succession of drop events are greater). What is important is that the slopes of the two plots are equal on average.

We can first notice that the throughput of TCP-Sack in the second test is really higher than the throughput of the best TCP-Reno in the first test: the throughput improvement with TCP-Sack is roughly 84%. But if we compare the throughput of the TCP-Reno with the lowest throughput in the first test with the throughput of TCP-Reno in the second test, we can see that this throughput decreased by only 5.4%. As we said earlier, a 10% difference between two identical algorithms is not significant; therefore, a 5.4% difference here is not significant either. If we compare the aggregated throughput of the 2 connections in the first and second tests, we see that it increased by 41%. That means that TCP-Sack uses the bandwidth more efficiently than TCP-Reno: some bandwidth which was wasted in the first test is now used by TCP-Sack in the second test.

4.1.2 Second example

In this experiment, two TCP-Reno connections start together at t=0s. At t=90s, a third connection starts. At t=270s, the third connection terminates. At t=360s, the first two connections terminate. We repeated this test twice under the same conditions, once with TCP-Reno as the third connection and once with TCP-Sack. The focus is on what happens to the first two connections when a third flow enters the competition for bandwidth. The link delay is set to 50 ms and the bandwidth is set to 10 Mb/s. See Fig. 32 for the time-sequence plots, and Fig. 33 and 34 for the throughput tables.

[Figure 32: Second example (link delay=50 ms, bandwidth=10 Mb/s) - time-sequence diagrams]

Figure 33: Second example - 1st test

              TCP-Reno1    TCP-Reno2    TCP-Reno
    t=90s     190 kB/s     193 kB/s
    t=270s    153 kB/s     162 kB/s     140 kB/s
    t=360s    167 kB/s     166 kB/s

Figure 34: Second example - 2nd test

              TCP-Reno1    TCP-Reno2    TCP-Sack
    t=90s     180 kB/s     204 kB/s
    t=270s    136 kB/s     139 kB/s     222 kB/s
    t=360s    160 kB/s     147 kB/s

The throughput of the third connection, starting at t=90s, improved by 58% with TCP-Sack. But if we compare the throughput of the 2 TCP-Reno connections, it decreased by 11% and 14% at t=270s, and by 4% and 11% at t=360s. If we compare our first and second tests, we can see that TCP-Sack is really more efficient than TCP-Reno. We can also see that TCP-Sack has an impact on the two other TCP-Reno connections: the two TCP-Reno connections are less efficient when they are competing against the more efficient TCP-Sack. But this effect is limited in its magnitude compared to the throughput improvement provided by TCP-Sack.

4.2 Experiments with multiple flows

In Section 4.1, the focus was on experiments with only a few connections. In this part, we focus on the impact of TCP-Sack when there are multiple flows.

4.2.1 Methodology

All the following experiments were performed over our testbed, with a link delay of 10 ms, a link bandwidth of 4 Mb/s, and a burst loss probability of 1%; this is exactly what we need in order to study the impact of TCP-Sack on TCP-Reno. In this experiment, N TCP-Reno connections compete in parallel for 90 seconds. Then, under the same conditions, N - 1 TCP-Reno connections compete with 1 TCP-Sack connection. To compare the two experiments and what their results mean, one can imagine that the 'best' TCP-Reno is replaced by a TCP-Sack.

The meaningful information here is what the impact is on the N - 1 other flows.

4.2.2 Results

The values of N range from 3 to 10. Four useful variables R, r, S and s are used in the computations:

- R is the aggregated throughput of the N connections, in the case (N TCP-Reno).
- r is the aggregated throughput of the N - 1 connections with the lowest throughputs, in the case (N TCP-Reno).
- S is the aggregated throughput of the N connections, in the case (N - 1 TCP-Reno + 1 TCP-Sack).
- s is the aggregated throughput of the N - 1 connections with the lowest throughputs, in the case (N - 1 TCP-Reno + 1 TCP-Sack).

Therefore, R - r is the highest individual throughput in the case (N TCP-Reno), and S - s is the highest individual throughput in the case (N - 1 TCP-Reno + 1 TCP-Sack).

Figure 35: R, r, S, s for N ranging from 3 to 10

    N     R           r           S           s
    3     266 kB/s    172 kB/s    312 kB/s    170 kB/s
    4     319 kB/s    234 kB/s    349 kB/s    234 kB/s
    5     352 kB/s    274 kB/s    393 kB/s    270 kB/s
    6     378 kB/s    311 kB/s    399 kB/s    304 kB/s
    7     400 kB/s    335 kB/s    419 kB/s    324 kB/s
    8     413 kB/s    355 kB/s    420 kB/s    344 kB/s
    9     418 kB/s    365 kB/s    421 kB/s    358 kB/s
    10    419 kB/s    367 kB/s    423 kB/s    362 kB/s

(See Fig. 36 for a plot of R, S, r and s as functions of N.)

[Figure 36: R, r, S, s for N ranging from 3 to 10 (plot)]

4.2.3 Analysis of the results

If we compare S - s to R - r, we can find the throughput improvement for the best TCP-Reno when it is replaced by TCP-Sack: ratio1 = ((S - s) - (R - r)) / (R - r) quantifies this improvement.

If we compare S to R, we can find what is happening to the aggregated throughput of our N connections after changing the best TCP-Reno into TCP-Sack. An interesting variable to measure the gain in aggregated bandwidth is ratio2 = (S - R) / (B - R), with B being the maximum link bandwidth (500 kB/s in our case). B - R is the unused bandwidth when all connections are TCP-Reno, and ratio2 is the proportion of this unused bandwidth which is now used when there is one TCP-Sack.

If we compare s to r, we can find how much the total throughput of the other N - 1 TCP-Renos decreases when the best TCP-Reno is changed into TCP-Sack: ratio3 = (s - r) / r quantifies this negative impact of TCP-Sack.

Figure 37: ratio1, ratio2, ratio3

    N     ratio1    ratio2    ratio3
    3     51.1%     19.7%     -1.16%
    4     35.3%     16.6%     -0.00%
    5     57.7%     27.7%     -1.46%
    6     41.8%     17.2%     -2.25%
    7     46.2%     19.0%     -3.28%
    8     31.0%     8.05%     -3.10%
    9     18.9%     3.66%     -1.92%
    10    17.3%     4.94%     -1.37%
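The three ratios can be recomputed directly from the data of Fig. 35. The sketch below does so for a few values of N (B = 500 kB/s as above); for N = 3 it reproduces Fig. 37's 19.7% and, up to rounding, its -1.16%.

    # Recomputing Fig. 37 from the data of Fig. 35: for each N, ratio1 is the
    # gain of the best flow when it switches to Sack, ratio2 the share of the
    # previously unused bandwidth that gets used, and ratio3 the relative loss
    # of the other N-1 Reno flows. B = 500 kB/s is the link capacity.

    B = 500.0
    fig35 = {                      # N: (R, r, S, s) in kB/s, transcribed from Fig. 35
        3: (266, 172, 312, 170),
        4: (319, 234, 349, 234),
        5: (352, 274, 393, 270),
        10: (419, 367, 423, 362),
    }

    for N, (R, r, S, s) in fig35.items():
        ratio1 = ((S - s) - (R - r)) / (R - r)
        ratio2 = (S - R) / (B - R)
        ratio3 = (s - r) / r
        print(f"N={N}: ratio1={ratio1:+.1%} ratio2={ratio2:+.1%} ratio3={ratio3:+.1%}")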

The first interesting thing is that the throughput improvement for the best TCP-Reno when it is replaced by TCP-Sack is quite high. For N ranging from 3 to 8, this improvement (ratio1) ranges from 30% to almost 60%. For N equal to 9 or 10, this improvement is a little less than 20%. ratio2 is also relatively high for N ranging from 3 to 7, being between 15% and 20%. That means that TCP-Sack uses the bandwidth more efficiently than TCP-Reno: TCP-Sack is able to use some bandwidth which was wasted by TCP-Reno, and therefore, when one TCP-Reno is replaced by one TCP-Sack, the total aggregated bandwidth increases. When the number of connections N is high (9 or 10), filling the link capacity completely, ratio2 gets smaller because the amount of wasted bandwidth is smaller. This also explains why ratio1 is smaller for high values of N. Finally, ratio3 is small, with a magnitude of at most approximately 3%. This means that the impact of TCP-Sack on the other N - 1 TCP-Renos is not significant.

The conclusion of these experiments is that even when TCP-Sack outperforms TCP-Reno by a large amount, the negative impact on the other competing connections is small. It can be explained by the fact that TCP-Sack uses the bandwidth more efficiently and increases the total aggregated bandwidth.

5 Conclusion

In this report of our work, we presented the experimental results of our study of TCP-Sack performance, and how it compares to the very common TCP-Reno. We believe our study is the first to combine experiments on a testbed and experiments over a real path of the Internet. It backs initial expectations that TCP-Sack is able to recover from multiple losses within one window of data without necessarily timing out. TCP-Sack slow-starts less often than TCP-Reno, and it is therefore closer to the ideal TCP congestion avoidance behavior [10]. The improvement in throughput between UCLA and the two test hosts at NASA's GSFC and PSC ranged between 15% and 45%. On the testbed (using virtual links and emulating a long-delay link), we had throughput improvements ranging from 10% to 120%, mainly depending on the congestion pattern.

The other issue was the possible unfairness of TCP-Sack when facing congestion. It was feared that TCP-Sack might take bandwidth away from non-Sack TCP stacks. Our experiments show that the implementation of TCP-Sack we used is not too aggressive: it makes better use of the bandwidth wasted by TCP-Reno, but the competing TCP stacks are not affected by TCP-Sack.

We believe that Selective Acknowledgment is first and foremost a data recovery mechanism, and that it should be clearly separated from the congestion avoidance and control algorithms. In TCP-Reno, the data recovery and congestion control algorithms were very tightly bound, and modifying each of them was very difficult. Data recovery using Selective Acknowledgments is more robust and less dependent on the congestion algorithms. As a side effect, the current congestion algorithms (which are very much like Reno's, even in the implementation we used) are perhaps a bit conservative when associated with Sack. We believe that Sack should not be viewed as a congestion control algorithm, but solely as a data recovery algorithm. We believe that there is room for improvement in the congestion control algorithms, as illustrated in [9].

6 Acknowledgements

We would like to thank the following people for their help and support throughout this project: Lixia Zhang, our advisor, for her insight and support. All members of UCLA's Internet Research Laboratory, and especially Scott Michel for his help in setting up the testbed. Elliot L. Yan of USC, for letting us use his port of Sack to FreeBSD.
Kalyan Kidambi of NASA's Goddard Space Flight Center, and Jeff Semke of PSC, for providing us with two test hosts to conduct tests over the Internet. Shawn Ostermann, for modifying tcptrace for packets without ...


More information

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices Steps for Computng the Dssmlarty, Entropy, Herfndahl-Hrschman and Accessblty (Gravty wth Competton) Indces I. Dssmlarty Index Measurement: The followng formula can be used to measure the evenness between

More information

Optimum Number of RLC Retransmissions for Best TCP Performance in UTRAN

Optimum Number of RLC Retransmissions for Best TCP Performance in UTRAN Optmum Number of RLC Retransmssons for Best TCP Performance n UTRAN Olver De Mey, Laurent Schumacher, Xaver Dubos {ode,lsc,xdubos}@nfo.fundp.ac.be Computer Scence Insttute, The Unversty of Namur (FUNDP)

More information

USING GRAPHING SKILLS

USING GRAPHING SKILLS Name: BOLOGY: Date: _ Class: USNG GRAPHNG SKLLS NTRODUCTON: Recorded data can be plotted on a graph. A graph s a pctoral representaton of nformaton recorded n a data table. t s used to show a relatonshp

More information

3D vector computer graphics

3D vector computer graphics 3D vector computer graphcs Paolo Varagnolo: freelance engneer Padova Aprl 2016 Prvate Practce ----------------------------------- 1. Introducton Vector 3D model representaton n computer graphcs requres

More information

Evaluation of an Enhanced Scheme for High-level Nested Network Mobility

Evaluation of an Enhanced Scheme for High-level Nested Network Mobility IJCSNS Internatonal Journal of Computer Scence and Network Securty, VOL.15 No.10, October 2015 1 Evaluaton of an Enhanced Scheme for Hgh-level Nested Network Moblty Mohammed Babker Al Mohammed, Asha Hassan.

More information

Virtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory

Virtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory Background EECS. Operatng System Fundamentals No. Vrtual Memory Prof. Hu Jang Department of Electrcal Engneerng and Computer Scence, York Unversty Memory-management methods normally requres the entre process

More information

Lecture 5: Multilayer Perceptrons

Lecture 5: Multilayer Perceptrons Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented

More information

A fair buffer allocation scheme

A fair buffer allocation scheme A far buffer allocaton scheme Juha Henanen and Kalev Klkk Telecom Fnland P.O. Box 228, SF-330 Tampere, Fnland E-mal: juha.henanen@tele.f Abstract An approprate servce for data traffc n ATM networks requres

More information

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1) Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A

More information

Quantifying Responsiveness of TCP Aggregates by Using Direct Sequence Spread Spectrum CDMA and Its Application in Congestion Control

Quantifying Responsiveness of TCP Aggregates by Using Direct Sequence Spread Spectrum CDMA and Its Application in Congestion Control Quantfyng Responsveness of TCP Aggregates by Usng Drect Sequence Spread Spectrum CDMA and Its Applcaton n Congeston Control Mehd Kalantar Department of Electrcal and Computer Engneerng Unversty of Maryland,

More information

Load Balancing for Hex-Cell Interconnection Network

Load Balancing for Hex-Cell Interconnection Network Int. J. Communcatons, Network and System Scences,,, - Publshed Onlne Aprl n ScRes. http://www.scrp.org/journal/jcns http://dx.do.org/./jcns.. Load Balancng for Hex-Cell Interconnecton Network Saher Manaseer,

More information

Inter-protocol fairness between

Inter-protocol fairness between Inter-protocol farness between TCP New Reno and TCP Westwood+ Nels Möller, Chad Barakat, Konstantn Avrachenkov, and Etan Altman KTH, School of Electrcal Engneerng SE- 44, Sweden Emal: nels@ee.kth.se INRIA

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network S. Sudha and N. Ammasagounden Natonal Insttute of Technology, Truchrappall,

More information

Modeling the Bandwidth Sharing Behavior of Congestion Controlled Flows

Modeling the Bandwidth Sharing Behavior of Congestion Controlled Flows Modelng the Bandwdth Sharng Behavor of Congeston Controlled Flows Kang L A dssertaton presented to the faculty of the OGI School of Scence & Engneerng at Oregon Health & Scence Unversty n partal fulfllment

More information

Intro. Iterators. 1. Access

Intro. Iterators. 1. Access Intro Ths mornng I d lke to talk a lttle bt about s and s. We wll start out wth smlartes and dfferences, then we wll see how to draw them n envronment dagrams, and we wll fnsh wth some examples. Happy

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

CMPS 10 Introduction to Computer Science Lecture Notes

CMPS 10 Introduction to Computer Science Lecture Notes CPS 0 Introducton to Computer Scence Lecture Notes Chapter : Algorthm Desgn How should we present algorthms? Natural languages lke Englsh, Spansh, or French whch are rch n nterpretaton and meanng are not

More information

AADL : about scheduling analysis

AADL : about scheduling analysis AADL : about schedulng analyss Schedulng analyss, what s t? Embedded real-tme crtcal systems have temporal constrants to meet (e.g. deadlne). Many systems are bult wth operatng systems provdng multtaskng

More information

Brave New World Pseudocode Reference

Brave New World Pseudocode Reference Brave New World Pseudocode Reference Pseudocode s a way to descrbe how to accomplsh tasks usng basc steps lke those a computer mght perform. In ths week s lab, you'll see how a form of pseudocode can be

More information

Optimizing Document Scoring for Query Retrieval

Optimizing Document Scoring for Query Retrieval Optmzng Document Scorng for Query Retreval Brent Ellwen baellwe@cs.stanford.edu Abstract The goal of ths project was to automate the process of tunng a document query engne. Specfcally, I used machne learnng

More information

Module Management Tool in Software Development Organizations

Module Management Tool in Software Development Organizations Journal of Computer Scence (5): 8-, 7 ISSN 59-66 7 Scence Publcatons Management Tool n Software Development Organzatons Ahmad A. Al-Rababah and Mohammad A. Al-Rababah Faculty of IT, Al-Ahlyyah Amman Unversty,

More information

Computer models of motion: Iterative calculations

Computer models of motion: Iterative calculations Computer models o moton: Iteratve calculatons OBJECTIVES In ths actvty you wll learn how to: Create 3D box objects Update the poston o an object teratvely (repeatedly) to anmate ts moton Update the momentum

More information

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

ABRC: An End-to-End Rate Adaptation Scheme for Multimedia Streaming over Wireless LAN*

ABRC: An End-to-End Rate Adaptation Scheme for Multimedia Streaming over Wireless LAN* ARC: An End-to-End Rate Adaptaton Scheme for Multmeda Streamng over Wreless LAN We Wang Soung C Lew Jack Y Lee Department of Informaton Engneerng he Chnese Unversty of Hong Kong Shatn N Hong Kong {wwang2

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,

More information

Neural Network Control for TCP Network Congestion

Neural Network Control for TCP Network Congestion 5 Amercan Control Conference June 8-, 5. Portland, OR, USA FrA3. Neural Network Control for TCP Network Congeston Hyun C. Cho, M. Sam Fadal, Hyunjeong Lee Electrcal Engneerng/6, Unversty of Nevada, Reno,

More information

Extending the Functionality of RTP/RTCP Implementation in Network Simulator (NS-2) to support TCP friendly congestion control

Extending the Functionality of RTP/RTCP Implementation in Network Simulator (NS-2) to support TCP friendly congestion control Extendng the Functonalty of RTP/RTCP Implementaton n Network Smulator (NS-2) to support TCP frendly congeston control Chrstos Bouras Research Academc Computer Technology Insttute and Unversty of Patras

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

Sample Solution. Advanced Computer Networks P 1 P 2 P 3 P 4 P 5. Module: IN2097 Date: Examiner: Prof. Dr.-Ing. Georg Carle Exam: Final exam

Sample Solution. Advanced Computer Networks P 1 P 2 P 3 P 4 P 5. Module: IN2097 Date: Examiner: Prof. Dr.-Ing. Georg Carle Exam: Final exam Char of Network Archtectures and Servces Department of Informatcs Techncal Unversty of Munch Note: Durng the attendance check a stcker contanng a unque QR code wll be put on ths exam. Ths QR code contans

More information

QoS Evaluation of HTTP over Satellites

QoS Evaluation of HTTP over Satellites 2011 Internatonal Conference on Cyber-Enabled Dstrbuted Computng and Knowledge Dscovery QoS Evaluaton of HTTP over Satelltes Lukman Audah Faculty of Electrcal and Electronc Engneerng Unverst Tun Hussen

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search Sequental search Buldng Java Programs Chapter 13 Searchng and Sortng sequental search: Locates a target value n an array/lst by examnng each element from start to fnsh. How many elements wll t need to

More information

Goals and Approach Type of Resources Allocation Models Shared Non-shared Not in this Lecture In this Lecture

Goals and Approach Type of Resources Allocation Models Shared Non-shared Not in this Lecture In this Lecture Goals and Approach CS 194: Dstrbuted Systems Resource Allocaton Goal: acheve predcable performances Three steps: 1) Estmate applcaton s resource needs (not n ths lecture) 2) Admsson control 3) Resource

More information

TCP-Illinois: A Loss and Delay-Based Congestion Control Algorithm for High-Speed Networks

TCP-Illinois: A Loss and Delay-Based Congestion Control Algorithm for High-Speed Networks TCP-Illnos: A Loss and Delay-Based Congeston Control Algorthm for Hgh-Speed etworks Shao Lu, Tamer Başar and R. Srkant Abstract We ntroduce a new congeston control algorthm for hgh speed networks, called

More information

Future Generation Computer Systems

Future Generation Computer Systems Future Generaton Computer Systems 29 (2013) 1631 1644 Contents lsts avalable at ScVerse ScenceDrect Future Generaton Computer Systems journal homepage: www.elsever.com/locate/fgcs Gosspng for resource

More information

Improving Low Density Parity Check Codes Over the Erasure Channel. The Nelder Mead Downhill Simplex Method. Scott Stransky

Improving Low Density Parity Check Codes Over the Erasure Channel. The Nelder Mead Downhill Simplex Method. Scott Stransky Improvng Low Densty Party Check Codes Over the Erasure Channel The Nelder Mead Downhll Smplex Method Scott Stransky Programmng n conjuncton wth: Bors Cukalovc 18.413 Fnal Project Sprng 2004 Page 1 Abstract

More information

ELEC 377 Operating Systems. Week 6 Class 3

ELEC 377 Operating Systems. Week 6 Class 3 ELEC 377 Operatng Systems Week 6 Class 3 Last Class Memory Management Memory Pagng Pagng Structure ELEC 377 Operatng Systems Today Pagng Szes Vrtual Memory Concept Demand Pagng ELEC 377 Operatng Systems

More information

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster

More information

Intelligent Traffic Conditioners for Assured Forwarding Based Differentiated Services Networks 1

Intelligent Traffic Conditioners for Assured Forwarding Based Differentiated Services Networks 1 Intellgent Traffc Condtoners for Assured Forwardng Based Dfferentated Servces Networks B. Nandy, N. Seddgh, P. Peda, J. Ethrdge Nortel Networks, Ottawa, Canada Emal:{bnandy, nseddgh, ppeda, jethrdg}@nortelnetworks.com

More information

An Optimal Algorithm for Prufer Codes *

An Optimal Algorithm for Prufer Codes * J. Software Engneerng & Applcatons, 2009, 2: 111-115 do:10.4236/jsea.2009.22016 Publshed Onlne July 2009 (www.scrp.org/journal/jsea) An Optmal Algorthm for Prufer Codes * Xaodong Wang 1, 2, Le Wang 3,

More information

ARTICLE IN PRESS. Signal Processing: Image Communication

ARTICLE IN PRESS. Signal Processing: Image Communication Sgnal Processng: Image Communcaton 23 (2008) 754 768 Contents lsts avalable at ScenceDrect Sgnal Processng: Image Communcaton journal homepage: www.elsever.com/locate/mage Dstrbuted meda rate allocaton

More information

Load-Balanced Anycast Routing

Load-Balanced Anycast Routing Load-Balanced Anycast Routng Chng-Yu Ln, Jung-Hua Lo, and Sy-Yen Kuo Department of Electrcal Engneerng atonal Tawan Unversty, Tape, Tawan sykuo@cc.ee.ntu.edu.tw Abstract For fault-tolerance and load-balance

More information

Shared Running Buffer Based Proxy Caching of Streaming Sessions

Shared Running Buffer Based Proxy Caching of Streaming Sessions Shared Runnng Buffer Based Proxy Cachng of Streamng Sessons Songqng Chen, Bo Shen, Yong Yan, Sujoy Basu Moble and Meda Systems Laboratory HP Laboratores Palo Alto HPL-23-47 March th, 23* E-mal: sqchen@cs.wm.edu,

More information

An Efficient Garbage Collection for Flash Memory-Based Virtual Memory Systems

An Efficient Garbage Collection for Flash Memory-Based Virtual Memory Systems S. J and D. Shn: An Effcent Garbage Collecton for Flash Memory-Based Vrtual Memory Systems 2355 An Effcent Garbage Collecton for Flash Memory-Based Vrtual Memory Systems Seunggu J and Dongkun Shn, Member,

More information

Avoiding congestion through dynamic load control

Avoiding congestion through dynamic load control Avodng congeston through dynamc load control Vasl Hnatyshn, Adarshpal S. Seth Department of Computer and Informaton Scences, Unversty of Delaware, Newark, DE 976 ABSTRACT The current best effort approach

More information

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications Effcent Loa-Balance IP Routng Scheme Base on Shortest Paths n Hose Moel E Ok May 28, 2009 The Unversty of Electro-Communcatons Ok Lab. Semnar, May 28, 2009 1 Outlne Backgroun on IP routng IP routng strategy

More information

Faster Network Design with Scenario Pre-filtering

Faster Network Design with Scenario Pre-filtering Faster Network Desgn wth Scenaro Pre-flterng Debojyot Dutta, Ashsh Goel, John Hedemann ddutta@s.edu, agoel@usc.edu, johnh@s.edu Informaton Scences Insttute School of Engneerng Unversty of Southern Calforna

More information

Fast Retransmission of Real-Time Traffic in HIPERLAN/2 Systems

Fast Retransmission of Real-Time Traffic in HIPERLAN/2 Systems Fast Retransmsson of Real-Tme Traffc n HIPERLAN/ Systems José A Afonso and Joaqum E Neves Department of Industral Electroncs Unversty of Mnho, Campus de Azurém 4800-058 Gumarães, Portugal {joseafonso,

More information

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng

More information

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth

More information

Video Proxy System for a Large-scale VOD System (DINA)

Video Proxy System for a Large-scale VOD System (DINA) Vdeo Proxy System for a Large-scale VOD System (DINA) KWUN-CHUNG CHAN #, KWOK-WAI CHEUNG *# #Department of Informaton Engneerng *Centre of Innovaton and Technology The Chnese Unversty of Hong Kong SHATIN,

More information

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z.

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z. TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS Muradalyev AZ Azerbajan Scentfc-Research and Desgn-Prospectng Insttute of Energetc AZ1012, Ave HZardab-94 E-mal:aydn_murad@yahoocom Importance of

More information

JTCP: Congestion Distinction by the Jitter-based Scheme over Wireless Networks

JTCP: Congestion Distinction by the Jitter-based Scheme over Wireless Networks JTCP: Congeston stncton by the Jtter-based Scheme over Wreless Networks Erc Hsao-Kuang Wu, Mng-I Hseh, Me-Zhen Chen and Shao-Y Hung ept. of Computer Scence and Informaton Engneerng, Natonal Central Unversty,

More information

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated.

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated. Some Advanced SP Tools 1. umulatve Sum ontrol (usum) hart For the data shown n Table 9-1, the x chart can be generated. However, the shft taken place at sample #21 s not apparent. 92 For ths set samples,

More information

A Hybrid Genetic Algorithm for Routing Optimization in IP Networks Utilizing Bandwidth and Delay Metrics

A Hybrid Genetic Algorithm for Routing Optimization in IP Networks Utilizing Bandwidth and Delay Metrics A Hybrd Genetc Algorthm for Routng Optmzaton n IP Networks Utlzng Bandwdth and Delay Metrcs Anton Redl Insttute of Communcaton Networks, Munch Unversty of Technology, Arcsstr. 21, 80290 Munch, Germany

More information

A New Feedback Control Mechanism for Error Correction in Packet-Switched Networks

A New Feedback Control Mechanism for Error Correction in Packet-Switched Networks Proceedngs of the th IEEE Conference on Decson and Control, and the European Control Conference 005 Sevlle, Span, December -15, 005 MoA1. A New Control Mechansm for Error Correcton n Packet-Swtched Networks

More information

Assembler. Building a Modern Computer From First Principles.

Assembler. Building a Modern Computer From First Principles. Assembler Buldng a Modern Computer From Frst Prncples www.nand2tetrs.org Elements of Computng Systems, Nsan & Schocken, MIT Press, www.nand2tetrs.org, Chapter 6: Assembler slde Where we are at: Human Thought

More information

TECHNICAL REPORT AN OPTIMAL DISTRIBUTED PROTOCOL FOR FAST CONVERGENCE TO MAXMIN RATE ALLOCATION. Jordi Ros and Wei K Tsai

TECHNICAL REPORT AN OPTIMAL DISTRIBUTED PROTOCOL FOR FAST CONVERGENCE TO MAXMIN RATE ALLOCATION. Jordi Ros and Wei K Tsai TECHNICAL REPORT AN OPTIMAL DISTRIUTED PROTOCOL FOR FAST CONVERGENCE TO MAXMIN RATE ALLOCATION Jord Ros and We K Tsa Department of Electrcal and Computer Engneerng Unversty of Calforna, Irvne 1 AN OPTIMAL

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

Instantaneous Fairness of TCP in Heterogeneous Traffic Wireless LAN Environments

Instantaneous Fairness of TCP in Heterogeneous Traffic Wireless LAN Environments KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS VOL. 10, NO. 8, Aug. 2016 3753 Copyrght c2016 KSII Instantaneous Farness of TCP n Heterogeneous Traffc Wreless LAN Envronments Young-Jn Jung 1 and

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

Support Vector Machines

Support Vector Machines /9/207 MIST.6060 Busness Intellgence and Data Mnng What are Support Vector Machnes? Support Vector Machnes Support Vector Machnes (SVMs) are supervsed learnng technques that analyze data and recognze patterns.

More information

Harvard University CS 101 Fall 2005, Shimon Schocken. Assembler. Elements of Computing Systems 1 Assembler (Ch. 6)

Harvard University CS 101 Fall 2005, Shimon Schocken. Assembler. Elements of Computing Systems 1 Assembler (Ch. 6) Harvard Unversty CS 101 Fall 2005, Shmon Schocken Assembler Elements of Computng Systems 1 Assembler (Ch. 6) Why care about assemblers? Because Assemblers employ some nfty trcks Assemblers are the frst

More information

Meta-heuristics for Multidimensional Knapsack Problems

Meta-heuristics for Multidimensional Knapsack Problems 2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,

More information

Wightman. Mobility. Quick Reference Guide THIS SPACE INTENTIONALLY LEFT BLANK

Wightman. Mobility. Quick Reference Guide THIS SPACE INTENTIONALLY LEFT BLANK Wghtman Moblty Quck Reference Gude THIS SPACE INTENTIONALLY LEFT BLANK WIGHTMAN MOBILITY BASICS How to Set Up Your Vocemal 1. On your phone s dal screen, press and hold 1 to access your vocemal. If your

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

Routing in Degree-constrained FSO Mesh Networks

Routing in Degree-constrained FSO Mesh Networks Internatonal Journal of Hybrd Informaton Technology Vol., No., Aprl, 009 Routng n Degree-constraned FSO Mesh Networks Zpng Hu, Pramode Verma, and James Sluss Jr. School of Electrcal & Computer Engneerng

More information

Fibre-Optic AWG-based Real-Time Networks

Fibre-Optic AWG-based Real-Time Networks Fbre-Optc AWG-based Real-Tme Networks Krstna Kunert, Annette Böhm, Magnus Jonsson, School of Informaton Scence, Computer and Electrcal Engneerng, Halmstad Unversty {Magnus.Jonsson, Krstna.Kunert}@de.hh.se

More information

Newton-Raphson division module via truncated multipliers

Newton-Raphson division module via truncated multipliers Newton-Raphson dvson module va truncated multplers Alexandar Tzakov Department of Electrcal and Computer Engneerng Illnos Insttute of Technology Chcago,IL 60616, USA Abstract Reducton n area and power

More information