On the Fairness-Efficiency Tradeoff for Packet Processing with Multiple Resources


On the Fairness-Efficiency Tradeoff for Packet Processing with Multiple Resources

Wei Wang, Chen Feng, Baochun Li, and Ben Liang
Department of Electrical and Computer Engineering, University of Toronto

ABSTRACT

Middleboxes are widely deployed in today's networks. They apply a variety of complex network functions to transform, filter, and optimize incoming traffic based on the payload of packets. These functions require the support of multiple types of resources, such as CPU and link bandwidth, for processing incoming packets. Hence, a multi-resource packet scheduling algorithm is needed to allow flows to share these resources fairly and efficiently. However, unlike traditional fair queueing where bandwidth is the only concern, we show in this paper that fairness and efficiency are conflicting objectives that cannot be achieved simultaneously in the presence of multiple resources. Ideally, a scheduling algorithm should allow network operators to flexibly specify their fairness and efficiency requirements, so as to meet the Quality of Service demands while keeping the system at a high utilization level. Yet, existing multi-resource scheduling algorithms focus on fairness only, and may lead to poor resource utilization. In this paper, we propose a new scheduling algorithm to achieve a flexible tradeoff between fairness and efficiency for packet processing, consuming both CPU and link bandwidth. Experimental results based on both real-world implementation and trace-driven simulation suggest that trading off a modest level of fairness can potentially improve the efficiency to the point where the system capacity is almost saturated.

Categories and Subject Descriptors
C.2.6 [Computer-Communication Networks]: Internetworking

General Terms
Scheduling, Theory

Keywords
Fair Queueing; Middleboxes; Fairness-Efficiency Tradeoff

1. INTRODUCTION

Queueing algorithms determine the order in which packets in various independent flows are processed, and serve as a fundamental mechanism for allocating resources in a network appliance. Traditional queueing algorithms [1, 9, 21, 28] make scheduling decisions in network switches that simply forward packets to their next hops, and link bandwidth is the only resource being allocated. In modern network appliances, e.g., middleboxes [25, 27], link bandwidth is no longer the only resource shared by flows. In addition to packet forwarding, middleboxes perform a variety of critical network functions that require deep packet inspection based on the payload of packets, such as IP security encryption, WAN optimization, and intrusion detection. Performing these complex network functions requires the support of multiple types of resources, and may bottleneck on either CPU or link bandwidth [10, 13]. For example, flows that require basic forwarding may congest the link bandwidth [13], while those that require IP security encryption need more CPU processing time [10]. A queueing algorithm specifically designed for multiple resources is therefore needed for sharing these resources fairly and efficiently.
Fairness offers predictable service isolation among flows. It ensures that the service a flow receives (i.e., the number of packets processed per second) in an n-flow system is at least 1/n of that it achieves when the flow monopolizes all resources, independent of the behavior of other rogue flows. The notion of Dominant Resource Fairness (DRF) [14, 22] embodies this isolation property, with which each flow receives approximately the same processing time on its dominant resource, defined as the one that requires the most packet processing time [13].

Efficiency serves as another important metric, measuring the resource utilization achieved by a queueing algorithm. High resource utilization naturally translates into high traffic throughput. This is of particular importance to enterprise networks, given the surging volume of traffic passing through middleboxes [27, 35].

Both fairness and efficiency can be achieved at the same time in traditional single-resource fair queueing, where bandwidth is the only concern. As long as the schedule is work conserving [38], bandwidth utilization is 100% given a non-empty system. That leaves fairness as an independent objective to optimize. However, in the presence of multiple resources, fairness is often a conflicting objective against efficiency. To see this, consider the two schedules shown in Fig. 1, with two flows whose packets need CPU processing before transmission. Packets that finish CPU processing are placed into a buffer in front of the output link. Each packet in Flow 1 has a processing time vector <2, 3>, meaning that it requires 2 time units for CPU processing and 3 time units for transmission; each packet in Flow 2 has a processing time vector <9, 1>. The dominant resource of Flow 1 is link bandwidth, as it takes more time to transmit a packet than to process it using CPU; similarly, the dominant resource of Flow 2 is CPU. To achieve DRF, the transmission time Flow 1 receives should be approximately equal to the CPU processing time Flow 2 receives. In this sense, Flow 1 should schedule three packets whenever Flow 2 schedules one, so that each flow receives 9 time units to process its dominant resource, as shown in Fig. 1a. This schedule, though fair, leads to poor bandwidth utilization: the link is idle for 1/3 of the time. On the other hand, Fig. 1b shows a schedule that achieves 100% CPU and bandwidth utilization by serving eight packets of Flow 1 and one packet of Flow 2 alternately. The schedule, though efficient, violates DRF. While Flow 1 receives 24/25 of the link bandwidth, Flow 2 receives only 9/25 of the CPU time.

Figure 1: An example showing the tradeoff between fairness and efficiency for multi-resource packet scheduling. Packets that finish CPU processing are placed into a buffer in front of the output link. Flow 1 sends packets p1, p2, ..., each having a processing time vector <2, 3>; Flow 2 sends packets q1, q2, ..., each having a processing time vector <9, 1>. (a) A packet schedule that is fair but inefficient. (b) A packet schedule that is efficient but violates DRF.
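To see concretely why schedule (b) saturates both resources, consider one scheduling round of eight Flow 1 packets and one Flow 2 packet (a quick check of the figures quoted above, not part of the original figure):

$$\text{CPU: } 8 \times 2 + 1 \times 9 = 25 \text{ time units}, \qquad \text{link: } 8 \times 3 + 1 \times 1 = 25 \text{ time units}.$$

Both resources are busy for the full 25 time units of each round, yet Flow 1 occupies the link for 24 of them while Flow 2 gets only 9 units of CPU, exactly the 24/25 versus 9/25 imbalance that violates DRF.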
The fairness-efficiency tradeoff shown in the example above generally exists for multi-resource packet scheduling, but it has received little attention before. Existing multi-resource queueing algorithms focus solely on fairness [13, 31, 34]. However, for applications having a loose fairness requirement, trading off a modest degree of fairness for higher efficiency and higher throughput is well justified. In general, depending on the underlying applications, a network operator may weigh fairness and efficiency differently. Ideally, a multi-resource queueing algorithm should allow network operators to flexibly specify their tradeoff preference and implement the specified tradeoff by determining the right packet scheduling order.

However, designing such a queueing algorithm is non-trivial. It remains to be seen how efficiency can be quantitatively defined. Further, it remains open how the tradeoff requirement should be appropriately specified. But most importantly, given a specific tradeoff requirement, how can the scheduling decision be correctly made to implement it?

This paper represents the first attempt to address these challenges. We clarify the efficiency measure as the schedule makespan, which is the completion time of the last flow. We show that achieving a flexible tradeoff between fairness and efficiency is generally NP-hard. We hence limit our discussion to a typical scenario where CPU and link bandwidth are the two types of resources required for packet processing, which is usually the case in middleboxes. We show that the fairness-efficiency tradeoff can be strictly enforced by a GPS-like (Generalized Processor Sharing [9, 21]) fluid model, where packets are served in arbitrarily small increments on both resources. To implement the idealized fluid in the real world, we design a packet-by-packet tracking algorithm, using an approach similar to the virtual time implementation of Weighted Fair Queueing (WFQ) [9, 16, 21]. We have prototyped our tradeoff algorithm in the Click modular router [19]. Both our prototype implementation and trace-driven simulation show that a 15%-20% fairness tradeoff is sufficient to achieve the optimal efficiency, leading to a nearly 20% improvement in bandwidth throughput with a significantly higher resource utilization.

2. FAIRNESS AND EFFICIENCY

Before discussing the tradeoff between fairness and efficiency, we shall first clarify how the notion of fairness is to be defined, and how efficiency is to be measured quantitatively. We model packet processing as going through a resource pipeline, where the first resource is consumed to process the packet first, followed by the second, and so on. A packet is not available for the downstream resource until the processing on the upstream resource finishes. For example, a packet cannot be transmitted (which consumes link bandwidth) before it has been processed by CPU.

2.1 Dominant Resource Fairness

Fairness is one of the primary design objectives for a queueing algorithm. A fair schedule offers service isolation among flows by allowing each flow to receive throughput at least at the level achieved when every resource is evenly allocated. The notion of Dominant Resource Fairness (DRF) embodies this isolation property by achieving max-min fairness on the dominant resources of packets in their respective flows [13]. The dominant resource of a packet is defined as the one that requires the maximum packet processing time. In particular, let τ_r(p) be the time required to process packet p on resource r. The dominant resource of packet p is r_p = argmax_r τ_r(p). Given a packet schedule, let D_i(t_1, t_2) be the time flow i receives to process the dominant resources of its packets in a backlogged period (t_1, t_2). The function D_i(t_1, t_2) is referred to as the dominant service flow i receives in (t_1, t_2). A schedule is said to strictly implement DRF if for all flows i and j, and for any period (t_1, t_2) in which they are backlogged, we have

D_i(t_1, t_2) = D_j(t_1, t_2).   (1)

In other words, a strict DRF schedule allows each flow to receive the same dominant service in any backlogged period. However, because packets are scheduled as separate entities and are transmitted in sequence, strictly implementing DRF at all times may not be possible in practice. For this reason, a practical fair schedule only requires flows to receive approximately the same dominant services over time [13, 31, 34], as shown in the previous example of Fig. 1a.
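Since the rest of the paper keys off each packet's dominant resource, the definition r_p = argmax_r τ_r(p) amounts to a one-pass maximum. A generic sketch (the function name is ours, not from the paper's prototype):

```cpp
#include <cstddef>

// Dominant resource of a packet, r_p = argmax_r tau_r(p): the resource
// with the largest processing time. tau[r] is the processing time on
// resource r; m is the number of resource types.
std::size_t dominantResource(const double tau[], std::size_t m) {
    std::size_t best = 0;
    for (std::size_t r = 1; r < m; ++r)
        if (tau[r] > tau[best]) best = r;
    return best;
}
```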

2.2 The Efficiency Measure

In addition to fairness, efficiency is another important concern for a multi-resource scheduling algorithm, but it has received no significant attention before. Even the definition of efficiency needs clarification. Perhaps the most widely adopted efficiency measure is system throughput, whose conventional definition is the rate of completions [17], computed as the processed workload divided by the elapsed time (e.g., bits per second). While this performance metric is well defined for single-resource systems, extending its definition to multiple types of resources leads to a throughput vector, where each component is the throughput of one type of resource (e.g., 10 CPU instruction completions per second and 5 bits transmitted through the output link per second), and different throughput vectors may not be comparable.

Another possible efficiency measure is resource utilization given a non-empty system, or simply resource utilization in the remainder of this paper. (This definition is different from that of queueing theory, where the utilization is defined as the fraction of time a device is busy [17]. Under that definition, high utilization usually means a high congestion level with a large queue backlog and long delays [36], and is usually not desired.) However, in a middlebox, different resources may see different levels of utilization. The question is: how should the system utilization be properly defined? One possible definition is to add up the utilization rates of all resources. This definition implicitly assumes exchangeable resources, say, that 1% CPU usage is equivalent to 1% bandwidth consumption, which may not be well justified in many circumstances, especially when one type of resource is scarce in the system and is valued more than the other.

In this paper, we measure efficiency with the schedule makespan. Given input flows with a finite number of packets, the makespan of a schedule is defined as the time elapsed from the arrival of the first packet to the time when all packets finish processing on all resources. One can also view makespan as the completion time of the last flow. Intuitively, given a finite traffic input, the shorter the makespan, the faster the input traffic is processed, and the more efficient the schedule. (Makespan is not the only efficiency measure that one can define. For example, we could also measure efficiency with the average flow completion time. We choose makespan as the efficiency measure in this paper because it leads to tractable analysis. More importantly, makespan closely relates to system utilization and is conceptually easy to understand. The discussion of other possible efficiency measures is out of the scope of this paper.)

2.3 Tradeoff between Fairness and Efficiency

With a precise measure of efficiency, we are curious to know how much efficiency is sacrificed for fair queueing. To answer this question, we first generalize the definition of work conserving schedules from traditional single-resource fair queueing to multiple resources. In particular, we say a schedule is work conserving if at least one resource is fully utilized for packet processing whenever there is a backlogged flow. In other words, a work conserving schedule does not allow resources to sit idle if they can be used to process a backlogged packet. Existing multi-resource fair queueing algorithms [13, 31, 34] use the goal of achieving work conservation as an indication of efficiency. However, in the theorem below, we observe that such an approach is ineffective.

Theorem 1. Let m be the number of resource types concerned. Given any traffic input I, let T_σ(I) be the makespan of a work conserving schedule σ, and T*(I) the minimum makespan of an optimal schedule. We have

T_σ(I) ≤ m T*(I).   (2)

PROOF. Given a traffic input I, let the work conserving schedule σ consist of n_b busy periods. A busy period is a time interval during which at least one type of resource is used for packet processing. When the system is empty and a new packet arrives, a new busy period starts. The busy period ends when the system becomes empty again. We consider the following two cases.

Case 1: n_b = 1. Let traffic input I consist of N packets, ordered based on their arrival times, where packet 1 arrives first. For packet i, let τ_r^(i) be its packet processing time on resource r. It is easy to check that the following inequality holds for the optimal schedule with the minimum makespan:

T*(I) ≥ max_r Σ_{i=1}^{N} τ_r^(i).   (3)

On the other hand, for the work conserving schedule σ, its makespan reaches the maximum when packet processing does not overlap in time, across all resources, i.e.,

T_σ(I) ≤ Σ_{i=1}^{N} Σ_{r=1}^{m} τ_r^(i).   (4)

This leads to the following inequalities:

T_σ(I) ≤ Σ_{i=1}^{N} Σ_{r=1}^{m} τ_r^(i) ≤ m max_r Σ_{i=1}^{N} τ_r^(i) ≤ m T*(I).   (5)

Case 2: n_b > 1. Given traffic input I, let I(t+) be the packets that arrive on or after time t. For schedule σ, let t_0 be the time when its second-to-last busy period (n_b − 1) ends, and t_1 the time when the last busy period (n_b) starts. Because schedule σ is work conserving, no packet arrives between t_0 and t_1. We have

T_σ(I) = t_1 + T_σ(I(t_1+)),   (6)

and

T*(I) = t_1 + T*(I(t_1+)).   (7)

Note that given traffic input I(t_1+), schedule σ consists of only one busy period. By the discussion of Case 1, we have

T_σ(I) = t_1 + T_σ(I(t_1+)) ≤ t_1 + m T*(I(t_1+)) ≤ m T*(I),   (8)

where the last inequality is derived from (7).

We make the following three observations from Theorem 1. First, the tradeoff between fairness and efficiency is a unique challenge facing multi-resource scheduling. When the system consists of only one type of resource (i.e., m = 1), work conservation is sufficient to achieve the minimum makespan, leaving fairness as the only concern. For this reason, efficiency has never been a problem for traditional single-resource fair queueing. Second, while work conservation also provides some efficiency guarantee for multi-resource scheduling, the more types of resources, the weaker the guarantee. Third, even with a small number of resource types, the efficiency loss could be quite significant. Since bandwidth throughput is inversely proportional to the schedule makespan, Theorem 1 implies that solely relying on work conservation may incur up to 50% loss of bandwidth throughput when there are two types of resources. While this is based on the worst case, as we shall see later in Section 6, our experiments confirm that a throughput loss of as much as 20% is introduced by the existing fair queueing algorithms. Trading off some degree of fairness for higher efficiency is therefore well justified, especially for applications with loose fairness requirements.

2.4 Challenges

Unfortunately, striking a desired balance between fairness and efficiency in a multi-resource system is technically non-trivial. Even minimizing the makespan without regard to fairness, itself a special case of the fairness-efficiency tradeoff, is NP-hard. In particular, we note that minimizing the makespan of a packet schedule can be modeled as a multi-stage flow shop problem [6, 20, 23] studied in operations research, where the equivalent of a packet is a job, and the equivalent of a type of resource is a machine. However, flow shop scheduling is a notoriously hard problem, even in its offline setting where the entire input is known beforehand. Specifically, when all jobs (packets) are available at the very beginning, finding the minimum makespan is strongly NP-hard when the number of machines (resources) is greater than two [12].

Given the hardness results above, in this paper we limit our discussion to two types of resources, CPU and link bandwidth, as these are the two middlebox resources of most concern [13, 25]. We note that even with two types of resources, minimizing the schedule makespan remains a hard problem. Because packets arrive dynamically over time, the problem resembles a 2-machine online flow shop scheduling problem where jobs (packets) do not reveal their information until they arrive. For this problem, only a limited number of negative results is known [6, 20, 23, 26, 30]. Specifically, no online algorithm can ensure a makespan within a certain constant factor of the optimum in all cases [24]. We also notice that no existing work gives a concrete solution, even a heuristic algorithm, that jointly considers both makespan and fairness.

Table 1: Main notations used in the fluid model. The superscript t is dropped when time can be clearly inferred from the context.
  n: maximum number of flows that are concurrently backlogged
  α: fairness knob specified by the network operator
  B (or B^t): set of flows that are currently backlogged (at time t)
  d_i (or d_i^t): dominant share allocated to flow i (at time t)
  d̄ (or d̄^t): fair dominant share (at time t), given by (16)
  τ_{i,r} (or τ_{i,r}^t): packet processing time on resource r required by the head-of-line packet of flow i (at time t)
  τ̃_{i,r} (or τ̃_{i,r}^t): normalized τ_{i,r} (or τ_{i,r}^t), defined by (12)
3. FAIRNESS, EFFICIENCY, AND THEIR TRADEOFF IN THE FLUID MODEL

The difficulty of makespan minimization is mainly introduced by the combinatorial nature of multi-resource scheduling. One approach to circumvent this problem is to consider a fluid relaxation, where packets are served in arbitrarily small increments on all resources. For each packet, this is equivalent to processing it simultaneously on all resources with the same progress, and head-of-line packets of backlogged flows can also be served in parallel, at (potentially) different processing rates. Such a parallel-processing fluid model eliminates the need for discussing the scheduling orders of flows. Instead, it allows us to focus on the resource shares allocated to flows, hence relaxing a combinatorial optimization problem to a simpler dynamic resource allocation problem. While in general, optimally solving such a dynamic problem requires knowing future packet arrivals, we show in this section that, under some practical assumptions, a greedy algorithm gives an optimal online schedule with the minimum makespan. We can then strike a balance between efficiency and fairness by imposing fairness constraints on the fluid schedule. We shall discuss later in Sections 4 and 5 how this fluid schedule is implemented in practice with a packet-by-packet tracking algorithm at acceptable complexity.

3.1 Fluid Relaxation

In the fluid model, a flow is relaxed to a fluid where each of its packets is served simultaneously on all resources with the same progress. Packets of different flows are also served in parallel. The schedule needs to decide, at each time, the resource share allocated to each backlogged flow. In particular, let B^t be the set of flows that are backlogged at time t. Let a_{i,r}^t be the fraction (share) of resource r allocated to flow i at time t. The fluid schedule determines, at each time t, the resource allocation a_{i,r}^t for each backlogged flow i and each resource r. Two constraints must be satisfied when making resource allocation decisions. First, we must ensure that no resource is allocated more than its total availability:

Σ_{i∈B^t} a_{i,r}^t ≤ 1,  r = 1, 2.   (9)

The second constraint ensures that a packet is processed at a consistent rate across resources. In particular, for a backlogged flow i and its head-of-line packet at time t, let τ_{i,r}^t be its packet processing time on resource r, and let

r* = argmax_r τ_{i,r}^t   (10)

be its dominant resource. The processing rate that this packet receives on resource r is computed as the ratio between the resource share allocated and the processing time required: a_{i,r}^t / τ_{i,r}^t. To ensure a consistent processing rate, we have a_{i,r}^t / τ_{i,r}^t = a_{i,r'}^t / τ_{i,r'}^t for all r and r'. Substituting r* for r' above, we see a linear relation between the allocation share of resource r and that of the dominant resource:

a_{i,r}^t = (τ_{i,r}^t / τ_{i,r*}^t) a_{i,r*}^t = τ̃_{i,r}^t d_i^t,   (11)

where

τ̃_{i,r}^t = τ_{i,r}^t / τ_{i,r*}^t   (12)

is the normalized packet processing time on resource r, and

d_i^t = a_{i,r*}^t   (13)

is the dominant share allocated to flow i at time t. Plugging (11) into (9), we combine the two constraints into one feasibility constraint of a fluid schedule:

Σ_{i∈B^t} τ̃_{i,r}^t d_i^t ≤ 1,  r = 1, 2.   (14)

Before we discuss the tradeoff between fairness and efficiency, we first consider two special cases, where either fairness or efficiency is the only objective to optimize in the fluid model. For ease of presentation, we drop the superscript t when time can be clearly inferred from the context. Table 1 summarizes the main notations used in the fluid model.

3.2 Fluid Schedule with Perfect Fairness

We first consider the fairness objective. To achieve perfect DRF, the fluid schedule enforces strict max-min fairness on flows' dominant shares, under the feasibility constraint. Specifically, the fluid schedule solves the following DRF allocation problem [14, 22] at each time t:

max_d min_{i∈B} d_i
s.t. Σ_{i∈B} τ̃_{i,r} d_i ≤ 1,  r = 1, 2.   (15)

Let n be the number of backlogged flows. The optimal solution, denoted by d̄ = (d̄_1, ..., d̄_n), allocates each backlogged flow the same dominant share, i.e.,

d̄_i = d̄ = 1 / max{ Σ_{i∈B} τ̃_{i,1}, Σ_{i∈B} τ̃_{i,2} }.   (16)
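For concreteness, here is a minimal C++ sketch of (12) and (16) under the two-resource model; the types and names are ours, not the prototype's:

```cpp
#include <algorithm>
#include <vector>

// Head-of-line packet demand of one backlogged flow: processing time on
// resource 0 (CPU) and resource 1 (link bandwidth).
struct HeadOfLine {
    double tau[2];
};

// Normalized processing times per (12): divide by the dominant (largest)
// processing time, so the dominant resource normalizes to 1.
static void normalize(const HeadOfLine& p, double tauNorm[2]) {
    double dom = std::max(p.tau[0], p.tau[1]);
    tauNorm[0] = p.tau[0] / dom;
    tauNorm[1] = p.tau[1] / dom;
}

// Fair dominant share per (16): 1 over the larger of the two normalized
// per-resource loads summed across the backlogged flows.
double fairDominantShare(const std::vector<HeadOfLine>& backlogged) {
    double load[2] = {0.0, 0.0};
    for (const HeadOfLine& p : backlogged) {
        double tn[2];
        normalize(p, tn);
        load[0] += tn[0];
        load[1] += tn[1];
    }
    return 1.0 / std::max(load[0], load[1]);
}
```

For the flows of Fig. 1, the normalized demands are <2/3, 1> and <1, 1/9>; the per-resource loads are 5/3 and 10/9, so the function returns d̄ = 3/5, matching the DRGPS example discussed next.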

In any backlogged period, because flows are allocated the same dominant shares, they receive the same dominant services, achieving strict DRF at all times. The resulting fluid schedule is also known as DRGPS [32], a multi-resource generalization of the well-known GPS [9, 21]. Any discrete fair schedule is essentially a packet-by-packet approximation to DRGPS. For instance, applying DRGPS to the example of Fig. 1 leads to the fluid schedule shown in Fig. 2, where the normalized packet processing times of Flow 1 and Flow 2 are <τ̃_{1,1}, τ̃_{1,2}> = <2/3, 1> and <τ̃_{2,1}, τ̃_{2,2}> = <1, 1/9>, respectively. By (16), both flows are allocated the same dominant share d̄ = 3/5. Specifically, Flow 1 receives <2/5 CPU, 3/5 bandwidth>; Flow 2 receives <3/5 CPU, 1/15 bandwidth>. In total, only 2/3 of the bandwidth is utilized, the same as the discrete fair schedule shown in Fig. 1a.

Figure 2: The DRGPS fluid schedule that implements perfect fairness in the example of Fig. 1. Flow 1 sends packets p1, p2, ..., and receives <2/5 CPU, 3/5 bandwidth>; Flow 2 sends packets q1, q2, ..., and receives <3/5 CPU, 1/15 bandwidth>. Only 2/3 of the link bandwidth is utilized.

3.3 Fluid Schedule with Optimal Efficiency

We next discuss the efficiency objective. While there are some schedules proposed in the operations research literature that can achieve the minimum makespan for a flow shop problem, none of them applies in the context of packet scheduling: they either assume no packet arrivals (e.g., [29]) or require full knowledge of future information (e.g., [5]). We propose a simple greedy fluid schedule as follows. For a given time instant, we define the system's instantaneous dominant throughput as the sum of the dominant shares allocated, i.e., Σ_{i∈B} d_i. Intuitively, by maximizing Σ_{i∈B} d_i at all times, one would expect a high average dominant throughput Σ_{i∈B} D_i / T, where T is the schedule makespan and D_i is the total dominant service (processing time) required by flow i. Given the dominant workload Σ_{i∈B} D_i, maximizing the average dominant throughput is equivalent to minimizing the schedule makespan T. Following this intuition, we propose a greedy fluid schedule that solves the following resource allocation problem to maximize the instantaneous dominant throughput at every time:

max_{d≥0} Σ_{i∈B} d_i
s.t. Σ_{i∈B} τ̃_{i,r} d_i ≤ 1,  r = 1, 2.   (17)

In case the optimal solution, denoted d* = (d*_1, ..., d*_n), is not unique, the schedule chooses the one with the maximum overall utilization:

max_{d*} Σ_r Σ_{i∈B} τ̃_{i,r} d*_i.   (18)

In the example of Fig. 1, solving (17) allocates Flow 1 the dominant share d_1 = 24/25 and Flow 2 the dominant share d_2 = 9/25. It is easy to check that both CPU and link bandwidth are fully utilized.

Compared to those schedules proposed in the operations research literature, the greedy schedule defined by (17) is particularly attractive for packet scheduling due to the following three properties. First, it is an online algorithm without any a priori knowledge of future packet arrivals. Further, among all packets that are backlogged, only the information regarding head-of-line packets is required. This suggests that the schedule only needs to maintain very simple per-flow state. Most importantly, the greedy schedule is more than a simple heuristic. Below we show that, under some practical assumptions, greedily maximizing the dominant throughput gives the minimum makespan.

Our analysis requires the following lemma, where we show that the schedule will not leave any resource idle, unless all flows bottleneck on the same resource, in which case the other resource cannot be fully utilized anyway. The proof is given in Appendix A.

Lemma 1. The fluid schedule defined by (17) fully utilizes both resources if there are two head-of-line packets with different dominant resources, i.e., there exist two flows j and l such that τ̃_{j,1} = 1 > τ̃_{j,2} and τ̃_{l,1} < τ̃_{l,2} = 1.

With Lemma 1, we analyze the makespan of the fluid schedule defined by (17). Following [13], we say a flow is dominant-resource monotonic if it does not change its dominant resource during backlogged periods. To make the analysis tractable, we assume that flows are dominant-resource monotonic. This is often true in practice, as packets in the same flow usually undergo the same processing, and hence have the same dominant resource. The following lemma, whose proof can be found in Appendix B, states the optimality of the fluid schedule in a static scenario without dynamic packet arrivals.

Lemma 2. For dominant-resource monotonic flows, the fluid schedule defined by (17) gives the minimum makespan if all packets are available at the beginning.

We now extend the results of Lemma 2 to an online case where packets dynamically arrive over time. The following theorem gives the optimality condition of the fluid schedule. The proof can be found in Appendix C.

Theorem 2. For dominant-resource monotonic flows, the fluid schedule defined by (17) gives the minimum makespan among all schedules if, after the system has two flows with different dominant resources, whenever a new flow arrives, there exist two backlogged flows with different dominant resources.

The optimality conditions required by Theorem 2 can be easily met in practice. Because the number of backlogged flows is usually large, we can almost always find two flows with different dominant resources. In fact, even in the very unfortunate case where all flows bottleneck on the same resource, the greedy fluid schedule does not deviate far from the optimum: no matter what fluid schedule is used, the bottleneck resource is always fully utilized when the system is non-empty and hence carries the same backlog, which is a dominant factor in determining the schedule makespan. The significance of Theorem 2 is that it connects makespan, a measure defined in the time domain, to the instantaneous dominant throughput, a measure defined in the space domain. More importantly, it shows that minimizing the former is, in a practical sense, equivalent to maximizing the latter at all times, without the need to know future packet arrivals. We shall use this intuition to strike a balance between fairness and efficiency in the next subsection.
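As a quick algebraic check (ours, not in the original text) of the greedy allocation quoted above for the example of Fig. 1: the throughput-maximizing solution of (17) makes both capacity constraints binding, so with normalized demands τ̃_1 = <2/3, 1> and τ̃_2 = <1, 1/9>,

$$\tfrac{2}{3}\,d_1 + d_2 = 1 \;(\text{CPU}), \qquad d_1 + \tfrac{1}{9}\,d_2 = 1 \;(\text{link}) \quad\Longrightarrow\quad d_1 = \tfrac{24}{25}, \;\; d_2 = \tfrac{9}{25},$$

which fully utilizes both resources and matches the per-cycle shares of the discrete schedule in Fig. 1b.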

3.4 Tradeoff between Fairness and Efficiency

When both fairness and efficiency are considered, we express the tradeoff between the two conflicting objectives as a constrained optimization problem: minimizing makespan under some specified fairness requirements. Recall that when perfect fairness is enforced, all flows receive the same dominant share d̄ computed by (16), i.e., d_i = d̄ for all i. When fairness is not a strict requirement, we introduce a fairness knob α ∈ [0, 1] to specify the allowed fairness degradation. In particular, an allocation d is called α-portion fair if d_i ≥ α d̄ for every backlogged flow i. In other words, each flow receives at least an α-portion of its fair dominant share d̄. A fluid schedule is called α-portion fair if it achieves an α-portion fair allocation at all times. By choosing different values for α, a network operator can precisely control the fairness degradation. As two extreme cases, setting α = 0 means that fairness is not considered at all; setting α = 1 means that perfect fairness must be enforced at all times.

Given the specified fairness knob α, the fluid schedule tries to minimize makespan under the corresponding α-portion fairness constraints. Since minimizing makespan is, in a practical sense, equivalent to maximizing the system's dominant throughput, we obtain a simple tradeoff heuristic that maximizes the dominant throughput, subject to the required α-portion fairness at every time t:

max_d Σ_{i∈B} d_i
s.t. Σ_{i∈B} τ̃_{i,r} d_i ≤ 1,  r = 1, 2,
     d_i ≥ α d̄,  i ∈ B,   (19)

where the fair share d̄ is given by (16). We see that the fluid schedule captures both DRGPS and the greedy schedule defined by (17) as special cases, with α = 1 and α = 0, respectively.

Special Solution Structure. The tradeoff problem (19) has a closed-form solution, based on which the tradeoff schedule can be easily computed. We first allocate each flow its guaranteed portion of dominant share, α d̄. We then denote by

d̂_i = d_i − α d̄   (20)

the bonus dominant share allocated to flow i. Substituting (20) into (19), we equivalently rewrite (19) as a problem of determining the bonus dominant share received by each flow:

max_{d̂≥0} Σ_{i∈B} d̂_i + |B| α d̄
s.t. Σ_{i∈B} τ̃_{i,r} d̂_i ≤ μ_r,  r = 1, 2,   (21)

where

μ_r = 1 − α d̄ Σ_{i∈B} τ̃_{i,r},  r = 1, 2,   (22)

is the remaining share of resource r after each flow receives its guaranteed dominant share α d̄. Without loss of generality, we sort all the backlogged flows based on the processing demands on the two types of resources required by their head-of-line packets, as follows:

τ̃_{1,1}/τ̃_{1,2} ≥ ... ≥ τ̃_{n,1}/τ̃_{n,2}.   (23)

The following theorem shows that at most two flows are awarded the bonus share at a time. Its proof is given in Appendix D.

Theorem 3. There exists an optimal solution d̂* to (21) where d̂*_i = 0 for all 2 ≤ i ≤ n − 1. In particular, d̂* is given in the following three cases:

Case 1: μ_1/μ_2 < τ̃_{n,1}/τ̃_{n,2}. In this case, resource 1 is fully utilized, with d̂*_n = μ_1/τ̃_{n,1} and d̂*_i = 0 for all i < n.

Case 2: μ_1/μ_2 > τ̃_{1,1}/τ̃_{1,2}. In this case, resource 2 is fully utilized, with d̂*_1 = μ_2/τ̃_{1,2} and d̂*_i = 0 for all i > 1.

Case 3: τ̃_{n,1}/τ̃_{n,2} ≤ μ_1/μ_2 ≤ τ̃_{1,1}/τ̃_{1,2}. In this case, both resources are fully utilized, and we have

d̂*_i = (μ_1 τ̃_{n,2} − μ_2 τ̃_{n,1}) / (τ̃_{1,1} τ̃_{n,2} − τ̃_{1,2} τ̃_{n,1}),  i = 1;
d̂*_i = (μ_2 τ̃_{1,1} − μ_1 τ̃_{1,2}) / (τ̃_{1,1} τ̃_{n,2} − τ̃_{1,2} τ̃_{n,1}),  i = n;
d̂*_i = 0,  otherwise.

Once the optimal bonus dominant share has been determined as shown above, the optimal solution d* to (19), which is the dominant share allocated to each flow, can be easily computed as the sum of the bonus share and the guaranteed share:

d*_i = d̂*_i + α d̄,  for all i.   (24)

We give an intuitive explanation of Theorem 3 as follows. The first two cases correspond to the scenario where, after each flow receives its guaranteed share, the remaining amounts of the two types of resources are unbalanced and cannot be fully utilized simultaneously. In this case, the schedule awards the bonus share to the flow (either Flow 1 or Flow n) whose processing demands can better utilize the remaining resources. The third case covers the scenario where the remaining amounts of the two types of resources are balanced, and can be fully utilized when the system is non-empty. In this case, they are allocated to the two flows (Flow 1 and Flow n) with the most complementary resource demands as their bonus shares.

Theorem 3 reveals an important structure: at most two flows are allocated more dominant share than the others. We refer to these flows as the favored flows and all the others as the regular flows. We shall show in Section 5 that this structure leads to an efficient O(log n) implementation of the fluid schedule.

4. PACKET-BY-PACKET TRACKING

So far, all our discussions have been based on an idealized fluid model. In practice, however, packets are processed as separate entities. In this section, we present a discrete tracking algorithm that implements the fluid schedule as a packet-by-packet schedule in practice. We show that the discrete schedule is asymptotically close to the fluid schedule, in terms of both fairness and efficiency. We start with a comparison between two typical tracking approaches.

4.1 Start-Time Tracking vs. Finish-Time Tracking

Two common tracking algorithms may be used to implement a fluid schedule in practice: start-time tracking and finish-time tracking. The former tracks the order of packet start times; among all packets that have already started in the fluid schedule, the one that starts the earliest is scheduled first. Finish-time tracking, on the other hand, assigns the highest scheduling priority to the packet that completes service the earliest in the fluid schedule. In traditional single-resource fair queueing, FQS [16] uses the former approach to track GPS, while WFQ [1, 9, 21] adopts the latter approach. While both algorithms closely track the fluid schedule of fair queueing, only start-time tracking is well defined for the tradeoff schedule given by (19). This is due to the fact that, in the tradeoff schedule, future traffic arrivals may lead to a different allocation of packet processing rates and may subsequently change the finish times of current packets. As a result, determining the order of finish times requires future traffic arrival information and hence is unrealistic. (This is not a problem for single-resource fair queueing, as different flows are allocated the same processing rate, so that future traffic arrivals will not affect the order of finish times of current packets.) Start-time tracking avoids this problem, as packets are scheduled only after they start in the fluid schedule. For this reason, we use start-time tracking to implement the fluid schedule.

We say a discrete schedule and a fluid schedule correspond to each other if the former tracks the latter by packet start time. Specifically, we maintain the fluid schedule in the background. Whenever there is a scheduling opportunity, among all head-of-line packets that have already started in the fluid schedule, the one that starts the earliest is chosen. Below we show that this discrete schedule is asymptotically close to its corresponding fluid schedule.

4.2 Performance Analysis

To analyze the performance of start-time tracking, we introduce the following notations. Let τ_max be the maximum packet processing time required by any packet on any resource. Let n be the maximum number of flows that are concurrently backlogged. Let T_F be the makespan of the fluid schedule, and T_D the makespan of its corresponding discrete schedule. All proofs are given in Appendix E. The following theorem bounds the difference between the makespan of the fluid schedule and that of its corresponding discrete schedule.

Theorem 4. For the fluid schedule with α > 0 and its corresponding discrete schedule, we have

T_D ≤ T_F + n τ_max.   (25)

The error bound n τ_max can be intuitively explained as the total packet processing time required by all n concurrent flows, each sending only one packet. In practice, the number of packets a flow sends is usually significantly larger than one. As a result, the traffic makespan is significantly larger than the error bound, i.e., T_F >> n τ_max. Theorem 4 essentially indicates that, in terms of makespan, the two schedules are asymptotically close to each other.

We next analyze the fairness performance of the discrete schedule by comparing the dominant services a flow receives under both schedules. In particular, let D_i^F(0, t) be the dominant service flow i receives in (0, t) under the fluid schedule, and D_i^D(0, t) the dominant service flow i receives in (0, t) under the corresponding discrete schedule. The following theorem shows that flows receive approximately the same dominant services under both schedules.

Theorem 5. For the fluid schedule with α > 0 and its corresponding discrete schedule, the following inequality holds for any flow i and any time t:

D_i^F(0, t) − 2(n − 1) τ_max ≤ D_i^D(0, t) ≤ D_i^F(0, t) + τ_max.   (26)

In other words, the difference between the dominant services a flow receives under the two corresponding schedules is bounded by a constant amount, irrespective of the time t. Over the long run, the discrete schedule achieves the same α-portion fairness as its corresponding fluid schedule. To summarize, start-time tracking retains both the efficiency and fairness properties of its corresponding fluid schedule in the asymptotic regime.

5. AN O(log n) IMPLEMENTATION

To implement the aforementioned start-time tracking algorithm, two modules are required: packet profiling and fluid scheduling. The former estimates the packet processing time on both CPU and link bandwidth; the latter maintains the fluid schedule as a reference system based on the packet profiling results. We show in this section that packet profiling can be quickly accomplished in O(1) time using a simple approach proposed in [13].
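The profiling step (detailed in 5.1 below) boils down to two constant-time formulas, sketched here in C++ for concreteness; the type and function names are our own, not the prototype's API:

```cpp
// O(1) packet profiling under the linear CPU model of [13]: CPU time is
// a per-processing-type linear function a*l + b of packet size l, with
// (a, b) fitted offline; transmission time is the size over the link rate.
struct ProfileCoeffs {
    double a;  // CPU time per byte for this processing type (e.g., IPsec)
    double b;  // fixed per-packet CPU overhead
};

struct ProcessingTimes {
    double cpu;   // estimated CPU processing time: a*l + b
    double link;  // transmission time: packet size over the link rate
};

ProcessingTimes profilePacket(double sizeBytes, const ProfileCoeffs& c,
                              double linkBytesPerSec) {
    ProcessingTimes t;
    t.cpu  = c.a * sizeBytes + c.b;       // linear model fitted offline
    t.link = sizeBytes / linkBytesPerSec; // outgoing bandwidth known a priori
    return t;
}
```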
The main challenge comes from the complexity of maintaining the fluid schedule, where a direct implementation requires O(n) time. Here, n is the number of backlogged flows. We give an O(log n) implementation based on an approach similar to virtual time. We shall show in Section 6 that the implementation can be easily prototyped in the Click modular router [19].

5.1 Packet Profiling

As pointed out by Ghodsi et al. [13], any multi-resource fair queueing algorithm, including our fluid schedule, requires knowledge of the packet processing time on each resource. Fortunately, as shown in [13], CPU processing time can be accurately estimated as a linear function of packet size. Specifically, for a packet of size l, the CPU processing time is estimated as al + b, where a and b are coefficients depending on the type of packet processing (e.g., IPsec). We have validated this linear model through an upfront experiment using Click [19]. For each type of packet processing, we measure the exact CPU processing time required by packets of different sizes. This allows us to determine the coefficients a and b. We fit such a linear model into the scheduler and use it to estimate the CPU processing time required by a packet. As for the packet transmission time, the estimate is simply the packet size divided by the outgoing bandwidth, which is known a priori.

5.2 Direct Implementation of Fluid Scheduling

Based on the packet profiling results, the fluid schedule is constructed and maintained by the fluid scheduler. In particular, we need to determine the next packet that starts in the fluid schedule. This requires tracking the work progress of all n flows. Below we give a direct implementation that will be used later in our virtual time implementation.

For each flow i, we record d_i, the dominant share the flow receives in the fluid schedule at the current time, computed by (24). We also record R_i, the remaining dominant processing time required by the head-of-line packet of the flow at the current time. For flow i, its head-of-line packet will finish in R_i/d_i time if no event occurs before then. An event is either a packet departure or a packet becoming the new head-of-line in the fluid schedule. Either of them may change the head-of-line packet of a flow, leading to different coefficients of the tradeoff problem (19). With d_i and R_i, we can accurately track the work progress of flow i on an event-driven basis. Specifically, upon the occurrence of an event, let Δt be the time elapsed since the last update. If Δt < R_i/d_i, meaning that the event occurs before the head-of-line packet finishes, we update R_i ← R_i − d_i Δt. If Δt = R_i/d_i, meaning that the event occurs at the time when the head-of-line packet finishes, we check whether flow i has a next packet p to process. If it does, then packet p becomes the new head-of-line and should start in the fluid schedule, and we update R_i to the dominant processing time required by p. Otherwise, we reset R_i ← 0, and flow i leaves the fluid system. We also recompute d_i after R_i is updated. (Note that it is impossible to have Δt > R_i/d_i.)

However, purely relying on the approach above to track the work progress of all n flows is highly inefficient. Whenever an event occurs, each flow must be updated individually, which requires at least O(n) time per event and is too expensive. We next introduce a more efficient implementation that requires the above procedure for at most two flows.

5.3 Virtual Time Implementation of Fluid Scheduling

To avoid the high complexity of the direct implementation above, we note, by Theorem 3, that at most two flows are favored and allocated more dominant share than the others. Therefore, it suffices to maintain at most three dominant shares at a time: two for the favored flows and one for the other, regular flows. For regular flows, we track their work progress using an approach similar to the virtual time implementation of GPS [9, 21]. Our intuition is that, by Theorem 3, all the regular flows are allocated the same dominant share, and their scheduling resembles fair queueing. For favored flows, since there are at most two of them, we track their work progress directly, using the direct implementation above. Our approach is detailed below.

5.3.1 Identifying Favored and Regular Flows

We first discuss how favored and regular flows can be quickly identified upon the occurrence of an event. By Theorem 3, it suffices to sort flows in order (23) and examine the three cases. Flows that receive a bonus share (i.e., d̂_i > 0) are favored. Note that the entire computation requires only information regarding the head-of-line packets of the first and the last flows in order (23) (τ̃_{1,r} and τ̃_{n,r}, the normalized dominant processing times). We store all the head-of-line packets in a double-ended priority queue maintained by a min-max heap for fast retrieval, where the packet order is defined by (23). This allows us to apply Theorem 3 and identify the favored and regular flows in O(log n) time.

5.3.2 Tracking Favored Flows

For favored flows, because there are at most two of them, we track their work progress using the direct implementation described in 5.2, where we record d_i and R_i for each favored flow i. It is easy to see that the update complexity is dominated by the computation of d_i. As mentioned in the previous discussion, this can be done in O(log n) time by Theorem 3. Also, since there are at most two favored flows, the overall tracking complexity remains O(log n) per event.
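For concreteness, the per-flow bookkeeping of 5.2, as applied to a favored flow, can be sketched as follows; the struct, names, and floating-point tolerance are our own simplifications:

```cpp
// Direct progress tracking for one favored flow (5.2). d is the current
// dominant share from (24); R is the remaining dominant processing time
// of the head-of-line packet.
struct FavoredFlow {
    double d;
    double R;
};

// Advance the flow by dt, the real time elapsed since the last event.
// Returns true if the head-of-line packet finishes at this event
// (dt == R/d); the caller then loads the next packet's dominant time
// into R (or retires the flow) and recomputes d via Theorem 3.
// dt > R/d cannot occur, since a packet departure is itself an event.
bool advanceFavored(FavoredFlow& f, double dt) {
    f.R -= f.d * dt;      // dominant work completed in the fluid schedule
    return f.R <= 1e-12;  // finished, up to floating-point tolerance
}
```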
5.3.3 Tracking Regular Flows

For regular flows, since they receive the same dominant share, their scheduling resembles fair queueing. We hence track their work progress using virtual time [1, 9, 21]. Specifically, we define the virtual time V(t) as a function of real time t evolving as follows:

V(0) = 0;  V′(t) = α d̄^t,  t > 0.   (27)

Here, d̄^t is the fair dominant share computed by (16) at time t, and is fixed between two consecutive events; α d̄^t is the dominant share each regular flow receives. (We restore the superscript t here to emphasize that the fair dominant share computed by (16) may change over time.) Thus, V can be interpreted as increasing at the marginal rate at which regular flows receive dominant services. Each regular flow i also maintains a virtual finish time F_i, indicating the virtual time at which its head-of-line packet finishes in the fluid schedule. The virtual finish time F_i is updated as follows when flow i has a new head-of-line packet p at time t:

F_i = V(t) + τ(p),   (28)

where τ(p) is the dominant packet processing time required by p. Among all the regular flows, the one with the smallest F_i has its head-of-line packet finishing first in the fluid schedule. Unless some event occurs in between, at time t, the next packet departure for the regular flows would be in t_N = (min_i F_i − V(t)) / (α d̄) time.

Using the virtual time defined by (27), we can accurately track the work progress of regular flows on an event-driven basis. Specifically, upon the occurrence of an event at time t, let t_0 be the time of the last update, and Δt = t − t_0 the time elapsed since the last update. If Δt < t_N, meaning that the event occurs before the next packet departure of regular flows, we simply update the virtual time following (27):

V(t) = V(t_0) + α d̄ Δt.   (29)

If Δt = t_N, then the event occurs at the time when a packet of a regular flow, say flow i, finishes in the fluid schedule. In addition to updating the virtual time, we check whether flow i has a next packet p to process. If it does, meaning that packet p should start in the fluid schedule, we update its virtual finish time F_i following (28). Otherwise, flow i departs the system. We also recompute d̄ by (16). The tracking complexity is dominated by the computation of the minimum virtual finish time, i.e., min_i F_i. By storing the F_i's in a priority queue maintained by a heap, we see that the tracking complexity is O(log n) per event.

5.3.4 Handling Identity Switching

We note that the identity of a flow is not fixed: upon the occurrence of an event, a favored flow may switch to a regular flow, and vice versa. We show that such identity switching can also be easily handled in O(log n) time. We first consider a favored flow i switching to a regular one at time t, which requires the computation of the virtual finish time F_i. Recall that we have recorded R_i, the remaining dominant processing time required by the head-of-line packet, for flow i while it was favored. By definition, the virtual finish time F_i can be simply computed as

F_i = V(t) + R_i.   (30)

Adding F_i to the heap takes at most O(log n) time. We next consider a regular flow i switching to a favored one at time t, which requires the computation of R_i. Recall that we have recorded the virtual finish time F_i for flow i. By definition, the remaining dominant processing time required by its head-of-line packet is simply

R_i = F_i − V(t),   (31)

which is a dual of (30). We also need to remove the virtual finish time F_i from the heap. To do so, we maintain an index for each regular flow, recording the location of its virtual finish time stored in the heap. Following this index, we can easily locate the position of F_i and delete it from the heap, followed by some standard trickle-down operations to preserve the heap property in O(log n) time.

To summarize, our approach maintains the fluid schedule by identifying favored and regular flows, tracking their work progress, and handling the potential identity switching. Any of these operations can be accomplished in O(log n) time. As a result, maintaining the fluid schedule takes O(log n) time per event.
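To make the regular-flow machinery of 5.3.3 concrete, the following compact sketch maintains V(t), the heap of virtual finish times, and the next-departure time t_N. It assumes d̄ is recomputed elsewhere between events, and omits the indexed heap deletion needed for identity switching (5.3.4); all names are illustrative:

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Virtual-time bookkeeping for regular flows, per (27)-(29).
struct FluidTracker {
    double V = 0.0;          // virtual time V(t), per (27)
    double lastUpdate = 0.0; // real time of the last update
    double alpha = 1.0;      // fairness knob
    double dbar = 1.0;       // fair dominant share (16); recomputed elsewhere
    // min-heap of (virtual finish time F_i, flow id), per (28)
    std::priority_queue<std::pair<double, int>,
                        std::vector<std::pair<double, int>>,
                        std::greater<>> heap;

    // Advance virtual time to real time t, per (29): V grows at rate
    // alpha * dbar between consecutive events.
    void advance(double t) {
        V += alpha * dbar * (t - lastUpdate);
        lastUpdate = t;
    }

    // A new head-of-line packet with dominant processing time tau starts
    // for flow i at real time t: set F_i = V(t) + tau, per (28).
    void newHeadOfLine(int i, double tau, double t) {
        advance(t);
        heap.push({V + tau, i});
    }

    // Real time until the next regular-flow departure,
    // t_N = (min_i F_i - V(t)) / (alpha * dbar); assumes a non-empty heap.
    double nextDeparture() const {
        return (heap.top().first - V) / (alpha * dbar);
    }
};
```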

5.4 Start-Time Tracking and Complexity

With the fluid schedule maintained as a reference system, the implementation of start-time tracking is straightforward. Whenever a packet starts in the fluid schedule, it is added to a FIFO queue. Upon a scheduling opportunity, the scheduler polls the queue and retrieves a packet to schedule. This ensures that packets are scheduled in order of their start times in the fluid schedule. To minimize the update frequency, the scheduler lazily updates the fluid schedule only when the FIFO queue is empty.

We now analyze the scheduling complexity of the aforementioned implementation. The scheduling decisions are made by updating the fluid schedule on an event-driven basis. For each event, the update takes O(log n) time, where n is the number of backlogged flows. Note that there are only two types of events in the fluid schedule: new head-of-line and packet departure. Because a packet served in the fluid schedule triggers exactly these two events over the entire scheduling period, scheduling N packets triggers 2N updates in the fluid schedule, with overall complexity O(2N log n). On average, the scheduling decision is made in O(2 log n) time per packet, the same order as that of DRFQ [13].

6. EVALUATION

We evaluate the tradeoff algorithm via both our prototype implementation and trace-driven simulation. We use the prototype implementation to investigate the detailed functioning of the algorithm, in a microscopic view. We then take a macroscopic view to evaluate the algorithm using trace-driven simulation, where flows dynamically join and depart the system.

6.1 Experimental Results

We have prototyped our tradeoff algorithm as a new scheduler in the Click modular router [19], based on the O(log n) implementation given in the previous section. The scheduler classifies packets into flows (based on the IP prefix and port number) and identifies the types of packet processing based on the port number specified by a flow class table. The scheduler also exposes an interface that allows the operator to dynamically configure the fairness knob α. Our implementation consists of roughly 1,000 lines of C++ code.

We run our Click implementation in user mode on a Dell PowerEdge server with an Intel Xeon 3.0 GHz processor and a 1 Gbps Ethernet interface. To make fairness relevant, we throttle the outgoing bandwidth to 200 Mbps while keeping the inbound bandwidth as is. We also throttle the Click module to use only 20% CPU, so that CPU could also be a bottleneck. We configure three packet processing modules in Click to emulate a multi-functioning middlebox: packet checking, statistical monitoring, and IPsec. The former two modules are bandwidth-bound, though statistical monitoring requires more CPU processing time than packet checking does. The IPsec module encrypts packets using AES (128-bit key length) and is CPU-bound. We configure another server as a traffic source, initiating 60 UDP flows, each sending fixed-size packets at a fixed rate to the Click router. The first 20 flows pass through the packet checking module; the next 20 flows pass through the statistical monitoring module; and the last 20 flows pass through the IPsec module.

6.1.1 Fairness-Efficiency Tradeoff

We first evaluate the achieved tradeoff between schedule fairness and makespan. To fairly compare the makespan at different fairness levels, it is critical to ensure the same traffic input when running the algorithm with different values of the fairness knob α. Therefore, we initially consider an idealized scenario where each flow queue has infinite capacity and never drops packets. Table 2 lists the observed makespans under various fairness requirements, in an experiment where each flow keeps sending packets for 10 seconds. We see that, as expected, trading off some level of fairness leads to a shorter makespan and higher efficiency.
Table 2: Schedule makespan observed in Click at different fairness levels, with infinite queue capacity. (Columns: fairness knob α; makespan in seconds; normalized makespan.)

Figure 3: Overall resource utilization (CPU and bandwidth, in percent) observed in Click over time, for α = 0.85, 0.90, 0.95, and 1. No packet drops.

Furthermore, the marginal improvement of efficiency is decreasing. This suggests that one does not need to compromise too much fairness in order to achieve high efficiency. In our experiment, trading off 15% of fairness shortens the makespan by 15.3% compared with the strictly fair schedule (α = 1), which is equivalent to an 18.1% bandwidth throughput enhancement and is near-optimal, as seen in Table 2.

Fig. 3 gives a detailed look into the achieved resource utilization over time, at four fairness levels. We see that strictly fair queueing (α = 1) wastes 30% of CPU cycles, leaving the bandwidth as the bottleneck at the beginning. This situation persists until the bandwidth-bound flows finish, at which time the bottleneck shifts to the CPU. By relaxing fairness, CPU-bound flows receive more service, leading to a steady increase of CPU utilization up to 100%. Meanwhile, bandwidth-bound flows experience slightly longer completion times due to the fairness tradeoff.

We now verify the fairness guarantee. We run the scheduler at various fairness levels. At each level, for each flow, we measure its received dominant share every second for the first 20 seconds, during which all flows are backlogged. Fig. 4 shows the results, where each cross ("x") corresponds to the dominant share of a flow measured in one second. As expected, under strict fairness (α = 1), all flows receive the same dominant share (around 2%). As α decreases, the fairness requirement relaxes. Some flows are hence favored and allocated more dominant share, while others receive less. However, the minimum dominant share a flow receives is lower bounded by the α-portion of the fair share, shown as the solid line in Fig. 4. This shows that the algorithm is correctly operating at the desired fairness level.

We next extend the experiment to a more practical setup, where each flow queue has a limited capacity and drops packets when it is full. We set the queue size to 200 packets for each flow and repeat the previous experiments. In this case, comparing makespan...


More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

Simulation Based Analysis of FAST TCP using OMNET++

Simulation Based Analysis of FAST TCP using OMNET++ Smulaton Based Analyss of FAST TCP usng OMNET++ Umar ul Hassan 04030038@lums.edu.pk Md Term Report CS678 Topcs n Internet Research Sprng, 2006 Introducton Internet traffc s doublng roughly every 3 months

More information

5 The Primal-Dual Method

5 The Primal-Dual Method 5 The Prmal-Dual Method Orgnally desgned as a method for solvng lnear programs, where t reduces weghted optmzaton problems to smpler combnatoral ones, the prmal-dual method (PDM) has receved much attenton

More information

A fair buffer allocation scheme

A fair buffer allocation scheme A far buffer allocaton scheme Juha Henanen and Kalev Klkk Telecom Fnland P.O. Box 228, SF-330 Tampere, Fnland E-mal: juha.henanen@tele.f Abstract An approprate servce for data traffc n ATM networks requres

More information

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and

More information

Load-Balanced Anycast Routing

Load-Balanced Anycast Routing Load-Balanced Anycast Routng Chng-Yu Ln, Jung-Hua Lo, and Sy-Yen Kuo Department of Electrcal Engneerng atonal Tawan Unversty, Tape, Tawan sykuo@cc.ee.ntu.edu.tw Abstract For fault-tolerance and load-balance

More information

Quantifying Responsiveness of TCP Aggregates by Using Direct Sequence Spread Spectrum CDMA and Its Application in Congestion Control

Quantifying Responsiveness of TCP Aggregates by Using Direct Sequence Spread Spectrum CDMA and Its Application in Congestion Control Quantfyng Responsveness of TCP Aggregates by Usng Drect Sequence Spread Spectrum CDMA and Its Applcaton n Congeston Control Mehd Kalantar Department of Electrcal and Computer Engneerng Unversty of Maryland,

More information

A Sub-Critical Deficit Round-Robin Scheduler

A Sub-Critical Deficit Round-Robin Scheduler A Sub-Crtcal Defct ound-obn Scheduler Anton Kos, Sašo Tomažč Unversty of Ljubljana, Faculty of Electrcal Engneerng, Ljubljana, Slovena E-mal: anton.kos@fe.un-lj.s Abstract - A scheduler s an essental element

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS Copng wth NP-completeness 11. APPROXIMATION ALGORITHMS load balancng center selecton prcng method: vertex cover LP roundng: vertex cover generalzed load balancng knapsack problem Q. Suppose I need to solve

More information

Parallelism for Nested Loops with Non-uniform and Flow Dependences

Parallelism for Nested Loops with Non-uniform and Flow Dependences Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr

More information

ARTICLE IN PRESS. Signal Processing: Image Communication

ARTICLE IN PRESS. Signal Processing: Image Communication Sgnal Processng: Image Communcaton 23 (2008) 754 768 Contents lsts avalable at ScenceDrect Sgnal Processng: Image Communcaton journal homepage: www.elsever.com/locate/mage Dstrbuted meda rate allocaton

More information

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1) Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A

More information

A Hybrid Genetic Algorithm for Routing Optimization in IP Networks Utilizing Bandwidth and Delay Metrics

A Hybrid Genetic Algorithm for Routing Optimization in IP Networks Utilizing Bandwidth and Delay Metrics A Hybrd Genetc Algorthm for Routng Optmzaton n IP Networks Utlzng Bandwdth and Delay Metrcs Anton Redl Insttute of Communcaton Networks, Munch Unversty of Technology, Arcsstr. 21, 80290 Munch, Germany

More information

Lecture 7 Real Time Task Scheduling. Forrest Brewer

Lecture 7 Real Time Task Scheduling. Forrest Brewer Lecture 7 Real Tme Task Schedulng Forrest Brewer Real Tme ANSI defnes real tme as A Real tme process s a process whch delvers the results of processng n a gven tme span A data may requre processng at a

More information

Network Coding as a Dynamical System

Network Coding as a Dynamical System Network Codng as a Dynamcal System Narayan B. Mandayam IEEE Dstngushed Lecture (jont work wth Dan Zhang and a Su) Department of Electrcal and Computer Engneerng Rutgers Unversty Outlne. Introducton 2.

More information

Learning the Kernel Parameters in Kernel Minimum Distance Classifier

Learning the Kernel Parameters in Kernel Minimum Distance Classifier Learnng the Kernel Parameters n Kernel Mnmum Dstance Classfer Daoqang Zhang 1,, Songcan Chen and Zh-Hua Zhou 1* 1 Natonal Laboratory for Novel Software Technology Nanjng Unversty, Nanjng 193, Chna Department

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors We are IntechOpen, the world s leadng publsher of Open Access books Bult by scentsts, for scentsts 3,500 108,000 1.7 M Open access books avalable Internatonal authors and edtors Downloads Our authors are

More information

Internet Traffic Managers

Internet Traffic Managers Internet Traffc Managers Ibrahm Matta matta@cs.bu.edu www.cs.bu.edu/faculty/matta Computer Scence Department Boston Unversty Boston, MA 225 Jont work wth members of the WING group: Azer Bestavros, John

More information

Module Management Tool in Software Development Organizations

Module Management Tool in Software Development Organizations Journal of Computer Scence (5): 8-, 7 ISSN 59-66 7 Scence Publcatons Management Tool n Software Development Organzatons Ahmad A. Al-Rababah and Mohammad A. Al-Rababah Faculty of IT, Al-Ahlyyah Amman Unversty,

More information

Comparison of Heuristics for Scheduling Independent Tasks on Heterogeneous Distributed Environments

Comparison of Heuristics for Scheduling Independent Tasks on Heterogeneous Distributed Environments Comparson of Heurstcs for Schedulng Independent Tasks on Heterogeneous Dstrbuted Envronments Hesam Izakan¹, Ath Abraham², Senor Member, IEEE, Václav Snášel³ ¹ Islamc Azad Unversty, Ramsar Branch, Ramsar,

More information

Resource and Virtual Function Status Monitoring in Network Function Virtualization Environment

Resource and Virtual Function Status Monitoring in Network Function Virtualization Environment Journal of Physcs: Conference Seres PAPER OPEN ACCESS Resource and Vrtual Functon Status Montorng n Network Functon Vrtualzaton Envronment To cte ths artcle: MS Ha et al 2018 J. Phys.: Conf. Ser. 1087

More information

Avoiding congestion through dynamic load control

Avoiding congestion through dynamic load control Avodng congeston through dynamc load control Vasl Hnatyshn, Adarshpal S. Seth Department of Computer and Informaton Scences, Unversty of Delaware, Newark, DE 976 ABSTRACT The current best effort approach

More information

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z.

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z. TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS Muradalyev AZ Azerbajan Scentfc-Research and Desgn-Prospectng Insttute of Energetc AZ1012, Ave HZardab-94 E-mal:aydn_murad@yahoocom Importance of

More information

A Quantitative Assured Forwarding Service

A Quantitative Assured Forwarding Service TO APPEAR IN PROCEEDINGS OF IEEE INFOCOM 00, c IEEE A Quanttatve Assured Forwardng Servce Ncolas Chrstn, Jörg Lebeherr, and Tarek F. Abdelzaher Department of Computer Scence Unversty of Vrgna P.O. Box

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

RAP. Speed/RAP/CODA. Real-time Systems. Modeling the sensor networks. Real-time Systems. Modeling the sensor networks. Real-time systems:

RAP. Speed/RAP/CODA. Real-time Systems. Modeling the sensor networks. Real-time Systems. Modeling the sensor networks. Real-time systems: Speed/RAP/CODA Presented by Octav Chpara Real-tme Systems Many wreless sensor network applcatons requre real-tme support Survellance and trackng Border patrol Fre fghtng Real-tme systems: Hard real-tme:

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

Complex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following.

Complex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following. Complex Numbers The last topc n ths secton s not really related to most of what we ve done n ths chapter, although t s somewhat related to the radcals secton as we wll see. We also won t need the materal

More information

An Entropy-Based Approach to Integrated Information Needs Assessment

An Entropy-Based Approach to Integrated Information Needs Assessment Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology

More information

Chapter 6 Programmng the fnte element method Inow turn to the man subject of ths book: The mplementaton of the fnte element algorthm n computer programs. In order to make my dscusson as straghtforward

More information

Virtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory

Virtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory Background EECS. Operatng System Fundamentals No. Vrtual Memory Prof. Hu Jang Department of Electrcal Engneerng and Computer Scence, York Unversty Memory-management methods normally requres the entre process

More information

Greedy Technique - Definition

Greedy Technique - Definition Greedy Technque Greedy Technque - Defnton The greedy method s a general algorthm desgn paradgm, bult on the follong elements: confguratons: dfferent choces, collectons, or values to fnd objectve functon:

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

GSLM Operations Research II Fall 13/14

GSLM Operations Research II Fall 13/14 GSLM 58 Operatons Research II Fall /4 6. Separable Programmng Consder a general NLP mn f(x) s.t. g j (x) b j j =. m. Defnton 6.. The NLP s a separable program f ts objectve functon and all constrants are

More information

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated.

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated. Some Advanced SP Tools 1. umulatve Sum ontrol (usum) hart For the data shown n Table 9-1, the x chart can be generated. However, the shft taken place at sample #21 s not apparent. 92 For ths set samples,

More information

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7 Optmzed Regonal Cachng for On-Demand Data Delvery Derek L. Eager Mchael C. Ferrs Mary K. Vernon Unversty of Saskatchewan Unversty of Wsconsn Madson Saskatoon, SK Canada S7N 5A9 Madson, WI 5376 eager@cs.usask.ca

More information

Routing in Degree-constrained FSO Mesh Networks

Routing in Degree-constrained FSO Mesh Networks Internatonal Journal of Hybrd Informaton Technology Vol., No., Aprl, 009 Routng n Degree-constraned FSO Mesh Networks Zpng Hu, Pramode Verma, and James Sluss Jr. School of Electrcal & Computer Engneerng

More information

Cache Performance 3/28/17. Agenda. Cache Abstraction and Metrics. Direct-Mapped Cache: Placement and Access

Cache Performance 3/28/17. Agenda. Cache Abstraction and Metrics. Direct-Mapped Cache: Placement and Access Agenda Cache Performance Samra Khan March 28, 217 Revew from last lecture Cache access Assocatvty Replacement Cache Performance Cache Abstracton and Metrcs Address Tag Store (s the address n the cache?

More information

ELEC 377 Operating Systems. Week 6 Class 3

ELEC 377 Operating Systems. Week 6 Class 3 ELEC 377 Operatng Systems Week 6 Class 3 Last Class Memory Management Memory Pagng Pagng Structure ELEC 377 Operatng Systems Today Pagng Szes Vrtual Memory Concept Demand Pagng ELEC 377 Operatng Systems

More information

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) , Fax: (370-5) ,

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) , Fax: (370-5) , VRT012 User s gude V0.1 Thank you for purchasng our product. We hope ths user-frendly devce wll be helpful n realsng your deas and brngng comfort to your lfe. Please take few mnutes to read ths manual

More information

DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT

DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT Bran J. Wolf, Joseph L. Hammond, and Harlan B. Russell Dept. of Electrcal and Computer Engneerng, Clemson Unversty,

More information

Enhancing Class-Based Service Architectures with Adaptive Rate Allocation and Dropping Mechanisms

Enhancing Class-Based Service Architectures with Adaptive Rate Allocation and Dropping Mechanisms Enhancng Class-Based Servce Archtectures wth Adaptve Rate Allocaton and Droppng Mechansms Ncolas Chrstn, Member, IEEE, Jörg Lebeherr, Senor Member, IEEE, and Tarek Abdelzaher, Member, IEEE Abstract Class-based

More information

CMPS 10 Introduction to Computer Science Lecture Notes

CMPS 10 Introduction to Computer Science Lecture Notes CPS 0 Introducton to Computer Scence Lecture Notes Chapter : Algorthm Desgn How should we present algorthms? Natural languages lke Englsh, Spansh, or French whch are rch n nterpretaton and meanng are not

More information

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications Effcent Loa-Balance IP Routng Scheme Base on Shortest Paths n Hose Moel E Ok May 28, 2009 The Unversty of Electro-Communcatons Ok Lab. Semnar, May 28, 2009 1 Outlne Backgroun on IP routng IP routng strategy

More information

TECHNICAL REPORT AN OPTIMAL DISTRIBUTED PROTOCOL FOR FAST CONVERGENCE TO MAXMIN RATE ALLOCATION. Jordi Ros and Wei K Tsai

TECHNICAL REPORT AN OPTIMAL DISTRIBUTED PROTOCOL FOR FAST CONVERGENCE TO MAXMIN RATE ALLOCATION. Jordi Ros and Wei K Tsai TECHNICAL REPORT AN OPTIMAL DISTRIUTED PROTOCOL FOR FAST CONVERGENCE TO MAXMIN RATE ALLOCATION Jord Ros and We K Tsa Department of Electrcal and Computer Engneerng Unversty of Calforna, Irvne 1 AN OPTIMAL

More information

Feature Reduction and Selection

Feature Reduction and Selection Feature Reducton and Selecton Dr. Shuang LIANG School of Software Engneerng TongJ Unversty Fall, 2012 Today s Topcs Introducton Problems of Dmensonalty Feature Reducton Statstc methods Prncpal Components

More information

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network S. Sudha and N. Ammasagounden Natonal Insttute of Technology, Truchrappall,

More information

Concurrent Apriori Data Mining Algorithms

Concurrent Apriori Data Mining Algorithms Concurrent Apror Data Mnng Algorthms Vassl Halatchev Department of Electrcal Engneerng and Computer Scence York Unversty, Toronto October 8, 2015 Outlne Why t s mportant Introducton to Assocaton Rule Mnng

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

Advanced Computer Networks

Advanced Computer Networks Char of Network Archtectures and Servces Department of Informatcs Techncal Unversty of Munch Note: Durng the attendance check a stcker contanng a unque QR code wll be put on ths exam. Ths QR code contans

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

Adaptive Load Shedding for Windowed Stream Joins

Adaptive Load Shedding for Windowed Stream Joins Adaptve Load Sheddng for Wndowed Stream Jons Buğra Gedk, Kun-Lung Wu, Phlp S. Yu, Lng Lu College of Computng, Georga Tech Atlanta GA 333 {bgedk,lnglu}@cc.gatech.edu IBM T. J. Watson Research Center Yorktown

More information

Optimal Workload-based Weighted Wavelet Synopses

Optimal Workload-based Weighted Wavelet Synopses Optmal Workload-based Weghted Wavelet Synopses Yoss Matas School of Computer Scence Tel Avv Unversty Tel Avv 69978, Israel matas@tau.ac.l Danel Urel School of Computer Scence Tel Avv Unversty Tel Avv 69978,

More information

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search Sequental search Buldng Java Programs Chapter 13 Searchng and Sortng sequental search: Locates a target value n an array/lst by examnng each element from start to fnsh. How many elements wll t need to

More information

Adaptive Load Shedding for Windowed Stream Joins

Adaptive Load Shedding for Windowed Stream Joins Adaptve Load Sheddng for Wndowed Stream Jons Bu gra Gedk College of Computng, GaTech bgedk@cc.gatech.edu Kun-Lung Wu, Phlp Yu T.J. Watson Research, IBM {klwu,psyu}@us.bm.com Lng Lu College of Computng,

More information

Private Information Retrieval (PIR)

Private Information Retrieval (PIR) 2 Levente Buttyán Problem formulaton Alce wants to obtan nformaton from a database, but she does not want the database to learn whch nformaton she wanted e.g., Alce s an nvestor queryng a stock-market

More information

WITH rapid improvements of wireless technologies,

WITH rapid improvements of wireless technologies, JOURNAL OF SYSTEMS ARCHITECTURE, SPECIAL ISSUE: HIGHLY-RELIABLE CPS, VOL. 00, NO. 0, MONTH 013 1 Adaptve GTS Allocaton n IEEE 80.15.4 for Real-Tme Wreless Sensor Networks Feng Xa, Ruonan Hao, Je L, Naxue

More information

Intro. Iterators. 1. Access

Intro. Iterators. 1. Access Intro Ths mornng I d lke to talk a lttle bt about s and s. We wll start out wth smlartes and dfferences, then we wll see how to draw them n envronment dagrams, and we wll fnsh wth some examples. Happy

More information

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

WIRELESS communication technology has gained widespread

WIRELESS communication technology has gained widespread 616 IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 4, NO. 6, NOVEMBER/DECEMBER 2005 Dstrbuted Far Schedulng n a Wreless LAN Ntn Vadya, Senor Member, IEEE, Anurag Dugar, Seema Gupta, and Paramvr Bahl, Senor

More information

Solution Brief: Creating a Secure Base in a Virtual World

Solution Brief: Creating a Secure Base in a Virtual World Soluton Bref: Creatng a Secure Base n a Vrtual World Soluton Bref: Creatng a Secure Base n a Vrtual World Abstract The adopton rate of Vrtual Machnes has exploded at most organzatons, drven by the mproved

More information

A Saturation Binary Neural Network for Crossbar Switching Problem

A Saturation Binary Neural Network for Crossbar Switching Problem A Saturaton Bnary Neural Network for Crossbar Swtchng Problem Cu Zhang 1, L-Qng Zhao 2, and Rong-Long Wang 2 1 Department of Autocontrol, Laonng Insttute of Scence and Technology, Benx, Chna bxlkyzhangcu@163.com

More information

Distributed Resource Scheduling in Grid Computing Using Fuzzy Approach

Distributed Resource Scheduling in Grid Computing Using Fuzzy Approach Dstrbuted Resource Schedulng n Grd Computng Usng Fuzzy Approach Shahram Amn, Mohammad Ahmad Computer Engneerng Department Islamc Azad Unversty branch Mahallat, Iran Islamc Azad Unversty branch khomen,

More information

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,

More information

HIERARCHICAL SCHEDULING WITH ADAPTIVE WEIGHTS FOR W-ATM *

HIERARCHICAL SCHEDULING WITH ADAPTIVE WEIGHTS FOR W-ATM * Copyrght Notce c 1999 IEEE. Personal use of ths materal s permtted. However, permsson to reprnt/republsh ths materal for advertsng or promotonal purposes or for creatng new collectve wors for resale or

More information

Dynamic Voltage Scaling of Supply and Body Bias Exploiting Software Runtime Distribution

Dynamic Voltage Scaling of Supply and Body Bias Exploiting Software Runtime Distribution Dynamc Voltage Scalng of Supply and Body Bas Explotng Software Runtme Dstrbuton Sungpack Hong EE Department Stanford Unversty Sungjoo Yoo, Byeong Bn, Kyu-Myung Cho, Soo-Kwan Eo Samsung Electroncs Taehwan

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

MobileGrid: Capacity-aware Topology Control in Mobile Ad Hoc Networks

MobileGrid: Capacity-aware Topology Control in Mobile Ad Hoc Networks MobleGrd: Capacty-aware Topology Control n Moble Ad Hoc Networks Jle Lu, Baochun L Department of Electrcal and Computer Engneerng Unversty of Toronto {jenne,bl}@eecg.toronto.edu Abstract Snce wreless moble

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

A Binarization Algorithm specialized on Document Images and Photos

A Binarization Algorithm specialized on Document Images and Photos A Bnarzaton Algorthm specalzed on Document mages and Photos Ergna Kavalleratou Dept. of nformaton and Communcaton Systems Engneerng Unversty of the Aegean kavalleratou@aegean.gr Abstract n ths paper, a

More information

IP Camera Configuration Software Instruction Manual

IP Camera Configuration Software Instruction Manual IP Camera 9483 - Confguraton Software Instructon Manual VBD 612-4 (10.14) Dear Customer, Wth your purchase of ths IP Camera, you have chosen a qualty product manufactured by RADEMACHER. Thank you for the

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

Delay Variation Optimized Traffic Allocation Based on Network Calculus for Multi-path Routing in Wireless Mesh Networks

Delay Variation Optimized Traffic Allocation Based on Network Calculus for Multi-path Routing in Wireless Mesh Networks Appl. Math. Inf. Sc. 7, No. 2L, 467-474 2013) 467 Appled Mathematcs & Informaton Scences An Internatonal Journal http://dx.do.org/10.12785/ams/072l13 Delay Varaton Optmzed Traffc Allocaton Based on Network

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

Load Balancing for Hex-Cell Interconnection Network

Load Balancing for Hex-Cell Interconnection Network Int. J. Communcatons, Network and System Scences,,, - Publshed Onlne Aprl n ScRes. http://www.scrp.org/journal/jcns http://dx.do.org/./jcns.. Load Balancng for Hex-Cell Interconnecton Network Saher Manaseer,

More information