Towards High Fidelity Network Emulation


Towards High Fidelity Network Emulation

Lianjie Cao, Xiangyu Bu, Sonia Fahmy, Siyuan Cao
Department of Computer Science, Purdue University, West Lafayette, IN, USA
E-mail: {cao62, xb, fahmy, ...

Abstract: Instantiating a distributed application that involves extensive inter-node communication onto infrastructure is a challenging task. In this work, we focus on the special case of mapping a network emulation experiment onto a cluster comprising several (possibly heterogeneous) physical machines. We automatically profile the available physical machine resources, and use this information, together with the characteristics of the experimental topology, to determine an efficient mapping that preserves performance fidelity. We design an algorithm, which we call the Waterfall algorithm, and integrate it into a complete framework for profiling and mapping. We demonstrate the effectiveness of our framework via simulations and two sets of Crossfire Distributed Denial of Service attack testbed experiments.

Fig. 1. Mapping onto a (possibly heterogeneous) cluster.

I. INTRODUCTION

In today's cloud computing systems, efficiently mapping a distributed/networked application onto a cluster of machines is extremely important. In this paper, we consider a special case of this problem: mapping a network emulation experiment onto a cluster of possibly heterogeneous physical machines (PMs). A key requirement for network emulation is maintaining high performance fidelity for any network experiment. By performance fidelity, we mean that the performance results of the experiment (e.g., packet latency, packet loss, and flow completion times) should match the results obtained from an experiment conducted on a physical topology identical to the experimental topology. When the experimental topology is large or the experiment is highly traffic-intensive, a physical machine (PM) running the experiment may become overloaded, leading to fidelity loss [1], [2]. A natural way to address this problem is to extend the network emulator to run across multiple PMs, often referred to as cluster mode [3], [4].

In this paper, we consider a cluster comprising a set of PMs connected via a powerful switch, as shown in Fig. 1. The PMs may include various types of hardware, possibly purchased at different times. For example, the GENI [5], DETER [6], [7], and Emulab [8] clusters each have many types of machines, since upgrades cannot be performed for the entire cluster at the same time. Heterogeneity is likely to increase with the proliferation of testbeds and cloud environments with different virtualization technologies. Taking full advantage of each PM in the cluster while maintaining high performance fidelity for the experiment is a key challenge.

To address this challenge, we design a complete framework for mapping a network experiment onto a (possibly heterogeneous) cluster. We leverage MaxiNet [9], which extends the popular Mininet network emulator [4], [3] to run on a set of physical machines. Our framework quantifies the capacity of the PMs in the physical cluster, and uses this information to map an input experimental topology onto some or all of the cluster PMs. We design an algorithm, the Waterfall algorithm, that leverages a popular graph partitioning algorithm (we choose METIS [10], [11] in this paper), and we plug it into our framework. Our design is guided by the following principles: (1) Integrity and Fidelity: The network components of the original experiment (hosts, switches, routers, or links) must all be correctly mapped. Fidelity includes two aspects: (i) not overloading any machine and (ii) not overloading the links between those machines. (2) Best effort: Performance fidelity is preserved when possible.
Mapping is completed, however, even when the resources required by the experiment exceed the available PM resources. In this case, a best effort experiment can be conducted. The user is warned about potential fidelity loss and can employ techniques like time dilation [12], [13] or add more resources to the cluster. (3) Judicious use of resources: When there are sufficient PMs to accommodate the experiment, we use as few PMs as possible without jeopardizing fidelity. This allows more than one user to utilize the cluster at the same time when possible.

The key contributions of this work include: (1) We introduce a capacity function for each PM that models its traffic processing capacity as a function of its CPU share. The capacity is compared to the resource requirements of the experimental topology during the mapping process; (2) We design the Waterfall algorithm, which iteratively invokes a graph partitioning algorithm [10], as a plugin for our framework. The algorithm partitions and maps experimental nodes onto possibly heterogeneous PMs, reducing inter-PM traffic. The algorithm preserves fidelity when there are sufficient resources, but uses only a subset of the physical machines when possible (which a direct application of graph partitioning cannot accomplish); and (3) We evaluate our framework via simulations and testbed experiments with Crossfire distributed denial of service (DDoS) attacks, and demonstrate that it achieves higher fidelity than approaches that are agnostic to cluster or experiment characteristics, while efficiently using cluster machines.

The remainder of the paper is organized as follows. Section II summarizes related work. Section III describes the design of our framework and the Waterfall algorithm. We evaluate our framework in Sections IV and V, and conclude in Section VI.

II. RELATED WORK

Mapping a network experiment onto available PMs is a classic resource allocation problem. A network experiment can be modeled as a weighted graph representing the network topology and (potentially) information about network traffic flows. Hence, the mapping problem can be translated into a graph partitioning problem that attempts to minimize the edge cut weights. Graph partitioning is known to be NP-hard and has been studied for decades. The Kernighan-Lin (KL) algorithm [14] is one of the earliest effective heuristics to address 2-way partitioning by swapping vertices in the two initial partitions to achieve a smaller edge cut. Spectral algorithms [15], [16] partition a graph by computing the eigenvectors of the adjacency matrix of the graph. The multilevel approach proposed in [10], [11], [17], [18] constructs a sequence of increasingly coarsened graphs and partitions the smallest graph. It eventually uncoarsens the partitioning results back to the original graph. Heuristics are used in each phase to increase the partitioning quality. These algorithms take as input the number of partitions and the shares of each partition. We leverage single-objective, multi-constraint METIS [10], [11] in this paper, and focus on balancing the seemingly-conflicting goals of performance fidelity and judicious use of resources.

Another related line of work is service placement in data centers. The Virtual Machine (VM) placement problem [19], [20], [21] allocates VMs to reduce traffic/latency and achieve high PM utilization. The requests do not typically include network devices, such as switches and routers, though. The virtual network embedding problem [22], [23] determines a solution to map each virtual network to a substrate network, addressing potentially different performance objectives (e.g., latency or throughput). While this is similar to our problem in that the underlying resources to be allocated include physical network infrastructure (switches, routers, and links), our work considers an additional factor: the traffic processing capacity of a physical machine.

Testbed mapping [24], [7], [5], [1] instantiates a testbed user's experiment onto physical resources. This includes both one-to-one mapping (one experimental or virtual node (vnode) to one physical machine (PM)) and many-to-one mapping (multiple vnodes to one PM) [25]. Once users swap in their experiments, they usually last for a long time [24], [7]. Historical records can be used to improve the allocation solution (as in DETER assign+ [7]). Flow-based scenario partitioning (FSP) [26] partitions a network experiment based on traffic flow information and combines the results from the partitions. EasyScale [1] is a framework for leveraging multiple scaling techniques. Our work is complementary to this work, and can be used to map an individual EasyScale sub-topology, for example.

Virtualized Emulab [27] is closely related to our work. It extends the assign algorithm [24] used in Emulab with heuristics to enable large scale network emulation. It does not, however, develop a unified resource capacity abstraction for physical nodes and the experimental topology. Virtualized Emulab divides a physical node into a number of slots and asks the user to provide information on how many slots an experimental (virtual) node needs. The capacity of the loopback device that carries traffic among virtual nodes on the same physical node is not quantified.
VT-Mininet [28] proposes an adaptive virtual time system [12], [13] to scale Mininet [4], [3]. Time dilation [12], [13] prolongs the experiment duration, and can be used in conjunction with our work in cases when resources are insufficient.

Finally, MaxiNet [9] develops a framework for cluster-mode Mininet. By default, MaxiNet employs the METIS graph partitioning software [11] to split a network topology into a given number of partitions (sets). Each partition can then run on a PM in the cluster. MaxiNet with METIS does not profile the physical machines or consider CPU requirements of experimental nodes, which we focus on in this paper. Mininet also introduced an experimental cluster mode. The cluster mode provides several simple placement algorithms (SwitchBinPlacer, RandomPlacer, RoundRobinPlacer), none of which addresses fidelity concerns.

III. RESOURCE ALLOCATION FRAMEWORK

Current network emulators use virtualization technologies such as virtual machines, containers, and software switches to emulate a network. Performance fidelity loss occurs when there are insufficient physical resources (e.g., CPU and memory) to process emulation events. For instance, fidelity loss may occur when a large number of flows are transmitting at the same time from different hosts. Our framework considers the experiment mapping problem from a resource allocation perspective. We design and implement new modules that take the network experiment description from the network emulator, partition the experiment, and allocate physical resources to each partition. Any cluster of available machines, which may not all be identical [7], [5], can be used to transparently conduct network experiments using our software and a distributed emulator. Our new modules, which together constitute a resource allocation middle layer, interface with MaxiNet [9], which extends the popular Mininet-HiFi emulator [3] to cluster mode.

The key questions in devising this mapping and resource allocation framework include: (1) How do we quantify the physical resources available in the (possibly heterogeneous) cluster? (2) How do we partition a network experiment to match the available underlying physical resources to achieve high performance fidelity? and (3) How do we reduce the number of allocated PMs, to allow other users to also utilize the cluster when possible?

A. Resource Quantification

In the context of network emulation, preserving performance fidelity means assuring users not only correct connectivity of their topology, but also accurate performance (e.g., link bandwidth, delay, and loss rate) of switches and hosts.

For software switches (which handle network traffic in network emulators), in particular, performance is limited by their implementation and the available physical resources. Performance fidelity is degraded when the software switches are overwhelmed by network traffic in an experiment. Therefore, we need to quantify both the hardware specifications, such as CPU type and memory size, and the traffic processing capacity of software switches for each PM. While a quantitative representation of physical resources is necessary to optimally map experimental nodes onto heterogeneous PMs, to the best of our knowledge, no systematic approach to do so exists today. In our framework, we model a PM by the properties in Table I.

TABLE I. PROPERTIES OF PM i
Property    Description
θ_i         A multiplier to normalize the single-core performance of the CPU.
U_i         Maximum number of CPU shares available.
s_i^min     Minimum number of CPU shares for packet processing.
s_i^max     Maximum number of CPU shares for packet processing.
u_i^s       Number of CPU shares for packet processing.
u_i^h       Number of CPU shares for emulated hosts.
C_i(u_i^s)  Capacity function of PM i, with domain [s_i^min, s_i^max].

1) Modeling CPU Performance: Several factors affect the performance of modern CPUs. Our PM model includes the two dominating factors for modeling CPU performance: single-core performance and number of cores. Each core offers 100 CPU shares; therefore, U_i is usually set to (100 x total cores), but can be set to a smaller value to take system overhead into consideration (e.g., reserve 10 shares for OS overhead). Since PMs may be heterogeneous, the model must also take the difference in single-core performance into account, which is reflected by θ_i. θ_i reflects the relative strength of single-core performance. For instance, if θ_i = 1 and θ_j = m for PM i and PM j, then the single-core performance of PM j is m times as fast as that of PM i. This property can be set using results from benchmarking tools, but using CPU frequency suffices in many cases. While certain factors that affect CPU performance, such as SMT, can be modeled by the properties we use, other factors like dynamic frequency scaling cannot. We assume that, due to the nature of network experiments, all cores are heavily utilized, so θ_i reflects single-core performance when all cores of the PM are actively utilized.

Our model does not take the difference in memory capacity of each PM into consideration, mainly because network experiments are not typically memory-intensive. However, the model can be easily extended to support memory limits by adding a memory capacity property for each PM, specifying the memory requirements of switches and hosts in the experimental topology, and using this as an additional partitioning constraint.

2) Modeling Packet Processing Capability: We determine the traffic processing capacity by running a set of experiments on each type of PM. This is challenging because (1) implementations of different software switches exhibit significantly different performance and resource usage on the same machine (e.g., Open vSwitch (OVS) [29], Indigo Virtual Switch (IVS), or the Stanford reference switch (UserSwitch)), (2) throughput varies based on packet size, type of traffic, and the switch instances in the network topology to be emulated, and (3) different hardware features may lead to different packet processing performance (e.g., SR-IOV and the number of queues on a NIC). We do not compute the impact of different software switch implementations or hardware features, because the goal of our resource quantification module is to characterize the relationship between throughput of software switches and resource utilization for a given PM and software switch.
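To make the properties in Table I concrete, the following is a minimal illustrative sketch, in Python, of one way to represent a PM model together with the scaling by θ_i described in Section III-A3; it is not the framework's implementation, and the class name, field names, and placeholder numbers are assumptions made for illustration only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class PMModel:
    """Properties of PM i from Table I (illustrative representation)."""
    theta: float                        # multiplier normalizing single-core performance
    U: float                            # maximum number of CPU shares (typically 100 x cores)
    s_min: float                        # minimum CPU shares for packet processing
    s_max: float                        # maximum CPU shares for packet processing
    capacity: Callable[[float], float]  # C_i(u^s): capacity given packet-processing shares

    def scaled(self) -> "PMModel":
        """Scale every share-related property by theta so that shares of
        heterogeneous PMs become directly comparable (Section III-A3)."""
        return PMModel(
            theta=1.0,
            U=self.theta * self.U,
            s_min=self.theta * self.s_min,
            s_max=self.theta * self.s_max,
            # the original capacity function is evaluated on unscaled shares
            capacity=lambda u_scaled: self.capacity(u_scaled / self.theta),
        )

# Example: a hypothetical quad-core PM whose capacity grows roughly linearly
# with the CPU share given to packet processing (placeholder numbers only).
pm = PMModel(theta=1.2, U=400, s_min=50, s_max=360,
             capacity=lambda u: 2000.0 * u)   # e.g., packets/s per share
print(pm.scaled().U)                          # 480.0 scaled shares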
If hardware is upgraded or a new software switch is introduced, our capacity functions need to be updated. Our measurements show that most software switches are CPU-intensive. Therefore, we focus on CPU utilization in this paper and leave other types of resource limits for future work. We compute the packet processing capacity function P_i(u^s) of PM i in packets per second, for s_i^min <= u^s <= s_i^max. For instance, P_i(50) denotes the maximum traffic rate that PM i can handle when allocating 50 CPU shares to packet processing. The motivation for quantifying the traffic processing ability of a PM using a capacity function, rather than a single value representing the maximum processing capacity, is that, in network emulation (or any distributed task), a user may run custom programs on hosts (running in containers in the case of Mininet/MaxiNet) that compete for resources with software switches. Some network emulators provide interfaces for a user to set an upper bound on how much CPU they prefer to use for each host. For instance, a user can use CPULimitedHost in Mininet to limit the CPU usage of a host. Therefore, abstracting packet processing capability to a single value is insufficient, and more dynamic representations such as capacity functions are more desirable.

Capacity functions are determined by running our resource quantification module on a simple linear topology, as illustrated in Fig. 2. When measuring capacity versus CPU usage, we run the traffic generator and receiver on adjacent PMs to avoid including the CPU usage of the traffic generator in our function. Each experiment is repeated 10 times.

Fig. 2. Linear topology used for resource quantification.

We investigated six packet sizes: 64 Bytes, 128 Bytes, 256 Bytes, 512 Bytes, 1024 Bytes, and 1250 Bytes, and observed that throughput in Mbps varies significantly at the same CPU usage for certain software switches (e.g., UserSwitch), while throughput in packets per second remains stable. We therefore compute the capacity function in packets per second. Since many network emulators (e.g., Mininet) and testbeds require users to specify bandwidth in bits per second, we design our partitioning module to accept both unit systems, but require hints from the users to convert capacity functions to Mbps. During each test, we measure the reception rate (Rx) and transmission rate (Tx) of all switch instances and their CPU utilization.
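The unit conversion mentioned above can be illustrated as follows: a sketch that turns a pps-based capacity function P_i(u) into an Mbps-based C_i(u) using a user-supplied hint on the packet size distribution. This is a simplification under our own assumptions (function names and the expected-size formula are illustrative, not the framework's code).

from typing import Callable, Dict

def pps_to_mbps(P: Callable[[float], float],
                packet_size_hint: Dict[int, float]) -> Callable[[float], float]:
    """Convert a capacity function P(u) in packets/s into C(u) in Mbps,
    using a user hint: {packet_size_in_bytes: fraction_of_traffic}."""
    # Expected packet size in bytes under the hinted distribution.
    avg_bytes = sum(size * frac for size, frac in packet_size_hint.items())
    return lambda u: P(u) * avg_bytes * 8 / 1e6   # bytes -> bits -> Mbit

# Usage: a hypothetical PM that forwards 2000 pps per CPU share, with traffic
# assumed to be 70% 1250-byte and 30% 64-byte packets.
P = lambda u: 2000.0 * u
C = pps_to_mbps(P, {1250: 0.7, 64: 0.3})
print(round(C(100), 1))   # capacity in Mbps at 100 packet-processing shares (1430.7)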

Fig. 3 shows the results for 6 PMs (2 PMs are of the same model) in a cluster with dual-core and quad-core CPUs, clock frequencies up to 2.40 GHz, and 4 GB to 16 GB of RAM, on 4 linear topology sizes, running UserSwitch. Tests with different numbers of switches yield similar results. The relationship between CPU usage and total PM throughput is close to linear. With more cores, this relationship becomes less linear as CPU utilization increases. When CPU usage exceeds 90%, the throughput becomes unstable. Therefore, we discard data at more than 90% usage. Correspondingly, we limit the maximum number of CPU shares allocated to software switches to s_i^max in our algorithm in Section III-C. Users can also set the minimum number of CPU shares for packet processing, s_i^min. We perform up to fifth-order polynomial regression on the dataset we collected and select the model with the least mean squared error (MSE) using 5-fold cross validation. A user can choose a different regression model, if desired. The four traffic processing capacity functions we derived, one for each PM type (the dual-core and quad-core machines at each of the two clock frequencies), are polynomials P_i(u) fitted to these measurements, where s_i^min <= u <= s_i^max (the bounds may vary for different PMs). (1)

Fig. 3. Traffic processing capacity functions of the four types of PMs in our testbed (total throughput in pps versus CPU usage in %, for 5-, 10-, 15-, and 20-switch linear topologies).

3) Property Scaling: To make the CPU shares of different types of PMs comparable, our framework scales PM properties by multiplying every CPU share-related property by θ_i. For example, the scaled U_i is θ_i*U_i, and the scaled s_i^max is θ_i*s_i^max. The capacity function is also scaled accordingly. For simplicity, in the sections that follow, we assume that the values have already been scaled (unless otherwise noted).
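To illustrate how capacity functions such as those in equation (1) can be fitted, the sketch below runs polynomial regression up to fifth order and picks the degree with the lowest 5-fold cross-validated MSE, as described above. It uses scikit-learn and synthetic placeholder measurements; it is not the authors' dataset or code.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

np.random.seed(0)
# Placeholder measurements: CPU usage (shares) vs. total throughput (pps).
cpu = np.linspace(10, 360, 40).reshape(-1, 1)
pps = 2.0e3 * cpu.ravel() + np.random.normal(0, 5e3, cpu.shape[0])

best_degree, best_mse, best_model = None, float("inf"), None
for degree in range(1, 6):                      # up to fifth-order polynomials
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    mse = -cross_val_score(model, cpu, pps, cv=5,
                           scoring="neg_mean_squared_error").mean()
    if mse < best_mse:
        best_degree, best_mse, best_model = degree, mse, model

best_model.fit(cpu, pps)                        # refit the chosen degree on all data
P = lambda u: best_model.predict([[u]])[0]      # capacity function P_i(u) in pps
print(best_degree, round(P(100.0)))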
B. Topology Abstraction

We use a preprocessing module to export the network experiment information from a network emulator to external files, and then to import experiment partition information back into the network emulator. This design minimizes modifications when applying our resource management layer to different network emulators. The experiment topology (input virtual topology) is abstracted to a weighted undirected graph as follows:
1) Each host is considered together with its adjacent switch (for simplicity, we assume single-homed hosts for now). Switches/routers and links in the network topology correspond to vertices and edges in the graph, respectively.
2) The weight of an edge in the graph is a positive number assigned according to the bandwidth (in Mbps) of the corresponding link in the network topology.
3) The weight of a vertex is the sum of the bandwidths (in Mbps) of the links incident onto that switch or router in the network topology. Host-to-switch (or host-to-router) links are thus considered in the weight of a vertex, not the weight of an edge.

The intuition behind step 1) above is that we need to avoid cutting links between hosts and switches/routers when partitioning, because if we map a host and its adjacent switch onto different PMs, then we would still need to create a virtual switch for the host on its PM to connect it to that adjacent switch. MaxiNet and the default placement algorithm in Mininet cluster mode (SwitchBinPlacer) adopt the same approach. Since our topology abstraction does not take traffic flow information as input, resource consumption of a switch or router must be estimated in a conservative manner. The weights assigned to edges and vertices in steps 2) and 3) above model the required processing capacity of that edge or vertex. Our current model uses link bandwidth, which impacts CPU requirements, but other models are also possible. For example, link delay affects memory requirements.

To summarize, the weighted graph G = (V, E) derived from the experimental (virtual) topology includes |V| vertices, which equals the number of switches and routers, and |E| edges, which equals the number of links among switches and/or routers. Weights w(v) and w((a, b)) denote the weight of vertex v, and the weight of the edge between vertex a and vertex b, respectively. The total weight of graph G = (V, E) is defined as the sum of the weights of its vertices: w(G) = Σ_{v ∈ V} w(v). The weight of a given subgraph is defined similarly over the vertices of that subgraph. The total number of hosts in the input virtual topology is m, where each host is associated/linked with a vertex (switch or router) v ∈ V. The physical cluster contains k PMs, each with a capacity function P_i(u), derived from the resource quantification module. As mentioned earlier, users give hints on packet size distributions for their experiments in order to convert the capacity function P_i(u) in pps to a function C_i(u) in Mbps. In the rest of this paper, we use Mbps for PM packet processing capacity by default. Of the U_i CPU shares PM i offers, some shares, u_i^s, are allocated to packet processing, giving packet processing capacity C_i(u_i^s), and at most U_i - u_i^s shares are left for hosts. This division of CPU shares is an estimate of the resource competition between custom programs running on virtual hosts and software switches.
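A minimal sketch of this topology abstraction using networkx is shown below. It is illustrative only (the function and variable names are hypothetical, not the preprocessing module's API): hosts are folded into their adjacent switch, inter-switch links become weighted edges, and each vertex weight is the sum of the bandwidths of all links incident on that switch, including host links.

import networkx as nx

def abstract_topology(switch_links, host_links):
    """Build the weighted undirected graph G = (V, E).

    switch_links: [(switch_a, switch_b, bandwidth_mbps), ...]
    host_links:   [(host, adjacent_switch, bandwidth_mbps), ...]
    """
    G = nx.Graph()
    # Step 2: switch-to-switch links become weighted edges.
    for a, b, bw in switch_links:
        G.add_edge(a, b, weight=bw)
    # Step 3: vertex weight = sum of bandwidths of all incident links.
    for v in G.nodes:
        G.nodes[v]["weight"] = sum(d["weight"] for _, _, d in G.edges(v, data=True))
    # Step 1: hosts are merged into their adjacent switch; their access-link
    # bandwidth is added to the vertex weight rather than creating an edge.
    hosts_of = {}
    for h, sw, bw in host_links:
        if sw not in G:
            G.add_node(sw, weight=0)
        G.nodes[sw]["weight"] += bw
        hosts_of.setdefault(sw, []).append(h)
    return G, hosts_of

# Usage: a 3-switch chain with one host per edge switch, 60 Mbps links.
G, hosts_of = abstract_topology(
    switch_links=[("s1", "s2", 60), ("s2", "s3", 60)],
    host_links=[("h1", "s1", 60), ("h2", "s3", 60)])
print(G.nodes(data="weight"), hosts_of)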

C. Partitioning and Mapping

The partitioning and mapping module is the core of our framework. The module takes three inputs: (1) the weighted graph G = (V, E); (2) host resource requirements (e.g., CPU usage) h_j, j ∈ {1, ..., m}; and (3) the PM models [θ_i, U_i, s_i^min, s_i^max, C_i(u)], i ∈ {1, ..., k}. The module generates k' subgraphs S_1, S_2, ..., S_k', each of which corresponds to an experimental partition, where k' <= k and S_i will be executed on PM i. A partition S_i includes a subset of the vertices of the original graph, where the union of these subsets is V and their pairwise intersection is empty. Each link in the given topology is either part of a subgraph S_i (if both its endpoints belong to the same output subgraph), or connects two different subgraphs (if its endpoints belong to two different output subgraphs).

1) Objective and Constraints: As discussed in Section I, the mapping algorithm must satisfy the guiding principles: integrity, fidelity, best effort, and judicious use of resources. We use resources judiciously while maintaining high performance fidelity by: (1) localizing traffic as much as possible by mapping virtual nodes that are densely connected via high-bandwidth links onto the same PM, and (2) attempting not to overload any allocated PMs, while maximizing their utilizations. Traffic localization is motivated by the observation that most software switches yield higher throughput than physical switches. Therefore, we aim to increase the likelihood of mapping highly connected subgraphs onto a single PM or few PMs, which can increase fidelity. In other words, ideally we want to:

Minimize  Σ w((a, b))  over all (a, b) ∈ E with a ∈ S_i, b ∈ S_j, S_i != S_j, 1 <= i, j <= k' <= k.   (2)

By maximizing the utilization of the PMs used, we are able to leave any unneeded machines for running other tasks, taking full advantage of the available physical resources in the cluster. Therefore, we aim to minimize k', where 1 <= k' <= k. In other words, we aim to maximize the utilization of allocated PMs, while making sure not to over-utilize them (and reduce fidelity) if possible. If not, we produce a best effort mapping. We design the Waterfall algorithm, which iteratively invokes the multi-constraint, single-objective METIS algorithm [30], [10]. We use the requirements of both emulated hosts and switches as constraints, and ask METIS to minimize the edge cut (i.e., localizing traffic). In each iteration, we recompute the METIS input parameters to guide METIS towards good results. We terminate when we can no longer obtain better results.

2) Partitioning Loop: Partitioning/mapping in our context is the process of dividing a weighted input graph G, given the information of k PMs and m host usage shares, into k' subgraphs S_1, S_2, ..., S_k', and determining which physical machine in the cluster will be used to execute each of the k' subgraphs. The Waterfall algorithm uses an input queue to store the inputs for each iteration, and a multi-level hash set to store information from previous iterations (input, result, and evaluation metrics). We iteratively invoke a function, single_iteration, on the head of the queue (which is dequeued) until the queue becomes empty. When the input queue becomes empty, the best assignment found so far is chosen. Evaluation of assignments is discussed in Section III-C4.

The function single_iteration takes a queue entry which includes the following components: (1) lists of chosen PMs (PMs that will be used in this iteration) and free PMs; (2) CPU shares for processing packets and for supporting hosts for each PM i; (3) METIS-specific parameters; and (4) a termination counter (Section III-C6). The initial input is computed by the function init_input. The function single_iteration proceeds as follows. For every PM under consideration, we compute the capacity available for packet processing from the capacity function. Then, we normalize the host CPU shares, u_i^h, and the packet processing capacities of all PMs, C_i(u_i^s), to compute two sets of fractions between 0 and 1 that each add up to 1.
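The normalization step, and the kind of METIS call it feeds, can be sketched as follows. The sketch assumes the PyPI "metis" wrapper around the METIS library, and for brevity passes only a single vertex-weight constraint with capacity-proportional target partition weights; the actual module uses both the host-share and the capacity constraints, so the call details here should be treated as illustrative.

import networkx as nx
import metis  # PyPI "metis" wrapper around the METIS library (assumed available)

def normalize(values):
    """Normalize a list of non-negative values into fractions summing to 1."""
    total = float(sum(values))
    return [v / total for v in values]

def partition_once(G: nx.Graph, capacities_mbps):
    """One partitioning call: target partition weights proportional to the
    packet-processing capacity C_i(u_i^s) of each chosen PM."""
    tpwgts = normalize(capacities_mbps)          # one target fraction per PM
    G.graph["node_weight_attr"] = "weight"       # vertex weights from Section III-B
    G.graph["edge_weight_attr"] = "weight"       # edge weights = link bandwidth
    edge_cut, parts = metis.part_graph(G, nparts=len(tpwgts),
                                       tpwgts=tpwgts, recursive=False)
    return edge_cut, parts                       # parts[i] = PM index of vertex i

# Usage: three chosen PMs offering 1000, 1000, and 2000 Mbps of capacity get
# target fractions 0.25, 0.25, and 0.5, matching the example that follows.
print(normalize([1000, 1000, 2000]))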
For instance, if three PMs provide packet processing capacity values of 1000, 1000, and 2000, respectively, then the normalized values for packet processing capacity are 0.25, 0.25, and 0.5. This achieves our best effort goal, as we will invoke partitioning even when the link bandwidths and host CPU requirements cannot be supported on the available machines. We use the two sets of normalized values as METIS input parameters (partitioning constraints). After graph partitioning, we compute ranking metrics and derive new inputs by potentially adding PMs (as explained in Section III-C4) and tuning the CPU shares (as explained in Section III-C5). The result of each iteration is stored in the hash set. Pseudo-code of the Waterfall algorithm is given in Algorithm 1.

Algorithm 1: The Waterfall Algorithm
begin
  Initialize input queue; enqueue(init_input());
  while input queue non-empty do
    single_iteration(input_queue.dequeue());
  Output the best partition;

single_iteration(queue_entry)
begin
  Compute packet processing capacities of PMs;
  host_n <- normalize(u^h); cap_n <- normalize(C_i(u_i^s));
  partition(G, host_n, cap_n);
  for PM i ∈ {1, ..., k'} do
    û_i^h <- Σ_{host j ∈ S_i} h_j;
    ĉ_i <- Σ_{v ∈ S_i.V} w(v), i.e., w(S_i);
    û_i^s <- smallest u such that C_i(u) >= ĉ_i;
  evaluate_partition(û^h, û^s);
  u^s, u^h <- update_cpu_shares(û^h, û^s);

3) Initial Input: The function init_input computes an initial input based on the given graph and the information of all available PMs. It consists of three phases: (1) composing a minimal set of chosen PMs, (2) allocating CPU shares for packet processing and hosts for the chosen PMs, and (3) calculating parameters for METIS.
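To make the control flow of Algorithm 1 concrete, here is a small runnable Python skeleton of the outer loop; it is a sketch with stubbed-out partitioning and evaluation (the names and stub behavior are assumptions, not the framework's code), where a deque holds candidate inputs, a dictionary stands in for the multi-level hash set, and single_iteration may enqueue refined inputs or new branches.

from collections import deque

def waterfall(init_input, single_iteration):
    """Skeleton of the Waterfall outer loop: dequeue an input, run one
    iteration, remember its evaluation, and keep the best assignment."""
    queue = deque([init_input()])
    seen = {}                       # memo of inputs already evaluated
    best = None                     # (rank_key, assignment)
    while queue:
        entry = queue.popleft()
        result = single_iteration(entry, queue, seen)
        if result is not None and (best is None or result[0] < best[0]):
            best = result
    return best

# Stub usage: a single iteration that evaluates one trivial assignment and
# enqueues nothing further, just to show the loop terminating.
def init_input():
    return {"chosen_pms": ["PM1"], "termination_counter": 3}

def single_iteration(entry, queue, seen):
    key = tuple(sorted(entry["chosen_pms"]))
    if key in seen:
        return None
    seen[key] = True
    rank_key = (0, len(entry["chosen_pms"]), 0.0)   # (tier, #PMs, edge cut)
    return rank_key, {"assignment": {v: "PM1" for v in ("s1", "s2")}}

print(waterfall(init_input, single_iteration))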

In the first phase, we compute the maximum resources each available PM can offer, then compute and sort the resources in order of decreasing tightness. PMs are ranked by usefulness, i.e., at most how much a PM can contribute to each resource requirement (in tightness order). Tightness of a resource is defined as the ratio of the sum of the maximum amount of that resource each PM can offer to the needed amount of that resource. Next, we greedily grow the set of chosen PMs, which is initially empty, until all resource requirements are met or all PMs are included. For example, consider an experiment that requires 1200 units of packet processing capacity and 100 CPU shares for emulated hosts. Consider a cluster with three PMs: PM1 can offer 1000 packet processing capacity or 200 CPU shares, PM2 offers 500 capacity or 200 CPU shares, and PM3 offers 200 capacity or 100 CPU shares. Then the first two PMs will be chosen, since they provide a maximum of 1500 capacity or 400 CPU shares, which satisfies both resource requirements. Note that this set of chosen PMs is a lower bound because (1) each resource requirement is considered independently whereas they affect each other, and (2) we are using the maximum possible offering from each PM to estimate resource availability.

In the second phase, the algorithm distributes CPU shares to packet processing and hosts for each PM. Depending on the tightness of resources, it may assign most of the CPU shares to packet processing and leave the rest to hosts, or assign most CPU shares to support hosts, or evenly allocate CPU shares to the two requirements. Users can modify this parameter in a configuration file.

In the third phase, the algorithm tweaks the input parameters for METIS based on the result of the previous phases. For instance, it searches, in parallel, for the best imbalance vector (a parameter that defines the load imbalance tolerance for each constraint), first within a small range (up to 1.1 for each constraint), then a larger range (up to 1.5) if METIS shows non-trivial changes in the edge cut. Finally, we initialize a termination counter for this input to a user-defined constant.

4) Evaluating Results: Based on the min-cut value and assignment returned by METIS, we can compute the actual CPU shares and packet processing capacities of emulated hosts and switches assigned to PM i as follows: (1) û_i^h: the actual CPU shares for emulated hosts assigned to this PM; (2) ĉ_i: the actual packet processing capacity for emulated switches assigned to this PM; (3) û_i^s: the actual CPU shares needed to provide ĉ_i packet processing capacity on this PM; and (4) σ_i^h and σ_i^s: the fractions of host and packet processing capacities that this PM is assigned (over the total amount required). The total CPU usage of PM i for this assignment is then computed as û_i = û_i^h + û_i^s.

An assignment is ranked according to the following factors: (1) the number of PMs used in the assignment, (2) the number of over-utilized PMs (i.e., û_i > U_i), (3) the number of under-utilized PMs (i.e., û_i < U_i), (4) the degree of over-utilization of PMs, as defined in equation (3) when û_i > U_i, and (5) the edge cut given by the graph partitioning, reflecting the total inter-PM traffic. The first four factors are derived from usage information of the assignment, and the fifth is directly returned by METIS. Typically, we use multiplier thresholds for over- and under-utilization. These are user-defined constants, and are set to 100% and 90%, respectively, in our simulations and experiments.

over_util_i = û_i / U_i   (3)

We rank each assignment according to three keys. The first key is a tier. If the assignment has no over-utilized PMs, it is considered tier 0; otherwise it is considered tier 1. For tier 0 assignments, we focus on reducing overall resource usage. Thus, the second key is the number of PMs used, and the third key is the edge cut of the assignment. For tier 1 assignments, we want to reduce the degree of over-utilization in order to reduce fidelity loss, so the second key is the maximum over_util_i, and the third key is the number of over-utilized PMs. The second key is used when there is a tie on the first key, and the third key is used when both the first and second keys are tied.
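The three-key ranking can be expressed directly as a sort key; the sketch below is an illustration with hypothetical field names, returning a tuple that Python's tuple ordering compares in exactly the tier / second-key / third-key order described above.

def rank_key(assignment):
    """Smaller keys are better. `assignment` is a dict with (hypothetical)
    fields: over_util (list of û_i/U_i for over-utilized PMs), num_pms,
    and edge_cut, mirroring the metrics in Section III-C4."""
    over = assignment["over_util"]          # empty list if no PM is over-utilized
    if not over:                            # tier 0: no over-utilized PM
        return (0, assignment["num_pms"], assignment["edge_cut"])
    return (1, max(over), len(over))        # tier 1: reduce over-utilization first

# Usage: a 3-PM tier-0 assignment beats a 2-PM tier-1 assignment.
a = {"over_util": [], "num_pms": 3, "edge_cut": 120.0}
b = {"over_util": [1.25, 1.10], "num_pms": 2, "edge_cut": 80.0}
best = min([a, b], key=rank_key)
print(best["num_pms"])   # 3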
A multi-level hash set is implemented so that it takes constant time to query whether an assignment has already appeared, whether an assignment is the best one, and, if not, in which key the assignment is dominated by others. When the majority of the PMs are overloaded yet there is at least one unused PM, the algorithm will consider adding the next most useful PM, sorted as in Section III-C3, to the set of chosen PMs. The algorithm will construct a new input with initial shares and METIS parameters set in the same way as in phases 2 and 3 of Section III-C3. The termination counter for this new input is set to the initial value. We refer to this process as branching because it starts a new path for exploration. A threshold to determine the majority (a user-defined constant) is used, such that if the number of overloaded PMs is at least this fraction of the number of chosen PMs, branching will occur.

5) Updating CPU Shares: We update the termination counter based on the rank of the result, and if the updated counter is positive, we tweak the CPU share allocation and construct a new input. The rationale for decreasing the termination counter is discussed in Section III-C6. To adjust CPU shares, we first sort the PMs by descending values of (σ_i^h + σ_i^s). We then compute the CPU shares for the next iteration based on the output of the current iteration. The intuition is that, if a PM is overloaded, we assign it the maximum load it can handle and send the excessive shares to the next most powerful PM; if a PM is under-utilized, we increase its shares, but by no more than the shares added to any stronger under-utilized PM. The CPU share adjustment thus moves excessive CPU shares like a waterfall: overloaded shares flow towards the next most powerful PMs, and the room left for expansion in an under-utilized PM is limited. This is the reason we name this algorithm Waterfall. The pseudo-code of the update algorithm is shown in Algorithm 2.

Algorithm 2: Waterfall CPU Share Update
begin
  total_over <- sum of excessive shares on all over-utilized PMs;
  Δ_min <- INT_MAX;
  for PM i from highest to lowest (σ_i^h + σ_i^s) do
    û_i <- û_i^h + û_i^s;
    if PM i is over-utilized then
      next_pp_i <- min(max(û_i^s - (û_i - U_i), s_i^min), s_i^max);
      next_host_i <- U_i - next_pp_i;
    else
      if total_over > 0 then
        shares_over <- û_i + total_over - U_i;
        if shares_over > 0 then
          Δ_i <- total_over - shares_over; total_over <- shares_over;
        else
          Δ_i <- total_over; total_over <- 0;
      else
        Δ_i <- 0;
      if PM i is under-utilized after adding Δ_i shares then
        Δ_i <- min(Δ_min, max(Δ_i, under_thd * U_i - û_i)); Δ_min <- Δ_i;
      Δ_i^h <- Δ_i * û_i^h / (û_i^h + û_i^s); Δ_i^s <- Δ_i * û_i^s / (û_i^h + û_i^s);
      next_pp_i <- min(max(û_i^s + Δ_i^s, s_i^min), s_i^max);
      next_host_i <- min(û_i^h + Δ_i^h, U_i - next_pp_i);
  Return next_pp, next_host;
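A simplified, illustrative rendering of the waterfall-style share update follows; it is a reconstruction of the idea under our own assumptions, not the authors' Algorithm 2: overloaded PMs are capped at their maximum shares, the excess flows toward the next most powerful PMs, and an under-utilized PM is never granted more additional shares than a stronger under-utilized PM received.

def update_cpu_shares(pms):
    """Waterfall-style share update (simplified illustration).

    pms: list of dicts sorted from most to least powerful, each with
    hypothetical fields: U (max shares) and used (total shares the last
    assignment consumed on this PM). Returns the target share per PM for
    the next iteration.
    """
    # Excess shares spilled by overloaded PMs.
    total_over = sum(max(0.0, pm["used"] - pm["U"]) for pm in pms)
    grant_cap = float("inf")          # an under-utilized PM never gets more
    targets = []                      # than a stronger under-utilized PM did
    for pm in pms:
        if pm["used"] > pm["U"]:      # overloaded: cap at its maximum
            targets.append(pm["U"])
        else:
            room = pm["U"] - pm["used"]
            grant = min(room, total_over, grant_cap)
            total_over -= grant
            grant_cap = grant
            targets.append(pm["used"] + grant)
    return targets

# Usage: PM1 is overloaded by 60 shares; PM2 and PM3 absorb 50 and 10 of them.
pms = [{"U": 400, "used": 460}, {"U": 200, "used": 150}, {"U": 100, "used": 60}]
print(update_cpu_shares(pms))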

6) Algorithm Termination: The algorithm terminates when the input queue becomes empty, that is, when no new branches are created and all existing branches have exhausted their termination counters. The number of branches is upper-bounded by the number of PMs, and a branch stops after several iterations (a user-specified constant) without making progress. We use several heuristics when updating the termination counter. For instance, if a new branch is created (i.e., one more free PM is included) and all chosen PMs in the current input are over-utilized, then it is less likely to find the best assignment in this branch than in the new branch, so the termination counter is cut by half. If a new best assignment is found, the counter is reset; otherwise the counter is decreased by 4, 2, or 1, depending on how the rank compares to that of the best assignment.

Several factors affect the running time of the algorithm, but two factors play a major role: (1) the tightness of available resources, and (2) the characteristics of the experimental topology. The tighter the resources, the smaller the search space. Tighter resources make our initial set of PMs closer to the set of PMs needed, resulting in fewer branches. Overloaded PMs are capped to their maximum CPU shares, and there is little tweaking the algorithm can do for them. In the case when all PMs have to be over-utilized, the algorithm will only run a few iterations. The characteristics of the experimental topology are important when resources are abundant. For example, if a topology is highly clustered, like a typical ISP topology, then, after increasing the fractions of under-utilized PMs beyond certain levels, METIS will always group the clusters and assign an entire cluster to a single PM, making few changes to the assignment. Our algorithm will detect this situation and terminate early. In contrast, the algorithm will run substantially more iterations if the topology is more random and its vertices are indistinguishable in terms of resource requirements, because METIS is likely to swap vertices and return different yet not better assignments.

IV. SIMULATION RESULTS

We have evaluated the Waterfall algorithm on a set of topologies, physical machine characteristics, and host CPU requirements. In this section, we give simulation results on three different types of topologies ranging from 41 nodes to 690 nodes: RocketFuel, Jellyfish, and Fat-tree. To understand the impact of different PM characteristics, we evaluate all topologies in three scenarios: large clusters (resources are abundant), medium clusters (the total maximum capacity of the simulated PM cluster is close to the requirements (weight) of the experimental topologies), and small clusters (the total maximum capacity of the simulated cluster is less than the requirements of the experimental topologies). We scale up the four capacity functions in equation (1) in each cluster for large topologies.

We compare the Waterfall algorithm to four baseline algorithms: (1) Default METIS partitioning (Equal), which assigns PMs approximately equal-weight partitions of the input graph; (2) Capacity-based partitioning, denoted by C_i(0.9U_i), which assigns PM i a partition whose weight is proportional to C_i(0.9U_i) (i.e., the capacity value when 90% of CPU shares are allocated to packet processing). The remaining 10% is reserved for hosts and system overhead. This baseline is near optimal when packet processing needs most of the CPU shares; (3) Max CPU share-based partitioning, denoted by U_i, which assigns PMs partitions proportional to the unscaled maximum CPU shares, U_i. For example, if two PMs have maximum CPU shares of 100 and 200, then the weights of the partitions are set to 0.33 and 0.67, respectively, regardless of their multiplier values; and (4) Scaled max CPU share-based partitioning, denoted by θ_i*U_i, which is the same as U_i except that it uses scaled values. If the two PMs in the previous example have multipliers of 2 and 1, respectively, then their scaled values are equal, and thus they will get equally weighted partitions. We use METIS to compute partitions for all the baselines.
To compare the results of the baselines with Waterfall, we use three metrics: (1) the edge cut of the partition, (2) the degree of over-utilization, defined by equation (3), and (3) the degree of under-utilization, defined, if û_i < U_i, as under_util_i = (U_i - û_i) / U_i. The degree of over-utilization is the most important metric, since overloaded PMs may result in fidelity loss. If there is under-utilization, it is possible that not all selected PMs are necessary for this experiment. The unnecessary PMs can be used by other experimenters in parallel. The edge cut is an indicator of traffic localization, as we discussed in Section III-C. Ideally, we want to keep all three metrics as small as possible.
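For reference, the target partition weights used by the capacity- and share-based baselines reduce to simple normalizations; the sketch below, with hypothetical PM records, mirrors the C_i(0.9U_i), U_i, and θ_i*U_i rules described above.

def normalize(vals):
    s = float(sum(vals))
    return [v / s for v in vals]

def baseline_weights(pms, rule):
    """Target partition weight per PM for the baseline partitioning rules.
    pms: list of dicts with fields theta, U and a capacity function C."""
    if rule == "C(0.9U)":                    # capacity at 90% of the shares
        vals = [pm["C"](0.9 * pm["U"]) for pm in pms]
    elif rule == "U":                        # unscaled maximum CPU shares
        vals = [pm["U"] for pm in pms]
    elif rule == "thetaU":                   # shares scaled by theta
        vals = [pm["theta"] * pm["U"] for pm in pms]
    else:
        raise ValueError(rule)
    return normalize(vals)

# Usage: two PMs with 100 and 200 max shares and multipliers 2 and 1.
pms = [{"theta": 2.0, "U": 100, "C": lambda u: 10.0 * u},
       {"theta": 1.0, "U": 200, "C": lambda u: 5.0 * u}]
print(baseline_weights(pms, "U"))        # [0.33..., 0.66...]: proportional to U
print(baseline_weights(pms, "thetaU"))   # [0.5, 0.5]: scaled shares are equal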

A. Large Clusters

In this scenario, we simulate a cluster with 21 PMs by duplicating the capacity functions in equation (1) and scaling them up by a factor of 10. Fig. 4(a) shows the average over-utilization and under-utilization for the different topologies. Since the simulated PM cluster is sufficiently large for every topology, both C_i(0.9U_i) and Waterfall achieve less than 2% over-utilization, while the other baseline algorithms yield 4% to 14% over-utilization on the different types of topologies. However, Waterfall selects fewer PMs and exhibits smaller under-utilization, achieving higher resource efficiency. In contrast, the baseline algorithms use all PMs in the simulated cluster. Additionally, the baseline algorithms show 1.3X to 6.6X edge cuts compared to Waterfall, since they tend to use all PMs and hence create more partitions.

B. Medium Clusters

In this scenario, we create a PM cluster for each topology such that the cluster capacity is approximately equivalent to the resource requirements of the topology. Fig. 4(b) shows higher over-utilization for all baseline algorithms. Waterfall again achieves less than 1% over-utilization on both the Jellyfish and Fat-tree topologies, but slightly higher over-utilization (5%) on the RocketFuel topologies. This is because RocketFuel topologies are less symmetric than Jellyfish and Fat-tree and include some nodes with much higher requirements, leading to worse partitioning results when resources are limited. The baseline algorithms show smaller under-utilization compared to the large cluster scenario, as we limit PM resources, but they still waste more than 20% of the resources on some PMs while overloading the rest. The baseline algorithms exhibit edge cuts similar to Waterfall, since all algorithms create similar numbers of partitions and use METIS to minimize the edge cut.

C. Small Clusters

When the physical resources of a PM cluster are insufficient, Waterfall yields a best effort assignment in proportion to PM capabilities (e.g., maximum capacity), leading to balanced resource utilizations for all PMs. In this scenario, instead of average over-utilization and under-utilization, we use the standard error of PM utilization (û_i / U_i) across PMs, for each topology, to evaluate the partitioning algorithms. All algorithms overload all PMs, as expected. Fig. 4(c) shows that Waterfall yields significantly smaller standard error compared to the baseline algorithms, indicating more balanced PM utilizations for all topologies. The baseline algorithms exhibit slightly smaller edge cuts (0.78X to 0.97X) than Waterfall, since they consider edge-cut minimization to be the highest priority. In contrast, Waterfall considers over-utilization and the number of allocated PMs to be of higher priority than edge-cut minimization.

V. EXPERIMENTAL EVALUATION

In this section, we evaluate our complete framework on experiments with a distributed denial of service (DDoS) attack scenario inspired by the recently presented Crossfire attack [31]. This type of experiment is popular on the DETER testbed [7] and represents a worst case for mapping, since it pushes the emulated network to the limit. We use a cluster in our lab that includes six PMs of four types of hardware configurations. The traffic processing capacity functions of the four types of PMs were given in equation (1).

A. Mapping Algorithms

In addition to the algorithms used in our simulations in Section IV, we also compare the Waterfall algorithm to the Mininet SwitchBinPlacer (SwitchBin). This is the default placer of the Mininet cluster mode. This mapping algorithm maps switches and controllers of an experiment into equally-sized bins based on the number of PMs. It also attempts to place hosts and the switches to which they are connected on the same PM. MaxiNet allows users to set a share in the configuration file for each PM in a cluster. In our experiments, C_i(0.9U_i), U_i, and θ_i*U_i are used when assigning shares to the PMs. Thus, SwitchBinPlacer and Equal are balanced partitioning algorithms, which try to map either an equal number or an equal weight of nodes to all PMs, while C_i(0.9U_i), U_i, θ_i*U_i, and Waterfall are unbalanced partitioning algorithms that map nodes based on different functions of PM capacity. C_i(0.9U_i), U_i, and θ_i*U_i utilize all PMs in a cluster, since they cannot determine whether fewer PMs are sufficient for an experiment, while Waterfall picks the smallest number of PMs that are sufficient. Waterfall takes both the weights of switches (assigned based on link bandwidths) and the CPU shares of hosts into consideration, and tries to balance the CPU usage among hosts and software switches.
B. DDoS Attack Experiments

We design DDoS attack experiments inspired by the Crossfire attack [31] to stress-test the mapping algorithms using high-rate traffic. Instead of attacking a web server directly, the attack targets critical links on the paths to the web server, and saturates these links to degrade the user experience to the victim web server. We compare three metrics: the CPU utilizations of the PMs, the utilizations of all experimental topology links, and HTTP throughput (before and after the attack is launched). All experiments are repeated 10 times and error bars are shown.

1) Topology Generation: To make the DDoS attack experiment realistic but feasible to execute on our small testbed, we reduce ISP topologies from RocketFuel [32] to a set of small-scale and medium-scale topologies. We use the random match (RM) algorithm for graph coarsening (used in [11]) to reduce the ISP topologies, while attempting to preserve the connectivity features of the original graph. Each node in the reduced ISP topology is emulated as a switch. Link delays are set to 1 ms in the results below, but we also investigated 100 ms and 500 ms links. We attach hosts to the edges of the topology. The edge of a topology is defined as the nodes with degree <= 3, where degree is the number of links incident onto a node. These hosts are used as victim clients and attack senders. The same methodology was followed in [1].
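The random-match coarsening step can be illustrated with networkx as follows; this is a minimal rendering of the RM idea under our own assumptions (one coarsening level merges each unmatched node with a random unmatched neighbor), not the code used for the paper's topologies.

import random
import networkx as nx

def random_match_coarsen(G: nx.Graph) -> nx.Graph:
    """One level of random-matching (RM) coarsening: visit nodes in random
    order, match each unmatched node with a random unmatched neighbor, and
    contract every matched pair into a single node."""
    H = G.copy()
    matched = set()
    nodes = list(G.nodes())
    random.shuffle(nodes)
    for v in nodes:
        if v in matched:
            continue
        candidates = [u for u in G.neighbors(v) if u not in matched]
        if not candidates:
            continue
        u = random.choice(candidates)
        matched.update((v, u))
        H = nx.contracted_nodes(H, v, u, self_loops=False)  # merge u into v
    return H

# Usage: repeatedly coarsen a 100-node random graph until it is small enough.
G = nx.gnm_random_graph(100, 300, seed=1)
while G.number_of_nodes() > 30:
    H = random_match_coarsen(G)
    if H.number_of_nodes() == G.number_of_nodes():   # no more matches possible
        break
    G = H
print(G.number_of_nodes())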

Fig. 4. Simulation results: (a) large cluster results, (b) medium cluster results, (c) small cluster results (over-utilization and under-utilization for (a) and (b), and standard error of PM utilization for (c), for the Equal, C_i(0.9U_i), U_i, θ_i*U_i, and Waterfall algorithms on Fat-tree, Jellyfish, and RocketFuel topologies).

2) Attack and Victim Host Assignment: There are four roles for nodes in a DDoS attack experiment: victim web server, victim clients, attack senders, and attack receivers. We only assign one victim web server in each DDoS experiment. Victim clients are the hosts sending HTTP requests to the victim web server, and they are affected by the DDoS attack. Attack senders are the hosts launching DDoS traffic. As in the Crossfire attack [31], we set the receivers of the attack traffic to be other servers close to the victim. These receivers are referred to as attack receivers in our experiment.

Prior to assigning hosts as attackers and victims, we use Dijkstra's algorithm to compute the shortest paths between any two hosts and use these paths as static routes (by default, Mininet does not include dynamic routing). We first rank all the nodes in a topology by node degree, and choose the node with the median degree value as the location of the victim web server, to avoid placing the victim server too close to the edge or to the center (i.e., nodes with large degree) of the topology. The neighbors of the victim web server become attack receivers. We then compare the routes from all edge hosts to the victim web server (victim clients to victim web server) with the routes from all edge hosts to the attack receiver candidates (attack senders to attack receivers), and assign hosts that share links in the two sets of routes as attack senders and victim clients. Then, 30% of the edge hosts are used as victim clients, and 70% are used as attack senders. Our goal here is to guarantee the effectiveness of the attack traffic, i.e., the web traffic will be affected when the attack traffic is launched.
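The victim/attacker selection just described can be sketched with networkx as follows; this is a simplified illustration (node names, field names, and the overlap test are our own assumptions) that uses shortest paths as static routes, places the victim server at the median-degree node, treats its neighbors as attack receivers, and marks an edge host as an attack sender if its route to the victim overlaps a route to a receiver.

import networkx as nx

def assign_roles(G: nx.Graph, edge_hosts):
    """Pick the victim server location, attack receivers, and split the edge
    hosts into victim clients and attack senders (simplified illustration).

    edge_hosts: {host_name: switch_it_attaches_to} for nodes of degree <= 3.
    """
    # Victim server sits at the node with the median degree.
    by_degree = sorted(G.nodes(), key=lambda n: G.degree(n))
    victim_switch = by_degree[len(by_degree) // 2]
    attack_receivers = list(G.neighbors(victim_switch))

    # Static routes: shortest paths from each edge host's switch.
    clients, senders = [], []
    for host, sw in edge_hosts.items():
        to_victim = set(nx.shortest_path(G, sw, victim_switch))
        shares_link = any(
            len(to_victim & set(nx.shortest_path(G, sw, r))) > 1
            for r in attack_receivers)
        (senders if shares_link else clients).append(host)
    return victim_switch, attack_receivers, clients, senders

# Usage on a small ring topology with one host per switch (every degree is 2).
G = nx.cycle_graph(6)
hosts = {f"h{n}": n for n in G.nodes()}
print(assign_roles(G, hosts))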
3) Traffic Generation: A DDoS experiment includes three types of traffic: HTTP traffic, UDP attack traffic, and background traffic. We use the Web Polygraph tool to generate HTTP traffic. Victim clients run polygraph-client, while the victim web server runs polygraph-server. polygraph-client generates HTTP requests to polygraph-server at a fixed rate (200 requests/sec per client). This rate may not be reached in an experiment if there is not enough bandwidth or CPU for the polygraph-client processes. The HTTP response size generated by polygraph-server is exponentially distributed with a mean value of 10 KB. For attack traffic, we use iperf to generate UDP traffic at the same rate as the link bandwidth from the attack senders to the attack receivers. iperf is also used to emulate background traffic. Both attack traffic and background traffic use a packet size of 1250 Bytes. Both HTTP and background traffic start at the beginning of an experiment, and we wait for 60 seconds for conditions to stabilize. Then, UDP attack traffic is launched and lasts for 60 seconds until the experiment is completed.

4) Small-scale Experiments: The motivation behind creating small-scale topologies is to compare with the other mapping algorithms when resources are over-provisioned. Since it is difficult to obtain the ground truth in network emulation, we use resource over-provisioning to approximate the ground truth results. For small topologies, the baseline algorithms will use all 6 PMs in our cluster, and the results are not affected by resource constraints. In contrast, Waterfall attempts to allocate the fewest PMs necessary. Therefore, if the results yielded by the 6 mapping algorithms are close, we consider Waterfall to have been able to produce close-to-ground-truth results while using resources more judiciously. We carefully calculate the topology size, link bandwidths, and CPU shares of hosts to guarantee that the baseline algorithms do not overload any PM in this case. Based on this, the link bandwidth is set to 60 Mbps. In this experiment, Waterfall takes only five iterations and chooses PM1 and PM2.

CPU and Link Utilization: Fig. 5(a) and Fig. 5(b) show the CPU and link utilizations. The baseline algorithms use all 6 PMs, and none of the PMs are overloaded. Waterfall picks the three most powerful PMs, and the CPU utilizations of all its PMs are over 80%. Therefore, we confirm that the results from the baseline algorithms approximate the ground truth, since they are not constrained by PM resources. Note that the total CPU utilization over all six PMs for the baseline algorithms seems significantly higher than for Waterfall. This is caused by the heterogeneity of the six PMs. Running the same task on a less powerful PM usually consumes more CPU shares than on a more powerful PM. A key motivation of our capacity function is to capture this PM heterogeneity. All algorithms achieve at least 80% link utilization, indicating that sufficient resources are assigned to the switches. This is expected for the baseline algorithms, since resources are over-provisioned. For Waterfall, this confirms that our capacity functions accurately characterized the traffic processing capacity, and the mapping process allocated PMs appropriately, leading to the expected high experimental link utilization in this scenario, while maintaining a high CPU utilization level.

HTTP Throughput: HTTP throughput is the number of HTTP requests completed per second. We use this metric as an indicator of application-level performance fidelity. Fig. 5(c) shows the impact of the DDoS attack. At the 60-second mark, the attack traffic is launched, causing a significant drop in HTTP throughput. The results of all mapping algorithms are similar, which indicates that our framework with the Waterfall algorithm is able to maintain high application-level fidelity.

5) Medium-scale Experiments: We design these experiments such that an ineffective mapping algorithm would suffer from performance fidelity loss. We calculate the topology size, link bandwidths, and host CPU shares such that the total requirements of the topology are slightly lower than the total capacity of the testbed. The link bandwidth is thus 16 Mbps in this case. If the mapping algorithm makes poor choices, PMs can be overloaded, and HTTP throughput may decrease. Ideally, CPU usage should be less than 100%, link utilizations should be high since we saturate the links, and HTTP throughput should exhibit a significant drop when the DDoS attack traffic is launched.

Fig. 5. Normalized results from the small-scale DDoS attack experiments: (a) CPU utilization and (b) link utilization per PM (PM1-PM6), and (c) completed HTTP requests/second over time, for SwitchBin, Equal, C_i(0.9U_i), U_i, θ_i*U_i, and Waterfall.

CPU and Link Utilization: Fig. 6(a) and Fig. 6(b) show the CPU and link utilizations of the 6 mapping algorithms. SwitchBin over-utilizes PM5 and PM6 and under-utilizes PM1 and PM2. Only PM4 is assigned an appropriate workload. PM3 exhibits low CPU and link utilization due to the fact that several flows from PM5 and PM6, which are overloaded, to PM3 experience significant packet loss (link fidelity loss). Therefore, PM3 does not process the expected amount of traffic, leading to low CPU and link utilization. Both U_i and θ_i*U_i over-utilize PM3, which has high CPU utilization but low link utilization with high variance. In addition, U_i over-utilizes PM6. Equal over-utilizes PM6 and under-utilizes PM1 and PM2 significantly, while C_i(0.9U_i) over-utilizes PM6 and under-utilizes PM4. Waterfall selects only five PMs, but offers the best CPU and link utilizations compared to the baseline algorithms, with more than 80% CPU utilization and more than 90% link utilization on all five PMs. We also tried manually limiting the baseline algorithms to use the same 5 PMs as Waterfall for a head-to-head comparison. However, none of them exhibited better performance than when they used all 6 PMs.

HTTP Throughput: Fig. 6(c) shows that the effect of the attack can be observed in all cases. Waterfall achieves higher and more stable HTTP throughput before the DDoS attack is launched. After careful examination, we found that this is due to the limited CPU resources given to the polygraph-server process in the case of the baseline algorithms. The required CPU share for each host is 10%. Since none of the baseline algorithms consider the host CPU shares, most of the hosts cannot receive enough CPU. For instance, the host running polygraph-server only receives 3% CPU on average when using C_i(0.9U_i). Even though C_i(0.9U_i) uses all 6 PMs, the actual HTTP throughput is still lower than Waterfall's. Waterfall also does not achieve 100% of the target HTTP throughput: we found that this is due to the imperfect CPU isolation of Linux containers when the same CPU is shared by a number of different containers.

VI. CONCLUSIONS

In this paper, we have proposed a complete framework for efficiently mapping a networked application onto a cluster of (possibly heterogeneous) physical machines. Although we have focused on network experiments on the popular Mininet network emulator, in the future, we plan to extend our work to map other distributed applications onto distributed simulators, network testbeds, and data centers in general. We have devised a resource quantification process to profile the physical machines, and designed and implemented a mapping algorithm, Waterfall, that takes both link bandwidths (via the edge/vertex weights in the input graph) and host CPU requirements into consideration. The algorithm attempts to use as few of the physical machines as possible, while achieving high performance fidelity.
Based on the results from simulations and DDoS attack testbed experiments, we find that our approach performs well in terms of both performance fidelity and testbed resource utilization. We are currently generalizing our framework to handle networks with multi-homed hosts and to support multiple resource limits. Finally, we are conducting extensive experiments, and making refinements to speed up the convergence of our algorithm, even in the case of abundant resources and random topologies.

ACKNOWLEDGMENTS

This work has been sponsored in part by NSF grant CNS-...

REFERENCES

[1] W.-M. Yao, S. Fahmy, and J. Zhu, "EasyScale: Easy mapping for large-scale network security experiments," in IEEE Conference on Communications and Network Security (CNS), Oct. 2013.
[2] R. Chertov and S. Fahmy, "Forwarding devices: From measurements to simulations," ACM Transactions on Modeling and Computer Simulation (TOMACS), vol. 21, no. 2, pp. 1-23.
[3] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proc. of CoNEXT, 2012.
[4] B. Lantz, B. Heller, and N. McKeown, "A network in a laptop: Rapid prototyping for software-defined networks," in Proc. of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets-IX), 2010, pp. 19:1-19:6.
[5] GENI.


More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT

DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT Bran J. Wolf, Joseph L. Hammond, and Harlan B. Russell Dept. of Electrcal and Computer Engneerng, Clemson Unversty,

More information

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network S. Sudha and N. Ammasagounden Natonal Insttute of Technology, Truchrappall,

More information

ELEC 377 Operating Systems. Week 6 Class 3

ELEC 377 Operating Systems. Week 6 Class 3 ELEC 377 Operatng Systems Week 6 Class 3 Last Class Memory Management Memory Pagng Pagng Structure ELEC 377 Operatng Systems Today Pagng Szes Vrtual Memory Concept Demand Pagng ELEC 377 Operatng Systems

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

Problem Set 3 Solutions

Problem Set 3 Solutions Introducton to Algorthms October 4, 2002 Massachusetts Insttute of Technology 6046J/18410J Professors Erk Demane and Shaf Goldwasser Handout 14 Problem Set 3 Solutons (Exercses were not to be turned n,

More information

THere are increasing interests and use of mobile ad hoc

THere are increasing interests and use of mobile ad hoc 1 Adaptve Schedulng n MIMO-based Heterogeneous Ad hoc Networks Shan Chu, Xn Wang Member, IEEE, and Yuanyuan Yang Fellow, IEEE. Abstract The demands for data rate and transmsson relablty constantly ncrease

More information

An Entropy-Based Approach to Integrated Information Needs Assessment

An Entropy-Based Approach to Integrated Information Needs Assessment Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology

More information

Optimizing Document Scoring for Query Retrieval

Optimizing Document Scoring for Query Retrieval Optmzng Document Scorng for Query Retreval Brent Ellwen baellwe@cs.stanford.edu Abstract The goal of ths project was to automate the process of tunng a document query engne. Specfcally, I used machne learnng

More information

Real-Time Guarantees. Traffic Characteristics. Flow Control

Real-Time Guarantees. Traffic Characteristics. Flow Control Real-Tme Guarantees Requrements on RT communcaton protocols: delay (response s) small jtter small throughput hgh error detecton at recever (and sender) small error detecton latency no thrashng under peak

More information

3. CR parameters and Multi-Objective Fitness Function

3. CR parameters and Multi-Objective Fitness Function 3 CR parameters and Mult-objectve Ftness Functon 41 3. CR parameters and Mult-Objectve Ftness Functon 3.1. Introducton Cogntve rados dynamcally confgure the wreless communcaton system, whch takes beneft

More information

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices Steps for Computng the Dssmlarty, Entropy, Herfndahl-Hrschman and Accessblty (Gravty wth Competton) Indces I. Dssmlarty Index Measurement: The followng formula can be used to measure the evenness between

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

Concurrent Apriori Data Mining Algorithms

Concurrent Apriori Data Mining Algorithms Concurrent Apror Data Mnng Algorthms Vassl Halatchev Department of Electrcal Engneerng and Computer Scence York Unversty, Toronto October 8, 2015 Outlne Why t s mportant Introducton to Assocaton Rule Mnng

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION 24 CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION The present chapter proposes an IPSO approach for multprocessor task schedulng problem wth two classfcatons, namely, statc ndependent tasks and

More information

Chapter 6 Programmng the fnte element method Inow turn to the man subject of ths book: The mplementaton of the fnte element algorthm n computer programs. In order to make my dscusson as straghtforward

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

Routing in Degree-constrained FSO Mesh Networks

Routing in Degree-constrained FSO Mesh Networks Internatonal Journal of Hybrd Informaton Technology Vol., No., Aprl, 009 Routng n Degree-constraned FSO Mesh Networks Zpng Hu, Pramode Verma, and James Sluss Jr. School of Electrcal & Computer Engneerng

More information

Video Proxy System for a Large-scale VOD System (DINA)

Video Proxy System for a Large-scale VOD System (DINA) Vdeo Proxy System for a Large-scale VOD System (DINA) KWUN-CHUNG CHAN #, KWOK-WAI CHEUNG *# #Department of Informaton Engneerng *Centre of Innovaton and Technology The Chnese Unversty of Hong Kong SHATIN,

More information

Mathematics 256 a course in differential equations for engineering students

Mathematics 256 a course in differential equations for engineering students Mathematcs 56 a course n dfferental equatons for engneerng students Chapter 5. More effcent methods of numercal soluton Euler s method s qute neffcent. Because the error s essentally proportonal to the

More information

Maintaining temporal validity of real-time data on non-continuously executing resources

Maintaining temporal validity of real-time data on non-continuously executing resources Mantanng temporal valdty of real-tme data on non-contnuously executng resources Tan Ba, Hong Lu and Juan Yang Hunan Insttute of Scence and Technology, College of Computer Scence, 44, Yueyang, Chna Wuhan

More information

Pricing Network Resources for Adaptive Applications in a Differentiated Services Network

Pricing Network Resources for Adaptive Applications in a Differentiated Services Network IEEE INFOCOM Prcng Network Resources for Adaptve Applcatons n a Dfferentated Servces Network Xn Wang and Hennng Schulzrnne Columba Unversty Emal: {xnwang, schulzrnne}@cs.columba.edu Abstract The Dfferentated

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

Support Vector Machines

Support Vector Machines /9/207 MIST.6060 Busness Intellgence and Data Mnng What are Support Vector Machnes? Support Vector Machnes Support Vector Machnes (SVMs) are supervsed learnng technques that analyze data and recognze patterns.

More information

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7 Optmzed Regonal Cachng for On-Demand Data Delvery Derek L. Eager Mchael C. Ferrs Mary K. Vernon Unversty of Saskatchewan Unversty of Wsconsn Madson Saskatoon, SK Canada S7N 5A9 Madson, WI 5376 eager@cs.usask.ca

More information

Classifier Selection Based on Data Complexity Measures *

Classifier Selection Based on Data Complexity Measures * Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.

More information

Distributed Resource Scheduling in Grid Computing Using Fuzzy Approach

Distributed Resource Scheduling in Grid Computing Using Fuzzy Approach Dstrbuted Resource Schedulng n Grd Computng Usng Fuzzy Approach Shahram Amn, Mohammad Ahmad Computer Engneerng Department Islamc Azad Unversty branch Mahallat, Iran Islamc Azad Unversty branch khomen,

More information

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,

More information

Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides Some material adapted from Hennessy & Patterson / 2003 Elsevier

Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides Some material adapted from Hennessy & Patterson / 2003 Elsevier Some materal adapted from Mohamed Youns, UMBC CMSC 611 Spr 2003 course sldes Some materal adapted from Hennessy & Patterson / 2003 Elsever Scence Performance = 1 Executon tme Speedup = Performance (B)

More information

5 The Primal-Dual Method

5 The Primal-Dual Method 5 The Prmal-Dual Method Orgnally desgned as a method for solvng lnear programs, where t reduces weghted optmzaton problems to smpler combnatoral ones, the prmal-dual method (PDM) has receved much attenton

More information

Lecture 5: Multilayer Perceptrons

Lecture 5: Multilayer Perceptrons Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

Graph-based Clustering

Graph-based Clustering Graphbased Clusterng Transform the data nto a graph representaton ertces are the data ponts to be clustered Edges are eghted based on smlarty beteen data ponts Graph parttonng Þ Each connected component

More information

Dynamic Bandwidth Provisioning with Fairness and Revenue Considerations for Broadband Wireless Communication

Dynamic Bandwidth Provisioning with Fairness and Revenue Considerations for Broadband Wireless Communication Ths full text paper was peer revewed at the drecton of IEEE Communcatons Socety subject matter experts for publcaton n the ICC 008 proceedngs. Dynamc Bandwdth Provsonng wth Farness and Revenue Consderatons

More information

Game Based Virtual Bandwidth Allocation for Virtual Networks in Data Centers

Game Based Virtual Bandwidth Allocation for Virtual Networks in Data Centers Avaable onlne at www.scencedrect.com Proceda Engneerng 23 (20) 780 785 Power Electroncs and Engneerng Applcaton, 20 Game Based Vrtual Bandwdth Allocaton for Vrtual Networks n Data Centers Cu-rong Wang,

More information

Programming in Fortran 90 : 2017/2018

Programming in Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

Module Management Tool in Software Development Organizations

Module Management Tool in Software Development Organizations Journal of Computer Scence (5): 8-, 7 ISSN 59-66 7 Scence Publcatons Management Tool n Software Development Organzatons Ahmad A. Al-Rababah and Mohammad A. Al-Rababah Faculty of IT, Al-Ahlyyah Amman Unversty,

More information

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed

More information

Cost-efficient deployment of distributed software services

Cost-efficient deployment of distributed software services 1/30 Cost-effcent deployment of dstrbuted software servces csorba@tem.ntnu.no 2/30 Short ntroducton & contents Cost-effcent deployment of dstrbuted software servces Cost functons Bo-nspred decentralzed

More information

Avoiding congestion through dynamic load control

Avoiding congestion through dynamic load control Avodng congeston through dynamc load control Vasl Hnatyshn, Adarshpal S. Seth Department of Computer and Informaton Scences, Unversty of Delaware, Newark, DE 976 ABSTRACT The current best effort approach

More information

A Hybrid Genetic Algorithm for Routing Optimization in IP Networks Utilizing Bandwidth and Delay Metrics

A Hybrid Genetic Algorithm for Routing Optimization in IP Networks Utilizing Bandwidth and Delay Metrics A Hybrd Genetc Algorthm for Routng Optmzaton n IP Networks Utlzng Bandwdth and Delay Metrcs Anton Redl Insttute of Communcaton Networks, Munch Unversty of Technology, Arcsstr. 21, 80290 Munch, Germany

More information

Meta-heuristics for Multidimensional Knapsack Problems

Meta-heuristics for Multidimensional Knapsack Problems 2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,

More information

Greedy Technique - Definition

Greedy Technique - Definition Greedy Technque Greedy Technque - Defnton The greedy method s a general algorthm desgn paradgm, bult on the follong elements: confguratons: dfferent choces, collectons, or values to fnd objectve functon:

More information

Goals and Approach Type of Resources Allocation Models Shared Non-shared Not in this Lecture In this Lecture

Goals and Approach Type of Resources Allocation Models Shared Non-shared Not in this Lecture In this Lecture Goals and Approach CS 194: Dstrbuted Systems Resource Allocaton Goal: acheve predcable performances Three steps: 1) Estmate applcaton s resource needs (not n ths lecture) 2) Admsson control 3) Resource

More information

NOVEL CONSTRUCTION OF SHORT LENGTH LDPC CODES FOR SIMPLE DECODING

NOVEL CONSTRUCTION OF SHORT LENGTH LDPC CODES FOR SIMPLE DECODING Journal of Theoretcal and Appled Informaton Technology 27 JATIT. All rghts reserved. www.jatt.org NOVEL CONSTRUCTION OF SHORT LENGTH LDPC CODES FOR SIMPLE DECODING Fatma A. Newagy, Yasmne A. Fahmy, and

More information

WITH rapid improvements of wireless technologies,

WITH rapid improvements of wireless technologies, JOURNAL OF SYSTEMS ARCHITECTURE, SPECIAL ISSUE: HIGHLY-RELIABLE CPS, VOL. 00, NO. 0, MONTH 013 1 Adaptve GTS Allocaton n IEEE 80.15.4 for Real-Tme Wreless Sensor Networks Feng Xa, Ruonan Hao, Je L, Naxue

More information

Constructing Minimum Connected Dominating Set: Algorithmic approach

Constructing Minimum Connected Dominating Set: Algorithmic approach Constructng Mnmum Connected Domnatng Set: Algorthmc approach G.N. Puroht and Usha Sharma Centre for Mathematcal Scences, Banasthal Unversty, Rajasthan 304022 usha.sharma94@yahoo.com Abstract: Connected

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges

More information

CS 268: Lecture 8 Router Support for Congestion Control

CS 268: Lecture 8 Router Support for Congestion Control CS 268: Lecture 8 Router Support for Congeston Control Ion Stoca Computer Scence Dvson Department of Electrcal Engneerng and Computer Scences Unversty of Calforna, Berkeley Berkeley, CA 9472-1776 Router

More information

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016)

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016) Technsche Unverstät München WSe 6/7 Insttut für Informatk Prof. Dr. Thomas Huckle Dpl.-Math. Benjamn Uekermann Parallel Numercs Exercse : Prevous Exam Questons Precondtonng & Iteratve Solvers (From 6)

More information

On Achieving Fairness in the Joint Allocation of Buffer and Bandwidth Resources: Principles and Algorithms

On Achieving Fairness in the Joint Allocation of Buffer and Bandwidth Resources: Principles and Algorithms On Achevng Farness n the Jont Allocaton of Buffer and Bandwdth Resources: Prncples and Algorthms Yunka Zhou and Harsh Sethu (correspondng author) Abstract Farness n network traffc management can mprove

More information

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1) Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A

More information

SAO: A Stream Index for Answering Linear Optimization Queries

SAO: A Stream Index for Answering Linear Optimization Queries SAO: A Stream Index for Answerng near Optmzaton Queres Gang uo Kun-ung Wu Phlp S. Yu IBM T.J. Watson Research Center {luog, klwu, psyu}@us.bm.com Abstract near optmzaton queres retreve the top-k tuples

More information

Sample Solution. Advanced Computer Networks P 1 P 2 P 3 P 4 P 5. Module: IN2097 Date: Examiner: Prof. Dr.-Ing. Georg Carle Exam: Final exam

Sample Solution. Advanced Computer Networks P 1 P 2 P 3 P 4 P 5. Module: IN2097 Date: Examiner: Prof. Dr.-Ing. Georg Carle Exam: Final exam Char of Network Archtectures and Servces Department of Informatcs Techncal Unversty of Munch Note: Durng the attendance check a stcker contanng a unque QR code wll be put on ths exam. Ths QR code contans

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

Hierarchical clustering for gene expression data analysis

Hierarchical clustering for gene expression data analysis Herarchcal clusterng for gene expresson data analyss Gorgo Valentn e-mal: valentn@ds.unm.t Clusterng of Mcroarray Data. Clusterng of gene expresson profles (rows) => dscovery of co-regulated and functonally

More information

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on

More information

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications Effcent Loa-Balance IP Routng Scheme Base on Shortest Paths n Hose Moel E Ok May 28, 2009 The Unversty of Electro-Communcatons Ok Lab. Semnar, May 28, 2009 1 Outlne Backgroun on IP routng IP routng strategy

More information

Priority-Based Scheduling Algorithm for Downlink Traffics in IEEE Networks

Priority-Based Scheduling Algorithm for Downlink Traffics in IEEE Networks Prorty-Based Schedulng Algorthm for Downlnk Traffcs n IEEE 80.6 Networks Ja-Mng Lang, Jen-Jee Chen, You-Chun Wang, Yu-Chee Tseng, and Bao-Shuh P. Ln Department of Computer Scence Natonal Chao-Tung Unversty,

More information

ARTICLE IN PRESS. Signal Processing: Image Communication

ARTICLE IN PRESS. Signal Processing: Image Communication Sgnal Processng: Image Communcaton 23 (2008) 754 768 Contents lsts avalable at ScenceDrect Sgnal Processng: Image Communcaton journal homepage: www.elsever.com/locate/mage Dstrbuted meda rate allocaton

More information

Solution Brief: Creating a Secure Base in a Virtual World

Solution Brief: Creating a Secure Base in a Virtual World Soluton Bref: Creatng a Secure Base n a Vrtual World Soluton Bref: Creatng a Secure Base n a Vrtual World Abstract The adopton rate of Vrtual Machnes has exploded at most organzatons, drven by the mproved

More information

ATYPICAL SDN consists of a logical controller in the

ATYPICAL SDN consists of a logical controller in the IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 25, NO. 6, DECEMBER 2017 3587 Mnmzng Flow Statstcs Collecton Cost Usng Wldcard-Based Requests n SDNs Hongl Xu, Member, IEEE, Zhuolong Yu, Chen Qan, Member, IEEE,

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

RAP. Speed/RAP/CODA. Real-time Systems. Modeling the sensor networks. Real-time Systems. Modeling the sensor networks. Real-time systems:

RAP. Speed/RAP/CODA. Real-time Systems. Modeling the sensor networks. Real-time Systems. Modeling the sensor networks. Real-time systems: Speed/RAP/CODA Presented by Octav Chpara Real-tme Systems Many wreless sensor network applcatons requre real-tme support Survellance and trackng Border patrol Fre fghtng Real-tme systems: Hard real-tme:

More information

Cache Performance 3/28/17. Agenda. Cache Abstraction and Metrics. Direct-Mapped Cache: Placement and Access

Cache Performance 3/28/17. Agenda. Cache Abstraction and Metrics. Direct-Mapped Cache: Placement and Access Agenda Cache Performance Samra Khan March 28, 217 Revew from last lecture Cache access Assocatvty Replacement Cache Performance Cache Abstracton and Metrcs Address Tag Store (s the address n the cache?

More information

On the Fairness-Efficiency Tradeoff for Packet Processing with Multiple Resources

On the Fairness-Efficiency Tradeoff for Packet Processing with Multiple Resources On the Farness-Effcency Tradeoff for Packet Processng wth Multple Resources We Wang, Chen Feng, Baochun L, and Ben Lang Department of Electrcal and Computer Engneerng, Unversty of Toronto {wewang, cfeng,

More information

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010 Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

WIRELESS communication technology has gained widespread

WIRELESS communication technology has gained widespread 616 IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 4, NO. 6, NOVEMBER/DECEMBER 2005 Dstrbuted Far Schedulng n a Wreless LAN Ntn Vadya, Senor Member, IEEE, Anurag Dugar, Seema Gupta, and Paramvr Bahl, Senor

More information

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide Lobachevsky State Unversty of Nzhn Novgorod Polyhedron Quck Start Gude Nzhn Novgorod 2016 Contents Specfcaton of Polyhedron software... 3 Theoretcal background... 4 1. Interface of Polyhedron... 6 1.1.

More information

EECS 730 Introduction to Bioinformatics Sequence Alignment. Luke Huan Electrical Engineering and Computer Science

EECS 730 Introduction to Bioinformatics Sequence Alignment. Luke Huan Electrical Engineering and Computer Science EECS 730 Introducton to Bonformatcs Sequence Algnment Luke Huan Electrcal Engneerng and Computer Scence http://people.eecs.ku.edu/~huan/ HMM Π s a set of states Transton Probabltes a kl Pr( l 1 k Probablty

More information