TripS: Automated Multi-tiered Data Placement in a Geo-distributed Cloud Environment


Kwangsung Oh, Abhishek Chandra, and Jon Weissman
Department of Computer Science and Engineering, University of Minnesota Twin Cities, Minneapolis, MN
{ohkwang, chandra, jon}@cs.umn.edu

ABSTRACT
Exploiting the cloud storage hierarchy both within and across data centers of different cloud providers empowers Internet applications to choose data centers (DCs) and storage services based on their storage needs. However, using multiple storage services across multiple data centers brings a complex data placement problem that depends on a large number of factors including, e.g., desired goals, storage and network characteristics, and pricing policies. In addition, dynamics, e.g., changing user locations and access patterns, make it impossible to determine the best data placement statically. In this paper, we present TripS, a lightweight system that considers both data center locations and storage tiers to determine the data placement for geo-distributed storage systems. Such systems make use of TripS by providing inputs including SLA, consistency model, fault tolerance, latency information, and cost information. With the given inputs, TripS models and solves the data placement problem using mixed integer linear programming (MILP) to determine the data placement. In addition, to adapt quickly to dynamics, we introduce the notion of the Target Locale List (TLL), a pro-active approach to avoid expensive re-evaluation of the optimal placement. The TripS prototype runs on Wiera, a policy-driven geo-distributed storage system, to show how a storage system can easily utilize TripS for data placement. We evaluate TripS/Wiera on multiple data centers of AWS and Azure. The results show that TripS/Wiera can reduce cost by 14.96%-98.1% depending on the workload in comparison with prior approaches, and can handle both short- and long-term dynamics to avoid SLA violations.

CCS Concepts
Information systems: Cloud based storage; Distributed storage; Hierarchical storage management.

Keywords
Data placement; multi-tiered storage; multi-DC storage

SYSTOR '17, May 22-24, 2017, Haifa, Israel. (c) 2017 ACM.

1. INTRODUCTION
Many cloud providers offer diverse storage options with different characteristics and pricing policies that can be used by applications to meet their storage needs. For example, Amazon Web Services (AWS) offers many storage services¹, such as ElastiCache, S3, EBS, and Glacier. These services vary in their I/O latency, durability, and cost, providing cloud applications with multiple storage options to serve their users. In addition, there has been a growth in the number of data centers (DCs) being deployed in diverse geographical locations. For instance, as of Jan 2017, Amazon has DCs in 16 regions (and numerous Edge locations) [5] and Microsoft has DCs in 26 regions [6]. Thus, besides offering multiple storage services, these geo-distributed DCs provide cloud applications with a further possibility of selecting one or more locations for storing their data.

Many popular Internet services, e.g., Twitter and Netflix, have built multi-tiered storage systems (or components) running on multiple data centers [27, 18] to serve their users with such diverse storage options. In fact, applications can even exploit different cloud providers' storage services for reduced cost or better fault tolerance [2]. A key problem in a multi-DC, multi-tier environment is data placement: determining which locations and which storage tiers to place data (replicas) on. Determining the best data placement in such an environment is challenging due to a large number of factors: 1) the application's desired goals, such as cost, performance, and fault tolerance; 2) network characteristics, such as DC locations, inter-DC network latencies/bandwidths, and network pricing; 3) storage characteristics, such as data models, I/O performance, interfaces, and storage pricing policies; and 4) workload characteristics, such as the number of requests and data popularity. As cloud providers offer even more DC locations and introduce new storage services, data placement will become even more challenging. Further, dynamic changes to both workloads (e.g., changes in data access patterns and locations) and the environment (e.g., network and data center failures, variations in network and storage performance) make it impossible to determine the best data placement statically.

While several efforts have considered the data placement problem in a geo-distributed storage environment [1, 32, 2, 4], they have not considered the possibility of exploiting multiple storage tiers, which can have a significant impact on metrics such as storage cost and performance.

¹ In this paper, we use the terms storage service and storage tier interchangeably.

Recent work [26] has focused on data management across multiple storage tiers within a single DC, which may not be sufficient for a multi-DC environment, e.g., to achieve desired fault tolerance or to serve a dispersed set of end-users. We argue that data placement in a geo-distributed cloud environment must consider both multiple locations as well as multiple tiers together, to allow for a rich set of storage policies across cost-performance-reliability dimensions [2, 19].

To address these problems, we present TripS (Storage Switch System), a system that optimizes data placement by considering both DC locations and storage tiers. We have designed TripS to be lightweight so that it can be used with any storage system running in a multi-cloud environment [19, 8, 1]. Applications that use a TripS-enabled storage system can make use of TripS by simply providing their high-level goals, e.g., performance SLAs, consistency models, and desired degree of fault tolerance. TripS uses network and storage cost information, along with monitoring information about user access patterns, inter-DC network latencies, and storage tier I/O latencies, to optimize data placement. With the given inputs, the data placement problem is modeled as a constrained optimization problem and solved using mixed integer linear programming (MILP) in TripS. While TripS can be programmed to optimize different metrics such as cost, performance, or reliability, in this paper we focus on minimizing cost while satisfying latency bounds and fault tolerance requirements.

TripS-enabled storage systems can handle network and workload dynamics at two levels. First, they can have TripS recompute the optimal data placement at coarse time granularities to incorporate long-term changes in system or workload characteristics. Second, to adapt quickly to dynamics as well as to handle short-term dynamics such as transient failures or overloads, we introduce the notion of the Target Locale List (TLL), a pro-active approach to avoid expensive re-evaluation of the optimal placement. A TLL is a list of multiple feasible placement options (those that satisfy the SLA requirements from any accessing location) computed a priori by TripS as part of its optimization. It uses the parameter locale count (LC), which enables applications to trade off cost with performance and/or fault tolerance by using faster storage tiers and/or having additional replicas. The TLL allows applications to utilize (and switch between) these options at run-time to avoid SLA violations, without requiring the storage system to migrate data.

We evaluate the TripS prototype using Wiera [19], a policy-driven key-value storage system for multi-cloud environments, on multiple AWS and Azure DCs to show its efficacy and benefits. We extended Wiera to use TripS and to apply the optimized data placement. The main contributions of this paper are:

- The design and implementation of TripS, the first system that optimizes data placement with a consideration of both DC locations and storage tiers in multi-cloud environments.
- Modeling and solving the data placement problem as a constrained optimization problem using mixed integer linear programming (MILP), enabling underlying storage systems to handle coarse time-scale dynamics through re-evaluation of the optimal placement.
- Introducing the notion of the Target Locale List (TLL), a pro-active approach that enables underlying storage systems to handle short time-scale dynamics without the need to re-evaluate the data placement decision or move data at run-time.
- An empirical evaluation of TripS using the Wiera multi-cloud storage system, in an AWS and Azure cloud environment, showing that TripS can help an application achieve its desired goals with minimized cost even in the presence of dynamics, e.g., lowering cost by 14.96%-98.1% depending on the workload and significantly reducing SLA violations with minimal extra cost.

2. SYSTEM MODEL
2.1 Storage System Model
We consider a federated cloud-based geo-distributed storage system (GDSS) spanning multiple data centers (DCs) located across different geographic regions. These DCs could belong to the same or different cloud providers. Further, each geographic region may contain multiple DCs (belonging to one or more cloud providers) located close to each other (i.e., having low inter-DC latency). Examples of a GDSS are Wiera [19], SPANStore [32], SCFS [8], and RACS [1].

Each DC supports multiple storage tiers with different characteristics in terms of performance, durability, and cost. For instance, in AWS, an application can get better performance from EBS-io1 but at a higher cost compared to other storage tiers, while S3 can provide cheaper storage but at a higher latency. Thus, applications may use multiple storage tiers for their composite benefits to achieve their desired goals [2, 19]. We assume the GDSS provides an interface for applications to access data from multiple DCs and tiers.

We consider an object storage model where data is managed as objects [16]. This model enforces an explicit separation of data and metadata, enabling unified access to data distributed among the different storage services and DCs. We assume that the GDSS supports operations (Get and Put) to access objects using a globally unique identifier that acts as a key. In addition, we assume that a GDSS manages metadata for each object, e.g., size, access frequency, location/storage tier, and time of last access.

2.2 Application Model
We consider latency-sensitive applications that use a GDSS to provide reduced user-perceived latency and better data availability to users across different geographic regions. We assume application instances run on multiple geo-distributed DCs. We also assume that GDSS servers run on each DC to interface application data accesses with the cloud storage services. An application instance can access data from the GDSS by connecting to any GDSS server (typically the closest one running on the same DC), which can provide access to the requested data (either directly if stored on the same DC, or indirectly from another DC). We assume that applications provide high-level goals, e.g., SLA, consistency model, and degree of fault tolerance, to a GDSS through interfaces. In this work, we consider data access latencies between the application instances and the storage system instead of from the end-users as done in other systems [32, 26]. Assigning user requests to appropriate application instances is out of scope for this work, and prior techniques [2, 4] could be utilized for this.

2.3 Data Placement Problem
We define a locale as a {DC-location, storage-tier} tuple, e.g., {Amazon US East, S3}². The data placement problem consists of determining a set of locales (DC locations and corresponding storage tiers) where data should be placed (replicated) among all available locales, in order to satisfy the application requirements (SLA, degree of fault tolerance, etc.). In this paper, we consider the goal of minimizing the total cost (both storage and I/O costs).

² In the rest of the paper, we omit the name of the cloud provider unless required.
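To make these notions concrete, the following minimal sketch shows one way an application's high-level goals and a locale could be represented when talking to a TripS-enabled GDSS. The class and field names are illustrative assumptions for the concepts above (locale, SLA, F, LC, consistency), not an actual TripS or Wiera interface.

from dataclasses import dataclass

# Hypothetical representation of the concepts in Sections 2.2-2.3;
# TripS/Wiera do not necessarily use these exact names.

@dataclass(frozen=True)
class Locale:
    dc: str        # DC location, e.g., "US East"
    tier: str      # storage tier in that DC, e.g., "S3" or "EBS-st1"

@dataclass
class AppGoals:
    get_sla_ms: float      # average Get latency bound
    put_sla_ms: float      # average Put latency bound
    fault_tolerance: int   # F: max simultaneous DC faults tolerated
    consistency: str       # "eventual" or "strong"
    locale_count: int      # LC: feasible locales per access location (> 0)

# A data placement is simply a set of locales where replicas are stored.
placement = {Locale("US East", "EBS-st1"), Locale("EU West", "S3")}
goals = AppGoals(get_sla_ms=100, put_sla_ms=200,
                 fault_tolerance=1, consistency="eventual", locale_count=2)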

3. TRIPS DATA PLACEMENT SYSTEM
TripS is a lightweight data placement system that can support a GDSS that needs to make data placement decisions on behalf of its client applications. In principle, TripS can run with any GDSS that can provide the information needed to evaluate placement decisions. Figure 1 shows how TripS works with a GDSS. TripS makes data placement decisions, which are enacted by the GDSS, which then places the data at the desired locales. Applications use the TripS-enabled GDSS for data access (storing and retrieving the data). Note that applications only interact with the GDSS, and do not communicate directly with TripS, so that TripS is not on the data path of application accesses.

Figure 1: How TripS works with a GDSS. (The GDSS passes application goals, cost information, and the outputs of its storage tier latency, network latency, and workload monitors to the TripS interface; the TripS data placement optimizer returns the data placement and TLL, while Get and Put requests flow only through the GDSS user interface.)

Unlike other systems, TripS tries to find an optimized data placement that considers both DC locations and their multiple storage tiers simultaneously across different cloud providers. TripS can be programmed to optimize for different objectives, e.g., minimizing cost or minimizing latency. In this work, we focus on the objective of minimizing cost while meeting an SLA (both performance and availability). TripS models the data placement problem as a constrained optimization problem (Section 3.1) that takes a set of inputs (Section 3.1.1) based on application requirements and workload and environment characteristics. Given these inputs, the TripS optimizer outputs a desired data placement consisting of a list of locales ({DC-location, storage-tier} tuples) where data will be stored. TripS enables the GDSS to handle dynamics through re-evaluation of the optimal solution at coarse time scales (Section 3.2.1). At the same time, it provides the notion of the Target Locale List (TLL) (Section 3.1.2) to adapt quickly to dynamics at short time scales (Section 3.2.2).

3.1 Data Placement Decision
3.1.1 TripS Inputs and Output
Inputs: TripS requires four types of inputs: application goals, network and storage monetary cost, performance monitoring information, and workload information. Table 1 shows the inputs that TripS uses.

Table 1: Inputs for TripS
  D                  Set of DCs
  D_i^S              Set of storage tiers in DC i
  C_ij^network       Network cost between DC i and DC j
  C_it^storage       Provisioned storage cost for storage tier t in DC i
  C_it^get/put_req   Get/Put request cost for storage tier t in DC i
  C_it^ret/write     Data retrieval/write cost from/to storage tier t in DC i
  SLA^get/put        Get/Put operation SLA from each DC
  LC (> 0)           Locale count in the TLL that can be accessed within the SLA from each DC location
  F                  Minimum number of DC faults handled
  Consistency        Consistency model
  Size               Average object size
  Center             Centralized DC location for a global lock (in strong consistency)
  L_ij^network       Network latency from DC i to DC j
  L_it^get/put       Get/Put latency for storage tier t in DC i
  A_i^get/put        Number of Get/Put requests for DC i

Application Desired Goals: TripS requires applications to provide three types of desired goals. First is an SLA consisting of average latencies for Get/Put operations. Second is the degree of fault tolerance F, i.e., the maximum number of simultaneous DC faults tolerated. Third is a consistency model. Currently, TripS supports only two consistency models, strong and eventual; supporting other well-known consistency models is left as future work. Another input parameter, the locale count (LC), is the number of feasible placement options desired to handle short time-scale dynamics.

TripS requires the following information.
Cost information: The pricing for the network and storage services of all DCs that the GDSS may want to use, as well as the inter-DC network transmission costs.
Latency information: The intra-DC latency of access to each storage tier, as well as the inter-DC network latency.
Workload information: The access patterns (number of requests from each location) and the average object size for requests.

Output: Given these inputs, TripS computes the data placement consisting of the set of locales where data should be placed. In principle, TripS can determine data placement for any granularity of data (e.g., from a single data object to a large data collection) with tolerable overhead. In this paper, we evaluate TripS on a coarse placement of data (i.e., the entire data set for an application, as in other systems [2]), and leave fine-grained placement, e.g., data placement per object or per object class, to future work. In addition, TripS also computes a target locale list (TLL), which we discuss next.

3.1.2 Target Locale List
We introduce the notion of the target locale list (TLL) as a pro-active mechanism to handle dynamics in an agile manner at short time-scales. The main idea is for TripS to generate multiple feasible placement options (instead of just one placement) that all nominally satisfy the application SLA requirements (based on the current or average dynamics, but subject to future change). This enables the application to adapt quickly if one of the locales selected for data placement starts violating the SLA, without the need for a data re-placement or migration.
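As a concrete illustration of this output, the sketch below shows one possible in-memory form of a placement and a TLL, mirroring the style of the Figure 2 example shown later; the variable and function names are illustrative rather than part of TripS.

# Hypothetical in-memory form of TripS output (names are illustrative).
# The placement is the set of locales holding replicas; the TLL maps each
# access DC to an ordered list of LC locales that meet the SLA from there.
placement = [("US East", "EBS-st1"), ("US West", "EBS-gp2"),
             ("CA Central", "EBS-st1"), ("Asia SE", "S3"), ("EU West", "S3")]

tll = {
    "US East": [("US East", "EBS-st1"), ("US West", "EBS-gp2")],
    "Asia NE": [("US West", "EBS-gp2"), ("Asia SE", "S3")],
    # ... one entry per DC location running application instances
}

def feasible_locales(access_dc, tll):
    """Locales that nominally satisfy the SLA from access_dc; the GDSS later
    orders them by monetary cost when serving requests (Section 3.2.2)."""
    return tll[access_dc]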

The target locale list (TLL) consists of multiple entries, one for each data access location (i.e., a DC location running application instances that will access data). Each entry in the list contains the set of locales that all meet the SLA from that DC access location. The number of locales specified per DC location is determined by the LC parameter. Thus, each DC location can have multiple choices of locales that can be accessed without SLA violation if LC > 1. Note that while the fault tolerance parameter F controls the number of replicas for availability, LC additionally controls the number of locales that all satisfy the SLA.

The application can use the value of LC to achieve a desired tradeoff between cost and the likelihood of meeting its SLA. For higher values of LC, data would have to be placed on more (or faster) locales to provide enough feasible options that satisfy the SLA from each DC location. This could result in higher cost, but the SLA will be satisfied more often and more consistently. On the other hand, lower values of LC (especially LC = 1) will result in lower costs but might result in more frequent violations of the SLA.

Figure 2 shows an example of a data placement and TLL (with LC = 2). The data placement consists of the locales where the data should be placed (replicated). Locales in the TLL provide a high degree of assurance that they will satisfy the desired SLA for each DC location. For example, a GDSS server running on Asia NE can access data stored on Asia SE (S3) and US West (EBS-gp2) for both Get and Put requests without SLA violation. In Section 3.2.2, we discuss in detail how the GDSS can use the multiple options in the TLL at runtime to avoid SLA violations.

Figure 2: TripS output example with LC = 2.
  Data Placement: {US East, EBS-st1}, {US West, EBS-gp2}, {CA Central, EBS-st1}, {Asia SE, S3}, {EU West, S3}
  Target Locale List (TLL):
    US East:    (1) {US East, EBS-st1}    (2) {US West, EBS-gp2}
    US West:    (1) {US West, EBS-gp2}    (2) {US East, EBS-st1}
    CA Central: (1) {CA Central, EBS-st1} (2) {US East, EBS-st1}
    EU West:    (1) {EU West, S3}         (2) {US East, EBS-st1}
    Asia SE:    (1) {Asia SE, S3}         (2) {US West, EBS-gp2}
    Asia NE:    (1) {US West, EBS-gp2}    (2) {Asia SE, S3}
    Asia South: (1) {US East, EBS-st1}    (2) {EU West, S3}
    SA East:    (1) {CA Central, EBS-st1} (2) {US East, EBS-st1}

3.1.3 Optimization Problem Formulation
Given the inputs, we formulate the data placement problem as a constrained optimization problem, which we solve using mixed integer linear programming (MILP). The details of the formulation are as follows.

Variables: We define three sets of output variables:
- T_ijt, for i, j in D and t in D_j^S, are binary variables (0 or 1): if 1, data can be retrieved from or written to storage tier t in DC j from DC i within the SLA (with a consideration of the extra latency for a global lock and data distribution under strong consistency).
- P_it, for i in D and t in D_i^S, are binary variables (0 or 1): if 1, data will be stored (replicated) in storage tier t in DC i.
- B_ijkt, for i, j, k in D and t in D_k^S, are binary variables (0 or 1): if 1, DC j will send an update to storage tier t in DC k when DC i sends a Put request to DC j.

Objective: Minimize Total cost = Get cost + Put cost + Broadcast cost + Storage cost, where

  Get cost       = \sum_i A_i^{get} \sum_{j,t} T_{ijt} ( Size \cdot ( C_{ij}^{network} + C_{jt}^{ret} ) + C_{jt}^{get\_req} )
  Put cost       = \sum_i A_i^{put} \sum_{j,t} T_{ijt} ( Size \cdot ( C_{ij}^{network} + C_{jt}^{write} ) + C_{jt}^{put\_req} )
  Broadcast cost = \sum_i A_i^{put} \sum_{j,k,t} B_{ijkt} ( Size \cdot ( C_{jk}^{network} + C_{kt}^{write} ) + C_{kt}^{put\_req} )
  Storage cost   = \sum_{i,t} P_{it} \cdot Size \cdot C_{it}^{storage}

Here, we compute the Get and Put costs as the data retrieval and write costs based on the number of requests, the estimated object sizes, the inter-DC network cost, and the intra-DC storage tier access and request costs. The Broadcast cost is the cost of broadcasting updates to all replicas and is based on the number of Put operations along with the cost of propagating the writes to other DCs. The Storage cost is the cost of storing data and is computed based on the storage price and the amount of data stored at each storage tier.

Constraints:
- Number of locales in the TLL: \forall i \in D: \sum_{j,t} T_{ijt} = LC
- Minimum number of replicas for availability: \sum_{i,t} P_{it} \geq F + 1
- At most one storage tier in each DC: \forall i \in D: \sum_t P_{it} \leq 1
- Latency SLA constraint (eventual consistency): \forall i, j \in D, t \in D_j^S: if T_{ijt} = 1 then L_{ij}^{network} + L_{jt}^{get/put} \leq SLA^{get/put}
- Latency SLA constraint (strong consistency): \forall i, j, l \in D, t \in D_j^S: if T_{ijt} = 1 then L_{ij}^{network} + L_{jt}^{get/put} + 2 \cdot L_{j,Center}^{network} + \delta^{put} \cdot \max_l ( L_{jl}^{network} ) \leq SLA^{get/put}, where \delta^{put} indicates whether this is a Put request.

3.2 Handling Dynamics
3.2.1 Placement Re-evaluation
TripS may re-evaluate the optimal data placement if the GDSS detects sustained changes in application workloads (e.g., access patterns, location of request origins) or the environment (e.g., network latencies, failures) that can compromise the application's goals. Alternately, TripS could periodically re-evaluate the data placement to handle potential dynamics, as done in other systems [32, 2]. Re-evaluating a new data placement can be expensive, as solving the optimization problem incurs additional overhead. In addition, frequent re-evaluation of the data placement can cause unnecessary data migration, which is expensive. To prevent TripS/GDSS from thrashing in response to short-time dynamics, a GDSS can set a period threshold to determine whether to re-evaluate the data placement, ensuring the dynamics are not transient. The handling of short-term dynamics is discussed below.
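As described later in Section 4.2, the prototype models this program with PuLP and solves it with CPLEX. The following is a minimal sketch of the Section 3.1.3 formulation for the eventual-consistency case, written against PuLP's bundled CBC solver; the DC list, cost, latency, and workload tables are placeholder assumptions, and the broadcast variables B_ijkt, the retrieval/write costs, and the strong-consistency latency term are omitted for brevity.

import pulp

# Placeholder inputs (Section 3.1.1); real values come from the GDSS monitors.
DCS = ["US East", "EU West", "Asia SE"]
TIERS = {dc: ["S3", "EBS-st1"] for dc in DCS}
SIZE_GB = 0.008                              # average object size
A_GET = {dc: 10000 for dc in DCS}            # Get requests per DC
A_PUT = {dc: 1000 for dc in DCS}             # Put requests per DC
LC, F = 1, 1
SLA_MS = 200.0

# Hypothetical cost/latency tables keyed by DC (and tier where relevant).
C_NET = {(i, j): 0.0 if i == j else 0.02 for i in DCS for j in DCS}     # $/GB
C_STORE = {(j, t): 0.023 if t == "S3" else 0.045 for j in DCS for t in TIERS[j]}
C_REQ = {(j, t): 4e-7 if t == "S3" else 0.0 for j in DCS for t in TIERS[j]}
LAT = {(i, j, t): (12.0 if i == j else 150.0) for i in DCS for j in DCS for t in TIERS[j]}

prob = pulp.LpProblem("trips_placement", pulp.LpMinimize)
idx = [(i, j, t) for i in DCS for j in DCS for t in TIERS[j]]
T = pulp.LpVariable.dicts("T", idx, cat="Binary")
P = pulp.LpVariable.dicts("P", [(j, t) for j in DCS for t in TIERS[j]], cat="Binary")

# Objective: Get/Put access cost + storage cost (broadcast cost omitted).
prob += pulp.lpSum(
    (A_GET[i] + A_PUT[i]) * T[(i, j, t)] * (SIZE_GB * C_NET[(i, j)] + C_REQ[(j, t)])
    for (i, j, t) in idx
) + pulp.lpSum(P[(j, t)] * SIZE_GB * C_STORE[(j, t)] for j in DCS for t in TIERS[j])

for i in DCS:
    # Exactly LC feasible locales per access DC.
    prob += pulp.lpSum(T[(i, j, t)] for j in DCS for t in TIERS[j]) == LC
for (i, j, t) in idx:
    prob += T[(i, j, t)] <= P[(j, t)]            # can only access stored replicas
    if LAT[(i, j, t)] > SLA_MS:                  # SLA pre-filter (eventual consistency)
        prob += T[(i, j, t)] == 0
prob += pulp.lpSum(P[(j, t)] for j in DCS for t in TIERS[j]) >= F + 1   # availability
for j in DCS:
    prob += pulp.lpSum(P[(j, t)] for t in TIERS[j]) <= 1                # one tier per DC

prob.solve(pulp.PULP_CBC_CMD(msg=False))
placement = [(j, t) for (j, t), var in P.items() if var.value() == 1]
print("placement:", placement)

The conditional SLA constraint is linearized here by simply forcing T_ijt to 0 for any locale whose measured latency already exceeds the SLA, which matches the conditional form of the constraint above.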

When a new data placement is very different from the previous one, data migration might be required to minimize cost. However, migrating old data can lead to significant cost in a GDSS. In this work, we assume that data migration is handled by the underlying GDSS. That is, the GDSS determines whether to migrate data or not when it gets a new data placement from TripS. In our prototype, we use lazy reactive data migration, in which migration is triggered when data stored on the previous locale is accessed, and leave proactive data migration strategies to future work.

3.2.2 Locale Switching with TLL
While placement re-evaluation handles dynamics at coarse time-scales, it is desirable to achieve SLA goals even in the presence of dynamics at short time-scales. As discussed in Section 3.1.2, a TLL consists of locales that can all nominally satisfy the SLA. The GDSS can thus switch locales at runtime to avoid SLA violations due to short-term dynamics. When a request arrives at the GDSS, it finds the cheapest (minimum monetary cost) locale using the TLL and cost information. If it detects that a violation could happen using this tier, based on latency information, it then finds the next cheapest locale in the TLL, and so on.

For example, Figure 3 shows how the GDSS server running on Asia NE (from Figure 2) can access data without SLA violations in the presence of dynamics. To handle requests, the server first accesses US West EBS-gp2, which yields the cheapest cost due to the cheaper outbound network of the US West DC and the zero request cost of EBS-gp2. If the server detects SLA violations from US West EBS-gp2 due to dynamics, it then accesses Asia SE S3 to avoid SLA violations.

Figure 3: Locale switching example. (Asia NE (1) first accesses {US West, EBS-gp2}; (2) it accesses {Asia SE, S3} if violations occur in US West.)

Note that applications cannot avoid the penalty introduced by dynamics for Put requests if they want to achieve strong consistency, i.e., data needs to be updated synchronously at all locales. This problem for Put requests can be relaxed by changing the consistency model to a weaker one, e.g., eventual consistency, as shown in our previous work [19].

Having multiple locales in the TLL allows applications using a TripS-enabled GDSS to trade off cost with performance in the presence of dynamics over short time-scales. One benefit is that this pro-active placement can reduce the cost of dynamic re-evaluation of placement. This is particularly true for transient dynamics.

4. TRIPS IMPLEMENTATION
We have implemented TripS on top of the Wiera [19] GDSS. Wiera manages the underlying storage and interacts directly with applications to provide data access. Wiera relies upon TripS to make automated data placement decisions that Wiera enacts. We begin by providing a brief description of Wiera as background. Readers may consult the Wiera paper [19] for additional details. Note that Wiera is just an example of a GDSS to show how a GDSS can utilize and interact with TripS for data placement decisions. Any GDSS or application can utilize TripS based on its requirements, as mentioned in Section 3.

Table 2: TripS API
  API       | Arguments              | Function
  set_cost  | cost information       | Set cost information
  set_goals | application goals      | Set application goals
  evaluate  | monitoring information | Evaluate data placement

4.1 Wiera Geo-distributed Storage System
Wiera is a policy-driven key-value storage system for a multi-cloud environment. Wiera provides a flexible framework for application developers to easily specify storage policies with which applications can exploit multiple storage tiers across multiple DCs (even across different providers).

The client of Wiera is shielded from the underlying complexity introduced by the multiple storage tiers of multiple cloud providers by a simple Get/Put API and the encapsulation of storage policies. In Wiera, an application can create a global Wiera storage instance encompassing multiple DCs. Each Wiera instance is comprised of several local Wiera instances, each running within a DC.

Local Wiera instance: The local instance [22] encapsulates multiple cloud storage tiers within a DC and enables easy specification of a rich array of data storage policies to achieve desired tradeoffs. An event-response mechanism is used to express policies and manage data within a local instance. An event is the occurrence of some condition, and a response is the action executed on the occurrence of an event. A local instance supports different kinds of events such as timer, threshold, and action events (Get and Put). It supports responses such as store, retrieve, copy, move, encrypt, compress, delete, and grow to react to the events.

Global Wiera instance: While the local instance is responsible for managing data on multiple storage tiers within a single DC, the global Wiera instance manages the data placement and data movement across multiple local instances running on geo-distributed DCs. Wiera supports global policies by leveraging the local policy framework within each local instance. Applications can launch and manage local instances in multiple regions, and can enforce a global data management policy between them through Wiera. Wiera supports events (LatencyMonitoring, RequestsMonitoring, and ColdDataMonitoring) and responses (forward, queue, and change consistency) to support policies for handling dynamics in a multi-cloud environment, e.g., access pattern changes.

4.2 TripS Interfaces and Execution
Table 2 shows the TripS API. In our prototype implementation, Wiera sends cost information, application goals, and monitoring information, e.g., network latency, storage latency, and access patterns, through the TripS API, which is declared and implemented with Thrift [3], to make a new data placement decision. TripS can be executed as a standalone server, but it runs alongside the global Wiera instance server in this work.
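To illustrate how a GDSS drives this interface, the sketch below mimics the three calls of Table 2 as plain Python. The prototype declares them as a Thrift service, so the argument layouts and return shape here are assumptions for illustration; only the three call names come from Table 2.

class TripSClient:
    """Illustrative stand-in for the Thrift-declared TripS service (Table 2).
    Argument layouts are assumptions; only the call names come from the paper."""
    def set_cost(self, cost_info):
        self.cost_info = cost_info      # network/storage pricing for all DCs and tiers
    def set_goals(self, goals):
        self.goals = goals              # Get/Put SLA, F, LC, consistency, object size
    def evaluate(self, monitoring_info):
        # The real service runs the MILP of Section 3.1.3 and returns the
        # data placement plus the TLL; here we return empty placeholders.
        return [], {}

# How a global Wiera instance might drive it when re-evaluation is triggered:
trips = TripSClient()
trips.set_cost({"network": {}, "storage": {}})
trips.set_goals({"get_sla_ms": 100, "put_sla_ms": 200, "F": 1, "LC": 2,
                 "consistency": "eventual", "size_kb": 8})
placement, tll = trips.evaluate({"latency": {}, "requests": {}})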

TripS uses PuLP [17] (a toolkit for linear programming in Python) to model the data placement problem as a mixed integer linear program (MILP) and uses the CPLEX solver [11] to solve the optimization problem.

Figure 4: New policies (responses) for TripS.
  Wiera ChangeDataPlacement() {
      // RequestsMonitoringEvent
      event(forwarded_updates >= updates_from_applications && threshold.period > 3 seconds) :
          response { request_data_placement() }
  }
  Wiera SwitchStorageTier() {
      // LatencyMonitoringEvent
      event(threshold.latency <= operation.latency && threshold.period > 10 seconds) :
          response { set_tier_violation(threshold.tier) }
  }

4.3 Wiera Extensions
We have added a few monitoring components (for monitoring information) and additional events/responses (for handling dynamics) to Wiera to enable it to enact the data placement via the TripS API. Wiera now exposes APIs, e.g., set_cost() and set_goals(), that forward the cost information and the application goals to TripS.

As discussed above, a Wiera global instance consists of multiple local instances. Based on the data placement decision, it is possible that only a subset of these local instances may store data at any point in time. In what follows, we use active instance to refer to a local instance that is participating in the current data placement and inactive instance to one that is not (i.e., it is available but is not currently chosen to store data).

4.3.1 Monitoring Components
A few monitoring components have been added to Wiera to utilize TripS. Note that these could be provided either by the GDSS (as now in Wiera) or by an external monitoring service that the GDSS relies upon.

Network latency monitoring between DCs: For network latency information between DCs, local instances periodically send ping messages to each other to estimate the network latency between them.

Storage latencies and workload information: Wiera has a monitor to check the latency of each Get/Put request and the number of requests for each object. To handle both short and coarse time-scale dynamics, each instance needs to know the other instances' storage tier latencies and numbers of requests. To this end, an instance's local storage tier latency information and number of requests are exchanged by piggybacking them on the responses to the ping messages.

Background storage latency monitoring: For TripS to work well, the storage latency of all tiers must be kept up to date. In Wiera this is done automatically for all active instances that are used and accessed (thus, the monitoring incurs no additional cost). However, inactive instances (and tiers) will not have a chance to be accessed, as requests are handled by other instances (and storage tiers). To avoid outdated latency information for inactive instances, a dedicated thread in each local instance periodically checks the latency of its storage tiers by sending empty Put and Get requests to them. Since some storage tiers, e.g., S3, charge for requests, Wiera needs to set this period carefully to reduce the monitoring cost.

4.3.2 Event and Response
Figure 4 shows the policies used to handle dynamics in TripS/Wiera. We use two Wiera events, 1) RequestsMonitoring and 2) LatencyMonitoring, to handle both coarse and short time-scale dynamics. We let RequestsMonitoring monitor the number of application requests from each local instance and notify the global Wiera instance if there is a substantial change. Specifically, a change to the data placement is triggered when the instance receiving the highest number of requests within a specific time period has changed. When the instance monitoring the RequestsMonitoring event sees the changes sustained for a time period greater than a threshold, it asks the global Wiera instance to re-evaluate the data placement through the newly added request_data_placement response.
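The background probing described in Section 4.3.1 can be pictured as a small periodic task like the sketch below; the function, probe key, and interval are illustrative assumptions rather than Wiera's actual monitoring code.

import threading
import time

def background_latency_probe(tiers, record_latency, period_s=60.0):
    """Periodically measure Get/Put latency of otherwise-idle storage tiers.

    tiers: dict mapping tier name -> object with get(key)/put(key, value).
    record_latency: callback(tier_name, op, latency_ms) feeding the latency monitor.
    period_s: probe interval; kept long because some tiers (e.g., S3) charge per request.
    """
    def loop():
        while True:
            for name, tier in tiers.items():
                start = time.time()
                tier.put("__probe__", b"")                    # empty Put request
                record_latency(name, "put", (time.time() - start) * 1000)
                start = time.time()
                tier.get("__probe__")                         # empty Get request
                record_latency(name, "get", (time.time() - start) * 1000)
            time.sleep(period_s)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t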
Then, the global Wera nstance calls the evaluate() functon of TrpS. To handle short tme-scale dynamcs, all local nstances montor local storage ters latency va LatencyMontorng. We added a set ter volaton response whch marks a storage ter that currently causes SLA volatons greater than specfc perod threshold. When nstances handle requests, they avod accessng the storage ters that have a mark. Those storage ters can be (re-)accessed when the mark s removed by the background storage latency montor. 4.4 Handlng Requests In ths secton, we descrbe how Wera works wth the data placement and the TLL generated by TrpS to handle Put and Get requests, and to adapt to short tme-scale dynamcs. Get Requests: When a local nstance receves a Get request, t fnds the cheapest locale from the TLL. Typcally, t retreves data from the local storage ter f the nstance has data stored locally, otherwse, t smply forwards the request to a locale n the TLL that offers the mnmum cost. Put Requests: In natve Wera, any local nstance that receves a Put request dstrbutes the update to all other nstances. In TrpS-enabled Wera, only selected actve nstances need to store data to mnmze cost and hence, only these need to be updated. All Put requests are handled by locales n the TLL. When an nstance receves a Put request from an applcaton, t checks the TLL to fnd the cheapest locale to store the data. Ths ntal locale selecton consders the subsequent costs that must be pad to propagate updates from ths ntal locale. When an actve nstance handles a Put request (from applcatons or other nstances), t dstrbutes the update to all other nstances. It s possble that t may forward the request to another nstance n the TLL f dong so s cheaper than wrtng locally and dstrbutng the update to other nstances,.e., when the local DC s outbound network cost s expensve 3. To mnmze network cost, only metadata ncludng key, sze, access frequency, locale nformaton, verson (f supported), and last access tme s sent to nactve nstances as they do not need to store data but need to know data locatons to redrect ther Get requests. Swtchng Storage Ters: Locales n TLL nomnally satsfy SLA goals as mentoned n Secton To handle requests wth mnmum cost, local nstances fnd and use the cheapest locale n TLL usng cost nformaton. If a local nstance detects an SLA volaton (marked by set ter volaton) for the cheapest locale, the nstance fnds (swtches to) the next cheaper locale (possbly a nearby DC s storage ter) at run-tme to avod SLA volaton. It can swtch back to the cheaper locale based on updated montor- 3 Ths s smlar to relayed update propagaton n SPANStore.

Figure 5: Optimized storage tier selection by TripS with minimized cost, for workloads 1 and 2. Costs are normalized to the TripS cost.

Since Wiera migrates data lazily, after a placement re-evaluation some Get requests may initially be served from locales in the old TLL, while Put requests are always served from locales in the current TLL.

5. EXPERIMENTAL EVALUATION
We evaluated the TripS prototype on Wiera in Amazon AWS and Microsoft Azure. For AWS, we used DCs across 8 regions: US East (Virginia), US East 2 (Ohio), US West (North California), US West 2 (Oregon), CA Central (Montreal), Europe West (Ireland), Asia Southeast (Singapore), and Asia Northeast (Tokyo). For Azure, we used DCs in 3 regions: US East, US West, and EU South. All application instances (clients) run in AWS. Due to network cost differences, e.g., Amazon charges $0.02/GB for outbound traffic to other Amazon DCs and $0.09/GB to the Internet, and Microsoft charges $0.087-$0.181/GB based on the destination, we find that TripS typically chooses to store data in Amazon's DCs. Therefore, we show results that include only Amazon's DCs, except for one scenario where we are able to utilize both AWS and Azure together.

The TripS and global Wiera instance servers run in the Amazon US East (Virginia) region, while local Wiera instances run in all the regions. We used an AWS t2.medium instance (2 vCPUs, 4 GB of RAM) for TripS/Wiera to have more CPUs for CPLEX and the MILP solver. For local Wiera instances, we used EC2 t2.micro instances with 1 vCPU, 1 GB of RAM, 16 GB of EBS storage, 500 GB of EBS-st1, and 2 GB of EBS-gp2 unless mentioned otherwise. Note that TripS does not cause any overhead to the underlying GDSS, as it is not involved in the data path. The time for computing the data placement has a negligible impact on the overall cost: TripS can solve the optimization problem in 1.3 seconds on a t2.medium instance for our experiment setting of 8 locations and 3 storage tiers per DC.

For the workloads, we use both workload A, an update-heavy workload (50% Put and 50% Get), and workload B, a read-mostly workload (5% Put and 95% Get), derived from the Yahoo Cloud Serving Benchmark [9]. We mainly show results with workload B, as we see a similar pattern of results from both. Likewise, we mainly show results with eventual consistency due to space constraints. For EBS-st1, to avoid the OS buffer cache effect, we assign a latency penalty (10 ms) as reported by others [3]. This is a reasonable penalty, as the disk seek times we measured for EBS-st1 with a system performance benchmarking tool [25] are 29.51 ms and 38.9 ms at the 95th percentile for random reads and writes (9:1 and 5:5 ratios respectively). For EBS-gp2, we do not assign any latency penalty, as its seek time is less than 1 ms. For comparison purposes, we simulate SPANStore with TripS by allowing TripS to use only a single storage tier in each DC, e.g., either only S3 or only EBS-st1. Lastly, all cost information we use in this paper is as of Feb 2017.

5.1 Optimizing Data Placement
In this section, we show how TripS chooses locales for a diversity of access patterns and data sizes. In this experiment, we consider two scenarios: 1) latency-sensitive Web applications that use mostly small and frequently accessed data, and 2) data analytics applications that mostly use large and infrequently accessed data.

We use two simulated workloads with eventual consistency: 1) 8 KB average data size with 1, Get accesses and 1, Put accesses from each of the 8 DC locations for the Web application scenario, and 2) 1 MB average data size with 1, Get accesses and 1 Put access from each of the 8 DC locations for the data analytics scenario. For the storage cost, we use the daily cost for workload 1 and the monthly cost for workload 2. We use a 200 ms Get SLA and a 350 ms Put SLA for workload 1, and a 500 ms Get SLA and an 850 ms Put SLA for workload 2.

Figure 5 shows the cost comparison between the simulated SPANStore (considering only a single storage tier) and TripS (considering multiple storage tiers). From the figure, we can see that TripS can minimize cost for both workloads by exploiting multiple storage tiers. For workload 1, TripS mainly chooses EBS-st1, as it does not charge for requests. For workload 2, TripS chooses only S3, as the storage cost is a non-negligible portion of the overall cost. This pattern of results is similar to Grandet [26], which considers multiple storage tiers within a single DC. Yet, our results consider multiple DCs, while Grandet only considers a single DC, which is insufficient for a multi-DC environment. For example, if data is placed only in US East, then applications running in Asia SE cannot meet any SLA lower than the inter-DC latency between the 2 DCs, which is more than 220 ms. In addition, even with a high latency requirement, using a single DC can lead to a higher total cost due to network cost. Table 3 shows the cost comparison between using a single centralized DC, US East (as in Grandet), and using 2 DCs, US East and Asia SE (2 replicas), with workload 1. While using a single DC can reduce storage cost, it leads to extra network cost to access data from a remote region, which is more expensive in a multi-cloud environment. These results show that both DC locations and storage tiers should be considered for optimal data placement, and that TripS chooses DC locations (as in SPANStore) and storage tiers (as in Grandet) based on workloads and access patterns while minimizing overall cost in a multi-DC environment.

Table 3: Cost comparison (costs are normalized to the "US East and Asia SE" placement)
  DC locations         | Storage | Network | Total
  US East only         | 45.5%   | 263.6%  | 263.1%
  US East and Asia SE  | 100%    | 100%    | 100%
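The tier choices in this section come down to simple arithmetic over the cost model of Section 3.1.3: request-charged object storage wins for large, rarely accessed data, while request-free block storage wins for small, frequently accessed data. The snippet below illustrates that tradeoff; the prices are hypothetical values loosely in the range of 2017 US East pricing and are assumptions for illustration only.

def monthly_tier_cost(size_gb, gets, puts, price_per_gb_month, get_req_price, put_req_price):
    """Storage + request cost for one tier over a month (network cost ignored)."""
    return (size_gb * price_per_gb_month
            + gets * get_req_price
            + puts * put_req_price)

# Hypothetical workloads: object storage charges per request, block storage does not.
small_hot = dict(size_gb=0.008, gets=3_000_000, puts=300_000)    # 8 KB, frequently accessed
large_cold = dict(size_gb=100.0, gets=1_000, puts=1)             # 100 GB data set, rarely accessed

for name, wl in [("small/hot", small_hot), ("large/cold", large_cold)]:
    object_store = monthly_tier_cost(**wl, price_per_gb_month=0.023,
                                     get_req_price=4e-7, put_req_price=5e-6)
    block_store = monthly_tier_cost(**wl, price_per_gb_month=0.045,
                                    get_req_price=0.0, put_req_price=0.0)
    print(f"{name}: object-store ${object_store:.2f} vs block-store ${block_store:.2f}")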

5.2 Dynamic Data Placement
The access pattern (reads vs. writes and user location) is an important factor to be considered, as shown in many previous systems such as Volley [2] and Tuba [4]. In this section, we show how TripS can minimize storage cost by handling access pattern changes while achieving the application's goals. In this experiment, local Wiera instances run in 8 regions, with 10 clients running per region. The number of active clients is increased and decreased in a cyclic manner from Asia Northeast to US West to mimic a diurnal access pattern. We calculate the provisioned storage cost on a daily (24 hour) basis, as we simulated a daily access pattern. Simulated clients send requests to the instances in each region using YCSB workload B, a read-mostly workload (5% Put and 95% Get). We use the ChangeDataPlacement policy (Figure 4), in which a new data placement request is sent to TripS/Wiera as a response to RequestsMonitoring, which monitors the number of requests sent from the simulated clients at each instance. We use varying data sizes of 4 KB, 128 KB, and 768 KB to mimic a photo sharing application's workload based on real-world statistics, as in previous work [26]. We set an 80 ms Get SLA, a 200 ms Put SLA, and LC = 1.

Figure 6: Comparing storage cost for 4 KB, 128 KB, and 768 KB objects. Costs are normalized to the TripS cost.

Figure 6 shows the cost benefit compared to simulated SPANStore settings that are limited to a single storage tier at each DC. Note that for the single storage tier cases, i.e., S3 and EBS-st1, we also re-evaluate the data placement to handle changes, as in SPANStore. For the 4 KB size, TripS can reduce overall cost by 98.1% compared to the S3-only case and by 17.5% compared to EBS-st1 only. The results show that using block storage (EBS-st1) can reduce overall cost significantly for small and frequently accessed data, as S3 charges for each request, e.g., $0.0000004 and $0.000005 for a single Get and Put request respectively in the US East region, while EBS-st1 does not charge for requests. This corresponds to the result in Section 5.1. From the experiment log, we can see that TripS avoids using S3 and chooses EBS-st1 and EBS-gp2. Even with the more expensive storage cost of EBS-gp2, we can see that TripS chooses EBS-gp2 when it can reduce the number of replicas in order to reduce the network traffic for distributing updates. For 128 KB data, TripS can reduce storage cost by 65.8% and 14.96% compared to the S3-only and EBS-st1-only cases respectively. For 768 KB, TripS can reduce storage cost by 29.8% and 18.4%, as in the other experiments. The results confirm that TripS can provide reduced overall cost by exploiting multiple storage tiers in multiple DCs, in comparison with single-storage-tier GDSSs such as SPANStore, even in the presence of changing workload patterns.

5.3 Short Time-Scale Dynamics
Next, we show how TripS enables the underlying GDSS to handle short time-scale dynamics by switching locales at run-time, as specified in the SwitchStorageTier policy (Figure 4), using a 100 ms Get SLA, a 200 ms Put SLA, and a period threshold of 10 seconds. In this experiment, instances run in the North America regions, US East (Virginia), US East 2 (Ohio), US West (North California), US West 2 (Oregon), and CA Central (Montreal), and simulated applications send requests to instances in all the regions using YCSB workload B. We use an 8 KB data size in this experiment. Initially, TripS evaluates the data placement with the assumption that all instances receive the same number of requests from each location.

Table 4: Data placement and cost comparison
  LC | Data Placement                                                                  | Storage | Network | Total
  1  | US East (EBS-st1), US East 2 (EBS-st1), US West 2 (EBS-st1)                     | 100%    | 100%    | 100%
  2  | US East (EBS-st1), US East 2 (EBS-gp2), US West 2 (EBS-st1)                     | 140.7%  | 100%    | 105.3%
  3  | US East (EBS-gp2), US East 2 (EBS-gp2), US West (EBS-st1)                       | 188.1%  | 100%    | 111.5%
  4  | US East (EBS-gp2), US East 2 (EBS-gp2), US West (EBS-st1), CA Central (EBS-gp2) | 269.6%  | 166.7%  | 180.1%

Table 4 shows the data placement evaluated with LC = 1, LC = 2, LC = 3, and LC = 4. The table also shows the extra cost as LC is increased. We can see that TripS chooses a faster (more expensive) storage tier (EBS-gp2), at extra cost, to satisfy the LC constraints. However, the total cost is dominated by network cost rather than storage cost in a multi-cloud environment. So, as shown in the table, the increase in total cost is 5.3% and 11.5% for LC = 2 and LC = 3. For LC = 3, there is no network cost change even with a DC location change from US West 2 to US West, as both DCs have the same network cost policy. Lastly, the table shows that LC = 4 increases the number of replicas, which leads to a higher network cost. Thus, applications can trade off cost with performance in the presence of dynamics using the LC parameter.

Figures 7(a) and 7(b) show the latency for Get operations in US East when LC is set to 1 and 2 respectively. The bold line in the figure indicates the application-perceived latency. For LC = 1 and LC = 2, the application sees around 12 ms, as it retrieves data from the local (US East) EBS-st1. We inject delays into the US East instance to simulate network or storage delay. In the figure, we can see that there are 3 simulated delays: (a) a 60 ms delay for 30 seconds, (b) a 120 ms delay for 180 seconds, and (c) a 120 ms delay for 5 seconds. In both cases, delay (a) does not cause any Get SLA violation. For delay (b), applications suffer a Get SLA violation at around 180 seconds when LC = 1. However, for LC = 2, TripS/Wiera switches locales to retrieve data from the US East 2 EBS-gp2 storage to avoid the violation. For delay (c), TripS/Wiera does not switch locales, because the delay lasted less than the period threshold (10 seconds). Figure 7(c) shows the latency for Get operations in US East for LC = 3. Here, the application sees less than 10 ms, as it retrieves data from the local (US East) EBS-gp2. We inject the same delays into both the US East and US East 2 instances simultaneously.

When the instance in US East detects a delay from its local (EBS-gp2) tier, it first switches to the remote (US East 2) EBS-gp2, which also leads to a Get SLA violation. Once the instance detects a delay from US East 2, it switches to US West EBS-st1 to avoid the SLA violation. For both Figures 7(b) and 7(c), there is additional network cost, as the instance in US East has to access non-local storage. Figure 8 shows that the extra cost reduces the rate of Get violations by 91.72% with 12.8% extra cost for dynamics in US East, and by 91.14% with 19.7% extra cost for concurrent dynamics in US East and US East 2. We can see a similar pattern of results from all locations, with varying latency from the second cheapest locale based on the network latency between DCs. These results show that TripS can enable a GDSS to adaptively switch locales to handle short time-scale dynamics, at a slightly higher cost.

Figure 7: Application-perceived latency running on US East. (a) LC = 1 in the presence of dynamics in US East. (b) LC = 2 in the presence of dynamics in US East only. (c) LC = 3 in the presence of dynamics in both US East and US East 2.

Figure 8: Get SLA violation rate and cost increase rate, for dynamics in US East only and for dynamics in both US East and US East 2. All values are normalized to LC = 1.

5.3.1 Using Multiple Providers
Due to the long-haul network latency between DCs, it may not be possible to achieve a low latency SLA in a single-provider multi-DC environment if an application desires a TLL with LC > 1. However, TripS can exploit multiple DCs (possibly belonging to multiple cloud providers) within a geographic region to achieve these constraints. To show this, we use the exact same experiment setting as the previous section but with a much lower SLA, e.g., a Get SLA of 10 ms, a Put SLA of 20 ms, and LC = 2. In this experiment, instances run in 6 DCs of AWS and Azure, in US East, US West, and EU South, i.e., there are 2 DCs in each region. We also inject delays into the US East instance. We measure the latency for Get operations in AWS US East. Figure 9 shows that applications can avoid 88% of Get violations, yet with 585% extra cost due to the network traffic for distributing updates to all 6 locations. To relax the cost issue, TripS may enable applications to set LC only for a specific region, e.g., only the US East region needs to meet the LC constraint. This result shows that TripS can exploit multiple providers' DCs to achieve very stringent SLAs and availability constraints, albeit at higher cost.

Figure 9: Get SLA violation rate and cost increase rate for latency-critical applications. All values are normalized to LC = 1.

5.4 Benchmark and Application Scenario
To see that TripS can help real applications achieve SLA goals, we ran the open-source YCSB benchmark and a Web application, Retwis, on TripS/Wiera. Since both use Redis [23] as backend storage, we implement wrapper interfaces for Redis functions, e.g., lpush, lrange, sadd, and srem, on top of Wiera, and modify less than 100 lines of code of the Redis YCSB module and of Retwis to enable them to use TripS/Wiera instead of Redis. We ignore the overhead of the wrapper class, as less than 2 ms is required for transforming data from the binary format in TripS to the Redis-supported data structures (hash, map, list, and so on).

5.4.1 YCSB Benchmark
To see that local instances can access data within the SLA, we ran the YCSB benchmark client from all 8 locations. We use the same experimental setting as in Section 5.2, i.e., an 80 ms Get SLA, a 200 ms Put SLA, LC = 1, and eventual consistency without changing the data placement. The YCSB client sends 1, operations with YCSB workload B (95% read, 5% write) to Wiera from each DC location. Read and update operations in the YCSB client correspond to the Get and Put operations of Wiera. Initially, TripS chooses US East 2 EBS-gp2, EU West EBS-st1, and Asia NE EBS-gp2 for data placement. Figure 10(a) shows the average read and update latency. YCSB clients see lower latency than the desired SLA latency. The YCSB clients running on US East 2, EU West, and Asia NE see lower latency than the other instances, as they have the data in the local DC. The YCSB client running on US East also sees lower latency, as it is close to US East 2 in terms of network latency (< 12 ms). We can see similar results for workload A with strong consistency. This result shows that TripS helps applications achieve their desired SLA goals with minimized storage cost.

5.4.2 Retwis
Retwis is a simple Web application that implements the functions of Twitter (loading timelines, posting, following, and so on) by performing Gets and Puts on Redis. We use


More information

Assembler. Building a Modern Computer From First Principles.

Assembler. Building a Modern Computer From First Principles. Assembler Buldng a Modern Computer From Frst Prncples www.nand2tetrs.org Elements of Computng Systems, Nsan & Schocken, MIT Press, www.nand2tetrs.org, Chapter 6: Assembler slde Where we are at: Human Thought

More information

Video Proxy System for a Large-scale VOD System (DINA)

Video Proxy System for a Large-scale VOD System (DINA) Vdeo Proxy System for a Large-scale VOD System (DINA) KWUN-CHUNG CHAN #, KWOK-WAI CHEUNG *# #Department of Informaton Engneerng *Centre of Innovaton and Technology The Chnese Unversty of Hong Kong SHATIN,

More information

Efficient Content Distribution in Wireless P2P Networks

Efficient Content Distribution in Wireless P2P Networks Effcent Content Dstrbuton n Wreless P2P Networs Qong Sun, Vctor O. K. L, and Ka-Cheong Leung Department of Electrcal and Electronc Engneerng The Unversty of Hong Kong Pofulam Road, Hong Kong, Chna {oansun,

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1) Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications

Efficient Load-Balanced IP Routing Scheme Based on Shortest Paths in Hose Model. Eiji Oki May 28, 2009 The University of Electro-Communications Effcent Loa-Balance IP Routng Scheme Base on Shortest Paths n Hose Moel E Ok May 28, 2009 The Unversty of Electro-Communcatons Ok Lab. Semnar, May 28, 2009 1 Outlne Backgroun on IP routng IP routng strategy

More information

Space-Optimal, Wait-Free Real-Time Synchronization

Space-Optimal, Wait-Free Real-Time Synchronization 1 Space-Optmal, Wat-Free Real-Tme Synchronzaton Hyeonjoong Cho, Bnoy Ravndran ECE Dept., Vrgna Tech Blacksburg, VA 24061, USA {hjcho,bnoy}@vt.edu E. Douglas Jensen The MITRE Corporaton Bedford, MA 01730,

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

Dynamic Voltage Scaling of Supply and Body Bias Exploiting Software Runtime Distribution

Dynamic Voltage Scaling of Supply and Body Bias Exploiting Software Runtime Distribution Dynamc Voltage Scalng of Supply and Body Bas Explotng Software Runtme Dstrbuton Sungpack Hong EE Department Stanford Unversty Sungjoo Yoo, Byeong Bn, Kyu-Myung Cho, Soo-Kwan Eo Samsung Electroncs Taehwan

More information

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search Sequental search Buldng Java Programs Chapter 13 Searchng and Sortng sequental search: Locates a target value n an array/lst by examnng each element from start to fnsh. How many elements wll t need to

More information

Efficient Broadcast Disks Program Construction in Asymmetric Communication Environments

Efficient Broadcast Disks Program Construction in Asymmetric Communication Environments Effcent Broadcast Dsks Program Constructon n Asymmetrc Communcaton Envronments Eleftheros Takas, Stefanos Ougaroglou, Petros copoltds Department of Informatcs, Arstotle Unversty of Thessalonk Box 888,

More information

Active Contours/Snakes

Active Contours/Snakes Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng

More information

3. CR parameters and Multi-Objective Fitness Function

3. CR parameters and Multi-Objective Fitness Function 3 CR parameters and Mult-objectve Ftness Functon 41 3. CR parameters and Mult-Objectve Ftness Functon 3.1. Introducton Cogntve rados dynamcally confgure the wreless communcaton system, whch takes beneft

More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

Scheduling and queue management. DigiComm II

Scheduling and queue management. DigiComm II Schedulng and queue management Tradtonal queung behavour n routers Data transfer: datagrams: ndvdual packets no recognton of flows connectonless: no sgnallng Forwardng: based on per-datagram forwardng

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

Learning-Based Top-N Selection Query Evaluation over Relational Databases

Learning-Based Top-N Selection Query Evaluation over Relational Databases Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

Pricing Network Resources for Adaptive Applications in a Differentiated Services Network

Pricing Network Resources for Adaptive Applications in a Differentiated Services Network IEEE INFOCOM Prcng Network Resources for Adaptve Applcatons n a Dfferentated Servces Network Xn Wang and Hennng Schulzrnne Columba Unversty Emal: {xnwang, schulzrnne}@cs.columba.edu Abstract The Dfferentated

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7 Optmzed Regonal Cachng for On-Demand Data Delvery Derek L. Eager Mchael C. Ferrs Mary K. Vernon Unversty of Saskatchewan Unversty of Wsconsn Madson Saskatoon, SK Canada S7N 5A9 Madson, WI 5376 eager@cs.usask.ca

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

Goals and Approach Type of Resources Allocation Models Shared Non-shared Not in this Lecture In this Lecture

Goals and Approach Type of Resources Allocation Models Shared Non-shared Not in this Lecture In this Lecture Goals and Approach CS 194: Dstrbuted Systems Resource Allocaton Goal: acheve predcable performances Three steps: 1) Estmate applcaton s resource needs (not n ths lecture) 2) Admsson control 3) Resource

More information

Distributed Middlebox Placement Based on Potential Game

Distributed Middlebox Placement Based on Potential Game Int. J. Communcatons, Network and System Scences, 2017, 10, 264-273 http://www.scrp.org/ournal/cns ISSN Onlne: 1913-3723 ISSN Prnt: 1913-3715 Dstrbuted Mddlebox Placement Based on Potental Game Yongwen

More information

An Entropy-Based Approach to Integrated Information Needs Assessment

An Entropy-Based Approach to Integrated Information Needs Assessment Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology

More information

A Hybrid Genetic Algorithm for Routing Optimization in IP Networks Utilizing Bandwidth and Delay Metrics

A Hybrid Genetic Algorithm for Routing Optimization in IP Networks Utilizing Bandwidth and Delay Metrics A Hybrd Genetc Algorthm for Routng Optmzaton n IP Networks Utlzng Bandwdth and Delay Metrcs Anton Redl Insttute of Communcaton Networks, Munch Unversty of Technology, Arcsstr. 21, 80290 Munch, Germany

More information

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed

More information

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z.

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z. TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS Muradalyev AZ Azerbajan Scentfc-Research and Desgn-Prospectng Insttute of Energetc AZ1012, Ave HZardab-94 E-mal:aydn_murad@yahoocom Importance of

More information

An Efficient Garbage Collection for Flash Memory-Based Virtual Memory Systems

An Efficient Garbage Collection for Flash Memory-Based Virtual Memory Systems S. J and D. Shn: An Effcent Garbage Collecton for Flash Memory-Based Vrtual Memory Systems 2355 An Effcent Garbage Collecton for Flash Memory-Based Vrtual Memory Systems Seunggu J and Dongkun Shn, Member,

More information

Avoiding congestion through dynamic load control

Avoiding congestion through dynamic load control Avodng congeston through dynamc load control Vasl Hnatyshn, Adarshpal S. Seth Department of Computer and Informaton Scences, Unversty of Delaware, Newark, DE 976 ABSTRACT The current best effort approach

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

Connection-information-based connection rerouting for connection-oriented mobile communication networks

Connection-information-based connection rerouting for connection-oriented mobile communication networks Dstrb. Syst. Engng 5 (1998) 47 65. Prnted n the UK PII: S0967-1846(98)90513-7 Connecton-nformaton-based connecton reroutng for connecton-orented moble communcaton networks Mnho Song, Yanghee Cho and Chongsang

More information

Virtual Machine Migration based on Trust Measurement of Computer Node

Virtual Machine Migration based on Trust Measurement of Computer Node Appled Mechancs and Materals Onlne: 2014-04-04 ISSN: 1662-7482, Vols. 536-537, pp 678-682 do:10.4028/www.scentfc.net/amm.536-537.678 2014 Trans Tech Publcatons, Swtzerland Vrtual Machne Mgraton based on

More information

Classifier Selection Based on Data Complexity Measures *

Classifier Selection Based on Data Complexity Measures * Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.

More information

Support Vector Machines

Support Vector Machines Support Vector Machnes Decson surface s a hyperplane (lne n 2D) n feature space (smlar to the Perceptron) Arguably, the most mportant recent dscovery n machne learnng In a nutshell: map the data to a predetermned

More information

Advanced Computer Networks

Advanced Computer Networks Char of Network Archtectures and Servces Department of Informatcs Techncal Unversty of Munch Note: Durng the attendance check a stcker contanng a unque QR code wll be put on ths exam. Ths QR code contans

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

NGPM -- A NSGA-II Program in Matlab

NGPM -- A NSGA-II Program in Matlab Verson 1.4 LIN Song Aerospace Structural Dynamcs Research Laboratory College of Astronautcs, Northwestern Polytechncal Unversty, Chna Emal: lsssswc@163.com 2011-07-26 Contents Contents... 1. Introducton...

More information

arxiv: v3 [cs.ds] 7 Feb 2017

arxiv: v3 [cs.ds] 7 Feb 2017 : A Two-stage Sketch for Data Streams Tong Yang 1, Lngtong Lu 2, Ybo Yan 1, Muhammad Shahzad 3, Yulong Shen 2 Xaomng L 1, Bn Cu 1, Gaogang Xe 4 1 Pekng Unversty, Chna. 2 Xdan Unversty, Chna. 3 North Carolna

More information

THere are increasing interests and use of mobile ad hoc

THere are increasing interests and use of mobile ad hoc 1 Adaptve Schedulng n MIMO-based Heterogeneous Ad hoc Networks Shan Chu, Xn Wang Member, IEEE, and Yuanyuan Yang Fellow, IEEE. Abstract The demands for data rate and transmsson relablty constantly ncrease

More information

Meta-heuristics for Multidimensional Knapsack Problems

Meta-heuristics for Multidimensional Knapsack Problems 2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,

More information

CMPS 10 Introduction to Computer Science Lecture Notes

CMPS 10 Introduction to Computer Science Lecture Notes CPS 0 Introducton to Computer Scence Lecture Notes Chapter : Algorthm Desgn How should we present algorthms? Natural languages lke Englsh, Spansh, or French whch are rch n nterpretaton and meanng are not

More information

AADL : about scheduling analysis

AADL : about scheduling analysis AADL : about schedulng analyss Schedulng analyss, what s t? Embedded real-tme crtcal systems have temporal constrants to meet (e.g. deadlne). Many systems are bult wth operatng systems provdng multtaskng

More information

Sample Solution. Advanced Computer Networks P 1 P 2 P 3 P 4 P 5. Module: IN2097 Date: Examiner: Prof. Dr.-Ing. Georg Carle Exam: Final exam

Sample Solution. Advanced Computer Networks P 1 P 2 P 3 P 4 P 5. Module: IN2097 Date: Examiner: Prof. Dr.-Ing. Georg Carle Exam: Final exam Char of Network Archtectures and Servces Department of Informatcs Techncal Unversty of Munch Note: Durng the attendance check a stcker contanng a unque QR code wll be put on ths exam. Ths QR code contans

More information

Cost-efficient deployment of distributed software services

Cost-efficient deployment of distributed software services 1/30 Cost-effcent deployment of dstrbuted software servces csorba@tem.ntnu.no 2/30 Short ntroducton & contents Cost-effcent deployment of dstrbuted software servces Cost functons Bo-nspred decentralzed

More information

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan

More information

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT 3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ

More information

Cognitive Radio Resource Management Using Multi-Agent Systems

Cognitive Radio Resource Management Using Multi-Agent Systems Cogntve Rado Resource Management Usng Mult- Systems Jang Xe, Ivan Howtt, and Anta Raja Department of Electrcal and Computer Engneerng Department of Software and Informaton Systems The Unversty of North

More information

Maintaining temporal validity of real-time data on non-continuously executing resources

Maintaining temporal validity of real-time data on non-continuously executing resources Mantanng temporal valdty of real-tme data on non-contnuously executng resources Tan Ba, Hong Lu and Juan Yang Hunan Insttute of Scence and Technology, College of Computer Scence, 44, Yueyang, Chna Wuhan

More information

Analysis of Collaborative Distributed Admission Control in x Networks

Analysis of Collaborative Distributed Admission Control in x Networks 1 Analyss of Collaboratve Dstrbuted Admsson Control n 82.11x Networks Thnh Nguyen, Member, IEEE, Ken Nguyen, Member, IEEE, Lnha He, Member, IEEE, Abstract Wth the recent surge of wreless home networks,

More information

ARTICLE IN PRESS. Signal Processing: Image Communication

ARTICLE IN PRESS. Signal Processing: Image Communication Sgnal Processng: Image Communcaton 23 (2008) 754 768 Contents lsts avalable at ScenceDrect Sgnal Processng: Image Communcaton journal homepage: www.elsever.com/locate/mage Dstrbuted meda rate allocaton

More information

Memory Modeling in ESL-RTL Equivalence Checking

Memory Modeling in ESL-RTL Equivalence Checking 11.4 Memory Modelng n ESL-RTL Equvalence Checkng Alfred Koelbl 2025 NW Cornelus Pass Rd. Hllsboro, OR 97124 koelbl@synopsys.com Jerry R. Burch 2025 NW Cornelus Pass Rd. Hllsboro, OR 97124 burch@synopsys.com

More information

MOBILE Cloud Computing (MCC) extends the capabilities

MOBILE Cloud Computing (MCC) extends the capabilities 1 Resource Sharng of a Computng Access Pont for Mult-user Moble Cloud Offloadng wth Delay Constrants Meng-Hs Chen, Student Member, IEEE, Mn Dong, Senor Member, IEEE, Ben Lang, Fellow, IEEE arxv:1712.00030v2

More information

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on

More information

Intra-Parametric Analysis of a Fuzzy MOLP

Intra-Parametric Analysis of a Fuzzy MOLP Intra-Parametrc Analyss of a Fuzzy MOLP a MIAO-LING WANG a Department of Industral Engneerng and Management a Mnghsn Insttute of Technology and Hsnchu Tawan, ROC b HSIAO-FAN WANG b Insttute of Industral

More information

Fitting: Deformable contours April 26 th, 2018

Fitting: Deformable contours April 26 th, 2018 4/6/08 Fttng: Deformable contours Aprl 6 th, 08 Yong Jae Lee UC Davs Recap so far: Groupng and Fttng Goal: move from array of pxel values (or flter outputs) to a collecton of regons, objects, and shapes.

More information

An efficient iterative source routing algorithm

An efficient iterative source routing algorithm An effcent teratve source routng algorthm Gang Cheng Ye Tan Nrwan Ansar Advanced Networng Lab Department of Electrcal Computer Engneerng New Jersey Insttute of Technology Newar NJ 7 {gc yt Ansar}@ntedu

More information

Scheduling Remote Access to Scientific Instruments in Cyberinfrastructure for Education and Research

Scheduling Remote Access to Scientific Instruments in Cyberinfrastructure for Education and Research Schedulng Remote Access to Scentfc Instruments n Cybernfrastructure for Educaton and Research Je Yn 1, Junwe Cao 2,3,*, Yuexuan Wang 4, Lanchen Lu 1,3 and Cheng Wu 1,3 1 Natonal CIMS Engneerng and Research

More information

Parallel Branch and Bound Algorithm - A comparison between serial, OpenMP and MPI implementations

Parallel Branch and Bound Algorithm - A comparison between serial, OpenMP and MPI implementations Journal of Physcs: Conference Seres Parallel Branch and Bound Algorthm - A comparson between seral, OpenMP and MPI mplementatons To cte ths artcle: Luco Barreto and Mchael Bauer 2010 J. Phys.: Conf. Ser.

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

Load Balancing for Hex-Cell Interconnection Network

Load Balancing for Hex-Cell Interconnection Network Int. J. Communcatons, Network and System Scences,,, - Publshed Onlne Aprl n ScRes. http://www.scrp.org/journal/jcns http://dx.do.org/./jcns.. Load Balancng for Hex-Cell Interconnecton Network Saher Manaseer,

More information

Research Article. ISSN (Print) s k and. d k rate of k -th flow, source node and

Research Article. ISSN (Print) s k and. d k rate of k -th flow, source node and Scholars Journal of Engneerng and Technology (SJET) Sch. J. Eng. Tech., 2015; 3(4A):343-350 Scholars Academc and Scentfc Publsher (An Internatonal Publsher for Academc and Scentfc Resources) www.saspublsher.com

More information

A Genetic Algorithm Based Dynamic Load Balancing Scheme for Heterogeneous Distributed Systems

A Genetic Algorithm Based Dynamic Load Balancing Scheme for Heterogeneous Distributed Systems Proceedngs of the Internatonal Conference on Parallel and Dstrbuted Processng Technques and Applcatons, PDPTA 2008, Las Vegas, Nevada, USA, July 14-17, 2008, 2 Volumes. CSREA Press 2008, ISBN 1-60132-084-1

More information

EXTENDED BIC CRITERION FOR MODEL SELECTION

EXTENDED BIC CRITERION FOR MODEL SELECTION IDIAP RESEARCH REPORT EXTEDED BIC CRITERIO FOR ODEL SELECTIO Itshak Lapdot Andrew orrs IDIAP-RR-0-4 Dalle olle Insttute for Perceptual Artfcal Intellgence P.O.Box 59 artgny Valas Swtzerland phone +4 7

More information

Distributed Resource Scheduling in Grid Computing Using Fuzzy Approach

Distributed Resource Scheduling in Grid Computing Using Fuzzy Approach Dstrbuted Resource Schedulng n Grd Computng Usng Fuzzy Approach Shahram Amn, Mohammad Ahmad Computer Engneerng Department Islamc Azad Unversty branch Mahallat, Iran Islamc Azad Unversty branch khomen,

More information

Online Policies for Opportunistic Virtual MISO Routing in Wireless Ad Hoc Networks

Online Policies for Opportunistic Virtual MISO Routing in Wireless Ad Hoc Networks 12 IEEE Wreless Communcatons and Networkng Conference: Moble and Wreless Networks Onlne Polces for Opportunstc Vrtual MISO Routng n Wreless Ad Hoc Networks Crstano Tapparello, Stefano Tomasn and Mchele

More information

Topology Design using LS-TaSC Version 2 and LS-DYNA

Topology Design using LS-TaSC Version 2 and LS-DYNA Topology Desgn usng LS-TaSC Verson 2 and LS-DYNA Wllem Roux Lvermore Software Technology Corporaton, Lvermore, CA, USA Abstract Ths paper gves an overvew of LS-TaSC verson 2, a topology optmzaton tool

More information

Sensor-aware Adaptive Pull-Push Query Processing for Sensor Networks

Sensor-aware Adaptive Pull-Push Query Processing for Sensor Networks Sensor-aware Adaptve Pull-Push Query Processng for Sensor Networks Raja Bose Unversty of Florda Ganesvlle, FL 326 U.S.A. rbose@cse.ufl.edu Abdelsalam Helal Unversty of Florda Ganesvlle, FL 326 U.S.A. helal@cse.ufl.edu

More information