Deadline Guaranteed Service for Multi-Tenant Cloud Storage


Guoxin Liu, Student Member, IEEE, Haiying Shen*, Senior Member, IEEE, and Haoyu Wang

Abstract—It is imperative for cloud storage systems to be able to provide deadline guaranteed services according to service level agreements (SLAs) for online services. In spite of many previous works on deadline aware solutions, most of them focus on scheduling workflows or resource reservation in datacenter networks but neglect the server overload problem in cloud storage systems that prevents providing the deadline guaranteed services. In this paper, we introduce a new form of SLAs, which enables each tenant to specify a percentage of its requests it wishes to serve within a specified deadline. We first identify the multiple objectives (i.e., traffic and latency minimization, resource utilization maximization) in developing schemes to satisfy the SLAs. To satisfy the SLAs while achieving the multi-objectives, we propose a Parallel Deadline Guaranteed (PDG) scheme, which schedules data reallocation (through load re-assignment and data replication) using a tree-based bottom-up parallel process. The observation from our model also motivates our deadline strictness clustered data allocation algorithm that maps tenants with similar SLA strictness onto the same server to enhance SLA guarantees. We further enhance PDG in supplying SLA guaranteed services through two algorithms: i) a prioritized data reallocation algorithm that deals with request arrival rate variation, and ii) an adaptive request retransmission algorithm that deals with SLA requirement variation. Our trace-driven experiments on a simulator and Amazon EC2 show the effectiveness of our schemes in guaranteeing the SLAs while achieving the multi-objectives.

Keywords: Cloud storage, Service level agreement (SLA), Deadline, Resource utilization.

*Corresponding Author: Haiying Shen, shenh@clemson.edu. Haiying Shen, Guoxin Liu and Haoyu Wang are with the Department of Electrical and Computer Engineering, Clemson University, Clemson, SC. E-mail: {guoxil, shenh, haoyuw}@clemson.edu.

1 INTRODUCTION

Cloud storage (e.g., Amazon DynamoDB [1], Amazon S3 [2] and GigaSpaces [3]) is emerging as a popular business service with the pay-as-you-go business model [4]. Instead of maintaining private clusters with vast capital expenditures, more and more enterprises shift their data workloads to the cloud. In order to supply a cost-effective service, the cloud infrastructure is transparently shared by multiple tenants in order to fully utilize cloud resources, which however leads to unpredictable performance of tenants' services. Indeed, tenants often experience significant performance variations, e.g., in the service latency of data requests [5-7]. Such unpredictable performance hinders tenants from migrating their workloads to cloud storage systems, since data access latency is important to their commercial business. Experiments at the Amazon portal [8] demonstrated that increasing page presentation time by as little as 100 ms significantly reduces user satisfaction and degrades sales by one percent. For data retrieval in the web presentation process, the typical latency budget inside a storage system for a web request is only 50-100 ms [9]. Therefore, unpredictable performance without deadline guaranteed services decreases the quality of service to clients, reduces the profit of the tenants, prevents tenants from using cloud storage systems, and hence reduces the profit of the cloud providers. Ensuring service deadlines is therefore critical for guaranteeing the application performance of tenants. For this purpose, we argue that cloud storage systems should
have service level agreements (SLAs) [10] baked into their services as other online services do. In such an SLA, the cloud storage guarantees that the data requests of a tenant will be responded to within a specific latency target (i.e., deadline) with no less than a pre-promised probability. The deadline and probability in an SLA are specified by the tenant in its SLA with the cloud provider based on the services the tenant provides to its clients. For example, the SLA can specify that 99.9% of web page presentations need to be completed within a deadline of 200-300 ms [10, 11]. A key cause of high data access latency is excess load on cloud storage servers. Many requests from different tenants targeting a workload-intensive server may be blocked due to the server's limited service capability, which causes unexpectedly long latency. Therefore, to guarantee such SLAs, a challenge is how to allocate data partitions among servers (i.e., data allocation) under the multiplexing of tenants' workloads so as to avoid overloaded servers. A server is called an overloaded server if the request arrival rate on it exceeds its service capability so that it cannot supply an SLA guaranteed data access service; otherwise, it is called an underloaded server. However, previous deadline aware solutions neglect this overload problem in cloud storage systems, which prevents providing the deadline guaranteed services; most of them focus on scheduling workflows or resource reservation in datacenter networks [11, 12-15]. Therefore, in this paper, we propose our Parallel Deadline Guaranteed scheme (PDG) to ensure the SLAs for multiple tenants in a cloud storage system. Avoiding service overload to ensure the SLAs is a non-trivial problem. A data partition request is served by one of the servers that hold the data replicas. Each replica server has a serving ratio (i.e., the percentage of requests directed to the server) assigned by the cloud storage load balancer. We avoid service overload by data reallocation, including the reassignment of serving ratios among replica servers and the creation of data replicas.

This process is complex and challenging due to the heterogeneity of server capacities, tenant deadline requirements and variations of the request rates of tenants. We first formulate this data reallocation problem by identifying the multiple objectives in developing a data reallocation scheme, including traffic minimization, resource utilization maximization and scheme execution latency minimization. To solve this problem, we then build a mathematical model to measure the SLA performance under a specific data-server allocation given predicted data request workloads from tenants. The model helps to derive the upper bound of the request arrival rate on each server to guarantee the SLAs. To guarantee the SLAs while achieving the multi-objectives, PDG schedules data reallocation (through load re-assignment and data replication) via a tree-based bottom-up parallel process in the system load balancer. The parallel process expedites the scheduling procedure; load migration between local servers reduces traffic load, and server deactivation increases resource utilization. Our mathematical model also indicates that placing the data of two tenants with greatly different SLAs on the same server would reduce resource utilization, which motivates our deadline strictness clustered data allocation algorithm that maps tenants with the same SLA strictness onto the same server during data reallocation scheduling. We further enhance PDG in supplying SLA guaranteed services through two algorithms: i) a prioritized data reallocation algorithm, and ii) an adaptive request retransmission algorithm. The prioritized data reallocation algorithm handles the situation that the request rate may vary greatly over time and even experience sharp increases, which would lead to SLA violations. In this algorithm, highly overloaded servers autonomously probe nearby servers and the load balancer instantly handles highly overloaded servers without delay. The adaptive request retransmission algorithm handles the situation that tenants' SLA requirements may vary over time. In this algorithm, when a queried server does not reply in time, the front-end server waits for a time period before retransmitting the request to another server. The waiting time is determined so that the SLA requirement can be met while the communication overhead is minimized. We summarize our contributions below:

- Data reallocation problem formulation for SLA guarantee with multi-objectives in a multi-tenant cloud storage system.
- A mathematical model to measure the SLA performance, which gives an upper bound of the request arrival rate of each server.
- The PDG scheme to ensure SLA guarantees while achieving the multi-objectives: (1) tree-based parallel processing; (2) data reallocation scheduling; (3) server deactivation.
- PDG enhancement algorithms to avoid SLA violations under request arrival rate and SLA requirement variation with low overhead: (1) deadline strictness clustered data allocation; (2) prioritized data reallocation; (3) adaptive request retransmission.
- Trace-driven experiments that show the effectiveness and efficiency of our schemes in achieving deadline guarantees and the multi-objectives on both a simulator and Amazon EC2 [16].

The rest of the paper is organized as follows. Section 2 describes the system model and the problem. Section 3 presents the prediction of the SLA performance in the future. Based on this prediction, Section 4 and Section 5 present our parallel deadline guaranteed scheme and its enhancement in detail.
Section 6 presents the performance evaluation of our methods compared with other methods. Section 7 presents the related work. Section 8 concludes the paper with remarks on our future work.

2 PROBLEM STATEMENT

2.1 System Model and A New SLA

We consider a heterogeneous cloud storage system consisting of N tenants and M data servers of the same kind, which may have different serving capabilities and storage capacities but supply the same storage service. As shown in Figure 1, tenant t_1 operates an online social network (OSN) (e.g., WeChat), t_2 operates a portal (e.g., Netflix) and t_N operates a file hosting service (e.g., Dropbox). A data partition is a unit for data storage and replication. One server may store the data partitions of different tenants, and a tenant's data partitions may be stored on different servers; e.g., s_2 stores data replicas of both t_1 and t_2. Each data partition may have multiple replicas across different servers. We assume that each data partition has at least r (r > 1) replicas.

Fig. 1: Multi-tenant cloud storage service.

A data request from a tenant targets a set of data partitions on several servers, such as a News Feed request in Facebook targeting all recent posts. The request first arrives at the front-end server of the cloud, and then is redirected according to the load balancing strategy in the load balancer to servers, each of which hosts a replica of a requested data partition. The service latency of a request is the longest response time among all target servers. As in [17], we assume that the arrival of data requests from a tenant follows a Poisson distribution, where the average request rate of tenant t_k is λ_{t_k}. Each data server has a single queue for requests from all tenants. As shown in Figure 1, t_1's deadline is d_{t_1} = 20 ms and its request is served by s_1 and s_2. Though s_1's response latency is 10 ms, s_2 produces a 50 ms response latency due to the colocation of request-intensive data partitions of t_1 and t_2 on s_2. To provide the deadline guaranteed service to tenants, we introduce a new form of SLAs for cloud storage service. That is, for any tenant t_k, no more than ɛ_{t_k} percent of all requests have service latency larger than a given deadline, denoted as d_{t_k}.

We use P_{t_k} to denote the probability of t_k's requests having service latency no longer than d_{t_k}; then the SLA is denoted by (ɛ_{t_k}, d_{t_k}) and requires P_{t_k} ≥ 1 − ɛ_{t_k}. The probability ɛ_{t_k} and deadline d_{t_k} are specified by the tenants in their SLAs with the cloud provider. For simplicity, we only consider a common SLA for all requests from a tenant t_k, which can be easily extended to multiple SLAs for different types of requests from t_k. If there are multiple types of requests from t_k that have different SLAs, t_k can be treated as several different sub-tenants. We assume that the data request responses are independent, which means the servers work independently for data requests.

2.2 Problem Formulation

In this section, we formulate the problem of data reallocation for the SLA guarantee service in a cloud storage system. Recall that the serving ratio of a data partition D_i's replica is the percentage of requests targeting D_i that are served by this replica. We define data allocation as the allocation status for data partition placement on servers and the serving ratios of data partition replicas. We use X_{s_k}^{D_i}, a binary variable, to denote the existence of D_i's replica on server s_k. We use H_{s_k}^{D_i} to denote the serving ratio of the replica of D_i on s_k. Then, the data allocation (denoted by f) can be presented as a set of mappings:

f = { \langle s_k, (X_{s_k}^{D_1} H_{s_k}^{D_1}, X_{s_k}^{D_2} H_{s_k}^{D_2}, \ldots, X_{s_k}^{D_n} H_{s_k}^{D_n}) \rangle : s_k \in M },

where n is the number of data partitions. In order to ensure the SLAs, we should have \forall t_k, P_{t_k} ≥ 1 − ɛ_{t_k}. Thus, for data partition replicas on overloaded servers, we either reduce their serving ratios or create new replicas on underloaded servers. Such data reallocation leads to a new data allocation among servers. To avoid disrupting the cloud storage service, we identify the objectives during data reallocation. To maximize resource utilization for energy-efficiency, we aim to minimize the number of servers in use. We name a server in use an active server, and denote the set of active servers (M_u) as

M_u = { s_k : \sum_{D_i \in D} X_{s_k}^{D_i} H_{s_k}^{D_i} > 0, s_k \in M },

where D is the set of all data partitions. Another important issue is the traffic load (replication cost through the network) caused by replicating data partitions to underloaded servers. We use the product of the data size (S_{D_i}) and the number of transmission hops (i.e., switches on the routing path) between servers s_m and s_k (I_{s_m}^{s_k}) to measure the traffic load (ξ_{s_k}^{D_i}) for replicating D_i from s_m to s_k [18, 19]; ξ_{s_k}^{D_i} = S_{D_i} \cdot I_{s_m}^{s_k}. Suppose f is the original data allocation, and f' is a new data allocation to ensure the SLAs. s_k^f = {D_i : X_{s_k}^{D_i} = 1} denotes the set of data partitions contained in s_k under f. Thus, the total traffic load for a specific f' is

Φ_{f'} = \sum_{s_k \in M} \sum_{D_i \in s_k^{f'} \wedge D_i \notin s_k^{f}} ξ_{s_k}^{D_i}.

We aim to find a new data allocation f' so that the traffic load for converting f to f' is minimized. The conversion from f to f' also introduces data access workload on servers. In order not to interfere with tenants' data requests, each server maintains a priority queue, where the data transmission for conversion has a lower priority than customers' requests. Also, since the conversion time is very small compared to the time of serving data allocation f, the effect of the conversion on the SLA can be ignored.
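To make this bookkeeping concrete, the following minimal Python sketch (all names are ours, not the paper's implementation) represents a data allocation as a per-server map of serving ratios and computes the active-server set M_u and the conversion traffic load Φ_{f'} exactly as defined above; the replication source of each new replica is supplied by the caller, since the scheduling algorithm of Section 4.3 is what chooses it.

```python
from typing import Dict, Tuple

Allocation = Dict[str, Dict[str, float]]  # server -> {partition: serving ratio}

def active_servers(H: Allocation) -> set:
    """M_u: servers whose total assigned serving ratio is positive."""
    return {s for s, ratios in H.items() if sum(ratios.values()) > 0}

def traffic_load(f: Allocation, f_prime: Allocation,
                 size: Dict[str, float],
                 hops: Dict[Tuple[str, str], int],
                 source: Dict[Tuple[str, str], str]) -> float:
    """Phi_{f'}: sum of S_{D_i} * I_{s_m}^{s_k} over replicas present in f'
    but absent in f. `source[(s_k, d)]` names the server the new replica of
    partition d on server s_k is copied from (an assumption of this sketch)."""
    total = 0.0
    for s_k, ratios in f_prime.items():
        for d in ratios:
            if d not in f.get(s_k, {}):      # replica newly created on s_k
                s_m = source[(s_k, d)]       # replication source
                total += size[d] * hops[(s_m, s_k)]
    return total
```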
Finally, we formulate the problem of data reallocation for deadline guarantee as a nonlinear program that simultaneously pursues these two goals:

min ( |M_u| + β \cdot Φ_{f'} )    (1)

subject to

\forall t_k, P_{t_k} ≥ 1 − ɛ_{t_k}    (2)
\sum_{s_k \in M} X_{s_k}^{D_i} H_{s_k}^{D_i} = 1, \forall D_i \in D    (3)
\sum_{D_i \in D} S_{D_i} X_{s_k}^{D_i} ≤ C_{s_k}, \forall s_k \in M    (4)
\sum_{s_k \in M} X_{s_k}^{D_i} ≥ r, \forall D_i \in D    (5)
X_{s_k}^{D_i} \in {0, 1}, \forall s_k \in M, \forall D_i \in D    (6)
0 ≤ H_{s_k}^{D_i} ≤ 1, \forall s_k \in M, \forall D_i \in D    (7)

where C_{s_k} denotes the storage capacity of s_k. In Formula (1), β is a relative weight between the two objectives. If β is larger, the data reallocation tends to reduce the traffic load more than the number of active servers, and vice versa. Constraint (2) ensures the SLAs. Constraint (3) ensures that all data requests targeting any data partition can be successfully served. Although the storage capacity of a datacenter can be increased almost infinitely, the storage capacity of a single server in the datacenter is still limited. Constraint (4) ensures that the storage usage cannot exceed the storage capacity of any server. Constraint (5) guarantees that there are at least r replicas of each data partition in the system in order to maintain data availability. Constraint (6) guarantees that each data partition is either stored at most once or not stored on a data server. Constraint (7) guarantees that each replica's serving ratio is between 0 and 1. Besides the two objectives, the execution time of creating f' is important for constantly maintaining the SLA guarantee over time. Thus, another objective is to minimize the execution time of the data reallocation scheme.

Lemma 1. The problem of data reallocation for deadline guarantee is NP-hard.

Proof: The service rate of a server is the average number of requests served by it per unit time. Suppose that all servers are homogeneous with equal service rate and storage capacity. Assume that the servers' service rate is large enough to ensure the SLAs, and that we do not consider the traffic load cost, which means β = 0. Then, the deadline guarantee problem is to create a data allocation with the minimum number of active servers under the storage capacity constraints of all servers, which is a bin packing problem [20]. Since the bin packing problem is NP-hard, our problem is also NP-hard.

We therefore propose our heuristic PDG scheme to solve this problem. To achieve the condition in Equation (2), in Section 3 we build a mathematical model to derive the upper bound of the request arrival rate at each server that satisfies Equation (2), which is named the deadline guaranteed arrival rate and denoted by λ_g. Then, in Section 4, we present PDG, which constrains the request arrival rate of each server below λ_g through data reallocation.
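As a companion sketch under the same assumed data structures as above, the check below validates constraints (3)-(7) for a candidate allocation; constraint (2) requires the queueing model of Section 3 and is therefore checked there. This is only an illustration of the constraint set, not the paper's solver.

```python
def feasible(X, H, size, capacity, r):
    """X: server -> {partition: 0/1}; H: server -> {partition: ratio}."""
    partitions = {d for placed in X.values() for d in placed}
    for d in partitions:
        total_ratio = sum(H[s].get(d, 0.0) for s in H)
        if abs(total_ratio - 1.0) > 1e-9:          # (3) all requests served
            return False
        if sum(X[s].get(d, 0) for s in X) < r:     # (5) at least r replicas
            return False
    for s in X:
        used = sum(size[d] for d, placed in X[s].items() if placed)
        if used > capacity[s]:                     # (4) storage capacity
            return False
        if any(not 0.0 <= h <= 1.0 for h in H.get(s, {}).values()):  # (7)
            return False
    return True                                    # (6) is implicit in X's 0/1 values
```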

3 PREDICTION OF SLA PERFORMANCE

According to [5], the response time of workflows follows a long-tail distribution with low latency in most cases. Thus, we assume that the service latency follows an exponential distribution. In addition, we assume that the arrival of requests follows a Poisson process as in [17], and that each server works independently with a single queue. Therefore, each server can be modeled as an M/M/1 queuing system [21]. In an M/M/1 queuing system, there is a single server, where arrivals follow a Poisson process and the job service time follows an exponential distribution. To calculate the parameters, we profile the average service latency T̄_{s_k} of a request on a server s_k, and then calculate its service rate µ_{s_k} = 1/T̄_{s_k}. In each short period T, the system monitor tracks the request arrival rate of each data partition as λ'_{D_i} = N_{D_i}/T, where N_{D_i} is the number of requests on this partition. Then, we can forecast λ_{D_i} for the next period as λ_{D_i} = g(λ'_{D_i}), where g(λ) is a demand forecasting method as introduced in [22]. Thus, the request arrival rate of s_k is λ_{s_k} = \sum_{D_i \in D} λ_{D_i} X_{s_k}^{D_i} H_{s_k}^{D_i}. Based on the forecasted λ_{s_k} and the λ_g given by our mathematical model, the available service capacity of s_k is calculated as µ_a^{s_k} = λ_g^{s_k} − λ_{s_k}. s_k is an overloaded server if µ_a^{s_k} < 0, an underloaded server if µ_a^{s_k} > 0, and an idle server if λ_{s_k} = 0. PDG then conducts data reallocation to eliminate the overloaded servers. Below, we build the mathematical model to calculate λ_g.

Suppose T_{t_k}^{s_k} is tenant t_k's request service latency on server s_k. According to [23], the corresponding cumulative distribution function of T_{t_k}^{s_k} in an M/M/1 queuing system is:

F_{s_k}(t) = 1 − e^{−(µ_{s_k} − λ_{s_k}) t}.    (8)

For a request i targeting a set of data partitions on several servers, the request's service latency depends on the longest service latency among all target servers. Then, the corresponding probability that the service latency meets the deadline requirement is

P_{t_k}^i = p( max{ T_{t_k}^{s_k} : s_k \in R(t_k^i) } ≤ d_{t_k} ),    (9)

where R(t_k^i) is the set of target data servers of request i, and each requested partition is served by one server. In Equation (9), max{T_{t_k}^{s_k}} ≤ d_{t_k} also means that \forall s_k \in R(t_k^i), T_{t_k}^{s_k} ≤ d_{t_k}. Since T_{t_k}^{s_k} is an independent variable for different servers, we have

P_{t_k}^i = \prod_{s_k \in R(t_k^i)} F_{s_k}(d_{t_k}).    (10)

Let event A mean that d_{t_k} is satisfied, and let event B mean that the data request has a target server set from φ_{t_k} = {R_1, R_2, ..., R_j, ...}. We use event B_j to mean that the target server set is R_j. P_{t_k} = p(A ∩ B) = p(B | A) p(A) = p((∪_{R_j \in φ_{t_k}} B_j) | A) p(A). Assuming the B_j are independent of each other, we have

P_{t_k} = \sum_{R_j \in φ_{t_k}} p(B_j | A) p(A) = \sum_{R_j \in φ_{t_k}} p(A | B_j) p(B_j).

According to Equation (10), the deadline-satisfying probability can be rewritten as

P_{t_k} = \sum_{R_j \in φ_{t_k}} ( \prod_{s_k \in R_j} F_{s_k}(d_{t_k}) ) p(B_j).    (11)

However, φ_{t_k} grows exponentially, so tracing every p(B_j) to calculate P_{t_k} is impractical. Then, for tenant t_k, we define

b_{t_k} = min_{s_k} { F_{s_k}(d_{t_k}) }.    (12)

Thus, we can rewrite Equation (11) by combining the different B_j with the same cardinality as

P_{t_k} ≥ \sum_{R_j \in φ_{t_k}} (b_{t_k})^{|R_j|} p(B_j) = \sum_{j \in [1, n]} b_{t_k}^j F_{t_k}(j),    (13)

where F_{t_k}(j) is the probability density function that t_k's request targets j servers in the next period, and n is the maximum cardinality of R_j, which can be derived from the trace of the previous period. Combining Formulas (13) and (2), we solve f(b_{t_k}) = \sum_{j \in [1, n]} b_{t_k}^j F_{t_k}(j) = 1 − ɛ_{t_k}. We use x_{t_k} to denote the solution for b_{t_k} \in (0, 1), and call it the supportive probability of tenant t_k.
Lemma 2. If \forall t_k and \forall s_k \in R(t_k), F_{s_k}(d_{t_k}) ≥ x_{t_k}, then the SLAs are guaranteed.

Proof: Based on this condition and Equation (12), we get b_{t_k} ≥ x_{t_k}. Due to the monotonic increase of f(b_{t_k}) for b_{t_k} \in (0, 1), we get f(b_{t_k}) ≥ f(x_{t_k}) = 1 − ɛ_{t_k}. According to Equation (13), for any t_k we get P_{t_k} ≥ f(x_{t_k}) = 1 − ɛ_{t_k}. Thus, each t_k's SLA is ensured.

According to Lemma 2 and Equation (8), for each tenant t_k we get an upper bound of λ_{s_k} that satisfies the SLAs:

λ_{s_k} ≤ µ_{s_k} + ln(1 − x_{t_k}) / d_{t_k}.

Definition 1. We use K_{t_k} to denote −ln(1 − x_{t_k}) / d_{t_k}, and call K_{t_k} the deadline strictness of tenant t_k, which reflects the hardness of t_k's deadline requirement.

Then, in order to ensure the SLAs, the deadline guaranteed arrival rate should satisfy:

λ_g^{s_k} = µ_{s_k} − max{ K_{t_k} : s_k \in R(t_k) }.    (14)

If \forall s_k, λ_{s_k} ≤ λ_g^{s_k} is satisfied under a specific data allocation, the SLAs are ensured. This is the goal of data reallocation in PDG to satisfy the SLAs.
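The model above reduces to three computable quantities. The hypothetical sketch below solves f(b_{t_k}) = 1 − ɛ_{t_k} for the supportive probability x_{t_k} by bisection (valid because f is monotonically increasing on (0, 1)), and then evaluates K_{t_k} and λ_g^{s_k} per Definition 1 and Equation (14); `F_t` stands for the empirically derived target-set-size distribution F_{t_k}(j), and all names are ours.

```python
import math

def supportive_probability(F_t, eps_t, n, tol=1e-9):
    """Solve f(b) = sum_{j=1..n} b^j * F_t(j) = 1 - eps_t for b in (0, 1).
    F_t(j) is the probability that a request targets j servers; f is
    monotonically increasing in b, so the bisection root is unique."""
    target = 1.0 - eps_t
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid ** j * F_t(j) for j in range(1, n + 1)) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def deadline_strictness(x_t, d_t):
    """K_t = -ln(1 - x_t) / d_t  (Definition 1)."""
    return -math.log(1.0 - x_t) / d_t

def guaranteed_arrival_rate(mu_s, strictness_of_tenants_served):
    """lambda_g = mu_s - max{K_t : s in R(t)}  (Equation (14))."""
    return mu_s - max(strictness_of_tenants_served)
```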

4 PARALLEL DEADLINE GUARANTEED SCHEME

4.1 Overview

Figure 2 shows an overview of the parallel deadline guaranteed scheme (PDG). It consists of three basic components and three components for enhancement. When a server's workload does not satisfy λ_{s_k} ≤ λ_g^{s_k}, it is an overloaded server and its excess workload needs to be offloaded to other underloaded servers. The tree-based parallel processing algorithm builds the servers into a logical tree. It enables the information of servers to be collected in a bottom-up manner and arranges the workload transfer from overloaded servers to underloaded servers. The data reallocation scheduling algorithm is executed at each parent node in the tree to arrange the workload transfer through load re-assignment and data replication. Finally, the server deactivation algorithm aims to minimize the number of active servers.

Fig. 2: Overview of PDG.

The three enhancement algorithms improve the performance of PDG. The deadline strictness clustered data allocation algorithm groups the tenants with similar deadline strictness and places their data partitions on the same server in order to increase the servers' deadline guaranteed arrival rates, hence reducing the probability of SLA violations. The prioritized data reallocation algorithm enables overloaded servers to probe nearby servers to offload their excess loads without waiting for the next time period of the tree-based data reallocation scheduling. In the adaptive request retransmission algorithm, the front-end server retransmits a request targeting an overloaded server to other servers storing the requested data partition's replicas in order to guarantee the SLAs.

4.2 Tree-based Parallel Processing

The load balancer in the system conducts the SLA performance prediction for the next period and triggers the data reallocation process if \exists s_k, λ_{s_k} > λ_g^{s_k}. The load balancer is a cluster of physical machines that cooperate to conduct the load balancing task. In order to reduce the execution time of data reallocation scheduling, we propose the concept of tree-based parallel processing. We assume a main tree topology for the servers [24] in the cloud. The load balancer abstracts a tree structure from the topology of data servers and switches (routers), with all data servers as leaves and switches (routers) as parents (Figure 3(a)). To abstract the tree structure from any topology of data servers and switches (routers), such as a fat-tree [24], PDG selects one of the core routers as the source, and finds the shortest paths from it to all data servers to build the tree structure. It then creates a number of virtual nodes (VNs). The VNs form a tree that mirrors the parent nodes in the topology tree and still uses the servers as leaves, as shown in Figure 3(b). Each VN is mapped to a physical machine in the load balancer; that is, the VN's job is executed by its mapped physical machine. The parallel data reallocation scheduling is conducted over the tree structure in a bottom-up manner. The bottom-up process reduces the traffic load generated during the conversion to a new data allocation by reducing the number of transmission hops for data replication. The VNs at the bottom level are responsible for collecting the following information about their children (i.e., servers): \langle s_k, (X_{s_k}^{D_1} H_{s_k}^{D_1}, X_{s_k}^{D_2} H_{s_k}^{D_2}, \ldots, X_{s_k}^{D_n} H_{s_k}^{D_n}) \rangle, the request arrival rate and the number of replicas of each data partition, and each t_k's supportive probability. Then, they calculate µ_a^{s_k} for their servers and classify them into overloaded, underloaded and idle servers. After that, each VN conducts the data reallocation scheduling, which moves data service load from overloaded servers to underloaded or idle servers. We will explain the details of this process later. After the scheduling, if some servers are still overloaded or still underloaded, the parent forwards the information of these servers to its own parent. This process repeats until the root node finishes the scheduling process. Therefore, the scheduling for servers in the same sub-tree is conducted in parallel, which expedites the scheduling process of the data reallocation.

Fig. 3: Tree structure for parallel data reallocation scheduling: (a) cloud tree topology; (b) virtual node based tree structure.
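A skeleton of this bottom-up process, with illustrative names and the per-level reallocation abstracted behind a callback, might look as follows; in the real load balancer each VN runs on its own machine, whereas this sketch recurses sequentially for clarity.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Server:
    name: str
    mu_available: float  # lambda_g - lambda_s; <0 overloaded, >0 underloaded

@dataclass
class VN:
    children: List["VN"] = field(default_factory=list)
    servers: List[Server] = field(default_factory=list)  # leaves only

def schedule_bottom_up(vn: VN, reallocate) -> List[Server]:
    """Returns the servers this subtree could not resolve, to be handled by
    the parent VN; `reallocate` applies the Section 4.3 scheduling to a pool
    of servers and returns those still over/underloaded."""
    pool = list(vn.servers)
    for child in vn.children:        # in the real system: in parallel
        pool.extend(schedule_bottom_up(child, reallocate))
    return reallocate(pool)          # unresolved servers bubble upward
```

Sibling subtrees are independent, so a deployment can fan the recursive calls out to the VNs' mapped machines; the sequential recursion here only illustrates the data flow.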
4.3 Data Reallocation Scheduling

Each VN groups overloaded servers, underloaded servers and idle servers into an overloaded list (L_o), an available list (L_a) and an idle list (L_i), respectively. In the data reallocation scheduling algorithm, the lists are sorted in order to move load from the most overloaded servers to the most underloaded servers so as to quickly improve their service latency. In the data reallocation, each VN first conducts the serving ratio reassignment algorithm and then conducts the new replica creation algorithm to release the load of overloaded servers.

In the serving ratio reassignment algorithm, the VN fetches each s_k from L_o and releases its extra load |µ_a^{s_k}| to servers in L_a by reassigning the serving ratios of its data partitions to the same partitions' replicas on underloaded servers. On s_k, the data partitions D_i that have a higher request rate (λ_{D_i}) should be selected first in order to release the extra load more quickly. Also, larger data partitions should be selected first, because this proactively reduces the traffic load in the subsequent data replication phase. To consider both factors, we use the harmonic mean metric 2 λ_{D_i} S_{D_i} / (λ_{D_i} + S_{D_i}) to sort the D_i in decreasing order. This tends to quickly release the load of overloaded servers, and to reduce the traffic load in the new replica creation algorithm by avoiding the replication of partitions with a larger arrival rate and data size. Then, a portion min{|µ_a^{s_k}|, µ_a^{s_m}, λ_{D_i}} of the serving load on the replica in s_k is moved to the replica in s_m. This process repeats until s_k releases all of |µ_a^{s_k}| or cannot find an underloaded server to release load to.

In the new replica creation algorithm, each unresolved overloaded server in L_o replicates its data partitions to underloaded servers in L_a. The data partitions with higher λ_{D_i} should be selected first for replication since they can release the extra load more quickly. Also, the replication of a D_i with a larger size generates a higher traffic load. To consider these two factors, we propose the metric λ_{D_i}/S_{D_i}. The D_i on an overloaded server are sorted in decreasing order of λ_{D_i}/S_{D_i}. This aims to quickly release the load of overloaded servers while reducing both the number and the data size of replicas. Also, with the proximity consideration, s_m replicates D_i from the closest server holding a replica of D_i in the current subtree, reducing the traffic load by reducing the number of transmission hops in replication. If s_k cannot release all of its extra load, it replicates its data partitions to the servers in the idle list.
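The following sketch illustrates the two sort keys and the load-shifting step described above (the `Server` record mirrors the one in the earlier sketch; all names are ours). It is a simplification: the real algorithm also updates per-replica serving ratios and falls back to replica creation and the idle list.

```python
def reassignment_order(partitions):
    """partitions: list of (partition_id, rate, size); harmonic-mean key."""
    return sorted(partitions,
                  key=lambda p: 2 * p[1] * p[2] / (p[1] + p[2]),
                  reverse=True)

def replication_order(partitions):
    """Rate-to-size key used by the new replica creation algorithm."""
    return sorted(partitions, key=lambda p: p[1] / p[2], reverse=True)

def release_load(excess, partitions, underloaded):
    """Shift min(remaining excess, receiver capacity, partition rate) per
    step, mirroring min{|mu_a^{s_k}|, mu_a^{s_m}, lambda_{D_i}} above."""
    moves = []
    for pid, rate, size in reassignment_order(partitions):
        if excess <= 0:
            break
        for recv in underloaded:
            shift = min(excess, recv.mu_available, rate)
            if shift > 0:
                moves.append((pid, recv.name, shift))
                recv.mu_available -= shift
                excess -= shift
                rate -= shift
            if excess <= 0:
                break
    return moves, excess   # leftover excess goes to replica creation
```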

4.4 Server Deactivation

This algorithm aims to put as many servers to sleep as possible in order to maximize resource utilization while ensuring the SLAs. In each period, when the data reallocation successfully achieves the SLA guarantee, server deactivation can be triggered if \sum_{s_k \in M_u} (λ_g^{s_k} − λ_{s_k}) ≥ min{λ_{s_k} : s_k \in M_u}, i.e., the sum of the available service capacities of all active servers is no less than the minimum request arrival rate among all servers. In this case, the workload on the server with the minimum request arrival rate may be supported by the other servers. This algorithm is conducted by the root. It first sorts the active servers in ascending order of λ_g. Then, starting from the first active server s_k, it sets s_k's λ_g to 0, and runs the data reallocation scheduling offline. If the data reallocation is successful, i.e., all of s_k's workload can be offloaded to other servers while ensuring the SLAs, the root conducts the data reallocation and deactivates s_k to sleep. Otherwise, the process terminates. The system then has a new data allocation satisfying the SLAs with the minimum number of active servers.

5 PDG ENHANCEMENT

5.1 Deadline Strictness Clustered Data Allocation

Different tenants have different deadline strictness (K_{t_k}, Definition 1). Intuitively, a tenant with a short deadline (d_{t_k}) and a small exception probability (ɛ_{t_k}) has a higher K_{t_k}, which leads to a small deadline guaranteed arrival rate (λ_g^{s_k}) given the service rate of s_k. We use M_{t_k} to denote the set of all servers serving the data requests of t_k. Based on Formula (14), if we place the data partitions of tenants with greatly different K_{t_k} on the same server, many tenants' deadline strictness values are much larger than min{K_{t_k}} over the tenants served there, which leads to low resource utilization: all such servers end up with small guaranteed arrival rates. By isolating the services of groups of tenants having different deadline strictness, we can reduce the average max{K_{t_k} : s_k \in R(t_k)} over all underloaded servers, which leads to higher potential resource utilization. To avoid this problem, each VN classifies all tenants into different groups (G_i) according to their K_{t_k}:

t_k \in G_i iff K̄_{t_k} \in [τ i, τ (i + 1)),    (15)

where τ is the K_t range of each group, and K̄_{t_k} is the average of K_{t_k} over previous data reallocation operations. After classification, the VN avoids placing the data partitions of tenants from different groups on the same server. To this end, it conducts the data reallocation of Section 4 separately for the different groups. That is, a VN runs one data reallocation process per group, only with the servers of the group's tenants and idle servers. This algorithm increases the λ_g of a server by reducing the variance of K_{t_k} among the tenants having data partitions on it, which increases the resource utilization of the system. In our future work, we will investigate resource multiplexing among different groups while increasing resource utilization.
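The grouping rule (15) is straightforward to express; the sketch below (illustrative names) buckets tenants by their average deadline strictness with group width τ, after which one reallocation run is executed per bucket.

```python
from collections import defaultdict

def cluster_by_strictness(K_bar, tau):
    """K_bar: tenant -> average deadline strictness.
    Returns group index i -> tenants, i.e. t in G_i iff K in [tau*i, tau*(i+1))."""
    groups = defaultdict(list)
    for tenant, k in K_bar.items():
        groups[int(k // tau)].append(tenant)
    return groups
```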
5.2 Deadline Guarantee under Request Rate Variations

Within a period, the request arrival rates may vary greatly over time and sometimes even experience sharp increases, which would violate the SLAs. When a heavily overloaded server waits for the periodic data reallocation from the load balancer, it may experience the overload for a relatively long time, which exacerbates the SLA violation. In order to constantly ensure the SLAs dynamically within each period, we could use the highest arrival rate in a certain previous time period as the predicted rate for the next period. However, this would lead to low resource utilization by using more servers. Thus, we propose the prioritized data reallocation algorithm, which quickly releases the load on heavily overloaded servers in order to guarantee the SLAs.

Fig. 4: Prioritized data reallocation for deadline guarantee.

The overloaded server s_k autonomously probes nearby servers to quickly release its load. s_k selects the data partitions with the largest request arrival rates whose sum is larger than |µ_a^{s_k}|. It then broadcasts the information of these selected data partitions to nearby data servers. When an underloaded server, say s_m, receives this message, it responds with its available service capacity µ_a^{s_m} and the information of the duplicated data partitions it holds. When s_k receives the responses from its nearby servers, it conducts the serving ratio reassignment algorithm and notifies the load balancer and the participating servers of the data reallocation information. If s_k is still overloaded, it sends a load releasing request to the load balancer. Inside the load balancer, we set a threshold T_r on the available service capacity µ_a of overloaded servers. When µ_a^{s_k} < T_r, i.e., the overload degree is high, the request is put into a priority queue maintained by the root VN in Figure 4. Once the root VN notices the existence of such a server, it instantly handles the server with the smallest µ_a using the data reallocation scheduling algorithm.
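A compact sketch of this prioritized path, under the data structures assumed earlier, is shown below; the probing, the reassignment step and the threshold test are stand-ins for the messages exchanged in Figure 4, and all names are ours.

```python
import heapq

def prioritized_reallocation(server, nearby, load_balancer_queue, T_r,
                             reassign_serving_ratios):
    """server: heavily overloaded Server; nearby: probed Server candidates."""
    responders = [s for s in nearby if s.mu_available > 0]   # probe replies
    reassign_serving_ratios(server, responders)              # Section 4.3 step
    if server.mu_available < 0:                              # still overloaded
        if server.mu_available < T_r:
            # min-heap keyed on mu_available: most overloaded popped first
            heapq.heappush(load_balancer_queue,
                           (server.mu_available, server.name))
        # otherwise it waits for the next periodic tree-based scheduling
```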

5.3 Adaptive Request Retransmission

Within a period, a tenant t_k may make its SLA requirement more rigid by requiring a smaller d_{t_k} or ɛ_{t_k}, so that its deadline strictness becomes more rigid. According to Formula (14), a more rigid deadline strictness of a tenant leads to a smaller deadline guaranteed arrival rate λ_g^{s_k}, which is the upper bound of the request arrival rate at s_k without SLA violations. Thus, servers serving this tenant's requests may become overloaded. We could depend on the data reallocation scheduling algorithm of Section 4.3 to achieve load balance again. However, it needs to replicate data partitions from overloaded servers to underloaded servers, which introduces a certain traffic load. To save this traffic load, we instead rely on a request retransmission algorithm running on the front-end server, without depending on data reallocation.

In a request retransmission algorithm, the front-end server retransmits a request to other servers storing the requested data partition's replicas in order to guarantee the SLAs. This way, although some of the servers cannot independently supply an SLA guaranteed service to t_k, the earliest response time among them may satisfy the SLA requirement. Once there is a response, the front-end server cancels all other redundant requests [25]. The cancelled requests will then not be served, and the request arrival rate of the requested data partition will not be changed. Intuitively, we could simultaneously transmit a request for data partition D_i to all servers that store a replica of D_i in order to achieve a low response latency with high probability. However, this generates high communication overhead due to the many transmitted messages and request cancellations. To reduce the communication overhead, we can retransmit requests to servers sequentially. In Percentile [25], a front-end server transmits requests to the servers storing the requested data partition one by one, and after each request transmission waits for a fixed percentile of the CDF of the response latencies of all the servers in the system, until it receives a response. However, since it determines the waiting time without deadline awareness, if the percentile is high it may not guarantee the SLA (i.e., a probability no less than 1 − ɛ_{t_k} of receiving a response within the deadline); otherwise, it may generate high communication overhead. Also, due to the fixed waiting time, it cannot constantly supply an SLA guaranteed service when the SLA requirement varies.

A challenge here is to adaptively determine the waiting time before retransmission so that the SLA requirement can still be satisfied while the communication overhead is minimized. To tackle this challenge, we propose an adaptive request retransmission algorithm. In this algorithm, the waiting time, named the adaptive waiting time (denoted by τ_{t_k}), is specified to be the longest delay with deadline awareness, so that it supplies an SLA guaranteed service to tenant t_k and meanwhile minimizes the communication overhead. That is, the setting of τ_{t_k} ensures that a response is received by deadline d_{t_k} with a probability equal to 1 − ɛ_{t_k} while minimizing the communication overhead. We use L_{D_i} to denote the list of servers (that store a replica of D_i) ordered in ascending order of their request arrival rates, with indices starting from 0. The front-end server sequentially sends the requests for D_i to the servers in L_{D_i} one by one, so that more loaded servers are requested later. We assume that each server responds to the request independently. Given the CDF of the response latency of each server serving the request from tenant t_k and t_k's SLA requirement (d_{t_k}, ɛ_{t_k}), the probability that no server responds to the request within the deadline should be equal to ɛ_{t_k}:

\prod_{s_k \in L_{D_i}} ( 1 − F_{s_k}(λ_{s_k}, d_{t_k} − τ_{t_k} I(s_k, L_{D_i})) ) = ɛ_{t_k},    (16)

where I(s_k, L_{D_i}) \in [0, |L_{D_i}| − 1] is a function that returns the index of server s_k's position in list L_{D_i}, and F_{s_k}(λ_{s_k}, d_{t_k} − τ_{t_k} I(s_k, L_{D_i})) represents the probability of receiving a response from the server s_k at position I(s_k, L_{D_i}) in list L_{D_i}. Since the front-end server waits for time τ_{t_k} I(s_k, L_{D_i}) before the retransmission to s_k, s_k must respond within d_{t_k} − τ_{t_k} I(s_k, L_{D_i}) in order to meet the deadline. By solving this equation, we can derive the adaptive waiting time τ_{t_k} that satisfies the rigid SLA requirement of tenant t_k and also maximally saves the communication overhead.
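Equation (16) has a single unknown, τ_{t_k}, and its left-hand side grows monotonically with τ_{t_k} (a longer wait shrinks every later server's response window). A hedged sketch can therefore solve it by bisection on [0, d_{t_k}], using the M/M/1 CDF of Equation (8) for each server; all function and parameter names below are ours.

```python
import math

def miss_prob(servers, d_t, tau):
    """Probability that no server responds by the deadline when the server
    at index i receives the (re)transmitted request after waiting tau * i.
    Each server is (mu, lam), its M/M/1 service and arrival rates (mu > lam)."""
    p = 1.0
    for i, (mu, lam) in enumerate(servers):
        window = d_t - tau * i
        hit = 1.0 - math.exp(-(mu - lam) * window) if window > 0 else 0.0
        p *= (1.0 - hit)
    return p

def adaptive_waiting_time(servers, d_t, eps_t, tol=1e-6):
    """Largest tau with miss_prob(tau) == eps_t (Equation (16)); miss_prob is
    monotonically nondecreasing in tau, so bisection on [0, d_t] suffices."""
    if miss_prob(servers, d_t, 0.0) > eps_t:
        return 0.0   # even simultaneous requests cannot meet this SLA
    lo, hi = 0.0, d_t
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if miss_prob(servers, d_t, mid) <= eps_t:
            lo = mid
        else:
            hi = mid
    return lo
```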
Based on the adaptive determination of τ_{t_k}, we then present the adaptive request retransmission algorithm. Starting from the first server s_k in L_{D_i}, the front-end server waits for the adaptive waiting time τ_{t_k} after transmitting a request from tenant t_k to s_k. If there is a response during the waiting time, all requests not yet responded to are canceled and the process terminates; otherwise, the front-end server sends the request to the next server in L_{D_i}.

6 PERFORMANCE EVALUATION

In simulation. We conducted a trace-driven simulation on both a simulator and Amazon EC2 [16] to evaluate the performance of PDG in comparison with other methods. In the simulation, there were 300 data servers, each of which has a storage capacity randomly chosen from {16TB, 20TB, 24TB} [26, 27]. Each ten servers were simulated by one node in the Palmetto Cluster [28], which has 770 8-core nodes. The topology of the storage system is a typical fat-tree with three levels [24]. In each rack, there were 40 servers, and each aggregation switch linked to five edge switches. In the experiments, each server was modeled as an M/M/1 queuing system [29, 21]. In an M/M/1 queuing system, there is a single server, where the arrivals of requests follow a Poisson process and the job service time follows an exponential distribution. According to [23], the corresponding inverse function of the CDF of the response latency distribution is F_s^{-1}(d, ɛ) = µ_s + ln(1 − ɛ)/d. The service rate µ of each server was randomly chosen from [80, 100]. According to Equation (14), we can then derive λ_g. The default number of tenants was 50. For each tenant, the number of its data partitions was randomly chosen from [10, 90]. Each partition has a size randomly chosen from [16GB, 36GB], and its request arrival rate in the Poisson process was generated as a multiple of the visit rate of a randomly selected file from the CTH trace [30], as in Section 5.3. For each tenant's SLA, d_t was randomly chosen from [100 ms, 200 ms] [31], and ɛ_t was set to 5%, referring to 95th-percentile pricing [32]. We set the minimum number of replicas of each partition to 2. Initially, each replica of a partition has the same serving ratio.

On Amazon EC2. We repeated the simulation experiments in a real-world environment consisting of 30 nodes in an availability zone of EC2's US West region [16]. We chose all nodes as front-end servers on EC2, and the request arrival rate of each requested data partition has the same visit rate as in [30]. Each node in EC2 simulates 10 data servers in order to enlarge the system scale, and each data server has a service rate randomly chosen from [80, 100]. Due to the local storage limitation of VMs in EC2, the partition size and server storage capacity were set to 1/3 of the settings in simulation. The default number of tenants was 100. We measured the distance of any pair of data servers by the average ping latency, based on which we mapped all simulated

Fig. 5: Average latency. Fig. 7: QoS of SLA.

storage servers into a typical three-layer fat-tree with 20 servers in a rack. According to the settings, the average request rate per tenant is around 30 requests per second. In all experiments, we enlarged the request arrival rates of each partition by one to six times. Thus, the average request rate per tenant was increased from 30 to 180 requests per second, with a 30-request increase at each step. The default average request rate per tenant was set to 120.

We compared PDG with CDG, which runs in a centralized manner without the tree structure. We also compared PDG with a deadline unaware strategy, which places replicas greedily and sequentially on servers under the constraints of each server's storage capacity and service rate. It is adopted by [33] to allocate data partitions to different servers, so we denote it by Pisces. In order to compare the performance of our strategies of Section 4.3, we provided an enhanced Pisces strategy (named Deadline) for comparison, which additionally ensures that the request arrival rate on a server cannot exceed its deadline guaranteed arrival rate λ_g. We also added another comparison method (denoted by Random), which randomly places data replicas on servers that have enough storage capacity. We set the SLA prediction period to one hour. We conducted each experiment 10 times with a one-hour run each and report the average experimental results.

6.1 Performance of Deadline Guarantee

In this experiment, each tenant stores all of its data partition replicas randomly in the system. Figures 5(a) and 5(b) show the average latency of the requests of all tenants versus the number of tenants in simulation and on the testbed, respectively. They show that Pisces > PDG ≈ CDG ≈ Deadline > Random when there are no more than 50 tenants. With 60 tenants, the average latency of Random is larger than that of all three methods with deadline awareness. With fewer tenants, Random uses all servers, so the load on a server is the smallest. When the system has a heavy data request load from 60 tenants, Random produces unbalanced utilization among servers, and some overloaded servers have much larger latency than the deadlines. Since PDG, CDG and Deadline supply deadline guaranteed services, they produce similar average latencies. Pisces does not consider deadlines, and distributes more load on a server, which leads to a much longer service latency than all other methods. The figures also show that the average latency of Random increases proportionally to the number of tenants, while the other methods have nearly stable average latency. The methods except Random constrain the request arrival rate on a server below λ_g, and try to fully utilize the active servers. Thus, their expected load on an active server is nearly stable as the number of tenants increases.

Fig. 6: Average excess latency. Fig. 8: SLA satisfaction level.
In Random, more replicas of partitions are allocated to a server, which leads to an increasing average latency as the number of tenants increases. Figures 5(a) and 5(b) indicate that PDG and CDG can supply a deadline guaranteed service with stable and low average latency to tenants even under a heavy system load.

We also evaluate the excess latency of a data request, defined as the extra service latency beyond the deadline of a request. Figures 6(a) and 6(b) show the average excess latency of all requests. They show a similar curve and relationship for all methods as Figure 5(a), for the same reasons. A noteworthy difference is that, unlike the average latency, the average excess latency of Random is larger than that of Deadline, PDG and CDG when the number of tenants exceeds 30 or 60 due to its neglect of SLAs, in simulation and on the testbed, respectively. Also, Random generates an average excess latency larger than 100 ms with 40 or more tenants, which degrades customers' sales [8] and prevents them from shifting workloads to cloud storage systems. Figures 6(a) and 6(b) also indicate that PDG and CDG provide a lower average excess latency, which means a lower excess latency when the SLA is violated.

We define the QoS of SLA as min_{t_k} { P_{t_k} / (1 − ɛ_{t_k}) }. Figures 7(a) and 7(b) show the QoS of each method. They show that Deadline, CDG and PDG can all supply a deadline-aware service with a QoS slightly larger than 1, which means the SLAs of all tenants are satisfied. Due

Fig. 9: Resource utilization. Fig. 10: Computing time. Fig. 11: Saved energy. Fig. 12: Traffic load.

to the worst performance on overloaded servers, for the same reason as in Figure 5(a), Random cannot supply a deadline guaranteed service when the request load is heavy. Its QoS is reduced to 80% when there are 60 tenants. Since the QoS is very important for tenants operating web applications, this is a big obstacle for customers shifting their workloads to cloud storage systems. Also, Random always uses all servers even when the number of tenants is small. Since the servers can supply SLA guaranteed services to 60 tenants, as shown for PDG and CDG, Random wastes at least 83%, 67% and 50% of resources to supply a deadline guaranteed service when there are 10, 20 and 30 tenants, respectively, in simulation. Also, for the same reason as in Figure 5(a), Pisces has a much worse QoS than the other methods. Although Deadline can supply a deadline-aware service, its QoS is larger than PDG's and CDG's. That is because it uses more servers to supply the deadline-aware service, which means Deadline wastes system resources to supply over-satisfied services. Figures 7(a) and 7(b) indicate that PDG and CDG achieve a QoS of SLA larger than and closer to 100%, respectively, which is higher than those of all other methods.

Figures 8(a) and 8(b) show the median, 5th and 95th percentiles of all tenants' SLA satisfaction levels, defined as P_{t_k} / (1 − ɛ_{t_k}), in simulation and on the testbed, respectively. For the same reason as in Figure 7(a), the median satisfaction level follows Random > Deadline ≈ PDG ≈ CDG > Pisces when the number of tenants is no larger than 50 (90), and Random supplies worse performance than PDG, CDG and Deadline beyond that, in simulation (on the testbed). Random exhibits larger variances between the 5th and 95th percentiles than the three deadline-aware methods when the request load is heavy. These results indicate that Random supplies unfair deadline guaranteed services among tenants with different SLAs. Also, Pisces produces the largest variance, because the requests from tenants with looser deadline requirements can be more easily satisfied. Also, Deadline can supply SLA guaranteed services for all tenants, but it uses more system resources than PDG and CDG, for the same reasons as in Figure 7(a). Figures 8(a) and 8(b) indicate that PDG and CDG can constantly supply SLA guaranteed services using fewer system resources.

6.2 Performance for Multiple Objectives

In this section, we measure the performance of all systems in achieving the multi-objectives, including resource utilization maximization, and traffic load and scheme execution latency minimization. Figures 9(a) and 9(b) show the median, the 5th and 95th percentiles of the server resource utilization, calculated by ρ_{s_k} = λ_{s_k} / µ_{s_k}. The median server utilization follows Random < Deadline < PDG < CDG < Pisces.
Random generates the smallest utilization by using all servers, and Pisces generates the highest utilization by fully utilizing the service rates of servers with its greedy strategy, but at the cost of a very low QoS, as shown in Figure 7(a). PDG and CDG produce higher resource utilization than Deadline. PDG and CDG fully utilize the available service capacities of active servers through serving ratio reassignment and data replication. When Deadline tries to allocate a partition replica with a given request arrival rate, it chooses a server that must be able to support this request arrival rate by itself, without considering distributing the load among several servers, thus leading to lower server utilization. Also, by balancing the load between the most overloaded and underloaded servers, PDG and CDG have smaller variances between the 5th and 95th percentiles of resource utilization than Deadline. CDG has higher resource utilization than PDG (0.3% more on average). This is because in CDG, the centralized load balancer can deactivate the server with the highest service rate among all sleeping servers, which leads to fewer active servers supporting the deadline-aware service. Thus, CDG has higher utilization than PDG. The experimental results indicate that PDG can achieve comparable resource utilization to CDG, and both of them have higher and more balanced resource utilization than Deadline, which also offers a deadline-aware service.

As in [34], we measured the energy savings in server-hours by counting the sleeping time of all servers. Since Random uses all servers without energy consideration, we only measured the performance of the other methods. Figures 11(a) and 11(b) show that the saved energy follows Deadline < PDG < CDG < Pisces, for the same reason as in Figure 9(a). PDG can save up to 95 server-hours more than Deadline on average. The figures indicate that both PDG and CDG can save more energy than Deadline. Even though CDG saves more energy than PDG, CDG uses much more computing time and introduces more traffic load than PDG. In order to measure these overheads, we set the request arrival rate of each partition, λ_p, to a value randomly chosen from [λ_p(1 − 2x), λ_p(1 + 2x)], where x is the average arrival rate variance, increased from 5% to 25% by 5% at each step. Random and Pisces cannot supply deadline guaranteed services, and they do not schedule data reallocation after the request arrival rates vary. Therefore, we compare the performance of PDG with CDG and Deadline.

Figures 10(a) and 10(b) show the median, the 5th and 95th percentiles of the algorithm computing time. We see that the computing time and its variance follow Deadline < PDG < CDG. This is because the data reallocation algorithm in both PDG and CDG has higher time complexity than the greedy algorithm in Deadline. In PDG, the tree-based parallel processing shortens the computing time. Thus, PDG only takes around 5.8% of the computing time of CDG.

We measured the traffic load in GB·hops as introduced in Section 2.2. Figures 12(a) and 12(b) show the median, the 5th and 95th percentiles of the traffic load, which follows PDG < CDG < Deadline. Since PDG and CDG try to reduce the traffic load in data reallocation, they produce lower traffic load than Deadline. PDG has lower traffic load than CDG because PDG has a lower expected transmission path length than CDG by resolving the overloaded servers locally first. The figures indicate that PDG introduces the lowest traffic load to the system, which produces the least interruption to the cloud data storage service.

Fig. 13: Conversion time.

We also measured the conversion time of a data reallocation schedule, i.e., the time by which all servers finish the conversion to the new data allocation. Figure 13 shows the average conversion time of all systems, which shows a similar curve and relationship for all methods as Figure 12(a), for the same reason. It indicates that PDG achieves the lowest conversion time, no longer than 5 seconds, causing the fewest effects on the SLA. Figures 12 and 13 show that PDG achieves a better performance in minimizing the traffic load.

6.3 Performance of Deadline Guarantee Enhancement

In this section, we present the performance of each of the enhancement algorithms individually.

6.3.1 Performance of Deadline Strictness Clustered Data Allocation

In order to make the deadline strictness of tenants having data on the same server vary greatly, different from the scenario in Section 6.1, in this experiment tenants add data replicas to servers in turn and each tenant adds one data replica to a server at a time. Since this method does not affect the performance of Random and Pisces, which do not consider tenant deadline requirements, we compared the performance of Deadline, and of PDG with and without the deadline strictness clustered data allocation algorithm, denoted by PDG (w/ c) and PDG (w/o c), respectively. PDG (w/ c) groups all tenants into 5 different clusters.
Figures 14(a) and 14(b) show the median, the 5th and 95th percentiles of the server resource utilization versus the number of tenants. For the same reasons as in Figure 9(a), Deadline generates lower resource utilization than PDG. Also, PDG (w/ c) generates higher utilization than PDG (w/o c). This is because the data partitions with strict SLAs increase the deadline strictness requirement of the data partitions on their servers, which reduces the λ_g of the servers and then decreases the resource utilization. Thus, PDG (w/o c) supplies an overqualified service with higher P_{t_k} to the tenants with lower deadline strictness, while PDG (w/ c) isolates the deadline service performance of tenants with different deadline strictness. Without supplying overqualified service, PDG (w/ c) produces higher utilization than PDG (w/o c). Figure 14(a) indicates that the deadline strictness clustered data allocation algorithm can help achieve higher resource utilization when the tenant deadline strictness varies greatly. By rationally utilizing the system resources, PDG (w/ c) can still supply a deadline guaranteed service when there are 60 tenants, while the others cannot. The experimental results indicate that PDG can achieve higher resource utilization with the deadline strictness clustered data allocation algorithm. Figures 15(a) and 15(b) show the extra energy saved by PDG (w/o c) and PDG (w/ c) versus the number of tenants. PDG (w/ c) saves more energy than PDG (w/o c). These results indicate that the deadline strictness classification strategy is effective in helping maximize the resource utilization and minimize the number of active servers.

6.3.2 Performance of Prioritized Data Reallocation

We measured the effectiveness of the prioritized data reallocation algorithm in satisfying the SLAs of all tenants. In this experiment, each data partition's request arrival rate varies once at a randomly selected time during the experiment. The variation of request arrival rates is the same as in Figure 10. We use PDG_R and PDG_NR to denote PDG with and without this algorithm. We set T_r = 0 in PDG_R. We use PDG_H to denote PDG that uses the highest arrival rate in a previous time period as the predicted rate for the next period. Figures 16(a) and 16(b) show the median, the 5th and 95th percentiles of the QoS of SLA of each method. They show that the QoS follows PDG_NR < PDG_R < PDG_H. PDG_NR cannot supply a deadline guaranteed service with varying data request

Fig. 14: Resource utilization improvement of the deadline strictness clustered algorithm. Fig. 15: Extra saved energy of the deadline strictness clustered algorithm. Fig. 16: QoS of SLA enhancement of the prioritized data reallocation algorithm. Fig. 17: Extra saved energy of the prioritized data reallocation algorithm.

arrival rates in the next period. The QoS of PDG_NR decreases when the variance increases. With a greater request arrival rate variance, the overloaded servers with larger arrival rates may supply longer latency to more requests, which leads to a QoS lower than 100%. PDG_R instantly reallocates the data replicas with high request arrival rates, and so can always supply a deadline guaranteed service with no less than 100% QoS. PDG_H uses the past highest request arrival rate of each data partition as the predicted value, so it also supplies a deadline guaranteed service. Figures 16(a) and 16(b) indicate that the prioritized data reallocation algorithm helps supply a deadline guaranteed service under varying request arrival rates.

Figures 17(a) and 17(b) show the energy saved by the different methods versus the average arrival rate variance per partition. They show that the saved energy follows PDG_NR > PDG_R > PDG_H. Because both PDG_R and PDG_NR have the same initial data allocation, and PDG_R needs to additionally execute the data reallocation algorithm for the prioritized servers experiencing severe SLA violations, PDG_R saves less energy than PDG_NR. For the same reason as in Figure 16(a), PDG_H produces more active servers than the other two methods. These figures indicate that the prioritized data reallocation algorithm saves more energy than simply using the largest data request arrival rate to handle request bursts, while ensuring the SLAs.

6.3.3 Performance of Adaptive Request Retransmission

In order to show the individual performance of the adaptive request retransmission algorithm, we measure its performance on Amazon EC2 [16] without PDG's other enhancement algorithms of Section 5. In this experiment, by default, we tested the performance of the data requests from one tenant t_k, and the number of t_k's data partitions was set to 10. The distributions of the size and the visit rate of a data partition are the same as before. We used two nodes in Amazon EC2 [16] as the front-end servers. By default, we chose r = 6 other nodes in the same region of Amazon EC2 [16] as replica servers, each of which stores replicas of all the data partitions.

In this experiment, we first show the effectiveness of sequential retransmission in meeting SLA requirements and saving communication overhead. We use One-One to denote the algorithm that randomly selects one server to request the data partition, and One-All to denote the algorithm that simultaneously sends requests to all servers storing replicas of the requested data. We use p = x% to denote the Percentile [25] algorithm with a waiting time equal to the x-th percentile of the response latencies of all data partition requests in the system in the last period. We conducted the experiment for one hour to get the CDF of the response latency of each server and then evaluated the performance of all algorithms during the next hour, in Amazon EC2 US East and US West (Oregon) separately. We then measured the effectiveness of our adaptive request retransmission algorithm (denoted by Adaptive) in satisfying the SLA and reducing communication overhead. In this experiment, after a one-hour run, tenant t_k among all tenants reduced its d_{t_k} to 400 ms from 500 ms and kept 1 − ɛ_{t_k} = 95% the same as before. τ_{t_k} was only calculated once after the first hour. We compared Adaptive with the One-One and One-All algorithms. We also compared it with Percentile [25], in which the front-end server retransmits the request, if there is no response, after the 95th percentile of the response latencies of all responses from all servers for all requests in the last period. We measured the performance of each algorithm during each hour of 5 consecutive hours after the one-hour run.

Figure 18(a) shows the user satisfaction level of the different algorithms during each hour. It shows that the user satisfaction level follows One-All > Adaptive > Percentile > One-One. One-All submits the requests to all servers containing a replica of the requested
Performance of Adaptive Request Retransmission

In order to show the individual performance of the adaptive request retransmission algorithm, we measure its performance in Amazon EC2 [16] without PDG's other enhancement algorithms in Section 5. In this experiment, by default, we tested the performance of data requests from one tenant t with a fixed number of data partitions. The distributions of the size and the visit rate of a data partition are the same as before. We used two nodes in Amazon EC2 [16] as the front-end servers. By default, we chose r = 6 other nodes in the same region of Amazon EC2 [16] as replica servers, each of which stores the replicas of all data partitions. In this experiment, we first show the effectiveness of the sequential retransmission in meeting SLA requirements and saving communication overhead. We use One-One to denote the algorithm that randomly selects one server to request the data partition, and One-All to denote the algorithm that simultaneously sends requests to all servers storing replicas of the requested data. We use p = x% to denote the Percentile [25] algorithm, whose waiting time equals the x-th percentile of the response latencies of all requests of data partitions in the system in the last period. We conducted the experiment for one hour to obtain the CDF of the response latency of each server, and then evaluated the performance of all algorithms during the next hour in Amazon EC2 US East and US West (Oregon), separately.

We then measured the effectiveness of our adaptive request retransmission algorithm (denoted by Adaptive) in satisfying the SLA and reducing communication overhead. In this experiment, after one hour of running, tenant t among all tenants reduced its d_t from 50ms to 40ms and kept its required percentage (95%) the same as before. τ_t was only calculated once, after the first hour. We compared Adaptive with the One-One and One-All algorithms. We also compared it with Percentile [25], in which the front-end server retransmits the request if there is no response after the 95th percentile of the response latencies of all responses from all servers for all requests in the last period. We measured the performance of each algorithm during each hour of 5 consecutive hours after one hour of running.
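Before turning to the results, the mechanics being compared can be sketched as follows: the front-end sends a request to one replica server and, if no response arrives within a waiting time τ, re-sends it to the next. One-All corresponds to τ = 0 and One-One to never retransmitting; Percentile waits the 95th percentile of last period's latencies. Equation (6) is not reproduced in this section, so the "Adaptive" rule below is only a hypothetical stand-in that reserves enough of the deadline d_t for a retransmitted request.

```python
# Minimal sketch of sequential request retransmission under the four policies
# compared in this experiment. The "Adaptive" waiting time is an assumed
# stand-in for Equation (6), which is defined earlier in the paper.
import random

def percentile(samples, pct):
    # Empirical percentile of last period's response latencies.
    ordered = sorted(samples)
    return ordered[min(len(ordered) - 1, int(pct / 100.0 * len(ordered)))]

def waiting_time(policy, latency_samples, deadline_ms):
    if policy == "One-All":        # send to all replicas at once
        return 0.0
    if policy == "One-One":        # single attempt, never retransmit
        return float("inf")
    if policy == "Percentile":     # wait the 95th percentile of last period
        return percentile(latency_samples, 95)
    # "Adaptive" (hypothetical): leave a retransmitted request enough of the
    # deadline to finish within the typical (95th percentile) latency.
    return max(0.0, deadline_ms - percentile(latency_samples, 95))

def meets_deadline(deadline_ms, tau, draw_latency, replicas):
    # Send at times 0, tau, 2*tau, ...; stop retransmitting once an earlier
    # send has already responded; the request succeeds if the earliest
    # response beats the deadline.
    earliest = float("inf")
    for i in range(replicas):
        send_time = i * tau if i else 0.0
        if earliest <= send_time:
            break
        earliest = min(earliest, send_time + draw_latency())
    return earliest <= deadline_ms

random.seed(42)
draw = lambda: random.expovariate(1 / 10.0)   # ~10ms mean replica latency
history = [draw() for _ in range(1000)]
tau = waiting_time("Adaptive", history, deadline_ms=50.0)
hits = sum(meets_deadline(50.0, tau, draw, replicas=6) for _ in range(1000))
print(f"tau = {tau:.1f}ms, {hits / 10.0:.1f}% of requests met the 50ms deadline")
```

Setting τ = 0 in this sketch recovers One-All exactly, which is why the discussion below treats One-All as Adaptive with τ_t = 0.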
Figure 18(a) shows the user satisfaction level of the different algorithms during each hour. It shows that the user satisfaction level follows One-All > Adaptive > Percentile > One-One. One-All submits the requests to all servers containing a replica of the requested data partition simultaneously, which can be regarded as Adaptive with τ_t = 0.
Fig. 18: Performance of the adaptive request retransmission algorithm. (a) SLA satisfaction level; (b) communication overhead.
Fig. 20: SLA satisfaction level of enhanced PDG.

Adaptive retransmits the request to servers one by one after the adaptive waiting time τ_t, which satisfies the SLA. Since a lower τ_t leads to a higher probability of receiving a response within the deadline, One-All generates a higher satisfaction level than Adaptive. The 95th-percentile response latency used in Percentile is much longer than τ_t in Adaptive; therefore, Percentile has a lower probability of receiving the response by the deadline and cannot supply an SLA-guaranteed service. One-One does not retransmit at all, and hence produces a lower probability of receiving the response within the deadline than Percentile. This figure indicates that Adaptive and One-All can supply an SLA-guaranteed service. However, One-All generates much higher transmission overhead, as shown in the following.

Figure 18(b) shows the communication overhead of the different algorithms during each hour. It shows that the communication overhead follows One-All > Adaptive > Percentile > One-One. One-All submits the requests to all servers containing the requested data partition simultaneously, leading to the highest communication overhead. The other algorithms send requests to servers one by one after a certain waiting time, during which a response may be received; thus, they generate lower communication overhead than One-All. Adaptive adaptively sets the waiting time τ_t to guarantee the SLA while minimizing the number of retransmission messages. The waiting period used in Percentile is much longer than the adaptive waiting time τ_t; thus, Percentile saves more retransmission messages than Adaptive. One-One selects only one server to request the data partition, without retransmission, resulting in the lowest communication overhead. Figure 18(b) indicates that Adaptive generates lower transmission overhead than One-All, though both of them can supply a deadline-guaranteed service. Although Percentile and One-One generate lower communication overhead than Adaptive, they cannot provide an SLA-guaranteed service. Figures 18(a) and 18(b) together indicate that Adaptive can supply an SLA-guaranteed service while maximally saving communication overhead.

Fig. 19: Performance of the adaptive request retransmission algorithm with different deadline requirements. (a) SLA satisfaction level; (b) communication overhead.
Fig. 21: Saved energy of enhanced PDG. (a) In simulation.

We then measured Adaptive's performance under different SLA requirement changes. We tested the performance of the data requests of 5 tenants, each having a fixed number of data partitions. After one hour of running, each tenant reduces the deadline from 50ms to a lower value (indicated in the figure). Figure 19(a) shows the (100% - ε) (i.e., 95th) percentile of the response latency of the data requests of each of the 5 tenants in Adaptive after each of 5 hours in total.
From the figure, we can observe that the 95th percentiles of the response latencies are all below the required deadline, which means that Adaptive receives at least (100% - ε) of requests within each different d_t. Adaptive changes the adaptive waiting time τ_t according to Equation (6) under different SLA requirements. The figure indicates that Adaptive can always supply an SLA-guaranteed service even when a tenant shortens its deadline requirement. Figure 19(b) shows the communication overhead of the data requests of each of the 5 tenants in Adaptive after each of 5 hours in total. It shows that a lower deadline requirement leads to a larger communication overhead, because a lower deadline requirement leads to a shorter adaptive waiting time τ_t according to Equation (6). The experimental results indicate that Adaptive can save communication overhead by adaptively adjusting the waiting time τ_t when the deadline is decreased, while still supplying an SLA-guaranteed service as shown in Figure 19(a).

Performance of Enhanced PDG

We then measure the performance of PDG with all three enhancement algorithms (denoted by PDG_Plus), including the deadline strictness clustered data allocation, the regular prioritized data reallocation with period T_r, and the adaptive request retransmission. We measured the SLA satisfaction performance and the energy saving, since they are the most important metrics. In this experiment, tenants add data replicas to servers in turn, and each tenant adds one data replica to a server at each time.

Also, each data partition's request arrival rate varies once at a randomly selected time during the experiment. Figures 20(a) and 20(b) show the median, the 5th and the 95th percentiles of all tenants' SLA satisfaction levels in simulation and on the testbed, respectively. In this experiment, PDG cannot supply a deadline-guaranteed service while PDG_Plus can, which shows the combined effectiveness of the three enhancement algorithms in improving the SLA satisfaction performance. Figures 21(a) and 21(b) show the extra saved energy versus the average arrival rate variance per partition. They show that the extra saved energy follows PDG_Plus > PDG. The results also confirm the combined effectiveness of the three enhancement algorithms in reducing the energy consumption. For the detailed reasons, please refer to the previous parts of Section 6.3. The results suggest that 1) grouping tenants with similar deadline strictness can lead to higher resource utilization; 2) when the request arrival rates vary greatly, a distributed load balancing method can offload the excess load from overloaded servers more quickly; and 3) the servers assigned to a data request can be adaptively determined in order to improve the SLA satisfaction performance.

7 RELATED WORK

Recently, several works [10, 12-15] have been proposed on deadline-aware network communications in datacenters. Since bandwidth fair sharing among network flows in the current datacenter environment can degrade application deadline-awareness performance, Wilson et al. [10] proposed D3, an explicit rate control that apportions bandwidth according to flow deadlines instead of fairness. Hong et al. [12] proposed a distributed flow scheduling protocol, in which a flow prioritization method is adopted by all intermediate switches based on a range of scheduling principles, such as EDF (Earliest Deadline First). Earliest Deadline First [13] is one of the earliest packet scheduling algorithms; it assigns a dynamic priority to each packet to achieve high resource utilization and satisfy the deadline. Vamanan et al. [14] proposed a deadline-aware datacenter TCP protocol, which handles bursts of traffic by prioritizing near-deadline flows over far-deadline flows in bandwidth allocation to avoid congestion. In [15], a new cross-layer network stack was proposed to reduce the long tail of flow completion times. Our work shares a similar goal of deadline guarantees with the above works; however, they focus on scheduling work flows for deadline-aware network communications rather than on cloud storage systems.

Spillane et al. [35] used advanced caching algorithms, data structures and Bloom filters to reduce the data read/write latencies in a cloud storage system. However, it cannot quantify the probability of guaranteed latency performance, since it does not consider the request rates of the data partitions stored on a server. To reduce the service latency of tenants, Pisces [33] assigns resources according to tenant loads and allocates the partitions of tenants using a greedy strategy that aims not to exceed the storage capacity and service capacity of servers. In [36], the authors improve the Best-Fit scheduling algorithm to achieve throughput optimality. Wei et al. [37] proposed a cost-effective dynamic replication management scheme to ensure data availability; it jointly considers the average latency and failure rate of each server to decide the optimal replica allocation. Wang et al. [26] proposed a scalable block storage system using pipelined commit and replication techniques to improve data access efficiency and data availability.
In [38-40], data availability is improved by selecting data servers inside a datacenter to allocate replicas in order to reduce data loss due to simultaneous server failures. Ford et al. [41] proposed a replication method over multiple geo-distributed file system instances to improve data availability by avoiding concurrent node failures. However, these methods cannot guarantee the SLAs of tenants, since they do not consider the request rates of the data stored on a server and the server's service rate. There are also related works on datacenters focusing on topology improvement/management to improve the bisection bandwidth usage of the network and increase the throughput, such as FatTrees [24], VL2 [42], BCube [43], and DCell [43], which in turn reduce the average latency. However, none of them can guarantee the deadlines of data requests.

8 CONCLUSION

In this paper, we propose the parallel deadline guaranteed scheme (PDG) for cloud storage systems, which dynamically moves data request load from overloaded servers to underloaded servers to ensure the SLAs of tenants. PDG incorporates different methods to achieve the SLA guarantee together with multiple objectives, including low traffic load, high resource utilization and fast scheme execution. Our mathematical model calculates the extra load that each overloaded server needs to release to meet the SLAs. The load balancer builds a virtual tree structure reflecting the real server topology, which helps schedule load movement between nearby servers in a bottom-up parallel manner, thus reducing traffic load and expediting scheme execution. The scheduling considers data partition size and request rate to resolve the overloaded servers more quickly. A server deactivation method also helps minimize the number of active servers while guaranteeing the SLAs. PDG is further enhanced by the deadline strictness clustered data allocation algorithm to increase resource utilization, and by a prioritized data reallocation algorithm and an adaptive request retransmission algorithm to dynamically strengthen the SLA guarantee under variations of request arrival rates and SLA requirements, respectively. Our trace-driven experiments on both a simulator and Amazon EC2 [16] show that PDG outperforms other methods in guaranteeing the SLAs while achieving the multiple objectives. In our future work, we will implement our scheme in a cloud storage system to examine its real-world performance.

ACKNOWLEDGEMENTS

This research was supported in part by U.S. NSF grants NSF-1404981, IIS-1354123, CNS-1254006, and Microsoft Research Faculty Fellowship. An early version of this work was presented in the Proc. of P2P'15 [44].

REFERENCES
[1] Amazon DynamoDB. [Accessed in Nov. 2015].
[2] Amazon S3. [Accessed in Nov. 2015].
[3] Gigaspaces. [Accessed in Nov. 2015].
[4] H. Stevens and C. Pettey. Gartner Says Cloud Computing Will Be as Influential as E-Business. Gartner Newsroom, Online Ed., 2008.
[5] S. L. Garfinkel. An Evaluation of Amazon's Grid Computing Services: EC2, S3 and SQS. Technical Report TR-08-07, 2007.
[6] N. Yigitbasi, A. Iosup and D. Epema. On the Performance Variability of Production Cloud Services. In Proc. of CCGrid, 2011.
[7] M. Zaharia, A. Konwinski, A. D. Joseph, R. Katz, and I. Stoica. Improving MapReduce Performance in Heterogeneous Environments. In Proc. of OSDI, 2008.
[8] R. Kohavi and R. Longbotham. Online Experiments: Lessons Learned.
[9] B. F. Cooper, R. Ramakrishnan, U. Srivastava, A. Silberstein, P. Bohannon, H.-A. Jacobsen, N. Puz, D. Weaver, and R. Yerneni. PNUTS: Yahoo!'s Hosted Data Serving Platform. In Proc. of VLDB, 2008.
[10] C. Wilson, H. Ballani, T. Karagiannis, and A. Rowstron. Better Never than Late: Meeting Deadlines in Datacenter Networks. In Proc. of SIGCOMM, 2011.
[11] M. Alizadeh, A. Greenberg, D. A. Maltz, J. Padhye, P. Patel, B. Prabhakar, S. Sengupta, and M. Sridharan. Data Center TCP (DCTCP). In Proc. of SIGCOMM, 2010.
[12] C. Hong, M. Caesar, and P. B. Godfrey. Finishing Flows Quickly with Preemptive Scheduling. In Proc. of SIGCOMM, 2012.
[13] C. L. Liu and J. W. Layland. Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment. Journal of the ACM, 1973.
[14] B. Vamanan, J. Hasan, and T. N. Vijaykumar. Deadline-Aware Datacenter TCP (D2TCP). In Proc. of SIGCOMM, 2012.
[15] D. Zats, T. Das, P. Mohan, D. Borthakur, and R. Katz. DeTail: Reducing the Flow Completion Time Tail in Datacenter Networks. In Proc. of SIGCOMM, 2012.
[16] Amazon EC2. [Accessed in Nov. 2015].
[17] D. Wu, Y. Liu, and K. W. Ross. Modeling and Analysis of Multichannel P2P Live Video Systems. TON, 2010.
[18] A. Beloglazov and R. Buyya. Optimal Online Deterministic Algorithms and Adaptive Heuristics for Energy and Performance Efficient Dynamic Consolidation of Virtual Machines in Cloud Data Centers. CCPE, 2012.
[19] C. Peng, M. Kim, Z. Zhang, and H. Lei. VDN: Virtual Machine Image Distribution Network for Cloud Data Centers. In Proc. of INFOCOM, 2012.
[20] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
[21] L. Kleinrock. Queueing Systems. John Wiley & Sons, 1975.
[22] N. Bobroff, A. Kochut, and K. Beaty. Dynamic Placement of Virtual Machines for Managing SLA Violations. In Proc. of IM, 2007.
[23] W. J. Stewart. Probability, Markov Chains, Queues, and Simulation: The Mathematical Basis of Performance Modeling. Princeton University Press, 2009.
[24] M. Al-Fares, A. Loukissas, and A. Vahdat. A Scalable, Commodity Data Center Network Architecture. In Proc. of SIGCOMM, 2008.
[25] J. Dean and L. A. Barroso. The Tail at Scale. Communications of the ACM, 2013.
[26] Y. Wang, M. Kapritsos, Z. Ren, P. Mahajan, J. Kirubanandam, L. Alvisi, and M. Dahlin. Robustness in the Salus Scalable Block Store. In Proc. of NSDI, 2013.
[27] Apache Hadoop FileSystem and its Usage in Facebook.
[28] Palmetto Cluster. [Accessed in Nov. 2015].
[29] N. B. Shah, K. Lee, and K. Ramchandran. The MDS Queue: Analysing Latency Performance of Codes and Redundant Requests. arXiv:1211.5405, 2013.
[30] CTH Trace. IO/SNL Trace Data/, [Accessed in Nov. 2015], 2009.
[31] H. Medhioub, B. Msekni, and D. Zeghlache. OCNI - Open Cloud Networking Interface. In Proc. of ICCCN, 2013.
[32] R. Stanojevic, N. Laoutaris, and P. Rodriguez. On Economic Heavy Hitters: Shapley Value Analysis of 95th-Percentile Pricing. In Proc. of IMC, 2010.
[33] D. Shue and M. J. Freedman. Performance Isolation and Fairness for Multi-Tenant Cloud Storage. In Proc. of OSDI, 2012.
[34] S. Sen, J. R. Lorch, R. Hughes, C. G. J. Suarez, B. Zill, W. Cordeiro, and J. Padhye. Don't Lose Sleep Over Availability: The GreenUp Decentralized Wakeup Service. In Proc. of NSDI, 2012.
[35] R. P. Spillane, P. Shetty, E. Zadok, S. Dixit, and S. Archak. An Efficient Multi-Tier Tablet Server Storage Architecture. In Proc. of SoCC, 2011.
[36] S. T. Maguluri, R. Srikant, and L. Ying. Stochastic Models of Load Balancing and Scheduling in Cloud Computing Clusters. In Proc. of INFOCOM, 2012.
[37] Q. Wei, B. Veeravalli, B. Gong, L. Zeng, and D. Feng. CDRM: A Cost-Effective Dynamic Replication Management Scheme for Cloud Storage Cluster. In Proc. of Cluster, 2010.
[38] A. Cidon, S. Rumble, R. Stutsman, S. Katti, J. Ousterhout, and M. Rosenblum. Copysets: Reducing the Frequency of Data Loss in Cloud Storage. In Proc. of USENIX ATC, 2013.
[39] E. Thereska, A. Donnelly, and D. Narayanan. Sierra: Practical Power-Proportionality for Data Center Storage. In Proc. of EuroSys, 2011.
[40] D. Borthakur, J. Gray, J. S. Sarma, K. Muthukkaruppan, N. Spiegelberg, H. Kuang, K. Ranganathan, D. Molkov, A. Menon, S. Rash, R. Schmidt, and A. Aiyer. Apache Hadoop Goes Realtime at Facebook. In Proc. of SIGMOD, 2011.
[41] D. Ford, F. Labelle, F. I. Popovici, M. Stokely, V.-A. Truong, L. Barroso, C. Grimes, and S. Quinlan. Availability in Globally Distributed Storage Systems. In Proc. of OSDI, 2010.
[42] A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A. Maltz, P. Patel, and S. Sengupta. VL2: A Scalable and Flexible Data Center Network. In Proc. of SIGCOMM, 2009.
[43] C. Guo, G. Lu, D. Li, H. Wu, X. Zhang, Y. Shi, C. Tian, Y. Zhang, and S. Lu. BCube: A High Performance, Server-Centric Network Architecture for Modular Data Centers. In Proc. of SIGCOMM, 2009.
[44] G. Liu and H. Shen. Deadline Guaranteed Service for Multi-Tenant Cloud Storage. In Proc. of P2P, 2015.

Guoxin Liu received the BS degree from BeiHang University in 2006, and the MS degree from the Institute of Software, Chinese Academy of Sciences in 2009. He is currently a Ph.D. student in the Department of Electrical and Computer Engineering at Clemson University. His research interests include distributed networks, with an emphasis on peer-to-peer, datacenter and online social networks.

Haiying Shen received the BS degree in Computer Science and Engineering from Tongji University, China in 2000, and the MS and Ph.D. degrees in Computer Engineering from Wayne State University in 2004 and 2006, respectively. She is currently an Associate Professor in the ECE Department at Clemson University. Her research interests include distributed computer systems and computer networks, with an emphasis on P2P and content delivery networks, mobile computing, wireless sensor networks, and grid and cloud computing. She was the Program Co-Chair for a number of international conferences and a member of the Program Committees of many leading conferences. She is a senior member of the IEEE and a member of the ACM.

Haoyu Wang received the BS degree from the University of Science & Technology of China, and the MS degree from Columbia University in the City of New York. He is currently a Ph.D. student in the Department of Electrical and Computer Engineering at Clemson University. His research interests include datacenters, cloud computing and distributed networks.
