Cloud Service Reliability Enhancement via Virtual Machine Placement Optimization

Ao Zhou, Shangguang Wang, Member, IEEE, Bo Cheng, Member, IEEE, Zibin Zheng, Member, IEEE, Fangchun Yang, Senior Member, IEEE, Rong N. Chang, Senior Member, IEEE, Michael R. Lyu, Fellow, IEEE, and Rajkumar Buyya, Fellow, IEEE

Abstract: With rapid adoption of the cloud computing model, many enterprises have begun deploying cloud-based services. Failures of virtual machines (VMs) in clouds have caused serious quality assurance issues for those services. VM replication is a commonly used technique for enhancing the reliability of cloud services. However, when determining the VM redundancy strategy for a specific service, many state-of-the-art methods ignore the huge network resource consumption issue that could be experienced when the service is in failure recovery mode. This paper proposes a redundant VM placement optimization approach to enhancing the reliability of cloud services. The approach employs three algorithms. The first algorithm selects an appropriate set of VM-hosting servers from a potentially large set of candidate host servers based upon the network topology. The second algorithm determines an optimal strategy to place the primary and backup VMs on the selected host servers with k-fault-tolerance assurance. Lastly, a heuristic is used to address the task-to-VM reassignment optimization problem, which is formulated as finding a maximum weight matching in bipartite graphs. The evaluation results show that the proposed approach outperforms four other representative methods in network resource consumption in the service recovery stage.

Index Terms: Cloud computing, cloud service, reliability, fault-tolerance, datacenter, network resource.

1 INTRODUCTION

CLOUD computing has evolved as an important and popular computing model [1] [2]. Similar to public utility services, computing resources in a cloud computing environment can be provisioned in an on-demand manner [3] [4], and can be purchased via a pay-as-you-go model [5] [6]. This obviates the need for costly over-provisioning of on-premise computing resources to accommodate peak demand [7] [8]. Thus, deploying services into the cloud has become a growing trend [9].

Reliability is an important aspect of Quality of Service (QoS) [10]. With many virtual machines (VMs) running in a cloud datacenter, it is difficult to ensure that all the VMs always perform satisfactorily [11]. In reality, many cloud services have failed to fulfill their reliability assurance commitments due to VM failures [12]. It is imperative to enhance the reliability of VM-based services in a cloud computing environment [7] [13].

Many solutions have been proposed to address service reliability issues. Fault removal, fault prevention, fault forecasting, and fault tolerance are four basic reliability enhancement techniques [14].

A. Zhou, S. Wang, B. Cheng, and F. Yang are with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications. E-mail: hellozhouao@gmail.com; sgwang@bupt.edu.cn; chengbo@bupt.edu.cn; fcyang@bupt.edu.cn
Z. Zheng is with the School of Data and Computer Science, Sun Yat-sen University. E-mail: zhzibin@mail.sysu.edu.cn.
M. R. Lyu is with the Department of Computer Science & Engineering, The Chinese University of Hong Kong. E-mail: lyu@cse.cuhk.edu.hk.
R. N. Chang is with IBM Research. E-mail: rong@us.ibm.com.
R. Buyya is with the Cloud Computing and Distributed Systems (CLOUDS) Lab, Department of Computing and Information Systems, The University of Melbourne, Australia. E-mail: rbuyya@unimelb.edu.au.
Manuscript received ; revised.
The first three of them attempt to identify and remove faults that occur in the system, with the goal of preventing impact-making faults. This goal is unrealistic for a complex computing system like a production cloud computing environment, in which VM failure is inevitable [15]. Fault tolerance techniques, which try to ensure service continuity when failure occurs, complement those three techniques with a fundamentally different service reliability enhancement approach and with a more practical reliability management goal for cloud services [16] [17].

Many fault tolerance mechanisms have been proposed [18]. Checkpointing [19] is a common fault tolerance mechanism for cloud services. The checkpointing mechanism periodically saves the execution state of a running task (e.g., as a VM image file [20]), and enables the task to be resumed from the latest saved state after a failure occurs. However, taking checkpoints periodically and resuming a failed service via checkpoint image(s) are time-consuming. This mechanism may incur too much performance overhead when it is deployed for some small-scale tasks or divisible tasks (e.g., a data analytic task that can be divided into a set of small tasks). Replication [21], e.g., one-to-one and one-to-many standby, is another common fault tolerance mechanism, which exploits redundant deployment of computing resources, e.g., VMs. When the fault tolerance capability of a specific service is provisioned via VM replication, the redundant VMs are classified into two categories: primary VMs and backup VMs. Notable approaches [22] [23] [24] were developed to reduce the implementation cost by exploiting the degree of redundancy.

k-fault tolerance [25] [26] is a specific type of replication-based fault tolerance mechanism and supports a configuration-based fault-tolerance measurement of a server-based service. A k-fault-tolerant service must be configured with k additional servers such that the minimum server configuration for the service can still be satisfied when k hosting servers fail simultaneously. In a VM-based cloud environment, for example, deploying a specific service on only one server makes the service 0-fault-tolerant (because the service becomes unavailable when the only hosting server fails), regardless of the number of redundant VMs that may have been deployed on the same server and the feasibility of restoring the affected service via server reboot or replacement.

When deploying a replication-based fault tolerance mechanism for cloud services, we note that re-assigning an incomplete task from a failed primary VM to a backup VM often requires a huge amount of data to be retrieved and processed once more from the central storage servers. This is a time-consuming and network-resource-consuming process. Moreover, the host server on which a specific failed VM resides may still be running and be able to make the data used by the failed VM accessible from within the VMs running on other servers, e.g., the backup VMs that are in proximity to the failed primary VM. Thus, appropriate VM placement could save a considerable amount of time and network resources in failure recovery mode.

Aiming at reducing the lost time and the network resource consumption when the k-fault-tolerance requirement must be satisfied, this paper proposes a novel redundant VM placement approach to enhancing the reliability of cloud services, named OPVMP (optimal redundant virtual machine placement). The commercialized nature of network resources in cloud computing prompted us to make OPVMP reduce network resource consumption in addition to enhancing cloud service reliability. The proposed approach is a three-step process with one algorithm for each of the steps, namely (1) host server selection, (2) optimal VM placement, and (3) recovery strategy decision. The first algorithm selects an appropriate set of VM-hosting servers from a potentially large set of candidate host servers based upon the network topology. The second algorithm determines an optimal strategy to place the primary and backup VMs on the selected host servers. Lastly, a heuristic is used to address the task-to-VM reassignment optimization problem, which is formulated as finding a maximum weight matching in bipartite graphs.

We constructed an experimental platform based on our previous research results [27] [28]. The effectiveness of the proposed approach has been evaluated via this platform. The evaluation was done by comparing OPVMP against four other representative redundant VM placement algorithms in terms of four network resource consumption related performance metrics. All five approaches were implemented in FTCloudSim. The evaluation results show that OPVMP outperforms the other four methods in network resource consumption in the service recovery stage.

The remainder of this paper is organized as follows. Section 2 presents a review of related work. In Section 3, the background of the proposed approach is presented. Technical details of the proposed approach are illustrated in Section 4. Experimental evaluation results are reported in Section 5, and the conclusion is presented in Section 6.

2 RELATED WORK

Enhancing the reliability of cloud services is an important aspect of cloud computing and has received considerable attention from the research community.
The complex cloud computing environment poses particular challenges to researchers, and a variety of service reliability enhancement approaches have been proposed to address related issues.

Checkpointing is a widely used basic fault tolerance mechanism that functions by periodically saving the execution state of a VM as an image file. However, datacenters have limited network resources and may readily become congested when a huge number of checkpoint image files are transferred. Attempting to avoid this problem, Zhang et al. [29] presented a delta-checkpoint approach in which the base system only needs to be saved once the first checkpoint completes, and subsequent checkpoint images only contain the incrementally modified pages. A delta-checkpoint approach was implemented by Goiri et al. in [19]. A further reduction in network resource consumption was developed by Limrungsi et al. [20], who proposed a peer-to-peer checkpoint approach in which the checkpoint images are stored on neighboring host servers. If the storage server is located in the same pod as the service-providing server, it is unnecessary to transfer the checkpoint images via core switches. For some small-scale tasks or divisible tasks, for example in scientific computing, one huge dataset can be divided into smaller data blocks, and processing each data block consumes much less time than processing the huge source dataset. In this case, the cloud supplier may choose other mechanisms to reduce the checkpointing execution and service resumption overhead.

Replication is another type of reliability assurance mechanism, based on the exploitation of redundancy. One-to-one and one-to-many standbys are two well-known mechanisms. Xu et al. [21] tried to map each primary VM to a backup VM. A primary VM and its mapped backup node form a survivable group; a task can be completed in time if at least one VM in the survivable group works well. The work takes bandwidth reservation into consideration when solving the mapping problem, and an optimal algorithm that maps a survivable group to the physical data center was proposed [21]. How to reduce the cost of replication is a problem that has been addressed by several proposals [22] [23] [30], all of which aim to reduce the degree of redundancy. Another effort [30] was based on the fact that different modules have different redundancy requirements and that modules with a higher invocation frequency are more significant than other modules; the work therefore attempted to rank all the modules of a system based on their significance value. The same problem was approached differently [24] by adjusting the redundancy of the same module under different execution conditions. k-fault tolerance [25] [26] is another approach aiming at reducing the cost of implementing redundancy.

k-fault tolerance ensures that the simultaneous failure of any k computing nodes will not make the service unavailable. A service typically faces a large number of tasks; to complete all tasks in time, the service is deployed on several VMs, and the tasks are scheduled to the VMs according to an appropriate scheduling strategy. In a cloud computing environment, the computing resources of each server are virtualized into several VMs. Both hardware and software problems can result in VM failures. When a host server crashes, all the VMs it is hosting will no longer operate. The more VMs providing the same service are placed on the same server, the more serious the damage is when that host server fails. Taking this into consideration, Machida et al. [25] proposed a redundant VM placement approach to ensuring k-fault tolerance. However, when restarting a task from a backup VM, the data to be processed must be re-fetched from the central database, which is time-consuming and network-resource-consuming for cloud services. We note that the host server on which the failed VM runs may store a copy of the data for the task. If the backup VM can fetch the data it needs directly from that host server despite the unexpected primary VM failure, a lot of data processing time and network resources can be saved, provided the primary VMs and the backup VMs are properly allocated. One major difference between our proposed cloud service reliability enhancement approach and related work is that we use a redundant VM placement method and optimize network resource consumption through appropriate placement of the required VMs.

TABLE 1
Notations

Symbol       Meaning
PM_i         The physical machines (host servers) in the data center, i = 1, 2, ...
VM_j         The virtual machines in the data center, j = 1, 2, ...
pod_x        The pods in the data center, x = 1, 2, ...
T_y          The tasks submitted by users, y = 1, 2, ...
subnet_l     The subnets in the data center, l = 1, 2, ...
max_subnet   The number of subnets that contain available host servers
S            A service
VM_P(S)      Returns the primary VMs of S
VM_B(S)      Returns the backup VMs of S
VM_F(S)      Returns the currently failed VMs of S
PM(S)        Returns all the servers on which service-providing VMs of S are located
Succ         Returns the next element after the current element
size         Returns the number of elements in a list
DSize        Returns the size of the data stored on a server
a_i          The number of available PMs in a subnet
m            The number of primary VMs of S
k            The number of backup VMs of S
min          Returns the minimum value of its inputs
length       The linkage (path) length
copy         Copies all data from one vector to another vector

3 PRELIMINARIES

This section provides the background and motivation for our work, and explains how we formulate the redundant VM placement problem as an optimization problem. The notations listed in Table 1 will be used throughout the rest of the paper.

3.1 Background

In a cloud computing environment, a statistically rare failure event may be a common occurrence due to scale [31]. Cloud service reliability enhancement is therefore becoming an important research challenge. Fig. 1 shows the task processing model we use. The cloud service is deployed on several VMs because, considering the huge number of service requests, the computing power of a single VM would be insufficient. Upon receiving the service requests, the divider partitions the large-scale task into smaller sub-tasks, so that processing each sub-task does not consume too much time. Based on the scheduling algorithm, each task is assigned to one of the service-providing VMs. Each VM has a task waiting queue.

Fig. 1. Task processing model
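To make the task processing model concrete, the following minimal Python sketch (our own illustration, not code from the paper or FTCloudSim; all names are assumed, and the round-robin policy is one possible choice since the paper leaves the scheduling algorithm open) shows a divider splitting a request into sub-tasks and a scheduler feeding per-VM waiting queues:

```python
from collections import deque

class VM:
    """A service-providing VM with a task waiting queue (cf. Fig. 1)."""
    def __init__(self, vm_id):
        self.vm_id = vm_id
        self.queue = deque()  # task waiting queue

def divide(task_size_mb, block_mb):
    """The divider partitions a large-scale task into smaller sub-tasks."""
    blocks = [block_mb] * (task_size_mb // block_mb)
    if task_size_mb % block_mb:
        blocks.append(task_size_mb % block_mb)
    return blocks

def schedule(sub_tasks, vms):
    """Assign each sub-task to a service-providing VM (round-robin assumed)."""
    for i, st in enumerate(sub_tasks):
        vms[i % len(vms)].queue.append(st)

vms = [VM(j) for j in range(4)]          # m = 4 primary VMs
schedule(divide(task_size_mb=1200, block_mb=300), vms)
print([len(vm.queue) for vm in vms])     # -> [1, 1, 1, 1]
```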
Since a VM may fail due to a software or hardware fault, an assigned task may not be completed as scheduled, and may in turn delay the entire service request operation. k-fault tolerance [25] [26] can be chosen by the cloud service provider to reduce lost time and to ensure service reliability. Besides the m primary VMs, there are k backup VMs for each service. k-fault tolerance serves to ensure that the task processing service will not go down in the event of simultaneous failures of any k computing VMs or servers. All the primary and backup VMs are placed on different host servers; otherwise, when a host server crashes, all the VMs it is hosting will no longer operate. When one of the primary VMs fails, it is mapped to a backup VM, and the tasks in its waiting queue are reassigned to that backup VM.

3.2 Motivation

Suppose there are m virtual machines that provide service S. All tasks of S are assigned to the m virtual machines based on an appropriate strategy. When k-fault-tolerance replication is adopted to ensure service reliability, there are k backup virtual machines for S. To limit the impact of the hardware and software problems that lead to VM failures, the primary and backup virtual machines are distributed across different servers. When a primary VM fails, it is mapped to a backup VM, and the tasks in its waiting queue are re-assigned to the backup VM. The backup VM needs to re-fetch the data from the database. This scheme has two shortcomings.

First, the storage servers are continually required to process a huge number of data read and write requests, which is sometimes time-consuming [20]. Second, transferring large amounts of data consumes considerable network resources. When a VM failure event is caused by a hardware problem, the mapped backup virtual machine re-fetches the needed data from the database. When a VM failure event is caused by a software problem and the server which hosts the failed VM has a copy of the data, the backup VM can fetch the data from that server. More network resources can be saved if the failed primary VM and the backup VM are in proximity to each other. Therefore, appropriate VM placement could save a considerable amount of time and network resources in failure recovery mode.

To save more network resources, we place the primary and backup virtual machines by considering the network topology of the datacenter. Datacenter networks usually adopt a tree-like structure [32], of which the fat-tree network structure is a commonly used datacenter network architecture [33]. A fat-tree network typically consists of trees with three levels of switches [34], as illustrated in Fig. 2, which depicts a fat-tree datacenter network with four ports. The switches in the top, middle, and bottom layers are referred to as root, aggregation, and edge switches, respectively. The host servers physically connect to the network via the edge switches. All the host servers sharing the same edge switch are addressed in the same subnet; all host servers sharing the same aggregation switches are addressed in the same pod. Upper-layer switches transfer data from more host servers, and are therefore more likely to become congested than the lower-layer switches [35]. Therefore, reducing the network resource consumption of the upper-layer links is an important problem that has to be solved [20].

Fig. 2. Fat-tree data center network with four ports

The host servers must retrieve data from the storage servers, all of which are connected by the storage area network (SAN); each of these storage resources is in turn segmented into a number of virtual disks [36]. In a centralized storage scheme where the SAN switches connect to the root-layer switches [20], re-fetching the data from the storage server may consume too many upper-layer resources. As described before, the server hosting the failed VM may store a copy of the data for the affected tasks. If the VM failure was caused by a software problem, the data may be retrieved from the host server on which the failed VM is placed. If the primary and the backup VMs are in the same subnet, the transfer only consumes edge-level network resources. However, if the primary and the backup VMs are in the same pod but different subnets, the transfer consumes both edge-level and aggregation-level network resources. Thus, appropriate VM placement saves time and network resources.

As indicated above, the best solution would be to place all the primary and backup VMs on host servers in the same subnet. However, this may not be possible if some of the host servers in the datacenter have already been allocated to other tasks and have insufficient free computing resources. Alternatively, a subnet may not even contain a sufficient number of available host servers. Therefore, it may be necessary to place the (m+k) VMs in different subnets. The problem now becomes complex, because different VM placement strategies result in different network resource consumption.
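The cost model implied here is easy to state in code. The following helper is our own illustration (the (pod, subnet, server) coordinates are assumed, not from the paper); it returns the switch layers a recovery transfer traverses, and can be used to check the example that follows:

```python
def layers_used(src, dst):
    """Switch layers traversed by a transfer in a fat-tree.

    src and dst are (pod, subnet, server) triples. Same subnet uses only
    an edge switch; same pod adds the aggregation layer; crossing pods
    adds the root layer, whose links are the most congestion-prone.
    """
    if src == dst:
        return []                           # same server: no network hop
    if src[:2] == dst[:2]:
        return ["edge"]                     # same subnet
    if src[0] == dst[0]:
        return ["edge", "aggregation"]      # same pod, different subnets
    return ["edge", "aggregation", "root"]  # different pods

print(layers_used((0, 0, 1), (0, 0, 2)))  # ['edge']
print(layers_used((0, 0, 1), (0, 1, 0)))  # ['edge', 'aggregation']
print(layers_used((0, 0, 1), (1, 0, 0)))  # ['edge', 'aggregation', 'root']
```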
Suppose a service needs (2+2) VMs for two-fault tolerance and there are two available subnets in the same pod, each with two available host servers. There are two placement strategies: (1) the two primary VMs are placed on host servers in subnet 1, whereas the backup VMs are placed on host servers in subnet 2; (2) one primary and one backup VM are placed on host servers in subnet 1, whereas the remaining two VMs are placed on host servers in subnet 2. Under strategy 2, the data transfer only consumes edge-level network resources in the recovery stage; therefore, strategy 1 consumes more network resources than strategy 2. When k is equal to m, the problem can be solved by minimizing the difference between the numbers of primary and backup VMs in each subnet. However, the problem becomes more complex when k is smaller than m. To address this problem, our approach aims to determine an optimal placement strategy that enhances the service reliability and minimizes the network resource consumption.

3.3 Problem Definition

The redundant VM placement problem can be formulated as the following optimization problem:

    \min \; UP(S) \text{ and } UD(S)    (1)

subject to:

    UP(S) = \sum_i \sum_j DSize(pkt_i) \cdot w_{ij}    (2)
    UD(S) = \sum_i \sum_j delay_{ij} \cdot w_{ij}    (3)
    w_{ij} \in \{0, 1\}    (4)
    \sum_i x_i = m    (5)
    x_i \in \{0, 1\}    (6)
    \sum_i y_i = k    (7)
    y_i \in \{0, 1\}    (8)
    \sum_i z_i = m + k    (9)
    z_i \in \{0, 1\}    (10)

where UP(S) denotes the total network resource consumption and UD(S) denotes the total data transfer delay. The value of w_{ij} is 1 if packet pkt_i is transferred through link j, and delay_{ij} denotes the transfer delay of pkt_i through link j. When the replication strategy is adopted, the downtime is mainly affected by the data transfer delay. The constraints in (5) and (6) ensure that there are m primary VMs for the service, and the constraints in (7) and (8) ensure that there are k backup VMs. The constraints in (9) and (10) ensure that the (m+k) VMs are all placed on different host servers; therefore, the placement strategy ensures k-fault tolerance irrespective of whether a VM failure is caused by a software fault or a host server fault.

Suppose the number of subnets which contain available host servers is max_subnet; then the number of available host servers in each subnet is stored using the following vector:

    A = [a_1, a_2, ..., a_{max_subnet}]    (11)

The solution to the problem can be defined by the following two vectors:

    M = [m_1, m_2, ..., m_{max_subnet}]    (12)
    K = [k_1, k_2, ..., k_{max_subnet}]    (13)

where (12) denotes the number of primary VMs in each subnet, and (13) denotes the number of backup VMs in each subnet. Therefore, it is necessary to obtain an optimal solution by considering all the solutions of the following indeterminate equations:

    m_1 + m_2 + ... + m_{max_subnet} = m
    k_1 + k_2 + ... + k_{max_subnet} = k
    m_i + k_i \le a_i,  i = 1, 2, 3, ..., max_subnet    (14)
    m_i \ge 0,  i = 1, 2, 3, ..., max_subnet
    k_i \ge 0,  i = 1, 2, 3, ..., max_subnet

There are a huge number of pods, subnets, and host servers in a cloud datacenter, so iterating over all the placement strategies would be intractable; the problem is therefore solved by adopting a heuristic optimal algorithm, as discussed in the next section.
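For intuition on the size of this search space, a brute-force enumerator over (14) can be written directly. The sketch below is our own (the function name and the toy instance are assumptions, not from the paper); even a three-subnet instance already yields 25 feasible (M, K) pairs, and the count grows combinatorially with max_subnet:

```python
from itertools import product

def feasible_placements(a, m, k):
    """Yield all (M, K) vectors satisfying (14):
    sum(M) = m, sum(K) = k, M[i] + K[i] <= a[i], all entries >= 0."""
    ranges_m = [range(min(ai, m) + 1) for ai in a]
    for M in product(*ranges_m):
        if sum(M) != m:
            continue
        ranges_k = [range(min(ai - Mi, k) + 1) for ai, Mi in zip(a, M)]
        for K in product(*ranges_k):
            if sum(K) == k:
                yield M, K

# Toy instance: 3 subnets with [3, 2, 2] free servers, m = 3, k = 2.
solutions = list(feasible_placements([3, 2, 2], 3, 2))
print(len(solutions))  # -> 25, already non-trivial at this tiny scale
```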

4 PROPOSED APPROACH

The formulated problem essentially involves finding (k+m) host servers and then placing the (k+m) VMs on those host servers. Since there are a huge number of host servers in a cloud datacenter, the number of possible solutions is exponentially large. It is consequently necessary to identify a subset of good host servers from which to obtain the best solutions. The procedure used to select (m+k) good host servers is provided in Section 4.1, and the algorithm used for placing the (m+k) VMs on those host servers is presented in Section 4.2. Given the information about the failed and the backup VMs, a recovery strategy decision algorithm calculates the optimal matching strategy; the proposed recovery strategy decision algorithm is discussed in Section 4.3.

4.1 Phase 1: Host Server Selection

As explained in Section 3.2, the further apart two host servers are located, the greater the delay becomes; in addition, the data transfer traffic consumes more network resources. To avoid this situation, it is desirable to place all VMs in a subnet that contains (m+k) available host servers. Suppose there are two subnets, one of which has (m+k+20) available host servers while the other has (m+k+1). In such cases, our approach chooses the second option and leaves the first option to another service that may require more host servers. In other words, we follow the "just enough" rule: we select the pod or subnet with just enough resources, and leave the residual capacities for future use. If none of the subnets has a sufficient number of available host servers, the VMs must be distributed across several subnets or even several pods. In this case, the pods with a greater number of available host servers are considered first, to avoid traffic between pods in the recovery stage. We select subnets based on the same rule.
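The just enough rule itself fits in a few lines. This hypothetical snippet (field names are ours) picks, among the candidates that fit, the one with the fewest free servers, mirroring the (m+k+20) versus (m+k+1) example above:

```python
def just_enough(candidates, need):
    """Pick the candidate with the fewest free servers that still fits."""
    fitting = [c for c in candidates if c["free"] >= need]
    return min(fitting, key=lambda c: c["free"]) if fitting else None

subnets = [{"name": "subnet A", "free": 25}, {"name": "subnet B", "free": 6}]
print(just_enough(subnets, need=5))  # subnet B: just enough; A is spared
```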
The host server selection procedure is shown in Algorithm 1 and explained in the following steps:

Step 1: Sort all subnets by the number of available host servers. A host server is available when it has sufficient computing resources to host the virtual machine. Search for a subnet subnet_i that satisfies the just enough rule. Select (m+k) servers from subnet_i, assign them to servers, and return. (Lines 1 to 4 and Lines 49 to 53)

Step 2: Add all pods in the datacenter to a list, and sort the list according to the number of available host servers. Assign the head of the list to variable HPod. If the number of available host servers of HPod is larger than (m+k), go to Step 6; else, go to Step 3. (Lines 5 to 7)

Step 3: Add all host servers in the pod to servers. Iterate over the pod list and collect host servers until the sum of size(servers) and the available host servers in the current pod is larger than (m+k). If size(servers) is equal to (m+k), return. (Lines 15 to 26)

Step 4: Sort all subnets in the current pod according to the number of available host servers. Assign the head of the list to variable HSubnet. If the number of available host servers of HSubnet is larger than (m+k), go to Step 5; else, go to Step 8. (Lines 27 to 33)

Step 5: Iterate over the subnet list. Search for a subnet HSubnet that satisfies the just enough rule. Select (m+k) host servers from that subnet and return. (Lines 49 to 52)

Step 6: Continue to iterate over the pod list and search for a pod pod_i that satisfies the just enough rule. If the number of available host servers is equal to (m+k-size(servers)), add all host servers in the pod to servers, and return. (Lines 8 to 12)

Step 7: Add all subnets in the pod to a list, and sort the list according to the number of available host servers. Assign the head subnet of the list to variable HSubnet. If its number of available servers is larger than (m+k-size(servers)), go to Step 9; else, go to Step 8. (Lines 28 to 33)

Step 8: Add all host servers in HSubnet to servers. Iterate over the subnet list and collect host servers until the sum of size(servers) and the available host servers in the current subnet is larger than (m+k). If size(servers) is equal to (m+k), return. (Lines 35 to 44)

Step 9: Continue to iterate over the subnet list. Search for a subnet subnet_i that satisfies the just enough rule. Select (m+k)-size(servers) servers from subnet_i, assign them to servers, and return. (Lines 49 to 52)

These iterations enable all the good host servers to be obtained.

Section 4.2 describes the procedure used to place the (m+k) VMs on the (m+k) host servers. The time complexity of Algorithm 1 is O(n log n), where n is the number of subnets in the datacenter.

4.2 Phase 2: Virtual Machine Placement

Placing the (m+k) VMs on the (m+k) host servers requires the number of backup and primary VMs in each subnet to be determined. A heuristic algorithm is used to solve this problem. Two heuristic conditions are adopted to narrow the search space. First, if there is an even number of available servers in a subnet, the number of backup VMs in the subnet should be less than or equal to the number of primary VMs in the subnet; if there is an odd number of available servers in a subnet, the number of backup VMs in the subnet should be less than or equal to one plus the number of primary VMs in the subnet. To see why, suppose there is a subnet that contains fewer primary VMs than backup VMs. As the total number of primary VMs is larger than or equal to the number of backup VMs, there is at least one other subnet in which the number of backup VMs is smaller than the number of primary VMs. Now let a backup VM in the first subnet and a primary VM in the second subnet exchange positions. Compared with the original strategy, one more failed VM in the second subnet no longer needs to be mapped to a backup VM in a different subnet when k VMs fail at the same time, so the new placement strategy consumes less aggregation-layer network resource. The second heuristic condition is that a subnet containing more available host servers should be allocated more backup VMs: if a subnet contains more available host servers, the difference between its numbers of primary VMs and backup VMs should be smaller. When that difference is larger, there is a larger chance that the data transfer consumes more network resources. The proof of the first heuristic condition can be found in Appendix A, and the proof of the second in Appendix B.

Our algorithm is shown in Algorithms 2 and 3, which are recursive in nature. In each recursion, the algorithm determines the number of backup VMs placed in the current subnet, and the remaining backup VMs are placed in the following subnets. When the number of remaining backup VMs equals 0 or the last subnet has been reached, the resource consumption of the current placement strategy is computed and compared with that of the current optimum; if it is smaller, the current strategy is considered optimal. When the current placement cannot satisfy the two heuristic conditions, the algorithm terminates the current recursion and backtracks. When a backup VM fails, a new backup VM is sought near the old one.

4.3 Phase 3: Recovery Strategy Decision

When one or more VMs fail, a recovery strategy has to be decided upon, and each failed VM has to be mapped to a backup VM. All tasks in the waiting queue of the failed VM are rescheduled to its mapped backup VM, and the data to be processed have to be retrieved again by the backup VM.
Algorithm 1: Host Server Selection
Input: the number of needed primary VMs m, the number of needed backup VMs k
Output: list of interesting host servers servers
1  sort all subnets by the number of available host servers and subnet = subnets->head;
2  if (subnet->freeServerSize) >= (m+k) then
3    goto final2;
4  end
5  sort pods by the number of available host servers;
6  pod = pods.head;
7  if (pod.freeServerSize) >= (m+k) then
8    next = Succ(pod);
9    while (next.freeServerSize) >= (m+k) do
10     pod = next and next = Succ(pod);
11   end
12   add all subnets in pod to subnets and goto final1;
13 end
14 else
15   add all available host servers in pod to servers;
16   while size(servers) < (m+k) do
17     pod = Succ(pod);
18     if size(pod.freeServerSize) + size(servers) > (m+k) then
19       add all subnets in pod to subnets;
20       goto final1;
21     end
22     else
23       add all available host servers in pod to servers;
24     end
25   end
26 end
27 final1:
28 sort subnets by the number of available host servers;
29 subnet = subnets.head;
30 final2:
31 if (subnet.freeServerSize) + size(servers) >= (m+k) then
32   goto final3;
33 end
34 else
35   add all available host servers in subnet to servers;
36   while size(servers) < (m+k) do
37     subnet = Succ(subnet);
38     if (subnet.freeServerSize) + size(servers) >= (m+k) then
39       goto final3;
40     end
41     else
42       add all available servers in subnet to servers;
43       return servers;
44     end
45   end
46 end
47 final3:
48 next = Succ(subnet);
49 while (next.freeServerSize) + size(servers) >= (m+k) do
50   subnet = next and next = Succ(subnet);
51 end
52 select (m+k) - size(servers) available servers from subnet, and assign them to servers;
53 return servers;
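For readers who prefer an executable form, here is a condensed Python rendering of the selection cascade (our own sketch, not the authors' implementation; the nested-list data layout is assumed, and the Succ-based iteration details and per-subnet ordering inside a chosen pod are elided):

```python
def select_hosts(pods, m, k):
    """Condensed rendering of Algorithm 1.

    pods: list of pods; each pod is a list of subnets; each subnet is a
    list of free host servers. Returns (m+k) servers, preferring a single
    subnet, then a single pod, then spanning the largest pods.
    """
    need = m + k

    def just_enough(groups):
        # Smallest group that still fits -- the "just enough" rule.
        fitting = [g for g in groups if len(g) >= need]
        return min(fitting, key=len) if fitting else None

    subnet = just_enough([s for pod in pods for s in pod])
    if subnet is not None:                  # Steps 1/5/9: one subnet fits
        return subnet[:need]
    flat_pods = [[srv for s in pod for srv in s] for pod in pods]
    pod = just_enough(flat_pods)
    if pod is not None:                     # Steps 2/6: one pod fits
        return pod[:need]
    servers = []                            # Steps 3/8: span several pods,
    for p in sorted(flat_pods, key=len, reverse=True):  # largest first
        servers.extend(p[:need - len(servers)])
        if len(servers) == need:
            return servers
    return None                             # insufficient overall capacity

# Toy datacenter: 2 pods, each holding 2 subnets of named servers.
pods = [[["s1", "s2"], ["s3"]], [["s4", "s5", "s6"], ["s7"]]]
print(select_hosts(pods, m=2, k=1))         # -> ['s4', 's5', 's6']
```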

Algorithm 2: Virtual Machine Placement on Specific Servers
Input: interesting host servers servers, the number of backup servers k
Output: int strategy[]
1 Obtain all the good subnets that have at least one good host server;
2 Store the good host server number of each interesting subnet in vector subnets[];
3 Sort subnets[] by the number of good host servers in descending order;
4 int strategy[size(subnets)];
5 int optimal[size(subnets)];
6 int mincost = infinity;
7 optimal_placement_strategy_searching(subnets, strategy, optimal, k, mincost, 1);
8 return strategy;

Algorithm 3: Optimal Placement Strategy Searching
Input: subnets, strategy, optimal, the number of un-placed backup VMs rest, mincost, nested level i
Output: The placement strategy
1  if rest == 0 then
2    strategy[i] = rest;
3    Compute cost of current strategy;
4    if currentcost <= mincost then
5      mincost = currentcost;
6      copy(strategy[], optimal[]);
7    end
8    return;
9  end
10 if i == size(subnets) then
11   if rest > min(strategy[i-1], ceil(subnets[i]/2)) then
12     return;
13   end
14   strategy[i] = rest;
15   Compute cost of current strategy;
16   if currentcost <= mincost then
17     mincost = currentcost;
18     copy(strategy[], optimal[]);
19   end
20   return;
21 end
22 n = min(strategy[i-1], ceil(subnets[i]/2), rest);
23 while n >= 0 do
24   strategy[i] = n;
25   optimal_placement_strategy_searching(subnets, strategy, optimal, rest-n, mincost, i+1);
26   n--;
27 end
28 return;
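To illustrate Algorithms 2 and 3, here is a runnable miniature (our own sketch, with assumed names). The cost function is a deliberate stand-in: it uses the per-subnet primary/backup imbalance motivated in Section 3.2, whereas the paper's real cost is the network resource consumption of Section 3.3. The pruning mirrors the two heuristic conditions:

```python
import math

def search(subnets, k, i=0, prev=math.inf, strategy=None, best=None):
    """Recursively choose the number of backup VMs per subnet.

    subnets: per-subnet free-server counts, sorted in descending order
    (the primaries implicitly fill the remaining slots). Pruning mirrors
    the two heuristic conditions: at most ceil(a_i/2) backups per subnet,
    and no more backups than the previous (larger) subnet received.
    """
    if strategy is None:
        strategy, best = [], {"cost": math.inf, "plan": None}
    if k == 0 or i == len(subnets):
        if k == 0:
            plan = strategy + [0] * (len(subnets) - len(strategy))
            cost = sum(abs((a - b) - b)  # |primaries - backups| per subnet
                       for a, b in zip(subnets, plan))  # stand-in cost
            if cost < best["cost"]:
                best.update(cost=cost, plan=list(plan))
        return best
    cap = min(prev, math.ceil(subnets[i] / 2), k)
    for n in range(cap, -1, -1):            # try n backups in subnet i
        search(subnets, k - n, i + 1, n, strategy + [n], best)
    return best

# The (2+2) example of Section 3.2: two subnets, two free servers each.
print(search([2, 2], k=2)["plan"])          # -> [1, 1]: split the backups
```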
If a VM fails because of a software fault, the particular data block may be obtained from the host server on which the failed VM resides. Given the information on the VM failures caused by software faults and on the backup VMs, the recovery strategy decision algorithm matches the failed VMs and the backup VMs; each failed VM has to be mapped to a backup VM. The recovery strategy should minimize the total network resource consumption. In this case it is possible to formulate the recovery strategy decision problem as a minimum weight matching in bipartite graphs [37] [38]. Consider a complete bipartite graph G=(V,E) with bipartition (V_F, V_B), where V is the set of all failed VMs and backup VMs, V_F is the set of all failed VMs, V_B is the set of all backup VMs, and E is the set of shortest paths connecting each pair of VMs from different partitions. A matching set M is a subset of E. Suppose w denotes the weight function; then it is necessary to find a matching of minimum weight, where the weight of matching M is given by:

    w(M) = \sum_{e \in M} w(e)    (15)

In other words, the recovery problem can be formulated as follows:

    \min \sum_{v_f \in V_F, \, v_b \in V_B} w(v_f, v_b) \cdot x(v_f, v_b)    (16)

subject to:

    \sum_{v_b \in V_B} x(v_f, v_b) = 1, \quad \forall v_f \in V_F    (17)
    x(v_f, v_b) \in \{0, 1\}, \quad \forall v_f \in V_F, \, v_b \in V_B    (18)
    w(v_f, v_b) = DSize(v_f) \cdot length(e(v_f, v_b))    (19)

where (17) ensures each failed VM is matched to a backup VM. In (18), x(v_f,v_b) = 1 if the edge (v_f,v_b) belongs to the matching; otherwise, x(v_f,v_b) = 0. DSize in (19) returns the size of the data that can be retrieved from the host server on which v_f is placed; therefore, w(v_f,v_b) denotes the transfer cost. As explained before, information exchanged between host servers in the same subnet only utilizes an edge switch; when two hosts are in the same pod, the traffic is routed through both the edge and the aggregation switches, so the transfer consumes more network resources and the delay becomes greater. For each backup VM, we therefore try to find a corresponding failed VM in the same subnet if there is one; otherwise, VMs in different subnets have to be mapped.

Algorithm 4 details our recovery strategy decision algorithm:

Step 1: For each backup VM, all failed VMs in the same subnet that have not been matched are sorted according to their data size. The backup VM is mapped to the failed VM with the largest data size.

Step 2: If there still remain failed VMs that have not been matched to a backup VM, all unmatched failed VMs in the same pod are sorted according to data size. The backup VM is mapped to the failed VM with the largest data size in the pod.

Step 3: If there still remain failed VMs that have not been matched to a backup VM, the backup VMs are randomly matched to the failed VMs, and the data are re-fetched from the storage server.
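These three steps admit a compact sketch (ours, not the authors' code; Algorithm 4 below gives the full procedure, and the per-VM records carrying subnet, pod, and recoverable data size are assumptions):

```python
def decide_recovery(failed, backups):
    """Greedy matching of Steps 1-3. `failed` and `backups` are lists of
    dicts like {"id": "f1", "subnet": 0, "pod": 0, "dsize": 300}."""
    maps, failed, backups = {}, list(failed), list(backups)

    def match(scope):  # scope: the key by which locality is defined
        for b in list(backups):
            candidates = [f for f in failed if f[scope] == b[scope]]
            if candidates:
                f = max(candidates, key=lambda f: f["dsize"])  # largest data first
                maps[f["id"]] = b["id"]
                failed.remove(f)
                backups.remove(b)

    match("subnet")                      # Step 1: same subnet
    match("pod")                         # Step 2: same pod
    for f, b in zip(list(failed), list(backups)):
        maps[f["id"]] = b["id"]          # Step 3: arbitrary; re-fetch from storage
        failed.remove(f)
        backups.remove(b)
    return maps

failed = [{"id": "f1", "subnet": 0, "pod": 0, "dsize": 300},
          {"id": "f2", "subnet": 2, "pod": 1, "dsize": 150}]
backups = [{"id": "b1", "subnet": 0, "pod": 0, "dsize": 0},
           {"id": "b2", "subnet": 3, "pod": 0, "dsize": 0}]
print(decide_recovery(failed, backups))  # {'f1': 'b1', 'f2': 'b2'}
```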

If we iterate over all subnets concurrently, the time complexity of Algorithm 4 is O(n log n), where n is the number of failed VMs.

We note that the proposed network-topology-aware redundant VM placement approach aims at enhancing the reliability of server-based cloud services whose fault-tolerance level can be measured and assured in terms of the k-fault-tolerance metric. In practice, rebooting or replacing a specific failed VM on the same hosting server could be attempted in a controlled manner (as a precondition for executing the aforementioned recovery strategy) as a VM-specific optimization of the proposed approach, although this VM failure handling scheme does not contribute to the k-fault-tolerance measurement. The scheme needs to be performed in a controlled manner because a service-specific VM management policy may need to be followed, particularly when the root cause is unknown (which may lead to continual failure recovery attempts caused by partial server failures).

Algorithm 4: Recovery Strategy Decision
Input: Set of all failed VMs VM_F, set of all backup VMs VM_B, w(vm_f, vm_b) for each vm_f in VM_F and vm_b in VM_B
Output: Map maps between each failed VM and its backup
1  add all subnets that contain at least one backup VM to subnets;
2  for each subnet subnet_i in subnets do
3    add the intersection of VM_F and VM(subnet_i) to list srcVM;
4    add the intersection of VM_B and VM(subnet_i) to list dstVM;
5    sort srcVM by data size;
6    while srcVM is not empty and dstVM is not empty do
7      key = srcVM->head;
8      value = dstVM->head;
9      add <key, value> to maps;
10     remove key from VM_F and srcVM;
11     remove value from VM_B and dstVM;
12   end
13 end
14 if VM_F is not empty then
15   for each pod pod_i in pods do
16     clear srcVM and add the intersection of VM_F and VM(pod_i) to list srcVM;
17     clear dstVM and add the intersection of VM_B and VM(pod_i) to list dstVM;
18     while srcVM is not empty and dstVM is not empty do
19       key = srcVM->head;
20       value = dstVM->head;
21       add <key, value> to maps;
22       remove key from VM_F and srcVM;
23       remove value from VM_B and dstVM;
24     end
25   end
26 end
27 if VM_F is not empty then
28   while VM_F is not empty do
29     key = VM_F->head;
30     value = VM_B->head;
31     add <key, value> to maps;
32     remove key from VM_F;
33     remove value from VM_B;
34   end
35 end
36 return maps;

5 EXPERIMENTAL EVALUATIONS

In the following sections, the experimental setting is first outlined. Then OPVMP is compared with four other representative approaches in terms of total network resource consumption and other performance metrics. Finally, the parameters of our approach are studied.

5.1 Experimental Setup

We constructed an experimental platform based on our previous research results [39] [28]. In our experiment, a 32-port fat-tree data center network is constructed. The capacity of the root-layer and aggregation-layer links is set to 10 Gbps, and the capacity of the edge-layer links is set to 1 Gbps [21]. There are 16 host servers in each subnet, and each host server can host at most four VMs. The performance of our method (OPVMP) was studied by comparing it with four other existing representative methods:

RSVMP: Data are re-fetched from the central storage server in the recovery stage. This approach does not take the network topology into consideration. After having selected (m+k) host servers, all primary VMs and backup VMs are randomly placed on the selected host servers in the data center [25].

RLVMP: Data are re-fetched from the central storage server or from the host server on which the failed VM resides; the choice is determined by the network distance and whether a data copy exists. After having selected (m+k) host servers, all primary VMs and backup VMs are randomly placed on the selected host servers.

PLVMP: Data are re-fetched from the central storage server or from the host server on which the failed VM resides; the choice is determined by the network distance and whether a data copy exists. After having selected (m+k) host servers, the backup VMs are uniformly distributed across the pods.

SLVMP: Data are re-fetched from the central storage server or from the host server on which the failed VM resides; the choice is determined by the network distance and whether a data copy exists. After having selected (m+k) host servers, the backup VMs are uniformly distributed across the subnets.

All the methods were evaluated using the following performance metrics:

TDelay: the total data transfer delay, calculated as

    TDelay = \sum_i delay(pkt_i)    (20)

where pkt_i denotes a network packet.

PRoot: the total size of the network packets transferred by the root-layer switches, calculated as

    PRoot = \sum_i w_i^r \cdot size(pkt_i)    (21)

where w_i^r denotes the number of times packet pkt_i has been transferred by the root switches.

PAgg: the total size of the network packets transferred by the aggregation-layer switches, calculated as

    PAgg = \sum_i w_i^a \cdot size(pkt_i)    (22)

where w_i^a denotes the number of times packet pkt_i has been transferred by the aggregation switches.

PEdge: the total size of the network packets transferred by the edge-layer switches, calculated as

    PEdge = \sum_i w_i^e \cdot size(pkt_i)    (23)

where w_i^e denotes the number of times packet pkt_i has been transferred by the edge switches.

PTotal: the total size of all packets transferred by all switches:

    PTotal = PRoot + PAgg + PEdge    (24)

5.2 Performance Evaluations

The experiment involved 20 services, each with 50 primary VMs and 40 backup VMs. Two hundred VM failure events were triggered, and data processing tasks were generated. The data size of each task is 300 MB, the task duration is set to 10 minutes, and the task arrival rate of each service is 200 per hour. The performance of all approaches was studied. Fig. 3 illustrates the TDelay results. Figs. 4 to 7 present the network resource consumption results: Figs. 4 to 6 provide the results for PRoot, PAgg, and PEdge, respectively, and Fig. 7 depicts the results for PTotal. The results demonstrate that:

- Compared with the other approaches, if the data are re-fetched from the central storage server, more data are processed by the root and aggregation switches, and the data transfer delay is larger. This is because, whenever retrieving the data from neighboring host servers is possible, the data traffic need not consume upper-layer network resources and the transfer takes less time.
- Because our approach OPVMP takes the network topology and the service characteristics into consideration when solving the optimization problem, it consumes the smallest amount of root- and aggregation-layer network resources.
- Among all the approaches, the edge-layer network resource consumption of RSVMP is the lowest. That is because, when the primary VM and the backup VM are in the same pod, the recovery data transfer passes through the edge-layer switches twice.
- Of all five approaches, ours consumes the least total network resources, because it uses fewer root- and aggregation-layer network resources than the others.

Fig. 3. The performance of total data transfer delay (s)
Fig. 4. The performance of root layer network resource consumption (MB)

5.3 Impact of Parameter k

This section studies the impact of parameter k on network resource consumption. The experiment involved 10 services, each with 50 primary VMs. One hundred VM failure events were triggered, and data processing tasks were generated. The data size of each task is 300 MB, and the task duration is set to 10 minutes. The performance metrics consisted of PRoot, PAgg, PEdge, and PTotal. As shown in Fig. 8, owing to random factors, the network resource consumption increases slightly when k increases from 35 to 40.
However, the total amount of data processed by the root switches, aggregation switches, and edge switches under our approach shows a generally decreasing trend as the value of parameter k increases from 20 to 50. This is because an increase in k results in an increase in the number of backup VMs for the same service.

Fig. 5. The performance of aggregation layer network resource consumption (MB)
Fig. 6. The performance of edge layer network resource consumption (MB)
Fig. 7. The performance of total network resource consumption (MB)
Fig. 8. Impact of parameter k on root-layer, aggregation-layer, and edge-layer network resource consumption (MB); k represents the number of backup VMs
Fig. 9. Impact of parameter k on total network resource consumption; k represents the number of backup VMs
Fig. 10. Impact of task arrival rate on root-layer, aggregation-layer, and edge-layer network resource consumption (MB); the task arrival rate of each service increases from 40 per hour to 200 per hour

Fig. 11. Impact of task arrival rate on total network resource consumption; the task arrival rate of each service increases from 40 per hour to 200 per hour in our experiment

With more backup VMs, there is a greater chance that the primary and backup VMs are in the same subnet; therefore, our approach consumes fewer upper-layer network resources in the recovery stage. Because this reduction results from an increase in the value of parameter k, the total network resources also show a decreasing trend, as shown in Fig. 9.

5.4 Impact of the Task Arrival Rate

We present the impact of the task arrival rate on network resource consumption in this section. There are 11 services, each with 50 primary VMs and 40 backup VMs. We generate 2000 data block processing tasks. The data size of each task is 300 MB, and the task duration is set to 10 minutes. We trigger 100 virtual machine failure events. The task arrival rate of all services increases from 40 per hour to 200 per hour. To show the impact of the arrival rate, we calculate the network resource consumption difference between our approach and RSVMP. The performance metrics include the root-layer, aggregation-layer, edge-layer, and total network resource consumption differences between our approach and RSVMP.

As shown in Fig. 10, the root-layer, aggregation-layer, and edge-layer network resource consumption differences increase with the task arrival rate. As shown in Fig. 11, the total network resource consumption difference also increases with the task arrival rate. When the task arrival rate increases, the task waiting queue of each VM becomes longer, so a failure event affects more tasks and the total amount of re-fetched data increases. Therefore, our approach can save more upper-layer network resources compared with RSVMP, and the network resource consumption difference increases.

6 CONCLUSIONS AND FUTURE WORK

This paper aims at enhancing the reliability of server-based cloud services whose fault-tolerance level can be measured and assured in terms of the replication-based k-fault-tolerance metric. It proposes a novel network-topology-aware redundant VM placement approach to minimizing the consumption of network resources when primary VM failures need to be recovered by backup VMs under the k-fault-tolerance constraints. The proposed approach is a three-step process: host server selection, optimal redundant VM placement, and recovery strategy decision. By exploiting the characteristics of the datacenter network, a heuristic algorithm capable of efficiently selecting appropriate host servers and determining the optimal VM placement strategy is presented. Finally, the recovery strategy decision problem is formulated as a maximum weight matching in bipartite graphs problem, and an optimal algorithm is presented to solve it. The experimental evaluation results show that the proposed approach consumes fewer network resources than four other representative approaches. Our future work includes: (1) reducing the complexity of our approach based on probabilistic analysis, (2) trading off the effect of edge switch failures against the network resource savings achieved by adopting a fault avoidance approach, and (3) considering the reliability problem for complex cloud workflow services.

ACKNOWLEDGMENTS

The work presented in this study is supported by NSFC ( ), NSFC ( ), the Guangdong Natural Science Foundation (Project No. 2014A ), and the Fundamental Research Funds for the Central Universities. Shangguang Wang is the corresponding author.

REFERENCES
[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "Above the clouds: A Berkeley view of cloud computing," Feb.
[2] W. Voorsluys, J. Broberg, and R. Buyya, "Introduction to cloud computing," in Cloud Computing: Principles and Paradigms, pp. 3-37.
[3] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6.
[4] B. P. Rimal, E. Choi, and I. Lumb, "A taxonomy and survey of cloud computing systems," in Fifth International Joint Conference on INC, IMS and IDC (NCM '09), IEEE.
[5] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, et al., "A view of cloud computing," Communications of the ACM, vol. 53, no. 4.
[6] Z. Xia, X. Wang, X. Sun, and Q. Wang, "A secure and dynamic multi-keyword ranked search scheme over encrypted cloud data."
[7] M. A. Vouk, "Cloud computing issues, research and implementations," CIT. Journal of Computing and Information Technology, vol. 16, no. 4.
[8] Y. Ren, J. Shen, J. Wang, J. Han, and S. Lee, "Mutual verifiable provable data auditing in public cloud storage," Journal of Internet Technology, vol. 16, no. 2.
[9] S. Marston, Z. Li, S. Bandyopadhyay, J. Zhang, and A. Ghalsasi, "Cloud computing: the business perspective," Decision Support Systems, vol. 51, no. 1.
[10] M. Melo, P. Maciel, J. Araujo, R. Matos, and C. Araujo, "Availability study on cloud computing environments: Live migration as a rejuvenation mechanism," in IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 1-6, IEEE.
[11] R. Jhawar, V. Piuri, and M. Santambrogio, "Fault tolerance management in cloud computing: A system-level perspective," IEEE Systems Journal, vol. 7, June.
[12] "An introduction to designing reliable cloud services," white paper, Microsoft.

[13] E. Bauer and R. Adams, Reliability and Availability of Cloud Computing. John Wiley & Sons.
[14] M. R. Lyu et al., Handbook of Software Reliability Engineering. IEEE Computer Society Press, CA.
[15] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud computing: state-of-the-art and research challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7-18.
[16] Y.-S. Dai, B. Yang, J. Dongarra, and G. Zhang, "Cloud service reliability: Modeling and analysis," in 15th IEEE Pacific Rim International Symposium on Dependable Computing, pp. 1-17.
[17] M. Schwarzkopf, D. G. Murray, and S. Hand, "The seven deadly sins of cloud computing research," in USENIX HotCloud, pp. 1-5, USENIX.
[18] W. Zhao, P. Melliar-Smith, and L. Moser, "Fault tolerance middleware for cloud computing," in Cloud Computing (CLOUD), 2010 IEEE 3rd International Conference on, July.
[19] I. Goiri, F. Julia, J. Guitart, and J. Torres, "Checkpoint-based fault-tolerant infrastructure for virtualized service providers," in Network Operations and Management Symposium (NOMS), 2010 IEEE, April.
[20] N. Limrungsi, J. Zhao, Y. Xiang, T. Lan, H. Huang, and S. Subramaniam, "Providing reliability as an elastic service in cloud computing," in Communications (ICC), 2012 IEEE International Conference on, June.
[21] J. Xu, J. Tang, K. Kwiat, W. Zhang, and G. Xue, "Survivable virtual infrastructure mapping in virtualized data centers," in Cloud Computing (CLOUD), 2012 IEEE 5th International Conference on, June.
[22] Z. Zheng, Y. Zhang, and M. Lyu, "CloudRank: A QoS-driven component ranking framework for cloud computing," in IEEE Symposium on Reliable Distributed Systems, Oct.
[23] Z. Zheng, T. Zhou, M. Lyu, and I. King, "FTCloud: A component ranking framework for fault-tolerant cloud applications," in Software Reliability Engineering (ISSRE), 2010 IEEE 21st International Symposium on, Nov.
[24] G. Jung, K. Joshi, M. Hiltunen, R. Schlichting, and C. Pu, "Performance and availability aware regeneration for cloud based multitier applications," in Dependable Systems and Networks (DSN), 2010 IEEE/IFIP International Conference on, June.
[25] F. Machida, M. Kawato, and Y. Maeno, "Redundant virtual machine placement for fault-tolerant consolidated server clusters," in Network Operations and Management Symposium (NOMS), 2010 IEEE, April.
[26] A. S. Tanenbaum and M. Van Steen, Distributed Systems. Prentice-Hall.
[27] A. Zhou, S. Wang, C. Yang, L. Sun, Q. Sun, and F. Yang, "FTCloudSim: support for cloud service reliability enhancement simulation," International Journal of Web and Grid Services, vol. 11, no. 4.
[28] A. Zhou, S. Wang, Q. Sun, H. Zou, and F. Yang, "FTCloudSim: a simulation tool for cloud service reliability enhancement mechanisms," in Proceedings Demo & Poster Track of ACM/IFIP/USENIX International Middleware Conference, pp. 1-2.
[29] M. Zhang, H. Jin, X. Shi, and S. Wu, "VirtCFT: A transparent VM-level fault-tolerant system for virtual clusters," in Parallel and Distributed Systems (ICPADS), 2010 IEEE 16th International Conference on, Dec.
[30] Z. Zheng, T. Zhou, M. Lyu, and I. King, "Component ranking for fault-tolerant cloud applications," IEEE Transactions on Services Computing, vol. 5, Fourth Quarter.
[31] "US Government Cloud Computing Technology Roadmap, Volume I: High-priority requirements to further USG agency cloud computing adoption."
[32] M. Al-Fares, A. Loukissas, and A. Vahdat, "A scalable, commodity data center network architecture," ACM SIGCOMM Computer Communication Review.
[33] R. Niranjan Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, "PortLand: a scalable fault-tolerant layer 2 data center network fabric," in ACM SIGCOMM Computer Communication Review, vol. 39, ACM.
[34] K. Bilal, M. Manzano, S. Khan, E. Calle, K. Li, and A. Zomaya, "On the characterization of the structural robustness of data center networks," IEEE Transactions on Cloud Computing, vol. 1, Jan.
[35] "Cisco data center infrastructure 2.5 design guide," Cisco Systems, Inc., San Jose, CA, USA.
[36] A. Singh, M. Korupolu, and D. Mohapatra, "Server-storage virtualization: integration and load balancing in data centers," in Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, p. 53, IEEE Press.
[37] J. E. Hopcroft and R. M. Karp, "An n^{5/2} algorithm for maximum matchings in bipartite graphs," SIAM Journal on Computing, vol. 2, no. 4.
[38] R. E. Burkard and E. Cela, Linear Assignment Problems and Extensions. Springer.
[39] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. De Rose, and R. Buyya, "CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms," Software: Practice and Experience, vol. 41, no. 1.

Ao Zhou is an assistant professor at the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications. She received her Ph.D. degree in computer science at Beijing University of Posts and Telecommunications of China. Her research interests include cloud computing and service reliability.

Shangguang Wang is an associate professor at the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications. He received his Ph.D. degree in computer science at Beijing University of Posts and Telecommunications of China. His Ph.D. thesis was awarded as an outstanding doctoral dissertation by BUPT. His research interests include service computing, cloud services, and QoS management. He is a member of IEEE.

Bo Cheng received the Ph.D. degree in computer science and technology from the University of Electronic Science and Technology of China. His research interests include multimedia communications and services computing. Currently, he is an associate professor at the State Key Laboratory of Networking and Switching Technology of Beijing University of Posts and Telecommunications.

Zibin Zheng is an associate professor at Sun Yat-sen University, Guangzhou, China. He received the Outstanding Ph.D. Thesis Award of The Chinese University of Hong Kong in 2012, the ACM SIGSOFT Distinguished Paper Award at ICSE 2010, the Best Student Paper Award at ICWS 2010, and an IBM Ph.D. Fellowship Award. His research interests include service computing and cloud computing.

Fangchun Yang received his Ph.D. degree in communication and electronic system from the Beijing University of Posts and Telecommunications. He is currently a professor at the Beijing University of Posts and Telecommunications, China. He has published 6 books and more than 80 papers. His current research interests include network intelligence, services computing, communications software, soft switching technology, and network security. He is a fellow of the IET.

Rong Chang received his Ph.D. degree in computer science and engineering from the University of Michigan. He is with IBM Research, leading a global team creating innovative IoT cloud services technologies. He holds 30+ patents and has published 40+ papers. He is a Member of the IBM Academy of Technology, an ACM Distinguished Engineer, Chair of the IEEE Computer Society TCSVC, and Associate Editor of IEEE Transactions on Services Computing.

Michael R. Lyu is currently a Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. His research interests include software reliability engineering, distributed systems, service computing, information retrieval, social networks, and machine learning. He has published over 400 refereed journal and conference papers in these areas. Dr. Lyu is an IEEE Fellow and an AAAS Fellow for his contributions to software reliability engineering and software fault tolerance.

Rajkumar Buyya is a Fellow of IEEE, Professor of Computer Science and Software Engineering, Future Fellow of the Australian Research Council, and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne, Australia. He is also serving as the founding CEO of Manjrasoft, a spin-off company of the University, commercializing its innovations in cloud computing. He has authored over 450 publications and five textbooks, including Mastering Cloud Computing, published by McGraw Hill and Elsevier/Morgan Kaufmann in 2013 for the Indian and international markets, respectively. Software technologies for Grid and Cloud computing developed under Dr. Buyya's leadership have gained rapid acceptance and are in use at several academic institutions and commercial enterprises in 40 countries around the world. Dr. Buyya has led the establishment and development of key community activities, including serving as foundation Chair of the IEEE Technical Committee on Scalable Computing and five IEEE/ACM conferences. These contributions and Dr. Buyya's international research leadership are recognized through the award of the 2009 IEEE TCSC Medal for Excellence in Scalable Computing. He is currently serving as Co-Editor-in-Chief of the Journal of Software: Practice and Experience, which was established 40+ years ago.
