Power and Locality Aware Request Distribution Technical Report Heungki Lee, Gopinath Vageesan and Eun Jung Kim Texas A&M University College Station
Abstract

With the growing use of cluster systems for file distribution, web serving and database transactions, efficiency and power optimization have gained a lot of significance. Distributor-based systems have been widely adopted; they forward requests from the clients to a set of load-balanced backend servers in complete transparency to the clients. The policy employed in forwarding requests from the front-end distributor to the backend servers plays a very important role in the overall performance of the system. In this paper, we use power- and locality-based request distribution, which aims to provide optimum energy conservation while maintaining the required QoS of the system. We use a basic locality policy, which distributes the incoming requests to the backend servers based on the partitioning of the data set among the backend servers' memory. It aims to generate more hits in the backend servers' memory through data-specific distribution, and this provides the required efficiency. We then implement an optimum on-off power policy on top of this locality-based distribution to achieve considerable energy conservation. The whole system operates under a standard memory management policy, which improves the performance further. We back the idea with simulation results and future implementation possibilities.

1. Introduction

Cluster systems are being increasingly used in web server management, file distribution and database transactions. The main reason for the large-scale deployment of cluster systems is the well-established distributor-based request management technique. Such a system has a front-end server (the distributor), which receives all the requests from the clients.
The requests are then forwarded to the set of backend servers that hold the actual content for the clients, based on various policies. The forwarding of requests from the distributor to the backend servers is carried out in complete transparency to the clients; a handoff protocol is employed in most cases to make the transition smooth and transparent. The operational power budget for maintaining a large cluster may run into millions of dollars, so any viable approach to energy saving in a cluster should be taken seriously. Hence this paper tries to strike a balance between high efficiency and optimum power conservation. Of the various policies used to forward requests from the distributor to the backend servers, weighted round robin, locality-aware request distribution (LARD) [1] and power-aware request distribution (PARD) [2] are the most common and successful. Each has its own set of pros and cons. Using these policies in such a way that the system is most efficient would yield a high-performance policy for the distributor, and we strive to achieve such a policy.
In this paper, we take advantage of Locality-Aware Request Distribution (LARD) to achieve high efficiency in terms of throughput (number of requests served per second) and average response time (service time). Then, we implement a simple and optimum power policy over LARD to make it more power-efficient as well as content-efficient. We then present simulation results to show the efficient working of our algorithm.

The rest of the paper is organized as follows. Section 2 addresses previous related work and gives a brief description of the existing policies for the distributor. Section 3 explains our policy in detail. Section 4 presents the details of the simulations that have been carried out and the pseudo-code representing our algorithm. Section 5 enumerates the results by comparing the performance of our system against the other policies. We then conclude the paper in Section 6.

2. Related Work

Of the various policies that are employed at the distributor, those that provide better load balancing among the backend servers, better efficiency and considerable power conservation are the most preferred. In this section, we review the following policies:

- Weighted Round Robin (WRR)
- Locality-Aware Request Distribution (LARD)
- Power-Aware Request Distribution (PARD)

2.1 Weighted Round Robin

The choice of policy is critical for the efficient operation of the system. The weighted round robin policy is applied at the distributor, where requests are forwarded to the backend servers based on their current load. The distributor maintains a record of the current load at each backend server and forwards each request from the clients based on this information. The request is forwarded to the least-loaded backend server in the set; request forwarding is thus weighted by the current load on the servers.
The most loaded server is relieved of further load because new requests go to the least-loaded server. Thus, at any given point in time, the load is evenly balanced among all the available servers, providing very good load balancing.
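The least-loaded selection at the heart of this policy can be sketched in a few lines. This is a minimal illustrative version, not the paper's implementation; the load values are made up.

```python
def wrr_select(loads):
    """Return the index of the least-loaded backend server."""
    return min(range(len(loads)), key=lambda i: loads[i])

loads = [12, 5, 9, 17]      # current connection count per backend server
target = wrr_select(loads)  # picks the server with 5 active connections
loads[target] += 1          # the distributor records the forwarded request
```

Ties are broken toward the lowest server index here; any consistent tie-breaking rule would preserve the policy's balancing behavior.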
Figure 1: Weighted Round-Robin

The main drawback of the scheme is that it takes no account of the locality of the requests or of power conservation among the servers. In large deployments of cluster systems, the power consumed becomes a very significant factor. Since all the servers are turned ON during the entire period of operation, the system conserves zero power. Also, because the scheme does not consider the locality of the data among the backend servers, requests for the same data land on different servers and incur large disk latencies. This increases the response time (service time) of the servers and hence reduces the throughput. Considering this, power- and locality-based request distribution policies have more significance.

2.2 Locality-Aware Request Distribution

The major drawback of the Weighted Round Robin policy is that it incurs a large amount of disk latency by not considering the locality of the data in the backend servers' memory. Our simulation of the Weighted Round Robin policy shows that, in the worst case, this can generate an unacceptable amount of disk latency in the backend servers, leading to increased response time and reduced throughput. This causes large delays for the clients and brings down the performance of the whole system. To overcome this, Locality-Aware Request Distribution [1] employs a locality-based distribution policy at the distributor and strives to turn requests into memory hits at the backend servers rather than disk accesses.

Figure 2: LARD
The distributor maintains a table of the data types available in the backend servers' memory. The data types are assigned to the backend servers based on the initial server/data partitioning and are initially distributed evenly across the servers. When a new request arrives at the distributor, its data type is looked up in the distributor table and the corresponding server is identified; requests of that data type are always forwarded to that server. With this assignment, a request incurs disk latency only on the first assignment of its data type to a backend server. Subsequent requests of the same data type end up as server memory hits, since the data has already been fetched from disk and now resides in memory. Once requests start overflowing at one of the servers, one of the least-loaded servers is added to serve that data type, and the server set for that data type grows. Similarly, when a server becomes underutilized, a server is removed from the server set. Additionally, when there is no change in the target server set for a given K seconds, the most loaded server is removed from the server set. This ensures load balancing to some extent. However, the major drawback of LARD is that it too does not take the power factor into consideration and keeps all the backend servers running. This yields zero power conservation, making it among the least power-efficient. Also, the load balancing of LARD is not as good as that of the Weighted Round Robin policy.

2.3 Power Conservation by Multi-Speed Disks

E. V. Carrera et al. [4] have proposed a technique to conserve energy in network servers using multi-speed disk technology. The idea is to use two disks with different speeds to emulate a multi-speed disk. When the load rises above a pre-defined threshold, the high-speed disk serves it; otherwise the low-speed disk does.
This approach can save up to 23% of energy consumption compared to conventional servers, and the authors argue that the performance degradation is negligible. A major drawback is that it requires multi-speed disks, which are not widely available.

2.4 Dynamic Cluster Re-configuration

E. Pinheiro et al. [5] have proposed a dynamic cluster reconfiguration technique to bring down the energy consumption in servers. In this technique, a cluster node is dynamically added to or removed from the cluster system based on two kinds of constraints: the efficiency and performance of the system, and other power implications. The cluster is thus dynamically re-configurable and intelligent enough to reconfigure itself based on the load and other efficiency-related parameters. The paper shows the results of implementing this algorithm on a real cluster system: power and energy consumption are reduced considerably, by up to 71% and 45%, respectively, compared to traditional network servers.
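Both energy-saving approaches above hinge on comparing the current load against a pre-defined threshold before committing the higher-power resource (the fast disk, or an extra node). A toy sketch of that test, with an illustrative threshold of our own choosing:

```python
LOAD_THRESHOLD = 0.6  # fraction of peak capacity; an assumed value

def pick_mode(load_fraction):
    """Serve from the high-power resource only when load exceeds the threshold."""
    return "high-power" if load_fraction > LOAD_THRESHOLD else "low-power"

pick_mode(0.8)  # high load: use the high-speed disk / extra node
pick_mode(0.3)  # low load: stay in the low-power configuration
```

In practice such schemes also add hysteresis (separate up and down thresholds) to avoid oscillating around the boundary.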
2.5 Power-Aware Request Distribution

The drawback of WRR and LARD regarding power consumption is that both always keep their servers turned ON, even when some of them serve no requests; therefore, they cannot conserve any power. Unlike WRR and LARD, PARD [2] explicitly aims to reduce the power consumption of the cluster system, and is the most power-efficient of the three policies.

Figure 3: PARD

PARD employs the ON-OFF model: any backend server that is idle is turned off, and backend servers are turned on whenever they are required to serve requests. In [2], the authors assume that a server's power draw equals its maximum power when it is ON and is simply zero when it is OFF. Turning off unused servers is considered the best way to save power.

Figure 4: PARD Policy

However, this policy incurs more disk latency than LARD: while a backend server is turned OFF, the data cached in its memory is lost, so requests forwarded to it after it is turned back ON must go to disk. Furthermore, there are Startup and Shutdown delays when backend servers are turned ON and OFF, respectively. The Startup delay results from booting the operating system; the Shutdown delay is the period between pruning the idle backend server from service mode and shutting it down. These delays cause worse QoS compared to LARD. Overall, there are trade-offs between QoS and power savings.
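The ON-OFF model above can be made concrete with a small energy calculation. This is a minimal sketch under the paper's assumption (full power when ON, zero when OFF); the wattage and the ON-time schedules are made-up illustrative values, not measured results.

```python
P_MAX = 200.0  # watts per server while ON (assumed value)

def cluster_energy(on_seconds_per_server):
    """Total energy in joules, given each server's cumulative ON time."""
    return sum(P_MAX * t for t in on_seconds_per_server)

# WRR/LARD keep all 6 servers ON for the whole hour;
# an ON-OFF policy powers idle servers down.
always_on = cluster_energy([3600] * 6)
on_off = cluster_energy([3600, 3600, 1200, 0, 0, 0])
saving = 1 - on_off / always_on
```

The schedule above yields roughly a 61% saving, showing how quickly the ON-OFF model pays off when most servers sit idle; the actual figure depends entirely on the workload.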
3. Power and Locality-Aware Request Distribution (PLARD)

The basic PLARD system is a combination of LARD and PARD, so that locality improves performance while the power policy provides optimum power conservation. The power policy is implemented on top of the locality-based policy. The distributor forwards requests to the backend servers that are turned ON, based on the locality of the incoming data.

Figure 5: PLARD

In PLARD, the distributor sends requests only to the powered-on backend servers, using content-based distribution. While the distributor forwards requests to the backend servers, PLARD checks whether any backend servers are idle. If some of them are (which is very common in traditional cluster-based systems), the idle backend servers are turned OFF for power conservation. This simple ON-OFF policy achieves very good power conservation, while still maintaining high QoS like the LARD policy, since it employs the same content-based distribution. However, basic PLARD still shares the delay drawback of PARD: there are Startup and Shutdown delays whenever a backend server is turned ON or OFF. Therefore, we propose an improvement, PLARD with Prediction. This policy predicts oncoming congestion on the turned-ON servers and starts a turned-OFF server well in advance of the impending congestion. This prevents overflow at the other backend servers and hides the Startup and Shutdown delays that would otherwise be imposed on the incoming requests. One other feature of the power policy is that at least one server is always kept ON. This ensures that no startup or shutdown delays affect future requests once all current requests have drained and the other servers have been turned off.
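The prediction step described above can be sketched as follows. The paper does not fix a particular predictor, so the linear extrapolation of recent load samples here is our own illustrative choice; the startup delay and high threshold match the values used later in the simulation.

```python
STARTUP_DELAY = 30  # seconds of boot time to hide (simulation value)
T_HIGH = 17         # per-server connection threshold (simulation value)

def should_preboot(load_samples, interval):
    """Decide whether to power a server on now, before congestion hits.

    load_samples: recent load readings for a server, oldest first.
    interval: seconds between consecutive samples.
    """
    if len(load_samples) < 2:
        return False
    # linear extrapolation of the load trend (connections per second)
    rate = (load_samples[-1] - load_samples[0]) / ((len(load_samples) - 1) * interval)
    if rate <= 0:
        return False  # load flat or falling: no congestion expected
    seconds_to_overflow = (T_HIGH - load_samples[-1]) / rate
    # boot now if overflow would arrive before a cold server could be ready
    return seconds_to_overflow <= STARTUP_DELAY
```

For example, a server climbing from 13 to 15 connections over 20 seconds is about 20 seconds from the threshold, so the boot must begin immediately to hide the 30-second delay.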
3.1 Memory Management Policy

One other important and unique feature of the PLARD policy is the pin-down memory: a part of main memory exclusively reserved for the web, mail or file server running on the backend server. The number of data types a server can support at a given point in time is directly proportional to the amount of pin-down memory available. This reservation ensures that no other program running on the server can occupy the memory completely and evict the data types under request, which enhances the performance of the system considerably. The memory management policy works as follows:
- Pick a server based on the PLARD policy, which is described later in the paper.
- If the data type is new and no server is associated with it, select a server that has enough pin-down memory to support the new data type.
- If a server is about to reach its maximum capacity, migrate the data type to a new server that can accommodate it in its pin-down memory.
- The pin-down memory area is dynamically adjustable and can be varied at any time, depending on the load on the system.

With these performance-enhancement features, PLARD with Prediction and memory management shows a considerable drop in power conservation. The PLARD policy alone produced a power conservation of 67%, but with the above enhancements included, the power conservation dropped to 50%. This is because the powered-off backend server is turned on early to compensate for the Startup delay, and it consumes more energy since it is on before it actually serves requests. Nevertheless, PLARD with Prediction and pin-down memory proves to be the most optimum distribution policy among those considered.

4. Simulation

We have built simulators and run simulations to back our idea. Our work consists of building the WRR, LARD, PLARD and PLARD-with-Prediction simulators and running extensive simulations on them. The simulation model consisted of 100 random requests served from a trace file, which emulates the client requests; one front-end distributor server that collects the requests and forwards them to the backend servers; and 6 backend servers that provide the service. The policies are implemented at the front-end distributor. The backend servers are assumed to be capable of serving a maximum of 30 connections at any given time. The pin-down memory size is fixed at 1024 Bytes (for testing purposes). The values of T_low and T_high are fixed at 12 and 17, respectively.
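The admission check behind the pin-down memory policy of Section 3.1 can be sketched with the 1024-byte area used in the simulation. The sizes and the first-fit placement below are illustrative assumptions; the paper does not specify a placement order.

```python
PIN_DOWN_SIZE = 1024  # bytes of pin-down memory per server (simulation value)

def fits(pin_down_used, type_size):
    """Can this server's pin-down area accommodate another data type?"""
    return pin_down_used + type_size <= PIN_DOWN_SIZE

def place_new_type(pin_down_used, type_size):
    """First-fit placement: return a server index with room, or None."""
    for i, used in enumerate(pin_down_used):
        if fits(used, type_size):
            return i
    return None  # no room anywhere: an existing type would have to migrate
```

For example, with per-server usage of 1000, 512 and 900 bytes, a new 256-byte data type lands on the second server; if every server were full, the migration rule from the policy above would have to kick in.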
4.1 Pseudo-Algorithm

1. Read a request from the trace file.
2. Check for the least-loaded server that is turned ON.
3. Apply the memory management policy.
4. Forward the request to that backend server.
5. Update the distributor_table at the front-end with the locality of the data.
6. If any server in the backend cluster has load > threshold, turn ON a new server; otherwise, go to step 1.
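A runnable rendering of the pseudo-algorithm above, under assumed data structures: a list of request identifiers stands in for the trace file, `on` marks which backends are powered, and the memory-management step is elided to a comment. The threshold is the T_high value from the simulation setup.

```python
T_HIGH = 17  # per-server load threshold (simulation value)

def distribute(requests, loads, on, table):
    """Distribute trace requests across powered-on backend servers."""
    for req in requests:
        # step 2: least-loaded server among those that are turned ON
        target = min((i for i in range(len(loads)) if on[i]),
                     key=lambda i: loads[i])
        # step 3 would apply the memory management policy here (elided)
        loads[target] += 1   # step 4: forward the request
        table[req] = target  # step 5: record locality at the distributor
        # step 6: turn ON a new server if any powered server exceeds the threshold
        if any(on[i] and loads[i] > T_HIGH for i in range(len(loads))):
            for i in range(len(on)):
                if not on[i]:
                    on[i] = True
                    break
    return loads, on

loads, on, table = [0, 0, 0], [True, True, False], {}
distribute(["a", "b", "c", "d"], loads, on, table)
```

With this light load, the four requests alternate between the two powered servers and the third server never needs to be booted.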
5. Results

The simulations were run for all 4 policies with the same trace file, and various performance metrics have been calculated.

Figure 6: Average Response Time (WRR, LARD, LARD-M, PLARD)

From the bar chart it is clear that WRR suffers the worst average response time, as it incurs many disk latencies due to its lack of locality awareness. The rest of the policies perform almost to the same mark. PLARD has a slightly higher average response time, as it imposes the 30-second startup delay on incoming requests.

Figure 7: Overall Throughput

The overall throughput is a direct reflection of the same reasons stated above.
Figure 8: Memory Hits (WRR, LARD, LARD-M, PLARD)

The memory hits too are far better in the locality-based policies, as they adopt a highly efficient locality-based algorithm. Load balancing, however, is better with the WRR policy.

Figure 9: Power Conservation (WRR, LARD, LARD-M, PLARD)

Power conservation can be seen to be high in the PLARD simulators, as they use the ON-OFF mechanism to conserve power. The servers are always turned ON in WRR and LARD, which hence provide zero power conservation.
Figure 10: Memory Management (available memory per server, for all systems)

The above histogram represents the available memory after simulation on each of the servers in all the systems. A negative value shows that the server has discarded contents of its memory to accommodate new requests. A very high negative value implies a poor memory management scheme, and this is the reason for poor performance on those systems.

6. Conclusion

As the use of cluster systems increases, conserving power has become a critical issue. A variety of policies employed in forwarding requests from the distributor to the backend servers are addressed, for they vitally affect the overall performance of the cluster system. In this paper, we compare four different policies, WRR, LARD, PLARD and PLARD with Prediction, to determine which is best for both power conservation and performance. WRR has good load balance, but its locality is so poor that it increases miss rates. To reduce the miss rates and improve secondary storage scalability, LARD is used. However, WRR and LARD save zero power in the cluster system. Thus, we propose PLARD, which employs not only content-based request distribution but also an On-Off policy: it distributes the incoming requests to the backend servers based on the type of the requests, maintaining the minimum required QoS while saving power. This content-based request distribution provides locality and high hit rates; meanwhile, PLARD turns off any idle backend servers to achieve significant power conservation. Even though PLARD with Prediction reduces the Startup and Shutdown delays of PLARD, it consumes more energy than PLARD, as shown in our results. Besides, there is a significant difference in Average Response Time, Overall Throughput and Memory Hits between PLARD and PLARD with Prediction (when the 30-second startup delay is considered). Therefore, PLARD with Prediction proves to be the best policy overall in terms of QoS and power conservation. In the future, we will implement a real cluster system and apply PLARD to it. We will also implement the pin-down memory configuration in the backend servers.

References

[1] Vivek S. Pai, Mohit Aron, Gaurav Banga, Michael Svendsen, Peter Druschel, Willy Zwaenepoel and Erich Nahum, "Locality-Aware Request Distribution in Cluster-Based Network Servers," in Proceedings of the Eighth International Conference on Architectural Support for Programming Languages and Operating Systems, October 2-7, 1998, San Jose, California, United States.
[2] K. Rajamani and C. Lefurgy, "On Evaluating Request-Distribution Schemes for Saving Energy in Server Clusters," in Proceedings of the International Symposium on Performance Analysis of Systems and Software, March.
[3] Mohit Aron, Darren Sanders, Peter Druschel and Willy Zwaenepoel, "Scalable Content-Aware Request Distribution in Cluster-Based Network Servers," in Proceedings of the USENIX 2000 Annual Technical Conference, San Diego, CA, June.
[4] E. V. Carrera, E. Pinheiro and R. Bianchini, "Conserving Disk Energy in Network Servers," in Proceedings of the 17th Annual International Conference on Supercomputing, June.
[5] E. Pinheiro, R. Bianchini, E. V. Carrera and T. Heath, "Dynamic Cluster Reconfiguration for Power and Performance," Kluwer Academic Publishers.
[6] E. V. Carrera and R. Bianchini, "Improving Disk Throughput in Data-Intensive Servers," in Proceedings of the 10th International Symposium on High-Performance Computer Architecture (HPCA-10), February.
[7] E. V. Carrera, S. Rao, L. Iftode and R. Bianchini, "User-Level Communication in Cluster-Based Servers," in Proceedings of the 8th IEEE International Symposium on High-Performance Computer Architecture (HPCA-8), February 2002.
More informationAzure Scalability Prescriptive Architecture using the Enzo Multitenant Framework
Azure Scalability Prescriptive Architecture using the Enzo Multitenant Framework Many corporations and Independent Software Vendors considering cloud computing adoption face a similar challenge: how should
More informationVirtual Memory. Chapter 8
Chapter 8 Virtual Memory What are common with paging and segmentation are that all memory addresses within a process are logical ones that can be dynamically translated into physical addresses at run time.
More informationKing 2 Abstract: There is one evident area of operating systems that has enormous potential for growth and optimization. Only recently has focus been
King 1 Input and Output Optimization in Linux for Appropriate Resource Allocation and Management James Avery King March 25, 2016 University of North Georgia Annual Research Conference King 2 Abstract:
More informationChapter 8 Virtual Memory
Operating Systems: Internals and Design Principles Chapter 8 Virtual Memory Seventh Edition William Stallings Modified by Rana Forsati for CSE 410 Outline Principle of locality Paging - Effect of page
More informationIMPROVING LIVE PERFORMANCE IN HTTP ADAPTIVE STREAMING SYSTEMS
IMPROVING LIVE PERFORMANCE IN HTTP ADAPTIVE STREAMING SYSTEMS Kevin Streeter Adobe Systems, USA ABSTRACT While HTTP adaptive streaming (HAS) technology has been very successful, it also generally introduces
More informationSF-LRU Cache Replacement Algorithm
SF-LRU Cache Replacement Algorithm Jaafar Alghazo, Adil Akaaboune, Nazeih Botros Southern Illinois University at Carbondale Department of Electrical and Computer Engineering Carbondale, IL 6291 alghazo@siu.edu,
More informationBuilding a low-latency, proximity-aware DHT-based P2P network
Building a low-latency, proximity-aware DHT-based P2P network Ngoc Ben DANG, Son Tung VU, Hoai Son NGUYEN Department of Computer network College of Technology, Vietnam National University, Hanoi 144 Xuan
More informationCooperative Caching Middleware for Cluster-Based Servers
Cooperative Caching Middleware for Cluster-Based Servers Francisco Matias Cuenca-Acuna and Thu D. Nguyen {mcuenca, tdnguyen}@cs.rutgers.edu Department of Computer Science, Rutgers University 11 Frelinghuysen
More informationArchitecture Tuning Study: the SimpleScalar Experience
Architecture Tuning Study: the SimpleScalar Experience Jianfeng Yang Yiqun Cao December 5, 2005 Abstract SimpleScalar is software toolset designed for modeling and simulation of processor performance.
More informationDynamic Load balancing for I/O- and Memory- Intensive workload in Clusters using a Feedback Control Mechanism
Dynamic Load balancing for I/O- and Memory- Intensive workload in Clusters using a Feedback Control Mechanism Xiao Qin, Hong Jiang, Yifeng Zhu, David R. Swanson Department of Computer Science and Engineering
More informationRAMS: A RDMA-enabled I/O Cache Architecture for Clustered network Servers
RAMS: A RDMA-enabled I/O Cache Architecture for Clustered network Servers Peng Gu, Jun Wang Computer Science and Engineering Department University of Nebraska-Lincoln Abstract Abstract: Previous studies
More informationProfile of CopperEye Indexing Technology. A CopperEye Technical White Paper
Profile of CopperEye Indexing Technology A CopperEye Technical White Paper September 2004 Introduction CopperEye s has developed a new general-purpose data indexing technology that out-performs conventional
More informationKanban Scheduling System
Kanban Scheduling System Christian Colombo and John Abela Department of Artificial Intelligence, University of Malta Abstract. Nowadays manufacturing plants have adopted a demanddriven production control
More informationPage Mapping Scheme to Support Secure File Deletion for NANDbased Block Devices
Page Mapping Scheme to Support Secure File Deletion for NANDbased Block Devices Ilhoon Shin Seoul National University of Science & Technology ilhoon.shin@snut.ac.kr Abstract As the amount of digitized
More informationSwitch Architecture for Efficient Transfer of High-Volume Data in Distributed Computing Environment
Switch Architecture for Efficient Transfer of High-Volume Data in Distributed Computing Environment SANJEEV KUMAR, SENIOR MEMBER, IEEE AND ALVARO MUNOZ, STUDENT MEMBER, IEEE % Networking Research Lab,
More informationUnderstanding the ESVA Architecture
Understanding the ESVA Architecture Overview Storage virtualization is the basis of the ESVA (Enterprise Scalable Virtualized Architecture). The virtualized storage powered by the architecture is highly
More informationEfficient Power Management of Heterogeneous Soft Real-Time Clusters
University of Nebraska - Lincoln DigitalCommons@University of Nebraska - Lincoln CSE Technical reports Computer Science and Engineering, Department of 5-24-8 Efficient Power Management of Heterogeneous
More informationBuilding a Single Distributed File System from Many NFS Servers -or- The Poor-Man s Cluster Server
Building a Single Distributed File System from Many NFS Servers -or- The Poor-Man s Cluster Server Dan Muntz Hewlett-Packard Labs 1501 Page Mill Rd, Palo Alto CA 94304, USA dmuntz@hpl.hp.com Tel: +1-650-857-3561
More informationAN ASSOCIATIVE TERNARY CACHE FOR IP ROUTING. 1. Introduction. 2. Associative Cache Scheme
AN ASSOCIATIVE TERNARY CACHE FOR IP ROUTING James J. Rooney 1 José G. Delgado-Frias 2 Douglas H. Summerville 1 1 Dept. of Electrical and Computer Engineering. 2 School of Electrical Engr. and Computer
More informationA Dynamic NOC Arbitration Technique using Combination of VCT and XY Routing
727 A Dynamic NOC Arbitration Technique using Combination of VCT and XY Routing 1 Bharati B. Sayankar, 2 Pankaj Agrawal 1 Electronics Department, Rashtrasant Tukdoji Maharaj Nagpur University, G.H. Raisoni
More informationSocket Cloning for Cluster-Based Web Servers
Socket Cloning for Cluster-Based s Yiu-Fai Sit, Cho-Li Wang, Francis Lau Department of Computer Science and Information Systems The University of Hong Kong E-mail: {yfsit, clwang, fcmlau}@csis.hku.hk Abstract
More informationPreface. Fig. 1 Solid-State-Drive block diagram
Preface Solid-State-Drives (SSDs) gained a lot of popularity in the recent few years; compared to traditional HDDs, SSDs exhibit higher speed and reduced power, thus satisfying the tough needs of mobile
More informationPERSONAL communications service (PCS) provides
646 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 5, NO. 5, OCTOBER 1997 Dynamic Hierarchical Database Architecture for Location Management in PCS Networks Joseph S. M. Ho, Member, IEEE, and Ian F. Akyildiz,
More informationSome Joules Are More Precious Than Others: Managing Renewable Energy in the Datacenter
Some Joules Are More Precious Than Others: Managing Renewable Energy in the Datacenter Christopher Stewart The Ohio State University cstewart@cse.ohio-state.edu Kai Shen University of Rochester kshen@cs.rochester.edu
More informationAn Evaluation of Caching Strategies for Clustered Web Servers
An Evaluation of Caching Strategies for Clustered Web Servers Thomas Larkin A dissertation submitted to the University of Dublin, in partial fulfilment of the requirements for the degree of Master of Science
More informationMEMORY/RESOURCE MANAGEMENT IN MULTICORE SYSTEMS
MEMORY/RESOURCE MANAGEMENT IN MULTICORE SYSTEMS INSTRUCTOR: Dr. MUHAMMAD SHAABAN PRESENTED BY: MOHIT SATHAWANE AKSHAY YEMBARWAR WHAT IS MULTICORE SYSTEMS? Multi-core processor architecture means placing
More informationChris Moultrie Dr. Prasad CSc 8350 [17] S. Gurumurthi, A. Sivasubramaniam, M. Kandemir, and H. Franke, "DRPM: dynamic speed control for power
[1] This paper outlines different strategies for a new, improved version of raid named EERaid. They were able to produce savings of 60% and 70% of the energy used in typical RAID setups while keeping the
More informationDelay Performance of the New Explicit Loss Notification TCP Technique for Wireless Networks
Delay Performance of the New Explicit Loss Notification TCP Technique for Wireless Networks Wenqing Ding and Abbas Jamalipour School of Electrical and Information Engineering The University of Sydney Sydney
More informationEnergy Management of MapReduce Clusters. Jan Pohland
Energy Management of MapReduce Clusters Jan Pohland 2518099 1 [maps.google.com] installed solar panels on headquarters 1.6 MW (1,000 homes) invested $38.8 million North Dakota wind farms 169.5 MW (55,000
More informationAn Optimized Virtual Machine Migration Algorithm for Energy Efficient Data Centers
International Journal of Engineering Science Invention (IJESI) ISSN (Online): 2319 6734, ISSN (Print): 2319 6726 Volume 8 Issue 01 Ver. II Jan 2019 PP 38-45 An Optimized Virtual Machine Migration Algorithm
More informationChapter 3. Design of Grid Scheduler. 3.1 Introduction
Chapter 3 Design of Grid Scheduler The scheduler component of the grid is responsible to prepare the job ques for grid resources. The research in design of grid schedulers has given various topologies
More informationCapacity Planning for Application Design
WHITE PAPER Capacity Planning for Application Design By Mifan Careem Director - Solutions Architecture, WSO2 1. Introduction The ability to determine or forecast the capacity of a system or set of components,
More informationAn Enhanced Dynamic Packet Buffer Management
An Enhanced Dynamic Packet Buffer Management Vinod Rajan Cypress Southeast Design Center Cypress Semiconductor Cooperation vur@cypress.com Abstract A packet buffer for a protocol processor is a large shared
More informationAdaptive Prefetching Technique for Shared Virtual Memory
Adaptive Prefetching Technique for Shared Virtual Memory Sang-Kwon Lee Hee-Chul Yun Joonwon Lee Seungryoul Maeng Computer Architecture Laboratory Korea Advanced Institute of Science and Technology 373-1
More informationSMD149 - Operating Systems - Multiprocessing
SMD149 - Operating Systems - Multiprocessing Roland Parviainen December 1, 2005 1 / 55 Overview Introduction Multiprocessor systems Multiprocessor, operating system and memory organizations 2 / 55 Introduction
More informationOverview. SMD149 - Operating Systems - Multiprocessing. Multiprocessing architecture. Introduction SISD. Flynn s taxonomy
Overview SMD149 - Operating Systems - Multiprocessing Roland Parviainen Multiprocessor systems Multiprocessor, operating system and memory organizations December 1, 2005 1/55 2/55 Multiprocessor system
More informationForwarding Requests among Reverse Proxies
Forwarding Requests among Reverse Proxies Limin Wang Fred Douglis Michael Rabinovich Department of Computer Science Princeton University Princeton, NJ 08544 lmwang@cs.princeton.edu AT&T Labs-Research 180
More informationINTERNATIONAL JOURNAL OF ADVANCED RESEARCH IN ENGINEERING AND TECHNOLOGY (IJARET)
INTERNATIONAL JOURNAL OF ADVANCED RESEARCH IN ENGINEERING AND TECHNOLOGY (IJARET) ISSN 0976-6480 (Print) ISSN 0976-6499 (Online) Volume 4, Issue 1, January- February (2013), pp. 50-58 IAEME: www.iaeme.com/ijaret.asp
More informationCache Performance Research for Embedded Processors
Available online at www.sciencedirect.com Physics Procedia 25 (2012 ) 1322 1328 2012 International Conference on Solid State Devices and Materials Science Cache Performance Research for Embedded Processors
More informationTitan SiliconServer for Oracle 9i
Titan SiliconServer for 9i Abstract Challenges facing deployment include the ever-growing size of the database and performance scalability. Enterprise businesses rely heavily on databases for day-to-day
More informationThe NIDS Cluster: Scalable, Stateful Network Intrusion Detection on Commodity Hardware
The NIDS Cluster: Scalable, Stateful Network Intrusion Detection on Commodity Hardware Matthias Vallentin 1, Robin Sommer 2,3, Jason Lee 2, Craig Leres 2 Vern Paxson 3,2, and Brian Tierney 2 1 TU München
More informationInternational Journal of Scientific & Engineering Research Volume 8, Issue 5, May ISSN
International Journal of Scientific & Engineering Research Volume 8, Issue 5, May-2017 106 Self-organizing behavior of Wireless Ad Hoc Networks T. Raghu Trivedi, S. Giri Nath Abstract Self-organization
More informationOnline Optimization of VM Deployment in IaaS Cloud
Online Optimization of VM Deployment in IaaS Cloud Pei Fan, Zhenbang Chen, Ji Wang School of Computer Science National University of Defense Technology Changsha, 4173, P.R.China {peifan,zbchen}@nudt.edu.cn,
More informationAnalysis of Cluster based Routing Algorithms in Wireless Sensor Networks using NS2 simulator
Analysis of Cluster based Routing Algorithms in Wireless Sensor Networks using NS2 simulator Ashika R. Naik Department of Electronics & Tele-communication, Goa College of Engineering (India) ABSTRACT Wireless
More informationA Low Energy Clustered Instruction Memory Hierarchy for Long Instruction Word Processors
A Low Energy Clustered Instruction Memory Hierarchy for Long Instruction Word Processors Murali Jayapala 1, Francisco Barat 1, Pieter Op de Beeck 1, Francky Catthoor 2, Geert Deconinck 1 and Henk Corporaal
More informationA Mediator based Dynamic Server Load Balancing Approach using SDN
I J C T A, 9(14) 2016, pp. 6647-6652 International Science Press A Mediator based Dynamic Server Load Balancing Approach using SDN Ashwati Nair 1, Binya mol M. G. 2 and Nima S. Nair 3 ABSTRACT In the modern
More informationA LITERATURE SURVEY ON CPU CACHE RECONFIGURATION
A LITERATURE SURVEY ON CPU CACHE RECONFIGURATION S. Subha SITE, Vellore Institute of Technology, Vellore, India E-Mail: ssubha@rocketmail.com ABSTRACT CPU caches are designed with fixed number of sets,
More informationEastern Mediterranean University School of Computing and Technology CACHE MEMORY. Computer memory is organized into a hierarchy.
Eastern Mediterranean University School of Computing and Technology ITEC255 Computer Organization & Architecture CACHE MEMORY Introduction Computer memory is organized into a hierarchy. At the highest
More informationAdaptive Real-time Monitoring Mechanism for Replicated Distributed Video Player Systems
Adaptive Real-time Monitoring Mechanism for Replicated Distributed Player Systems Chris C.H. Ngan, Kam-Yiu Lam and Edward Chan Department of Computer Science City University of Hong Kong 83 Tat Chee Avenue,
More informationDetermining the Number of CPUs for Query Processing
Determining the Number of CPUs for Query Processing Fatemah Panahi Elizabeth Soechting CS747 Advanced Computer Systems Analysis Techniques The University of Wisconsin-Madison fatemeh@cs.wisc.edu, eas@cs.wisc.edu
More informationDesigning Efficient Systems Services and Primitives for Next-Generation Data-Centers
Designing Efficient Systems Services and Primitives for Next-Generation Data-Centers K. Vaidyanathan S. Narravula P. Balaji D. K. Panda Department of Computer Science and Engineering The Ohio State University
More informationBPCLC: An Efficient Write Buffer Management Scheme for Flash-Based Solid State Disks
BPCLC: An Efficient Write Buffer Management Scheme for Flash-Based Solid State Disks Hui Zhao 1, Peiquan Jin *1, Puyuan Yang 1, Lihua Yue 1 1 School of Computer Science and Technology, University of Science
More informationCS-534 Packet Switch Architecture
CS-534 Packet Switch Architecture The Hardware Architect s Perspective on High-Speed Networking and Interconnects Manolis Katevenis University of Crete and FORTH, Greece http://archvlsi.ics.forth.gr/~kateveni/534
More informationIX: A Protected Dataplane Operating System for High Throughput and Low Latency
IX: A Protected Dataplane Operating System for High Throughput and Low Latency Belay, A. et al. Proc. of the 11th USENIX Symp. on OSDI, pp. 49-65, 2014. Reviewed by Chun-Yu and Xinghao Li Summary In this
More informationOptimized Paging Cache Mappings for efficient location management Hyun Jun Lee, Myoung Chul Jung, and Jai Yong Lee
Optimized Paging Cache Mappings for efficient location management Hyun Jun Lee, Myoung Chul Jung, and Jai Yong Lee Abstract Cellular IP maintains distributed cache for location management and routing purposes.
More informationIntegrating VVVVVV Caches and Search Engines*
Global Internet: Application and Technology Integrating VVVVVV Caches and Search Engines* W. Meira Jr. R. Fonseca M. Cesario N. Ziviani Department of Computer Science Universidade Federal de Minas Gerais
More information