Elastic Resource Provisioning for Cloud Data Center

Thant Zin Tun and Thandar Thein

Abstract: Cloud data centers promise a flexible, scalable, powerful and cost-effective execution environment to users. Although cloud computing infrastructures offer advantages such as on-demand resource scalability, several challenges remain. The amount of resources needed in a cloud data center is often dynamic because the workload demand is dynamic. Provisioning the right amount of resources for dynamic demand while meeting service level objectives (SLOs) is therefore a critical issue in cloud data centers. An elastic resource provisioning mechanism for the cloud data center is proposed by applying a time-shared policy to both virtual machines (VMs) and tasks. It focuses on maximizing the utilization of resources while minimizing the cost associated with them. The proposed system is simulated and evaluated with real-world workload traces. The evaluation results show that the proposed provisioning system achieves high resource utilization when allocating resources in the cloud data center.

Keywords: Data Center, Resource Provisioning, Service Level Objective, Time-Shared Policy

I. INTRODUCTION

A CLOUD is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources, based on service-level agreements (SLAs) established through negotiation between the service provider and consumers. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud providers offer services as Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). When a cloud provider accepts a request from a customer, it must create the appropriate number of virtual machines (VMs) and allocate resources to support them [10].

Cloud service providers face many resource-demand challenges as cloud computing grows in popularity and usage. Data center resource needs are often dynamic, varying as a result of changes in the overall workload. A key problem when provisioning virtual infrastructures is how to deal with situations where the demand for resources varies over time. Resource provisioning is the mapping and scheduling of VMs onto physical cloud servers within a cloud. Cloud providers must utilize and allocate scarce resources within the limits of the cloud environment so as to meet the needs of dynamic resource demand. Cloud data center providers that either do not offer dynamic resource provisioning or do not support any performance guarantee suffer from inefficient utilization of resources and SLO violations. The cloud provider's task is, therefore, to make sure that resource allocation requests are satisfied with specific probability and timeliness. These requirements are formalized in infrastructure SLAs between the service owner and cloud provider, separate from the high-level SLAs between the service owner and its end users.

Thant Zin Tun, University of Computer Studies, Yangon, Myanmar (email: thantzintunster@gmail.com). Thandar Thein, University of Computer Studies, Yangon, Myanmar (email: thandartheinn@gmail.com).
SLA-oriented capacity planning ensures that there is enough capacity to guarantee service elasticity with minimal over-provisioning. The IaaS provider therefore defines Service Level Objectives to guarantee the SLA for the dynamic workload demand of different resources. To avoid under-provisioning, which leads to compensation costs for the provider, cloud providers try to predict the dynamic workload demand in advance using different methods. In this paper, SLO Granted Elastic Resource Prediction (SGERP) is used to predict CPU resource usage [8]. At the same time, the IaaS cloud provider strives to over-provision capacity as little as possible, thus minimizing operational costs. In this paper, we propose a resource provisioning system that provisions resources for the IaaS cloud data center so as to achieve high utilization of data center resources.

The rest of the paper is organized as follows. The proposed architecture is presented in the next section. Then, the detailed design of the provisioning strategies is discussed, followed by the experimental results. Finally, we discuss related work and provide concluding remarks and future work.

II. SYSTEM ARCHITECTURE

An elastic resource provisioning system is proposed using a time-shared allocation policy for both VMs and tasks. It aims to achieve high utilization of data center resources while preventing over-provisioning. In the proposed provisioning system, two different provisioning strategies are used to decide how hosts and VMs are created.

[Fig. 1 The Architecture of the Resource Provisioning System: workload traces supply resource usage data to the Resource Usage Predictor; the predicted resource usage drives the Elastic Resource Provisioning System, which sends provisioning information to the Resource Allocator of the IaaS cloud provider managing hosts 1..n and their VMs.]

The architecture of the proposed system is shown in Figure 1. In the resource provisioning system, SLO Granted Elastic Resource Prediction (SGERP) results are used to predict the CPU workload in order to avoid under-provisioning of the dynamic resource demand. The predicted resource usages are consumed by the resource provisioning system, and the resulting provisioning information is sent to the resource allocator of the IaaS cloud provider, which allocates the resources requested by the cloud customers.

A cloud data center is composed of a set of hosts, which are responsible for managing VMs during their life cycles. A host is a component that represents a physical computing node in a cloud; it is assigned a pre-configured processing capability and a scheduling policy for allocating processing cores to virtual machines. The host component implements interfaces that support modeling and simulation of both single-core and multi-core nodes. In this paper, we focus on CPU resource usage when provisioning for the tasks. The CPU usage predicted by SGERP is used in our provisioning system, and the prediction is performed in batch mode. First, the real-world workload traces are clustered based on the deadlines of the requests to handle the non-uniform execution times and wait times of the data center requests.

III. RESOURCE PROVISIONING MODEL

The resource provisioning system is developed to handle dynamic-workload resource provisioning ahead of need in the cloud data center. In this provisioning system we use the SGERP prediction model, which integrates a signal processing approach and a statistical learning approach to predict both repeating-pattern and non-repeating-pattern workloads [8]. To overcome under-provisioning, SLO analysis is conducted in the prediction system: by increasing the maximum predicted value by 5%, under-provisioning by the predictor can be almost eliminated and the SLOs of the cloud provider can be met. SGERP and time-shared resource allocation are used together in the provisioning system to achieve the right amount of resource provisioning. In this paper, we focus on the resource allocation strategies of the data centers.

A. RESOURCE PROVISIONING OF CLOUDS

One of the key advantages of a cloud computing infrastructure is the extensive deployment of virtualization technologies and tools. Hence, compared to Grids, clouds have a virtualization layer that acts as an execution and hosting environment for cloud-based application services. The host components in cloud data centers implement interfaces that support modeling and simulation of both single-core and multi-core nodes. The data center entity manages a number of host entities. The hosts are assigned to one or more VMs based on a VM allocation policy defined by the cloud service provider. The control policies for VM life-cycle operations, such as VM creation, VM destruction, and VM migration, constitute the provisioning of a host to a VM. Similarly, one or more application services can be provisioned within a single VM instance, referred to as application provisioning in the context of cloud computing.
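For illustration, the following minimal Java sketch shows one way the 5% SLO headroom described above could be applied to a batch of SGERP predictions. It is a sketch rather than the authors' implementation; the class and method names are hypothetical, and reading the 5% rule as "provision for 105% of the maximum predicted value" is an assumption.

// Minimal sketch of the SLO headroom step (assumed reading: provision for
// 105% of the maximum predicted CPU usage in the batch). Names are illustrative.
public class SloHeadroomSketch {

    /** Capacity to provision for one batch of predicted CPU usage values. */
    static double provisionedCapacity(double[] predictedCpuUsage) {
        double max = 0.0;
        for (double v : predictedCpuUsage) {
            max = Math.max(max, v);        // maximum predicted value in the batch
        }
        return 1.05 * max;                 // increase it by 5% to avoid under-provisioning
    }

    public static void main(String[] args) {
        // Hypothetical batch of predicted CPU demands (arbitrary units) for one cluster.
        double[] predicted = {12.0, 18.5, 25.0, 22.3};
        System.out.printf("Provisioned CPU capacity: %.2f%n", provisionedCapacity(predicted));
    }
}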
Because VMs share a host, the amount of hardware resources available to each VM is constrained by the total processing power and system bandwidth available within the host. The critical factor to be considered during the VM provisioning process, namely avoiding the creation of a VM that demands more resources than are available within the host, is what we refer to as resource provisioning. To allow simulation of different provisioning policies under varying levels of performance isolation, we apply the time-shared allocation policy that CloudSim supports. VM provisioning takes place at two levels: first at the host level and then at the VM level. At the host level, it is possible to specify how much of the overall processing power of each core is assigned to each VM. At the VM level, the VM assigns a fixed amount of its available resources to the individual task units hosted within its execution engine.

B. TIME-SHARED ALLOCATION POLICY

CloudSim supports time-shared and space-shared resource allocation policies for VMs and tasks. A time-shared allocation example for both VMs and task units is shown in Fig. 2. In this figure, a host with two CPU cores receives a request for hosting two VMs, each of which requires two cores and plans to host four task units; tasks T1, T2, T3, and T4 are to be hosted in the first VM, whereas T5, T6, T7, and T8 are to be hosted in the second. The CPU resources of the host are concurrently shared by the VMs, and the share of each VM is concurrently divided among the task units assigned to it. In this case, there are no queues either for virtual machines or for task units.

We propose two provisioning scenarios based on how tasks are allocated to each VM while using the time-shared provisioning policy. We assume the VM characteristics are homogeneous in both scenarios. In provisioning strategy 1, the tasks are assigned to their corresponding VMs and use only the resources of the VMs on which they are hosted. In provisioning strategy 2, the available resources of the VMs are shared among the tasks.
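As a small illustration of how the time-shared policy divides capacity at both levels, the following Java sketch computes the effective share each task receives in the Fig. 2 example. It is an illustrative sketch under stated assumptions (a hypothetical per-core MIPS rating), not CloudSim code.

// Sketch of two-level time sharing: the host's total capacity is shared
// concurrently by its VMs, and each VM's share is shared concurrently by its
// tasks, with no queues for VMs or task units.
public class TimeSharedShareSketch {

    /** Effective MIPS each task receives under time-shared allocation at both levels. */
    static double mipsPerTask(int hostCores, double mipsPerCore, int vmsOnHost, int tasksPerVm) {
        double hostCapacity = hostCores * mipsPerCore;   // total host processing power
        double perVm = hostCapacity / vmsOnHost;         // concurrent share of each VM
        return perVm / tasksPerVm;                       // concurrent share of each task
    }

    public static void main(String[] args) {
        // Example from Fig. 2: a 2-core host, two VMs, four tasks per VM.
        // The per-core rating of 1000 MIPS is an assumed value for illustration.
        double share = mipsPerTask(2, 1000.0, 2, 4);
        System.out.printf("Each task receives %.0f MIPS (%.2f of one core)%n",
                share, share / 1000.0);
        // Prints 250 MIPS, i.e. one quarter of a core, matching the example in the text.
    }
}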

[Fig. 2 Time-shared allocation for VMs and tasks: the two CPU cores of the host are shared over time among tasks T1-T8.]

[Fig. 3 Example of Resource Provisioning Strategy 1]

C. PROVISIONING STRATEGY 1

A host with two CPU cores receives a request for hosting two VMs, each of which requires two cores and plans to host four task units. The tasks are assigned to their corresponding VMs, on which they are hosted. The resource requests of each task vary depending on the type of task. Resource provisioning strategy 1 using the time-shared allocation policy is shown in Algorithm 1, and Figure 3 presents an example of provisioning strategy 1 with sixteen tasks. When VMs and tasks are allocated as shown in Fig. 3, the number of hosts needed for these tasks is calculated as in (1); if the result is not a multiple of four, it is increased to the next higher multiple of four. The symbols and notation used in the equations are listed in Table I.

N_host = (Total_task x N_hosts(each task)) / N_tasks(each host) + ((Total_CPU - Total_task) x N_hosts(each task)) / N_tasks(each VM)   (1)

TABLE I
SYMBOLS AND NOTATIONS
Symbol               Definition
N_host               Number of hosts for the tasks
Total_task           Total number of tasks
Total_CPU            Total number of CPU resources
N_hosts(each task)   Number of hosts for each task
N_tasks(each host)   Number of tasks in each host
N_tasks(each VM)     Number of tasks in each VM
D                    Deadline
Rt                   Run time of each job
Wt                   Wait time of each job
wf                   Waiting factor for each job

Algorithm 1: Elastic Resource Provisioning Strategy 1
Input: x // resource usage data
Output: y // number of hosts
1. Classify the resource usage data into clusters // k-means
2. for each cluster in k clusters
3.   total CPU = calculate the total number of CPU requests
4.   y = Calculate_total_number_of_host(total CPU, total task)
5. end for

Calculate_total_number_of_host(total CPU, total task)
Input: total CPU, total task
Output: Nhost
1. Nhost = (total task x Nhosts(each task)) / Ntasks(each host) + ((total CPU - total task) x Nhosts(each task)) / Ntasks(each VM)
2. if Nhost is not a multiple of four
3.   Nhost = the next higher multiple of four
4. end if

The requests are processed in batch mode for both prediction and provisioning; we do not provision for each request individually. Hence, (1) is applied to all requests in the batch, and the number of hosts calculated by (1) is the maximum number of hosts possibly needed by any request in the batch.

D. PROVISIONING STRATEGY 2

The tasks are assigned to the available VMs in the hosts using the time-shared allocation policy for both VMs and tasks. Resource provisioning strategy 2 using the time-shared allocation policy is shown in Algorithm 2, and an example is shown in Fig. 4. In this example, there are sixteen tasks assigned to their corresponding VMs, where T7 (task 7) requests four CPU cores, T15 requests two CPU cores, and the other tasks request one core each. According to the policy each task gets one fourth of a core, so a task such as task 1, which requests one core, needs four VMs to complete.
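To make the host-count step concrete, the sketch below implements the Calculate_total_number_of_host step for strategy 1 (Eq. (1)) and for strategy 2 (the formula given with Algorithm 2 in the next subsection), including the round-up to a multiple of four. It is an illustrative sketch, not the authors' code; the ratio values for N_hosts(each task), N_tasks(each host) and N_tasks(each VM) and the example batch are assumptions.

// Hedged sketch of the host-count calculations in Algorithms 1 and 2.
// hostsPerTask, tasksPerHost and tasksPerVm stand for Nhosts(each task),
// Ntasks(each host) and Ntasks(each VM) in Table I; their values depend on the
// host/VM configuration and are assumed here.
public class HostCountSketch {

    /** Round up to the next multiple of four, as required by both algorithms. */
    static int roundUpToMultipleOfFour(double hosts) {
        int n = (int) Math.ceil(hosts);
        return (n % 4 == 0) ? n : n + (4 - n % 4);
    }

    /** Strategy 1, Eq. (1): tasks use only their own VMs; extra CPU requests are charged per VM. */
    static int hostsStrategy1(int totalTasks, int totalCpu,
                              double hostsPerTask, double tasksPerHost, double tasksPerVm) {
        double hosts = (totalTasks * hostsPerTask) / tasksPerHost
                     + ((totalCpu - totalTasks) * hostsPerTask) / tasksPerVm;
        return roundUpToMultipleOfFour(hosts);
    }

    /** Strategy 2, Eq. (2): the available VM resources in a host are shared among the tasks. */
    static int hostsStrategy2(int totalCpu, double hostsPerTask, double tasksPerHost) {
        return roundUpToMultipleOfFour((totalCpu * hostsPerTask) / tasksPerHost);
    }

    public static void main(String[] args) {
        // Hypothetical batch inspired by the Fig. 3/Fig. 4 example: 16 tasks requesting
        // 20 CPU cores in total (14 x 1 core + 1 x 4 cores + 1 x 2 cores), with assumed
        // ratios of 1 host per task, 8 tasks per host and 4 tasks per VM.
        System.out.println("Strategy 1 hosts: " + hostsStrategy1(16, 20, 1.0, 8.0, 4.0));
        System.out.println("Strategy 2 hosts: " + hostsStrategy2(20, 1.0, 8.0));
    }
}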

Algorithm 2: Elastic Resource Provisioning Strategy 2
Input: x // resource usage data
Output: y // number of hosts
1. Classify the resource usage data into clusters // k-means
2. for each cluster in k clusters
3.   total CPU = calculate the total number of CPU requests
4.   y = Calculate_total_number_of_host(total CPU, total task)
5. end for

Calculate_total_number_of_host(total CPU, total task)
Input: total CPU, total task
Output: Nhost
1. Nhost = (total CPU x Nhosts(each task)) / Ntasks(each host)
2. if Nhost is not a multiple of four
3.   Nhost = the next higher multiple of four
4. end if

[Fig. 4 Example of time-shared allocation for VMs and tasks for Strategy 2]

The provisioning scenario of Fig. 3 changes to that of Fig. 4 when strategy 2 is used. T7 requests four CPU cores, T15 requests two CPU cores, and the other tasks request one core each; T7 and T15 are assigned to all the available VMs in a host. The number of hosts needed for the tasks is calculated as in (2); if the result is not a multiple of four, it is increased to the next higher multiple of four.

N_host = (Total_CPU x N_hosts(each task)) / N_tasks(each host)   (2)

IV. PERFORMANCE EVALUATION

A. SIMULATION SET UP

The simulated model is composed of one cloud data center containing hosts. Each host has two CPU cores and receives a request for hosting two VMs, each of which requires two cores and plans to host four task units, as discussed above. The time-shared policy is used for resource provisioning, and new VMs are created as needed, because the resource provisioning decision is the main goal of this work.

TABLE II
SELECTED IMPORTANT FEATURES OF THE THREE WORKLOAD TRACES
HPC2N                            CEA-Curie                        Anon
Job Number                       Job Number                       JobId
Submit Time                      Submit Time                      Submit Time
Wait Time                        Wait Time                        Wait Time
Run Time                         Run Time                         Run time
Number of Allocated Processors   Number of Allocated Processors   Nproc
Average CPU Time Used            Average CPU Time Used            UsedMemory
Used Memory                      Used Memory                      ReqNProcs
Deadline                         Deadline                         Deadline
-                                Request Number of Processor      ReqTime
-                                -                                Status

The output metric collected for each scenario is the average resource utilization rate, which we define as the ratio between the actual resource usage and the maximum available resource of the hosts in the data center. In this paper, we use three workload traces from the Parallel Workloads Archive [1]; their important selected features are shown in Table II. The simulation of each scenario was repeated 10 times for the three workload traces, and we report the average of each output metric.

B. SIMULATION SCENARIO

Depending on the nature of the workload, we vary the total capacity of the data center, because the workloads have non-uniform execution times and wait times of the requests. In this case the workload can be decomposed according to the associated deadlines. The deadline D of each request is calculated as in (3); in our experiments, the waiting factor is set to five seconds. The capacity of the data center is used more efficiently with the clustered workloads. Clustering is the process of partitioning or grouping a given set of patterns into disjoint clusters and is viewed as an unsupervised method of data analysis. K-means clustering is a method commonly used to automatically partition a data set into k groups. The process flow of k-means clustering is shown in Fig. 5.
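As an illustration of this clustering step, below is a minimal one-dimensional k-means sketch over request deadlines, following the flow of Fig. 5 (choose k, assign requests to the nearest centroid, recompute centroids, stop when no object moves). It is an illustrative sketch, not the authors' implementation; the value of k and the sample deadlines are hypothetical.

import java.util.Arrays;
import java.util.Random;

// One-dimensional k-means over request deadlines, mirroring the Fig. 5 flow.
public class DeadlineKMeansSketch {

    static int[] cluster(double[] deadlines, int k, int maxIterations) {
        Random rnd = new Random(42);
        double[] centroids = new double[k];
        for (int j = 0; j < k; j++) {
            centroids[j] = deadlines[rnd.nextInt(deadlines.length)]; // initial centroids
        }
        int[] assignment = new int[deadlines.length];
        Arrays.fill(assignment, -1);
        for (int iter = 0; iter < maxIterations; iter++) {
            boolean moved = false;
            // Assign each request to the nearest centroid (grouping by minimum distance).
            for (int i = 0; i < deadlines.length; i++) {
                int best = 0;
                for (int j = 1; j < k; j++) {
                    if (Math.abs(deadlines[i] - centroids[j])
                            < Math.abs(deadlines[i] - centroids[best])) {
                        best = j;
                    }
                }
                if (assignment[i] != best) {
                    assignment[i] = best;
                    moved = true;
                }
            }
            if (!moved) break;                        // "no objects move" -> end
            // Recompute each centroid as the mean deadline of its cluster.
            double[] sum = new double[k];
            int[] count = new int[k];
            for (int i = 0; i < deadlines.length; i++) {
                sum[assignment[i]] += deadlines[i];
                count[assignment[i]]++;
            }
            for (int j = 0; j < k; j++) {
                if (count[j] > 0) centroids[j] = sum[j] / count[j];
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        double[] deadlines = {6, 50, 281, 300, 1200, 3300, 9000, 40000}; // hypothetical values
        System.out.println(Arrays.toString(cluster(deadlines, 3, 100)));
    }
}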
TABLE III
CLUSTER SIZE AND DEADLINE RANGE OF THE CEA-CURIE WORKLOAD
Cluster   Deadline Range    No. of Tasks (Size of Cluster)
1         6-281             9246
2         284-1241          741
3         1264-3253         124
4         3282-8345         73
5         9033-21619        31
6         36006-132249      32

We use k-means clustering to classify the workloads into 10 groups based on the deadlines of the requests. 100000 records
of each workload are grouped according to their deadlines. The characteristics of each cluster of the Anon workload trace are described in Table III.

[Fig. 5 The Flow Diagram of K-means Clustering: start; choose the number of clusters k; determine the centroids; compute the distance of objects to the centroids; group objects based on minimum distance; if objects still move, repeat from the centroid step; otherwise end.]

C. EXPERIMENTAL RESULTS

We test the simulated provisioning strategies with the clusters described in the previous section. The resource utilization of each strategy is calculated as in (4). The requests are processed in batch mode for both prediction and provisioning; we do not provision for each request individually.

Utilization rate (%) = (actual resource usage / maximum available resource of the hosts) x 100   (4)

Figure 6 compares the utilization, in percent, of both provisioning strategies on the Anon Grid workload. According to Fig. 6, provisioning strategy 1 scores a higher utilization rate than strategy 2.

[Fig. 6 The Utilization Rate of the Anon Workload for both Strategies]

The utilization, in percent, of both policies on the CEA-Curie workload trace is shown in Fig. 7. The maximum utilization rate is 52% for strategy 1 and approximately 99% for strategy 2.

[Fig. 7 The Utilization Rate of the CEA-Curie Workload for both Policies]

On the HPC2N workload, a maximum utilization rate of 73% is achieved for strategy 1 and 98% for strategy 2, as shown in Fig. 8.

[Fig. 8 The Utilization Rate of the HPC2N Workload for both Policies]

The average resource utilization rate of the three workload traces, with 100000 records per workload and 6 clusters, is shown in Fig. 9. We run ten tests of 10000 records each and report the average of all output metrics. According to Fig. 9, strategy 2 achieves a high utilization rate of resource provisioning with the batch prediction of the resource usages.

[Fig. 9 Average Resource Utilization of the three Workload traces (6 clusters)]

The average resource utilization rate of the three workload traces, with 100000 records per workload and 5 clusters, is shown in Fig. 10. We run ten tests of 10000 records each and report the average of all output metrics. According to Fig. 10,
we can see that strategy 2 achieves a high utilization rate of resource provisioning with the batch prediction of the resource usages.

[Fig. 10 Average Resource Utilization of the three Workload traces (5 clusters)]

V. RELATED WORK

B. Urgaonkar et al. [3] used virtual machines (VMs) to implement dynamic provisioning of multi-tiered applications based on an underlying queuing model; for each physical host, however, only a single VM can be run. T. Wood et al. [7] use an infrastructure similar to that of [3]. They concentrate primarily on dynamic migration of VMs to support dynamic provisioning, and define a unique metric based on the consumption data of three resources (CPU, network and memory) to make the migration decision.

R. N. Calheiros et al. [5] presented a provisioning technique that automatically adapts to application workload changes, facilitating adaptive system management and offering end users guaranteed Quality of Service (QoS) in large, autonomous, and highly dynamic environments. They model the behavior and performance of applications and cloud-based IT resources to adaptively serve end-user requests, using analytical performance (queuing network) models and workload information to supply intelligent input about system requirements to an application provisioner that has limited information about the physical infrastructure.

S. K. Garg et al. [6] proposed an admission control and scheduling mechanism that maximizes resource utilization and profit while ensuring the SLA requirements of users. They use an artificial-neural-network-based prediction model trained with the standard Back Propagation (BP) algorithm. The number of hidden layers is varied to tune the performance of the network and was found, through iterations, to be optimal at 5 hidden layers. In their experimental study, the mechanism is shown to provide substantial improvement over static server consolidation and to reduce SLA violations.

R. Buyya et al. [4] presented the vision, challenges, and architectural elements for energy-efficient management of cloud computing environments. They focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., the hardware, power units, cooling and software). Unlike our system, their provisioning scheme works holistically to boost data center energy efficiency and performance.

X. Kong et al. [9] presented a fuzzy prediction method that models the uncertain workload and the vague availability of virtualized server nodes by using type-I and type-II fuzzy logic systems. They also proposed an efficient dynamic task scheduling algorithm named SALAF for virtualized data centers.

VI. CONCLUSION

A key problem when provisioning virtual infrastructures is how to deal with situations where the demand for resources varies over time. Resource provisioning is the mapping and scheduling of VMs onto physical cloud servers within a cloud. In this paper, we presented the design and implementation of a resource provisioning system for cloud data centers that uses two provisioning strategies based on a time-shared allocation policy for both VMs and tasks. The provisioning system is simulated and evaluated with real-world workload traces. The evaluation results show that the proposed provisioning system achieves high utilization of the resources of the cloud data center.

REFERENCES
[1] Parallel Workloads Archive, http://www.cs.huji.ac.il/labs/parallel/workload/
[2] B. Sotomayor, R. S. Montero, I. M. Llorente, and I. Foster, "Virtual infrastructure management in private and hybrid clouds," IEEE Internet Computing, 13(5):14-22, September/October 2009.
[3] B. Sotomayor, R. S. Montero, I. M. Llorente, and I. Foster, "Virtual infrastructure management in private and hybrid clouds," IEEE Internet Computing, 13(5):14-22, September/October 2009.
[4] R. Buyya, A. Beloglazov, and J. Abawajy, "Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges," Proceedings of the 7th High Performance Computing and Simulation (HPCS 2009) Conference, Leipzig, Germany, June 21-24, 2009.
[5] R. N. Calheiros, R. Ranjan, and R. Buyya, "Virtual Machine Provisioning Based on Analytical Performance and QoS in Cloud Computing Environments," in International Conference on Parallel Processing (ICPP), September 2011, pp. 295-304.
[6] S. K. Garg, S. K. Gopalaiyengar, and R. Buyya, "SLA-Based Resource Provisioning for Heterogeneous Workloads in a Virtualized Cloud Datacenter," in Proceedings of the 11th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2011), Melbourne, Australia, October 2011.
[7] T. Wood, P. J. Shenoy, A. Venkataramani, and M. S. Yousif, "Black-box and gray-box strategies for virtual machine migration," in NSDI, 2007.
[8] T. Z. Tun and T. Thein, "SLO Granted Elastic Resource Prediction in Cloud Data Center," International Journal of Information Engineering, Jeju Island, Korea, December 2013.
[9] X. Kong, C. Lin, Y. Jiang, W. Yan, and X. Chu, "Efficient dynamic task scheduling in virtualized data centers with fuzzy prediction," Journal of Network and Computer Applications, 34(4), 2010, pp. 1068-1077.
[10] Z. Gong, X. Gu, and J. Wilkes, "PRESS: PRedictive Elastic ReSource Scaling for cloud systems," in Proceedings of CNSM 2010, Niagara Falls, Canada, 2010, pp. 9-16.