Simple and practical disk performance evaluation method in virtual machine environments

Teruyuki Baba and Atsuhiro Tanaka
System Platforms Research Laboratories, NEC Corporation, 1753 Shimonumabe, Nakahara-ku, Kawasaki, Kanagawa, Japan

Keywords: disk throughput, virtual machine, I/O scheduling, performance evaluation

Abstract

This paper proposes a disk access throughput evaluation method for virtual machine environments in which multiple independent virtual machines share a common physical disk drive. Disk drive simulation is one candidate for evaluating the total disk throughput, by modeling the I/O management mechanisms inside virtual machine monitors. However, building such a model might be impossible when the source code of the target virtual machine monitor is not open, so a simple evaluation method is required. We propose a simple and practical method based on an analytical model with only two types of parameters: the sequential access ratio and the disk performance profile. These two parameters can be obtained without knowledge of the operations inside a virtual machine monitor. The sequential access ratio is defined as the probability that the next disk access goes to the same file as the current one; it represents the virtual machine monitor's I/O scheduling characteristics. The disk performance profile is the relationship between seek distance and throughput, and we assume in this paper that it is measured in advance. We have developed a calculation method that combines these two parameters, and we have compared the estimated throughputs with measured ones in Xen virtual machine environments under random read accesses. The experimental results show that the errors of our method are no more than 15%.

1. INTRODUCTION

Virtualization technology has become common in the field of IT system operation and management, providing benefits such as improved utilization of computer resources, isolated systems, and easier management [1, 2]. Virtualization technologies can create multiple virtual machines (VMs) on a single physical machine, and each VM can run its own operating system (OS) as a guest OS. The software layer providing virtualization is called a virtual machine monitor (VMM). The VMM manages hardware resources and arbitrates the requests of the multiple guest OSs and their applications: it divides the physical resources (i.e., CPUs, disks, memory, and network bandwidth) into multiple logical resources, from which the virtual machines are configured. Examples of VMMs are VMware [3], VirtualPC [4], and Xen [5, 6].

One of the most pressing problems faced by system administrators managing VMs is: how many VMs can be supported by a particular hardware configuration and virtualization platform? There is a trade-off between application performance and the number of VMs on a single physical machine. On the one hand, sufficient resources are required to avoid performance degradation; on the other hand, the number of VMs on a single physical machine should be increased to reduce cost. System administrators therefore need performance evaluation methods that enable them to predict performance from the amount of logical resources. Performance evaluation methods for CPU power in virtual environments have been reported in references [7, 8, 9]; these methods can estimate response times and throughputs using queuing theory. Disk performance, however, has not been sufficiently studied yet.
One of the simplest management methods in virtual machine environments is based on disk size alone [7]: each VM's disk size is chosen so that the sum of the disk sizes of all VMs does not exceed the size of the physical disk. From the performance point of view, however, disk access throughput is an important factor affecting the performance of application software. Disk throughput is affected by the position of data on the disk and by the frequency of disk accesses: it falls as the seek distance between data increases, and the total disk throughput falls when multiple applications access the disk simultaneously. In our experimental results, the disk throughput decreased to about 50% when multiple VMs ran on a single disk (see Section 2). A disk performance evaluation method for virtual machine environments is therefore desired, because multiple VMs sharing a single disk drive can dramatically degrade disk throughput. Disk drive simulations have been proposed previously [10, 11]. Since they are able to calculate accurate results, they are prospective candidates for evaluating the total disk throughput by modeling the I/O management mechanisms inside a VMM. However, building such models might be impossible when the source code of the target VMM is not open. Moreover, they generally have another drawback: simulation time is too long.

Analytical models [12, 13, 14] can reduce the computing time needed to evaluate disk performance, although their results are more approximate. Existing analytical models, however, need many parameters, such as rotation time, data transfer time, positioning time, and maximum bandwidth, and these parameters are difficult to obtain. In this paper we propose a simple method of evaluating disk access throughput in virtual machine environments. Our method addresses the main problem with existing analytical methods, namely that they need too many parameters: our model uses only two types of parameters, the sequential access ratio and the disk performance profile. The sequential access ratio represents the VMM's I/O scheduling characteristics and is defined as the probability that a disk access keeps the disk head within the same file rather than moving it to another file. The sequential access ratio depends not on the application software but on the platform, such as Linux or Xen; once the ratio has been measured with one application, it can be used to evaluate the I/O performance of other applications. The disk performance profile is the relationship between seek distance and throughput, and represents a device-specific performance characteristic. We derive the disk access throughput by combining the sequential access ratio and the disk performance profile. Both parameters can be obtained even if the source code of the VMM is not open.

The outline of the rest of the paper is as follows: Section 2 describes the virtual machine environment and the disk performance profile, Section 3 introduces our performance evaluation method, Section 4 evaluates the practicality of our method, and Section 5 summarizes the main points of this paper.

2. TARGETS

This section describes our targets, i.e., virtual machine environments and the disk performance profile. The disk performance profile depends on the disk device; the profile measured in this section is used to estimate disk throughput with the method described in Section 3.

2.1. Virtual machine environment

Our target virtual machine environment is shown in Fig. 1. Multiple VMs are created by a VMM on a single physical machine, and the VMs share a single physical disk, so applications running in different VMs access the same disk simultaneously. We focus on a local disk in this paper. Disk space is divided into multiple partitions, and a VM image file, which includes a guest OS and application data, is made in each partition for security. The locations of the partitions can be decided when system administrators design a VM system, and each VM accesses only its own partition. In Section 3 we propose a disk performance evaluation method that uses this disk access characteristic of VMs.

Fig. 1: Virtual machine environment

2.2. Disk performance profile

We measured the disk performance profile of our target disk device, i.e., the relationship between throughput and seek distance. The disk device described in this section is the same as that used for the experimental evaluations described in Section 4. The specifications of the computer we measured are as follows: the CPU is a Pentium 4 3.8 GHz, the memory size is 3 GB, and the disk is 160 GB (Western Digital WD1600JS), used as a local disk. The OS is Fedora Core 6 (Linux kernel 2.6.18). We ran a random read program against the disk device; in order to avoid the memory cache effect, it issues read system calls at random positions on the disk, each reading fixed-size data (4 KB).
In this paper, the random read throughput (B_random) is defined by

    B_random = S_read / T_run,    (Eq. 1)

where T_run is the running time of the random read program and S_read is the total amount of data read by the program during T_run. We measured random read throughputs while varying the seek distance; the experimental results are shown in Fig. 2. The horizontal axis of the graph is the average seek distance and the vertical axis is the random read throughput defined by Eq. 1.

Fig. 2: Disk performance profile of our target disk
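For concreteness, the following Python sketch measures B_random as defined in Eq. 1. It is our illustration rather than the authors' test program: the file path in the usage comment is a placeholder, the default read count matches the experiments in Section 4, and a more rigorous version would open the file with O_DIRECT to bypass the page cache entirely.

    import os, random, time

    def random_read_throughput(path, region_bytes, n_reads=15000, block=4096):
        """Estimate B_random = S_read / T_run (Eq. 1) by issuing fixed-size
        reads at uniformly random, block-aligned offsets within the first
        region_bytes of the file. Varying region_bytes varies the average
        seek distance, which is how a profile like Fig. 2 can be swept out."""
        fd = os.open(path, os.O_RDONLY)
        try:
            t0 = time.monotonic()
            s_read = 0
            for _ in range(n_reads):
                offset = random.randrange(region_bytes // block) * block
                s_read += len(os.pread(fd, block, offset))  # accumulate S_read
            t_run = time.monotonic() - t0
        finally:
            os.close(fd)
        return s_read / t_run / 1e6  # throughput in MB/sec

    # Hypothetical usage: a 2 GB test file accessed over its full extent.
    # print(random_read_throughput("/mnt/vm1/testfile", 2 * 1024**3))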

Generally, the seek distance is presented in cylinder units; in this paper, however, we present it in byte units, which are more familiar to system administrators. In our experiments, the size of one cylinder is 7.84 MB. As shown in Fig. 2, the throughput decreases to less than 50% when the seek distance is long. When multiple VMs access the disk simultaneously, there are two types of seek distances: a short distance within one file in a VM, and a long distance between VMs. A typical file size is less than 2 GB, the maximum file size of the default Linux file system; in this case, the average seek distance under uniform random access is 1 GB, and when the average seek distance is 1 GB, the throughput is 0.45 MB/sec. The seek distance between VM image files, on the other hand, is typically more than 10 GB, and when the average seek distance is 10 GB, the throughput is 0.33 MB/sec. In our method, we estimate disk throughput based on the probabilities of these two types of seek distances.

3. PERFORMANCE EVALUATION METHOD OF DISK ACCESS

This section explains our disk access performance evaluation method for virtual machine environments. Our method does not simulate every individual seek; instead, it analytically calculates representative seek distances between representative locations of each file. To obtain these representative seek distances, we classify seek distances into two types: the seek distance within a file and the seek distance between files. In virtual machine environments, the locations of VM image files are limited by predetermined disk partitions, so the two types of seek distances can be obtained from the partition sizes and the distances between partitions. Disk throughput is estimated based on the probabilities of the two types of seek distances; to express these probabilities, which depend on the I/O scheduler, a sequential access ratio is defined. The total average seek distance is then derived from the two types of seek distances and the sequential access ratio. The rest of this section describes these two definitions in detail, and Section 3.4 describes how the disk access throughput is evaluated from the calculated total average seek distance and the disk performance profile.

3.1. Classification of seek distance

First, let us define the seek distances within and between files. Fig. 3 shows the locations of files on a disk. For example, assume there are two files, file i and file j, on a disk. Let l_ij denote the average seek distance between file i and file j, and l_ii the average seek distance within file i. Assuming that the locations of disk accesses are distributed uniformly at random, the average seek distance between file i and file j (l_ij) is the distance between the mean points of file i and file j, and the average seek distance within a file (l_ii) is half the size of file i. In this paper we consider such uniform random accesses. In virtual machine environments, each VM accesses data in its own partition, because VM image files are anchored in disk partitions.

Fig. 3: Definitions of the seek distance within a file (l_ii), the seek distance between files (l_ij = l_ji), the sequential access ratio (α), and the transfer ratio to another file, (1−α)/(N−1), where N is the number of files

Fig. 4: Estimation of disk access throughput (B_est = f(l_ave)) using the disk performance profile B = f(l) and the calculated total average seek distance l_ave
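To make this classification concrete, the following sketch (ours, not from the paper; the layout numbers in the usage comment are hypothetical) derives the representative distance matrix from the file locations, taking l_ii as half the file size and l_ij as the distance between file midpoints:

    def seek_distance_matrix(starts_gb, sizes_gb):
        """Representative seek distances per Section 3.1:
        l[i][i] = half the size of file i (average seek within the file),
        l[i][j] = distance between the midpoints of files i and j."""
        mids = [s + sz / 2.0 for s, sz in zip(starts_gb, sizes_gb)]
        n = len(starts_gb)
        return [[sizes_gb[i] / 2.0 if i == j else abs(mids[i] - mids[j])
                 for j in range(n)] for i in range(n)]

    # Two 2 GB test files whose partitions start 20 GB apart:
    # seek_distance_matrix([0, 20], [2, 2]) -> [[1.0, 20.0], [20.0, 1.0]]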
The locations of the partitions are decided when system administrators design a VM system, so the two types of seek distances can be obtained easily.

3.2. Definition of sequential access ratio

The definition of the sequential access ratio is explained here; Fig. 3 illustrates it. Suppose two files, file i and file j, on a disk are accessed simultaneously. We define the sequential access ratio (α) as the probability that the next access goes to the same file as the current one. For example, if the access sequence of file numbers is iiii...i, the sequential access ratio is one (α = 1): the disk head stays within the same file (file i) after each access to file i. If the sequence is ijij...ij, the ratio is zero (α = 0). When the current access is to file i, the probability that the next access goes to any one particular other file is (1−α)/(N−1), where N is the number of files (Fig. 3 shows only two files, i and j). In this calculation, we assume that disk accesses to all files have the same priority. The sequential access ratio lies between zero and one. When the ratio is zero (α = 0), every access request moves the disk head to another file; the seek distance is long, so the disk access throughput is low. When the ratio is one (α = 1), the disk head stays within the same file; the seek distance is short, so the disk access throughput is high.
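The ratio can be estimated directly from an access trace; the tiny sketch below (our illustration) counts how often consecutive accesses stay in the same file:

    def sequential_access_ratio(trace):
        """Estimate alpha from a trace of file IDs, in dispatch order:
        the fraction of accesses whose successor hits the same file."""
        if len(trace) < 2:
            raise ValueError("need at least two accesses")
        stays = sum(a == b for a, b in zip(trace, trace[1:]))
        return stays / (len(trace) - 1)

    # sequential_access_ratio("iiii")   -> 1.0 (alpha = 1, purely sequential)
    # sequential_access_ratio("ijijij") -> 0.0 (alpha = 0, alternating files)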

In VM environments, the sequential access ratio represents the probability that disk access requests from the same VM are executed consecutively by the I/O scheduler in the VMM.

3.3. Derivation of total average seek distance

With the sequential access ratio and the two types of seek distances defined as above, we derive the total average seek distance (l_ave) as

    l_ave = (1/N) Σ_{i=1}^{N} [ α l_ii + ((1−α)/(N−1)) Σ_{j≠i} l_ij ],    (Eq. 2)

where N is the number of files accessed on the disk simultaneously, α is the sequential access ratio, l_ii is the seek distance within file i, and l_ij is the seek distance between file i and file j. The first term is the expected seek distance within file i; the second term is the sum of the expected seek distances from file i to each other file j (i ≠ j), where (1−α)/(N−1) represents the probability of an access moving to file j. The total average seek distance represents the average length of the disk head's movement when disk accesses are executed many times.

3.4. Estimation of disk throughput

Finally, the disk throughput is estimated from the performance profile of the disk device using the total average seek distance. The performance profile is the relation B = f(l) between the seek distance (l) and the throughput (B), as shown in Fig. 4; it is measured in advance by the method described in Section 2. The estimated throughput (B_est) is obtained as B_est = f(l_ave), using the l_ave calculated by Eq. 2.
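The whole method then fits in a few lines. The sketch below is ours, not the authors' implementation: the profile points are illustrative values read off Fig. 2 rather than an exact fit, and the usage lines reproduce the condition 4-2 values reported in Section 4.2.

    import bisect

    def total_average_seek_distance(alpha, l):
        """Eq. 2: l[i][i] is the seek distance within file i and
        l[i][j] (i != j) the seek distance between files i and j."""
        n = len(l)
        total = 0.0
        for i in range(n):
            between = sum(l[i][j] for j in range(n) if j != i)
            total += alpha * l[i][i] + (1 - alpha) / (n - 1) * between
        return total / n

    def estimate_throughput(l_ave, profile):
        """B_est = f(l_ave): piecewise-linear interpolation of a measured
        profile, a sorted list of (seek distance [GB], MB/sec) points."""
        xs = [x for x, _ in profile]
        ys = [y for _, y in profile]
        k = bisect.bisect_left(xs, l_ave)
        if k == 0:
            return ys[0]
        if k == len(xs):
            return ys[-1]
        x0, x1, y0, y1 = xs[k - 1], xs[k], ys[k - 1], ys[k]
        return y0 + (y1 - y0) * (l_ave - x0) / (x1 - x0)

    # Seek distance matrix for condition 4-2 of Section 4.2 (VM2, VM4, VM6, VM8):
    l = [[1, 20, 40, 60],
         [20, 1, 20, 40],
         [40, 20, 1, 20],
         [60, 40, 20, 1]]
    print(total_average_seek_distance(0.0, l))   # 33.3 GB (Xen, alpha = 0.0)
    print(total_average_seek_distance(0.99, l))  # 1.32 GB (native Linux, alpha = 0.99)

    # Illustrative profile points (approximate readings from Fig. 2):
    profile = [(1, 0.45), (10, 0.33), (120, 0.17)]
    print(estimate_throughput(33.3, profile))    # ~0.30 MB/sec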
4. EXPERIMENTAL RESULTS

In this section we evaluate our method by measuring disk access throughputs. The measured throughputs are compared with the estimated throughputs so as to demonstrate the practicality of our method as the number of VMs and the locations of the VM image files are varied. We measured the throughputs in Xen virtual machine environments and in native Linux environments. (A normal Linux OS without virtualization is called native Linux in this paper.) We focused on random read accesses in our experiments, because the disk throughput of other access patterns is greater than that of random read access due to the memory cache effect.

4.1. Setups

The experimental setups are shown in Fig. 5. Multiple random read processes simultaneously access test files on a local disk, in Xen virtual machines and in native Linux. The random read process is the test program described in Section 2; the fixed size of the read data is 4 KB in our experiments. In the Xen VM environment, a single random read process runs in each VM (Fig. 5 (a)). In the native Linux environment, multiple random read processes run in one native Linux OS (Fig. 5 (b)); each process accesses the test file inside a VM image file, which is mounted in the native Linux environment, so that the same files are accessed in both environments. The same physical computer is used in both experimental environments to ensure that no difference in machine condition affects the experimental results; Xen and native Linux are installed on the computer as a dual-boot system. The specifications of the physical computer are shown in Table 1.

Fig. 5: Experimental setups: (a) Xen virtual machine environment (scheduling by the Xen scheduler, one random read process per dom U); (b) native Linux environment (scheduling by the Linux scheduler). Test file size = 2 GB; partition size = VM image size = 10 GB.

Table 1: Specifications of the physical machine and OSs
  CPU: Pentium 4, 3.8 GHz
  Disk: 160 GB (Western Digital WD1600JS)
  Memory: 3 GB (native Linux); 1 GB (Xen dom 0); 256 MB (each Xen dom U)
  OS (kernel): Fedora Core 6 (Linux 2.6.18); Xen version 3.0.4

The machine has a Pentium 4 3.8 GHz CPU, a 160 GB HDD, and 3 GB of memory; the HDD is used as a local disk. The performance profile of this disk, measured in Section 2 (Fig. 2), is as follows:

    B = 0.587 / l [MB/sec]        if 1 ≤ l ≤ 2 [GB],    (Eq. 3)
    B = 1 / (0.244 l) [MB/sec]    if l ≥ 2 [GB].        (Eq. 4)

In the native Linux environment, Fedora Core 6 operates as the native Linux OS. In Xen, a virtual machine is called a domain [5], and there are two kinds of domains: domain 0 (dom 0) and domain U (dom U). Domain 0 is a special VM that is allowed to access the physical devices directly. When another VM (a domain U) performs an I/O access, the domain U asks domain 0 to execute the access, and domain 0 returns the result of the I/O operation to the domain U. In this paper we assume that applications run in domain Us, because domain 0 is used for administration and the domain Us are used for user applications, as is the usual case in Xen operation. The VM specifications are as follows: domain 0 has 1 GB of memory and each domain U has 256 MB of memory, and Fedora Core 6 operates as the guest OS in domain 0 and in the domain Us.

We divided the disk space into multiple partitions and created a test file in each partition. Fig. 6 shows the configuration of the disk partitions in our experiments. The size of each partition is 10 GB (1350 cylinders). Fourteen partitions are made for VMs, because Linux makes a maximum of 15 partitions and one of them is an extended partition in which files are not recorded. A VM image file, which includes a guest OS and a test file, is deployed in each partition. The test file, which is accessed by the random read process, is created by the following Linux dd command in each VM:

$ dd if=/dev/urandom of=testfile bs=1M count=2048

This command writes random numbers into the test file; the size of the test file is 2 GB (l_ii = 1 GB in Eq. 2). When we measure the random read throughputs, the random read process reads fixed-size data (4 KB) 15,000 times. The total size of the read data is 60 MB, about 3% of the test file (2 GB), so we can ignore the memory cache effect.

Fig. 6: Configuration of disk partitions (partitions 1 to 14, each holding a VM image file that contains a test file)

4.2. Conditions

Table 2 shows the experimental conditions. We measured the total random read throughput under each condition in the Xen virtual machine environment and in the native Linux environment. The total random read throughput is the sum of the throughputs, defined by Eq. 1, of all the random read processes. The numbers of active random read processes are 1, 2, 4, and 8. When multiple random read processes run, many combinations of active VMs are possible; we selected the combinations shown in the fifth column of Table 2. In the Xen virtual machine environment, the selected VMs are active and the other VMs are shut down. In the native Linux environment, the selected VM image files are mounted and the test file in each selected VM is accessed by a random read process, so the same files are accessed in the Xen and native Linux environments. We selected these conditions in order to obtain a wide variety of total average seek distances.

Table 2: Selected virtual machines and total average seek distance in each experimental condition (columns: condition ID; total average seek distance [GB] in Xen (α = 0.0) and in native Linux (α = 0.99); number of processes; selected virtual machines, e.g., condition 2-1 = VM1 and VM3, condition 2-6 = VM1 and VM13, condition 4-2 = VM2, VM4, VM6, and VM8, condition 8-5 = VM1, VM2, VM3, VM4, VM11, VM12, VM13, and VM14)

The total average seek distances in the Xen VM and native Linux environments under each condition are shown in the third and fourth columns of Table 2. They are calculated using α = 0.0 for Xen VMs and α = 0.99 for native Linux; these values were obtained by fitting Eq. 2 to the results of the two-process conditions. The sequential access ratio (α) depends not on the application software but on the platform, such as Linux or Xen, so a sequential access ratio measured in advance can be applied to various application software.

For example, under condition 4-2, four VMs (VM2, VM4, VM6, and VM8) are active and a random read process runs in each VM in the Xen virtual machine environment. The seek distances between the files are as follows:

l_2,2 = 1 [GB], l_2,4 = 20 [GB], l_2,6 = 40 [GB], l_2,8 = 60 [GB],
l_4,2 = 20 [GB], l_4,4 = 1 [GB], l_4,6 = 20 [GB], l_4,8 = 40 [GB],
l_6,2 = 40 [GB], l_6,4 = 20 [GB], l_6,6 = 1 [GB], l_6,8 = 20 [GB],
l_8,2 = 60 [GB], l_8,4 = 40 [GB], l_8,6 = 20 [GB], l_8,8 = 1 [GB].

Here l_ij represents the seek distance between the test files in partitions i and j, where it is assumed that the distance between the files equals the distance between the partitions; for example, l_2,4 = 20 GB means that the seek distance between the test files of VM2 and VM4 is 20 GB. l_kk represents the seek distance within file k: since the size of each test file is 2 GB and the random read process accesses uniformly random locations in our experiments, the average seek distance within a file is l_kk = 1 GB. Using these values in Eq. 2, the total average seek distances are calculated as 33.3 GB in the Xen VM environment and 1.32 GB in native Linux.

4.3. Discussion

Fig. 7 shows the measured disk access throughput in the case where a single random read process accesses one given test file in partition 12. Figs. 7 (a) and (b) present the results in the Xen VM environment and in the native Linux environment. Since there is a single active test file, only the seek distance within a file matters; in both environments the seek distance within the file is 1 GB, and the average seek distance calculated from Eq. 2 is 1 GB, because the test file size is set to 2 GB. The horizontal axis in Fig. 7 is the time elapsed after starting the random read and the vertical axis is the random read throughput of each process. The throughput is initially low, because it takes considerable time to locate the test file's data in the file system; when the same test file has been accessed many times, the file system lookups become more effective and the throughput increases. We measured the random read throughput in the range where the throughput is stable. The random read throughputs are 0.456 MB/sec in the Xen virtual machine and 0.459 MB/sec in native Linux. These results lead to the conclusion that a Xen virtual machine has almost no overhead in disk accesses.

Fig. 7: Measured total disk access throughput (number of processes N = 1): (a) Xen virtual machine (l_ave = 1 GB); (b) native Linux (l_ave = 1 GB)

In addition, we measured the effect of the VM's memory size on disk throughput. When the memory size of domain U is set to 512 MB, the measured throughput is the same as that measured with a 256 MB domain U (Fig. 7 (a)); when the memory size of domain 0 is set to 512 MB, the measured throughput is the same as that obtained with a 1 GB domain 0 (Fig. 7 (a)). This ensured that the memory sizes of domain U and domain 0 did not affect disk performance in our experiments.
Fig. 8 shows the measured disk access throughputs in the case where two random read processes run. Figs. 8 (a-1) and (a-2) present the results in the Xen VM environment, and (b-1) and (b-2) present those in the native Linux environment; (a-1) and (b-1) were measured under condition 2-1 of Table 2, and (a-2) and (b-2) under condition 2-6.

Under condition 2-1 (Figs. 8 (a-1) and (b-1)), VM1 and VM3 are active. In this case, the seek distance between VM1 and VM3 (l_1,3, l_3,1) is 20 GB and that within a file (l_1,1, l_3,3) is 1 GB. The average seek distances calculated from Eq. 2 are 20 GB in the Xen virtual machine environment and 1.19 GB in the native Linux environment. In the Xen result (a-1), the throughputs of process 1 and process 2 are both 0.132 MB/sec, giving a total random read throughput of 0.264 MB/sec. In the native Linux result (b-1), the throughputs of process 1 and process 2 are 0.192 MB/sec and 0.197 MB/sec, giving a total random read throughput of 0.389 MB/sec.

Fig. 8: Measured total disk access throughput using different VM images (number of processes N = 2): (a-1) Xen under condition 2-1 (VM1, VM3; l_ave = 20 GB); (b-1) native Linux under condition 2-1 (l_ave = 1.19 GB); (a-2) Xen under condition 2-6 (VM1, VM13; l_ave = 120 GB); (b-2) native Linux under condition 2-6 (l_ave = 2.19 GB)

The estimated total throughputs, calculated from these average seek distances using Eq. 3 and Eq. 4, are 0.294 MB/sec and 0.442 MB/sec. Under condition 2-6 (Figs. 8 (a-2) and (b-2)), VM1 and VM13 are active; in this case l_1,13 = l_13,1 = 120 GB and l_1,1 = l_13,13 = 1 GB, and the average seek distances are calculated as 120 GB in the Xen virtual machine environment and 2.19 GB in the native Linux environment. The measured total random read throughputs are 0.151 MB/sec and 0.383 MB/sec, while the estimated total throughputs are 0.171 MB/sec and 0.383 MB/sec.

In the Xen virtual machine environment, the total random read throughput decreased to about one third of the single-process value (0.151 MB/sec versus 0.456 MB/sec) when the total average seek distance was long. In native Linux, however, there was very little decrease in total throughput from the result of the single random access case. Focusing on the fluctuation of throughput, the throughputs of the two processes in Xen are almost the same, while those in native Linux vary widely; this is because native Linux, which has a large sequential access ratio, tends to access one file sequentially. These results show that there is a trade-off between total disk throughput and fairness of disk access: Xen appears to follow a policy in which the hypervisor provides throughput to each VM as fairly as possible, whereas native Linux follows a policy that gives priority to throughput over fairness.

Fig. 9 compares the estimated and measured throughputs as the average seek distance in Table 2 is varied. The horizontal axis of the graph is the total average seek distance (l_ave) and the vertical axis is the total random read throughput; the marks in the graph represent the results under the conditions shown in Table 2. The total average seek distance in the Xen VM environment is longer than that in the native Linux environment because the sequential access ratio is small in the Xen virtual machine environment. Fig. 9 reveals that our method can estimate the total disk throughput with no more than 15% error over the whole range, which is acceptable for system administrators operating virtual machine systems.

Fig. 9: Comparison between estimated and measured throughputs (1, 2, 4, and 8 processes): (a) Xen virtual machine; (b) native Linux

5. CONCLUSION

In this paper we proposed a disk access throughput evaluation method for virtual machine environments. Our performance evaluation method is simple and practical, because it does not simulate every seek; it analytically calculates the seek distances between representative locations of each file. To obtain these representative seek distances, we classified seek distances into two types: the seek distance within a file and the seek distance between files. In virtual machine environments, the locations of VM image files are limited by the disk partitions, so the two types of seek distances can be obtained easily. Disk throughput was estimated based on the probabilities of the two types of seek distances; to express these probabilities, which depend on the I/O scheduler, a sequential access ratio was defined, and the total average seek distance was derived from the two types of seek distances and the sequential access ratio.

We evaluated our method experimentally, measuring random read throughputs in Xen virtual machine environments and native Linux environments. We focused on random read accesses in our experiments because the disk throughput of other access patterns is greater than that of random read access due to the memory cache effect. The measured throughputs were compared with the throughputs estimated by our proposed method while the number of virtual machines and the file locations were varied. The experimental results show that our proposed method can estimate the total disk throughput with a maximum error of 15%, which is acceptable for system administrators managing virtual machine systems. In this paper the virtual machines were created with the Xen VMM, but our method can be applied to other VMMs as well. We also considered only the case in which the disk access patterns of all virtual machines are the same; the case in which each virtual machine has a different disk access pattern is a subject for future work.

ACKNOWLEDGEMENT

We would like to thank the anonymous reviewers for their constructive comments and suggestions. This work was partly supported by the Ministry of Internal Affairs and Communications (MIC).

REFERENCES

[1] J. Rolia, L. Cherkasova, and A. Andrzejak, "A Capacity Management Service for Resource Pools," in Proceedings of ACM WOSP 2005, 2005.
[2] M. Rosenblum and T. Garfinkel, "Virtual Machine Monitors: Current Technology and Future Trends," IEEE Computer, vol. 38, no. 5, pp. 39-47, 2005.
[3]
[4] winfamily/virtualpc/default.mspx
[5] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, "Xen and the Art of Virtualization," in Proceedings of ACM SOSP 2003, pp. 164-177, October 2003.
[6]
[7] Y. Ding and E. Bolker, "How Many Guests Can You Serve? On the Number of Partitions," in Proceedings of the CMG Conference, 2006.
[8] E. Bolker and Y. Ding, "Virtual performance won't do: Capacity planning for virtual systems," in Proceedings of the CMG Conference, 2005.
[9] G. Khanna, K. Beaty, G. Kar, and A. Kochut, "Application Performance Management in Virtualized Server Environments," in Proceedings of IEEE/IFIP NOMS 2006, 2006.
[10] C. Ruemmler and J. Wilkes, "An Introduction to Disk Drive Modeling," IEEE Computer, vol. 27, no. 3, pp. 17-28, 1994.
[11]
[12] E. Varki, A. Merchant, J. Xu, and X. Qiu, "Issues and Challenges in the Performance Analysis of Real Disk Arrays," IEEE Transactions on Parallel and Distributed Systems, vol. 15, no. 6, 2004.
[13] M. Uysal, G. A. Alvarez, and A. Merchant, "A Modular, Analytical Throughput Model for Modern Disk Arrays," in Proceedings of IEEE MASCOTS 2001, 2001.
[14] A. Merchant and P. S. Yu, "Analytic Modeling of Clustered RAID with Mapping Based on Nearly Random Permutation," IEEE Transactions on Computers, vol. 45, no. 3, 1996.


More information

PRESENTATION TITLE GOES HERE

PRESENTATION TITLE GOES HERE Performance Basics PRESENTATION TITLE GOES HERE Leah Schoeb, Member of SNIA Technical Council SNIA EmeraldTM Training SNIA Emerald Power Efficiency Measurement Specification, for use in EPA ENERGY STAR

More information

Virtualization and memory hierarchy

Virtualization and memory hierarchy Virtualization and memory hierarchy Computer Architecture J. Daniel García Sánchez (coordinator) David Expósito Singh Francisco Javier García Blas ARCOS Group Computer Science and Engineering Department

More information

The Convergence of Storage and Server Virtualization Solarflare Communications, Inc.

The Convergence of Storage and Server Virtualization Solarflare Communications, Inc. The Convergence of Storage and Server Virtualization 2007 Solarflare Communications, Inc. About Solarflare Communications Privately-held, fabless semiconductor company. Founded 2001 Top tier investors:

More information

ZBD: Using Transparent Compression at the Block Level to Increase Storage Space Efficiency

ZBD: Using Transparent Compression at the Block Level to Increase Storage Space Efficiency ZBD: Using Transparent Compression at the Block Level to Increase Storage Space Efficiency Thanos Makatos, Yannis Klonatos, Manolis Marazakis, Michail D. Flouris, and Angelos Bilas {mcatos,klonatos,maraz,flouris,bilas}@ics.forth.gr

More information

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space

More information

IBM Emulex 16Gb Fibre Channel HBA Evaluation

IBM Emulex 16Gb Fibre Channel HBA Evaluation IBM Emulex 16Gb Fibre Channel HBA Evaluation Evaluation report prepared under contract with Emulex Executive Summary The computing industry is experiencing an increasing demand for storage performance

More information

Copyright 2012, Elsevier Inc. All rights reserved.

Copyright 2012, Elsevier Inc. All rights reserved. Computer Architecture A Quantitative Approach, Fifth Edition Chapter 2 Memory Hierarchy Design 1 Introduction Programmers want unlimited amounts of memory with low latency Fast memory technology is more

More information

Iomega REV Drive Data Transfer Performance

Iomega REV Drive Data Transfer Performance Technical White Paper March 2004 Iomega REV Drive Data Transfer Performance Understanding Potential Transfer Rates and Factors Affecting Throughput Introduction Maximum Sustained Transfer Rate Burst Transfer

More information

Dell Compellent Storage Center and Windows Server 2012/R2 ODX

Dell Compellent Storage Center and Windows Server 2012/R2 ODX Dell Compellent Storage Center and Windows Server 2012/R2 ODX A Dell Technical Overview Kris Piepho, Microsoft Product Specialist October, 2013 Revisions Date July 2013 October 2013 Description Initial

More information

Chapter-6. SUBJECT:- Operating System TOPICS:- I/O Management. Created by : - Sanjay Patel

Chapter-6. SUBJECT:- Operating System TOPICS:- I/O Management. Created by : - Sanjay Patel Chapter-6 SUBJECT:- Operating System TOPICS:- I/O Management Created by : - Sanjay Patel Disk Scheduling Algorithm 1) First-In-First-Out (FIFO) 2) Shortest Service Time First (SSTF) 3) SCAN 4) Circular-SCAN

More information

Figure 1: Virtualization

Figure 1: Virtualization Volume 6, Issue 9, September 2016 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com A Profitable

More information

6.2 DATA DISTRIBUTION AND EXPERIMENT DETAILS

6.2 DATA DISTRIBUTION AND EXPERIMENT DETAILS Chapter 6 Indexing Results 6. INTRODUCTION The generation of inverted indexes for text databases is a computationally intensive process that requires the exclusive use of processing resources for long

More information

Mission-Critical Enterprise Linux. April 17, 2006

Mission-Critical Enterprise Linux. April 17, 2006 Mission-Critical Enterprise Linux April 17, 2006 Agenda Welcome Who we are & what we do Steve Meyers, Director Unisys Linux Systems Group (steven.meyers@unisys.com) Technical Presentations Xen Virtualization

More information

CLOUD COMPUTING IT0530. G.JEYA BHARATHI Asst.Prof.(O.G) Department of IT SRM University

CLOUD COMPUTING IT0530. G.JEYA BHARATHI Asst.Prof.(O.G) Department of IT SRM University CLOUD COMPUTING IT0530 G.JEYA BHARATHI Asst.Prof.(O.G) Department of IT SRM University What is virtualization? Virtualization is way to run multiple operating systems and user applications on the same

More information

CLASS: II YEAR / IV SEMESTER CSE SUBJECT CODE AND NAME: CS6401 OPERATING SYSTEMS UNIT I OPERATING SYSTEMS OVERVIEW

CLASS: II YEAR / IV SEMESTER CSE SUBJECT CODE AND NAME: CS6401 OPERATING SYSTEMS UNIT I OPERATING SYSTEMS OVERVIEW CLASS: II YEAR / IV SEMESTER CSE SUBJECT CODE AND NAME: CS6401 OPERATING SYSTEMS SYLLABUS UNIT I OPERATING SYSTEMS OVERVIEW Computer System Overview-Basic Elements, Instruction Execution, Interrupts, Memory

More information

The Continuity of Out-of-band Remote Management Across Virtual Machine Migration in Clouds

The Continuity of Out-of-band Remote Management Across Virtual Machine Migration in Clouds The Continuity of Out-of-band Remote Management Across Virtual Machine Migration in Clouds Sho Kawahara Department of Creative Informatics Kyushu Institute of Technology Fukuoka, Japan kawasho@ksl.ci.kyutech.ac.jp

More information

SNS COLLEGE OF ENGINEERING

SNS COLLEGE OF ENGINEERING SNS COLLEGE OF ENGINEERING Coimbatore. Department of Computer Science and Engineering Question Bank- Even Semester 2015-2016 CS6401 OPERATING SYSTEMS Unit-I OPERATING SYSTEMS OVERVIEW 1. Differentiate

More information

Parallels Virtuozzo Containers

Parallels Virtuozzo Containers Parallels Virtuozzo Containers White Paper Parallels Virtuozzo Containers for Windows Capacity and Scaling www.parallels.com Version 1.0 Table of Contents Introduction... 3 Resources and bottlenecks...

More information

Storage Technologies - 3

Storage Technologies - 3 Storage Technologies - 3 COMP 25212 - Lecture 10 Antoniu Pop antoniu.pop@manchester.ac.uk 1 March 2019 Antoniu Pop Storage Technologies - 3 1 / 20 Learning Objectives - Storage 3 Understand characteristics

More information

I/O & Storage. Jin-Soo Kim ( Computer Systems Laboratory Sungkyunkwan University

I/O & Storage. Jin-Soo Kim ( Computer Systems Laboratory Sungkyunkwan University I/O & Storage Jin-Soo Kim ( jinsookim@skku.edu) Computer Systems Laboratory Sungkyunkwan University http://csl.skku.edu Today s Topics I/O systems Device characteristics: block vs. character I/O systems

More information

Originally prepared by Lehigh graduate Greg Bosch; last modified April 2016 by B. Davison

Originally prepared by Lehigh graduate Greg Bosch; last modified April 2016 by B. Davison Virtualization Originally prepared by Lehigh graduate Greg Bosch; last modified April 2016 by B. Davison I. Introduction to Virtualization II. Virtual liances III. Benefits to Virtualization IV. Example

More information

NEC Express5800 A2040b 22TB Data Warehouse Fast Track. Reference Architecture with SW mirrored HGST FlashMAX III

NEC Express5800 A2040b 22TB Data Warehouse Fast Track. Reference Architecture with SW mirrored HGST FlashMAX III NEC Express5800 A2040b 22TB Data Warehouse Fast Track Reference Architecture with SW mirrored HGST FlashMAX III Based on Microsoft SQL Server 2014 Data Warehouse Fast Track (DWFT) Reference Architecture

More information

I, J A[I][J] / /4 8000/ I, J A(J, I) Chapter 5 Solutions S-3.

I, J A[I][J] / /4 8000/ I, J A(J, I) Chapter 5 Solutions S-3. 5 Solutions Chapter 5 Solutions S-3 5.1 5.1.1 4 5.1.2 I, J 5.1.3 A[I][J] 5.1.4 3596 8 800/4 2 8 8/4 8000/4 5.1.5 I, J 5.1.6 A(J, I) 5.2 5.2.1 Word Address Binary Address Tag Index Hit/Miss 5.2.2 3 0000

More information

OpenStack hypervisor, container and Baremetal servers performance comparison

OpenStack hypervisor, container and Baremetal servers performance comparison OpenStack hypervisor, container and Baremetal servers performance comparison Yoji Yamato a) Software Innovation Center, NTT Corporation, 3 9 11 Midori-cho, Musashino-shi, Tokyo 180 8585, Japan a) yamato.yoji@lab.ntt.co.jp

More information

S4D-Cache: Smart Selective SSD Cache for Parallel I/O Systems

S4D-Cache: Smart Selective SSD Cache for Parallel I/O Systems S4D-Cache: Smart Selective SSD Cache for Parallel I/O Systems Shuibing He, Xian-He Sun, Bo Feng Department of Computer Science Illinois Institute of Technology Speed Gap Between CPU and Hard Drive http://www.velobit.com/storage-performance-blog/bid/114532/living-with-the-2012-hdd-shortage

More information