Increasing performance in KVM virtualization within a Tier-1 environment


Journal of Physics: Conference Series

To cite this article: Andrea Chierici and Davide Salomoni 2012 J. Phys.: Conf. Ser.

Related content:
- A quantitative comparison between xen and kvm (Andrea Chierici and Riccardo Veraldi)
- Integration of virtualized worker nodes in standard batch systems (Volker Büge, Hermann Hessling, Yves Kemp et al.)
- Lxcloud: a prototype for an internal cloud in HEP. Experiences and lessons learned (Sebastien Goasguen, Belmiro Moreira, Ewan Roche et al.)

Recent citations:
- Big Data Over a 100G Network at Fermilab (Gabriele Garzoglio et al.)

Increasing performance in KVM virtualization within a Tier-1 environment

Andrea Chierici, INFN-CNAF, andrea.chierici@cnaf.infn.it
Davide Salomoni, INFN-CNAF, davide.salomoni@cnaf.infn.it

Abstract. This work shows the optimizations we have been investigating and implementing at the KVM (Kernel-based Virtual Machine) virtualization layer in the INFN Tier-1 at CNAF, based on more than a year of experience in running thousands of virtual machines in a production environment used by several international collaborations. These optimizations increase the adaptability of virtualization solutions to demanding applications like those run in our institute (High-Energy Physics). We show performance differences among different filesystems (e.g. ext3 vs. ext4) when used as KVM host local storage, provide guidelines for the adoption of solid state disks (SSD) and for the deployment of SR-IOV (Single Root I/O Virtualization) enabled hardware, and identify the best solution to distribute and instantiate read-only virtual machine images. This work has been driven by the project called Worker Nodes on Demand Service (WNoDeS), a framework designed to offer local, grid or cloud-based access to computing and storage resources, preserving maximum compatibility with existing computing center policies and work-flows.

1. Introduction
This work describes the optimizations we have been investigating and implementing at the KVM virtualization layer in the INFN Tier-1 at CNAF (Bologna, Italy), based on more than a year of experience in running thousands of virtual machines in a production environment used by several international collaborations. These optimizations increase the adaptability of virtualization solutions to demanding applications like those run in our institute (mostly related to High-Energy Physics). This work has been driven by the project called Worker Nodes on Demand Service (WNoDeS) [1][2], a framework designed to offer local, grid or cloud-based access to computing and storage resources, preserving maximum compatibility with existing computing center policies and work-flows.

2. Testbed description

2.1. Hardware configuration
The hardware we used for our testbed consists of two identical machines with the following characteristics:
- dual Intel Xeon
- GB of DDR3
- 2 Western Digital 300GB, 10K RPM, 16MB cache, 2.5" SATA 3.0Gbps internal enterprise hard drives
- Intel 82574L 1Gbps LAN adapter

In order to perform some of the tests we added to this configuration:
- a 160GB SATA II MLC internal solid state drive
- an 82599EB 10Gbps LAN adapter
both provided by Intel.

With recent CPUs it is also necessary to fine-tune the BIOS settings. In our case we used these options:
- hyperthreading disabled (no SMT)
- disk controller configured as AHCI
- RAID disabled
- VT-d enabled
- virtualization support enabled
- SR-IOV enabled

2.2. Software configuration
Our hypervisor runs on Scientific Linux 6.1, while the virtual machines run Scientific Linux 5.7 ("SL" from now on). Here is the detailed list of the core packages installed.

Software running on the SL6 machine:
- kernel el6
- glibc
- qemu-kvm
- libvirt

Software running on the SL5 machine:
- kernel el5
- glibc
- kvm
- libvirt

WNoDeS is currently used to virtualize EMI [3] Worker Nodes, so the operating system of the VMs we tested is uniquely SL5. In the future, when EMI WNs become available for SL6, we will update our test environment.

2.3. Test software
We ran essentially two sets of benchmarks, one for disk I/O and one for network I/O.
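Before running the benchmarks, a quick sanity check of the virtualization stack described in section 2.2 can be done with a sketch like the following; it only uses generic commands and the package names listed above, and does not reproduce the exact versions installed on our testbed.

#!/bin/bash
# Minimal sketch: verify hardware virtualization support and the installed
# KVM stack on an SL6 hypervisor (package names as in section 2.2).

# The CPU must expose Intel VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo

# The KVM kernel modules should be loaded
lsmod | grep kvm

# Versions of the core userspace packages
rpm -q qemu-kvm libvirt

# libvirt view of the hypervisor capabilities
virsh capabilities | head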

Disk
We used iozone for disk measurements, with this specific command line:

iozone -Mce -I -+r -r 256k -s <2xram>g -f /tmp/iozone -i0 -i1 -i2

Detailed description of this command line:
- -M: iozone calls uname() and puts the resulting string in the output file;
- -ce: include flush (fsync, fflush) and close() in the timing calculations;
- -I: use DIRECT I/O for all file operations; this tells the filesystem that all operations must bypass the buffer cache and go directly to disk;
- -+r: enable O_RSYNC and O_SYNC for all I/O testing;
- -r #: the record size, in KB, to test;
- -s #: the size, in KB, of the file to test;
- -f filename: the filename for the temporary file under test;
- -i #: which tests to run (0 = write/rewrite, 1 = read/re-read, 2 = random read/write).

This test measures 6 core aspects of disk performance:
- Read: the performance of reading a file that already exists in the filesystem;
- Write: the performance of writing a new file to the filesystem;
- Re-read: after reading a file, the performance of reading it again;
- Re-write: the performance of writing to an existing file;
- Random Read: the performance of reading random locations from a file, i.e. not a sequential read;
- Random Write: the performance of writing to a file at various random locations, i.e. not a sequential write.

Network
We used netperf to test network throughput, with this specific command line:

netperf -t TCP_STREAM -H <dest-host> -l 180

Detailed description of this command line:
- -t TCP_STREAM: measure TCP protocol performance;
- -H <dest-host>: the destination host of the stream to measure;
- -l 180: the duration of the test, in seconds.

We used lmbench to test network latency, with these specific command lines:

lat_tcp -N 5 <dest-host>
lat_udp -N 5 <dest-host>

The -N 5 parameter indicates that the test is repeated 5 times in a row, to obtain a more accurate measurement. We ran tests for both the TCP and UDP protocols. In both the netperf and lmbench tests, the second host in our testbed was used as a server, to receive (and, for inbound tests, transmit) the network streams. All the rpm packages containing the software were taken from the DAG [4] repository.
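As a minimal sketch, the command lines above can be combined into a single run; the iozone file size is derived as twice the installed RAM, as the <2xram> placeholder indicates, and the destination host passed as an argument is an illustrative assumption.

#!/bin/bash
# Sketch of the benchmark runs described above.
DEST_HOST=$1                     # netperf/lmbench server (assumed to be passed as argument)
RAM_GB=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
SIZE_GB=$((2 * RAM_GB))          # iozone file size: twice the installed RAM

# Disk I/O: write/rewrite, read/re-read, random read/write with direct I/O
iozone -Mce -I -+r -r 256k -s ${SIZE_GB}g -f /tmp/iozone -i0 -i1 -i2

# Network throughput (180 s TCP stream) and latency (5 repetitions)
netperf -t TCP_STREAM -H ${DEST_HOST} -l 180
lat_tcp -N 5 ${DEST_HOST}
lat_udp -N 5 ${DEST_HOST}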

3. Disk I/O
Disk I/O is a major issue in virtualization technologies. WNoDeS currently uses KVM-based VMs, exploiting the KVM -snapshot flag. This makes it possible to download (either via http or POSIX I/O) a single read-only VM image file to each hypervisor, and to run VMs that write only automatically purged delta files. This saves substantial disk space and time, which would otherwise be needed to locally replicate multiple images. The performance characteristics of this solution (disk caching does not allow us to publish a reliable benchmark) and the latest enhancements in qcow2 image handling pushed us to investigate possible improvements.

3.1. What we tested
We investigated performance differences among these solutions:
- raw image: the default libvirt option, where the virtual disk is stored on an uncompressed image file; currently the best performing option among image files;
- ext4 file system: the default file system for SL6, declared to be faster and more reliable than the classic ext3. Since it has been back-ported to SL5, we wanted to test whether using ext4 as the file system hosting the virtual image can improve performance;
- qcow2 standard image: a qcow2 image file without any particular optimization (no pre-allocation of metadata);
- qcow2 with metadata pre-allocation: a qcow2 image file with some internal image setup to speed up I/O performance (metadata pre-allocation);
- qcow2 image snapshot: allows the creation of a new image that refers to an original image (the so-called backing file) using Redirect-on-Write [5]; any change to the snapshot is not reflected in the original image, but is permanently stored in the snapshot image (also called the delta image).

The most interesting solution to investigate is the qcow2 image snapshot with backing file. In this approach, a new external copy-on-write image is created, which can be used as a different image file for our virtual machines. Any modification made inside the virtual machine is reflected only in this delta image file, leaving the original image (the backing file) unchanged. If we test the write speed of the newly created image, according to the specification of a standard qcow2 image (a snapshot is exactly this), the image file has to grow as much as the size of the test file we use in iozone. This process of disk growth is time and resource consuming, resulting in poor performance. If we run the test a second time, the disk is already big enough to contain the file iozone uses for the test, and the general performance measured (particularly the write speed) is optimal.

According to figure 1, if one excludes write performance, the remaining measurements are rather similar. Write performance is problematic and the difference between the various solutions is more than 50% in some cases. The qcow2 solution with metadata pre-allocation did not behave as expected for write speed and exhibited a significant performance degradation. As expected, a promising solution is the qcow2 image snapshot: indeed, in the second run of the test, the write speed was the same as for the raw image, providing a significant boost compared to the other solutions investigated. The main problem, as explained above, is the fact that we need to let the disk snapshot expand in order to reach good results.

In figure 2, similar results are shown for VMs running on an SSD disk. With this kind of disk it is easy to see the general performance boost across all solutions. In this case qcow2 with metadata pre-allocation behaves as expected, in some cases even better than the raw image. The most interesting solution in this test too is the one with a backing file on a qcow2 image. Indeed, we put the snapshot on the SSD while the backing file was still on the HDD as in the previous test. The performance boost is really remarkable, and there is no need to let the snapshot expand to reach maximum write speed as in the previous test.
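A concrete sketch of the layout that gave the best results here keeps the backing file on the HDD and creates the delta on the SSD; the paths and image names below are illustrative only, and the full qemu-img syntax is described in Appendix A.

# Base (backing) image stays on the conventional hard disk
# (hypothetical paths, for illustration only)
qemu-img create -f qcow2 /hdd/images/sl5-wn.img 10G

# The copy-on-write snapshot holding all guest writes lives on the SSD
qemu-img create -f qcow2 -b /hdd/images/sl5-wn.img /ssd/deltas/sl5-wn-delta.img

# The VM is then pointed at the delta image; all writes go to the SSD only
qemu-img info /ssd/deltas/sl5-wn-delta.img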

Figure 1. Hard disk drive I/O performance

From this set of tests we can claim that I/O virtualization has improved a lot with the latest KVM provided on SL6.1. The raw image is still appealing for general purposes, but the qcow2 format is more attractive for its enhanced features. The ext4 filesystem is clearly not mature enough on SL5, so we discourage its adoption on production worker nodes. The current WNoDeS solution adopting the -snapshot option (which provides performance similar to a raw image) could be replaced by a qcow2 snapshot with backing file, particularly if an SSD disk is adopted to store the delta files (qcow2 snapshots).

4. SR-IOV
The PCI-SIG (PCI Special Interest Group) developed the Single Root I/O Virtualization (SR-IOV) specification. The SR-IOV specification is a standard for a type of PCI passthrough which natively shares a single device with multiple guests. SR-IOV reduces hypervisor involvement by specifying virtualization-compatible memory spaces, interrupts and DMA streams, and improves device performance for virtualized guests. SR-IOV enables a Single Root Function (for example, a single Ethernet port) to appear as multiple, separate, physical devices. A physical device with SR-IOV capabilities can be configured to appear in the PCI configuration space as multiple functions, each with its own configuration space complete with Base Address Registers (BARs).

4.1. Advantages of SR-IOV
SR-IOV [6] devices can share a single physical port with multiple virtualized guests. Virtual Functions have near-native performance and provide better performance than para-virtualized drivers and emulated access. Virtual Functions provide data protection between virtualized guests on the same physical server, as the data is managed and controlled by the hardware and not by the software. These features allow for increased virtualized guest density on hosts within a data center. In other words, SR-IOV is able to better utilize the bandwidth of devices shared by multiple guests.

Figure 2. Solid state drive I/O performance

Figure 3. SR-IOV network card throughput

Figure 4. SR-IOV network card latency

4.2. Enabling SR-IOV on a KVM host
In order to enable SR-IOV support on a KVM host it is necessary to:
- enable the Intel VT-d or AMD-Vi extensions in the BIOS of the host;
- activate the I/O MMU in the kernel by appending intel_iommu=on to the kernel line in the /boot/grub/grub.conf file; nothing is required for AMD hardware;
- activate the Virtual Functions within the network card kernel module: modprobe ixgbe max_vfs=<0..63>

4.3. Tests performed
We performed several tests to verify whether SR-IOV enabled network cards really perform as vendors claim [7]. Our tests focused on a 10Gbps network card. We measured inbound and outbound connectivity, as well as latency. As can be seen in figures 3 and 4, the 10Gbps SR-IOV NIC provides excellent performance, both for latency and for aggregate throughput: indeed there is no significant difference in throughput between VMs and host, except for unexpectedly low performance when running a single VM instance, which is to be investigated further. Network latency is close to that of bare metal hardware, 3 times better than virtio. Virtio is a Linux standard for network and disk device drivers where only the guest's device driver knows it is running in a virtual environment and cooperates with the hypervisor: this enables guests to achieve high-performance network and disk operations. We think that SR-IOV technology is mature enough to be adopted in a production environment, particularly for applications where network latency is a major issue.
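A minimal sketch of the enabling steps listed above, for an Intel ixgbe-driven card like ours, follows; the number of Virtual Functions is an arbitrary example, intel_iommu=on is assumed to be already present on the kernel command line, and PCI addresses will differ per host.

#!/bin/bash
# Sketch: activate and verify SR-IOV Virtual Functions on the 10Gbps NIC.

# Reload the ixgbe driver asking for 7 Virtual Functions per port (example value)
modprobe -r ixgbe
modprobe ixgbe max_vfs=7

# The Virtual Functions should now show up as additional PCI devices
lspci | grep -i "virtual function"

# Check that the IOMMU was actually enabled at boot
dmesg | grep -i -e iommu -e dmar | head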

Figure 5. SCP-Tsunami vs SCP-Wave vs Plain Copy

5. Image distribution
An important topic in the WNoDeS virtualization environment is the need to distribute the same image file to multiple virtualization hosts. Currently no specific tool is used, creating a potential bandwidth problem every time a new image has to be deployed on the hypervisors. We tested several solutions, and the most functional one is currently SCP-Tsunami. SCP-Tsunami [8] is a Python script that splits the image file into chunks and transfers multiple chunks between virtualization hosts. With this simple script it is possible to pre-stage a VM image to a large set of nodes efficiently. SCP-Tsunami resembles the BitTorrent protocol but does not require the same complicated setup. SCP-Tsunami is a major improvement over SCP-Wave, which offers only a logarithmic speed-up, not enough for our production environment. In figure 5 one can see the remarkable performance boost compared to the current solution adopted by WNoDeS (plain image copy). Every node owning the image file contributes to spreading the image to the others, drastically reducing the time required to complete the copy operation.

6. KVM best practices
Several documents are available on the web suggesting best practices for the optimization of KVM (see for example [9]). Here we highlight those confirmed by our own experience, in particular for the virtualization of EMI Worker Nodes (a command-line sketch follows the list):
- use the KVM para-virtualized drivers for disk, memory and network: this is the starting point for every other optimization;
- use block devices for VM storage where possible: we already showed in previous work [10] that a guest operating system using block devices achieves lower latency and higher throughput;
- use the asynchronous I/O model for KVM guests: AIO support (aio=native) can improve guest I/O performance, especially when multiple threads perform I/O operations at the same time;
- disk caching: use the writeback option, where both the host page cache and the disk write cache are enabled for the guest.
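The following is a hedged sketch of a qemu-kvm command line combining these practices (virtio para-virtualized disk and network, native AIO, writeback caching); the memory size, device path and network setup are illustrative assumptions and do not reproduce the exact WNoDeS invocation.

# Illustrative qemu-kvm invocation following the best practices above
qemu-kvm -m 2048 -smp 2 \
    -drive file=/dev/vg_vm/wn01,if=virtio,cache=writeback,aio=native \
    -net nic,model=virtio -net tap,ifname=tap0,script=no

When the guest disk is an image file rather than a block device, the same -drive options apply; only the file= argument changes.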

During our investigations we also tested some of the suggested optimizations that did not meet expectations in our tests. For example, some best practices suggest overcommitting memory and CPU. An EMI worker node in our computing farm is fully loaded 99% of the time, so overcommitting, particularly CPU, is a very bad idea, because it reduces performance significantly. Another hot topic in virtualization optimization is the I/O scheduler. We did some preliminary tests and discovered that, when virtualizing EMI Worker Nodes, changing the scheduler (sometimes also called the "elevator") does not affect performance. The deadline and CFQ elevators make sense in a different environment, where fiber channel storage is attached to the node under test, which is clearly not our case.

7. Future work
This work showed some interesting results; nevertheless, we need to continue testing the new solutions that are continuously being made available. The recently released Scientific Linux version 6.2 has been advertised as a significant improvement in virtualization performance, but we were not able to test it in time for this work. Moreover, as soon as SL6 EMI worker nodes become available for general use, we will test performance differences compared to the SL5 ones, and we are confident there will be a general improvement: indeed, some solutions for advanced resource optimization, like KSM [11] and huge pages [12], are available only starting from SL6 (back-ports have been done on SL5, but they are just proof-of-concepts, not ready for production).

8. Conclusions
In this work we tried to show whether and how it is possible to increase performance in the virtualization solution currently adopted by the WNoDeS project. We showed that a qcow2 snapshot with backing file is a good solution for creating a new image, compared to the current KVM -snapshot approach. This solution can be a major improvement for easily managing updates of VMs on WNoDeS hypervisors: right now applying a kernel or security fix triggers a new copy of the whole image, while with the qcow2 snapshot one would simply have to copy the delta file (generally just a few megabytes, compared to several gigabytes for the whole image), leaving the previously copied original image (the backing file) untouched. We examined solid state drive performance and, as expected, a significant boost is measurable in every aspect of disk I/O. Since SSD disks are still rather expensive, we think that a good compromise could be to use such a disk for storing only the qcow2 snapshots of images archived on standard HDD drives. Network performance has not been a problem since the very beginning of the project, and we proved that a 10Gbps network with SR-IOV technology is a significant improvement in a very demanding environment or where the required network throughput/latency is very high. Finally, we showed that a tool like SCP-Tsunami is extremely convenient for rapidly distributing image files across several different virtualization hosts.

Appendix A. qemu-img usage guide
In this paper we discussed some special features of qcow2 images: in this section we show the command lines required to use them.

Appendix A.1. Metadata preallocation
The first feature of the qcow2 format we discussed is metadata preallocation. Generally, if we generate an image in qcow2 format (whether using virt-manager or not), this is what we get on a standard SL6 hypervisor:

# qemu-img create -f qcow2 sample.img 5G
Formatting sample.img, fmt=qcow2 size= encryption=off cluster_size=0

# qemu-img info sample.img
image: sample.img
file format: qcow2
virtual size: 5.0G ( bytes)
disk size: 136K
cluster_size:

# ll -h | grep sample
-rw-r--r-- 1 root root 256K May 22 15:36 sample.img

As we can see, the image is created with a disk size of 136K (ls shows 256K) even though the virtual size we requested is 5GB. Now let's see how it is possible to preallocate the metadata information in the image and how this affects the file size:

# qemu-img create -f qcow2 sample.img 5G -o preallocation=metadata
Formatting sample.img, fmt=qcow2 size= encryption=off cluster_size=0 preallocation=metadata

# qemu-img info sample.img
image: sample.img
file format: qcow2
virtual size: 5.0G ( bytes)
disk size: 912K
cluster_size:

# ll -h | grep sample
-rw-r--r-- 1 root root 5.1G May 22 15:35 sample.img

It is now clear that something changed. The ls command shows a 5GB file and qemu-img shows a disk size of 912K, bigger than in the previous case.

Appendix A.2. Snapshot
To create a snapshot the process is similar to the one seen in the previous section, and again we need to use the qemu-img command by hand:

# qemu-img create -f qcow2 -b sample.img snapshot.img
Formatting snapshot.img, fmt=qcow2 size= backing_file=sample.img encryption=off cluster_size=0

# qemu-img info snapshot.img
image: snapshot.img
file format: qcow2
virtual size: 5.0G ( bytes)
disk size: 136K
cluster_size:
backing file: sample.img (actual path: sample.img)

The information about the snapshot is clearly displayed, and the image we took the snapshot of is indicated as the backing file. Please remember that no change must occur to the backing file once the snapshot has been taken, otherwise unpredictable behavior will occur. Currently these advanced image manipulation features are not supported under libvirt.
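A possible way to exercise the snapshot, given that libvirt does not manage these images, is to boot a guest directly with qemu-kvm and then verify that only the delta file grows while the backing file stays untouched; the guest parameters below are illustrative assumptions, not the configuration used in our tests.

# Boot a test guest directly from the snapshot image (illustrative invocation)
qemu-kvm -m 1024 -smp 1 -drive file=snapshot.img,if=virtio -vnc :1

# After the guest has written some data, only the delta should have grown,
# while the backing file remains byte-for-byte identical
qemu-img info snapshot.img
qemu-img info sample.img
md5sum sample.img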

Appendix B. 10Gbps optimizations
In order to achieve high throughput with 10Gbps network cards, it is necessary to perform some tuning on a standard SL distribution. Here is what we used in our tests (see also [13]).

#!/bin/bash
echo "Optimise TCP parameters"
# enable timestamps, window scaling
echo 1 > /proc/sys/net/ipv4/tcp_timestamps
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
echo 1 > /proc/sys/net/ipv4/tcp_moderate_rcvbuf
echo > /proc/sys/net/core/wmem_max
echo > /proc/sys/net/core/rmem_max
echo > /proc/sys/net/core/rmem_default
echo > /proc/sys/net/core/wmem_default
echo " " > /proc/sys/net/ipv4/tcp_rmem
echo " " > /proc/sys/net/ipv4/tcp_wmem
#
echo "Optimise ethernet queue"
/sbin/ifconfig eth0 txqueuelen
# Jumbo Frames
/sbin/ifconfig eth0 mtu 9000
# enable path mtu discovery
sysctl -w net.ipv4.tcp_mtu_probing=1
sysctl -w net.ipv4.ip_no_pmtu_disc=0
# re-enable SACK
sysctl -w net.ipv4.tcp_sack=1
# increase backlog ("rxqueuelen"):
sysctl -w net.core.netdev_max_backlog=
# discard metrics of old connections:
sysctl -w net.ipv4.tcp_no_metrics_save=1
# select htcp congestion control algorithm:
# (or, another interesting one, bic)
sysctl -w net.ipv4.tcp_congestion_control=htcp

References
[1] Salomoni D et al 2011 J. Phys.: Conf. Ser. 331
[2] WNoDeS website:
[3] EMI website:
[4] DAG repository:
[5]
[6] root/
[7]
[8] SCP-Tsunami webpage:

[9] Virtualization Best Practices: pdf.pdf
[10] Chierici A, Salomoni D and Veraldi R 2009 Measuring performances of linux hypervisors Il Nuovo Cimento C Vol. 32, N. 2
[11]
[12] Hat Enterprise Linux/6/html/Performance Tuning Guide/smemory-transhuge.html
[13] Chierici A et al Performance of 10 Gigabit ethernet using commodity hardware IEEE Transactions on Nuclear Science Vol. 57, N. 2
