White Paper. File System Throughput Performance on RedHawk Linux


White Paper: File System Throughput Performance on RedHawk Linux
By: Nikhil Nanal, Concurrent Computer Corporation. August

Introduction

This paper reports the throughput performance of the , , and file systems on RedHawk Linux 7. running on an Intel Xeon server. Measurements were taken with the IOZone file system benchmark, using an 8 GB solid state disk and a TB spinning hard disk. The study shows that gives the best write throughput for small file sizes, while the and file systems have better read throughput; gave the best overall performance for large file sizes.

Overview

A file system plays a central role in determining the overall performance of an operating system: it serves as the interface for interacting with external devices such as the disk. Accessing storage devices may introduce significant latency before data becomes available to an application, which affects system performance. To reduce this latency, file system implementations have incorporated features that optimize when and how data is read from or written to the disk. The Linux kernel supports a number of block-based file systems, such as EXT//, , and . This flexibility of choice makes it important to evaluate which file system delivers the best throughput performance for a specific application's goals. The goal of this study is to compare the performance of different file systems under similar workloads and system configurations, and to identify the file system that best suits the end user's requirements.

Section of this report describes the configuration of the system under test. Section gives a brief overview of the IOZone benchmarking tool used in this study, along with the options enabled to create conditions likely to affect throughput and to produce realistic performance results. Section describes the test methodology. Section contains observations and results, followed by concluding remarks in Section 7.

System Configuration

Operating System: RedHawk Linux 7. x8_
Processor Model: Dual GHz Intel Xeon E-7 v
Processor Cache Size: MB
Processor Cores: cores per CPU
Main Memory: GB
Disks Tested: 8 GB SSD, SATA III; TB hard drive, SATA III
File systems under test: , , ,
Benchmark: IOZone, compiled for -bit mode.

concurrent.com 8.. info@ccur.com

IOZone Benchmark

IOZone is a general-purpose file system benchmarking tool that provides multiple command line options for defining the scope of a test. A complete list of parameters can be found in the IOZone documentation. The following options were set in the tests performed for this study.

Record Length (-r): Specifies the size of each I/O request. The number of operations multiplied by the record size equals the total file size.

Minimum File Size (-n): The minimum file size for which the test is carried out; it should be equal to the record length, in which case the number of operations is 1.

Maximum File Size (-g): The maximum file size can be any arbitrary value, but it is recommended that it be set to at least twice the total system memory size, so that memory and buffer cache effects are transcended and the effect of disk access on throughput can be observed. File sizes are specified with units of K, M or G for kilobytes, megabytes or gigabytes, respectively.

Processor Cache Size (-S): Defines the processor cache size (in Kbytes). It is used internally by IOZone for buffer alignment and purge functionality.

Test Number (-i): Selects which test to run. IOZone measures throughput for sequential write, sequential read, random read, random write, re-read, re-write, etc. A complete list can be found in the IOZone documentation.

Cache Purge enable (-p): Purges the processor cache.

Test Procedure

The following steps were taken to perform the file system benchmarking:

1. Create a partition.
2. Build a Linux file system on it.
3. Mount the file system.
4. Install the IOZone benchmark rpm, or build and install its custom package.
5. Execute the IOZone test, taking care to flush file system buffers and drop caches before each run.
6. Plot the test results using gnuplot.

Observations and Discussion of Results

Throughput measurements were made for the following six operations: sequential write, sequential read, re-read, re-write, random read and random write.
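The test procedure above can be sketched as a small shell script. The device name, mount point, file system, and IOZone option values below are illustrative assumptions, not the settings used in this study:

```shell
#!/bin/sh
# Sketch of the benchmark procedure. DEV, MNT, FS and the IOZone option
# values are hypothetical placeholders, not the study's actual settings.
DEV=/dev/sdb1      # assumed test partition (created in step 1)
MNT=/mnt/fstest    # assumed mount point
FS=ext4            # one file system under test (illustrative)

# Steps 2-3: build and mount the file system (destructive; run as root).
#   mkfs -t "$FS" "$DEV"
#   mkdir -p "$MNT" && mount -t "$FS" "$DEV" "$MNT"

# Step 5 precondition: flush buffers and drop caches before each run.
#   sync && echo 3 > /proc/sys/vm/drop_caches

# Assemble the IOZone command from the options described above:
# -a auto mode, -g max file size, -n min file size, -r record size,
# -S processor cache size (Kbytes), -p purge processor cache.
build_iozone_cmd() {
    # $1=max size  $2=min size  $3=record size  $4=cache Kbytes
    echo "iozone -a -g $1 -n $2 -r $3 -S $4 -p"
}

build_iozone_cmd 8G 4M 4M 45056
```

The runs shown with the figures also passed -L (cache line size), -M, and -i test selectors; those values are elided in the transcription and are not guessed at here.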
The following bullets summarize the results of the tests.

Noticeable is the behavior of the file systems in two separate regions: the low file size region, where the file size is still less than the system memory size (up to GB), and the high file size region, where the file size exceeds the memory size ( GB to 8 GB). The performance of the file systems varies between the two conditions. These observations apply only to the file systems tested in this study.

8 GB SSD Performance:

- SEQUENTIAL WRITE: gives the highest write throughput for small file sizes. As the file size increases, performs much better. is the worst performer, especially for smaller files.
- SEQUENTIAL READ: and have the best read throughput; and follow closely behind. With larger files, , and show similar performance, with performing slightly better.
- RE-READ: and give the best re-read performance for small sizes, whereas is the best performer for larger files, followed closely by .
- RE-WRITE: and are the best performers with small files, and with larger files. is the slowest.
- RANDOM READ/WRITE: shows the poorest, though still respectable, performance for small files; however, it maintains a higher throughput with larger files, as in all the other tests. outperforms all other file systems, with following closely.

TB HDD Performance:

- SEQUENTIAL WRITE: is the best performer for small files, followed closely by . With larger files, is significantly better than all the other file systems. has the poorest write performance of the four file systems.
- SEQUENTIAL READ/RE-READ: All file systems show similar performance, except with large files, where is the best.
- RE-WRITE: gives the best performance for small files; and compete closely for second best with small files. performs significantly better with larger files.
- RANDOM WRITE: is the best performer with small files, and with larger files; follows closely. is not as good at random seeks. gives the poorest random write throughput.
- RANDOM READ: As with sequential read, the random read performance of all the file systems is found to be nearly the same.

[Figure: File system performance (throughput) comparison for the 8 GB SSD using IOZone; panels for Write, Re-write, Read, Re-read, Random-Read and Random-Write. System: Intel(R) Xeon(R) CPU E-7 v, GHz, GB RAM, -core HTT. Maximum file size 8 GB; minimum file size MB; record size MB. Machine: Linux beast -rt7-redhawk-7.-custom SMP PREEMPT Wed Jun. Purge mode on; output in GBytes/sec; processor cache size set to 7 Kbytes; processor cache line size set to bytes; file stride size set to 7 * record size. Command line used: iozone -a -g 8G -r m -n m -S 7 -L -M -p -i [-i -i]]

[Figure: File system performance (throughput) comparison for the TB HDD using IOZone; panels for Write, Re-write, Read, Re-read, Random-Read and Random-Write. System: Intel(R) Xeon(R) CPU E-7 v, GHz, GB RAM, -core HTT. Maximum file size 8 GB; minimum file size MB; record size MB. Machine: Linux beast -rt7-redhawk-7.-custom SMP PREEMPT Wed Jun. Purge mode on; output in GBytes/sec; processor cache size set to 7 Kbytes; processor cache line size set to bytes; file stride size set to 7 * record size. Command line used: iozone -a -g 8G -r m -n m -S 7 -L -M -p -i [-i -i]]
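Step 6 of the test procedure plots results of this kind with gnuplot. As a minimal sketch, a per-operation plot script can be generated as below; the data file name "ext4_write.dat" and its assumed two-column layout (file size in MB, throughput in GBytes/sec) are illustrative assumptions, since the study's raw output files are not included here:

```shell
#!/bin/sh
# Generate a minimal gnuplot script for one IOZone operation.
# The data file "ext4_write.dat" and its column layout are assumed
# for illustration; substitute the actual IOZone output.
cat > plot_write.gp <<'EOF'
set title "Sequential write throughput vs. file size"
set xlabel "File size (MB)"
set ylabel "Throughput (GBytes/sec)"
set logscale x 2
plot "ext4_write.dat" using 1:2 with linespoints title "ext4"
EOF
echo "wrote plot_write.gp"
# Run with:  gnuplot -p plot_write.gp   (once the data file exists)
```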

Conclusion

The RedHawk 7. kernel mounted with is a good option for write-intensive applications, whereas is a good file system for read-intensive applications. For general-purpose applications, could be a good choice for both reads and writes. However, this holds only for small files; if an application works mostly with larger files, the file system shows significantly higher read and write performance. For random, bursty applications, is recommended regardless of file size, as it gives more consistent throughput. It should be noted that these recommendations are based only on this throughput study. Other application needs, such as crash resiliency and data integrity, may affect throughput to different extents and may warrant the selection of a different file system.