An Introduction to I/O and Storage Tuning. Randy Kreiser Senior MTS Consulting Engineer, SGI



40/30/30 Performance Rule
- 40% hardware setup
- 30% system software setup
- 30% application software

Analyze the application:
- Large/small I/Os
- Sequential/random I/Os
- Number of concurrent I/Os
- Percentage mix of reads and writes
- Type of I/O (direct, buffered, raw)

Proper application analysis yields the following:
- Transaction I/O size and I/O type
- RAID level to use (RAID 5, RAID 3, RAID 1)
- Number of disks to use in a RAID LUN (4+1 versus 8+1, mirror, etc.)
- Write caching or write buffering
- Cache page size (transaction I/O)

Capacity requirements dictate the size of the volume (number of RAID LUNs). Performance requirements dictate the size of the volume (number of disks/RAID LUNs).

Proper application analysis also yields:
- Mix of I/O, reads versus writes
- Number of concurrent I/Os
- Network-based I/O or local I/O
- Raw, direct, or buffered filesystem I/O

All of the above, plus the items on the previous page, should dictate the RAID level to use, the size of the RAID LUNs, and the write-caching parameters!

Transaction size dictates RAID level:
- Small I/Os are classic for RAID 5, RAID 1, or RAID 1/0 LUNs
- Large I/Os are classic for RAID 3

What is a small/large I/O size?
- Small I/O: < 32K
- Large I/O: > 256K
- Grey area: >= 32K and <= 256K, application dependent
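The size thresholds above can be captured in a small helper. This is a hypothetical sketch; the 32K/256K cutoffs are the slides' rules of thumb, not hard limits:

```python
def classify_io(size_bytes):
    """Classify an I/O request by the slides' rule-of-thumb thresholds."""
    KB = 1024
    if size_bytes < 32 * KB:
        return "small"   # classic for RAID 5, RAID 1, RAID 1/0
    if size_bytes > 256 * KB:
        return "large"   # classic for RAID 3
    return "grey"        # application dependent

print(classify_io(8 * 1024))     # small
print(classify_io(1024 * 1024))  # large
print(classify_io(64 * 1024))    # grey
```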

I/O characteristics and effects:
- The number of concurrent I/Os can affect the choice of RAID level.
- How many I/Os is too many? RAID 3 is good for sequential I/O and probably not more than several concurrent I/Os: four to eight, depending on the size and layout of the volume/filesystem.

I/O characteristics and effects:
- Sequential versus random I/O
- Raw, direct, or buffered filesystem I/O

Which type of I/O is best for which RAID level? Large sequential I/O is best suited to direct I/O through a filesystem: you can achieve near-raw performance and still gain the benefits of having a filesystem.

Which type of I/O is best for which RAID level?
- Raw I/O is best suited to small, transaction-based I/O applications such as databases. Buffered filesystem I/O could be used here, but raw I/O generally gives better performance.
- Random I/O is usually found to perform better on RAID 5, RAID 1, and RAID 1/0.
- Read-heavy applications are generally served better by RAID 5, RAID 1, or RAID 1/0; typically an application with 70% or more reads can benefit.
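The heuristics above can be collected into a small advisory function. This is a hypothetical sketch built only from the slides' rules of thumb (the 70% read threshold and the small/large I/O cutoffs), not a definitive sizing tool:

```python
def suggest_raid(io_size_bytes, read_fraction, sequential):
    """Suggest a RAID level from the slides' rules of thumb."""
    KB = 1024
    if sequential and io_size_bytes > 256 * KB:
        return "RAID 3"  # large sequential I/O
    if read_fraction >= 0.70 or io_size_bytes < 32 * KB:
        return "RAID 5, RAID 1, or RAID 1/0"  # small, random, or read-heavy
    return "application dependent"

print(suggest_raid(1024 * 1024, 0.5, True))  # RAID 3
print(suggest_raid(8 * 1024, 0.9, False))    # RAID 5, RAID 1, or RAID 1/0
```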

Other characteristics of RAID:
- Transaction-based systems require spindles to provide the I/Os to the application. Clearly the number of drives will determine the performance.
- Fibre Channel allows more drives per loop than SCSI, thus providing more I/Os per loop/bus. Up to several thousand I/Os are possible per loop.
- When partitioning RAID, never create a filesystem log on the same RAID LUN; this can cause thrashing of the disks. Creating a single partition encompassing all usable space is recommended. Creating more than three concurrently active partitions will also cause disk thrashing and degrade performance.

Other characteristics of RAID:
- Enabling command tag queuing (CTQ) and setting an appropriate queue depth is very important when using multi-threaded I/O or multiple I/Os to the same filesystem.
- The CTQ depth calculation can differ between versions of UNIX. For IRIX: CTQ depth = 256 (the maximum queue depth supported by IRIX) / total number of LUNs owned by the RAID device (redundant storage processors).
- When creating striped volumes, select an appropriate stripe unit size and use LUN interleaving. Always issue an even stripe width I/O when possible.
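The IRIX formula above works out as a one-line calculation; 256 is the maximum queue depth the slides cite for IRIX, and other UNIX versions may use a different ceiling:

```python
def ctq_depth(total_luns, max_queue_depth=256):
    """IRIX rule from the slides: divide the maximum queue depth
    evenly across all LUNs owned by the RAID device."""
    return max_queue_depth // total_luns

print(ctq_depth(8))   # 32
print(ctq_depth(16))  # 16
```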

Other characteristics of RAID (continued):
- Even stripe width I/O calculation: application I/O size = (number of LUNs * stripe unit). If the stripe group (number of LUNs in the stripe) is 4 and the stripe unit is 2048 blocks (512 bytes each), the application I/O size is 4MB.
- Preallocate the file in stripe-width I/O units when creating it for better performance (if the version of UNIX you're using supports this feature).
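The stripe width arithmetic above can be checked directly. This sketch assumes the conventional 512-byte disk block, which is what makes the slides' 4-LUN x 2048-block example come out to 4MB:

```python
BLOCK = 512  # assuming 512-byte disk blocks

def stripe_width_bytes(num_luns, stripe_unit_blocks):
    """Even stripe width: one full I/O pass across every LUN in the stripe."""
    return num_luns * stripe_unit_blocks * BLOCK

size = stripe_width_bytes(4, 2048)
print(size // (1024 * 1024), "MB")  # 4 MB, matching the slides' example
```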

Two ways to scale I/O:
- Issue even stripe width I/Os from the application program.
- Use more threads of I/O from the application program. Threads can be POSIX threads within the application, or additional application program processes.

Bottom line: generating more sustained I/O will give the best overall I/O results!
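The second approach, multiple concurrent I/O streams, can be sketched with Python threads standing in for POSIX threads. This is a hypothetical illustration; the file names, thread count, and 1MB chunk size are made up for the example:

```python
import os
import tempfile
import threading

CHUNK = 1024 * 1024  # 1 MB writes; in practice, an even multiple of the stripe width

def writer(path, n_chunks):
    """One I/O thread: stream n_chunks sequential writes to its own file."""
    with open(path, "wb") as f:
        for _ in range(n_chunks):
            f.write(b"\0" * CHUNK)

tmpdir = tempfile.mkdtemp()
threads = [
    threading.Thread(target=writer, args=(os.path.join(tmpdir, f"t{i}.dat"), 4))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(os.path.getsize(os.path.join(tmpdir, f"t{i}.dat")) for i in range(4))
print(total)  # 4 threads * 4 chunks * 1 MB = 16777216 bytes
```

Each thread keeps its own sequential stream going, so the drives see more outstanding I/O, which is exactly the "more sustained I/O" the slide calls for.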

Randy Kreiser rkreiser@sgi.com 301-572-8926