RAID: Who, What, Where, When and Why

Revision history:
3/23  Draft V1       David Arts
4/3   Draft V2       David Arts
4/10  First Release  David Arts
Table of Contents

General Concepts and Definitions
    What is RAID?
    Origins of RAID
    Why do we need to Discuss RAID?
    What is parity?
    Degraded Array
    Rebuilding a failed drive
    Hot Spare
    Cold Spare
    Concept of performance scaling
RAID Levels
    RAID 0
    RAID 1
    RAID 4 (Not Supported by ViSX)
    RAID 5
    RAID 6
    RAID 10
What RAID level is right for ViSX?
General Concepts and Definitions

What is RAID?

RAID is a data storage technology that combines multiple disk drives into a logical unit for the purposes of data redundancy and performance improvement. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required.

The term "RAID" was first defined as a Redundant Array of Inexpensive Disks. Industry RAID manufacturers later tended to interpret the acronym as standing for Redundant Array of Independent Disks. RAID is now used as an umbrella term for computer data storage schemes that can divide and replicate data among multiple physical drives. RAID is an example of storage virtualization, and the array can be accessed by the operating system as one single drive. The different schemes or architectures are named by the word RAID followed by a number (e.g., RAID 0, RAID 1). Each scheme provides a different balance between the key goals: reliability and availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors as well as whole-disk failure.

Origins of RAID

RAID technology came about because of major deficiencies in past server architecture. Before RAID, users who needed to expand storage beyond the physical limits of the drive capacities of the day had to add another hard drive to a server and assign it another drive letter. There was no way to virtualize both hard drives into the same free space. Companies whose applications had large datasets had to strategically separate the data into discrete units that would fit on the relatively small drives of the time. It was not unheard of to see a server with 15 or more drive letters, and managing that data sprawl while keeping the application stable was very daunting. The industry came up with a way to virtualize many drives into one free-space pool for a server.
This allowed application datasets to grow beyond the limitations of discrete drives without having to maintain many mount points or drive letters in the operating system, which added flexibility and stability to the environment. The solution did, however, introduce some risk. Spanning data across multiple hard drives that appear as one increased the risk of data loss: if any drive participating in the virtualization failed, all of the data was lost, even data located on a drive that was still operational. Drive virtualization therefore added technology to protect against drive failures, so that one or more drives could fail while access to the data remained intact. This added ability was known as RAID.
Why do we need to Discuss RAID?

We need to discuss RAID because there are many different types, and the specific application and user requirements determine which RAID type is used. For example, if performance is the driver, RAID 10 is the best. If protection is the driver, RAID 6 would be selected. If the need is more efficiency of usable space, RAID 5 would be selected. In our discussion of FLASH technology, many of these needs, derived from the spinning-hard-drive world, are moot. Arming you with this information will allow you to have discussions with your customer on how to implement FLASH more efficiently and cost-effectively.

What is parity?

As mentioned above, different RAID levels have different characteristics for capacity, performance, and reliability or protection. Performance is based on two factors: how many drives are available for reads and writes, and how the protection is generated (parity or copies).

Parity is a protection scheme based on math. The basic form of parity conforms to the definition of the term: even or odd. Parity is calculated on a stripe of data across the drives, and a parity bit is generated: a 1 if the stripe adds up to an odd value, and a 0 if the stripe is even. The example below shows four hard drives in a RAID level with parity:

              Disk 1   Disk 2   Disk 3   Disk 4 (Parity)
    Stripe 1    1        1        0        0 (even)
    Stripe 2    0        1        0        1 (odd)
    Stripe 3   ...      ...      ...      ...

The parity drive (or bit) is calculated by adding up Disks 1-3 and determining whether the sum is even or odd. Stripe 1 adds up to 2, which is even, so the parity bit is 0 (even). Stripe 2 adds up to 1, which is odd, so the parity bit is 1, and so on.
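The even/odd parity calculation above, and the degraded-array read it enables (walked through in the next section), can be sketched in a few lines of Python. This is a simplification: real RAID controllers XOR whole data blocks rather than single bits, and the per-disk bit values below are hypothetical, chosen only to match the worked example.

```python
def parity_bit(data_bits):
    """1 if the data bits sum to an odd value, 0 if even."""
    return sum(data_bits) % 2

def reconstruct(surviving_bits, parity):
    """Recover the bit on a failed drive: the value that makes the
    stripe's total match the stored parity."""
    return (parity - sum(surviving_bits)) % 2

stripe_1 = [1, 1, 0]    # sums to 2 (even) -> parity 0
stripe_2 = [0, 1, 0]    # sums to 1 (odd)  -> parity 1
assert parity_bit(stripe_1) == 0
assert parity_bit(stripe_2) == 1

# Degraded read: Disk 1 has failed, so rebuild its Stripe 1 bit from
# Disks 2-3 plus the parity bit. Disks 2-3 sum to 1 (odd), parity says
# even, so the missing bit must be 1.
assert reconstruct(stripe_1[1:], parity_bit(stripe_1)) == stripe_1[0]
```

The same reversal is what a controller performs on every read of a degraded stripe, which is why degraded arrays carry a latency penalty.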
Degraded Array

If one of the drives in the above example failed, say Disk 1, the array would be in a degraded state but would still remain online. The system can remain online because it can calculate what should be on Disk 1 by reversing the parity calculation. If Disk 1 has failed, the system adds up Disks 2 and 3 of Stripe 1 and gets 1, which is odd. The parity bit says the stripe should be even, so we know that Disk 1, Stripe 1 should be a 1, because that makes the stripe even. The system does this for every read or write of a stripe by the application. The array will not perform as fast in a degraded state, because the parity calculation adds latency.

Rebuilding a failed drive

If a drive has failed and the array is in a degraded state, the only resolution is to replace the failed drive. Once you replace the failed drive, if it was a data drive (Disks 1-3), the system will use the parity information and immediately start rebuilding the data onto the replacement. This process runs in the background and takes time to complete. The time to completion depends on the resources allotted to the rebuild process, the workload currently running on the controllers, and the amount of data to be rebuilt relative to the drive size. It is not unheard of for a 3TB NLSAS drive to take days to rebuild. This parity rebuild occurs on RAID 5 and RAID 6. As of the publish date of this article, all drives supported by ViSX rebuild in less than 2 hours. If the failed drive is part of a non-parity RAID set that uses copies for protection, the rebuild is just a copy process; fewer calculations are involved, so the rebuild is significantly quicker. This is the case for RAID 1 and RAID 10.

Hot Spare

A hot spare is a drive installed in the enclosure that sits idle in case another drive fails. There is no data on the Hot Spare (HS), so it cannot be counted in the usable capacity of the solution.
The HS drive allows the rebuild process to begin as soon as a drive fails, without having to wait for a replacement drive to be shipped to the site and physically installed. The diagram below shows the same parity set as above with an empty HS drive added:

              Disk 1   Disk 2   Disk 3   Disk 4 (Parity)   HS
    Stripe 1   ...      ...      ...      ...              (no data)

An HS drive reduces the overall efficiency of the array from a raw-versus-usable-capacity perspective, and it does not increase the performance of the RAID set, since there is no data on it. Because you do not have to wait for the failure to be recognized, a new drive to be shipped, and the drive to be replaced, this is the lowest-risk option: the rebuild starts as soon as the drive fails, with no user interaction.

Cold Spare

A cold spare is almost the same concept as a Hot Spare (HS), but the drive is not installed in a drive slot; it sits on a shelf at the customer site. The rebuild process begins as soon as the user swaps the failed drive with the Cold Spare. This carries more risk than the HS scenario, because the array will be in a degraded state longer, but less risk than having no spare, because the shipping time before the rebuild can start is eliminated.

Concept of performance scaling

Just as with most tasks, an individual can only do one thing at a time, and a group of individuals can get a job done faster than one. The same concept applies to drives in an array: two drives can get the job done twice as fast as one, and three drives can do the job three times as fast. NOTE: Parity drives, Hot Spares (HS), and Cold Spares (CS) do not count toward this effect. Adding a Hot Spare will not increase performance; only active data drives contribute to performance.

RAID Levels

RAID 0
Minimum Drive Count: 2
Maximum Drive Count: None
Performance: Highest
Protection: None
Efficiency: 100%
RAID 0 is the first of the RAID levels supported by ViSX. Its characteristics favor performance over reliability: RAID 0 offers no protection against drive failure, has no option for a Hot Spare (HS), and calculates no parity. If you had 10 drives in a RAID 0, you would get 10 drives of performance, as described in the performance-scaling concept above. The big caveat is that there is no redundancy in this RAID type. You can have 100 hard drives in a RAID 0 for very fast performance, but if one of those hard drives were to fail, you would need to restore your data from the last backup. For this reason, RAID 0 is reserved for data or applications that use scratch space for very fast calculations; once the end result is achieved, the data is moved to a more secure location. A great example is genomic mapping. Once the DNA is scanned, the application performs billions of operations on the dataset to produce the final genomic map. The data is copied to a RAID 0 set for the calculations, but the initial DNA sequence and the final map reside on RAID sets that offer less performance but better redundancy. If the RAID 0 were to fail, you would still have the initial dataset from which to restart the calculations.

RAID 1
Minimum Drive Count: 2
Maximum Drive Count: 2
Performance: Good
Protection: Good - single drive failure
Efficiency: 50%

RAID 1 is often referred to as a mirror. Only 2 drives can be used in this RAID level, and the data is on both drives. Reads can come from either drive, or from both drives at once. In the example above, you would only need 2 read cycles to get all 4 blocks of data: A1 from Disk 1 and A2 from Disk 2 in the first read cycle, then A3 from Disk 1 and A4 from Disk 2 in the second. Writes see no benefit, as the data must be written twice, but the two drives can do this in unison.

RAID 4 (Not Supported by ViSX)
Minimum Drive Count: 3
Maximum Drive Count: None
Performance: OK
Protection: Good - single drive failure
Efficiency: (N-1)/N, where N is the number of drives

The only reason to discuss RAID 4 is that you will need its concepts before we go on to RAID 5. RAID 4 is the concept described above in the parity definition: you have data drives and a parity drive, and a failed drive is compensated for by calculating what is on it from the remaining data drives and the parity drive. The major setback for this RAID level is that all of the parity is on one drive, which can lead to contention. Contention is when a resource that can only do one thing at a time receives multiple simultaneous requests; the requests queue up and performance suffers. In the example above, if I just wanted to write A1, the controller would write A1 and its associated parity Ap at the same time. If the application wanted to write A1 and B2, it should be able to write them at the same time because they are on different drives. It can't, however, because A1 needs to write its parity Ap at the same time as B2 needs to write its parity Bp, and the parity drive is already busy writing Ap. Therefore the RAID set slows down.
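The bottleneck can be made concrete with a small layout sketch. This is a hypothetical illustration (drive indices and block numbering are invented for the example, not ViSX-specific): in RAID 4, every stripe stores its parity on the same drive, so concurrent writes to different data drives still serialize on that one parity drive.

```python
def raid4_layout(stripe, n_drives=4):
    """Map one stripe onto the drives; drive n_drives-1 always holds parity."""
    n_data = n_drives - 1
    layout = {i: f"block {stripe * n_data + i}" for i in range(n_data)}
    layout[n_drives - 1] = f"parity for stripe {stripe}"
    return layout

# No matter which stripe is written, its parity update lands on drive 3,
# so parity writes for different stripes queue up behind each other.
for s in range(3):
    print(s, raid4_layout(s))
```

Running this shows drive 3 holding "parity for stripe s" in every stripe, which is exactly the contention point RAID 5 removes by rotating parity.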
RAID 5
Minimum Drive Count: 3
Maximum Drive Count: None
Performance: Good
Protection: Good - single drive failure
Efficiency: (N-1)/N, where N is the number of drives

RAID 5 is the same concept as the RAID 4 described above. It gets a better performance rating because the parity is distributed among the drives. There is still only one parity bit per stripe, but distributing that parity reduces the chance of contention when writing parity. Using the example from RAID 4, I can write A1 and B2 at the same time because the Ap and Bp parity bits are on different drives. There is still a chance of contention, as in the example of writing C2 and B2, but it is much lower than in RAID 4.
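The distribution of parity can be sketched as a rotation rule. The rule below is one common convention (parity starts on the last drive and moves one drive to the left each stripe); real controllers vary in the exact placement, so treat this as an assumed illustration rather than any particular product's layout.

```python
def raid5_parity_drive(stripe, n_drives):
    """Index of the drive holding parity for a given stripe (rotates
    one drive to the left each stripe, wrapping around)."""
    return (n_drives - 1 - stripe) % n_drives

# On a 4-drive set: stripe 0 -> drive 3, stripe 1 -> drive 2, stripe 2
# -> drive 1, stripe 3 -> drive 0, then the pattern repeats. Writes to
# different stripes therefore usually update parity on different drives.
for s in range(5):
    print(f"stripe {s}: parity on drive {raid5_parity_drive(s, 4)}")
```

Contrast this with the RAID 4 sketch, where the parity index is a constant: rotation is the entire difference between the two levels.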
RAID 6
Minimum Drive Count: 4
Maximum Drive Count: None
Performance: OK
Protection: Very Good - double drive failure
Efficiency: (N-2)/N, where N is the number of drives

RAID 6, like RAID 5, uses parity for protection. While the first set of parity is rather simple, the second is more complicated to compute. Suffice it to say that the system can regenerate what is supposed to be on two missing drives using the remaining data drives and the remaining parity. Explaining the formula is outside the scope of this paper; if more in-depth information is needed, consult Wikipedia. Performance is the same as RAID 5 given the same number of active drives, but writes will be slower, as the controller has to do two sets of parity calculations for each write. RAID 6 is often used for larger hard drive capacities: because it is not unheard of for 3TB and larger drives to take days to rebuild, the chance of a second drive failure during a rebuild is too high to maintain the required uptime percentage. RAID 6, with its ability to survive a dual drive failure, is therefore needed to maintain that uptime.
RAID 10
Minimum Drive Count: 4
Maximum Drive Count: None
Performance: Very Good
Protection: Very Good - double drive failure or more*
Efficiency: 50%

RAID 10 is the first of the RAID levels that combines two RAID levels. This is called nested RAID, and it is the only nested RAID supported by ViSX at the current time. RAID 10 gives the protection of RAID 1 and the expandability of RAID 0: the striping of RAID 0 is laid over any number of RAID 1 mirror pairs. Database administrators like this RAID level because there is no parity calculation penalty, you can increase performance by simply adding drives, and you can get better than dual-drive-failure protection. *Since this RAID level is a stripe over any number of mirror sets, you can lose any number of drives and still keep the data online, as long as you do not lose more than one drive in any single mirror set. If you lose both drives in a RAID 1 set, the overlying RAID 0 fails and you will need to restore from the latest backup.
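The failure-tolerance rule above can be sketched as a quick check. The pairing scheme here (adjacent drives mirrored as 0-1, 2-3, ...) is a hypothetical illustration, not a ViSX layout: the array survives any set of failures in which no mirror pair loses both members.

```python
def raid10_pairs(n_drives):
    """Group drives into RAID 1 mirror pairs: (0, 1), (2, 3), ..."""
    return [(i, i + 1) for i in range(0, n_drives, 2)]

def array_online(n_drives, failed):
    """True if no mirror pair has lost both of its drives."""
    return all(not (a in failed and b in failed)
               for a, b in raid10_pairs(n_drives))

# Eight drives: three failures spread across different pairs are fine,
# but losing both drives of one pair takes the whole stripe down.
assert array_online(8, failed={0, 2, 5})
assert not array_online(8, failed={4, 5})
```

This is why the stat block above says "double drive failure or more": the count of survivable failures depends on where they land, not just how many there are.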
What RAID level is right for ViSX?

Now comes the question: what RAID level is right for ViSX? To answer it, we need to know some characteristics of the ViSX All Flash appliance. Consider these facts:

- The advertised IOPS numbers of ViSX are CONTROLLER limitations. Once there are 6 active drives behind the controllers, adding more drives will not increase performance.
- Failed drives, regardless of size, rebuild in less than two hours.
- A RAID set cannot span appliances.
- Flash drives are much less likely to fail than spinning hard drives.

The answer is clear: RAID 5 is the best choice. It offers the correct balance of space used for protection versus usable space, and it scales well for capacity.

Reasons why the other levels fall short:

- RAID 0: You will hit the controller limitation very quickly, and you will want some protection against drive failure. This remains an option if you need very fast calculation scratch space with minimal capacity wasted on unneeded protection.
- RAID 1: Too high an efficiency cost. The robust nature of flash negates the need to keep two copies of the data.
- RAID 6: The robust nature of flash and the fast rebuild times negate the need for dual-drive-failure protection and the additional capacity it consumes.
- RAID 10: Again, the low efficiency and the controller limitation on performance negate the need for this level of protection and performance scaling.

Sources and more reading
Wikipedia
More informationIn the late 1980s, rapid adoption of computers
hapter 3 ata Protection: RI In the late 1980s, rapid adoption of computers for business processes stimulated the KY ONPTS Hardware and Software RI growth of new applications and databases, significantly
More informationThe Data-Protection Playbook for All-flash Storage KEY CONSIDERATIONS FOR FLASH-OPTIMIZED DATA PROTECTION
The Data-Protection Playbook for All-flash Storage KEY CONSIDERATIONS FOR FLASH-OPTIMIZED DATA PROTECTION The future of storage is flash The all-flash datacenter is a viable alternative You ve heard it
More informationCopyright 2012 EMC Corporation. All rights reserved.
1 FLASH 1 ST THE STORAGE STRATEGY FOR THE NEXT DECADE Richard Gordon EMEA FLASH Business Development 2 Information Tipping Point Ahead The Future Will Be Nothing Like The Past 140,000 120,000 100,000 80,000
More informationLEVERAGING FLASH MEMORY in ENTERPRISE STORAGE
LEVERAGING FLASH MEMORY in ENTERPRISE STORAGE Luanne Dauber, Pure Storage Author: Matt Kixmoeller, Pure Storage SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless
More informationCSE 451: Operating Systems Winter Redundant Arrays of Inexpensive Disks (RAID) and OS structure. Gary Kimura
CSE 451: Operating Systems Winter 2013 Redundant Arrays of Inexpensive Disks (RAID) and OS structure Gary Kimura The challenge Disk transfer rates are improving, but much less fast than CPU performance
More informationModule 13: Secondary-Storage Structure
Module 13: Secondary-Storage Structure Disk Structure Disk Scheduling Disk Management Swap-Space Management Disk Reliability Stable-Storage Implementation Operating System Concepts 13.1 Silberschatz and
More informationHardware RAID, RAID 6, and Windows Storage Server
White Paper NETWORK ATTACHED STORAGE SOLUTIONS FOR IT ADMINISTRATORS, DECISION-MAKERS, AND BUSINESS OWNERS Network Attached Storage (NAS) Solutions with. High Data Backup and Reliability without Loss of
More informationRocketU 1144BM Host Controller
RocketU 1144BM Host Controller USB 3.0 Host Adapters for Mac User s Guide Revision: 1.0 Oct. 22, 2012 HighPoint Technologies, Inc. 1 Copyright Copyright 2012 HighPoint Technologies, Inc. This document
More informationConfiguring Short RPO with Actifio StreamSnap and Dedup-Async Replication
CDS and Sky Tech Brief Configuring Short RPO with Actifio StreamSnap and Dedup-Async Replication Actifio recommends using Dedup-Async Replication (DAR) for RPO of 4 hours or more and using StreamSnap for
More informationCS510 Operating System Foundations. Jonathan Walpole
CS510 Operating System Foundations Jonathan Walpole Disk Technology & Secondary Storage Management Disk Geometry Disk head, surfaces, tracks, sectors Example Disk Characteristics Disk Surface Geometry
More informationRocketRAID Intelli-VRM (Intelligent Virtual RAID Management) Early Warning System and Virtual System Rescue
RocketRAID Intelli-VRM (Intelligent Virtual RAID Management) Early Warning System and Virtual System Rescue Introduction The fast-paced, high-definition requirements of our modern, digital age, has increased
More informationAMD SP Promise SATA RAID Guide
AMD SP5100 + Promise SATA RAID Guide Tyan Computer Corporation v1.00 Index: Section 1: Promise Firmware Overview (Page 2) Option ROM version Location (Page 3) Firmware menus o Main Menu (Page 4) o Drive
More informationChe-Wei Chang Department of Computer Science and Information Engineering, Chang Gung University
Che-Wei Chang chewei@mail.cgu.edu.tw Department of Computer Science and Information Engineering, Chang Gung University l Chapter 10: File System l Chapter 11: Implementing File-Systems l Chapter 12: Mass-Storage
More informationBecome a MongoDB Replica Set Expert in Under 5 Minutes:
Become a MongoDB Replica Set Expert in Under 5 Minutes: USING PERCONA SERVER FOR MONGODB IN A FAILOVER ARCHITECTURE This solution brief outlines a way to run a MongoDB replica set for read scaling in production.
More informationFusion-io: Driving Database Performance
Fusion-io: Driving Database Performance THE CHALLENGE Today, getting database performance means adding disks, RAM, servers, and engineering resources, each of which unbalances already inefficient systems
More informationDefinition of RAID Levels
RAID The basic idea of RAID (Redundant Array of Independent Disks) is to combine multiple inexpensive disk drives into an array of disk drives to obtain performance, capacity and reliability that exceeds
More informationHP AutoRAID (Lecture 5, cs262a)
HP AutoRAID (Lecture 5, cs262a) Ali Ghodsi and Ion Stoica, UC Berkeley January 31, 2018 (based on slide from John Kubiatowicz, UC Berkeley) Array Reliability Reliability of N disks = Reliability of 1 Disk
More informationvsan Mixed Workloads First Published On: Last Updated On:
First Published On: 03-05-2018 Last Updated On: 03-05-2018 1 1. Mixed Workloads on HCI 1.1.Solution Overview Table of Contents 2 1. Mixed Workloads on HCI 3 1.1 Solution Overview Eliminate the Complexity
More informationMass-Storage. ICS332 Operating Systems
Mass-Storage ICS332 Operating Systems Magnetic Disks Magnetic disks are (still) the most common secondary storage devices today They are messy Errors, bad blocks, missed seeks, moving parts And yet, the
More informationCS 61C: Great Ideas in Computer Architecture (Machine Structures) Caches Part 1
CS 61C: Great Ideas in Computer Architecture (Machine Structures) Caches Part 1 Instructors: Nicholas Weaver & Vladimir Stojanovic http://inst.eecs.berkeley.edu/~cs61c/ Components of a Computer Processor
More information3.3 Understanding Disk Fault Tolerance Windows May 15th, 2007
3.3 Understanding Disk Fault Tolerance Windows May 15th, 2007 Fault tolerance refers to the capability of a computer or network to continue to function when some component fails. Disk fault tolerance refers
More informationStorage Systems. Storage Systems
Storage Systems Storage Systems We already know about four levels of storage: Registers Cache Memory Disk But we've been a little vague on how these devices are interconnected In this unit, we study Input/output
More informationChapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition
Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Objectives To describe the physical structure of secondary storage devices and its effects on the uses of the devices To explain the
More informationCMSC 424 Database design Lecture 12 Storage. Mihai Pop
CMSC 424 Database design Lecture 12 Storage Mihai Pop Administrative Office hours tomorrow @ 10 Midterms are in solutions for part C will be posted later this week Project partners I have an odd number
More informationMass-Storage. ICS332 - Fall 2017 Operating Systems. Henri Casanova
Mass-Storage ICS332 - Fall 2017 Operating Systems Henri Casanova (henric@hawaii.edu) Magnetic Disks! Magnetic disks (a.k.a. hard drives ) are (still) the most common secondary storage devices today! They
More informationCISC 7310X. C11: Mass Storage. Hui Chen Department of Computer & Information Science CUNY Brooklyn College. 4/19/2018 CUNY Brooklyn College
CISC 7310X C11: Mass Storage Hui Chen Department of Computer & Information Science CUNY Brooklyn College 4/19/2018 CUNY Brooklyn College 1 Outline Review of memory hierarchy Mass storage devices Reliability
More informationRAID Tower XIII (RT134SDEU3)
T E C H N O L O G I E S User Guide RAID Tower XIII (RT134SDEU3) www.addonics.com v5.1.11 Technical Support If you need any assistance to get your unit functioning properly, please have your product information
More informationTechnical Note P/N REV A01 March 29, 2007
EMC Symmetrix DMX-3 Best Practices Technical Note P/N 300-004-800 REV A01 March 29, 2007 This technical note contains information on these topics: Executive summary... 2 Introduction... 2 Tiered storage...
More informationChapter 6 External Memory
Chapter 6 External Memory Magnetic Disk Removable RAID Disk substrate coated with magnetizable material (iron oxide rust) Substrate used to be aluminium Now glass Improved surface uniformity Increases
More informationDELL EMC UNITY: HIGH AVAILABILITY
DELL EMC UNITY: HIGH AVAILABILITY A Detailed Review ABSTRACT This white paper discusses the high availability features on Dell EMC Unity purposebuilt solution. October, 2017 1 WHITE PAPER The information
More informationCS2410: Computer Architecture. Storage systems. Sangyeun Cho. Computer Science Department University of Pittsburgh
CS24: Computer Architecture Storage systems Sangyeun Cho Computer Science Department (Some slides borrowed from D Patterson s lecture slides) Case for storage Shift in focus from computation to communication
More informationHitachi Adaptable Modular Storage and Hitachi Workgroup Modular Storage
O V E R V I E W Hitachi Adaptable Modular Storage and Hitachi Workgroup Modular Storage Modular Hitachi Storage Delivers Enterprise-level Benefits Hitachi Adaptable Modular Storage and Hitachi Workgroup
More informationStoring Data: Disks and Files
Storing Data: Disks and Files Chapter 7 (2 nd edition) Chapter 9 (3 rd edition) Yea, from the table of my memory I ll wipe away all trivial fond records. -- Shakespeare, Hamlet Database Management Systems,
More informationDisk Scheduling COMPSCI 386
Disk Scheduling COMPSCI 386 Topics Disk Structure (9.1 9.2) Disk Scheduling (9.4) Allocation Methods (11.4) Free Space Management (11.5) Hard Disk Platter diameter ranges from 1.8 to 3.5 inches. Both sides
More informationPerformance Testing December 16, 2017
December 16, 2017 1 1. vsan Performance Testing 1.1.Performance Testing Overview Table of Contents 2 1. vsan Performance Testing Performance Testing 3 1.1 Performance Testing Overview Performance Testing
More informationActiveScale Erasure Coding and Self Protecting Technologies
WHITE PAPER AUGUST 2018 ActiveScale Erasure Coding and Self Protecting Technologies BitSpread Erasure Coding and BitDynamics Data Integrity and Repair Technologies within The ActiveScale Object Storage
More information