CONFIGURING ftScalable STORAGE ARRAYS ON OpenVOS SYSTEMS



Abstract

ftScalable™ Storage G1, G2, and G3 arrays are highly flexible, scalable hardware storage subsystems that offer a variety of configuration options for capacity, performance, and availability. Understanding the benefits and pitfalls of the various configuration options, especially how they interact with the OpenVOS operating system, allows you to create an optimal disk topology. This white paper details the various RAID types supported by ftScalable Storage, outlining their strengths and weaknesses, typical usages, and ways to design a disk topology best suited to your OpenVOS application.

NOTE: The Glossary of terms at the end of this document defines some of the common storage-industry terms used in this document.

Operating system data versus user data

New VOS systems ship with a pre-initialized system disk, usually named sid_master_disk. This disk is intended to hold operating system files, data, logs, and configuration information. Do not place your own user data (files, programs, logs, etc.) on this disk. Instead, create one or more data disks for your information. In addition to defining a clear separation between system data and user data, there is also a performance advantage. Each virtual disk (VDISK) or storage pool is assigned to one of two storage controllers, so using at least two virtual disks or pools allows the storage array to use both storage controllers for greater performance. As additional virtual disks are created, they are balanced across the two controllers. See Balancing VDISKs between storage controllers later in this document.

Linear versus pooled storage/tiered storage

Linear allocation is the only method supported on the earlier G1 and G2 ftScalable Storage arrays, while ftScalable G3 storage supports two types of storage allocation: linear and pooled. With linear allocation, the administrator assigns physical disks to create a VDISK and then creates one or more volumes on that VDISK. With pooled storage, the administrator creates disk groups which make up a storage pool and then creates volumes within the pool. All volumes created this way span all disk groups in the pool. Storage pools can be made up of differently sized disks. If a customer's storage requirements increase, additional disk groups can be dynamically added to the storage pool.

With linear storage it is a best practice to create volumes (LUNs) sized to match the capacity of the VDISK. This step is not necessary with pooled storage due to the introduction of tiered storage. If all the physical disks in a pool are the same type, there is no functional difference between a storage pool and a large VDISK. However, pools may be made up of physical disks with dissimilar performance characteristics, e.g., some rotating HDDs and some SSDs. Such a pool is called tiered storage. ftScalable G3 storage dynamically allocates the frequently used blocks of each volume in a tiered pool to the higher-performance devices. Typically, volumes contain a large number of blocks that are infrequently accessed and a smaller number of blocks that are frequently used. Dynamically relocating the frequently accessed blocks to higher-performance disks allows customers to gain most of the performance benefit of SSD storage at a lower cost than an all-SSD implementation. When implementing tiered storage, it is recommended that a minimum of 10-15% of the available storage in the pool consist of SSD storage.

NOTE: Henceforth, the term VDISK refers to both a linear storage VDISK and a pooled storage disk group.
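As a rough illustration of the 10-15% SSD guideline for tiered pools, the following Python sketch estimates how much SSD capacity to include when planning a pool of a given size. The function name and default fraction are illustrative assumptions, not part of any Stratus tool.

```python
def recommended_ssd_capacity(pool_capacity_gb, ssd_fraction=0.10):
    """Estimate the SSD capacity to include in a tiered storage pool.

    pool_capacity_gb -- total capacity planned for the pool, in GB
    ssd_fraction     -- fraction of the pool to place on SSD
                        (this paper suggests 0.10 to 0.15)
    """
    if not 0.10 <= ssd_fraction <= 0.15:
        print("warning: outside the suggested 10-15% SSD range")
    return pool_capacity_gb * ssd_fraction

# Example: a 12,000 GB tiered pool would need roughly 1,200-1,800 GB
# of SSD capacity to follow the guideline.
print(recommended_ssd_capacity(12000))        # 1200.0
print(recommended_ssd_capacity(12000, 0.15))  # 1800.0
```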
RAID types

The ftScalable Storage array supports a variety of RAID types. These include:

- Non-fault-tolerant RAID types (RAID-0, NRAID)
- Parity-based RAID types (RAID-3, RAID-5, and RAID-6)
- Mirroring RAID types (RAID-1)
- Combination RAID types (RAID-10, RAID-50)

You must specify a RAID type when creating each VDISK. Each RAID type has unique availability, cost, performance, scalability, and serviceability characteristics. Understanding them allows you to make an informed selection when creating your array disk topology.

Non-fault-tolerant RAID types

There are two non-fault-tolerant RAID types available in ftScalable Storage: RAID-0 and NRAID.

RAID-0

A RAID-0 VDISK consists of at least two physical disk drives, with data striped across all the physical disk drives in the set. It provides the highest degree of I/O performance, but offers no fault tolerance. Loss of any physical disk drive causes total loss of the data in this VDISK. Since RAID-0 is a non-fault-tolerant RAID type, the ftScalable Storage array cannot automatically take marginal or failing physical disk drives out of service and proactively rebuild the data using an available spare disk drive. Instead, recovery depends entirely on the traditional OpenVOS method of fault tolerance via duplexed disks.

As a result, a series of manual service operations is required to recreate and recover any RAID-0 VDISK. These operations include: deleting the failing VDISK, physically removing the bad physical disk, installing a replacement physical drive, recreating the VDISK, reformatting the logical disk, and re-duplexing via VOS. Your data is simplexed until all these recovery operations are completed. For further information regarding the impacts that physical drive insertions or removals have on I/O processing, see Physical disk drive insertions and removals: Impacts to I/O performance.

Stratus does not recommend using this RAID type without also using the software-based mirroring available in OpenVOS. Even with OpenVOS mirroring, you should strongly consider the potential for data loss given the manual service operations and associated time required to restore your data to full redundancy.

NRAID

An NRAID VDISK is essentially a single physical disk drive without any fault tolerance. It offers no striping and thus has the performance characteristics of a single physical disk drive. NRAID VDISKs have the same availability and serviceability characteristics as a RAID-0 VDISK.

Parity-based RAID types: RAID-3, RAID-5, RAID-50, and RAID-6

The ftScalable Storage array supports four types of parity-based VDISKs: RAID-3, RAID-5, RAID-50, and RAID-6. Given the low usage of RAID-3 and RAID-50, this white paper focuses on the more commonly used RAID-5 and RAID-6 types. These RAID types use parity-based algorithms and striping to offer high availability at a reduced cost compared to mirroring. A RAID-5 VDISK uses the capacity equivalent of one physical disk drive for storing XOR-generated parity data, while a RAID-6 VDISK uses the equivalent of two physical disk drives, as both XOR and Reed-Solomon parity data are generated and stored. Both RAID-5 and RAID-6 VDISKs distribute parity and data blocks among all the physical disk drives in the set. VDISKs using parity-based RAID types require less storage capacity for RAID overhead compared to mirroring RAID types. The minimum number of physical disk drives needed to create a RAID-5 VDISK is three, while RAID-6 requires at least four. A RAID-5 VDISK can survive a single disk drive failure without data loss, while a RAID-6 VDISK can survive two drive failures.

The ftScalable Storage array can proactively remove a marginal or failing physical disk drive from the VDISK without affecting availability of data. In addition, if a spare drive is available, recovery mode starts automatically. Since the recovery is handled transparently at the storage array level, there is no need for any operator intervention, physical drive insertions, or re-duplexing of logical disks in OpenVOS. You can then schedule a time to replace the failed disk drive and create a new spare. See Physical disk drive insertions and removals: Impacts to I/O performance for further information regarding the impacts that a physical drive removal and insertion have on I/O processing.

Both types offer excellent read performance, but write performance is impacted by the need to write not only the data block, but also by the calculation and read/modify/rewrite operations necessary for the parity block(s). A RAID-5 or RAID-6 VDISK running in degraded mode after a single physical disk drive failure has a medium impact on throughput. However, a VDISK in recovery mode with data being rebuilt has a high impact on throughput.
A RAID-6 VDISK running in degraded mode resulting from two failed physical disk drives has a medium to high impact on throughput, while one running in recovery mode with two drives being rebuilt has an extremely high impact on throughput. Review Table 1 for the estimated impact to I/O when running with a RAID-5 or RAID-6 VDISK in degraded or recovery mode.

Table 1. Estimated degradation to I/O performance

                      RAID-5/RAID-6         RAID-5/RAID-6         RAID-6               RAID-6
                      degraded mode,        recovery mode,        degraded mode,       recovery mode,
                      single drive failure  single drive failure  dual drive failure   dual drive failure
Read performance      40-50%                50-60%                50-60%               60-70%
Write performance     10-15%                15-25%                20-25%               25-35%

NOTE: These are estimates, and the actual impact in your environment may vary depending on your configuration, workload, and your application I/O profile.
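For planning purposes, the Table 1 estimates can be applied to a measured baseline to get a rough idea of the throughput remaining while a VDISK is degraded or rebuilding. The sketch below is illustrative only; the scenario names are assumptions, the percentages are simply the midpoints of the Table 1 ranges, and the results are estimates, not guarantees.

```python
# Approximate fraction of baseline throughput lost in each scenario,
# taken as the midpoint of the corresponding Table 1 range.
DEGRADATION = {
    "degraded_single_drive": {"read": 0.45, "write": 0.125},
    "recovery_single_drive": {"read": 0.55, "write": 0.20},
    "degraded_dual_drive":   {"read": 0.55, "write": 0.225},
    "recovery_dual_drive":   {"read": 0.65, "write": 0.30},
}

def estimated_throughput(baseline_iops, scenario, io_type):
    """Return the rough remaining IOPS for a degraded/recovery scenario."""
    loss = DEGRADATION[scenario][io_type]
    return baseline_iops * (1.0 - loss)

# Example: a LUN that normally sustains 5,000 read IOPS might deliver
# only about 1,750 read IOPS while its RAID-6 VDISK rebuilds two drives.
print(round(estimated_throughput(5000, "recovery_dual_drive", "read")))
```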

Mirroring RAID types: RAID-1 and RAID-10

With ftScalable Storage, you can create two types of mirroring RAID VDISKs: RAID-1 and RAID-10.

RAID-1

A RAID-1 VDISK is a simple pair of mirrored physical disk drives. It offers good read and write performance and can survive the loss of a single physical disk drive without impacting data availability. Reads can be handled by either physical drive, while writes must be written to both drives. Since all data is mirrored in a RAID-1 VDISK, there is a high degree of RAID overhead compared to parity-based RAID types. Recovery from a failed physical disk drive is a straightforward operation, requiring only a re-mirroring from the surviving partner. The ftScalable Storage array can proactively remove a marginal or failing physical disk drive from a RAID-1 VDISK without affecting availability of data. As with parity-based RAID types, if a spare drive is available, recovery mode starts automatically. Since the recovery is handled transparently at the ftScalable Storage array level, there is no need for any operator intervention, physical drive insertions, or re-duplexing of logical disks in OpenVOS. You can then schedule a time to replace the failed disk drive and create a new spare. See Physical disk drive insertions and removals: Impacts to I/O performance for further information regarding the impacts that a physical drive removal and insertion have on I/O processing. There is typically a small impact on performance while running in either degraded or recovery mode.

RAID-10

A RAID-10 VDISK is composed of two or more RAID-1 disk pairs, with data blocks striped across them all. A RAID-10 VDISK offers high performance, scalability, and the ability to potentially survive multiple physical drive failures without losing data. The serviceability, RAID overhead, and impact on performance while running in degraded or recovery mode are similar to those of a RAID-1 VDISK.

RAID type characteristics summary

Table 2 summarizes the characteristics of the RAID types discussed. It rates VDISKs of each type in several categories on a scale of 0 to 5, where 0 is very poor and 5 is very good.

Selecting a RAID type

Each RAID type has specific benefits and drawbacks. By understanding them, you can select the RAID type best suited to your environment. Keep in mind that you can create multiple VDISKs that use any of the RAID types supported by the ftScalable Storage array, allowing you to design a RAID layout that is optimal for your application and system environment. You do not need to use the same RAID type for all the VDISKs on your ftScalable Storage array.

NOTE: Stratus's use of a specific RAID type and LUN topology for the OpenVOS system volume does not imply that it is the optimal RAID type for your application or data.

RAID-5 is a good choice for data and applications where write throughput or latency is not critical (for example, batch processing), or which are heavily biased toward reads versus writes. In return for accepting lower write throughput and higher latency, you can use fewer physical disk drives for a given capacity, yet still achieve a high degree of fault tolerance. However, you must also consider the impact that running with a VDISK in either degraded or recovery mode could have on your application. Overall I/O performance and latency during degraded and recovery mode suffer more with parity-based RAID types than with mirroring RAID types.
Table 2. RAID type characteristics

The table rates NRAID, RAID-0, RAID-1, RAID-10, RAID-5, and RAID-6 VDISKs in each of the following categories: availability, RAID overhead, read performance, write performance, degraded mode performance, and recovery mode performance. Degraded and recovery mode performance are rated N/A for NRAID and RAID-0, since those types are not fault-tolerant.

NOTE: You should only compare values within each row; comparisons between rows are not valid.
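To make the RAID overhead comparison concrete, the sketch below computes usable capacity and overhead for the RAID types discussed, given a drive count and per-drive capacity. The formulas follow the descriptions earlier in this paper (RAID-1/RAID-10 mirror half the drives; RAID-5 gives up one drive's worth of capacity and RAID-6 two); the function itself is only an illustration, not a Stratus sizing tool.

```python
def usable_capacity_gb(raid_type, drive_count, drive_size_gb):
    """Rough usable capacity of a VDISK before any OpenVOS overhead.

    NRAID/RAID-0: no redundancy, all capacity is usable.
    RAID-1/RAID-10: mirrored, half the capacity is usable.
    RAID-5: one drive's worth of capacity holds parity.
    RAID-6: two drives' worth of capacity hold parity.
    """
    total = drive_count * drive_size_gb
    if raid_type in ("NRAID", "RAID-0"):
        return total
    if raid_type in ("RAID-1", "RAID-10"):
        return total / 2
    if raid_type == "RAID-5":
        return (drive_count - 1) * drive_size_gb
    if raid_type == "RAID-6":
        return (drive_count - 2) * drive_size_gb
    raise ValueError(f"unknown RAID type: {raid_type}")

# Example: six 600 GB drives.
for rt in ("RAID-10", "RAID-5", "RAID-6"):
    cap = usable_capacity_gb(rt, 6, 600)
    overhead = 1 - cap / (6 * 600)
    print(f"{rt}: {cap:.0f} GB usable, {overhead:.0%} RAID overhead")
```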

Mirroring RAID types (RAID-1 or RAID-10) offer a better solution for data and applications which require optimum write throughput with the smallest latencies (for example, online transaction processing systems), which perform more writes than reads, or which cannot tolerate degraded performance in the event of a physical drive failure. These RAID types eliminate the additional I/Os resulting from the read-before-write penalty for parity data in RAID-5 or RAID-6, so writing data is a simple operation. RAID-10 is generally a better choice than RAID-1 because it allows you to stripe the data over multiple physical drives, which can significantly increase overall read and write performance. See OpenVOS multi-member logical disks versus ftScalable RAID-10 VDISKs and OpenVOS queue depth and ftScalable Storage for additional information about OpenVOS I/O queuing, LUN counts, and striping considerations.

Consider using NRAID and RAID-0 VDISKs only if your data and applications can tolerate longer periods with simplexed data after a drive failure or are not sensitive to longer latencies. If you use these RAID types, you must then use OpenVOS's mirroring to provide the fault tolerance. Selecting one of these RAID types allows using the fewest physical disk drives for a given capacity point, in exchange for lessened availability. Given these restrictions and availability implications, Stratus does not recommend using these RAID types.

If you cannot decide whether to select a parity-based or mirroring RAID type, the prudent choice is to use one of the mirroring RAID types, as they offer the best performance and availability characteristics in the majority of applications.

In summary, customers should carefully consider the following points when selecting a RAID type for a VDISK:

- RAID-0 and NRAID are not fault-tolerant and are therefore not recommended.
- RAID-5 and RAID-6 are fault-tolerant but are not suitable for high-volume, low-latency transaction processing environments due to their additional I/O overhead. RAID-5 and RAID-6 are acceptable for storing lightly-referenced data, such as historical data or archives.
- RAID-1 usually offers the maximum throughput and minimum response times. Customers who need larger capacity volumes than are available via RAID-1 should define RAID-10 volumes at the VOS level (via multi-member VOS disk volumes). This method usually offers higher throughput than defining RAID-10 volumes at the ftScalable Storage level.

Drive types

The ftScalable G2 and G3 storage arrays support both HDDs and SSDs within the array. Each disk drive type has different availability, cost, performance, and capacity characteristics.

Hard disk drives

A hard disk drive is a device for storing and retrieving computer data. It consists of one or more rigid, rapidly rotating discs (platters) coated with magnetic material, with magnetic heads arranged to write data to the surfaces and read it from them. To satisfy a read, the disk performs what is called a seek: the magnetic head assembly is positioned over the track where the data resides, waits until the sector where the data resides rotates under the head assembly, and transfers the data to the host. ftScalable Storage supports enterprise-class HDDs that rotate at 15,000 revolutions per minute.

Solid state disks

An SSD is similar to a hard disk drive in that they both use the same physical host interface and host interface protocols.
In an SSD, however, the rotating media and head assembly have been replaced by flash memory to store the data and a controller to manage the memory and the interface to the host. In general, SSDs perform best when processing small-block random reads. Small-block random writes also show significant improvement over HDDs. However, large-block sequential reads do not deliver the same levels of improvement over HDDs, since in both cases the data is essentially being read out of disk cache. ftScalable Storage G2 and G3 support enterprise-class SSDs that utilize flash memory to deliver the highest availability and performance.

HDD versus SSD summary

The table below compares the characteristics of the enterprise-class HDDs and SSDs supported by ftScalable Storage. It rates them in several categories on a scale of 0 to 5, where 0 is very poor and 5 is very good.

Table 3. Drive type characteristics

Category                                                        Hard disk drive   Solid state disk
Random read (small block)                                       2                 5
Random write (small block)                                      2                 4
Sequential read (large block)                                   2                 2
Sequential write (large block)                                  2                 2
Cost/GB                                                         4                 1
Capacity                                                        4                 3
Environmental tolerance (shock and vibration, temperature, etc.)  2                 4

Creating VDISKs

The G1 and G2 storage arrays allocate virtual disks (VDISKs) using a method known as linear storage. A VDISK created using linear storage has exclusive use of one or more physical disks. The number of physical disks varies by the RAID type you specify when you create the VDISK. This method is simple and provides total control over the relation of VDISKs to physical disks, but it can also waste disk space when a VDISK is smaller than the usable capacity of the underlying physical disks. It also requires that the replacement disks be identical to the original disks.

The G3 storage array can allocate virtual disks using either the traditional linear storage technique or a new pooled storage technique. With the pooled storage technique, physical disks are assigned to disk groups which make up a storage pool. Using this method, the size of each physical disk can vary, and the physical disks are essentially anonymous suppliers of disk capacity to the storage pools. See Linear versus pooled storage/tiered storage earlier in this document.

While the ftScalable Storage array supports partitioning a VDISK into multiple LUNs, this can introduce significant performance penalties that affect both I/O throughput and latency for all the LUNs on that VDISK if the VDISK is configured with HDDs. As a result, Stratus does not recommend configurations using multiple LUNs per VDISK configured with HDDs for customer data. There are several reasons for the performance penalties seen when running multi-LUN VDISK configurations, but the basic ones are disk contention and head seeks. Each time the ftScalable Storage array has to satisfy an I/O request to one of the LUNs in a multi-LUN VDISK configuration, it has to seek the physical disk drive heads. The more LUNs that comprise a VDISK, the more head movement occurs. The more head movement there is, the greater the latencies become as disk contention increases. Remember, all I/O must eventually be handled by the physical disk drives that make up the VDISK; the array's cache memory cannot replace this physical I/O. Since SSDs do not require a seek of the physical disk drive heads, this penalty is not incurred in VDISKs configured with SSDs and, on ftScalable G3, can be dramatically reduced when using tiered storage pools. See the discussion of Linear versus pooled storage/tiered storage earlier in this document.

Stratus has run benchmarks demonstrating that the aggregate I/O throughput of a 4-LUN VDISK is about half the throughput of the same VDISK configured as a single LUN, while the average latency can be over four times greater. Figure 1 shows the impacts that using multiple LUNs per VDISK has on performance. These charts show the aggregate write I/Os per second (IOPS) and maximum latencies in milliseconds (ms) seen when using a 4-drive RAID-5 VDISK configured with one, two, or three LUNs.

Figure 1. Multiple LUNs per VDISK performance impacts

NOTE: These charts are based on results from Stratus internal lab testing under controlled conditions. Your actual results may vary.

Partitioning VDISKs into LUNs

This section pertains to linear storage allocation.
Before a VDISK can be used by OpenVOS, it must first be partitioned into one or more LUNs. Each LUN is assigned to a specific VOS member disk. One or more member disks are combined into a single OpenVOS logical disk.
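The hierarchy just described (physical drives grouped into a VDISK, the VDISK partitioned into LUNs, each LUN serving as a VOS member disk, and member disks combined into a logical disk) can be summarized in a few lines of code. The Python dataclasses below are purely illustrative; the names are not OpenVOS or ftScalable identifiers.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Lun:
    lun_id: int
    size_gb: float

@dataclass
class Vdisk:
    name: str
    raid_type: str
    physical_drives: List[str]                   # e.g. enclosure/slot labels
    luns: List[Lun] = field(default_factory=list)

@dataclass
class LogicalDisk:
    name: str
    members: List[Lun] = field(default_factory=list)  # each member is one LUN

# Example: a simple OpenVOS logical disk made of two duplexed member disks,
# where each member is the single LUN on a RAID-1 VDISK (the layout
# described in the next section).
vd_a = Vdisk("vdisk_a", "RAID-1", ["encl1_slot1", "encl1_slot2"], [Lun(0, 600.0)])
vd_b = Vdisk("vdisk_b", "RAID-1", ["encl1_slot3", "encl1_slot4"], [Lun(1, 600.0)])
data_disk = LogicalDisk("data01", members=[vd_a.luns[0], vd_b.luns[0]])
```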

Assigning OpenVOS logical disks to LUNs

The simplest approach is to assign each member disk within an OpenVOS logical disk to a LUN. If you need a VOS logical disk that is bigger than a single LUN, or if you want the performance benefits of striping, you can create a VOS multi-member logical disk where each member disk is a single LUN. Figure 2 shows the relationship between physical disk drives, VDISKs, and LUNs on the ftScalable Storage array and OpenVOS logical disks.

Figure 2. Logical/physical relationship

This is an example of a simple OpenVOS logical disk, consisting of two member disks, where each member disk is a single RAID-1 VDISK/LUN on an ftScalable Storage array.

OpenVOS multi-member logical disks versus ftScalable RAID-10 VDISKs

There are now a variety of ways in which you can implement striping in OpenVOS. Prior to the release of ftScalable Storage, the only method available was to use multiple physical disk drives configured as a VOS multi-member logical disk. With the advent of ftScalable Storage, you can create RAID-10 VDISKs where the array handles all striping. You can combine VOS disk striping with ftScalable Storage striping by creating a VOS multi-member logical disk where each member is a LUN that uses one of the RAID striping methods (RAID-10, RAID-50).

If you want to obtain the performance benefits of striping, Stratus recommends you use non-striping RAID type VDISKs (for example, RAID-1 or RAID-5), with a single LUN per VDISK, and combine them into VOS multi-member logical disks. This allows OpenVOS to maintain a separate disk queue for each LUN, maximizing throughput while minimizing latency. Review OpenVOS queue depth and ftScalable Storage for some considerations regarding numbers of allocated LUNs and potential performance implications.

OpenVOS queue depth and ftScalable Storage

All storage arrays, physical disk drives, Fibre Channel HBAs, and modern operating systems have various sized queues for I/O requests. The queue depth basically defines how many unique I/O requests can be pending (queued) for a specific device at any given time. A queue-full condition occurs when a device becomes extremely busy and cannot add any additional I/O requests onto its queue. When a queue-full condition exists, new I/O requests are aborted and retried until there is space on the queue again. This causes increased I/O latency, increased application response times, and decreased I/O throughput.

Each host (Fibre Channel) port on the ftScalable Storage array has a single queue with a depth of 128 for G1 and G2 and 1024 for G3. OpenVOS maintains a separate queue for every LUN. The queue depth is determined by the setting of the opt_for parameter for that particular volume in disks.table. Refer to the section Per-volume optimization attributes later in this document for more detail.

opt_for setting    Queue depth
thruput            18
response           6
balanced           12

In OpenVOS configurations with a large number of LUNs, it is possible to fill the host port queues on the ftScalable Storage array with a relatively small number of very busy LUNs. This results in I/O requests for other LUNs receiving a queue-full condition status, delaying them and your application.
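The interaction between per-LUN queue depths and the array's host port queue can be estimated ahead of time. The sketch below adds up the worst-case outstanding requests for a set of LUNs, using the opt_for queue depths listed above, and compares the total against the host port queue depth (128 for G1/G2, 1024 for G3). It is a planning illustration only, not a Stratus utility, and it assumes every LUN behind the port could be saturated simultaneously.

```python
# Per-LUN queue depths by opt_for setting, from the table above.
QUEUE_DEPTH = {"thruput": 18, "response": 6, "balanced": 12}

# Host port queue depth by array generation.
HOST_PORT_QUEUE = {"G1": 128, "G2": 128, "G3": 1024}

def worst_case_outstanding(lun_opt_for_settings):
    """Sum the per-LUN queue depths for every LUN behind one host port."""
    return sum(QUEUE_DEPTH[setting] for setting in lun_opt_for_settings)

def check_port_queue(lun_opt_for_settings, generation="G2"):
    """Warn if a fully busy set of LUNs could overrun the port queue."""
    demand = worst_case_outstanding(lun_opt_for_settings)
    limit = HOST_PORT_QUEUE[generation]
    if demand > limit:
        print(f"{demand} outstanding requests possible vs. port queue of "
              f"{limit}: busy LUNs may cause queue-full conditions")
    else:
        print(f"{demand} outstanding requests possible vs. port queue of "
              f"{limit}: within the port queue depth")

# Example: twelve LUNs optimized for throughput.
check_port_queue(["thruput"] * 12, generation="G2")   # 216 > 128
check_port_queue(["thruput"] * 12, generation="G3")   # 216 <= 1024
```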

Assigning files to VOS logical disks

When possible, assign randomly-accessed files and sequentially-accessed files to separate logical disks. Mixing the two types of file access on the same logical disk increases the worst-case time (maximum latency) needed to access the randomly-accessed files and reduces the maximum possible throughput of the sequentially-accessed files. Also keep in mind that you can use a different RAID type for each logical disk, to best match the I/O access type.

Balancing VDISKs between storage controllers

The ftScalable Storage array has an active-active storage controller design, with two controllers actively processing I/O. However, every VDISK is assigned to a specific storage controller when allocated: either controller A or controller B. All I/O for a specific VDISK is handled by the assigned storage controller. If you do not specify which controller you want assigned to a particular VDISK, the ftScalable Storage array assigns them in a round-robin fashion, alternating between the two controllers. While this may balance the number of VDISKs between the two storage controllers, it may not ensure that the I/O workload is evenly split.

For example, suppose that you have six VDISKs in your configuration, called VDISK1 through VDISK6. VDISK1 and VDISK3 handle all your primary online data and are both very I/O intensive, while the rest of the VDISKs handle offline archival data and are much less busy. If you did not explicitly assign the VDISKs to controllers, you would end up with VDISK1, VDISK3, and VDISK5 assigned to controller A, while VDISK2, VDISK4, and VDISK6 would be on controller B. This would result in an unbalanced I/O load between the two storage controllers. You should consider the estimated I/O workloads when allocating your VDISKs and, if necessary, manually assign specific VDISKs to controllers during the VDISK creation process. If you find that your workload changes or that you have an unbalanced I/O allocation, you can reassign an existing VDISK to a new storage controller.

CAUTION: Changing controller ownership of a VDISK is a disruptive operation and cannot be done without temporarily impacting access to your data while the VDISK is moved between the two controllers and the LUN number is re-assigned. This cannot be done to online OpenVOS logical disks. Consult with Stratus for assistance before doing this operation.

Single VDISK configurations

While it is possible to create a single large VDISK on an ftScalable Storage array, you should avoid doing so, as this has performance penalties and is not recommended by Stratus. As described previously, there are two storage controllers within each ftScalable Storage array, running in an active-active mode. Each VDISK is assigned to a specific storage controller that owns and executes all I/O processing for that VDISK. In a single-VDISK configuration, you are halving the total available performance of the ftScalable Storage array, as you will have only one of the two storage controllers processing all I/O requests. In OpenVOS configurations, there is a separate queue of I/O disk requests for each LUN. By having only a single VDISK, you minimize OpenVOS's ability to send parallel I/O requests to the ftScalable Storage array, again degrading your overall I/O throughput and latency.
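As the Balancing VDISKs between storage controllers section above notes, round-robin assignment balances VDISK counts, not workload. One simple way to plan explicit assignments is to balance estimated per-VDISK I/O rates across the two controllers. The greedy approach below is a planning illustration only; the VDISK names and IOPS figures are hypothetical and echo the six-VDISK example above.

```python
def assign_controllers(estimated_iops):
    """Greedily assign VDISKs to controllers A and B by estimated IOPS.

    estimated_iops -- dict of VDISK name -> estimated I/O rate
    Returns (assignment dict of VDISK name -> "A" or "B", per-controller load).
    """
    load = {"A": 0, "B": 0}
    assignment = {}
    # Place the busiest VDISKs first, each on the currently lighter controller.
    for vdisk, iops in sorted(estimated_iops.items(),
                              key=lambda kv: kv[1], reverse=True):
        target = "A" if load["A"] <= load["B"] else "B"
        assignment[vdisk] = target
        load[target] += iops
    return assignment, load

# Hypothetical workload: two busy online VDISKs, four lightly used archival VDISKs.
workload = {"VDISK1": 4000, "VDISK3": 3500,
            "VDISK2": 300, "VDISK4": 250, "VDISK5": 200, "VDISK6": 150}
plan, load = assign_controllers(workload)
print(plan)   # VDISK1 and VDISK3 end up on different controllers
print(load)   # roughly balanced estimated I/O per controller
```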
VDISK/LUN sizing implications on OpenVOS

Raw versus usable capacity

The OpenVOS operating system utilizes meta-data to ensure the highest degree of data integrity for disk data. This meta-data is stored in a separate physical sector from the data itself. As a result, OpenVOS uses nine physical sectors to store every eight sectors of usable data. When configuring ftScalable VDISKs and LUNs, remember that the size presented to OpenVOS represents raw capacity and does not reflect the meta-data overhead. Your usable capacity is approximately 8/9ths (88%) of the raw physical size of the VDISK/LUN. In addition, OpenVOS also reserves approximately 1.1 GB of space for partitioning overhead. OpenVOS normally utilizes storage on a LUN rounded to the nearest 5 GB boundary. This allows partnering of LUNs with slightly dissimilar sizes. The only exception to this rounding is for LUNs that exactly match the size of certain legacy OpenVOS disk types (for example, a D913 or D914).
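A quick way to estimate how much VOS-usable space a LUN will yield is to apply the 8/9 meta-data factor, subtract the roughly 1.1 GB partitioning reserve, and round to a 5 GB boundary. The sketch below does exactly that; it is an approximation based on the figures quoted above (it assumes rounding down to the 5 GB boundary and ignores the legacy disk-type exception), not an official Stratus sizing formula.

```python
def vos_usable_gb(raw_lun_gb):
    """Approximate OpenVOS-usable capacity of a LUN of raw_lun_gb gigabytes.

    - 8 of every 9 physical sectors hold user data (meta-data overhead)
    - about 1.1 GB is reserved for partitioning overhead
    - usable storage is assumed to round down to a 5 GB boundary
    """
    after_metadata = raw_lun_gb * 8 / 9
    after_reserve = max(after_metadata - 1.1, 0)
    return int(after_reserve // 5) * 5

# Example: a 600 GB raw LUN yields roughly 530 GB of VOS-usable space.
print(vos_usable_gb(600))
```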

POSIX restrictions on logical disk size

The number of disk members and the total size of a VOS logical disk determine the size of the generated inode numbers used by POSIX applications. In earlier VOS releases, that value is restricted to 32 bits, which allows a logical disk size of approximately 545 GB. If that limit is exceeded, the VOS initialize_disk command generates a warning that this could present a compatibility issue for your existing POSIX applications. If all your POSIX applications that access logical disks have been recompiled on OpenVOS 17.1 with the preprocessor symbol _VOS_LARGE_INODE defined, and rebound with the OpenVOS 17.1 POSIX runtime routines that support both 32- and 64-bit inode numbers, there is no issue, and the warning message for that disk may be ignored. You may suppress the message with the no_check_legacy_inodes option added to the initialize_disk command in OpenVOS release 17.1 and beyond. Refer to the Software Release Bulletin: OpenVOS Release 17.1 (R622-01) and OpenVOS System Administration: Disk and Tape Administration (R284-12) documentation for further information.

Physical disk drive insertions and removals: Impacts to I/O performance

The ftScalable Storage array supports online insertion and removal of physical disk drives without powering off the array or stopping the host or application. After one or more drives are inserted or removed, the ftScalable Storage array must go through a process of remapping the underlying physical disk topology to determine whether there are any relocated, removed, or newly inserted physical disk drives. This is called a rescan. These rescans are done automatically by the ftScalable Storage array, without any manual operator commands. While this rescan process is occurring, any pending I/O requests may be temporarily delayed until it has completed. In the first-generation ftScalable Storage array, a drive insertion could cause multiple I/O delays ranging from 4 to 7 seconds over a period of approximately 40 seconds. A drive removal typically results in two I/O delays of between 3 and 11 seconds over a period of roughly 15 seconds. With ftScalable Storage G2 and G3 arrays, I/O delays resulting from drive insertions or removals are now 3 seconds or less.

NOTE: These are results from Stratus internal lab testing under controlled conditions with the latest firmware versions and with maximum physical disk configurations (3 enclosures per array, with 36 drives for the first-generation ftScalable Storage array or 72 drives for the ftScalable Storage G2 array). Your actual results may vary depending on your specific configuration and workload.

Per-volume optimization attributes

Beginning with VOS 17.1, you configure the per-volume optimization attributes by setting the value of the opt_for field in the disks.tin file, as described in the following sections. Before per-volume optimization attributes were available, disk optimization involved setting a collection of separate tuning parameters. The tuning parameter method is complex, and it only works on a system-wide basis. For example, it does not allow you to optimize some disks for fast response and others for maximum throughput. The per-volume usage feature is easy to use, it provides much more flexibility, and, when used, it typically eliminates the need to make other cache-related tuning parameter adjustments.
Fast response

Some applications, such as those in online transaction processing environments, may require fast access to a certain group of files because they set an upper limit on the total elapsed time allowed to process a transaction. If it takes longer, the software may time out or initiate retries. Fast disk response and minimal latency are required to ensure optimal performance for such applications. To configure disks that contain such files for fast response, set the value of the opt_for field to response in those disks' entries in the disks.tin file. For disks optimized in this way, both the cache manager and the disk driver use tuning parameter values and algorithms which minimize latency. By default, file copy operations to or from such disks are paced in order to minimize access time for other files on the disk. By optimizing a disk for fast response, you typically impact its ability to achieve maximum throughput.

Maximum throughput

You can improve the efficiency of some applications, such as those that write log files (or other files to which fast access is not essential), by optimizing the disks that contain such files for throughput. To configure disks for maximum throughput, set the value of the opt_for field to thruput in those disks' entries in the disks.tin file. For disks optimized in this way, both the cache manager and the disk driver use tuning parameter values and algorithms which produce optimal disk throughput at the expense of fast response time.

Balanced usage

By default, disks which are not explicitly optimized for response or throughput are optimized for balanced usage. You can also explicitly designate disks to be optimized for balanced usage by setting the value of the opt_for field to balanced in those disks' entries in the disks.tin file.

Recommendations

Potential I/O delays that may occur during rescan processing can have a negative effect on latency-sensitive applications. There are several recommendations which can minimize those impacts.

- Configure at least one physical disk drive as a spare drive and use fault-tolerant RAID type VDISKs. By allocating a spare drive and using fault-tolerant RAID types, the ftScalable Storage array can automatically remove a marginal or failing physical disk from a VDISK and start the recovery process without requiring any drive insertion or removal, avoiding a rescan. You can replace the failing drive during a less critical period.
- If using non-fault-tolerant RAID type VDISKs (RAID-0, NRAID), Stratus recommends creating an extra VDISK as a standby spare. You can use this standby VDISK as a replacement member disk and re-duplex using OpenVOS mirroring to provide redundancy. This allows you to replace the failing drive during a less critical period.
- Do not move physical drives to preserve specific enclosure slot positions after service operations. The ftScalable Storage array design does not require the physical drives for a VDISK to remain in the same enclosure slot positions as when allocated.
- Do not physically remove marginal or failing disk drives until a replacement is received and ready to be installed at the same time. By coordinating physical drive removals and insertions, you can minimize the number of times a rescan process occurs, as multiple drive topology changes can occur within one rescan.

Summary

The combination of the OpenVOS operating system with ftScalable Storage arrays gives you a robust, scalable, and flexible storage environment to host your most critical applications. By understanding the benefits and drawbacks of the various RAID types, LUN topologies, and configuration choices available, you can create an optimal storage layout to meet your business, performance, and availability needs.

Glossary of terms

Degraded mode. The mode of operation of a VDISK after one of its physical disk drives has failed, but before any recovery operation starts. While in this mode of operation, the VDISK is not fully redundant, and a subsequent physical drive failure could result in loss of this VDISK.

Disk group. A collection of physical disks with a specific RAID type. Disk groups make up a storage pool.

HBA or host bus adapter. A PCI-X or PCI-e circuit board or integrated-circuit adapter that provides input/output (I/O) processing and physical connectivity between a server and a storage device.

Linear storage. A type of storage allocation where physical disks make up a VDISK of a specified RAID type. Those VDISKs are then partitioned into volumes. Supported on ftScalable G1, G2, and G3.

Logical disk. An OpenVOS logical volume that contains one or more member disks. Each member disk is either a single physical disk in a D910 Fibre Channel disk enclosure, a LUN in an ftScalable Storage array, or a duplexed pair of either type or both types.

LUN or logical unit number. The unique address identifier associated with a volume.

Glossary of terms (continued)

Multi-member logical disk. An OpenVOS logical volume consisting of at least two pairs of duplexed member disks, with data striped across all the member disk pairs.

Pooled storage. A type of storage allocation where physical disks make up disk groups of a specified RAID type. One or more disk groups make up a storage pool. The storage pool is then partitioned into volumes. Supported on ftScalable G3 only.

RAID overhead. The amount of physical drive capacity used in a specific RAID type to provide redundancy. For example, in a RAID-1 VDISK this would be 50%, as one drive is mirrored to its partner.

Recovery mode. The mode of operation of a VDISK while it is rebuilding after a drive failure. While in this mode of operation, the VDISK is not fully redundant, and a subsequent physical drive failure could result in loss of this VDISK.

Sector. A segment of a track on a hard disk drive. It is the smallest unit of data that can be accessed by a disk drive.

Striping. A method of improving I/O performance by breaking data into blocks and writing the blocks across multiple physical disks.

Tiered storage. Pooled storage which contains disk groups with dissimilar performance characteristics, e.g., HDDs and SSDs. With tiered storage, the storage controllers dynamically relocate the most frequently referenced blocks of each volume to the highest-performing disk groups. Supported on ftScalable G3 only.

Track. Any of the concentric circles on the magnetic coating of a disk platter over which one magnetic head passes while it is stationary and the platter is spinning.

VDISK or virtual disk. A group of one or more physical disk drives in an ftScalable Storage array, organized using a specific RAID type into what appears to the operating system as one or more disks, depending on the number of LUNs defined.

Volume. A defined amount of storage that is contained within either a VDISK or a storage pool.

The terms VOS and OpenVOS are used interchangeably in this document in reference to Stratus's OpenVOS operating system.

Specifications and descriptions are summary in nature and subject to change without notice. Stratus and the Stratus Technologies logo are trademarks or registered trademarks of Stratus Technologies Bermuda Ltd. All other marks are the property of their respective owners. Stratus Technologies Bermuda Ltd. All rights reserved.


More information

Chapter 14: Mass-Storage Systems

Chapter 14: Mass-Storage Systems Chapter 14: Mass-Storage Systems Disk Structure Disk Scheduling Disk Management Swap-Space Management RAID Structure Disk Attachment Stable-Storage Implementation Tertiary Storage Devices Operating System

More information

CS370: System Architecture & Software [Fall 2014] Dept. Of Computer Science, Colorado State University

CS370: System Architecture & Software [Fall 2014] Dept. Of Computer Science, Colorado State University CS 370: SYSTEM ARCHITECTURE & SOFTWARE [MASS STORAGE] Frequently asked questions from the previous class survey Shrideep Pallickara Computer Science Colorado State University L29.1 L29.2 Topics covered

More information

Demartek December 2007

Demartek December 2007 HH:MM Demartek Comparison Test: Storage Vendor Drive Rebuild Times and Application Performance Implications Introduction Today s datacenters are migrating towards virtualized servers and consolidated storage.

More information

Chapter 11. I/O Management and Disk Scheduling

Chapter 11. I/O Management and Disk Scheduling Operating System Chapter 11. I/O Management and Disk Scheduling Lynn Choi School of Electrical Engineering Categories of I/O Devices I/O devices can be grouped into 3 categories Human readable devices

More information

ActiveScale Erasure Coding and Self Protecting Technologies

ActiveScale Erasure Coding and Self Protecting Technologies WHITE PAPER AUGUST 2018 ActiveScale Erasure Coding and Self Protecting Technologies BitSpread Erasure Coding and BitDynamics Data Integrity and Repair Technologies within The ActiveScale Object Storage

More information

I/O CANNOT BE IGNORED

I/O CANNOT BE IGNORED LECTURE 13 I/O I/O CANNOT BE IGNORED Assume a program requires 100 seconds, 90 seconds for main memory, 10 seconds for I/O. Assume main memory access improves by ~10% per year and I/O remains the same.

More information

Recommendations for Aligning VMFS Partitions

Recommendations for Aligning VMFS Partitions VMWARE PERFORMANCE STUDY VMware ESX Server 3.0 Recommendations for Aligning VMFS Partitions Partition alignment is a known issue in physical file systems, and its remedy is well-documented. The goal of

More information

Automated Storage Tiering on Infortrend s ESVA Storage Systems

Automated Storage Tiering on Infortrend s ESVA Storage Systems Automated Storage Tiering on Infortrend s ESVA Storage Systems White paper Abstract This white paper introduces automated storage tiering on Infortrend s ESVA storage arrays. Storage tiering can generate

More information

Disk Scheduling COMPSCI 386

Disk Scheduling COMPSCI 386 Disk Scheduling COMPSCI 386 Topics Disk Structure (9.1 9.2) Disk Scheduling (9.4) Allocation Methods (11.4) Free Space Management (11.5) Hard Disk Platter diameter ranges from 1.8 to 3.5 inches. Both sides

More information

A Look at CLARiiON with ATA CX Series Disk Drives and Enclosures

A Look at CLARiiON with ATA CX Series Disk Drives and Enclosures A Look at CLARiiON with ATA CX Series Disk Drives and Enclosures Applied Technology Abstract As the need for data storage continues to grow, developing lower-cost storage devices becomes imperative. This

More information

EMC XTREMCACHE ACCELERATES ORACLE

EMC XTREMCACHE ACCELERATES ORACLE White Paper EMC XTREMCACHE ACCELERATES ORACLE EMC XtremSF, EMC XtremCache, EMC VNX, EMC FAST Suite, Oracle Database 11g XtremCache extends flash to the server FAST Suite automates storage placement in

More information

IBM System Storage DS8000 series (Machine types 2421, 2422, 2423, and 2424) delivers new security, scalability, and business continuity capabilities

IBM System Storage DS8000 series (Machine types 2421, 2422, 2423, and 2424) delivers new security, scalability, and business continuity capabilities , dated February 10, 2009 IBM System Storage DS8000 series (Machine types 2421, 2422, 2423, and 2424) delivers new security, scalability, and business continuity capabilities Table of contents 1 At a glance

More information

CS370: Operating Systems [Spring 2017] Dept. Of Computer Science, Colorado State University

CS370: Operating Systems [Spring 2017] Dept. Of Computer Science, Colorado State University Frequently asked questions from the previous class survey CS 370: OPERATING SYSTEMS [MASS STORAGE] How does the OS caching optimize disk performance? How does file compression work? Does the disk change

More information

Introduction Disks RAID Tertiary storage. Mass Storage. CMSC 420, York College. November 21, 2006

Introduction Disks RAID Tertiary storage. Mass Storage. CMSC 420, York College. November 21, 2006 November 21, 2006 The memory hierarchy Red = Level Access time Capacity Features Registers nanoseconds 100s of bytes fixed Cache nanoseconds 1-2 MB fixed RAM nanoseconds MBs to GBs expandable Disk milliseconds

More information

Evaluation Report: Improving SQL Server Database Performance with Dot Hill AssuredSAN 4824 Flash Upgrades

Evaluation Report: Improving SQL Server Database Performance with Dot Hill AssuredSAN 4824 Flash Upgrades Evaluation Report: Improving SQL Server Database Performance with Dot Hill AssuredSAN 4824 Flash Upgrades Evaluation report prepared under contract with Dot Hill August 2015 Executive Summary Solid state

More information

CS2410: Computer Architecture. Storage systems. Sangyeun Cho. Computer Science Department University of Pittsburgh

CS2410: Computer Architecture. Storage systems. Sangyeun Cho. Computer Science Department University of Pittsburgh CS24: Computer Architecture Storage systems Sangyeun Cho Computer Science Department (Some slides borrowed from D Patterson s lecture slides) Case for storage Shift in focus from computation to communication

More information

CS3600 SYSTEMS AND NETWORKS

CS3600 SYSTEMS AND NETWORKS CS3600 SYSTEMS AND NETWORKS NORTHEASTERN UNIVERSITY Lecture 9: Mass Storage Structure Prof. Alan Mislove (amislove@ccs.neu.edu) Moving-head Disk Mechanism 2 Overview of Mass Storage Structure Magnetic

More information

AssuredSAN Event Descriptions Reference Guide

AssuredSAN Event Descriptions Reference Guide AssuredSAN Event Descriptions Reference Guide Abstract This guide is for reference by storage administrators to help troubleshoot storage-system issues. It describes event messages that may be reported

More information

Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers

Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers By Todd Muirhead Dell Enterprise Technology Center Dell Enterprise Technology Center dell.com/techcenter

More information

Database Management Systems, 2nd edition, Raghu Ramakrishnan, Johannes Gehrke, McGraw-Hill

Database Management Systems, 2nd edition, Raghu Ramakrishnan, Johannes Gehrke, McGraw-Hill Lecture Handout Database Management System Lecture No. 34 Reading Material Database Management Systems, 2nd edition, Raghu Ramakrishnan, Johannes Gehrke, McGraw-Hill Modern Database Management, Fred McFadden,

More information

EMC VNX2 Deduplication and Compression

EMC VNX2 Deduplication and Compression White Paper VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, & VNX8000 Maximizing effective capacity utilization Abstract This white paper discusses the capacity optimization technologies delivered in the

More information

DELL EMC UNITY: BEST PRACTICES GUIDE

DELL EMC UNITY: BEST PRACTICES GUIDE DELL EMC UNITY: BEST PRACTICES GUIDE Best Practices for Performance and Availability Unity OE 4.5 ABSTRACT This white paper provides recommended best practice guidelines for installing and configuring

More information

PowerVault MD3 SSD Cache Overview

PowerVault MD3 SSD Cache Overview PowerVault MD3 SSD Cache Overview A Dell Technical White Paper Dell Storage Engineering October 2015 A Dell Technical White Paper TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS

More information

SPARCstorage Array Configuration Guide

SPARCstorage Array Configuration Guide SPARCstorage Array Configuration Guide A Sun Microsystems, Inc. Business 2550 Garcia Avenue Mountain View, CA 94043 U.S.A. 415 960-1300 FAX 415 969-9131 Part No.: 802-2041-10 Revision A, March 1995 1995

More information

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE White Paper EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE EMC XtremSF, EMC XtremCache, EMC Symmetrix VMAX and Symmetrix VMAX 10K, XtremSF and XtremCache dramatically improve Oracle performance Symmetrix

More information

Physical Storage Media

Physical Storage Media Physical Storage Media These slides are a modified version of the slides of the book Database System Concepts, 5th Ed., McGraw-Hill, by Silberschatz, Korth and Sudarshan. Original slides are available

More information

CMSC 424 Database design Lecture 12 Storage. Mihai Pop

CMSC 424 Database design Lecture 12 Storage. Mihai Pop CMSC 424 Database design Lecture 12 Storage Mihai Pop Administrative Office hours tomorrow @ 10 Midterms are in solutions for part C will be posted later this week Project partners I have an odd number

More information

Monday, May 4, Discs RAID: Introduction Error detection and correction Error detection: Simple parity Error correction: Hamming Codes

Monday, May 4, Discs RAID: Introduction Error detection and correction Error detection: Simple parity Error correction: Hamming Codes Monday, May 4, 2015 Topics for today Secondary memory Discs RAID: Introduction Error detection and correction Error detection: Simple parity Error correction: Hamming Codes Storage management (Chapter

More information

Frequently asked questions from the previous class survey

Frequently asked questions from the previous class survey CS 370: OPERATING SYSTEMS [MASS STORAGE] Shrideep Pallickara Computer Science Colorado State University L29.1 Frequently asked questions from the previous class survey How does NTFS compare with UFS? L29.2

More information

Storage Optimization with Oracle Database 11g

Storage Optimization with Oracle Database 11g Storage Optimization with Oracle Database 11g Terabytes of Data Reduce Storage Costs by Factor of 10x Data Growth Continues to Outpace Budget Growth Rate of Database Growth 1000 800 600 400 200 1998 2000

More information

Dell EMC ME4 Series Storage System Administrator s Guide

Dell EMC ME4 Series Storage System Administrator s Guide Dell EMC ME4 Series Storage System Administrator s Guide Regulatory Model: E09J, E10J, E11J Regulatory Type: E09J001, E10J001, E11J001 Notes, cautions, and warnings NOTE: A NOTE indicates important information

More information

Mass-Storage Structure

Mass-Storage Structure Operating Systems (Fall/Winter 2018) Mass-Storage Structure Yajin Zhou (http://yajin.org) Zhejiang University Acknowledgement: some pages are based on the slides from Zhi Wang(fsu). Review On-disk structure

More information

Disk Storage Systems. Module 2.5. Copyright 2006 EMC Corporation. Do not Copy - All Rights Reserved. Disk Storage Systems - 1

Disk Storage Systems. Module 2.5. Copyright 2006 EMC Corporation. Do not Copy - All Rights Reserved. Disk Storage Systems - 1 Disk Storage Systems Module 2.5 2006 EMC Corporation. All rights reserved. Disk Storage Systems - 1 Disk Storage Systems After completing this module, you will be able to: Describe the components of an

More information

ECE Enterprise Storage Architecture. Fall 2018

ECE Enterprise Storage Architecture. Fall 2018 ECE590-03 Enterprise Storage Architecture Fall 2018 RAID Tyler Bletsch Duke University Slides include material from Vince Freeh (NCSU) A case for redundant arrays of inexpensive disks Circa late 80s..

More information

Building Self-Healing Mass Storage Arrays. for Large Cluster Systems

Building Self-Healing Mass Storage Arrays. for Large Cluster Systems Building Self-Healing Mass Storage Arrays for Large Cluster Systems NSC08, Linköping, 14. October 2008 Toine Beckers tbeckers@datadirectnet.com Agenda Company Overview Balanced I/O Systems MTBF and Availability

More information

Configuring Storage Profiles

Configuring Storage Profiles This part contains the following chapters: Storage Profiles, page 1 Disk Groups and Disk Group Configuration Policies, page 2 RAID Levels, page 3 Automatic Disk Selection, page 4 Supported LUN Modifications,

More information

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results Dell Fluid Data solutions Powerful self-optimized enterprise storage Dell Compellent Storage Center: Designed for business results The Dell difference: Efficiency designed to drive down your total cost

More information

CSE325 Principles of Operating Systems. Mass-Storage Systems. David P. Duggan. April 19, 2011

CSE325 Principles of Operating Systems. Mass-Storage Systems. David P. Duggan. April 19, 2011 CSE325 Principles of Operating Systems Mass-Storage Systems David P. Duggan dduggan@sandia.gov April 19, 2011 Outline Storage Devices Disk Scheduling FCFS SSTF SCAN, C-SCAN LOOK, C-LOOK Redundant Arrays

More information

u Covered: l Management of CPU & concurrency l Management of main memory & virtual memory u Currently --- Management of I/O devices

u Covered: l Management of CPU & concurrency l Management of main memory & virtual memory u Currently --- Management of I/O devices Where Are We? COS 318: Operating Systems Storage Devices Jaswinder Pal Singh Computer Science Department Princeton University (http://www.cs.princeton.edu/courses/cos318/) u Covered: l Management of CPU

More information