RAID-5+ for OLTP Applications
Oracle and Hitachi Data Systems Freedom 7700E for Unrivaled Performance

Xiaoping Li, Oracle Corporation
Kenneth Wood, Hitachi Data Systems
January 22, 1999

Oracle Corporation and Hitachi Data Systems confirm high performance in Online Transaction Processing environments from the Freedom 7700E using RAID-5+ (pronounced "RAID-5 Plus") technology. The industry-leading speed, capacity, features, and throughput of the 7700E enhance Oracle8 Server performance for all workloads. With a wide range of price-performance options as well, the Freedom 7700E is the premier performer among disk subsystems in most Online Transaction Processing (OLTP) environments. The Hitachi Data Systems Freedom 7700E was tested using Oracle8 Server and the standard TPC-C benchmark test suite. The same configuration of hardware and software was used for two RAID configurations: RAID-5+ and RAID-1. The results show that the RAID-5+ configuration overall outperforms the RAID-1 configuration, measured in tpm-C (transactions per minute under the TPC-C benchmark), by as much as 4.5%.
Contents

Cost + Capacity + Reliability + Performance = RAID-5+ is Best
The Hitachi Data Systems Freedom 7700E and RAID-5+
  Mirroring with RAID-1
  Traditional Data Striping with RAID-5
  High Performance Data Striping with RAID-5+
  Cache Management
  Hitachi Data Systems Freedom 7700E Architecture
Detailed Test Configuration
  TPC-C Information and Configuration
  Oracle8 Database and Server Configuration
  Datafile Distribution
  RAID-1 Configuration 1
  RAID-1 Configuration 2
  RAID-5+ Configuration
Test Results and Detailed Analysis
  Under the tpm-c Number
  How the Test is Executed
  Physical I/O Comparisons
  Data File Read Comparison
  Data File Write Comparisons
  Redo Log File Write Comparisons
  Conclusion about the Results and Analysis
Robust Features, High Availability, and Reliability Unmatched in the Industry
Conclusion
References
Cost + Capacity + Reliability + Performance = RAID-5+ is Best

The performance reputation of traditional RAID-5 technologies over the years has relegated RAID-5 to an unfavorable status among database architects in OLTP environments. Its cost-per-capacity advantage did not outweigh its performance shortcomings, until now. RAID-5's reputation for performance over the past few years had come to a favorable status among database architects in Decision Support System (DSS) environments, where large block transfers, table scan operations, and high read-to-write ratios dominate. However, database architects' traditional attitude toward RAID-5's performance shortcomings for OLTP applications has been "just say no!" (Millsap, C.) when compared with traditional RAID-1 technology. This attitude may need to be adjusted, with serious consideration given to the Hitachi Data Systems Freedom 7700E configured with RAID-5+ for many OLTP applications.

The Hitachi Data Systems Freedom 7700E and RAID-5+

The Hitachi Data Systems Freedom 7700E is a state-of-the-art storage subsystem. Its features and capabilities extend well beyond the scope of this discussion; however, a primer is provided in order to explain the test results more accurately. The Hitachi Data Systems Freedom 7700E is capable of running in RAID-1, RAID-5+, or a combination of both. Advanced technology and architectural superiority enable the Hitachi Data Systems Freedom 7700E to perform well in all I/O-hungry environments by incorporating intelligent cache routines to manage cache resources. Combine this with the most reliable storage array design in the industry, and the Hitachi Data Systems Freedom 7700E is an unequalled solution for database environments of any kind.

Mirroring with RAID-1

A RAID-1 array group consists of two pairs of disk drives in a mirrored configuration, regardless of hard disk size. In this test, the HDS 6GB hard disk drive (12,000 RPM) is used.
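As a hedged sketch (illustrative Python, not subsystem code), the mirrored write and "whichever drive answers first" read behavior can be modeled like this:

```python
# Sketch (my illustration) of RAID-1 semantics: a write updates both
# drives of a mirrored pair; a read is served by either drive, e.g. the
# one that is currently less busy.
class MirroredPair:
    def __init__(self):
        self.primary, self.secondary = {}, {}

    def write(self, block: int, data: bytes):
        # A write completes only when both copies are updated.
        self.primary[block] = data
        self.secondary[block] = data

    def read(self, block: int, busy_primary: bool = False) -> bytes:
        # Service the read from the less busy spindle of the pair.
        side = self.secondary if busy_primary else self.primary
        return side[block]

pair = MirroredPair()
pair.write(7, b'payload')
# Either spindle returns the same data, so reads can be load-balanced.
assert pair.read(7) == pair.read(7, busy_primary=True) == b'payload'
```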
Data written to a primary drive is written simultaneously to a secondary drive. A read request to a mirrored pair can be satisfied by whichever disk drive in the pair can satisfy the request first.

Traditional Data Striping with RAID-5

A RAID-5 disk group consists of a minimum of three disks, up to a theoretical and practical maximum that varies by implementation. The group is configured so that striped data and parity information reside within the same stripe, written across all the disks in the RAID-5 disk group. Read performance for this kind of disk striping has always been good for read-intensive environments, as striping distributes data across multiple disk spindles. Write performance, however, has always been the bane of RAID-5. To write data to a RAID-5 disk group, the section of the stripe on the disk to be updated must be read first. The blocks within that stripe that need to change are updated, parity is regenerated for the stripe, and then the data section and parity are written back to the disk group. This sequence is referred to as the RAID-5 write penalty, technically known as a read-modify-write.
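The two write paths can be made concrete with XOR parity (a hedged sketch with toy byte-sized blocks, not HDS code): a small write must read old data and old parity before writing, while a write of an entire stripe needs no reads at all.

```python
# Illustrative sketch of RAID-5 parity maintenance; the in-memory
# "stripe" list and one-byte blocks are simplifications.
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def read_modify_write(stripe, parity_idx, data_idx, new_data):
    """Small write: 2 reads + 2 writes -- the RAID-5 write penalty."""
    old_data = stripe[data_idx]        # read 1: old data block
    old_parity = stripe[parity_idx]    # read 2: old parity block
    # new parity = old parity XOR old data XOR new data
    stripe[parity_idx] = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    stripe[data_idx] = new_data        # write 1 (plus write 2 for parity)

def full_stripe_write(data_blocks):
    """Full-stripe write: parity comes from the new data alone, no reads."""
    return data_blocks + [reduce(xor_blocks, data_blocks)]

stripe = full_stripe_write([b'\x01', b'\x02', b'\x04'])   # parity = 0x07
read_modify_write(stripe, parity_idx=3, data_idx=1, new_data=b'\x08')
# Parity still equals the XOR of all data blocks after the small write.
assert stripe[3] == reduce(xor_blocks, stripe[:3])
```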
High Performance Data Striping with RAID-5+

For the 7700E, a RAID-5+ array group consists of either four or seven disk drives, depending on the type of hard disks installed in the Freedom 7700E. For the 6GB hard disk drive (12,000 RPM) used in this test, four disks per array group (with RAID-5+ this is also referred to as a parity group) were configured. The enhanced RAID-5+ write performance in the Freedom 7700E comes from advanced caching algorithms. The Freedom 7700E keeps write data in cache until an entire stripe can be built, then writes the entire data stripe (replacing the original stripe) to the array group disk drives, minimizing the write penalty inherent in traditional RAID-5 implementations. This write-stripe caching method is referred to as fast-write.

Cache Management

The 7700E subsystem places all read and write data in cache as a staging buffer. All cache memory (up to 100%) is available for read operations. The amount of fast-write data in cache is dynamically managed by the cache control algorithms to provide the optimum amount of read and write cache, depending on the workload's read and write I/O characteristics. The 7700E utilizes the following important algorithms for internal cache control:

HDS Intelligent Learning Algorithm. The HDS-proprietary Intelligent Learning Algorithm identifies random and sequential data access patterns and selects the amount of data to be staged (read from the array group(s) into cache).

Modified least-recently-used (LRU) algorithm. When a read hit or write I/O occurs in a non-sequential operation, the LRU algorithm marks the cache segment as most recently used and promotes it to the top of the appropriate LRU list. In a sequential write operation, the data is destaged by priority, so the cache segment marked as least recently used is immediately available for reallocation.

Sequential prefetch algorithm.
The sequential prefetch algorithm is used for sequential access commands or access patterns identified as sequential by the Intelligent Learning Algorithm. The algorithm directs the Array Control Processors (ACPs) to prefetch and cache up to one full RAID stripe ahead of the current access.

Hitachi Data Systems Freedom 7700E Architecture

An overview of the Hitachi Data Systems Freedom 7700E's architecture is shown in Figure-1. The front-end microprocessors, called Client-Host interface Processors (CHPs), are connected to the host(s) through either SCSI or Fibre Channel attachments and to cache via the redundant high-speed multi-bus (750MB/s bandwidth).
[Figure-1: Hitachi Data Systems Freedom 7700E Architecture Overview. Host-interface options: up to 32 UltraSCSI or 16 Fibre Channel connections through the CHPs; up to 16GB duplex cache; redundant high-speed multi-bus; up to 208 disk drives excluding spares.]

The CHPs process host communications and manage access to cache. The back-end microprocessors, called Array Control Processors (ACPs), are also connected to cache via the redundant high-speed multi-bus, and control the transfer of data between disk and cache.

Detailed Test Configuration

The following describes the details of the TPC-C configuration, the layout of the TPC-C database, and the hardware system used. The purpose of this test is to measure the effect that storage has on a known, consistent database benchmark, not how the computer system performs. Therefore, minimal details of the platform used for the test components (users, clients, server, etc.) are included in this report; the results are indicative of changes in the storage configuration. The TPC-C configuration used for this test is not a sanctioned Transaction Processing Council (TPC) auditable configuration.

TPC-C Information and Configuration

The TPC-C benchmark models a large wholesale outlet's inventory management system. The operation consists of a number of warehouses, each with about ten terminals representing point-of-sale or point-of-inquiry stations. The warehouse number can be loosely related to the scale factor (SF) used in the TPC-D benchmark. The number of
warehouses configured in the TPC-C benchmark determines the overall size, users, and complexity of the model and the benchmark. The defined transactions handle new order entry, order status inquiry, and payment settlement. These three user-interaction transactions are straightforward. Two other transactions included in this test simulate behind-the-scenes activity throughout the warehouses: the stocking level inquiry and the delivery transaction. The stocking level inquiry scans a warehouse's inventory for items which are out of stock or nearly so. The delivery transaction collects a number of orders and marks them as having been delivered. One instance of either of these transactions represents much more load than an instance of the new order, order status, or payment transactions (Wong, B.).

Based on our testing requirements and available hardware and system resources, a 100-warehouse TPC-C database configuration was used. The decision was also made not to use a front-end system for the client applications. Therefore, all users, front-end processing, and the database server shared the same computer system, and because of this, the user connection number was set to 30. The 30-user load created adequate system load and disk I/Os for the configured system and database. The ultimate goal was to create a heavy I/O load on the disk subsystem, and to measure and analyze the details of the test runs between the two RAID technology implementations in this simulated OLTP environment.

Oracle8 Database and Server Configuration

The Oracle8 Server version was used in this test. Since the purpose of this test was to compare two levels of RAID technology, only a limited performance tuning effort was performed on the database server. The goal was to tune the test environment, which included the Oracle8 Server and datafile distribution, to a reasonable performance level for a typical TPC-C test.
The major non-default Oracle initialization parameters were set as follows: db_block_size: 2K; db_block_buffers: 210,000; log_buffer: 1,048,576; shared_pool_size: 10,000,000. Four FWD SCSI-2 interfaces were used to connect the server to the Hitachi Data Systems Freedom 7700E and to distribute the datafiles. Table-1 describes the server used for this project.

Table-1: Overview of server configuration used.

  Item             Quantity   Comments
  HP /K460         1          Running HP-UX
  MHz CPUs         2
  Memory           768MB
  SCSI Interfaces  4          Fast-Wide Differential SCSI-2

Datafile Distribution

The distribution of datafile locations for tablespace creation is very important for I/O performance. Every effort was made to provide adequate I/O bandwidth to the Hitachi Data Systems Freedom 7700E and datafiles for Oracle8 Server. Overall, three configurations were built and tested: one RAID-5+ and two RAID-1. Of the two RAID-1
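For reference, the non-default parameters above correspond to an init.ora fragment along these lines (a hedged sketch; the actual parameter file used in the test is not reproduced in this paper):

```
# Sketch of the reported non-default init.ora settings; all other
# parameters are left at their defaults.
db_block_size    = 2048        # 2K blocks
db_block_buffers = 210000      # ~410MB buffer cache at 2K blocks
log_buffer       = 1048576     # 1MB redo log buffer
shared_pool_size = 10000000    # ~10MB shared pool
```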
configurations tested, only the best-performing configuration is reported. The reason for the two RAID-1 configurations was to verify datafile distribution effects. The Hitachi Data Systems Freedom 7700E was configured with 1GB of cache.

RAID-1 Configuration 1

The first RAID-1 configuration included 24 mirrored disk sets (48 disk spindles total). The Logical Volume Manager (LVM) was used to combine the mirrored disk sets into volume groups, which were then divided into logical volumes. There were 12 volume groups built, each containing two mirrored disk sets (4 disk spindles per volume group). Each mirrored disk set was configured with 2 Logical Units (LUNs) from the Hitachi Data Systems Freedom 7700E, for a total of four LUNs per volume group, each at 2.4GB of storage. Figure-2 shows the RAID-1 configuration of the volume groups and how the logical volumes mapped back to LUNs on the RAID-1 mirrored disk sets. A physical extent (PE) is 4096KB and determines how much of the volume group's total capacity is used for the definition of a logical volume. Logical volumes greater than 586 PE span physical disk boundaries. The actual datafile-to-tablespace-to-logical-volume-to-volume-group-to-LUN mapping is listed in Table-2 for RAID-1 configuration 1. Of the two RAID-1 configurations described, this configuration yielded the best overall result.

[Figure-2: Overview of how physical LUNs from the 7700E are mapped into logical volumes on the server. Two mirrored pairs from the 7700E present four 586-PE LUNs (LUN0-LUN3) to a volume group, from which logical volumes LVOL1 (100 PE), LVOL2 (500 PE), LVOL3 (100 PE), and LVOL4 (300 PE) are defined.]

Table-2: Mapping from tablespace back to physical 7700E LUN.

  Pair  LUN  VG         LVOL     Datafile  Tablespace  Table       Size
  1     0    vg_tpcc01  rstok_0  stok_0    stok        stock       500 PE
  2     1    vg_tpcc01  ridist   idist     idist       idist       200 PE
  -     -    vg_tpcc02  rlogs1   logs1     Redo Logs               586 PE
  4     1    vg_tpcc02  rlogs2   logs2     Redo Logs               586 PE
  -     -    vg_tpcc03  rware    ware      ware        warehouse    100 PE
  6     0    vg_tpcc03  rdist_1  dist_1    dist        district     100 PE
  -     -    vg_tpcc03  rordrl   ordrl     ordrl       order_line   800 PE
  -     -    vg_tpcc04  rstok_1  stok_1    stok        stock        500 PE
  8     1    vg_tpcc04  riitem   iitem     iitem       iitem        200 PE
  -     -    vg_tpcc05  rsys     sys       Systemfile               100 PE
  -     -    vg_tpcc05  riordl   iordrl    iordrl      iorder_line  586 PE
  9     2    vg_tpcc05  ricust2  icust2    icust2      icustomer2   200 PE
  -     -    vg_tpcc06  rcust_0  cust_0    cust        customer     600 PE
  12    1    vg_tpcc06  rtemp_0  temp_0    temp        tempspace    300 PE
  -     -    vg_tpcc08  rstok_2  stok_2    stok        stock        500 PE
  14    1    vg_tpcc08  ricust1  icust1    icust1      icustomer    200 PE
  -     -    vg_tpcc09  ristk    istk      istk        istk         200 PE
  -     -    vg_tpcc09  rordl_1  ordrl_1   ordrl       order_line   800 PE
  15    2    vg_tpcc09  rroll_0  roll_0    rollback file            250 PE
  -     -    vg_tpcc10  riord2   iord2     ordr2       iord2        200 PE
  17    0    vg_tpcc10  rinord   inord     inord       inord        200 PE
  -     -    vg_tpcc11  rstok_3  stok_3    stok        stock        500 PE
  20    1    vg_tpcc11  riware   iware     iware       iwarehouse   200 PE
  -     -    vg_tpcc12  rroll_1  roll_1    rollback file            250 PE
  22    1    vg_tpcc12  ritem_0  item_0    item        item         100 PE
  21    2    vg_tpcc12  rnord_1  nord_1    nord_1      nord_1       200 PE
  22    3    vg_tpcc12  riord1   irord1    inord1      inord1       200 PE
  23    0    vg_tpcc13  rcust_1  cust_1    cust        customer     600 PE
  24    1    vg_tpcc13  rtemp_1  temp_1    temp        tempspace    300 PE

This RAID-1 configuration places the main tablespaces, cust, stok, and ordrl, physically on 4 mirrored disk sets or RAID group entities (8 disk spindles total), 2 RAID group entities (4 disk spindles total), and 2 RAID group entities (4 disk spindles total), respectively.

RAID-1 Configuration 2

The second RAID-1 configuration is identical to the first with respect to the way the physical LUNs and the volume groups are built. The difference between RAID-1 configuration 1 and RAID-1 configuration 2 is in the
creation order of the logical volumes. RAID-1 configuration 2 forces the logical volumes for the three main tablespaces, cust, stok, and ordrl, across RAID group entities. This configuration in essence doubles the number of RAID group entities from 4 to 8, and doubles the number of disk spindles from 8 to 16, for the stok tablespace. Likewise, the cust and ordrl tablespaces also double their number of RAID group entities from 2 to 4, and their number of disk spindles from 4 to 8. Although the number of RAID group entities and disk spindles doubles, the number of datafiles supporting these tablespaces remains the same throughout the different test runs. Table-3 lists the mapping of physical LUNs to logical volumes similarly to Table-2; however, the order of the logical volumes and their placement within the volume group differ. The results of this RAID-1 configuration are not reported in the following sections. This test configuration was performed as an experiment to answer questions about datafile distribution. It became very clear that datafile placement under RAID-1 requires more attention to detail and needs to be monitored more closely than for RAID-5+. However, it was decided to discuss the configuration here as part of the RAID-1 setup.

Table-3: Mapping from tablespace back to physical 7700E LUN, with a different creation order and logical volume placement within the volume group.
  Pair  LUN  VG         LVOL      Datafile  Tablespace  Table        Size
  1     0    vg_tpcc01  ridist    idist     idist       idist        200 PE
  2     1    vg_tpcc01  rstok_0   stok_0    stok        stock        500 PE
  -     -    vg_tpcc02  rlogs1    logs1     Redo Logs                586 PE
  4     1    vg_tpcc02  rlogs2    logs2     Redo Logs                586 PE
  -     -    vg_tpcc03  rware     ware      ware        warehouse    100 PE
  6     0    vg_tpcc03  rdist_1   dist_1    dist        district     100 PE
  -     -    vg_tpcc03  rordrl_0  ordrl_0   ordrl       order_line   800 PE
  -     -    vg_tpcc04  riitem    iitem     iitem       iitem        200 PE
  8     1    vg_tpcc04  rstok_1   stok_1    stok        stock        500 PE
  -     -    vg_tpcc05  rsys      sys       Systemfile               100 PE
  -     -    vg_tpcc05  riordl    iordrl    iordrl      iorder_line  586 PE
  9     2    vg_tpcc05  ricust2   icust2    icust2      icustomer2   200 PE
  -     -    vg_tpcc06  rtemp_0   temp_0    temp        tempspace    300 PE
  12    1    vg_tpcc06  rcust_0   cust_0    cust        customer     600 PE
  -     -    vg_tpcc08  ricust1   icust1    icust1      icustomer    200 PE
  14    1    vg_tpcc08  rstok_2   stok_2    stok        stock        500 PE
  -     -    vg_tpcc09  ristk     istk      istk        istk         200 PE
  -     -    vg_tpcc09  rordl_1   ordrl_1   ordrl       order_line   800 PE
  15    2    vg_tpcc09  rroll_0   roll_0    rollback file            250 PE
  -     -    vg_tpcc10  riord2    iord2     iordr2      iord2        200 PE
  17    0    vg_tpcc10  rinord    inord     inord       inord        200 PE
  -     -    vg_tpcc11  riware    iware     iware       iwarehouse   200 PE
  20    1    vg_tpcc11  rstok_3   stok_3    stok        stock        500 PE
  -     -    vg_tpcc12  rroll_1   roll_1    rollback file            250 PE
  22    1    vg_tpcc12  ritem_0   item_0    item        item         100 PE
  21    2    vg_tpcc12  rnord_1   nord_1    nord_1      nord_1       200 PE
  22    3    vg_tpcc12  riord1    irord1    inord1      inord1       200 PE
  23    0    vg_tpcc13  rtemp_1   temp_1    temp        tempspace    300 PE
  24    1    vg_tpcc13  rcust_1   cust_1    cust        customer     600 PE

RAID-5+ Configuration

RAID-5+ was by far the easiest to configure. The data striping across the 4 disks in each RAID entity made it easy to distribute the datafiles evenly across all disks. The LVM was used the same way as in the RAID-1 configurations. 12 RAID entities, each with 4 disk spindles, were configured into volume groups (48 disk spindles total). Each volume group consisted of 4 physical LUNs from the Hitachi Data Systems Freedom 7700E. The logical volumes were defined from the volume groups in the same order as listed in Table-2, except that there are no disk pairs to delineate disk boundaries. Because of the hardware-controlled data striping, little effort was required to properly place datafiles within the logical volumes and volume groups. Contention for disk resources was closely analyzed and examined for all three RAID configurations. Several other datafile configurations were configured and tested during this project; however, the configurations described here yielded the best tpm-C results, by almost 30%.

Test Results and Detailed Analysis

Overall, the RAID-5+ configuration yielded the highest transactions per minute compared to the RAID-1 configuration, as reported by the TPC-C benchmark reporting system.
From the test results, as reported in tpm-C, the RAID-5+ configuration outperformed the RAID-1 configuration; overall, the RAID-5+ configuration performed 4.5% more transactions per minute than the RAID-1 configuration.

Under the tpm-c Number

Detailed analysis of the high tpm-C result for the RAID-5+ configuration reveals that, for the datafiles configured, RAID-5+ physical reads are about 18% faster than RAID-1 reads. However, RAID-5+ writes are slower than RAID-1 writes by about 35%. In other words, RAID-1 writes outperformed RAID-5+ writes by a larger percentage than RAID-5+ reads outperformed RAID-1 reads. However, the TPC-C read-to-write ratio is almost 3:2: 62% of the TPC-C test consists of table reads and
38% consists of writes. This causes the weight of the 18% read advantage to outweigh the 35% write disadvantage. More important than the tablespace writes are the log file writes for the TPC-C benchmark. Log file writes are very efficient for both RAID configurations; however, RAID-5+ log writes are about 7% faster than RAID-1 log writes. This is due to the fast-write algorithms of the Hitachi Data Systems Freedom 7700E discussed earlier.

How the Test is Executed

The TPC-C benchmark is started with a ramp-up time of 2 minutes, a sustained running time of 40 minutes, and a ramp-down of 1 minute. The ramp-up and ramp-down times built into the TPC-C kit provided by Oracle allow the system and database to stabilize before and after the run. The sequence of events for an entire test is as follows: Once the database is in place (unmodified after an initial build), the system is rebooted. After the system is back up, the UNIX command dd is executed to flush the Hitachi Data Systems Freedom 7700E's cache of all data. Oracle8 Server is started. Then the TPC-C benchmark is started for a run time of 40 minutes. Ten minutes into the TPC-C test, an Oracle checkpoint is started (flushing the buffer cache from system memory to disk). Thirty minutes later, the test is over; statistics are gathered from the database and the system, and reports are generated. These tests were run several times on the same RAID configurations to validate the results and ensure they are repeatable. There is less than 1% deviation between runs.

Physical I/O Comparisons

The three most important I/Os in the Oracle8 environment are datafile reads, datafile writes, and logfile writes. Analysis of these three I/O operations for both RAID configurations will enhance the understanding of the overall tpm-C result. In order to simplify the computations, only the physical I/Os for the major tables and indexes in the TPC-C database are examined.
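The per-block figures in the tables that follow are simple quotients of total service time over blocks transferred. As a worked check using only numbers that survive in the report (Table-5's RAID-1 row: 809K blocks, 8,968k ms):

```python
# ms-per-block = total physical I/O service time / blocks transferred.
def ms_per_block(total_ms: float, blocks: int) -> float:
    return total_ms / blocks

# Table-5, RAID-1 row: 809K blocks written in 8,968k ms of write time.
raid1_write = ms_per_block(8_968_000, 809_000)
assert round(raid1_write, 2) == 11.09   # ms per block written
```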
Data File Read Comparison

Table-4 lists the combined physical read values for the top 12 datafiles, which represent about 98% of all datafile physical read operations.

Table-4: Physical reads compared.

  Configuration  Blocks  Time (ms)  Time (ms/block)
  RAID-1         -       5645k      4.41
  RAID-5+        -       5040k      3.75

For each block read, the RAID-1 configuration used about 18% more time than the RAID-5+ configuration.

Data File Write Comparisons

Table-5 lists the combined physical write values for the top 10 datafiles, which represent about 98% of all datafile physical write operations.
Table-5: Physical writes compared.

  Configuration  Blocks  Time (ms)  Time (ms/block)
  RAID-1         809K    8968k      -
  RAID-5+        -       12728k     -

For each block written, the RAID-5+ configuration took only about 35% more time than the RAID-1 configuration. This is substantially better than the "10 times slower" figure mentioned in C. Millsap's paper (Millsap, C.) describing traditional RAID-5 technologies and performance.

Redo Log File Write Comparisons

Table-6 compares the redo logfile write time per transaction between the two RAID configurations. These are the physical write time values for the redo logfile:

Table-6: Redo logfile write time per transaction compared.

  Configuration  Redo-write-time (ms)/transaction
  RAID-1         -
  RAID-5+        -

The RAID-1 configuration was 4% slower than the RAID-5+ configuration. The I/O characteristic of the redo logfile is a sequential write while update transactions are being performed on the database.

Conclusion about the Results and Analysis

The RAID-5+ configuration's datafile reads are 18% faster than the RAID-1 configuration's. The RAID-1 configuration's datafile writes are 35% faster than the RAID-5+ configuration's. The RAID-5+ configuration's logfile writes are 4% faster than the RAID-1 configuration's. The bottom line is that the RAID-5+ configuration is capable of performing 4.5% MORE transactions per minute than the RAID-1 configuration as defined by the TPC-C benchmark. Datafile writes do not affect TPC-C throughput as much as datafile reads and logfile writes. This conclusion is consistent with the Oracle8 Server transaction processing model. Oracle8 commits a transaction once the redo log entry is written to the logfile. This explains why datafile write performance is not as important as datafile read performance and logfile write performance in an OLTP environment, if overall transactional throughput is the metric used for comparison.
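The commit-path argument can be put into a toy model (assumptions are mine, not the paper's: an illustrative three synchronous block reads per transaction, and Table-6's 4% log-write gap expressed as normalized times). Because DBWR destages datafile writes asynchronously, only reads and the redo write sit on the response-time path:

```python
# Toy commit-path latency: synchronous datafile reads + redo log write.
# Datafile writes are omitted because DBWR performs them in the background.
READS_PER_TXN = 3                            # illustrative assumption

read_ms = {"RAID-1": 4.41, "RAID-5+": 3.75}  # Table-4, ms per block read
redo_ms = {"RAID-1": 1.04, "RAID-5+": 1.00}  # normalized: RAID-1 ~4% slower

latency = {cfg: READS_PER_TXN * read_ms[cfg] + redo_ms[cfg]
           for cfg in read_ms}
# RAID-5+ shows lower synchronous latency, consistent with its higher tpm-C
# despite slower datafile writes.
assert latency["RAID-5+"] < latency["RAID-1"]
```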
Robust Features, High Availability, and Reliability Unmatched in the Industry

The 7700E has been developed to provide the best responsiveness and throughput of any storage subsystem, particularly in its implementation of RAID-5+. The 7700E's RAID-5+ striping discussed earlier, coupled with sophisticated technology, cache
management, and parity generation techniques, has made the 7700E capable of outperforming all other RAID implementations for multi-platform storage subsystems. The closest competitor does not support full RAID-5, but has implemented a subset of RAID-5 functionality. In contrast to traditional RAID-5 data striping, this implementation stripes logical volumes across its physical drives. This means that the data for a logical volume is located on one disk and the parity for that volume on another, and each physical drive contains multiple complete logical volumes and/or parity volumes. Parity regeneration for a write to a volume, then, requires that data be read from 2 other logical volumes on 2 other disks. The resulting performance degradation in read/write (database) environments is dramatic. This explains why the competition suggests and encourages RAID-1 as the only high-performing database RAID implementation.

The Hitachi Data Systems Freedom 7700E provides capabilities and storage features unparalleled in the industry, including disaster recovery, remote copy, centralized and distributed management tools, high availability software, data migration and conversion, and backup and restore. The Hitachi Data Systems Freedom 7700E is the most reliable, most fault-tolerant storage subsystem built. Mirrored cache (battery backed up), redundant power supplies, dual-ported disk drives, channel clusters, and much more provide uninterrupted access to data unseen elsewhere in the industry. The RAID Advisory Board (RAB) has issued the Hitachi Data Systems Freedom 7700E its highest rating for data availability: Disaster Tolerant Disk Subsystem Plus (DTDS+). For a complete list and description of features, options, and the reliability and high availability capabilities built into the Hitachi Data Systems Freedom 7700E, refer to the Hitachi Data Systems Freedom 7700E Users and Reference Guide supplied by Hitachi Data Systems.
Conclusion

From this joint effort between Oracle Corporation and Hitachi Data Systems, RAID-5+ incorporated within the Hitachi Data Systems Freedom 7700E proves that high performance in Online Transaction Processing environments is possible. Whether the application is a decision support system or an OLTP environment, the Hitachi Data Systems Freedom 7700E can enhance I/O performance at a fraction of the cost while leading the industry in reliability, data integrity, and availability, thus increasing the usability of a storage investment. When subjected to hard-hitting, I/O-hungry OLTP applications like the TPC-C benchmark test suite, the Hitachi Data Systems Freedom 7700E, together with the Oracle8 Server, sets new standards and knocks down traditional thinking for database designers and architects. The 7700E has maintained its goal of allowing the best possible application performance while minimizing storage management and storage architecture concerns. There will continue to be many intellectual arguments about the best RAID implementation. However, the proof of a storage architecture is in providing the highest availability and performance with the least amount of storage management. The Freedom 7700E multi-platform storage subsystem provides solutions for all application environments.
References

Millsap, C. "Configuring Oracle Server for VLDB." Oracle Internal Document.
Wong, B. "The TPC-C database benchmark: What does it really mean?" SunWorld Online.
TPC Benchmark C, Standard Specifications. Transaction Processing Performance Council.
Hitachi Data Systems. "Hitachi Data Systems Freedom 7700E Users and Reference Guide."
RAID Advisory Board. RAB Classification: Disaster Tolerant Disk Subsystem Plus (DTDS+).
More informationPESIT Bangalore South Campus
PESIT Bangalore South Campus Hosur road, 1km before Electronic City, Bengaluru -100 Department of Information Science & Engineering SOLUTION MANUAL INTERNAL ASSESSMENT TEST 1 Subject & Code : Storage Area
More informationMaintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Navisphere QoS
Maintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Navisphere QoS Applied Technology Abstract This white paper describes tests in which Navisphere QoS Manager and
More informationMaintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Navisphere QoS
Maintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Applied Technology Abstract This white paper describes tests in which Navisphere QoS Manager and VMware s Distributed
More informationI/O CANNOT BE IGNORED
LECTURE 13 I/O I/O CANNOT BE IGNORED Assume a program requires 100 seconds, 90 seconds for main memory, 10 seconds for I/O. Assume main memory access improves by ~10% per year and I/O remains the same.
More informationVERITAS Dynamic MultiPathing (DMP) Increasing the Availability and Performance of the Data Path
White Paper VERITAS Storage Foundation for Windows VERITAS Dynamic MultiPathing (DMP) Increasing the Availability and Performance of the Data Path 12/6/2004 1 Introduction...3 Dynamic MultiPathing (DMP)...3
More informationEMC VMAX 400K SPC-2 Proven Performance. Silverton Consulting, Inc. StorInt Briefing
EMC VMAX 400K SPC-2 Proven Performance Silverton Consulting, Inc. StorInt Briefing EMC VMAX 400K SPC-2 PROVEN PERFORMANCE PAGE 2 OF 10 Introduction In this paper, we analyze all- flash EMC VMAX 400K storage
More informationThe Modern Virtualized Data Center
WHITEPAPER The Modern Virtualized Data Center Data center resources have traditionally been underutilized while drawing enormous amounts of power and taking up valuable floorspace. Storage virtualization
More informationEMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE
White Paper EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE EMC XtremSF, EMC XtremCache, EMC Symmetrix VMAX and Symmetrix VMAX 10K, XtremSF and XtremCache dramatically improve Oracle performance Symmetrix
More informationLEVERAGING EMC FAST CACHE WITH SYBASE OLTP APPLICATIONS
White Paper LEVERAGING EMC FAST CACHE WITH SYBASE OLTP APPLICATIONS Abstract This white paper introduces EMC s latest innovative technology, FAST Cache, and emphasizes how users can leverage it with Sybase
More informationDELL TM AX4-5 Application Performance
DELL TM AX4-5 Application Performance A Comparison of Entry-level Storage Platforms Abstract This paper compares the performance of the Dell AX4-5 with the performance of similarly configured IBM DS3400
More informationLSI Corporation
Figure 47 RAID 00 Configuration Preview Dialog 18. Check the information in the Configuration Preview Dialog. 19. Perform one of these actions: If the virtual drive configuration is acceptable, click Accept
More informationIBM InfoSphere Streams v4.0 Performance Best Practices
Henry May IBM InfoSphere Streams v4.0 Performance Best Practices Abstract Streams v4.0 introduces powerful high availability features. Leveraging these requires careful consideration of performance related
More informationIMPROVING THE PERFORMANCE, INTEGRITY, AND MANAGEABILITY OF PHYSICAL STORAGE IN DB2 DATABASES
IMPROVING THE PERFORMANCE, INTEGRITY, AND MANAGEABILITY OF PHYSICAL STORAGE IN DB2 DATABASES Ram Narayanan August 22, 2003 VERITAS ARCHITECT NETWORK TABLE OF CONTENTS The Database Administrator s Challenge
More informationRecommendations for Aligning VMFS Partitions
VMWARE PERFORMANCE STUDY VMware ESX Server 3.0 Recommendations for Aligning VMFS Partitions Partition alignment is a known issue in physical file systems, and its remedy is well-documented. The goal of
More informationEMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON CX4 and Enterprise Flash Drives
EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON CX4 and Enterprise Flash Drives A Detailed Review EMC Information Infrastructure Solutions Abstract This white paper demonstrates
More informationIBM Power 570. TPC Benchmark TM C Full Disclosure Report
IBM Power 570 Using Oracle Database 10g Release 2 Enterprise Edition and Red Hat Enterprise Linux Advanced Platform 5 for POWER TPC Benchmark TM C Full Disclosure Report Second Edition July 16, 2008 1
More informationAvaya IQ 5.1 Database Server Configuration Recommendations And Oracle Guidelines
Avaya IQ 5.1 Database Server Configuration Recommendations Avaya IQ Database Server Page 2 of 11 Issue 4.0 1. INTRODUCTION... 3 1.1 Purpose...3 1.2 BACKGROUND...3 1.3 Terminology...3 2. CONFIGURING IQ
More informationTechnical Note P/N REV A01 March 29, 2007
EMC Symmetrix DMX-3 Best Practices Technical Note P/N 300-004-800 REV A01 March 29, 2007 This technical note contains information on these topics: Executive summary... 2 Introduction... 2 Tiered storage...
More informationIBM i Version 7.3. Systems management Disk management IBM
IBM i Version 7.3 Systems management Disk management IBM IBM i Version 7.3 Systems management Disk management IBM Note Before using this information and the product it supports, read the information in
More informationSYSTEM UPGRADE, INC Making Good Computers Better. System Upgrade Teaches RAID
System Upgrade Teaches RAID In the growing computer industry we often find it difficult to keep track of the everyday changes in technology. At System Upgrade, Inc it is our goal and mission to provide
More informationVERITAS Database Edition for Sybase. Technical White Paper
VERITAS Database Edition for Sybase Technical White Paper M A R C H 2 0 0 0 Introduction Data availability is a concern now more than ever, especially when it comes to having access to mission-critical
More informationAssessing performance in HP LeftHand SANs
Assessing performance in HP LeftHand SANs HP LeftHand Starter, Virtualization, and Multi-Site SANs deliver reliable, scalable, and predictable performance White paper Introduction... 2 The advantages of
More informationEMC CLARiiON CX3 Series FCP
EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com Copyright 2008
More informationA GPFS Primer October 2005
A Primer October 2005 Overview This paper describes (General Parallel File System) Version 2, Release 3 for AIX 5L and Linux. It provides an overview of key concepts which should be understood by those
More informationPerformance/Throughput
Markets, S. Zaffos Research Note 31 March 2003 ATA Disks Redefine RAID Price/Performance Cost-optimized storage infrastructures should include redundant arrays of independent disks built with low-cost
More information1 of 6 4/8/2011 4:08 PM Electronic Hardware Information, Guides and Tools search newsletter subscribe Home Utilities Downloads Links Info Ads by Google Raid Hard Drives Raid Raid Data Recovery SSD in Raid
More informationBACKUP AND RECOVERY FOR ORACLE DATABASE 11g WITH EMC DEDUPLICATION A Detailed Review
White Paper BACKUP AND RECOVERY FOR ORACLE DATABASE 11g WITH EMC DEDUPLICATION EMC GLOBAL SOLUTIONS Abstract This white paper provides guidelines for the use of EMC Data Domain deduplication for Oracle
More information1 of 8 14/12/2013 11:51 Tuning long-running processes Contents 1. Reduce the database size 2. Balancing the hardware resources 3. Specifying initial DB2 database settings 4. Specifying initial Oracle database
More informationEMC Unified Storage for Oracle Database 11g/10g Virtualized Solution. Enabled by EMC Celerra and Linux using FCP and NFS. Reference Architecture
EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution Enabled by EMC Celerra and Linux using FCP and NFS Reference Architecture Copyright 2009 EMC Corporation. All rights reserved. Published
More informationIBM TotalStorage Enterprise Storage Server Model RAID 5 and RAID 10 Configurations Running Oracle Database Performance Comparisons
IBM TotalStorage Enterprise Storage Server Model 800 - RAID 5 and RAID 10 Configurations Running Oracle Database Performance Comparisons May 2003 IBM Systems Group Open Storage Systems Laboratory, San
More informationStorage Optimization with Oracle Database 11g
Storage Optimization with Oracle Database 11g Terabytes of Data Reduce Storage Costs by Factor of 10x Data Growth Continues to Outpace Budget Growth Rate of Database Growth 1000 800 600 400 200 1998 2000
More informationCost and Performance benefits of Dell Compellent Automated Tiered Storage for Oracle OLAP Workloads
Cost and Performance benefits of Dell Compellent Automated Tiered Storage for Oracle OLAP This Dell technical white paper discusses performance and cost benefits achieved with Dell Compellent Automated
More informationRAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE
RAID SEMINAR REPORT 2004 Submitted on: Submitted by: 24/09/2004 Asha.P.M NO: 612 S7 ECE CONTENTS 1. Introduction 1 2. The array and RAID controller concept 2 2.1. Mirroring 3 2.2. Parity 5 2.3. Error correcting
More informationStorage Adapter Testing Report
-Partnership that moves your business forward -Making imprint on technology since 1986 LSI MegaRAID 6Gb/s SATA+SAS Storage Adapter Testing Report Date: 12/21/09 (An Authorized Distributor of LSI, and 3Ware)
More informationI/O CANNOT BE IGNORED
LECTURE 13 I/O I/O CANNOT BE IGNORED Assume a program requires 100 seconds, 90 seconds for main memory, 10 seconds for I/O. Assume main memory access improves by ~10% per year and I/O remains the same.
More informationOverview of Enhancing Database Performance and I/O Techniques
Overview of Enhancing Database Performance and I/O Techniques By: Craig Borysowich Craigb@imedge.net Enterprise Architect Imagination Edge Inc. Database Performance and I/O Overview Database Performance
More informationReference Architecture
EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 in VMware ESX Server EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com
More informationIBM. Systems management Disk management. IBM i 7.1
IBM IBM i Systems management Disk management 7.1 IBM IBM i Systems management Disk management 7.1 Note Before using this information and the product it supports, read the information in Notices, on page
More informationEMC CLARiiON Backup Storage Solutions
Engineering White Paper Backup-to-Disk Guide with Computer Associates BrightStor ARCserve Backup Abstract This white paper describes how to configure EMC CLARiiON CX series storage systems with Computer
More informationI/O Characterization of Commercial Workloads
I/O Characterization of Commercial Workloads Kimberly Keeton, Alistair Veitch, Doug Obal, and John Wilkes Storage Systems Program Hewlett-Packard Laboratories www.hpl.hp.com/research/itc/csl/ssp kkeeton@hpl.hp.com
More information1. Introduction. Traditionally, a high bandwidth file system comprises a supercomputer with disks connected
1. Introduction Traditionally, a high bandwidth file system comprises a supercomputer with disks connected by a high speed backplane bus such as SCSI [3][4] or Fibre Channel [2][67][71]. These systems
More informationStorage Designed to Support an Oracle Database. White Paper
Storage Designed to Support an Oracle Database White Paper Abstract Databases represent the backbone of most organizations. And Oracle databases in particular have become the mainstream data repository
More informationSAP Applications on IBM XIV System Storage
SAP Applications on IBM XIV System Storage Hugh Wason IBM Storage Product Manager SAP Storage Market - Why is it Important? Storage Market for SAP is estimated at $2Bn+ SAP BW storage sizes double every
More informationDell PowerEdge R720xd with PERC H710P: A Balanced Configuration for Microsoft Exchange 2010 Solutions
Dell PowerEdge R720xd with PERC H710P: A Balanced Configuration for Microsoft Exchange 2010 Solutions A comparative analysis with PowerEdge R510 and PERC H700 Global Solutions Engineering Dell Product
More informationEMC XTREMCACHE ACCELERATES ORACLE
White Paper EMC XTREMCACHE ACCELERATES ORACLE EMC XtremSF, EMC XtremCache, EMC VNX, EMC FAST Suite, Oracle Database 11g XtremCache extends flash to the server FAST Suite automates storage placement in
More informationHow to Speed up Database Applications with a Purpose-Built SSD Storage Solution
How to Speed up Database Applications with a Purpose-Built SSD Storage Solution SAN Accessible Storage Array Speeds Applications by up to 25x Introduction Whether deployed in manufacturing, finance, web
More informationEMC CLARiiON Database Storage Solutions: Microsoft SQL Server 2000 and 2005
EMC CLARiiON Database Storage Solutions: Microsoft SQL Server 2000 and 2005 Best Practices Planning Abstract This technical white paper explains best practices associated with Microsoft SQL Server 2000
More informationSTEPS Towards Cache-Resident Transaction Processing
STEPS Towards Cache-Resident Transaction Processing Stavros Harizopoulos joint work with Anastassia Ailamaki VLDB 2004 Carnegie ellon CPI OLTP workloads on modern CPUs 6 4 2 L2-I stalls L2-D stalls L1-I
More informationDisk Storage Systems. Module 2.5. Copyright 2006 EMC Corporation. Do not Copy - All Rights Reserved. Disk Storage Systems - 1
Disk Storage Systems Module 2.5 2006 EMC Corporation. All rights reserved. Disk Storage Systems - 1 Disk Storage Systems After completing this module, you will be able to: Describe the components of an
More informationDELL EMC CX4 EXCHANGE PERFORMANCE THE ADVANTAGES OF DEPLOYING DELL/EMC CX4 STORAGE IN MICROSOFT EXCHANGE ENVIRONMENTS. Dell Inc.
DELL EMC CX4 EXCHANGE PERFORMANCE THE ADVANTAGES OF DEPLOYING DELL/EMC CX4 STORAGE IN MICROSOFT EXCHANGE ENVIRONMENTS Dell Inc. October 2008 Visit www.dell.com/emc for more information on Dell/EMC Storage.
More informationHP Supporting the HP ProLiant Storage Server Product Family.
HP HP0-698 Supporting the HP ProLiant Storage Server Product Family https://killexams.com/pass4sure/exam-detail/hp0-698 QUESTION: 1 What does Volume Shadow Copy provide?. A. backup to disks B. LUN duplication
More informationTuning WebHound 4.0 and SAS 8.2 for Enterprise Windows Systems James R. Lebak, Unisys Corporation, Malvern, PA
Paper 272-27 Tuning WebHound 4.0 and SAS 8.2 for Enterprise Windows Systems James R. Lebak, Unisys Corporation, Malvern, PA ABSTRACT Windows is SAS largest and fastest growing platform. Windows 2000 Advanced
More informationStorage Edition 2.0. for Oracle. Performance Report
Storage Edition 2.0 for Oracle Performance Report O C T O B E R 1 9 9 9 Table of Contents Executive Summary...1 Introduction...1 About the Benchmark...2 Test Configuration...2 Overview and Analysis of
More informationOPS-23: OpenEdge Performance Basics
OPS-23: OpenEdge Performance Basics White Star Software adam@wss.com Agenda Goals of performance tuning Operating system setup OpenEdge setup Setting OpenEdge parameters Tuning APWs OpenEdge utilities
More informationSAP SD Benchmark with DB2 and Red Hat Enterprise Linux 5 on IBM System x3850 M2
SAP SD Benchmark using DB2 and Red Hat Enterprise Linux 5 on IBM System x3850 M2 Version 1.0 November 2008 SAP SD Benchmark with DB2 and Red Hat Enterprise Linux 5 on IBM System x3850 M2 1801 Varsity Drive
More informationSurvey Of Volume Managers
Survey Of Volume Managers Nasser M. Abbasi May 24, 2000 page compiled on June 28, 2015 at 10:44am Contents 1 Advantages of Volume Managers 1 2 Terminology used in LVM software 1 3 Survey of Volume Managers
More informationAdministrivia. CMSC 411 Computer Systems Architecture Lecture 19 Storage Systems, cont. Disks (cont.) Disks - review
Administrivia CMSC 411 Computer Systems Architecture Lecture 19 Storage Systems, cont. Homework #4 due Thursday answers posted soon after Exam #2 on Thursday, April 24 on memory hierarchy (Unit 4) and
More informationFour-Socket Server Consolidation Using SQL Server 2008
Four-Socket Server Consolidation Using SQL Server 28 A Dell Technical White Paper Authors Raghunatha M Leena Basanthi K Executive Summary Businesses of all sizes often face challenges with legacy hardware
More informationDatabase Management Systems, 2nd edition, Raghu Ramakrishnan, Johannes Gehrke, McGraw-Hill
Lecture Handout Database Management System Lecture No. 34 Reading Material Database Management Systems, 2nd edition, Raghu Ramakrishnan, Johannes Gehrke, McGraw-Hill Modern Database Management, Fred McFadden,
More informationDesign Considerations for Using Flash Memory for Caching
Design Considerations for Using Flash Memory for Caching Edi Shmueli, IBM XIV Storage Systems edi@il.ibm.com Santa Clara, CA August 2010 1 Solid-State Storage In a few decades solid-state storage will
More informationDefinition of RAID Levels
RAID The basic idea of RAID (Redundant Array of Independent Disks) is to combine multiple inexpensive disk drives into an array of disk drives to obtain performance, capacity and reliability that exceeds
More informationClustering and Reclustering HEP Data in Object Databases
Clustering and Reclustering HEP Data in Object Databases Koen Holtman CERN EP division CH - Geneva 3, Switzerland We formulate principles for the clustering of data, applicable to both sequential HEP applications
More informationIBM Tivoli Storage Manager for Windows Version Installation Guide IBM
IBM Tivoli Storage Manager for Windows Version 7.1.8 Installation Guide IBM IBM Tivoli Storage Manager for Windows Version 7.1.8 Installation Guide IBM Note: Before you use this information and the product
More informationStorage Area Networks: Performance and Security
Storage Area Networks: Performance and Security Presented by Matthew Packard July 27, 2003 SAN Architecture - Definition & DAS Limitations Storage Area Network (SAN) universal storage connectivity free
More informationHow HP delivered a 3TB/hour Oracle TM backup & 1TB/hour restore. Andy Buckley Technical Advocate HP Network Storage Solutions
How HP delivered a 3TB/hour Oracle TM backup & 1TB/hour restore Andy Buckley Technical Advocate HP Network Storage Solutions Why Oracle? RDBMS Market share 2002 (%) NCR 3.5 Microsoft 22.8 Others 7.1 Oracle
More informationAvaya IQ 5.0 Database Server Configuration Recommendations And Oracle Guidelines
Avaya IQ 5.0 Database Server Configuration Recommendations Avaya IQ Database Server Page 2 of 12 Issue 3.0 1. INTRODUCTION... 3 1.1 Purpose... 3 1.2 BACKGROUND... 3 1.3 Terminology... 3 2. CONFIGURING
More informationIBM TotalStorage Enterprise Storage Server Delivers Bluefin Support (SNIA SMIS) with the ESS API, and Enhances Linux Support and Interoperability
Hardware Announcement February 17, 2003 IBM TotalStorage Enterprise Storage Server Delivers Bluefin Support (SNIA SMIS) with the ESS API, and Enhances Linux Support and Interoperability Overview The IBM
More informationNEC Express5800/1320Xd (32 SMP)
TPC Benchmark C Full Disclosure Report NEC Express5800/1320Xd (32 SMP) with Oracle Database 10g Enterprise Edition and SUSE LINUX Enterprise Server 9 for Itanium Processors Second Edition August 30, 2004
More informationAn Introduction to I/O and Storage Tuning. Randy Kreiser Senior MTS Consulting Engineer, SGI
Randy Kreiser Senior MTS Consulting Engineer, SGI 40/30/30 Performance Rule 40% Hardware Setup 30% System Software Setup 30% Application Software Analyze the Application Large/Small I/O s Sequential/Random
More informationManaging Oracle Real Application Clusters. An Oracle White Paper January 2002
Managing Oracle Real Application Clusters An Oracle White Paper January 2002 Managing Oracle Real Application Clusters Overview...3 Installation and Configuration...3 Oracle Software Installation on a
More informationVeritas InfoScale Enterprise for Oracle Real Application Clusters (RAC)
Veritas InfoScale Enterprise for Oracle Real Application Clusters (RAC) Manageability and availability for Oracle RAC databases Overview Veritas InfoScale Enterprise for Oracle Real Application Clusters
More informationQLogic 2500 Series FC HBAs Accelerate Application Performance
QLogic 2500 Series FC HBAs Accelerate QLogic 8Gb Fibre Channel Adapters from Cavium: Planning for Future Requirements 8Gb Performance Meets the Needs of Next-generation Data Centers EXECUTIVE SUMMARY It
More informationQuickSpecs. Models. HP Smart Array 642 Controller. Overview. Retired
Overview The Smart Array 642 Controller (SA-642) is a 64-bit, 133-MHz PCI-X, dual channel, SCSI array controller for entry-level hardwarebased fault tolerance. Utilizing both SCSI channels of the SA-642
More informationTechnologies of ETERNUS6000 and ETERNUS3000 Mission-Critical Disk Arrays
Technologies of ETERNUS6000 and ETERNUS3000 Mission-Critical Disk Arrays V Yoshinori Terao (Manuscript received December 12, 2005) Fujitsu has developed the ETERNUS6000 and ETERNUS3000 disk arrays for
More informationChapter 10: Mass-Storage Systems
Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space
More informationIomega REV Drive Data Transfer Performance
Technical White Paper March 2004 Iomega REV Drive Data Transfer Performance Understanding Potential Transfer Rates and Factors Affecting Throughput Introduction Maximum Sustained Transfer Rate Burst Transfer
More informationVERITAS File System Version 3.4 Patch 1 vs. Solaris 8 Update 4 UFS
Performance Comparison: VERITAS File System Version 3.4 Patch 1 vs. Solaris 8 Update 4 UFS V E R I T A S W H I T E P A P E R Table of Contents Executive Summary............................................................................1
More informationWHITE PAPER. VERITAS Database Edition 1.0 for DB2 PERFORMANCE BRIEF OLTP COMPARISON AIX 5L. September
WHITE PAPER VERITAS Database Edition 1.0 for DB2 PERFORMANCE BRIEF OLTP COMPARISON AIX 5L September 2002 1 TABLE OF CONTENTS Introduction...3 Test Configuration...3 Results and Analysis...4 Conclusions...6
More informationData Sheet: Storage Management Veritas Storage Foundation for Oracle RAC from Symantec Manageability and availability for Oracle RAC databases
Manageability and availability for Oracle RAC databases Overview Veritas Storage Foundation for Oracle RAC from Symantec offers a proven solution to help customers implement and manage highly available
More informationDell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage
Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Database Solutions Engineering By Raghunatha M, Ravi Ramappa Dell Product Group October 2009 Executive Summary
More informationChapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition
Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space
More informationWhite Paper Features and Benefits of Fujitsu All-Flash Arrays for Virtualization and Consolidation ETERNUS AF S2 series
White Paper Features and Benefits of Fujitsu All-Flash Arrays for Virtualization and Consolidation Fujitsu All-Flash Arrays are extremely effective tools when virtualization is used for server consolidation.
More informationIBM System p 570 Model 9117-MMA Using AIX 5L Version 5.3 and Oracle Database 10g Enterprise Edition TPC Benchmark TM C Full Disclosure Report
IBM System p 570 Model 9117-MMA Using AIX 5L Version 5.3 and Oracle Database 10g Enterprise Edition TPC Benchmark TM C Full Disclosure Report First Edition August 6, 2007 Special Notices The following
More informationS SNIA Storage Networking Management & Administration
S10 201 SNIA Storage Networking Management & Administration Version 23.3 Topic 1, Volume A QUESTION NO: 1 Which two (2) are advantages of ISL over subscription? (Choose two.) A. efficient ISL bandwidth
More information