VERITAS Storage Foundation 4.0 for Oracle

June 2004
VERITAS Storage Foundation 4.0 for Oracle
Performance Brief: OLTP, Solaris, Oracle 9iR2

Abstract

This document details the high-performance characteristics of VERITAS Storage Foundation 4.0 for Oracle (SFOR) in a 64-bit Solaris 9 operating environment, based on the results of OLTP benchmark tests. VERITAS Quick I/O (QIO), the VERITAS Extension for Oracle Disk Manager (ODM), VERITAS Volume Manager (VxVM) Raw I/O, and Solaris Volume Manager (SVM) Raw I/O were the primary I/O configurations tested. Oracle Disk Manager is Oracle 9's disk management interface, designed to improve file management while maintaining Raw I/O performance; the VERITAS Extension for Oracle Disk Manager plugs into this interface.

Raw I/O in most cases delivers the best OLTP performance in an Oracle environment, but at the cost of file system manageability, disk space management, and CPU loading. Unix file systems in a database environment traditionally sacrifice performance to the file system locking layer and to the buffering of reads and writes. SFOR QIO and ODM eliminate double buffering and double copying while delivering file system manageability and disk space management. VERITAS Storage Foundation 4.0 for Oracle thus delivers Raw I/O performance with file system manageability.

The database OLTP benchmark throughput results show that SFOR QIO and ODM maintain their throughput against the VxVM and SVM Raw I/O results throughout the range of the performance test. ODM was the only I/O configuration that maintained, on average, 97% of Raw I/O performance at the highest user stress levels. Notably, the stress load tests (10-100 users) show that ODM on average, and especially at higher stress levels, not only achieves the highest database throughput but also consumes the least CPU. Cached Quick I/O (CQIO), a read-intensive variant of QIO, is able to outperform QIO, ODM, VxVM Raw I/O, and SVM Raw I/O by up to 33%. These results convincingly show that CQIO can be used in database servers with available memory to gain incremental read performance.
Historically, DBAs do not typically allocate more than 15 GB of memory to the Oracle buffer cache. CQIO enables the use of operating system memory external to the Oracle SGA as a second-level cache for Oracle databases. The OLTP benchmark used in this study is commonly used to evaluate database performance of specific hardware and software configurations. By normalizing the system configuration and varying the file system I/O configuration, it was possible to study the impact of various storage layouts on database performance with this benchmark. The OLTP performance measurements illustrate that the Quick I/O and Oracle Disk Manager features enable VERITAS Storage Foundation for Oracle to achieve performance comparable to Raw I/O configurations at lower CPU utilization. As previous studies reported, this advantage holds regardless of the Oracle release (32-bit or 64-bit) or Solaris 8/9 flavor (32-bit or 64-bit) used.

Introduction

This document describes the performance of VERITAS Storage Foundation 4.0 for Oracle in a 64-bit Solaris 9 operating environment, as measured by an Online Transaction Processing (OLTP) workload. The purpose of this brief is to illustrate the relationship between different I/O and memory configurations and their corresponding effects on database I/O performance. The benchmark used for this performance comparison was derived from the well-known TPC-C benchmark, which comprises a mixture of read-only and update-intensive transactions that simulate a warehouse supplier environment. (Details on this benchmark can be obtained from the Transaction Processing Performance Council's web page at http://www.tpc.org.)

The following software releases were used in the tests:
- VERITAS Storage Foundation 4.0 for Oracle (SFOR)
- Oracle 9iR2 Patch Set 3 (release 9.2.0.4, 64-bit)
- Solaris 9 Update 4 (release 8/03, 64-bit)

VERITAS Storage Foundation 4.0 for Oracle (SFOR) is comprised of the following components:
- VERITAS Storage Foundation 4.0, including VERITAS Volume Manager 4.0 (VxVM) and VERITAS File System 4.0 (VxFS)
- VERITAS Quick I/O
- VERITAS Cached Quick I/O
- VERITAS FlashSnap
- VERITAS Storage Mapping
- VERITAS Storage Checkpoint / Rollback
- VERITAS Enterprise Administrator
- VERITAS Extension for Oracle Disk Manager

Test Configuration

The OLTP benchmark tests were conducted on a Sun Fire F15000 server and a Sun StorEdge 9970 storage array. The server and the storage array were connected over (16) 2 Gb Fibre Channel loops using Brocade Silkworm 3800 switches. There were (2) 2 Gb ports per Host Bus Adapter (HBA) on the Sun Fire F15000 server and (16) target ports on the Sun StorEdge 9970 storage array.
The Sun Fire F15000 server domain was configured with:
- (16) UltraSPARC III processors (1.05 GHz)
- (4) system boards, with (4) I/O boards per system board
- 64 GB of memory

The Sun StorEdge 9970 storage array was configured with:
- 16 GB of cache
- (124) 36 GB Seagate drives (10,000 RPM)
- Disk drives grouped into 31 parity groups: (30) RAID 5 (3D+1P) and (1) RAID 1 (2D+2P), using the OPEN-E emulation
- (4) Brocade Silkworm 3800 switches
- (8) Sun QLogic 2312 Host Bus Adapters

The benchmark tests used (31) LUNs in the Sun StorEdge 9970 array. The fastest LUNs from each parity group were selected. These LUNs were pooled together and carved into logical volumes for the individual tests using either Solaris Volume Manager (SVM) Soft Partitions or VERITAS Volume Manager (VxVM), respectively. For the file system I/O feature tests, two logical volumes were created to separate Oracle datafiles from online redo logs. For the datafile file system, a 30-way striped volume of 250 GB was created using the (30) LUNs from the RAID 5 parity groups. For the redo log file system, a single-LUN volume 9 GB in size was created from the RAID 1 parity group. For Raw I/O configurations, each Oracle datafile resides on a separate 30-way striped volume using the (30) LUNs from the RAID 5 parity groups; similarly, each redo log resides on a separate logical volume using the LUN from the RAID 1 parity group. The database used for the tests was 200 GB in size, with a total of (165) Oracle database files, including redo logs, indexes, undo, temporary, and user tablespaces. This is the size of a TPC-C database with a scale factor of 2,000 warehouses. The benchmark tests were conducted with 5 to 30 GB of Oracle buffer cache and the following (9) I/O configurations.
- VxVM Raw I/O: tests VERITAS Volume Manager raw I/O performance
- Quick I/O: tests the VERITAS File System Quick I/O feature
- Cached Quick I/O: tests the VERITAS File System Cached Quick I/O feature
- Oracle Disk Manager I/O: tests the VERITAS File System Extension for Oracle Disk Manager
- VxFS Direct I/O: tests the VERITAS File System direct I/O mode
- VxFS Buffered I/O: tests the VERITAS File System buffered I/O mode
- SVM Raw I/O: tests Solaris Volume Manager raw I/O performance
- UFS Concurrent Direct I/O: tests the Solaris UNIX File System Concurrent Direct I/O mode
- UFS Buffered I/O: tests the default Solaris UNIX File System buffered I/O mode

The Oracle block size used for all these tests was 8 KB. During the tests, Oracle statistics, Volume Manager statistics, Quick I/O statistics, and ODM I/O statistics were gathered in addition to the benchmark throughput report.

Results and Analysis

The primary performance metric used in this brief is throughput, measured as the number of transactions completed per minute (TPM). The transaction mix in this OLTP benchmark represents entering an order, paying for an order, checking an order, and delivering an order, following the model of a complete business activity. The TPM metric is therefore a measure of business throughput. Table 1 lists the database throughput results of the benchmark tests for the different I/O configurations across a range of Oracle buffer cache sizes. The stress level was fixed at 50 batch users in all the tests. The relative plot of all database throughput results, normalized to the VxVM Raw I/O throughput for each buffer cache size, is shown in Figure 1.

Table 1 - Database throughput of the benchmark tests, in transactions per minute (TPM), by Oracle buffer cache size.

I/O Configuration            5GB     10GB    15GB    20GB    25GB    30GB
VxVM Raw I/O                 14,919  18,956  21,025  21,780  21,813  21,539
Quick I/O                    15,045  19,166  21,331  22,247  22,416  21,912
Cached Quick I/O             19,967  21,898  23,319  23,256  23,164  22,952
Oracle Disk Manager I/O      15,398  19,182  21,309  22,142  22,676  23,017
VxFS Direct I/O              15,130  19,016  21,292  22,161  22,446  22,354
VxFS Buffered I/O            18,642  19,725  19,436  18,376  18,314  18,702
SVM Raw I/O                  14,980  19,032  21,327  21,908  22,175  21,897
UFS Concurrent Direct I/O    14,705  18,206  20,891  22,105  22,579  22,577
UFS Buffered I/O             13,968  14,746  15,422  15,408  15,434  15,181

[Figure 1 - Relative plot of database throughput compared to VxVM Raw I/O (1.0) for qio, cqio, odm, dio, bio, svm-raw, cdio, and ubio across Oracle buffer cache sizes of 5 to 30 GB.]

Figure 1 shows that the database throughput with VxFS QIO, ODM, and UFS CDIO closely matches that of VxVM or SVM Raw I/O. An interesting note: VxFS DIO is able to match Raw I/O performance closely without being slowed down by file system i-node locking. A close examination of disk I/O statistics indicates that the write-behind cache in the backend Sun StorEdge 9970 storage array drastically reduces the performance penalty associated with i-node locking, because disk write times in intelligent arrays are much smaller than those of JBODs.
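To make the normalization behind Figure 1 concrete, the relative values can be recomputed directly from Table 1. This is a small sketch using a subset of the published numbers; the dictionary below simply transcribes four rows of the table.

```python
# Throughput (TPM) from Table 1, keyed by I/O configuration;
# columns are Oracle buffer cache sizes 5, 10, 15, 20, 25, 30 GB.
TPM = {
    "vxvm-raw": [14919, 18956, 21025, 21780, 21813, 21539],
    "qio":      [15045, 19166, 21331, 22247, 22416, 21912],
    "cqio":     [19967, 21898, 23319, 23256, 23164, 22952],
    "odm":      [15398, 19182, 21309, 22142, 22676, 23017],
}

def relative_to_raw(config):
    """Normalize a configuration's throughput to VxVM Raw I/O (1.0),
    as plotted in Figure 1, one ratio per buffer cache size."""
    return [round(t / raw, 3)
            for t, raw in zip(TPM[config], TPM["vxvm-raw"])]

# CQIO's advantage is largest at the smallest buffer cache size.
print(relative_to_raw("cqio"))
# The "up to 33%" CQIO-over-QIO figure quoted in the text, at 5 GB:
print(round(TPM["cqio"][0] / TPM["qio"][0] - 1, 3))
```

Running this shows CQIO roughly a third faster than Raw I/O and QIO at a 5 GB buffer cache, with the gap narrowing as the Oracle buffer cache grows.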

Figure 1 also shows that CQIO is able to outperform QIO by up to 33%. This shows that CQIO can be used in database servers with additional memory. Cached Quick I/O uses operating system memory external to the Oracle SGA as a second-level cache to buffer Oracle databases. The effectiveness of the second-level cache is thus determined by the cache-hit ratio of the file system page cache.

[Figure 2 - Cache effectiveness of double buffering in Cached Quick I/O: hit ratios of the Oracle buffer cache, the file system page cache, and the two combined, for Oracle buffer cache sizes of 5 to 30 GB.]

Figure 2 shows the cache hit ratios of these two caches, individually and combined, at different Oracle buffer cache sizes. It is interesting to observe that moving system memory between these two caches only marginally affects the combined cache hit ratio. The test results show that the database throughput of ODM, DIO, and CDIO closely matches Raw I/O across the different Oracle buffer cache sizes.

The user stress test measures how the various file system I/O configurations scale under different stress levels by changing the number of batch users. The TPM results of these I/O configurations versus user stress are plotted in Figure 3. The size of the Oracle buffer cache was fixed at 15 GB and the stress loading was varied from 20 to 100 users. Figure 3 shows that DIO, ODM, and CDIO scale with the increasing stress levels. ODM in particular outperforms all other I/O configurations at the highest stress levels because multiple DBWR asynchronous writes can be combined into a single ODM I/O call. To illustrate the performance advantage of ODM at higher stress levels, partial vmstat outputs of five I/O configurations at the stress level of 100 batch users are summarized in Table 3.
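The near-constant combined curve in Figure 2 follows from the standard two-level cache relation: a read goes to disk only if it misses the Oracle buffer cache and then also misses the file system page cache. A minimal sketch of that relation follows; the hit ratios used here are illustrative values, not figures reported by the study.

```python
def combined_hit_ratio(oracle_hit, page_cache_hit):
    """Hit ratio of a two-level cache: a block is read from disk only
    when it misses the Oracle buffer cache (level 1) AND then misses
    the file system page cache (level 2)."""
    return oracle_hit + (1.0 - oracle_hit) * page_cache_hit

# Illustrative values: shifting memory between the two caches changes
# the individual hit ratios but barely moves the combined ratio, which
# is the behavior observed in Figure 2.
print(round(combined_hit_ratio(0.70, 0.80), 3))  # small SGA, big page cache
print(round(combined_hit_ratio(0.85, 0.60), 3))  # big SGA, small page cache
```

Both configurations land on the same combined hit ratio, which is why reallocating memory between the SGA and the page cache has little effect on overall cache effectiveness.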

[Figure 3 - Database throughput (TPM) measured at stress levels of 10 to 100 batch users for vxvm-raw, odm, dio, svm-raw, and cdio.]

There are five vmstat metrics in Table 3 in addition to the database throughput (TPM), plus one derived total:
- intr/s: average number of interrupts per second
- sys-call/s: average number of system calls per second
- csw/s: average number of context switches per second
- %usr: average CPU utilization in user mode
- %sys: average CPU utilization in system mode
- %cpu: average total CPU utilization (%usr + %sys)

Table 3 shows that ODM at higher stress levels not only achieves the highest database throughput but also consumes the least CPU. The benefit of combining multiple DBWR asynchronous writes is also evident from the lower rates of system calls and context switches compared to Raw I/O.

Table 3 - Summary of vmstat outputs for Raw I/O, ODM, DIO, and CDIO at the stress level of 100 batch users.

I/O Configuration  TPM     intr/s  sys-call/s  csw/s   %usr   %sys   %cpu
VxVM Raw I/O       22,148  8,486   40,562      12,555  41.7%  14.1%  55.8%
VxFS ODM           24,107  8,873   33,319      10,143  38.9%  14.5%  53.4%
VxFS DIO           22,818  13,478  41,912      26,828  39.5%  16.8%  56.3%
SVM Raw I/O        22,434  8,632   41,423      12,768  42.6%  11.2%  53.8%
UFS CDIO           23,363  13,219  43,292      24,114  40.1%  14.8%  54.9%

Another interesting observation from Table 3 is that the CPU utilization of the file system I/O features is only slightly higher than that of Raw I/O. This is a strong justification for moving databases from raw partitions to file systems: modern file system I/O features such as ODM, QIO, DIO, and CDIO can all achieve Raw-equivalent performance with acceptable extra CPU usage, and the ease of database management with file systems should justify the small increase.
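A crude per-CPU efficiency figure can be derived from Table 3 by dividing throughput by total CPU utilization. This ratio is not a metric reported by the study, just simple arithmetic on the published numbers, but it makes ODM's combined throughput and CPU advantage explicit.

```python
# vmstat summary from Table 3 at 100 batch users: (TPM, %usr, %sys)
VMSTAT = {
    "VxVM Raw I/O": (22148, 41.7, 14.1),
    "VxFS ODM":     (24107, 38.9, 14.5),
    "VxFS DIO":     (22818, 39.5, 16.8),
    "SVM Raw I/O":  (22434, 42.6, 11.2),
    "UFS CDIO":     (23363, 40.1, 14.8),
}

def tpm_per_cpu(config):
    """Transactions per minute per percentage point of total CPU
    (%cpu = %usr + %sys): a crude throughput-per-CPU ratio."""
    tpm, usr, sys_ = VMSTAT[config]
    return round(tpm / (usr + sys_), 1)

# ODM combines multiple DBWR asynchronous writes into a single I/O
# call, so it tops both raw throughput and throughput-per-CPU here.
best = max(VMSTAT, key=tpm_per_cpu)
print(best, tpm_per_cpu(best))
```

By this measure ODM delivers roughly 10% more transactions per point of CPU than VxVM Raw I/O, consistent with the text's observation that it achieves the highest throughput at the lowest CPU usage.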

Summary

The OLTP benchmark used in this study is commonly used to evaluate database performance of specific hardware and software configurations. By normalizing the system configuration and varying the file system I/O configuration, it was possible to study the impact of various storage layouts on database performance with this benchmark. The OLTP performance measurements illustrate that the Quick I/O and Oracle Disk Manager features enable VERITAS Storage Foundation 4.0 for Oracle to achieve performance comparable to Raw I/O configurations. As previous studies reported, this advantage holds regardless of the Oracle release (32-bit or 64-bit) or Solaris 8/9 flavor (32-bit or 64-bit) used. For database servers using intelligent arrays with large caches, the VxFS Direct I/O feature is also able to achieve Raw I/O-equivalent performance, and it scales well at higher stress levels.

When 32-bit Oracle is used, only up to 4 GB of operating system memory can be allocated to the Oracle SGA. For large-memory systems, VERITAS Cached Quick I/O is able to utilize the memory beyond the 4 GB Oracle SGA limit as a second-level cache for Oracle databases. The second-level cache improves Oracle read performance when data blocks are not cached in the Oracle buffer cache but are present in the file system page cache. With 64-bit Oracle, the benefit of the second-level cache diminishes quickly as the size of the Oracle SGA is increased.
