EMC VNX SCALING PERFORMANCE FOR ORACLE 12c RAC ON VMWARE VSPHERE 5.5

White Paper

EMC VNX SCALING PERFORMANCE FOR ORACLE 12c RAC ON VMWARE VSPHERE 5.5
EMC Next-Generation VNX8000, EMC FAST Suite, and EMC SnapSure

Automate storage performance
Scale OLTP workloads
Rapidly provision Oracle databases

EMC Solutions

Abstract
This white paper describes the benefits of virtualizing an Oracle 12c RAC database using VMware vSphere. The VNX8000 model of the EMC next-generation VNX series, with the EMC FAST Suite, provides high-performance network file storage that is accessed using the Oracle Direct NFS (dNFS) client.

December 2013

Copyright 2013 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All trademarks used herein are the property of their respective owners. Part Number H23606 2

Table of contents

Executive summary
    Business case
    Solution overview
    Key results
Introduction
    Purpose
    Scope
    Audience
    Terminology
Technology overview
    EMC VNX8000
    EMC FAST Suite
    EMC FAST Cache
    EMC FAST VP
    EMC SnapSure
    VMware vSphere
    Oracle Database Enterprise Edition
    Oracle Clusterware
    Oracle Direct NFS client
Solution architecture
    Hardware resources
    Software resources
    Oracle storage layout
    Oracle 12c database file system allocation on VNX8000
    Oracle dNFS client configuration
Configuring Oracle databases
    Create CDB
    Create PDB
    Automate startup of PDB with event trigger
    Database and workload profile
    Oracle database schema
    Enable HugePages
Configuring FAST Cache on EMC VNX8000
    Analyze the application workload
    FAST Cache best practices for Oracle
Configuring FAST VP on EMC VNX8000
    Overview
    Tiering policies
        Start high then auto-tier (default policy)
        Auto-tier
        Highest available tier
        Lowest available tier
        No data movement
    Configure FAST VP
VMware ESXi server configuration
    Step 1: Create virtual switches
    Step 2: Configure the virtual machine template
    Step 3: Deploy the virtual machines
    Step 4: Enable access to the storage devices
    Step 5: Enable Jumbo frames
        Oracle RAC 12c virtual machine
        vDS
        Data mover
Node scalability test
    Test objective
    Test procedure
    Test results
FAST Suite test
    FAST Suite and manual tiering comparison
    FAST Cache test
        FAST Cache warm-up
        FAST Cache test procedure
    FAST VP
    FAST Suite test
        FAST Suite test procedure
        FAST Suite test results
        FAST Suite effects on database transactions per minute
        FAST Suite effects on Oracle read response times
Rapid provisioning of PDBs
    Test procedure
    Test results
Conclusion
    Summary
    Findings
References
    Technical documentation
    Product documentation
    Other documentation

Executive summary

Business case

Oracle mission-critical business applications have service levels that require high performance, low latency, and resilience. As a result, Oracle environments must address an increasingly broad range of business demands, including the ability to do the following:

Scale Oracle online transaction processing (OLTP) workloads for performance. VMware vSphere 5.5 enables efficient use of the physical server hardware (database servers) by providing extensibility and scalability of the virtual environment: virtual machines can grow to support the most advanced applications, scaling up to 64 virtual CPUs (vCPUs) and 1 TB of virtual RAM (vRAM).

Maximize performance while reducing the cost of ownership of the system. The Oracle Database 12c dNFS client is optimized for Oracle workloads and provides load balancing and failover across multiple network paths, which significantly improves availability and performance in a NAS-based storage architecture. EMC FAST Suite automatically and non-disruptively optimizes storage based on application access patterns: FAST Cache services the active data set with a small number of flash drives, while Fully Automated Storage Tiering for Virtual Pools (FAST VP) optimizes disk utilization and efficiency across Serial Attached SCSI (SAS) and Near-Line SAS (NL-SAS) drives. Deploying an Oracle NAS solution with a 10 Gigabit Ethernet (GbE) fabric on the EMC VNX8000 delivers both infrastructure cost efficiencies and people and process cost efficiencies on a converged architecture. VMware vSphere 5.5 also provides a new virtual machine feature to meet the demands of latency-sensitive applications.

Rapidly provision cost-efficient and resourceful Oracle databases for development, test, and production environments.

This solution addresses all these challenges by consolidating multiple databases in a scalable virtualized Oracle 12c RAC environment.

Solution overview

This solution uses the following technologies to support the demands of a growing enterprise infrastructure:

EMC VNX8000
EMC Unisphere
EMC FAST VP
EMC FAST Cache
EMC SnapSure checkpoint
VMware vSphere
Oracle 12c RAC Database with the Multitenant option enabled
Oracle 12c Grid Infrastructure

Technologies such as fully automated storage tiering provide simplified storage management to meet the following business needs:

Efficiency: Automates storage tuning for Oracle database performance. With FAST VP and FAST Cache enabled, the storage array adjusts to data access patterns.
Cost savings: Improves the total cost of ownership (TCO). FAST Cache continuously and rapidly responds to changes in data access with fewer flash drives, while FAST VP optimizes disk utilization and efficiency across SAS and NL-SAS drives.
Scalability: Supports growing Oracle workloads that require high I/Os per second (IOPS) by scaling out a virtual Oracle RAC.
Agility: Enables rapid cloning of Oracle database environments for testing, developing, and patching of databases using EMC and VMware technologies.

Key results

This solution demonstrates the following key results:

Performance improvement with FAST Suite: A threefold improvement in transactions per minute (TPM). After warm-up, FAST Cache delivers over 90 percent of application data from high-performance, low-latency flash drives.
Simple management: Only a few steps are required to configure FAST VP and FAST Cache. Customers can enable or disable FAST Cache and FAST VP without affecting system operations.
Non-disruptive performance: FAST VP and FAST Cache identify and promote hot data automatically and non-disruptively.
Scalability: Customers can easily and non-disruptively scale out VMware virtualized Oracle RAC nodes as application needs evolve, enabling them to take an incremental approach to address growing workload needs.
Agility: Rapid provisioning of Oracle databases. Compared with the traditional method of database cloning, EMC SnapSure checkpoints enable a quick and simple process for provisioning an Oracle database clone for test/development purposes in less than 10 minutes, while minimizing the impact on the performance of the production database. This also saves DBA time and reduces the storage requirement.

Introduction

Purpose

This white paper introduces how Oracle OLTP databases can use EMC FAST technology with RAC databases to achieve scalability, performance, and rapid provisioning in a virtual environment using VMware vSphere 5.5 on EMC VNX storage.

Scope

The scope of the white paper is to:

Introduce the key solution technologies
Describe the solution architecture and design
Describe the solution scenarios and present the results of validation testing
Identify the key business benefits of the solution

Audience

This paper is intended for chief information officers (CIOs), data center directors, Oracle database administrators (DBAs), storage administrators, system administrators, virtualization administrators, technical managers, and any others involved in evaluating, acquiring, managing, operating, or designing Oracle database environments.

Terminology

This paper includes the following terminology.

Table 1. Terminology
Acronym | Term
AWR | Automatic Workload Repository
CDB | Oracle 12c multitenant container database
dNFS | Direct NFS
FAST VP | Fully Automated Storage Tiering for Virtual Pools
NFS | Network file system
PDB | Oracle 12c pluggable database
RAC | Real Application Clusters
SAS | Serial Attached SCSI
SGA | System global area
TCO | Total cost of ownership
TPM | Transactions per minute
vDS | vNetwork Distributed Switch
VNX OE | VNX operating environment

Technology overview

The solution uses the following hardware and software components:

EMC VNX8000
EMC FAST Suite
EMC SnapSure
VMware vSphere
Oracle Database 12c R1 Enterprise Edition with the Multitenant option
Oracle Clusterware 12c Release 1

EMC VNX8000

VNX8000 is a member of the VNX series next-generation storage platform, which is powered by Intel quad-core Xeon 2680 series processors and delivers five 9s availability. The VNX series is designed to deliver maximum performance and scalability for enterprises, enabling them to dramatically grow, share, and cost-effectively manage multiprotocol file and block systems. The VNX operating environment (VNX OE) allows Microsoft Windows, Linux, and UNIX clients to share files in multiprotocol NFS and Common Internet File System (CIFS) environments. VNX OE also supports iSCSI, FC, and Fibre Channel over Ethernet (FCoE) access for high-bandwidth and latency-sensitive block applications.

EMC FAST Suite

The FAST Suite for VNX includes FAST Cache and FAST VP.

EMC FAST Cache

FAST Cache uses flash drives to add an extra layer of cache between the dynamic random access memory (DRAM) cache and the rotating disk drives, which creates a faster medium for storing frequently accessed data. FAST Cache is an extendable, read/write cache. It boosts application performance by ensuring that the most active data is served from high-performing flash drives and resides on this faster medium for as long as necessary.

EMC FAST VP

FAST VP is a policy-based, auto-tiering solution for enterprise applications. FAST VP operates at a granularity of 256 MB, referred to as a slice. The goal of FAST VP is to efficiently use storage tiers to lower TCO by tiering colder slices of data to high-capacity drives, such as NL-SAS, and to increase performance by keeping hotter slices of data on performance drives, such as flash drives. This process occurs automatically and transparently to the host environment.

EMC SnapSure

SnapSure enables you to create point-in-time logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. When a block within the PFS is modified, SnapSure saves a copy of the block's original contents to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks of the PFS from the SavVol, and the unchanged PFS blocks remain in the PFS, according to a bitmap and blockmap data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint.

VMware vSphere

VMware vSphere provides the virtualization platform for the VMware ESXi physical machines hosting the Oracle RAC nodes in the virtual environment. VMware vSphere abstracts applications and information from the complexity of the underlying infrastructure. vSphere is the most complete and robust virtualization platform, virtualizing business-critical applications with dynamic resource pools for unprecedented flexibility and reliability. VMware vCenter provides the centralized management platform for vSphere environments, enabling control and visibility at every level of the virtual infrastructure.

Oracle Database Enterprise Edition

Oracle Database Enterprise Edition (EE) 12c Release 1 (R1) introduces a new architecture with the Oracle Multitenant option, in which multiple databases can be consolidated and managed in a shared database and instance. With the Oracle Multitenant option, multiple pluggable databases (PDBs) can be created within a single multitenant container database (CDB). The PDBs share resources provided by the CDB, such as memory, background processes, undo, redo, and control files. The Oracle multitenant architecture offers operational flexibility for moving a PDB between multitenant CDBs by unplugging it from one and plugging it into the other.

The multitenant architecture supports the following configurations:

A single-tenant configuration (one PDB plugged into a CDB) is available at no extra cost in all editions.
The Multitenant option, for up to 252 pluggable databases for each CDB, is available with Oracle 12c EE at an additional cost.

Oracle Multitenant is fully interoperable with Oracle Real Application Clusters (RAC), where each RAC instance opens the CDB as a whole and each SGA shares the data blocks and library cache for each of the PDBs contained in the CDB. The database environment for this solution consists of four RAC virtual machine nodes on two ESXi servers.

Oracle Clusterware

Oracle Clusterware 12c R1 enables servers to communicate with each other so that they appear to function as a collective unit. This combination of servers is commonly known as a cluster. Although the servers are stand-alone servers, each server has additional processes that communicate with the other servers. In this way, the separate servers appear as one system to applications and end users.

Oracle Direct NFS client

Oracle Direct NFS client (dNFS) is an alternative to using kernel-managed NFS. With Oracle Database 12c, instead of using the operating system kernel NFS client, you can configure an Oracle database to access NFS servers directly by using the Oracle internal dNFS client. This native capability enables direct I/O with the storage devices, which bypasses the operating system file cache and reduces the need to copy data between the operating system and the database memory. The dNFS client also enables asynchronous I/O access to NFS appliances.

Oracle dNFS uses Ethernet for storage connectivity. This eliminates the need for expensive, redundant host bus adapters (such as FC HBAs) or FC switches. Because Oracle dNFS implements multipath I/O internally, there is no need to configure bonded network interfaces (such as EtherChannel or 802.3ad Link Aggregation) for performance or availability. This results in additional cost savings, as most NIC bonding strategies require advanced Ethernet switch support.

Solution architecture

This virtualized Oracle RAC Database 12c dNFS solution is designed to test and document:

Node scalability
Performance of the Oracle database with FAST Suite
Provisioning of test or development environments
Resilience of an OLTP workload on an Oracle RAC database configured with dNFS

We* carried out the testing on an Oracle RAC database 12c using a VNX8000 array as the underlying storage. VMware vSphere was used as the virtualization platform. The VNX array was configured as an NFS server, and the Oracle RAC nodes were configured to access the NFS server directly using the Oracle dNFS client.

Figure 1 depicts the architecture of the solution. With VMware vSphere version 5.5 installed, the ESXi server farm consists of three ESXi servers. Four virtual machines (on two ESXi servers) were deployed as a four-node RAC database. The storage and cluster interconnect networks used 10 GbE.

Figure 1. Architecture overview

* In this white paper, "we" refers to the EMC solutions engineering team that deployed and validated the solution.

Hardware resources

Table 2 details the required hardware resources for the solution.

Table 2. Hardware resources
Hardware | Quantity | Configuration
Storage system | 1 | EMC VNX8000 with: 2 storage processors, each with 128 GB memory (cache size 46 GB); 75 x 300 GB 10K 2.5-inch SAS drives; 4 x 300 GB 15K 3.5-inch SAS drives (vault disks); 5 x 200 GB 3.5-inch flash drives; 9 x 3 TB 7.2K 3.5-inch NL-SAS drives; 4 x Data Movers (2 primary and 2 standby); dual-port 10 GbE on each Data Mover
ESXi server | 3 | 4 x 10-core CPUs; 196 GB RAM; 1 x dual-port 1 Gb/s Ethernet NIC; 2 x dual-port 10 Gb/s CNA NICs
Ethernet switch | 2 | 10 Gb/s Ethernet switches (for interconnect/storage Ethernet)
Ethernet switch | 2 | 1 Gb/s Ethernet switches (for public Ethernet)

Software resources

Table 3 details the software resources for the solution.

Table 3. Software resources
Software | Version | Purpose
EMC VNX OE for block | 05.33.000.5.034 | VNX operating environment
EMC VNX OE for file | 8.1.1.33 | VNX operating environment
Unisphere | 1.3.1 | VNX management software
Oracle Grid Infrastructure | 12.1.0.1.0 | Oracle Clusterware
Oracle Database | 12.1.0.1.0 | Oracle Database software
Red Hat Enterprise Linux | 6.3 | Database server OS
VMware vSphere | 5.5 | Hypervisor hosting all virtual machines
VMware vCenter | 5.5 | Management of VMware vSphere
Swingbench | 2.4 | Benchmark tool similar to TPC-C

Oracle storage layout

The disk configuration uses four back-end 6 Gb/s SAS ports within the VNX8000 storage system. Figure 2 illustrates the disk layout of the environment.

Figure 2. Disk layout

Note: A Cluster Ready Services (CRS) pool was deployed on the Redo pool due to low I/O activity.

Figure 3 represents a logical layout of the file system used for the Oracle data files. We used a data-mover configuration with two active and two standby data movers. The two active data movers were used to access the file systems, which were distributed evenly across the four SAS ports. The back-end configuration with the two standbys was based on the I/O requirements.

Figure 3. Data file system logical view

Unisphere provides a simple GUI to create and manage the file systems. Figure 4 shows the usage of each file system and data mover, which is well balanced for the workload.

Figure 4. The file system information panel in Unisphere
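The same usage information can also be checked from the VNX Control Station command line. The following is a minimal sketch, assuming the two active data movers are server_2 and server_3 as in this solution; server_df lists each mounted file system with its size, used space, and mount point:

    # Report file system capacity and usage served by each active Data Mover
    $ server_df server_2
    $ server_df server_3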

Oracle 12c database file system allocation on VNX8000

Table 4 details the Oracle 12c database file system storage allocation on the VNX8000.

Table 4. Oracle file system allocation on VNX8000
File type | RAID type | No. of LUNs | Disk volumes (dvols) | Size | Data mover
PDB data files | 4+1 RAID 5 | 10 | D1 to D10 | 500 GB | Server 2 and Server 3
CDB data files | 4+1 RAID 5 | 10 | D1 to D10 | 500 GB | Server 2 and Server 3
Temp files | 4+1 RAID 5 | 2 | D22, D23 | 133 GB | Server 2, Server 3
Redo logs | 2+2 RAID 10 | 2 | D24, D25 | 100 GB | Server 2, Server 3
FRA files | 4+1 RAID 5 | 10 | D11 to D20 | 4 TB | Server 2
CRS files | 2+2 RAID 10 | 1 | D24 | 5 GB | Server 2

Note: All storage pools were created on 300 GB 10K SAS drives.

Oracle dNFS client configuration

The Oracle dNFS client is a standard feature of Oracle Database 12c and provides improved performance and resilience over OS kernel NFS. Oracle dNFS client technologies provide both resiliency and performance over OS NFS, with the ability to fail over automatically on the 10 GbE fabric and to perform concurrent I/O that bypasses any operating system caches and OS write-order locks. dNFS also performs asynchronous I/O, which allows processing to continue while the I/O request is submitted and processed.

You must configure the Oracle database to use the Oracle dNFS client ODM disk library. This is a one-time operation; after it is set, the database uses the Oracle-optimized native dNFS client instead of the kernel NFS client. The standard ODM library was replaced with one that supports the dNFS client. Figure 5 shows the commands that enable the dNFS client ODM library.

Figure 5. Enable the dNFS client ODM library
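The commands in Figure 5 are not reproduced in this transcription. As a reference, the following is the standard way to relink the ODM library to the dNFS-enabled version in Oracle Database 12c; the ORACLE_HOME path is whatever the environment defines, and the database instances using that home should be shut down before relinking:

    # Relink the ODM library to enable the Oracle Direct NFS client
    $ cd $ORACLE_HOME/rdbms/lib
    $ make -f ins_rdbms.mk dnfs_on

    # To revert to the kernel NFS client: make -f ins_rdbms.mk dnfs_off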

We configured the Oracle dNFS client for the virtual environment. We mounted the Oracle file systems and made them available over the regular NFS mounts. The Oracle dNFS client used the oranfstab configuration file to determine the mount point settings for the NFS storage devices. The oranfstab file must be configured on each RAC node. Figure 6 shows an extract from the oranfstab file used for this solution.

Figure 6. Example of oranfstab configuration file

After it is configured, the management of dNFS mount points and load balancing is controlled from oranfstab and not by the OS.
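The extract in Figure 6 is not reproduced here. The following is a minimal sketch of an oranfstab entry, with hypothetical server names, IP addresses, and export paths; each RAC node lists the data mover it reaches, the local and remote network paths used for dNFS multipathing, and the exports with their mount points:

    server: datamover2
    local: 192.168.10.11   path: 192.168.10.21
    local: 192.168.20.11   path: 192.168.20.21
    export: /oradata_fs1   mount: /oradata_fs1
    export: /oraredo_fs1   mount: /oraredo_fs1
    nfs_version: nfsv3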

Configuring Oracle databases

Create CDB

An Oracle 12c Multitenant database with one pluggable database was created using the Database Configuration Assistant (DBCA). Figure 7 shows the option to create a CDB with or without PDBs.

Figure 7. DBCA example showing the option to create a CDB with one or more PDBs

Create PDB

You can create pluggable databases with DBCA, SQL Developer, Cloud Control, or manually with SQL*Plus commands. PDBs can be created at any stage after the CDB is created. Figure 8 shows the DBCA step to create a pluggable database.

Figure 8. Create a pluggable database

Automate startup of PDB with event trigger

You can mount PDBs only when the CDB is open. To open the PDBs automatically with the CDB, use an event trigger, as shown in Figure 9, which opens all PDBs.

Figure 9. Create database trigger to automatically open all pluggable databases
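The trigger in Figure 9 is not reproduced in this transcription. The SQL below is a minimal sketch of both operations described above: manually creating a PDB and defining the startup trigger that opens all PDBs when the CDB starts. The PDB name, admin user, and file-name-convert paths are illustrative only:

    -- Create a pluggable database manually (illustrative names and paths)
    CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdbadmin IDENTIFIED BY password
      FILE_NAME_CONVERT = ('/pdbseed/', '/pdb1/');

    -- Open all pluggable databases automatically whenever the CDB starts
    CREATE OR REPLACE TRIGGER open_all_pdbs
      AFTER STARTUP ON DATABASE
    BEGIN
      EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
    END;
    /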

In this solution, the Swingbench Order Entry wizard was used to populate the SOE schema into a PDB, as shown in Figure 10. From this gold copy (PDB_GOLD) you can create multiple PDB clones and SnapSure checkpoints.

Figure 10. PDB gold copy

Database and workload profile

Table 5 details the database and workload profile for this solution.

Table 5. Database and workload profile
Profile characteristic | Details
Database type | OLTP
Database size | 2 TB
Oracle RAC | 4 nodes
Oracle SGA for each node | 17 GB
Database performance metric | TPM
Database read/write ratio | 60/40

Oracle database schema

This solution applies a simulated OLTP workload by scaling the number of users using Swingbench. We populated a 2 TB PDB database, using the Swingbench Order Entry wizard to create and populate two SOE schemas.

Figure 11. Oracle RAC database running on all nodes

Enable HugePages

HugePages is crucial for better Oracle database performance where data access is via the SGA. Enabling HugePages can provide better overall memory and CPU performance because:

A larger page size means a smaller page table to manage
Fewer system calls are required
There are no kernel swap daemon operations

See My Oracle Support Note ID 361468.1 for details about HugePages on Oracle Linux 64-bit (requires an Oracle web account).

We performed the following steps to tune the HugePages parameters for optimal performance:

1. To calculate the recommended value for Linux HugePages, ensure that the database is running and run the hugepages_settings.sh script. My Oracle Support Note ID 401749.1 provides this script and more information (requires an Oracle web account).

Figure 12. HugePages script

2. Set the vm.nr_hugepages parameter in /etc/sysctl.conf to the recommended size. In this solution, we used 8996 to accommodate an SGA of 17 GB.

3. Restart the database.

4. Check the values of the HugePages parameters using the following command:

    grep Huge /proc/meminfo

Figure 13. Check HugePages parameters
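The script output and the parameter check in Figures 12 and 13 are not reproduced here. The commands below are a minimal sketch of steps 2 to 4, assuming the default 2 MB huge page size (8996 pages is roughly 17.6 GB, enough headroom for the 17 GB SGA used in this solution):

    # Step 2: reserve the recommended number of huge pages (persistent setting)
    $ echo "vm.nr_hugepages = 8996" >> /etc/sysctl.conf
    $ sysctl -p

    # Step 3: restart the database so the SGA is allocated from huge pages

    # Step 4: verify allocation and usage (HugePages_Free drops once the SGA is up)
    $ grep Huge /proc/meminfo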

Configuring FAST Cache on EMC VNX8000

FAST Cache uses flash drives to add an extra layer of high-speed cache between the DRAM cache and the rotating disk drives, thereby creating a faster medium for storing frequently accessed data. FAST Cache is an extendable, read/write cache. It boosts application performance by ensuring that the most active data is served from high-performing flash drives and can reside on this faster medium for as long as necessary.

FAST Cache is most effective when application workloads exhibit a high data activity skew, that is, when a subset of the data is responsible for most of the dataset activity. FAST Cache is more effective when the primary block reads and writes are small and fit within the 64 KB FAST Cache track. The storage system is able to take advantage of the data skew by dynamically placing data according to its activity. For applications whose datasets exhibit a high degree of skew, FAST Cache can be assigned to concentrate a high percentage of application IOPS on flash capacity.

This section outlines the main steps used to configure and enable FAST Cache for this solution. You can perform the configuration steps using either the Unisphere GUI or the Unisphere command line interface (CLI). For further information about configuring FAST Cache, see Unisphere Help in the Unisphere GUI.

Analyze the application workload

Before you decide to implement FAST Cache, you must analyze the application workload characteristics. Array-level tools are available to EMC field and support personnel for determining both the suitability of FAST Cache for a particular environment and the right cache size to configure. Contact your EMC sales team for guidance.

Whether a particular application can benefit from using FAST Cache, and what the optimal cache size should be, depends on the size of the application's active working set, the access pattern, the IOPS requirement, the RAID type, and the read/write ratio. The EMC FAST Cache section of this paper details how the workload characteristics of OLTP databases make them especially suitable for using FAST Cache. The EMC FAST Cache A Detailed Review and Deploying Oracle Database 11g Release 2 on EMC Unified Storage white papers provide further information.

For this solution, we performed an analysis using the EMC array-level tools, which recommended FAST Cache with four 200 GB flash drives as the optimal configuration.

FAST Cache best practices for Oracle

The following are recommended practices:

Disable FAST Cache on pools or LUNs that do not require it.
Size FAST Cache appropriately, depending on the application's active dataset.
Disable FAST Cache on pools or LUNs where Oracle online redo logs reside.
Never enable FAST Cache on archive logs, because these files are never overwritten and are rarely read back.

EMC recommends that you enable FAST Cache for the Oracle data files only. Oracle archive log files and online redo log files have a predictable workload composed mainly of sequential writes. The array's write cache and the assigned HDDs can efficiently handle these archive files and redo log files. Enabling FAST Cache on these files is neither beneficial nor cost effective.

Configuring FAST VP on EMC VNX8000

Overview

EMC FAST VP provides compelling advantages over traditional tiering options. It combines the advantages of automated storage tiering with Virtual Provisioning to optimize performance and cost while radically simplifying storage management and increasing efficiency. Like FAST Cache, FAST VP works best on datasets that have a high data activity skew.

FAST VP is very flexible and supports several tiered configurations, such as single-tier, multitier, with or without a flash tier, and with FAST Cache. Adding a flash tier can place hot data on flash storage in 256 MB slices. FAST VP can be used to aggressively reduce TCO and to increase performance. A target workload that requires a large number of performance drives can be serviced with a mix of tiers and a much lower drive count. In some environments, you can achieve an almost two-thirds reduction in drive count. You can use FAST VP in combination with FAST Cache to gain TCO benefits while using FAST Cache to boost overall system performance. This paper discusses considerations for an optimal deployment of these technologies. For further information on the FAST VP algorithm and policies, see the EMC FAST VP for Unified Storage Systems white paper.

Tiering policies

FAST VP includes the following tiering policies:

Start high then auto-tier (default)
Auto-tier
Highest available tier
Lowest available tier
No data movement

Start high then auto-tier (default policy)

Start high then auto-tier is the default setting for all pool LUNs upon their creation. Initial data placement is on the highest available tier, and data movement is subsequently based on the activity level of the data. This tiering policy maximizes initial performance and takes full advantage of the most expensive and fastest drives first, while providing subsequent TCO benefits by allowing less active data to be tiered down, making room for more active data in the highest tier. When a pool has multiple tiers, the start high then auto-tier design is capable of relocating data to the highest available tier regardless of the drive type combination. Also, when adding a new tier to a pool, the tiering policy remains the same and there is no need to change it manually.

Auto-tier

FAST VP relocates slices of LUNs based solely on their activity level, after all slices with the highest/lowest available tier settings are relocated. LUNs specified with the highest available tier setting have precedence over LUNs set to auto-tier.

Highest available tier

Select the highest available tier setting for those LUNs which, although not always the most active, require high levels of performance whenever they are accessed. FAST VP prioritizes slices of a LUN with the highest available tier selected above all other settings. Slices of LUNs set to the highest available tier are rank-ordered with each other according to activity. Therefore, in cases where the total LUN capacity set to the highest available tier is greater than the capacity of the pool's highest tier, the busiest slices occupy that capacity.

Lowest available tier

Select the lowest available tier for LUNs that are not performance-sensitive or response-time-sensitive. FAST VP maintains slices of these LUNs on the lowest storage tier available, regardless of activity level.

No data movement

The no data movement policy can be selected only after a LUN has been created. FAST VP does not move slices from their current positions after the no data movement selection has been made. Statistics are still collected on these slices for use if and when the tiering policy is changed.

Configure FAST VP

In this solution, we set the auto-tiering policy to Scheduled. For demonstration purposes, we configured the Data Relocation Schedule as Monday to Sunday, from 00:00 to 23:45. This determines the time window in which FAST VP moves data between tiers.

Note: The Data Relocation Rate and Data Relocation Schedule are highly dependent on the real workload in a customer environment. Usually, setting the Data Relocation Rate to Low has less impact on the currently running workload.

Set the tiering policy for all LUNs containing data files to Auto-tier, so that FAST VP can automatically move the most active data to flash drives. The white paper EMC FAST VP for Unified Storage Systems A Detailed Review provides details of the FAST VP configuration.

VMware ESXi server configuration

As virtualization is now a critical component of an overall IT strategy, it is important to select the right vendor. VMware is the leading business virtualization infrastructure provider, offering the most trusted and reliable platform for building private clouds and federating to public clouds.

For the virtual environment, we configured two ESXi servers on the same server hardware. Two virtual machines were created on each ESXi server to form a four-node Oracle RAC cluster. We created the virtual machines using a VMware template: first we created a Red Hat Enterprise Linux 6.3 virtual machine and installed the Oracle prerequisites and software, then we created a template of this virtual machine and used it to create the other virtual machines to be used as cluster nodes.

We performed the following main steps to configure the ESXi servers:

1. Create virtual switches for the cluster interconnects and the connection to the NFS server.
2. Configure the virtual machine template.
3. Deploy the virtual machines.
4. Enable virtual machine access to the storage devices.

Step 1: Create virtual switches

One standard vSwitch and two vNetwork Distributed Switches (vDS) were created. The standard vSwitch was a public network configured with two 1 Gb NICs for fault tolerance, as shown in Figure 14.

Figure 14. Standard vSwitch configuration

The vDS was used to manage the network traffic between different virtual machines and to manage the connections from the virtual machines to the external data movers. Each vDS was configured with 10 Gb Ethernet connectivity.

As shown in Figure 15, a total of two virtual distributed switches were created. dvSwitchA and dvSwitchB were created for storage redundancy to demonstrate the multipath function of Oracle dNFS. For testing, dvSwitchB also handled the Oracle cluster interconnects; no performance issue was observed, but this is not recommended in a production environment.

Figure 15. vDS configuration

Each switch was created with a dvPort group and an uplink port group. The uplink port group was served by two uplinks. Each uplink used one physical NIC from each ESXi server, as shown in Figure 16.

Figure 16. Detailed vDS configuration

Step 2: Configure the virtual machine template

The virtual machine template was configured according to the requirements and prerequisites for the Oracle software, as shown in Table 6, including the following:

Operating system and Red Hat Package Manager (RPM)
Kernel configuration
OS users
Supporting software

Table 6. Virtual machine template configuration
Part | Description
CPU | 14 vCPUs
Memory | 32 GB
Virtual machine settings | Latency sensitivity = high (vCPU and memory fully reserved)
Operating system | Red Hat Enterprise Linux Server release 6.3 (Santiago), 64-bit
Kernel | 2.6.32-279.el6.x86_64
Network interfaces | Eth0 (1 Gb): public/management IP network; Eth1 (10 Gb): dedicated to cluster interconnect; Eth2 (10 Gb): dedicated to NFS connection to Data Mover 2; Eth3 (10 Gb): dedicated to NFS connection to Data Mover 3
OS user (user created and password set) | Username: oracle, UserID: 1101
OS groups | Group: oinstall, GroupID: 1000; Group: dba, GroupID: 1031
Software pre-installed | The script sshUserSetup.sh was copied from the Oracle Grid Infrastructure 12c binaries to the following folder: /home/oracle/sshUserSetup.sh
rpm packages installed (as Oracle prerequisites) | See the relevant Oracle installation guide
Disk configuration | 30 GB virtual disk for root, /tmp, and the swap space; 15 GB virtual disk for Oracle 12c Grid and RAC Database binaries. Note: As of Oracle Grid Infrastructure 12.1, allow an additional 1 GB of disk space per node for the Cluster Health Monitor (CHM) repository; by default, this resides within the Grid Infrastructure
System configuration (Oracle prerequisites) | See the relevant Oracle installation guides: Oracle Real Application Clusters Installation Guide 12c for Linux; Oracle Grid Infrastructure Installation Guide 12c for Linux

Step 3: Deploy the virtual machines

We deployed four virtual machines from the template stored in VMware vCenter. The Deploy Template wizard was used to specify the name and location of the new virtual machines and to select the option for customizing the guest operating system.

We chose an existing customization specification (in vCenter) to define the configuration of the network interfaces for the new virtual machines, as shown in Figure 17.

Figure 17. Deploy Template wizard

Step 4: Enable access to the storage devices

To enable host access using the Unisphere GUI, select the Create NFS Export option under Storage > Shared folder > NFS, and type the host IP addresses for each NFS export, as shown in Figure 18.

Figure 18. Configure host access
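Host access can also be granted from the VNX Control Station command line instead of the Unisphere GUI. The following is a minimal sketch, with a hypothetical file system export path and client IP addresses; each RAC node's storage interface needs read/write and root access to the export:

    # Export /oradata_fs1 from Data Mover server_2 to two RAC node storage interfaces
    $ server_export server_2 -Protocol nfs \
        -option rw=192.168.10.11:192.168.10.12,root=192.168.10.11:192.168.10.12 \
        /oradata_fs1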

Step 5: Enable Jumbo frames

For Oracle RAC 12c installations, jumbo frames are recommended for the private RAC interconnect and storage networks. This boosts throughput and can also lower the CPU utilization caused by the software overhead of the bonding devices. Jumbo frames increase the device MTU size to a larger value (typically 9,000 bytes).

Jumbo frames were configured at four layers in the virtualized environment:

Oracle RAC 12c virtual machine
vDS
Physical switch
VNX Data Mover

Note: Vendor-specific configuration steps for the physical switch are beyond the scope of this document. Check your switch documentation for details.

Oracle RAC 12c virtual machine

To configure jumbo frames on the Linux guest OS, run the following command:

    ifconfig eth2 mtu 9000

Alternatively, place the following statement in the interface configuration files in /etc/sysconfig/network-scripts (see the persistent configuration sketch after Figure 19):

    MTU=9000

vDS

Figure 19 shows how to configure jumbo frames on a vDS.

Figure 19. Configure Jumbo frames on vDS
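As referenced above, the following is a minimal sketch of a persistent interface configuration for one of the guest's 10 GbE storage NICs; the device name and IP addressing are illustrative and follow standard Red Hat Enterprise Linux 6 conventions rather than values taken from this solution:

    # /etc/sysconfig/network-scripts/ifcfg-eth2 -- storage network to Data Mover 2
    DEVICE=eth2
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.10.11
    NETMASK=255.255.255.0
    MTU=9000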

Data mover

Figure 20 shows how to configure jumbo frames on the VNX Data Mover.

Figure 20. Configure Jumbo frames on VNX Data Mover
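The Data Mover MTU can also be set from the Control Station command line. A minimal sketch follows, assuming a hypothetical 10 GbE interface name on Data Mover server_2; in a real system, use whichever interface name server_ifconfig -all reports:

    # List the Data Mover interfaces, then raise the MTU on the storage interface
    $ server_ifconfig server_2 -all
    $ server_ifconfig server_2 fsn0 mtu=9000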

Node scalability test

Test objective

This test demonstrated performance scalability as both nodes and users were scaled out on an Oracle RAC database with dNFS and 10 GbE in a virtualized environment. We started with an OLTP-like workload on a single node and then added users and nodes to show the scalability of both nodes and users. We used Swingbench to generate the workload.

Test procedure

The testing included the following steps:

1. Run the workload on the first node by gradually increasing the number of concurrent users from 50 to 250 in increments of 50.

2. Add the second node into the workload, and run the same workload as in the previous step on each node separately. The total users scaled from 100 (50 on each node) to 500 (250 on each node) on this two-node RAC database.

3. Repeat the previous two steps after adding the third and fourth nodes.

4. For each user iteration, record the front-end IOPS from Unisphere, the TPM from Swingbench, and the performance statistics from Oracle Automatic Workload Repository (AWR) reports.

Notes:

Benchmark results are highly dependent on workload, specific application configurations, and system design and implementation. Relative system performance varies based on many factors. Therefore, you cannot use this workload as a substitute for a specific environment's application benchmark when making critical capacity planning or product evaluation decisions.

The testing team obtained all performance data in a rigorously controlled environment. Results in other operating environments can vary significantly. EMC Corporation does not guarantee that a user can achieve performance similar to that demonstrated in TPM.
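Swingbench can drive this kind of scaling either from its GUI or from its charbench command-line client. The following is a minimal sketch of a single user iteration, with a hypothetical configuration file name, connect string, user count, and run time; consult the Swingbench 2.4 documentation for the exact options supported by your build:

    # Run the Order Entry workload with 250 concurrent users for 30 minutes against one node
    $ ./charbench -c oeconfig.xml \
        -cs //rac-node1-scan/pdb1 \
        -uc 250 \
        -rt 00:30 \
        -v users,tpm,tps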

Test results

The Cache Fusion architecture of Oracle RAC immediately uses the CPU and memory resources of the new nodes. Thus, we can easily scale out the system CPU and memory resources without affecting the online users. The architecture provides a scalable computing environment that supports the application workload.

Figure 21 shows the TPM that Swingbench recorded during the node scalability testing, scaling both nodes and concurrent users. We scaled the RAC database nodes from one to four. In each RAC configuration, we ran the Swingbench workload with 50, 100, 150, 200, and 250 users on each node. We observed near-linear scaling of TPM as the concurrent user load increased along with the scale of nodes.

The chart illustrates the benefits of using EMC VNX8000 storage with Oracle RAC and dNFS for achieving a scalable OLTP environment. Oracle RAC provides not only horizontal scaling, but also guaranteed continuous availability.

Figure 21. Node scalability test

FAST Suite test

FAST Suite and manual tiering comparison

Manual tiering involves a repeated process that can take nine hours or more to complete each time. In contrast, both FAST VP and FAST Cache operate automatically, eliminating the need to manually identify and then move or cache the hot data. As shown in Figure 22, configuring FAST Cache can take 50 minutes or less; hot and cold data is then cached in and out of FAST Cache continuously and automatically.

Figure 22. FAST Suite and manual tiering comparison

Note: The time stated for configuring FAST VP is a conservative estimate. For details about configuring FAST VP, see the Configuring FAST VP on EMC VNX8000 section of this white paper.

FAST Cache test

FAST Cache boosts the overall performance of the I/O subsystem and works very well with Oracle dNFS in a virtualized Ethernet architecture. FAST Cache enables applications to deliver consistent performance by absorbing heavy read/write loads at flash drive speeds.

We configured four 200 GB flash drives in RAID 10 for FAST Cache. This provided 400 GB of usable, protected FAST Cache for the storage pool that contains the database data files.

FAST Cache warm-up

FAST Cache requires warm-up time before hot data is promoted into it. Figure 23 tracks the FAST Cache read/write hit ratio of the storage pool that stores the data files.

Figure 23. FAST Cache warm-up period

FAST Cache was empty when it was initially created. During the warm-up period, as more hot data was cached, the FAST Cache hit rate increased gradually. In this test, the write hit ratio increased to between 87 and 90 percent, while the read hit ratio increased gradually to 100 percent after the warm-up period. When the locality of the active data changes, FAST Cache requires a warm-up for the new data. This is normal, expected behavior and is fully automatic.

FAST Cache test procedure

To test the performance enhancement provided by FAST Cache, we ran the Swingbench workload on the four-node RAC database, with and without FAST Cache enabled. The test procedure included the following steps:

1. Baseline testing:

a. Run the workload against the four-node RAC database without FAST Cache, and scale the number of concurrent users from 250 to 750 on each node. The active data size was 1 TB, which was deployed on SAS drives only.

b. Monitor the performance statistics, including average front-end IOPS and database TPM for each user iteration, from Oracle AWR reports and Unisphere.

2. FAST Cache testing:

a. Enable FAST Cache on the storage array after the baseline testing, then run the same workload and collect the same performance statistics as for the baseline.

b. After all the FAST Cache testing finishes, compare the performance data with the baseline to determine how much performance enhancement FAST Cache can offer.

The results of the tests are detailed in the FAST Suite test results section.

FAST VP

We created a two-tier FAST VP configuration with a mixed storage pool consisting of 15 SAS drives and eight NL-SAS drives on the VNX8000, and used the capacity tier to decrease the per-GB cost of data. This tier, consisting of 7.2k RPM Near-Line SAS (NL-SAS) drives, was designed for maximum capacity at a modest performance level. Although NL-SAS drives have a slower rotational speed compared to drives in the performance tier, NL-SAS drives can significantly reduce energy use and free up capacity in the more expensive and higher-performing storage tiers. NL-SAS drives cost less than performance drives on a per-GB basis, and their cost is a small fraction of the cost of flash drives. They are the most appropriate type of media for this cold data. NL-SAS drives consume 96 percent less power per terabyte than performance drives and offer a compelling opportunity for improvement that considers both purchase cost and operational efficiency.

FAST Suite test

To demonstrate the advantages of FAST Cache in absorbing random I/O bursts and the benefits of FAST VP auto-tiering, we used the following test scenario: eight NL-SAS and fifteen SAS drives were used for a FAST VP baseline, and four flash drives were added for the FAST VP plus FAST Cache test.

Note: Refer to Analyze the application workload to appropriately size the flash drives for FAST Cache, and refer to the EMC VNX Unified Best Practices for Performance Applied Best Practices Guide. We used eight drives for the FAST VP tier because the rule of thumb for tier construction, as with the extreme performance (flash) tier, is a 6+2 RAID 6 configuration.

To test the performance of the FAST Suite, we ran the same Swingbench workload for the FAST Cache with FAST VP test scenario.

FAST Suite test procedure

Initially, we stored the data files on SAS devices and performed a baseline test. These results were used for comparison with the FAST Cache test (with four flash drives added), using the same workload as before. We performed the following high-level steps:

1. Enabled FAST Cache using four flash drives.

2. Generated the workload against the database to warm up the FAST Cache. Increased the number of users running transactions at intervals to determine how the database performed.

3. Monitored the performance of the database and recorded the average front-end IOPS and database TPM for each user iteration.

FAST Suite test results

FAST Suite effects on database transactions per minute

This section compares the database TPM for each of the test cases, which include the following:

Baseline (40 SAS drives)
FAST Cache-only testing using four 200 GB flash drives and 40 SAS drives
FAST Suite combination testing using four 200 GB flash drives for FAST Cache, and eight 3 TB NL-SAS drives and 15 SAS drives for FAST VP

A close analysis of the performance data from the underlying rotating spindles revealed that once FAST Cache had cached the hot data, the 40 rotating drives containing the Oracle PDB data files received less than 100 IOPS each. A lower number of SAS drives, or even NL-SAS drives, can meet such low per-drive IOPS requirements. This test was repeated by creating the data storage pool on just 15 SAS drives instead of 40 drives to determine whether FAST Cache still delivers the same improvement when the original database is created on fewer rotating drives. Figure 24 shows that the new pool with a reduced number of drives (15 drives) yielded almost the same level of performance as the data pool with 40 drives.

Figure 24. Performance level of new pool with 15 drives

Figure 25 shows the TPM recorded during the period in which the Swingbench workload scaled from 250 to 750 users on each node. The chart shows the performance comparison between the SAS-only baseline test and the test with FAST Cache enabled. TPM increased to 744,260 with FAST Cache enabled.

Figure 25. Baseline (SAS-only) and FAST Cache

Figure 26 compares the baseline test with the FAST VP and FAST Cache combination, which used four 200 GB flash drives for FAST Cache. The TPM increased to around 695,915 and stabilized at that level, almost three times the TPM recorded in the baseline test. Using a mixed pool of 15 SAS drives and 8 NL-SAS drives with the FAST Suite, 93 percent of the TPM from the previous FAST Cache-only test was achieved using fewer disks.

Figure 26. FAST VP with and without FAST Cache (FAST Suite combination)

FAST Suite effects on Oracle read response times

Figure 27 shows the significant improvement in read response time provided by the EMC FAST Suite combination when compared with the baseline. When we enabled FAST Cache, the response time decreased from 18.88 ms to 1.52 ms. Total db file sequential read wait events dropped by 91 percent.

Figure 27. AWR reports comparison between baseline and FAST Cache-only tests

Rapid provisioning of PDBs

Test procedure

Customers often need to rapidly provision databases for operational, unit, or system test environments. The objective here was to demonstrate near-instantaneous provisioning of a PDB clone using an EMC SnapSure checkpoint and then, using SQL, plugging the PDB into a CDB. The following procedure is an example of how to use the CREATE PLUGGABLE DATABASE statement to quickly provision a test database based on a checkpoint of the database file system created by EMC VNX SnapSure.

1. Install Oracle 12.1 database software in the test environment.

2. Run the command to enable dNFS in the test/development environment and create a dNFS configuration file, as shown in the Oracle dNFS client configuration section.

3. Using Unisphere, create a SnapSure checkpoint of the PDB_GOLD file system.

Figure 28. Create Checkpoint example

Note: If you are using SnapSure to create user checkpoints of the primary file system, place the SavVol on separate disks when possible and avoid enabling FAST Cache on the SavVol. The Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance white paper provides details.

4. Mount the SnapSure checkpoint to the target virtual database server with the existing CDB.

5. After the file system of the new checkpoint is mounted to a virtual machine with an existing CDB configuration, issue the following commands:

a. Generate the manifest file for the pluggable database PDB_GOLD (note that this is a clone of the production database):

    exec dbms_pdb.describe(pdb_descr_file=>'/fra/goldpdb/goldpdb.xml', pdb_name=>'PDB_GOLD')

b. Create the database using both the manifest file and the checkpoint (host mount point /clonedb/pdb1_ckpt3), and then open the pluggable database ready for use:

    create pluggable database pdb1_dev3 as clone using '/fra/goldpdb/goldpdb.xml'
      source_file_name_convert=('/fra/goldpdb/','/clonedb/pdb1_ckpt3/')
      nocopy tempfile reuse;

Test results

When the cloned database was up and running, we performed read and write activities on the test database. As the workload ran, storage consumption of the cloned database grew at the speed at which data was modified. To verify this newly created pluggable database, we used Swingbench to generate the workload, as shown in Figure 29.

Figure 29. Swingbench workload against pdb1_dev3
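As a simple check that is not part of the original procedure, the state of the newly plugged-in clone can be confirmed from the CDB root; the PDB name matches the example in step 5b:

    -- Open the clone (if a startup trigger has not already done so) and confirm its state
    alter pluggable database pdb1_dev3 open;
    select name, open_mode from v$pdbs;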

Conclusion

Summary

VMware vSphere 5.5 enables efficient use of the server hardware (RAC database servers) by scaling the virtual environment as follows:

Larger virtual machines: Virtual machines can grow to support the most advanced applications, and can now have up to 64 virtual CPUs (vCPUs) and 1 TB of virtual RAM (vRAM).

Oracle RAC 12c can easily scale out the nodes to increase the resources (CPU and memory) of the database server as application needs grow, enabling customers to take an incremental approach to address increases in the Oracle workload.

Oracle dNFS client technologies perform concurrent I/O that bypasses any OS caches and write-order locks. dNFS also performs asynchronous I/O, which allows processing to continue while the I/O request is submitted and processed. Performance is further improved by load balancing across multiple network interfaces (if available).

EMC FAST Suite, which includes FAST Cache and FAST VP, is ideal for the Oracle database environment. FAST Cache and FAST VP complement each other, can boost storage performance, and can lower TCO when used together. FAST Cache can improve performance immediately for burst-prone Oracle data workloads, while FAST VP optimizes TCO by moving Oracle data to the appropriate storage tier, based on sustained data access and demands over time.

All use cases discussed in this paper show that, just by deploying a few flash drives and using the FAST Suite, users can significantly reduce the total number of drives required for an Oracle Database implementation. With its advanced data features, the VNX series not only reduces the initial cost of the deployment but also significantly reduces the complexity associated with day-to-day data management by automating the complex and time-consuming storage tiering process. Additionally, deploying NAS with a 10 GbE fabric on the VNX8000 (NFS, CIFS, and pNFS) delivers cost efficiencies with regard to infrastructure, people, and processes versus a block-deployed storage solution.

The VNX8000 platform provides consistent, optimal performance scalability for the Oracle workload. By deploying an Oracle RAC database on a VNX8000 array, performance scales in a near-linear manner when additional storage network paths and RAC nodes are introduced, providing higher throughput based on the configuration in this solution.

With the combination of the EMC SnapSure checkpoint and the Oracle CREATE PLUGGABLE DATABASE statement, Oracle DBAs can replicate their production environments for test and development purposes in less than 10 minutes. This solution offers near-immediate access to the newly provisioned database.