EMC Business Continuity for Oracle Database 11g


1 EMC Business Continuity for Oracle Database 11g Enabled by EMC Celerra using DNFS and NFS

2 Copyright 2010 EMC Corporation. All rights reserved. Published May, 2010 EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated. All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly. EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute. No warranty of system performance or price/performance is expressed or implied in this document. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners. Part number: H6949

Table of Contents

Chapter 1: About this Document
    Overview
    Audience and purpose
    Scope
    Business challenge
    Technology solution
    Migration
    Reference Architecture
    Validated environment profile
    Hardware and software resources
    Prerequisites and supporting documentation
    Terminology
    Typographic conventions

Chapter 2: Storage Design
    Overview
    Concepts
    Best practices
    Data Mover parameters setup
    Data Mover failover
    Storage design layout

Chapter 3: File System Layout
    Overview

Chapter 4: Application Design
    Considerations
    Application design layout
    Oracle 11g DNFS
    Memory configuration for Oracle 11g
    HugePages

Chapter 5: Network Design
    Concepts
    Best practices
    Network layout
    Virtual LANs
    Jumbo frames
    Ethernet trunking and link aggregation
    Public and private networks
    Oracle RAC 11g server network architecture

Chapter 6: Installation and Configuration
    Overview
    Task 1: Build the network infrastructure
    Task 2: Set up and configure NAS for Celerra
    Task 3: Set up and configure database servers
    Task 4: Configure NFS client options
    Task 5: Install Oracle Database 11g
    Task 6: Configure database server memory options
    Task 7: Tune HugePages
    Task 8: Set database initialization parameters
    Task 9: Configure the Oracle DNFS client
    Task 10: Verify that DNFS has been enabled (Oracle 11g only)
    Task 11: Configure Oracle Database control files and logfiles
    Task 12: Enable passwordless authentication using SSH
    Task 13: Set up and configure Celerra SnapSure
    Task 14: Set up the virtualized utility servers

Chapter 7: Testing and Validation
    Overview
    Section A: Store solution
    Section B: Basic backup solution
    Section C: Advanced backup solution
    Section D: Test/Dev solution
    Section E: Backup server solution
    Section F: Online migration solution

Chapter 8: Virtualization
    Overview
    Advantages of virtualization
    Considerations
    VMware infrastructure
    Virtualization best practices
    VMware ESX server
    VMware and NFS

Chapter 9: Backup and Restore
    Overview
    Section A: Backup and restore concepts
    Section B: Backup and recovery strategy
    Section C: Logical backup and restore using EMC SnapSure
    Section D: Comparison - EMC SnapSure and Flashback Database
    Section E: Physical backup and restore using Oracle RMAN

Chapter 10: Data Protection and Replication
    Overview
    EMC SnapSure and Oracle Data Guard

Chapter 11: Test/Dev
    Overview
    Database cloning
    Configuring Oracle to facilitate cloning
    Creating a test/dev system using Celerra SnapSure writeable checkpoints

Chapter 12: Conclusion
    Overview

Chapter 13: Supporting Information
    Overview
    Managing and monitoring Celerra
    Initializing logical storage: scripts and outputs
    Setting up iterative logical storage: scripts and outputs
    Test/dev using Celerra SnapSure: scripts and outputs

6 Chapter 1: About this Document Overview Introduction EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases give EMC insight into the challenges currently facing its customers. This document summarizes a series of implementation procedures and best practices that were discovered, validated, or otherwise encountered during the validation of a solution for Oracle Database 11g using the EMC Celerra unified storage platform and EMC SnapSure over kernel NFS (KNFS) or over Oracle Direct NFS (DNFS).

7 Chapter 1: About this Document Overview Audience and purpose

Audience
The intended audience for this document is:
- Internal EMC personnel
- EMC partners
- Customers

Purpose
The purposes of this solution are as follows:
- Demonstrate the functionality, performance, resiliency, and scalability of an Oracle software stack that is physically booted, and that uses either the Oracle Direct NFS (DNFS) protocol or the kernel NFS (KNFS) protocol to access the storage elements for the Oracle database.
- Demonstrate how to migrate a production database from an FCP/ASM mount to an NFS mount, and vice versa, with minimal impact on performance and with no system downtime.
- Demonstrate the use of Celerra SnapSure to:
    - Carry out physical backup and restore of an Oracle 11g production database while offloading all performance impacts of the backup operation from the production server, and demonstrate significant performance and manageability benefits in comparison to normal Oracle Recovery Manager (RMAN) backup and recovery.
    - Create a test/dev system by cloning a running production database with minimal performance impact and no downtime using writeable checkpoints. The solution includes bringing up the clone as RAC on physically booted servers and as a single instance on a virtualized target.
- Demonstrate the use of EMC SnapSure to enable the physical backup and recovery of an Oracle 11g production database while offloading all performance impacts of the backup operation from the production server. This demonstrates significant performance and manageability benefits in comparison to normal Oracle Recovery Manager (RMAN) backup and recovery.
- Demonstrate the significant performance, manageability, and efficiency benefits of Oracle DNFS over kernel NFS (KNFS).
- Provide the capability to clone a running production database with minimal performance impact and no downtime using SnapSure writeable checkpoints. The solution covers bringing up the clone database as Real Application Cluster (RAC) on physically booted servers and also as a single instance on a virtualized target.
- Support disaster recovery using RMAN and Oracle Data Guard.
- Demonstrate the significant increase in performance and other advantages of Enterprise Flash Drives (EFDs) when compared to Fibre Channel (FC) drives.
- Use VMware to create virtualized database servers that act as targets for

8 Chapter 1: About this Document Overview backup and restore, disaster recovery, and test/dev.

Scope

Overview
All database objects are stored on an NFS mount. In the case of Oracle RAC 11g, datafiles, tempfiles, control files, online redo logfiles, and archive log files are accessed using the DNFS protocol. Two sites connected by a WAN are used in the solution: one site is used for production; the other site is used as a disaster recovery target. Oracle RAC 11g for x86-64 is run on Red Hat Enterprise Linux or on Oracle Enterprise Linux.

Core solution components
The following table describes the core solution components that are included in this solution:

Component: Scale-up OLTP
Description: Using an industry-standard OLTP benchmark against a single database instance, comprehensive performance testing is performed to validate the maximum achievable performance using the solution stack of hardware and software.

Component: Resiliency
Description: The purpose of resiliency testing is to validate the fault tolerance and high-availability features of the hardware and software stack. Faults are inserted into the configuration at various layers in the solution stack. Some of the layers where fault tolerance is tested include: Oracle RAC node, Oracle RAC node interconnect port, storage processors, and Data Movers.

Functionality solution components
The following table describes the functionality solution components that are included in this solution:

Component: Basic Backup
Description: Backup and recovery using Oracle RMAN, the built-in backup and recovery tool provided by Oracle.

Component: Advanced Backup
Description: Backup and recovery using EMC value-added software or hardware. In this solution, EMC SnapSure is used to provide Advanced Backup functionality.

Component: Basic Protect
Description: Disaster recovery using Oracle Data Guard, Oracle's built-in remote replication tool.

Component: Advanced Protect
Description: Disaster recovery using EMC value-added software and hardware. In this solution the following are used to provide Advanced Protect functionality: EMC RecoverPoint with CLARiiON splitters, and EMC MirrorView/A over iSCSI.

9 Chapter 1: About this Document Overview

Component: Test/dev
Description: A running production OLTP database is cloned with minimal, if any, performance impact on the production server, as well as no downtime. The resulting dataset is provisioned on another server for use in testing and development.

Component: Migration
Description: An online production Oracle database that is mounted over FCP/ASM is migrated to a target database mounted using NFS, with no downtime and minimal performance impact on the production database.

Business challenge

Business challenges for midsize enterprises
Midsize enterprises face the same challenges as their larger counterparts when it comes to managing database environments. These challenges include:
- Rising costs
- Control over resource utilization and scaling
- Lack of sufficient IT resources to deploy, manage, and maintain complex environments at the departmental level
- The need to reduce power, cooling, and space requirements
Unlike large enterprises, midsize enterprises are constrained by smaller budgets and cannot afford a custom, one-off solution. This makes the process of creating a database solution for midsize enterprises even more challenging than for large enterprises.

Technology solution

Solution for midsize enterprises
This solution demonstrates how organizations can:
- Maximize the use of the database-server CPU, memory, and I/O channels by offloading the performance impacts of backup, restore, and recovery operations from the production server.
- Use DNFS to:
    - Simplify network setup and management by taking advantage of DNFS automated management of tasks, such as setting up network subnets, LACP bonding, and tuning of Linux NFS parameters.
    - Increase the capacity and throughput of their existing infrastructure. Transactions per second and user load are both higher with DNFS than with KNFS, enabling more output from the same infrastructure.
- Migrate an online production database from one protocol to another, that is,

10 Chapter 1: About this Document Overview from FCP to NFS and vice versa.
- Use Celerra SnapSure to free up the database server's CPU, memory, and I/O channels from the effects of operations relating to backup, restore, and recovery. SnapSure writeable checkpoints also help in creating test/development systems without any impact on the production environment.
- Ensure business continuity by using Celerra and Oracle Data Guard to provide disaster recovery capability.

Overview
All database objects are stored on an NFS mount. In the case of Oracle RAC 11g, datafiles, tempfiles, control files, online redo logfiles, and archive log files are accessed using the DNFS protocol. Two sites connected by a WAN are used in the solution: one site is used for production and the other site is used as a disaster recovery target. Oracle RAC 11g for x86-64 is run on Red Hat Enterprise Linux or on Oracle Enterprise Linux. The solution also includes virtualized servers for use as Test/Dev, Basic Protect, and Advanced Protect targets. Virtualization of the test/dev and disaster recovery (DR) target servers is supported using VMware ESX Server. A RAC-to-RAC cloning test/dev solution component is included that consists of four physically booted target database servers.

Production site
The production site consists of:
- A physically booted four-node Oracle RAC 11g cluster.
- A Celerra NS-480 connected to the Oracle RAC 11g servers through the production storage network. EMC SnapSure is used to provide an advanced backup solution, and SnapSure writeable checkpoints are used to create a test/dev target database. The Celerra is used for storage and consolidation. Network load balancing and high availability are managed by the Oracle DNFS client.
- A physically booted, RAC-to-RAC cloning test/dev solution component. This consists of a four-node physically booted Oracle 11g RAC cluster that is used as a test/dev target.
The Oracle RAC 11g servers are connected to the client, RAC interconnect, WAN, and production storage networks.

Disaster recovery target site
The disaster recovery site consists of:
- A virtualized single-instance Oracle Database 11g server that is used as the disaster recovery target for Basic Protect and Advanced Protect. The virtualized single-instance server is connected to the client, WAN, and target storage networks through virtualized connections on the virtualization server.
- A Celerra that is connected to a VMware ESX server through both the

11 Chapter 1: About this Document Overview production and the disaster recovery storage networks. The Oracle Database 11g single-instance target server accesses these networks through a virtualized switch on the ESX server.

Storage layout
The following table describes how each Oracle file type and database object is stored for this solution:

What: Oracle datafiles, Oracle tempfiles, Oracle online redo logfiles, Oracle controlfiles, Voting disk, OCR files
Where: FC disks
File-system type: RAID-protected NFS file system

What: Archived logfiles, Flashback recovery area, Backup target
Where: SATA II disks
File-system type: RAID-protected NFS file system

For implementations using Oracle Database 11g, all files are accessed using DNFS. RAID-protected NFS file systems are designed to satisfy the I/O demands of particular database objects. For example, RAID 5 is sometimes used for the datafiles and tempfiles, but RAID 1 is always used for the online redo logfiles. Two separate RAID configurations are supported. For more information, refer to EMC Solutions for Oracle Database 11g for Midsize Enterprises Physically Booted Solutions with EMC Celerra NS40 Unified Storage Platform - Reference Architecture. Oracle datafiles and online redo logfiles reside on their own NFS file systems. Online redo logfiles are mirrored across two different file systems using Oracle software multiplexing. Three NFS file systems are used: one file system for datafiles and tempfiles, and two file systems for online redo logfiles. Oracle control files are mirrored across the online redo logfile NFS file systems.

12 Chapter 1: About this Document Overview Network architecture The design implements the following physical connections: TCP/IP provides network connectivity. DNFS provides file system semantics for Oracle RAC 11g. Client virtual machines run on a VMware ESX server. They are connected to a client network. Client, RAC interconnect, and redundant TCP/IP storage networks consist of dedicated network switches and virtual local area networks (VLANs). The RAC interconnect and storage networks consist of trunked IP connections to balance and distribute network I/O. Jumbo frames are enabled on these networks. 12

13 Chapter 1: About this Document Overview Migration Introduction The ability to migrate an Oracle database across storage protocols is a frequent customer request. The EMC Global Solutions group has tested and validated a solution component for migrating an online production Oracle database, which is mounted over FCP/ASM to a target database mounted using NFS, as well as migrating from NFS back to FCP/ASM. This is performed with minimal performance impact on the production database and no downtime. Migration diagram The following illustration is a high-level view of the migration component. 13

14 Chapter 1: About this Document Overview Reference Architecture Corresponding Reference Architecture This solution has a corresponding Reference Architecture document that is available on Powerlink, EMC.com, and EMC KB.WIKI. Refer to EMC Business Continuity for Oracle Database 11g - Enabled by EMC Celerra using DNFS and NFS Reference Architecture for details. If you do not have access to this content, contact your EMC representative. 14

15 Chapter 1: About this Document Overview Reference Architecture diagram The following diagram depicts the overall physical architecture of the solution. Validated environment profile Environment profile and test results For information on the validated environment profile and performance results, refer to the testing summary results contained in this document. 15

16 Chapter 1: About this Document Overview Hardware and software resources

Hardware
The hardware used to validate the solution is listed below.

Equipment: EMC Celerra unified storage platforms (includes an EMC CLARiiON CX4 back-end storage array)
Quantity: 2
Configuration: 2 Data Movers; 4 GbE network connections per Data Mover; 2 or 3 FC shelves; 1 SATA shelf; 30 or GB FC disks (depending on configuration); GB SATA disks; 1 Control Station; 2 storage processors; DART version

Equipment: Gigabit Ethernet switches
Quantity: 5
Configuration: 24 ports per switch

Equipment: Database servers (Oracle RAC 11g servers)
Configuration: GHz Intel Pentium 4 quad-core processors; 24 GB of RAM; GB 15k internal SCSI disks; 2 onboard GbE Ethernet NICs; 2 additional Intel PRO/1000 PT quad-port GbE Ethernet NICs; 2 SANblade QLE2462-E-SP 4 Gb/s dual-port FC HBAs (4 ports in total)

Equipment: Virtualization server (VMware ESX server)
Configuration: GHz AMD Opteron quad-core processors; 32 GB of RAM; GB 15k internal SCSI disks; 2 onboard GbE Ethernet NICs; 3 additional Intel PRO/1000 PT quad-port GbE Ethernet NICs; 2 SANblade QLE2462-E-SP 4 Gb/s dual-port FC HBAs (4 ports in total)

17 Chapter 1: About this Document Overview Software
The software used to validate the solution is listed below.

Software and version:
- Oracle Enterprise Linux 4.7
- VMware vSphere 4
- Microsoft Windows Server 2003 Standard Edition
- Oracle RAC Enterprise Edition 11g ( )
- Oracle Database Standard Edition 11g ( )
- Quest Benchmark Factory for Databases
- EMC Celerra Manager Advanced Edition 5.6
- EMC Navisphere Agent
- EMC Replication Manager
- EMC PowerPath (build 157)
- EMC FLARE
- EMC DART
- EMC Navisphere Management 6.28

Prerequisites and supporting documentation

Technology
It is assumed that the reader has a general knowledge of:
- EMC Celerra
- Oracle Database (including RMAN and Data Guard)
- EMC SnapSure
- VMware ESX Server

Supporting documents
The following documents, located on Powerlink.com, provide additional, relevant information. Access to these documents is based on your login credentials. If you do not have access to the following content, contact your EMC representative.
- CLARiiON CX4 series documentation
- EMC Unified Storage for Oracle Database 11g Physically Booted Solution Enabled by EMC Celerra and Linux using NFS and DNFS - Reference Architecture
- SAP solutions documentation

18 Chapter 1: About this Document Overview Third-party documents
The following resources have more information about Oracle:
- Oracle Technology Network
- MetaLink (Oracle support)

Terminology

Terms and definitions
This section defines the terms used in this document.

Solution: A solution is a complete stack of hardware and software upon which a customer would choose to run their entire business or business function. A solution includes database server hardware and software, IP networks, storage networks, and storage array hardware and software, among other components.

Solution attribute: A solution attribute addresses the entire solution stack, but does so in a way relating to a discrete area of testing. For example, performance testing is a solution attribute.

Solution component: A solution component addresses a subset of the solution stack that consists of a discrete set of hardware or software, and focuses on a single IT function. For example, backup and recovery, and disaster recovery are solution components. A solution component can be either basic or advanced.

Core solution component: A core solution component addresses the entire solution stack, but does so in a way relating to a discrete area of testing. For example, performance testing is a core solution component.

Functionality solution component: A functionality solution component addresses a subset of the solution stack that consists of a discrete set of hardware or software, and focuses on a single IT function. For example, backup and recovery, and disaster recovery are both functionality solution components. A functionality solution component can be either basic or advanced.

Basic solution component: A basic solution component uses only the features and functionality provided by the Oracle stack. For example, RMAN is used for backup and recovery, and Data Guard is used for disaster recovery.

Advanced solution component: An advanced solution component uses the features and functionality of EMC hardware or software. For example, EMC SnapView is used for backup and recovery, and EMC MirrorView is used for disaster recovery.

Physically-booted solution: A configuration in which the production database servers are directly booted off a locally attached hard disk without the use of a hypervisor such as VMware or Oracle VM. Utility servers (such as the test/dev target or disaster recovery target) may still be virtualized in a physically-booted solution.

Virtualized solution: A configuration in which the production database servers are virtualized using a hypervisor technology such as VMware or Oracle VM.

19 Chapter 1: About this Document Overview

Scale-up: The use of a clustered or single-image database server configuration. Scaling is provided by increasing the number of CPUs in the database server (in the case of a single-instance configuration) or by adding nodes to the cluster (in the case of a clustered configuration). Scale-up assumes that all customers of the database will be able to access all database data.

Resiliency: Testing that is designed to validate the ability of a configuration to withstand faults at various layers. The layers that are tested include: network switch, database server storage network port, storage array network port, database server cluster node, and storage processor.

Test/dev: The use of storage layer replication (such as snapshots and clones) to provide an instantaneous, writeable copy of a running production database with no downtime on the production database server and with minimal, if any, performance impact on the production server.

Advanced Backup and Recovery: A solution component that provides backup and recovery functionality through the storage layer using specialized hardware or software. Advanced Backup and Recovery has the following benefits:
- Offloads the database server's CPUs from the I/O and processing requirements of the backup and recovery operations
- Superior Mean Time to Recovery (MTTR) through the use of virtual storage layer replication (commonly referred to as snapshots)

Basic Backup and Recovery: A solution component that provides backup and recovery functionality through the operating system and database server software stack. Basic Backup and Recovery uses the database server's CPUs for all I/O and processing of backup and recovery operations.

Advanced Protect: A solution component that provides disaster recovery functionality through the storage layer using specialized hardware or software. Advanced Protect has the following benefits:
- Offloads the database server's CPUs from the I/O and processing requirements of the disaster recovery operations
- Superior failover and failback capabilities
- Reduces the software required to be installed at the disaster recovery target because of the use of consistency technology

Basic Protect: A solution component that provides disaster recovery functionality through the operating system and database server software stack. Basic Protect uses the database server's CPUs for all I/O and processing of disaster recovery operations.

Direct NFS (DNFS): A network storage protocol in which the NFS client is embedded in the Oracle 11g database kernel.

Kernel NFS (KNFS): A network storage protocol in which the NFS client is embedded in the operating system kernel.

High availability: The use of specialized hardware or software technology to reduce both planned and unplanned downtime.

20 Chapter 1: About this Document Overview

Fault tolerance: The use of specialized hardware or software technology to eliminate both planned and unplanned downtime.

Enterprise flash drive (EFD): A drive that stores data using Flash memory and contains no moving parts.

Serial advanced technology-attachment (SATA) drive: SATA is a newer standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, while Integrated Drive Electronics (IDE) hard drives use parallel signaling.

Migration: The ability to transfer a running production database from one environment to another, for example, from FCP/ASM to KNFS/DNFS.

21 Chapter 1: About this Document Overview Typographic conventions Typographic conventions In this document, many steps are listed in the form of terminal output. This is referred to as a code listing. For example: Note the following about code listings: Commands you type are shown in bold. For lengthy commands the backslash \ character is used to show line continuation. While this is a common UNIX convention, it may not work in all cases. You should enter the command on one line. The use of ellipses ( ) in the output indicates that lengthy output was deleted for brevity. If a Celerra or Linux command is referred to in text it is indicated in bold and lowercase, like this: the fs_copy command. If a SQL or RMAN command is referred to in text, it is indicated in uppercase, like this: The ALTER DATABASE RENAME FILE command. A special font is not used in either case. Migration Customers often request the ability to migrate a virtualized Oracle Database across storage protocols. In response to this, the Oracle Consulting (CSV) group has validated that customers who have an Oracle Database can migrate data from: An FCP/ASM to an NFS-mounted file system An NFS-mounted file system to an FCP/ASM Detailed information regarding migration is found in the Oracle RAC/Database 11g Cross-Protocol Migration Technical Notes. 21

22 Chapter 2: Storage Design Overview

Introduction to Storage Design
The storage design layout instructions presented in this chapter apply to the specific components used during the development of this solution.

Concepts

Setting up NAS storage
To set up NAS storage, the following steps must be carried out:
Step 1: Create RAID groups.
Step 2: Allocate hot spares.
Step 3: Create user-defined pools.
Step 4: Create file systems and file system exports.

High availability and failover
The EMC Celerra has built-in high-availability (HA) features. These HA features allow the Celerra to survive various failures without a loss of access to the Oracle database. These HA features protect against the following:
- Data Mover failure
- Network port failure
- Power loss affecting a single circuit connected to the storage array
- Storage processor failure
- Disk failure

Automatic Volume Management
Automatic Volume Management (AVM) is a way for the user to configure Celerra volumes from RAID groups. User-defined pools allow the user to control exactly which RAID groups are used for a given volume. With AVM user-defined pools, a pool can be automatically expanded if additional RAID groups are added later. This configuration option provides the greatest flexibility and control for configuring Celerra volumes. AVM with user-defined pools was used to configure all volumes in the solution.
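The four setup steps above map onto a short sequence of Control Station commands. The listing below is an illustrative sketch only, assuming the RAID groups and hot spares have already been created on the CLARiiON back end: the volume, pool, and file-system names are hypothetical placeholders, and the exact option syntax varies by DART release, so verify each command against the Celerra man pages (or use Celerra Manager) before running it.

# Build a striped volume over existing disk volumes, then wrap it in a user-defined pool
# (d7-d10 and all names shown here are hypothetical)
nas_volume -name datastripe -create -Stripe 32768 d7,d8,d9,d10
nas_pool -create -name datapool -volumes datastripe

# Create a file system in the pool, mount it on a Data Mover, and export it over NFS
nas_fs -name datafs -create size=200G pool=datapool
server_mountpoint server_2 -create /datafs
server_mount server_2 -option rw,noprefetch datafs /datafs
server_export server_2 -Protocol nfs -option root=<db_server_IPs> /datafs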

23 Chapter 2: Storage Design Best practices

Disk drives
The following are the general recommendations for disk drives:
- Drives with higher revolutions per minute (rpm) provide higher overall random-access throughput and shorter response times than drives with slower rpm. For optimum performance, higher-rpm drives are recommended for datafiles and tempfiles as well as online redo logfiles.
- Because of significantly better performance, Fibre Channel drives are always recommended for storing datafiles, tempfiles, and online redo log files.
- Serial Advanced Technology-Attached (SATA II) drives have slower response and rotational speeds, and moderate performance with random I/O. However, they are less expensive than Fibre Channel drives of the same or similar capacity. SATA II drives are frequently the best option for storing archived redo logs and the flashback recovery area. In the event of high performance requirements for backup and recovery, Fibre Channel drives can also be used for this purpose.

Enterprise Flash Drives (EFDs)
To determine if EFDs will provide improved performance in your environment, you need to identify which set(s) of datafiles have low cache read-hit rates and exhibit random I/O patterns. These random workloads are usually the best candidates for migration to EFDs and tend to exhibit the highest performance gains. Pure sequential workloads like redo logs will benefit as well, although to a lesser degree than random workloads.

24 Chapter 2: Storage Design RAID types and file types
The following table describes the recommendations for RAID types corresponding to Oracle file types:

Datafiles/tempfiles: RAID 5/EFD - Possible (apply tuning) (1); RAID 10/FC - Recommended; RAID 5/FC - Recommended; RAID 5/SATA II - Avoid
Control files: RAID 5/EFD - Avoid; RAID 10/FC - Recommended; RAID 5/FC - Recommended; RAID 5/SATA II - Avoid
Online redo logs: RAID 5/EFD - Avoid; RAID 10/FC - Recommended; RAID 5/FC - Avoid; RAID 5/SATA II - Avoid
Archived logs: RAID 5/EFD - Avoid; RAID 10/FC - Possible (apply tuning) (2); RAID 5/FC - Possible (apply tuning) (2); RAID 5/SATA II - Recommended
Flashback recovery area: RAID 5/EFD - Avoid; RAID 10/FC - OK; RAID 5/FC - OK; RAID 5/SATA II - Recommended
OCR file/voting disk: RAID 5/EFD - Avoid; RAID 10/FC - OK; RAID 5/FC - OK; RAID 5/SATA II - Avoid

(1) The decision to use EFDs for datafiles and tempfiles must be driven by the I/O requirements for specific datafiles.
(2) The use of FC disks for archived logs is fairly rare. However, if many archived logs are being created, and the I/O requirements for archived logs exceed a reasonable number of SATA II disks, this may be a more cost-effective solution.

Tempfiles, undo, and sequential table or index scans
In some cases, if an application creates a large amount of temp activity, placing your tempfiles on RAID 10 devices may be faster due to RAID 10's superior sequential I/O performance. This is also true for undo. Further, an application that performs many full table scans or index scans may benefit from having these datafiles placed on a separate RAID 10 device.

Online redo logfiles
Online redo log files should be put on RAID 1 or RAID 10 devices. You should not use RAID 5, because the sequential write performance of distributed parity (RAID 5) is not as high as that of mirroring (RAID 1). RAID 1 or RAID 10 provides the best data protection; protection of online redo log files is critical for Oracle recoverability.

OCR files and voting disk files
You should use FC disks for OCR files and voting disk files; unavailability of these files for any significant period of time (due to disk I/O performance issues) may cause one or more of the RAC nodes to reboot and fence itself off from the cluster. The RAID group layout graphics in Chapter 2: Storage Design > Storage design layout show two different storage configurations that can be used for Oracle RAC 11g databases on a Celerra. That section can help you to determine the best configuration to meet your performance needs.

25 Chapter 2: Storage Design Shelf configuration
The most common error when planning storage is designing for capacity rather than for performance. The single most important storage parameter for performance is disk latency. High disk latency is synonymous with slower performance; low disk counts lead to increased disk latency. The recommendation is a configuration that produces an average database I/O latency (the Oracle measurement db file sequential read) of less than or equal to 20 ms. In today's disk technology, the increase in storage capacity of a disk drive has outpaced the increase in performance. Therefore, performance capacity, not disk storage capacity, must be the standard to use when planning an Oracle database's storage configuration. The number of disks that should be used is determined first by the I/O requirements and then by capacity. This is especially true for datafiles and tempfiles. EFDs can dramatically reduce the number of disks required to perform the I/O required by the workload. Consult with your EMC sales representative for specific sizing recommendations for your workload.

Stripe size
EMC recommends a stripe size of 32 KB for all types of database workloads. The default stripe size for all the file systems on FC shelves (redo logs and data) should be 32 KB. Similarly, the recommended stripe size for the file systems on SATA II shelves (archive and flash) should be 256 KB. The default stripe size for AVM is 32 KB. If you decide to use AVM, you should preserve this setting for optimal performance. EMC recommends using AVM with user-defined storage pools.

Control Station security
The Control Station is based on a variant of Red Hat Linux. Therefore, it is possible to install any publicly available system tools that your organization may require for additional security.

26 Chapter 2: Storage Design Data Mover parameters setup

noprefetch
EMC recommends that you turn off file-system read prefetching for an online transaction processing (OLTP) workload. Leave it on for a Decision Support System (DSS) workload. Prefetch will waste I/Os in an OLTP environment, since few, if any, sequential I/Os are performed. In a DSS environment, the opposite is true. To turn off the read prefetch mechanism for a file system, type:
$ server_mount <movername> -option <options>,noprefetch <fs_name> <mount_point>
For example:
$ server_mount server_3 -option rw,noprefetch ufs1 /ufs1

NFS thread count
EMC recommends that you use the default Network File System (NFS) thread count of 256 for optimal performance. Do not set this to a value lower than 32 or higher than 512. The Celerra Network Server Parameters Guide on Powerlink has more information about this parameter.

file.asyncthreshold
EMC recommends that you use the default value of 32 for the parameter file.asyncthreshold. This provides optimum performance for databases. The Celerra Network Server Parameters Guide on Powerlink has more information about this parameter.

Data Mover failover

High availability
The Data Mover failover capability is a key feature unique to the Celerra. This feature offers redundancy at the file-server level, allowing continuous data access. It also helps to build a fault-resilient RAC architecture.

Configuring failover
EMC recommends that you set up an auto-policy for the Data Mover, so that if a Data Mover fails, due to either hardware or software failure, the Control Station immediately fails the Data Mover over to its partner. The standby Data Mover assumes the faulted Data Mover's:
- Network identity: The IP and MAC addresses of all its NICs
- Storage identity: The file systems that the faulted Data Mover controlled
- Service identity: The shares and exports that the faulted Data Mover controlled

27 Chapter 2: Storage Design This ensures continuous, transparent file sharing for the database without requiring users to unmount and remount the file system. The NFS applications and NFS clients do not see any significant interruption in I/O.

Preconditions for failover
Data Mover failover occurs if any of these conditions exists:
- Failure (operation below the configured threshold) of both internal network interfaces, indicated by the lack of a heartbeat (Data Mover timeout)
- Power failure within the Data Mover (unlikely, as the Data Mover is typically wired into the same power supply as the entire array)
- Software panic due to an exception or memory error
- Data Mover hang

Events that do not cause failover
Data Mover failover does not occur under these conditions:
- Removing a Data Mover from its slot
- Manually rebooting a Data Mover

Manual failover
Because manually rebooting a Data Mover does not initiate a failover, EMC recommends that you initiate a manual failover before taking down a Data Mover for maintenance.
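As a concrete sketch of the failover policy described above, the commands below assign a standby Data Mover with an automatic policy and drive a manual failover and restore around a maintenance window. The command forms are an assumption based on the Celerra server_standby command family, and the Data Mover names are hypothetical; confirm the exact syntax against the man pages for your DART release.

# Assign server_3 as the standby for server_2 with an automatic failover policy (illustrative)
server_standby server_2 -create mover=server_3 -policy auto

# Manually fail server_2 over to its standby before maintenance, then restore it afterwards
server_standby server_2 -activate mover
server_standby server_2 -restore mover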

28 Chapter 2: Storage Design Storage design layout RAID group layout design Two sets of RAID and disk configurations have been tested. These are described below. RAID group layout: 3-FC shelf The RAID group layout for three-fc shelf RAID 5/RAID 1 AVM using user-defined storage pools is as follows: 28

29 Chapter 2: Storage Design RAID group layout: 2-FC shelf The RAID group layout for two-fc shelf RAID 5/RAID 1 AVM using user-defined storage pools is as follows: 29

30 Chapter 3: File System Layout Chapter 3: File System Layout Overview File system layout The file systems shown in the table below were created on AVM user-defined pools, exported on the Celerra, and mounted on the database servers. File system/export AVM user-defined pool Volumes /crs /datafs /log1fs /log2fs /archfs /flashfs /snapdatafs /snaplog1fs /snaplog2fs log1pool (user-defined storage pool created using log1stripe volume) datapool (user-defined storage pool created using datastripe volume) log1pool (user-defined storage pool created using log1stripe volume) log2pool (user-defined storage pool created using log2stripe volume) archpool (user-defined storage pool created using archstripe volume) flashpool (user-defined storage pool created using flashstripe volume) datapool (user-defined storage pool created using datastripe volume) log1pool (user-defined storage pool created using log1stripe volume) log2pool (user-defined storage pool created using log2stripe volume) /crsfs datastripe (metavol consisting of all available FC 4+1 RAID 5 groups) log1stripe (metavol using half of the RAID 1 groups) log2stripe (metavol using half of the RAID 1 groups) archstripe (metavol using the SATA 6+1 RAID 5 group) flashstripe (metavol using the SATA 6+1 RAID 5 group) snapdatafs (SnapSure writeable checkpoint of the datafs volume) snaplog1fs (SnapSure writeable checkpoint of the log1fs volume) snaplog2fs (SnapSure writeable checkpoint of the log2fs volume) 30

31 Chapter 4: Application Design Considerations

Heartbeat mechanisms
The synchronization services component (CSS) of Oracle Clusterware maintains two heartbeat mechanisms:
- The disk heartbeat to the voting disk
- The network heartbeat across the RAC interconnects that establishes and confirms valid node membership in the cluster
Both of these heartbeat mechanisms have an associated time-out value. For more information on the Oracle Clusterware MissCount and DiskTimeout parameters, see MetaLink Note UU. EMC recommends setting the disk heartbeat parameter disktimeout to 160 seconds. You should leave the network heartbeat parameter misscount at the default of 60 seconds.

Rationale
These settings ensure that the RAC nodes are not evicted when the active Data Mover fails over to its partner. The command to configure this option is:
$ORA_CRS_HOME/bin/crsctl set css disktimeout 160

Application design layout

Oracle Cluster Ready Services
Oracle Cluster Ready Services (CRS) are enabled on each of the Oracle RAC 11g servers. The servers operate in active/active mode to provide local protection against a server failure and to provide load balancing. CRS-required files (including the voting disk and the OCR file) can reside on NFS volumes provided that the required mount-point parameters are used. For more information on the mount-point parameters that are required for the Oracle Clusterware files, see Chapter 6: Installation and Configuration > Task 4: Configure NFS client options.

NFS client
In the case of Oracle RAC 11g, the NFS client is embedded in the database kernel: the Oracle DNFS protocol is used to connect to the Celerra storage array. DNFS runs over TCP/IP.

Oracle binary files
The Oracle RAC 11g binary files, including the Oracle CRS, are all installed on the

32 Chapter 4: Application Design database servers' local disks.

Stored on Celerra
Datafiles, online redo log files, archive log files, tempfiles, and CRS files reside on Celerra NFS file systems. These file systems are designed (in terms of the RAID level and number of disks used) to be appropriate for each type of file. The following table lists each file or activity type and indicates where it resides.

File or activity type: Database binary files
Location: Database server's local disk (or vmdk file for virtualized servers)

File or activity type: Datafiles, tempfiles
Location: /datafs

File or activity type: Online redo log files
Location: Mirrored across /log1fs and /log2fs

File or activity type: Archived log files
Location: /archfs

File or activity type: Flashback recovery area
Location: /flashfs

File or activity type: Control files
Location: Mirrored across /log1fs and /log2fs

File or activity type: CRS, OCR, and voting disk files
Location: /datafs
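Even with DNFS in use, the underlying file systems are still mounted through the operating system, and files that DNFS does not serve (the CRS, OCR, and voting disk files, for example) rely entirely on these kernel NFS mounts and their options; Task 4 in Chapter 6 covers the validated settings. The /etc/fstab fragment below is only a hedged illustration of the options commonly documented for Oracle on Linux NFS clients, with a hypothetical Data Mover hostname and hypothetical mount points.

# Illustrative /etc/fstab entries (hostname and paths are placeholders)
celerra-dm2:/datafs  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
celerra-dm2:/crsfs   /u02/crs      nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,noac       0 0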

33 Chapter 4: Application Design Oracle 11g DNFS

Overview of Oracle Direct NFS
Oracle 11g includes a new feature for storing Oracle datafiles on a NAS device, referred to as Direct NFS or DNFS. DNFS integrates the NFS client directly inside the database kernel instead of the operating system kernel. As part of this solution, the storage elements for Oracle RAC 11g were accessed using the DNFS protocol. DNFS is relatively easy to configure. It applies only to the storage of Oracle datafiles; redo log files, tempfiles, control files, and the like are not affected. You can attempt to configure the mount points where these files are stored to support DNFS, but this will have no impact. DNFS provides performance advantages over conventional Linux kernel NFS (KNFS) in that fewer context switches are required to perform an I/O. Because DNFS integrates the client NFS protocol into the Oracle kernel, all I/O calls can be made in user space, rather than requiring a context switch to kernel space. As a result, the CPU utilization associated with database server I/O is reduced.

Disadvantages of KNFS
I/O caching and performance characteristics vary between operating systems. This leads to varying NFS performance across different operating systems (for example, Linux versus Solaris) and across different releases of the same operating system (for example, RHEL 4.7 and RHEL 5.4). This in turn results in varying NFS performance across your implementations.

DNFS and Celerra
The Oracle RAC 11g and EMC Celerra DNFS solution enables a midsize enterprise to deploy an EMC NAS architecture with DNFS connectivity for its Oracle RAC 11g database applications, at lower cost and with reduced complexity than direct-attached storage (DAS) or a storage area network (SAN). The image below illustrates how DNFS can be used to deploy an Oracle 11g and EMC Celerra solution.

34 Chapter 4: Application Design DNFS performance advantages
The following table describes the performance advantages that can be gained by using DNFS.

Consistent performance: Consistent NFS performance is observed across all operating systems.

Improved caching and I/O management: The DNFS kernel is designed for improved caching and management of the I/O patterns that are typically experienced in database environments, that is, larger and more efficient reads/writes.

Asynchronous direct I/O: The DNFS kernel enables asynchronous direct I/O, which is typically the most efficient form of I/O for databases. Asynchronous direct I/O significantly improves database read/write performance by enabling I/O to continue while other requests are being submitted and processed.

Overcomes OS write locking: DNFS overcomes OS write locking, which can be inadequate in some operating systems and can cause I/O performance bottlenecks in others.

Reduced CPU and memory usage: Database server CPU and memory usage are reduced by eliminating the overhead of copying data between the OS memory cache and the database SGA cache.

Included in 11g: DNFS is included free of charge with Oracle 11g.

Enhanced data integrity: To ensure database integrity, immediate writes must be made to the database when requested. Operating system caching delays writes for efficiency reasons; this potentially compromises data integrity during failure scenarios. DNFS uses database caching techniques and asynchronous direct I/O to ensure almost immediate data writes, thus reducing data integrity risks.

Load balancing and high availability: Load balancing and high availability (HA) are managed internally within the DNFS client, rather than at the OS level. This greatly simplifies network setup in HA environments and reduces dependence on IT network administrators by eliminating the need to set up network subnets and bond ports (for example, LACP bonding). DNFS allows multiple parallel network paths/ports to be used for I/O between the database server and the IP storage array. (Four paths were used in the testing performed for this solution.) For efficiency and performance, these paths are managed and load balanced by the DNFS client, not by the operating system. The four paths should be configured in separate subnets for effective load balancing by DNFS.

Less tuning required: Oracle 11g DNFS requires little additional tuning, other than the tuning considerations necessary in any IP storage environment with Oracle. In an unchanging environment, once tuned, DNFS requires no ongoing maintenance.
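As a sketch of how that multipath behavior is declared, the oranfstab fragment below defines one Celerra server with four network paths, one per storage subnet, and the exported file systems it serves; every hostname, IP address, and mount point shown is a hypothetical placeholder. The DNFS client is typically enabled by linking the Oracle Disk Manager NFS library (libnfsodm11.so) in place of the default libodm11.so stub; Task 9 and Task 10 in Chapter 6 give the validated procedure for this solution.

# $ORACLE_HOME/dbs/oranfstab (illustrative; one server block per NAS device)
server: celerra1
path: 192.168.1.10
path: 192.168.2.10
path: 192.168.3.10
path: 192.168.4.10
export: /datafs  mount: /u02/oradata
export: /log1fs  mount: /u02/oralog1
export: /log2fs  mount: /u02/oralog2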

35 Chapter 4: Application Design Memory configuration for Oracle 11g

Memory configuration and performance
Memory configuration in Oracle 11g is one of the most challenging aspects of configuring the database server. If the memory is not configured properly, the performance of the database server will be very poor. If memory is configured incorrectly:
- The database server will be unstable.
- The database may not open at all; and if it does open, you may experience errors due to lack of shared pool space.
In an OLTP context, the size of the shared pool is frequently the limiting factor on database performance.

Automatic Memory Management
A new feature called Automatic Memory Management simplifies the memory configuration process in Oracle 11g 64-bit (Release 1). For example, in Oracle 10g, the user is required to set two parameters, SGA_TARGET and PGA_AGGREGATE_TARGET, so that Oracle can manage other memory-related configurations such as the buffer cache and shared pool. When using Oracle 11g-style Automatic Memory Management, the user does not set these SGA and PGA parameters. Instead, the following parameters are set:
- MEMORY_TARGET
- MEMORY_MAX_TARGET
Once these parameters are set, Oracle 11g can, in theory, handle all memory management issues, including both SGA and PGA memory. However, the Automatic Memory Management model in Oracle 11g 64-bit (Release 1) requires configuration of shared memory as a file system mounted under /dev/shm. This adds an additional management burden to the DBA/system administrator.

Effects of Automatic Memory Management on performance
Decreased database performance: We observed a significant decrease in performance when we enabled the Oracle 11g Automatic Memory Management feature.
Linux HugePages are not supported: Linux HugePages are not supported when the Automatic Memory Management feature is implemented. When Automatic Memory Management is enabled, the entire SGA memory should fit under /dev/shm and, as a result, HugePages are not used. Tuning HugePages increases the performance of the database significantly. It is EMC's opinion that the performance improvements of HugePages, plus the lack of a requirement for a /dev/shm file system, make the Oracle 11g automatic memory model a poor trade-off.
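To make that trade-off concrete, the pfile fragment below sketches the 10g-style memory settings that the recommendations which follow call for, with Automatic Memory Management disabled; the sizes shown are hypothetical placeholders, not values validated in this solution.

# Illustrative init.ora fragment: 10g-style memory management on Oracle 11g
memory_target=0              # setting MEMORY_TARGET to 0 disables Automatic Memory Management
memory_max_target=0
sga_max_size=12G             # hypothetical size; a fixed SGA can be backed entirely by HugePages
sga_target=12G
pga_aggregate_target=4G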

36 Chapter 4: Application Design EMC recommendations
To achieve optimal performance on Oracle 11g, EMC recommends the following:
- Disable the Automatic Memory Management feature.
- Use the 10g style of memory management on Oracle 11g.
The memory management configuration procedure is described in the previous section. This provides optimal performance and manageability per our testing.

HugePages
The Linux 2.6 kernel includes a feature called HugePages. This feature allows you to specify the number of physically contiguous large memory pages that will be allocated and pinned in RAM for shared memory segments like the Oracle System Global Area (SGA). The pre-allocated memory pages can only be used for shared memory and must be large enough to accommodate the entire SGA. HugePages can create a very significant performance improvement for Oracle RAC 11g database servers.

Warning
HugePages must be tuned carefully and set correctly. Unused HugePages can only be used for shared memory allocations, even if the system runs out of memory and starts swapping. Incorrectly configured HugePages settings may result in poor performance and may even make the machine unusable.

HugePages parameters
The HugePages parameters are stored in /etc/sysctl.conf. You can change the value of the HugePages parameters by editing the sysctl.conf file and rebooting the server. The HugePages parameters are:
- HugePages_Total: Total number of HugePages that are allocated for shared memory segments. (This is a tunable value; you must determine how to set it.)
- HugePages_Free: Number of HugePages that are not being used.
- Hugepagesize: Size of each HugePage.

Optimum values for HugePages parameters
The amount of memory allocated to HugePages must be large enough to accommodate the entire SGA: HugePages_Total x Hugepagesize = amount of memory allocated to HugePages. To avoid wasting memory resources, the value of HugePages_Free should be zero.
Note: The value of vm.nr_hugepages should be set to a value that is at least equal to kernel.shmmax/2048. When the database is started, HugePages_Free should

37 Chapter 4: Application Design show a value close to zero to reflect that memory is tuned. For more information on tuning HugePages, see Chapter 6: Installation and Configuration > Task 7: Tune HugePages. 37
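As a worked example of the sizing rule above (HugePages_Total x Hugepagesize must cover the entire SGA), assume a hypothetical 12 GB SGA and the default 2 MB Hugepagesize: 12 GB / 2 MB = 6,144 pages. The fragment below is a sketch under that assumption; Task 7 in Chapter 6 gives the validated procedure and values for this solution.

# /etc/sysctl.conf (illustrative values for a hypothetical 12 GB SGA)
vm.nr_hugepages = 6144             # 12 GB / 2 MB Hugepagesize
kernel.shmmax = 12884901888        # at least the SGA size in bytes (12 GB here)
# The oracle user's memlock limit (in /etc/security/limits.conf) typically also needs to cover this allocation.

# After a reboot, confirm the pool is allocated and, once the database is started, largely consumed:
grep Huge /proc/meminfo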

38 Chapter 5: Network Design Concepts

Jumbo frames
Maximum Transfer Unit (MTU) sizes of greater than 1,500 bytes are referred to as jumbo frames. Jumbo frames require Gigabit Ethernet across the entire network infrastructure: server, switches, and database servers.

VLAN
Virtual local area networks (VLANs) logically group devices that are on different network segments or sub-networks.

Trunking
TCP/IP provides the ability to establish redundant paths for sending I/O from one networked computer to another. This approach uses the link aggregation protocol, commonly referred to as trunking. Redundant paths facilitate high availability and load balancing for the networked connection.

Trunking device
A trunking device is a virtual device created using two or more network devices to achieve higher performance with load-balancing capability, and high availability with failover capability. With Ethernet trunking/link aggregation, packets traveling through the virtual device are distributed among the underlying devices, based on the source MAC address, to achieve higher aggregated bandwidth.

Best practices

Gigabit Ethernet
EMC recommends that you use Gigabit Ethernet for the RAC interconnects if RAC is used. If 10 GbE is available, that is even better.

Jumbo frames and the RAC interconnect
For Oracle RAC 11g installations, jumbo frames are recommended for the private RAC interconnect. This boosts throughput and may also lower the CPU utilization caused by the software overhead of the bonding devices. Jumbo frames increase the device MTU size to a larger value (typically 9,000 bytes).

VLANs
EMC recommends that you use VLANs to segment different types of traffic to

39 Chapter 5: Network Design Network layout specific subnets. This provides better throughput, manageability, application separation, high availability, and security. Network design for validated scenario Two sites are used, one for production, the other for disaster recovery target. A Celerra is located at each site. A WAN connects the two sites. TCP/IP provides network connectivity. DNFS provides file system semantics for Oracle RAC 11g. Client virtual machines run on a VMware ESX server. They are connected to a client network. Client, RAC interconnect, and redundant TCP/IP storage networks consist of dedicated network switches and virtual local area networks (VLANs). The RAC interconnect and storage networks consist of trunked IP connections to balance and distribute network I/O. Jumbo frames are enabled on these networks. The Oracle RAC 11g servers are connected to the client, RAC interconnect, WAN, and production storage networks. Virtual LANs Virtual LANs This solution uses three VLANs to segregate network traffic of different types. This improves throughput, manageability, application separation, high availability, and security. The following table describes the database server network port setup: VLAN ID Description CRS setting 1 Client network Public 2 RAC interconnect Private 3 Storage None (not used) Client VLAN The client VLAN supports connectivity between the physically booted Oracle RAC 11g servers, the virtualized Oracle Database 11g, and the client workstations. The client VLAN also supports connectivity between the Celerra and the client workstations to provide network file services to the clients. Control and management of these devices are also provided through the client network. RAC interconnect VLAN The RAC interconnect VLAN supports connectivity between the Oracle RAC 11g servers for network I/O required by Oracle CRS. Three network interface cards (NICs) are configured on each Oracle RAC 11g server to the RAC interconnect network. Link aggregation is configured on the servers to provide load balancing and 39

40 Chapter 5: Network Design port failover between the two ports for this network. Storage VLAN The storage VLAN uses the NFS protocol to provide connectivity between servers and storage. Each database server connected to the storage VLAN has two NICs dedicated to the storage VLAN. Link aggregation is configured on the servers to provide load balancing and port failover between the two ports. Note on DNFS For validating DNFS, link aggregation is removed. DNFS was validated using one-, two-, three-, and four-port configurations. Link aggregation is not required on DNFS because Oracle 11g internally manages load balancing and high availability. Redundant switches In addition to VLANs, separate redundant storage switches are used. The RAC interconnect connections are also on a dedicated switch. For real-world solution builds, it is recommended that these switches support Gigabit Ethernet (GbE) connections, jumbo frames, and port channeling. Jumbo frames Overview Jumbo frames are configured for three layers: Celerra Data Mover Oracle RAC 11g servers Switch Note Configuration steps for the switch are not covered here, as that is vendor-specific. Check your switch documentation for details. 40

41 Chapter 5: Network Design Celerra Data Mover
To configure jumbo frames on the Data Mover, execute the following command on the Control Station:
server_ifconfig server_2 int1 mtu=9000
Where: server_2 is the Data Mover and int1 is the interface.

Linux servers
To configure jumbo frames on a Linux server, execute the following command:
ifconfig eth0 mtu 9000
Alternatively, place the following statement in the network scripts in /etc/sysconfig/network-scripts:
MTU=9000

RAC interconnect
Jumbo frames should be configured for the storage and RAC interconnect networks of this solution to boost throughput, as well as possibly to lower the CPU utilization due to the software overhead of the bonding devices. Jumbo frames increase the device MTU size to a larger value (typically 9,000 bytes). Typical Oracle database environments transfer data in 8 KB and 32 KB block sizes, which require multiple 1,500-byte frames per database I/O when using an MTU size of 1,500. Using jumbo frames, the number of frames needed for every large I/O request can be reduced, and thus the host CPU load required to generate a large number of interrupts for each application I/O is reduced. The benefit of jumbo frames is primarily a complex function of the workload I/O sizes, network utilization, and Oracle database server CPU utilization, and so is not easy to predict. For information on using jumbo frames with the RAC interconnect, see the relevant MetaLink Note.

Verifying that jumbo frames are enabled
To test whether jumbo frames are enabled, use the following command:
ping -M do -s 8192 <target>
Where: target is the interface to be tested. Jumbo frames must be enabled on all layers of the network for this command to succeed.

42 Chapter 5: Network Design Ethernet trunking and link aggregation Trunking and link aggregation Two NICs on each Oracle RAC 11g server are used in the NFS connection, referred to previously as the storage network. The RAC interconnect network is trunked in a similar manner using three NICs. EMC recommends that you configure an Ethernet trunking interface with two Gigabit Ethernet ports to the same switch. Oracle 11g DNFS NICs used for storage connectivity are not bonded when using Oracle 11g DNFS. Bonding is not required because the Oracle DNFS protocol manages load balancing and high availability internally across the available ports. Enabling trunking on a Celerra Data Mover The following table describes how to enable trunking on a Data Mover: Step Action 1 Set up a two-port channel device, as follows: server_sysconfig server_2 -virtual -name lacp1 - create trk \ -option "cge0,cge1 protocol=lacp" 2 Assign an IP address to the logical device, as follows: server_ifconfig server_2 -create -Device lacp1 -name int1 \ -protocol IP x.x.x.x y.y.y.y z.z.z. Public and private networks Public and private networks Each node should have: One static IP address for the public network One static IP address for the private cluster interconnect The private interconnect should only be used by Oracle to transfer cluster manager and cache fusion related data. Although it is possible to use the public network for the RAC interconnect, this is not recommended as it may cause degraded database performance (reducing the amount of bandwidth for cache fusion and cluster manager traffic). Configuring virtual IP addresses The virtual IP addresses must be defined in either the /etc/hosts file or DNS for all RAC nodes and client nodes. The public virtual IP addresses will be configured automatically by Oracle when the Oracle Universal Installer is run, which starts 42

Oracle's Virtual Internet Protocol Configuration Assistant (vipca). All virtual IP addresses will be activated when the following command is run:

srvctl start nodeapps -n <node_name>

Where: node_name is the name of the RAC node whose virtual IP address/hostname will be configured in the client's tnsnames.ora file.
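As an illustration of the public, private, and virtual IP entries described above, the following is a hypothetical /etc/hosts fragment for a two-node cluster; all host names and addresses are invented for the example and are not the validated environment's values:

# Public network
10.0.1.11     rac1
10.0.1.12     rac2
# RAC interconnect (private)
192.168.10.11 rac1-priv
192.168.10.12 rac2-priv
# Virtual IP addresses used by clients (referenced in tnsnames.ora)
10.0.1.21     rac1-vip
10.0.1.22     rac2-vip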

Oracle RAC 11g server network architecture

Database server network interfaces
Each Oracle RAC 11g server has 10 network interfaces:
Two interfaces connect to the storage network, using a link aggregation trunk.
Three interfaces connect the server to the RAC interconnect network, enabling the heartbeat and other network I/O required by Oracle CRS.
One interface connects to the client network.
The remaining interfaces are unused, although the DNFS configuration uses up to four storage interfaces, as shown in the table below.

Oracle RAC 11g server network interfaces - DNFS
The following table lists each interface and describes its use for the Oracle 11g DNFS configuration.

Interface port ID   Description
eth0                Client network
eth1                Unused
eth2                Storage network
eth3                Storage network
eth4                Storage network
eth5                Storage network
eth6                Unused
eth7                RAC interconnect (trunked)
eth8                RAC interconnect (trunked)
eth9                RAC interconnect (trunked)
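For deployments where the operating system, rather than DNFS, manages the trunk, the RAC interconnect interfaces listed above (eth7 through eth9 in this example) are typically aggregated with the Linux bonding driver. The following is a hedged sketch of one common RHEL-style configuration; the bond name, device names, and addresses are hypothetical, and matching LACP configuration on the switch is assumed:

# /etc/modprobe.conf: load the bonding driver in 802.3ad (LACP) mode
alias bond1 bonding
options bond1 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond1: the aggregated interconnect interface
DEVICE=bond1
BOOTPROTO=static
IPADDR=192.168.10.11
NETMASK=255.255.255.0
ONBOOT=yes
MTU=9000

# /etc/sysconfig/network-scripts/ifcfg-eth7 (repeat for eth8 and eth9)
DEVICE=eth7
MASTER=bond1
SLAVE=yes
ONBOOT=yes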

45 Chapter 6: Installation and Configuration Chapter 6: Installation and Configuration Overview Introduction This chapter provides procedures and guidelines for installing and configuring the components that make up the validated solution scenario. Scope The installation and configuration instructions presented in this chapter apply to the specific revision levels of components used during the development of this solution. Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate installation and configuration documentation for the revision levels of the hardware and software components as planned in the solution. Version-specific release notes are especially important. 45

46 Chapter 6: Installation and Configuration Task 1: Build the network infrastructure Network infrastructure For details on building a network infrastructure, see Chapter 5: Network Design > Network layout. Task 2: Set up and configure NAS for Celerra Configure NAS and manage Celerra For details on configuring NAS and managing Celerra, see Supporting Information > Managing and monitoring Celerra. Task 3: Set up and configure database servers Check BIOS version Dell PowerEdge 2900 servers were used in our testing. These servers were preconfigured with the A06 BIOS. Upgrading the BIOS to the latest version (2.2.6 as of the time of this publication) resolved a range of issues, including hanging reboot problems and networking issues. Regardless of the server vendor and architecture, you should monitor the BIOS version shipped with the system and determine if it is the latest production version supported by the vendor. If it is not the latest production version supported by the vendor, then flashing the BIOS is recommended. Disable Hyper- Threading Intel Hyper-Threading Technology allows multi-threaded operating systems to view a single physical processor as if it were two logical processors. A processor that incorporates this technology shares CPU resources among multiple threads. In theory, this enables faster enterprise-server response times and provides additional CPU processing power to handle larger workloads. As a result, server performance will supposedly improve. In EMC s testing, however, performance with Hyper-Threading was poorer than performance without it. For this reason, EMC recommends disabling Hyper-Threading. There are two ways to disable Hyper-Threading: in the kernel or through the BIOS. Intel recommends disabling Hyper-Threading in the BIOS because it is cleaner than doing so in the kernel. Refer to your server vendor s documentation for instructions. 46
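As a quick check before changing BIOS settings, the Hyper-Threading state can usually be inferred from /proc/cpuinfo on the running server: if the siblings count exceeds the cpu cores count for a physical package, Hyper-Threading is active. This is a hedged convenience check, not a substitute for the vendor documentation:

# If "siblings" is greater than "cpu cores", Hyper-Threading is enabled
grep -E 'physical id|siblings|cpu cores' /proc/cpuinfo | sort -u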

Task 4: Configure NFS client options

NFS client options
For optimal reliability and performance, EMC recommends the NFS client options listed in the table below. The mount options are specified in the /etc/fstab file.

Option: Hard mount
Syntax: hard
Recommended: Always
Description: The NFS file handles are kept intact when the NFS server does not respond. When the NFS server responds, all the open file handles resume and do not need to be closed and reopened by restarting the application. This option is required for Data Mover failover to occur transparently without having to restart the Oracle instance.

Option: NFS protocol version
Syntax: vers=3
Recommended: Always
Description: Sets the NFS version to be used. Version 3 is recommended.

Option: TCP
Syntax: proto=tcp
Recommended: Always
Description: All NFS and RPC requests are transferred over a connection-oriented protocol. This is required for reliable network transport.

Option: Background
Syntax: bg
Recommended: Always
Description: Enables the client to retry the mount in the background if the connection fails.

Option: No interrupt
Syntax: nointr
Recommended: Always
Description: Disallows client keyboard interruptions from killing a hung or failed process on a failed hard-mounted file system.

Option: Read size and write size
Syntax: rsize=32768,wsize=32768
Recommended: Always
Description: Sets the number of bytes NFS uses when reading or writing files from an NFS server. The default value is dependent on the kernel; however, throughput can be improved greatly by setting rsize/wsize=32768.

Option: No auto
Syntax: noauto
Recommended: Only for backup/utility file systems
Description: Disables automatic mounting of the file system on boot-up. This is useful for file systems that are infrequently used (for example, stage file systems).

Option: ac timeo
Syntax: actimeo=0
Recommended: RAC only
Description: Sets the minimum and maximum attribute cache time for regular files and directories to 0 seconds.

Option: Timeout
Syntax: timeo=600
Recommended: Always
Description: Sets the time (in tenths of a second) the NFS client waits for a request to complete.

sunrpc.tcp_slot_table_entries
The NFS module parameter sunrpc.tcp_slot_table_entries controls the concurrent I/Os to the storage system. The default value of this parameter is 16. The parameter should be set to the maximum value (128) for enhanced I/O performance. To configure this option, type the following command:

[root@mteoraesx2-vm3 ~]# sysctl -w sunrpc.tcp_slot_table_entries=128
sunrpc.tcp_slot_table_entries = 128

Important
To make this setting persistent, add it to /etc/sysctl.conf and then run sysctl -p, which reparses the file and outputs the resulting values.

No protocol overhead
Typically, in comparison to host file system implementations, NFS implementations increase database server CPU utilization by 1 percent to 5 percent. However, most online environments are tuned to run with significant excess CPU capacity. EMC testing has confirmed that in such environments protocol CPU consumption does not affect transaction response times.

Task 5: Install Oracle Database 11g

Install Oracle Database 11g for Linux
See Oracle's installation guide: Oracle Database Installation Guide 11g Release 1 (11.1) for Linux.
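To illustrate how the Task 4 mount options fit together, the following is a hedged example /etc/fstab entry and the matching persistence step for the RPC slot table setting; the server name (celerra-dm2), export, and mount point are hypothetical:

# Example fstab entry for a RAC datafile mount (actimeo=0 applies to RAC mounts)
celerra-dm2:/datafs  /u02  nfs  hard,bg,nointr,proto=tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0

# Make the RPC slot table setting persistent across reboots
echo "sunrpc.tcp_slot_table_entries = 128" >> /etc/sysctl.conf
sysctl -p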

Task 6: Configure database server memory options

Database server memory
Refer to your database server documentation to determine the total number of memory slots your database server has, and the number and density of memory modules that you can install. EMC recommends that you configure the system with the maximum amount of memory feasible to meet the scalability and performance needs. Compared to the cost of the remaining components in an Oracle database server configuration, the cost of memory is minor, so configuring an Oracle database server with the maximum amount of memory is entirely appropriate. EMC's Oracle RAC 11g testing was done with servers containing 48 GB of RAM.

Shared memory
Oracle uses shared memory segments for the Shared Global Area (SGA), which is an area of memory that is shared by Oracle processes. The size of the SGA has a significant impact on database performance, and there is a direct correlation between SGA size and disk I/O. EMC's Oracle RAC 11g testing was done with servers using a 20 GB SGA.

Memory configuration files
The following table describes the files that must be configured for memory management:

/etc/sysctl.conf (created by the Linux installer) - Contains the shared memory parameters for the Linux operating system. This file must be configured in order for Oracle to create the SGA with shared memory.
/etc/security/limits.conf (created by the Linux installer) - Contains the limits imposed by Linux on users' use of resources. This file must be configured correctly in order for Oracle to use shared memory for the SGA.
Oracle parameter file (created by the Oracle installer, dbca, or the DBA who creates the database) - Contains the parameters used by Oracle to start an instance. This file must contain the correct parameters in order for Oracle to start an instance using shared memory.

Configuring /etc/sysctl.conf
Configure the /etc/sysctl.conf file as follows:

# Oracle parameters
kernel.shmall =
kernel.shmmax =
kernel.shmmni = 4096
kernel.sem =
fs.file-max =
net.ipv4.ip_local_port_range =
net.core.rmem_default =
net.core.rmem_max =
net.core.wmem_default =
net.core.wmem_max =
vm.nr_hugepages =
sunrpc.tcp_slot_table_entries = 128

Recommended parameter values
The following table describes recommended values for kernel parameters:

kernel.shmmax - Defines the maximum size in bytes of a single shared memory segment that a Linux process can allocate in its virtual address space. Since the SGA is comprised of shared memory, SHMMAX can potentially limit the size of the SGA. Recommended value: slightly larger than the SGA size.
kernel.shmall - Sets the total amount of shared memory, in pages, that the system can use. The value should be at least ceil(shmmax/PAGE_SIZE). The PAGE_SIZE on our Linux systems was 4096.
kernel.shmmni - Sets the system-wide maximum number of shared memory segments.
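The following shell sketch shows one way to derive kernel.shmmax and kernel.shmall from a target SGA size. It assumes a 20 GB SGA and adds 1 GB of headroom; both figures are illustrative assumptions rather than the validated settings:

# Illustrative derivation only; adjust SGA size and headroom to your environment
SGA_BYTES=$((20 * 1024 * 1024 * 1024))
SHMMAX=$((SGA_BYTES + 1024 * 1024 * 1024))    # slightly larger than the SGA
PAGE_SIZE=$(getconf PAGE_SIZE)                # 4096 on the test systems
SHMALL=$(( (SHMMAX + PAGE_SIZE - 1) / PAGE_SIZE ))
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
echo "kernel.shmmni = 4096"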

51 Chapter 6: Installation and Configuration Configuring /etc/security/li mits.conf The section of the /etc/security/limits.conf file relevant to Oracle should be configured as follows: # Oracle parameters oracle soft nproc 2047 oracle hard nproc oracle soft nofile 1024 oracle hard nofile oracle soft memlock oracle hard memlock Important Ensure that the memlock parameter has been configured. This is required for the shared memory file system. This is not covered in the Oracle Database 11g Installation Guide, so be sure to set this parameter. If you do not set the memlock parameter, your database will behave uncharacteristically. Task 7: Tune HugePages Tuning HugePages The following table describes how to tune HugePages parameters to ensure optimum performance. Step Action 1 Ensure that the machine you are using has adequate memory. For example, our test system had 24 GB of RAM and a 20 GB SGA. 2 Set the HugePages parameters in /etc/sysctl.conf to a size into which the SGA will fit comfortably. For example, to create a HugePages pool of 21 GB, which would be large enough to accommodate the SGA, set the following parameter values: HugePages_Total: Hugepagesize: 2048 KB 3 Reboot the instance. 4 Check the values of the HugePages parameters by typing the following command: [root@mteoradb51 ~]# grep Huge /proc/meminfo On our test system, this command produced the following output: 51

52 Chapter 6: Installation and Configuration HugePages_Total: HugePages_Free: 1000 Hugepagesize: 2048 KB 5 If the value of HugePages_Free is equal to zero, the tuning is complete: If the value of HugePages_Free is greater than zero: a) Subtract the value of HugePages_Free from HugePages_Total. Make note of the answer. b) Open /etc/sysctl.conf and change the value of HugePages_Total to the answer you calculated in step a). c) Repeat steps 3, 4, and 5. Tuning HugePages on RHEL 5/OEL 5 On Red Hat Enterprise Linux 5 and on Oracle Enterprise Linux 5 systems, HugePages cannot be configured using the steps mentioned above. We used a shell script called hugepage_settings.sh to configure HugePages on these systems. This script is available on Oracle MetaLink Note The hugepage_settings.sh script configures HugePages as follows: HugePages_Total: HugePages_Free: 2244 HugePages_Rsvd: 2240 Hugepagesize: 2048 kb More information about HugePages For more information on enabling and tuning HugePages, refer to: Oracle MetaLink Note Tuning and Optimizing Red Hat Enterprise Linux for Oracle 9i and 10g Databases 52
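A starting value for vm.nr_hugepages can be estimated from the SGA size before applying the iterative tuning described above. The sketch below assumes a 20 GB SGA, 2 MB huge pages, and a small amount of headroom; the final value should still be confirmed with the HugePages_Free check in the table:

# Illustrative HugePages pool sizing; confirm with /proc/meminfo after reboot
SGA_MB=20480
HPAGE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')   # typically 2048
PAGES=$(( (SGA_MB * 1024 + HPAGE_KB - 1) / HPAGE_KB ))
PAGES=$((PAGES + 256))       # small headroom, refined by the iterative procedure
echo "vm.nr_hugepages = $PAGES"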

53 Chapter 6: Installation and Configuration Task 8: Set database initialization parameters Overview This section describes the initialization parameters that should be set in order to configure the Oracle instance for optimal performance on the CLARiiON CX4 series. These parameters are stored in the spfile or init.ora file for the Oracle instance. Database block size Parameter Syntax Description Database block size DB_BLOCK_SIZE=n For best database performance, DB_BLOCK_SIZE should be a multiple of the OS block size. For example, if the Linux page size is 4096, DB_BLOCK_SIZE =4096 *n. Direct I/O Parameter Direct I/O Syntax Description FILESYSTEM_IO_OPTIONS=setall This setting enables direct I/O and async I/O. Direct I/O is a feature available in modern file systems that delivers data directly to the application without caching in the file system buffer cache. Direct I/O preserves file system semantics and reduces the CPU overhead by decreasing the kernel code path execution. I/O requests are directly passed to network stack, bypassing some code layers. Direct I/O is a very beneficial feature to Oracle s log writer, both in terms of throughput and latency. Async I/O is beneficial for datafile I/O. Multiple database writer processes Parameter Syntax Description Multiple database writer processes DB_WRITER_PROCESSES=2*n The recommended value for db_writer_processes is that it at least matches the number of CPUs. During testing, we observed very good performance by just setting db_writer_processes to 1. Multi Block Read Count Parameter Syntax Multi Block Read Count DB_FILE_MULTIBLOCK_READ_COUNT= n 53

54 Chapter 6: Installation and Configuration Description DB_FILE_MULTIBLOCK_READ_COUNT determines the maximum number of database blocks read in one I/O during a full table scan. The number of database bytes read is calculated by multiplying the DB_BLOCK_SIZE by the DB_FILE_MULTIBLOCK_READ_COUNT. The setting of this parameter can reduce the number of I/O calls required for a full table scan, thus improving performance. Increasing this value may improve performance for databases that perform many full table scans, but degrade performance for OLTP databases where full table scans are seldom (if ever) performed. Setting this value to a multiple of the NFS READ/WRITE size specified in the mount limits the amount of fragmentation that occurs in the I/O subsystem. This parameter is specified in DB Blocks and NFS settings are in bytes - adjust as required. EMC recommends that DB_FILE_MULTIBLOCK_READ_COUNT be set to between 1 and 4 for an OLTP database and to between 16 and 32 for DSS. Disk Async I/O Parameter Syntax Description Disk Async I/O DISK_ASYNCH_IO=true RHEL 4 update 3 and later support async I/O with direct I/O on NFS. Async I/O is now recommended on all the storage protocols. 54
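Bringing the Task 8 recommendations together, the following is an illustrative spfile/init.ora fragment for an OLTP-oriented instance; the specific values shown are assumptions for the example and are not the validated initialization parameters:

# Illustrative OLTP-oriented settings (adjust per workload and CPU count)
db_block_size=8192
filesystemio_options=setall
disk_asynch_io=true
db_writer_processes=1
db_file_multiblock_read_count=4      # use 16 to 32 for DSS workloads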

Task 9: Configure the Oracle DNFS client

Install Oracle patch
Do not implement Direct NFS unless the appropriate Oracle patch has been installed and configured. Oracle 11g R1 has a known bug with regard to DNFS resiliency; the bug is resolved in the patch.

WARNING
If the appropriate patch is not applied, it can have serious implications for the stability and continuity of a running database configured to use DNFS. See Oracle MetaLink for more information on downloading and installing the Oracle patch.

Configure oranfstab
If you use DNFS, then you must create a new configuration file, oranfstab, to specify the options/attributes/parameters that enable Oracle Database to use DNFS. The oranfstab file must be placed in the ORACLE_BASE/ORACLE_HOME/dbs directory. When oranfstab is placed in this directory, the entries in the file are specific to a single database. The DNFS client searches for the mount point entries as they appear in oranfstab and uses the first matched entry as the mount point. The following table describes how to configure the oranfstab file.

Step 1: Create a file called oranfstab at the location $ORACLE_HOME/dbs/

[oracle@mteoradb55 ~]$ cat /u01/app/oracle/product/11.1.0/db_1/dbs/oranfstab
server: mteorans40-2
path:
path:
path:
path:
export: /datafs mount: /u02
export: /log1fs mount: /u03
export: /log2fs mount: /u04
export: /archfs mount: /u05

Step 2: Replicate the oranfstab file on all nodes and keep it synchronized.

Apply ODM NFS library
To enable DNFS, Oracle Database uses an ODM library called libnfsodm11.so. You must replace the standard ODM library, libodm11.so, with the ODM NFS library, libnfsodm11.so. The table below describes the steps you must follow.

Step 1: Change the directory to $ORACLE_HOME/lib (the directory that contains libodm11.so).
Step 2: Shut down Oracle.
Step 3: Run the following commands on the database servers:

$ cp libodm11.so libodm11.stub
$ mv libodm11.so libodm11.so_stub
$ ln -s libnfsodm11.so libodm11.so

Enable transchecksum on the Celerra Data Mover
EMC recommends that you enable transchecksum on the Data Mover that serves the Oracle DNFS clients. This avoids the likelihood of TCP port and XID (transaction identifier) reuse by two or more databases running on the same physical server, which could possibly cause data corruption. To enable transchecksum, type:

#server_param <movername> -facility nfs -modify transchecksum -value 1

Note
This applies to NFS version 3 only. Refer to the NAS Support Matrix available on Powerlink to determine the Celerra versions that support this parameter.

DNFS network setup
Port bonding and load balancing are managed by the Oracle DNFS client in the database; therefore, there are no additional network setup steps. If OS NIC/connection bonding is already configured, you should reconfigure the OS to release the connections so that they operate as independent ports. DNFS will then manage the bonding, high availability, and load balancing for the connections.

The dontroute option specifies that outgoing messages should not be routed by the operating system but sent using the IP address to which they are bound. If dontroute is not specified, all paths to the Celerra must be configured in separate network subnets.

The network setup can now be managed by an Oracle DBA through the oranfstab file. This frees the database sysdba from the bonding tasks previously necessary for OS LACP-type bonding, for example, the creation of separate subnets.

Disable reserved port checking
Some NFS file servers require NFS clients to connect using reserved ports. If your file server is running with reserved port checking, then you must disable it for DNFS to operate.

Mounting DNFS
If you use DNFS, then you must create a new configuration file, oranfstab, to specify the options/attributes/parameters that enable Oracle Database to use DNFS. These include:
Add oranfstab to the ORACLE_BASE/ORACLE_HOME/dbs directory.
Oracle RAC: replicate the oranfstab file on all nodes and keep it synchronized.

Mounting multiple servers
When oranfstab is placed in the ORACLE_BASE/ORACLE_HOME/dbs directory, the entries in this file are specific to a single database. The DNFS client searches for the mount point entries as they appear in oranfstab and uses the first matched entry as the mount point.

Optimizing BI/DSS and data warehousing workloads
BI/DSS and data warehousing workloads with complex queries involving outer table joins or full table scans can be optimized on DNFS by configuring the degree of parallelism (DOP) used by the database in the init.ora file. DOP is set to eight by default for a standard database install. Validation testing for DSS workloads with DNFS concluded that a DOP of 32 was optimal for the TPC-H-like workloads applied to the servers during this testing.

Task 10: Verify that DNFS has been enabled (Oracle 11g only)

What this map contains
This map contains a number of queries that you can run to verify that DNFS is enabled for the database.

Check the available DNFS storage paths
To check the available DNFS storage paths, run the following query:

SQL> select unique path from v$dnfs_channels;

PATH

Check the data files configured under DNFS
To check the data files configured under DNFS, run the following query:

SQL> select FILENAME from V_$DNFS_FILES;

FILENAME
/u03/oradata/mterac23/controlfile/o1_mf_4mst6cxw_.ctl
/u04/oradata/mterac23/controlfile/o1_mf_4mst6d7s_.ctl
/u02/oradata/mterac23/datafile/o1_mf_system_4mst3l7g_.dbf
/u02/oradata/mterac23/datafile/o1_mf_sysaux_4mst3lbm_.dbf
/u02/oradata/mterac23/datafile/o1_mf_undotbs1_4mst3ld8_.dbf
/u02/oradata/mterac23/datafile/o1_mf_users_4mst3lgc_.dbf
/u02/oradata/mterac23/datafile/o1_mf_undotbs2_4mstl01c_.dbf
/u02/oradata/mterac23/datafile/o1_mf_undotbs3_4mstl17x_.dbf
/u02/oradata/mterac23/datafile/o1_mf_undotbs4_4mstl2co_.dbf
/u02/oradata/mterac23/datafile/o1_mf_test_4msxhblb_.dbf
...
/u02/oradata/mterac23/datafile/o1_mf_temp_4mstkcsg_.tmp
...
/u03/oradata/mterac23/onlinelog/o1_mf_1_4npzyjfj_.log
/u04/oradata/mterac23/onlinelog/o1_mf_1_4nq0114z_.log
/u03/oradata/mterac23/onlinelog/o1_mf_2_4nq03db9_.log
/u04/oradata/mterac23/onlinelog/o1_mf_2_4nq05t20_.log

47 rows selected.
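The instance alert log provides another confirmation: when the ODM NFS library is in use, Oracle typically reports the Direct NFS ODM library version at instance startup. The following grep is a hedged example; the diagnostic path assumes the default ADR layout and the exact message text varies by version and patch level:

# Look for Direct NFS messages in the alert log (path is an assumption)
grep -i "direct nfs" $ORACLE_BASE/diag/rdbms/*/*/trace/alert_*.log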

Check the server and the directories configured under DNFS
To check the server and the directories configured under DNFS, run the following query:

SQL> select SVRNAME, DIRNAME, MNTPORT, NFSPORT from V_$DNFS_SERVERS;

Here is the output that we recorded when we ran the query on our system:

SVRNAME        DIRNAME    MNTPORT    NFSPORT
mteorans40-2   /log1fs
mteorans40-2   /log2fs
mteorans40-2   /datafs
mteorans40-2   /archfs

Task 11: Configure Oracle Database control files and logfiles

Control files
EMC recommends that when you create the control file, you allow for growth by setting MAXINSTANCES, MAXDATAFILES, MAXLOGFILES, and MAXLOGMEMBERS to high values. Your database should have a minimum of two control files located on separate physical file systems. One way to multiplex your control files is to store a control file copy on every file system that stores members of the redo log groups.

Online and archived redo log files
EMC recommends that you:
Run a mission-critical, production database in ARCHIVELOG mode.
Multiplex your redo log files for these databases. Loss of online redo log files could result in a database recovery failure. The best practice for multiplexing online redo log files is to place members of a redo log group on different file systems.
To understand how redo log and archive log files can be placed, refer to the Reference Architecture diagram.

60 Chapter 6: Installation and Configuration Task 12: Enable passwordless authentication using SSH Overview The use of passwordless authentication using ssh is a fundamental concept to make successful use of Oracle RAC 11g with Celerra. SSH files SSH passwordless authentication relies on the three files described in the following table. File Created by Purpose ~/.ssh/id_dsa.pub ssh-keygen Contains the host s dsa key for ssh authentication (functions as the proxy for a password) ~/.ssh/authorized_keys ssh Contains the dsa keys of hosts that are authorized to log in to this server without issuing a password ~/.ssh/known_hosts ssh Contains the dsa key and hostname of all hosts that are allowed to log in to this server using ssh id_dsa.pub The most important ssh file is id_dsa.pub. Important If the id_dsa.pub file is re-created after you have established a passwordless authentication for a host onto another host, the passwordless authentication will cease to work. Therefore, do not accept the option to overwrite id_dsa.pub if ssh-keygen is run and it discovers that id_dsa.pub already exists. Enabling authentication: Single user/single host The following table describes how to enable passwordless authentication using ssh for a single user on a single host: Step Action 1 Create the dsa_id.pub file using ssh-keygen. 2 Copy the key for the host for which authorization is being given to the authorized_keys file of the host that allows the login. 3 Complete a login so that ssh knows about the host that is logging in. That is, record the host s key and hostname in the known_hosts file. 60
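The three steps above can also be performed manually with commands similar to the following sketch; the remote user and host (oracle@rac2) are hypothetical:

# Step 1: create the key pair (accept the defaults; do not overwrite an existing key)
ssh-keygen -t dsa
# Step 2: append this host's public key to the remote host's authorized_keys
cat ~/.ssh/id_dsa.pub | ssh oracle@rac2 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
# Step 3: log in once so the remote host is recorded in known_hosts
ssh oracle@rac2 date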

Enabling authentication: Single user/multiple hosts

Prerequisites
To enable authentication for a user on multiple hosts, you must first enable authentication for the user on a single host. For additional information see Chapter 6: Installation and Configuration > Task 12: Enable passwordless authentication using SSH > Enabling authentication: Single user/single host.

Procedure summary
After you have enabled authentication for a user on a single host, you can then enable authentication for the user on multiple hosts by copying the authorized_keys and known_hosts files to the other hosts. This is a very common task when setting up Oracle RAC 11g prior to installation of Oracle Clusterware. It is possible to automate this task by using the ssh_multi_handler.bash script.

ssh_multi_handler.bash

#!/bin/bash
###################################################
# Script:  ssh_multi_handler.bash
# Purpose: Handles creation of authorized_keys
###################################################
ALL_HOSTS="rtpsol347 rtpsol348 rtpsol349 rtpsol350"
THE_USER=root

mv -f ~/.ssh/authorized_keys ~/.ssh/authorized_keys.bak
mv -f ~/.ssh/known_hosts ~/.ssh/known_hosts.bak

for i in ${ALL_HOSTS}
do
    ssh ${THE_USER}@${i} "ssh-keygen -t dsa"
    ssh ${THE_USER}@${i} "cat ~/.ssh/id_dsa.pub" \
        >> ~/.ssh/authorized_keys
    ssh ${THE_USER}@${i} date
done

for i in ${ALL_HOSTS}
do
    scp ~/.ssh/authorized_keys ~/.ssh/known_hosts \
        ${THE_USER}@${i}:~/.ssh/
done

for i in ${ALL_HOSTS}
do
    for j in ${ALL_HOSTS}
    do
        ssh ${THE_USER}@${i} "ssh ${THE_USER}@${j} date"
    done
done

mv -f ~/.ssh/authorized_keys.bak ~/.ssh/authorized_keys
mv -f ~/.ssh/known_hosts.bak ~/.ssh/known_hosts
exit

How to use ssh_multi_handler.bash
At the end of the process described below, all of the equivalent users on the set of hosts will be able to log in to all of the other hosts without issuing a password.

Step 1: Copy and paste the text from ssh_multi_handler.bash into a new file on the Linux server.
Step 2: Edit the variable definitions at the top of the script.
Step 3: chmod the script to allow it to be executed.
Step 4: Run the script.

Output on our systems
On our systems with the settings noted previously, this script produced the following effect:

ssh multi-host output

[root@rtpsol347 ~]# ./ssh_multi_handler.bash
Enter file in which to save the key (/root/.ssh/id_dsa):
Generating public/private dsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
f8:21:61:55:55:92:15:ed:0a:62:89:c5:ed:93:5f:27 root@rtpsol347.solutions1.rtp.dg.com
root@rtpsol347's password:
Tue Aug 8 22:21:31 EDT 2006
root@rtpsol348's password:
...(additional similar output not shown)
authorized_keys    100%    KB/s    00:00
known_hosts        100%    KB/s    00:00
root@rtpsol348's password: ...<repeated 3 times>
Tue Aug 8 22:22:05 EDT <repeated 15 times>
[root@rtpsol347 ~]#

The 16 date outputs, without any requests for passwords, indicate that the passwordless authentication files for the root user on all four hosts have been successfully created.

Enabling authentication: Single host/different user
Another common task is to set up passwordless authentication across two users between two hosts. For example, enable the oracle user on the database server to run commands as the root or nasadmin user on the Celerra Control Station. You can set this up by using the ssh_single_handler.bash script. This script creates passwordless authentication from the presently logged in user to the root user on the Celerra Control Station.

ssh_single_handler.bash

#!/bin/bash
###################################################
# Script:  ssh_single_handler.bash
# Purpose: Handles creation of authorized_keys
###################################################
THE_USER=root
THE_HOST=rtpsol33

ssh-keygen -t dsa
KEY=`cat ~/.ssh/id_dsa.pub`
ssh ${THE_USER}@${THE_HOST} "echo ${KEY} >> \
    ~/.ssh/authorized_keys"
ssh ${THE_USER}@${THE_HOST} date
exit

Output on our systems
On our systems with the settings noted previously, ssh_single_handler.bash produced the following effect:

ssh single host output

[oracle@rtpsol347 scripts]$ ./ssh_single_handler.bash
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
09:13:4d:7d:20:0c:9a:c4:4e:35:c9:c9:11:9e:30:31 oracle@rtpsol347.solutions1.rtp.dg.com
Wed Aug 9 09:40:01 EDT 2006
[oracle@rtpsol347 scripts]$

64 Chapter 6: Installation and Configuration The date output without a password request indicates that the passwordless authentication files have been created. Task 13: Set up and configure Celerra SnapSure Setup and configuration of SnapSure Refer to Chapter 6: Installation and Configuration > Task 2: Set up and configure NAS for Celerra > Configure NAS and manage Celerra. Task 14: Set up the virtualized utility servers Setting up the virtualized utility servers Virtualized single instance database servers were used as targets for test/dev and disaster recovery solutions. To set up a virtualization configuration, you need to do the steps outlined in the following table: Step Action 1 Deploy a VMware ESX server. 2 Capture the total physical memory and total number of CPUs that are available on the ESX server. 3 Create four virtual machines (VMs) on the ESX server. For the storage network configuration, see Chapter 8: Virtualization > VMware ESX server > Typical storage network configuration. 4 Distribute the memory and CPUs available equally to each of the VMs. 5 Assign a VMkernel IP ( ) to each ESX server so that it can be used to mount NFS storage. For the storage configuration, see Chapter 8: Virtualization > VMware ESX server > Storage configuration. Note All the VMs need to be located on common storage. This is mandatory for performing VMotion. 6 Configure four additional NICs on the ESX server; dedicate each NIC to a VM. These additional NICs are used to configure the dedicated private network connection to Celerra where the database files reside. 7 Ensure all necessary software, for example, Oracle, is installed and configured. Note All database objects are stored on an NFS mount. 64

65 Chapter 7: Testing and Validation Chapter 7: Testing and Validation Overview Introduction to testing and validation This document provides a summary and characterization of the tests performed to validate the solution. The goal of the testing was to characterize the end-to-end solution and component subsystem response under reasonable load, representing the commercial and midsize enterprise market for Oracle 11g on Oracle Enterprise Linux 4 with a Celerra NS40 over KNFS/DNFS. Objectives The objectives of this testing were to carry out: Performance testing of a physically booted pure NFS solution using a best practices compliant storage configuration Functionality testing of a Test/Dev solution component using a physically booted pure NFS configuration, whereby a cloned version of a physically booted production Oracle 11g database is replicated and then mounted on a VMware virtual machine running a single-instance Oracle Database 11g. Establish a reference architecture of validated hardware and software that permits easy and repeatable deployment of Oracle Database 11g on integrated Celerra Network Servers over NFS. Establish the storage best practices for configuring an integrated Celerra Network Server for use with Oracle Database 11g over NFS in a manner that provides optimal performance, recoverability, and protection. Provide migration of a running production Oracle database from NFS to ASM/FCP and vice versa while virtualized using VMware vsphere. Testing focus This focuses on the Store and Advanced Backup solution components of the objectives. To accomplish these objectives, EMC did the following: Used Quest Benchmark Factory for Databases to run a TPC-C workload against the solution stack described in the system configuration. Scaled the workload iteratively until a breaking point in the test was reached (either an error was returned due to some resource constraint or performance scaling became negative). Gathered the operating system and Oracle performance statistics, and used these as the input to a tuning effort. In this way, the maximum workload reasonably achievable on this solution stack was reached. Performed functional tests, as appropriate, for each solution being validated. Section A: Store solution 65

66 Chapter 7: Testing and Validation Overview of the store solution The store solution was designed as a set of performance measurements to determine the bounding point of the solution stack in terms of performance. A reasonable amount of fine tuning was performed in order to ensure that the performance measurements achieved were consistent with real-world, best-practice performance. The test was conducted on a four node Oracle RAC cluster. These four nodes are created on Oracle Cluster Ready Services (CRS). This guide documents the results for AVM 3-FC shelf consolidation testing. Test Results Summary The summary of the test results for the physically booted Oracle 11g 4-Node RAC KNFS and DNFS configuration is as shown below. The memory on the database servers was increased to 48 GB for the following tests. 1-Port DNFS Users: TPS: Response Time: Users TPS Response Time DB CPU Average NS CPU Average DB Latency Physical Reads Physical Writes I/Os per drive Redo Size Port DNFS Users: TPS: Response Time: Users TPS Response Time DB CPU Average NS CPU Average DB Latency Physical Reads Physical Writes I/Os per drive Redo Size Port DNFS Users: TPS: Response Time: Users TPS Response Time DB CPU Average NS CPU Average DB Latency Physical Reads Physical Writes I/Os per drive Redo Size Port DNFS Users: TPS: Response Time:

67 Chapter 7: Testing and Validation Users TPS Response Time DB CPU Average NS CPU Average DB Latency Physical Reads Physical Writes I/Os per drive Redo Size KNFS test Users: TPS: Response Time: Users TPS Response Time DB CPU Average NS CPU Average DB Latency Physical Reads Physical Writes I/Os per drive Redo Size Section B: Basic backup solution Overview of the basic backup solution The basic backup solution demonstrates that the Oracle 11g configuration is compatible with RMAN disk-to-disk backup. AVM 3 Shelf RUN Description 94 RMAN backup The backup tests are performance tests, where the performance of each node level was observed and RMAN backup/restore was performed on one node. The restore is a functionality test, but the amount of time required to perform the RMAN restore was tuned and measured. The transactions restored and recovered are measured to ensure that there is no data loss. System configuration The test configuration in the basic backup was identical to the store solution. The following tests were executed with 24 GB of memory on each database servers. Test procedure The following procedure was used for the basic backup validation: Step Action 1 Close all the Benchmark Factory agents (if running). 2 Close the Benchmark Factory console. 3 Restart the Benchmark Factory console and agents. 4 Stop the database on each node and restart all the nodes. 67

Step 5: Start the listener and database instances on all nodes.
Step 6: Start the Benchmark Factory test with the user load ranging upward from 4000 in intervals of 100.
Step 7: When the user load reaches iteration 5000, initiate the RMAN backup on the first node and monitor the performance impact on the production database.
Step 8: Verify that the RMAN backup completed successfully and allow the test to complete.

Test results

Basic backup solution test results
The basic backup operation using RMAN was performed while the OLTP load was running on the database. When RMAN was initiated at user load 5000, there was a moderate increase in response time and a moderate decrease in transaction throughput.

Backup and restore summary
Test run ID: 94
Test run duration: 05 hours 30 minutes
User load range with interval 100
Profile: Mterac54
Driver: Oracle
Start time of test: 5:34:14 AM
End time of test: 11:06:26 AM
RMAN backup start time: 06:53:00, at user load 5000
RMAN backup end time: 08:03:00, at user load 6400
Total time for Backup: 1 hour 10 minutes
Total time for Restore: 05 hours 49 minutes

Basic backup solution conclusion
RMAN provided a reliable, high-performance backup solution for Oracle RAC 11g in our configuration. However, the time required to restore the database was significant.
Section C: Advanced backup solution

Overview of the advanced backup solution
The purpose of the advanced backup solution was to demonstrate the use of Celerra's unique storage capabilities in conjunction with the Celerra SnapSure checkpoint. It also demonstrates that the Oracle 11g configuration is compatible with Celerra SnapSure.

Test Run 109: Advanced backup using Celerra SnapSure

The backup test run was a performance test. The performance impact on the four-node load was tested while performing a SnapSure checkpoint. The restore was a functionality test, but the amount of time required to perform the SnapSure restore was tuned and measured. The transactions restored and recovered were measured to ensure no data loss.

System configuration
The test configuration in the advanced backup was identical to the store solution.

Test procedure
The following procedure was used for the advanced backup validation:

Step 1: Close all the Benchmark Factory agents (if running).
Step 2: Close the Benchmark Factory console.
Step 3: Restart the Benchmark Factory console and agents.
Step 4: Stop and restart the database instances.
Step 5: Create a SnapSure checkpoint for the data file system.
Step 6: Start the Benchmark Factory test with the user load ranging upward from 4000.
Step 7: When the user load is at the 5000th iteration, place the database in hot backup mode, and monitor the performance impact on the production database.
Step 8: Once the database is in hot backup mode, refresh the data file system checkpoint.
Step 9: Take the database out of hot backup mode and allow the test to complete.
Step 10: After completing the test, shut down the database and restore the database.
Step 11: Capture the time taken to restore the database.

Test results

Advanced backup solution test results
The advanced backup operation using SnapSure was performed while the OLTP
load was running on the database. When the database was taken into hot backup mode at the 3600th iteration to refresh the checkpoint, there was a significant increase in response time and a significant decrease in transaction throughput.

Backup and restore - summary
Test run duration: 6 hours 5.57 minutes
User load range with interval 100
Profile: Mterac54
Driver: Oracle
Start time of test: 3:06 A.M. EDT
End time of test: 11:57 A.M. EDT
Hot backup start time: 11:42 A.M., start of user load 5000
Checkpoint refresh start time: 4:22:35 EDT
Checkpoint refresh end time: 4:22:36 EDT
Hot backup end time: 4:22:36 EDT
Total time for restore: 07 seconds

Advanced backup solution conclusion
The Celerra SnapSure feature works with Oracle 11g in our configuration. A modest performance hit is observed while taking the database into hot backup mode to refresh the checkpoint. However, this is temporary, as performance recovered to the expected levels after that point for the remainder of the test run. The restore from a SnapSure checkpoint is faster than an RMAN disk-to-disk restore.

71 Chapter 7: Testing and Validation Section D: Test/Dev solution Overview of the test/dev solution The test/dev solution provided a rapid, high-performance method for copying a running Oracle 11g database, such that the copy can be used for testing and development purposes. It also demonstrates that the Oracle 11g configuration is compatible with Celerra SnapSure. Test Run Test Description 24 Test/dev solution using Celerra Writeable Checkpoint System configuration The test configuration in the test/dev solution was identical to the store solution. Testing procedure The following procedure was used to validate the Test/dev solution using writable snapshots: Step Action 1 Create file systems snap_log1 & snap_log2 for redolog files. 2 Place the production database in archive log mode, in case it is not currently in archive log mode. 3 Put the production database in hot backup mode. 4 Create a writeable checkpoint of the database data file system from Celerra. 5 Move the production database out of hot backup mode. 6 Archive the current redo log file. 7 List the current active log files. 8 Using the same procedure, create a writeable checkpoint of the database archive log file system. 9 Create nfs exports for the data and arch writeable checkpoints. 10 As the respective database components are placed on the production database servers, mount the data and arch file system writeable checkpoints and snap_log1 and snap_log2 file systems on both the test/dev database servers to the same directory structure. 11 On the test/dev database server, set the environment parameters ORACLE_SID and ORACLE_HOME to the same settings as on the production database servers, or set them according to the Oracle installation procedure. 12 Create the required database dump directories on the test/dev database server. 13 Create a pfile for the test/dev instance using the file from the production instance as a model. 71

72 Chapter 7: Testing and Validation 14 Copy the parameter, control files from the production database to the log areas on the test/dev database server. 15 List the available archive logs. This list is required for recovering the test/dev database. 16 Start up the test/dev database in mount mode. 17 Recover the test/dev database. Specify file names of the archive logs from the list. At the end, specify the current log file name from the same thread. 18 Open the test/dev database with the resetlogs option. Test results Test/dev solution test results The test/dev operation using SnapSure writeable checkpoints was performed while the OLTP load was running on the database. The database was put into hot backup mode at the 3000th user iteration, and the database was taken out of hot backup at the 3200th iteration. The database recovery at the clone target was initiated while the load was running at the 4000th iteration. The database was opened at the clone target while the load was at the 4200th iteration. Test/dev solution conclusion Celerra Writeable Checkpoints are ideal for creating test/dev environments. This solution enables instant creation of more than one copy of the Oracle database with minimal consumption of additional storage resources. With a writeable copy of the production database, small iterative changes can be made against the testing database and applied back to the production database if required, or the changes can simply be discarded. Celerra Writeable Checkpoints help Oracle DBAs speed up creation and deployment of Oracle test/dev environments for application development and testing. It eliminates unnecessary system downtimes while keeping the production environment safe from unexpected changes and interruptions. 72
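The Oracle-side bracketing used in steps 3 through 7 of the procedure above can be scripted, for example as in the following hedged sketch, run with the production instance environment (ORACLE_SID, ORACLE_HOME) already set; the Celerra checkpoint commands themselves are omitted and shown only as a placeholder comment:

#!/bin/bash
# Hedged sketch of the Oracle steps that bracket the writeable checkpoint
sqlplus -S / as sysdba <<'EOF'
alter database begin backup;
EOF
# ... create the writeable checkpoint of the data file system here (step 4) ...
sqlplus -S / as sysdba <<'EOF'
alter database end backup;
alter system archive log current;
select member from v$logfile;     -- list the current log files (step 7)
EOF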

73 Chapter 7: Testing and Validation Section E: Backup server solution Overview of the backup server solution This solution demonstrates that the Oracle RAC 11g configuration is compatible with the backup server solution. Oracle Database 11g RMAN has a CATALOG BACKUPPIECE command. This command adds backup pieces of information of the target database on disk to the production database RMAN repository. The backup pieces should be on the shared location. As long as backup pieces are accessible to both production and target databases, RMAN commands such as RESTORE and RECOVER behave transparently across different databases. System configuration The test configuration in the backup server solution was identical to the store solution. Test procedure The following procedure was used to validate the backup server solution: Step Action 1 Start the Benchmark Factory test starting ranging with the user load from 4000 to When the user load is at iteration 8000, initiate storage-layer copy (writeable checkpoint) by placing the production database in backup mode. 3 Place storage-layer copy in the same place in the file system as the production copy. 4 Catalog the backup pieces using the CATALOG BACKUPPIECE command within RMAN in the production server. 5 Shut down the production server. 6 Perform restore and recovery on the production server of the backup taken on the target server. Test results Backup server solution test results The writeable checkpoint is kicked off at user load There was a moderate increase in response time and moderate decrease in transaction throughput when the writeable checkpoint was initiated at user load 8000 as shown below. Backup server solution conclusion Using snapshot technology for storage-based backup can dramatically decrease the impact of the backup operation on the production database server. Using RMAN disk-to-disk backups also reduces the load on the production database versus tape 73

74 Chapter 7: Testing and Validation backup, as well as improving manageability and reducing complexity. RMAN disk-to-disk backups of the disaster recovery target can also be utilized as a form of backup restore and recover the production database. Users: 8900 TPS: Response Time: Users TPS Response Time DB CPU Average NS CPU Average DB Latency Physical Reads Physical Writes I/Os per drive Redo Size The backup / restore / recover time duration for backup server solution is as follows: Test run ID 176 Test run duration 05 hours 30 minutes User load range with interval 100 Profile Mterac23 Driver Oracle Start time of test 2/10/2009 2:34:14 AM End time of test 2/10/2009 8:06:26 AM Hot backup start time 06:56:27 at user load 8000 Writeable checkpoint start time 07:01:10 at user load 8000 Writeable checkpoint end time 07:01:12 at user load 8000 Hot backup end time 07:02:20 at user load 8000 Total time for Backup 1 hour 09 minutes 26 seconds Total time for Restore 04 hours 39minutes 42 seconds Total time for Recovery 03 seconds 74

75 Chapter 7: Testing and Validation Section F: Online migration solution Overview of the online migration solution This solution demonstrates that EMC Replication Manager can be used to migrate an Oracle 11g database mounted on FCP/NFS to a target database mounted on NFS with minimum performance impact and no downtime of the production database. System configuration The test configuration in the online migration solution was identical to the store solution. Test procedure The following procedure was used to validate the online migration solution: Step Action 1 Using EMC Replication Manager, a consistent backup of the running physical production database is performed on the EMC Celerra SnapSure checkpoint snapshot. 2 This backup is mounted (but not opened) on the migration server, in this case a physically booted server. The FCP/ASM target LUNs are also mounted on the migration server. 3 Using Oracle Recovery Manager (RMAN), a backup of this database is taken onto the target location. This backup is performed as a database image, so that the datafiles are written directly to the target FCP/ASM LUNs. 4 The migration server is then switched to the new database, which has been copied by RMAN to the FCP/ASM LUNs. 5 The physical target database is set in Data Guard continuous recovery mode, and Data Guard log ship/log apply is used to catch the physical target database up to the production version. 6 Once the physical target database is caught up to production, Data Guard failover can be used to retarget to the physical target database. If appropriate networking configuration is performed, clients will not see any downtime when this operation occurs. Test results This test was strictly a functionality validation of the migration of an Oracle 11g database from SAN to NAS configuration. The performance impact on the production database during online migration was not validated. 75

76 Chapter 8: Virtualization Chapter 8: Virtualization Overview Introduction to virtualization Virtualized Oracle database servers were used as targets for test/dev, backup, and disaster recovery for this solution. These servers are more conveniently managed as virtual machines than as physically booted Oracle database servers. The advantages of consolidation, flexible migration and so forth, which are the mainstays of virtualization, apply to these servers very well. A single VMware Linux host was used as the target for test/dev, backup, and disaster recovery. For test/dev, the target database was brought up as a singleinstance database on the VMware host. Similarly, the standby database for disaster recovery was a single-instance database running on a VMware host. This chapter provides procedures and guidelines for installing and configuring the virtualization components that make up the validated solution scenario. Advantages of virtualization Advantages Some advantages of including virtualized test/dev and disaster recovery (DR) target servers in the solution are: Consolidation Flexible migration Cloning Reduced costs 76

77 Chapter 8: Virtualization Considerations Virtualized singleinstance Oracle only Due to the requirement for RAC qualification, presently there is no support for Oracle 11g RAC servers on virtualized devices. For this reason, EMC does not publish such a configuration as a supported and validated solution. However, the use of Oracle Database 11g (in single-instance mode) presents far fewer support issues. VMware infrastructure Setting up the virtualized utility servers For details on setting up the virtualized utility servers, see Chapter 6: Installation and Configuration > Task 14: Set up the virtualized utility servers > Setting up the virtualized utility servers. Virtualization best practices VMotion storage requirements You must have a common storage network configured on both source and target ESX servers to perform VMotion. Even the network configuration including the vswitch names should be exactly the same and the connectivity to the LUNs on the back-end storage from the ESX servers also should be established in the same way. ESX servers must have identical configuration All ESX servers must have an identical configuration, other than the IP address for the VMkernel port. Dedicated private connection When NFS connectivity is used, it is a best practice to have a dedicated private connection to the back-end storage from each of the VMs. We assigned four NICs (one NIC for each VM) on the ESX server, assigned private IPs to the same, and set up the connectivity from these four NICs to the Data Movers of the back-end storage using a Dell PowerConnect switch. NFS mount points If the Oracle database files sit on NFS storage, the NFS share should be mounted as a files system within the Linux guest VM using /etc/fstab. This can deliver vastly superior performance when compared to storing Oracle database files on virtual disks that reside on an NFS share and are mounted as NFS 77

78 Chapter 8: Virtualization datastores on the ESX server. VMware ESX server Typical storage network configuration Storage configuration In the example below, the VMkernel Storage Network is being used to store the files for the VMs (through NFS). The storage pane shows that the NFS-mounted volume vm is where these files are stored. VMware and NFS NFS mounts in ESX NFS is a viable storage option for VMware ESX. For the utility servers used in this solution, NFS was used to store the OS images for the VMs. 78
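For reference, an NFS export can be presented to classic ESX as a datastore from the service console with commands similar to the following; the NAS host name, export path, and datastore label are hypothetical, and the exact tooling differs on later ESXi releases:

# Add an NFS export as a datastore and list NFS datastores (ESX 3.x/4.0 classic)
esxcfg-nas -a -o celerra-dm2 -s /vm vm_datastore
esxcfg-nas -l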

79 Chapter 9: Backup and Restore Chapter 9: Backup and Restore Overview Introduction to backup and restore A thoughtful and complete backup strategy is an essential part of database maintenance in a production environment. Data backups are an essential part of any production environment. Regardless of the RAID protection level, hardware redundancy, and other high-availability features present in EMC Celerra storage arrays, conditions exist where you may need to be able to recover a database to a previous point in time. This solution used EMC Celerra SnapSure to free up the database server s CPU, memory, and I/O channels from the effects of operations relating to backup, restore, and recovery. Scope This section covers the use of Celerra SnapSure checkpoint to perform backup and restore operations on Oracle RAC database servers. Important note on scripts The scripts provided assume that the passwordless authentication is set up using ssh between the oracle user account and the Celerra Control Station. Passwordless authentication allows the oracle user account to issue commands to the Control Station within a script. For additional information, see Chapter 6: Installation and Configuration > Task 12: Enable passwordless authentication using SSH. 79

80 Chapter 9: Backup and Restore Section A: Backup and restore concepts Physical storage backup A full and complete copy of the database to a different physical media is a physical storage backup. Logical backup A backup that is performed using the Oracle import/export utilities is a logical backup. The term logical backup is generally used within the Oracle community. Logical storage backup Creating a backup using a logical image is referred to as a logical storage backup. A logical storage backup is a backup that does not physically exist. Rather, it consists of the blocks in the active file system, combined with blocks in a SavVol, an area where the original versions of the updated blocks are retained. The effect of a logical storage backup is that a view of the file system as of a certain point in time can be assembled. Unlike a physical storage backup, a logical storage backup can be taken very rapidly, and requires very little space to store (typically a small fraction of the size of a physical storage backup). Important Taking logical storage backups is not enough to protect the database from all risks. Physical storage backups are also required to protect the database against double disk failures and other hardware failures at the storage layer. Celerra SnapSure checkpoint The Celerra checkpoint command (fs_ckpt) allows a database administrator to capture an image of the entire file system as of a point in time. This image takes up very little space and can be created very rapidly. It is thus referred to as a logical image. Flashback Database The Oracle Flashback Database command enables you to restore an Oracle database to a recent point in time, without first needing to restore a backup of the database. 80
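As a sketch of the fs_ckpt usage described above, a checkpoint of the datafile file system might be created and listed from the Control Station as follows; the file system and checkpoint names are hypothetical, and the exact command options can vary by Celerra (DART) release, so consult the Celerra documentation before use:

# Create a named SnapSure checkpoint of the datafile file system, then list it
fs_ckpt datafs -name datafs_ckpt_0400 -Create
fs_ckpt datafs -list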

Section B: Backup and recovery strategy

Use both logical and physical backup

Logical backup
The best practice for backing up Oracle Database 11g is to perform approximately six logical storage backups per day, at four-hour intervals, using SnapSure checkpoints. To facilitate recovery at granularities smaller than the datafile (a single block, for example), you should catalog all the SnapSure checkpoint backups within the RMAN catalog.

Physical backup
Because logical backups do not protect you from hardware failures (such as double-disk failures), you should also perform one physical backup per day, typically during a period of low user activity. For this purpose, EMC recommends RMAN using an incremental strategy if the database is larger than 500 GB, and a full strategy otherwise. Further, EMC recommends that the RMAN backup be written to a SATA II disk configuration rather than to tape.

Reduced mean time to recovery
Using a strategy that combines physical and logical backups optimizes the mean time to recovery. In the event of a fault that is not related to the hardware, you can restore almost instantly from a SnapSure checkpoint. According to Oracle, approximately 90 percent of all restore/recovery events are not related to hardware failures, but rather to user errors such as deleting a datafile or truncating a table. Further, the higher backup frequency, compared with what can be achieved with a purely physical backup strategy, means that fewer logs have to be applied, thereby improving mean time to recovery. Even when you need to restore from a physical backup, the use of SATA II disk improves restore time.

Multiple restore points using EMC SnapSure

Rapid restore and recovery using EMC SnapSure

Section C: Logical backup and restore using EMC SnapSure

Overview
A typical backup scheme uses six logical storage backups per day, at four-hour intervals, combined with one physical storage backup per day. The example scripts provided in this section can be integrated into the Oracle Enterprise Manager job scheduling process or cron, so that they execute every four hours; a sample crontab entry is sketched after the advantages below. Documentation on scheduling a job in this way can be found in the Oracle Database Administrator's Guide 10g Release 2 (10.2). The process documented here assumes a four-hour interval.

Advantages of logical storage backup using SnapSure

Recovery from human errors
A logical backup protects against logical corruption of the database, accidental file deletion, and other similar human errors.

Frequency without performance impact
The logical storage operation is very lightweight and, as a result, a logical storage backup can be taken very frequently. Most customers report that the performance impact of this operation is too slight to perceive.

Reduced MTTR
Restoring from a logical storage backup can occur very quickly, depending on the amount of data changed. This dramatically reduces mean time to recovery (MTTR) compared with restoring from a physical backup.

Fewer archived redo log files
Because of the high frequency of backups, only a small number of archived redo log files need to be applied if a recovery is required. This further reduces mean time to recovery.
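As noted in the overview above, the iterative backup script can be driven by cron. A crontab entry along the following lines would run it at the top of every fourth hour; the script and log paths are hypothetical and should be adjusted to your environment.

# Run the iterative logical backup every four hours (00:00, 04:00, 08:00, ...)
# The paths below are placeholders; point them at your installed copy of log_bkup_iter.bash
0 */4 * * * /home/oracle/scripts/log_bkup_iter.bash >> /home/oracle/logs/log_bkup_iter.log 2>&1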

Logical backup process

Logical storage process and SnapSure
A logical storage backup is a virtual copy of the datafiles, enabled by the Celerra SnapSure checkpoint feature. The following image illustrates the process. The datafile copies can be used in the same manner as any other backup of the datafiles.

Stages of the logical backup process
The logical backup and recovery process consists of the following steps:

1. Initialize the logical backup process.
2. Set up iterative logical backups.

3. Restore from a logical backup.

Initializing logical storage

How to initialize logical storage
The following steps initialize a logical storage backup process that uses a four-hour interval, or six daily SnapSure checkpoints:

1. Create six checkpoints on the Celerra for the datafile volume.
2. Create mount points for the checkpoint file systems on the Celerra.
3. The checkpoint file systems are automatically mounted with read-only permission on the Celerra.
4. Export the checkpoint file systems from the Celerra.
5. Create read-only mount points for the checkpoint file systems on the Oracle database server.
6. Create the mount point directories for the checkpoint file systems and mount them on the Oracle database server.

These steps can be automated using a script. A sample script is available at: Supporting Information > Setting up iterative logical storage: scripts and outputs > Sample script: log_bkup_init.bash

Sample script: log_bkup_init.bash
The sample script, log_bkup_init.bash, can be used to automate the steps required to initialize logical storage. To see the sample script, go to: Supporting Information > Setting up iterative logical storage: scripts and outputs > Sample script: log_bkup_init.bash

Note
The script assumes that passwordless authentication using ssh has been enabled. Instructions on how to enable passwordless authentication can be found in: Chapter 6: Installation and Configuration > Task 12: Enable passwordless authentication using SSH

Note
Remember that lines ending with the backslash (\) character should be entered on one line. Also remember that this script must be run as root.

Output of log_bkup_init.bash
To see the output of the sample script, log_bkup_init.bash, go to: Supporting Information > Setting up iterative logical storage: scripts and outputs > Output of log_bkup_init.bash
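Before consulting the full sample script, the following minimal sketch illustrates the shape of what log_bkup_init.bash automates. The Control Station and Data Mover names, file system name, checkpoint naming convention (including the -name option), and mount points are assumptions for illustration only; the script in Supporting Information is the reference implementation.

#!/bin/bash
# Sketch of the initialization flow (run as root on the database server).
THE_CS=celerra_cs            # Celerra Control Station (ssh as root) - placeholder
THE_DM=celerra_dm            # Data Mover name on the storage network - placeholder
DATAFS=datafs                # production datafile file system - placeholder
CKPT_BASE=/u02/ckpt          # base directory for read-only checkpoint mounts - placeholder

for i in 0 1 2 3 4 5; do
    # Create one checkpoint per four-hour slot; it is auto-mounted read-only on the Celerra
    ssh root@${THE_CS} "export NAS_DB=/nas; fs_ckpt ${DATAFS} -name ${DATAFS}_ckpt${i} -Create"

    # Export the checkpoint file system so the database server can mount it
    ssh root@${THE_CS} "export NAS_DB=/nas; server_export server_2 -P nfs /${DATAFS}_ckpt${i}"

    # Create a read-only mount point on the database server and mount the checkpoint
    mkdir -p ${CKPT_BASE}${i}
    mount -o ro,hard,nointr,tcp,vers=3 ${THE_DM}:/${DATAFS}_ckpt${i} ${CKPT_BASE}${i}
done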

Setting up iterative logical storage

How to set up iterative logical storage
The following steps can be set up in the Oracle job scheduler to run at a stated interval, to enable iterative logical storage:

1. Schedule the SnapSure checkpoints at intervals of four hours.
2. From the database server, place the database in hot backup mode.
3. From the Celerra, refresh the SnapSure checkpoint, created during the initialization process, that relates to the current hour.
4. Exit hot backup mode on the Oracle database server.
5. If RMAN integration is required, from the database server, using a SQL-on-SQL approach, create RMAN scripts to uncatalog the datafile copies and to catalog them:

SET FEEDBACK off
SET HEADING off
SET VERIFY off
SET LINESIZE 100
SET PAGESIZE 2000
SPOOL catalog.rman
SELECT 'change datafilecopy ' || CHR(39) || '&1' ||
       SUBSTR(name, INSTR(name,'/',-1,1), LENGTH(name)) || CHR(39) ||
       ' uncatalog;'
  FROM v$datafile;
SELECT 'catalog datafilecopy ' || CHR(39) || '&1' ||
       SUBSTR(name, INSTR(name,'/',-1,1), LENGTH(name)) || CHR(39) || ';'
  FROM v$datafile;
SPOOL off
EXIT

This script frees the user from the burden of maintaining and updating a list of datafiles that must be backed up, because the list is obtained from the data dictionary when the backup occurs.

The heart of the SQL*Plus script is the two SELECT statements. The first part of each statement is a literal that outputs a snippet of RMAN script code.

The CHR(39) expression outputs the single-quote character. The &1 idiom is a parameter passed into the script; it contains the file system path to the checkpoint file system, as it is mounted on the Oracle database server, and should include all directories necessary to reach the datafiles. The expression SUBSTR(name, INSTR(name,'/',-1,1), LENGTH(name)) produces the names of all of the datafiles, without any of the path information prior to the filename (but including the leading slash). Again, pass the path information for the checkpoint file system in the parameter given to the script, omitting the last trailing slash.

The generated script can then be executed within RMAN (for example, from an rman target / session) to catalog and uncatalog the datafile copies.

6. As a best practice during backup, you should also switch log files, archive all log files, and back up the control file.

These steps can be automated by creating a script and incorporating that script into a scheduling mechanism (for example, the Oracle Enterprise Manager job control process), which causes the script to be executed at a stated interval. To see the sample script, log_bkup_iter.bash, go to: Supporting Information > Setting up iterative logical storage: scripts and outputs > Sample script: log_bkup_iter.bash

Sample script: log_bkup_iter.bash
The sample script, log_bkup_iter.bash, can be used to automate the steps required to iterate logical storage. To see the sample script, go to: Supporting Information > Setting up iterative logical storage: scripts and outputs > Sample script: log_bkup_iter.bash

In the sample script, a variable named $SNAPNAME stores the checkpoint name for the current hour. The date command, with the +%k format string, outputs the current hour in 0-23 format. Integer division (the / operator) then divides the current hour by four and discards the remainder, yielding a number from zero to five for any given hour. You can test this code yourself to see the behavior; a sketch follows the note below.

Note
The code in the example script is executed on the Oracle database server. It assumes that passwordless authentication using ssh has been enabled between the oracle user account and the Control Station's root user. Instructions on how to enable passwordless authentication can be found in: Chapter 6: Installation and Configuration > Task 12: Enable passwordless authentication using SSH
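A minimal sketch of that hour-to-checkpoint calculation is shown below. The checkpoint name prefix is an assumption for illustration and is not taken from the sample script.

#!/bin/bash
# Sketch of the checkpoint-slot calculation described above.
HOUR=$(date +%k)               # current hour, 0-23
SLOT=$(( HOUR / 4 ))           # integer division: yields 0-5, one slot per four-hour window
SNAPNAME="datafs_ckpt${SLOT}"  # checkpoint name prefix is hypothetical
echo "Current hour ${HOUR} maps to checkpoint ${SNAPNAME}"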

Output of log_bkup_iter.bash
To see the output of the sample script, log_bkup_iter.bash, go to: Supporting Information > Setting up iterative logical storage: scripts and outputs > Output of log_bkup_iter.bash

Restoring logical backups: available methods

Overview
Depending on whether you need to restore an entire database or only part of a database, different options are available:

Linux file system commands
To restore part of a database, use file system commands such as cp, dd, and tar.

The Celerra rootfs_ckpt -Restore command
EMC recommends that you use the rootfs_ckpt -Restore command to restore an entire database.

Both of these options are described in full in this section.

Recovery using Linux file system commands
Once a checkpoint file system is mounted as a normal (albeit read-only) file system on the Oracle database server, the datafile copies within the checkpoint file system can be treated like any other user-managed backup of the Oracle database. As a result, normal file system semantics can be used to restore these files into the active file system; for example, Linux file system commands such as cp, dd, and tar can be used. The restore can be carried out without downtime. However, there is a performance hit, because file system commands such as cp, dd, and tar round-trip the network, reading the blocks off the Celerra from the checkpoint file system and then writing the blocks back out to the Celerra into the active file system. Use this method when the restore affects only a small subset of the database and downtime on the entire database cannot be allowed.

Caution
Restore and recovery of a full database can be accomplished using file system commands; however, there will be a significant impact on performance, because the restore command (for example, dd or cp) generates significant I/O while it runs. You can avoid this by using rootfs_ckpt -Restore.

The Celerra rootfs_ckpt -Restore command
If there is a valid logical storage backup of the database, the most efficient way to restore the database, in terms of both time and I/O, is to use the rootfs_ckpt -Restore command. This is because the rootfs_ckpt -Restore command requires minimal I/O (essentially restoring only the changed blocks) to complete the restore operation. The rootfs_ckpt -Restore command has the following limitations:

- It is limited to restoring the entire file system.
- Downtime is required.

Use this method when an entire database has to be restored and the restore operation has to be completed quickly with minimal impact on system performance.

Using the rootfs_ckpt -Restore command to restore a logical backup

How to restore a logical backup using rootfs_ckpt -Restore
The following steps restore a logical backup of an Oracle database (a SnapSure checkpoint) by using the rootfs_ckpt -Restore command:

1. Shut down all the database instances by typing the following command:
   [oracle@mteoradb51 oracle]$ srvctl stop database -d mterac5
2. Verify that all database instances have been shut down by typing the following command:
   [oracle@mteoradb51 oracle]$ crs_stat -t
3. Restore and recover the database by executing a script that incorporates the rootfs_ckpt -Restore command. A sample script (log_bkup_restore.bash) is available below.

Sample script: log_bkup_restore.bash
The sample script below, log_bkup_restore.bash, performs a logical restore of an Oracle database on a Celerra, using the rootfs_ckpt -Restore command to restore a checkpoint. It allows the user to pass in the name of a SnapSure checkpoint as the first parameter.

Note
The script assumes that passwordless authentication using ssh has been enabled. Instructions on how to enable passwordless authentication can be found in Chapter 6: Installation and Configuration > Task 12: Enable passwordless authentication using SSH

#!/bin/bash
####################################################################
# Script:  log_bkup_restore.bash                                   #
# Purpose: Restores and performs full database recovery            #
#          using a backup taken with fs_ckpt.                      #
# This script can be run as oracle if passwordless ssh is          #
# set up between the oracle user account and the NS Series         #

# Control Station root account                                     #
####################################################################

THE_NS=rtpsol33
THE_USER=root
SNAPNAME=datafs_chkpt1
echo "SNAPNAME is ${SNAPNAME}"
THE_DATAMNT=/u02/oradata/mterac5

# Preserve the current control files before the checkpoint restore
cp -fp ${THE_DATAMNT}/*.ctl /tmp

# Restore the checkpoint on the Celerra (runs on the Control Station)
ssh ${THE_USER}@${THE_NS} "export NAS_DB=/nas; \
/nas/sbin/rootfs_ckpt ${SNAPNAME} -Restore -Force"

# Put the preserved control files back in place
cp -fp /tmp/*.ctl ${THE_DATAMNT}

# Mount, recover, and open the database
sqlplus /nolog <<EOF2
connect / as sysdba
startup mount;
set autorecovery on;
recover database;
alter database open;
exit
EOF2
date

System output of log_bkup_restore.bash
The output our system produced after running log_bkup_restore.bash is shown below:

[oracle@mteoradb51 oracle]$ ./log_bkup_restore.bash
SNAPNAME is datafs_chkpt1
"EMC" is a registered trademark of EMC Corporation, and "Linux" is a registered trademark of Linus Torvalds. "Celerra" is a trademark of EMC.
EMC Celerra Control Station Linux 1.0
operation in progress (not interruptible)...id = 155
name = datafs_chkpt
(output deleted for brevity)
stor_devs = APM
disks = d22
disk=d22 stor_dev=apm addr=c0t2l4 server=server_2

disk=d22 stor_dev=apm addr=c16t2l4 server=server_2

SQL*Plus: Release Production on Tue May 18 10:13:
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> Connected to an idle instance.
SQL> ORACLE instance started.
Total System Global Area  E+10 bytes
Fixed Size                     bytes
Variable Size                  bytes
Database Buffers          E+10 bytes
Redo Buffers                   bytes
Database mounted.
Media recovery complete.
SQL> Database altered.
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
Thu May 20 10:45:57 EDT 2010
[oracle@mteoradb51 oracle]$

Section D: Comparison - EMC SnapSure and Flashback Database

Overview
Oracle Database 11g includes the Flashback Database command. In some respects it is similar to using EMC SnapSure to create a logical backup: both tools provide the ability to revert the database to a point in time, and thus allow you to undo certain user errors that affect the database. However, Flashback Database has certain limitations, described below. Both technologies should be evaluated carefully, as many customers choose to use both.

I/O
Flashback Database requires a separate set of logs, increasing I/O at the database layer.

The SnapSure checkpoints require some I/O as well, but this is at the storage layer, significantly lower in the stack than the database. In general, SnapSure checkpoints are lighter in weight than the flashback logs.

Restore time
The amount of time required to restore a database to a point in time using Flashback Database will be longer than with a Celerra SnapSure checkpoint restore. However, SnapSure checkpoints require you to apply archive logs, and Flashback Database does not, so the mean time to recovery may vary between the two features. For Flashback Database, the mean time to recovery is proportional to the amount of time you are discarding. In the case of Celerra SnapSure, the number of archived redo logs that must be applied is the major factor; because of this, the frequency of logical backup largely determines the mean time to recovery.

Degree of protection from logical errors
Flashback Database does not protect you from all logical errors. For example, deleting a file or directory in the file system cannot be recovered by Flashback Database, but can be recovered using Celerra SnapSure checkpoints. Only errors or corruptions created within the database can be corrected using Flashback Database.

Section E: Physical backup and restore using Oracle RMAN

RMAN and Celerra
Physical backup of the Celerra array can be accomplished using Oracle RMAN. The backup target is typically SATA or LCFC disks on the Celerra array. If tape is used, a product that includes a media management layer, such as EMC NetWorker or Oracle Secure Backup, must be used. Normal RMAN semantics apply to this backup method; this is thoroughly covered on the Oracle Technology Network website and is not repeated in this document.

RMAN backup script: rmanbkp.bash
Run the following script from the database server to carry out a physical backup of a Celerra array using Oracle RMAN:

#!/bin/bash
################################################
# Script:  rmanbkp.bash                        #
# Purpose: Creates an RMAN backup              #
################################################
echo "This is rmanbkp.bash"
echo Starting RMAN Backup

rman <<EOF1
connect target /
backup database;
backup current controlfile;
exit
EOF1
echo "End of RMAN backup rmanbkp.bash"

System output of rmanbkp.bash
The output our system produced after running rmanbkp.bash is shown below:

[oracle@mteoradb1 ~]$ ./rmanbkp.bash
This is rmanbkp.bash
Starting RMAN Backup

Recovery Manager: Release Production on Fri May 21 06:53:
Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN> connected to target database: MTERAC5 (DBID= )
RMAN> Starting backup at 21-MAY-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=4166 instance=mterac51 devtype=disk
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00008 name=/u02/oradata/mterac5/test01.dbf
input datafile fno=00009 name=/u02/oradata/mterac5/test02.dbf
input datafile fno=00010 name=/u02/oradata/mterac5/test03.dbf
input datafile fno=00011 name=/u02/oradata/mterac5/test04.dbf
input datafile fno=00012 name=/u02/oradata/mterac5/test05.dbf
input datafile fno=00013 name=/u02/oradata/mterac5/test06.dbf
input datafile fno=00014 name=/u02/oradata/mterac5/test07.dbf
input datafile fno=00015 name=/u02/oradata/mterac5/test08.dbf
input datafile fno=00016 name=/u02/oradata/mterac5/test09.dbf
input datafile fno=00017 name=/u02/oradata/mterac5/test10.dbf
input datafile fno=00018 name=/u02/oradata/mterac5/test11.dbf
input datafile fno=00019 name=/u02/oradata/mterac5/test12.dbf
input datafile fno=00020 name=/u02/oradata/mterac5/test13.dbf
including current control file in backupset
channel ORA_DISK_1: starting piece 1 at 21-MAY-10
channel ORA_DISK_1: finished piece 1 at 21-MAY-10
piece handle=/u06/oradata/mterac5/backupset/2010_05_21/o1_mf_ncnnf_TAG T065724_3fj98bcs_.bkp tag=TAG T comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 01:10:00
Finished backup at 21-MAY-10
RMAN>
End of RMAN backup rmanbkp.bash
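The rmanbkp.bash script above takes a full backup. For databases larger than 500 GB, Section B recommends an incremental strategy; a minimal sketch of the corresponding RMAN commands is shown below. This sketch is not part of the validated scripts, and the level 0 and level 1 runs would normally be scheduled on different days (for example, a weekly level 0 and daily level 1).

rman <<EOF1
connect target /
# Level 0 incremental: the baseline backup of every block
backup incremental level 0 database;
# Level 1 incremental: backs up only blocks changed since the last level 0 or level 1
backup incremental level 1 database;
backup current controlfile;
exit
EOF1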

Chapter 10: Data Protection and Replication

Overview
This solution provides options to create local and remote replicas of application data that are suitable for testing, development, reporting, disaster recovery, and many other operations that can be important in your environment.

EMC SnapSure and Oracle Data Guard
The best practice for disaster recovery of an Oracle Database 11g over NFS is to use Celerra fs_copy to seed the disaster recovery copy of the production database, and then to use the Oracle Data Guard log transport and log apply services. The source of the database used for seeding the disaster recovery site can be a hot backup of the production database within a Celerra SnapSure checkpoint. This avoids any downtime on the production server while seeding the disaster recovery database. The configuration steps for shipping the redo logs and bringing up the standby database are accomplished using Oracle Data Guard. The Data Guard failover operation was performed in MAXIMUM AVAILABILITY mode. For best practices on Oracle Data Guard configuration, refer to the Oracle documentation on this subject. The following image illustrates the setup for disaster recovery using Celerra fs_copy and Oracle Data Guard.
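In addition to the illustrated topology, redo shipping from the primary is driven by standard Data Guard initialization parameters. The following is a minimal sketch of the primary-side log transport settings for MAXIMUM AVAILABILITY; the database unique names (proddb, standbydb) and the use of an spfile are assumptions, and the Oracle Data Guard documentation remains the authoritative reference.

-- Hypothetical primary-side log transport settings (DB unique names are placeholders)
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(proddb,standbydb)' SCOPE=both;
ALTER SYSTEM SET log_archive_dest_2='SERVICE=standbydb SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standbydb' SCOPE=both;
ALTER SYSTEM SET log_archive_dest_state_2='ENABLE' SCOPE=both;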

Chapter 11: Test/Dev

Overview

Introduction
The following use case scenarios were validated as part of this solution.

Single-instance test/dev using Celerra SnapSure writeable checkpoints
Create a Celerra SnapSure writeable checkpoint while a Benchmark Factory workload is running on a four-node RAC production database. Use the checkpoint to bring up a test/dev database as a single instance on a VMware host.

RAC test/dev using Celerra SnapSure writeable checkpoints
Create a Celerra SnapSure writeable checkpoint while a Benchmark Factory workload is running on the four-node RAC production database. Use the checkpoint to bring up the test/dev database as a RAC database on a different set of four physical servers.

A separate test/dev environment is created by replicating the production database using the EMC Celerra SnapSure writeable checkpoints feature. The replicated database is a read/writeable copy that can be used by developers and testers without impacting the production database. Unchanged blocks are shared between the production database and the test/dev database, so care must be taken to ensure that I/O to the test/dev database does not impose an unreasonable burden on the production database. The procedure used to create the test/dev database is similar to the procedure used to create a logical storage backup using SnapSure.

Database cloning

The importance of cloning
The ability to clone a running production Oracle database is a key requirement for many customers. The creation of test and development databases, datamart and data warehouse staging, and Oracle and OS version migration are just a few applications of this important functionality.

Cloning methods
Two methods can be used for database cloning:

Full clone
A full clone involves taking a full copy of the entire database. Full cloning is recommended for small databases or for a one-time cloning process.

Incremental cloning
Incremental cloning is more complex: a full copy is made on the first iteration and, thereafter, only the changed data is copied on each subsequent iteration to update the clone. Incremental cloning is recommended for larger databases and for situations where there is an ongoing or continuous need to clone the production database.

Creating a writeable clone
EMC provides online, zero-downtime cloning of Oracle databases using the Celerra fs_copy feature. The best practice for creating a writeable clone of a production Oracle Database 11g over NFS is:

1. Take a hot backup of the database using a SnapSure checkpoint.
2. Copy that hot backup to another location (possibly within the same Celerra array) using Celerra fs_copy.
3. Run a recovery against the hot backup copy to bring it to a consistent state.

Configuring Oracle to facilitate cloning

CSS disktimeout parameter
The Cluster Synchronization Services (CSS) component of Oracle Clusterware maintains a heartbeat parameter called disktimeout. This parameter governs how long RAC nodes can go without completing I/O to the back-end storage before a node is evicted. During validation of the test/dev solution using writeable checkpoints over pure NFS, we found that the disktimeout parameter should be set to a value of at least 900 so that the test/dev operation can be performed successfully without impacting the production database. Setting the disktimeout parameter to a higher value does not have any performance impact.

Configuring a value for CSS disktimeout
To configure a value for the disktimeout parameter, type the following command:

$ORA_CRS_HOME/bin/crsctl set css disktimeout 900

In this example, the disktimeout parameter is set to 900.

Oracle Clusterware and database software
The Oracle Clusterware and database software should be installed at the same location at both the production site and the clone target site. The following paths should be identical on both the source and the clone target sites:

ORA_CRS_HOME=/u01/crs/oracle/product/10/crs
ORACLE_HOME=/u01/app/oracle/oracle/product/10.2.0/db_1

The Oracle Cluster Registry (OCR) file and the voting disks for the source and clone target sites should be placed on separate, independent file systems. The kernel parameters, memory settings, and the database directory structure should be identical on both the source and clone target sites.

Creating a test/dev system using Celerra SnapSure writeable checkpoints

Overview
This section describes how to use Celerra SnapSure writeable checkpoints to create a writeable test/dev database that contains a copy of an Oracle production database. This process can be used to create both single-instance and RAC test/dev systems. The test/dev system can be used for:

- Testing and development
- Data warehouse staging
- Backup

- Any other purpose for which you need a copy of the production database

Note
The test/dev system shares unchanged blocks with the production database, so reads of those blocks will impact production database performance as well.

Prerequisites
Ensure that the following prerequisites are in place before you create a test/dev system using Celerra SnapSure writeable checkpoints.

RAC and single-instance:
- The target clone nodes have access to the Celerra system that is hosting the production database file systems.
- Two new file systems, snap_log1 and snap_log2, have been created to store the redo logs of the clone target database.
- The database software is installed on the target clone nodes, at the same location as on the production database servers. For more information, see Chapter 11: Test/Dev > Configuring Oracle to facilitate cloning.

RAC only:
- Cluster services are up and running on the RAC nodes.

Process for creating a test/dev system
Creating a test/dev system using SnapSure writeable checkpoints consists of the following stages:

1. Create the writeable checkpoints: a checkpoint for the Oracle data files (datafs) and a checkpoint for the Oracle archive files (archfs).
2. Mount the checkpoints on the target test/dev system.
3. Recover the database from the checkpoints.

Creating writeable checkpoints
The following steps create writeable checkpoints of the Oracle data files (datafs) and the Oracle archive logs (archfs):

1. Check that the production database is in archive log mode:
   MTERAC15> archive log list
   This produced the following output on our test system:
   Database log mode              Archive Mode
   Automatic archival             Enabled
   Archive destination            /u05/mterac15
   Oldest online log sequence     7
   Next log sequence to archive   8
   Current log sequence           8
   MTERAC15>
2. Place the production database into hot backup mode:
   MTERAC15> alter database begin backup;
   This produced the following output on our test system:
   Database altered.
3. Create a writeable checkpoint of the database data file system (datafs):
   [root@mteorans40-1 ~]# fs_ckpt datafs -Create -readonly n
   To view the output that this command produced on our test system, refer to: Supporting Information > Test/dev using Celerra SnapSure: scripts and outputs > Create a writeable checkpoint
4. Take the production database out of hot backup mode:
   MTERAC15> alter database end backup;
   Database altered.
5. Archive the current redo log file:
   MTERAC15> alter system archive log current;
   System altered.
6. List the current active log files:
   MTERAC15> SELECT member FROM v$log l, v$logfile f WHERE l.group# = f.group# AND l.status='CURRENT';
   To view the output that this command produced on our test system, refer to: Supporting Information > Test/dev using Celerra SnapSure: scripts and outputs > Create a writeable checkpoint
8. Repeat steps 1 to 7 to create a writeable checkpoint of the database archive log file system (archfs).
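For illustration, steps 2 through 5 above could be wrapped in a small script along the following lines. The Control Station name and the bash wrapper itself are assumptions (the fs_ckpt command is taken from step 3), and the script assumes the oracle environment (ORACLE_SID, ORACLE_HOME) is already set and passwordless ssh is in place.

#!/bin/bash
# Sketch: quiesce the database, take a writeable checkpoint, resume (names are placeholders)
THE_CS=mteorans40-1            # Celerra Control Station

sqlplus /nolog <<EOF
connect / as sysdba
alter database begin backup;
exit
EOF

# Create the writeable checkpoint of the datafile file system on the Celerra
ssh root@${THE_CS} "export NAS_DB=/nas; fs_ckpt datafs -Create -readonly n"

sqlplus /nolog <<EOF
connect / as sysdba
alter database end backup;
alter system archive log current;
exit
EOF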

Mounting writeable checkpoints
The following steps mount the writeable checkpoints of the Oracle data files (datafs) and the Oracle archive logs (archfs) on a test/dev target:

1. Create NFS exports for the data and archive writeable checkpoints:
   [root@mteorans40-1 ~]# server_export server_2 -P nfs \
   /datafs_ckpt1_writeable1
   server_2 : done
   [root@mteorans40-1 ~]# server_export server_2 -P nfs \
   /archfs_ckpt1_writeable1
   server_2 : done
2. Mount the data and archive file system writeable checkpoints, and the snap_log1 and snap_log2 file systems, on the test/dev database server. Mount them to the same directory structure in which the respective database components are placed on the production database servers:
   [root@mteoraesx1-vm6 ~]# cat /etc/fstab
   ---Relevant portion only shown
   :/datafs_ckpt1_writeable1/oradata /u02/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=
   :/snap_log1/oradata /u03/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=
   :/snap_log2/oradata /u04/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=
   :/archfs_ckpt1_writeable1 /u05 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=
   [root@mteoraesx1-vm6 ~]# mount /u02/oradata
   [root@mteoraesx1-vm6 ~]# mount /u03/oradata
   [root@mteoraesx1-vm6 ~]# mount /u04/oradata
   [root@mteoraesx1-vm6 ~]# mount /u05
3. On the test/dev database server, set the environment variables ORACLE_SID and ORACLE_HOME to the same settings as on the production database servers, or as per the Oracle installation procedure:
   [oracle@mteoraesx1-vm6 ~]$ export ORACLE_SID=mterac155
   [oracle@mteoraesx1-vm6 ~]$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
4. Create the required database dump directories on the test/dev database server:

   [oracle@mteoraesx1-vm6 ~]$ mkdir $ORACLE_HOME/admin
   [oracle@mteoraesx1-vm6 ~]$ mkdir $ORACLE_HOME/admin/mterac155
   [oracle@mteoraesx1-vm6 ~]$ mkdir $ORACLE_HOME/admin/mterac155/adump
   [oracle@mteoraesx1-vm6 ~]$ mkdir $ORACLE_HOME/admin/mterac155/bdump
   [oracle@mteoraesx1-vm6 ~]$ mkdir $ORACLE_HOME/admin/mterac155/cdump
   [oracle@mteoraesx1-vm6 ~]$ mkdir $ORACLE_HOME/admin/mterac155/dpdump
   [oracle@mteoraesx1-vm6 ~]$ mkdir $ORACLE_HOME/admin/mterac155/hdump
   [oracle@mteoraesx1-vm6 ~]$ mkdir $ORACLE_HOME/admin/mterac155/udump
   [oracle@mteoraesx1-vm6 ~]$ mkdir $ORACLE_HOME/admin/mterac155/pfile
5. Configure the listener.ora and tnsnames.ora files on the test/dev database server.
   Note: If the clone target is RAC, repeat steps 1 to 4 on each of the RAC nodes of the target clone database.
6. Create a pfile for the test/dev instance; use the parameter file from the production instance as a base template.
7. Modify the parameter file depending on whether the target is a single-instance database or a RAC database. Ensure that parameters such as control_files, db_recovery_file_dest, db_recovery_file_dest_size, and so on, are updated. If the clone target is a RAC database, update the parameter file on all RAC nodes.
8. Copy the control file from the production database server to the corresponding location on the test/dev database server:
   [oracle@mteoradb51 ~]$ scp /u03/oradata/mterac15/controlfile/o1_mf_40qxyh25_.ctl oracle@mteoraesx1-vm6:/u03/oradata/mterac15/controlfile/
   oracle@mteoraesx1-vm6's password:
   o1_mf_40qxyh25_.ctl   100%

   15MB  14.6MB/s  00:01
   [oracle@mteoradb51 ~]$
9. RAC only: configure the target clone RAC database by adding database instances.
   Note: This step is required only if the clone target is a RAC database. The target clone database must be configured by adding the appropriate database and corresponding instances.
   [oracle@mteoradb63 ~]$ srvctl add database -d mterac20 -o /u01/app/oracle/product/10.2.0/db_1
   [oracle@mteoradb63 ~]$ srvctl add instance -d mterac20 -i mterac201 -n mteoradb63
   [oracle@mteoradb63 ~]$ srvctl add instance -d mterac20 -i mterac202 -n mteoradb64
   [oracle@mteoradb63 ~]$ srvctl add instance -d mterac20 -i mterac203 -n mteoradb65
   [oracle@mteoradb63 ~]$ srvctl add instance -d mterac20 -i mterac204 -n mteoradb66
   [oracle@mteoradb63 ~]$ srvctl config database mterac20
10. Start up the test/dev database in mount mode. If the clone target is a RAC database, execute the following from the first node:
   MTERAC155> startup mount
   ORACLE instance started.
   Total System Global Area   bytes
   Fixed Size                 bytes
   Variable Size              bytes
   Database Buffers           bytes
   Redo Buffers               bytes
   Database mounted.
   MTERAC155>

Recovering a database from writeable checkpoints
The following steps restore the database after the SnapSure writeable checkpoints of the Oracle data files (datafs) and the Oracle archive files (archfs) have been mounted on the test/dev target:

1. List the available archive logs. This list is required for recovering the test/dev database.
   [oracle@mteoraesx1-vm6 dbs]$ ls -ltr /u05/mterac15/
   To view the output that this command produced on our test system, refer to: Supporting Information > Test/dev using Celerra SnapSure: scripts and outputs > List the available archive logs and Create a writeable checkpoint

2. Recover the test/dev database. Specify the filenames of the archive logs from the list displayed in step 1. At the end, specify the current log file name from the same thread.
   MTERAC155> recover database until cancel;
   ORA-00279: change generated at 04/22/ :23:06 needed for thread 1
   ORA-00289: suggestion : /u05/mterac15/1_17_ dbf
   ORA-00280: change for thread 1 is in sequence #17
   Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
   /u05/mterac15/1_17_ dbf
   ORA-00279: change generated at 04/22/ :18:49 needed for thread 2
   ORA-00289: suggestion : /u05/mterac15/2_4_ dbf
   ORA-00280: change for thread 2 is in sequence #4
   Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
   /u05/mterac15/2_4_ dbf
   ORA-00279: change generated at 04/22/ :19:00 needed for thread 3
   ORA-00289: suggestion : /u05/mterac15/3_13_ dbf
   ORA-00280: change for thread 3 is in sequence #13
   Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
   /u05/mterac15/3_13_ dbf
   ORA-00279: change generated at 04/22/ :19:52 needed for thread 4
   ORA-00289: suggestion : /u05/mterac15/4_17_ dbf
   ORA-00280: change for thread 4 is in sequence #17
   Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
   /u05/mterac15/4_21_ dbf
   ORA-00279: change generated at 03/21/ 01:23:28 needed for thread 4

   ORA-00289: suggestion : /u05/mterac15/4_18_ dbf
   ORA-00280: change for thread 4 is in sequence #18
   ORA-00278: log file '/u05/4_17_ dbf' no longer needed for this recovery
   Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
   cancel
   Media recovery cancelled.
3. Open the test/dev database with the resetlogs option:
   MTERAC155> alter database open resetlogs;
4. Verify that the test/dev database instance is opened in read-write mode on the clone target:
   MTERAC155> select name, open_mode from v$database;
   NAME    OPEN_MODE
   PRIMDB  READ WRITE
5. If the clone target is a RAC database, shut down and restart both the database and the cluster services. Verify that the cluster services are up and that all the RAC database instances are online:
   [oracle@mteoradb63 ~]$ crs_stat -t
   To view the output that this command produced on our test system, refer to: Supporting Information > Test/dev using Celerra SnapSure: scripts and outputs > Verify that RAC nodes are online

Chapter 12: Conclusion

Overview

Introduction
The EMC Celerra unified storage platform's high-availability features, combined with EMC's proven storage technologies, provide a very attractive storage system for Oracle RAC 11g over Oracle Direct NFS (DNFS).

Conclusion
The solution simplifies database installation, backup, and recovery. The solution enables the Oracle RAC 11g configuration by providing shared disk. The Data Mover failover capability provides uninterruptible database access. Redundant components at every level, such as the network connections, back-end storage connections, RAID, and power supplies, achieve a very high level of fault tolerance, thereby providing continuous storage access to the database. Celerra with SnapSure provides rapidly available backups. The overall Celerra architecture and its connectivity to the back-end storage make it highly scalable, with the ease of increasing capacity by simply adding components for immediate usability. Running Oracle RAC 11g with Celerra provides the best availability, scalability, manageability, and performance for your database applications.

Reduced total cost of ownership
In any reasonable configuration, the database server's CPU is the most precious component of the entire architecture. Therefore, the over-arching principle of EMC's Oracle RAC 11g solutions for midsize enterprises is to free up the database server's CPU (as well as memory and I/O channels) from utility operations such as backup and recovery, disaster recovery staging, test/dev, and cloning. The highest and best use of the database server's CPUs is to parse and execute the SQL statements that are required by the application user.

CPU usage
This solution reduces the load on the database server CPU by using:

- EMC SnapSure to carry out a physical backup of an Oracle 11g production database while offloading all performance impacts of the backup operation from the production server.
- EMC Replication Manager for NFS with EMC SnapSure to carry out a physical backup of an Oracle 11g production database while offloading all performance impacts of the backup operation from the production server.
- Oracle DNFS to achieve better performance through reduced memory consumption and CPU utilization.

Improved performance
The Direct NFS (DNFS) client performs concurrent I/O by bypassing the operating system kernel NFS stack. The benefits of this are:

- Consistent NFS performance across all operating systems.
- DNFS is optimized for database workloads and supports asynchronous I/O, which is suitable for most databases; it delivers optimized performance by automatically load balancing across the available paths. Load balancing in DNFS is frequently superior to that of the conventional Linux kernel NFS (KNFS).

Ease of use
The use of DNFS simplifies network setup and management by eliminating administration tasks such as:

- Setting up network subnets
- LACP bonding
- Tuning of Linux NFS parameters

Load balancing and high availability (HA) are managed internally within the DNFS client.

Business continuity

Advanced backup and recovery
Advanced backup and recovery with EMC SnapSure dramatically improves the mean time to recovery (MTTR) by reducing the time required for the restore operation. Further, because the backup operation has minimal impact on database server performance, backups can be run more often. This means that the recovery operation is also optimized, since fewer archived logs must be applied. In this solution, one of the components using Replication Manager included advanced backup and recovery using EMC SnapSure checkpoints.

Test/dev
The ability to deploy a writeable copy of the production database is required by many customers. The process of provisioning this copy must have minimal, if any, performance impact on the production database server, and absolutely no downtime can be tolerated. The test/dev solution documented here provides this using EMC Replication Manager for NFS with EMC SnapSure writeable checkpoints.

Robust performance and scaling
The resiliency testing carried out by EMC ensures that the database configuration is reliable. High availability is used at every major layer of the solution, including the database server, NAS file server, and back-end SAN array. By testing the fault tolerance of all of these layers, the ability of the application to withstand hardware failures with no downtime is assured. The performance testing carried out by EMC utilizes an industry-standard OLTP benchmark, but does so without exotic tunings that are not compliant with best practices. In addition, real-world configurations that the customer is likely to deploy are used.

This enables the customer to be reasonably assured that the configuration they choose to run their application on will perform predictably and reliably.

Chapter 13: Supporting Information

Overview

Introduction
This chapter contains supporting information, plus the scripts and system outputs referred to in this guide.

Managing and monitoring Celerra

Celerra Manager
Celerra Manager is a web-based graphical user interface (GUI) for remote administration of a Celerra unified storage platform. Various tools within Celerra Manager provide the ability to monitor the Celerra and to highlight potential problems that have occurred or could occur in the future. Some of these tools are delivered with the basic version of Celerra Manager, while more detailed monitoring capabilities are delivered in the advanced version. Celerra Manager can also be used to create Ethernet channels, link aggregations, and fail-safe networks.

Celerra Data Mover ports
The following image illustrates the network ports on the rear of two EMC NS-480 Data Movers. The storage network ports are cge0 through cge3, on the top of each Data Mover. For KNFS, ports cge0 (the last character is a zero) and cge1 are aggregated and connected to the storage network. They handle all I/O required by the database servers to the datafiles, online redo log files, archived log files, control files, OCR file, and voting disk. Ports cge2 and cge3 are left open for future growth. Link aggregation is removed for validating DNFS; each of the four individual ports, cge0, cge1, cge2, and cge3, is used as an independent port to validate DNFS.

Enterprise Grid Control storage monitoring plug-in
EMC recommends use of the Oracle Enterprise Manager monitoring plug-in for the EMC Celerra unified storage platform. This system monitoring plug-in enables you to:

- Realize immediate value through out-of-box availability and performance monitoring
- Realize lower costs through knowledge: know what you have and what has changed

- Centralize all of the monitoring information in a single console
- Enhance service modeling and perform comprehensive root cause analysis

More information on the plug-in for an EMC Celerra server is available on the Oracle Technology Network under the title Oracle Enterprise Manager 10g System Monitoring Plug-In for EMC Celerra Server. The following image shows the EMC Celerra OEM plug-in.


More information

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect Vblock Architecture Andrew Smallridge DC Technology Solutions Architect asmallri@cisco.com Vblock Design Governance It s an architecture! Requirements: Pretested Fully Integrated Ready to Go Ready to Grow

More information

EMC BUSINESS CONTINUITY FOR VMWARE VIEW 5.1

EMC BUSINESS CONTINUITY FOR VMWARE VIEW 5.1 White Paper EMC BUSINESS CONTINUITY FOR VMWARE VIEW 5.1 EMC VNX Replicator, VMware vcenter Site Recovery Manager, and VMware View Composer Automating failover of virtual desktop instances Preserving user

More information

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE White Paper EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE EMC XtremSF, EMC XtremCache, EMC Symmetrix VMAX and Symmetrix VMAX 10K, XtremSF and XtremCache dramatically improve Oracle performance Symmetrix

More information

Database Solutions Engineering. Best Practices for Deploying SSDs in an Oracle OLTP Environment using Dell TM EqualLogic TM PS Series

Database Solutions Engineering. Best Practices for Deploying SSDs in an Oracle OLTP Environment using Dell TM EqualLogic TM PS Series Best Practices for Deploying SSDs in an Oracle OLTP Environment using Dell TM EqualLogic TM PS Series A Dell Technical White Paper Database Solutions Engineering Dell Product Group April 2009 THIS WHITE

More information

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.

More information

INTEGRATED INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNXE3300, VMWARE VSPHERE 4.1, AND VMWARE VIEW 4.5

INTEGRATED INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNXE3300, VMWARE VSPHERE 4.1, AND VMWARE VIEW 4.5 White Paper INTEGRATED INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNXE3300, VMWARE VSPHERE 4.1, AND VMWARE VIEW 4.5 EMC GLOBAL SOLUTIONS Abstract This white paper describes a simple, efficient,

More information

White Paper. A System for Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft

White Paper. A System for  Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft White Paper Mimosa Systems, Inc. November 2007 A System for Email Archiving, Recovery, and Storage Optimization Mimosa NearPoint for Microsoft Exchange Server and EqualLogic PS Series Storage Arrays CONTENTS

More information

BENEFITS AND BEST PRACTICES FOR DEPLOYING SSDS IN AN OLTP ENVIRONMENT USING DELL EQUALLOGIC PS SERIES

BENEFITS AND BEST PRACTICES FOR DEPLOYING SSDS IN AN OLTP ENVIRONMENT USING DELL EQUALLOGIC PS SERIES WHITE PAPER BENEFITS AND BEST PRACTICES FOR DEPLOYING SSDS IN AN OLTP ENVIRONMENT USING DELL EQUALLOGIC PS SERIES Using Solid State Disks (SSDs) in enterprise storage arrays is one of today s hottest storage

More information

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage A Dell Technical White Paper Dell Database Engineering Solutions Anthony Fernandez April 2010 THIS

More information

iscsi Technology Brief Storage Area Network using Gbit Ethernet The iscsi Standard

iscsi Technology Brief Storage Area Network using Gbit Ethernet The iscsi Standard iscsi Technology Brief Storage Area Network using Gbit Ethernet The iscsi Standard On February 11 th 2003, the Internet Engineering Task Force (IETF) ratified the iscsi standard. The IETF was made up of

More information

Storage Optimization with Oracle Database 11g

Storage Optimization with Oracle Database 11g Storage Optimization with Oracle Database 11g Terabytes of Data Reduce Storage Costs by Factor of 10x Data Growth Continues to Outpace Budget Growth Rate of Database Growth 1000 800 600 400 200 1998 2000

More information

EMC Infrastructure for Virtual Desktops

EMC Infrastructure for Virtual Desktops EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vsphere 4, and Citrix XenDesktop 4 Proven Solution Guide EMC for Enabled by

More information

EMC Celerra Unified Storage Platforms

EMC Celerra Unified Storage Platforms EMC Solutions for Microsoft Exchange 2007 EMC Celerra Unified Storage Platforms EMC NAS Product Validation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com Copyright

More information

EMC Business Continuity for Microsoft Exchange 2010

EMC Business Continuity for Microsoft Exchange 2010 EMC Business Continuity for Microsoft Exchange 2010 Enabled by EMC Unified Storage and Microsoft Database Availability Groups Proven Solution Guide Copyright 2011 EMC Corporation. All rights reserved.

More information

EMC Solutions for Microsoft Exchange 2007 CLARiiON CX3 Series iscsi

EMC Solutions for Microsoft Exchange 2007 CLARiiON CX3 Series iscsi EMC Solutions for Microsoft Exchange 2007 CLARiiON CX3 Series iscsi Best Practices Planning Abstract This white paper presents the best practices for optimizing performance for a Microsoft Exchange 2007

More information

Cisco HyperFlex All-Flash Systems for Oracle Real Application Clusters Reference Architecture

Cisco HyperFlex All-Flash Systems for Oracle Real Application Clusters Reference Architecture Cisco HyperFlex All-Flash Systems for Oracle Real Application Clusters Reference Architecture 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of

More information

STORAGE CONSOLIDATION WITH IP STORAGE. David Dale, NetApp

STORAGE CONSOLIDATION WITH IP STORAGE. David Dale, NetApp STORAGE CONSOLIDATION WITH IP STORAGE David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in

More information

STORAGE CONSOLIDATION WITH IP STORAGE. David Dale, NetApp

STORAGE CONSOLIDATION WITH IP STORAGE. David Dale, NetApp STORAGE CONSOLIDATION WITH IP STORAGE David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in

More information

EMC Business Continuity for Microsoft Office SharePoint Server 2007

EMC Business Continuity for Microsoft Office SharePoint Server 2007 EMC Business Continuity for Microsoft Office SharePoint Server 27 Enabled by EMC CLARiiON CX4, EMC RecoverPoint/Cluster Enabler, and Microsoft Hyper-V Proven Solution Guide Copyright 21 EMC Corporation.

More information

DELL EMC UNITY: HIGH AVAILABILITY

DELL EMC UNITY: HIGH AVAILABILITY DELL EMC UNITY: HIGH AVAILABILITY A Detailed Review ABSTRACT This white paper discusses the high availability features on Dell EMC Unity purposebuilt solution. October, 2017 1 WHITE PAPER The information

More information

VMware vsphere with ESX 4 and vcenter

VMware vsphere with ESX 4 and vcenter VMware vsphere with ESX 4 and vcenter This class is a 5-day intense introduction to virtualization using VMware s immensely popular vsphere suite including VMware ESX 4 and vcenter. Assuming no prior virtualization

More information

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Database Solutions Engineering By Raghunatha M, Ravi Ramappa Dell Product Group October 2009 Executive Summary

More information

InfoSphere Warehouse with Power Systems and EMC CLARiiON Storage: Reference Architecture Summary

InfoSphere Warehouse with Power Systems and EMC CLARiiON Storage: Reference Architecture Summary InfoSphere Warehouse with Power Systems and EMC CLARiiON Storage: Reference Architecture Summary v1.0 January 8, 2010 Introduction This guide describes the highlights of a data warehouse reference architecture

More information

Validating the NetApp Virtual Storage Tier in the Oracle Database Environment to Achieve Next-Generation Converged Infrastructures

Validating the NetApp Virtual Storage Tier in the Oracle Database Environment to Achieve Next-Generation Converged Infrastructures Technical Report Validating the NetApp Virtual Storage Tier in the Oracle Database Environment to Achieve Next-Generation Converged Infrastructures Tomohiro Iwamoto, Supported by Field Center of Innovation,

More information

EMC CLARiiON CX3-80 EMC Metropolitan Recovery for SQL Server 2005 Enabled by Replication Manager and MirrorView/S

EMC CLARiiON CX3-80 EMC Metropolitan Recovery for SQL Server 2005 Enabled by Replication Manager and MirrorView/S Enterprise Solutions for Microsoft SQL Server 2005 EMC CLARiiON CX3-80 EMC Metropolitan Recovery for SQL Server 2005 Enabled by Replication Manager and MirrorView/S Reference Architecture EMC Global Solutions

More information

Many organizations rely on Microsoft Exchange for

Many organizations rely on Microsoft Exchange for Feature section: Microsoft Exchange server 007 A Blueprint for Implementing Microsoft Exchange Server 007 Storage Infrastructures By Derrick Baxter Suresh Jasrasaria Designing a consolidated storage infrastructure

More information

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise Virtualization with VMware ESX and VirtualCenter SMB to Enterprise This class is an intense, five-day introduction to virtualization using VMware s immensely popular Virtual Infrastructure suite including

More information

EMC Solutions for Enterprises. EMC Tiered Storage for Oracle. ILM Enabled by EMC Symmetrix V-Max. Reference Architecture. EMC Global Solutions

EMC Solutions for Enterprises. EMC Tiered Storage for Oracle. ILM Enabled by EMC Symmetrix V-Max. Reference Architecture. EMC Global Solutions EMC Solutions for Enterprises EMC Tiered Storage for Oracle ILM Enabled by EMC Symmetrix V-Max Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009 EMC Corporation.

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes

More information

EMC CLARiiON CX3-80. Enterprise Solutions for Microsoft SQL Server 2005

EMC CLARiiON CX3-80. Enterprise Solutions for Microsoft SQL Server 2005 Enterprise Solutions for Microsoft SQL Server 2005 EMC CLARiiON CX3-80 EMC Long Distance Recovery for SQL Server 2005 Enabled by Replication Manager and RecoverPoint CRR Reference Architecture EMC Global

More information

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN White Paper VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN Benefits of EMC VNX for Block Integration with VMware VAAI EMC SOLUTIONS GROUP Abstract This white paper highlights the

More information

EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (NFS),VMWARE vsphere 4.1, VMWARE VIEW 4.6, AND VMWARE VIEW COMPOSER 2.

EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (NFS),VMWARE vsphere 4.1, VMWARE VIEW 4.6, AND VMWARE VIEW COMPOSER 2. EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (NFS),VMWARE vsphere 4.1, VMWARE VIEW 4.6, AND VMWARE VIEW COMPOSER 2.6 Reference Architecture EMC SOLUTIONS GROUP August 2011 Copyright

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING VMware Horizon View 6.0 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Data Protection EMC VSPEX Abstract This describes

More information

Disaster Recovery-to-the- Cloud Best Practices

Disaster Recovery-to-the- Cloud Best Practices Disaster Recovery-to-the- Cloud Best Practices HOW TO EFFECTIVELY CONFIGURE YOUR OWN SELF-MANAGED RECOVERY PLANS AND THE REPLICATION OF CRITICAL VMWARE VIRTUAL MACHINES FROM ON-PREMISES TO A CLOUD SERVICE

More information

EMC Solutions for Microsoft Exchange 2007 NS Series iscsi

EMC Solutions for Microsoft Exchange 2007 NS Series iscsi EMC Solutions for Microsoft Exchange 2007 NS Series iscsi Applied Technology Abstract This white paper presents the latest storage configuration guidelines for Microsoft Exchange 2007 on the Celerra NS

More information

Best Practices for deploying VMware ESX 3.x and 2.5.x server with EMC Storage products. Sheetal Kochavara Systems Engineer, EMC Corporation

Best Practices for deploying VMware ESX 3.x and 2.5.x server with EMC Storage products. Sheetal Kochavara Systems Engineer, EMC Corporation Best Practices for deploying VMware ESX 3.x and 2.5.x server with EMC Storage products Sheetal Kochavara Systems Engineer, EMC Corporation Agenda Overview of EMC Hardware and Software Best practices with

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes the steps required to deploy a Microsoft Exchange Server 2013 solution on

More information

EMC SAN Copy Command Line Interfaces

EMC SAN Copy Command Line Interfaces EMC SAN Copy Command Line Interfaces REFERENCE P/N 069001189 REV A13 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2006-2008 EMC Corporation. All

More information

EMC Infrastructure for Virtual Desktops

EMC Infrastructure for Virtual Desktops EMC Infrastructure for Virtual Desktops Enabled by EMC Unified Storage (FC), Microsoft Windows Server 2008 R2 Hyper-V, and Citrix XenDesktop 4 Proven Solution Guide EMC for Enabled

More information

EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON CX4 and Enterprise Flash Drives

EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON CX4 and Enterprise Flash Drives EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON CX4 and Enterprise Flash Drives A Detailed Review EMC Information Infrastructure Solutions Abstract This white paper demonstrates

More information

EMC VNX SCALING PERFORMANCE FOR ORACLE 12c RAC ON VMWARE VSPHERE 5.5

EMC VNX SCALING PERFORMANCE FOR ORACLE 12c RAC ON VMWARE VSPHERE 5.5 White Paper EMC VNX SCALING PERFORMANCE FOR ORACLE 12c RAC ON VMWARE VSPHERE 5.5 EMC Next-Generation VNX8000, EMC FAST Suite, and EMC SnapSure Automate storage performance Scale OLTP workloads Rapidly

More information

W H I T E P A P E R. Comparison of Storage Protocol Performance in VMware vsphere 4

W H I T E P A P E R. Comparison of Storage Protocol Performance in VMware vsphere 4 W H I T E P A P E R Comparison of Storage Protocol Performance in VMware vsphere 4 Table of Contents Introduction................................................................... 3 Executive Summary............................................................

More information

EMC CLARiiON Database Storage Solutions: Microsoft SQL Server 2000 and 2005

EMC CLARiiON Database Storage Solutions: Microsoft SQL Server 2000 and 2005 EMC CLARiiON Database Storage Solutions: Microsoft SQL Server 2000 and 2005 Best Practices Planning Abstract This technical white paper explains best practices associated with Microsoft SQL Server 2000

More information

iscsi Boot from SAN with Dell PS Series

iscsi Boot from SAN with Dell PS Series iscsi Boot from SAN with Dell PS Series For Dell PowerEdge 13th generation servers Dell Storage Engineering September 2016 A Dell Best Practices Guide Revisions Date November 2012 September 2016 Description

More information

Data Sheet: Storage Management Veritas Storage Foundation for Oracle RAC from Symantec Manageability and availability for Oracle RAC databases

Data Sheet: Storage Management Veritas Storage Foundation for Oracle RAC from Symantec Manageability and availability for Oracle RAC databases Manageability and availability for Oracle RAC databases Overview Veritas Storage Foundation for Oracle RAC from Symantec offers a proven solution to help customers implement and manage highly available

More information

Assessing performance in HP LeftHand SANs

Assessing performance in HP LeftHand SANs Assessing performance in HP LeftHand SANs HP LeftHand Starter, Virtualization, and Multi-Site SANs deliver reliable, scalable, and predictable performance White paper Introduction... 2 The advantages of

More information

EMC SAN Copy. Command Line Interface (CLI) Reference P/N REV A15

EMC SAN Copy. Command Line Interface (CLI) Reference P/N REV A15 EMC SAN Copy Command Line Interface (CLI) Reference P/N 069001189 REV A15 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2006-2010 EMC Corporation.

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 TRANSFORMING MICROSOFT APPLICATIONS TO THE CLOUD Louaye Rachidi Technology Consultant 2 22x Partner Of Year 19+ Gold And Silver Microsoft Competencies 2,700+ Consultants Worldwide Cooperative Support

More information

LIFECYCLE MANAGEMENT FOR ORACLE RAC 12c WITH EMC RECOVERPOINT

LIFECYCLE MANAGEMENT FOR ORACLE RAC 12c WITH EMC RECOVERPOINT WHITE PAPER LIFECYCLE MANAGEMENT FOR ORACLE RAC 12c WITH EMC RECOVERPOINT Continuous protection for Oracle environments Simple, efficient patch management and failure recovery Minimal downtime for Oracle

More information

Best Practices for Oracle 11g Backup and Recovery using Oracle Recovery Manager (RMAN) and Dell EqualLogic Snapshots

Best Practices for Oracle 11g Backup and Recovery using Oracle Recovery Manager (RMAN) and Dell EqualLogic Snapshots Dell EqualLogic Best Practices Series Best Practices for Oracle 11g Backup and Recovery using Oracle Recovery Manager (RMAN) and Dell EqualLogic Snapshots A Dell Technical Whitepaper Storage Infrastructure

More information

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018 EonStor GS Family Best Practices Guide White Paper Version: 1.1 Updated: Apr., 2018 Abstract: This guide provides recommendations of best practices for installation and configuration to meet customer performance

More information

Configuring a Single Oracle ZFS Storage Appliance into an InfiniBand Fabric with Multiple Oracle Exadata Machines

Configuring a Single Oracle ZFS Storage Appliance into an InfiniBand Fabric with Multiple Oracle Exadata Machines An Oracle Technical White Paper December 2013 Configuring a Single Oracle ZFS Storage Appliance into an InfiniBand Fabric with Multiple Oracle Exadata Machines A configuration best practice guide for implementing

More information

VERITAS Storage Foundation 4.0 TM for Databases

VERITAS Storage Foundation 4.0 TM for Databases VERITAS Storage Foundation 4.0 TM for Databases Powerful Manageability, High Availability and Superior Performance for Oracle, DB2 and Sybase Databases Enterprises today are experiencing tremendous growth

More information

Veritas InfoScale Enterprise for Oracle Real Application Clusters (RAC)

Veritas InfoScale Enterprise for Oracle Real Application Clusters (RAC) Veritas InfoScale Enterprise for Oracle Real Application Clusters (RAC) Manageability and availability for Oracle RAC databases Overview Veritas InfoScale Enterprise for Oracle Real Application Clusters

More information

Thinking Different: Simple, Efficient, Affordable, Unified Storage

Thinking Different: Simple, Efficient, Affordable, Unified Storage Thinking Different: Simple, Efficient, Affordable, Unified Storage EMC VNX Family Easy yet Powerful 1 IT Challenges: Tougher than Ever Four central themes facing every decision maker today Overcome flat

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING VMware Horizon View 5.3 and VMware vsphere for up to 2,000 Virtual Desktops Enabled by EMC Next-Generation VNX and EMC Powered Backup EMC VSPEX Abstract

More information