EMC Business Continuity for Oracle Database 11g/10g


1 EMC Business Continuity for Oracle Database 11g/10g Enabled by EMC CLARiiON CX4 and EMC Celerra Using FCP and NFS Proven Solution Guide

2 Copyright 2010 EMC Corporation. All rights reserved. Published January 2010 EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated. All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly. EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute. No warranty of system performance or price/performance is expressed or implied in this document. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners. Part number: H6647

3 Table of Contents Chapter 1: About this Document... 6 Overview... 6 Audience and purpose... 7 Scope... 8 Business challenge Technology solution Reference Architecture Validated environment profile Hardware and software resources Prerequisites and supporting documentation Terminology Typographic conventions Chapter 2: Storage Design Overview Concepts Best practices CX4 cache configuration for SnapView snapshot Storage processor failover LUN/RAID group layout Storage design layout Chapter 3: File System Overview File system layout Chapter 4: Application Design Overview Considerations Application design layout Memory configuration for Oracle 11g HugePages Chapter 5: Network Design Overview Concepts Best practices SAN network layout IP network layout Virtual LANs Jumbo frames Ethernet trunking and link aggregation Public and private networks Oracle RAC 11g/10g server network architecture

4 Chapter 6: Installation and Configuration Overview Task 1: Build the network infrastructure Task 2: Set up and configure ASM for CLARiiON Task 3: Set up and configure database servers Task 4: Configure NFS client options Task 5: Install Oracle Database 11g/10g Task 6: Configure database server memory options Task 7: Tune HugePages Task 8: Set database initialization parameters Task 9: Configure Oracle Database control files and logfiles Task 10: Enable passwordless authentication using SSH Task 11: Set up and configure CLARiiON storage for Replication Manager and SnapView Task 12: Install and configure EMC RecoverPoint Task 13: Set up the virtualized utility servers Task 14: Configure and connect EMC RecoverPoint appliances (RPAs) Task 15: Install and configure EMC MirrorView/A Task 16: Install and configure EMC CLARiiON (CX) splitters Chapter 7: Testing and Validation Overview Section A: Store solution component Test results Section B: Basic Backup solution component Test results Section C: Advanced Backup solution component Test results Section D: Basic Protect solution component Test results Section E: Advanced Protect solution component using EMC MirrorView and Oracle Data Guard.. 92 Test results Section F: Advanced Protect solution component using EMC RecoverPoint Test results Section G: Test/Dev solution component using EMC SnapView clone Test results Section H: Backup Server solution component Test results Section I: Migration solution component Test results Chapter 8: Virtualization Overview Advantages of virtualization Considerations VMware infrastructure

5 Virtualization best practices VMware ESX server VMware and NFS Chapter 9: Backup and Restore Overview Section A: Backup and restore concepts Advantages of logical storage Section B: Backup and recovery strategy Logical storage backup using EMC SnapView and EMC Replication Manager Section C: Physical backup and restore Physical backup using Oracle RMAN Section D: Replication Manager in Test/Dev and Advanced Backup solution components Chapter 10: Data Protection and Replication Overview Section A: Basic Protect using Oracle Data Guard Section B: Advanced Protect using EMC MirrorView and Oracle Data Guard Section C: Advanced Protect using EMC RecoverPoint Overview EMC RecoverPoint CLARiiON (CX) splitters Scope Best practices Conclusion Chapter 11: Test/Dev Solution Using EMC SnapView Clone Overview CLARiiON SnapView clone Best practices Mount and recovery of a target clone database using Replication Manager Database cloning Chapter 12: Migration Overview Chapter 13: Conclusion Overview

6 Chapter 1: About this Document Chapter 1: About this Document Overview Introduction EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with an insight into the challenges currently facing its customers. The introduction of the EMC Celerra unified storage platform prompted the creation of a solution that showcases the new capability of this unit: to expose the back-end EMC CLARiiON array to hosts for Fibre Channel Protocol (FCP) access, in addition to the normal Network File System (NFS) access previously provided by the Celerra NS Series. In this solution, all Oracle objects that require higher performance and lower latency I/O are placed over an FCP connection, and all other objects are placed over an NFS connection. This may sound counterintuitive as it requires the management of two separate storage protocols. However, it is far simpler to manage and configure this solution than to manage a solution using the FCP alone, and this solution provides identical performance. Thus, management and simplicity are improved by the blended FCP/NFS solution, while performance is not affected. This document summarizes a series of implementation procedures and best practices that were discovered, validated, or otherwise encountered during the validation of a solution for Oracle Database 11g/10g using the EMC Celerra unified storage platform and Oracle RAC 11g and 10g on Linux over FCP and NFS. Contents The content of this chapter includes the following topics. Topic See Page Audience and purpose 7 Scope 8 Business challenge 10 Technology solution 10 Reference Architecture 14 Validated environment profile 15 Hardware and software resources 15 Prerequisites and supporting documentation 17 Terminology 18 Typographic conventions 20 using FCP and NFS Proven Solution Guide 6

7 Chapter 1: About this Document Audience and purpose Audience The intended audience for the Proven Solution Guide is: Internal EMC personnel EMC partners Customers Purpose The purpose of this solution is to: Improve the performance, scalability, flexibility, and resiliency of an Oracle software stack that is physically booted on normal hardware by connecting multiple protocols to one storage platform as follows: Fibre Channel Protocol (FCP) and Oracle ASM to access high-demand, low-latency storage elements NFS to access all other storage elements Facilitate and reduce the risk of migrating existing Oracle Database 10g installations to 11g by providing documentation of best practices. Reduce cost by migrating an online production Oracle Database mounted over FCP to a target database mounted over NFS with no downtime and minimal performance impact. Improve the performance of an Oracle 11g or 10g production database by offloading all performance impacts of database operations, such as backup and recovery, using: EMC Replication Manager EMC SnapView These demonstrate significant performance and manageability benefits in comparison to normal Oracle Recovery Manager (RMAN) backup and recovery. Provide disaster recovery capability using: EMC RecoverPoint with CLARiiON splitters EMC MirrorView/Asynchronous over iSCSI These demonstrate significant performance and manageability benefits in comparison to normal Oracle Data Guard disaster recovery. Provide the capability to clone a running production database with minimal performance impact and no downtime using SnapView clones and Replication Manager. 7

8 Chapter 1: About this Document Scope Overview This section describes the components of the solution. Core solution components The following table describes the core solution components that are included in this solution: Component Scale-up OLTP Resiliency Description Using an industry-standard OLTP benchmark against a single database instance, comprehensive performance testing is performed to validate the maximum achievable performance using the solution stack of hardware and software. The purpose of resiliency testing is to validate the faulttolerance and high-availability features of the hardware and software stack. Faults are inserted into the configuration at various layers in the solutions stack. Some of the layers where fault tolerance is tested include: Oracle RAC node, Oracle RAC node interconnect port, storage processors, and Data Movers. Functionality solution components The following table describes the functionality solution components that are included in this solution: Component Description Basic Backup Advanced Backup Basic Protect Advanced Protect Test/dev This is backup and recovery using Oracle RMAN, the built-in backup and recovery tool provided by Oracle. This is backup and recovery using EMC value-added software or hardware. In this solution, the following are used to provide Advanced Backup functionality: EMC Replication Manager EMC SnapView snapshot This is disaster recovery using Oracle Data Guard, Oracle s built-in remote replication tool. This is disaster recovery using EMC value-added software and hardware: In this solution the following are used to provide Advanced Protect functionality: EMC RecoverPoint with CLARiiON splitters EMC MirrorView/A over iscsi A running production OLTP database is cloned with minimal, if any, performance impact on the production server, as well as no downtime. The resulting dataset is provisioned on 8

9 Chapter 1: About this Document Migration another server for use for testing and development. EMC Replication Manager is used to automate the test/dev process. An online production Oracle database that is mounted over FCP/ASM is migrated to a target database mounted using NFS, with no downtime and minimal performance impact on the production database. 9

10 Chapter 1: About this Document Business challenge Business challenges for midsize enterprises Midsize enterprises face the same challenges as their larger counterparts when it comes to managing database environments. These challenges include: Rising costs Control over resource utilization and scaling Lack of sufficient IT resources to deploy, manage, and maintain complex environments at the departmental level The need to reduce power, cooling, and space requirements Unlike large enterprises, midsize enterprises are constrained by smaller budgets and cannot afford a custom, one-off solution. This makes the process of creating a database solution for midsize enterprises even more challenging than for large enterprises. Technology solution Blended solution for midsize enterprises This solution demonstrates how organizations can: Deploy a solution using a combination of the NFS and FCP protocols on the Celerra. FCP is used for high-i/o and low-latency database objects (notably the datafiles, tempfiles, online redo logfiles, and controlfiles). NFS is used for all other database objects (consisting basically of the flashback recovery area, disk-based backups, archive logs, and CRS files). Manageability advantages are obtained by using a combination of FCP and NFS. Specifically, archived logs and backups can be accessed through a normal file system interface rather than ASM. Further, another clustered file system is not required for the CRS files. This simplifies the software installation and configuration on the database servers. Avoid investing in additional FC infrastructure by implementing a blended solution that uses both FCP and NFS to access storage elements. Work with different protocols in the blended solution to migrate an online production Oracle Database mounted over FCP to a target database mounted over NFS, with no downtime and minimal performance impact on the production database. Maximize the use of the database-server CPU, memory, and I/O channels by offloading performance impacts from the production server during: Backup and recovery by using Replication Manager or SnapView Disaster recovery operations by using RecoverPoint or MirrorView/A Reduce the complexity of backup operations and eliminate the need to implement scripted solutions by using Replication Manager. Save time and maximize system uptime when migrating existing Oracle Database 10g systems to Oracle Database 11g. Implement a disaster recovery solution with MirrorView/A over iscsi that reduces 10

11 Chapter 1: About this Document costs and complexity by using IP as the network protocol. Use SnapView to free up the database server s CPU, memory, and I/O channels from the effects of operations relating to backup, restore, and recovery. SnapView clones also help in creating test/development systems without any impact on the production environment. Blended FCP/NFS solution This is a blended FCP/NFS solution. Depending on the nature of the database object, either FCP or NFS is used to access it. The following table shows which protocol is used to access each database object. Database object Type Accessed using Datafiles Online redo logfiles Controlfiles Tempfiles Flashback recovery area Archive logs Disk-based backups CRS files High demand Low latency Low demand High latency FCP NFS Two sites connected by WAN Two sites connected by a WAN are used in the solution: one site is used for production; the other site is used as a disaster recovery target. A Celerra is present at each site. Oracle RAC 11g or 10g for x86-64 is run on Red Hat Enterprise Linux or on Oracle Enterprise Linux. FCP storage networks consisting of dedicated, redundant FCP switches are present at both sites. An EMC RecoverPoint cluster is also included at each site. The solution includes virtualized servers for use as Test/dev, Basic Protect, and Advanced Protect targets. Virtualization of the test/dev and disaster recovery (DR) target servers is supported using VMware ESX Server. Production site The following components are present at the production site and are connected to the production FCP storage network and to the WAN: A Celerra (actually the CLARiiON back-end array) A physically booted four-node Oracle RAC 11g or 10g cluster A RecoverPoint cluster connected to the FCP storage network and the WAN The Oracle RAC 11g or 10g servers are also connected to the client and RAC interconnect networks. 11

12 Chapter 1: About this Document Disaster recovery target site The disaster recovery target site consists of: A target Celerra (actually the CLARiiON back-end array) connected to the target FCP storage network A RecoverPoint cluster connected to the FCP storage network and the WAN Connected to both sites The following are present at both sites: A VMware ESX server is connected to both the production and target FCP storage networks. A virtualized single-instance Oracle 11g or 10g server is used as: The disaster recovery target for Basic Protect and Advanced Protect (DR site) The target for Test/Dev (production site) The virtualized single-instance Oracle 11g or 10g target server accesses both the production and target FCP storage networks and is connected to the client WAN through virtualized connections on the virtualization server. A virtualized Replication Manager Server is responsible for handling replication tasks through the Replication Manager Agent, which is installed on the production database servers. The LUNs on the Celerra are discovered using Raw Device Mapping (RDM) on the target VMs. Storage layout The following table describes how each Oracle file type and database object is stored and accessed for this solution: Oracle datafiles What Protocol Stored on File-system type Oracle tempfiles Oracle online redo logfiles FCP FC disk (LUNs) RAID-protected ASM diskgroup Oracle controlfiles Voting disk OCR files Archived logfiles Flashback recovery area NFS FC disk SATA II RAID-protected NFS Backup target High-performance database objects are accessed over an FCP network using redundant network switches. 12

13 Chapter 1: About this Document ASM and ASMLib Oracle ASM is used as the file system/volume manager. Oracle ASMLib is used to virtualize the LUNs on the database server. Oracle datafiles, tempfiles, and online redo logfiles are stored on separate LUNs that are mounted on the database server using ASM over FCP. Three ASM diskgroups are used one diskgroup for datafiles and tempfiles, and two diskgroups for online redo logfiles. The online redo logfiles are mirrored across the two ASM diskgroups using Oracle software multiplexing. The controlfiles are mirrored across the online redo log ASM diskgroups. Each ASM diskgroup and its underlying LUNs are designed to satisfy the I/O demands of individual database objects. For example, RAID 5 is used for the datafiles and the tempfiles, but RAID 1 is used for the online redo logfiles. All of these diskgroups are stored on FC disks. Network architecture TCP/IP and NFS provide network connectivity and file system semantics for NFS file systems on Oracle RAC 11g or 10g. Client virtual machines run on the VMware ESX server. They are connected to a client network. Client, RAC interconnect, and redundant TCP/IP storage networks consist of dedicated network switches and virtual local area networks (VLANs). The RAC interconnect and storage networks consist of trunked IP connections to balance and distribute network I/O. Jumbo frames are enabled on these networks. 13
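To make the ASM layout described above concrete, the following initialization parameter sketch shows one common way to direct datafiles to +DATA and to multiplex the online redo logs and controlfiles across +LOG1 and +LOG2. It is illustrative only: the diskgroup names follow this guide, but the database name (orcl) and file names are invented placeholders, and the validated settings are those produced by the tasks in Chapter 6.
-- Illustrative init.ora entries only; adjust names and values for your environment.
db_create_file_dest='+DATA'
-- Online redo logs are software-multiplexed across the two log diskgroups.
db_create_online_log_dest_1='+LOG1'
db_create_online_log_dest_2='+LOG2'
-- Controlfiles are mirrored across the same two diskgroups (paths are examples).
control_files=('+LOG1/orcl/control01.ctl','+LOG2/orcl/control02.ctl')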

14 Chapter 1: About this Document Reference Architecture Corresponding Reference Architecture This solution has a corresponding Reference Architecture document that is available on Powerlink, EMC.com, and EMC KB.WIKI. Refer to EMC Business Continuity for Oracle Database 11g/10g - Enabled by EMC CLARiiON CX4 and EMC Celerra Using FCP and NFS Reference Architecture for details. If you do not have access to this content, contact your EMC representative. Reference Architecture diagram The following diagram depicts the overall physical architecture of the solution. 14

15 Chapter 1: About this Document Validated environment profile Environment profile and test results For information on the performance results, refer to the testing summary results contained in Chapter 7: Testing and Validation. Hardware and software resources Hardware The hardware used to validate the solution is listed below. Equipment Quantity Configuration EMC Celerra unified storage platforms (includes an EMC CLARiiON CX4 back-end storage array) 2 2 Data Movers 4 GbE network connections per Data Mover 2 or 3 FC shelves 1 SATA shelf 30 or GB FC disks (depending on configuration) GB SATA disks 1 Control Station 2 storage processors DART version Dell PowerConnect Gigabit Ethernet switches 5 24 ports per switch QLogic FCP switches 2 16 ports 4 Gb throughput Database servers Dell PowerEdge 2900 (Oracle RAC 11g/10g servers) EMC RecoverPoint appliances (RPA) Virtualization server Dell PowerEdge 6450 (VMware ESX server) GHz Intel Pentium 4 quad-core processors 24 GB of RAM GB 15k internal SCSI disks 2 onboard GbE Ethernet NICs 2 additional Intel PRO/1000 PT quad-port GbE Ethernet NICs 2 SANblade QLE2462-E-SP 4 Gb/s dual-port FC HBAs (4 ports in total) 4 2 Dell 2950 servers per site QLA2432 HBA cards GHz AMD Opteron quad-core processors 32 GB of RAM GB 15k internal SCSI disks 2 onboard GbE Ethernet NICs 3 additional Intel PRO/1000 PT quad-port GbE Ethernet NICs 2 SANblade QLE2462-E-SP 4 Gb/s dual-port FC HBAs (4 ports in total) 15

16 Chapter 1: About this Document Software The software used to validate the solution is listed below. Software Version Oracle Enterprise Linux 4.7 VMware ESX Server/vSphere 4.0 Oracle VM Microsoft Windows Server 2003 Standard Edition Oracle RAC Enterprise Edition Oracle Database Standard Edition g or 10g 11g or 10g (11g version ) Quest Benchmark Factory for Databases EMC Celerra Manager Advanced Edition 5.6 EMC Navisphere Agent EMC PowerPath (build 157) EMC FLARE EMC DART EMC Navisphere Management 6.28 EMC RecoverPoint 3.0 SP1 EMC Replication Manager EMC MirrorView 6.7 EMC CLARiiON splitter driver

17 Chapter 1: About this Document Prerequisites and supporting documentation Technology It is assumed the reader has a general knowledge of: EMC Celerra EMC CLARiiON CX4 Oracle Database (including RMAN and Data Guard) EMC SnapView EMC Replication Manager EMC RecoverPoint EMC MirrorView VMware ESX Server VMware vsphere Supporting documents The following documents, located on Powerlink.com, provide additional, relevant information. Access to these documents is based on your login credentials. If you do not have access to the following content, contact your EMC representative. CLARiiON CX4 series documentation EMC Unified Storage for Oracle Database 11g/10g - Physically Booted Solution Enabled by EMC Celerra and Linux using FCP and NFS Reference Architecture Third-party documents The following resources have more information about Oracle: Oracle Technology Network MetaLink Oracle support 17

18 Chapter 1: About this Document Terminology Terms and definitions This section defines the terms used in this document.
Solution: A solution is a complete stack of hardware and software upon which a customer would choose to run their entire business or business function. A solution includes database server hardware and software, IP networks, storage networks, and storage array hardware and software, among other components.
Core solution component: A core solution component addresses the entire solution stack, but does so in a way relating to a discrete area of testing. For example, performance testing is a core solution component.
Functionality solution component: A functionality solution component addresses a subset of the solution stack that consists of a discrete set of hardware or software, and focuses on a single IT function. For example, backup and recovery, and disaster recovery are both functionality solution components. A functionality solution component can be either basic or advanced.
Basic solution component: A basic solution component uses only the features and functionality provided by the Oracle stack. For example, RMAN is used for backup and recovery, and Data Guard for disaster recovery.
Advanced solution component: An advanced solution component uses the features and functionality of EMC hardware or software. For example, EMC SnapView is used for backup and recovery, and EMC MirrorView for disaster recovery.
Physically booted solution: A configuration in which the production database servers are directly booted off a locally attached hard disk without the use of a hypervisor such as VMware or Oracle VM. Utility servers (such as a test/dev target or disaster recovery target) may still be virtualized in a physically booted solution.
Virtualized solution: A configuration in which the production database servers are virtualized using a hypervisor technology such as VMware or Oracle VM.
Scale-up: The use of a clustered or single-image database server configuration. Scaling is provided by increasing the number of CPUs in the database server (in the case of a single-instance configuration) or by adding nodes to the cluster (in the case of a clustered configuration). Scale-up assumes that all customers of the database will be able to access all database data.
Resiliency: Testing that is designed to validate the ability of a configuration to withstand faults at various layers. The layers that are tested include: network switch, database server storage network port, storage array network port, database server cluster node, and storage processor.
Test/dev: The use of storage layer replication (such as snapshots and clones) to provide an instantaneous, writeable copy of a running production database with no downtime on the production database server and with minimal, if any, performance impact on the production server.
Advanced Backup and Recovery: A solution component that provides backup and recovery functionality through the storage layer using specialized hardware or software. Advanced Backup and Recovery offloads the database server's CPUs from the I/O and processing requirements of the backup and recovery operations, and delivers superior Mean Time to Recovery (MTTR) through the use of virtual storage layer replication (commonly referred to as snapshots).
Basic Backup and Recovery: A solution component that provides backup and recovery functionality through the operating system and database server software stack. Basic Backup and Recovery uses the database server's CPUs for all I/O and processing of backup and recovery operations.
Advanced Protect: A solution component that provides disaster recovery functionality through the storage layer using specialized hardware or software. Advanced Protect offloads the database server's CPUs from the I/O and processing requirements of the disaster recovery operations, provides superior failover and failback capabilities, and reduces the software required to be installed at the disaster recovery target because of the use of consistency technology.
Basic Protect: A solution component that provides disaster recovery functionality through the operating system and database server software stack. Basic Protect uses the database server's CPUs for all I/O and processing of disaster recovery operations.
Kernel NFS (KNFS): A network storage protocol in which the NFS client is embedded in the operating system kernel.
High availability: The use of specialized hardware or software technology to reduce both planned and unplanned downtime.
Fault tolerance: The use of specialized hardware or software technology to eliminate both planned and unplanned downtime.
Enterprise Flash Drive (EFD): A drive that stores data using Flash memory and contains no moving parts.
Serial Advanced Technology Attachment (SATA) drive: SATA is a newer standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, while Integrated Drive Electronics (IDE) hard drives use parallel signaling.
Migration: The ability to transfer a running production database from one environment to another, for example, from FCP/ASM to KNFS/DNFS.

20 Chapter 1: About this Document Typographic conventions Typographic conventions In this document, many steps are listed in the form of terminal output. This is referred to as a code listing. For example: Note the following about code listings: Commands you type are shown in bold. For lengthy commands the backslash \ character is used to show line continuation. While this is a common UNIX convention, it may not work in all cases. You should enter the command on one line. The use of ellipses ( ) in the output indicates that lengthy output was deleted for brevity. If a Celerra or Linux command is referred to in text it is indicated in bold and lowercase, like this: the fs_copy command. If a SQL or RMAN command is referred to in text, it is indicated in uppercase, like this: The ALTER DATABASE RENAME FILE command. A special font is not used in either case. 20

21 Chapter 2: Storage Design Chapter 2: Storage Design Overview Introduction to Storage Design The storage design layout instructions presented in this chapter apply to the specific components used during the development of this solution. Contents This chapter contains the following topics: Topic See Page Concepts 22 Best practices 23 CX4 cache configuration for SnapView snapshot 25 Storage processor failover 25 LUN/RAID group layout 26 Storage design layout 28 21

22 Chapter 2: Storage Design Concepts Setting up CX storage To set up CLARiiON (CX) the following steps must be carried out: Step Action 1 Configure zoning. 2 Configure RAID groups and bind LUNs. 3 Allocate hot spares. 4 Create storage groups. 5 Discover FCP LUNs from the database servers. High availability and failover EMC Celerra has built in high-availability (HA) features. These HA features allow the Celerra to survive various failures without a loss of access to the Oracle database. These HA features protect against the following: Power loss affecting a single circuit connected to the storage array Storage processor failure Storage processor reboot Disk failure 22
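The array-side steps above (zoning, binding LUNs, and creating storage groups) are performed through Navisphere. As a host-side sketch only, the listing below shows one way to rescan for newly presented LUNs from a Linux database server and confirm that PowerPath has claimed the paths; the SCSI host numbers are example values and will differ on each server.
# Rescan each QLogic HBA SCSI host for newly bound LUNs (host numbers are examples).
[root@mteoradb55 ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@mteoradb55 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
# Configure PowerPath for the new paths and list the resulting pseudo devices.
[root@mteoradb55 ~]# powermt config
[root@mteoradb55 ~]# powermt display dev=all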

23 Chapter 2: Storage Design Best practices Disk drives The following are the general recommendations for disk drives: Drives with higher revolutions per minute (rpm) provide higher overall randomaccess throughput and shorter response times than drives with slower rpm. For optimum performance, higher-rpm drives are recommended for datafiles and tempfiles as well as online redo logfiles. Because of significantly better performance, Fibre Channel drives are always recommended for storing datafiles, tempfiles, and online redo log files. Serial Advanced Technology-Attached (SATA II) drives have slower response and rotational speed, and moderate performance with random I/O. However, they are less expensive than the Fibre Channel drives for the same or similar capacity. SATA II drives are frequently the best option for storing archived redo logs and the flashback recovery area. In the event of high performance requirements for backup and recovery, Fibre Channel drives can also be used for this purpose. Enterprise Flash Drives (EFDs) Enterprise Flash Drives (EFDs) can be used to dramatically improve cost, performance, efficiency, power, space, and cooling requirements of Oracle databases stored on EMC Celerra. To know if EFDs will fit in your situation, you need to determine if a set of datafiles is being accessed more heavily than other datafiles. This is an extremely common condition in Oracle databases. If so, then migrate this set of datafiles to EFDs. The datafiles that are affected may change over time, requiring application of an Information Lifecycle Management (ILM) strategy. Because of the sequential nature of redo I/O, we do not recommend storing online redo logfiles on EFDs. EFDs primarily accelerate random read and write I/O. RAID types and file types The following table describes the recommendations for RAID types corresponding to Oracle file types: Description RAID 5/EFD RAID 10/FC RAID 5/FC RAID 5/SATA II Datafiles/tempfiles Possible (apply Recommended Recommended Avoid tuning) 1 Control files Avoid Recommended Recommended Avoid Online redo logs Avoid Recommended Avoid Avoid Archived logs Avoid Possible (apply tuning) 1 Possible (apply tuning) 2 Recommended Flashback recovery area Avoid OK OK Recommended 23

24 Chapter 2: Storage Design OCR file/voting disk Avoid OK OK Avoid 1 The decision to use EFDs for datafiles and tempfiles must be driven by the I/O requirements for specific datafiles. 2 The use of FC disks for archived logs is fairly rare. However, if many archived logs are being created, and the I/O requirements for archived logs exceeds a reasonable number of SATA II disks, this may be a more cost-effective solution. Tempfiles, undo, and sequential table or index scans In some cases, if an application creates a large amount of temp activity, placing your tempfiles on RAID 10 devices may be faster due to RAID 10 s superior sequential I/O performance. This is also true for undo. Further, an application that performs many full table scans or index scans may benefit from these datafiles being placed on a separate RAID 10 device. Online redo logfiles Online redo log files should be put on RAID 1 or RAID 10 devices. You should not use RAID 5 because sequential write performance of distributed parity (RAID 5) is not as high as that of mirroring (RAID 1). RAID 1 or RAID 10 provides the best data protection; protection of online redo log files is critical for Oracle recoverability. OCR files and voting disk files You should use FC disks for OCR files and voting disk files; unavailability of these files for any significant period of time (due to disk I/O performance issues) may cause one or more of the RAC nodes to reboot and fence itself off from the cluster. The LUN/RAID group layout images in Chapter 2: Storage Design > LUN/RAID group layout, show two different storage configurations that can be used for Oracle RAC 11g/10g databases on a Celerra. That section can help you to determine the best configuration to meet your performance needs. Stripe size EMC recommends a stripe size of 32 KB for all types of database workloads. The default stripe size for all the file systems on FC shelves (redo logs and data) should be 32 KB. Similarly, the recommended stripe size for the file systems on SATA II shelves (archive and flash) should be 256 KB. Shelf configuration The most common error when planning storage is designing for capacity rather than for performance. The single most important storage parameter for performance is disk latency. High disk latency is synonymous with slower performance; low disk counts lead to increased disk latency. The recommendation is a configuration that produces average database I/O latency (the Oracle measurement db file sequential read) of less than or equal to 20 ms. In today s disk technology, the increase in storage capacity of a disk drive has outpaced the increase in performance. Therefore, the performance capacity must be the standard to use when planning an Oracle database s storage configuration, not disk storage capacity. 24

25 Chapter 2: Storage Design The number of disks that should be used is determined first by the I/O requirements then by capacity. This is especially true for datafiles and tempfiles. EFDs can dramatically reduce the number of disks required to perform the I/O required by the workload. Consult with your EMC sales representative for specific sizing recommendations for your workload. Reserved LUN pool There is no benefit in assigning LUNs with a high capacity, such as 25 GB to 30 GB, to the reserved LUN pool. It is better to configure the reserved LUN pool with a higher number of LUNs with less capacity (around 5 GB to 8 GB) than with a lower number of LUNs with higher capacity. Approximately 20 to 25 small LUNs are sufficient for most purposes. CX4 cache configuration for SnapView snapshot Recommended cache settings Poor performance was observed using SnapView snapshot with the default settings. This was in the context of a scale-up OLTP workload using a TPC-C-like benchmark. If you have a similar workload and wish to use SnapView snapshot on a CX4 CLARiiON array, you will experience better performance by setting your cache settings as described in the following table of recommended cache settings. Cache Setting Low watermark High watermark SP A read cache memory SP B read cache memory Write cache memory Value 10 percent 30 percent 200 MB 200 MB 1061 MB Storage processor failover High availability The storage processor (SP) failover capability is a key feature that offers redundancy at the storage processor level, allowing continuous data access. It also helps to build a fault-resilient RAC architecture. 25
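The shelf configuration guidance above targets an average db file sequential read latency of 20 ms or less. As an indicative check only, the following query reports the average single-block read latency since instance startup; an AWR or Statspack report over a representative workload interval is the more rigorous measurement.
-- Illustrative query: average 'db file sequential read' latency in milliseconds.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
FROM   v$system_event
WHERE  event = 'db file sequential read';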

26 Chapter 2: Storage Design LUN/RAID group layout LUN/RAID group layout design A LUN/RAID group configuration consisting of three Fibre Channel shelves with RAID 10 and RAID 1 was tested and found to provide good performance for Oracle RAC 11g databases on Celerra. Two RAID and disk configurations were tested over the FCP protocol. These are described below. RAID group layout: 3 FC shelf RAID 5 The RAID group layout for three-fc shelf RAID 5/RAID 1 is as follows. 26

27 Chapter 2: Storage Design RAID group layout: 3 FC shelf RAID 10 The RAID group layout for three-fc shelf RAID 10/RAID 1 is as follows. 27

28 Chapter 2: Storage Design Storage design layout ASM diskgroup guidelines Automatic Storage Management (ASM) is used to store the database objects requiring high performance. The Oracle Cluster Registry file and the voting disk must be stored on shared storage that is not on the ASM file system; therefore, NFS is used to store these files. In addition, files not requiring high performance are stored on NFS. These NFS file systems are in turn stored on low-cost SATA II drives. This lowers the cost in terms of storage while improving manageability. The following table contains a detailed description of all the database objects and where they should be stored. File system/ mount point File system type LUNs stored on Contents +DATA ASM LUNs 5 through 10 Oracle datafiles +LOG1 and +LOG2 ASM LUNs 1 through 4 Online redo logs and control file (mirrored copies) /u02 NFS LUN 0 Oracle Cluster Registry file and voting disk /u03 NFS LUN 9 Flashback recovery area (all backups stored here) /u04 NFS LUN 10 Archived log dump destination ASM diskgroup design best practice A diskgroup should consist entirely of LUNs that are all of the same RAID type and that consist of the same number and type of component spindles. EMC does not recommend mixing any of the following within a single ASM diskgroup: RAID levels Disk types Disk rotational speeds PowerPath/ ASM workaround The validation was performed using Oracle Enterprise Linux 5.1. We observed the following issue while creating ASM disks: [root@mteoradb55 ~]# service oracleasm createdisk LUN1 /dev/emcp 28

29 Chapter 2: Storage Design
Marking disk "/dev/emcpowera1" as an ASM disk: asmtool: Device "/dev/emcpowera1" is not a partition [FAILED]
We used the following workaround to create the ASM disks:
[root@mteoradb55 ~]# /usr/sbin/asmtool -C -l /dev/oracleasm -n LUN1 \
> -s /dev/emcpowera1 -a force=yes
asmtool: Device "/dev/emcpowera1" is not a partition
asmtool: Continuing anyway
[root@mteoradb55 ~]# service oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@mteoradb55 ~]# service oracleasm listdisks
LUN1
[root@mteoradb55 ~]#
Note: PowerPath 5.1 was required for this version of Linux, but this version was not yet GA at the time the validation was started. Therefore, a GA release candidate was used for these tests. 29

30 Chapter 3: File System Chapter 3: File System Overview Contents This chapter contains the following topic: Topic File system layout 31 See Page 30

31 Chapter 3: File System File system layout ASM diskgroup guidelines Automatic Storage Management (ASM) is used to store the database objects requiring high performance. This does not work for the Oracle Cluster Registry file and the voting disk. These two files must be stored on shared storage that is not on the ASM file system. Therefore, NFS is used to store these files. In addition, files not requiring high performance are stored on NFS. These NFS file systems are in turn stored on low-cost SATA II drives. This drives down the cost in terms of storage, while improving manageability. File system layout The following table contains a detailed description of all the database objects and where they are stored. File system/ mount point File system type LUNs stored on Contents /u02 NFS LUN 0 Oracle Cluster Registry file and voting disk +DATA ASM LUNs 5 through 10 Oracle datafiles +LOG1 and +LOG2 ASM LUNs 1 through 4 Online redo logs and control file (mirrored copies) /u03 NFS LUN 9 Flashback recovery area (all backups stored here) /u04 NFS LUN 10 Archived log dump destination 31

32 Chapter 4: Application Design Chapter 4: Application Design Overview Contents This chapter contains the following topics: Topic See Page Considerations 33 Application design layout 34 Memory configuration for Oracle 11g 35 HugePages 37 32

33 Chapter 4: Application Design Considerations Heartbeat mechanisms The Cluster Synchronization Services (CSS) component of Oracle Clusterware maintains two heartbeat mechanisms: The disk heartbeat to the voting disk The network heartbeat across the RAC interconnects that establishes and confirms valid node membership in the cluster Both of these heartbeat mechanisms have an associated time-out value. For more information on the Oracle Clusterware MissCount and DiskTimeout parameters see MetaLink Note UU. EMC recommends setting the disk heartbeat parameter disktimeout to 160 seconds. You should leave the network heartbeat parameter misscount at the default of 60 seconds. Rationale These settings will ensure that the RAC nodes do not evict when the active Data Mover fails over to its partner. The command to configure this option is: $ORA_CRS_HOME/bin/crsctl set css disktimeout 160
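As a quick sanity check, assuming the clusterware home is referenced as in the command above, the current value can be read back after it is set:
# Set the voting disk time-out to the recommended 160 seconds, then verify the setting.
$ORA_CRS_HOME/bin/crsctl set css disktimeout 160
$ORA_CRS_HOME/bin/crsctl get css disktimeout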

34 Chapter 4: Application Design Application design layout Oracle Cluster Ready Services Oracle Cluster Ready Services (CRS) are enabled on each of the Oracle RAC 11g/10g servers. The servers operate in active/active mode to provide local protection against a server failure and to provide load balancing. CRS required files (including the voting disk and the OCR file) can reside on NFS volumes provided that the required mount-point parameters are used. For more information on the mount-point parameters required for the Oracle Clusterware files, see Chapter 6: Installation and Configuration > Task 4: Configure NFS client options. NFS client Each Oracle RAC 11g/10g server, which is hosted in the operating system (OS), uses the KNFS protocol to connect to the Celerra storage array. KNFS runs over TCP/IP. Oracle binary files The Oracle RAC 11g/10g binary files, including the Oracle CRS, are all installed on the database servers' local disks. Stored on Celerra Datafiles, tempfiles, online redo logfiles, and controlfiles reside on the FCP file system. Flashback recovery area, disk-based backups, archive logs, and CRS files reside on Celerra NFS file systems. These file systems are designed (in terms of the RAID level and number of disks used) to be appropriate for each type of file. A separate clustered file system is not required for the CRS files. The following table lists each file or activity type and indicates where it resides. File or activity type Database binary files Datafiles, tempfiles Online redo log files Archived log files Flashback recovery area Control files CRS, OCR, and voting disk files Location Database servers local disk (or vmdk file for virtualized servers) +DATA Mirrored across +LOG1 and +LOG2 /archfs /flashfs Mirrored across +LOG1 and +LOG2 /datafs 34
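Task 4 in Chapter 6 lists the validated NFS client mount options. Purely as an indicative sketch, an /etc/fstab entry for the NFS file system holding the OCR and voting disk typically resembles the line below; the Data Mover hostname, export path, and exact option list are placeholders and should be taken from Task 4 for a real deployment.
# Example only - substitute the Data Mover hostname, export, and validated NFS options.
celerra-dm2:/oracrsfs  /u02  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0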

35 Chapter 4: Application Design Memory configuration for Oracle 11g Memory configuration and performance Memory configuration in Oracle 11g is one of the most challenging aspects of configuring the database server. If the memory is not configured the performance of the database server will be very poor. If memory is configured incorrectly: The database server will be unstable The database may not open at all; and if it does open, you may experience errors due to lack of shared pool space In an OLTP context, the size of the shared pool is frequently the limitation on performance of the database. Automatic Memory Management A new feature called Automatic Memory Management was introduced in Oracle 11g 64 bit (Release 1). The purpose of Automatic Memory Management is to simplify the memory configuration process for Oracle 11g. For example, in Oracle 10g, the user is required to set two parameters, SGA_TARGET and PGA_AGGREGATE_TARGET, so that Oracle can manage other memory-related configurations such as buffer cache and shared pool. When using Oracle 11g-style Automatic Memory Management, the user does not set these SGA and PGA parameters. Instead, the following parameters are set: MEMORY_TARGET MEMORY_MAX_TARGET Once these parameters are set, Oracle 11g can, in theory, handle all memory management issues, including both SGA and PGA memory. However, the Automatic Memory Management model in Oracle 11g 64 bit (Release 1) requires configuration of shared memory as a file system mounted under /dev/shm. This adds an additional management burden to the DBA/system administrator. Effects of Automatic Memory Management on performance Decreased database performance We observed a significant decrease in performance when we enabled the Oracle 11g Automatic Memory Management feature. Linux HugePages are not supported Linux HugePages are not supported when the Automatic Memory Management feature is implemented. When Automatic Memory Management is enabled, the entire SGA memory should fit under /dev/shm and, as a result, HugePages are not used. On both Oracle 11g and Oracle 10g, tuning HugePages increases the performance of the database significantly. It is EMC s opinion that the performance improvements of HugePages, plus the lack of a requirement for a /dev/shm file system, make the Oracle 11g automatic memory model a poor trade-off. 35

36 Chapter 4: Application Design EMC recommendations To achieve optimal performance on Oracle 11g, EMC recommends the following: Disable the Automatic Memory Management feature Use the 10g style of memory management on Oracle 11g The memory management configuration procedure is described in the previous section. This provides optimal performance and manageability per our testing. 36
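A minimal sketch of the recommended 10g-style memory settings on Oracle 11g is shown below, assuming an spfile is in use; the SGA and PGA sizes are example figures only and must be derived from the sizing performed in Task 6 of Chapter 6.
-- Disable Automatic Memory Management and size the SGA/PGA explicitly (example sizes).
ALTER SYSTEM SET memory_target=0 SCOPE=spfile;
ALTER SYSTEM SET sga_target=12G SCOPE=spfile;
ALTER SYSTEM SET pga_aggregate_target=4G SCOPE=spfile;
-- Restart the instance for the spfile changes to take effect.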

37 Chapter 4: Application Design HugePages HugePages The Linux 2.6 kernel includes a feature called HugePages. This feature allows you to specify the number of physically contiguous large memory pages that will be allocated and pinned in RAM for shared memory segments like the Oracle System Global Area (SGA). The pre-allocated memory pages can only be used for shared memory and must be large enough to accommodate the entire SGA. HugePages can create a very significant performance improvement for Oracle RAC 11g/10g database servers. The performance payoff for enabling HugePages is significant. Warning HugePages must be tuned carefully and set correctly. Unused HugePages can only be used for shared memory allocations - even if the system runs out of memory and starts swapping. Incorrectly configured HugePages settings may result in poor performance and may even make the machine unusable. HugePages parameters The HugePages parameters are stored in /etc/sysctl.conf. You can change the value of the HugePages parameters by editing the sysctl.conf file and rebooting the server. The following table describes the HugePages parameters: Parameter HugePages_Total HugePages_Free Hugepagesize Description Total number of HugePages that are allocated for shared memory segments (This is a tunable value. You must determine how to set this value.) Number of HugePages that are not being used Size of each Huge Page Optimum values for HugePages parameters The amount of memory allocated to HugePages must be large enough to accommodate the entire SGA: HugePages_Total x Hugepagesize = Amount of memory allocated to HugePages To avoid wasting memory resources, the value of HugePages_Free should be zero. Note The value of vm.nr_hugepages should be set to a value that is at least equal to kernel.shmmax/2048. When the database is started, HugePages_Free should show a value close to zero to reflect that memory is tuned. For more information on tuning HugePages, see Chapter 6: Installation and Configuration > Task 7: Tune HugePages. 37
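As an illustration of the sizing rule above: for a hypothetical 12 GB SGA and the default 2 MB huge page size, roughly 12 GB / 2 MB = 6144 pages are needed. The sketch below shows where such values would be set; the figures are examples, not validated settings, and the detailed procedure is in Task 7 of Chapter 6.
# /etc/sysctl.conf - reserve enough 2 MB pages to hold the entire SGA (example value).
vm.nr_hugepages = 6144
# /etc/security/limits.conf - allow the oracle user to lock that much memory (value in KB).
oracle  soft  memlock  12582912
oracle  hard  memlock  12582912
# Verify after reboot and database startup: grep Huge /proc/meminfo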

38 Chapter 5: Network Design Chapter 5: Network Design Overview Contents This chapter contains the following topics: Topic See Page Concepts 39 Best practices 40 SAN network layout 41 IP network layout 41 Virtual LANs 41 Jumbo frames 42 Ethernet trunking and link aggregation 43 Public and private networks 45 Oracle RAC 11g/10g server network architecture 45 38

39 Chapter 5: Network Design Concepts Jumbo frames Maximum Transfer Unit (MTU) sizes of greater than 1,500 bytes are referred to as jumbo frames. Jumbo frames require Gigabit Ethernet across the entire network infrastructure server, switches, and database servers. VLAN Virtual local area networks (VLANs) logically group devices that are on different network segments or sub-networks. Trunking TCP/IP provides the ability to establish redundant paths for sending I/O from one networked computer to another networked computer. This approach uses the link aggregation protocol, commonly referred to as trunking. Redundant paths facilitate high availability and load balancing for the networked connection. Trunking device A trunking device is a virtual device created using two or more network devices to achieve higher performance with load-balancing capability, and high availability with failover capability. With Ethernet trunking/link aggregation, packets traveling through the virtual device are distributed among the underlying devices to achieve higher aggregated bandwidth, based on the source MAC address. 39

40 Chapter 5: Network Design Best practices Gigabit Ethernet EMC recommends that you use Gigabit Ethernet for the RAC interconnects if RAC is used. If 10 GbE is available, that is even better. Jumbo frames and the RAC interconnect For Oracle RAC 11g/10g installations, jumbo frames are recommended for the private RAC interconnect. This boosts the throughput as well as possibly lowering the CPU utilization due to the software overhead of the bonding devices. Jumbo frames increase the device MTU size to a larger value (typically 9,000 bytes). VLANs EMC recommends that you use VLANs to segment different types of traffic to specific subnets. This provides better throughput, manageability, application separation, high availability, and security. 40

41 Chapter 5: Network Design SAN network layout SAN network layout for validated scenario The SAN network layout is configured as follows: Two QLogic FC switches are used for the test bed. Two connections from each database servers are connected to the QLogic switches. One FC port from SPA and SPB are each connected to the two FC switches. Zoning Each FC port from the database servers are zoned to both SP ports. IP network layout IP network design for validated scenario The IP network layout is configured as follows: TCP/IP and NFS provide network connectivity. Client virtual machines run on a VMware ESX server. They are connected to a client network. Client, RAC interconnect, and redundant TCP/IP storage networks consist of dedicated network switches and virtual local area networks (VLANs). The RAC interconnect and storage networks consist of trunked IP connections to balance and distribute network I/O. Jumbo frames are enabled on these networks. The Oracle RAC 11g or 10g servers are connected to the client, RAC interconnect, WAN, and production storage networks. Virtual LANs Virtual LANs This solution uses three VLANs to segregate network traffic of different types. This improves throughput, manageability, application separation, high availability, and security. The following table describes the database server network port setup: VLAN ID Description CRS setting 1 Client network Public 2 RAC interconnect Private 3 Storage None (not used) Client VLAN The client VLAN supports connectivity between the physically booted Oracle RAC 11g/10g servers, the virtualized Oracle Database 11g/10g, and the client 41

42 Chapter 5: Network Design workstations. The client VLAN also supports connectivity between the Celerra and the client workstations to provide network file services to the clients. Control and management of these devices are also provided through the client network. RAC interconnect VLAN The RAC interconnect VLAN supports connectivity between the Oracle RAC 11g/10g servers for network I/O required by Oracle CRS. Three network interface cards (NICs) are configured on each Oracle RAC 10g server to the RAC interconnect network. Link aggregation is configured on the servers to provide load balancing and port failover between the two ports for this network. Redundant switches In addition to VLANs, separate redundant storage switches are used. The RAC interconnect connections are also on a dedicated switch. For real-world solution builds, it is recommended that these switches support Gigabit Ethernet (GbE) connections, jumbo frames, and port channeling. Jumbo frames Overview Jumbo frames are configured for the following layers: Celerra Data Mover Oracle RAC 11g/10g servers Switch Note Configuration steps for the switch are not covered here, as that is vendor-specific. Check your switch documentation for details. Linux servers To configure jumbo frames on a Linux server, execute the following command: ifconfig eth0 mtu 9000 Alternatively, place the following statement in the network scripts in /etc/sysconfig/network-scripts: MTU=9000 RAC interconnect Jumbo frames should be configured for the storage and RAC interconnect networks of this solution to boost the throughput, as well as possibly lowering the CPU utilization due to the software overhead of the bonding devices. Jumbo frames increase the device MTU size to a larger value (typically 9,000 bytes). Typical Oracle database environments transfer data in 8 KB and 32 KB block sizes, which require multiple 1,500 frames per database I/O, while using an MTU size of 1,500. Using jumbo frames, the number of frames needed for every large I/O request can be reduced, thus the host CPU needed to generate a large number of interrupts for each application I/O is reduced. The benefit of jumbo frames is primarily a complex function of the workload I/O sizes, network utilization, and Oracle database 42

43 Chapter 5: Network Design server CPU utilization, and so is not easy to predict. For information on using jumbo frames with the RAC Interconnect, see MetaLink Note. Verifying that jumbo frames are enabled To test whether jumbo frames are enabled, use the following command: ping -M do -s 8192 <target> Where: target is the interface to be tested Jumbo frames must be enabled on all layers of the network for this command to succeed. Ethernet trunking and link aggregation Trunking and link aggregation Two NICs on each Oracle RAC 11g/10g server are used in the NFS connection, referred to previously as the storage network. The RAC interconnect network is trunked in a similar manner using three NICs. EMC recommends that you configure an Ethernet trunking interface with two Gigabit Ethernet ports to the same switch. Enabling trunking on a Linux database server On the database servers, network redundancy is achieved by using Linux kernel bonding. This is accomplished using the scripts contained in /etc/sysconfig/network-scripts. A typical bonded connection is as follows:
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
IPADDR=
NETMASK=
MTU=9000
This device (bond0) consists of two Ethernet ports, whose scripts are similar to the following:
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:04:23:B9:66:F3
The result is that the Ethernet ports that show their master as bond0 are joined to the bonded connection. Modify the /etc/modprobe.conf file. 43

44 Chapter 5: Network Design The following is an example of the lines that must be added: options bonding max_bonds=2 mode=4 alias bond0 bonding alias bond1 bonding Either reboot the Linux server or down and up the interfaces to enable the trunk. 44
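Once the interfaces are brought back up, one simple way to confirm that the trunk formed correctly is to read the kernel's bonding status file; the device name below assumes the bond0 example above.
# Check slave membership, link state, and the aggregation mode of the bond.
[root@mteoradb55 ~]# cat /proc/net/bonding/bond0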

45 Chapter 5: Network Design Public and private networks Public and private networks Each node should have: One static IP address for the public network One static IP address for the private cluster interconnect The private interconnect should only be used by Oracle to transfer cluster manager and cache fusion-related data. Although it is possible to use the public network for the RAC interconnect, this is not recommended as it may cause degraded database performance (reducing the amount of bandwidth for cache fusion and cluster manager traffic). Configuring virtual IP addresses The virtual IP addresses must be defined in either the /etc/hosts file or DNS for all RAC nodes and client nodes. The public virtual IP addresses will be configured automatically by Oracle when the Oracle Universal Installer is run, which starts Oracle's Virtual Internet Protocol Configuration Assistant (vipca). All virtual IP addresses will be activated when the following command is run: srvctl start nodeapps -n <node_name> Where: node_name is the hostname/ip address that will be configured in the client's tnsnames.ora file. Oracle RAC 11g/10g server network architecture Oracle RAC 11g/10g server network interfaces - NFS The following table lists each interface and describes its use for the Oracle 11g/10g NFS configuration. Interface port ID eth0 eth1 eth2 eth3 eth4 eth5 eth6 Description Client network Unused Unused Unused Storage network (trunked) Storage network (trunked) Unused 45

46 Chapter 5: Network Design
eth7  RAC interconnect (trunked)
eth8  RAC interconnect (trunked)
eth9  RAC interconnect (trunked)

Oracle RAC 11g/10g server network interfaces - FCP
The following tables list each Fibre Channel port and describe its use for the Oracle 11g/10g FCP configuration. There are two dual-port FC host bus adapters on each of the database servers. Two FC ports from each of these database servers are connected to one of the QLogic FC switches. The other two FC ports are connected to a different switch for high availability. One port from SPA and one port from SPB are connected to each of the two FC switches.

Database server port     Connected to
HBA Port 0               QLogic FC Switch-1
HBA Port 1               QLogic FC Switch-1
HBA Port 2               QLogic FC Switch-2
HBA Port 3               QLogic FC Switch-2

CLARiiON port            Connected to
SPA Port 0               QLogic FC Switch-1
SPB Port 1               QLogic FC Switch-1
SPA Port 1               QLogic FC Switch-2
SPB Port 0               QLogic FC Switch-2
46

47 Chapter 6: Installation and Configuration Chapter 6: Installation and Configuration Overview Introduction This chapter provides procedures and guidelines for installing and configuring the components that make up the validated solution scenario. Scope The installation and configuration instructions presented in this chapter apply to the specific revision levels of components used during the development of this solution. Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate installation and configuration documentation for the revision levels of the hardware and software components as planned in the solution. Version-specific release notes are especially important. Contents This chapter contains the following tasks: Topic See Page Task 1: Build the network infrastructure 48 Task 2: Set up and configure ASM for CLARiiON 48 Task 3: Set up and configure database servers 49 Task 4: Configure NFS client options 50 Task 5: Install Oracle Database 11g/10g 51 Task 6: Configure database server memory options 51 Task 7: Tune HugePages 54 Task 8: Set database initialization parameters 56 Task 9: Configure Oracle Database control files and logfiles 58 Task 10: Enable passwordless authentication using SSH 59 Task 11: Set up and configure CLARiiON storage for Replication Manager and SnapView 64 Task 12: Install and configure EMC RecoverPoint 67 Task 13: Set up the virtualized utility servers 70 Task 14: Configure and connect EMC RecoverPoint appliances (RPAs) 71 Task 15: Install and configure EMC MirrorView/A 71 Task 16: Install and configure EMC CLARiiON (CX) splitters 71 47

48 Chapter 6: Installation and Configuration Task 1: Build the network infrastructure Network infrastructure For details on building a network infrastructure, see Chapter 5: Network Design > IP network layout > IP network design for validated scenario. Task 2: Set up and configure ASM for CLARiiON Configure ASM and manage CLARiiON For details on configuring ASM and managing the CLARiiON, follow the steps in the table below. Step Action 1 Find the operating system (OS) version. [root@mteoradb55 ~]# cat /etc/redhat-release Enterprise Linux Enterprise Linux Server release 5.1 (Carthage) [root@mteoradb55 ~]# 2 Check the PowerPath installation. [root@mteoradb55 ~]# rpm -qa EMC* EMCpower.LINUX [root@mteoradb55 ~]# 3 Check the ASM rpms applied on the OS. [root@mteoradb55 ~]# rpm -qa | grep oracleasm oracleasm-support el5 oracleasmlib el5 oracleasm el el5 [root@mteoradb55 ~]# 4 Configure the ASM. [root@mteoradb55 ~]# /etc/init.d/oracleasm configure 5 Check the status of ASM. [root@mteoradb55 ~]# /etc/init.d/oracleasm status 6 Create the ASM disk. [root@mteoradb55 ~]# /etc/init.d/oracleasm createdisk LUN1 /dev/emcpowerab 7 Scan the ASM disk. [root@mteoradb55 ~]# /etc/init.d/oracleasm scandisks 8 List the ASM disks. [root@mteoradb55 ~]# /etc/init.d/oracleasm listdisks 48
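After the ASM disks are visible on every node, they can be grouped into ASM disk groups from the ASM instance (sqlplus / as sysdba on the +ASM instance; Oracle Database 11g also accepts / as sysasm). The following SQL*Plus sketch is illustrative only: the disk group name (SRCDATA), the redundancy level, and the disk names (LUN1, LUN2) are assumptions and must match the LUN/RAID group layout described in Chapter 2. External redundancy is shown because the CLARiiON RAID groups already provide protection at the array level.

SQL> CREATE DISKGROUP SRCDATA EXTERNAL REDUNDANCY
  2  DISK 'ORCL:LUN1', 'ORCL:LUN2';

SQL> SELECT name, state, total_mb FROM v$asm_diskgroup;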

49 Chapter 6: Installation and Configuration Task 3: Set up and configure database servers Check BIOS version Dell PowerEdge 2900 servers were used in our testing. These servers were preconfigured with the A06 BIOS. Upgrading the BIOS to the latest version (2.2.6 as of the time of this publication) resolved a range of issues, including hanging reboot problems and networking issues. Regardless of the server vendor and architecture, you should monitor the BIOS version shipped with the system and determine if it is the latest production version supported by the vendor. If it is not the latest production version supported by the vendor, then flashing the BIOS is recommended. Disable Hyper- Threading Intel Hyper-Threading Technology allows multi-threaded operating systems to view a single physical processor as if it were two logical processors. A processor that incorporates this technology shares CPU resources among multiple threads. In theory, this enables faster enterprise-server response times and provides additional CPU processing power to handle larger workloads. As a result, server performance will supposedly improve. In EMC s testing, however, performance with Hyper-Threading was poorer than performance without it. For this reason, EMC recommends disabling Hyper-Threading. There are two ways to disable Hyper-Threading: in the kernel or through the BIOS. Intel recommends disabling Hyper-Threading in the BIOS because it is cleaner than doing so in the kernel. Refer to your server vendor s documentation for instructions. 49
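Whether Hyper-Threading is currently active can be confirmed from the operating system before and after changing the BIOS setting. The check below is a simple sketch; the counts shown in the comments are examples and vary by processor model.

grep -E 'siblings|cpu cores' /proc/cpuinfo | sort -u
# siblings  : 4
# cpu cores : 2
# If "siblings" is greater than "cpu cores", Hyper-Threading is enabled.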

50 Chapter 6: Installation and Configuration Task 4: Configure NFS client options NFS client options For optimal reliability and performance, EMC recommends the NFS client options listed in the table below. The mount options are listed in the /etc/fstab file. Option Syntax Recommended Description Hard mount hard Always The NFS file handles are kept intact when the NFS server does not respond. When the NFS server responds, all the open file handles resume, and do not need to be closed and reopened by restarting the application. This option is required for Data Mover failover to occur transparently without having to restart the Oracle instance. NFS protocol version vers= 3 Always Sets the NFS version to be used. Version 3 is recommended. TCP proto=tcp Always All the NFS and RPC requests will be transferred over a connection-oriented protocol. This is required for reliable network transport. Background bg Always Enables client attempts to connect in the background if the connection fails. No interrupt nointr Always This toggle allows or disallows client keyboard interruptions to kill a hung or failed process on a failed hard-mounted file system. Read size and write size rsize=32768, wsize=32768 Always Sets the number of bytes NFS uses when reading or writing files from an NFS server. The default value is dependent on the kernel. However, throughput can be improved greatly by setting rsize/wsize= No auto noauto Only for backup/utility file systems Disables automatic mounting of the file system on boot-up. This is useful for file systems that are infrequently used (for example, stage file systems). Timeout timeo=600 Always Sets the time (in tenths of a second) the NFS client waits for the request to complete. 50
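Combined, the options in the table above produce an /etc/fstab entry similar to the following sketch. The Data Mover name (celerra-dm2), export path, and mount point are placeholders; only the mount options themselves come from the table. A backup or utility file system would add noauto to the option list.

celerra-dm2:/oradata_fs  /u02/oradata  nfs  hard,vers=3,proto=tcp,bg,nointr,rsize=32768,wsize=32768,timeo=600  0 0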

51 Chapter 6: Installation and Configuration sunrpc.tcp_slot_table_entries The sunrpc module parameter sunrpc.tcp_slot_table_entries controls the concurrent I/Os to the storage system. The default value of this parameter is 16. The parameter should be set to the maximum value (128) for enhanced I/O performance. To configure this option, type the following command: # sysctl -w sunrpc.tcp_slot_table_entries=128 sunrpc.tcp_slot_table_entries = 128 Important To make this setting persistent, you must also add the entry to sysctl.conf, and then run sysctl -p. This reparses the file, and the resulting text is output. Task 5: Install Oracle Database 11g/10g Install Oracle Database 11g for Linux See Oracle's installation guide: Oracle Database Installation Guide 11g Release 1 (11.1) for Linux Install Oracle Database 10g for Linux See Oracle's installation guide: Oracle Database Client Installation Guide 10g Release 1 ( ) for Linux x86-64 Task 6: Configure database server memory options Database server memory Refer to your database server documentation to determine the total number of memory slots your database server has, and the number and density of memory modules that you can install. EMC recommends that you configure the system with the maximum amount of memory feasible to meet the scalability and performance needs. Compared to the cost of the remaining components in an Oracle database server configuration, the cost of memory is minor. Configuring an Oracle database server with the maximum amount of memory is entirely appropriate. Shared memory Oracle uses shared memory segments for the Shared Global Area (SGA), which is an area of memory that is shared by Oracle processes. The size of the SGA has a significant impact on the database performance, and there is a direct correlation between SGA size and disk I/O. EMC's Oracle RAC 11g/10g testing was done with servers using 20 GB of SGA. 51

52 Chapter 6: Installation and Configuration Memory configuration files The following table describes the files that must be configured for memory management: File Created by Function /etc/sysctl.conf Linux installer Contains the shared memory parameters for the Linux operating system. This file must be configured in order for Oracle to create the SGA with shared memory. /etc/security/limit s.conf Oracle parameter file Linux installer Oracle installer, dbca, or DBA who creates the database Contains the limits imposed by Linux on users use of resources. This file must be configured correctly in order for Oracle to use shared memory for the SGA. Contains the parameters used by Oracle to start an instance. This file must contain the correct parameters in order for Oracle to start an instance using shared memory. Configuring /etc/sysctl.conf Configure the etc/sysctl.conf file as follows: # Oracle parameters kernel.shmall = kernel.shmmax = kernel.shmmni = 4096 kernel.sem = fs.file-max = net.ipv4.ip_local_port_range = net.core.rmem_default = net.core.rmem_max = net.core.wmem_default = net.core.wmem_max = vm.nr_hugepages = sunrpc.tcp_slot_table_entries = 128 Recommended parameter values The following table describes recommended values for kernel parameters. 52

53 Chapter 6: Installation and Configuration Kernel parameter kernel.shmmax kernel.shmini kernel.shmall Parameter function Defines the maximum size in bytes of a single shared memory segment that a Linux process can allocate in its virtual address space. Since the SGA is comprised of shared memory, SHMMAX can potentially limit the size of the SGA. Sets the system-wide maximum number of shared memory segments. The value should be at least ceil(shmmax/page_size). The PAGE_SIZE on our Linux systems was Recommended value (Slightly larger than the SGA size) Configuring /etc/security/lim its.conf The section of the /etc/security/limits.conf file relevant to Oracle should be configured as follows: # Oracle parameters oracle soft nproc 2047 oracle hard nproc oracle soft nofile 1024 oracle hard nofile oracle soft memlock oracle hard memlock Important Ensure that the memlock parameter has been configured. This is required for the shared memory file system. This is not covered in the Oracle Database 11g Installation Guide, so be sure to set this parameter. If you do not set the memlock parameter, your database will behave uncharacteristically. 53
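Because many of these values depend on the physical memory and SGA size of the host, the fragment below is only an illustrative sketch for a server comparable to the test systems (24 GB of RAM, a 20 GB SGA, and 2 MB huge pages). These are not the validated values; recalculate them for your own configuration.

# /etc/sysctl.conf (illustrative values for a 20 GB SGA)
kernel.shmmax = 22548578304      # slightly larger than the SGA (21 GB in bytes)
kernel.shmall = 5505024          # shmmax / page size (4096)
kernel.shmmni = 4096

# /etc/security/limits.conf (illustrative values, in KB)
oracle soft memlock 23068672     # large enough to lock the HugePages pool in memory
oracle hard memlock 23068672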

54 Chapter 6: Installation and Configuration Task 7: Tune HugePages Tuning HugePages The following table describes how to tune HugePages parameters to ensure optimum performance. Step Action 1 Ensure that the machine you are using has adequate memory. For example, our test system had 24 GB of RAM and a 20 GB SGA. 2 Set the HugePages parameters in /etc/sysctl.conf to a size into which the SGA will fit comfortably. For example, to create a HugePages pool of 21 GB, which would be large enough to accommodate the SGA, set the following parameter values: HugePages_Total: Hugepagesize: 2048 kb 3 Reboot the instance. 4 Check the values of the HugePages parameters by typing the following command: [root@mteoradb51 ~]# grep Huge /proc/meminfo On our test system, this command produced the following output: HugePages_Total: HugePages_Free: 1000 Hugepagesize: 2048 KB 5 If the value of HugePages_Free is equal to zero, the tuning is complete: If the value of HugePages_Free is greater than zero: a) Subtract the value of HugePages_Free from HugePages_Total. Make note of the answer. b) Open /etc/sysctl.conf and change the value of HugePages_Total to the answer you calculated in step a). c) Repeat steps 3, 4, and 5. Tuning HugePages on RHEL 5/OEL 5 On Red Hat Enterprise Linux 5 and on Oracle Enterprise Linux 5 systems, HugePages cannot be configured using the steps mentioned above. We used a shell script called hugepage_settings.sh to configure HugePages on these systems. This script is available on Oracle MetaLink Note The hugepage_settings.sh script configures HugePages as follows: HugePages_Total: HugePages_Free: 2244 HugePages_Rsvd: 2240 Hugepagesize: 2048 kb 54
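The number of huge pages can also be estimated directly from the desired pool size. The shell sketch below only illustrates the arithmetic (it is not the Oracle-supplied hugepage_settings.sh script); the 21 GB pool is an example that leaves headroom above a 20 GB SGA.

HPG_SZ=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')   # huge page size in KB (2048 on these systems)
POOL_KB=$((21 * 1024 * 1024))                                  # desired pool size: 21 GB expressed in KB
echo "vm.nr_hugepages = $((POOL_KB / HPG_SZ))"                 # value to place in /etc/sysctl.conf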

55 Chapter 6: Installation and Configuration More information about HugePages For more information on enabling and tuning HugePages, refer to: Oracle MetaLink Note Tuning and Optimizing Red Hat Enterprise Linux for Oracle 9i and 10g Databases 55

56 Chapter 6: Installation and Configuration Task 8: Set database initialization parameters Overview This section describes the initialization parameters that should be set in order to configure the Oracle instance for optimal performance on the CLARiiON CX4 series. These parameters are stored in the spfile or init.ora file for the Oracle instance. Database block size Parameter Syntax Description Database block size DB_BLOCK_SIZE=n For best database performance, DB_BLOCK_SIZE should be a multiple of the OS block size. For example, if the Linux page size is 4096, DB_BLOCK_SIZE =4096 *n. Direct I/O Parameter Direct I/O Syntax Description FILESYSTEM_IO_OPTIONS=setall This setting enables direct I/O and async I/O. Direct I/O is a feature available in modern file systems that delivers data directly to the application without caching in the file system buffer cache. Direct I/O preserves file system semantics and reduces the CPU overhead by decreasing the kernel code path execution. I/O requests are directly passed to network stack, bypassing some code layers. Direct I/O is a very beneficial feature to Oracle s log writer, both in terms of throughput and latency. Async I/O is beneficial for datafile I/O. Multiple database writer processes Parameter Syntax Description Multiple database writer processes DB_WRITER_PROCESSES=2*n The recommended value for db_writer_processes is that it at least match the number of CPUs. During testing, we observed very good performance by just setting db_writer_processes to 1. Multi Block Read Count Parameter Syntax Description Multi Block Read Count DB_FILE_MULTIBLOCK_READ_COUNT= n DB_FILE_MULTIBLOCK_READ_COUNT determines the maximum number of database blocks read in one 56

57 Chapter 6: Installation and Configuration I/O during a full table scan. The number of database bytes read is calculated by multiplying the DB_BLOCK_SIZE by the DB_FILE_MULTIBLOCK_READ_COUNT. The setting of this parameter can reduce the number of I/O calls required for a full table scan, thus improving performance. Increasing this value may improve performance for databases that perform many full table scans, but degrade performance for OLTP databases where full table scans are seldom (if ever) performed. Setting this value to a multiple of the NFS READ/WRITE size specified in the mount limits the amount of fragmentation that occurs in the I/O subsystem. This parameter is specified in DB Blocks and NFS settings are in bytes - adjust as required. EMC recommends that DB_FILE_MULTIBLOCK_READ_COUNT be set to between 1 and 4 for an OLTP database and to between 16 and 32 for DSS. Disk Async I/O Parameter Disk Async I/O Syntax Description DISK_ASYNCH_IO=true RHEL 4 update 3 and later support async I/O with direct I/O on NFS. Async I/O is now recommended on all the storage protocols. Use Indirect Memory Buffers Parameter Syntax Description Use Indirect Memory Buffers USE_INDIRECT_DATA_BUFFERS=true Required to support the use of the /dev/shm inmemory file system for storing the SGA shared memory structures. 57
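Collected together, the parameters above would appear in the init.ora or spfile along the lines of the following sketch. The values shown (an 8 KB block size, two writer processes, and a multiblock read count of 4) are illustrative choices for an OLTP workload, not the validated settings.

db_block_size=8192
filesystemio_options=SETALL
db_writer_processes=2
db_file_multiblock_read_count=4
disk_asynch_io=TRUE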

58 Chapter 6: Installation and Configuration Task 9: Configure Oracle Database control files and logfiles Control files EMC recommends the following: When you create the control file, allow for growth by setting MAXINSTANCES, MAXDATAFILES, MAXLOGFILES, and MAXLOGMEMBERS to high values. Your database should have a minimum of two control files located on separate physical ASM diskgroups. One way to multiplex your control files is to store a control file copy on every diskgroup that stores members of the redo log groups. Online and archived redo log files EMC recommends that you: Run a mission-critical, production database in ARCHIVELOG mode. Multiplex your redo log files for these databases. Loss of online redo log files could result in a database recovery failure. The best practice to multiplex your online redo log files is to place members of a redo log group on different ASM diskgroups. To understand how redo log and archive log files can be placed, refer to the Reference Architecture diagram. 58
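As an illustration of these recommendations, the sketch below adds a multiplexed online redo log group across two ASM disk groups and points the instance at two control file copies. The disk group names, group number, and sizes are assumptions, and the control file copies themselves must already exist (for example, restored with RMAN) before the parameter change takes effect at the next startup.

SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 9
  2  ('+SRCLOG1/mterac4/redo09a.log', '+SRCLOG2/mterac4/redo09b.log') SIZE 512M;

SQL> ALTER SYSTEM SET control_files =
  2  '+SRCDATA/mterac4/control01.ctl', '+SRCLOG1/mterac4/control02.ctl'
  3  SCOPE=SPFILE SID='*';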

59 Chapter 6: Installation and Configuration Task 10: Enable passwordless authentication using SSH Overview The use of passwordless authentication using ssh is a fundamental concept to make successful use of Oracle RAC 10g or 11g with Celerra. SSH files SSH passwordless authentication relies on the three files described in the following table. File Created by Purpose ~/.ssh/id_dsa.pub ssh-keygen Contains the host s dsa key for ssh authentication (functions as the proxy for a password) ~/.ssh/authorized_keys ssh Contains the dsa keys of hosts that are authorized to log in to this server without issuing a password ~/.ssh/known_hosts ssh Contains the dsa key and hostname of all hosts that are allowed to log in to this server using ssh id_dsa.pub The most important ssh file is id_dsa.pub. Important If the id_dsa.pub file is re-created after you have established a passwordless authentication for a host onto another host, the passwordless authentication will cease to work. Therefore, do not accept the option to overwrite id_dsa.pub if ssh-keygen is run and it discovers that id_dsa.pub already exists. Enabling authentication: Single user/single host The following table describes how to enable passwordless authentication using ssh for a single user on a single host: Step Action 1 Create the dsa_id.pub file using ssh-keygen. 2 Copy the key for the host for which authorization is being given to the authorized_keys file of the host that allows the login. 3 Complete a login so that ssh knows about the host that is logging in. That is, record the host s key and hostname in the known_hosts file. 59
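For a single user on a single host, the three steps in the table above can be performed manually as shown in the sketch below; the remote hostname (rtpsol348) and user are illustrative.

ssh-keygen -t dsa                                 # step 1: accept the default file and an empty passphrase
cat ~/.ssh/id_dsa.pub | ssh root@rtpsol348 \
    "cat >> ~/.ssh/authorized_keys"               # step 2: appends this host's key; also records the remote host in known_hosts and prompts for the password once
ssh root@rtpsol348 date                           # step 3: verifies that the login now completes without a password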

60 Chapter 6: Installation and Configuration Enabling authentication: Single user/multiple hosts Prerequisites To enable authentication for a user on multiple hosts, you must first enable authentication for the user on a single host: Chapter 6: Installation and Configuration > Task 10: Enable passwordless authentication using SSH > Enabling authentication: Single user/single host Procedure summary After you have enabled authentication for a user on a single host, you can then enable authentication for the user on multiple hosts by copying the authorized_keys and known_hosts files to the other hosts. This is a very common task when setting up Oracle RAC 11g//10g prior to installation of Oracle Clusterware. It is possible to automate this task by using the ssh_multi_handler.bash script. ssh_multi_handler.bash #!/bin/bash # # # Script: ssh_multi_handler.bash # # Purpose: Handles creation of authorized_keys # # # ALL_HOSTS="rtpsol347 rtpsol348 rtpsol349 rtpsol350" THE_USER=root mv -f ~/.ssh/authorized_keys ~/.ssh/authorized_keys.bak mv -f ~/.ssh/known_hosts ~/.ssh/known_hosts.bak for i in ${ALL_HOSTS} do ssh ${THE_USER}@${i} "ssh-keygen -t dsa" ssh ${THE_USER}@${i} "cat ~/.ssh/id_dsa.pub" \ >> ~/.ssh/authorized_keys ssh ${THE_USER}@${i} date done for i in $ALL_HOSTS do scp ~/.ssh/authorized_keys ~/.ssh/known_hosts \ ${THE_USER}@${i}:~/.ssh/ done 60

61 Chapter 6: Installation and Configuration for i in ${ALL_HOSTS} do for j in ${ALL_HOSTS} do ssh ${THE_USER}@${i} "ssh ${THE_USER}@${j} date" done done mv -f ~/.ssh/authorized_keys.bak ~/.ssh/authorized_keys mv -f ~/.ssh/known_hosts.bak ~/.ssh/known_hosts exit How to use ssh_multi_handler.bash At the end of the process described below, all of the equivalent users on the set of hosts will be able to log in to all of the other hosts without issuing a password. Step Action 1 Copy and paste the text from ssh_multi_handler.bash into a new file on the Linux server. 2 Edit the variable definitions at the top of the script. 3 Use chmod on the script to allow it to be executed. 4 Run the script. Output on our systems On our systems with the settings noted previously, this script produced the following effect: ssh multi-host output [root@rtpsol347 ~]#./ssh_multi_handler.bash Enter file in which to save the key (/root/.ssh/id_dsa): Generating public/private dsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/id_dsa. Your public key has been saved in /root/.ssh/id_dsa.pub. The key fingerprint is: f8:21:61:55:55:92:15:ed:0a:62:89:c5:ed:93:5f:27 root@rtpsol347.solutions1.rtp.dg.com root@rtpsol347's password: Tue Aug 8 22:21:31 EDT

62 Chapter 6: Installation and Configuration password:...(additional similar output not shown) authorized_keys 100% KB/s 00:00 known_hosts 100% KB/s 00:00 password:...<repeated 3 times> Tue Aug 8 22:22:05 EDT <repeated 15 times> [root@rtpsol347 ~]# The 16 date outputs, without any requests for passwords, indicate that the passwordless authentication files on all root users among these four hosts have been successfully created. Enabling authentication: Single host/different user Another common task is to set up passwordless authentication across two users between two hosts. For example, enable the Oracle user on the database server to run commands as the root or nasadmin user on the Celerra Control Station. You can set this up by using the ssh_single_handler.bash script. This script creates passwordless authentication from the presently logged in user to the root user on the Celerra Control Station. ssh_single_handler.bash #!/bin/bash # # # Script: ssh_single_handler.bash # # Purpose: Handles creation of authorized_keys # # # THE_USER=root THE_HOST=rtpsol33 ssh-keygen -t dsa KEY=`cat ~/.ssh/id_dsa.pub` ssh ${THE_USER}@${THE_HOST} "echo ${KEY} >> \ ~/.ssh/authorized_keys" ssh ${THE_USER}@${THE_HOST} date exit 62

63 Chapter 6: Installation and Configuration Output on our systems On our systems with the settings noted previously, ssh_single_handler.bash produced the following effect: ssh single host output scripts]$./ssh_single_handler.bash Generating public/private dsa key pair. Enter file in which to save the key (/home/oracle/.ssh/id_dsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/oracle/.ssh/id_dsa. Your public key has been saved in /home/oracle/.ssh/id_dsa.pub. The key fingerprint is: 09:13:4d:7d:20:0c:9a:c4:4e:35:c9:c9:11:9e:30:31 Wed Aug 9 09:40:01 EDT 2006 [oracle@rtpsol347 scripts]$ The date output without a password request indicates that the passwordless authentication files have been created. 63

64 Chapter 6: Installation and Configuration Task 11: Set up and configure CLARiiON storage for Replication Manager and SnapView Setup and configuration of CLARiiON storage Carry out the steps in the following table to configure CLARiiON storage to use Replication Manager (RM) and SnapView to create clones for backup and recovery and test/dev. Note To enable the CLARiiON cloning feature, two LUNs with a minimum capacity of 1 GB (for CX4 series) must be designated as clone private LUNs. These LUNs will be used during cloning. Step Action 1 Create a storage group: a. In Navisphere Manager, click the Storage tab, right-click Storage Groups and select Create Storage Group. b. In the Storage Group Name field, enter EMC Replication Storage. The name EMC Replication Storage must be used in order for Replication Manager to work correctly. The name is case sensitive. c. Click Apply, then click OK. 2 Add LUNs to the storage group: a. In Navisphere Manager, under Storage Groups, right-click EMC Replication Storage and choose Select LUNs. (The Storage Group Properties tab is displayed.) b. In the LUNs tab, select all the LUNs that are used to store the database, then click Apply. 3 Create a separate storage group for clone target LUNs: a. In Navisphere Manager, click the Storage tab, right-click Storage Groups and select Create Storage Group. b. In the Storage Group Name field, enter Mount Host SG. 4 Add target LUNs to the Mount Host SG storage group: a. Select Mount Host SG and right-click. b. Click Select LUNs. (The Storage Group Properties tab is displayed.) c. Select the target LUNs to be used for cloning, then click Apply. 5 Add the host to the clone target storage group. This allows the clone target LUNs to be mounted on the target storage group to bring up the test/dev copy of the database. 64
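The same storage group operations can also be scripted with the Navisphere Secure CLI instead of Navisphere Manager. The commands below are only a sketch: the storage processor address, host name, and LUN numbers are placeholders, and the options should be verified against the Navisphere CLI reference for the FLARE release in use.

naviseccli -h <SPA_address> storagegroup -create -gname "EMC Replication Storage"
naviseccli -h <SPA_address> storagegroup -addhlu -gname "EMC Replication Storage" -hlu 0 -alu 20
naviseccli -h <SPA_address> storagegroup -create -gname "Mount Host SG"
naviseccli -h <SPA_address> storagegroup -addhlu -gname "Mount Host SG" -hlu 1 -alu 40
naviseccli -h <SPA_address> storagegroup -connecthost -host <mount_host_name> -gname "Mount Host SG"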

65 Chapter 6: Installation and Configuration Configuring Replication Manager The procedure for preparing and installing Replication Manager (RM) components is outside the scope of this document. After installing Replication Manager Server and Agents, carry out the steps in the following tables to: Add a host Add storage Create a LUNs pool Create an application set Adding a host using RM The following table lists the steps for adding a host using Replication Manager. Step Action 1 In the Replication Manager console, select and right-click Hosts, then select New Host. 2 Type a name for the host (database server name), then click OK. 3 Repeat steps 1 and 2 to add all other hosts (database servers) and the mount host server to Replication Manager. Adding storage using RM After you have added the hosts and the mount host, you must add the storage. The following table lists the steps for adding storage using Replication Manager. Step Action 1 In Replication Manager, select and right-click Storage Services, then select Add Storage. 2 To start the Add Storage Wizard, in the Confirm dialog box, click Yes, then click Next to go to the next screen. 3 On the Select Storage Services screen, select the appropriate storage service and double-click it. 4 On the Array Connections screen, enter the login credentials for the storage array, then click OK. 5 When the Discover Storage screen progress bar reaches 100%, click Close. 6 On the Select Target Devices screen, select the snap cache for snapshots and the LUNs, then click Next. 7 To complete the procedure, click Finish. 8 When the Add Storage screen progress bar reaches 100%, click Close. The storage array is now visible in the Storage Services list. 65

66 Chapter 6: Installation and Configuration Creating a LUNS pool A dedicated storage pool for cloning must be created so that the specified LUNs in that pool can be selected as cloning targets. To create a new storage pool, carry out the steps in the following table: Step Action 1 In Replication Manager, select and right-click Storage Pools, then select New Storage Pool. 2 On the New Pool screen, enter a pool name and a description, then click Add. 3 Select the LUNs that you want to include in the storage pool and click OK. 4 Click Add to add the updated LUNs to the new pool, then click Close. Creating an application set using RM After you have added the storage to Replication Manager, you must then create an application set using the Replication Manager console. The following table lists the steps for creating an application set: Step Action 1 In Replication Manager, select and right-click Application Sets, then select New Application Set. 2 In the Add Application Set wizard, click Next to go to the next screen. 3 On the Application Set Name and Objects screen, click Add Instance to create a database instance. 4 On the Application Credentials screen, enter the Oracle database credentials or the ASM credentials, then click OK. The database instance created. 5 Select the newly created instance and type a name for the application set, then click Next. 6 On the Completing the Application Set wizard screen, click Finish. The application set is created. 66

67 Chapter 6: Installation and Configuration Task 12: Install and configure EMC RecoverPoint Installation When performing the following steps, the RecoverPoint GUI provides many options. A detailed explanation of those options is available in the EMC RecoverPoint Administrator s Guide. Refer to the EMC RecoverPoint Installation Guide for full installation instructions. Configuring RecoverPoint Configuration requires one storage group per site, which consists of the repository volume, journal volumes, and all the replication volumes. The repository volume and journal volumes will not be accessible from any hosts other than the RPAs. But the RPAs will need to have access to all the volumes that will take part in replication. Similarly, there must be one more storage group consisting of hosts and the replication volumes to which the hosts will have access. The hosts should not have access to repository and journal volumes. The steps for validating the disaster recovery solution using RecoverPoint are described in the following table. Step Action 1 Zone all the RPA ports with all the CLARiiON array ports. A single zone consisting of all RPA ports and all CLARiiON ports per site should be created. 2 Manually register the RPA initiators as Initiator Type: RecoverPoint Appliance with the failover mode set to 4. 3 Complete the installation of the RPAs. Refer to Chapter 2 of the EMC RecoverPoint Installation Guide for instructions. Start from the step after booting up the RPA with the new software and continue up to, but not including, attaching the splitters. During our testing, the RecoverPoint appliances at both sites were configured as shown below: RPA Setup Details Parameter Site 1: Value Site 2: Value Site Name Source Target Management Default Gateway Management Subnet Mask WAN default gateway WAN subnet mask Site Management IP Time zone America/New_York America/New_York Primary DNS server Secondary DNS server Local domain solutions1.rtp.dg.com solutions1.rtp.dg.com 67

68 Chapter 6: Installation and Configuration NTP server N/A Number of virtual ports N/A N/A Initiator only mode N/A N/A Number of exposed LUNs N/A N/A Box 1 Box Management IP Box WAN IP Remote maintenance port N/A N/A Box 2 Box Management IP Box WAN IP Remote maintenance port N/A N/A 4 Verify the RPA to CX splitter connectivity by logging in to the CLI of the RPA. You can verify the RPA to CX splitter connectivity using the following option: Run: [3]:Diagnostics > [2]:Fibre Channel diagnostics > [4]:Detect Fibre Channel LUNs If connectivity has been established, you will see the FC LUNs. 5 To add new splitters to the RPA environment, right-click on the splitter icon in the RecoverPoint GUI. Splitters can then be selected and added. Splitters must be added for both sites. 6 Create a consistency group: a) In the navigation pane of the RecoverPoint management console, select Consistency Groups. b) In the Consistency Groups pane, click Add new group. A detailed description of the different options that can be specified for a consistency group is available in Chapter 2 of the EMC RecoverPoint Administrator s Guide. 7 Configure the source and target copies: a) In the navigation pane of the RecoverPoint management console, select Consistency Groups > Oracle DB Consistency Group. b) In the Oracle DB Consistency Group pane, select the Status tab. c) Click the Add Copy icon. The New Copy dialog box is displayed. 8 In the New Copy dialog box, select the site of the production source and enter the values for General Settings and Advanced Settings. Refer to the EMC RecoverPoint Administrator s Guide for a detailed explanation of all the available options. 68

69 Chapter 6: Installation and Configuration 9 Define the replication sets: a) In the navigation pane, select Volumes. b) Click Configuration. The Volumes Configuration dialog box is displayed. c) To create a new replication set, click Add New Replication Set. d) Add a volume to both the source-side replica and target-side replica. e) Add at least one journal volume per replication set. 10 Attach volumes to the splitters: a) In the navigation pane, select Splitters. The Splitters tab with the available splitters is displayed. b) Select a splitter and double-click. c) Click Rescan and then click Attach. The replication sets that are discovered by the splitter are displayed. d) Select the replication sets that you want to add to the splitters. Note A volume cannot be replicated unless it is attached to a splitter. Once the replication sets have been added, the Splitter Properties screen will display all the replication sets that are associated with the splitters. 11 Enable a consistency group by selecting its name in the navigation panel and clicking Enable Group. Note The consistency groups should be enabled before starting replication. 12 Start replication from source to target site by clicking Start Transfer. 13 Enable image access. Once the replication is complete, the target image can be accessed by selecting Image Access from the drop-down menu. 69

70 Chapter 6: Installation and Configuration Task 13: Set up the virtualized utility servers Setting up the virtualized utility servers Virtualized single instance database servers were used as targets for test/dev and disaster recovery solutions. To set up a virtualization configuration, you need to do the steps outlined in the following table: Step Action 1 Deploy a VMware ESX server. 2 Capture the total physical memory and total number of CPUs that are available on the ESX server. 3 Create four virtual machines (VMs) on the ESX server. For the storage network configuration, see Chapter 8: Virtualization > VMware ESX server > Typical storage network configuration. 4 Distribute the memory and CPUs available equally to each of the VMs. 5 Assign a VMkernel IP ( ) to each ESX server so that it can be used to mount NFS storage. For the storage configuration, see Chapter 8: Virtualization > VMware ESX server > Storage configuration. Note All the VMs need to be located on common storage. This is mandatory for performing VMotion. 6 Configure four additional NICs on the ESX server; dedicate each NIC to a VM. These additional NICs are used to configure the dedicated private network connection to Celerra where the database files reside. 7 Ensure all necessary software, for example, Oracle, is installed and configured. Note All database objects are stored on an NFS mount. 70

71 Chapter 6: Installation and Configuration Task 14: Configure and connect EMC RecoverPoint appliances (RPAs) Configuring RecoverPoint appliances (RPAs) Refer to the EMC RecoverPoint Installation Guide, located on Powerlink.com, for full installation and configuration instructions. Access to this document is based on your login credentials. If you do not have access, contact your EMC representative. Task 15: Install and configure EMC MirrorView/A Installing MirrorView/A Refer to the EMC MirrorView Installation Guide, located on Powerlink.com, for full installation instructions. Access to this document is based on your login credentials. If you do not have access, contact your EMC representative. Task 16: Install and configure EMC CLARiiON (CX) splitters Installing CX splitters Refer to the section on installing CLARiiON (CX) splitters in the EMC RecoverPoint Installation Guide, located on Powerlink.com, for full installation instructions. Access to this document is based on your login credentials. If you do not have access, contact your EMC representative. 71

72 Chapter 7: Testing and Validation Chapter 7: Testing and Validation Overview Introduction to testing and validation This chapter provides a detailed summary and description of the tests performed to validate an EMC Proven Solution. The goal of the testing was to record the response of the end-to-end solution and component subsystem under reasonable load. The solution was tested under a load that is representative of the market for Oracle RAC 11g/10g on Linux with an EMC Celerra unified storage platform over Fibre Channel Protocol (FCP) and Network File System (NFS). Objectives The objectives of this testing were to carry out: Performance testing of a blended solution using FCP for datafiles, online redo log files, and controlfiles (all of the files in the Oracle database environment that require high-performance I/O), and NFS for all the other files required to be stored by the Oracle database environment (archived log files, OCR files, flashback recovery area, and database backups). Functionality testing of a test/dev solution using a blended configuration, whereby a cloned version of a physically-booted production Oracle RAC 11g/10g database was replicated and then mounted on a VMware virtual machine running a singleinstance Oracle Database 11g/10g. Cycling issues and peak performance During testing, we observed cycling issues at higher user loads. For the most part, these issues were related to the Quest Benchmark Factory application, which was used to generate the TPC-C OLTP load. As a result, the peak performance for all these tests was considered while ignoring the cycling issues. Detailed investigations are being carried out with the Quest support team to troubleshoot the cycling issues. Contents This chapter contains the following topics: Topic See Page Section A: Store solution component 73 Section B: Basic Backup solution component 79 Section C: Advanced Backup solution component 81 Section D: Basic Protect solution component 84 Section E: Advanced Protect solution component using EMC MirrorView and Oracle Data Guard Section F: Advanced Protect solution component using EMC RecoverPoint Section G: Test/Dev solution component using EMC 98 72

73 Chapter 7: Testing and Validation SnapView clone Section H: Backup Server solution component 101 Section I: Migration solution component

74 Chapter 7: Testing and Validation Section A: Store solution component Overview of the Store solution component The Store solution component was designed as a set of performance measurements to determine the bounding point of the solution stack in terms of performance. A reasonable amount of fine tuning was performed in order to ensure that the performance measurements achieved were consistent with real-world, best-of-breed performance. Test procedure The following procedure was used to validate the Store solution component: Step Action 1 Close all the Benchmark Factory agents that are running. 2 Restart all the client machines. 3 Stop all the database instances. 4 Initiate the Benchmark Factory console and agents on the client machines. 5 Start the Benchmark Factory job. 6 Monitor the progress of the test. 7 Allow the test to finish. 8 Capture the results. 74

75 Chapter 7: Testing and Validation Test results Summary The testing was conducted on a four-node RAC cluster. We scaled the RAC one node at a time, from one node up to four nodes. The summary of the test results is shown below. Test results for 1 RAC node 3 FC shelves RAID 5/RAID 1 The table below shows the best result of the test runs for the 1 RAC node. Users 3800 Transactions per second (TPS) Response time 0.2 The chart below shows the relationship between users, transactions per second (TPS), and response time. 75

76 Chapter 7: Testing and Validation Test results for 2 RAC nodes 3 FC shelves RAID 5/RAID 1 The table below shows the best result of the test runs for the 2 RAC node. Users 6400 Transactions per second (TPS) Response time The chart below shows the relationship between users, TPS, and response time. 76

77 Chapter 7: Testing and Validation Test results for 3 RAC nodes 3 FC shelves RAID 5/RAID 1 The table below shows the best result of the test runs for the 3 RAC node. Users 8400 Transactions per second (TPS) Response time 0.99 The chart below shows the relationship between users, TPS, and response time. 77

78 Chapter 7: Testing and Validation Test results for 4 RAC nodes 3 FC shelves RAID 5/RAID 1 The table below shows the best result of the test runs for the 4 RAC node. Users Transactions per second (TPS) Response time The chart below shows the relationship between users, TPS, and response time. Conclusion These results demonstrate that the validated configuration delivers strong performance for a real-world configuration of the kind customers are likely to deploy. 78

79 Chapter 7: Testing and Validation Section B: Basic Backup solution component Overview of the Basic Backup solution component The Basic Backup solution component demonstrates that the validated configuration is compatible with Oracle Recovery Manager (RMAN) disk-to-disk backup. The backup tests were performance tests, where the performance of each node level was tested while creating an RMAN backup. The restore was a functionality test, but the amount of time required to perform the RMAN restore was also tuned and measured. The transactions that were restored and recovered were examined to ensure that there was no data loss. Test configuration The test configuration for the Basic Backup solution component was identical to the Store solution component. Test procedure The following procedure was used to validate the Basic Backup solution component: Step Action 1 Close all the Benchmark Factory agents that are running. 2 Restart all the client machines and stop all the database instances. 3 Initiate the Benchmark Factory console and agents on the client machines. 4 Start the Benchmark Factory job. 5 Monitor the progress of the test. 6 Initiate RMAN backup at user load 3600 and monitor the progress. 7 Allow the test to finish. 8 Shut down the database and mount the database. 9 Perform the restore operation using RMAN and capture the observations. 79

80 Chapter 7: Testing and Validation Test results Summary There was a moderate increase in response time and a moderate decrease in transaction throughput when RMAN was initiated at user load Apart from that, TPS to user-load scaling was linear and the highest TPS of was achieved at user load Test results The RMAN backup started at user load 5000 and ended at user load The table below shows the best result of the test runs. Users Transactions per second (TPS) Response time The chart below shows the relationship between users, TPS, and response time. Conclusion RMAN provides a reliable high-performance backup solution for the validated configuration. However, the time required to restore the database is significant. 80

81 Chapter 7: Testing and Validation Section C: Advanced Backup solution component Overview of Advanced Backup solution component The Advanced Backup solution component demonstrates that the validated configuration is compatible with CLARiiON SnapView using Replication Manager. The backup tests were performance tests. The performance of each node level was tested while performing hot backup using SnapView snapshot with Replication Manager. The restore was a functionality test. The amount of time required to perform the SnapView restore was tuned and measured Test Type Performance Functional Test Description 1 hot backup using SnapView snapshot with Replication Manager Restore and recover from a SnapView snapshot using Replication Manager Test configuration The test configuration for the Advanced Backup solution component was identical to the Store solution component. Test procedure The following procedure was used to validate the Advanced Backup solution component: Step Action 1 Configure Replication Manager. 2 Register the production hosts, mount hosts, and storage in Replication Manager. 3 Create the application set in Replication Manager for the database to be replicated. 4 Create a job in the Replication Manager console to take the SnapView snapshot. 5 Close all the Benchmark Factory agents that are running. 6 Close the Benchmark Factory console. 7 Restart the Benchmark Factory console and agents. 8 Stop and restart the database instances. 9 Start the Benchmark Factory test with the user load ranging from 4000 to When the user load reaches iteration 5500, take a snapshot of the database by running the job in the Replication Manager console. 81

82 Chapter 7: Testing and Validation 11 Monitor the performance impact on the production database. 12 When the Benchmark Factory test is complete, capture the results. 13 Shut down the database. 14 Stop and disable the ASM instances. 15 Dismount the data diskgroups. 16 Restore the database using Replication Manager. 17 Recover the database. 18 Capture the time taken to restore the database. Test results Performance test results The snapshot job was initiated at user load For the optimum reported iteration, the TPS peaked at 9100 users. This was the highest user count that had a response time of less than 2 seconds. The table below shows the best result of the test runs. Users 9100 Transactions per second (TPS) Response time The chart below shows the relationship between the number of users, TPS, and response time. 82

83 Chapter 7: Testing and Validation Functional test results This test measured the time required to perform a full restore and recovery from a hot backup taken using a CLARiiON SnapView snapshot with Replication Manager. The backup created in the performance test above was used as the source for this restore. Conclusion The CLARiiON SnapView feature works with Oracle RAC 11g/10g for our validated configuration and can be performed successfully using Replication Manager. In most test runs, a very slight performance hit was observed during backup. However, this was temporary and the performance recovered to the expected levels within a short span of time. The restore of a SnapView hot backup is faster than an RMAN disk-to-disk restore. 83

84 Chapter 7: Testing and Validation Section D: Basic Protect solution component Overview The Basic Protect solution component was designed to test the disaster recovery functionality already built into the Oracle database software of the validated configuration. The Basic Protect solution component assumes that there are no additional costs to the customer in terms of software or hardware. Oracle Data Guard was used for the Basic Protect solution. Test configuration The test configuration for the Basic Protect solution component was identical to the Store solution component. Functional validation only Only functional validation was done for the Basic Protect solution component. No tuning or performance measurements were carried out. Test procedure The following procedure was used to validate the use of Data Guard with the validated configuration: Step Action 1 Configure Data Guard for Maximum Performance mode. SQL> select PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE from v$database; PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PHYSICAL STANDBY 2 Enable automatic archival on the database. 3 Place the primary database in Force Logging mode. SQL> ALTER DATABASE FORCE LOGGING; 4 Set the initialization parameters for the primary database. a) Create a parameter file from the spfile used by the primary database. b) Use the following commands to create the pfile initmterac4.ora for the primary database: sqlplus / as sysdba; Create pfile='/u02/dataguard/initmterac4.ora' from spfile; 5 Modify the pfile initmterac4.ora for the Data Guard configuration as shown below: ## *.log_archive_dest_1='location=+srcarch/mterac4/'

85 Chapter 7: Testing and Validation **************************************************** ****************** ADD THE FOLLOWING FOR PRIMARY DATABASE DATAGUARD CONFIG **************************************************** ****************** db_unique_name=mterac4 log_archive_config= dg_config=(mterac4,mterac4_sb) log_archive_dest_1= LOCATION=+SRCARCH/mterac4/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=mterac4' log_archive_dest_2= service=mterac4_sb VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=mterac4_sb' log_archive_dest_state_1=enable log_archive_dest_state_2=enable fal_server=mterac4_sb fal_client=mterac4 db_file_name_convert='+srcdata/mterac4/','+srcdata/m terac4/' log_file_name_convert= +srclog1/mterac4/, +srclog1/ mterac4/, +srclog2/mterac4/, +srclog2/mterac4 standby_file_management=auto 6 Modify the spfile parameters using the parameter file initmterac4.ora : SQL>Create spfile= +SRCDATA/mterac4/spfilemterac4.ora from pfile=/u02/dataguard/initmterac4.ora ; 7 Copy a complete set of database datafiles to the destination CLARiiON LUNs using RMAN. 8 Create the standby control file for the standby database, then open the primary database to user access. SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/u02/dataguard/mterac4_sb.ctl'; SQL> ALTER DATABASE OPEN; 9 Create the parameter file for the standby database. The parameter values for the standby database are the same as the primary database. 10 Create and modify the /u02/dataguard/initmterac4_sb.ora file as follows: 85

86 Chapter 7: Testing and Validation db_unique_name=mterac4_sb log_archive_config= dg_config=(mterac4,mterac4_sb) log_archive_dest_1= LOCATION=+SRCARCH/mterac4/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=mterac4_sb' log_archive_dest_2= service=mterac4 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=mterac4' log_archive_dest_state_1=enable log_archive_dest_state_2=enable fal_server=mterac4 fal_client=mterac4_sb db_file_name_convert='+srcdata/mterac4/','+srcdata/m terac4/' log_file_name_convert= +srclog1/mterac4/, +srclog1/mterac4_sb/, +srclog2/mterac4/, +srclog2/ mterac4_sb/ standby_file_management=auto *.control_files= /u02/dataguard/mterac4_sb.ctl 11 Modify the tnsnames.ora file on all primary database nodes and standby database nodes to include all primary and standby net service names. This allows transparent failover. 12 Create a password file. cd /u01/app/oracle/product/10.2.0/db_1/dbs mv orapwmterac41 orapwmterac41_old mv orapwmterac42 orapwmterac42_old mv orapwmterac43 orapwmterac43_old mv orapwmterac44 orapwmterac44_old orapwd file=orapwmterac41 password=nasadmin orapwd file=orapwmterac42 password=nasadmin orapwd file=orapwmterac43 password=nasadmin 86

87 Chapter 7: Testing and Validation orapwd file=orapwmterac44 password=nasadmin orapwd file=orapwmterac4_sb1 password=nasadmin orapwd file=orapwmterac4_sb2 password=nasadmin orapwd file=orapwmterac4_sb3 password=nasadmin orapwd file=orapwmterac4_sb4 password=nasadmin Important The password for the SYS user must be identical on every node (both primary and standby) for successful transmission of the redo logs. 12 Create the following directory in all standby nodes: /u01/app/oracle/product/10.2.0/db_1/admin/mterac4_sb 13 Create the following directories for each /u01/app/oracle/product/10.2.0/db_1/admin/mterac4_sb directory in all standby nodes: adump bdump cdump udump 14 Register the standby database and database instances with the Oracle Cluster Registry (OCR) using the server control utility. srvctl add database -d mterac4_sb -o /u01/app/oracle/product/10.2.0/db_1 srvctl add instance -d mterac4_sb -i mterac4_sb1 -n mteoradb9 srvctl add instance -d mterac4_sb -i mterac4_sb2 -n mteoradb10 srvctl add instance -d mterac4_sb -i mterac4_sb3 -n mteoradb11 srvctl add instance -d mterac4_sb -i mterac4_sb4 -n mteoradb12 15 Update the value of ORACLE_SID on all standby nodes (mteoradb9, mteoradb10, mteoradb11 and mteoradb12). This setting is contained in the.bash_profile.bash file for all Oracle users. 16 Copy the modified pfile from the standby database (initmterac4_sb.ora) to the standby database nodes. 17 Copy the parameter file and the standby control file to the standby database nodes. Note Because the ASM file system was used to configure the database, no utilities were supported by Oracle to copy the files between ASM disk groups. The control files were kept under the OCFS2 file system 87

88 Chapter 7: Testing and Validation (/u02/dataguard/mterac4_sb.ctl). To use Oracle s utility DBMS_FILE_TRANSFER to copy the contents to ASM disk groups, open the DR database in write mode. However, this defeats the purpose of the standby database. The standby database should not be opened in write mode at the standby site during normal operation. See the initialization parameter file in the previous procedure for the control_files parameter. 18 Perform the following steps to bring up the database: SQL> startup nomount; ORACLE instance started. Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes SQL> alter database mount; Database altered. SQL> show parameter control_files; NAME TYPE VALUE control_files string /u02/dataguard/mterac4_sb.ctl 19 Create the standby redo log files on the standby database. Important The size of the current standby redo log files must exactly match the size of the current primary database online redo log files. 20 Start the managed recovery and realtime apply on the standby database. The statement includes the DISCONNECT FROM SESSION option so that redo apply ran in a background session. SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION; 21 Verify the status of the existing logs on the primary and standby database. Primary database SQL> select GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS from v$log; GROUP# THREAD# SEQUENCE# ARC STATUS 88

89 Chapter 7: Testing and Validation YES INACTIVE NO CURRENT YES INACTIVE NO CURRENT YES INACTIVE NO CURRENT YES INACTIVE NO CURRENT 8 rows selected. Secondary database SQL> select GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS from v$log; GROUP# THREAD# SEQUENCE# ARC STATUS YES CLEARING YES CLEARING_CURRENT YES CLEARING YES CLEARING_CURRENT YES CLEARING YES CLEARING_CURRENT YES CLEARING YES CLEARING_CURRENT 8 rows selected. 22 Force a log switch to archive the current online redo log file on the primary database. SQL> alter system archive log current; System altered. 23 Archive the new redo data on the standby database. To verify whether the standby database received and archived the redo data, on the standby database, query the V$ARCHIVED_LOG view. SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME 2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#; 24 Apply the new archived redo log files. To verify whether the archived redo log files were applied, on the standby database, query the V$ARCHIVED_LOG view. SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#; SEQUENCE# APP 89

90 Chapter 7: Testing and Validation YES 46 YES 46 YES 47 YES 47 YES 47 YES 48 YES 48 YES 48 YES 49 YES 49 YES SEQUENCE# APP YES 50 YES 50 YES 50 YES 51 NO 51 NO 51 YES 79 YES 80 YES 81 YES 82 YES SEQUENCE# APP YES 84 YES 85 NO 25 rows selected. Note The last redo log is not cleared. This is normal as another log is being applied by the time the last redo log is cleared. 90

91 Chapter 7: Testing and Validation Test results Summary The Basic Protect solution component was used for both sites. No attempt was made to emulate a WAN connection as this round of testing included only functionality tests. The production and disaster recovery (DR) sites are directly connected. Test results No tuning or performance measurements were carried out for the Basic Protect solution component. Conclusion This solution component uses tools provided by the operating system and database server software to provide disaster recovery. This is a very effective solution component that uses the database server s CPU, memory, and I/O channels for all operations relating to the disaster recovery configuration. 91

92 Chapter 7: Testing and Validation Section E: Advanced Protect solution component using EMC MirrorView and Oracle Data Guard Overview of Advanced Protect solution component The purpose of this test series was to test the disaster protection functionality of the validated configuration in conjunction with EMC MirrorView. The Advanced Protect solution component uses MirrorView to enhance the disaster recovery functionality of the blended FCP/NFS solution. The use of Oracle Data Guard to create a disaster recovery configuration is an established best practice. MirrorView/Asynchronous (MirrorView/A) over iscsi is commonly used as a way of seeding the database for the Data Guard configuration. After the production database was copied to the target location, redo log shipping was then established using Data Guard. No attempt was made to emulate a WAN connection in this round of testing. The production site and the disaster recovery (DR) site were directly connected over a LAN. The advantages of MirrorView are: The data can be replicated over a long distance. Local access is not required. No downtime on the source database is required. Test configuration The test configuration for the Advanced Backup solution component was identical to the Store solution component. Test procedure The following procedure was used to validate the Advanced Protect solution component: Step Action 1 Close all the Benchmark Factory agents that are running. 2 Close the Benchmark Factory console. 3 Restart the Benchmark Factory console and agents. 4 Stop and restart the database instances. 5 Start the Benchmark Factory test with the user load ranging from 4000 to When the user load reaches iteration 5000, run a script to mirror the LUNs. 7 Wait for the Benchmark Factory test to complete and for the mirrors to synchronize and to become consistent. 8 Fracture the mirrors. 9 Copy the standby control file and the parameter file that is created from 92

93 Chapter 7: Testing and Validation
10 Update the parameters at the target host.
11 Start the database in mount phase on the target host.
12 Do the remaining tasks to enable Data Guard to carry out redo log shipping.
13 Do the switchover and switchback.
Test results
Summary
Testing showed that a Data Guard configuration could be successfully initialized (or "seeded") using MirrorView. However, performance was limited by Data Guard to only 8000 users and 415 TPS. Data Guard is a viable option for remote replication but clearly creates a performance penalty.
Test results
TPS peaked at 8000 users. This was the highest user count with a response time of less than 2 seconds. The table below shows the best result of the test runs.
Users                              8000
Transactions per second (TPS)      415
Response time                      Less than 2 seconds
The chart below shows the relationship between the number of users, TPS, and response time.
93

94 Chapter 7: Testing and Validation Conclusion This solution component enables the creation of a writeable copy of the production database on the disaster recovery target, which allows this database to be used for operations such as backup, test/dev, and data warehouse staging. The solution component uses additional software components at the storage layer to enable disaster recovery, which frees up the database server s CPU, memory, and I/O channels from the effects of these operations. 94
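Step 9 of the test procedure above copies a standby control file and a parameter file from production to the target host. The following is a minimal sketch of that step; the file paths and the target host name are examples, not part of the validated configuration.

[oracle@prod-node1 ~]$ sqlplus / as sysdba <<EOF
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';
CREATE PFILE='/tmp/initstandby.ora' FROM SPFILE;
exit
EOF
[oracle@prod-node1 ~]$ scp /tmp/standby.ctl /tmp/initstandby.ora oracle@target-host:/tmp/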

95 Chapter 7: Testing and Validation Section F: Advanced Protect solution component using EMC RecoverPoint Overview of Advanced Protect solution component The purpose of this solution component was to replicate a small RAC database from the production site and to bring up the replicated database at the target site as a single-instance database on a VMware host. Functional validation only Only functional validation was done for the Advanced Protect solution component using EMC RecoverPoint with CLARiiON (CX) splitters. No tuning or performance measurements were carried out. EMC plans to carry out performance testing during the next test cycle. Test configuration The test configuration as shown in the image below was used to validate the Advanced Protect solution component. 95

96 Chapter 7: Testing and Validation
Test procedure
The following procedure was used to validate the functionality of RecoverPoint with the Advanced Protect solution component:
Step Action
1 Create a small database using the listed LUNs at the production site.
2 Create consistency groups comprising DATA and REDO LOG LUNs.
3 Establish the replication pairs for these LUNs.
4 Enable the consistency groups.
5 Verify that the replication starts successfully.
6 While the replication is in progress, create a table named foo at the source site and insert a row.
7 At the target site, access the latest system-defined bookmark image and discover the replicated LUNs on the VMware ESX server.
8 At the target site, map the LUNs discovered by the ESX server onto the VMware host using Raw Device Mapping (RDM).
Note Detailed instructions for mapping the LUNs from an ESX server to a VMware host using RDM are available in Chapter 8: Virtualization > VMware ESX server > LUN discovery.
9 Once the LUNs are mapped to the VMware host, discover the corresponding PowerPath devices and ASM disks.
10 Mount the ASM disk groups and modify the pfile parameters so that the replicated database can be brought up as a single-instance database.
11 Bring up the single-instance database at the target site and verify that the entries inserted in table foo at the source site have been successfully replicated to the target site.
LUN designation
A single VMware host was used to bring up a single-instance database at the target site. The source LUNs were designated as follows:
Name    Number  Type  RAID  Size
Data    320     FC    R5    2 GB
Log1    321     FC    R5    1 GB
Arch    330     FC    R5    2 GB
Log2    331     FC    R5    1 GB
Flash   340     FC    R5    2 GB
The target LUNs were designated as follows:
96

97 Chapter 7: Testing and Validation
Name           Number  Type  RAID  Size
Data replica   320     FC    R10   2 GB
Log1 replica   321     FC    R10   1 GB
Arch replica   330     FC    R10   2 GB
Log2 replica   331     FC    R10   1 GB
Flash replica  340     FC    R10   2 GB
Test results
Summary
RecoverPoint remote replication was successfully tested for Oracle database environments. The functionality required by the solution was fully validated.
Test results
No tuning or performance measurements were carried out for the Advanced Protect solution component. This will be done in future testing.
Conclusion
EMC RecoverPoint with CX splitters can be used to successfully replicate a database in the context of the validated configuration described in the reference architecture for this solution.
97
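For reference, the row-level check used in steps 6 and 11 of the test procedure above can be performed with a few SQL*Plus calls, as in the sketch below. The table definition and the host prompts are assumptions; only the table name foo comes from the procedure.

# At the source site, while replication is in progress (step 6):
[oracle@prod-node1 ~]$ sqlplus / as sysdba <<EOF
CREATE TABLE foo (id NUMBER, note VARCHAR2(30));
INSERT INTO foo VALUES (1, 'replicated by RecoverPoint');
COMMIT;
exit
EOF
# At the target site, once the single-instance copy is open (step 11):
[oracle@target-vm ~]$ sqlplus / as sysdba <<EOF
SELECT * FROM foo;
exit
EOF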

98 Chapter 7: Testing and Validation
Section G: Test/Dev solution component using EMC SnapView clone
Overview of Test/Dev solution component
The Test/Dev solution component provides a rapid, high-performance method for copying a running Oracle 11g/10g database. The copy can then be used for testing and development purposes.
SnapView clones
EMC SnapView clone can be used to clone a running Oracle RAC 11g/10g production database for rapidly creating testing and development copies of this database. A number of best practices should be followed.
Archived log and flashback recovery area LUNs
When cloning the LUNs, avoid cloning the archived log and flashback recovery area LUNs. These LUNs are usually large and are typically stored on LCFC or SATA II disks. The combination of these settings means that cloning these LUNs will take much longer than cloning the datafile LUNs. If required, you can configure a separate set of LUNs for the archived logs and flashback recovery area for the testing or development database. Since the test/dev database can be easily refreshed, you may choose to simply skip backing up these databases. In this case, a flashback recovery area is not required. In order for the test/dev database to perform a successful recovery, the archived logs from the production database should be accessible to the test/dev database.
You must always move the archived log destination of the test/dev database to a new set of LUNs before opening this database for transactional use (see the example following the SyncRate discussion below). Significant transactional I/O on the test/dev database could create archived logs. If you do not change the archive destination of the test/dev database, and it uses the same LUNs for storing the archived logs as the production database, the test/dev database could overwrite the archived logs of the production database. This could destroy the recoverability of the production database. This setting is contained in the pfile or spfile of the test/dev database in the parameters LOG_ARCHIVE_DEST_1 through LOG_ARCHIVE_DEST_10.
SyncRate
While initializing the clone, the most important setting is the SyncRate. If you need the test/dev database to be created rapidly, specify the SyncRate option as high. This speeds up the synchronization process, but at the cost of a greater performance impact on the production database. If performance on the production database is your primary concern, specify the SyncRate option as low. Here is an example of the naviseccli command using the SyncRate option:
naviseccli -address <SP IP address> snapview -addclone -name lun0clonegrp -luns 50 -SyncRate <high|medium|low|value>
If you do not specify this option, then the default is set as medium.
98
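A minimal sketch of moving the archived log destination of the test/dev database before opening it for transactional use is shown below. The +ARCHCLONE disk group is a placeholder; substitute any destination that is not shared with production. If the clone is started with a pfile rather than an spfile, also edit the pfile so the change persists.

[oracle@testdev-host ~]$ sqlplus / as sysdba <<EOF
-- Point the clone at its own archive destination so it can never
-- overwrite the production archived logs.
ALTER SYSTEM SET log_archive_dest_1='LOCATION=+ARCHCLONE' SID='*';
ARCHIVE LOG LIST
exit
EOF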

99 Chapter 7: Testing and Validation Test configuration The test configuration for the Test/Dev solution component was identical to the Store solution component. A storage group named EMC Replication Storage was created and populated with target LUNs for storing replicas. Replication Manager takes care of copying data from the source LUNs to the target LUNs. The time required to create the clone depends on the size of the source LUN. For this test, only the data and redo log LUNs were cloned. Because of the amount of time required, archived logs and flashback recovery area files were not cloned. Only the source archive and the flash LUNs were used to bring up the cloned database. Testing procedure The following procedure was used to validate the Test/Dev solution component: Step Action 1 Configure Replication Manager. 2 Register the production hosts, mount hosts, and storage in Replication Manager. 3 Create the application set in Replication Manager for the database to be replicated. 4 Create a job in the Replication Manager console to create a SnapView clone. 5 Close all the Benchmark Factory agents that are running. 6 Close the Benchmark Factory console. 7 Restart the Benchmark Factory console and agents. 8 Stop and restart the database instances. 9 Start the Benchmark Factory test with the user load ranging from 4000 to When the user load reaches iteration 6000, take a snapshot of the database by running the job in the Replication Manager console. Monitor the performance impact on the production database. 11 Capture the results when the Benchmark Factory test completed. 12 Discover the PowerPath devices and ASM disks for the mount host (target database server). 13 Perform the mount and recovery of the replica (SnapView clone) on the mount host using Replication Manager. 14 Capture the time taken to recover the database. 99
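Step 12 can typically be carried out with the PowerPath and ASMLib command-line tools, as in the sketch below (run as root on the mount host). The commands are standard; the device names they report will differ in every environment.

[root@mount-host ~]# powermt config                 # claim the newly presented LUNs under PowerPath
[root@mount-host ~]# powermt display dev=all        # confirm the emcpower pseudo-devices
[root@mount-host ~]# /usr/sbin/oracleasm scandisks  # rescan for ASMLib-labeled disks
[root@mount-host ~]# /usr/sbin/oracleasm listdisks  # list the ASM disks now visible to this host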

100 Chapter 7: Testing and Validation Test results Summary The SnapView clone was successfully validated. The performance impact of the operation was minor. Performance was very similar to the baseline. Test results The cloning job was initiated at user load 6000 and ended at user load TPS peaked at 9100 users. This was the highest user count that had a response time of less than 2 seconds. The table below shows the best result of the test runs. Users Transactions per second (TPS) Response time The chart below shows the relationship between the number of users, TPS, and response time. Conclusion The CLARiiON SnapView clone feature works with the validated configuration and can be performed successfully through Replication Manager. In most test runs, a modest performance hit was observed during hot backup, however, this was temporary. The performance recovered to the expected levels after 10 to 15 minutes. This again depends on the size of the database. 100

101 Chapter 7: Testing and Validation
Section H: Backup Server solution component
Overview
The purpose of the Backup Server solution component is to offload the burden of backup from the production database servers to a utility database server running within a VM. The Backup Server solution component is becoming more popular because of higher-performance disk-to-disk snapshot technology (using Replication Manager).
Oracle Database 11g RMAN has a CATALOG BACKUPPIECE command. This command adds information about the target database's on-disk backup pieces to the production database's RMAN repository. The backup pieces should be in a shared location. As long as the backup pieces are accessible to both the production and target databases, RMAN commands such as RESTORE and RECOVER behave transparently across different databases.
Test configuration
The test configuration for the Backup Server solution component was identical to the Store solution component.
Test procedure
The following procedure was used to validate the Backup Server solution component:
Step Action
1 Close all the Benchmark Factory agents that are running.
2 Close the Benchmark Factory console.
3 Restart the Benchmark Factory console and agents.
4 Stop and restart the database instances.
5 Start the Benchmark Factory test and start ramping the user load up from 4000.
6 When the user load is at iteration 5600, initiate cloning of the production database with Replication Manager.
7 Start the ASM instance in the target server, mount the database, and run the RMAN backup.
8 Place the RMAN backup pieces in a shared flashback recovery area. The backup pieces should be accessible to both the target and production servers.
9 Catalog the backup pieces using the CATALOG BACKUPPIECE command within RMAN in the production server.
10 Shut down the production server.
101

102 Chapter 7: Testing and Validation
11 Perform restore and recovery on the production server of the backup taken on the target server.
Test results
Summary
Minimal performance impact was observed on the production server when using the backup server approach to backup. The load relating to backup was essentially offloaded from the production server. The only performance impact occurred when the snapshot was taken.
Test results for 3 FC shelves/ASM with user-defined pools
The table below shows the best result of the test runs.
Users                              5600
Transactions per second (TPS)
Response time
The chart below shows the relationship between users, TPS, and response time.
Conclusion
RMAN provides a reliable, high-performance backup server solution for the Oracle RAC 11g/10g configuration. However, the time required to restore the database is significant. The restore operation for a 2000-warehouse database typically takes around one and a half hours.
102
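Step 9 of the procedure above registers the backup pieces written by the target server in the production database's RMAN repository. A minimal sketch, run on a production node, is shown below; the shared flashback recovery area path and piece name are examples only.

[oracle@prod-node1 ~]$ rman target / <<EOF
# Register every piece found under the shared location.
CATALOG START WITH '/u06/mterac16/backupset/' NOPROMPT;
# Alternatively, register a single piece by name:
# CATALOG BACKUPPIECE '/u06/mterac16/backupset/2008_03_07/o1_mf_nnndf_example_.bkp';
LIST BACKUP SUMMARY;
exit
EOF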

103 Chapter 7: Testing and Validation Section I: Migration solution component Overview The Migration solution component demonstrates that EMC Replication Manager can be used to migrate an Oracle 11g/10g database mounted on FCP/ASM to a target database mounted on NFS with minimum performance impact and no downtime of the production database. Test objectives This test was just a functionality validation of the migration of Oracle 11g/10g database from a SAN to NAS configuration. The performance impact on the production database during online migration was not validated. Test configuration The test configuration for the Migration solution component was identical to the Store solution component. Test procedure The following procedure was used to validate the Migration solution component: Step Action 1 Using EMC Replication Manager, perform a consistent backup of the running production database on the CLARiiON using a CLARiiON SnapView snapshot. 2 Mount (but do not open) this backup on the migration server, in this case a VMware virtual machine (VM) (a physically booted server would also work). The NFS target array is also mounted on the migration server. 3 Using Oracle Recovery Manager (RMAN), make a backup of this database onto the target site. This backup is performed as a database image, so that the datafiles are written directly to the target NFS mount. 4 Switch the migration server to the new database that has been copied by RMAN to the NFS mount. 5 Set the target database in Data Guard continuous recovery mode, and use Data Guard log ship/log apply to catch the target database up to the production version. 6 Once the target database is caught up to production, use Data Guard failover to retarget to the target database. If appropriate networking configuration is performed, clients will see no downtime when this operation occurs. 103
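Step 3 of the procedure above writes the backup to the NFS target as a database image. A minimal sketch of that RMAN call is shown below; it assumes the target NFS file system is mounted at /u09/migrate on the migration server, which is an example path only.

[oracle@migration-vm ~]$ rman target / <<EOF
# Write image copies of all datafiles directly onto the NFS mount.
BACKUP AS COPY DATABASE FORMAT '/u09/migrate/%U';
exit
EOF

Step 4 can then be completed by switching to the copied files, for example with RMAN's SWITCH DATABASE TO COPY command in Oracle Database 11g.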

104 Chapter 7: Testing and Validation
Test results
Summary
We were able to validate the functionality of migrating an online production Oracle database from FCP to NFS. This was accomplished with minimal performance impact and no downtime on the production server.
Test results
The test showed that the impact on the production database was identical to that observed for the Backup Server solution component, described in Chapter 7: Testing and Validation > Section H: Backup Server solution component.
Conclusion
The ability to migrate a database from SAN to NAS is a frequent customer request. Customers are often required to switch from SAN to NAS, or from NAS to SAN, based on ever-changing requirements. This solution proves that EMC Replication Manager can be a very effective application to achieve this goal with ease, with minimal impact and no downtime to the production database.
104

105 Chapter 8: Virtualization Chapter 8: Virtualization Overview Introduction to virtualization Virtualization lets a customer run multiple virtual machines on a single physical machine, sharing the resources of that single computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer. The VMware virtualization platform is built on a business-ready architecture. Customers can use software such as VMware vsphere and VMware ESX to transform or virtualize the hardware resources of an x86-based computer - including the CPU, RAM, hard disk, and network controller - to create a fully functional virtual machine that can run its own operating system and applications just like a real (or physical) computer. Server virtualization offers energy efficiencies, cost savings, and better management of service level agreements. VMware ESX abstracts server processor, memory, storage, and networking resources into multiple virtual machines. By doing so, it can dramatically improve the utilization of these resources. Using virtualization Virtualized Oracle database servers were used as targets for test/dev, backup, and disaster recovery for this solution. These servers are more conveniently managed as virtual machines than as physically booted Oracle database servers. The advantages of consolidation, flexible migration and so forth, which are the mainstays of virtualization, apply to these servers very well. A single VMware Linux host was used as the target for test/dev, backup, and disaster recovery. For test/dev, the target database was brought up as a singleinstance database on the VMware host. Similarly, the standby database for disaster recovery was a single-instance database running on a VMware host. This chapter provides procedures and guidelines for installing and configuring the virtualization components that make up the validated solution scenario. Contents This chapter contains the following topics: Topic See Page Advantages of virtualization 106 Considerations 106 VMware infrastructure 106 Virtualization best practices 109 VMware ESX server 110 VMware and NFS

106 Chapter 8: Virtualization Advantages of virtualization Advantages Some advantages of including virtualized test/dev and disaster recovery (DR) target servers in the solution are: Consolidation Flexible migration Cloning Reduced costs Considerations Virtualized single-instance Oracle only Due to the requirement for RAC qualification, presently there is no support for Oracle 11g and 10g RAC servers on virtualized devices. For this reason, EMC does not publish such a configuration as a supported and validated solution. However, the use of Oracle Database 11g and 10g (in singleinstance mode) presents far fewer support issues. VMware infrastructure Setting up the virtualized utility servers For details on setting up the virtualized utility servers, see Chapter 6: Installation and Configuration > Task 13: Set up the virtualized utility servers > Setting up the virtualized utility servers. Virtualization best practices VMotion storage requirements You must have a common storage network configured on both source and target ESX servers to perform VMotion. The network configuration including the vswitch names should be exactly the same. The connectivity to the LUNs on the back-end storage from the ESX servers should also be established in the same way. ESX servers must have identical configuration All ESX servers must have an identical configuration, other than the IP address for the VMkernel port. 106

107 Chapter 8: Virtualization
Dedicated private connection
When NFS connectivity is used, it is a best practice to have a dedicated private connection to the back-end storage from each of the VMs. We did the following:
Assigned four NICs (one NIC for each VM) on the ESX server
Assigned private IPs to the NICs
Set up the connectivity from these four NICs to the Data Movers of the back-end storage using a Dell PowerConnect switch
NFS mount points
If the Oracle database files sit on NFS storage, the NFS share should be mounted as a file system within the Linux guest VM using /etc/fstab (an example entry is shown at the end of this topic). This can deliver vastly superior performance when compared to storing Oracle database files on virtual disks that reside on an NFS share and are mounted as NFS datastores on an ESX server.
VMware ESX server
Typical storage network configuration
The image below contains some detail from the Configuration > Network tab on the VMware vSphere vCenter server. The storage network configuration is shown. We used two physical NICs to support the storage network. This provides physical port redundancy, as well as link aggregation and load balancing across the NICs. Ports are provided for both the NFS mounts on the VMs and the NFS mounts on the ESX servers.
107
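A typical /etc/fstab entry for such a guest-level mount is shown below. The server name, export path, and mount point are examples, and the mount options reflect commonly recommended settings for Oracle datafiles over NFS; verify them against the support matrix for your kernel and database release.

# /etc/fstab entry inside the Linux guest VM (example values)
celerra-dm2:/oradata  /u02  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0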

108 Chapter 8: Virtualization Storage configuration In the image below, the VMkernel Storage Network is being used to store the files for the VMs (through NFS). The storage pane shows that the NFS-mounted volume vm is where these files are stored. LUN discovery LUNs can be discovered on an ESX server in two ways: The first method uses Virtual Machine File System (VMFS). The second method uses Raw Device Mapping (RDM). EMC recommends using RDM for discovering the LUNs on the ESX server because RDM provides better disk I/O performance and also supports VMotion. VMware and NFS NFS mounts in ESX NFS is a viable storage option for VMware ESX. It provides a simple and manageable storage networking alternative to FCP or iscsi. NFS does not require the ESX servers to run a clustered file system (in this case VMFS). For the utility servers used in this solution, NFS was used to store the OS images for the VMs. 108

109 Chapter 9: Backup and Restore Chapter 9: Backup and Restore Overview Introduction to backup and restore A thoughtful and complete backup strategy is an essential part of database maintenance in a production environment. Data backups are an essential part of any production environment. Regardless of the RAID protection level, hardware redundancy, and other high-availability features present in EMC Celerra storage arrays, conditions exist where you may need to be able to recover a database to a previous point in time. This solution used EMC SnapView to free up the database server s CPU, memory, and I/O channels from the effects of operations relating to backup, restore, and recovery. Scope This section covers the use of SnapView snapshots to perform backup and restore operations on Oracle RAC database servers. Important note on scripts The scripts provided assume that the passwordless authentication is set up using ssh between the oracle user account and the Celerra Control Station. Passwordless authentication allows the oracle user account to issue commands to the Control Station within a script. Instructions on how to accomplish this can be found in Chapter 6: Installation and Configuration > Task 10: Enable passwordless authentication using SSH. Contents This chapter contains the following topics: Topic See Page Section A: Backup and restore concepts 110 Section B: Backup and recovery strategy 112 Section C: Physical backup and restore 116 Section D: Replication Manager in Test/Dev and Advanced Backup solution components

110 Chapter 9: Backup and Restore Section A: Backup and restore concepts Physical storage backup A full and complete copy of the database to a different physical media. Logical backup A backup that is performed using the Oracle import/export utilities. The term logical backup is generally used within the Oracle community. Logical storage backup Creating a backup using a logical image is referred to as a logical storage backup. A logical storage backup is a backup that does not physically exist. Rather, it consists of the blocks in the active file system, combined with blocks in a SavVol, an area where the original versions of the updated blocks are retained. The effect of a logical storage backup is that a view of the file system as of a certain point in time can be assembled. Unlike a physical storage backup, a logical storage backup can be taken very rapidly, and requires very little space to store (typically a small fraction of the size of a physical storage backup). Important Taking logical storage backups is not enough to protect the database from all risks. Physical storage backups are also required to protect the database against double disk failures and other hardware failures at the storage layer. Flashback Database The Oracle Flashback Database command enables you to restore an Oracle database to a recent point in time, without first needing to restore a backup of the database. EMC SnapView snapshot The EMC SnapView snapshot allows a database administrator to create a point-intime copy of the database that can be made accessible to another host or simply held as a point-in-time copy for possible restoration. 110
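As an illustration of the Flashback Database capability described above, the sketch below rewinds a database by one hour. It assumes that a flash recovery area is configured, that FLASHBACK DATABASE is enabled, and that the one-hour interval is only an example; in a RAC configuration, all other instances must be shut down first.

[oracle@prod-node1 ~]$ sqlplus / as sysdba <<EOF
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO TIMESTAMP (SYSDATE - 1/24);
ALTER DATABASE OPEN RESETLOGS;
exit
EOF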

111 Chapter 9: Backup and Restore Advantages of logical storage Recovery from human errors Logical backup protects against logical corruption of the database, as well as accidental file deletion, and other similar human errors. Frequency without performance impact The logical storage operation is very lightweight and, as a result, a logical storage backup can be taken very frequently. Most customers report that they cannot perceive the performance impact of this operation because it is so slight. Reduced MTTR Depending on the amount of data changes, restoring from a logical storage backup can occur very quickly. This dramatically reduces mean time to recovery (MTTR) compared to what can be achieved restoring from a physical backup. Less archived redo logfiles Due to the high frequency of backups, a small number of archived redo log files need to be applied if a recovery is needed. This further reduces mean time to recovery. 111

112 Chapter 9: Backup and Restore Section B: Backup and recovery strategy Use both logical and physical backup Logical backup In the context of this document, the term logical storage backup is distinguished from the term logical backup, which is generally used within the Oracle community and which is a backup performed using the import/export utilities. A logical storage backup is a backup that does not physically exist. Rather, it consists of the blocks in the active LUN, combined with blocks in a SavVol, an area where the original versions of updated blocks are retained. The effect of a logical storage backup is that you can assemble a view of the LUN as of a certain point in time. A logical storage backup is distinguished also from a physical storage backup, which is a full and complete copy of the database objects made to different physical media. Unlike a physical storage backup, a logical storage backup can be taken very rapidly, and requires very little space to store (typically a small fraction of the size of a physical storage backup). At the center of the logical backup approach is the EMC CLARiiON SnapView feature, which uses the Navisphere SnapView command. This command allows a database administrator to capture a logical storage backup of an entire LUN, or with consistency technology, a group of LUNs. The best practice for the backup of Oracle Database 11g/10g is to perform approximately six logical storage backups per day, at four-hour intervals, using CLARiiON SnapView. To facilitate the ability to recover smaller granularities than the datafile (a single block for example), you should catalog all the SnapView snapshots backups within the RMAN catalog. Physical backup As logical backups do not protect you from hardware failures (such as double-disk failures), you should also perform one physical backup per day, typically during a period of low user activity. For this purpose, EMC recommends RMAN using an incremental strategy, if the database is larger than 500 GB, and using a full strategy otherwise. Further, EMC recommends that the RMAN backup be to a SATA II disk configuration rather than to tape. Reduced mean time to recovery Using a strategy that combines physical and logical backups optimizes the mean time to recovery. In the event of a fault that is not related to the hardware, you can restore instantly from a SnapView snapshot. According to Oracle, approximately 90 percent of all restore/recovery events are not related to hardware failures, but rather to user errors, such as deleting a datafile or truncating a table. Further, the improved frequency of backups over what can be achieved with a blended physical backup strategy means that you have fewer logs to apply, thereby improving mean time to recovery (MTTR). Even in the case where you need to restore from physical backup, the use of SATA II disk will improve restore time. 112

113 Chapter 9: Backup and Restore Logical storage backup using EMC SnapView and EMC Replication Manager Overview of SnapView A logical storage backup consists of virtual copies of all LUNs being used to store datafiles. This is enabled by EMC CLARiiON SnapView. Using EMC consistency technology, multiple LUNs can be used to store an automated storage management (ASM) disk group and snapshots of all of those LUNs can be created in a consistent manner. SnapView snapshot A SnapView snapshot is a virtual point-in-time copy of a LUN. This virtual copy is assembled by using a combination of data in the source LUN, and the before images of updated blocks that are stored on the CLARiiON target array in the Reserved LUN Pool (RLP). Note In Replication Manager, the RLP is referred to as the snap cache. We will adhere to Replication Manager terminology and use the term snap cache. Multiple restore points using EMC SnapView The following image compares backup using SnapView to conventional backup over a typical 24-hour period. Midnight Database Storage Tape Midnight BEFORE AFTER Conventional backup Celerra SnapSure CLARiiON SnapView Midnight 4:00 a.m. 8:00 a.m. Noon 4:00 p.m. 8:00 p.m. Midnight Rapid restore and recovery using EMC SnapView The following image compares restore and recovery using SnapView to conventional backup over a typical 24-hour period. Backup time Data loss Recovery time Backup Restore (move data files) Recovery (apply logs) BEFORE Tape backup Multiple hours AFTER Celerra SnapSure CLARiiON SnapView Backup time Minutes Recovery time 113

114 Chapter 9: Backup and Restore Replication Manager EMC Replication Manager automates the creation and management of EMC diskbased point-in-time replicas. Replication Manager integrates with the Oracle database server and provides an easy interface to create and manage Oracle replicas. Logical storage process A typical backup scheme would use six logical storage backups per day, at four-hour intervals, combined with one physical storage backup per day. The procedure described next can be integrated with the Oracle Enterprise Manager job scheduling process or cron. This can be used to execute a logical storage backup once every four hours. Logical storage backup using Replication Manager The following table outlines the steps used to perform a backup using SnapView snapshots: Step Action 1 Create a Replication Manager job for SnapView snapshot: a. Select Jobs, right-click and select New Job. b. To go to the Job Name and Settings screen, click Next. c. Enter: job name, replication source, replication technology, and number of replicas to be created, then click Next. d. Select the mount options, then click Next. Note: If necessary, you can select the mounting options at a later point in time. e. At the Completing the Job wizard screen, click Finish. 2 Execute a Replication Manager job: a. In Replication Manager, select Jobs from the navigation panel. b. Select a job, right-click and select Run to execute the job. c. To confirm that you want to run the job, click Yes. d. When the Running Job progress bar reaches 100%, click Close. The status of the job displays as Successful. 3 To verify the Snapshot Session information, select the appropriate item under Storage Services. Note: You can also use Navisphere to view the snapshot sessions that were created using Replication Manager. 114
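If cron is used instead of the Oracle Enterprise Manager scheduler, a crontab entry such as the one below runs a wrapper script every four hours. The script name and log path are placeholders for whatever mechanism launches the Replication Manager job in your environment.

# crontab entry for the oracle user: logical storage backup every four hours
0 0,4,8,12,16,20 * * * /home/oracle/scripts/rm_snap_backup.bash >> /home/oracle/logs/rm_snap_backup.log 2>&1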

115 Chapter 9: Backup and Restore Best practice During backup, EMC recommends that you: Switch the log files Archive all the log files Back up the control file The instructions on how to carry out these operations are beyond the scope of this document. 115
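For reference, a minimal sketch of these three operations is shown below. The control file backup location is an example and should reside on storage that is included in the physical backup.

[oracle@prod-node1 ~]$ sqlplus / as sysdba <<EOF
-- Switch and archive the current online redo logs on all threads
ALTER SYSTEM ARCHIVE LOG CURRENT;
-- Back up the control file, both as a binary copy and as a creation script
ALTER DATABASE BACKUP CONTROLFILE TO '/u06/mterac16/control_backup.ctl' REUSE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
exit
EOF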

116 Chapter 9: Backup and Restore
Section C: Physical backup and restore
Physical backup using Oracle RMAN
RMAN and Celerra
Physical backup of the production array can be accomplished using Oracle RMAN. The target was an NFS mount on the Celerra. The backup target is typically SATA or LCFC disks on the target array. If tape is used, a product that provides a media management layer, such as EMC NetWorker or Oracle Secure Backup, must be used. Normal RMAN semantics apply to this backup method. This is thoroughly covered on the Oracle Technology Network website and will not be included in this document.
RMAN backup script: rmanbkp.bash
Run the following script from the database server to carry out physical backup of a Celerra array using Oracle RMAN:
[oracle@mteoradb55 ~]$ cat rmanbkp.bash
#!/bin/bash
. ~/.bash_profile
. /cygdrive/c/common/initialize.bash
echo "This is rmanbkp.bash"
echo "rman"
echo "connect target /"
echo "backup database plus archivelog;"
echo "exit"
rman <<EOF2
connect target /
backup database plus archivelog;
exit
EOF2
echo "Now exiting rmanbkp.bash"
exit
[oracle@mteoradb55 ~]$
116

117 Chapter 9: Backup and Restore System output of rmanbkp.bash The output our system produces after running rmanbkp.bash is shown below: ~]$. rmanbkp.bash This is rmanbkp.bash Starting RMAN Backup Recovery Manager: Release Production on Fri Mar 7 00:49: Copyright (c) 1982, 2005, Oracle. All rights reserved. connected to target database: MTERAC16 (DBID= ) RMAN> backup database; Starting backup at 07-MAR-08 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: sid=4512 instance=mterac161 devtype=disk channel ORA_DISK_1: starting full datafile backupset channel ORA_DISK_1: specifying datafile(s) in backupset input datafile fno=00008 name=+data/mterac16/datafile/test input datafile fno=00009 name=+data/mterac16/datafile/test input datafile fno=00010 name=+data/mterac16/datafile/test input datafile fno=00011 name=+data/mterac16/datafile/test input datafile fno=00012 name=+data/mterac16/datafile/test input datafile fno=00013 name=+data/mterac16/datafile/test input datafile fno=00014 name=+data/mterac16/datafile/test input datafile fno=00015 name=+data/mterac16/datafile/test input datafile fno=00016 name=+data/mterac16/datafile/test input datafile fno=

118 Chapter 9: Backup and Restore name=+data/mterac16/datafile/test input datafile fno=00018 name=+data/mterac16/datafile/test input datafile fno=00002 name=+data/mterac16/datafile/undotbs input datafile fno=00005 name=+data/mterac16/datafile/undotbs input datafile fno=00003 name=+data/mterac16/datafile/sysaux input datafile fno=00001 name=+data/mterac16/datafile/system input datafile fno=00006 name=+data/mterac16/datafile/undotbs input datafile fno=00007 name=+data/mterac16/datafile/undotbs input datafile fno=00004 name=+data/mterac16/datafile/users channel ORA_DISK_1: starting piece 1 at 07-MAR-08 channel ORA_DISK_1: finished piece 1 at 07-MAR-08 piece handle=/u06/mterac16/backupset/2008_03_07/o1_mf_nnndf_tag T005008_3x1oz3t8_.bkp tag=tag t comment=none channel ORA_DISK_1: backup set complete, elapsed time: 01:21:26 Finished backup at 07-MAR-08 Starting Control File and SPFILE Autobackup at 07-MAR-08 piece handle=/u06/mterac16/autobackup/2008_03_07/o1_mf_s_ _3 x1to7tb_.bkp comment=none Finished Control File and SPFILE Autobackup at 07-MAR-08 RMAN> End of RMAN backup rmanbkp.bash 118

119 Chapter 9: Backup and Restore Section D: Replication Manager in Test/Dev and Advanced Backup solution components Overview The Test/Dev and Advanced Backup solution components are integrated with EMC Replication Manager. This has significant advantages in that Replication Manager provides a layered GUI application to manage these processes. This includes a scheduler so that the jobs can be run on a regular basis. Replication Manager, however, introduces a few issues that are covered in this section. Oracle home location Currently, Replication Manager does not support ASM and Oracle having separate Oracle homes. This may be confusing, because the Oracle installation guide presents an installation in which ASM is located in its own home directory. Important If you choose to use Replication Manager for storage replication management, install Oracle and ASM in the same home directory. Dedicated server process Replication Manager cannot create an application set when connected to the target database using SHARED SERVER. Replication Manager requires a dedicated server process. In the TNSNAMES.ORA file, you must modify the value of SERVER as shown below to connect to the target database. This is only needed for the service that is used for the Replication Manager connection. # tnsnames.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora # Generated by Oracle configuration tools. MTERAC211 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = mteoradb67-vip)(port = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = mterac21) (INSTANCE_NAME = mterac211) ) ) 119

120 Chapter 10: Data Protection and Replication Chapter 10: Data Protection and Replication Overview Introduction This solution provides options to create local and remote replicas of application data that are suitable for testing, development, reporting, and disaster recovery and many other operations that can be important in your environment. Contents This chapter contains the following topics: Topic Section A: Basic Protect using Oracle Data Guard 121 See Page Section B: Advanced Protect using EMC MirrorView and Oracle Data Guard 122 Section C: Advanced Protect using EMC RecoverPoint

121 Chapter 10: Data Protection and Replication
Section A: Basic Protect using Oracle Data Guard
Overview
Oracle Data Guard is used with the Basic Protect solution component. For best practices on Oracle Data Guard configuration, refer to the Oracle documentation on this subject.
EMC CLARiiON SAN Copy and Oracle Data Guard
The best practice for disaster recovery of an Oracle Database 11g/10g over NFS is to use CLARiiON SAN Copy for seeding the disaster recovery copy of the production database, and then to use the Oracle Data Guard log transport and log apply services. The source of the database used for seeding the disaster recovery site can be a hot backup of the production database within a Celerra SnapSure checkpoint. This avoids any downtime on the production server relative to seeding the disaster recovery database.
The configuration steps for shipping the redo logs and bringing up the standby database are accomplished using Oracle Data Guard. The Data Guard failover operation was performed in MAXIMUM AVAILABILITY mode. For best practices on Oracle Data Guard configuration, refer to the Oracle documentation on this subject.
The following image illustrates the setup for disaster recovery using CLARiiON SAN Copy and Oracle Data Guard.
121

122 Chapter 10: Data Protection and Replication
Section B: Advanced Protect using EMC MirrorView and Oracle Data Guard
Overview
The use of Oracle Data Guard to create a disaster recovery configuration is an established best practice. EMC MirrorView/Asynchronous (MirrorView/A) over iSCSI is commonly used as a way of seeding the database for the Data Guard configuration. Once the production database has been copied to the target location, redo log shipping can be established using Data Guard.
The use of MirrorView over iSCSI requires specific network configuration. Various means can be used to bridge an iSCSI network over a WAN connection so that the data on an iSCSI network can be transmitted over long distances.
The advantages of MirrorView are:
The data can be replicated over a long distance.
Local access is not required.
No downtime on the source database is required.
Note EMC assumes that you have established a mechanism to transmit the data from the source to the target array over an IP network using the iSCSI protocol.
MirrorView mirroring prerequisites
Before starting the mirroring procedure:
The source and target arrays must be in the same CLARiiON domain. You must designate the source CLARiiON as the master and the target CLARiiON as a domain member. Once this is done, both the source and target arrays appear in the same Navisphere GUI and can be managed together. This is mandatory for MirrorView mirroring to be established.
Reserved LUN pool LUNs are required for MirrorView mirroring. You should configure a large number of small LUNs for best results. We configured 20 LUNs of 10 GB in size.
Seeding the Oracle database
Use MirrorView to seed the Oracle database for the Data Guard configuration as described in the following table:
Step Action
1 Create a consistency group on the source array with the following command:
[root@mteoradb1 db_root]# naviseccli -h mirror -async \
-creategroup -name ConsistentDBGroup -o
2 Create a mirror group for the source LUNs, for example:
[root@mteoradb59 ~]# naviseccli -h mirror -async -create -name LUN1_LOG1_mirror -lun 1 -
122

123 Chapter 10: Data Protection and Replication requiredimages 1 Warning Make sure you are finished enabling paths among all arrays. If not, exit and do so. 3 Verify that all the mirror groups were created successfully using the following command: [root@mteoradb59 ~]# naviseccli -h mirror -async -list 4 Use the following script to add the target LUNs to the mirror groups and also to the consistency group: Code Listing 22: MirrorView/A Data Guard seeding script [root@mteoradb59 db_root]# cat mirror_new.bash echo "This is mirror.bash" DATA_LUNS=" " LOG_LUNS="1 2" SPA= echo "Add the target LOG LUNs for mirroring" for i in ${LOG_LUNS} do echo "Now adding lun LUN${i} of target Clarion to mirror" naviseccli -address ${SPA} mirror -async -addimage - name LUN${i}_LOG${i}_mirror -arrayhost lun 5${i} -recoverypolicy auto -syncrate high done echo "Add the target DATA LUNs for mirroring" for i in ${DATA_LUNS} do echo "Now adding lun LUN${i} of target Clarion to mirror" naviseccli -address ${SPA} mirror -async -addimage - name LUN${i}_DATA_mirror -arrayhost lun 5${i} -recoverypolicy auto -syncrate high done echo "Now adding mirror for LOG LUNS to consistent group." for i in ${LOG_LUNS} do naviseccli -address ${SPA} mirror -async -addtogroup -name ConsistentDBGroup -mirrorname 123

124 Chapter 10: Data Protection and Replication LUN${i}_LOG${i}_mirror done echo "Now adding mirror for DATA LUNS to consistent group." for i in ${DATA_LUNS} do naviseccli -address ${SPA} mirror -async -addtogroup -name ConsistentDBGroup -mirrorname LUN${i}_DATA_mirror done echo "Now exiting mirror.bash" Code Listing 23: The output from the MirrorView/A script [root@mteoradb59 db_root]#./mirror_new.bash This is mirror.bash Add the target LOG LUNs for mirroring Now adding lun LUN51 of target Clarion to mirror Now adding lun LUN52 of target Clarion to mirror Add the target DATA LUNs for mirroring Now adding lun LUN53 of target Clarion to mirror Now adding lun LUN54 of target Clarion to mirror Now adding lun LUN55 of target Clarion to mirror Now adding lun LUN56 of target Clarion to mirror Now adding lun LUN57 of target Clarion to mirror Now adding lun LUN58 of target Clarion to mirror Now adding mirror for LOG LUNS to consistent group. Now adding mirror for DATA LUNS to consistent group. Now exiting mirror.bash 5 Verify the mirroring status by executing the following command: [root@mteoradb59 ~]# naviseccli -h mirror -async \ -list -images grep Progress Synchronizing Progress(%): 100 Synchronizing Progress(%): 100 Synchronizing Progress(%): 100 Synchronizing Progress(%): 100 Synchronizing Progress(%): 100 Synchronizing Progress(%):

125 Chapter 10: Data Protection and Replication
Synchronizing Progress(%): 100
Synchronizing Progress(%):
6 Once the mirrors are synchronized, fracture them using the following command:
[root@mteoradb59 ]# naviseccli -h mirror -async \
-fracturegroup -name ConsistentDBGroup -o
Note The mirror LUNs will become available for I/O only after they are fractured.
Once the seeding of the database is complete, the remaining tasks for shipping the redo logs can be performed.
Shipping the redo logs using Data Guard
The configuration steps for shipping the redo logs and bringing up the standby database are accomplished using Oracle Data Guard. The semantics are covered thoroughly on the Oracle Technology Network website and will not be included here.
Data Guard failover operation
The Data Guard failover operation is performed in MAXIMUM AVAILABILITY mode using the following steps:
Step Action
1 Shut down all the database instances at the production/primary site.
MTERAC71> shutdown abort
ORACLE instance shut down.
MTERAC72> shutdown abort
ORACLE instance shut down.
MTERAC73> shutdown abort
ORACLE instance shut down.
MTERAC74> shutdown abort
ORACLE instance shut down.
2 Issue the following commands to change the standby database to primary:
[MTERAC7 Production Database & MTERAC7-SB Standby Database]
MTERAC7_SB>SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
no rows selected
125

126 Chapter 10: Data Protection and Replication MTERAC7_SB>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH; Database altered. MTERAC7_SB>ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY; Database altered. MTERAC7_SB>shutdown immediate; ORA-01109: database not open Database dismounted. ORACLE instance shut down. mteora7_sb>set sqlprompt NEW_PRIMARY> NEW_PRIMARY>startup ORACLE instance started. Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes Database mounted. Database opened. NEW_PRIMARY> select DATABASE_ROLE, SWITCHOVER_STATUS, GUARD_STATUS from v$database; DATABASE_ROLE SWITCHOVER_STATUS GUARD_S PRIMARY NOT ALLOWED NONE 126

127 Chapter 10: Data Protection and Replication Section C: Advanced Protect using EMC RecoverPoint Overview Introduction EMC RecoverPoint with the CLARiiON (CX) splitter provides several advantages: You do not need to install a dedicated splitter driver on the database servers. You do not need to install a special hardware driver into an FCP switch. This provides savings on both cost and manageability. All of the splitter driver functionality is incorporated into the array. Supporting documentation The following documents, located on Powerlink.com, provide additional, relevant information. Access to these documents is based on your login credentials. If you do not have access, contact your EMC representative: EMC RecoverPoint Administrator s Guide EMC RecoverPoint Installation Guide Contents This section contains the following topics: Topic See Page EMC RecoverPoint 128 CLARiiON (CX) splitters 128 Scope 129 Best practices 129 Conclusion

128 Chapter 10: Data Protection and Replication EMC RecoverPoint RecoverPoint EMC RecoverPoint provides integrated continuous remote replication (CRR) and continuous data protection (CDP). The RecoverPoint family enables data to be replicated and recovered from either the local site (CDP) or at a remote site (CRR). RecoverPoint CRR complements the existing portfolio of EMC remote-replication products by adding heterogeneous replication (with bandwidth compression) in asynchronous-replication environments, which lowers multi-year total cost of ownership. RecoverPoint CDP offered as a stand-alone solution, or combined with CRR, enables you to roll back to any point in time for effective local recovery from events such as database corruption. Release 3.0 and later of RecoverPoint includes support for the CLARiiON (CX) splitter driver. These are array-based splitters that can be directly installed on a CLARiiON. Replication with RecoverPoint In normal replication, the target-site volumes are not accessible from the host. This is to prevent the RecoverPoint appliances and the host servers from trying to write to the volumes at the same time. Target-side processing allows the user to roll back the replication volume to a pointin time snapshot. Once the target replication volumes have been paused on this snapshot, the target host server is given access to these volumes. While the target replication volumes are paused on this snapshot, replication continues from the source site. Snapshots are still stored in the snapshot portion of the target journal volume, but are not distributed to the replication volume because it is paused on a snapshot. Once the replication is complete, the target image can be accessed by selecting Image Access from the drop-down menu. CLARiiON (CX) splitters CX splitters The CX splitter is integrated with each storage processor of the CLARiiON array. This sends one copy of the I/O to the storage array and another copy to the RecoverPoint appliance. The main advantages of the CX array splitters are that they: Reduce the cost associated with the additional host-based splitter agents or specialized fabric. Provide concurrent local and remote (CLR) protection. CLR replication can be performed on the same LUNs. When an application writes to storage, the CX splitter splits the data and sends a copy of the data over Fibre Channel to the source-site RPA, and the other copy to the storage array. These requests are acknowledged by the source RPAs and are 128

129 Chapter 10: Data Protection and Replication then replicated to the remote RPAs over the IP network. The remote RPAs will write this data over to journal volumes at the remote site and the consistent data is distributed to the remote volumes later. Scope Functional validation only Only functional validation was done for the Advanced Protect solution using EMC RecoverPoint. No tuning or performance measurements were carried out. EMC plans to carry out performance testing during the next test cycle. Best practices Journal volumes If there are multiple consistency groups, then you need to configure journal volumes for each consistency group on different RAID groups so that the journal of one group will not slow down the other groups. You should configure journal volumes on separate RAID groups from the user volumes. Journal volumes can be corrupted if any host writes to it other than the RecoverPoint Appliance (RPA). So, you must ensure that the journal volumes are zoned only with RPAs. RecoverPoint performs striping on journal volumes; using a large number from different RAID groups increases the performance. All journal volume LUNs should be of the same size because RecoverPoint uses the smallest LUN size, and it stripes the snapshot across all LUNs. It will not be able to stripe evenly across different sized LUNs. The size of the journal volumes should be at least 20 percent larger than the data being replicated. Journal volumes are required on both local and remote sides for Continuous Remote Replication (CRR) to support failover. Repository volumes Repository volumes should be at least 4 GB with an additional 2 GB per consistency group. In a CRR configuration, there must be one repository volume for the RPA cluster on the local and remote sites. WAN compression guidelines If you set a strong compression, this will cause CPU congestion. Conversely, if you set it to low, it will cause high loads. EMC recommends a 5x-10x compression ratio for Oracle databases. Clusters The RecoverPoint clustering does not support one-to-many RPAs between sites. The configuration should have two RPAs on both sites. 129
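As a worked example of the sizing guidelines above (illustrative figures only): replicating 500 GB of database LUNs split across three consistency groups calls for roughly 500 GB x 1.2 = 600 GB of journal capacity on each side, spread over several equally sized journal LUNs drawn from different RAID groups, plus a repository volume of at least 4 GB + (3 x 2 GB) = 10 GB at each site.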

130 Chapter 10: Data Protection and Replication Zoning When discovering the CX array splitters, you must ensure that all the RPA ports and all the CX SPA/SPB ports are included in the same zone. You must ensure that this zone is present on both sites. This can be accomplished by using a zone that spans multiple FCP switches, if required. Storage volumes The following types of storage volumes are required for RecoverPoint configuration: Repository volume This volume holds the configuration and marking information during the replication. At least one repository volume is required per site and this should be accessible from all the RPAs at the site. Journal volume This volume is used to store all the modifications. The application-specific bookmarks and timestamp details are written to the journal volume. There should be at least one journal copy per consistency group. Best practices information on the configuration of journal volumes is available in the EMC RecoverPoint Installation Guide. Replication set The association created between the source volume and target volume is called the replication set. The source volume cannot be greater in size than the target volume. Consistency group The logical group of replication sets identified for replication is called a consistency group. Consistency groups ensure that the updates to the associated volumes are always consistent and that they can be used to restore the database at any point of time. Conclusion Conclusion The primary benefits of deploying an Advanced Protect solution component using RecoverPoint with CLARiiON splitters are: Up to 148 percent total cost savings and up to 238 percent in bandwidth cost savings The only replication product that supports both local and remote replication Local and remote replication with any point in time recovery to meet RPO/RTO requirements Network-based architecture optimized for availability and performance Transparent support for heterogeneous server and storage platforms Deploying the CLARiiON splitter driver results in significant cost reductions for customers because low-cost FCP switches, such as QLogic switches, can be used in the place of high-cost intelligent switches. 130

131 Chapter 11: Test/Dev Solution Using EMC SnapView Clone Chapter 11: Test/Dev Solution Using EMC SnapView Clone Overview Introduction There is strong interest in configuring a writeable test/dev copy solution component that does not impact the production server in terms of downtime or performance. Contents This chapter contains the following topics: Topic See Page CLARiiON SnapView clone 132 Best practices 132 Mount and recovery of a target clone database using Replication Manager 133 Database cloning

132 Chapter 11: Test/Dev Solution Using EMC SnapView Clone CLARiiON SnapView clone SnapView clones You can use EMC CLARiiON SnapView clone with consistency technology to create a test/dev solution using a restartable copy of a running Oracle RAC 11g/10g production database. Doing so has minimal impact on the production database. You can either use SnapView snapshot or SnapView clone to create the copy. With SnapView snapshot, the I/Os that are performed on the database copy will be handled by the same physical hardware as the production database. This can have an impact on the production database performance. With SnapView clone, the cloned LUNs can be created on a completely different set of RAID groups from the original LUNs. As a result, I/O performed on the database copy is handled by different physical hardware from the production database. The database copy s I/O does not impact the production database in any way. The SnapView clone approach requires a full copy of the production database, whereas, the SnapView snapshot approach does not. The read I/O to the production LUNs that is required to create the clone LUNs does impact the production database. However, the ongoing operation of the copy database then has no appreciable impact on the production database. We assume that the SnapView clone approach is preferable for most customers. Use of EMC SnapView clone A clone is a full binary copy of a source LUN. Each clone LUN must be the same size as the source LUN. Best practices Mounting LUNs EMC recommends using a second set of database servers to mount the LUNs and manage them within the oracleasm kernel module. ASM writes a unique signature to each LUN and will not open a LUN containing the same signature as an existing LUN. Consistency technology Consistency technology allows the customer to create a copy (either virtual or physical) of a set of LUNs on either a single array or multiple arrays with the writes to all of the LUNs being in perfect order. The state of the copy LUNs is identical to the state in which the production LUNs would be if the database server was powered off. This allows you to create a restartable database copy; the database engine will perform crash recovery on this copy using the online redo logs in exactly the same manner as a power loss. Because of this unique functionality, the use of backup mode or RMAN cloning is not required, which is advantageous as both of these approaches use host and array resources and can have an impact on production database performance. 132

Restartable copy versus backup
A restartable copy should not be considered a substitute for a backup. Oracle does not guarantee a successful database restart following a database server crash, so a crash-consistent image is not a reliable backup. Recovery cannot be performed on a crash-consistent image: the image can be restarted in the state it was in at the time the copy was taken, but it cannot be rolled forward to a later point in time. Therefore, EMC recommends that you back up the production Oracle database using normal backup procedures, with either SnapView snapshot or RMAN.

This does not mean that a restartable copy is not useful. For example, a restartable copy can be used for testing and development purposes. Because creating the restartable copy has a low impact on the production database, it can be created fairly often, typically daily.

Mount and recovery of a target clone database using Replication Manager

Target clone database
We cloned a four-node RAC production database and then mounted the cloned database on a VMware host as a single-instance database. The cloning, and the mounting of the cloned database on the target VM host, was performed using Replication Manager (RM).

Creating and executing an RM job for SnapView clone
The following steps describe how to create and execute an RM job for SnapView clone:

1. In Replication Manager, select Jobs, then right-click and select New Job.
2. On the Welcome screen of the Job Wizard, choose the application set that you want to replicate, then click Next.
3. On the Job Name and Settings screen, type the job name and select:
   - Replication source
   - Replication technology name
   - Number of replicas to be created
   Click Next.
4. On the Replication Storage screen, select the storage pool that you want to use for the replica, then click Next.
5. On the Mount Options screen, select the mount options, then click Next.
   Note: You can select the mount options at a later point in time, if required.

6. On the Starting the Job screen, choose how you want to start the job, then click Next.
7. On the Users to be Notified screen, type the email addresses of the users to be notified when the job completes, then click Next.
8. On the Completing the Job Wizard screen, save the job, then click Finish.
9. In Replication Manager, under Jobs, verify that the job was created without any errors. Any errors that occur during creation of the job are indicated in the Latest Status column.
10. After the clone job has been created, select the job, then right-click and select Run to execute it. The complete logs are displayed during execution.
11. In Replication Manager, under Jobs, check the Status column to verify that the job executed successfully.

Mount and recovery of a target clone database using RM
Replication Manager generates its own init.ora file as part of the replication process. This file is placed in the directory specified by the ERM_TEMP_BASE variable (/tmp by default). The generated init.ora file is usually sufficient to start the database, but it does not necessarily contain all the parameters from the original init.ora file. The following steps describe how to customize the init.ora file that is generated by Replication Manager:

1. On the mount host, change to the RM client install directory:
   [root@mteoraesx1-vm5 ~]# cd /opt/emc/rm/client/bin/
2. Create a new directory using the SID name for the mount, then change to that directory:
   [root@mteoraesx1-vm5 bin]# mkdir mterac211
   [root@mteoraesx1-vm5 bin]# cd mterac211/
3. Create a new init<SID>.ora file using the same SID name:
   [root@mteoraesx1-vm5 mterac211]# vi initmterac211.ora
4. Customize the parameter file as required.
   Important: These parameters are appended to the init.ora file generated by Replication Manager, so they must follow the correct Oracle syntax. For more information on parameter file customization, refer to EMC Solutions for Oracle Database 10g/11g for Midsize Enterprises EMC Celerra Unified Storage Platform Best Practices Planning.

5. In Replication Manager, select the clone replication job, then right-click and select Mount.
6. Specify the correct SID name. The new init<SID>.ora file is picked up dynamically, and its parameters are used to start the database on the mount host in conjunction with the parameter file generated by Replication Manager.
7. On the Mount Wizard screen, select the replica to be mounted, then click Next.
8. On the Mount Options screen:
   a) Under Path options, select Original path. This ensures that the database can be mounted using the same path on the target host.
   b) Under Oracle, select Recover the database. This ensures that the target clone database is recovered automatically by Replication Manager after mounting.
   c) Click Finish.
9. In Replication Manager, verify that the mount/recovery of the clone database starts without any errors.
10. Once the job completes successfully, verify that the clone database is open in read/write mode:

    SQL> select name,open_mode from v$database;

    NAME      OPEN_MODE
    --------- ----------
    MTERAC21  READ WRITE

Replication Manager can be used to automate the complete process of cloning the LUNs, mounting the cloned LUNs on the target host, and restoring and recovering the clone database.
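Beyond the open-mode check above, a few additional queries give a quick health check of the mounted clone. The following is a minimal sketch, assuming the clone SID used in the example (mterac211); the queries are standard Oracle dynamic views and are not part of the Replication Manager workflow.

# Minimal sketch: post-mount sanity checks on the mount host.
# The SID follows the example above; adjust it for your environment.
export ORACLE_SID=mterac211
sqlplus -s "/ as sysdba" <<'EOF'
select instance_name, status from v$instance;   -- instance should be OPEN
select count(*) "DATAFILES" from v$datafile;    -- all datafiles visible to the clone
select name, open_mode from v$database;         -- READ WRITE confirms usability
EOF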

Database cloning

The importance of cloning
The ability to clone a running production Oracle database is a key requirement for many customers. The creation of test and development databases, the staging of datamarts and data warehouses, and Oracle and OS version migration are just a few applications of this functionality.

Cloning methods
Two methods can be used for database cloning:

Full clone
A full clone takes a complete copy of the entire database. Full cloning is recommended for small databases or for a one-time cloning process.

Incremental cloning
Incremental cloning is more complex: a full copy is taken on the first iteration and, on every subsequent iteration, only the changed data is copied to bring the clone up to date. Incremental cloning is recommended for larger databases and for situations where there is an ongoing or continuous need to clone the production database. An illustrative refresh sequence is sketched below.
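SnapView clones lend themselves to this incremental pattern because re-synchronizing an existing clone copies only the extents that have changed since the last fracture. The following is a minimal sketch only, with assumed Navisphere Secure CLI syntax; the SP address, clone group name, and clone ID are placeholders, and the exact switches should be verified against the Navisphere CLI reference for your release.

# Minimal sketch: periodic refresh of an existing SnapView clone (assumed syntax;
# the SP address, clone group name, and clone ID are placeholders).
SP=10.0.0.10
GROUP=ORA_DATA_CG
CLONE_ID=0100000000000000

# Re-synchronize the clone with its source LUN; only changed data is copied.
naviseccli -h "$SP" snapview -syncclone -name "$GROUP" -cloneid "$CLONE_ID" -o

# Wait until the clone reaches the Synchronized state, then fracture it so the
# test/dev copy can be mounted while production continues to run.
naviseccli -h "$SP" snapview -fractureclone -name "$GROUP" -cloneid "$CLONE_ID" -o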

Chapter 12: Migration

Overview

Migration from FCP/ASM to NFS
The ability to migrate an Oracle database across storage protocols is a frequent customer request. The EMC Oracle CSV group has tested and validated a solution component for migrating an online production Oracle database mounted over FCP/ASM to a target database mounted using NFS. This is performed with minimal performance impact on the production database and no downtime.

Migrating an online Oracle database
For the steps that were followed to perform the migration operation, see Chapter 7: Testing and Validation > Section I: Migration solution component > Test procedure.
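On the NFS side of such a migration, the target file systems must be mounted with client options suitable for Oracle datafiles. The following is a generic illustration only, using commonly documented Linux NFS mount options for Oracle; the server name, export path, and mount point are placeholders, and the validated solution defines its own NFS client options elsewhere in this guide.

# Illustrative only: a typical Linux NFS mount for Oracle datafiles on Celerra.
# The server name, export path, and mount point are placeholders.
mkdir -p /u02/oradata
mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 \
      celerra-dm2:/oracle_data /u02/oradata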

Migration diagram
The following diagram is a high-level view of the migration component.
