Implementing Fibre Channel SAN Boot with the Oracle ZFS Storage Appliance
January 2014
By Tom Hanvey; update by Peter Brouwer
Version: 2.0

This paper describes how to implement a Fibre Channel (FC) SAN boot solution using the Oracle ZFS Storage Appliance on a high-availability SAN. The solution has been validated with a variety of servers, operating systems, and hardware configurations.

Contents
Introduction
Overview
Benefits of Booting from a SAN
FC SAN Boot Configuration Requirements
Configuring the Client Server for Fibre Channel SAN Boot
Installing the Operating System on the Server
Best Practices
Conclusion
Appendix: References

Introduction

Organizations are continually looking for ways to simplify their management infrastructure, enhance scalability, and reduce costs while increasing reliability and performance in their data centers. Booting from a storage area network (SAN) offers many benefits, leading to cost savings as well as higher levels of protection, ease of management, increased flexibility, and reduced downtime.

Traditionally, operating systems have been installed on internal disks on individual servers or on direct-attached storage (DAS). This approach presents a number of challenges to an IT organization. Dedicated internal boot devices cannot be shared with other servers, and so are often underutilized. IT staff must perform management tasks on these systems locally rather than from a central management system, leading to increased administrative costs. For optimal redundancy and performance, additional RAID software or host bus adapters (HBAs) are required to manage these storage devices.

Local disks on individual servers present particular challenges for multisite administration and disaster recovery site maintenance. Creating clones of disk content on off-site hosts or replicating server operating systems to a disaster recovery backup site can be complex operations requiring specialized software.

The complex task of managing the servers of an entire enterprise can be simplified by enabling data center administrators to centrally manage all storage-related tasks, such as operating system maintenance, at the array level rather than at the individual server level. Locating server boot devices on an Oracle ZFS Storage Appliance accessed across a high-availability Fibre Channel (FC) SAN enables increased efficiency, and even automation, of many administrative tasks, significantly reducing operating expenses. If a server goes down, a system administrator can boot up a standby server in a matter of minutes to resume business.

Snapshots and clones of operating system images stored on the Oracle ZFS Storage Appliance can be simultaneously deployed to servers in development and test environments or to secondary disaster recovery sites. Booting from the SAN also reduces the time required for server upgrades: with minimal reconfiguration, you can replace an outdated or underpowered server with a new server and point it to the original FC SAN boot device.

By locating server boot devices on a RAID-protected shared storage device such as the Oracle ZFS Storage Appliance, you can eliminate the need for hardware or software RAID devices in each server, which helps reduce hardware costs.

Overview

A boot-from-SAN solution implemented with an Oracle ZFS Storage Appliance located on a high-availability FC SAN is shown in Figure 1. In this solution, servers are booted from a pool of centrally managed storage volumes located in the Oracle ZFS Storage Appliance.
Each storage volume in the pool serves as a boot LUN for a specific server. The figure shows the two types of servers used in the validation testing (x86 and SPARC) and the operating systems validated on each server type. Note that each server platform is also available in a modular hardware blade architecture.

When the Oracle ZFS Storage Appliance is also used for data storage, best practice is to use a dedicated pool and separate data paths for boot devices; see the "Best Practices" section for more details. You can use any server and host bus adapter (HBA) combination that supports SAN boot to implement an FC boot solution with an Oracle ZFS Storage Appliance.

Figure 1. Fibre Channel boot solution using an Oracle ZFS Storage Appliance

Benefits of Booting from a SAN

A boot-from-FC-SAN solution provides significant benefits:

- Reduces data center footprint and facility costs - Booting from an FC SAN enables you to use diskless servers and blade servers, which take up less space, consume less power, and require less cooling.

- Lowers administrative overhead - All operating system storage is provisioned and managed from the Oracle ZFS Storage Appliance. A server can be easily replaced by remapping its corresponding boot LUN to a new server. If the new server has the same profile as the server being replaced, it will boot the operating system from the SAN without requiring reconfiguration. Snapshots and clones of operating system images can be created and mapped to new servers on the SAN with just a few clicks of a mouse, simplifying migration and scalability tasks.

- Facilitates disaster and server failure recovery - By installing operating systems on the Oracle ZFS Storage Appliance rather than on individual servers, you can take advantage of the data protection and redundancy features of the Oracle ZFS Storage Appliance to help reduce downtime during maintenance and fault outages. Operating system images can be protected using snapshots and clones or backed up using Network Data Management Protocol (NDMP).

FC SAN Boot Configuration Requirements

Configuring a Fibre Channel boot solution using an Oracle ZFS Storage Appliance requires the following:

- Zoning must be configured in the local SAN such that the server FC initiator ports can see the Oracle ZFS Storage Appliance FC target ports (a sketch for collecting the initiator port names used in zoning follows this list). For more details, refer to the documentation listed in "Appendix: References."

- In the Oracle ZFS Storage Appliance, at least one FC PCIe card must be installed with one port enabled in target mode. For details, see the section "Configuring the Oracle ZFS Storage Appliance for Fibre Channel Boot" that follows.

- An HBA that supports SAN boot must be installed in each server to be provisioned from the SAN. The solution described in this paper was tested with the following FC HBA driver and firmware versions:
  o QLogic QLE2562 (Firmware Version 4.03.02, BIOS Revision 2.02)
  o Emulex LPe12002 (BIOS Version 2.11a2)

- The FC HBA on each server must be configured as the primary boot device, a storage target LUN in the Oracle ZFS Storage Appliance must be provisioned with the appropriate operating system, and the LUN must be mapped to the server's initiator port. For details, see the section "Configuring the Client Server for Fibre Channel SAN Boot" that follows.
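Zones and initiator groups are keyed to the World Wide Port Name (WWPN) of each server HBA port. On a server that has no operating system yet, the WWPN can be read from the HBA BIOS utility; on a server already running Linux, the same information is exposed through the standard fc_host sysfs class. The following is a minimal sketch, assuming a Linux host with a QLogic or Emulex HBA whose driver is loaded; it only reads standard sysfs attributes (port_name, port_state, symbolic_name), whose exact content varies by driver.

```python
#!/usr/bin/env python3
"""List local FC initiator ports (WWPNs) for SAN zoning and initiator groups."""
import glob
import os

def read_attr(host_dir, name):
    """Return a sysfs attribute value, or None if it is absent."""
    try:
        with open(os.path.join(host_dir, name)) as f:
            return f.read().strip()
    except OSError:
        return None

def main():
    hosts = sorted(glob.glob("/sys/class/fc_host/host*"))
    if not hosts:
        print("No fc_host entries found; is an FC HBA installed and its driver loaded?")
        return
    for host_dir in hosts:
        host = os.path.basename(host_dir)
        wwpn = read_attr(host_dir, "port_name")      # for example 0x21000024ff454370
        state = read_attr(host_dir, "port_state")    # for example Online
        info = read_attr(host_dir, "symbolic_name")  # driver/firmware description string
        print(f"{host}: WWPN={wwpn} state={state}")
        if info:
            print(f"  {info}")

if __name__ == "__main__":
    main()
```

The reported WWPNs are the values to use when creating switch zones and when defining the initiator group to which the boot LUN is mapped on the appliance.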

Configuring the Oracle ZFS Storage Appliance for Fibre Channel Boot

To configure the Oracle ZFS Storage Appliance for FC SAN boot, complete the following steps:

1. Check that at least one FC PCIe card is installed in the Oracle ZFS Storage Appliance.

2. By default, all FC ports on the Oracle ZFS Storage Appliance are set to initiator mode. To enable a port in target mode, complete these steps:
   a. Log in to the Oracle ZFS Storage Appliance and navigate to Configuration > SAN > Fibre Channel Ports.
   b. Set the selected port to Target mode as shown in Figure 2.
   c. Click the Apply button.
   NOTE: Changing this setting requires a reboot.

Figure 2. Setting a PCIe port to target mode in the Oracle ZFS Storage Appliance

3. Provision each LUN that will serve as a server boot LUN with the appropriate initiator and target groups.

Configuring the Client Server for Fibre Channel SAN Boot

To configure each client server for an FC SAN boot, first confirm that a Fibre Channel HBA is installed on the client and that the HBA supports SAN boot. Then set the boot precedence in the system BIOS to make the FC HBA card the highest-priority boot device, and configure the HBA to boot from the LUN on which the operating system for that server has been installed in the Oracle ZFS Storage Appliance. These procedures are described in the following sections.

Setting Boot Precedence in the System BIOS

Set the boot precedence in the system BIOS so that the FC HBA card is the highest-priority boot device by completing these steps:

1. Reboot the server. While the server is initializing, press F2 to display the system BIOS menu.

2. If the server has an LSI PCI card installed, disable its PCI slot in the system BIOS. In some servers, such as Oracle's Sun x86 servers, the LSI card takes higher boot precedence and will try to boot the local operating system. To prevent this, complete the following steps:
   a. Select Advanced to display the PCI Configuration screen.
   b. Disable the PCI slot in which the LSI card is installed as shown in Figure 3.
   c. Press F10 to save the setting, exit the screen, and reboot the server.

Figure 3. System BIOS PCI Configuration screen showing PCI Slot1 disabled

3. Set the FC HBA card to be the highest-priority boot device:
   a. From the BIOS menu, select Boot to display the Boot Device Priority screen.
   b. Select the FC HBA as the 1st Boot Device as shown in Figure 4.
   c. Press F10 to save the settings, exit, and reboot the server.

Figure 4. System BIOS Boot Device Priority screen showing the FC HBA set as the primary boot device

Configuring the Host Bus Adapter for Fibre Channel Boot

One or more ports on the server's FC HBA must be configured to boot from the LUN on which the operating system for that server has been installed in the Oracle ZFS Storage Appliance. The following procedure shows the steps for a QLogic FC HBA; the procedure is similar for an Emulex FC HBA.

1. Reboot the system. When the initialization screen shown in Figure 5 appears, log in to the HBA BIOS menu.

Figure 5. QLogic FC HBA initialization screen providing access to HBA BIOS settings

2. Select one of the two HBA ports as shown in Figure 6.

Figure 6. Selecting an HBA port to configure

3. In the menu displayed, select Configuration Settings as shown in Figure 7.

Figure 7. Selecting the HBA BIOS utility Configuration Settings option

4. On the Configuration Settings menu, select Adapter Settings as shown in Figure 8.

Figure 8. Accessing the Adapter Settings for the HBA

5. On the Adapter Settings screen, select Host Adapter BIOS and press <Enter> to enable the host adapter BIOS as shown in Figure 9 (the host adapter BIOS is disabled by default).

Figure 9. Enabling the host adapter BIOS

6. To change the boot device priority level, press <Esc> to return to the Configuration Settings menu and select Selectable Boot Settings as shown in Figure 10. A list of available FC target ports is displayed, as shown in Figure 11.

Figure 10. Accessing the HBA boot settings

7. Select the FC target port on the Oracle ZFS Storage Appliance that the HBA will use, as shown in Figure 11, and press <Enter>. The Selectable Boot Settings screen for the HBA port is displayed as shown in Figure 12.

Figure 11. Selecting the FC target port the HBA will use on the Oracle ZFS Storage Appliance

8. Select (Primary) Boot Port Name as shown in Figure 12 and press <Enter> to display a list of all the available LUNs as shown in Figure 13.

Figure 12. Selecting the primary boot port

9. Select the number of the LUN from which the server operating system should boot, as shown in Figure 13.

Figure 13. Selecting the boot LUN for the server

When configuring the Emulex BIOS, you have the option to boot the server using the World Wide Port Name (WWPN) or the Device ID, as shown in Figure 14.

Figure 14. Selecting the boot device identification method when configuring an Emulex HBA

10. For a second HBA port, repeat Steps 2 through 9, using the same settings as for the first HBA port.

11. Press <Esc> and save the configuration settings as shown in Figure 15.

Figure 15. Saving the HBA configuration settings

12. Reboot the server. When the server boots, it will now choose the FC HBA as the primary boot device and will use the primary boot setting in the HBA BIOS to select the appropriate LUN in the Oracle ZFS Storage Appliance from which to boot the operating system.
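The boot port name entered in the HBA BIOS must match the WWPN of the appliance target port that is mapped to the boot LUN, but different tools print WWPNs in different notations (with or without colons, with or without a 0x prefix, upper- or lowercase). The following is a minimal, illustrative helper for normalizing and comparing WWPN strings when cross-checking the value shown in the HBA BIOS against the target port listed on the Oracle ZFS Storage Appliance; the sample WWPN values are placeholders.

```python
import re

def normalize_wwpn(wwpn: str) -> str:
    """Reduce a WWPN to 16 lowercase hex digits, e.g. '21000024ff454370'.

    Accepts common notations such as '21:00:00:24:ff:45:43:70',
    '0x21000024ff454370', or '21000024FF454370'.
    """
    digits = re.sub(r"^0x|[:\s.-]", "", wwpn.strip().lower())
    if not re.fullmatch(r"[0-9a-f]{16}", digits):
        raise ValueError(f"not a valid 64-bit WWPN: {wwpn!r}")
    return digits

def same_port(a: str, b: str) -> bool:
    """True if two WWPN strings refer to the same port, regardless of notation."""
    return normalize_wwpn(a) == normalize_wwpn(b)

if __name__ == "__main__":
    # Compare the value typed into the HBA BIOS with the target port WWPN
    # shown in the appliance BUI (both values here are placeholders).
    print(same_port("21:00:00:24:FF:45:43:70", "0x21000024ff454370"))  # True
```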

Installing the Operating System on the Server

To install the operating system on the server, follow the instructions for the specific operating system as shown below.

Installing Oracle Solaris

To install Oracle Solaris on a server, during the installation process select the appropriate FC LUN on which to install the operating system.

Installing Oracle Linux

To install Oracle Linux (5 u4) on a server, during the installation process select Advanced Configuration to install the GRUB boot loader and the operating system on the same FC LUN device. Otherwise, the GRUB master boot record (MBR) will be installed on the local disk and the operating system will not boot from the primary boot FC LUN.

Installing SUSE Linux Enterprise 11

To install SUSE Linux Enterprise 11 on a server, during the installation process select Advanced Configuration to install the GRUB boot loader and the operating system on the same FC LUN device. Otherwise, the GRUB master boot record (MBR) will be installed on the local disk and the operating system will not boot from the primary boot FC LUN.
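After a Linux installation it is worth confirming that GRUB and the operating system really landed on the FC boot LUN rather than on a local disk. The following is a minimal sketch, assuming a Linux host; it relies on the fact that FC-attached SCSI disks appear in sysfs beneath an FC remote port entry (rport-*), whereas local SATA/SAS disks do not. Treat it as a heuristic and cross-check against the /dev/disk/by-path entries, which name FC LUNs with a "-fc-" component.

```python
#!/usr/bin/env python3
"""Flag which block devices are FC-attached LUNs (heuristic, Linux only)."""
import os

def is_fc_attached(disk: str) -> bool:
    """True if /sys/block/<disk> resolves through an FC remote port (rport-*)."""
    sys_path = os.path.realpath(os.path.join("/sys/block", disk))
    return "/rport-" in sys_path

def main():
    for disk in sorted(os.listdir("/sys/block")):
        # Skip obvious non-disk entries such as loop, ram, and mapper devices.
        if disk.startswith(("loop", "ram", "dm-", "md")):
            continue
        kind = "FC LUN" if is_fc_attached(disk) else "local/other"
        print(f"{disk}: {kind}")

if __name__ == "__main__":
    main()
```

If the disk holding the MBR and the root file system is reported as local rather than as an FC LUN, rerun the installation and direct the boot loader to the FC device.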

Installing Microsoft Windows 2003

To install Microsoft Windows 2003 on a server, complete the following steps:

1. Create an image on a floppy disk of the driver for the QLogic or Emulex FC HBA installed on the server.
2. Reboot the server to start the installation process. During installation initialization, press F6 to provide the path to the HBA driver image on the floppy disk.
3. Continue the installation using the HBA driver image. The driver enables the HBA to see the FC LUN that is serving as the primary boot LUN for the operating system installation.
4. Install the operating system on the new FC bootable LUN.

Installing Microsoft Windows 2008

To install Microsoft Windows 2008 on a server, complete the following steps:

1. Set the boot LUN as the primary boot device on the FC HBA on the server.
2. Proceed with the installation.

Best Practices

Consider these practices for optimal configuration of the Oracle ZFS Storage Appliance:

- Configure storage on the Oracle ZFS Storage Appliance for the highest level of fault tolerance for FC SAN boot solutions. Storage pools should be configured for optimum fault tolerance and performance by using mirroring.

- Storage LUNs should be mapped to multiple HBA ports accessing the SAN. Identify multiple ports on the client server in the system BIOS; at boot time the BIOS will go through the list of targets until it finds an active path to the boot device. NOTE: Asymmetric Logical Unit Access (ALUA) is not supported for boot solutions because most HBAs do not support ALUA in the HBA firmware.

- When configuring an FC boot device on an Oracle ZFS Storage Appliance that is also hosting LUNs for other applications, be sure to separate the boot paths from the application data paths. Sharing the FC target ports and storage pools designated for booting servers with those servicing applications can adversely affect performance and is not recommended. Configure dedicated (mirrored) boot storage pools and separate application pools, and map application targets to alternate target ports on the Oracle ZFS Storage Appliance so that they are not shared with boot ports.

Conclusion

A Fibre Channel SAN boot solution using the Oracle ZFS Storage Appliance on a high-availability SAN provides a high level of protection while lowering administrative overhead. Your operating system assets can be protected using a variety of methods, including snapshots, clones, replication, or Network Data Management Protocol (NDMP) backup to tape. The ability to manage and maintain operating systems on servers throughout the organization from a central appliance lowers administrative costs.

Appendix: References

For troubleshooting information related to setting up the FC driver and ALUA, see:

- "SAN Configuration and Multipathing," Oracle Solaris documentation:
  http://docs.oracle.com/cd/e19082-01/820-3070/index.html
  Note: The ALUA path failover feature used in Oracle ZFS Storage Appliance cluster configurations is not supported for boot devices.

Other useful links:

- Oracle ZFS Storage Appliance documentation (see Chapter 3, "Configuration," in the Administration Guide):
  http://www.oracle.com/technetwork/documentation/oracle-unified-ss-193371.html
- SUSE Linux Enterprise Server 11 documentation:
  http://www.suse.com/documentation/sles11/stor_admin/?page=/documentation/sles11/stor_admin/data/be5rvii.html
- "Multipathing Support in Windows Server 2008," blog:
  http://blogs.msdn.com/b/san/archive/2008/07/27/multipathing-support-in-windows-server-2008.aspx
- Red Hat Enterprise Linux DM Multipath documentation:
  https://access.redhat.com/site/documentation/en-us/red_hat_enterprise_linux/6/html-single/dm_multipath/

Implementing Fibre Channel SAN Boot with the Oracle ZFS Storage Appliance
January 2014, Version 2.0
Authors: Application Integration Engineering, Tom Hanvey; update by Peter Brouwer

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.
Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright 2011, 2014 Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark licensed through X/Open Company, Ltd.