Blueprints: Installing Linux on a Multipath iSCSI LUN on an IP Network

Note: Before using this information and the product it supports, read the information in "Notices."

First Edition (August 2009)

© Copyright IBM Corporation 2009.
US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Introduction
Chapter 1. Scope, requirements, and support
Chapter 2. iSCSI and Multipath overview
Chapter 3. Installing Linux distributions on multipathed iSCSI target devices
    Installing RHEL 5.3 on a multipath iSCSI storage device
    Installing SLES 11 on a multipath iSCSI storage device (System x only)
Chapter 4. Troubleshooting tips
Appendix. Related information and downloads
Notices
    Trademarks


Introduction

This blueprint provides step-by-step instructions for installing Red Hat Enterprise Linux (RHEL) 5.3 and SUSE Linux Enterprise Server (SLES) 11 on a multipath iSCSI logical unit (LUN). The procedures were tested on System x and System p blades connected to a NetApp storage server through an Ethernet IP network. You can adapt these instructions to install either of these Linux distributions onto other supported models of iSCSI storage devices. Key tools and technologies discussed in this demonstration include the iSCSI logical unit (LUN), Device Mapper (DM) Multipath, multipathed iSCSI, iscsiadm, and dmsetup.

Intended audience

This document is intended for Linux system administrators who have prior experience installing Red Hat Enterprise Linux 5 or SUSE Linux Enterprise Server 11 and have a moderate level of knowledge of Device Mapper (DM) Multipath and iSCSI.

Scope and purpose

This document describes how to install RHEL 5.3 and SLES 11 on a System x or System p host that is connected to an iSCSI storage device through an IP network. The instructions may change in newer releases of the same distributions. The configuration and setup of the host and the iSCSI storage device, and the physical setup of multiple paths to this storage device, are not covered in this document; refer to the documentation supplied with your storage device for more information. The instructions in this blueprint were tested on System x and System p blades and should work on non-blade System x and System p servers with some adaptation. In addition, the instructions assume installation of RHEL 5.3 or SLES 11 onto the multipath iSCSI boot device; parts of the instructions still apply if you only want to set up multipath on an iSCSI non-boot device.

Software requirements

This blueprint is written for Red Hat Enterprise Linux (RHEL) 5.3 and SUSE Linux Enterprise Server (SLES) 11.

Hardware requirements

See the IBM Storage interoperability matrices at http://www.ibm.com/systems/storage/product/interop.html for supported storage configurations. The examples in this blueprint were tested on a System x LS21 blade and a System p JS12 blade host system, each with one network adapter. The iSCSI storage target is a NetApp dual-node file server with two logical units.

Author names

Malahal Naineni

Other contributors

Monza Lui
Robb Romans
Kersten Richter

IBM Services

Linux offers flexibility, options, and competitive total cost of ownership with a world-class enterprise operating system. Community innovation integrates leading-edge technologies and best practices into Linux. IBM is a leader in the Linux community, with over 600 developers in the IBM Linux Technology Center working on over 100 open source projects in the community. IBM supports Linux on all IBM servers, storage, and middleware, offering the broadest flexibility to match your business needs. For more information about IBM and Linux, go to ibm.com/linux (https://www.ibm.com/linux).

IBM Support

Questions and comments regarding this documentation can be posted on the developerWorks Storage Connectivity Blueprint Community Forum: http://www.ibm.com/developerworks/forums/forum.jspa?forumid=1334

The IBM developerWorks discussion forums let you ask questions and share knowledge, ideas, and opinions about technologies and programming techniques with other developerWorks users. Use the forum content at your own risk. While IBM will attempt to provide a timely response to all postings, the use of this developerWorks forum does not guarantee a response to every question that is posted, nor do we validate the answers or the code that are offered.

Typographic conventions

The following typographic conventions are used in this blueprint:

Bold: Identifies commands, subroutines, keywords, files, structures, directories, and other items whose names are predefined by the system. Also identifies graphical objects such as buttons, labels, and icons that the user selects.

Italics: Identifies parameters whose actual names or values are to be supplied by the user.

Monospace: Identifies examples of specific data values, examples of text like what you might see displayed, examples of portions of program code like what you might write as a programmer, messages from the system, or information you should actually type.

Chapter 1. Scope, requirements, and support

This blueprint applies to System x running Linux and to PowerLinux.

Systems to which this information applies: System x running Linux and PowerLinux.


Chapter 2. iSCSI and Multipath overview

The iSCSI standard (RFC 3720) defines the transport of the SCSI protocol over a TCP/IP network, allowing block-level access to target devices. The host's connection to the network can be provided by an iSCSI host bus adapter or by an iSCSI software initiator that uses a standard network interface card in the host. For more information, see RFC 3720 at http://tools.ietf.org/html/rfc3720.

The connection from the server through the host bus adapter (HBA) to the storage controller is referred to as a path. Within the context of this blueprint, multipath connectivity refers to a system configuration in which multiple connection paths exist between a server and a storage unit (logical unit, LUN) within a storage subsystem. This configuration can be used to provide redundancy or increased bandwidth. Multipath connectivity provides redundant access to the storage devices, for example, to retain access to a storage device when one or more of the components in a path fail. Another advantage of multipath connectivity is increased throughput by way of load balancing. Note that multipathing protects against the failure of one or more paths, not against the failure of a specific storage unit.

A simple example of multipath connectivity is two NICs connected to a network to which the storage controllers are also connected. In this case, the storage units can be accessed from either of the NICs, so you have multipath connectivity. In the following diagram, each host has two NICs and each storage unit has two controllers. With this configuration, each host has four paths to each of the LUNs in each of the storage devices.

Figure 1. Simple IP network example
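You can confirm what the host actually sees at any point with a few commands. The following sequence is a minimal sketch using the same tools this blueprint relies on; the output will differ on your system:

# List active iSCSI sessions; each session represents one path to the target
iscsiadm -m session
# Show the multipath topology that device-mapper multipath has assembled
multipath -ll
# Count the SCSI block devices the kernel currently sees
ls -d /sys/block/sd*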

Chapter 3. Installing Linux distributions on multipathed iSCSI target devices

Use these instructions to install the RHEL 5.3 or SLES 11 distribution to logical volumes created from two physical iSCSI storage devices. Before beginning, complete the physical configuration of the multiple paths from the servers to the iSCSI storage.

Installing RHEL 5.3 on a multipath iSCSI storage device

Follow these steps to install Red Hat Enterprise Linux 5.3 on a multipath iSCSI storage device.

Procedure

1. To install RHEL 5.3 on an iSCSI LUN, follow the steps in the iSCSI Overview blueprint at http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaai/iscsi/liaaiiscsi.htm, with one adjustment to prepare for the multipath creation after the installation: add the mpath parameter at the boot prompt during installation. For example, instead of the command linux vnc, the test environment for this blueprint used linux mpath vnc. This demonstration assumes the default LVM partitioning scheme; if you want to install RHEL 5.3 with a different partitioning scheme, adjust the steps accordingly.

2. Check whether your root file system is already installed on a multipath iSCSI device. If it is, you have completed all required steps and no further multipath configuration is necessary. Otherwise, continue with the next step.

a. Enter the mount command to find where the root file system is located. The following output shows that root is on the logical volume VolGroup00/LogVol00:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw,_netdev)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/sda1 on /boot type ext3 (rw,_netdev)

b. Enter the lvs -o lv_name,vg_name,devices command to check where this logical volume resides. If the output is similar to the following (the root file system on devices named /dev/sdX), the root file system was detected through single-path devices. In this output, the root file system is on /dev/sda2 and /dev/sdb1, which are not multipath devices; in that case, go to step 3.

# lvs -o lv_name,vg_name,devices
  LV       VG         Devices
  LogVol00 VolGroup00 /dev/sdb1(0)
  LogVol00 VolGroup00 /dev/sda2(0)
  LogVol01 VolGroup00 /dev/sda2(31)

On the other hand, if you see something like the following output (the root file system on devices named /dev/dm-X), gather more information to determine whether the DM device is a multipath device.

# lvs -o lv_name,vg_name,devices
  LV       VG         Devices
  LogVol00 VolGroup00 /dev/dm-2(0)
  LogVol00 VolGroup00 /dev/dm-4(0)
  LogVol01 VolGroup00 /dev/dm-4(31)

c. Enter the ls -l command on both DM devices to find their major and minor numbers:

# ls -l /dev/dm-2
brw-rw---- 1 root root 253, 2 Jul 14 14:30 /dev/dm-2
# ls -l /dev/dm-4
brw-rw---- 1 root root 253, 4 Jul 14 14:30 /dev/dm-4

d. Enter the dmsetup command with the major and minor numbers as parameters to determine whether they are multipath devices and, if so, which mpath names they correspond to. The following output shows that /dev/dm-2 and /dev/dm-4 correspond to mpath1p1 and mpath0p2, which means the root file system is installed on multipath iSCSI LUNs:

# dmsetup info -c -o name,major,minor --major 253 --minor 2
Name     Maj Min
mpath1p1 253   2
# dmsetup info -c -o name,major,minor --major 253 --minor 4
Name     Maj Min
mpath0p2 253   4

The multipath -ll command shows that mpath0 and mpath1 are multipath devices with two paths each:

# multipath -ll
mpath1 (360a98000686f68656c6f516457373349) dm-1 NETAPP,LUN
[size=7.0g][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
 \_ 0:0:0:1 sdb 8:16 [active][ready]
 \_ 1:0:0:1 sdd 8:48 [active][ready]
mpath0 (360a98000686f68656c6f51645736374f) dm-0 NETAPP,LUN
[size=5.0g][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
 \_ 0:0:0:0 sda 8:0 [active][ready]
 \_ 1:0:0:0 sdc 8:32 [active][ready]

If you see similar output, the multipath setup was performed automatically and no further configuration is necessary. If you do not, continue with the next step.

3. Create multiple iSCSI sessions (multiple paths).

a. Enter the ls -d /sys/block/sd* command to see how many block devices were found by the initrd image created by the installer:

# ls -d /sys/block/sd*
/sys/block/sda /sys/block/sdb

b. Enter the iscsiadm -m node --login command to log in to the remaining iSCSI paths:

# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84183797, portal: 192.168.1.152,3260]
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84183797, portal: 192.168.1.22,3260]
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84183797, portal: 192.168.1.152,3260]: successful
iscsiadm: Could not login to [iface: default, target: iqn.1992-08.com.netapp:sn.84183797, portal: 192.168.1.22,3260]:
iscsiadm: initiator reported error (15 - already exists)

c. You should now see extra device paths in the /sys/block/ directory. These are the newly created iSCSI paths; in this example, sdc and sdd:

# ls -d /sys/block/sd*
/sys/block/sda /sys/block/sdb /sys/block/sdc /sys/block/sdd

4. Edit the /etc/multipath.conf file.

a. Ensure that the file contains the following lines:

defaults {
    user_friendly_names yes
}

b. The multipath tools blacklist every device matched by the blacklist {} section of the /etc/multipath.conf file. Comment out the wwid line within the blacklist {} section so that the iSCSI devices are not blacklisted. For example, here is a modified multipath.conf file:

# cat /etc/multipath.conf
defaults {
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^(hd|xvd|vd)[a-z]*"
#   wwid "*"
}

5. Enter the multipath command to create a bindings file named /var/lib/multipath/bindings. This file is included in the new initrd image.

# multipath

6. Edit the /etc/fstab file to ensure that it does not contain single-path device names or LABELs. The /etc/init.d/netfs script may try to mount single-path names (such as /dev/sda1) because a label appears on a single-path device as well as on its corresponding multipath device. The script cannot mount the single-path device, however, because the corresponding multipath device is already active. To avoid this error during boot, edit the /etc/fstab file and replace any LABELs or single-path names with LVM names or multipath names, as appropriate. In this example, the default LVM partitioning scheme was accepted during system installation, so the /etc/fstab file on the test system contains the following LABEL entry:

LABEL=/boot /boot ext3 defaults,_netdev 1 2

a. Find out which multipath device corresponds to the /boot LABEL. Enter the blkid -t LABEL=/boot command to determine which device has the /boot LABEL. The following output shows that it is /dev/sda1:

# blkid -t LABEL=/boot
/dev/sda1: LABEL="/boot" UUID="cec3f5e2-5436-4afb-a3fe-c0b40ccb4420" SEC_TYPE="ext2" TYPE="ext3"

b. Determine the wwid (World Wide Identifier, a unique identifier for a logical unit in a SCSI storage subsystem) of the boot device (/dev/sda in this example) by typing the following command:

# /sbin/scsi_id -g -u -s /block/sda
360a98000686f68656c6f51645736374f

c. Determine the boot device's mpath name by looking at the /var/lib/multipath/bindings file. In this bindings file, the wwid of sda corresponds to mpath0:

# cat /var/lib/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpath0 360a98000686f68656c6f51645736374f
mpath1 360a98000686f68656c6f516457373349
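The wwid lookup in steps 6.b and 6.c can be repeated for every path in a single loop. The following is a minimal sketch that assumes the RHEL 5.3 tool versions used in this blueprint (in particular the scsi_id -s option and the bindings file format shown above); adapt it to your environment:

# Print each SCSI block device, its wwid, and the matching mpath alias (if any)
for dev in /sys/block/sd*; do
    name=$(basename $dev)
    wwid=$(/sbin/scsi_id -g -u -s /block/$name)
    alias=$(awk -v w="$wwid" '$2 == w {print $1}' /var/lib/multipath/bindings)
    echo "$name $wwid ${alias:-(no alias)}"
done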

d. In general, multipath device names take the form /dev/mapper/mpathXpY, where X is the multipath number and Y is the partition number. From the previous step, mpath0 corresponds to /dev/sda, so /dev/sda1 translates to /dev/mapper/mpath0p1. Replace LABEL=/boot with that multipath device name. Here is the new entry for the boot device in the /etc/fstab file:

/dev/mapper/mpath0p1 /boot ext3 defaults,_netdev 1 2

It replaces the old entry:

LABEL=/boot /boot ext3 defaults,_netdev 1 2

Note: If you chose not to use friendly names in step 4.a, use the format /dev/mapper/<wwid>p<partition number> for the multipath device names. For example, here is an entry in /etc/fstab for /boot without friendly names:

/dev/mapper/360a98000686f68656c6f51645736374fp1 /boot ext3 defaults,_netdev 1 2

7. Create an iSCSI/multipath-capable initrd image by following these steps:

a. Save the original /sbin/mkinitrd and /boot/initrd image files as backups:

# cp /sbin/mkinitrd /sbin/mkinitrd.orig
# cp /boot/initrd-2.6.18-128.el5.img /boot/initrd-2.6.18-128.el5.img.orig

b. Edit the /sbin/mkinitrd file and search for the findstoragedriver () function. The arguments passed to findstoragedriver () are not correct for the default LVM installation. Typing ls -d /sys/block/sd* shows you all paths. Replace the $@ in the first line of the findstoragedriver () function definition with your list of all boot and root device paths. On the test system there are two disks with two paths per disk, for a total of four path names, so change the for loop from:

for device in $@ ; do

to:

for device in sda sdb sdc sdd ; do

c. Enable multipath in the mkinitrd script. Search within the /sbin/mkinitrd file for use_multipath=0 and change it to use_multipath=1. On the test system, this definition is on line 1270.

d. Because your specific devices are not going to be listed in the initrd image created by the default /sbin/mkinitrd script, you must add them in the correct place. Search the file for "echo Creating multipath devices"; on the test system, this was found on line 1688. For each of your multipath disks, add a line such as the following after the echo command:

emit "/bin/multipath -v 0 <wwid>"

For example, on the test system these two lines were added to the /sbin/mkinitrd file:

emit "/bin/multipath -v 0 360a98000686f68656c6f51645736374f"
emit "/bin/multipath -v 0 360a98000686f68656c6f516457373349"

To show all the changes made, here is a comparison between the original and the modified mkinitrd file:

# diff -Nau /sbin/mkinitrd.orig /sbin/mkinitrd
--- /sbin/mkinitrd.orig 2009-07-28 17:29:05.000000000 -0400
+++ /sbin/mkinitrd      2009-07-28 17:31:08.000000000 -0400
@@ -414,7 +414,7 @@
 }

 findstoragedriver () {
-    for device in $@ ; do
+    for device in sda sdb sdc sdd ; do
         case " $handleddevices " in
             *" $device "*)
                 continue ;;
@@ -1222,7 +1222,7 @@
     echo $NONL "$@" >> $RCFILE
 }

-use_multipath=0
+use_multipath=1
 use_emc=0
 use_xdr=0
 if [ -n "$testdm" -a -x /sbin/dmsetup -a -e /dev/mapper/control ]; then
@@ -1666,6 +1666,8 @@
 if [ "$use_multipath" == "1" ]; then
     emit "echo Creating multipath devices"
+    emit "/bin/multipath -v 0 360a98000686f68656c6f51645736374f"
+    emit "/bin/multipath -v 0 360a98000686f68656c6f516457373349"
     for wwid in $root_wwids ; do
         emit "/bin/multipath -v 0 $wwid"
     done
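If you prefer to script the edits in steps 7.b through 7.d, the substitutions can be applied with sed. This is a sketch that assumes your /sbin/mkinitrd matches the RHEL 5.3 version shown in the diff above and that each pattern occurs only once; always work on a backed-up copy and verify the result with diff:

# Enable multipath support in the script (step 7.c)
sed -i 's/^use_multipath=0$/use_multipath=1/' /sbin/mkinitrd
# Hard-code the four path names in the findstoragedriver () loop (step 7.b)
sed -i 's/for device in \$@ ; do/for device in sda sdb sdc sdd ; do/' /sbin/mkinitrd
# Append the per-disk multipath commands after the echo line (step 7.d)
sed -i '/emit "echo Creating multipath devices"/a\
emit "/bin/multipath -v 0 360a98000686f68656c6f51645736374f"\
emit "/bin/multipath -v 0 360a98000686f68656c6f516457373349"' /sbin/mkinitrd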

e. Use the modified mkinitrd script to create a new initrd image:

# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

f. Reboot your system:

# reboot

8. After the system reboots, the root file system should be discovered on multipath iSCSI devices. Repeat the instructions in step 2 to verify.

9. To ensure that these changes persist even if you change the storage environment, follow these steps:

a. Restore the original mkinitrd that you saved in step 7.a:

# cp /sbin/mkinitrd.orig /sbin/mkinitrd

b. Edit the /etc/sysconfig/mkinitrd/multipath file and change MULTIPATH=no to MULTIPATH=yes. If this file does not exist on your system, create it by entering the following command:

# echo "MULTIPATH=yes" > /etc/sysconfig/mkinitrd/multipath
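After the reboot, the verification from step 2 can be condensed into a few commands. This is a quick sketch using only tools already shown in this blueprint; expect your volume and map names to differ:

# The root logical volume should now sit on dm devices backed by multipath maps
lvs -o lv_name,vg_name,devices
# List the device-mapper maps whose target type is multipath
dmsetup ls --target multipath
# Confirm that each map still has two active paths
multipath -ll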

Installing SLES 11 on a multipath iSCSI storage device (System x only)

The following steps are currently supported on System x only.

Procedure

1. Follow all the steps in the iSCSI Overview blueprint at http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaai/iscsi/liaaiiscsi.htm to install SLES 11 on an iSCSI LUN. The following steps are needed once you have completed the installation and your system has rebooted.

2. Enable the multipath services at system startup by entering the following commands:

# chkconfig boot.multipath on
# chkconfig multipathd on

3. Create the /etc/multipath.conf file. The default SLES 11 installation does not create this file. See the /usr/share/doc/packages/multipath-tools/ directory for more information; in that directory, refer to the multipath.conf.synthetic template and the multipath.conf.annotated HOWTO. In the test environment, the multipath.conf.synthetic file was copied to /etc/multipath.conf:

# cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf

All entries in the template file are commented out. You can change the values in this file if necessary for your environment.

4. Enter the iscsiadm -m discovery -t sendtargets -p <target node ip address>:<target port> command to add the node records that were not discovered during installation. In this example, there are two target nodes (192.168.1.22 and 192.168.1.152, with the default iSCSI target port 3260):

# iscsiadm -m discovery -t sendtargets -p 192.168.1.22:3260
192.168.1.152:3260,1001 iqn.1992-08.com.netapp:sn.84183797
192.168.1.22:3260,1000 iqn.1992-08.com.netapp:sn.84183797
# iscsiadm -m discovery -t sendtargets -p 192.168.1.152:3260
192.168.1.152:3260,1001 iqn.1992-08.com.netapp:sn.84183797
192.168.1.22:3260,1000 iqn.1992-08.com.netapp:sn.84183797

5. Enter the iscsiadm -m node -p <target node ip address>:<target port> -T <target IQN> -o update -n node.startup -v onboot command to set each node to start up at boot. The test target IQN is iqn.1992-08.com.netapp:sn.84183797.

# iscsiadm -m node -p 192.168.1.22:3260 -T iqn.1992-08.com.netapp:sn.84183797 \
  -o update -n node.startup -v onboot
# iscsiadm -m node -p 192.168.1.152:3260 -T iqn.1992-08.com.netapp:sn.84183797 \
  -o update -n node.startup -v onboot

6. Enter the iscsiadm -m node -p <target node ip address>:<target port> -T <target IQN> -o update -n node.conn\[0\].startup -v onboot command to set each node connection to start up at boot:

# iscsiadm -m node -p 192.168.1.22:3260 -T iqn.1992-08.com.netapp:sn.84183797 \
  -o update -n node.conn\[0\].startup -v onboot
# iscsiadm -m node -p 192.168.1.152:3260 -T iqn.1992-08.com.netapp:sn.84183797 \
  -o update -n node.conn\[0\].startup -v onboot

7. Verify that the host system can now find all iSCSI paths required for booting.

a. Enter the ls -d /sys/block/sd* command to see how many block devices were found by the initrd image created by the installer:

# ls -d /sys/block/sd*
/sys/block/sda /sys/block/sdb

b. Enter the iscsiadm -m node --login command to log in to the remaining iSCSI paths:

# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84183797, portal: 192.168.1.152,3260]
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84183797, portal: 192.168.1.22,3260]
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84183797, portal: 192.168.1.152,3260]: successful
iscsiadm: Could not login to [iface: default, target: iqn.1992-08.com.netapp:sn.84183797, portal: 192.168.1.22,3260]:
iscsiadm: initiator reported error (15 - already exists)

c. You should now see extra device paths in the /sys/block/ directory. These are the newly created iSCSI paths; in this example, sdc and sdd:

# ls -d /sys/block/sd*
/sys/block/sda /sys/block/sdb /sys/block/sdc /sys/block/sdd

8. Create a multipath-capable initrd image.

a. Edit the /etc/sysconfig/kernel file and add dm-multipath to the INITRD_MODULES variable. If your storage configuration requires additional multipath modules, add them here as well. On the test system, only dm-multipath was added because the NetApp file server used as storage did not need additional kernel modules.
Here is a comparison between the original and the modified file on the test system:

# diff -Nau /etc/sysconfig/kernel.orig /etc/sysconfig/kernel
--- /etc/sysconfig/kernel.orig 2009-08-17 18:46:59.000000000 -0400
+++ /etc/sysconfig/kernel      2009-07-29 13:39:00.000000000 -0400
@@ -7,7 +7,7 @@
 # ramdisk by calling the script "mkinitrd"
 # (like drivers for scsi-controllers, for lvm or reiserfs)
 #
-INITRD_MODULES="processor thermal fan jbd ext3 edd"
+INITRD_MODULES="dm-multipath processor thermal fan jbd ext3 edd"

 ## Type: string
 ## Command: /sbin/mkinitrd

b. Create a backup copy of your initrd file.

c. Run the mkinitrd command to create a new initrd image:

# mkinitrd

9. Update your boot loader configuration with the new initrd, if needed. During the test, the original initrd file was overwritten, so the boot loader configuration file did not require updating.

10. Reboot with the new initrd image and verify that your root file system is on a multipath device.

a. Use the mount command to find where root is. The following output shows that root is on the device mapper (DM) device /dev/dm-3:

# mount
/dev/dm-3 on / type ext3 (rw,acl,user_xattr)
/proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
securityfs on /sys/kernel/security type securityfs (rw)

b. Enter the ls -l command on the DM device to find its major and minor numbers. The following output shows that /dev/dm-3 has major number 253 and minor number 3:

# ls -l /dev/dm-3
brw-rw---- 1 root disk 253, 3 Jul 26 16:13 /dev/dm-3

c. Enter the dmsetup table command with the major and minor numbers as parameters to look at the current table of the device. The following output shows that /dev/dm-3 is a linear mapping (a partition) on the DM device whose major number is 253 and minor number is 0:

# dmsetup table --major 253 --minor 3
0 9092790 linear 253:0 1381590

d. Enter the dmsetup info command with the major and minor numbers from the last step (253 and 0 in this example) to get the device name. This output shows that the suspected multipath device name is 360a98000686f68656c6f51645736374f:

# dmsetup info -c -o name,major,minor --major 253 --minor 0
Name                              Maj Min
360a98000686f68656c6f51645736374f 253   0

e. Enter the multipath -ll command with the device name as a parameter and check whether multiple paths are discovered. In this output, two paths (sda and sdc) are shown under device 360a98000686f68656c6f51645736374f. If you see similar output, the multipath setup is complete.

# multipath -ll 360a98000686f68656c6f51645736374f
360a98000686f68656c6f51645736374f dm-0 NETAPP,LUN
[size=5.0g][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
 \_ 0:0:0:0 sda 8:0 [active][ready]
 \_ 1:0:0:0 sdc 8:32 [active][ready]
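Steps 4 through 6 can be collapsed into a single loop over the two portals. This is a sketch that assumes the portal addresses and target IQN of this blueprint's test environment; substitute your own values:

# Discover node records, then set each node and its first connection to log in at boot
TARGET=iqn.1992-08.com.netapp:sn.84183797
for portal in 192.168.1.22:3260 192.168.1.152:3260; do
    iscsiadm -m discovery -t sendtargets -p $portal
    iscsiadm -m node -p $portal -T $TARGET -o update -n node.startup -v onboot
    iscsiadm -m node -p $portal -T $TARGET -o update -n node.conn\[0\].startup -v onboot
done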


Chapter 4. Troubleshooting tips

This topic discusses troubleshooting tips and caveats.

The host blade used when testing this blueprint has two network adapters. Configuring the eth0 interface for iSCSI was unsuccessful: the BIOS failed to log in to the target. Configuring the eth1 interface for iSCSI was successful.

Make sure that /etc/fstab contains correct LVM or multipath names, because the /etc/init.d/netfs script tries to mount all network devices. A LABEL in the /etc/fstab file for a network device causes problems because the script tries to use the underlying single-path devices rather than the multipath device.

Use only by-id names in the /etc/fstab file for the SLES 11 distribution.

Both the SLES 11 and RHEL 5.3 x86_64 Xen kernels failed to boot from the iSCSI boot device; physical machine installations of both distributions, however, were successful. The SLES 11 non-Xen default kernel also failed to boot from the iSCSI boot device with our network installation; when the ip=* parameter was removed from the GRUB menu, the kernel booted fine. The failsafe kernel booted normally in either case.

SLES 11 cannot boot, even to rescue mode, if the target IQN was not set to onboot mode (automatic, as suggested in the iSCSI paper for SLES 10).
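For the by-id recommendation above, an /etc/fstab entry might look like the following. This is an illustrative sketch only: the wwid shown is the one from this blueprint's test LUN, and the exact symlink names depend on your udev rules, so list /dev/disk/by-id/ on your system to confirm:

/dev/disk/by-id/scsi-360a98000686f68656c6f51645736374f-part1 /boot ext3 defaults,_netdev 1 2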

