Blueprints: Installing Linux on a Multipath iSCSI LUN on an IP Network
Note: Before using this information and the product it supports, read the information in Notices on page 27.

First Edition (August 2009)

Copyright IBM Corporation. US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Introduction
Chapter 1. Scope, requirements, and support
Chapter 2. iSCSI and Multipath overview
Chapter 3. Installing Linux distributions on multipathed iSCSI target devices
    Installing RHEL 5.3 on a multipath iSCSI storage device
    Installing SLES 11 on a multipath iSCSI storage device (System x only)
Chapter 4. Troubleshooting tips
Chapter 5. Installing Linux on a Multipath iSCSI LUN on an IP Network
Appendix. Related information and downloads
Notices
Trademarks
Introduction

This blueprint provides step-by-step instructions for installing Red Hat Enterprise Linux (RHEL) 5.3 and SUSE Linux Enterprise Server (SLES) 11 on a multipath iSCSI logical unit (LUN). The procedures were tested on System x and System p blades connected to a NetApp storage server through an Ethernet IP network. You can adapt these instructions to install either of these Linux distributions onto other supported models of iSCSI storage devices. Key tools and technologies discussed in this demonstration include the iSCSI logical unit (LUN), device mapper (DM) multipath, multipathed iSCSI, iscsiadm, and dmsetup.

Intended audience

This document is intended for Linux system administrators who have prior experience installing Red Hat Enterprise Linux 5 or SUSE Linux Enterprise Server 11 and have a moderate level of knowledge of Device Mapper (DM) Multipath and iSCSI.

Scope and purpose

This document describes how to install RHEL 5.3 or SLES 11 on a System x or System p host that is connected to an iSCSI storage device through an IP network. The instructions may change in newer releases of the same distributions. The configuration and setup of the host and the iSCSI storage device, and the physical setup of multiple paths to this storage device, are not covered in this document. Refer to the documentation supplied with your storage device for more information. The instructions in this blueprint were tested on System x and System p blades; they should work on non-blade System x and System p servers with some adaptation. The instructions assume installation of RHEL 5.3 or SLES 11 onto the multipath iSCSI boot device, although part of them applies even if you only want to set up multipath on a non-boot iSCSI device.

Software requirements

This blueprint is written for Red Hat Enterprise Linux (RHEL) 5.3 and SUSE Linux Enterprise Server (SLES) 11.
Hardware requirements

See the IBM Storage interoperability matrices at interop.html for supported storage configurations. The examples included in this blueprint were tested on a System x LS21 blade and a System p JS12 blade host system with one network adapter. The iSCSI storage target is a NetApp dual-node file server with two logical units.

Author names

Malahal Naineni

Other contributors

Monza Lui
Robb Romans
Kersten Richter
IBM Services

Linux offers flexibility, options, and competitive total cost of ownership with a world-class enterprise operating system. Community innovation integrates leading-edge technologies and best practices into Linux. IBM is a leader in the Linux community, with over 600 developers in the IBM Linux Technology Center working on over 100 open source projects in the community. IBM supports Linux on all IBM servers, storage, and middleware, offering the broadest flexibility to match your business needs. For more information about IBM and Linux, go to ibm.com/linux.

IBM Support

Questions and comments regarding this documentation can be posted on the developerWorks Storage Connectivity Blueprint Community Forum. The IBM developerWorks discussion forums let you ask questions and share knowledge, ideas, and opinions about technologies and programming techniques with other developerWorks users. Use the forum content at your own risk. While IBM will attempt to provide a timely response to all postings, the use of this developerWorks forum does not guarantee a response to every question that is posted, nor do we validate the answers or the code that are offered.

Typographic conventions

The following typographic conventions are used in this blueprint:

Bold: Identifies commands, subroutines, keywords, files, structures, directories, and other items whose names are predefined by the system. Also identifies graphical objects such as buttons, labels, and icons that the user selects.
Italics: Identifies parameters whose actual names or values are to be supplied by the user.
Monospace: Identifies examples of specific data values, examples of text like what you might see displayed, examples of portions of program code like what you might write as a programmer, messages from the system, or information you should actually type.

Related reference: Chapter 1, Scope, requirements, and support. This blueprint applies to System x running Linux and PowerLinux.
You can learn more about the systems to which this information applies.
Chapter 1. Scope, requirements, and support

This blueprint applies to System x running Linux and PowerLinux. You can learn more about the systems to which this information applies.

Systems to which this information applies: System x running Linux and PowerLinux
Chapter 2. iSCSI and Multipath overview

The iSCSI standard (RFC 3720) defines the transport of the SCSI protocol over a TCP/IP network, allowing block access to target devices. A host connection to the network can be provided by an iSCSI host bus adapter or by an iSCSI software initiator that uses the standard network interface card in the host. For more information, see RFC 3720.

The connection from the server through the host bus adapter (HBA) to the storage controller is referred to as a path. Within the context of this blueprint, multipath connectivity refers to a system configuration in which multiple connection paths exist between a server and a storage unit (logical unit, or LUN) within a storage subsystem. This configuration can be used to provide redundancy or increased bandwidth. Multipath connectivity provides redundant access to the storage devices, for example, to retain access to a storage device when one or more of the components in a path fail. Another advantage of multipath connectivity is increased throughput by way of load balancing. Note that multipathing protects against the failure of paths, not against the failure of a specific storage unit.

A simple example of multipath connectivity is two NICs connected to a network to which the storage controllers are also connected. In this case, the storage units can be accessed from either of the NICs, and hence you have multipath connectivity. In the following diagram, each host has two NICs and each storage unit has two controllers. With this configuration, each host has four paths to each of the LUNs in each of the storage devices.
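The path arithmetic above can be sketched in a few lines of shell. The NIC and controller counts below mirror the example diagram; they are illustrative values, not probed from a real system:

```shell
# Paths per LUN: every (host NIC, storage controller) pair forms an
# independent path, so the counts multiply. Values match the diagram.
nics=2
controllers=2
paths=$((nics * controllers))
echo "paths per LUN: $paths"
```

With two NICs and two controllers this prints four paths per LUN, matching the configuration described above.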
Figure 1. Simple IP network example

Related reference: Chapter 1, Scope, requirements, and support. This blueprint applies to System x running Linux and PowerLinux. You can learn more about the systems to which this information applies.
Chapter 3. Installing Linux distributions on multipathed iSCSI target devices

Use these instructions to install the RHEL 5.3 or SLES 11 distribution to logical volumes created from two physical iSCSI storage devices. Before beginning, complete the physical configuration of the multipath servers to the iSCSI storage.

Related reference: Chapter 1, Scope, requirements, and support. This blueprint applies to System x running Linux and PowerLinux. You can learn more about the systems to which this information applies.

Installing RHEL 5.3 on a multipath iSCSI storage device

Follow these steps to install Red Hat Enterprise Linux 5.3 on a multipath iSCSI storage device.

Procedure

1. To install RHEL 5.3 on an iSCSI LUN, follow the steps in the iSCSI Overview blueprint. Adjust the following step to prepare for the multipath creation after the installation: add the mpath parameter at the boot prompt during installation. For example, instead of using the command linux vnc, this blueprint used the linux mpath vnc command in the test environment. This demonstration assumes the default LVM partitioning scheme. If you want to install RHEL 5.3 onto a different partitioning scheme, adjust the steps.

2. Check that your root file system image is discovered as installed on a multipath iSCSI device. If it is, then you have completed all required steps and no further multipath configuration is necessary. Otherwise, continue with the next step.

a. Enter the mount command to find where the root file system image is located. The following output shows that root is on the logical volume VolGroup00/LogVol00:

   # mount
   /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw,_netdev)
   proc on /proc type proc (rw)
   sysfs on /sys type sysfs (rw)
   devpts on /dev/pts type devpts (rw,gid=5,mode=620)
   tmpfs on /dev/shm type tmpfs (rw)
   none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
   sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
   /dev/sda1 on /boot type ext3 (rw,_netdev)

b.
Enter the lvs -o lv_name,vg_name,devices command to check where this logical volume resides. If the output is similar to the following (showing the root file system image on devices named /dev/sdX), then the root file system image is detected through single-path devices. In the following output, the root file system image is detected on /dev/sda2 and /dev/sdb1, which are not multipath devices. If your root file system image is detected on single-path devices as below, go to step 3.

   # lvs -o lv_name,vg_name,devices
   LV       VG         Devices
   LogVol00 VolGroup00 /dev/sdb1(0)
   LogVol00 VolGroup00 /dev/sda2(0)
   LogVol01 VolGroup00 /dev/sda2(31)

On the other hand, if you see something like the following output (showing the root file system image on devices named /dev/dm-X), gather more information to see whether the DM device is a multipath device.
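The check in step 2b can be automated with a small sketch like the following. The device names are sample values taken from the listings above, not output from a live system:

```shell
# Classify an lvs device-column entry: /dev/sdX means the LV sits directly
# on a single-path SCSI device; /dev/dm-X needs a further dmsetup check.
classify() {
  case "$1" in
    /dev/dm-*) echo "$1 device-mapper" ;;
    /dev/sd*)  echo "$1 single-path" ;;
    *)         echo "$1 unknown" ;;
  esac
}
classify /dev/sdb1   # sample from the first lvs listing
classify /dev/dm-4   # sample from the second lvs listing
```

A device-mapper result is not yet proof of multipath; steps 2c and 2d show how to confirm it with dmsetup.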
   # lvs -o lv_name,vg_name,devices
   LV       VG         Devices
   LogVol00 VolGroup00 /dev/dm-2(0)
   LogVol00 VolGroup00 /dev/dm-4(0)
   LogVol01 VolGroup00 /dev/dm-4(31)

c. Enter the ls -l command on both DM devices to find their major and minor numbers:

   # ls -l /dev/dm-2
   brw-rw---- 1 root root 253, 2 Jul 14 14:30 /dev/dm-2
   # ls -l /dev/dm-4
   brw-rw---- 1 root root 253, 4 Jul 14 14:30 /dev/dm-4

d. Enter the dmsetup command with the major and minor numbers as parameters to see whether they are multipath devices and, if so, to determine the corresponding mpath names. The following output shows that /dev/dm-2 and /dev/dm-4 correspond to mpath1p1 and mpath0p2, so this root file system image is installed on a multipath iSCSI LUN.

   # dmsetup info -c -o name,major,minor --major 253 --minor 2
   Name     Maj Min
   mpath1p1 253 2
   # dmsetup info -c -o name,major,minor --major 253 --minor 4
   Name     Maj Min
   mpath0p2 253 4

The multipath -ll command shows that mpath0 and mpath1 are multipath devices with two paths each, as follows:

   # multipath -ll
   mpath1 (360a f68656c6f ) dm-1 NETAPP,LUN
   [size=7.0g][features=1 queue_if_no_path][hwhandler=0][rw]
   \_ round-robin 0 [prio=4][active]
    \_ 0:0:0:1 sdb 8:16 [active][ready]
    \_ 1:0:0:1 sdd 8:48 [active][ready]
   mpath0 (360a f68656c6f f) dm-0 NETAPP,LUN
   [size=5.0g][features=1 queue_if_no_path][hwhandler=0][rw]
   \_ round-robin 0 [prio=4][active]
    \_ 0:0:0:0 sda 8:0 [active][ready]
    \_ 1:0:0:0 sdc 8:32 [active][ready]

If you see similar output, you are finished because the multipath setup was performed automatically; no further multipath configuration is necessary. If you do not see similar output, continue to the next step.

3. Create multiple iSCSI sessions (multiple paths).

a. Enter the command ls -d /sys/block/sd* to see how many block devices are found by the initrd image created by the installer, as follows:

   # ls -d /sys/block/sd*
   /sys/block/sda  /sys/block/sdb

b.
Enter the command iscsiadm -m node --login to find other iSCSI paths:

   # iscsiadm -m node --login
   Logging in to [iface: default, target: iqn com.netapp:sn , portal: ,3260]
   Logging in to [iface: default, target: iqn com.netapp:sn , portal: ,3260]
   Login to [iface: default, target: iqn com.netapp:sn , portal: ,3260]: successful
   iscsiadm: Could not login to [iface: default, target: iqn com.netapp:sn , portal: ,3260]:
   iscsiadm: initiator reported error (15 - already exists)

c. You should see extra device paths in the /sys/block/ directory after running the above command. These devices are the newly created iSCSI paths. In this example, the newly discovered iSCSI paths are sdc and sdd, as follows:
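The before-and-after comparison in steps 3a through 3c can be captured mechanically. Because iscsiadm cannot run outside a configured initiator, this sketch simulates /sys/block with a temporary directory in which sda and sdb already exist and sdc and sdd appear after the login:

```shell
# Snapshot the device list, "log in" (simulated by creating sdc/sdd),
# snapshot again, and report the difference as the new paths.
simdir=$(mktemp -d)
touch "$simdir/sda" "$simdir/sdb"
ls "$simdir" | sort > "$simdir/before.txt"
touch "$simdir/sdc" "$simdir/sdd"          # stands in for iscsiadm login
ls "$simdir" | sort | grep -v '\.txt$' > "$simdir/after.txt"
newpaths=$(comm -13 "$simdir/before.txt" "$simdir/after.txt")
echo "new iSCSI paths: $newpaths"
```

On a real system, replace the touch commands with the actual iscsiadm login and point the listings at /sys/block/sd* instead of the temporary directory.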
   # ls -d /sys/block/sd*
   /sys/block/sda  /sys/block/sdb  /sys/block/sdc  /sys/block/sdd

4. Edit the /etc/multipath.conf file.

a. Ensure that the file contains the following lines:

   defaults {
           user_friendly_names yes
   }

b. The multipath tools blacklist everything listed within the blacklist {} statement in the /etc/multipath.conf file. Comment out the wwid line within the blacklist {} statement so that the iSCSI devices are not blacklisted. For example, here is a modified multipath.conf file:

   # cat /etc/multipath.conf
   defaults {
           user_friendly_names yes
   }
   blacklist {
           devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
           devnode "^(hd|xvd|vd)[a-z]*"
   #       wwid "*"
   }

5. Enter the multipath command to create a bindings file named /var/lib/multipath/bindings. This file is included in the new initrd image.

   # multipath

6. Edit the /etc/fstab file to ensure that it does not contain single-path device names or LABELs. The /etc/init.d/netfs script may try to mount single-path names (such as /dev/sda1) because the label is the same on a single-path device as on its corresponding multipath device. However, the script cannot mount the device because the corresponding multipath device is already active. To avoid this error during boot, edit the /etc/fstab file and replace any LABELs or path names with LVM names or multipath names, as appropriate. See the next step for an example. In this example, the default LVM partitioning scheme was accepted during system installation, so the /etc/fstab file on the test system contains the following LABEL entry:

   LABEL=/boot /boot ext3 defaults,_netdev 1 2

a. Find out which multipath device corresponds to the /boot LABEL. Enter the blkid -l -t LABEL=/boot command to determine which device has the /boot LABEL. The following output shows that /dev/sda1 has the /boot LABEL:

   # blkid -l -t LABEL=/boot
   /dev/sda1: LABEL="/boot" UUID="cec3f5e afb-a3fec0b40ccb4420" SEC_TYPE="ext2" TYPE="ext3"

b.
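The blacklist edit in step 4b can also be applied non-interactively with sed. The sketch below works on a sample copy of the file (the content is an assumed example, not your real /etc/multipath.conf):

```shell
# Comment out the catch-all wwid line so iSCSI devices are not blacklisted.
mpconf=$(mktemp)
cat > "$mpconf" <<'EOF'
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(hd|xvd|vd)[a-z]*"
        wwid "*"
}
EOF
sed -i 's/^\([[:space:]]*\)wwid "\*"/\1# wwid "*"/' "$mpconf"
grep 'wwid' "$mpconf"
```

After the sed command, the only wwid line left in the file is the commented-out one, matching the modified multipath.conf shown above.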
Determine the wwid (World Wide Identifier, a unique identifier for a logical unit in a SCSI storage subsystem) of the boot device (/dev/sda in this example) by typing the following command:

   # /sbin/scsi_id -g -u -s /block/sda
   360a f68656c6f f

c. Determine the boot device's mpath name by looking at the /var/lib/multipath/bindings file. In our bindings file, the wwid of sda corresponds to mpath0:

   # cat /var/lib/multipath/bindings
   # Multipath bindings, Version : 1.0
   # NOTE: this file is automatically maintained by the multipath program.
   # You should not need to edit this file in normal circumstances.
   #
   # Format:
   # alias wwid
   #
   mpath0 360a f68656c6f f
   mpath1 360a f68656c6f
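The lookup in step 6c can be scripted with awk. The wwids below are placeholders chosen only to make the sketch self-contained; on a real system the value comes from scsi_id and the file is /var/lib/multipath/bindings:

```shell
# Map a wwid to its user-friendly mpath alias, using the bindings file
# format shown above (comment lines, then "alias wwid" pairs).
bindings=$(mktemp)
cat > "$bindings" <<'EOF'
# Format:
# alias wwid
#
mpath0 360a98000486e2f68656c6f000000000f
mpath1 360a98000486e2f68656c6f0000000010
EOF
wwid=360a98000486e2f68656c6f000000000f    # placeholder example wwid
alias_name=$(awk -v w="$wwid" '$2 == w { print $1 }' "$bindings")
echo "alias for $wwid: $alias_name"
```

The awk match on the second field skips the comment lines automatically, because their second field is never a wwid.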
d. In general, multipath device names take the form /dev/mapper/mpathXpY, where X is the multipath number and Y is the partition number. From the previous step, mpath0 corresponds to /dev/sda. Therefore, you can translate /dev/sda1 to /dev/mapper/mpath0p1 and replace LABEL=/boot with the multipath device name of /dev/sda1: /dev/mapper/mpath0p1. Here is the new entry for the boot device in the /etc/fstab file:

   /dev/mapper/mpath0p1 /boot ext3 defaults,_netdev 1 2

It replaces the old entry:

   LABEL=/boot /boot ext3 defaults,_netdev 1 2

Note: If you chose not to use friendly names in step 4.a, you need to use the format /dev/mapper/<wwid>p<partition number> for the multipath device names. For example, here is an entry in /etc/fstab for /boot without friendly names:

   /dev/mapper/360a f68656c6f fp1 /boot ext3 defaults,_netdev 1 2

7. Create an iSCSI/multipath-capable initrd image by following these steps:

a. Save the original /sbin/mkinitrd and /boot/initrd image files for backup. Enter the following commands to create backup files:

   # cp /sbin/mkinitrd /sbin/mkinitrd.orig
   # cp /boot/initrd el5.img /boot/initrd el5.img.orig

b. Edit the /sbin/mkinitrd file. Open the file and search for the findstoragedriver () function. The arguments passed to findstoragedriver () are not correct for the default LVM installation. Typing ls -d /sys/block/sd* shows you all paths. Replace the $@ in the first line of the findstoragedriver () function definition with your list of all boot and root device paths. On the test system, there are two disks with two paths per disk, for a total of four path names. Therefore, change the for loop from:

   for device in $@ ; do

to:

   for device in sda sdb sdc sdd ; do

c. Enable multipath in the mkinitrd script. Search within the /sbin/mkinitrd file for use_multipath=0, and change it to use_multipath=1.

d.
Because your specific devices are not going to be listed in the initrd image created by the default /sbin/mkinitrd script, you must add them to the correct place in the script. Search for "echo Creating multipath devices" in the file. For each of your multipath disks, add a line such as the following example after the echo command:

   emit "/bin/multipath -v 0 <wwid>"

For example, on the test system, these two lines were added to the /sbin/mkinitrd file:

   emit "/bin/multipath -v 0 360a f68656c6f f"
   emit "/bin/multipath -v 0 360a f68656c6f "

To show the changes that we made, here is a comparison between the original and the modified mkinitrd file:

   # diff -Nau /sbin/mkinitrd.orig /sbin/mkinitrd
   --- /sbin/mkinitrd.orig
   +++ /sbin/mkinitrd
   @@ ... @@
    }
    findstoragedriver () {
   -    for device in $@ ; do
   +    for device in sda sdb sdc sdd ; do
        case " $handleddevices " in
        *" $device "*) continue
   @@ ... @@
    echo $NONL "$@" >> $RCFILE
    }
   -use_multipath=0
   +use_multipath=1
    use_emc=0
    use_xdr=0
    if [ -n "$testdm" -a -x /sbin/dmsetup -a -e /dev/mapper/control ];
   @@ ... @@
    if [ "$use_multipath" == "1" ]; then
        emit "echo Creating multipath devices"
   +    emit "/bin/multipath -v 0 360a f68656c6f f"
   +    emit "/bin/multipath -v 0 360a f68656c6f "
        for wwid in $root_wwids ; do
            emit "/bin/multipath -v 0 $wwid"
        done

e. Use the modified mkinitrd script to create a new initrd image:

   # mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

f. Reboot your system:

   # reboot

8. After the system reboots, the root file system image should be discovered on multipath iSCSI devices. Repeat the instructions in step 2 to verify.

9. To ensure that these changes persist even if you change the storage environment, follow these steps:

a. Restore the original mkinitrd that you saved in step 7.a:

   # cp /sbin/mkinitrd.orig /sbin/mkinitrd

b. Edit the /etc/sysconfig/mkinitrd/multipath file and change MULTIPATH=no to MULTIPATH=yes. If this file does not exist on your system, create it by entering the following command:

   echo "MULTIPATH=yes" > /etc/sysconfig/mkinitrd/multipath

Related reference: Chapter 1, Scope, requirements, and support. This blueprint applies to System x running Linux and PowerLinux. You can learn more about the systems to which this information applies.

Installing SLES 11 on a multipath iSCSI storage device (System x only)

The following steps are currently supported on System x only.

Procedure

1. Follow all the steps in the iSCSI Overview blueprint at lnxinfo/v3r0m0/topic/liaai/iscsi/liaaiiscsi.htm to install SLES 11 on an iSCSI LUN. The following steps are needed once you have completed the installation and your system has rebooted.

2. Enable multipath services at system startup by entering the following commands:

   # chkconfig boot.multipath on
   # chkconfig multipathd on

3. Create the /etc/multipath.conf file. The default SLES 11 installation does not create the /etc/multipath.conf file.
See the /usr/share/doc/packages/multipath-tools/ directory for more information. In that directory, refer to the multipath.conf.synthetic template and the multipath.conf.annotated HOWTO. In the test environment, the multipath.conf.synthetic file was copied to /etc/multipath.conf. To do so, enter the following command:

   # cp /usr/share/doc/packages/multipath-tools\
   /multipath.conf.synthetic /etc/multipath.conf
All entries in the example file are commented out. You can change the values in this file if necessary for your environment.

4. Enter the iscsiadm -m discovery -t sendtargets -p <target node ip address>:<target port> command to add the node records that were not discovered during installation. In this example, there are two target nodes, each using the default iSCSI target port 3260:

   # iscsiadm -m discovery -t sendtargets -p :3260
   :3260,1001 iqn com.netapp:sn
   :3260,1000 iqn com.netapp:sn
   # iscsiadm -m discovery -t sendtargets -p :3260
   :3260,1001 iqn com.netapp:sn
   :3260,1000 iqn com.netapp:sn

5. Enter the iscsiadm -m node -p <target node ip address>:<target port> -T <target IQN> -o update -n node.startup -v onboot command to set each node to start up at boot. Note that the test target IQN is iqn com.netapp:sn

   # iscsiadm -m node -p :3260 -T iqn com.netapp:sn \
   -o update -n node.startup -v onboot
   # iscsiadm -m node -p :3260 -T iqn com.netapp:sn \
   -o update -n node.startup -v onboot

6. Enter the iscsiadm -m node -p <target node ip address>:<target port> -T <target IQN> -o update -n node.conn\[0\].startup -v onboot command to set each node connection to start up at boot:

   # iscsiadm -m node -p :3260 -T iqn com.netapp:sn \
   -o update -n node.conn\[0\].startup -v onboot
   # iscsiadm -m node -p :3260 -T iqn com.netapp:sn \
   -o update -n node.conn\[0\].startup -v onboot

7. Verify that the host system can now find all iSCSI paths required for booting.

a. Enter the ls -d /sys/block/sd* command to see how many block devices are found by the initrd image created by the installer, as follows:

   # ls -d /sys/block/sd*
   /sys/block/sda  /sys/block/sdb

b.
Enter the iscsiadm -m node --login command to find other iSCSI paths:

   # iscsiadm -m node --login
   Logging in to [iface: default, target: iqn com.netapp:sn , portal: ,3260]
   Logging in to [iface: default, target: iqn com.netapp:sn , portal: ,3260]
   Login to [iface: default, target: iqn com.netapp:sn , portal: ,3260]: successful
   iscsiadm: Could not login to [iface: default, target: iqn com.netapp:sn , portal: ,3260]:
   iscsiadm: initiator reported error (15 - already exists)

c. You should see extra device paths in the /sys/block/ directory after running the above command. These devices are the newly created iSCSI paths. In this example, the newly discovered iSCSI paths are sdc and sdd, as follows:

   # ls -d /sys/block/sd*
   /sys/block/sda  /sys/block/sdb  /sys/block/sdc  /sys/block/sdd

8. Create a multipath-capable initrd image.

a. Edit the /etc/sysconfig/kernel file to add dm-multipath to the INITRD_MODULES keyword. If your storage configuration requires additional multipath modules, add them here as well. On the test system, only dm-multipath was added, as the NetApp file server storage did not need additional kernel modules. Here is a comparison between the original and modified file on the test system:

   # diff -Nau /etc/sysconfig/kernel.orig /etc/sysconfig/kernel
   --- /etc/sysconfig/kernel.orig
   +++ /etc/sysconfig/kernel
   @@ ... @@
    # ramdisk by calling the script "mkinitrd"
    # (like drivers for scsi-controllers, for lvm or reiserfs)
    #
   -INITRD_MODULES="processor thermal fan jbd ext3 edd"
   +INITRD_MODULES="dm-multipath processor thermal fan jbd ext3 edd"
    ## Type: string
    ## Command: /sbin/mkinitrd

b. Create a backup copy of your initrd file.

c. Run the mkinitrd command to create an initrd image:

   # mkinitrd

9. Update your boot loader configuration with the new initrd, if needed. During the test, the original initrd file was overwritten, so the boot loader configuration file did not require updating.

10. Reboot with the new initrd image and verify that your root is on multipath.

a. Use the mount command to find where the root is. The following output shows that root is on the device mapper (DM) device /dev/dm-3:

   # mount
   /dev/dm-3 on / type ext3 (rw,acl,user_xattr)
   /proc on /proc type proc (rw)
   sysfs on /sys type sysfs (rw)
   debugfs on /sys/kernel/debug type debugfs (rw)
   udev on /dev type tmpfs (rw)
   devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
   fusectl on /sys/fs/fuse/connections type fusectl (rw)
   securityfs on /sys/kernel/security type securityfs (rw)

b. Enter the ls -l command on the DM device to find its major and minor numbers. The following output shows that /dev/dm-3's major number is 253 and minor number is 3:

   # ls -l /dev/dm-3
   brw-rw---- 1 root disk 253, 3 Jul 26 16:13 /dev/dm-3

c. Enter the dmsetup table command with the major and minor numbers as parameters to look at the current table of the device. The following output shows that /dev/dm-3 is a linear mapping (partition) on a DM device whose major number is 253 and minor number is 0:

   # dmsetup table --major 253 --minor 3
   0 ... linear 253:0 ...

d. Enter the dmsetup info command with the major and minor numbers from the last step, 253 and 0 in this example, to get the device name. This output shows that the suspected multipath device name is 360a f68656c6f f:

   # dmsetup info -c -o name,major,minor --major 253 --minor 0
   Name                 Maj Min
   360a f68656c6f f     253 0

e. Enter the multipath -ll command with the device name as a parameter, and check whether multiple paths are discovered.
In this output, two paths are shown under device 360a f68656c6f f: sda and sdc. If you see similar output, the multipath setup is complete.

   # multipath -ll 360a f68656c6f f
   360a f68656c6f f dm-0 NETAPP,LUN
   [size=5.0g][features=1 queue_if_no_path][hwhandler=0][rw]
   \_ round-robin 0 [prio=4][active]
    \_ 0:0:0:0 sda 8:0 [active][ready]
    \_ 1:0:0:0 sdc 8:32 [active][ready]

Related reference: Chapter 1, Scope, requirements, and support. This blueprint applies to System x running Linux and PowerLinux. You can learn more about the systems to which this information applies.
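The INITRD_MODULES edit in step 8a of the SLES procedure can be applied with sed rather than an editor. The sketch below operates on a sample line (the module list is the one from the test system's diff; adjust it for your configuration), and prepends dm-multipath only if it is not already present:

```shell
# Prepend dm-multipath to INITRD_MODULES, keeping the existing modules.
kconf=$(mktemp)
echo 'INITRD_MODULES="processor thermal fan jbd ext3 edd"' > "$kconf"
if ! grep -q '^INITRD_MODULES=".*dm-multipath' "$kconf"; then
  sed -i 's/^INITRD_MODULES="/INITRD_MODULES="dm-multipath /' "$kconf"
fi
cat "$kconf"
```

The guard makes the edit idempotent, so running it twice does not insert dm-multipath a second time.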
Chapter 4. Troubleshooting tips

This topic discusses troubleshooting tips and caveats.

The host blade used when testing this blueprint has two network adapters. Configuring the eth0 interface for iSCSI was unsuccessful: the BIOS failed to log in to the target. Configuring the eth1 interface for iSCSI was successful.

Make sure that /etc/fstab has correct LVM or multipath names, because /etc/init.d/netfs tries to mount all net devices. A LABEL in the /etc/fstab file for a network device causes problems because the script tries to use underlying devices rather than a multipath device. Use only by-id names in the /etc/fstab file for the SLES 11 distribution.

Both SLES 11 and RHEL 5.3 x86_64 Xen kernels failed to boot from the iSCSI boot device. However, physical machine installations for both distributions were successful. SLES 11 non-Xen default kernels also failed to boot from the iSCSI boot device with our network installation. When the ip=* parameter was removed from the grub menu, the kernel booted fine. The failsafe kernel booted normally in either case.

SLES 11 cannot boot even to rescue mode if the target IQN was not set to onboot mode (automatic), as suggested in the iSCSI paper for SLES 10.

Related reference: Chapter 1, Scope, requirements, and support. This blueprint applies to System x running Linux and PowerLinux. You can learn more about the systems to which this information applies.
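The fstab caveat above can be checked mechanically. This sketch scans a sample fstab (assumed content, not a real system's file) and flags LABEL= entries that carry the _netdev option, since those are the entries the netfs script mishandles:

```shell
# Flag fstab entries that use a LABEL for a network (_netdev) device.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/VolGroup00-LogVol00 / ext3 defaults,_netdev 1 1
LABEL=/boot /boot ext3 defaults,_netdev 1 2
tmpfs /dev/shm tmpfs defaults 0 0
EOF
bad=$(awk '$1 ~ /^LABEL=/ && $4 ~ /_netdev/ { print $1 }' "$fstab")
echo "entries to replace with multipath names: $bad"
```

Any entry the script reports should be rewritten to an LVM, multipath, or by-id name, as described in the installation procedures above.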
Chapter 5. Installing Linux on a Multipath iSCSI LUN on an IP Network
24 Figure 2. Simple IP network example Related reference: Chapter 1, Scope, requirements, and support, on page 1 This blueprint applies to System x running Linux and PowerLinux. You can learn more about the systems to which this information applies. Installing Linux distributions on multipathed iscsi target devices Use these instructions to install RHEL 5.3 or SLES 11 distribution to logical volumes created from two physical iscsi storage devices. Before beginning, complete the physical configuration of the multipath servers to the iscsi storage. Related reference: Chapter 1, Scope, requirements, and support, on page 1 This blueprint applies to System x running Linux and PowerLinux. You can learn more about the systems to which this information applies. Installing RHEL5.3 on a multipath iscsi storage device Follow these steps to install Red Hat Enterprise Linux 5.3 on a multipath iscsi storage device. 16 Blueprints: Installing Linux on a Multipath iscsi LUN on an IP Network
Procedure

1. To install RHEL 5.3 on an iSCSI LUN, follow the steps in the iSCSI Overview blueprint. Adjust the following step to prepare for the multipath creation after the installation:

   Add the mpath parameter at the boot prompt during installation. For example, instead of using the command linux vnc, this blueprint used the linux mpath vnc command in the test environment.

   This demonstration assumes the default LVM partitioning scheme. If you want to install RHEL 5.3 with a different partitioning scheme, adjust the steps accordingly.

2. Check whether your root file system image is discovered as installed on a multipath iSCSI device. If it is, you have completed all required steps and no further multipath configuration is necessary. Otherwise, continue with the next step.

   a. Enter the mount command to find where the root file system image is located. The following output shows that root is on the logical volume VolGroup00/LogVol00:

      # mount
      /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw,_netdev)
      proc on /proc type proc (rw)
      sysfs on /sys type sysfs (rw)
      devpts on /dev/pts type devpts (rw,gid=5,mode=620)
      tmpfs on /dev/shm type tmpfs (rw)
      none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
      sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
      /dev/sda1 on /boot type ext3 (rw,_netdev)

   b. Enter the lvs -o lv_name,vg_name,devices command to check where this logical volume resides. If the output is similar to the following (which shows the root file system image on devices named /dev/sdX), the root file system image is detected through single path devices. In the following output, the root file system image is detected on /dev/sda2 and /dev/sdb1, which are not multipath devices. If your root file system image is detected on single path devices as below, go to step 3.

      # lvs -o lv_name,vg_name,devices
      LV       VG         Devices
      LogVol00 VolGroup00 /dev/sdb1(0)
      LogVol00 VolGroup00 /dev/sda2(0)
      LogVol01 VolGroup00 /dev/sda2(31)

      On the other hand, if you see output like the following (which shows the root file system image on devices named /dev/dm-X), gather more information to determine whether the DM device is a multipath device.

      # lvs -o lv_name,vg_name,devices
      LV       VG         Devices
      LogVol00 VolGroup00 /dev/dm-2(0)
      LogVol00 VolGroup00 /dev/dm-4(0)
      LogVol01 VolGroup00 /dev/dm-4(31)

   c. Enter the ls -l command on both DM devices to find their major and minor numbers:

      # ls -l /dev/dm-2
      brw-rw---- 1 root root 253, 2 Jul 14 14:30 /dev/dm-2
      # ls -l /dev/dm-4
      brw-rw---- 1 root root 253, 4 Jul 14 14:30 /dev/dm-4

   d. Enter the dmsetup command with the major and minor numbers as parameters to see whether they are multipath devices and, if so, to determine the corresponding mpath names. The following output shows that /dev/dm-2 and /dev/dm-4 correspond to mpath1p1 and mpath0p2, so this root file system image is installed on a multipath iSCSI LUN.

      # dmsetup info -c -o name,major,minor --major 253 --minor 2
      Name     Maj Min
      mpath1p1 253 2
      # dmsetup info -c -o name,major,minor --major 253 --minor 4
      Name     Maj Min
      mpath0p2 253 4
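The minor-number lookup in step 2d can also be scripted. The following sketch performs the same resolution with awk over a captured name/major/minor table; the table contents mirror the example above, and on a live system they would come from dmsetup info -c --noheadings -o name,major,minor:

```shell
#!/bin/sh
# Resolve a device-mapper minor number to its map name, as in step 2d.
# /tmp/dmtable.txt holds captured sample rows ("name major minor").
name_for_minor() {
    # $1 = minor number; stdin = "name major minor" rows
    awk -v m="$1" '$3 == m { print $1 }'
}

cat > /tmp/dmtable.txt <<'EOF'
mpath1p1 253 2
mpath0p2 253 4
EOF

name_for_minor 2 < /tmp/dmtable.txt
name_for_minor 4 < /tmp/dmtable.txt
```

If the resolved name looks like mpathXpY, the DM device belongs to a multipath map, which you can then confirm with multipath -ll.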
      The multipath -ll command shows you that mpath0 and mpath1 are multipath devices with two paths each, as follows:

      # multipath -ll
      mpath1 (360a f68656c6f ) dm-1 NETAPP,LUN
      [size=7.0g][features=1 queue_if_no_path][hwhandler=0][rw]
      \_ round-robin 0 [prio=4][active]
       \_ 0:0:0:1 sdb 8:16 [active][ready]
       \_ 1:0:0:1 sdd 8:48 [active][ready]
      mpath0 (360a f68656c6f f) dm-0 NETAPP,LUN
      [size=5.0g][features=1 queue_if_no_path][hwhandler=0][rw]
      \_ round-robin 0 [prio=4][active]
       \_ 0:0:0:0 sda 8:0 [active][ready]
       \_ 1:0:0:0 sdc 8:32 [active][ready]

      If you see similar output, you are finished: the multipath setup was performed automatically and no further multipath configuration is necessary. If you do not see similar output, continue to the next step.

3. Create multiple iSCSI sessions (multiple paths).

   a. Enter the ls -d /sys/block/sd* command to see how many block devices are found by the initrd image created by the installer:

      # ls -d /sys/block/sd*
      /sys/block/sda /sys/block/sdb

   b. Enter the iscsiadm -m node --login command to log in to the other iSCSI paths:

      # iscsiadm -m node --login
      Logging in to [iface: default, target: iqn com.netapp:sn , portal: ,3260]
      Logging in to [iface: default, target: iqn com.netapp:sn , portal: ,3260]
      Login to [iface: default, target: iqn com.netapp:sn , portal: ,3260]: successful
      iscsiadm: Could not login to [iface: default, target: iqn com.netapp:sn , portal: ,3260]:
      iscsiadm: initiator reported error (15 - already exists)

   c. You should see extra device paths in the /sys/block/ directory after running the above command. These devices are the newly created iSCSI paths. In this example, the newly discovered iSCSI paths are sdc and sdd:

      # ls -d /sys/block/sd*
      /sys/block/sda /sys/block/sdb /sys/block/sdc /sys/block/sdd

4. Edit the /etc/multipath.conf file.

   a. Ensure that the file contains the following lines:

      defaults {
              user_friendly_names yes
      }

   b. The multipath tools blacklist everything listed within the blacklist {} statement in the /etc/multipath.conf file. Comment out the wwid line within the blacklist {} statement so that the iSCSI devices are not blacklisted. For example, here is a modified multipath.conf file:

      # cat /etc/multipath.conf
      defaults {
              user_friendly_names yes
      }
      blacklist {
              devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
              devnode "^(hd|xvd|vd)[a-z]*"
      #       wwid "*"
      }
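To confirm that the devnode patterns leave your iSCSI disks alone, you can test candidate device names against the same regular expressions with grep. The patterns below are the stock RHEL 5 defaults shown above; sd devices fall through both patterns, which is why only the wwid "*" line needs to be commented out:

```shell
#!/bin/sh
# Check which device names the default devnode blacklist patterns match.
# iSCSI sd* disks match neither pattern, so commenting out the wwid "*"
# line is enough to stop them from being blacklisted.
is_blacklisted() {
    echo "$1" | grep -Eq '^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*|^(hd|xvd|vd)[a-z]*'
}

for dev in ram0 loop1 hda sda sdc; do
    if is_blacklisted "$dev"; then
        echo "$dev: blacklisted"
    else
        echo "$dev: kept"
    fi
done
```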
5. Enter the multipath command to create a bindings file named /var/lib/multipath/bindings. This file is included in the new initrd image.

   # multipath

6. Edit the /etc/fstab file to ensure that it does not contain single path device names or LABELs.

   The /etc/init.d/netfs script may try to mount single path names (such as /dev/sda1) because the label on a single path device is the same as on its corresponding multipath device. However, the script cannot mount the device because the corresponding multipath device is already active. To avoid this error during boot, edit the /etc/fstab file and replace any LABELs or single path names with LVM names or multipath names, as appropriate. See the next step for an example.

   In this example, the default LVM partitioning scheme was accepted during system installation, so the /etc/fstab file on the test system contains the following LABEL entry:

   LABEL=/boot /boot ext3 defaults,_netdev 1 2

   a. Find out which multipath device corresponds to the /boot LABEL. Enter the blkid -l -t LABEL=/boot command to determine which device has the /boot LABEL. The following output shows that /dev/sda1 has the /boot LABEL:

      # blkid -l -t LABEL=/boot
      /dev/sda1: LABEL="/boot" UUID="cec3f5e afb-a3fec0b40ccb4420" SEC_TYPE="ext2" TYPE="ext3"

   b. Determine the wwid (World Wide Identifier, a unique identifier for a logical unit in a SCSI storage subsystem) of the boot device (/dev/sda in this example) by typing the following command:

      # /sbin/scsi_id -g -u -s /block/sda
      360a f68656c6f f

   c. Determine the boot device's mpath name by looking at the /var/lib/multipath/bindings file. In this bindings file, the wwid of sda corresponds to mpath0:

      # cat /var/lib/multipath/bindings
      # Multipath bindings, Version : 1.0
      # NOTE: this file is automatically maintained by the multipath program.
      # You should not need to edit this file in normal circumstances.
      #
      # Format:
      # alias wwid
      #
      mpath0 360a f68656c6f f
      mpath1 360a f68656c6f

   d. In general, multipath device names take the form /dev/mapper/mpathXpY, where X is the multipath number and Y is the partition number. From the previous step, mpath0 corresponds to /dev/sda. Therefore, you can translate /dev/sda1 to /dev/mapper/mpath0p1 and replace LABEL=/boot with the multipath device name of /dev/sda1: /dev/mapper/mpath0p1.

      Here is the new entry for the boot device in the /etc/fstab file:

      /dev/mapper/mpath0p1 /boot ext3 defaults,_netdev 1 2

      It replaces the old entry:

      LABEL=/boot /boot ext3 defaults,_netdev 1 2

      Note: If you chose not to use friendly names in step 4.a, use the format /dev/mapper/<wwid>p<partition number> for the multipath device names. For example, here is an entry in /etc/fstab for /boot without friendly names:

      /dev/mapper/360a f68656c6f fp1 /boot ext3 defaults,_netdev 1 2

7. Create an iSCSI/multipath-capable initrd image by following these steps:

   a. Save the original /sbin/mkinitrd and /boot/initrd image files for backup. Enter the following commands to create backup files:

      # cp /sbin/mkinitrd /sbin/mkinitrd.orig
      # cp /boot/initrd el5.img /boot/initrd el5.img.orig

   b. Edit the /sbin/mkinitrd file. Open the file and search for the findstoragedriver () function. The arguments passed to findstoragedriver () are not correct for the default LVM installation. Typing ls -d /sys/block/sd* shows you all paths. Replace the $@ in the first line of the findstoragedriver () function definition with your list of all boot and root device paths. On the test system, there are two disks with two paths per disk, for a total of four path names. Therefore, change the for loop from:

      for device in $@ ; do

      to:

      for device in sda sdb sdc sdd ; do

   c. Enable multipath in the mkinitrd script. Search within the /sbin/mkinitrd file for use_multipath=0, and change it to use_multipath=1.

   d. Because your specific devices are not going to be listed in the initrd image created by the default /sbin/mkinitrd script, you must add them at the correct place in the script. Search for "echo Creating multipath devices" in the file. For each of your multipath disks, add a line such as the following example after the echo command:

      emit "/bin/multipath -v 0 <wwid>"

      For example, on the test system, these two lines were added to the /sbin/mkinitrd file:

      emit "/bin/multipath -v 0 360a f68656c6f f"
      emit "/bin/multipath -v 0 360a f68656c6f"

      To show the changes that were made, here is a comparison between the original and the modified mkinitrd file:

      # diff -Nau /sbin/mkinitrd.orig /sbin/mkinitrd
      --- /sbin/mkinitrd.orig
      +++ /sbin/mkinitrd
       }
       findstoragedriver () {
      -    for device in $@ ; do
      +    for device in sda sdb sdc sdd ; do
           case " $handleddevices " in
           *" $device "*) continue
      ...
       echo $NONL "$@" >> $RCFILE
       }
      -use_multipath=0
      +use_multipath=1
       use_emc=0
       use_xdr=0
       if [ -n "$testdm" -a -x /sbin/dmsetup -a -e /dev/mapper/control ];
      ...
       if [ "$use_multipath" == "1" ]; then
           emit "echo Creating multipath devices"
      +    emit "/bin/multipath -v 0 360a f68656c6f f"
      +    emit "/bin/multipath -v 0 360a f68656c6f"
           for wwid in $root_wwids ; do
           emit "/bin/multipath -v 0 $wwid"
           done

   e. Use the modified mkinitrd script to create a new initrd image:

      # mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

   f. Reboot your system:

      # reboot
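The three hand edits from step 7 (the device list, the use_multipath flag, and the emit lines) can also be applied with sed. The sketch below runs against a cut-down sample of the script rather than the real /sbin/mkinitrd, and 3600example0001 is a placeholder wwid; substitute your own values from /var/lib/multipath/bindings:

```shell
#!/bin/sh
# Apply the step-7 mkinitrd edits with sed against a cut-down sample.
# NOT the real /sbin/mkinitrd; 3600example0001 is a placeholder wwid.
cat > /tmp/mkinitrd.sample <<'EOF'
findstoragedriver () {
    for device in $@ ; do
use_multipath=0
        emit "echo Creating multipath devices"
EOF

sed -i \
    -e 's/for device in \$@ ; do/for device in sda sdb sdc sdd ; do/' \
    -e 's/^use_multipath=0/use_multipath=1/' \
    -e '/echo Creating multipath devices/a\
emit "/bin/multipath -v 0 3600example0001"' \
    /tmp/mkinitrd.sample

cat /tmp/mkinitrd.sample
```

Review the result with diff before running mkinitrd, as in the comparison shown in step 7.d.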
8. After the system reboots, the root file system image should be discovered on multipath iSCSI devices. Repeat the instructions in step 2 to verify.

9. To ensure that these changes persist even if you change the storage environment, follow these steps:

   a. Restore the original mkinitrd that you saved in step 7.a:

      # cp /sbin/mkinitrd.orig /sbin/mkinitrd

   b. Edit the /etc/sysconfig/mkinitrd/multipath file and change MULTIPATH=no to MULTIPATH=yes. If this file does not exist on your system, create it by entering the following command:

      # echo "MULTIPATH=yes" > /etc/sysconfig/mkinitrd/multipath

Installing SLES 11 on a multipath iSCSI storage device (System x only)

The following steps are currently supported on System x only.

Procedure

1. Follow all the steps in the iSCSI Overview blueprint at lnxinfo/v3r0m0/topic/liaai/iscsi/liaaiiscsi.htm to install SLES 11 on an iSCSI LUN. The following steps are needed once you have completed the installation and your system has rebooted.

2. Enable multipath services at system startup by entering the following commands:

   # chkconfig boot.multipath on
   # chkconfig multipathd on

3. Create the /etc/multipath.conf file. The default SLES 11 installation does not create the /etc/multipath.conf file. See the /usr/share/doc/packages/multipath-tools/ directory for more information. In that directory, refer to the multipath.conf.synthetic template and the multipath.conf.annotated HOWTO. In the test environment, the multipath.conf.synthetic file was copied to the /etc/multipath.conf file. To do so, enter the following command:

   # cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf

   All entries in the example file are commented out. You can change the values in this file if necessary for your environment.

4. Enter the iscsiadm -m discovery -t sendtargets -p <target node ip address>:<target port> command to add the node records that were not discovered during installation. In this example, there are two target nodes, each using the default iSCSI target port 3260:

   # iscsiadm -m discovery -t sendtargets -p :3260
   :3260,1001 iqn com.netapp:sn
   :3260,1000 iqn com.netapp:sn
   # iscsiadm -m discovery -t sendtargets -p :3260
   :3260,1001 iqn com.netapp:sn
   :3260,1000 iqn com.netapp:sn

5. Enter the iscsiadm -m node -p <target node ip address>:<target port> -T <target IQN> -o update -n node.startup -v onboot command to set each node to start up at boot. Note that the test target IQN is iqn com.netapp:sn

   # iscsiadm -m node -p :3260 -T iqn com.netapp:sn \
     -o update -n node.startup -v onboot
   # iscsiadm -m node -p :3260 -T iqn com.netapp:sn \
     -o update -n node.startup -v onboot

6. Enter the iscsiadm -m node -p <target node ip address>:<target port> -T <target IQN> -o update -n node.conn\[0\].startup -v onboot command to set each node connection to start up at boot:

   # iscsiadm -m node -p :3260 -T iqn com.netapp:sn \
     -o update -n node.conn\[0\].startup -v onboot
   # iscsiadm -m node -p :3260 -T iqn com.netapp:sn \
     -o update -n node.conn\[0\].startup -v onboot

7. Verify that the host system can now find all iSCSI paths required for booting.

   a. Enter the ls -d /sys/block/sd* command to see how many block devices are found by the initrd image created by the installer:

      # ls -d /sys/block/sd*
      /sys/block/sda /sys/block/sdb

   b. Enter the iscsiadm -m node --login command to log in to the other iSCSI paths:

      # iscsiadm -m node --login
      Logging in to [iface: default, target: iqn com.netapp:sn , portal: ,3260]
      Logging in to [iface: default, target: iqn com.netapp:sn , portal: ,3260]
      Login to [iface: default, target: iqn com.netapp:sn , portal: ,3260]: successful
      iscsiadm: Could not login to [iface: default, target: iqn com.netapp:sn , portal: ,3260]:
      iscsiadm: initiator reported error (15 - already exists)

   c. You should see extra device paths in the /sys/block/ directory after running the above command. These devices are the newly created iSCSI paths. In this example, the newly discovered iSCSI paths are sdc and sdd:

      # ls -d /sys/block/sd*
      /sys/block/sda /sys/block/sdb /sys/block/sdc /sys/block/sdd

8. Create a multipath-capable initrd image.

   a. Edit the /etc/sysconfig/kernel file to add dm-multipath to the INITRD_MODULES keyword. If your storage configuration requires additional multipath modules, add them here as well. On the test system, only dm-multipath was added, because the NetApp storage server did not need additional kernel modules. Here is a comparison between the original and the modified file on the test system:

      # diff -Nau /etc/sysconfig/kernel.orig /etc/sysconfig/kernel
      --- /etc/sysconfig/kernel.orig
      +++ /etc/sysconfig/kernel
       # ramdisk by calling the script "mkinitrd"
       # (like drivers for scsi-controllers, for lvm or reiserfs)
       #
      -INITRD_MODULES="processor thermal fan jbd ext3 edd"
      +INITRD_MODULES="dm-multipath processor thermal fan jbd ext3 edd"
       ## Type: string
       ## Command: /sbin/mkinitrd

   b. Create a backup copy of your initrd file.

   c. Run the mkinitrd command to create an initrd image:

      # mkinitrd

9. Update your boot loader configuration with the new initrd, if needed. During the test, the original initrd file was overwritten, so the boot loader configuration file did not require updating.

10. Reboot with the new initrd image and verify that your root is on multipath.

   a. Use the mount command to find where the root is. The following output shows that root is on the device mapper (DM) device /dev/dm-3:

      # mount
      /dev/dm-3 on / type ext3 (rw,acl,user_xattr)
      /proc on /proc type proc (rw)
      sysfs on /sys type sysfs (rw)
      debugfs on /sys/kernel/debug type debugfs (rw)
      udev on /dev type tmpfs (rw)
      devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
      fusectl on /sys/fs/fuse/connections type fusectl (rw)
      securityfs on /sys/kernel/security type securityfs (rw)
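A small script can confirm which device backs the root file system, as in step 10a. The sketch below parses captured mount output mirroring the example above; on a live system you would pipe mount itself through the same awk. A /dev/dm-X name alone shows only that root is device-mapper backed, not necessarily multipath, so follow up with the dmsetup check from the RHEL section:

```shell
#!/bin/sh
# Extract the device backing "/" from mount output and report whether
# it is a device-mapper node. Sample output captured from the text.
root_device() {
    # $1: file containing "mount" output; field 3 is the mount point
    awk '$3 == "/" { print $1 }' "$1"
}

cat > /tmp/mount.txt <<'EOF'
/dev/dm-3 on / type ext3 (rw,acl,user_xattr)
/proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
EOF

dev=$(root_device /tmp/mount.txt)
echo "root is on $dev"
case "$dev" in
    /dev/dm-*|/dev/mapper/*) echo "device-mapper backed" ;;
    *)                       echo "single-path device" ;;
esac
```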
More informationHigh-Availability Storage with GlusterFS on CentOS 7 - Mirror across two storage servers
High-Availability Storage with GlusterFS on CentOS 7 - Mirror across two storage servers This tutorial exists for these OS versions CentOS 6.3 CentOS 5.4 On this page 1 Preliminary Note 2 Enable additional
More informationLogical Volume Management
Logical Volume Management for Linux on System z Horst Hummel (Horst.Hummel@de.ibm.com) Linux on System z Development IBM Lab Boeblingen, Germany San Jose, Agenda Logical volume management overview RAID
More informationNEC Storage M series for SAP HANA Tailored Datacenter Integration Configuration and Best Practice Guide
NEC Storage M series for SAP HANA Tailored Datacenter Integration Configuration and Best Practice Guide (M120/M320/M320F/M110/M310/M310F/M510/M710/M710F) August, 2018 NEC Copyright 2018 NEC Corporation.
More informationOracle 12c deployment using iscsi with IBM Storwize V7000 for small and medium businesses Reference guide for database and storage administrators
Oracle 12c deployment using iscsi with IBM Storwize V7000 for small and medium businesses Reference guide for database and storage administrators Shashank Shingornikar IBM Systems and Technology Group
More informationExadata Landing Pad: Migrating a Fibre Channel-based Oracle Database Using an Oracle ZFS Storage Appliance
An Oracle Technical White Paper April 2014 Exadata Landing Pad: Migrating a Fibre Channel-based Oracle Database Using an Oracle ZFS Storage Appliance Introduction 2 Data Migration Architecture Using Oracle
More informationRocketRAID 231x/230x SATA Controller Red Hat Enterprise/CentOS Linux Installation Guide
RocketRAID 231x/230x SATA Controller Red Hat Enterprise/CentOS Linux Installation Guide Version 1.0 Copyright 2008 HighPoint Technologies, Inc. All rights reserved. Last updated on November 5, 2008 Table
More informationInstalling and Configuring for Linux
SANtricity System Manager 11.41 Installing and Configuring for Linux Express Guide December 2017 215-11893_B0 doccomments@netapp.com Table of Contents 3 Contents Deciding whether to use this Express Guide...
More informationLearn Linux, 101: Control mounting and unmounting of
Getting to your data Ian Shields January 27, 2016 (First published October 20, 2010) Learn to mount your Linux ; configure and use removable USB, IEE 1394, or other devices; and properly access floppy
More informationUltraPath Technical White Paper
HUAWEI OceanStor Enterprise Unified Storage System Issue 01 Date 2014-04-02 HUAWEI TECHNOLOGIES CO, LTD Copyright Huawei Technologies Co, Ltd 2014 All rights reserved No part of this document may be reproduced
More informationThis section describes the procedures needed to add a new disk to a VM. vmkfstools -c 4g /vmfs/volumes/datastore_name/vmname/xxxx.
Adding a New Disk, page 1 Mounting the Replication Set from Disk to tmpfs After Deployment, page 3 Manage Disks to Accommodate Increased Subscriber Load, page 5 Adding a New Disk This section describes
More informationSaving Your Bacon Recovering From Common Linux Startup Failures
Saving Your Bacon Recovering From Common Linux Startup Failures Mark Post Novell, Inc. Friday, August 12, 2011 Session Number 10105 Agenda How the boot process is supposed to work What things can go wrong
More informationFibre Channel Adapter and Converged Network Adapter Inbox Driver Update for Linux Kernel 2.6.x and 3.x. Readme. QLogic Corporation All rights reserved
Fibre Channel Adapter and Converged Network Adapter Inbox Driver Update for Linux Kernel 2.6.x and 3.x Readme QLogic Corporation All rights reserved Table of Contents 1. Package Contents 2. OS Support
More informationSLES Linux Installation Guide
Rocket RAID 278x SAS Controller SLES Linux Installation Guide Version 1.1 Copyright 2012 HighPoint Technologies, Inc. All rights reserved. Created on May 29, 2012 Table of Contents 1 Overview... 1 2 Installing
More information1 LINUX KERNEL & DEVICES
GL-250: Red Hat Linux Systems Administration Course Length: 5 days Course Description: The GL250 is an in-depth course that explores installation, configuration and maintenance of Linux systems. The course
More informationIntroduction to Linux features for disk I/O
Martin Kammerer 3/22/11 Introduction to Linux features for disk I/O visit us at http://www.ibm.com/developerworks/linux/linux390/perf/index.html Linux on System z Performance Evaluation Considerations
More informationUsing GNBD with Global File System. Configuration and Administration
Using GNBD with Global File System Configuration and Administration Using GNBD with Global File System: Configuration and Administration Copyright 2007 Red Hat, Inc. This book provides an overview on using
More informationAs this method focuses on working with LVM, we will first confirm that our partition type is actually Linux LVM by running the below command.
How to Increase the size of a Linux LVM by adding a new disk This post will cover how to increase the disk space for a VMware virtual machine running Linux that is using logical volume manager (LVM). First
More informationRecovering GRUB: Dual Boot Problems and Solutions
Recovering GRUB: Dual Boot Problems and Solutions Published by the Open Source Software Lab at Microsoft. October 2007. Special thanks to Chris Travers, Contributing Author to the Open Source Software
More informationConfiguring and Managing Virtual Storage
Configuring and Managing Virtual Storage Module 6 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks
More informationAPPLICATION NOTE Using DiskOnChip Under Linux With M-Systems Driver
APPLICATION NOTE Using DiskOnChip Under Linux With M-Systems Driver SWM-640000016 rev A APPLICATION NOTE Using DiskOnChip Under Linux With M-Systems Driver RTD Embedded Technologies, INC. 103 Innovation
More informationIBM Geographically Dispersed Resiliency for Power Systems. Version Release Notes IBM
IBM Geographically Dispersed Resiliency for Power Systems Version 1.2.0.0 Release Notes IBM IBM Geographically Dispersed Resiliency for Power Systems Version 1.2.0.0 Release Notes IBM Note Before using
More informationUUID and R1Soft. What is a UUID and what is it used for?
UUID and R1Soft What is a UUID and what is it used for? A Universally Unique Identifier (UUID) is a 36-digit code that is used to identify or label something. For the purposes of this article, we will
More informationNotes on Using Red Hat Enterprise Linux AS (v.3 for x86)
2005-09-01 Notes on Using Red Hat Enterprise Linux AS (v.3 for x86) Preface About This Manual This manual provides notes on PRIMERGY operation with Linux installed. Be sure to read this manual before using
More informationSAS Connectivity Card (CIOv) for IBM BladeCenter IBM Redbooks Product Guide
SAS Connectivity Card (CIOv) for IBM BladeCenter IBM Redbooks Product Guide The SAS Connectivity Card (CIOv) for IBM BladeCenter is an expansion card that offers the ideal way to connect the supported
More informationIBM XIV Host Attachment Kit for Linux Version Release Notes
IBM XIV Host Attachment Kit for Linux Version 2.4.0 Release Notes First Edition (March 2015) This document edition applies to version 2.4.0 of the IBM XIV Host Attachment Kit for Linux software package.
More informationFor personnal use only
Adding and Removing Disks From VMware RHEL7 Guests Without Rebooting Finnbarr P. Murphy (fpm@fpmurphy.com) Consider the following scenario. You are studying for your RHCSA or RHCE using one or more RHEL
More informationVirtual Iron Software Release Notes
Virtual Iron Software Release Notes Virtual Iron Version 4.2 Copyright (c) 2007 Virtual Iron Software, Inc. 00122407R1 This information is the intellectual property of Virtual Iron Software, Inc. This
More informationCompTIA Linux+/LPIC-1 COPYRIGHTED MATERIAL
CompTIA Linux+/LPIC-1 COPYRIGHTED MATERIAL Chapter System Architecture (Domain 101) THE FOLLOWING COMPTIA LINUX+/LPIC-1 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER: 101.1 Determine and Configure hardware
More informationDevice Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.1 release notes
Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.1 release notes April 2010 H Legal and notice information Copyright 2009-2010 Hewlett-Packard Development Company, L.P. Overview
More informationDtS Data Migration to the MSA1000
White Paper September 2002 Document Number Prepared by: Network Storage Solutions Hewlett Packard Company Contents Migrating Data from Smart Array controllers and RA4100 controllers...3 Installation Notes
More informationNotes on Using Red Hat Enterprise Linux AS (v.4 for EM64T)
2005-09-01 Notes on Using Red Hat Enterprise Linux AS (v.4 for EM64T) Preface About This Manual This manual provides notes on PRIMERGY operation with Linux installed. Be sure to read this manual before
More informationRocketRAID 231x/230x SATA Controller Debian Linux Installation Guide
RocketRAID 231x/230x SATA Controller Debian Linux Installation Guide Version 1.0 Copyright 2008 HighPoint Technologies, Inc. All rights reserved. Last updated on September 17, 2008 Table of Contents 1
More informationVeritas NetBackup for SQLite Administrator's Guide
Veritas NetBackup for SQLite Administrator's Guide Windows and Linux Release 8.1.1 Documentation version: 8.1.1 Legal Notice Copyright 2018 Veritas Technologies LLC. All rights reserved. Veritas and the
More informationUsing GNBD with Global File System. Configuration and Administration 5.2
Using GNBD with Global File System Configuration and Administration 5.2 Global_Network_Block_Device ISBN: N/A Publication date: May 2008 Using GNBD with Global File System This book provides an overview
More informationUsing Dell EqualLogic and Multipath I/O with Citrix XenServer 6.2
Using Dell EqualLogic and Multipath I/O with Citrix XenServer 6.2 Dell Engineering Donald Williams November 2013 A Dell Deployment and Configuration Guide Revisions Date November 2013 Description Initial
More informationMore on file systems, Booting Todd Kelley CST8177 Todd Kelley 1
More on file systems, Booting Todd Kelley kelleyt@algonquincollege.com CST8177 Todd Kelley 1 bind mounts quotas Booting process and SysVinit Installation Disk rescue mode 2 A bind mount is used to mount
More informationDeploying Solaris 11 with EqualLogic Arrays
Deploying Solaris 11 with EqualLogic Arrays Step-by-step guide to integrating an Oracle Solaris 11 server with a Dell EqualLogic PS Series Array Dell Storage Engineering February 2014 A Dell Deployment
More informationRed Hat Enterprise Linux 7 DM Multipath
Red Hat Enterprise Linux 7 DM Multipath DM Multipath Configuration and Administration Steven Levine Red Hat Enterprise Linux 7 DM Multipath DM Multipath Configuration and Administration Steven Levine
More information"Charting the Course... MOC B: Linux System Administration. Course Summary
Description Course Summary This four-day instructor-led course is designed to provide students with the necessary skills and abilities to work as a professional Linux system administrator. The course covers
More informationThe Linux IPL Procedure
The Linux IPL Procedure SHARE - Tampa February 13, 2007 Session 9274 Edmund MacKenty Rocket Software, Inc. Purpose De-mystify the Linux boot sequence Explain what happens each step of the way Describe
More information