Deployment Guide for KVM and Red Hat Enterprise Linux on NetApp Storage


Reference Architecture
Jon Benedict, NetApp
September 2010
RA

TABLE OF CONTENTS

1 Purpose of This Deployment Guide
  1.1 Intended Audience
  1.2 Terminology
2 System Requirements
  2.1 Minimum System Requirements
  2.2 Recommended System Requirements
  2.3 Network Requirements
  2.4 KVM Requirements
  2.5 Storage Requirements
  2.6 Supported Guest Operating Systems
  2.7 KVM Hardware Limitations
  2.8 Network Architecture
3 Basic Infrastructure Description
4 NetApp Configuration
  4.1 Base Configuration of NetApp FAS Controller
  4.2 Installing Licenses
  4.3 Configure SSH
  4.4 Configure Disk Space on the NetApp FAS Controller
  4.5 Network Configuration for the NetApp FAS Controller
5 Configuring Shared Storage on the NetApp FAS Controller
  5.1 Configure NetApp FAS3170 for NFS
  5.2 Configure NetApp FAS3170 for iSCSI or FCP
  5.3 Configure Fibre HBAs on the NetApp FAS Controller
6 Installation and Base Configuration of Host Nodes
  6.1 BIOS Configuration
  6.2 Installation of Red Hat Enterprise Linux
  6.3 Disk Layout
  6.4 Register with Red Hat Network
  6.5 Host Security
  6.6 Disable Unnecessary and Insecure Services
  6.7 Secure Remote Access to Host Nodes (SSH Keys)
  6.8 Network Configuration of Host Nodes
7 Configure a Remote Administration Host (Optional)
  7.1 Basic Remote Host Configuration
  7.2 Configure Security
  7.3 Confirm NTP Is Running and Starts on Boot
  7.4 Configure NFS Access to the NetApp FAS Controller
  7.5 Install the Packages Needed to Administer KVM Remotely
  7.6 Configure the SSH Key Pair
8 Install and Configure KVM
  8.1 Install the Required Packages
9 Shared Storage
  9.1 Configure NFS-Based Shared Storage on the Host Nodes
  9.2 Configure iSCSI-Based Shared Storage on the Host Nodes
  9.3 Configure FCP-Based Shared Storage on the Host Nodes
10 Configure Multipathing on the Host Nodes
11 Configure GFS2-Based Shared Storage
  11.1 Configure the Host Nodes
  11.2 Configure the Cluster Manager
  11.3 Configure the Cluster
  11.4 Create Fencing Devices
  11.5 Configure GFS
  11.6 SELinux Considerations
12 Create a Golden Image or Template
  12.1 Create and Align a Disk Image for Virtual Guests
  12.2 Prepare the Golden Image for Cloning
  12.3 Clone Virtual Servers
13 Live Migration of Virtual Servers
  13.1 Live Migration Using Virtual Machine Manager
  13.2 Live Migration from Command Line
14 Configure Data Resiliency and Efficiency
  14.1 Thin Provisioning
  14.2 Deduplication
  14.3 Snapshot
15 Appendixes
  Appendix A: Configure Hardware-Based iSCSI Initiator
  Appendix B: Channel Bonding Modes
  Appendix C: Sample Firewall for Host Nodes
  Appendix D: Sample Snapshot Script
  Appendix E: Sample Kickstart File for a Properly Aligned Virtual Server
References

1 PURPOSE OF THIS DEPLOYMENT GUIDE

This deployment guide presents tested best practices for setting up a virtual server environment based on the Kernel-based Virtual Machine (KVM) hypervisor and NetApp storage. It provides instructions for deploying a stable, efficient virtual server environment that serves as a solid foundation for many different applications and workloads.

1.1 INTENDED AUDIENCE

This guide is written for system architects, system administrators, and storage administrators who deploy the KVM hypervisor in a data center where NetApp is the intended back-end storage. Expertise in Linux, virtualization, and storage is expected, preferably with a focus on Red Hat Enterprise Linux and NetApp. Additional expertise in IP networks, and in switched fabric networks if Fibre Channel is used, is also required. Setting up an IP or switched fabric network is not covered in this guide; expertise in these areas, however, is necessary to deploy certain elements of the KVM virtual environment.

1.2 TERMINOLOGY

The following terms are used in this guide:

Channel bond: Red Hat's naming convention for bonding two or more physical NICs for purposes of redundancy or aggregation.

Cluster: A group of related host nodes that support the same virtual servers.

Host node: A physical server that hosts one or more virtual servers.

KVM environment: A general term that encompasses KVM, Red Hat Enterprise Linux (RHEL), the network, and NetApp storage as described in this guide.

Shared storage: A common pool of disk space, file based or logical unit number (LUN) based, simultaneously available to two or more host nodes.

Virtual interface (VIF): A means of bonding two or more physical network interface cards (NICs) for purposes of redundancy or aggregation.

Virtual local area network (VLAN): Useful at Layer 2 switching to segregate broadcast domains and to ease the physical elements of managing a network.

Virtual server: A guest instance that resides on a host node.

2 SYSTEM REQUIREMENTS

Requirements to launch the hypervisor are modest; however, overall system performance depends on the nature of the workload.

2.1 MINIMUM SYSTEM REQUIREMENTS

The following list specifies the minimum system requirements:
6GB of free disk space per host node
2GB of RAM

2.2 RECOMMENDED SYSTEM REQUIREMENTS

Although not required, NetApp strongly recommends the following:
One processor core or hyperthread for each virtualized CPU, plus one for the hypervisor
2GB of RAM, plus additional RAM for virtualized guests
Some type of out-of-band management (IBM RSA, HP iLO, Dell DRAC, and so on)
Multiple pairs of GbE NICs to separate traffic and allow for bonding, or one pair of 10GbE NICs bonded to carry all traffic
Fibre Channel or iSCSI host bus adapters (HBAs), if using hardware initiators and LUN-based storage
Redundant power

2.3 NETWORK REQUIREMENTS

The following list specifies the network requirements:
Switches capable of VLAN segmentation
Gigabit Ethernet (GbE), or 10GbE if available
Multiple switches for channel bonding

2.4 KVM REQUIREMENTS

The KVM hypervisor requires a 64-bit Intel processor with the Intel VT extensions or a 64-bit AMD processor with the AMD-V extensions. It might be necessary to first enable hardware virtualization support in the system BIOS. Run the command shown in Figure 1 from within Linux to verify that the CPU virtualization extensions are available.

Figure 1) Verify availability of CPU virtualization extensions.
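Figure 1 in the original document is a screenshot and is not reproduced here. A typical form of the check it describes is shown below; the helper function is an illustration added for this writeup, not part of the original guide, and simply wraps the same grep pattern so it can be applied to any flags string.

```shell
# Typical form of the Figure 1 check: look for the vmx (Intel VT) or
# svm (AMD-V) CPU flags. On a host node you would run:
#     grep -E '(vmx|svm)' /proc/cpuinfo
# The helper below applies the same pattern to an arbitrary flags line.
has_virt_extensions() {
    printf '%s\n' "$1" | grep -Eq '(vmx|svm)'
}

# Example usage on a live system:
# has_virt_extensions "$(cat /proc/cpuinfo)" && echo "KVM-capable CPU"
```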

If the output includes vmx (Intel) or svm (AMD), the proper extensions are present and enabled.

2.5 STORAGE REQUIREMENTS

Whether one or many host nodes are hosting virtual machines, KVM requires a flexible way to store virtual systems. KVM supports the following storage types:
Directly attached storage
iSCSI or Fibre Channel LUNs, which may be shared in GFS or GFS2 configurations
Network File System (NFS)-mounted file systems

2.6 SUPPORTED GUEST OPERATING SYSTEMS

The following guest operating systems are supported:
RHEL 3, 4, and 5 (32-bit and 64-bit)
Windows Server 2003 and Windows Server 2008 (32-bit and 64-bit)
Windows XP

2.7 KVM HARDWARE LIMITATIONS

The following limits apply to KVM:
256 CPUs per host node
16 virtual CPUs per guest
8 virtual NICs per guest
1TB RAM per host node
256GB RAM per guest

2.8 NETWORK ARCHITECTURE

Although this deployment guide does not discuss the specifics of setting up an Ethernet or switched fabric, certain items, such as configuration of VLANs, need to be addressed. Deployed switches must support:
VLAN segregation
Link Aggregation Control Protocol (LACP), if deploying LACP-mode channel bonds or VIFs
At least 1Gb/s line rate (10Gb/s is preferred)

Additional considerations include:
Multiple switches must be configured for redundancy.
Redundancy is also required for Fibre Channel fabric switches; plan for redundant fibre switches as well as zoning.

3 BASIC INFRASTRUCTURE DESCRIPTION

Table 1 describes the major components of the KVM virtualization environment that are used in this guide.

Table 1) Major components of the KVM virtualization environment.

Host node 1 (chzbrgr): Dual quad-core Intel VT, RHEL 5.4 x86_64. QLogic dual-port iSCSI HBA; QLogic dual-port Fibre HBA.
Host node 2 (hmbrgr): Dual quad-core Intel VT, RHEL 5.4 x86_64. QLogic dual-port iSCSI HBA; QLogic dual-port Fibre HBA.
Network switch 1: Cisco Catalyst 4948. Used for primary traffic (primary in the context of the channel bonds and VIF); also carries a separate VLAN for management traffic.
Network switch 2: Cisco Catalyst 4948. Used for secondary traffic (secondary in the context of the channel bonds and VIF).
Remote host (taco): Nondescript host running RHEL 5.4. Used for remote administration of the NetApp FAS controller, KVM, and the GFS2 cluster.
Storage (ice3170-3a): NetApp FAS3170 running Data ONTAP. Back-end storage providing Fibre Channel Protocol (FCP), iSCSI, and NFS-based storage.
Fibre switch (icefc-4): Brocade Silkworm 410. Fibre switch for FCP connectivity.
Repository: Red Hat Network. Subscription-compliant means of getting Red Hat packages and updates; Web-accessible means of managing subscriptions and packages.

4 NETAPP CONFIGURATION

4.1 BASE CONFIGURATION OF NETAPP FAS CONTROLLER

This deployment guide makes the following assumptions regarding the setup of the NetApp FAS controller:
The NetApp FAS controller is deployed according to existing best practices.
The latest version of Data ONTAP is installed (7.3.2 minimum).
The management interface or serial console is set up.
Licenses for NFS, FCP, and iSCSI are installed. (See section 4.2.)

4.2 INSTALLING LICENSES

The quickest method of installing licenses is from the console of the FAS controller, using the command:

ice3170-3a> license add <license_number>

where <license_number> is the code provided for the requested feature. You can add multiple licenses at one time.

4.3 CONFIGURE SSH

With the possible exception of a purely private terminal server, all traffic to the NetApp FAS controller should be encrypted. Run the commands in Figure 2 to confirm that Secure Shell (SSH) is enabled and that Remote Shell (RSH) and telnet are disabled.

Figure 2) Confirm SSH is enabled and RSH and telnet are disabled.
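Figure 2 is likewise a console screenshot. On Data ONTAP 7-mode the checks it illustrates typically look like the following sketch; this is a reconstruction rather than a copy of the figure, so verify the option names against your Data ONTAP release.

```
ice3170-3a> secureadmin setup ssh        # one-time SSH host key setup
ice3170-3a> secureadmin enable ssh2
ice3170-3a> options rsh.enable off
ice3170-3a> options telnet.enable off
ice3170-3a> options ssh.enable           # confirm: should report "on"
```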

4.4 CONFIGURE DISK SPACE ON THE NETAPP FAS CONTROLLER

CREATE A DISK AGGREGATE

If the NetApp FAS controller is new, there should be only one existing disk aggregate, containing three disks within a single volume, vol0. This volume stores the Data ONTAP operating system and should never contain user data. To prepare the NetApp FAS controller for shared storage, you must configure at least one additional disk aggregate. If such an aggregate is already present, skip to the next section, Create a Volume.

1. From the FilerView Web console, select Aggregates > Manage to view the one existing aggregate, aggr0.
2. Under Aggregates, select Add to launch the Aggregate wizard, as shown in the following window.
3. For most of the items in the Aggregate wizard, choose the defaults, such as double parity and the default RAID group size.
4. When the wizard asks for the number of disks, choose as many as possible.
5. Select Commit.
6. As shown in the following window, return to the Aggregates > Manage menu to place the aggregate online and confirm the configuration.

CREATE A VOLUME

The next activity, common to all types of storage, is to create a flexible volume (FlexVol volume). FlexVol volumes contain the NFS export or LUNs that the host nodes use for shared storage. In the following procedure, ISOs are stored in the same volume as virtual server data, but this is not required.

1. Log in to the Web interface.

2. From the menu on the left, select Volumes > Add to launch the Volume wizard, as shown in the following window.
3. Following the prompts, choose the following:
a. Flexible (for a FlexVol volume).
b. Type kvm_nfs, or another arbitrary but meaningful name, for the volume name, along with POSIX. Leave UTF-8 unchecked.
c. Choose aggr1 for the containing aggregate. Never use aggr0 for user data.
d. Set Space Guarantee to none in order to maintain thin provisioning across volumes, LUNs, and disk images.
e. For size, choose according to initial needs, taking the following into consideration:
i. Number and size of ISO images (DVD ISOs are from 3GB to 4.5GB).
ii. Number and size of guest disk images.
iii. Number and size of guest template (golden) images.
iv. Whether or not deduplication is to be used. If using NetApp deduplication on a volume that stores a LUN, size the volume at two times the size of the LUN. Also, enable thin provisioning on the volume by setting the Space Guarantee to none. No other considerations need to be made for volumes that will store NFS exports.
v. In this procedure, a 100GB volume was used for each of the shared storage protocols (NFS, iSCSI, and FCP), with a 20% Snapshot reserve, leaving 80GB of usable space for each.
4. Click Commit.
5. Select Volumes > Manage, as shown in the following window, to view the new volumes created in this procedure.

By default, a volume is automatically exported as an NFS share when it is created, regardless of how it will be used. For volumes that will be used for LUNs, go to NFS > Manage Exports and delete the unwanted exports.
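The same volume can also be created from the controller console instead of FilerView. A sketch using the values from the procedure above (a thin-provisioned 100GB volume on aggr1 with a 20% Snapshot reserve); the syntax is Data ONTAP 7-mode and should be verified against your release:

```
ice3170-3a> vol create kvm_nfs -s none aggr1 100g
ice3170-3a> snap reserve kvm_nfs 20
```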

4.5 NETWORK CONFIGURATION FOR THE NETAPP FAS CONTROLLER

CONFIGURE VLANS TO SEGREGATE DATA TRAFFIC

Create unique VLANs for each role to separate public, management, and storage traffic. In addition to providing important security benefits, creating unique VLANs for each role allows for the configuration of jumbo frames for NFS and iSCSI traffic. Even if you expect to use FCP to access the storage, complete this configuration to separate public and management traffic. For VLANs with jumbo frames, you must configure the maximum transmission unit (MTU) setting end to end, from storage to host NIC and every switch port in between.

Table 2 shows the VLANs used in this document; each role has its own VLAN and /24 subnet.

Table 2) VLANs used in this document.

Description          VLAN   Subnet
Public traffic              /24
Management traffic          /24
iSCSI traffic               /24
NFS traffic                 /24

The configuration procedure is the same for public, iSCSI, and NFS traffic; simply substitute the proper VLAN and subnet information as appropriate. Follow three main steps to create the VLANs on the NetApp FAS controller:
1. Configure a VIF on the NetApp FAS controller.
2. Configure a VLAN on the NetApp FAS controller.
3. Assign an IP address to the VLAN.

CONFIGURE A VIF ON THE NETAPP FAS CONTROLLER

A VIF is necessary to provide redundancy for the physical network interfaces. To configure a VIF using interfaces that are already in use, access the NetApp FAS controller from the management console or serial console. In the following procedure, the onboard interfaces e0a and e0b are used for the VIF.

1. Log in to the management console or serial port of the NetApp FAS controller.
2. Bring down the physical interfaces that will be used in the VIF, then create the VIF:

ice3170-3a> ifconfig e0a down
ice3170-3a> ifconfig e0b down
ice3170-3a> vif create lacp vif1 -b ip e0a e0b

You can use the Web console instead if you are creating the VIF from NICs on the NetApp FAS controller that are not already in use.

Note: NICs must be down in order to be configured for use within a VIF.

After creating the VIF, you can configure one or more VLANs for use with that VIF. IP addresses are then assigned to the individual VLANs.

CONFIGURE A VLAN ON THE NETAPP FAS CONTROLLER

Use the following procedure to configure a VLAN on the NetApp FAS controller.

1. From the menu on the left of the Web console, select Network > Manage Interfaces.
2. Select Add a New VLAN.
3. As shown in the following window, select the VIF created in the preceding procedure for the physical interface. Also, select the VLAN tag. In this procedure, VLAN 3027 has been created for private NFS traffic.

You must also configure all switches between the host's private interface and the NetApp FAS controller to forward VLAN traffic. Trunk ports must be configured on the switches to allow the specific VLAN traffic to pass.

ASSIGN AN IP ADDRESS TO THE VLAN

Use the following procedure to configure an IP address for the VLAN.

1. From the menu on the left of the Web console, select Network > Manage Interfaces.
2. As shown in the following window, select Modify on the line that contains the VLAN interface that you just created (vif1-3027).

3. Populate the fields to match the network, as shown in the following window.

In the case of the iSCSI or NFS nonrouted VLANs, the IP assignments are arbitrary. As long as the IP addresses of the host nodes' private NICs are on the same subnet and the switches are configured properly, the VLAN is complete.

If jumbo frames are being used in the KVM environment, increase the MTU size from the default of 1500 to 9000. Every port along the way needs to support and be configured for jumbo frames, including the private interface of the host nodes. The host node configuration of jumbo frames is discussed in section 6.8, Network Configuration of Host Nodes.
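VLAN creation and IP assignment can also be done from the console. A sketch using the VLAN from this procedure; the IP address shown is a placeholder, and the mtusize option applies only if jumbo frames are in use:

```
ice3170-3a> vlan create vif1 3027
ice3170-3a> ifconfig vif1-3027 192.168.27.10 netmask 255.255.255.0 mtusize 9000
```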

5 CONFIGURING SHARED STORAGE ON THE NETAPP FAS CONTROLLER

Shared storage may be based on:
NFS
iSCSI
FCP

The following sections cover all three types of shared storage configuration; however, unless you are configuring multiple virtualization environments on the same NetApp FAS controller, you need to configure only one type of shared storage.

5.1 CONFIGURE NETAPP FAS3170 FOR NFS

The following are prerequisites for NFS-based shared storage:
License the controller for NFS.
Create a private VLAN to segregate NFS traffic.
Create a volume.

See section 4, NetApp Configuration, for more details on these prerequisites.

CREATE A QTREE (OPTIONAL)

Use the following procedure to create a qtree.

Note: Qtrees are not required for an NFS share.

1. Log in to the Web console.
2. As shown in the following window, select Volumes > Qtrees > Add from the menu on the left of the Web console to launch the Qtree wizard.
3. For Volume, select the volume previously created. (Never use vol0 for user data.)
4. For Qtree Name, choose an arbitrary but meaningful name, such as kvm_q.
5. For Security Style, choose Unix.
6. Leave Oplocks checked.
7. Click Add. The qtree is created.

CREATE THE EXPORT

Use the following procedure to create the export.

1. Log in to the Web console.
2. From the menu on the left of the Web console, select NFS > Add Export to launch the NFS Export wizard.
3. On the first window, check Read-Write Access, Root Access, and Security.
4. For Export Path, choose the previously created volume, /vol/kvm_vol. If a qtree was created, type the path of the qtree, such as /vol/kvm_vol/kvm_q.
5. For read-write hosts, add the individual IP addresses of the private interfaces that will be used for NFS traffic.
6. As shown in the following window, do the same on the Root Access window.
7. For Security, select UNIX style.
8. Click Commit.
9. As shown in the following window, click Manage Exports to review the NFS share.
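The equivalent export can be created from the console with exportfs -p. The host addresses below are placeholders for the private NFS interfaces of the two host nodes; verify the option syntax against your Data ONTAP release:

```
ice3170-3a> exportfs -p rw=192.168.27.21:192.168.27.22,root=192.168.27.21:192.168.27.22 /vol/kvm_vol/kvm_q
ice3170-3a> exportfs        # list current exports to confirm
```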

5.2 CONFIGURE NETAPP FAS3170 FOR ISCSI OR FCP

The procedures for configuring LUNs on the NetApp FAS controller for iSCSI and FCP are almost identical. Any differences are noted in the following prerequisite list and subsequent procedure. The following are prerequisites for iSCSI- and FCP-based shared storage:
License the controller for iSCSI or FCP.
Set up a VLAN for private iSCSI traffic, or set up a switched fabric network for FCP traffic.
Create a volume.

CONFIGURE LUNS ON THE NETAPP FAS CONTROLLER FOR ISCSI AND FCP

Use the following procedure to configure LUNs on the NetApp FAS controller for iSCSI and FCP. This procedure uses the two volumes (kvm_iscsi and kvm_vol, sized 80GB and 66GB, respectively) that were created earlier in this deployment guide. See the following window.

1. Create a LUN inside the volume by selecting LUNs > Add. The path needs to include the created volume as well as the name of the LUN. The LUN protocol type needs to match the operating system, and you should include a brief description, as shown in the following window.

2. Create an initiator group (igroup) by selecting Initiator Groups > Add. The igroup enables the host nodes to access the LUN being created. The group name should be easily recognizable. In the following window, the name of the host gaining access is used as part of the group name. The operating system is Linux.
3. Select a Type for the igroup.

For iSCSI-based storage, select iSCSI. The initiators need to match either the contents of /etc/iscsi/initiatorname.iscsi on the host node (for software iSCSI) or the initiator name configured in the iSCSI HBA BIOS (for hardware iSCSI). In the preceding window, the names from the iSCSI HBAs are used. Although a separate igroup was created for the other host node, in practice, all host nodes using the same storage can use the same igroup. In an environment with many host nodes, it is easier to manage a single igroup than to create a new igroup each time you add a host node to the infrastructure.

For FCP-based storage, select FCP. The initiators need to be the WWPN(s) of each HBA. The WWPN, or port name, can be found in /sys/class/fc_host/hostX/port_name on the host nodes, where X is the host ID of the HBA. For more information, see section 9.3, Configure FCP-Based Shared Storage on the Host Nodes.

4. For both FCP and iSCSI, complete the procedure by mapping the LUN to the igroup. As shown in the following window, return to LUNs > Manage and click the No Maps link on the right.
5. On the subsequent window, click Add Groups to Map.
6. On the LUN Map Add Groups window that follows, select the igroups that are to have access and click Add.
7. On the final window, click Apply.

If using iSCSI, the final step is to restrict which network interfaces have access to the iSCSI LUNs. See the following section, Restrict iSCSI Traffic to VLAN. If using FCP, the final step on the NetApp FAS controller is to make sure that the HBAs are configured. See section 5.3, Configure Fibre HBAs on the NetApp FAS Controller.
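The LUN, igroup, and mapping steps above have console equivalents. A sketch with placeholder size, igroup name, and initiator name (the real initiator comes from /etc/iscsi/initiatorname.iscsi, the HBA BIOS, or the WWPN, as described above):

```
ice3170-3a> lun create -s 60g -t linux /vol/kvm_iscsi/kvm_lun0
ice3170-3a> igroup create -i -t linux chzbrgr_ig iqn.2010-09.com.example:chzbrgr
ice3170-3a> lun map /vol/kvm_iscsi/kvm_lun0 chzbrgr_ig 0
```

For FCP, create the igroup with -f and the HBA WWPNs instead of -i and an IQN.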

RESTRICT ISCSI TRAFFIC TO VLAN

For security purposes, restrict which network interfaces have access to the iSCSI LUNs. Use the following procedure.

1. Select LUNs > iSCSI > Manage Interfaces. See the following window for an example of the Manage iSCSI Interfaces window.
2. Check each interface that should not have access.
3. Click Disable. In the following window, only the VLAN created for iSCSI traffic is enabled.

5.3 CONFIGURE FIBRE HBAS ON THE NETAPP FAS CONTROLLER

Use the fcadmin command on the NetApp FAS controller to view and alter the configuration of the onboard fibre HBAs. In Figure 3, adapters 0b and 0d are configured as targets. As targets, they can receive and handle requests related to FCP-based LUNs. (The initiators are attached to the disk shelves.) To configure an onboard fibre HBA as a target, issue the command:

fcadmin config -t target <adapter>

HBA add-on cards are generally preconfigured as target ports.

Figure 3) View and alter the configuration of the onboard fibre HBAs.

The FCP service is then confirmed to be running. If the FCP service is not running, issue the following command on the NetApp FAS controller:

fcp start
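The sequence that Figure 3 shows can be summarized as the following console sketch; the adapter name is the one from the example above:

```
ice3170-3a> fcadmin config                 # view onboard FC adapter roles
ice3170-3a> fcadmin config -t target 0b    # set an adapter to target mode
ice3170-3a> fcp status                     # confirm the FCP service is running
ice3170-3a> fcp start                      # start it if it is not
```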

6 INSTALLATION AND BASE CONFIGURATION OF HOST NODES

This deployment guide provides the procedures for setting up two host nodes that together host a number of guest virtual machines. The two hosts use shared storage to facilitate live migration of virtual machines from one host to the other. You can add hosts as needed.

The host node installations should be very basic and should use the 64-bit version of RHEL 5.4. For security and performance reasons, configure the minimum set of services. The host nodes should also be identical except for naming and IP information; identical configuration should include mountpoints, packages, disk layout, and security settings.

6.1 BIOS CONFIGURATION

To take advantage of the Intel VT or AMD-V virtualization enhancements, you might need to toggle them in the server BIOS. In addition, if the servers are to be configured with Red Hat Cluster Suite, disable ACPI in the BIOS, if possible.

6.2 INSTALLATION OF RED HAT ENTERPRISE LINUX 5.4

All typical means of RHEL installation (CD-ROM, HTTP, FTP, NFS, and PXE) are available; however, a minimal install is preferable. For example, the package and package group listing for the servers used in this document consists of:

@text-internet
device-mapper-multipath

You can choose the KVM-related packages at install time, but for the sake of example, they are installed manually postinstall. If using Kickstart to install the packages, add the following packages to the package list:

kvm
libvirt
libvirt-python
python-virtinst
virt-manager
virt-viewer

If you are not installing graphical packages in the KVM environment, you can omit the packages virt-manager and virt-viewer.

6.3 DISK LAYOUT

The disk layout should follow the needs of the data center, provided that the layout meets Red Hat best practices, for example, providing at least 6GB of space plus the recommended swap. Table 3 provides the basic layouts of the servers used in this document.

Table 3) Basic server layout.

Partition   Size      LVM
/boot       100MB     n/a

/           10,240MB                     VolGroup00/LogVol02
/var        1024MB                       VolGroup00/LogVol00
swap        8192MB (based on Table 4)    VolGroup00/LogVol01

You may deploy any partition layout that provides at least 6GB of root storage and at least the swap space recommended by Red Hat. Table 4 shows the recommended swap space.

Table 4) Swap space recommended by Red Hat.

Amount of Physical RAM   Recommended Swap
4GB or less              At least 2GB
4GB to 16GB              At least 4GB
16GB to 64GB             At least 8GB
64GB to 256GB            At least 16GB
256GB to 512GB           At least 32GB

6.4 REGISTER WITH RED HAT NETWORK

If not already done as part of the base configuration, register the host nodes with Red Hat Network (RHN). Repeat the registration for each host node. This provides proper compliance with Red Hat subscription requirements as well as access to all of the necessary packages. In the following window, rhn_register was run from one of the host nodes. Except for the account login and password, the defaults are chosen.
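The swap sizing in Table 4 (section 6.3) reduces to a simple tiered rule. The helper below is an illustration of that rule, not part of the original guide, and assumes whole-gigabyte RAM sizes.

```shell
# Map physical RAM (GB) to the Red Hat recommended minimum swap (GB),
# per Table 4. Sizes above 512GB reuse the last tier here.
recommended_swap_gb() {
    ram=$1
    if   [ "$ram" -le 4 ];   then echo 2
    elif [ "$ram" -le 16 ];  then echo 4
    elif [ "$ram" -le 64 ];  then echo 8
    elif [ "$ram" -le 256 ]; then echo 16
    else                          echo 32
    fi
}
```

For the 8192MB swap partitions in Table 3, this rule suggests the example hosts carry between 16GB and 64GB of RAM.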

After properly registering the hosts, subscribe them to the virtualization child channel.

1. Log in to RHN. For the remaining steps, refer to the following window.
2. Click the Systems tab.

3. Select the system to be managed.
4. Click the Software tab.
5. Click the Software Channels tab.
6. Check the RHEL Virtualization channel entitlement.
7. Click the Change Subscriptions button at the lower right.

6.5 HOST SECURITY

SELinux, a security enhancement for Linux, is enabled by default. Disable it only if using Red Hat Cluster Suite, in which case it must be disabled. As explained in section 11, Configure GFS2-Based Shared Storage, Red Hat Cluster Suite adds a required layer of data integrity for the LUN-based shared storage needed by GFS2.

The iptables firewall should be enabled and configured to allow the ports shown in Table 5.

Table 5) Service and KVM-related ports.

Port         Protocol   Description
22           TCP        SSH
53           TCP, UDP   DNS
111          TCP, UDP   Portmap
123          UDP        NTP
3260         TCP, UDP   iSCSI (only if using a software iSCSI initiator)
5353         TCP, UDP   mDNS
             TCP        KVM interhost communication
32803, 662   TCP        NFS (only if using NFS; also requires additional configuration)
             TCP        KVM migration
             TCP        Virtual consoles (extend for additional consoles)
67, 68       TCP, UDP   DHCP
n/a          n/a        ICMP
n/a          n/a        Public virtual bridge (see section 6.8, Network Configuration of Host Nodes)

You must perform additional configuration for NFS to use consistent ports. See section 9.1, Configure NFS-Based Shared Storage on the Host Nodes, for instructions on how to perform this configuration.

If you use Red Hat Cluster Suite, GFS, GFS2, or any combination of these, you should also open the ports listed in Table 6. The services listed in Table 6 are all components of Red Hat Cluster Suite, such as the cluster manager, the luci and ricci agents, the distributed lock manager, and the daemon that monitors configuration consistency between host nodes.

Table 6) Cluster-related ports.

Port            Protocol   Description
5404, 5405      UDP        cman
8084            TCP        luci
                TCP        ricci

16851           TCP        modclusterd
                TCP        dlm
                UDP        ccsd
50006, 50008,   TCP        ccsd

Appendix C: Sample Firewall for Host Nodes contains an example of one way to set up the iptables firewall.

6.6 DISABLE UNNECESSARY AND INSECURE SERVICES

You should disable all unnecessary services on the host server. For example, stop and disable services such as avahi-daemon, bluetooth, cups, hplip, and pcscd. Perform this task on all host nodes. In the interest of security, no unnecessary packages should be installed on the host nodes.

SELinux is enabled by default; however, confirm that it is enabled, as shown in Figure 4.

Figure 4) Confirm that SELinux is enabled.

If SELINUX=enforcing is listed, SELinux is enabled.

Note: If using Red Hat Cluster Suite, SELinux must be disabled.

Insecure protocols such as FTP, TFTP, RSH, and telnet should never be used. Secure equivalents such as SSH, SCP, and SFTP should be used instead; these services are provided by the SSH daemon and are enabled by default.

6.7 SECURE REMOTE ACCESS TO HOST NODES (SSH KEYS)

SSH KEY PAIRS AND TLS

Securing access to the host nodes is critical to the security of the virtual environment as well as to secure live migration. SSH and TLS are the two primary methods for securing administrative communication between nodes. This guide covers setting up SSH key pairs. For information on using TLS, see section 20.2 of the Red Hat Enterprise Linux 5 Virtualization Guide.

CREATE SSH KEY PAIRS

Set up SSH keys on every host that is expected to run the Virtual Machine Manager (virt-manager). For example, consider the following two basic scenarios:

There are two or more nodes hosting guests, and there is no remote access server. Node 1 is selected to be the host running virt-manager. An SSH key pair is created on node 1, and the public key is distributed to each of the other nodes, thereby allowing an encrypted means of communicating without the use of a password.
There are two or more host nodes with a remote system that is used to manage the virtual environment. The SSH key pair is created on the remote node, and the public key is then distributed to each of the host nodes.

In either scenario, begin by creating the key pair. Figure 5 shows the key pair being created from a remote administration node.

Figure 5) Create the key pair.

Accept the defaults and leave the passphrase empty. This creates two files:
A private key (id_rsa)
A public key (id_rsa.pub)

DISTRIBUTE THE PUBLIC KEY

1. Copy the public key to each host node. In the following screenshot, the remote administration host taco has the key pair. In the process of copying the file to chzbrgr, the file name is changed to track the key's origin. After copying the public key to host node chzbrgr, configure it on that host node.
2. Log in to the host.
3. Make sure that the .ssh directory has permissions set to 700.
4. Create the authorized_keys file with permissions set to 600.
5. Test the key by logging out of the host node, then logging back in. If SSH does not require a password, the key is working properly, as seen in the following screenshot.

6. Distribute the public key and test it on each host node.

6.8 NETWORK CONFIGURATION OF HOST NODES

As seen in Table 7, the IP network configuration of the host nodes in this example deployment accounts for:
Access to the host node
A private network for NFS
A virtual Ethernet bridge allowing two-way traffic for the virtual guests

In addition, all of the interfaces are configured for redundancy with channel bonds. The two Ethernet devices bonded for the private network can be used for a software-based iSCSI initiator as well. Table 7 contains the specifics of the host node IP configuration.

Table 7) Host node IP configuration.

Interface    Channel Bond   Subnet   Note
eth0, eth1   bond0          /24      Public traffic to host nodes
eth2, eth3   bond1          /24      Private traffic for NFS (or iSCSI)
eth4, eth5   bond2          /24      Public traffic for virtual guests by way of the virtual Ethernet bridge

The host nodes in this deployment guide also have hardware-based iSCSI initiators installed, which are seen by the operating system as eth6 and eth7. Although configuration files for them were created in the network-scripts directory at install time, their configuration is handled at the PCI BIOS layer. See Appendix A: Configure Hardware-Based iSCSI Initiator for more information.

PUBLIC INTERFACE

As noted, all of the connections to the host nodes are made through channel-bonded interfaces for redundancy. In the following examples, the standard configuration files (ifcfg-ethX, ifcfg-bondX) are listed consecutively for brevity. In practice, the lines under ETH0 belong in the ifcfg-eth0 file. Figure 6 shows the proper configuration.

Figure 6) Channel bond for public host node traffic.

Before the bonded interface can be brought up, you must configure the bonding module in /etc/modprobe.conf. Figure 7 shows three alias lines as well as two options lines added to the file. The max_bonds parameter is necessary if more than one bonded interface is to be configured. The second options line is unnecessary if the BONDING_OPTS line is used as in Figure 6.

Figure 7) Configure the bonding module.

After the configuration files and modules are configured, restart networking with the command:

service network restart

Channel bonding mode 1 (active-passive) was chosen for this environment but might not be appropriate for every environment. See Appendix B: Channel Bonding Modes, for a brief description of the different modes available.

PRIVATE INTERFACE
Except for the IP and subnet information, configure the channel bond for the private NFS traffic identically to the public bond. Figure 8 provides an example.
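A hedged sketch of the configuration Figures 6 and 7 capture, using the RHEL 5 conventions; device names follow Table 7, and all addresses are placeholders:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1 with DEVICE=eth1)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10            # placeholder address
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"

# /etc/modprobe.conf additions (Figure 7)
alias bond0 bonding
alias bond1 bonding
alias bond2 bonding
options bonding max_bonds=3
```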

Figure 8) Channel bond for private NFS traffic.

As with the public channel bond, restart networking to test. Alternatively, you can use the ifup command to bring up only the newly configured channel bond. To configure jumbo frames on an interface, add the line MTU=9000 to its ifcfg-ethX file.

CREATE A VIRTUAL ETHERNET BRIDGE
The default virtual bridge in KVM uses NAT to forward traffic from the virtual servers to the outside network. It also allows the virtual servers to communicate with each other, but there is no path back to the virtual servers from the outside network, and the only way to deploy new virtual servers and golden images is from ISO images. To circumvent this limitation, configure an additional bridge that binds to a physical Ethernet NIC or channel bond. This allows two-way traffic to the virtual guests on the public network and opens up the possibility of network installations of guests.

For the virtual servers to have two-way access to the network outside of the host nodes, you need to create at least one virtual Ethernet bridge on each host node. With the exception of the MAC addresses, the bridges must have identical configurations on all nodes.

Like the other interfaces on the host node, the bridge is configured for redundancy by way of channel-bonded interfaces. Figure 9 shows the contents of the relevant configuration files in the /etc/sysconfig/network-scripts directory. Interfaces eth4 and eth5 are first bonded into bond2, then bond2 is configured with a BRIDGE=br0 entry, and finally the bridge (br0) itself is configured.
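A sketch of the configuration Figure 9 captures; the bridge address is a placeholder:

```
# /etc/sysconfig/network-scripts/ifcfg-bond2
DEVICE=bond2
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0                   # hand the bond to the bridge
BONDING_OPTS="mode=1 miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.20            # placeholder address
NETMASK=255.255.255.0
```

To test immediately, bring up the bond and then the bridge (ifup bond2; ifup br0) and check the result with brctl show br0.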

Figure 9) Channel bond for public virtual guest traffic through virtual Ethernet bridge.

To test the bonded bridge immediately, use the ifup command to bring up the channel bond, followed by the bridge. As shown in Figure 10, check the status of the newly created bridge.

Figure 10) Status of the newly created virtual Ethernet bridge.

ALLOW TWO-WAY TRAFFIC FOR VIRTUAL ETHERNET BRIDGES
After configuring the bonded bridge, configure the host node to allow inbound traffic back through the host node. There are two methods for configuring the host node; both are equally acceptable.

Method 1: Use iptables. As shown in Figure 11, add a rule that allows traffic bound for the Ethernet bridge and save the configuration.

Figure 11) Using iptables to allow two-way traffic for virtual Ethernet bridges.

Repeat this configuration on all host nodes.

Method 2: Use kernel tunable parameters in /etc/sysctl.conf. The keys in Figure 12 tell iptables not to filter bridge traffic.

Figure 12) Using kernel tunable parameters in /etc/sysctl.conf to allow two-way traffic for virtual Ethernet bridges.

The three net.bridge parameters listed in Figure 12 need to be disabled (set to 0) so that bridged traffic is forwarded successfully back to the virtual servers. Repeat this configuration on all host nodes. After disabling the keys, run the command shown in Figure 13 to enact the changes.
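A hedged sketch of the two methods; the iptables rule is the commonly used physdev match for bridged traffic and is an assumption about what Figure 11 shows:

```
# Method 1: iptables rule, then save it (Figure 11)
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
service iptables save

# Method 2: /etc/sysctl.conf keys (Figure 12), enacted with `sysctl -p` (Figure 13)
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
```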

Figure 13) Enact the configuration changes.

CONFIGURE NTP
Configure the host nodes to use the Network Time Protocol (NTP) to keep time synchronized. In Figure 14, the time is given an initial sync with the ntpdate command. The ntp.conf file is then backed up, copied, and edited. Finally, NTP is configured to start on boot and started.

Figure 14) Configure host nodes for use with NTP.
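A sketch of the sequence Figure 14 captures; the time server name is an assumption, so substitute your site's time source:

```
ntpdate 0.rhel.pool.ntp.org          # initial one-time sync (server name is an assumption)
cp -p /etc/ntp.conf /etc/ntp.conf.orig
vi /etc/ntp.conf                     # point the server lines at your time sources
chkconfig ntpd on                    # start on boot
service ntpd start
```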

7 CONFIGURE A REMOTE ADMINISTRATION HOST (OPTIONAL)
There are two primary options for performing administration tasks on the virtual environment:

Graphical Virtual Machine Manager tool
Command-line tools provided by libvirt and kvm-qemu-img (virsh, qemu-img, and so on)

Virtual Machine Manager can accomplish many common administration tasks. The libvirt command-line tools provide a detailed interface, while the libvirt API allows for the development of integration and automation. This deployment guide does not cover specific integration.

Strongly consider making use of a remote administration host, whether using the graphical Virtual Machine Manager or the text-mode virsh tool. You can use the remote administration host to manage the host nodes; manage the NetApp FAS controller; and, in the case of GFS2, manage the Red Hat Cluster Suite. Otherwise, one or more host nodes must be configured for these duties. While you can use a RHEL desktop to manage the KVM environment, it is not the best way to manage the cluster or the NetApp FAS controller. The remote administration host provides a more centralized way of managing the different aspects of the virtual environment.

If you are not using the graphical tools, there is no need to install any desktop, X Window System, or virt-manager package on the host nodes. This helps to limit the number of packages installed on the host nodes.

7.1 BASIC REMOTE HOST CONFIGURATION
Basic remote host configuration can be performed on any server or workstation running RHEL 5.4 or later. If you are installing virt-manager, the GNOME desktop environment supplies the GUI. The host must be registered to RHN or to a local Yellowdog Updater, Modified (YUM) repository containing the proper packages.

7.2 CONFIGURE SECURITY
Table 8 lists the ports that should be enabled in iptables on the remote host.

Table 8) Remote host ports to be enabled on the remote host.
Port    Protocol    Description
22      TCP         SSH
53      TCP, UDP    DNS
123     UDP         NTP
N/A     ICMP        ping
8084    TCP         luci (if using Cluster Suite and/or GFS)
11111   TCP         ricci (if using Cluster Suite and/or GFS)

Also enable SELinux.

7.3 CONFIRM NTP IS RUNNING AND STARTS ON BOOT
Refer to the previous section, "Installation and Base Configuration of Host Nodes," for information on how to configure NTP.

7.4 CONFIGURE NFS ACCESS TO THE NETAPP FAS CONTROLLER
If you are using the remote administration host to manage the NetApp FAS controller, the controller's NFS exports must be amended to provide access to the administration host.

From the Web console of the NetApp FAS controller, select NFS > Manage Exports, as seen in Figure 15. View the Path column for /vol/vol0. This volume contains all of the configuration files for the Data ONTAP operating system. In the corresponding Options column, Read-Write and Root access should be configured for the IP address of the remote administration host.

Figure 15) Manage NFS export.

After the access is granted, create a mountpoint on the remote administration host, as shown in Figure 16.

Figure 16) Create a mountpoint on the remote administration host.

7.5 INSTALL THE PACKAGES NEEDED TO ADMINISTER KVM REMOTELY
Note: Install virt-manager only if using the graphical Virtual Machine Manager.

To install libvirt and its associated command-line tools, use the following command:

# yum install libvirt

To include the graphical Virtual Machine Manager, append virt-manager to the command:

# yum install libvirt virt-manager

A number of dependencies will also be installed.

7.6 CONFIGURE THE SSH KEY PAIR
See section 6.7, Secure Remote Access to Host Nodes (SSH Keys), for the configuration of SSH key pairs. You must distribute the public key for the remote administration host to each of the host nodes.

If you are using the remote administration host to manage the NetApp FAS controller, you must create an additional SSH key pair. Creation of this additional SSH key pair is described in section 6.7, Secure Remote Access to Host Nodes (SSH Keys); however, there are two important differences:

The key pair for the NetApp FAS controller must be of the type dsa instead of rsa.
The DSA public key must be distributed to the NetApp FAS controller.

After you configure the second SSH key pair, use the NFS mount of /vol/vol0 as described in section 7.4, Configure NFS Access to the NetApp FAS Controller, to copy the public key to the appropriate file on the NetApp FAS controller. In Figure 17, vol0 is NFS mounted under /na_fas on a remote administration host. The /etc/sshd directory on the storage controller was created automatically, but the root/.ssh subdirectories needed to be added for this process, as did the authorized_keys file.

Figure 17) NFS mount vol0 under /na_fas.

The public key is then appended to the authorized_keys file.

8 INSTALL AND CONFIGURE KVM
Before continuing, be sure that the host nodes are registered with Red Hat Network (including subscription to the virtualization child channel), as described in section 6.4, Register with Red Hat Network. If the host nodes are being deployed in a secure environment that does not have access to the Internet, you need to deploy and configure a YUM repository. The Red Hat Enterprise Linux 5 Deployment Guide contains additional information on repositories.

8.1 INSTALL THE REQUIRED PACKAGES
Run the command shown in Figure 18 to install the KVM-related packages.

Figure 18) Install the KVM-related packages.

Note: You can also install virt-viewer. It is optional but adds functionality in a graphical environment.

Many packages are necessary to run KVM, but because YUM identifies and retrieves packages based on dependencies, all required packages are installed. Currently, running the yum install kvm command installs 24 packages.

After installing the KVM-related packages, start libvirtd, as shown in Figure 19, and make sure that it starts automatically on boot.

Figure 19) Start libvirtd.

Install the KVM-related packages on each host node.

9 SHARED STORAGE
Shared storage is the keystone of a flexible and scalable virtual environment such as KVM. The ability to migrate a virtual server from one host node to another, without downtime, requires that both host nodes see the same storage in the same manner. The primary media for shared storage in a KVM virtual environment are NFS, iSCSI, and FCP. The following subsections discuss each primary medium.

Note that a KVM and NetApp infrastructure supports multiple environments and that all three storage media can be used simultaneously, but each environment should use only one. For example, assume a cluster of 5 host nodes using NFS and a cluster of 10 host nodes using FCP. The shared storage for both environments can be maintained on the same NetApp FAS controller. However, the servers in the same cluster have to use the same shared storage medium. If additional host nodes are added to the first cluster of 5, the new host nodes must also use NFS.

9.1 CONFIGURE NFS-BASED SHARED STORAGE ON THE HOST NODES
NFS-based shared storage is very straightforward in a KVM and NetApp virtual environment. It typically involves the following tasks:

Configure private bonded interfaces for NFS traffic
Specify predictable NFS client ports
Mount the NFS export
Tune the number of concurrent I/Os
Configure SELinux

CONFIGURE PRIVATE BONDED INTERFACES FOR NFS TRAFFIC
If the private bonded interfaces for NFS traffic have not yet been configured, refer to section 6.8, Network Configuration of Host Nodes. In this KVM environment, bond1 was set up on both host nodes on a private /24 network. This corresponds to a VIF on the NetApp storage that is tied to VLAN 3027. The switches between the host nodes and the NetApp storage have been configured to deliver traffic on VLAN 3027 to the VIF designated for private NFS traffic.

SPECIFY PREDICTABLE NFS CLIENT PORTS
In /etc/sysconfig/nfs, uncomment LOCKD_TCPPORT=32803 and STATD_PORT=662, as shown in Figure 20.
This forces NFS to use those ports instead of random default ports, allowing iptables to lock down the ports.

Figure 20) Specify predictable NFS client ports.

After making sure that the proper ports are opened in the firewall and editing /etc/sysconfig/nfs, restart the host node. This makes certain that the newly configured NFS ports are in use. Section 6.5, Host Security, provides more information on the firewall ports to be opened on the host nodes in the KVM environment.

MOUNT THE NFS EXPORT
The NFS export must be created on the NetApp FAS controller before moving forward.
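A hedged sketch of the port pinning (Figure 20), the fstab entry (Figure 21), and the concurrent I/O tuning (Figure 22). The controller address, export path, and the sunrpc key name are assumptions; the key shown is the standard Linux tunable for the TCP RPC slot table.

```
# /etc/sysconfig/nfs (uncomment these two lines, as in Figure 20)
LOCKD_TCPPORT=32803
STATD_PORT=662

# /etc/fstab entry (controller address and export path are placeholders)
192.0.2.50:/vol/kvm_images  /var/lib/libvirt/images  nfs  defaults,_netdev  0 0

# Raise the concurrent I/O limit to 128:
#   on the live system:  sysctl -w sunrpc.tcp_slot_table_entries=128
# and make it permanent in /etc/sysctl.conf:
sunrpc.tcp_slot_table_entries = 128
```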

Figure 21 shows the NFS mount entry in /etc/fstab. Note the use of the _netdev option, which makes sure that the NFS mount is not attempted until networking is up and running. Test the NFS mount entry in /etc/fstab with the mount -a command; then run mount to list the newly mounted file system.

Figure 21) Test the NFS mount entry.

Running mount -a mounts anything listed in /etc/fstab that is not already mounted. A lack of output from the command indicates that there are no errors. Then run the nfsstat -m command to illustrate the mount options included when using the defaults mount option.

TUNE THE NUMBER OF CONCURRENT I/OS
The default number of concurrent I/Os to be submitted to the NetApp FAS controller is 16. The relevant key in /etc/sysctl.conf must be reassigned a value of 128. Figure 22 shows the key changed on the live system and then made permanent by appending it to the /etc/sysctl.conf file.

Figure 22) Making the key permanent.

Note: There is also a key in /etc/sysctl.conf for the corresponding User Datagram Protocol (UDP) slot table, but it can be ignored because NFS runs only over Transmission Control Protocol (TCP) in this environment.

Next, create any desired subdirectories under /var/lib/libvirt/images and upload any ISO images. For instance, some environments might need separate subdirectories for each operating system, release, ISO image, and golden image.

CONFIGURE SELINUX
It is very important to configure the images directory and all of its contents for use with SELinux. If done improperly, the KVM virtual servers will fail to operate. Section 12, SELinux Considerations, contains details on configuring the images directory.

9.2 CONFIGURE ISCSI-BASED SHARED STORAGE ON THE HOST NODES
The process of configuring iSCSI-based shared storage includes four major steps:

Configuring an iSCSI initiator
Rescanning or discovery on the SCSI bus
Configuring multipathing
Configuring GFS2

For iSCSI, there is the concept of the initiator and the target. The initiator is the hardware or software device that makes requests of the storage, which is known as the target. The KVM and NetApp environment set up for this deployment guide made use of QLogic dual-port iSCSI HBAs (hardware); however, instructions for both hardware- and software-based initiators are included in the guide. This section contains instructions for the Red Hat supplied software initiator. Appendix A: Configure Hardware-Based iSCSI Initiator, contains instructions for the hardware initiator.

The iscsi-initiator-utils package is required to configure the software-based iSCSI initiator. Configure a separate VLAN on the NetApp FAS controller for iSCSI traffic, as well as a private interface or bond on the host nodes, prior to configuring the software-based iSCSI initiator. Then complete the following procedure.

1. Identify the initiator name. The initiator name needs to be entered as part of an igroup on the NetApp FAS controller. As shown in the following screenshot, the initiator name is created automatically when the iscsi-initiator-utils package is installed.
2. Be sure that the private VLAN on the NetApp FAS controller that handles the iSCSI traffic can be pinged from the host.
3. Use the iscsiadm command to discover the iSCSI target. The IP addresses for the private iSCSI network exist on a private /24 network on a dedicated VLAN.
4. After the discovery process returns the target address (in the following screenshot, an iqn...com.netapp:sn... name), restart the iscsi service. This information is saved automatically.
5. Confirm that you can see the new partitions. In the following screenshot, two devices are seen because there are two paths to the same device.

6. Repeat this procedure exactly on each host node. It is also imperative that each host node assigns the same device names. In this KVM and NetApp environment, as seen in the preceding screenshot, each host node sees the devices as sda and sdb.

There are two final configurations:

Configure multipathing
Configure GFS2

These configurations are covered in separate sections because they also apply to FCP-based shared storage. Section 10 covers multipathing configuration, and section 11 covers GFS2 configuration.

DISK ALIGNMENT FOR ISCSI SHARED STORAGE
The initial requirements for aligning an iSCSI-based LUN are satisfied by properly configuring the LUN and igroup when you first create the LUN. This includes the proper operating system and type. The remaining requirements to properly align the iSCSI-based shared storage are satisfied by following the instructions in section 11.5, Configure GFS2. The steps to properly align a raw disk image are included in section 13, Create a Golden Image or Template.

9.3 CONFIGURE FCP-BASED SHARED STORAGE ON THE HOST NODES
The configuration of the fibre HBAs is done automatically at install time and requires little additional configuration. The major steps are directed more toward information gathering than configuration. The major steps include:

Capture the HBA host IDs
Capture the port names of the HBAs
Rediscover the fabric

Note: Other steps, which are outside the scope of this deployment guide, involve proper zoning in the fabric by way of the fibre switch.

1. Determine how the operating system defines the HBAs. At this point, it is important to know the make and model of all HBAs installed on the host nodes. For example, the host nodes in this KVM and NetApp environment have both iSCSI and FCP HBAs installed, all made by QLogic. The following screenshot, based on the models listed, shows that the first two HBAs are iSCSI and that the second two are the fibre HBAs.

2. Determine the port names to configure the igroup on the NetApp FAS controller as well as the zoning on the fabric switch. The following screenshot shows the command used to identify the port names.
3. After the LUN and igroup are configured and mapped on the NetApp FAS controller, rediscover the fabric. In the following screenshot, devices sdc through sdf are actually the same device.

Because of the redundant paths, as well as how the device was zoned on the fabric switch, it shows up as four devices. As a result, the next task is to configure multipathing. Section 10, Configure Multipathing on the Host Nodes, explains multipathing on the host nodes.

DISK ALIGNMENT FOR FCP SHARED STORAGE
The initial requirements for aligning an FCP-based LUN are satisfied by properly configuring the LUN and igroup when you first create the LUN. This includes the proper operating system and type. The remaining requirements to properly align the FCP-based shared storage are satisfied by following the instructions in section 11.5, Configure GFS2. The steps to properly align a raw disk image are included in section 13, Create a Golden Image or Template.
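The information-gathering steps in this section can be sketched with the standard sysfs paths; the host number in the rescan line is an example and varies by system:

```
ls /sys/class/fc_host/                             # one entry per fibre HBA port (host IDs)
cat /sys/class/fc_host/host*/port_name             # WWPNs for the igroup and fabric zoning
echo "- - -" > /sys/class/scsi_host/host2/scan     # rescan one HBA (host2 is an example)
```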

10 CONFIGURE MULTIPATHING ON THE HOST NODES
Redundant paths running to the shared storage require multipathing software to manage the paths. Without multipathing software, the operating system does not recognize that the device at the end of each path is the same device. Red Hat Device Mapper Multipath is included with Red Hat Enterprise Linux and is configured in the following procedure to manage the multiple storage paths in preparation for use with GFS2. The configuration for multipathing is identical for iSCSI and FCP.

1. Confirm that the multipath package is installed.
2. If the package needs to be installed, use the command shown in the following screenshot.
3. Edit the /etc/multipath.conf file to comment out the blacklist block near the top of the file.
4. Load the multipath module. In the following screenshot, the configuration is completed by starting the service, running the multipath -v2 command to configure the paths, and running multipath -ll to list the configured paths. Device mpath0 was created. This device will be used by GFS2 to complete the configuration of the LUN-based iSCSI or FCP shared storage.

5. Configure multipath to start automatically at boot time. The following screenshot shows the multipath configuration for the FCP LUN that was created earlier with four paths. The multipath device mpath1 was created with the four devices.

The Red Hat Enterprise Linux 5 DM Multipath guide provides additional information on multipath. Continue to section 11, Configure GFS2-Based Shared Storage.
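The multipath procedure above can be sketched as the following command sequence:

```
rpm -q device-mapper-multipath || yum install device-mapper-multipath
vi /etc/multipath.conf            # comment out the default blacklist block
modprobe dm_multipath             # load the multipath module
service multipathd start
multipath -v2                     # build the multipath maps
multipath -ll                     # list the configured paths (mpath0, mpath1, ...)
chkconfig multipathd on           # start automatically at boot
```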

11 CONFIGURE GFS2-BASED SHARED STORAGE
Note: SELinux is not supported for use with Red Hat Cluster Suite in RHEL 5.4 and must be disabled prior to configuration of the cluster or GFS2. SELinux will be supported in RHEL 6, which is scheduled to be released in late 2010.

A clustered file system is required to provide the file locking needed to prevent data corruption when two or more hosts have read and write access to the same LUN-based file system. Red Hat Enterprise Linux 5.4 AP ships with both GFS and GFS2. While both satisfy the clustered file system requirement, this deployment guide covers only the GFS2 configuration.

Prior to the GFS2 configuration, set up a basic cluster that includes all of the host nodes that will access the same shared storage device. Red Hat Cluster Suite, also included in RHEL 5.4 AP, provides a configurable means of providing high availability to various services. However, for the purposes of the shared storage, the configuration is quite basic; it is enough to fence a host node properly, as well as satisfy the cluster requirement of the GFS2 file system.

Fencing refers to the process by which one host node forces another host node out of the cluster (reboot) or triggers an action that cuts data access to another host node. This happens when a host node is hung or otherwise fails to respond to a heartbeat. When fenced, the hung host node is forced to release any locks on the file system and is prevented from writing dirty, old, or corrupted data. In the case of a fencing action that forces a reboot, the offending host node rejoins the cluster when it comes back up. The Red Hat Enterprise Linux 5 Cluster Administration guide provides more information on fencing.
Note: Although it is not documented in this deployment guide, Red Hat Cluster Suite also supports configuring virtual systems as cluster nodes to provide high availability to the virtual environment.

11.1 CONFIGURE THE HOST NODES
Subscribe the host nodes to the cluster and cluster storage child channels in Red Hat Network. Section 6.4, Register with Red Hat Network, contains additional information on adding subscriptions. These child channels contain the GFS2 and Red Hat Cluster Suite packages required to continue host node configuration. After updating the channel subscriptions, install ricci on all host nodes, as shown in Figure 23.

Figure 23) Install ricci.

Installing the ricci package also installs several dependencies, as shown in Figure 24.

Figure 24) Installing dependencies.

11.2 CONFIGURE THE CLUSTER MANAGER
If using a remote administration host, install the luci package on that host, as shown in Figure 25. If not using a remote administration host, choose a host node that will also serve as the management node and install luci on that host node.

Figure 25) Install luci.

Initialize luci. This triggers a password prompt for the interface, as shown in Figure 26.

Figure 26) Initialize luci.

11.3 CONFIGURE THE CLUSTER
Begin configuring the cluster after restarting luci.

1. To complete the cluster configuration, open a Web browser and go to https://name_of_luci_host:8084, where name_of_luci_host is the hostname or IP address of the host running luci.
2. Log in to Red Hat cluster and storage systems.
3. Click the Cluster tab, then click Create a New Cluster, as shown in the following window.

4. Populate the fields with the appropriate host and password information and click Submit. This creates the cluster and automatically installs the required cluster packages on the host nodes. If there are errors, be sure that the ricci service is started on the host nodes. The following window appears while the cluster is being built.

Next, create and configure fencing devices.

11.4 CREATE FENCING DEVICES
When a failure occurs, a fencing device enables the cluster to remove a node in order to prevent data corruption. The cluster in this deployment guide uses the management card included in each host. If you have not already done so, review the Red Hat Enterprise Linux 5 Cluster Administration guide for information on other supported fencing devices.

1. Click the Cluster tab; then click Nodes > Configure, as shown in the following window.
2. Click the first host node.
3. Under Main Fencing Method, click Add a fence device to this level.

4. Enter the information as appropriate on the Failover Domain Membership window. The following window shows the HP iLO management card being set up to power the host on and off as required.

5. Disable ACPI with the following command:

chkconfig acpid off; service acpid stop

This keeps ACPI from interfering with the fencing of a node.
6. Repeat this procedure for each host node prior to configuring GFS2.

11.5 CONFIGURE GFS2

CREATE THE GFS2 FILE SYSTEM USING LVM ON THE ENTIRE LUN
1. Make sure that the GFS2 kernel module, GFS2 utilities, and clustered LVM packages are installed on each host node.
2. Start the clustered LVM service and configure it to start automatically at boot time. Clustered LVM uses the same commands and options as LVM; it is simply a cluster-aware version. It does not conflict with the existing LVM package or configuration.
3. Initialize the multipath device for use with LVM and then create a volume group. Run the pvdisplay command to capture the number of free physical extents. This number, highlighted in the following screenshot, is used in the next step.
Note: In the following screenshot, the mpath0 multipath device, which was created in section 10, Configure Multipathing on the Host Nodes, is used for creating the GFS2 file system.
4. Using all available free physical extents, create a logical volume, or partition, within the volume group.

5. Scan the volumes from the other host nodes. Because LVM is running in clustered mode, the volume group and logical volume need to be created from only one node.
6. Create one journal for each host node; however, you can create additional journals if more host nodes need to be added later. The locking protocol, lock_dlm, is required for use in a cluster. In the following window, the locking table is specific to the cluster in this example. The locking table shown, sharstor:kvm_data, includes the cluster name plus an arbitrary name for the file system. In this example, the cluster name, which was created previously, is sharstor; the arbitrary file system name is kvm_data. There are four journals created, and the file system is created on the lv-kvm logical volume. Like the clustered LVM volumes, the GFS2 file system is created on one host node only.

7. Create an entry in /etc/fstab and test the entry. Note that the options are noatime and _netdev. The noatime option improves performance, and the _netdev option prevents mounting the file system until networking is up.
8. Execute the /etc/fstab entry and mount it on all host nodes.

The References section provides links to documents containing additional information on Red Hat Cluster Suite and GFS2.

If LVM is used either on the entire LUN or on a partition that is properly offset, the LUN itself will be aligned properly. The default sizes for LVM physical extents and the GFS2 block size are both evenly divisible by 8. The Best Practices for File System Alignment in Virtual Environments guide provides a full explanation of disk alignment.

USE LVM ON A PARTITION WITH THE PROPER OFFSET
The preceding procedure explained how to create the GFS2 file system using LVM on the entire LUN. This procedure explains how to use LVM on a partition with the proper offset. The following screenshot shows the commands used and the system output.

1. Run the parted command to create a disk label on mpath1.
2. Create a single partition starting at sector 64 and ending when it runs out of space. This results in the creation of the partition mpath1p1.
3. Give the partition the type LVM, and run the LVM-related commands against the new partition.
4. Run the fdisk command to show the details of the newly created partition.
5. Run the pvcreate command to label the disk.
6. Run the vgcreate command to create the volume group.

All other commands are the same.
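A hedged sketch of the offset-partition variant; the exact parted syntax varies by version, and the volume group name is an example:

```
parted /dev/mapper/mpath1 mklabel msdos                 # step 1: disk label
parted /dev/mapper/mpath1 unit s mkpart primary 64 -- -1  # step 2: start at sector 64 (syntax varies by parted version)
parted /dev/mapper/mpath1 set 1 lvm on                  # step 3: LVM partition type
fdisk -l /dev/mapper/mpath1                             # step 4: verify the offset
pvcreate /dev/mapper/mpath1p1                           # step 5: label the partition
vgcreate vg_kvm /dev/mapper/mpath1p1                    # step 6: vg name is an example
```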

Each procedure performs equally well, whether using the entire disk or creating a partition first; however, creating a partition first requires a few more commands, and the first 64 sectors remain unused.
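Pulling the preceding whole-LUN procedure (steps 1 through 8) together as a command sketch; the cluster name sharstor, locking table kvm_data, journal count, and logical volume lv-kvm come from this guide's example, while the volume group name and mount point are assumptions:

```
service clvmd start && chkconfig clvmd on       # clustered LVM service (all nodes)
pvcreate /dev/mapper/mpath0                     # initialize the multipath device (one node only)
vgcreate vg_kvm /dev/mapper/mpath0              # vg name is an example
pvdisplay /dev/mapper/mpath0                    # note the free PE count
lvcreate -l <free_PE_count> -n lv-kvm vg_kvm    # use all free extents
mkfs.gfs2 -p lock_dlm -t sharstor:kvm_data -j 4 /dev/vg_kvm/lv-kvm

# /etc/fstab entry (all nodes):
#   /dev/vg_kvm/lv-kvm  /var/lib/libvirt/images  gfs2  noatime,_netdev  0 0
```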

12 SELINUX CONSIDERATIONS
Note: SELinux must be disabled if using Cluster Suite in RHEL 5.4. SELinux and Red Hat Cluster Suite will be supported for use together in late 2010, when RHEL 6 is scheduled for release.

This section explains how to make sure that KVM works properly with SELinux for NFS-based shared storage. SELinux is a key layer in securing the host nodes.

In Figure 27, three subdirectories have been created in the images directory. Other environments might have more or fewer directories or might even have a nondefault location for the images directory. The instructions in this section apply to all of these use cases. Configure the security context for the subdirectories. If the images directory is moved to a nondefault location, it too needs to be configured so that KVM is able to use its contents.

In this scenario, the ls -Z command lists the directories' SELinux context prior to having their context corrected. The semanage command is used first to update the images directory, to which it replies that it is already defined. (Normally, you do not need to run the semanage command on the default directory because it already has the proper security context. In this example, it was run to show what to do if a nondefault directory is used.) Finally, the restorecon command is used recursively to update the SELinux context on everything under the images directory.

Figure 27) Configure the security context for the subdirectories.

In Figure 28, the newly updated SELinux context (virt_image_t) is listed. An empty file is then created to show that created files will inherit the proper context.

Figure 28) Newly created files inheriting the proper context.
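The sequence Figures 27 and 28 capture can be sketched as follows; the default images path is shown, so substitute a nondefault path as needed:

```
ls -Z /var/lib/libvirt/images                     # inspect the current context
semanage fcontext -a -t virt_image_t "/var/lib/libvirt/images(/.*)?"
restorecon -R -v /var/lib/libvirt/images          # apply the context recursively
ls -Z /var/lib/libvirt/images                     # entries now carry virt_image_t
```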

KVM will now work properly with SELinux for NFS-based shared storage.

13 CREATE A GOLDEN IMAGE OR TEMPLATE
Creating a base system to work from makes good sense for many reasons, including:

Automation
Consistency
Predictability
Faster deployment of virtual servers

After creating a template or golden image, you can clone virtual servers in a fraction of the time. In this deployment guide, the words template and golden image are used interchangeably. Creating a template image follows this general process:

1. Choose an operating system.
2. Determine sizing, layout, and package requirements.
3. Make an ISO image, DVD, or PXE environment available to a host node.
4. Create and align a disk image.
5. Build a virtual server using the disk image.
6. Install the operating system.
7. Reboot the virtual server and make it generic.
8. Shut down the virtual server.
9. Using the template, make one or more clones.

Templates are created based on the different types of servers being deployed in the environment. In this deployment guide, the template is created for a very basic Web server based on RHEL 5.4.

13.1 CREATE AND ALIGN A DISK IMAGE FOR VIRTUAL GUESTS
In the KVM virtual environment, you can create a disk image automatically during the process of creating a virtual server, or you can create a disk image as a separate process. In this deployment guide, the disk image is created as a separate process using the qemu-img command.

As documented in NetApp Best Practices for File System Alignment in Virtual Environments, it is necessary to properly align each layer of storage between the virtual server and the underlying storage. In the context of a KVM disk image, this means that each partition needs to start at a sector number that is cleanly divisible by 8. For example, the legacy default starting sector is 63, but an aligned first partition starts at 64 or 128.
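The divisible-by-8 rule is pure arithmetic: with 512-byte sectors, a starting sector that is a multiple of 8 lands on a 4096-byte boundary. A quick check:

```shell
# A starting sector is aligned when (sector * 512) is a multiple of 4096,
# which is equivalent to the sector number being divisible by 8.
for sector in 63 64 128; do
  if [ $(( sector % 8 )) -eq 0 ]; then
    echo "sector $sector: aligned"
  else
    echo "sector $sector: misaligned"
  fi
done
```

This is why the legacy default of 63 misaligns every block on the underlying storage, while 64 and 128 do not.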
For the purposes of this deployment guide, a simple script was written to automate the creation of a disk image, create the virtual server, and then use the virtual Ethernet bridge to access a Kickstart server. Figure 29 shows the script that was used.
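In outline, such a script might look like the following dry-run sketch (RUN=echo prints the commands instead of executing them; the bridge name and Kickstart URLs are assumptions, not the values from Figure 29):

```shell
#!/bin/sh
# Dry-run sketch of a build script such as build_me.sh.
# RUN=echo prints the commands; remove it to execute them for real.
RUN=echo
NAME=${1:-dbserver01}
IMG=/var/lib/libvirt/images/${NAME}.img

$RUN qemu-img create -f raw "$IMG" 8G        # 8GB raw (sparse) disk image
$RUN virt-install --name "$NAME" --ram 1024 --hvm --vnc \
     --disk path="$IMG" --network bridge:br0 \
     --location http://ks.example.com/rhel54 \
     --extra-args "ks=http://ks.example.com/aligned.cfg"
```
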

Figure 29) Script to automate creating a disk image, creating the virtual server, and using the virtual Ethernet bridge to access a Kickstart server.

The two primary commands are qemu-img and virt-install. The qemu-img command in this script creates a raw disk image 8GB in size. The new image is named when the script is run. For example, if using the script in Figure 29, ./build_me.sh dbserver01 creates a new virtual server named dbserver01.

The virt-install command in Figure 29 specifies that a virtual server is to be created on the local host using KVM (hvm), the virtual bridge, and 1GB of memory, and is pointed to a Kickstart server. Note that --file-size=8 is not actually needed here because the qemu-img command was used earlier in the script; that option automatically creates the disk image when the virtual server is created.

Appendix E: Sample Kickstart File for a Properly Aligned Virtual Server, lists the referenced Kickstart file in its entirety. It is very basic in that it specifies only a few packages and a basic disk layout. However, two sections of the Kickstart file are very important as they relate to disk alignment:
- The %pre section
- The disk layout section

Figure 30 shows the %pre section, which executes prior to the rest of the install. The parted tool creates two partitions on the disk image, which the virtual server sees as device /dev/hda. Each partition starts on a sector that is cleanly divisible by 8.

Figure 30) Kickstart file, %pre section.
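A %pre section along the lines of Figure 30 might look like this sketch (the sector numbers are assumptions chosen so that both partition starts are divisible by 8):

```shell
%pre
# Create an aligned label: /boot on hda1, the remainder for LVM on hda2.
parted -s /dev/hda mklabel msdos
parted -s /dev/hda mkpart primary ext3 64s 208063s   # /boot; starts at sector 64
parted -s /dev/hda mkpart primary 208064s 100%       # LVM PV; 208064 % 8 == 0
```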

Figure 31 shows the disk layout section. Note the two lines that start with the keyword part. They place the /boot directory and LVM, respectively, on the partitions created in the %pre section, thereby preserving the properly aligned partitions. Any remaining partitions are created within the LVM-managed partition. Do not use the clearpart directive in this Kickstart scenario because it wipes out the partitions created in the %pre section.

Figure 31) Kickstart file, disk layout section.

For more information on Kickstart, see the Red Hat Enterprise Linux 5 Deployment Guide.

Note: Creating a golden image does not require Kickstart; however, Kickstart automates the alignment process. DVD or ISO images are the default means for creating virtual servers. The use of a virtual bridge opens the possibility of using Kickstart and PXE for installation.

Figure 32 shows the build_me.sh script launched to create a golden image for a RHEL 5.4 Web server. The Virt Viewer window opens automatically because the virt-install command in the build_me.sh script called for VNC.

Figure 32) Create a golden image.

The virtual server rhel54_web_gimage is created.

13.2 PREPARE THE GOLDEN IMAGE FOR CLONING

Next, configure the newly created virtual server to be a golden image. This involves adding any third-party software, configuring extra security settings, disabling unnecessary services, and anything else that contributes to the automation of cloning the golden image or template. At a minimum, the hostname, IP addresses, and MAC addresses for any network interfaces need to be removed. The ultimate goal is the ability to clone servers on demand that require little or no interaction to put them into production.

In this particular scenario, only the hostname, IP address, and MAC address are unconfigured. A MAC address can be assigned later, during the cloning process.
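On a RHEL 5 guest, unconfiguring the hostname, IP address, and MAC address comes down to editing two files. The following sketch shows the idea against scratch copies of those files so it is safe to run anywhere; inside the guest, the real paths are /etc/sysconfig/network and /etc/sysconfig/network-scripts/ifcfg-eth0, and the sample values are invented.

```shell
# Work on scratch copies of the two files a RHEL 5 guest stores its identity in.
mkdir -p scratch
printf 'DEVICE=eth0\nHWADDR=00:16:3e:aa:bb:cc\nBOOTPROTO=static\nIPADDR=10.0.0.5\nONBOOT=yes\n' \
  > scratch/ifcfg-eth0
printf 'NETWORKING=yes\nHOSTNAME=websrv01.example.com\n' > scratch/network

# Drop the MAC and IP bindings and fall back to DHCP.
sed -i '/^HWADDR=/d; /^IPADDR=/d; s/^BOOTPROTO=.*/BOOTPROTO=dhcp/' scratch/ifcfg-eth0
# Reset the hostname to a generic value.
sed -i 's/^HOSTNAME=.*/HOSTNAME=localhost.localdomain/' scratch/network
```
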

Figure 33) Generic network configuration.

Shut down the golden image once all of its software is configured and it has been made generic. The golden image is not meant to be a running server; it is only a template to be cloned.

Also note that both raw disk images and cloned raw disk images are thin by default. That is, while a raw disk image might be 8GB in size, it might take up only 2GB of space. Figure 34 shows this situation: the raw disk image webserv01.img was created as 8GB in size, but the leftmost column shows that it takes up only 1.2GB of space. As more data is stored on the disk image, that number grows.

Figure 34) Raw disk image size.

Section 16, Configure Data Resiliency and Efficiency, illustrates how to take advantage of this thin provisioning on the NetApp FAS controller.
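This thin behavior is ordinary filesystem sparseness, and it can be demonstrated without KVM at all; truncate creates a sparse file the same way qemu-img create -f raw does.

```shell
# A sparse file: 1GB apparent size, almost no blocks actually allocated.
truncate -s 1G thin_demo.img
ls -lh thin_demo.img   # apparent size: 1.0G
du -k thin_demo.img    # blocks used: close to 0 until data is written
```
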

14 CLONE VIRTUAL SERVERS

Creating a golden image or template produces two different types of files. The raw disk image provides a logical abstraction that the virtual server sees as a physical disk. In addition, an XML file, which stores the metadata for the virtual server, is created in the /etc/libvirt/qemu directory. The metadata includes where to locate the disk image, how the virtual server is connected to the network, and all of the hardware resources it has. During cloning, both the original disk image and the XML file are referenced to create the new virtual server.

The virt-clone command is used to clone a template. Figure 35 shows a rudimentary script, based on virt-clone, that automates the process.

Figure 35) Script created to automate the cloning process.

The virt-clone command is the core of the script. The virsh command is used to start the newly cloned server. These commands, in addition to a virtual server to clone, are all that is needed to create a new server based on a golden image. You can also specify a predetermined MAC address as an option to the virt-clone command.

Figure 36 shows the script being called to clone the template rhel54_web_gimage. The newly cloned virtual server is websrv01.
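A clone script along these lines reduces to two commands. A dry-run sketch (RUN=echo prints the commands instead of executing them; the template and guest names are the ones used in this guide, the image path is an assumption):

```shell
#!/bin/sh
# Dry-run sketch of a clone script based on virt-clone.
RUN=echo
TEMPLATE=rhel54_web_gimage
NEW=${1:-websrv01}

$RUN virt-clone --original "$TEMPLATE" --name "$NEW" \
     --file /var/lib/libvirt/images/${NEW}.img
$RUN virsh start "$NEW"
# To pin a predetermined MAC address, add --mac <address> to virt-clone.
```
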

Figure 36) Cloning a template.

Note that the automatically assigned MAC address is listed as part of the script output. In this example, it is assigned by the virt-clone command, but it could also have been specified in the script or as an option to the virt-clone command.

In addition, note the use of the Virtual Machine Manager, or virt-manager. This is a graphical tool that can be used interactively to monitor and manage the KVM virtual environment.

15 LIVE MIGRATION OF VIRTUAL SERVERS

The ability to move a virtual server from one physical server to another is a cornerstone of any virtual environment, not just KVM. Live migration on KVM and NetApp is straightforward. The main requirement is that the source and target host nodes have access to the same shared storage. If you have not already created and distributed the SSH keys, stop and perform this task now.

In the KVM and NetApp environment depicted in this deployment guide, a remote administration host is used to perform many of the tasks rather than performing them directly from one of the host nodes. Therefore, the remote host, taco, has its public SSH key distributed to the host nodes chzbrgr and hmbrgr.

You can initiate the live migration from the command line or from the Virtual Machine Manager. Both methods are illustrated in this deployment guide. Regardless of the method, if the virtual server has never run on the target host node, the disk image and the XML file are both copied over automatically.

15.1 LIVE MIGRATION USING VIRTUAL MACHINE MANAGER

1. Run the virt-manager & command from the console to launch the Virtual Machine Manager. The first task is to establish a connection between the remote administration host and the host nodes.
2. Select File > Add Connection, as shown in the following window. The Add Connection dialog box opens.
3. As shown in the following window, select QEMU from the Hypervisor drop-down menu and Remote tunnel over SSH from the Connection drop-down menu. Also, enter the hostname of the host node to connect to.
4. Repeat the last two steps for each host node.

Note: If not using a remote administration host, choose a host node to host the Virtual Machine Manager in addition to virtual guests.

5. After you add all of the host nodes, right-click the virtual server that needs to be migrated.
6. Select Migrate > <destination host>. In the following window, only one other host node is configured; therefore, there is only one choice of destination.

Note: You can configure the Virtual Machine Manager to connect to all host nodes; however, the Virtual Machine Manager is not aware of which nodes share the same storage and which do not. The migration fails when attempting to migrate a virtual server between physical servers that do not share the same storage.

The following window shows that virtual server websrv01 has been migrated successfully from host node chzbrgr to host node hmbrgr. The Virtual Machine Manager lists all virtual servers, regardless of whether they are active, shut off, or paused.

15.2 LIVE MIGRATION FROM COMMAND LINE

Initiating a live migration from the command line involves running the virsh command. Figure 37 shows the virsh command being run from a remote administration host. The first command lists the running virtual servers on host node hmbrgr. The second command runs virsh migrate to initiate the migration from one of the host nodes. The third command shows that the virtual server is no longer running on the source host. The final command confirms that the virtual server was migrated successfully.

Figure 37) Initiating a live migration.

Note: If initiating the migration from the source host node instead of a remote host, the --connect option and initial uniform resource identifier (URI) are not necessary. For example, running the following command from host node hmbrgr successfully migrates virtual server websrv01 to host node chzbrgr:

virsh migrate --live websrv01 qemu+ssh://chzbrgr/system
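Put together, the remote-host workflow looks like the following dry-run sketch (VIRSH="echo virsh" prints the commands instead of executing them; the hostnames and guest name are the ones used in this guide):

```shell
# Dry-run sketch of live migration driven from a remote administration host.
VIRSH="echo virsh"    # drop the echo to actually run the commands

$VIRSH --connect qemu+ssh://hmbrgr/system list                  # guest running on source?
$VIRSH --connect qemu+ssh://hmbrgr/system migrate --live \
       websrv01 qemu+ssh://chzbrgr/system                       # migrate it
$VIRSH --connect qemu+ssh://chzbrgr/system list                 # guest running on target?
```
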

16 CONFIGURE DATA RESILIENCY AND EFFICIENCY

16.1 THIN PROVISIONING

Thin provisioning is a way to allocate space without reserving all of it at once; only written sectors are reserved. In contrast, thick provisioning reserves all space at creation time. You can enable thin provisioning at the volume, LUN, and disk image layers. If you enable thin provisioning at one layer, it is important to enable it at all layers, or the space efficiency will not be fully realized. For example, thin provisioning a LUN on a thick-provisioned volume provides little or no benefit because the volume still reserves all allocated space.

THIN-PROVISIONED VOLUME

As shown in Figure 38, enable thin provisioning on a volume at creation time by setting the Space Guarantee to none.

Figure 38) Enabling thin provisioning on a volume.

THIN-PROVISIONED LUN

As shown in Figure 39, enable thin provisioning on a LUN at creation time by leaving Space Reserved unchecked.

Figure 39) Enabling thin provisioning on a LUN.

If using NetApp deduplication on a volume serving LUNs, you should thin provision the volume at two times the size of the LUN or LUNs.

THIN-PROVISIONED DISK IMAGE

As shown in Figure 40, raw disk images in KVM are thin by default. Creating a 10GB raw disk image with no other options results in a file that allocates 10GB of space but reserves only 12K. Also note the difference between allocated space and reserved space in the other disk image files. For example, the disk image align.img is 8GB in size but takes up only 2.7GB.

Figure 40) Enabling thin provisioning on a disk image.

16.2 DEDUPLICATION

Because most virtual servers are cloned from a golden image or template, they share many of the same data blocks. NetApp deduplication folds the identical blocks from the different virtual server images into a single instance on the NetApp FAS controller. Deduplication is enabled at the volume level and requires a license.
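On the controller, deduplication is managed from the command line with sis. A dry-run sketch (RUN=echo prints the commands instead of executing them; these commands run on the NetApp FAS controller, not on a host node, and the volume name is the one used in this guide):

```shell
# Dry-run sketch: enable, start, and check deduplication on a volume.
RUN=echo
$RUN sis on /vol/kvm_fcp         # enable deduplication on the volume
$RUN sis start -s /vol/kvm_fcp   # deduplicate the data already in the volume
$RUN sis status /vol/kvm_fcp     # check progress and state
```
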

There is no graphical tool for NetApp deduplication. As shown in Figure 41, it must be enabled from the command line. In this example, the sis command enables and starts deduplication on the volume kvm_fcp. You should configure a schedule to run deduplication automatically at a regular interval. For more information on NetApp deduplication, see the NetApp Deduplication for FAS and V-Series Deployment and Implementation Guide.

Figure 41) Enabling deduplication.

16.3 SNAPSHOT

Snapshot technology provides a read-only, point-in-time copy of a volume. The copy takes very little space and usually takes less than a second to create. Once a Snapshot copy is created, you can restore entire volumes or individual files from it. If a virtual server's configuration is altered or its data is deleted, you can restore the virtual server from the Snapshot copy. In addition, you can back up Snapshot copies to tape or replicate them to another site using NetApp SnapVault or SnapMirror.

Snapshot technology requires very little configuration, but the license for SnapManager must be installed on the NetApp FAS controller.

When creating a Snapshot copy of a volume that contains active virtual servers, you must first quiesce the active virtual servers to get the most accurate view of the servers and data. This means that if virtual servers are distributed among three host nodes, you must quiesce all of the virtual servers because they all reside on the same volume.

Figure 42 shows remote commands being run to list the active virtual servers and then suspend (quiesce) them. After the active virtual servers are quiesced, a remote command is issued to the NetApp FAS controller to create a Snapshot copy of the volume kvm_iscsi, named kvmsnap. The second virsh list command demonstrates the paused state of the virtual server before it is put back into active status.
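In outline, the sequence is suspend, snapshot, resume. A dry-run sketch (RUN=echo prints the commands instead of executing them; the guest, host, and volume names are the ones used in this guide, while the controller hostname filer is an assumption):

```shell
# Dry-run sketch: quiesce the guest(s), snapshot the volume, resume.
RUN=echo
$RUN virsh --connect qemu+ssh://chzbrgr/system suspend websrv01   # quiesce
$RUN ssh root@filer snap create kvm_iscsi kvmsnap                 # Snapshot copy
$RUN virsh --connect qemu+ssh://chzbrgr/system resume websrv01    # back to active
```

In a real environment, the suspend and resume steps loop over every running guest on every host node; a full version of that loop is shown in Appendix D.
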

Figure 42) Listing and suspending the active virtual servers.

Appendix D: Sample Snapshot Script, contains a basic script that automates the process of quiescing the running virtual servers, triggering the Snapshot copy, and resuming the virtual servers.

Because all running virtual machines must be quiesced before a Snapshot copy is created, the normal Snapshot schedule should be disabled. Instead, trigger Snapshot copies from a Linux- or UNIX-style cron job, such as a cron job on a remote administration host that first quiesces the virtual servers.

Figure 43 shows how to disable the scheduled Snapshot copy from the NetApp FAS controller. Select Volumes > Snapshots > Configure. From the Volume drop-down menu, select the shared storage volume. In Figure 43, the kvm_iscsi volume is chosen, and the Scheduled Snapshots check box is cleared, thereby disabling the default schedule. The hourly Snapshot schedule is in effect only if the Scheduled Snapshots check box is checked.

Figure 43) Disable the scheduled Snapshot copy from the NetApp FAS controller.

APPENDIXES

APPENDIX A: CONFIGURE HARDWARE-BASED ISCSI INITIATOR

A hardware-based iSCSI initiator was used for the purposes of this deployment guide: a QLogic QLE4062C iSCSI HBA, based on the ISP4032 chip. To configure the HBA, you must reboot the host and trigger the QLogic BIOS configuration.

1. Press Ctrl-Q after the words Press <CTRL-Q> for Fast!UTIL appear on the console. The Select Host Adapter window appears. Because the HBA has two ports, it appears as two separate adapters in the BIOS. You must configure each adapter separately.
2. Select the first adapter and press Enter.
3. On the Fast!UTIL Options window, select Configuration Settings and press Enter.
4. On the Configuration Settings window, select Host Adapter Settings and press Enter.

5. On the Host Adapter Settings window, select Initiator IP Settings.
6. Enter the IP and netmask that match the private VLAN for iSCSI traffic that was set up during the base configuration.
7. Select Initiator iSCSI Name and edit the iSCSI name after the colon so it is more meaningful and easier to remember.
8. Return to the Configuration Settings window by pressing Esc (Escape).
9. Select iSCSI Boot Settings and press Enter.
10. In the iSCSI Boot Settings window, select Manual for Adapter Boot Mode.
11. Select Primary Boot Device Settings.

12. In the Primary Boot Device Settings window, for the Target IP, enter the private VLAN IP of the NetApp FAS controller. Do not edit the values for the following fields; they are defaults and do not need to be changed:
- Use IPv4 or IPv6
- Target Port
- Boot LUN
13. Edit the iSCSI Name field on the Primary Boot Device Settings window. You can find the iSCSI node name on the Web console by selecting LUNs > iSCSI > Manage Names.
14. Repeat this configuration procedure for each HBA, on each host node. You must configure multipathing and GFS2 to complete the process.

APPENDIX B: CHANNEL BONDING MODES

The following information on channel bonding modes is from a Red Hat Knowledge Base article.

Table 9) Channel bonding modes.

Mode: balance-rr or 0
Description: Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

Mode: active-backup or 1
Description: Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. In later bonding versions, when a failover occurs in active-backup mode, bonding issues one or more gratuitous Address Resolution Protocol (ARP) messages on the newly active slave. One gratuitous ARP is issued for the bonding master interface and each VLAN interface configured above it, provided that the interface has at least one IP address configured. Gratuitous ARPs issued for VLAN interfaces are tagged with the appropriate VLAN ID. This mode provides fault tolerance. The primary option affects the behavior of this mode.

Mode: balance-xor or 2
Description: XOR policy: Transmit based on the selected transmit hash policy. The default policy is a simple [(source MAC address XOR'd with destination MAC address) modulo slave count]. Alternate transmit policies may be selected by use of the xmit_hash_policy option, described below. This mode provides load balancing and fault tolerance.

Mode: broadcast or 3
Description: Broadcast policy: Transmits everything on all slave interfaces. This mode provides fault tolerance.

Mode: 802.3ad or 4
Description: IEEE 802.3ad Dynamic link aggregation: Creates aggregation groups that share the same speed and duplex settings. Uses all slaves in the active aggregator according to the 802.3ad specification. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR policy by use of the xmit_hash_policy option. Note that not all transmit policies may be 802.3ad compliant, particularly with regard to the packet misordering requirements of the 802.3ad standard. Differing peer implementations have varying tolerances for noncompliance.
Prerequisites: Ethtool support in the base drivers for retrieving the speed and duplex of each slave; a switch that supports IEEE 802.3ad Dynamic link aggregation. Note: Most switches require some type of configuration to enable 802.3ad mode.

Mode: balance-tlb or 5
Description: Adaptive transmit load balancing: Channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.

Mode: balance-alb or 6
Description: Adaptive load balancing: Includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different peers use different hardware addresses for the server. Receive traffic from connections created by the server is also balanced. When the local system sends an ARP request, the bonding driver copies and saves the peer's IP information from the ARP packet. When the ARP reply arrives from the peer, its hardware address is retrieved and the bonding driver initiates an ARP reply to this peer, assigning it to one of the slaves in the bond.

A problematic outcome of using ARP negotiation for balancing is that each time an ARP request is broadcast, it uses the hardware address of the bond. Hence, peers learn the hardware address of the bond, and the balancing of receive traffic collapses to the current slave. This is handled by sending updates (ARP replies) to all the peers with their individually assigned hardware addresses, so that the traffic is redistributed. Receive traffic is also redistributed when a new slave is added to the bond and when an inactive slave is reactivated. The receive load is distributed sequentially (round robin) among the group of highest-speed slaves in the bond. When a link is reconnected or a new slave joins the bond, the receive traffic is redistributed among all active slaves in the bond by initiating ARP replies with the selected MAC address to each of the clients. The updelay parameter must be set to a value equal to or greater than the switch's forwarding delay so that the ARP replies sent to the peers will not be blocked by the switch.

Prerequisites: Ethtool support in the base drivers for retrieving the speed of each slave; base driver support for setting the hardware address of a device while it is open. This is required so that there will always be one slave in the team using the bond hardware address (the curr_active_slave) while each slave in the bond has a unique hardware address. If the curr_active_slave fails, its hardware address is swapped with that of the new curr_active_slave.
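On RHEL 5, a bonding mode is selected through the driver options and the slaves are tied to the bond in their ifcfg files. A sketch for active-backup (mode 1); the device names and addresses are assumptions:

```shell
# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for each slave, e.g. eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```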

APPENDIX C: SAMPLE FIREWALL FOR HOST NODES

Figure 44 is an example of one method used to set up the iptables firewall; it is not the only method.

Figure 44) Set up the iptables firewall.

APPENDIX D: SAMPLE SNAPSHOT SCRIPT

Figure 45 shows how to script Snapshot copies and run them from a remote administration host or from a host node.

Figure 45) Script Snapshot copies.

The script requires that SSH keys (DSA) be set up on the hosts expected to run the script, and that the public key be distributed to the NetApp FAS controller. This is described in section 6.7, Secure Remote Access to Host Nodes (SSH Keys).

APPENDIX E: SAMPLE KICKSTART FILE FOR A PROPERLY ALIGNED VIRTUAL SERVER

Figure 46 provides an example of a Kickstart file for a properly aligned virtual server.

Figure 46) Kickstart file.

The %pre section creates properly aligned partitions, and the disk layout references the newly created partitions. If using the virtio drivers, use device vda instead of hda.
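The alignment-relevant part of such a Kickstart file boils down to reusing the partitions created in %pre rather than letting the installer repartition. A sketch of the disk layout section (volume group and logical volume names and sizes are assumptions; note the absence of clearpart, which would destroy the aligned partitions):

```shell
# Disk layout section: reuse the aligned partitions created in %pre.
part /boot --fstype ext3 --onpart=hda1
part pv.01 --onpart=hda2
volgroup vg00 pv.01
logvol swap --vgname=vg00 --name=swap --size=1024
logvol /    --vgname=vg00 --name=root --fstype=ext3 --size=1024 --grow
```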

REFERENCES

- Home Page for KVM
- Red Hat Enterprise Linux and Microsoft Windows Virtualization Interoperability
- KVM: Kernel-Based Virtual Machine
- Red Hat Enterprise Linux 5 Virtualization Guide
- Red Hat Enterprise Linux 5 Deployment Guide
- Red Hat Enterprise Linux 5 Installation Guide
- Red Hat Enterprise Linux 5.5 Online Storage Guide: US/Red_Hat_Enterprise_Linux/html/Online_Storage_Reconfiguration_Guide/index.html
- Red Hat Enterprise Linux 5 DM Multipath
- Best Practices for File System Alignment in Virtual Environments
- Technical Report: Using the Linux NFS Client with Network Appliance Storage
- Storage Best Practices and Resiliency Guide
- KVM Known Issues
- NetApp Deduplication for FAS and V-Series Deployment and Implementation Guide
- Red Hat Enterprise Linux 5 Global File System 2
- SnapMirror Async Overview and Best Practices Guide
- SnapVault Best Practices Guide
- Data ONTAP 7.3 Data Protection Online Backup and Recovery Guide (available on NOW)

- Red Hat Enterprise Linux 5 Cluster Administration

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.

Copyright 2010 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. NetApp, the NetApp logo, Go further, faster, Data ONTAP, FilerView, FlexVol, Network Appliance, NOW, SnapManager, SnapMirror, Snapshot, and SnapVault are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Intel is a registered trademark of Intel Corporation. Linux is a registered trademark of Linus Torvalds. UNIX is a registered trademark of The Open Group. Windows is a registered trademark of Microsoft Corporation. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. RA


Virtualization with VMware ESX and VirtualCenter SMB to Enterprise Virtualization with VMware ESX and VirtualCenter SMB to Enterprise This class is an intense, five-day introduction to virtualization using VMware s immensely popular Virtual Infrastructure suite including

More information

Production Installation and Configuration. Openfiler NSA

Production Installation and Configuration. Openfiler NSA Production Installation and Configuration Openfiler NSA Table of Content 1. INTRODUCTION... 3 1.1. PURPOSE OF DOCUMENT... 3 1.2. INTENDED AUDIENCE... 3 1.3. SCOPE OF THIS GUIDE... 3 2. OPENFILER INSTALLATION...

More information

1 LINUX KERNEL & DEVICES

1 LINUX KERNEL & DEVICES GL-250: Red Hat Linux Systems Administration Course Length: 5 days Course Description: The GL250 is an in-depth course that explores installation, configuration and maintenance of Linux systems. The course

More information

Data ONTAP 8.1 Software Setup Guide for 7-Mode

Data ONTAP 8.1 Software Setup Guide for 7-Mode IBM System Storage N series Data ONTAP 8.1 Software Setup Guide for 7-Mode GA32-1044-03 Contents Preface................................ 1 About this guide.............................. 1 Supported features.............................

More information

Configuring Server Boot

Configuring Server Boot This chapter includes the following sections: Boot Policy, page 1 UEFI Boot Mode, page 2 UEFI Secure Boot, page 3 CIMC Secure Boot, page 3 Creating a Boot Policy, page 5 SAN Boot, page 6 iscsi Boot, page

More information

Copy-Based Transition Guide

Copy-Based Transition Guide 7-Mode Transition Tool 3.2 Copy-Based Transition Guide For Transitioning to ONTAP February 2017 215-11978-A0 doccomments@netapp.com Table of Contents 3 Contents Transition overview... 6 Copy-based transition

More information

Upgrading from TrafficShield 3.2.X to Application Security Module 9.2.3

Upgrading from TrafficShield 3.2.X to Application Security Module 9.2.3 Upgrading from TrafficShield 3.2.X to Application Security Module 9.2.3 Introduction Preparing the 3.2.X system for the upgrade Installing the BIG-IP version 9.2.3 software Licensing the software using

More information

BIG-IP Virtual Edition and Linux KVM: Setup. Version 12.1

BIG-IP Virtual Edition and Linux KVM: Setup. Version 12.1 BIG-IP Virtual Edition and Linux KVM: Setup Version 12.1 Table of Contents Table of Contents Getting Started with BIG-IP Virtual Edition on KVM...5 Steps to deploy BIG-IP VE...5 Prerequisites for BIG-IP

More information

ONTAP 9 Cluster Administration. Course outline. Authorised Vendor e-learning. Guaranteed To Run. DR Digital Learning. Module 1: ONTAP Overview

ONTAP 9 Cluster Administration. Course outline. Authorised Vendor e-learning. Guaranteed To Run. DR Digital Learning. Module 1: ONTAP Overview ONTAP 9 Cluster Administration Course Code: Duration: 3 Days Product Page: https://digitalrevolver.com/product/ontap-9-cluster-administration-2/ This 3-day, instructor led course uses lecture and hands-on

More information

Cluster Management Workflows for OnCommand System Manager

Cluster Management Workflows for OnCommand System Manager ONTAP 9 Cluster Management Workflows for OnCommand System Manager August 2018 215-12669_C0 doccomments@netapp.com Table of Contents 3 Contents OnCommand System Manager workflows... 5 Setting up a cluster

More information

Unit 2: Manage Files Graphically with Nautilus Objective: Manage files graphically and access remote systems with Nautilus

Unit 2: Manage Files Graphically with Nautilus Objective: Manage files graphically and access remote systems with Nautilus Linux system administrator-i Unit 1: Get Started with the GNOME Graphical Desktop Objective: Get started with GNOME and edit text files with gedit Unit 2: Manage Files Graphically with Nautilus Objective:

More information

VMware View on NetApp Deployment Guide

VMware View on NetApp Deployment Guide Technical Report VMware View on NetApp Deployment Guide Jack McLeod, Chris Gebhardt, Abhinav Joshi, NetApp February 2010 TR-3770 A SCALABLE SOLUTION ARCHITECTURE USING NFS ON A CISCO NEXUS NETWORK INFRASTRUCTURE

More information

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise Virtualization with VMware ESX and VirtualCenter SMB to Enterprise This class is an intense, four-day introduction to virtualization using VMware s immensely popular Virtual Infrastructure suite including

More information

vcmp for Appliance Models: Administration Version

vcmp for Appliance Models: Administration Version vcmp for Appliance Models: Administration Version 12.1.1 Table of Contents Table of Contents Introduction to the vcmp System...7 What is vcmp?...7 Other vcmp system components...8 BIG-IP license considerations

More information

Installation and Cluster Deployment Guide for VMware

Installation and Cluster Deployment Guide for VMware ONTAP Select 9 Installation and Cluster Deployment Guide for VMware Using ONTAP Select Deploy 2.6 November 2017 215-12636_B0 doccomments@netapp.com Updated for ONTAP Select 9.3 Table of Contents 3 Contents

More information

Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere

Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere Workflow Guide for 7.2 release July 2018 215-13170_B0 doccomments@netapp.com Table of Contents 3 Contents Deciding

More information

Deep Dive - Veeam Backup & Replication with NetApp Storage Snapshots

Deep Dive - Veeam Backup & Replication with NetApp Storage Snapshots Deep Dive - Veeam Backup & Replication with NetApp Storage Snapshots Luca Dell Oca EMEA Evangelist, Product Strategy Specialist for Veeam Software, VMware vexpert, VCAP-DCD, CISSP Modern Data Protection

More information

Performance Report: Multiprotocol Performance Test of VMware ESX 3.5 on NetApp Storage Systems

Performance Report: Multiprotocol Performance Test of VMware ESX 3.5 on NetApp Storage Systems NETAPP TECHNICAL REPORT Performance Report: Multiprotocol Performance Test of VMware ESX 3.5 on NetApp Storage Systems A Performance Comparison Study of FC, iscsi, and NFS Protocols Jack McLeod, NetApp

More information

Installing VMware vsphere 5.1 Components

Installing VMware vsphere 5.1 Components Installing VMware vsphere 5.1 Components Module 14 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks

More information

SAN Configuration Guide

SAN Configuration Guide ONTAP 9 SAN Configuration Guide November 2017 215-11168_G0 doccomments@netapp.com Updated for ONTAP 9.3 Table of Contents 3 Contents Considerations for iscsi configurations... 5 Ways to configure iscsi

More information

Citrix XenServer 7.3 Quick Start Guide. Published December Edition

Citrix XenServer 7.3 Quick Start Guide. Published December Edition Citrix XenServer 7.3 Quick Start Guide Published December 2017 1.0 Edition Citrix XenServer 7.3 Quick Start Guide 1999-2017 Citrix Systems, Inc. All Rights Reserved. Version: 7.3 Citrix Systems, Inc. 851

More information

VMware Infrastructure Update 1 for Dell PowerEdge Systems. Deployment Guide. support.dell.com

VMware Infrastructure Update 1 for Dell PowerEdge Systems. Deployment Guide.   support.dell.com VMware Infrastructure 3.0.2 Update 1 for Dell Systems Deployment Guide www.dell.com support.dell.com Notes and Notices NOTE: A NOTE indicates important information that helps you make better use of your

More information

CA Agile Central Administrator Guide. CA Agile Central On-Premises

CA Agile Central Administrator Guide. CA Agile Central On-Premises CA Agile Central Administrator Guide CA Agile Central On-Premises 2018.1 Table of Contents Overview... 3 Server Requirements...3 Browser Requirements...3 Access Help and WSAPI...4 Time Zone...5 Architectural

More information

VI-CENTER EXTENDED ENTERPRISE EDITION GETTING STARTED GUIDE. Version: 4.5

VI-CENTER EXTENDED ENTERPRISE EDITION GETTING STARTED GUIDE. Version: 4.5 VI-CENTER EXTENDED ENTERPRISE EDITION GETTING STARTED GUIDE This manual provides a quick introduction to Virtual Iron software, and explains how to use Virtual Iron VI-Center to configure and manage virtual

More information

VIRTUALIZATION MANAGER ENTERPRISE EDITION GETTING STARTED GUIDE

VIRTUALIZATION MANAGER ENTERPRISE EDITION GETTING STARTED GUIDE VIRTUALIZATION MANAGER ENTERPRISE EDITION GETTING STARTED GUIDE This manual provides a quick introduction to Virtual Iron software, and explains how to use Virtual Iron Virtualization Manager to configure

More information

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES I. Executive Summary Superior Court of California, County of Orange (Court) is in the process of conducting a large enterprise hardware refresh. This

More information

Configuring Cisco UCS Server Pools and Policies

Configuring Cisco UCS Server Pools and Policies This chapter contains the following sections: Global Equipment Policies, page 1 UUID Pools, page 3 Server Pools, page 5 Management IP Pool, page 7 Boot Policy, page 8 Local Disk Configuration Policy, page

More information

HP LeftHand SAN Solutions

HP LeftHand SAN Solutions HP LeftHand SAN Solutions Support Document Installation Manuals VSA 8.0 Quick Start - Demo Version Legal Notices Warranty The only warranties for HP products and services are set forth in the express warranty

More information

OpenStack Havana All-in-One lab on VMware Workstation

OpenStack Havana All-in-One lab on VMware Workstation OpenStack Havana All-in-One lab on VMware Workstation With all of the popularity of OpenStack in general, and specifically with my other posts on deploying the Rackspace Private Cloud lab on VMware Workstation,

More information

Technical Brief: How to Configure NPIV on VMware vsphere 4.0

Technical Brief: How to Configure NPIV on VMware vsphere 4.0 Technical Brief: How to Configure NPIV on VMware vsphere 4.0 Provides step-by-step instructions on how to configure NPIV on VMware vsphere 4.0 in a Brocade fabric. Leveraging NPIV gives the administrator

More information

Installing the CGDS - Substation Workbench Server Software

Installing the CGDS - Substation Workbench Server Software CHAPTER 2 Installing the CGDS - Substation Workbench Server Software Revised: April 15, 2013, Tips on Performing the CGDS - Substation Workbench Base Software Installation This section will cover how to

More information

Enterprise Linux System Administration

Enterprise Linux System Administration Enterprise Linux System Administration Course GL250, 5 Days, Hands-On, Instructor-Led Introduction The GL250 is an in-depth course that explores installation, configuration and maintenance of Linux systems.

More information

NetApp Data Ontap Simulator Cookbook

NetApp Data Ontap Simulator Cookbook Hernán J. Larrea NetApp Data Ontap Simulator Cookbook HOW TO BUILD YOUR OWN VIRTUAL, ALL FUNCTIONAL STORAGE SIMULATOR, WITH UBUNTU OS AND ISCSI FEATURES. CONTENT Introduction... 3 Concepts... 3 Ingredients...

More information

Configuring and Managing Virtual Storage

Configuring and Managing Virtual Storage Configuring and Managing Virtual Storage Module 6 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks

More information

Dell TM PowerVault TM Configuration Guide for VMware ESX/ESXi 3.5

Dell TM PowerVault TM Configuration Guide for VMware ESX/ESXi 3.5 Dell TM PowerVault TM Configuration Guide for VMware ESX/ESXi 3.5 September 2008 Dell Virtualization Solutions Engineering Dell PowerVault Storage Engineering www.dell.com/vmware www.dell.com/powervault

More information

Cisco Stealthwatch. Installation and Configuration Guide 7.0

Cisco Stealthwatch. Installation and Configuration Guide 7.0 Cisco Stealthwatch Installation and Configuration Guide 7.0 Table of Contents Introduction 7 Overview 7 Virtual Edition (VE) 7 Hardware 7 Audience 7 New Process 7 Terminology 8 Abbreviations 8 Before You

More information

Cisco Virtual Networking Solution for OpenStack

Cisco Virtual Networking Solution for OpenStack Data Sheet Cisco Virtual Networking Solution for OpenStack Product Overview Extend enterprise-class networking features to OpenStack cloud environments. A reliable virtual network infrastructure that provides

More information

Securing Containers Using a PNSC and a Cisco VSG

Securing Containers Using a PNSC and a Cisco VSG Securing Containers Using a PNSC and a Cisco VSG This chapter contains the following sections: About Prime Network Service Controllers, page 1 Integrating a VSG into an Application Container, page 4 About

More information

Cisco Prime Collaboration Deployment

Cisco Prime Collaboration Deployment Install System Requirements for Installation, page 1 Browser Requirements, page 2 IP Address Requirements, page 2 Virtualization Software License Types, page 3 Frequently Asked Questions About the Installation,

More information

Cisco Stealthwatch. Installation and Configuration Guide 7.0

Cisco Stealthwatch. Installation and Configuration Guide 7.0 Cisco Stealthwatch Installation and Configuration Guide 7.0 Table of Contents Introduction 7 Overview 7 Virtual Edition (VE) 7 Hardware 7 Audience 7 New Process 7 Terminology 8 Abbreviations 8 Before You

More information

VMware Infrastructure 3.5 for Dell PowerEdge Systems. Deployment Guide. support.dell.com

VMware Infrastructure 3.5 for Dell PowerEdge Systems. Deployment Guide.   support.dell.com VMware Infrastructure 3.5 for Dell PowerEdge Systems Deployment Guide www.dell.com support.dell.com Notes and Notices NOTE: A NOTE indicates important information that helps you make better use of your

More information

Linux Administration

Linux Administration Linux Administration This course will cover all aspects of Linux Certification. At the end of the course delegates will have the skills required to administer a Linux System. It is designed for professionals

More information

Citrix XenServer with Dell SC Series Storage Configuration and Deployment

Citrix XenServer with Dell SC Series Storage Configuration and Deployment Citrix XenServer with Dell SC Series Storage Configuration and Deployment Dell Storage Engineering January 2017 A Dell EMC Deployment and Configuration Guide Revisions Date January 2016 Description Initial

More information

CA Agile Central Installation Guide On-Premises release

CA Agile Central Installation Guide On-Premises release CA Agile Central Installation Guide On-Premises release 2016.2 Agile Central to Go 2017.1 rallysupport@rallydev.com www.rallydev.com 2017 CA Technologies (c) 2017 CA Technologies Version 2016.2 (c) Table

More information

Trend Micro Incorporated reserves the right to make changes to this document and to the product described herein without notice. Before installing and using the product, please review the readme files,

More information

"Charting the Course... RHCE Rapid Track Course. Course Summary

Charting the Course... RHCE Rapid Track Course. Course Summary Course Summary Description This course is carefully designed to match the topics found in the Red Hat RH299 exam prep course but also features the added benefit of an entire extra day of comprehensive

More information

Securing Containers Using a PNSC and a Cisco VSG

Securing Containers Using a PNSC and a Cisco VSG Securing Containers Using a PNSC and a Cisco VSG This chapter contains the following sections: About Prime Network Service Controllers, page 1 Integrating a VSG into an Application Container, page 3 About

More information

Red Hat Virtualization 4.1 Product Guide

Red Hat Virtualization 4.1 Product Guide Red Hat Virtualization 4.1 Product Guide Introduction to Red Hat Virtualization 4.1 Red Hat Virtualization Documentation TeamRed Hat Red Hat Virtualization 4.1 Product Guide Introduction to Red Hat Virtualization

More information

Exam Questions NS0-157

Exam Questions NS0-157 Exam Questions NS0-157 NetApp Certified Data Administrator, Clustered https://www.2passeasy.com/dumps/ns0-157/ 1. Clustered Data ONTAP supports which three versions of NFS? (Choose three.) A. NFSv4.1 B.

More information

VMware vsphere with ESX 4.1 and vcenter 4.1

VMware vsphere with ESX 4.1 and vcenter 4.1 QWERTYUIOP{ Overview VMware vsphere with ESX 4.1 and vcenter 4.1 This powerful 5-day class is an intense introduction to virtualization using VMware s vsphere 4.1 including VMware ESX 4.1 and vcenter.

More information

VIRTUALIZATION MANAGER ENTERPRISE EDITION GETTING STARTED GUIDE. Product: Virtual Iron Virtualization Manager Version: 4.2

VIRTUALIZATION MANAGER ENTERPRISE EDITION GETTING STARTED GUIDE. Product: Virtual Iron Virtualization Manager Version: 4.2 VIRTUALIZATION MANAGER ENTERPRISE EDITION GETTING STARTED GUIDE This manual provides a quick introduction to Virtual Iron software, and explains how to use Virtual Iron Virtualization Manager to configure

More information

Cisco Exam Questions & Answers

Cisco Exam Questions & Answers Cisco 648-244 Exam Questions & Answers Number: 648-244 Passing Score: 790 Time Limit: 110 min File Version: 23.4 http://www.gratisexam.com/ Cisco 648-244 Exam Questions & Answers Exam Name: Designing and

More information

Introduction to Virtualization. From NDG In partnership with VMware IT Academy

Introduction to Virtualization. From NDG In partnership with VMware IT Academy Introduction to Virtualization From NDG In partnership with VMware IT Academy www.vmware.com/go/academy Why learn virtualization? Modern computing is more efficient due to virtualization Virtualization

More information

Installation and Cluster Deployment Guide

Installation and Cluster Deployment Guide ONTAP Select 9 Installation and Cluster Deployment Guide Using ONTAP Select Deploy 2.3 March 2017 215-12086_B0 doccomments@netapp.com Updated for ONTAP Select 9.1 Table of Contents 3 Contents Deciding

More information

BIG-IP Virtual Edition and VMware ESXi: Setup. Version 12.1

BIG-IP Virtual Edition and VMware ESXi: Setup. Version 12.1 BIG-IP Virtual Edition and VMware ESXi: Setup Version 12.1 Table of Contents Table of Contents Getting Started with BIG-IP Virtual Edition on ESXi...5 Steps to deploy BIG-IP VE...5 Prerequisites for BIG-IP

More information

BRINGING HOST LIFE CYCLE AND CONTENT MANAGEMENT INTO RED HAT ENTERPRISE VIRTUALIZATION. Yaniv Kaul Director, SW engineering June 2016

BRINGING HOST LIFE CYCLE AND CONTENT MANAGEMENT INTO RED HAT ENTERPRISE VIRTUALIZATION. Yaniv Kaul Director, SW engineering June 2016 BRINGING HOST LIFE CYCLE AND CONTENT MANAGEMENT INTO RED HAT ENTERPRISE VIRTUALIZATION Yaniv Kaul Director, SW engineering June 2016 HOSTS IN A RHEV SYSTEM Host functionality Hosts run the KVM hypervisor

More information

VMware vsphere with ESX 4 and vcenter

VMware vsphere with ESX 4 and vcenter VMware vsphere with ESX 4 and vcenter This class is a 5-day intense introduction to virtualization using VMware s immensely popular vsphere suite including VMware ESX 4 and vcenter. Assuming no prior virtualization

More information

Creating Application Containers

Creating Application Containers This chapter contains the following sections: General Application Container Creation Process, page 1 Creating Application Container Policies, page 2 About Application Container Templates, page 5 Creating

More information

NETAPP - Accelerated NCDA Boot Camp Data ONTAP 7-Mode

NETAPP - Accelerated NCDA Boot Camp Data ONTAP 7-Mode NETAPP - Accelerated NCDA Boot Camp Data ONTAP 7-Mode Duration: 5 Days Course Price: $5,850 Course Description Course Overview This training course is a 5-day boot camp with extended hours. The training

More information

HySecure Quick Start Guide. HySecure 5.0

HySecure Quick Start Guide. HySecure 5.0 HySecure Quick Start Guide HySecure 5.0 Last Updated: 25 May 2017 2012-2017 Propalms Technologies Private Limited. All rights reserved. The information contained in this document represents the current

More information

ClearCube Virtualization. Deployment Guide. ClearCube Technology, Inc.

ClearCube Virtualization. Deployment Guide. ClearCube Technology, Inc. ClearCube Virtualization Deployment Guide ClearCube Technology, Inc. Copyright 2006, 2007, ClearCube Technology, Inc. All rights reserved. Under copyright laws, this publication may not be reproduced or

More information

1Y0-A26 Citrix XenServer 6.0 Practice Exam

1Y0-A26 Citrix XenServer 6.0 Practice Exam 1Y0-A26 Citrix XenServer 6.0 Practice Exam Section 1: Implementing XenServer 1.1 Specific Task: Configure boot storage from SAN Objective: Given a scenario, determine how to configure options on the XenServer

More information

Install ISE on a VMware Virtual Machine

Install ISE on a VMware Virtual Machine Supported VMware Versions, page 1 Support for VMware vmotion, page 1 Support for Open Virtualization Format, page 2 Virtual Machine Requirements, page 3 Virtual Machine Resource and Performance Checks,

More information

Deploy the ExtraHop Explore Appliance on a Linux KVM

Deploy the ExtraHop Explore Appliance on a Linux KVM Deploy the ExtraHop Explore Appliance on a Linux KVM Published: 2018-07-17 In this guide, you will learn how to deploy an ExtraHop Explore virtual appliance on a Linux kernel-based virtual machine (KVM)

More information

NS0-171.network appliance

NS0-171.network appliance NS0-171.network appliance Number: NS0-171 Passing Score: 800 Time Limit: 120 min Exam A QUESTION 1 An administrator is deploying a FlexPod solution for use with VMware vsphere 6.0. The storage environment

More information

Data ONTAP 8.2. MultiStore Management Guide For 7-Mode. Updated for NetApp, Inc. 495 East Java Drive Sunnyvale, CA U.S.

Data ONTAP 8.2. MultiStore Management Guide For 7-Mode. Updated for NetApp, Inc. 495 East Java Drive Sunnyvale, CA U.S. Updated for 8.2.2 Data ONTAP 8.2 MultiStore Management Guide For 7-Mode NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone:

More information

Baremetal with Apache CloudStack

Baremetal with Apache CloudStack Baremetal with Apache CloudStack ApacheCon Europe 2016 Jaydeep Marfatia Cloud, IOT and Analytics Me Director of Product Management Cloud Products Accelerite Background Project lead for open source project

More information

Deploy the ExtraHop Discover Appliance with VMware

Deploy the ExtraHop Discover Appliance with VMware Deploy the ExtraHop Discover Appliance with VMware Published: 2018-07-17 The ExtraHop virtual appliance can help you to monitor the performance of your applications across internal networks, the public

More information

Data ONTAP 8.2 Software Setup Guide For 7-Mode

Data ONTAP 8.2 Software Setup Guide For 7-Mode IBM System Storage N series Data ONTAP 8.2 Software Setup Guide For 7-Mode SC27-5926-02 Table of Contents 3 Contents Preface... 7 About this guide... 7 Supported features... 7 Websites... 8 Getting information,

More information

Installing Cisco APIC-EM on a Virtual Machine

Installing Cisco APIC-EM on a Virtual Machine About the Virtual Machine Installation, page 1 System Requirements Virtual Machine, page 2 Pre-Install Checklists, page 4 Cisco APIC-EM Ports Reference, page 7 Verifying the Cisco ISO Image, page 8 Installing

More information

HCI File Services Powered by ONTAP Select

HCI File Services Powered by ONTAP Select Technical Report HCI File Services Powered by ONTAP Select Quick Start Guide Aaron Patten, NetApp March 2018 TR-4669 Abstract NetApp ONTAP Select extends the NetApp HCI product, adding a rich set of file

More information

Active System Manager Release 8.2 Compatibility Matrix

Active System Manager Release 8.2 Compatibility Matrix Active System Manager Release 8.2 Compatibility Matrix Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION indicates

More information

Install ISE on a VMware Virtual Machine

Install ISE on a VMware Virtual Machine Supported VMware Versions, page 1 Support for VMware vmotion, page 1 Support for Open Virtualization Format, page 2 Virtual Machine Requirements, page 3 Virtual Machine Resource and Performance Checks,

More information

NetApp. Number: NS0-156 Passing Score: 800 Time Limit: 120 min File Version: 1.0.

NetApp. Number: NS0-156 Passing Score: 800 Time Limit: 120 min File Version: 1.0. NetApp Number: NS0-156 Passing Score: 800 Time Limit: 120 min File Version: 1.0 http://www.gratisexam.com/ Exam A QUESTION 1 Which statement is true about a Data Protection (DP)-type SnapMirror destination

More information

StorageGRID Installation Guide. For Red Hat Enterprise Linux or CentOS Deployments. February _A0

StorageGRID Installation Guide. For Red Hat Enterprise Linux or CentOS Deployments. February _A0 StorageGRID 11.2 Installation Guide For Red Hat Enterprise Linux or CentOS Deployments February 2019 215-13579_A0 doccomments@netapp.com Table of Contents 3 Contents Installation overview... 5 Planning

More information

StorageGRID Webscale 11.1 Expansion Guide

StorageGRID Webscale 11.1 Expansion Guide StorageGRID Webscale 11.1 Expansion Guide October 2018 215-12800_B0 doccomments@netapp.com Table of Contents 3 Contents Expansion overview... 4 Planning and preparation... 5 Reviewing the options and

More information