Module 6: Configuring and Managing Virtual Storage
You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vCenter Server Configuring and Managing Virtual Networks Configuring and Managing vSphere Storage Virtual Machine Management Data Protection Access and Authentication Control Resource Management and Monitoring High Availability and Fault Tolerance Host Scalability Patch Management Installing VMware vSphere Components 6-2
Importance Storage options give you the flexibility to set up your storage based on your cost, performance, and manageability requirements. Shared storage is useful for disaster recovery, high availability, and moving virtual machines between hosts. 6-3
Module Lessons Lesson 1: Storage Concepts Lesson 2: Configuring iSCSI Storage Lesson 3: Configuring NAS/NFS Storage Lesson 4: Fibre Channel SAN Storage Lesson 5: VMFS Datastores Lesson 6: VSA 6-4
Lesson 1: Storage Concepts 6-5
Learner Objectives After this lesson, you should be able to do the following: Describe VMware vSphere storage technologies and datastores. Describe the storage device naming convention. 6-6
Storage Overview ESXi hosts support two datastore types, VMware vSphere VMFS and NFS, over several storage technologies: direct-attached, Fibre Channel, FCoE, iSCSI, and NAS. 6-7
Storage Protocol Overview The storage protocols (Fibre Channel, FCoE, iSCSI, NFS, and DAS) differ in whether they support boot from SAN, VMware vSphere vMotion, VMware vSphere High Availability (vSphere HA), VMware vSphere Distributed Resource Scheduler (DRS), and raw device mapping (RDM). 6-8
Datastore A datastore is a logical storage unit that can use disk space on one physical device or span several physical devices. Types of datastores: VMFS and NFS. Datastores are used to hold virtual machine files, templates, and ISO images. 6-9
VMFS-5 VMFS-5: Allows concurrent access to shared storage Can be dynamically expanded Uses a 1MB block size, good for storing large virtual disk files Uses subblock addressing, good for storing small files: the subblock size is 8KB Provides on-disk, block-level locking 6-10
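The space savings from subblock addressing can be sketched with a simplified allocation model (illustrative only, not the actual VMFS-5 allocator):

```python
def vmfs5_alloc(file_size, block=1024 * 1024, subblock=8 * 1024):
    """Simplified model of VMFS-5 space allocation: a file that
    fits in one 8KB subblock uses only that subblock; larger
    files round up to whole 1MB file blocks."""
    if file_size <= subblock:
        return subblock
    # round up to whole 1MB file blocks
    return -(-file_size // block) * block

# a 2KB .vmx configuration file occupies one 8KB subblock, not a full 1MB block
print(vmfs5_alloc(2 * 1024))     # 8192
# a 1.5MB log file rounds up to two 1MB blocks
print(vmfs5_alloc(1536 * 1024))  # 2097152
```

This is why VMFS-5 can store many small files (configuration files, logs) efficiently while still favoring large virtual disk files.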
NFS NFS: Is storage shared over the network at the file system level Supports NFS version 3 over TCP/IP 6-11
Storage Device Naming Conventions Storage devices are identified in several ways: SCSI ID A unique SCSI identifier. Canonical name The Network Address Authority (NAA) ID is a unique logical unit number (LUN) identifier, guaranteed to be persistent across reboots. In addition to NAA IDs, devices can also be identified with mpx or T10 identifiers. Runtime name Uses the convention vmhbaN:C:T:L (adapter, channel, target, LUN). This name is not persistent through reboots. 6-12
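The runtime-name convention can be illustrated with a small parser (an illustrative sketch; the example device name is hypothetical):

```python
import re

def parse_runtime_name(name):
    """Split an ESXi runtime name like 'vmhba33:C0:T1:L4' into
    its adapter, channel, target, and LUN fields."""
    m = re.fullmatch(r"vmhba(\d+):C(\d+):T(\d+):L(\d+)", name)
    if m is None:
        raise ValueError(f"not a runtime name: {name!r}")
    adapter, channel, target, lun = map(int, m.groups())
    return {"adapter": adapter, "channel": channel,
            "target": target, "lun": lun}

print(parse_runtime_name("vmhba33:C0:T1:L4"))
# {'adapter': 33, 'channel': 0, 'target': 1, 'lun': 4}
```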
Viewing Storage Maps (the storage map shows the relationships between HBA, target, and LUN) 6-13
Physical Storage Considerations Discuss vsphere storage needs with your storage administration team, including: LUN sizes I/O bandwidth I/O requests per second that a LUN is capable of Disk cache parameters Zoning and masking Identical LUN presentation to each ESXi host Active-active or active-passive arrays Export properties for NFS datastores 6-14
Review of Learner Objectives You should be able to do the following: Describe vSphere storage technologies and datastores. Describe the storage device naming convention. 6-15
Lesson 2: Configuring iSCSI Storage 6-16
Learner Objectives After this lesson, you should be able to do the following: Describe uses of IP storage with ESXi. Describe iSCSI components and addressing. Configure iSCSI initiators. 6-17
iSCSI Components 6-18
iSCSI Addressing iSCSI target name: iqn.1992-08.com.mycompany:stor1-47cf3c25 or eui.fedcba9876543210 iSCSI alias: stor1 IP address: 192.168.36.101 iSCSI initiator name: iqn.1998-01.com.vmware:train1-64ad4c29 or eui.1234567890abcdef iSCSI alias: train1 IP address: 192.168.36.88 6-19
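The iqn. naming shape (iqn.yyyy-mm.reversed-domain:identifier) can be checked with a short validator (an illustrative sketch, not an exhaustive implementation of the iSCSI naming rules):

```python
import re

# basic shape: iqn. + year-month + reversed domain + optional :identifier
IQN_RE = re.compile(r"iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?")

def is_iqn(name):
    """True if name matches the basic iqn.yyyy-mm.domain:id shape."""
    return IQN_RE.fullmatch(name) is not None

print(is_iqn("iqn.1998-01.com.vmware:train1-64ad4c29"))  # True
print(is_iqn("eui.1234567890abcdef"))                    # False (EUI format)
```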
iSCSI Initiators 6-20
Configuring Software iSCSI To configure the iSCSI software initiator: 1. Configure a VMkernel port for accessing IP storage. 2. Enable the iSCSI software adapter. 3. Configure the iSCSI qualified name (IQN) and alias (if required). 4. Configure iSCSI software adapter properties, such as static/dynamic discovery addresses and iSCSI port binding. 5. Configure iSCSI security (Challenge Handshake Authentication Protocol (CHAP)). 6-21
ESXi Network Configuration for IP Storage A VMkernel port must be created for ESXi to access software iSCSI. The same port can be used to access NAS/NFS storage. To optimize your vSphere networking setup: Separate iSCSI networks from NAS/NFS networks. Physical separation is preferred. If physical separation is not possible, use VLANs. 6-22
iSCSI Target-Discovery Methods Two discovery methods are supported: Static Dynamic (also known as SendTargets) The SendTargets response returns the IQN and all available IP addresses. 192.168.36.101:3260 SendTargets request SendTargets response iSCSI target 192.168.36.101:3260 6-23
iSCSI Security: CHAP iSCSI initiators use CHAP for authentication purposes. By default, CHAP is not configured. ESXi supports two types of CHAP authentication: Unidirectional: the target authenticates the host. Bidirectional (software iSCSI only): the host also authenticates the target. ESXi also supports per-target CHAP authentication (software iSCSI only), with different credentials for each target. CHAP is configured in the software iSCSI properties, General tab. 6-24
Configuring Hardware iSCSI To configure the iSCSI hardware initiator: 1. Install the iSCSI hardware adapter. a. For independent hardware iSCSI adapters, verify properly formatted IP addresses and IQN names. b. For dependent hardware iSCSI adapters, determine the name of the physical NIC associated with the adapter so that port binding is properly configured. 2. Modify the iSCSI name and configure the iSCSI alias. 3. Configure iSCSI target addresses. 4. Configure iSCSI security (CHAP). 6-25
Multipathing with iSCSI Storage Hardware iSCSI: Use two or more hardware iSCSI adapters. Software or dependent hardware iSCSI: Use multiple network interface cards (NICs). Connect each NIC to a separate VMkernel port. Associate VMkernel ports with the iSCSI initiator. Configure port binding in the Properties window of the iSCSI adapter. 6-26
Review of Learner Objectives You should be able to do the following: Describe uses of IP storage with ESXi. Describe iSCSI components and addressing. Configure iSCSI initiators. 6-27
Lesson 3: Configuring NAS/NFS Storage 6-28
Learner Objectives After this lesson, you should be able to do the following: Describe NFS components and addressing. Create an NFS datastore. 6-29
NFS Components NAS device or a server with storage directory to share with the ESXi host over the network ESXi host with NIC mapped to virtual switch VMkernel port defined on virtual switch 6-30
Addressing and Access Control with NFS 192.168.81.33 192.168.81.72 VMkernel port configured with IP address 6-31
Configuring an NFS Datastore Create a VMkernel port: For better performance and security, separate it from the iSCSI network. Provide the following information: NFS server name (or IP address) Folder on the NFS server, for example, /LUN1 and /LUN2 Whether to mount the NFS file system read-only: Default is to mount read/write NFS datastore name 6-32
Viewing IP Storage Information Hosts and Clusters view > Configuration tab > Storage link Datastores view > Storage Views tab 6-33
Unmounting or Deleting an NFS Datastore Click the Storage link in the Configuration tab to unmount an NFS datastore. Unmounting or deleting an NFS datastore causes the files on the datastore to become inaccessible. 6-34
Multipathing and NFS Storage A recommended configuration for NFS multipathing: Configure one VMkernel port. Use adapters attached to the same physical switch to configure NIC teaming. Configure the NFS server with multiple IP addresses. IP addresses can be on the same subnet. To use multiple links, configure NIC teams with the IP hash load-balancing policy. 6-35
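The IP hash policy picks an uplink from a hash of the source and destination IP addresses, so traffic from one VMkernel port to an NFS server with multiple IP addresses can spread across NIC team members. A simplified illustration (the real ESXi hash differs in detail; the addresses are examples):

```python
import ipaddress

def ip_hash_uplink(src, dst, n_uplinks):
    """Pick a NIC-team uplink index from the source and destination
    IP addresses (simplified stand-in for the ESXi IP hash policy)."""
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % n_uplinks

# one VMkernel IP talking to two NFS server IPs can land on different uplinks
print(ip_hash_uplink("192.168.81.33", "192.168.81.72", 2))  # 1
print(ip_hash_uplink("192.168.81.33", "192.168.81.73", 2))  # 0
```

Because the hash depends on both endpoints, giving the NFS server multiple IP addresses is what lets more than one physical link carry traffic.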
Lab 6 In this lab, you will configure access to iSCSI and NFS datastores. 1. Add a VMkernel port group to a standard virtual switch. 2. Configure the iSCSI software adapter. 6-36
Lab 7 In this lab, you will configure access to iSCSI and NFS datastores. 1. Configure access to NFS datastores. 2. View iSCSI and NFS storage information. 6-37
Review of Learner Objectives You should be able to do the following: Describe NFS components and addressing. Create an NFS datastore. 6-38
Lesson 4: Fibre Channel SAN Storage 6-39
Learner Objectives After this lesson, you should be able to do the following: Describe uses of Fibre Channel with ESXi. Describe Fibre Channel components and addressing. Access Fibre Channel storage. 6-40
Using Fibre Channel with ESXi ESXi supports: 16Gb Fibre Channel Fibre Channel over Ethernet (FCoE) 6-41
Fibre Channel SAN Components 6-42
Fibre Channel Addressing and Access Control 6-43
Accessing Fibre Channel Storage Install one or more supported Fibre Channel adapters in the ESXi host. The Fibre Channel adapters are recognized by the host during the boot sequence. 6-44
Viewing Fibre Channel Storage Information The Storage Views tab provides information about all SCSI adapters and NAS mounts. 6-45
FCoE Adapters Hardware FCoE: a converged network adapter on the ESXi host presents both a network driver and an FC driver over 10 Gigabit Ethernet. Software FCoE (new in vSphere 5.0, ESXi 5.x hosts): a network driver plus a software FC stack over a NIC with FCoE support. In both cases, the FCoE switch passes Ethernet IP frames to LAN devices and FC frames to FC storage arrays. 6-46
Configuring Software FCoE: Create a VMkernel Port Step 1: Connect the VMkernel to physical FCoE NICs that are installed on your host. The VLAN ID and the priority class are discovered during FCoE initialization. The priority class is not configured in vSphere. ESXi supports a maximum of four network adapter ports for software FCoE. Physical adapter: vmnic2 VMkernel label: FCoE-2 VLAN ID: 20 IP address: 172.17.12.150 Subnet mask: 255.255.255.0 vmnic2 VMkernel port NIC with FCoE support 6-47
Configuring Software FCoE: Activate the Software FCoE Adapter Select host > Configuration tab > Storage Adapters link > Add. Step 2: Add the software FCoE adapter. 6-48
Multipathing with Fibre Channel Multipathing enables continued access to SAN LUNs if hardware fails. It also provides load balancing. 6-49
Multipathing with Software FCoE Physical adapter: vmnic2 VMkernel label: FCoE-2 VLAN ID: 20 IP address: 172.17.12.150 Subnet mask: 255.255.255.0 VMkernel ports Physical adapter: vmnic3 VMkernel label: FCoE-3 VLAN ID: 20 IP address: 172.17.12.151 Subnet mask: 255.255.255.0 vmnic2 vmnic3 NICs with FCoE support 6-50
Review of Learner Objectives You should be able to do the following: Describe uses of Fibre Channel with ESXi. Describe Fibre Channel components and addressing. Access Fibre Channel storage. 6-51
Lesson 5: VMFS Datastores 6-52
Learner Objectives After this lesson, you should be able to do the following: Create a VMFS datastore. Increase the size of a VMFS datastore. Delete a VMFS datastore. 6-53
Using a VMFS Datastore with ESXi Use VMFS datastores whenever possible: VMFS is optimized for storing and accessing large files. A VMFS datastore can have a maximum volume size of 64TB. NFS datastores can also store virtual machines, but some functions are not supported. Use RDMs if any of the following conditions are true of your virtual machine: It takes storage-array-level snapshots. It is clustered to a physical machine. It has large amounts of data that you do not want to convert into a virtual disk. 6-54
Creating a VMFS Datastore To create a VMFS datastore, start the Add Storage wizard: 1. Select the storage type Disk/LUN. 2. Select an available LUN. 3. Specify a datastore name. 4. Specify the datastore size: use full or partial LUN. 6-55
Viewing VMFS Datastores Click the Storage link in the Configuration tab 6-56
Browsing Datastore Contents Right-click the datastore in the host's Summary tab, or click the Storage link in the Configuration tab. 6-57
Managing Overcommitted Datastores An overcommitted datastore can occur when the total provisioned space of thin-provisioned disks is greater than the size of the datastore. Actively monitor your datastore capacity: Alarms assist through notifications: Datastore disk overallocation Virtual machine disk usage Use reporting to view space usage. Actively manage your datastore capacity: Increase datastore capacity when necessary. Use VMware vSphere Storage vMotion to mitigate space usage issues on a particular datastore. 6-58
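Overcommitment can be quantified as total provisioned space versus datastore capacity. A small sketch (the sizes are hypothetical examples):

```python
def overcommit_ratio(capacity_gb, provisioned_gb):
    """Ratio of space promised to thin-provisioned disks versus the
    datastore's actual capacity; a value above 1.0 means the
    datastore is overcommitted."""
    return sum(provisioned_gb) / capacity_gb

# a 500GB datastore carrying three thin disks provisioned at 200GB each
ratio = overcommit_ratio(500, [200, 200, 200])
print(f"{ratio:.1f}x provisioned")  # 1.2x provisioned
```

A ratio above 1.0 is not an error by itself, but it is the point at which alarms on disk overallocation and virtual machine disk usage become important.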
Increasing the Size of a VMFS Datastore Increase a VMFS datastore's size to give it more space or possibly to improve performance. Two ways to dynamically increase the size of a VMFS datastore: Add an extent (LUN). Expand the datastore within its extent. 6-59
Comparing Methods for Increasing VMFS Datastore Size Adding an extent to the datastore: virtual machine power state On; the SAN administrator adds one or more LUNs (extents); a datastore can have up to 32 LUNs (extents), with each extent up to 64TB. Expanding the datastore in the extent: virtual machine power state On; the SAN administrator increases the size of the LUN; a LUN can be expanded any number of times, up to 64TB. 6-60
Before Increasing the Size of a VMFS Datastore In general, before making any changes to your storage allocation: Perform a rescan to ensure that all hosts see the most current storage. Quiesce I/O on all disks involved. Record the unique identifier (for example, the NAA ID of the volume that you want to expand). 6-61
Deleting/Unmounting a VMFS Datastore A VMFS datastore can be either deleted or unmounted. 6-62
Multipathing Algorithms Arrays provide different features: some offer active-active storage processors (SPs), others offer active-passive SPs. vSphere 5.1 offers native path selection, load-balancing, and failover mechanisms. Third-party vendors can create their own multipathing software, installed on your ESXi host, that allows the host to properly interact with the storage arrays that it uses. 6-63
Managing Multiple Storage Paths To modify the number of storage paths to use, select the datastore to modify and click that datastore s Properties link. Click Manage Paths in the Properties window. 6-64
Configuring Storage Load Balancing Path selection policies exist for: Scalability: Round Robin, a multipathing policy that performs load balancing across paths. Availability: Most Recently Used (MRU) and Fixed. 6-65
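Round Robin rotates I/O across the active paths to a LUN. A minimal model of the idea (simplified: the real policy switches paths after a configurable I/O count, and the path names here are hypothetical):

```python
from itertools import cycle

class RoundRobinSelector:
    """Rotate across the active paths to a LUN, one path per I/O
    (simplified stand-in for the ESXi Round Robin policy)."""
    def __init__(self, paths):
        self._cycle = cycle(paths)

    def next_path(self):
        return next(self._cycle)

rr = RoundRobinSelector(["vmhba33:C0:T0:L4", "vmhba34:C0:T0:L4"])
for _ in range(4):
    print(rr.next_path())  # alternates between the two paths
```

MRU and Fixed, by contrast, keep issuing I/O on a single path and use the others only for failover, which is why they are listed under availability rather than scalability.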
Lab 8 In this lab, you will create and manage VMFS datastores. 1. Review your shared storage configuration. 2. Change the name of a VMFS datastore. 3. Create a VMFS datastore. 4. Expand a VMFS datastore to consume unused space on a LUN. 5. Remove a VMFS datastore. 6. Extend a VMFS datastore. Ask your instructor which LUNs contain VMFS datastores that should not be removed or reformatted. 6-66
Review of Learner Objectives You should be able to do the following: Create a VMFS datastore. Increase the size of a VMFS datastore. Delete a VMFS datastore. 6-67
Lesson 6: VSA 6-68
Learner Objectives After this lesson, you should be able to do the following: Describe the architecture and requirements of the VMware vSphere Storage Appliance (VSA) cluster configuration. Discuss how a VSA cluster handles failures. 6-69
What Is VSA? Cost-effective, easy-to-deploy shared storage that enables high availability for any small environment 6-70
VSA Benefits Simple manageability Installed, configured, and managed in VMware vCenter Server Abstraction from underlying hardware Resilient to server failures Delivers high availability Highly available during disk (spindle) failure Provides storage framework for vMotion, vSphere HA, and DRS Pools local server disk capacity to form shared storage Creates shared storage Leverages vSphere Thin Provisioning for space utilization Enables storage scalability 6-71
Features of VSA Both VSA for Essentials Plus and VSA support three nodes per cluster, RAID 5/6/10, vCenter Server running on the VSA, and non-disruptive add/replace of hard disk drives. Centralized management of multiple VSA clusters is supported by VSA only. 6-72
Central Management of VSA VSA Cluster 1 VSA Cluster 2 VMware vcenter Server Branch Office 1 Branch Office 2 VSA Cluster 3 VSA Cluster 4 Central Office Branch Office 3 Branch Office 4 6-73
VSA Architecture Each vSphere host in the cluster runs a VSA instance, managed by VSA Manager in vCenter Server through the vSphere Client. Each VSA presents NFS exports that can be mounted by client ESXi hosts, which enables advanced features like vMotion, vSphere HA, and DRS. 6-74
VSA Cluster Requirements Ensure that your environment meets the following requirements: Physical or virtual machine that runs vCenter Server vCenter Server can run on one of the ESXi hosts in the cluster. Two or three physical hosts with ESXi installed All hosts must use the same type of ESXi installation. One Gigabit Ethernet or one 10 Gigabit Ethernet switch Two switches can be used to eliminate single points of failure. Switches must be configured to support the front-end and back-end networks of the VSA cluster. 6-75
VSA Cluster Configuration Requirements The VSA Manager is used to install the VSA cluster. VSA cluster VSA Cluster IP 10.10.120.200 vCenter Server 5.0 10.15.20.100 VSA Manager VSA cluster service 10.15.20.201 VSA Manager 10.10.120.3 NFS Volume IP 10.10.120.5 front end VSA back end vMotion IP 192.168.1.1 Recommended: 24GB of RAM, 4 hard disks, RAID controller, Gigabit Ethernet switches ESXi host 10.15.20.150 6-76
VSA Manager 6-77
VSA Cluster with Two ESXi 5.1 Hosts VSA Manager vcenter Server VSA cluster service Manage Volume 1 Volume 2 (Replica) Volume 2 Volume 1 (Replica) VSA Datastore 1 VSA Datastore 2 Configure RAID 1+0 on local system disks. 6-78
VSA Cluster with Three ESXi 5.1 Hosts vcenter Server VSA Manager VSA Datastore 1 Manage VSA Datastore 2 VSA Datastore 3 Volume 1 Volume 3 (Replica) Volume 2 Volume 1 (Replica) Volume 3 Volume 2 (Replica) 6-79
VSA Resilience vcenter Server VSA Manager VSA cluster service Manage Volume 1 Volume 2 (Replica) VSA Datastore 1 VSA Datastore 2 Volume 2 Volume 1 (Replica) 6-80
Differences Between VSA Clusters and SANs A SAN places volume replicas across spindles within the array; a VSA cluster places volume replicas across servers, using each server's direct-attached storage and a redundant network. 6-81
Review of Learner Objectives You should be able to do the following: Describe the architecture and requirements of the VSA cluster configuration. Discuss how a VSA cluster handles failures. 6-82
Key Points Use VMFS datastores to hold virtual machine files. NFS datastores are useful as a repository for ISO images. Shared storage is integral to vSphere features like vMotion, vSphere HA, and DRS. VSA enables low-end configurations to use vSphere HA, vMotion, and VMware vSphere Storage vMotion without requiring external shared storage. Questions? 6-83