Configuring iSCSI in a VMware ESX Server 3 Environment: Best Practices

Contents

Introduction
iSCSI Explained
    Initiators
    Discovery and Logging On
    Authentication
Designing the Environment
    Network Considerations
    Disk Considerations
    Initiator Considerations
        Software Initiator Specific
        Hardware Initiator Specific
Maximum Configurations
Conclusion
About the Author

Introduction

VMware Infrastructure 3 is the industry's first full infrastructure virtualization suite that empowers enterprises and small businesses alike to transform, manage, and optimize their IT systems infrastructure through virtualization. VMware Infrastructure 3 delivers comprehensive virtualization, management, resource optimization, application availability, and operational automation capabilities in an integrated offering.

With the release of VMware ESX Server 3, VMware has expanded support for storage options available for the VMware VMFS cluster file system. In addition to Fibre Channel SAN storage, VMware Infrastructure 3 adds support for iSCSI SAN storage. The adoption of iSCSI storage is quickly expanding throughout IT datacenters worldwide. The benefits of an iSCSI storage solution are evident:

- An iSCSI SAN provides enterprise-class storage at a fraction of the price of a Fibre Channel SAN.
- An iSCSI SAN can be integrated into an existing Ethernet infrastructure.
- iSCSI storage permits IT departments to deploy their storage solutions using existing resources, without the need for additional hardware or IT staff.

The support of iSCSI SAN in a virtual environment provides IT administrators a low-cost, highly available solution for deploying virtual machines. For the first time, IT departments that could not afford a Fibre Channel SAN can take advantage of advanced VMware Infrastructure functionality such as:

- Redundant storage paths
- A centralized storage environment
- Live migration of running virtual machines with VMware VMotion
- Dynamic allocation and balancing of computing capacity with VMware DRS

This paper provides best practices for configuring and using iSCSI storage with VMware ESX Server 3. To get the most out of this paper, you should understand the basic concepts of virtualization and be familiar with the process of configuring VMware ESX Server.
iSCSI Explained

Internet Small Computer Systems Interface (iSCSI, RFC 3720) is a relatively new SAN technology. Simply put, iSCSI is a means of transport for the SCSI protocol, encapsulating SCSI commands within TCP packets. The commands are encapsulated at the block I/O level, rather than the file I/O level, making the storage appear local to the host operating system. Since all the commands are at the block level, all SCSI rules still apply. For instance, a SCSI storage device cannot be shared without a file system that allows SCSI command sharing. Now that flexible and inexpensive storage can be connected over a standard Ethernet infrastructure, IT administrators have a world of new possibilities for deploying storage solutions.

Initiators

An iSCSI initiator provides a means for an iSCSI client (the ESX Server host) to access storage on the iSCSI target (the storage device). There are two implementations of iSCSI initiators: hardware and software.

The software initiator is a driver that interacts with the ESX Server host's TCP/IP stack to contact the iSCSI target via an existing Ethernet adapter. This adds a significant amount of workload to the host's CPUs, because the iSCSI protocol must be unpackaged and read by the host CPUs, resulting in a performance decrease under any significant I/O load. Software initiators should therefore be restricted to areas where performance is not a requirement, such as development environments. No additional software is needed to configure a software initiator on VMware ESX Server.

A hardware initiator is an adapter card, commonly referred to as a host bus adapter (HBA), that implements connectivity from the iSCSI client to the iSCSI target but does so more efficiently. Rather than utilizing the host's CPU cycles to process the iSCSI protocol, this approach offloads the traffic to the HBA, which does the necessary processing.

Hardware-initiated iSCSI is supported experimentally with ESX Server 3.0. In particular, the QLogic 4010 adapter can be used for test and development environments, but it is not currently recommended for a production environment. Software-initiated iSCSI, on the other hand, is fully supported and is the recommended configuration for production iSCSI deployments with ESX Server 3.0.

Discovery and Logging On

Discovery allows an initiator to find the iSCSI target or targets to which it has access. This requires minimal user configuration. To discover an iSCSI target using a software initiator, the initiator sends a query (SendTargets) to the iSCSI target on a specified TCP port (TCP port 3260 by default). Once an iSCSI target is discovered, the initiator logs on and a TCP/IP connection is created. This connection provides the means for iSCSI data transfer, the negotiation of parameters, and (optionally) authentication. After it logs on, the initiator can begin sending SCSI commands to the target.
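As a sketch of this workflow on the ESX Server 3 service console, the software initiator can be enabled and pointed at a discovery address as shown below. The device name vmhba40 and the target address 10.10.1.20 are examples; verify the exact command options against your ESX Server documentation.

```shell
# Enable the software iSCSI initiator on the host
esxcfg-swiscsi -e

# Add a SendTargets discovery address on the default port 3260
# (10.10.1.20 is an example target address)
vmkiscsi-tool -D -a 10.10.1.20 vmhba40

# Rescan the software initiator to log on to discovered targets
esxcfg-rescan vmhba40
```

After the rescan, any VMFS volumes on the discovered targets appear under the host's storage configuration in the Virtual Infrastructure Client.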
Authentication

iSCSI implementations commonly use the Challenge Handshake Authentication Protocol (CHAP) for authentication. This method is supported in the VMware ESX Server implementation of iSCSI storage. During the CHAP authentication process, the target sends its ID and a random challenge to the initiator. The initiator replies with its ID and a hash value computed over the CHAP identifier, a shared CHAP secret (password), and the challenge. If the hash matches the value the target computes from its own copy of the secret, the target grants access to the initiator. VMware ESX Server supports one set of CHAP credentials per software initiator.

The goal of CHAP is to authenticate the iSCSI client at the target and to prevent any illegitimate access to the target's storage. This is known as unidirectional CHAP. Bidirectional CHAP is the process of authenticating on both the client and target sides. ESX Server supports only unidirectional CHAP authentication. In addition, only one set of authentication credentials can be sent from the ESX Server host; ESX Server does not support per-target authentication credentials.

Note: The following authentication protocols are not supported: Kerberos, Secure Remote Password (SRP), and public key authentication methods for iSCSI. Additionally, IPsec authentication and encryption are not supported with VMware ESX Server. The following discovery and management protocols are also not supported: iSNS and SLP.

Designing the Environment

The following suggestions for implementing an iSCSI solution in a VMware ESX Server environment will help you design a system that provides the highest level of service, improving performance and providing redundancy in the event of a hardware failure.

Network Considerations

The proper network architecture is essential to a healthy, high-performance environment. There are several different ways of approaching network architecture and fine-tuning it to meet the demands of a virtualized environment. Use the following recommendations to guide you in configuring the iSCSI network:

- For iSCSI configurations, ESX Server supports only isolated networks. Inherently, Fibre Channel SANs have an added security advantage over iSCSI SANs because they are based on a physically isolated fabric. In a typical network, every server, workstation, and printer communicates over an Ethernet-based network, the same network used by the iSCSI nodes and hosts. This exposes iSCSI traffic to a wide array of security vulnerabilities as well as degraded performance. VMware recommends dedicated Ethernet switches for iSCSI connections.
- Use Gigabit Ethernet network adapters for iSCSI initiators. 100Mb adapters are not supported for iSCSI connectivity.
- Ensure that the iSCSI interface is set to full duplex or configured to negotiate at full duplex.
- To prevent oversubscription (input from multiple network interfaces being sent to fewer network interfaces) in heavily utilized networks, use multiple network connections on the iSCSI target. Ensure that you do not have multiple virtual machines performing large write operations to a single storage point.
- Use static IP addresses for initiators. ESX Server does not support DHCP for iSCSI connections. Note: If DHCP must be used and the storage is on a public LAN, be sure CHAP authentication is implemented.
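For example, a dedicated virtual switch and VMkernel interface for iSCSI might be created from the service console as follows. The switch name, uplink adapter, and addresses are illustrative values, not prescribed ones; substitute those appropriate to your environment.

```shell
# Create a virtual switch dedicated to iSCSI traffic
esxcfg-vswitch -a vSwitch2

# Add a port group for the VMkernel iSCSI interface
esxcfg-vswitch -A iSCSI vSwitch2

# Attach a dedicated Gigabit adapter as the uplink
esxcfg-vswitch -L vmnic2 vSwitch2

# Create the VMkernel interface with a static IP address
# (DHCP is not supported for iSCSI connections)
esxcfg-vmknic -a -i 10.10.1.11 -n 255.255.255.0 iSCSI
```

Keeping this switch and its uplinks separate from virtual machine port groups isolates iSCSI traffic as recommended above.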
Disk Considerations

Use the following recommendations for configuring the iSCSI storage:

- iSCSI vendors offer a variety of RAID implementations to meet the performance and redundancy needs of different applications. Each LUN should have the right RAID level and storage characteristics for the specific applications running in the virtual machines allocated to the LUN.
- If a virtual machine needs more space than can be allocated from the existing VMware VMFS file system, avoid extending VMware VMFS volumes. Instead, create a new LUN with a new VMware VMFS volume. An extended volume does not balance data across the two physical participants, resulting in hot and cold spots at the target.
- Enable read/write cache on the iSCSI target.
- Where possible, dedicate disk or RAID groups to LUNs that will host VMware VMFS volumes. Remember that multiple hosts will be requesting I/O from the disk or RAID group simultaneously.

Initiator Considerations

By enabling redundant iSCSI initiators and switches, you greatly reduce the possibility of a service outage caused by a hardware failure. It is a good idea to diagram the proposed environment to

identify any single points of failure. Figure 1 is a sample diagram of a fully redundant iSCSI implementation.

Figure 1: A topology providing redundant paths to iSCSI storage

Software Initiator Specific

The following recommendations are important when you are using a software initiator:

- Use a dedicated virtual switch to lower the chances of having network traffic intercepted by potential attackers during transmission. This configuration physically segments virtual machine network traffic and iSCSI traffic.
- You can configure only one software initiator on an ESX Server host.
- When configuring a virtual switch that will provide iSCSI connectivity, bind multiple network connections to the switch to provide redundancy. Ensure that the network adapters bound to the virtual switch originate from separate network switches (see Figure 1) to eliminate any single points of failure.

Hardware Initiator Specific

The following recommendations are important when you are using a hardware initiator:

- To enable path failover for any given LUN, each initiator must have access to that LUN.
- If you define a fixed failover path policy, the storage LUN uses the preferred path you specify whenever it is available. If the preferred path becomes unavailable, the ESX Server host chooses the next available path, then fails back to the preferred path once it becomes available again. Alternatively, the most recently used (MRU) failover policy continues to use the same path until a service interruption; with an MRU policy, there is no failback action. Failover policies are configured from either the Virtual Infrastructure Client or the ESX Server command line.
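From the service console, path policies can be inspected and set with esxcfg-mpath. The LUN name below is illustrative, and option syntax can vary between ESX releases, so confirm it with the command's built-in help before use.

```shell
# List current paths and failover policies for all LUNs
esxcfg-mpath -l

# Set the fixed failover policy for a LUN (illustrative LUN name)
esxcfg-mpath --policy=fixed --lun=vmhba40:0:1
```

The same settings are reachable in the Virtual Infrastructure Client through the LUN's Manage Paths dialog, as shown in Figures 2 and 3.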

Figure 2: Managing the failover path policy from the Virtual Infrastructure Client

- Many iSCSI targets utilize more than one storage processor. This increases performance and provides redundancy in the case of a storage processor or path failure. Depending on the vendor, the target uses either an active/active or active/passive method of accessing a LUN. When using an active/active iSCSI target, use the fixed failover policy. With an active/passive target, use MRU.

Figure 3: Setting a fixed path policy from the Virtual Infrastructure Client

- Manually spread I/O across all available initiators and storage processors. This helps to balance I/O traffic on both the ESX Server host and the iSCSI target. In the event of a failure, the alternate initiator or the alternate storage processor assumes ownership of the LUN.
- Disconnect the HBAs during ESX Server installation when you install an ESX Server host connected to an existing SAN.
- Set maximum queue depths to prevent bottlenecks at the iSCSI initiator during high I/O periods. This example uses the syntax for the QLogic 4010 HBA; verify the syntax for other adapters in the manufacturer's documentation. Find the appropriate device name in /etc/vmware/esx.conf:

      /device/001:02:0/vmkname = "vmhba1"

  Add the following line after the device name:

      /device/001:02:0/options = "ql4xmaxqdepth=nn"

Maximum Configurations

The following list shows supported features and maximum configurations allowed by ESX Server 3.0:

- A maximum of 254 LUNs
- A maximum of 128 VMware VMFS 3 volumes
- A maximum size of 64TB per VMware VMFS 3 volume
- A maximum of 8 iSCSI targets
- Clustering is not supported with iSCSI
- Boot from SAN is not possible with the software initiator; it is supported experimentally by VMware with a hardware initiator

Conclusion

iSCSI is an excellent fit for many VMware ESX Server environments. When iSCSI is implemented correctly, IT staffs can cut costs and save IT resources while surrendering little to no functionality. Implementing the iSCSI environment correctly through careful design and planning can provide a high level of uptime while minimizing the costs associated with a virtual infrastructure deployment.

About the Author

Rob Daly is a Senior Systems Engineer for Foedus. He is a VMware Certified Professional and has worked in the IT industry for nine years. Over the last four years, Rob has concentrated his focus on VMware Infrastructure, designing and implementing a wide variety of VMware ESX Server solutions for companies throughout the United States, including several for Fortune 100 companies.

Revision: 20060912 Item: OP-005-INF-002-110

VMware, Inc. 3145 Porter Drive Palo Alto CA 94304 USA Tel 650-475-5000 Fax 650-475-5001 www.vmware.com

© 2006 VMware, Inc. All rights reserved. Protected by one or more of U.S. Patent Nos. 6,397,242, 6,496,847, 6,704,925, 6,711,672, 6,725,289, 6,735,601, 6,785,886, 6,789,156, 6,795,966, 6,880,022, 6,961,941, 6,961,806 and 6,944,699; patents pending. VMware, the VMware boxes logo and design, Virtual SMP and VMotion are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation. Linux is a registered trademark of Linus Torvalds. All other marks and names mentioned herein may be trademarks of their respective companies.