
Dell EqualLogic Best Practices Series

Supplemental Implementation Guide: Sizing and Best Practices for Deploying Microsoft Exchange Server 2010 on VMware vSphere and Dell EqualLogic Storage

Ananda Sankaran
Storage Infrastructure and Solutions
Dell Product Group

October 2010

Table of Contents

1 Introduction
  1.1 Audience
2 Test System Configuration
3 Server LAN and iSCSI SAN Configuration
  3.1 Server LAN (M6220) Switch Configuration
  3.2 iSCSI SAN (M6348) Switch Configuration
4 VMware vSphere Configuration
  4.1 ESX Host Configuration
  4.2 Virtual Machine Configuration
    4.2.1 ESX Virtual Network Configuration
    4.2.2 Virtual Machine Networking Details
  4.3 Configuring iSCSI Initiator in the Guest OS
  4.4 Configuring iSCSI Initiator within the ESX Host
  4.5 IP Address Assignments
  4.6 Exchange Server Installation and Configuration
    4.6.1 Exchange Server 2010 Requirements
    4.6.2 Steps for Setting Up Windows NLB on HUB/CAS Servers
Appendix A
  A.1 Component Versions
  A.2 PowerConnect M6220 Configuration
  A.3 PowerConnect M6348 Configuration
  A.4 PowerConnect 6248 Configuration

1 Introduction

The goal of this paper is to provide technical information regarding the configuration and implementation of EqualLogic iSCSI SAN components while deploying Microsoft Exchange 2010 on VMware vSphere. The solution configuration we document in this paper uses Dell PowerEdge servers, Dell PowerConnect switches and EqualLogic PS Series storage. This whitepaper illustrates the implementation steps used to configure test systems at Dell Labs.

1.1 Audience

This paper provides a supplemental technical implementation guide for use in conjunction with the following Dell whitepaper: Sizing and Best Practices for Deploying Microsoft Exchange Server 2010 on VMware vSphere and Dell EqualLogic Storage: http://www.delltechcenter.com/page/sizing+and+best++practices+for+microsoft+exchange+2010+on+vmware+vsphere+and+equallogic+storage

This paper is for systems engineers and administrators that need to understand how to configure VMware vSphere and EqualLogic iSCSI SANs for Microsoft Exchange 2010 deployments. Readers should first refer to the Sizing and Best Practices whitepaper mentioned above, and then refer to this paper for additional system configuration guidance.

The information in this document contains configuration details that are specific to a test environment created at Dell Labs. The design and configuration methods in this document provide examples that Solution Architects and Storage Consultants can adapt to their own solution design requirements.

We assume the reader has prior experience in setting up the following:

- Dell PowerEdge servers
- Dell PowerConnect switches
- Dell EqualLogic PS Series SANs
- Microsoft Exchange Server and Windows Server 2008
- VMware ESX and VMware vSphere

Please note that this paper does not provide a comprehensive implementation guide for Exchange 2010 running on VMware vSphere and EqualLogic PS Series storage. Readers should first read the Sizing and Best Practices whitepaper, and then refer to this paper for additional details on the test system configuration used by Dell Labs.

2 Test System Configuration

The test system configuration included Dell PowerEdge blade servers hosted within a Dell PowerEdge M1000e chassis. Figure 1 shows the test system configuration, illustrating the host, switch and SAN connection paths with one M710 blade server. We used the following components:

- Dual PowerConnect M6220 switches: installed in I/O modules A1 and A2, used for host-to-host and Server LAN connectivity.
- Dual PowerConnect M6348 switches: installed in I/O modules B1 and B2, used for server iSCSI SAN connections and uplinked to the top-of-rack SAN switches.
- Dual PowerConnect 6248 switches: installed at the top of the rack, dedicated exclusively for iSCSI SAN traffic and used for iSCSI storage array connections.
- Broadcom 5709 NIC mezzanine cards: installed in each blade server to support connection to the iSCSI SAN via switch fabric B.
- LAN-On-Motherboard (LOM) Broadcom 5709 NIC ports: used in each blade server to connect to the Server LAN via switch fabric A.

Test configuration SAN design references:

- EqualLogic Configuration Guide: http://www.delltechcenter.com/page/equallogic+configuration+guide
- Dell EqualLogic PS Series Network Performance Guidelines: http://www.equallogic.com/resourcecenter/assetview.aspx?id=5229

Dell M1000e high performance I/O architecture references:

- Dell M1000e Modular Enclosure Architecture: http://www.dell.com/downloads/global/products/pedge/en/poweredge-m1000ewhite-paper-en.pdf
- Dell PowerEdge Blade Server and Enclosure Documentation: http://support.dell.com/support/edocs/systems/pem/en/index.htm

Figure 1 Test System Configuration

3 Server LAN and iSCSI SAN Configuration

We used the following networking components in our test configuration:

- Two Dell PowerConnect M6220 Ethernet switches (modular blade): http://www.dell.com/us/en/enterprise/networking/pwcnt_6220/pd.aspx?refid=pwcnt_6220
- Two Dell PowerConnect M6348 Ethernet switches (modular blade): http://www.dell.com/us/en/enterprise/networking/switch-powerconnect-m6348/pd.aspx?refid=switchpowerconnect-m6348
  Each of the M6348 switches connected to the top-of-rack PowerConnect 6248 switches via Link Aggregation Groups (LAG). The LAGs combined two 10GbE uplinks to create 20Gbps total bandwidth using the uplink modules on the respective switches.
- Two Dell PowerConnect 6248 Ethernet switches (48 port, top-of-rack): http://www.dell.com/us/en/enterprise/networking/pwcnt_6248/pd.aspx?refid=pwcnt_6248
  The two top-of-rack 6248 switches are also connected via a 20Gbps LAG group using a second uplink module installed in each switch.
- Broadcom 5709 1GbE NICs (integrated LAN-On-Motherboard plus additional mezzanine cards)
  Fabric A NIC ports on the blade servers connected to the server LAN via a pair of stacked PowerConnect M6220 switch modules installed in the M1000e blade chassis (I/O modules A1 and A2). Fabric B NIC ports on the blade servers connected to the iSCSI SAN via a pair of PowerConnect M6348 switch modules installed in the M1000e blade chassis (I/O modules B1 and B2).

Note: In this test configuration PowerConnect M6220 or M6348 switches could have been used in both Fabric A and Fabric B.

Table 1 and Table 2 show the functional assignments of the blade server switch modules and server NIC port assignments.

  M1000e Chassis Switch Module | Switch Model       | Purpose
  I/O Module A1                | PowerConnect M6220 | Server LAN
  I/O Module A2                | PowerConnect M6220 | Server LAN
  I/O Module B1                | PowerConnect M6348 | iSCSI Storage
  I/O Module B2                | PowerConnect M6348 | iSCSI Storage

Table 1 Switch Module Slot Assignments

  Blade Server                  | I/O Module         | Number of NIC Ports (via mezzanine card) | Purpose
  ESX01, ESX02 (PowerEdge M710) | I/O Module A1      | 2                                        | Server LAN
                                | I/O Module A2      | 2                                        | Server LAN
                                | Total on A Modules | 4                                        |
                                | I/O Module B1      | 2                                        | iSCSI Storage
                                | I/O Module B2      | 2                                        | iSCSI Storage
                                | Total on B Modules | 4                                        |
  INFRA, MGMT (PowerEdge M610)  | I/O Module A1      | 1                                        | Server LAN
                                | I/O Module A2      | 1                                        | Server LAN
                                | Total on A Modules | 2                                        |
                                | I/O Module B1      | 1                                        | iSCSI Storage
                                | I/O Module B2      | 1                                        | iSCSI Storage
                                | Total on B Modules | 2                                        |

Table 5 Blade Server NIC Port Details

The following sections describe implementation details for each of the switches used in the configuration. Some additional considerations were as follows:

- We left spanning tree enabled by default. We made sure that no network loops exist as part of the physical network design.
- Ports 1 through 32 on the M6348 (which serve the internal blade servers in the chassis) do not show the portfast setting enabled in the running switch configuration. Portfast is enabled by default on all server ports (this is not displayed explicitly).
- We made sure flow control was globally enabled on all ports by default.

3.1 Server LAN (M6220) Switch Configuration

We stacked the PowerConnect M6220 switches to provide a single logical switch for host server and virtual machine LAN communications. We also separated the different types of network traffic generated by virtual machines and host management processes into segregated virtual LANs (VLANs).

Key configuration steps:

- Connected the host virtual switch to the PowerConnect M6220 via the blade server NIC ports on fabric A.
- Tagged the VMware vSphere host and guest traffic with VLAN IDs to segregate it at the vSphere host virtual switch, using Virtual Switch Tagging (VST). For more information on VST, see: http://www.vmware.com/pdf/esx3_vlan_wp.pdf
- Configured the physical switch ports connected to the host virtual switch via the server NICs for trunk mode. (This was required to use VST mode in VLAN tagging. See the VMware Knowledge Base document "Sample configuration of virtual switch VLAN tagging (VST Mode) and ESX": http://kb.vmware.com/selfservice/microsites/search.do?language=en_us&cmd=displaykc&externalid=1004074)

- Created four VLANs for server LAN traffic: 101, 102, 103 and 104.

Example 1 shows the commands used for configuring and verifying trunk mode for the M6220 switch. See Example 4 in Appendix A.2 for the full M6220 switch configuration.

Example 1 Enable trunk mode on M6220

  console>en
  console#config
  console(config)#interface range ethernet 1/g1-1/g16
  console(config-if)#
  console(config-if)#switchport trunk allowed vlan add 101,102,103,104
  Warning: The use of large numbers of VLANs or interfaces may cause significant delays in applying the configuration.
  console(config-if)#
  console(config)#interface range ethernet 2/g1-2/g16
  console(config-if)#
  console(config-if)#switchport trunk allowed vlan add 101,102,103,104
  Warning: The use of large numbers of VLANs or interfaces may cause significant delays in applying the configuration.
  console(config-if)#
  console(config)#

Save the running configuration to the startup configuration:

  console# copy running-config startup-config
  This operation may take a few minutes. Management interfaces will not be available during this time.
  Are you sure you want to save? (y/n) y
  Configuration Saved
  console#

Note: You must reload the switches for the settings to take effect.

3.2 iSCSI SAN (M6348) Switch Configuration

A stack or LAG interconnect between the M6348 switches was not required. We created path redundancy across the SAN switch fabric by using dual NICs on the server and interconnecting the top-of-rack switches (the PowerConnect 6248 switches shown in Figure 1).

Key configuration steps:

- Enabled portfast mode on all M6348 and 6248 switch ports connecting to end devices (server NIC ports or array NIC ports).
- Enabled flow control and jumbo frames with MTU = 9216 on all switch ports (see the sketch following Example 2 below).
- Using the two 10GbE uplink modules on each M6348, we created a 20Gbps LAG connection to the respective top-of-rack 6248 switches. We used a single uplink module in each 6248 for this connection. (A second 20Gbps LAG was created between the two top-of-rack 6248 switches using the other available uplink module.)

Example 2 shows the commands used for configuring the M6348 switches. See Example 5 in Appendix A.3 for the full M6348 switch configuration.

Example 2 M6348 Switch Configuration for iSCSI SAN

  console>en
  console#config
  console(config)#interface range ethernet 1/g1-1/g48
  console(config-if)#
  console(config-if)#
  console(config-if)#
  console(config)#interface port-channel 1
  console(config-if-ch1)#no shut
  console(config-if-ch1)#
  console(config)#interface range ethernet 1/xg1-1/xg2
  console(config-if)#channel-group 1 mode on
  console(config-if)#
  console(config-if)#
  console(config)#interface port-channel 1
  console(config-if-ch1)#
  console(config-if-ch1)#
  console(config)#

Save the running configuration to the startup configuration:

  console# copy running-config startup-config
  This operation may take a few minutes. Management interfaces will not be available during this time.
  Are you sure you want to save? (y/n) y
  Configuration Saved
  console#

Note: You must reload the switches for the settings to take effect.
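The key configuration steps above call for portfast, flow control, and jumbo frames on the SAN switch ports, and those per-port commands are not shown in Example 2. The listing below is a minimal sketch of how these settings are typically applied on a PowerConnect M6348 or 6248, assuming the server-facing interface range used in this configuration; refer to Example 5 (Appendix A.3) and Example 6 (Appendix A.4) for the configurations used on the test switches.

  console#config
  console(config)#flowcontrol                            (enable flow control globally)
  console(config)#interface range ethernet 1/g1-1/g32
  console(config-if)#spanning-tree portfast              (edge ports connected to server or array NICs)
  console(config-if)#mtu 9216                            (enable jumbo frames)
  console(config-if)#exit
  console(config)#exit
  console#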

4 VMware vSphere Configuration

We used four Dell PowerEdge blade servers running VMware ESX Server v4.1 as host platforms in our test environment. Figure 2 shows two PowerEdge M710 and two PowerEdge M610 blade servers. Each server is running ESX v4.1 and is hosting two virtual machines.

Figure 2 Test System Configuration: ESX Server Detail

Table 2 describes the role for each blade server we used in the test configuration.

  Blade Server            | Label | Purpose
  PowerEdge M710 (Slot 1) | ESX01 | vSphere host for Exchange Server role virtual machines
  PowerEdge M710 (Slot 2) | ESX02 | vSphere host for Exchange Server role virtual machines
  PowerEdge M610 (Slot 6) | INFRA | vSphere host for Active Directory and Virtual Center virtual machines
  PowerEdge M610 (Slot 7) | MGMT  | vSphere host for load client and monitor client virtual machines

Table 2 Blade Server Assignments

4.1 ESX Host Configuration

We installed VMware ESX Server v4.1 on the Dell PowerEdge blade servers and Microsoft Windows Server 2008 R2 Enterprise Edition in the guest virtual machines running on the ESX hosts.

ESX host configuration overview:

- We configured the two installed disk drives as a RAID 1 set and installed VMware ESX Server there.
- We formatted the remaining local storage as a VMFS partition. The local VMFS data store contained all of the guest VM OS images.
- The Exchange Server databases and logs were stored in volumes hosted by the externally connected EqualLogic iSCSI SAN shown in Figure 1.

Note: For more details describing Exchange 2010 support requirements for hardware virtualized environments, see the following Microsoft TechNet article: Exchange 2010 Requirements, http://technet.microsoft.com/en-us/library/aa996719.aspx

Table 3 shows the ESX host system specifications.

  Host  | CPU                                                                                                     | Memory | Local Disk Drives
  ESX01 | Dell PowerEdge M710, 2 x Quad Core Intel Xeon Processor X5570 (8M Cache, 2.93 GHz, 6.40 GT/s Intel QPI) | 64GB   | 2 x 146GB 15K SAS, RAID 1
  ESX02 | Dell PowerEdge M710, 2 x Quad Core Intel Xeon Processor X5570 (8M Cache, 2.93 GHz, 6.40 GT/s Intel QPI) | 64GB   | 2 x 146GB 15K SAS, RAID 1
  INFRA | Dell PowerEdge M610, 2 x Quad Core Intel Xeon Processor E5520 (8M Cache, 2.26 GHz, 5.86 GT/s Intel QPI) | 16GB   | 2 x 73GB 15K SAS, RAID 1
  MGMT  | Dell PowerEdge M610, 2 x Quad Core Intel Xeon Processor E5520 (8M Cache, 2.26 GHz, 5.86 GT/s Intel QPI) | 16GB   | 2 x 73GB 15K SAS, RAID 1

Table 3 ESX Host Specifications

4.2 Virtual Machine Configuration

We used Microsoft Windows Server 2008 R2 Enterprise Edition as the operating system for each virtual machine. We started the VM OS deployment process by first creating a template virtual machine OS image. In addition to the OS, we installed all required Windows updates and the VMware guest tools. After making further customizations to the image, we captured it for deployment using the Windows Automated Installation Kit (AIK): http://en.wikipedia.org/wiki/windows_automated_installation_kit. We cloned the template image and deployed it to additional virtual machines as necessary.

Table 4 shows the virtual machine resource allocations for each of the VMs running on the ESX hosts. We explicitly reserved the amount of memory assigned to each virtual machine at the ESX hypervisor layer. We used static memory resource allocations for the virtual machines.

  Virtual Machine | ESX Host | vCPUs | Memory | Guest OS Disk Drive | Application Data
  MBX1            | ESX01    | 4     | 48GB   | Host local - 64GB   | External SAN
  HUBCAS1         | ESX01    | 4     | 8GB    | Host local - 24GB   | External SAN
  MBX2            | ESX02    | 4     | 48GB   | Host local - 64GB   | External SAN
  HUBCAS2         | ESX02    | 4     | 8GB    | Host local - 24GB   | External SAN
  VCENTER         | INFRA    | 2     | 4GB    | Host local - 20GB   | VM Local
  AD              | INFRA    | 4     | 8GB    | Host local - 20GB   | VM Local
  CLIENT          | MGMT     | 2     | 4GB    | Host local - 20GB   | VM Local
  MONTR           | MGMT     | 2     | 4GB    | Host local - 20GB   | VM Local

Table 4 Virtual Machine Resource Configuration

4.2.1 ESX Virtual Network Configuration

We configured the virtual switches on the ESX hosts as follows:

- One virtual switch used for iSCSI SAN access, with the NIC ports on Fabric B as uplinks.
- One virtual switch used for server LAN access, with the NIC ports on Fabric A as uplinks.

Table 5 shows the blade server NIC port assignments for each vSwitch. Example 3 shows the vSphere CLI commands used for configuring vSwitch0 in our test configuration. We enabled jumbo frames on the host vSwitch. (The Broadcom 5709 NICs installed on the blade servers support jumbo frames. Note: In ESX 4.1, jumbo frames are not supported when using hardware iSCSI initiators at the ESX layer.)

Note: We used the default load-balancing policy ("Route based on the originating virtual switch port ID") at the vSwitch for load balancing across multiple NIC uplinks. Reference: www.vmware.com/files/pdf/virtual_networking_concepts.pdf

Example 3 Sample vSphere commands for vSwitch configuration

  esxcfg-vswitch -m 9000 vSwitch0                        (to enable jumbo frames with 9000 MTU)
  esxcfg-vswitch -L vmnic1 vSwitch0                      (to add an enumerated server NIC to the vSwitch)
  esxcfg-vswitch -p "Service Console" -v 101 vSwitch0    (to assign a VLAN ID to the service console port group)

  vSwitch  | Purpose       | Server       | Server NICs as vSwitch Uplinks
  vSwitch0 | Server LAN    | ESX01, ESX02 | vmnic0, vmnic1, vmnic2, vmnic3
  vSwitch0 | Server LAN    | INFRA, MGMT  | vmnic0, vmnic1
  vSwitch1 | iSCSI Storage | ESX01, ESX02 | vmnic6, vmnic7, vmnic8, vmnic9
  vSwitch1 | iSCSI Storage | INFRA, MGMT  | vmnic2, vmnic3

Table 5 vSphere vSwitch and NIC Assignments

Note: We did not use the physical NIC ports and their corresponding vmnics for fabric C. For more information on vmnic enumeration within vSphere hosts, see: Networking Best Practices: VMware vSphere 4 on Dell Blade Servers, http://content.dell.com/us/en/enterprise/d/business~solutions~engineeringdocs~en/documents~networkingguide_vsphere4_blades.pdf.aspx

We segregated network traffic by using VLANs for the different classes of traffic (tagged packets) at the virtual switch layer. (This does not apply to the iSCSI storage LAN, vSwitch1 in our configuration.) Table 6 shows the VLAN and port group assignments (on vSwitch0, the Server LAN) for the vSphere hosts.

  VLAN ID | Port Group Label   | Port Group Type | Purpose
  101     | Service Console    | Service Console | Service console management traffic
  102     | VMkernel           | VMkernel        | Kernel level traffic (e.g. vMotion)
  103     | VM Network Public  | Virtual Machine | Guest VM traffic (Exchange client/server LAN)
  104     | VM Network Private | Virtual Machine | Guest VM traffic (i.e. Exchange server private replication)

Table 6 vSwitch0 VLAN and Port Group Assignment on Each vSphere Host

Table 7 shows the VLAN assignments for the VMs used in the test configuration.

  Virtual Machine | vSphere Host | vSwitch0 VLANs | Purpose
  MBX1            | ESX01        | 103, 104       | Exchange Server 2010 Mailbox role
  HUBCAS1         | ESX01        | 103            | Exchange Server 2010 combined Hub Transport and Client Access roles
  MBX2            | ESX02        | 103, 104       | Exchange Server 2010 Mailbox role
  HUBCAS2         | ESX02        | 103            | Exchange Server 2010 combined Hub Transport and Client Access roles
  VCENTER         | INFRA        | 101, 103       | vSphere Virtual Center server for management
  AD              | INFRA        | 103            | Windows Server 2008 Active Directory
  CLIENT          | MGMT         | 103            | Load generation client
  MONTR           | MGMT         | 103            | Storage and server monitoring client

Table 7 Virtual Machine Assignments

Note: We created an additional virtual machine port group with VLAN 101 (on ESX host INFRA). This VLAN was used for VMware vCenter communication with the service consoles.
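As a companion to Table 6, the port groups and VLAN tags on vSwitch0 can be created with the same esxcfg-vswitch syntax shown in Example 3. The listing below is a minimal sketch using the port group labels and VLAN IDs from Table 6; it is illustrative rather than a listing captured from the test hosts.

  esxcfg-vswitch -A "VM Network Public" vSwitch0          (create the guest LAN port group)
  esxcfg-vswitch -v 103 -p "VM Network Public" vSwitch0   (tag it with VLAN 103)
  esxcfg-vswitch -A "VM Network Private" vSwitch0         (create the replication port group)
  esxcfg-vswitch -v 104 -p "VM Network Private" vSwitch0  (tag it with VLAN 104)
  esxcfg-vswitch -l                                       (list vSwitches and port groups to verify)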

For the host ESX01, Figure 3 shows the network path between the virtual machines and the SAN. Note in Figure 3 how all iSCSI SAN traffic is consolidated on vSwitch1 and Fabric B.

Figure 3 vSwitch and Physical NIC Port Mapping on Host ESX01

Figure 4 shows the vSwitch0 configuration on host ESX01.

Figure 4 vSwitch0 Configuration for Host ESX01

The vSwitch0 configuration for ESX host INFRA is shown in Figure 5 below.

Figure 5 vSwitch0 Configuration for Host INFRA

Table 8 shows the configuration for vSwitch1 when using the Windows 2008 R2 software initiator within the guest VM:

- The iSCSI initiator within the VM was used to access the external volumes on the EqualLogic SAN in this configuration.

- The EqualLogic Host Integration Toolkit (HIT) was installed in the VM to provide the EqualLogic DSM (Device Specific Module) and enable MPIO (Multi-path I/O) to storage. The HIT is available at the following download location (support login ID required): https://www.equallogic.com/support/download.aspx?id=3199

  Virtual Machine | vSphere Host | Port Group Label   | Port Group Type | Purpose
  MBX1            | ESX01        | VM Network - iSCSI | Virtual Machine | Exchange Mailbox Storage Access
  HUBCAS1         | ESX01        | VM Network - iSCSI | Virtual Machine | Queue Database Storage Access
  MBX2            | ESX02        | VM Network - iSCSI | Virtual Machine | Exchange Mailbox Storage Access
  HUBCAS2         | ESX02        | VM Network - iSCSI | Virtual Machine | Queue Database Storage Access
  MONTR           | MGMT         | VM Network - iSCSI | Virtual Machine | Storage Monitoring Access

Table 8 vSwitch1 Configuration for Using iSCSI Initiator within Virtual Machine

Figure 6 shows the vSwitch1 configuration for host ESX01.

Figure 6 vSwitch Configuration for Host ESX01 Using iSCSI Initiator within Guest VM

Note: We configured the load balancing policy at the vSwitch to balance load across the multiple NIC uplinks. The default policy was used ("Route based on the originating virtual switch port ID"). For further information, see the following VMware Information Guide article: VMware Virtual Networking Concepts, http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf

When using the VMware ESX software iSCSI initiator within the vSphere host, the following configuration steps were taken for vSwitch1:

- The ESX hypervisor directly accessed the volumes on the EqualLogic SAN using the ESX iSCSI initiator. We used the VMware VMFS3 filesystem to format the volumes.
- We created virtual hard drives (vmdk files) on the VMFS3 formatted volumes. These virtual hard drives were formatted with the NTFS file system by the guest OS (Windows Server 2008 R2).
- Instead of using the Native Multipathing (NMP) available in ESX, we used the EqualLogic Multipathing Extension Module (MEM) for vSphere for iSCSI connection management and load balancing.

Figure 7 shows the configuration of the virtual switch ("vswitchiscsi") for iSCSI access. We used the software iSCSI initiator provided by the ESX host in this configuration. We created four VMkernel ports and attached them to the ESX iSCSI software initiator. We assigned a physical NIC uplink exclusively to each port.

Figure 7 vSwitch Configuration for Host ESX01 Using ESX Software iSCSI Initiator

If you choose to use the iSCSI software initiator in the ESX host, you should take advantage of EqualLogic aware connection and path management by installing the EqualLogic Multipathing Extension Module (MEM) for vSphere 4.1. The EqualLogic MEM offers:

- Easy installation and configuration
- Automatic connection management
- Automatic load balancing across multiple active paths
- Increased bandwidth
- Reduced network latency
- Automatic failure detection and failover
- Multiple connections to a single iSCSI host

Note: Detailed steps for configuring MEM and the ESX host initiator with EqualLogic storage are provided in the following document: Configuring and Installing the EqualLogic Multipathing Extension Module for VMware vSphere 4.1 and PS Series SANs: http://www.equallogic.com/resourcecenter/assetview.aspx?id=9823

The iSCSI initiator you choose to use (the initiator in the ESX host or the initiator within the guest OS) will depend on factors such as backup and recovery process requirements and compatibility with other management tools used. If you use the iSCSI initiator within the guest OS (running Microsoft Windows 2008 Server), then you will be able to use EqualLogic Auto Snapshot Manager (ASM) / Microsoft Edition (ME). ASM/ME provides the ability to create consistent, Exchange-aware snapshots or clones of data volumes using storage hardware based snapshots or clones. It also enables other third party backup solutions based on Windows VSS (Volume Shadow Copy Service) to utilize storage hardware based snapshots or clones for Exchange Server data backup. You should investigate other solutions for providing application-consistent backups if you use the iSCSI initiator provided by the ESX host operating system.

Even if you use the iSCSI initiator in the Windows guest VM for the Exchange Server deployment, we recommend that you still use the ESX iSCSI initiator on the host, along with the EqualLogic MEM, for other virtual machines and applications that need it. In this case, we recommend you configure separate virtual switches for each storage access path as follows:

- One vSwitch with its own set of NIC uplinks for iSCSI guest initiators
- One vSwitch with a second set of uplinks for the ESX software or hardware initiator

4.2.2 Virtual Machine Networking Details

Virtual NICs were configured within the VMs connecting to the virtual switch (vSwitch0) configured for server LAN access on the respective VLANs. The virtual network adapter assignments for the VMs (using the Windows 2008 R2 iSCSI initiator within the VM) are as follows:

- The native OS iSCSI initiator within the guest VM was used to connect the data volumes on EqualLogic iSCSI storage.
- Virtual NICs were configured within the VM connecting to the virtual switch (vSwitch1) configured for storage.
- The EqualLogic Host Integration Toolkit (HIT) was installed in the VM to provide the EqualLogic DSM (Device Specific Module) and enable MPIO (Multi-path I/O) to storage. The HIT is available at the following download location (support login ID required): https://www.equallogic.com/support/download.aspx?id=3199

Table 9 shows the virtual machine and virtual NIC adapter assignments for the configuration using the iSCSI initiator within the guest VM.

  Virtual Machine | Virtual Network Adapter | Adapter Type | vSwitch  | Port Group                   | VLAN
  MBX1            | Network Adapter 1       | E1000        | vSwitch0 | VM Network - Public          | 103
  MBX1            | Network Adapter 2       | E1000        | vSwitch0 | VM Network - Private         | 104
  MBX1            | Network Adapter 3       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  MBX1            | Network Adapter 4       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  MBX1            | Network Adapter 5       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  MBX1            | Network Adapter 6       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  HUBCAS1         | Network Adapter 1       | E1000        | vSwitch0 | VM Network - Public          | 103
  HUBCAS1         | Network Adapter 2       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  HUBCAS1         | Network Adapter 3       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  HUBCAS1         | Network Adapter 4       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  HUBCAS1         | Network Adapter 5       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  MBX2            | Network Adapter 1       | E1000        | vSwitch0 | VM Network - Public          | 103
  MBX2            | Network Adapter 2       | E1000        | vSwitch0 | VM Network - Private         | 104
  MBX2            | Network Adapter 3       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  MBX2            | Network Adapter 4       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  MBX2            | Network Adapter 5       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  MBX2            | Network Adapter 6       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  HUBCAS2         | Network Adapter 1       | E1000        | vSwitch0 | VM Network - Public          | 103
  HUBCAS2         | Network Adapter 2       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  HUBCAS2         | Network Adapter 3       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  HUBCAS2         | Network Adapter 4       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  HUBCAS2         | Network Adapter 5       | VMXNET 3     | vSwitch1 | VM Network - iSCSI           | N/A
  AD              | Network Adapter 1       | E1000        | vSwitch0 | VM Network - Public          | 103
  VCENTER         | Network Adapter 1       | E1000        | vSwitch0 | VM Network - Service Console | 101
  VCENTER         | Network Adapter 2       | E1000        | vSwitch0 | VM Network - Public          | 103
  MONTR           | Network Adapter 1       | E1000        | vSwitch0 | VM Network - Public          | 103
  MONTR           | Network Adapter 2       | E1000        | vSwitch1 | VM Network - iSCSI           | N/A

Table 9 Virtual Adapter Assignments for VMs (Using iSCSI Initiator within VM)

Figure 8 shows the virtual network adapter and VLAN connectivity paths for the virtual machine MBX1 using the guest iSCSI initiator.

Figure 8 Virtual Network Configuration Using Guest Software iSCSI Initiator

Figure 9 shows the virtual network adapter assignments for VM MBX1 with the iSCSI initiator in the guest OS.

Figure 9 Virtual Network Adapter Assignments for VM MBX1 with iSCSI Initiator in Guest OS

The virtual adapter assignments for the VMs when using the iSCSI initiator within ESX are as follows. The virtual machines had the same virtual adapter assignments as in the table above, except that the virtual adapters for iSCSI traffic were removed. We exposed the hard disks for the virtual machines as virtual hard drives (vmdk) from the ESX host layer. The ESX host software iSCSI initiator connected via VMkernel ports to the virtual switch for storage access.

Figure 10 shows the virtual machine network adapter and VLAN connectivity using the software iSCSI initiator provided by the VMware ESX server.

Figure 10 Virtual Network Configuration Using ESX Software iSCSI Initiator

4.3 Configuring iSCSI Initiator in the Guest OS

For the test configuration using the iSCSI initiator within the guest OS:

- We created a separate vSwitch (vSwitch1) for storage access (a sketch of this vSwitch setup follows this list).
- We assigned the physical blade server NIC ports connecting to fabric B on the blade chassis to be used as uplinks to vSwitch1.
- Load-balancing policy:
  o "Route based on the originating virtual switch port ID": A given VM cannot use more than one physical uplink if it has only one virtual NIC with this load balancing policy. Thus we configured at least four virtual NICs to use all four physical uplinks for VM traffic.
- We created virtual network adapters of type VMXNET 3 (Figure 7 above) and assigned them to the Exchange Server 2010 guest virtual machines for storage access (e.g., mailbox role).
  o We assigned these adapters to vSwitch1 on the vSphere host.
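The following is a minimal sketch of how this storage vSwitch could be created from the ESX service console, using the vSwitch1 name, the fabric B vmnics from Table 5, and the port group label from Table 8. It is illustrative; the exact commands used on the test hosts are not listed here.

  esxcfg-vswitch -a vSwitch1                       (create the storage vSwitch)
  esxcfg-vswitch -m 9000 vSwitch1                  (enable jumbo frames)
  esxcfg-vswitch -L vmnic6 vSwitch1                (add the fabric B uplinks; repeat for vmnic7, vmnic8 and vmnic9)
  esxcfg-vswitch -A "VM Network - iSCSI" vSwitch1  (port group used by the guest iSCSI virtual NICs)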

Note: For more information see:

- Networking Best Practices: VMware vSphere 4 on Dell Blade Servers, http://content.dell.com/us/en/enterprise/d/business~solutions~engineeringdocs~en/documents~networkingguide_vsphere4_blades.pdf.aspx
- VMware vSphere Online Library: TCP Segmentation Offload and Jumbo Frames, http://pubs.vmware.com/vsp40/wwhelp/wwhimpl/js/html/wwhelp.htm#href=server_config/c_tcp_segmentation_offload_and_jumbo_frames.html
- VMware Virtual Networking Concepts, http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf

Within the Windows Server 2008 R2 guest OS, the following properties were enabled on each of the four virtual adapters configured for storage access:

- Jumbo Frames (9000)
- Large Segment Offload (LSO)
- TCP Segment Offload (TSO)
- Windows Receive Side Scaling

We installed the Dell EqualLogic Host Integration Toolkit (HIT) for Windows within the guest OS. This installs the EqualLogic DSM for the Windows Server MPIO framework. The DSM provides multi-path optimizations tailored to the EqualLogic storage arrays. We configured the MPIO settings to a maximum of four iSCSI connections per array and 12 total iSCSI connections per volume. (See: Configuring and Deploying the Dell EqualLogic Multipath I/O Device Specific Module (DSM) in a PS Series SAN: http://www.equallogic.com/resourcecenter/assetview.aspx?id=5255) These settings best enabled load balancing of iSCSI traffic for a given storage volume across the four virtual adapters within the guest OS. Traffic from the four virtual adapters is then load balanced in turn across the four physical uplink NIC ports on the server host by the vSwitch.

Figure 11 shows the EqualLogic DSM MPIO settings. This configuration dialog box can be accessed by launching the Remote Setup Wizard from the Dell EqualLogic MPIO tab of the Microsoft software iSCSI initiator.

Figure 11 MPIO Settings

We created data volumes on the Dell EqualLogic storage pool and configured them with the iSCSI initiator name based access method. (Other methods include CHAP or IP address based access.) The iSCSI initiator name is found in the Configuration tab of the Microsoft software iSCSI initiator, as shown in Figure 12.

Figure 12 Initiator Name from Microsoft Software Initiator

Once we configured the volumes in storage with the access information, we discovered them by adding the EqualLogic group address to the target portal list in the Microsoft software initiator. This is shown in Figure 13. After clicking the Refresh button in the Targets tab of the initiator, the available volumes were displayed as shown in Figure 14.
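These discovery steps were performed in the iSCSI Initiator control panel. The same operations can also be driven from the command line with the built-in iscsicli utility; the sketch below uses the EqualLogic group IP address from Table 10, and the target IQN placeholder must be replaced with an actual target name reported by ListTargets.

  iscsicli AddTargetPortal 172.16.201.200 3260   (add the EqualLogic group IP as a target portal)
  iscsicli ListTargets                           (list the iSCSI targets discovered through the portal)
  iscsicli QLoginTarget <target-iqn>             (quick login to a discovered target)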

Figure 13 iSCSI Target Discovery

Figure 14 Target Volumes Discovered

We connected to each volume by selecting it and clicking Connect on the Targets tab. This is shown in Figure 15. You will see the target status change from Inactive to Connected, as shown in Figure 16.

Figure 15 Connecting to the Target Volumes

Figure 16 All Volumes Connected

We verified the actual connections set up by the Dell EqualLogic DSM MPIO from the Dell EqualLogic MPIO tab of the initiator. This is shown in Figure 17.

Figure 17 MPIO Connections

After connecting to the volumes, we used the Windows Disk Management tool to bring the disks online, partition them, and format them with NTFS.

4.4 Configuring iSCSI Initiator within the ESX Host

The configuration in the previous section used the Microsoft Windows 2008 R2 iSCSI initiator within the virtual machine to connect to the iSCSI volumes on the EqualLogic SAN. Those iSCSI volumes were directly accessed by the Windows 2008 VM and natively formatted as NTFS. In this section we describe the configuration with the following key differences in the iSCSI initiator stack:

- The ESX hypervisor connected directly to the volumes on the EqualLogic SAN using the ESX software iSCSI initiator, and the volumes were natively formatted using the VMware VMFS3 filesystem.

- We created virtual hard drives on the VMFS3 formatted volumes. These virtual hard drives were formatted with the NTFS file system by the guest OS (Windows Server 2008 R2).
- Instead of using the Native Multipathing (NMP) available in ESX, we used the EqualLogic Multipathing Extension Module (MEM) for vSphere for iSCSI connection management and load balancing.

For this access mode:

- We configured a separate vSwitch labeled "vswitchiscsi" for ESX iSCSI traffic and configured it to use the fabric B NICs as uplinks.
- We created four VMkernel ports on the vSwitch, with each physical NIC uplink assigned exclusively to a port.
- We enabled jumbo frames (MTU 9000) at the vSwitch and the VMkernel ports.
- We assigned the VMkernel ports to the ESX host software iSCSI initiator.
- We used the EqualLogic MEM (Multipathing Extension Module for VMware vSphere) plug-in for the ESX Pluggable Storage Architecture (PSA) to take advantage of EqualLogic aware multipath I/O. The EqualLogic MEM provides iSCSI multipath optimizations at the ESX host level (when using an iSCSI initiator on the ESX host). The number of EqualLogic MEM paths per member array was set to 4 and per volume to 12.

A command-level sketch of these steps follows the note below.

Note: See the following documents for detailed steps on how to configure MEM and vSphere software iSCSI with EqualLogic storage:

- Configuring and Installing the EqualLogic Multipathing Extension Module for VMware vSphere 4.1 and PS Series SANs: http://www.equallogic.com/resourcecenter/assetview.aspx?id=9823
- Configuring VMware vSphere Software iSCSI with Dell EqualLogic PS Series Storage: http://www.equallogic.com/resourcecenter/assetview.aspx?id=8453
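The listing below is a minimal sketch of these steps using the standard ESX 4.1 service console utilities. The VMkernel port group names, IP address, and vmhba number are illustrative placeholders; the EqualLogic MEM itself is installed and configured with the setup script shipped in the MEM package, as described in the documents referenced above.

  esxcfg-swiscsi -e                                 (enable the ESX software iSCSI initiator)
  esxcfg-vswitch -a vswitchiscsi                    (create the iSCSI vSwitch)
  esxcfg-vswitch -m 9000 vswitchiscsi               (enable jumbo frames on the vSwitch)
  esxcfg-vswitch -L vmnic6 vswitchiscsi             (add a fabric B uplink; repeat for each NIC)
  esxcfg-vswitch -A iSCSI1 vswitchiscsi             (create a VMkernel port group; repeat for iSCSI2 through iSCSI4)
  esxcfg-vmknic -a -i 172.16.201.51 -n 255.255.255.0 -m 9000 iSCSI1   (VMkernel port with MTU 9000; the IP is a placeholder)
  esxcli swiscsi nic add -n vmk1 -d vmhba33         (bind the VMkernel port to the software initiator; repeat per port)

Restricting each VMkernel port group to a single active uplink is done through the port group NIC teaming settings (or by the MEM setup script), as described in the referenced documents.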

4.5 IP Address Assignments

Table 10 illustrates the IP address scheme and assignments for the various systems and components used in the test configuration.

Server IP convention: 172.16.[ABC].[XY], where ABC = VLAN ID, X = ESX host number and Y = VM sequence within the host. (For example, MBX1 is the second VM on host ESX01, which is host number 3, so its public address on VLAN 103 is 172.16.103.32.)

Storage IP convention: 172.16.201.2[MN], where M = array number and N = NIC sequence within the array.

Note: Netmask used for all test bed subnets: 255.255.255.0

vSphere ESX Hosts

  Entity | IP Address   | VLAN | Purpose
  INFRA  | 172.16.101.1 | 101  | Management - Service Console
  MGMT   | 172.16.101.2 | 101  | Management - Service Console
  ESX01  | 172.16.101.3 | 101  | Management - Service Console
  ESX02  | 172.16.101.4 | 101  | Management - Service Console

Virtual Machines

  ESX Host: INFRA
  AD      | 172.16.103.11 | 103 | VM Network - Public
  VCENTER | 172.16.101.6  | 101 | Management - Service Console access to hosts
  VCENTER | 172.16.103.12 | 103 | VM Network - Public

  ESX Host: MGMT
  CLIENT  | 172.16.103.21 | 103 | VM Network - Public
  MONTR   | 172.16.103.22 | 103 | VM Network - Public
  MONTR   | 172.16.201.21 | N/A | VM Network - iSCSI (Monitoring)

  ESX Host: ESX01
  HUBCAS1 | 172.16.103.31 | 103 | VM Network - Public
  MBX1    | 172.16.103.32 | 103 | VM Network - Public
  MBX1    | 172.16.104.32 | 104 | VM Network - Private (Exchange cluster communications)
          | 172.16.201.31 | N/A | VM Network - iSCSI (Guest VM initiator)
          | 172.16.201.32 | N/A | VM Network - iSCSI (Guest VM initiator)
          | 172.16.201.33 | N/A | VM Network - iSCSI (Guest VM initiator)
          | 172.16.201.34 | N/A | VM Network - iSCSI (Guest VM initiator)

  ESX Host: ESX02
  HUBCAS2 | 172.16.103.41 | 103 | VM Network - Public
  MBX2    | 172.16.103.42 | 103 | VM Network - Public
  MBX2    | 172.16.104.42 | 104 | VM Network - Private (Exchange cluster communications)
          | 172.16.201.41 | N/A | VM Network - iSCSI (Guest VM initiator)
          | 172.16.201.42 | N/A | VM Network - iSCSI (Guest VM initiator)
          | 172.16.201.43 | N/A | VM Network - iSCSI (Guest VM initiator)
          | 172.16.201.44 | N/A | VM Network - iSCSI (Guest VM initiator)

EqualLogic Storage

  Group IP | 172.16.201.200 | N/A | Group IP Address
  EQL1     | 172.16.201.211 | N/A | Member 1 Ethernet port 1
  EQL1     | 172.16.201.212 | N/A | Member 1 Ethernet port 2
  EQL1     | 172.16.201.213 | N/A | Member 1 Ethernet port 3
  EQL1     | 172.16.201.214 | N/A | Member 1 Ethernet port 4

  Additional arrays EQL2, EQL3 and EQL4 are configured as needed with the same convention.

Table 10 IP Address Scheme and Assignments

4.6 Exchange Server Installation and Configuration

4.6.1 Exchange Server 2010 Requirements

- System requirements, including virtualization requirements: http://technet.microsoft.com/en-us/library/aa996719.aspx
- Planning and deployment: http://technet.microsoft.com/en-us/library/aa998636.aspx
- Prerequisites for installation: http://technet.microsoft.com/en-us/library/bb691354.aspx
- Installation of Exchange Server 2010: http://technet.microsoft.com/en-us/library/bb124778.aspx

Figure 18 shows the physical and logical system components used in the Exchange Server DAG test configuration.

Figure 18 Sample Physical and Logical Configurations for Exchange Server DAG

The actual server connectivity to storage is identical to the configuration shown earlier. We deployed two PowerEdge M710 servers for the Exchange Server 2010 roles. Each M710 server hosted two virtual machines: one for the Exchange Server 2010 Mailbox role and another for the Exchange Server 2010 Hub Transport / Client Access Server (CAS) roles. We configured the Mailbox Server VM with 8 vCPUs and 48GB of memory (reserved in the ESX host). The Hub/CAS VM was configured with 4 vCPUs and 6GB of memory (reserved).

We configured the two mailbox server VMs across the two blade servers as members of an Exchange 2010 DAG. This way, if one of the servers fails, the other server in the DAG can continue hosting the databases that were active on the failed server via a failover process. In a DAG, passive database copies are maintained on the secondary server and kept synchronized with the active copies via log replication. We configured one of the Hub/CAS servers as the file share witness for the DAG.

We configured the two Hub/CAS VMs as members of a Network Load Balancing (NLB) cluster (unicast mode and single affinity). We used one virtual NIC for NLB and another for public access. The NLB cluster was used to load balance the Exchange CAS services. We configured each Hub/CAS VM with two virtual NICs for server LAN connectivity: one for public access and the other for NLB private connectivity (unicast NLB mode). Each VM was also configured with 4 virtual NICs of type VMXNET 3 for iSCSI connectivity when using the iSCSI initiator within the guest OS. We hosted the Hub Transport queue database and logs on a separate volume on the same group member array used by their respective mailbox servers.

We configured each mailbox VM to use two virtual NICs for server LAN connectivity: one for public access and the other for DAG replication. Also, each mailbox VM was configured with 4 additional virtual NICs of type VMXNET 3 for iSCSI connectivity when using the iSCSI initiator in the guest OS. We used the Microsoft iSCSI initiator in the guest Windows Server 2008 R2 OS for storage connectivity when using guest iSCSI connectivity. We installed the EqualLogic MPIO DSM (HIT kit) in these VMs for EqualLogic aware multi-path I/O to the EqualLogic SAN.

On each ESX host, we connected the server LAN virtual NICs for all VMs to vSwitch0 and the storage iSCSI virtual NICs for all VMs (guest iSCSI initiator) to vSwitch1. vSwitch0 used the onboard Broadcom 5709 1GbE NICs on server Fabric A to connect to the external M6220 switches for server LAN access. vSwitch1 used the 1GbE Broadcom 5709 NICs on Fabric B to connect to the external M6348 switches for storage access.

We used two PowerEdge M610 servers in the same M1000e blade chassis to host the other test components. One M610 server hosted the Active Directory VM and the vSphere Virtual Center VM. The second M610 server hosted the client VM used for executing the simulation tool Loadgen and the monitoring VM used for running Windows PerfMon and EqualLogic SAN Headquarters.

When using a DAG for mailbox server replication, running load-balancing services such as Windows NLB on the same server as the mailbox server is not a supported configuration. Separate servers are required for hosting mailbox services and Hub/CAS services when using Windows NLB for load balancing.

4.6.2 Steps for Setting Up Windows NLB on HUB/CAS Servers

High Level NLB Installation and Exchange Server 2010 Client Access Configuration Steps

- NLB installation: http://technet.microsoft.com/en-us/library/cc731695(ws.10).aspx
- Install NLB using defined best practices on both HUBCAS servers (unicast mode preferred). Set the NLB cluster to load balance only the TCP ports used by the CAS services to be configured/deployed.
- Set up the client access array within Exchange to provide a single client access service name and IP. This will be the NLB cluster name and IP. The client access array can be set up using the New-ClientAccessArray cmdlet within the Exchange Management Shell (EMS); see the sketch at the end of this section. Create client access array: http://technet.microsoft.com/en-us/library/dd351149.aspx
- Move the Hub Transport queue database to another storage location (from the default location) as needed: http://technet.microsoft.com/en-us/library/bb125177.aspx Other storage locations could be on separate hard drives on the server or on the external SAN.

High Level Exchange Server 2010 Mailbox Server DAG Installation Steps

- Create the DAG using the Exchange Management Console (EMC) or Exchange Management Shell (EMS).
- Use the Exchange Management Shell to set the DAG cluster IP address via the Set-DatabaseAvailabilityGroup cmdlet.
- Exclude the iSCSI NICs/network as a network for the DAG:
  1. Use the Set-DatabaseAvailabilityGroupNetwork cmdlet in EMS (IgnoreNetwork option).
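As a reference for the steps above, the following is a minimal PowerShell and Exchange Management Shell sketch. The feature installation runs on each HUB/CAS VM, and the array name, FQDN, site, DAG name, IP address, and DAG network name are illustrative placeholders rather than values from the test configuration.

  # Install the Windows NLB feature on each HUB/CAS server (Windows Server 2008 R2)
  Import-Module ServerManager
  Add-WindowsFeature NLB

  # Create the client access array that the NLB cluster name and IP resolve to (names are placeholders)
  New-ClientAccessArray -Name "CASArray1" -Fqdn "casarray.example.local" -Site "Default-First-Site-Name"

  # Set the DAG cluster IP address (DAG name and IP are placeholders)
  Set-DatabaseAvailabilityGroup -Identity "DAG1" -DatabaseAvailabilityGroupIpAddresses 172.16.103.50

  # Exclude the iSCSI network from DAG use (network name is a placeholder)
  Set-DatabaseAvailabilityGroupNetwork -Identity "DAG1\DAGNetwork03" -IgnoreNetwork:$true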

Appendix A

A.1 Component Versions

  Component                        | Version
  Application                      | Microsoft Exchange Server 2010 Enterprise Edition
  Guest Operating System           | Windows Server 2008 Enterprise Edition
  Virtualization hypervisor        | VMware vSphere 4.1
  Virtualization management        | VMware vCenter 4.1
  Jetstress                        | 14.00.0639.013
  Loadgen                          | 14.00.0639.013
  M1000e CMC firmware              | 2.30
  M710 BIOS                        | 2.0.13
  M610 BIOS                        | 2.0.13
  M710 Broadcom 5709 NIC firmware  | 5.0.13, A13
  PowerConnect M6220 firmware      | 3.1.3.9
  PowerConnect M6348 firmware      | 3.1.3.9
  PowerConnect 6248 firmware       | 3.2.0.6
  EqualLogic HIT Kit (Windows DSM) | 3.3.1
  EqualLogic Array firmware        | 5.000

Table 11 Component Versions

A.2 PowerConnect M6220 Configuration

Example 4 PowerConnect M6220 Configuration

  console#show running-config
  Current Configuration:
  System Description PowerConnect M6220, 3.1.3.9, VxWorks 6.5
  System Software Version 3.1.3.9
  System Operational Mode Normal
  configure
  vlan database
  vlan 101-104
  stack
  member 1 1
  member 2 1
  interface out-of-band
  ip address none
  ip address 192.168.71.247 255.255.255.0 192.168.71.254
  ip address 192.168.2.1 255.255.255.0
  interface vlan 101
  name Service Console
  interface vlan 102
  name VM Kernel
  interface vlan 103
  name VM Public
  interface vlan 104
  name VM Private
  username root password e6e66b8981c1030d5650da159e79539a level 15 encrypted
  interface ethernet 1/g1
  interface ethernet 1/g2
  interface ethernet 1/g3
  interface ethernet 1/g4
  interface ethernet 1/g5
  interface ethernet 1/g6
  interface ethernet 1/g7
  interface ethernet 1/g8
  interface ethernet 1/g9
  interface ethernet 1/g10
  interface ethernet 1/g11
  interface ethernet 1/g12
  interface ethernet 1/g13
  interface ethernet 1/g14
  interface ethernet 1/g15
  interface ethernet 1/g16
  interface ethernet 2/g1
  interface ethernet 2/g2
  interface ethernet 2/g3
  interface ethernet 2/g4
  interface ethernet 2/g5
  interface ethernet 2/g6
  interface ethernet 2/g7
  interface ethernet 2/g8
  interface ethernet 2/g9
  interface ethernet 2/g10
  interface ethernet 2/g11
  interface ethernet 2/g12
  interface ethernet 2/g13
  interface ethernet 2/g14
  interface ethernet 2/g15
  interface ethernet 2/g16
  console#

A.3 PowerConnect M6348 Configuration

Example 5 PowerConnect M6348 Configuration

  console#show running-config
  Current Configuration:
  System Description "PowerConnect M6348, 3.1.3.9, VxWorks 6.5"
  System Software Version 3.1.3.9
  System Operational Mode "Normal"
  configure
  stack
  member 1 1
  interface out-of-band
  ip address none
  ip address 192.168.71.248 255.255.255.0 192.168.71.254
  ip address none
  username "root" password e6e66b8981c1030d5650da159e79539a level 15 encrypted
  interface ethernet 1/g1
  interface ethernet 1/g2
  interface ethernet 1/g3
  interface ethernet 1/g4
  interface ethernet 1/g5
  interface ethernet 1/g6
  interface ethernet 1/g7
  interface ethernet 1/g8
  interface ethernet 1/g9
  interface ethernet 1/g10
  interface ethernet 1/g11
  interface ethernet 1/g12
  interface ethernet 1/g13
  interface ethernet 1/g14
  interface ethernet 1/g15
  interface ethernet 1/g16
  interface ethernet 1/g17
  interface ethernet 1/g18
  interface ethernet 1/g19
  interface ethernet 1/g20
  interface ethernet 1/g21
  interface ethernet 1/g22
  interface ethernet 1/g23
  interface ethernet 1/g24
  interface ethernet 1/g25
  interface ethernet 1/g26
  interface ethernet 1/g27
  interface ethernet 1/g28
  interface ethernet 1/g29
  interface ethernet 1/g30
  interface ethernet 1/g31
  interface ethernet 1/g32
  interface ethernet 1/g33
  interface ethernet 1/g34
  interface ethernet 1/g35
  interface ethernet 1/g36
  interface ethernet 1/g37
  interface ethernet 1/g38
  interface ethernet 1/g39
  interface ethernet 1/g40
  interface ethernet 1/g41
  interface ethernet 1/g42
  interface ethernet 1/g43
  interface ethernet 1/g44
  interface ethernet 1/g45
  interface ethernet 1/g46
  interface ethernet 1/g47
  interface ethernet 1/g48
  interface ethernet 1/xg1
  channel-group 1 mode on
  interface ethernet 1/xg2
  channel-group 1 mode on
  interface port-channel 1
  console#

A.4 PowerConnect 6248 Configuration

Example 6 PowerConnect 6248 Configuration

  console#show running-config
  Current Configuration:
  System Description PowerConnect 6248, 3.2.0.6, VxWorks 6.5
  System Software Version 3.2.0.6
  Cut-through mode is configured as disabled
  configure
  stack
  member 2 2
  ip address 172.16.201.250 255.255.255.0
  username admin password bb60f21fbeab9054266507292d7bb381 level 15 encrypted
  interface ethernet 2/g1
  interface ethernet 2/g2
  interface ethernet 2/g3
  interface ethernet 2/g4
  interface ethernet 2/g5
  interface ethernet 2/g6
  interface ethernet 2/g7
  interface ethernet 2/g8
  interface ethernet 2/g9
  interface ethernet 2/g10
  interface ethernet 2/g11
  interface ethernet 2/g12
  interface ethernet 2/g13
  interface ethernet 2/g14
  interface ethernet 2/g15
  interface ethernet 2/g16
  interface ethernet 2/g17
  interface ethernet 2/g18
  interface ethernet 2/g19
  interface ethernet 2/g20
  interface ethernet 2/g21
  interface ethernet 2/g22
  interface ethernet 2/g23
  interface ethernet 2/g24
  interface ethernet 2/g25
  interface ethernet 2/g26
  interface ethernet 2/g27
  interface ethernet 2/g28
  interface ethernet 2/g29
  interface ethernet 2/g30
  interface ethernet 2/g31
  interface ethernet 2/g32
  interface ethernet 2/g33
  interface ethernet 2/g34
  interface ethernet 2/g35
  interface ethernet 2/g36
  interface ethernet 2/g37
  interface ethernet 2/g38
  interface ethernet 2/g39
  interface ethernet 2/g40
  interface ethernet 2/g41
  interface ethernet 2/g42
  interface ethernet 2/g43
  interface ethernet 2/g44
  interface ethernet 2/g45
  interface ethernet 2/g46
  interface ethernet 2/g47
  interface ethernet 2/g48
  interface ethernet 2/xg1
  channel-group 1 mode on
  interface ethernet 2/xg2
  channel-group 1 mode on
  interface ethernet 2/xg3
  channel-group 2 mode on
  interface ethernet 2/xg4
  channel-group 2 mode on
  interface port-channel 1
  interface port-channel 2
  console#

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. 2010 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.