Dell Compellent Storage Center: XenServer 6.x Best Practices



Document revision

Date      Revision   Description
2/16/                Initial 5.0 Documentation
5/21/                Documentation update for
/1/                  Document revised for 5.6 and iSCSI MPIO
12/21/               Updated iSCSI information
8/22/                Documentation updated for
/29/                 Update for Software iSCSI information

THIS BEST PRACTICES GUIDE IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

Dell, the DELL logo, the DELL badge, and Compellent are trademarks of Dell Inc. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names

or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

Contents

Document revision
Contents
General syntax
Conventions
Preface
  Audience
  Purpose
  Customer support
Introduction
XenServer Storage Overview
  XenServer Storage Terminology
  Shared iSCSI Storage
  Shared Fibre Channel Storage
  Shared NFS
  Volume to Virtual Machine Mapping
  NIC Bonding vs. iSCSI MPIO
  Multi-Pathing
  Enable Multi-pathing in XenCenter
Software iSCSI Overview
  Open iSCSI Initiator Setup with Dell Compellent
  Multipath with Dual Subnets
    Configuring Dedicated Storage NIC
    To Assign NIC Functions using the XE CLI
    XenServer Software iSCSI Setup
    Login to Compellent Control Ports
    Configure Server Objects in Enterprise Manager
    View Multipath Status
  Multi-path Requirements with Single Subnet
    Configuring Bonded Interface
    Configuring Dedicated Storage Network
    To Assign NIC Functions using the XE CLI
    XenServer Software iSCSI Setup
    Configure Server Objects in Enterprise Manager
  Multi-path Requirements with Dual Subnets, Legacy Port Mode
    Log in to Dell Compellent iSCSI Target Ports
    View Multipath Status
iSCSI SR Using iSCSI HBA
Fibre Channel Overview
  Adding a FC LUN to XenServer Pool
Data Instant Replay to Recover Virtual Machines or Data
  Overview
  Recovery Option 1: One VM per LUN
  Recovery Option 2: Recovery Server
Dynamic Capacity
  Dynamic Capacity Overview
  Dynamic Capacity with XenServer
Data Progression
  Data Progression on XenServer
Boot from SAN
VM Metadata Backup and Recovery
  Backing Up VM Metadata
  Importing VM Metadata
Disaster Recovery
  Replication Overview
  Test XenServer Disaster Recovery
  Recovering from a Disaster
  Replication Based Disaster Recovery
  Disaster Recovery Replication Example
Live Volume Overview
Appendix 1: Troubleshooting
  XenServer Pool FC Mapping Issue
  Starting Software iSCSI
  Two Ways to Start iSCSI
  Software iSCSI Fails to Start at Server Boot
  Wildcard Doesn't Return All Volumes
  View Multipath Status
  XenCenter GUI Displays Multipathing Incorrectly
  Connectivity Issues with a Fibre Channel Storage Repository

General syntax

Figure 1, Document Syntax

Item                                               Convention
Menu items, dialog box titles, field names, keys   Bold
Mouse click required                               Click:
User input (user typing required)                  Monospace font, Type:
Website addresses
Email addresses

Conventions

Notes are used to convey special information or instructions.
Timesavers are tips specifically designed to save time or reduce the number of steps.
Caution indicates the potential for risk including system or data damage.
Warning indicates that failure to follow directions could result in bodily harm.

Preface

Audience

The audience for this document is system administrators who are responsible for the setup and maintenance of Citrix XenServer and associated storage. Readers should have a working knowledge of the installation and management of Citrix XenServer and the Dell Compellent Storage Center.

Purpose

This document provides best practices for the setup, configuration, and management of Citrix XenServer with Dell Compellent Storage Center. It is highly technical and intended for storage and server administrators, as well as information technology professionals interested in learning more about how Citrix XenServer integrates with Compellent Storage Center.

Customer support

Dell Compellent provides live support at EZSTORE ( ), 24 hours a day, 7 days a week, 365 days a year. For additional support, email Dell Compellent at support@compellent.com. Dell Compellent responds to emails during normal business hours.

Additional information on XenServer 6.0 can be found in the Citrix XenServer 6.0 Administration Guide located on the Citrix download site. Information on Dell Compellent Storage Center is located on the Dell Compellent Knowledge Center.

Introduction

This document provides configuration examples, tips, recommended settings, and other storage guidelines a user can follow while integrating Citrix XenServer with the Dell Compellent Storage Center. It has been written to answer many frequently asked questions about how XenServer interacts with Storage Center features such as Dynamic Capacity, Data Progression, Replays, and Remote Instant Replay.

This document focuses on XenServer 6.0; however, most of the concepts apply to XenServer 5.x unless otherwise noted. Dell Compellent advises customers to read the XenServer documentation, which is publicly available on the Citrix XenServer knowledge base documentation pages, for additional information on installation and configuration.

This document assumes the reader has had formal training or has advanced working knowledge of the following:

- Installation and configuration of Citrix XenServer
- Configuration and operation of the Dell Compellent Storage Center
- Operating systems such as Windows or Linux
- The Citrix XenServer 6.0 Administrator's Guide

NOTE: The information contained within this document is based on general circumstances and environments. Actual configuration may vary in different environments.

XenServer Storage Overview

XenServer Storage Terminology

In working with XenServer 6.0, there are four object classes that are used to describe, configure, and manage storage:

Storage Repositories (SRs) are storage targets containing homogeneous virtual disks (VDIs). SR commands provide operations for creating, destroying, resizing, cloning, connecting, and discovering the individual VDIs that they contain. A storage repository is a persistent, on-disk data structure, so the act of "creating" a new SR is similar to that of formatting a disk: for single LUN-based SR types, i.e. LVM over iSCSI or Fibre Channel, the creation of a new SR involves erasing any existing data on the specified LUN. SRs are long-lived, and may in some cases be shared among XenServer hosts or moved between them. The interface to storage hardware allows VDIs to be supported on a large number of SR types. With built-in support for locally connected IDE, SATA, SCSI, and SAS drives, and remotely connected iSCSI and Fibre Channel storage, the XenServer host SR is very flexible. Each XenServer host can access multiple SRs of any type in parallel. When hosting direct attached shared Storage Repositories on a Dell Compellent Storage Center, there are two options: an iSCSI-connected LUN or a Fibre Channel-connected LUN.

Physical Block Devices (PBDs) represent the interface between a physical server and an attached SR. PBDs are connector objects that allow a given SR to be mapped to a XenServer host. PBDs store the device configuration fields that are used to connect to and interact with a given storage target, and manage the run-time attachment of a given SR to a given XenServer host.

Virtual Disk Images (VDIs) are an on-disk representation of a virtual disk provided to a VM. VDIs are the fundamental unit of virtualized storage in XenServer. Similar to SRs, VDIs are persistent, on-disk objects that exist independently of XenServer hosts.
Virtual Block Devices (VBDs) are connector objects (similar to the PBDs described above) that allow mappings between VDIs and Virtual Machines (VMs). In addition to providing a mechanism to attach (or plug) a VDI into a VM, VBDs allow fine-tuning of parameters regarding QoS (quality of service), statistics, and the bootability of a given VDI.

Shared iSCSI Storage

Citrix XenServer on Dell Compellent Storage provides support for shared SRs on iSCSI-attached LUNs. iSCSI is supported using the open-iscsi software initiator or a supported iSCSI Host Bus Adapter (HBA). Shared iSCSI support is implemented based on a Logical Volume Manager (LVM). LVM-based storage is high-performance and allows virtual disks to be dynamically resized. Virtual disks are fully allocated as an isolated volume on the underlying physical disk, so there is a minimum of storage virtualization overhead imposed. As such, this is a good option for high-performance storage.

Below is a diagrammatic representation of using shared storage with iSCSI HBAs in XenServer. The second diagram illustrates shared storage with the open-iscsi initiator.

Figure 2, Shared iSCSI Storage with iSCSI HBA

Figure 3, Shared iSCSI with Software Initiator

Shared Fibre Channel Storage

XenServer hosts with Dell Compellent Storage support Fibre Channel SANs using Emulex or QLogic host bus adapters (HBAs). Logical unit numbers (LUNs) are mapped to the XenServer host as disk devices. Like HBA iSCSI storage, Fibre Channel storage support is implemented based on the same Logical Volume Manager, with the same benefits as iSCSI storage, just utilizing a different data I/O path.

Figure 4, Shared Fibre Channel Storage

Shared NFS

XenServer supports NFS file servers, such as the Dell NX3000 with Dell Compellent storage, to host SRs. NFS storage repositories can be shared within a resource pool of XenServers, which allows virtual machines to be migrated between XenServers within the pool using XenMotion. Attaching an NFS storage repository requires the hostname or IP address of the NFS server. The NFS server must be configured to export the specified path to all XenServers in a pool or the reading of the SR will fail.

Using an NFS share is a relatively simple way to create an SR and doesn't involve the complexity of iSCSI or the expense of Fibre Channel. There are some limitations that must be considered before implementing NFS, however. An NFS SR will utilize a similar network infrastructure as iSCSI to support redundant paths to the NFS share. The main difference is that iSCSI uses MPIO to support multipathing and load balancing between the multiple paths, while NFS is limited to one network interface per SR. Redundancy in an NFS environment can be accomplished by using XenServer bonded interfaces. Bonded interfaces are active/passive and won't provide load balancing across both physical adapters as iSCSI can.
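An NFS SR can also be created from the xe CLI. The sketch below is a hedged example rather than a procedure from this guide: the server address, export path, and SR name are hypothetical placeholders, and the command is echoed as a dry run so it can be reviewed before being executed on a XenServer host.

```shell
# Hypothetical values; substitute the real NFS server and export path.
NFS_SERVER="192.168.30.50"
NFS_PATH="/exports/xen-sr"
CMD="xe sr-create content-type=user type=nfs shared=true \
name-label=\"NFS SR\" \
device-config:server=$NFS_SERVER device-config:serverpath=$NFS_PATH"
# Dry run: print the command instead of running it on a XenServer host.
echo "$CMD"
```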

Figure 5, Shared NFS SR

A new feature in XenServer 6.0 is the ability to place a high availability (HA) quorum disk on an NFS volume. However, the XenServer 6.0 Disaster Recovery feature can only be enabled when using LVM over HBA or software iSCSI. The underlying protocol choice for SRs is a business decision that will be unique to each environment. Given the performance benefits and the requirement for Disaster Recovery, Dell Compellent recommends using iSCSI or FC HBAs, or software iSCSI, over NFS.

Volume to Virtual Machine Mapping

XenServer is fully capable of deploying a many-to-one VM-to-volume (LUN) deployment. The number of VMs on a volume is dependent on the workload and IOPS requirements of the VMs. When multiple virtual disks share a volume they also share the disk queue for that volume on the host. For this reason, care should be taken to prevent a bottleneck condition on the volume. Additionally, replication and DR become a factor when hosting multiple VMs on a volume, because replication and recovery take place on a per-volume basis.

NIC Bonding vs. iSCSI MPIO

NIC bonds can improve XenServer host resiliency by using two physical NICs as if they were one. If one NIC within the bond fails, the host's network traffic will automatically be routed over the second NIC. NIC bonds support active/active mode, but only support load balancing of VM traffic across the physical NICs. Any given virtual network interface will only use one of the links in the bond at a time. Load balancing is not available for non-VM traffic.

MPIO also provides host resiliency by using two physical NICs. MPIO uses round robin to balance the storage traffic between separate targets on the Dell Compellent Storage Center. By spreading the load between multiple Dell Compellent targets, iSCSI bottlenecks can be avoided while providing network adapter, subnet, and switch redundancy.
If all front-end iSCSI ports on the Dell Compellent system are on the same subnet, then NIC bonding is the better option, since XenServer iSCSI MPIO requires at least two separate subnets. In this configuration all iSCSI connections will use the same physical NIC because bonding does not support

active/active connections for anything but VM traffic. For this reason, it is recommended that front-end iSCSI ports be configured across two subnets. This allows load balancing across all NICs and failover with MPIO.

Multi-Pathing

Multi-pathing allows for failures in HBAs, switch ports, switches, and SAN I/O ports. It is recommended to utilize multi-pathing to increase availability and redundancy for critical systems, such as production deployments of XenServer hosting critical servers. XenServer supports active/active multi-pathing for iSCSI and FC protocols for I/O datapaths. Dynamic multi-pathing uses a round-robin load balancing algorithm, so both routes will have active traffic on them during normal operations. Multi-pathing can be enabled via XenCenter or on the command line. Please see the XenServer 6.0 Administrator's Guide for information on enabling multi-pathing on XenServer hosts. Enabling multi-pathing requires a server restart and should be enabled before storage is added to the server. Only use multi-pathing when there are multiple paths to the Storage Center.

Enable Multi-pathing in XenCenter

1. Right click on the server in XenCenter and select Enter Maintenance Mode
2. Right click on the server and select Properties
3. In the Properties window, select Multipathing
4. Check the Enable Multipathing on this server box and click OK
5. The server will need to be restarted for multipathing to take effect

Figure 6, Enable Multipathing
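Multi-pathing can also be enabled from the CLI. The sketch below is a hedged dry run based on the other-config keys described in the XenServer 6.0 Administrator's Guide; the host UUID is a placeholder, and the commands are echoed rather than executed. A server restart is still required afterward.

```shell
HOST_UUID="<host-uuid>"   # placeholder; obtain the real value with: xe host-list
# Compose the command sequence; the host must be disabled (maintenance mode) first.
CMDS=$(cat <<EOF
xe host-disable uuid=$HOST_UUID
xe host-param-set other-config:multipathing=true uuid=$HOST_UUID
xe host-param-set other-config:multipathhandle=dmp uuid=$HOST_UUID
xe host-enable uuid=$HOST_UUID
EOF
)
# Dry run: review the commands before running them on a XenServer host.
echo "$CMDS"
```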

Software iSCSI Overview

XenServer supports shared Storage Repositories (SRs) on iSCSI LUNs. iSCSI is implemented using the open-iscsi software initiator or by using a supported iSCSI HBA. XenServer iSCSI Storage Repositories are supported with Dell Compellent Storage Center running in either Legacy mode or Virtual Port mode.

Shared iSCSI using the software iSCSI initiator is implemented based on the Linux Volume Manager (LVM) and provides the same performance benefits provided by LVM on local disks. Shared iSCSI SRs using the software-based host initiator are capable of supporting VM agility: using XenMotion, VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable interruption. iSCSI SRs utilize the entire LUN specified at creation time and may not span more than one LUN. CHAP support is provided for client authentication, during both the data path initialization and the LUN discovery phases.

NOTE: Use dedicated network adapters for iSCSI traffic. The default connection can be used; however, it is always best practice to separate iSCSI and network traffic.

All iSCSI initiators and targets must have a unique name to ensure they can be identified on the network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively these are called iSCSI Qualified Names, or IQNs. XenServer hosts support a single iSCSI initiator, which is automatically created and configured with a random IQN during host installation. iSCSI targets commonly provide access control via iSCSI initiator IQN lists, so all iSCSI targets/LUNs to be accessed by a XenServer host must be configured to allow access by the host's initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool. iSCSI targets that do not provide access control will typically default to restricting LUN access to a single initiator to ensure data integrity.
If an iSCSI LUN is intended for use as a shared SR across multiple XenServer hosts in a resource pool, ensure that multi-initiator access is enabled for the specified LUN. It is strongly suggested to change the default XenServer IQN to one that is consistent with a naming schema in the iSCSI environment. The XenServer host IQN value can be adjusted using XenCenter, or via the CLI with the following command when using the iSCSI software initiator:

xe host-param-set uuid=<valid_host_id> other-config:iscsi_iqn=<new_initiator_iqn>

Caution: It is imperative that every iSCSI target and initiator have a unique IQN. If a non-unique IQN identifier is used, data corruption and/or denial of LUN access can occur.

Caution: Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures connecting to new targets or existing SRs.
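As a hedged sketch of applying a consistent IQN naming schema across a pool, the loop below composes one xe host-param-set call per host. The IQN prefix and host UUIDs are hypothetical, and the commands are only echoed for review; remember the caution above about not changing the IQN while SRs are attached.

```shell
IQN_PREFIX="iqn.2012-01.com.example:xen"   # hypothetical naming schema
HOSTS="host1-uuid host2-uuid"              # placeholders; in practice: xe host-list --minimal | tr ',' ' '
CMDS=""
for H in $HOSTS; do
    # One rename command per pool host, suffixed with the host UUID for uniqueness.
    CMDS="$CMDS
xe host-param-set uuid=$H other-config:iscsi_iqn=$IQN_PREFIX-$H"
done
# Dry run: print the composed commands instead of executing them.
echo "$CMDS"
```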

Open iSCSI Initiator Setup with Dell Compellent

Caution: Issues have been identified with the Citrix implementation of multipathing and Storage Center in Virtual Port mode. It is strongly recommended to use iSCSI HBAs when implementing XenServer with Storage Center in Virtual Port mode.

When planning iSCSI, it is important that networks used for software-based iSCSI have separate switching and different subnets from those used for management. The use of separate subnets ensures that management and storage traffic flow over the intended interfaces and avoids complex workarounds that may compromise reliability or performance. If planning to utilize iSCSI storage with multi-pathing, it is important to ensure that none of the redundant paths reported by iSCSI are within the same subnet as the management interface. If this occurs, the iSCSI initiator may not be able to successfully establish a session over each path because the management interface comes up separately from the storage interface(s).

There are three options when implementing the XenServer software iSCSI initiator to connect to Dell Compellent storage:

- Multipath with dual subnets, Virtual Port mode: the Storage Center is set to Virtual Port mode and the front-end controller ports are on two separate subnets. This option uses MPIO for multipathing and is the recommended option when HA is required.
- Multipath with single subnet: the Storage Center is set to Virtual Port mode and all controller front-end ports are on the same subnet. This option uses NIC bonding for path failover. This is also an option when the servers have a single iSCSI storage NIC and HA is not required.
- Multipath with dual subnets, Legacy Port mode: the option for HA when the Storage Center is set to Legacy Port mode.
Multipath with Dual Subnets

The requirements for software iSCSI multi-pathing with dual subnets and Compellent Storage Center in Virtual Port mode are as follows:

- XenServer 6.0
- iSCSI using 2 unique dedicated storage NICs/subnets
  o Citrix best practice states that these 2 subnets should be different from the XenServer management network.
- Multi-pathing enabled on all XenServer pool hosts
- iSCSI target IP addresses for the Storage Center front-end control ports
  o In the example below, the iSCSI FE control ports on the Storage Center controller are assigned IP addresses /16 and /16

In this configuration the Storage Center is set to Virtual Port mode and the iSCSI front-end ports are on two separate subnets, different from the management interface. The Storage Center is configured with two control ports, one for each subnet. Multipathing is controlled through MPIO.

Figure 7, Dual Subnet, MPIO

Configuring Dedicated Storage NIC

XenServer allows use of either XenCenter or the XE CLI to configure and dedicate a NIC to specific functions, such as storage traffic. Assigning a NIC to a specific function will prevent the use of the NIC for other functions such as host management, but requires that the appropriate network configuration be in place to ensure the NIC is used for the desired traffic. For example, to dedicate a NIC to storage traffic, the NIC, storage target, switch, and/or VLAN must be configured so the target is only accessible over the assigned NIC.

Ensure that the dedicated storage interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced, storage traffic may be directed over the main management interface after a host reboot due to the order in which network interfaces are initialized.

To Assign NIC Functions using the XE CLI

1. Ensure that the Physical Interface (PIF) is on a separate subnet, or routing is configured to suit your network topology, in order to force the desired traffic over the selected PIF.
2. Get the PIF UUID for the interface:
   2.1. If on a stand-alone server, use xe pif-list to list the PIFs on the server
   2.2. If on a host in a resource pool, first type xe host-list to retrieve a list of the hosts and UUIDs
   2.3. Use the command xe pif-list host-uuid=<host-uuid> to list the host PIFs

3. Set up an IP configuration for the PIF, adding appropriate values for the mode parameter and, if using static IP addressing, the IP, netmask, gateway, and DNS parameters:

   xe pif-reconfigure-ip mode=<DHCP|Static> uuid=<pif-uuid>

   Example: xe pif-reconfigure-ip mode=static ip= netmask= gateway= uuid=<pif-uuid>

4. Set the PIF's disallow-unplug parameter to true:

   xe pif-param-set disallow-unplug=true uuid=<pif-uuid>

5. Set the management purpose of the interface:

   xe pif-param-set other-config:management_purpose="storage" uuid=<pif-uuid>

6. Repeat this process for each eth interface in the XenServer host that will be dedicated to storage traffic. For iSCSI MPIO configurations this should be a minimum of two eth interfaces that are on separate subnets.

For more information on this topic see the Citrix XenServer 6.0 Administrator's Guide.

XenServer Software iSCSI Setup

A server object on the Dell Compellent Storage Center can be created once the XenServer has been configured for iSCSI traffic.

NOTE: Best practice is to change the XenServer IQN from the randomly assigned IQN to one that identifies the system on the iSCSI network. The IQN must be unique to avoid data corruption or loss.

Gather Dell Compellent iSCSI Target Info

Within Storage Center Manager, go to Controllers, IO Cards, iSCSI and note the IP addresses of the two control ports. These should be on the same IP subnets as the server's storage NICs.

Figure 8, Control Port IP Addresses

In this example the IP addresses are /16 and /16.

Login to Compellent Control Ports

In this step the iscsiadm command will be utilized in the XenServer CLI to discover and log in to all the Compellent iSCSI targets.

1. From the XenServer console, run the following command for each iSCSI control port:

iscsiadm -m discovery --type sendtargets --portal <Control Port IP>:3260

Example: iscsiadm -m discovery --type sendtargets --portal :3260

Figure 9, Discover Storage Center Ports

NOTE: If problems are encountered while running the iscsiadm commands, see the iSCSI Troubleshooting section at the end of this document.

2. Repeat the discovery process for each Dell Compellent control port.

3. Once all target ports are discovered, run iscsiadm with the login parameter:

iscsiadm -m node --login

Figure 10, Log into Storage Center Ports

The server objects can be configured in the Storage Center now that the server has logged in.

Configure Server Objects in Enterprise Manager

Follow the steps below to configure the server object for access to the Storage Center:

1. In Enterprise Manager, go to Storage Center and select Storage Management
2. In the object tree, right click on Servers and select Create Server
3. Complete all options as specified in the Compellent Administrator's Guide
4. Uncheck the Use iSCSI Name box
5. Select both connections listed under WWName and click OK to finish

NOTE: Unchecking the Use iSCSI Name box will aid in identifying the status of MPIO paths.
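The discovery and login sequence above can be condensed into a small loop over the control-port portals. This is a dry-run sketch with hypothetical portal addresses; the iscsiadm commands are echoed for review rather than executed.

```shell
PORTALS="10.10.1.5 10.20.1.5"   # hypothetical control-port IPs, one per subnet
CMDS=""
for P in $PORTALS; do
    # Discover targets behind each Storage Center control port.
    CMDS="$CMDS
iscsiadm -m discovery --type sendtargets --portal $P:3260"
done
# After all portals are discovered, log in to every discovered node.
CMDS="$CMDS
iscsiadm -m node --login"
echo "$CMDS"
```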

Figure 11, Create Server, Enterprise Manager

NOTE: Starting in Storage Center version 5.5.x, the steps listed above must be completed using Enterprise Manager. It is not possible to create server objects with the Use iSCSI Name box unchecked when connected directly to the Storage Center.

After creating the server object, the volumes can be created and mapped to the server. In a server pool, map the LUN to all servers specifying the same LUN number. See the Dell Compellent documentation for detailed instructions on creating and mapping volumes.

NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.

Once the volumes are mapped to the server they can be added to the XenServer using XenCenter or the CLI. Below are the steps for adding storage using XenCenter. The steps for adding storage through the CLI can be found in the XenServer 6.0 Administrator's Guide.

1. Select the server or pool in XenCenter and click on New Storage
2. Select the Software iSCSI option under virtual disk storage, click Next

Figure 12, Add iSCSI Disk

3. Give the new Storage Repository a name and click Next
4. Enter one of the Dell Compellent control ports in the Target Host field, click Discover IQNs
5. Click Discover LUNs
6. Select the LUN to add under Target LUN and click Finish

Figure 13, Add iSCSI SR

NOTE: When the Storage Center is in Virtual Port mode and storage is added with the wildcard option, an incomplete list of volumes mapped to the server may be returned. This is a known issue with the XenCenter GUI. To work around the issue, cycle through the control ports in the Target Host field, using the (*) wildcard Target IQNs, until the Target LUN appears. This is a GUI issue and will not affect multipathing.

The SR should now be available to the server. Repeat the steps for mapping and adding storage for any additional SRs.

View Multipath Status

To view the status of the multipath, use the following command:

mpathutil status

Figure 14, Multipath Status
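For reference, the same kind of SR can also be created from the CLI with the lvmoiscsi type, as described in the XenServer 6.0 Administrator's Guide. The sketch below is hedged: the target address, target IQN, and SCSI ID are placeholders (the SCSI ID is normally obtained from an xe sr-probe step), and the command is echoed rather than executed.

```shell
TARGET="10.10.1.5"                                    # placeholder control-port IP
TARGET_IQN="iqn.2002-03.com.compellent:placeholder"   # placeholder target IQN
SCSI_ID="<scsi-id>"                                   # placeholder; returned by xe sr-probe
CMD="xe sr-create content-type=user type=lvmoiscsi shared=true \
name-label=\"Compellent iSCSI SR\" \
device-config:target=$TARGET device-config:targetIQN=$TARGET_IQN \
device-config:SCSIid=$SCSI_ID"
echo "$CMD"   # dry run: review before executing on a XenServer host
```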

Multi-path Requirements with Single Subnet

The process for configuring multi-pathing in a single subnet environment is similar to that of a dual subnet environment. The key difference is that redundancy is handled by the bonded network adapters. The requirements for software iSCSI multi-pathing with the Compellent Storage Center in a single subnet are as follows:

- XenServer 6.0
- iSCSI using 2 bonded NICs
  o Citrix best practice states that these 2 NICs should be bonded through the XenCenter GUI.
- iSCSI target IP address for the Storage Center front-end control port
  o In this example the IP address for the control port will be
- Network storage interfaces on XenServer on the bonded interface

Figure 15, Single Subnet

Configuring Bonded Interface

In this configuration redundancy to the network is provided by two bonded NICs. Bonding the two NICs will create a new bonded interface that network interfaces will be associated with. This will create multiple paths with one storage IP address on the server.
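A bond can equivalently be created from the xe CLI with xe bond-create. This hedged sketch uses placeholder UUIDs (a network created beforehand with xe network-create, plus the PIFs of the two physical NICs) and echoes the command rather than executing it.

```shell
NET_UUID="<network-uuid>"                        # placeholder; from: xe network-create name-label=Bond0
PIF1="<pif-uuid-nic1>"; PIF2="<pif-uuid-nic2>"   # placeholders; from: xe pif-list
CMD="xe bond-create network-uuid=$NET_UUID pif-uuids=$PIF1,$PIF2"
echo "$CMD"   # dry run: review before executing on a XenServer host
```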

NOTE: The process of configuring a single-path, non-redundant connection to a Dell Compellent Storage Center is the same, except for excluding the steps to bond the two NICs.

NOTE: Create NIC bonds as part of the initial resource pool creation, prior to joining additional hosts to the pool. This will allow the bond configuration to be replicated to new hosts as they join the pool.

The steps below outline the process of creating a NIC bond in XenServer:

1. Go into Citrix XenCenter, select the server and go to the NIC tab.
2. At the bottom of the NIC window is the option to create a bond. Select the NICs you would like to bond and click Create.

Figure 16, Add Bonded Interface

3. Once complete, there will be a new bonded NIC displayed in the list of NICs.

Figure 17, Bonded Interface

Configuring Dedicated Storage Network

XenServer allows use of either XenCenter or the XE CLI to configure and dedicate a network to specific functions, such as storage traffic. The steps below outline the process of creating a dedicated storage network interface through the CLI. Assigning a network to storage will prevent the use of the network for other functions such as host management, but requires that the appropriate configuration be in place to ensure the network is used for the desired traffic. For example, to dedicate a network to storage traffic, the NIC, storage target, switch, and/or VLAN must be configured such that the target is only accessible over the assigned NIC. This allows use of standard IP routing to control how traffic is routed between multiple NICs within a XenServer.

Before dedicating a network interface as a storage interface for use with iSCSI SRs, ensure that the dedicated interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced, then storage traffic may be directed over the main management interface after a host reboot, due to the order in which network interfaces are initialized.

To assign NIC functions using the XE CLI:

1. Ensure that the bond PIF is on a separate subnet, or routing is configured to force the desired traffic over the selected PIF.
2. Get the PIF UUID for the bond interface:
   2.1. If on a stand-alone server, use xe pif-list to list the PIFs on the server
   2.2. If on a host in a resource pool, first type xe host-list to retrieve a list of the hosts and UUIDs
   2.3. Use the command xe pif-list host-uuid=<host-uuid> to list the host PIFs
3. Set up an IP configuration for the PIF identified in the previous step, adding appropriate values for the mode parameter and, if using static IP addressing, the IP parameters:
   3.1. xe pif-reconfigure-ip mode=<DHCP|Static> uuid=<pif-uuid>
        Example: xe pif-reconfigure-ip mode=static ip= netmask= gateway= uuid=<3f5a072f-ea3b-de28-aeab-47c7d7f2b58f>
4. Set the PIF's disallow-unplug parameter to true:
   4.1. xe pif-param-set disallow-unplug=true uuid=<pif-uuid>
5. Set the management purpose of the interface:
   5.1. xe pif-param-set other-config:management_purpose="storage" uuid=<pif-uuid>

For more information on this topic see the Citrix XenServer 6.0 Administrator's Guide.

XenServer Software iSCSI Setup

Once the XenServer has been configured for iSCSI traffic, a server object on the Dell Compellent Storage Center can be created.

NOTE: Best practice is to change the XenServer IQN from the randomly assigned IQN to one that identifies the system on the iSCSI network. The IQN must be unique to avoid data corruption or loss.
1. To gather the Storage Center iSCSI target info, go to Controllers, IO Cards, iSCSI in Storage Center and note the IP address of the control port. It should be on the same IP subnet as the server's storage NICs.

Figure 18, Control Port IP Address

In this example the IP address is /16.

2. Log in to Compellent control ports. In this step the iscsiadm command will be utilized in the XenServer CLI to discover and log in to all the Dell Compellent iSCSI targets.

3. From the XenServer console, run the following command for the iSCSI control port:

   iscsiadm -m discovery --type sendtargets --portal <Control Port IP>:3260

   Example: iscsiadm -m discovery --type sendtargets --portal :3260

Figure 19, Discover Storage Center Ports

NOTE: If problems are encountered while running the iscsiadm commands, see the iSCSI troubleshooting section at the end of this document.

4. Once all target ports are discovered, run iscsiadm with the login parameter:

   iscsiadm -m node --login

Figure 20, Log into Storage Center Ports

5. Now that the server has logged in, the server objects can be configured in the Storage Center.

Configure Server Objects in Enterprise Manager

Follow the steps below to configure the server object for access to the Storage Center:

1. In Enterprise Manager, go to Storage Center Manager and select Storage Management in the object tree.
2. Right click on Servers and select Create Server. Complete all options as specified in the Compellent Administrator's Guide, including server name and operating system.
3. Select the server IQN listed under WWName and click OK to finish.

Figure 21, Create Server in Enterprise Manager

After creating the server object, the volumes can be created and mapped to the server. In a server pool, be sure the LUNs are mapped to the servers with the same LUN number. See the Dell Compellent Admin Guide for detailed instructions on creating and mapping volumes.

NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.

Once the volumes are mapped to the server they can be added to the XenServer using XenCenter or the CLI. Below are the steps for adding storage using XenCenter. Steps for adding storage through the CLI can be found in the XenServer 6.0 Administrator's Guide.

1. Select the server or pool in XenCenter and click on New Storage.
2. Select the Software iSCSI option under Virtual disk storage, then click Next.

Figure 22, Add iSCSI Disk

3. Give the new Storage Repository a name and click Next.
4. Enter the Dell Compellent control port in the Target Host field, then click Discover IQNs.
5. Click Discover LUNs to view the available LUNs.

Figure 23, Add iSCSI SR

6. Select the LUN to add under Target LUN and click Finish.

NOTE: When the Storage Center is in virtual port mode and storage is added with the wildcard option, an incomplete list of volumes mapped to the server may be returned. This is a known issue with the XenCenter GUI. To work around the problem, cycle through the Target Host IP addresses using the (*) wildcard IQN until the Target LUN appears. This is a GUI issue and will not affect multipathing.

The SR will now be available to the server. Repeat the steps for mapping and adding storage for any additional SRs.

Multi-path Requirements with Dual Subnets, Legacy Port Mode

Dell Compellent Legacy Port Mode uses the concept of Fault Domains to provide redundant paths to the Storage Center. To ensure redundancy, a fault domain consists of a primary port on one controller and a failover port on the second controller. The two ports are linked in the same domain by an identical Fault Domain number. This provides redundancy, with the caveat that half of the front-end ports are only utilized in the event of a failover. The requirements for software iSCSI multipathing with the Compellent Storage Center in Legacy Port Mode are as follows:

- XenServer 6.0 iSCSI using two unique dedicated storage NICs/subnets
  o Citrix best practice states that these two subnets should be different from the XenServer management network.
- Multipathing enabled on all XenServer pool hosts
- iSCSI target IP addresses for the Storage Center front-end ports
  o In this example there are four primary iSCSI front-end port IP addresses, two on each subnet.

In this configuration the Storage Center is set to Legacy Port Mode, and the iSCSI front-end ports are on two subnets separate from each other and from the management interface. Multipathing is controlled through MPIO.

Figure 24, Legacy Port Mode

The first step in configuring XenServer for Dell Compellent in Legacy Port Mode is to identify the primary iSCSI target IP addresses on each controller of the Storage Center. This can be done by going to the controllers listed in Storage Center, expanding IO Cards, iSCSI, and clicking on each iSCSI port listed.
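Since Legacy Port Mode requires discovering each primary front-end port individually, the per-port discovery and login performed in the following steps can also be expressed as a small loop. The four addresses below are placeholders for the primary ports identified in Storage Center, not values from this guide.

```shell
#!/bin/sh
# Placeholder primary front-end target addresses (two per storage subnet).
TARGETS="10.10.1.21 10.10.1.22 10.10.2.21 10.10.2.22"

# Discover the targets behind each legacy-mode front-end port in turn.
for portal in ${TARGETS}; do
    iscsiadm -m discovery --type sendtargets --portal ${portal}:3260
done

# Once all ports are discovered, log in to every target.
iscsiadm -m node --login
```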

Figure 25, Legacy Port IP addresses

Log in to Dell Compellent iSCSI Target Ports

This step uses the iscsiadm command in the XenServer CLI to discover and log in to all the Compellent iSCSI targets.

1. For each of the target IP addresses, enter the following command:
   iscsiadm -m discovery --type sendtargets --portal <target port IP>:3260

Figure 26, Discover Storage Center Ports

2. Repeat the discovery process for each target port.
3. Once all the ports are discovered, run the iscsiadm command with the --login parameter to connect the host to the Storage Center:
   iscsiadm -m node --login

Figure 27, Log into Storage Center Ports

Configure Server Objects in Enterprise Manager

Follow the steps below to configure the server object for access to the Storage Center:

1. In Enterprise Manager, go to Storage Center and select Storage Management.
2. In the object tree, right click on Servers and select Create Server.
3. Complete all options as specified in the Dell Compellent Administrator's Guide.

Figure 28, Create Server in Enterprise Manager

After creating the server object, the volumes can be created and mapped to the server. See the Dell Compellent documentation for detailed instructions on creating and mapping volumes.

NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.

Once the volumes are mapped to the server they can be added to the XenServer using XenCenter or the CLI. Below are the steps for adding storage using XenCenter. Steps for adding storage through the CLI can be found in the XenServer 6.0 Administrator's Guide.

1. Select the server or pool in XenCenter and click on New Storage.
2. Select the Software iSCSI option under Virtual disk storage, then click Next.

Figure 29, Add iSCSI Disk

3. Give the new Storage Repository a name and click Next.
4. Enter the Dell Compellent control ports in the Target Host field, then click Discover IQNs.

Figure 30, Discover Storage Center LUNs

5. Click Discover LUNs.

Figure 31, Add iSCSI SR

6. Select the LUN to add under Target LUN and click Finish.

NOTE: When the Storage Center is in Legacy Port Mode, adding storage may return an incomplete list of volumes mapped to the server. This is a known issue with the XenCenter GUI, where only the LUNs active on the first IP address in Target Host are returned. To work around this issue, cycle through the Target Host IP addresses using the (*) wildcard Target IQN until the Target LUN appears. This is a GUI issue and will not affect multipathing.
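As noted above, the SR can also be created from the CLI instead of XenCenter. The sketch below shows the general shape of an lvmoiscsi sr-create call in the XenServer 6.0 CLI; the device-config values are placeholders that must be replaced with the Storage Center control port address, the target IQN returned by sr-probe, and the SCSI ID of the mapped LUN.

```shell
# Probe the target first; the probe output reveals the target IQN and,
# on a second probe with the IQN supplied, the SCSIid of each LUN.
xe sr-probe type=lvmoiscsi device-config:target=<control-port-ip>

# Create a shared iSCSI SR visible to every host in the pool.
xe sr-create name-label="Compellent iSCSI SR" shared=true \
    content-type=user type=lvmoiscsi \
    device-config:target=<control-port-ip> \
    device-config:targetIQN=<target-iqn> \
    device-config:SCSIid=<scsi-id>
```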

The SR will now be available to the server. Repeat the steps above for mapping and adding storage for any additional SRs.

View Multipath Status

To view the status of the multipath configuration, use the following command:

mpathutil status

Figure 32, Multipath Status

iSCSI SR Using iSCSI HBA

If using an iSCSI HBA to create the iSCSI SR, either the CLI from the control domain needs to be used, or the BIOS-level management interface needs to be updated with target information. Depending on which HBA is being used, the initiator IQN for the HBA needs to be configured; consult the documentation for that HBA to configure the IQN. Once the IQN has been configured for the HBA, use the Storage Center GUI to create a new LUN. However, instead of using the XenServer's IQN, specify the IQN of the various ports of the HBA. Do this for every XenServer host in the pool.

QLogic's HBA CLI is included in the XenServer host and located at:

/opt/qlogic_corporation/sansurfericli/iscli

If using Emulex iSCSI HBAs, consult the Emulex documentation for instructions on installing and configuring the HBA.

For the purposes of an example, this guide illustrates how the QLogic iSCSI HBA CLI (iscli) can be used to configure IP addresses on a dual-port QLE4062C iSCSI HBA adapter, add the iSCSI server to the Compellent SAN, and configure a LUN for the server. This setup will also utilize multipathing, since there are two iSCSI HBA ports.

1. From the XenServer console, launch the SANsurfer iscli by typing:
   /opt/qlogic_corporation/sansurfericli/iscli

NOTE: This configuration can also be performed during server boot by entering Ctrl-Q when prompted.

Figure 33, iscli Menu

2. Configure the IP address for the iSCSI HBA.
   2.1. To set the IP address for the HBA, choose option 4 (Port Level Info & Operations), then option 2 (Port Network Settings Menu).
   2.2. Enter option 4 (Select HBA Port) to select the appropriate HBA port, then select option 2 (Configure IP Settings).

Figure 34, Configure HBA IP Address

   2.3. Enter the appropriate IP settings for the HBA adapter port. When finished, exit and save, or select another HBA port to configure.

In this example another HBA port will be configured.

Figure 35, Enter IP Address Information

   2.4. From the Port Network Settings Menu, select option 4 to select an additional HBA port to configure. Enter 2 to select the second HBA port. Once the second HBA port is selected, choose option 2 (Configure IP Settings) from the Port Network Settings Menu to input the appropriate IP settings for the second HBA port.

Figure 36, Enter IP Address Info

   2.5. Choose option 5 (Save changes and reset HBA, if necessary). Then select Exit until back at the main menu.

The iSCSI name (IQN) can also be changed using the iscli utility. This menu can be accessed by selecting option 4 (Port Level Info & Operations Menu) from the main menu, then option 3 (Edit Configured Port Settings Menu), then option 3 (Port Firmware Settings Menu), then option 7 (Configure Advanced Settings). Press Enter until reaching iscsi_name, then enter a unique IQN name for the adapter.

3. The next step is to establish a target from XenServer so it registers with the Compellent Storage Center.
   3.1. From the main interactive iscli menu, select option 4 (Port Level Info & Operations).
   3.2. From the Port Level Info & Operations menu, select option 7 (Target Level Info & Operations).
   3.3. On the HBA target menu screen, select option 6 (Add a Target). Press Enter until reaching the TGT_TargetIPAddress option, then enter the target IP address of the Compellent controller. (Repeat for each target.) In this example the primary iSCSI connection on each Dell Compellent Storage Center controller is used.

Figure 37, Enter Target IP Address

   Once all targets are entered for HBA 0, select option 9 to save the port information. Select option 10 to select the second HBA port, and repeat the steps in section 3.3 for the iSCSI targets. Enter option 12 to exit, enter YES to save the changes, and exit out of the iscli utility.

4. Add the server's iSCSI HBA connections to the Dell Compellent Storage Center.
   4.1. Log on to the Storage Center console.
   4.2. Expand Servers and select the location or folder in which to store the server. For ease of use, the servers in this view are separated into folders based on function.
   4.3. Right click the location in which to create the server and select Create Server.

NOTE: You may have to uncheck "Show only active/up connections" in the Create Server wizard.

   4.4. Select the appropriate iSCSI HBA/IQNs for the new server object, then click Continue.
   4.5. Depending on the Storage Center version, select the XenServer operating system, or select Other Multipath OS if XenServer is not listed.

5. Repeat the preceding four steps for each XenServer in the pool.
6. Once all the XenServer hosts are added to the Compellent Storage Center, create a new volume on the Compellent Storage Center and map it to all the XenServers in the pool with the same LUN number, or create a Compellent clustered server object, add all the XenServers to the cluster, and map the volume to the XenServer clustered server object.
7. The final step of the process is adding the new volume to XenServer.
   7.1. Log on to XenCenter, right click on the appropriate XenServer to add the connection to, and select New Storage Repository. If the storage is being added to a resource pool, select the pool instead of the server.
   7.2. Select the Hardware HBA option, as the iSCSI connection is using iSCSI HBAs, then click Next.

Figure 38, Storage Type

   7.3. There is a short delay while XenServer probes for available LUNs. Select the appropriate LUN, give the SR an appropriate name, and click Finish.
   7.4. A warning is displayed that the LUN will be formatted and any data present will be destroyed. Click Yes to format the disk.
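With both HBA ports configured and the LUN mapped down both paths, the path state can be verified from the XenServer console. mpathutil is the wrapper shipped with XenServer; the standard device-mapper tool underneath can be queried as well.

```shell
# XenServer's multipath status summary.
mpathutil status

# Lower-level view: list each multipath map and the state of its paths.
multipath -ll
```

Each LUN should normally show one active path per HBA port; a missing path usually points back to zoning, target, or IP configuration on that port.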

Fibre Channel Overview

XenServer provides support for shared Storage Repositories (SRs) on Fibre Channel (FC) LUNs. FC is supported on the Dell Compellent SAN by utilizing QLogic or Emulex HBAs. Fibre Channel support is implemented based on the Linux Volume Manager (LVM) and provides the same performance benefits provided by LVM VDIs in the local disk case. Fibre Channel SRs are capable of supporting VM agility using XenMotion: VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable downtime. The following sections detail the steps involved in adding a new Fibre Channel connected volume to a XenServer pool.

Adding a FC LUN to a XenServer Pool

The following section covers the creation of the volume on the Compellent Storage Center, the LUN mapping on the Dell Compellent, and adding the new SR to the XenServer pool. This procedure assumes that the servers' Fibre Channel connections have been zoned to the Dell Compellent Storage Center and that the server objects have been added to the Storage Center.

1. Once all the XenServer hosts are added to the Dell Compellent Storage Center, create a new volume and map it to all the XenServers in the pool with the same LUN number, or create a Compellent clustered server object, add all the XenServers to the cluster, and map the volume to the XenServer clustered server object.
2. When finished mapping the volume to all the XenServers in the pool, launch the XenCenter management console, right click on the pool name, and select New Storage Repository.

Figure 39, New Storage Repository

3. On the "Choose the type of new storage" screen, select Hardware HBA, then click Next.

Figure 40, Choose Storage Type

4. On the "Select the LUN to reattach or create a new SR" screen, select the appropriate volume, then enter a descriptive name. Click Finish to continue.

Figure 41, Select LUN

5. A dialog box will appear asking: "Do you wish to format the disk?" Click Yes to format the SR.
6. The SR should now be created and mapped to all the servers in the pool.
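The same FC SR can be created from the CLI with the lvmohba SR type. This is a hedged sketch; the SCSIid placeholder stands in for the value reported by sr-probe for the mapped Compellent LUN.

```shell
# List the FC LUNs visible to the host and note the SCSIid of the new volume.
xe sr-probe type=lvmohba

# Create a shared SR on that LUN for the whole pool.
xe sr-create name-label="Compellent FC SR" shared=true \
    content-type=user type=lvmohba device-config:SCSIid=<scsi-id>
```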

Data Instant Replay to Recover Virtual Machines or Data

Overview

The Dell Compellent Storage Center allows for the creation of Data Instant Replays (snapshots) to recover crash-consistent states of virtual machines. When mapping Dell Compellent iSCSI or Fibre Channel volumes to XenServer, the SRs are created as LVM disks, stamping each SR with a unique identifier (UUID). When creating Dell Compellent Replays of LVM volumes, the Replay cannot be mapped to the XenServer without first unmapping the original volume from the server, because the two LVM UUIDs are identical and will conflict. There are two different options to recover data or virtual machines using Dell Compellent Replays.

Recovery Option 1: One VM per LUN

The first option is the easiest way to recover; however, it also requires more administration of LUNs. This recovery option utilizes a 1:1 ratio of virtual machines to LUNs on the Dell Compellent SAN, and allows for easy recovery of volumes/virtual machines to the XenServer by creating a local recovery view of the volume in Storage Center. Prior to mapping the Replay to the XenServer(s), remove the mapping to the original volume. Since the Replay has the same UUID as the original volume, XenServer will reattach to the volume just as if it were the original. The following process details how to recover a virtual machine to a previous state using the 1:1 mapping of virtual machines to LUNs. The Dell Compellent system does not limit the number of LUNs that can be created; however, server HBAs usually have a limit of 256 LUNs per server.

Recovery Scenario

A XenServer pool contains two servers, XenServer6P1S1 and XenServer6P1S2. All servers are connected to the Dell Compellent Storage Center using Fibre Channel and zoned accordingly. The Dell Compellent Storage Center has been set up to take hourly Replays of the volume running one virtual machine named w2k8-Xen6.
1. As shown below, a volume is created on the Dell Compellent system and named Xen6_P1_SR2. Also note the Replay of this volume created at 08:30:00 PM. Replays can be generated manually through the Storage Center console or automatically by utilizing the Replay scheduler.

Figure 42, Compellent Replays

2. The figure below depicts the VM named w2k8-Xen6 running.

Figure 43, w2k8-Xen6 Online

In this example a catastrophe strikes w2k8-Xen6, rendering it unbootable. By using Dell Compellent Replays the server can be quickly recovered to the time of the last snapshot.

3. Verify the VM is shut down in the XenServer console.
4. Highlight the Xen6_P1_SR2 volume hosting w2k8-Xen6 and select Forget Storage Repository to remove this volume from the XenServer pool.

Figure 44, Forget SR

5. Go to the Dell Compellent Storage Center console and highlight the volume containing the VM. In this example this is the Xen6_P1_SR2 volume.
6. Select the Mapping button.

Figure 45, Volume Mapping

7. Note the LUN number for the mapping.
8. Highlight each of the mappings listed individually and select the Remove Mapping button.
9. Select Yes on the "Are you sure" screen.
10. Select Yes (Remove Now) on the Warnings screen.
11. Repeat until all mappings are removed from the volume.

Figure 46, Remove Mappings

12. With the volume in question selected in the Dell Compellent Storage Center console, click the Replays button. Right click on the Replay to recover to and select Create Volume from Replay. In this example it is the Replay dated 09/10 at 08:30:00 PM.

Figure 47, Local Recovery

13. On the Create Volume from Replay screen, enter an appropriate name for the Replay volume and select the Create Now button.
14. On the Map Volume to Server screen, select one of the appropriate servers in the pool to map the view volume to, then select Continue.
15. On the Advanced options screen, enter the appropriate LUN number, then select Continue. In this example LUN 2 is used, as that was the original volume number.
16. When completed, select Create Now.
17. This procedure mapped the volume to only one server; if more mappings are required, select the Mappings button and add the appropriate mappings to the volume to represent all the

servers in the XenServer pool. In the example below, the servers XenServer6P1S1 and XenServer6P1S2 are both added to the new view volume.

Figure 48, Volume Mappings

18. Return to the XenCenter console, right click on the pool, and select New Storage Repository.

Figure 49, New SR

19. Select the appropriate type of storage for the volume, then select Next. In this example it is a FC connection, so Hardware HBA should be selected.

Figure 50, SR Type

20. On the "Select the LUN to reattach or create a new SR on" screen, select the appropriate volume, name it accordingly, then select Finish.

Figure 51, Select LUN

21. A message should appear asking if the SR should be reattached, formatted, or canceled. Select Reattach.

Figure 52, Reattach SR

22. With the Replay of the SR now attached to the pool, the virtual disk can be mapped to the virtual machine. From XenCenter, highlight the server to be recovered, then select the Storage tab. Notice that the server doesn't have any disks associated with it.
23. Click the Attach button to associate a disk with the VM.

Figure 53, Attach Disk

24. Expand the recovered SR, select the appropriate disk, and click Attach.

Figure 54, Select Disk

25. The virtual machine can now be started in the same state it was in at the time of the last Replay. In this example the last Replay was taken at 8:30 PM.

Figure 55, Start VM

26. If satisfied with the result, the original volume can be coalesced into the new view volume by following the remaining steps.

CAUTION: Coalescing the original volume into the view volume will destroy the original volume.

27. Highlight the original volume, right click on it, and choose Delete.

Figure 56, Delete Volume

28. Confirm the action by clicking Yes to move the volume to the Recycle Bin.
29. To completely remove the volume from the system, delete the volume from the Recycle Bin by expanding the Recycle Bin, right clicking on the volume, and choosing Delete.

Figure 57, Delete Volume from Recycle Bin

30. Confirm the delete by clicking Yes.
31. The original volume is now removed, leaving the recovery volume as the primary volume. Once the associated Replays of the view volume are expired, they will be coalesced into the volume as shown below.

Figure 58, Volume with Replays Associated

Figure 59, Replay Coalescing

Figure 60, Coalescing Complete

Recovery Option 2: Recovery Server

The second option available for recovering virtual machines with Dell Compellent Replays is using a standalone recovery XenServer. This option is useful when multiple virtual machines are hosted on each SR, as it allows recovery of one VM to a recovery server utilizing Dell Compellent Replays. As mentioned earlier, a limitation prevents mounting the Replay on the same XenServer or pool, because the UUIDs associated with the disks will conflict. Adding a separate standalone XenServer recovery server allows administrators to map the recovery volume to the recovery server and attach the SR. A new virtual machine can then be created and mapped to the appropriate virtual disk. The recovered virtual machines can then be exported and imported back into the production system. Below is a step-by-step guide on recovering virtual machines to a standalone XenServer or a remote DR-site XenServer.

Recovery Scenario

A XenServer pool contains two servers, XenServer6P1S1 and XenServer6P1S2, plus a standalone (recovery) XenServer named XenRecovery. All servers are connected to the Dell Compellent Storage Center using Fibre Channel and are already zoned accordingly. A Replay is created on the volume Xen6_P1_SR2.

1. From the Dell Compellent Storage Center console, select the volume to recover and click the Replays button.

Figure 61, Volume Replays

2. Right click on the Replay to recover to and select Create Volume from Replay. In the example below, the Replay dated 09/11 is used.

Figure 62, Local Recovery

3. On the Create Volume from Replay screen, enter an appropriate name for the Replay volume and click the Create Now button.
4. On the Select a Server to Map screen, select one of the recovery servers to map the view volume to, then click Continue.
5. In the Map Volume to Server advanced options, enter the appropriate LUN numbers for the server port. If mapping to multiple servers, set each mapping to the same LUN number; in this example LUN 12 is used. Click Create Now. When mapping to multiple servers in a pool, use the Storage Center Cluster Server object, which will create the mapping to all servers with the same LUN number.

Figure 63, LUN Number

6. The next step after mapping the storage to the recovery XenServer is to add the Storage Repository to the recovery server. A separate copy of XenCenter must be used, or the original pool must first be removed from the console: XenCenter will not allow the addition of this Storage Repository to the recovery server if it sees that volume mapped elsewhere.

Figure 64, XenCenter Console

7. From XenCenter, right click on the recovery XenServer and select New Storage Repository.

Figure 65, New SR

8. Select the appropriate storage type and click Next.

Figure 66, Select Disk Type

9. Enter a name for the new SR and click Next.

Figure 67, Enter SR Name

10. Select the recovered LUN, name it, and click Finish.

Figure 68, Select Recovery LUN

11. A warning message should appear stating that an existing SR was found on the selected LUN. Click Reattach.

Figure 69, Reattach SR

12. Now that the SR has been added to the recovery server, the process of recovering the VMs can be started. The next step is to create a new virtual machine as a placeholder.
13. Right click on the recovery XenServer and choose New VM.

Figure 70, New Virtual Machine

14. Select the appropriate template for the server, then click Next.

Figure 71, OS Template

15. Enter a name for the server, then click Next. Typically the actual server name of the VM being recovered is used.

Figure 72, Virtual Machine Name

16. Click Next on the "Locate the operating system installation media" screen.

Figure 73, Installation Media

17. Click Next at the "Select a home server" screen.

Figure 74, Select VM Home Server

18. Enter the appropriate number of vCPUs and amount of memory, then click Next.

Figure 75, Size CPU and Memory

19. On the "Enter the information about the virtual disks for the new virtual machine" screen, select a location to store a temporary virtual disk, then click Next. Typically it is best to store the temporary disk on an SR that isn't being used for recovery.

Figure 76, Temporary SR Disk Location

20. On the "Add or remove virtual network interfaces" screen, click Add, select the appropriate network, then click Next.

Figure 77, Select Network

21. On the "Virtual machine configuration is complete" screen, uncheck Start VM automatically and click Finish.

Figure 78, Uncheck Start VM Automatically

22. From the XenCenter console, select the newly created VM, then select the Storage tab.
23. Highlight the virtual disk temporarily attached to the VM and select Delete or Detach. Since this disk contains no information, it is OK to delete it.

Figure 79, Detach Disk

24. Click Yes at the "Delete system disk" message.

Figure 80, Delete System Disk

25. Once the temporary disk is deleted, click the Attach button to select the original disk from the recovered volume. Expand the recovered LUN and select the appropriate disk to attach.

Figure 81, Attach Disk

NOTE: If there are multiple disks in the Storage Repository with no name, it may take some trial and error to connect to the correct disk. Use the Storage tab to detach and reattach disks until the correct one is selected. Restoring the metadata will prevent this issue. If a virtual machine metadata backup has been taken on the volume, use the procedure outlined in the VM Metadata Backup and Recovery section to recover the names.

From this point the VM can be started, exported, copied, etc. Typically the VM would be exported and imported back into the production pool.

Dynamic Capacity

Dynamic Capacity Overview

Dell Compellent's thin provisioning, called Dynamic Capacity, delivers the highest storage utilization possible by eliminating allocated but unused capacity. Dynamic Capacity completely separates storage allocation from utilization, enabling users to allocate any size virtual volume upfront yet consume physical capacity only when data is written by the application.

Dynamic Capacity with XenServer

When XenServer is connected to Dell Compellent storage via iSCSI or Fibre Channel, the Storage Repository is created as an LVM (Linux Volume Manager) repository. When the volume is created on the Dell Compellent system, by default the newly created volume consumes zero space. Only when data is written to the volume is space acquired, and only the written space is consumed.

Data Progression

Data Progression on XenServer

The foundation of Dell Compellent's Automated Tiered Storage patent is its unique Dynamic Block Architecture. Storage Center records and tracks specific information about blocks of data, including time written, time accessed, frequency of access, associated volume, RAID level, and more. Data Progression utilizes all of this metadata (data about the data) to automatically migrate blocks of data to the optimum storage tier based on usage and performance, unlike traditional systems that move entire files.

Figure 82, Data Progression

Data Progression automatically classifies and migrates data to the optimum tier of storage, retaining frequently accessed data on high performance storage and storing infrequently accessed data on lower cost storage. XenServer, like other virtualization hypervisors, will contain virtual machines running Windows, Linux, or other operating systems that hold stagnant data, data that is read frequently, and heavy read/write data such as transaction logs and pagefiles.

Take a virtual machine running a file server, for example. A user copies a new file to the file server, and the Dell Compellent system writes the data instantly to Tier 1, RAID 10. The longer the file sits without any reads or writes, the further the blocks of data that make up the file transition down the tiering structure until they reach Tier 3, RAID 5. Typically less than 20% of data on the file server is accessed frequently. The Dell Compellent system is optimized to automatically move this data between tiers without any assistance; in a typical storage solution, an administrator would have to manually move files from one tier to another. This equates to cost savings by storing static data on low-cost, high-capacity disks and by eliminating the need to manage data manually. Only data that is required to be on Tier 1 storage will remain on that tier.

Boot from SAN

In some cases, such as with blade servers that do not have internal disk drives, booting from SAN is the only option, but many XenServers have internal mirrored drives, giving administrators the flexibility to choose whether to boot from SAN or from local disks. Booting from SAN allows administrators to take Replays of the boot volume, replicate it to a DR site, and recover quickly to other identical hardware if that XenServer fails. However, there are also benefits to booting from local disks and locating the virtual machines on SAN resources. Since it only takes about 30 minutes to install and patch a XenServer, booting from local disks ensures the server will stay online if there is a need to do maintenance on Fibre Channel switches, Ethernet switches, or the SAN itself. The other advantage of booting from local disks is that this configuration does not require iSCSI or FC HBAs: the XenServer can boot from local disk and use the iSCSI software initiator to connect to shared storage on the SAN.

VM Metadata Backup and Recovery

The metadata for a VM contains information about the VM (such as the name, description, and Universally Unique Identifier (UUID)), the VM configuration (such as the amount of virtual memory and the number of virtual CPUs), and information about the use of resources on the host or resource pool (such as virtual networks, Storage Repositories, ISO libraries, and so on). Most metadata configuration data is written when the VM is created and is updated when changes to the VM configuration are made. Adding a metadata export command to the change-control checklist will ensure that this information is available if needed.

NOTE: Without the metadata backup, the names and descriptions of files on the SR may not be available for a recovery, making recovery a difficult process.

Figure 83, Conceptual Overview of XenServer Disaster Recovery

Backing Up VM Metadata

In XenServer, exporting or importing metadata can be done from the text-based console menu. On the physical console the menu is loaded by default. To start the console menu through the host console screen in XenCenter, type xsconsole at the command line.

Figure 84, Backup, Restore and Update Screen

To export the VM metadata:

1. Select Backup, Restore and Update from the menu.
2. Select Backup Virtual Machine Metadata.
3. If prompted, log on with root credentials.
4. Select the Storage Repository where the desired VMs are stored.
5. After the metadata backup is done, verify the successful completion on the summary screen.
6. In XenCenter, on the Storage tab of the SR selected in step 4, a new VDI should be created, named Pool Metadata Backup.

Figure 85, Backup Summary Screen
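For scripted or change-controlled backups, a hedged CLI alternative to the interactive xsconsole menu is to dump the pool database, which contains the VM metadata, to a file; the file paths below are illustrative only.

```shell
# Dump the pool database (VM metadata included) to a dated file.
xe pool-dump-database file-name=/var/backup/pool-metadata-$(date +%Y%m%d).db

# Alternatively, export a single VM's metadata without its disk contents.
xe vm-export uuid=<vm-uuid> metadata=true \
    filename=/var/backup/<vm-name>-metadata.xva
```

A database dump can later be restored with xe pool-restore-database on a replacement pool master.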

Another option available from the console menu is Schedule Virtual Machine Metadata. This option allows automated exports of metadata on a daily, weekly, or monthly basis. By default this option is disabled.

Importing VM Metadata

A prerequisite for running the import command in a DR environment is that the Storage Repositories where the replicated virtual disk images are located must be set up and re-attached to a XenServer. Also make sure that the Virtual Networks are set up correctly by using the same names in the production and DR environments. After the SR is attached, the metadata backup can be restored. From the console menu:

1. Select Backup, Restore and Update from the menu.
2. Select Restore Virtual Machine Metadata.
3. If prompted, log on with root credentials.
4. Select the Storage Repository to restore from.
5. Select the metadata backup to restore.
6. Select whether to restore only VMs on this SR or all VMs in the pool.
7. After the metadata restore is done, review the summary screen and check for errors.
8. The VMs are now available in XenCenter and can be started at the new site.

Figure 86, Metadata Restore Summary
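The same restore can be sketched from the command line. This assumes the xe-restore-metadata helper present on XenServer 6.x hosts, together with its -u (SR UUID) and -n (dry-run) flags; verify the flags on your host before relying on them.

```shell
#!/bin/sh
# Hedged sketch of the console menu's CLI counterpart, assuming the
# xe-restore-metadata helper shipped with XenServer 6.x and its -u
# (SR UUID) and -n (dry-run) flags. Check the helper's usage output
# on your own host first; this is illustrative, not authoritative.

build_restore_cmd() {
    # $1 = UUID of the SR holding the Pool Metadata Backup VDI
    echo "xe-restore-metadata -u $1"
}

if command -v xe-restore-metadata >/dev/null 2>&1; then
    # Dry run first as a safety check, then the real restore.
    xe-restore-metadata -n -u "$1"
    eval "$(build_restore_cmd "$1")"
fi
```

A dry run before the real restore mirrors the console menu's summary screen: it shows what would be recovered before anything is changed.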

Disaster Recovery

XenServer 6 provides the enterprise with functionality designed to recover data from a catastrophic failure of hardware that disables or destroys a whole pool or site. The XenServer 6 Disaster Recovery feature provides the mechanism to back up services and applications, while Dell Compellent replication technology provides a means to make this data available at a remote site. Together they provide a high availability solution for mission-critical services and applications.

This functionality is extended with XenServer Virtual Appliance (vApp) technology. A vApp is a logical group of one or more related VMs which can be started as a single entity in the event of a disaster. When a vApp is started, the VMs contained within it are started in a predefined order, relieving the administrator from manually starting servers. The vApp functionality is most useful in DR situations where all VMs in a vApp reside on the same Storage Repository.

NOTE: XenServer Disaster Recovery can only be enabled when using LVM over FC, LVM over iSCSI HBA, or software iSCSI. A small amount of space is required on the storage for a new LUN which will contain the pool recovery information.

Replication Overview

XenServer Disaster Recovery takes advantage of Dell Compellent's replication technology to provide high availability. Dell Compellent replicates volumes in one direction: in a DR scenario, data is replicated from the primary site to the secondary site. By default, Dell Compellent replication is not bidirectional; therefore it is not possible to XenMotion between the source Storage Center (the primary site) and the destination Storage Center (the secondary site) unless Dell Compellent Live Volume is used for replication. The following best practice recommendations for replication and remote recovery should be considered:

Compatible XenServer hardware and OS versions are required at the DR site to map replicated volumes to in the event the main XenServer Pool becomes inoperable.
Since replicated volumes can contain more than one virtual machine, it is recommended to sort virtual machines into specific replicated and non-replicated Storage Repositories. For example, if there are 30 virtual machines in the XenServer Pool and only eight of them need to be replicated to the DR site, a dedicated "Replicated" volume should be created to hold those eight virtual machines, or a 1:1 mapping of VMs to volumes can be used so that only the required VMs are replicated.

Take advantage of the Storage Center QoS settings to prioritize the replication bandwidth of "mission critical" volumes. For example, two QoS definitions could be created so that the mission-critical volume gets 80 Mb of the bandwidth and the lower-priority volume gets 20 Mb.

The following steps should be taken in preparation for a disaster:

Configure the VMs and vApps. Note how the VMs and vApps map to the SRs and the SRs to volumes.
Verify that the name-label and name-description fields are meaningful and will allow an administrator to recognize the SR after a disaster.
Configure replication of the SR volume.

After the VMs and vApps have been configured, the volumes can be replicated to the secondary DR site. This process is simplified with the Dell Compellent Enterprise Manager (EM) GUI. In the example below, an SR volume that resides on a Storage Center named SC13 at the primary location is replicated to a Storage Center named SC12 at the secondary location. The Dell Compellent Enterprise Manager User Guide outlines the steps necessary to configure replication.

Figure 87, Enterprise Manager Replication

Disaster recovery can be configured once replication is set up and all data has been replicated to the secondary site. Follow the steps below to configure Disaster Recovery.

NOTE: The examples below serve as a reference for the requirements of configuring XenServer DR with Dell Compellent Storage Center. For complete information on configuring and testing XenServer DR, consult the Citrix XenServer 6.0 Administrator's Guide.

1. Select the pool at the primary site that will be protected, go to the Pool menu, select Disaster Recovery, and then select Configure. This opens the DR configuration window.

Figure 88, Select DR Pool

2. Select the Storage Repositories that will be protected with XenServer DR and click OK to finish.

XenServer DR is now configured on the volume and ready to be tested.

Test XenServer Disaster Recovery

The process below tests the configuration of XenServer DR, the replication, and the configuration of the pool at the secondary site. The process uses a Storage Center View Volume for testing. The View Volume is created at the secondary site and mapped to that pool for the DR test. Using View Volumes allows the test to be performed without interrupting replication between the two sites. The steps below outline the process of testing XenServer DR with Dell Compellent.

NOTE: Be sure the most recent information, including a Replay taken after DR was configured, has been replicated to the secondary site before testing DR failover.

1. Create a View Volume of the replicated volume at the secondary site by right-clicking the most recent Replay and selecting Create Volume from Replay.

Figure 89, Create View Volume for DR Test

2. Map the new View Volume to the servers in the recovery pool.
3. After the View Volume has been created and mapped, run the Disaster Recovery wizard by selecting the recovery pool in XenCenter, going to Pool, Disaster Recovery, and selecting Disaster Recovery Wizard.
4. On the Disaster Recovery Wizard window, select Test Failover and click Next.

Figure 90, Disaster Recovery Failover Test

5. Read the message on the Before You Start screen and click Next to reach the Locate Mirrored SRs screen. From the Find Storage Repositories dropdown box, select the type of mapping used to connect the servers in the pool to the View Volume, either HBA or software iSCSI.

NOTE: Only iSCSI and FC HBAs and software iSCSI are available for the XenServer DR feature.

Figure 91, Locate Mirrored SR

6. Select the SR to test and click Next to continue. XenServer will mount the SR and discover the VMs and vApps on the volume.
7. On the next screen, select the VMs and vApps to be tested. Also select the desired power state after recovery. Click Next to continue.

Figure 92, Select VMs and vApps to Test

8. The Disaster Recovery Wizard checks prerequisites on the next screen. Once the failover pre-checks have finished, click the Fail Over button to continue the test. The test may take some time depending on the number of VMs involved. During this time, the VMs and vApps that were selected in the previous step are created in the secondary pool and, if that option was selected, started.

Figure 93, Failover Test Progress

9. The progress screen shows the status of the DR process.
10. Clicking Next displays the summary of the test. The VMs and vApps are then removed from the pool, as is the replicated volume.
11. Clicking Finish at the Summary of Test Failover screen concludes the test.

Recovering from a Disaster

The steps to recover from a disaster are similar to testing a failover, with a few exceptions. Below are the steps to take when recovering from a disaster:

Break replication between the primary and secondary sites.
Shut down the VMs and vApps at the primary site if they are still running.
Ensure that the recovery volume at the secondary site is not attached to any other pool. If the volume is attached to multiple pools, data corruption may occur.

There are two options to prepare the volume at the secondary Storage Center for a failover. The first option is to create a View Volume and map it to the servers in the recovery pool. This is the same process as outlined in the failover test above and is the preferred method for recovering from a disaster. The second option is to remove replication and mount the replicated volume. This can be done by removing the replication in Enterprise Manager and adding mappings to the servers in the recovery pool at the secondary site.

1. To remove replication, go to Replications in EM and select the source Storage Center.
2. Right-click the volume and select Delete. This brings up the Delete Replication screen.

Figure 94, Delete Replication in EM

3. Be sure that Put Destination Volume in the Recycle Bin is NOT selected and click OK.
4. Alternatively, if the Storage Center at the source site is not available, the source Storage Center mappings can be removed from the Mapping tab under the destination volume's properties. This prevents replication to the volume if the source comes back online.

Figure 95, Remove Source Mapping

Once the replication has stopped, the volume at the secondary site can be mapped to the servers in the recovery pool.

1. To begin the failover process, select the recovery pool, go to Pool, Disaster Recovery Wizard, and select Failover on the Welcome screen.

Figure 96, Disaster Recovery Failover

2. Click Next on the Before You Start screen and use the Find Storage Repositories dropdown to locate the recovery SR that was mapped to the servers in a previous step. Repeat this process for each SR to be recovered. Click Next when finished.

Figure 97, Select Mirrored SR

3. Select the VMs and vApps that are to be recovered. Select the appropriate Power State after Recovery option and click Next.

Figure 98, Select vApps and VMs to Fail Over

4. Resolve any pre-check errors and click Fail Over to begin the failover process. This may take some time depending on the number of VMs and vApps to be recovered.

Figure 99, DR Failover Progress

5. Once the DR process has completed, a summary page displays the status of each vApp and VM. Click Finish to exit the wizard.

Replication Based Disaster Recovery

Citrix XenServer 6.0 introduces the automated DR option outlined above. This Disaster Recovery tool is available only for environments with a Platinum software subscription. For users without a Platinum subscription, there is still a semi-automated option for recovering virtual machines at a DR site. This method leverages the virtual machine metadata backup and Dell Compellent replication to make VMs available at a recovery site. The basic steps involved are:

1. Configure replication of the protected volumes from the primary site to the secondary site.
2. Back up the VM metadata to each replicated LUN.
3. Create a local recovery View Volume from a Replay on the replicated XenServer SR volume.
4. Map that View Volume to the XenServer host(s).
5. Add the Storage Repository to the XenServer host(s).
6. Restore the virtual machine metadata.

Disaster Recovery Replication Example

This scenario steps through the recovery of a XenServer Pool and all its volumes on a remote DR server pool. It uses two Dell Compellent Storage Centers performing one-way replication from the source system to the destination system.

DR environment:

Primary site: Dell Compellent Storage Center SC13
One pool (Pool1) consisting of servers xenserver6p1s1 and xenserver6p1s2
One FC-connected volume labeled Xen6_P1_SR1
Virtual machine metadata has been backed up using the steps outlined in the VM Metadata Backup and Recovery section

Secondary site: Dell Compellent Storage Center SC12
One pool (Pool2) consisting of servers xenserver6p2s1 and xenserver6p2s2
FC-connected replicated volume

The figure below shows the Pool1 servers, the data store, and the VMs within the SR. Note also that the Pool Metadata Backup exists on the SR.
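Steps 4 and 5 above can be sketched from the CLI using the standard sr-introduce / pbd-create / pbd-plug sequence for reattaching an existing SR. All UUIDs, the SR label, and the device path below are placeholders.

```shell
#!/bin/sh
# Sketch: reattach a replicated SR at the DR site from the command line.
# The sr-introduce / pbd-create / pbd-plug sequence is the standard xe
# way to reattach an existing SR; every argument value here is a
# placeholder to be replaced with real UUIDs and the real device path.

reattach_sr() {
    sr_uuid="$1"; host_uuid="$2"; device="$3"
    xe sr-introduce uuid="$sr_uuid" type=lvmohba \
        name-label="Recovered SR" content-type=user
    pbd=$(xe pbd-create sr-uuid="$sr_uuid" host-uuid="$host_uuid" \
        device-config:device="$device")
    xe pbd-plug uuid="$pbd"
}

# Only run against a live pool where the xe CLI is present.
if command -v xe >/dev/null 2>&1; then
    reattach_sr "$1" "$2" "$3"
    # Then restore the VM metadata from the console menu (step 6 above).
fi
```

For a pool, pbd-create and pbd-plug would be repeated for each host UUID so every member sees the recovered SR.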

Figure 100, Primary Site Servers and SR

After the VMs and vApps have been configured, the volumes can be replicated to the secondary DR site. This process is simplified with Dell Compellent Enterprise Manager (EM). In the example below, the SR volume that resides on a Storage Center named SC13 at the primary location is replicated to a Storage Center named SC12 at the secondary location. The Dell Compellent Enterprise Manager User Guide outlines the steps necessary to configure replication between the Storage Centers.

Figure 101, Enterprise Manager Replication

Next, a disaster is simulated by removing the replication jobs between the primary and secondary Storage Centers in Enterprise Manager.

1. Replication can be removed in Enterprise Manager by going to Replications and selecting the source Storage Center. This lists the replications from that Storage Center.
2. Right-click the volume and select Delete. This brings up the Delete Replication screen. Be sure that Put Destination Volume in the Recycle Bin is NOT selected and click OK.

Figure 102, Delete Replication in EM

NOTE: A disaster test could also be done by simply creating a View Volume from one of the Replays on the DR Storage Center system. This process allows the testing of a DR plan to validate data at any time without disrupting replication.

3. Next, the servers at the secondary site are mapped to the volume. In this example, servers XenServer6P2S1 and XenServer6P2S2 are mapped to the volume Repl of Xen6_P1_SR1.

Figure 103, Server Mapping to the Recovery Volume

4. After the volume is mapped to the servers in Pool2 at the secondary site, it can be attached using the New Storage wizard in XenCenter. The figure below shows the storage attached to the secondary pool. The VM files are on the storage but not yet available in the pool.

Figure 104, Recovery Pool

5. To add the VMs to the recovery pool, the metadata must be restored using the XenServer console Backup, Restore and Update menu. It is important that the VM networks are named exactly the same in order for this to succeed.

Figure 105, VM Metadata Restored

6. After the metadata is restored, the VMs are available at the secondary site and can be started on the remote DR XenServer.

Figure 106, Recovered VMs

After the recovery to the secondary site, it may be necessary to fail back to the primary site. The failback process is the same as outlined above, except that the primary and secondary sites are swapped to reflect the VMs' new source and destination locations.

Live Volume Overview

Live Volume is a software option for Compellent Storage Center that builds upon the Fluid Data architecture. Live Volume enables non-disruptive data access and migration of data between two Storage Centers.

Figure 107, Live Volume Overview

Live Volume is a software-based solution integrated into the Dell Compellent Storage Center controllers. It is designed to operate in a production environment, allowing both Storage Centers to remain operational during volume migrations. Live Volume increases operational efficiency, reduces planned outages, and enables a site to avoid disruption during anticipated disasters. Live Volume provides these options:

Storage follows the application in virtualized server environments. Live Volume automatically migrates data as virtual applications are moved.
Zero-downtime maintenance for planned outages. Live Volume enables all data to be moved non-disruptively between Storage Centers, enabling a full planned site shutdown without downtime.
On-demand load balancing. Live Volume enables data to be relocated as desired to distribute workload between Storage Centers.
Stretched Microsoft, VMware, and XenServer volumes between geographically dispersed locations. Live Volume allows servers to see the same disk signature on the volume between datacenters, thereby allowing the volume to be clustered.

Live Volume is designed to fit into existing physical and virtual environments without disruption and without requiring extra hardware or changes to configurations or workflow. Physical and virtual servers see a consistent, unchanging virtual volume. All volume mapping is consistent and transparent before, during, and after migration. Live Volume can be run automatically or manually and is fully integrated into the Storage Center software environment.

Live Volume operates asynchronously and is designed for planned migrations where both Storage Centers are simultaneously available. A Live Volume can be created between two Dell Compellent Storage Centers residing in the same datacenter or between two well-connected datacenters. Using Dell Compellent Enterprise Manager, a Live Volume can be created from a new volume, an existing volume, or an existing replication. For more information on creating a Live Volume, see the Dell Compellent Enterprise Manager User Guide. For more information on best practices for Live Volume, see the Dell Compellent Storage Center Best Practices for Live Volume document on the Dell Compellent Knowledge Center portal.

Appendix 1 Troubleshooting

XenServer Pool FC Mapping Issue

Occasionally, when connecting an FC volume to a XenServer pool, the mapping is only made on the master node in the pool and is not connected on the additional nodes. This typically happens when attempting to attach the volume right after its creation. In most instances, waiting approximately one hour before mapping the volume will prevent this issue from occurring. The following section details the steps necessary to fix the missing connection when mapping a new SR to a XenServer pool without rebooting the hosts or moving the master. Notice in the figure below that the SR mapped to Pool1 is mapped correctly to the host server XenServer6P1S1 but not to the server XenServer6P1S2.

Figure 108, SR Mapping Broken

To resolve this issue, log on to the console of one of the XenServers in the pool and go to the local command shell. This can be done either from the console or from an SSH client such as PuTTY. At the command prompt, type xe host-list to obtain the list of all the servers in the pool and their associated UUIDs.

[root@XenServer6P1S1 ~]# xe host-list
uuid ( RO)             : 5cd5d2ed-b462-4eba-9761-d874b8e3e564
    name-label ( RW): XenServer6P1S1.techsol.local
    name-description ( RO): Default install of XenServer

uuid ( RO)             : be925e21-a95e-438d-8155-b98d09c26351
    name-label ( RW): XenServer6P1S2.techsol.local
    name-description ( RO): Default install of XenServer
[root@XenServer6P1S1 ~]#

1. Run the sr-probe command for each of the XenServer hosts not mapping the volume correctly. Type the following to probe a host:

xe sr-probe host-uuid=<uuid of server> type=lvmohba
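When the pool has many hosts, the UUIDs can be pulled out of that output with a small helper. This is a sketch; the parsing assumes the "uuid ( RO) :" record format shown above.

```shell
#!/bin/sh
# Helper sketch: extract just the UUIDs from `xe host-list` output so
# each one can be fed to sr-probe in turn. Assumes each record begins
# with a line of the form "uuid ( RO) : <uuid>".

extract_host_uuids() {
    # The UUID is the last whitespace-separated field on the uuid line.
    awk '/^uuid/ {print $NF}'
}

# Only query a live pool when the xe CLI is present.
if command -v xe >/dev/null 2>&1; then
    xe host-list | extract_host_uuids
fi
```

The same filter works for any xe listing that uses the uuid-record format, such as xe pbd-list or xe sr-list.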

[root@XenServer6P1S1 ~]# xe sr-probe host-uuid=5cd5d2ed-b462-4eba-9761-d874b8e3e564 type=lvmohba

2. Once the sr-probe command has completed for all the hosts, the SR can be repaired by right-clicking the SR in the XenCenter console and selecting Repair Storage Repository.

Figure 109, Repair Storage Repositories

3. Click the Repair button.
4. When the repair is complete, all nodes should report back as Connected.

Figure 110, Repaired SR

Starting Software iSCSI

Software iSCSI may need to be started manually when iSCSI commands are run on the server and the following errors occur:
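The per-host probing in step 1 can be looped over the whole pool in one pass. This sketch uses xe host-list --minimal, which prints a comma-separated list of host UUIDs.

```shell
#!/bin/sh
# Sketch: run the lvmohba sr-probe against every host in the pool, so no
# node is missed before repairing the SR from XenCenter.

probe_all_hosts() {
    # `xe host-list --minimal` prints UUIDs separated by commas.
    for uuid in $(xe host-list --minimal | tr ',' ' '); do
        echo "Probing host $uuid"
        xe sr-probe host-uuid="$uuid" type=lvmohba >/dev/null 2>&1 || true
    done
}

# Only run against a live pool where the xe CLI is present.
if command -v xe >/dev/null 2>&1; then
    probe_all_hosts
    # Afterwards, repair the SR from XenCenter as described in step 2.
fi
```

Probing every host first means a single Repair Storage Repository pass in XenCenter is usually enough.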

Cannot perform discovery. Initiatorname required.
iscsid is not running. Could not start up automatically using the startup command.

Two Ways to Start iSCSI

1. Through the GUI: The process of adding storage with XenCenter will start iSCSI on the server. In XenCenter, select the host and go to New Storage > Software iSCSI > enter the control port in Target Host > Discover IQNs > Discover LUNs. The Discover LUNs step starts iSCSI on the host. Cancel out of the Add Storage wizard to quit without making changes.

2. Through the command line: Start the iSCSI service:

service open-iscsi start

Then run sr-probe:

xe sr-probe type=lvmoiscsi device-config:target=x.x.x.x

where x.x.x.x is the Storage Center control port.

Software iSCSI Fails to Start at Server Boot

See Citrix document CTX to configure auto start. These steps should be run after iSCSI has been started and has scanned the Storage Center. Scanning the Storage Center creates the node IQN files on the server that are referenced in the Citrix document.

Wildcard Doesn't Return All Volumes

When adding storage using the wildcard option, an incomplete list of volumes mapped to the server may be returned. In this situation, XenCenter only scans the first IP address listed in the Target Host field, resulting in an incomplete listing of target LUNs. This is a known issue with the XenCenter GUI. To work around it, cycle through the Storage Center control ports in the Target Host field. Be sure to always use the (*) wildcard Target IQN when discovering LUNs. This is a GUI issue and will not affect multipathing.

Caution: Issues have been identified with the Citrix implementation of multipathing and Storage Center in virtual port mode. It is strongly recommended to use iSCSI HBAs when implementing XenServer with Storage Center in virtual port mode.
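The two command-line steps above can be combined into one guarded sketch; the control-port address used here is a placeholder.

```shell
#!/bin/sh
# Sketch: start open-iscsi, then probe through a Storage Center control
# port to kick off discovery. The 10.0.0.10 address is a placeholder for
# a real control-port IP.

build_probe_cmd() {
    # $1 = Storage Center control port IP
    echo "xe sr-probe type=lvmoiscsi device-config:target=$1"
}

# Only run on a host where both the service wrapper and xe exist.
if command -v service >/dev/null 2>&1 && command -v xe >/dev/null 2>&1; then
    service open-iscsi start
    eval "$(build_probe_cmd 10.0.0.10)"   # placeholder control port
fi
```

Wrapping the probe in a helper makes it easy to cycle through each control port, which also works around the wildcard issue described below.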

Figure 111, Finding Target LUNs

View Multipath Status

Use the iscsiadm -m session command to view the active software iSCSI sessions on the server. Use the mpathutil status command to view the status of multipathing.

If only one path is showing in a multipath environment, typically after a reboot:

Figure 112, Multipath Issue

Run the iscsiadm -m node --login command to force the iSCSI software initiator to connect on both paths.

Figure 113, Multipath Active

Following the steps outlined in Citrix document CTX may resolve this issue.

XenCenter GUI Displays Multipathing Incorrectly

If the GUI does not display multipathing information correctly, restart the multipath service with this command:

service multipathd restart

Next, run the Python script to update XenCenter:

/opt/xensource/sm/mpathcount.py
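A post-reboot check along these lines can detect the single-path condition and force the re-login automatically. The expected path count of 2 is an example value for a dual-path setup.

```shell
#!/bin/sh
# Sketch: count active iSCSI sessions and re-login if fewer paths are up
# than expected, automating the manual fix described above. EXPECTED_PATHS
# is an example value; set it to the number of configured paths.

needs_relogin() {
    # $1 = active session count, $2 = expected path count
    [ "$1" -lt "$2" ]
}

EXPECTED_PATHS=2
if command -v iscsiadm >/dev/null 2>&1; then
    active=$(iscsiadm -m session 2>/dev/null | wc -l)
    if needs_relogin "$active" "$EXPECTED_PATHS"; then
        iscsiadm -m node --login || true   # ignore failures off-host
    fi
fi
```

Running such a check from an init or cron hook keeps both paths active without waiting for an administrator to notice the degraded state.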

Connectivity Issues with a Fibre Channel Storage Repository

If there are issues with a Fibre Channel SR, first identify the host UUID with this command:

xe host-list

Next, probe the SR using this command:

xe sr-probe host-uuid=<uuid of server> type=lvmohba

Then, in XenCenter, select the server, right-click the SR, and select the Repair Storage Repository option.

Figure 114, Repair Storage Repository

More information

istorage Server: High-Availability iscsi SAN for Citrix Xen Server

istorage Server: High-Availability iscsi SAN for Citrix Xen Server istorage Server: High-Availability iscsi SAN for Citrix Xen Server Wednesday, Nov 21, 2013 KernSafe Technologies, Inc. www.kernsafe.com Copyright KernSafe Technologies 2006-2013. All right reserved. Table

More information

Citrix 1Y0-A26. Citrix XenServer 6.0 Administration. Download Full Version :

Citrix 1Y0-A26. Citrix XenServer 6.0 Administration. Download Full Version : Citrix 1Y0-A26 Citrix XenServer 6.0 Administration Download Full Version : https://killexams.com/pass4sure/exam-detail/1y0-a26 QUESTION: 107 Scenario: An administrator built four new hosts in an existing

More information

StorTrends - Citrix. Introduction. Getting Started: Setup Guide

StorTrends - Citrix. Introduction. Getting Started: Setup Guide StorTrends - Citrix Setup Guide Introduction This guide is to assist in configuring a Citrix virtualization environment with a StorTrends SAN array. It is intended for the virtualization and SAN administrator

More information

Dell PowerVault MD3600i and MD3620i Storage Arrays. Deployment Guide

Dell PowerVault MD3600i and MD3620i Storage Arrays. Deployment Guide Dell PowerVault MD3600i and MD3620i Storage Arrays Deployment Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION:

More information

Dell EMC SC Series Storage: Microsoft Multipath I/O

Dell EMC SC Series Storage: Microsoft Multipath I/O Dell EMC SC Series Storage: Microsoft Multipath I/O Dell EMC Engineering June 2017 A Dell EMC Best Practices Guide Revisions Date Description 10/11/2010 Initial release 10/21/2011 Corrected errors 11/29/2011

More information

Configuring Direct-Connect between a DR Series System and Backup Media Server

Configuring Direct-Connect between a DR Series System and Backup Media Server Configuring Direct-Connect between a DR Series System and Backup Media Server Dell Engineering October 2014 A Dell Technical White Paper Revisions Date October 2014 Description Initial release THIS WHITE

More information

Dell Compellent Volume Expansion with Solaris 10 UFS. Technical Tip

Dell Compellent Volume Expansion with Solaris 10 UFS. Technical Tip Dell Compellent Volume Expansion with Solaris 10 UFS Technical Tip Page 2 Document revision Date Revision Comments 8/30/2011 A Initial Draft 10/14/2011 B Fixed Some Typos THIS TECHNICAL TIP IS FOR INFORMATIONAL

More information

XenServer Administrator's Guide

XenServer Administrator's Guide XenServer Administrator's Guide 5.0.0 Published September 2008 1.0 Edition XenServer Administrator's Guide: Release 5.0.0 Published September 2008 Copyright 2008 Citrix Systems, Inc. Xen, Citrix, XenServer,

More information

Configuring and Managing Virtual Storage

Configuring and Managing Virtual Storage Configuring and Managing Virtual Storage Module 6 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks

More information

Citrix XenServer 7.3 Quick Start Guide. Published December Edition

Citrix XenServer 7.3 Quick Start Guide. Published December Edition Citrix XenServer 7.3 Quick Start Guide Published December 2017 1.0 Edition Citrix XenServer 7.3 Quick Start Guide 1999-2017 Citrix Systems, Inc. All Rights Reserved. Version: 7.3 Citrix Systems, Inc. 851

More information

Dell PowerVault NX1950 configuration guide for VMware ESX Server software

Dell PowerVault NX1950 configuration guide for VMware ESX Server software Dell PowerVault NX1950 configuration guide for VMware ESX Server software January 2008 Dell Global Solutions Engineering www.dell.com/vmware Dell Inc. 1 Table of Contents 1. Introduction... 3 2. Architectural

More information

Dell Storage vsphere Web Client Plugin. Version 4.0 Administrator s Guide

Dell Storage vsphere Web Client Plugin. Version 4.0 Administrator s Guide Dell Storage vsphere Web Client Plugin Version 4.0 Administrator s Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION:

More information

Compellent Storage Center

Compellent Storage Center How To Setup a Microsoft Windows Server 2003 Failover Cluster Compellent Corporate Office Compellent Technologies 7625 Smetana Lane Eden Prairie, Minnesota 55344 www.compellent.com Contents Contents...

More information

VMware Infrastructure Update 1 for Dell PowerEdge Systems. Deployment Guide. support.dell.com

VMware Infrastructure Update 1 for Dell PowerEdge Systems. Deployment Guide.   support.dell.com VMware Infrastructure 3.0.2 Update 1 for Dell PowerEdge Systems Deployment Guide www.dell.com support.dell.com Notes and Notices NOTE: A NOTE indicates important information that helps you make better

More information

A Dell Technical White Paper Dell PowerVault MD32X0, MD32X0i, and MD36X0i

A Dell Technical White Paper Dell PowerVault MD32X0, MD32X0i, and MD36X0i Microsoft Hyper-V Implementation Guide for Dell PowerVault MD Series Storage Arrays A Dell Technical White Paper Dell PowerVault MD32X0, MD32X0i, and MD36X0i THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

Deployment of VMware Infrastructure 3 on Dell PowerEdge Blade Servers

Deployment of VMware Infrastructure 3 on Dell PowerEdge Blade Servers Deployment of VMware Infrastructure 3 on Dell PowerEdge Blade Servers The purpose of this document is to provide best practices for deploying VMware Infrastructure 3.x on Dell PowerEdge Blade Servers.

More information

Setup for Microsoft Cluster Service Update 1 Release for ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5

Setup for Microsoft Cluster Service Update 1 Release for ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5 Setup for Microsoft Cluster Service Update 1 Release for ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5 Setup for Microsoft Cluster Service Setup for Microsoft Cluster Service Revision: 041108

More information

Setup for Failover Clustering and Microsoft Cluster Service

Setup for Failover Clustering and Microsoft Cluster Service Setup for Failover Clustering and Microsoft Cluster Service Update 1 ESXi 5.0 vcenter Server 5.0 This document supports the version of each product listed and supports all subsequent versions until the

More information

version 5.4 Installation Guide

version 5.4 Installation Guide version 5.4 Installation Guide Document Release Date: February 9, 2012 www.phdvirtual.com Legal Notices PHD Virtual Backup for Citrix XenServer Installation Guide Copyright 2010-2012 PHD Virtual Technologies

More information

Using SANDeploy iscsi SAN for Citrix XenServer

Using SANDeploy iscsi SAN for Citrix XenServer Using SANDeploy iscsi SAN for Citrix XenServer Friday, October 8, 2010 www.sandeploy.com Copyright SANDeploy Limited 2008 2011. All right reserved. Table of Contents Preparing SANDeploy Storage... 4 Create

More information

Dell Storage Compellent Integration Tools for VMware

Dell Storage Compellent Integration Tools for VMware Dell Storage Compellent Integration Tools for VMware Version 4.0 Administrator s Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your

More information

Microsoft Technical Solutions

Microsoft Technical Solutions Microsoft Technical Solutions How To Setup Microsoft Windows Server 2008 Failover Clustering Compellent Corporate Office Compellent Technologies 7625 Smetana Lane Eden Prairie, Minnesota 55344 www.compellent.com

More information

Oracle VM. Getting Started Guide for Release 3.2

Oracle VM. Getting Started Guide for Release 3.2 Oracle VM Getting Started Guide for Release 3.2 E35331-04 March 2014 Oracle VM: Getting Started Guide for Release 3.2 Copyright 2011, 2014, Oracle and/or its affiliates. All rights reserved. Oracle and

More information

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA Version 4.0 Configuring Hosts to Access VMware Datastores P/N 302-002-569 REV 01 Copyright 2016 EMC Corporation. All rights reserved.

More information

Dell Wyse Datacenter for VMware Horizon View Cloud Pod Architecture

Dell Wyse Datacenter for VMware Horizon View Cloud Pod Architecture Dell Wyse Datacenter for VMware Horizon View Cloud Pod Architecture A brief guide for the configuration and management of a Cloud Pod environment. Dell Wyse Solutions Engineering May 2014 A Dell Technical

More information

White Paper. A System for Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft

White Paper. A System for  Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft White Paper Mimosa Systems, Inc. November 2007 A System for Email Archiving, Recovery, and Storage Optimization Mimosa NearPoint for Microsoft Exchange Server and EqualLogic PS Series Storage Arrays CONTENTS

More information

Dell Storage Compellent Integration Tools for VMware

Dell Storage Compellent Integration Tools for VMware Dell Storage Compellent Integration Tools for VMware Administrator s Guide Version 3.1 Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your

More information

Overview. Implementing Fibre Channel SAN Boot with the Oracle ZFS Storage Appliance. January 2014 By Tom Hanvey; update by Peter Brouwer Version: 2.

Overview. Implementing Fibre Channel SAN Boot with the Oracle ZFS Storage Appliance. January 2014 By Tom Hanvey; update by Peter Brouwer Version: 2. Implementing Fibre Channel SAN Boot with the Oracle ZFS Storage Appliance January 2014 By Tom Hanvey; update by Peter Brouwer Version: 2.0 This paper describes how to implement a Fibre Channel (FC) SAN

More information

Setting Up Replication between Dell DR Series Deduplication Appliances with NetVault 9.2 as Backup Software

Setting Up Replication between Dell DR Series Deduplication Appliances with NetVault 9.2 as Backup Software Setting Up Replication between Dell DR Series Deduplication Appliances with NetVault 9.2 as Backup Software Dell Engineering A Dell Technical White Paper Revisions Date Description Initial release THIS

More information

Citrix 1Y0-A09. 1Y0-A09 Implementing Citrix XenServer Enterprise Edition 5.0. Practice Test. Version

Citrix 1Y0-A09. 1Y0-A09 Implementing Citrix XenServer Enterprise Edition 5.0. Practice Test. Version Citrix 1Y0-A09 1Y0-A09 Implementing Citrix XenServer Enterprise Edition 5.0 Practice Test Version 1.3 QUESTION NO: 1 An administrator created a template of a Microsoft Windows XP SP3 virtual machine (VM)

More information

Citrix EXAM - 1Y0-A26. Citrix XenServer 6.0 Administration. Buy Full Product.

Citrix EXAM - 1Y0-A26. Citrix XenServer 6.0 Administration. Buy Full Product. Citrix EXAM - 1Y0-A26 Citrix XenServer 6.0 Administration Buy Full Product http://www.examskey.com/1y0-a26.html Examskey Citrix 1Y0-A26 exam demo product is here for you to test the quality of the product.

More information

Setting Up the Dell DR Series System on Veeam

Setting Up the Dell DR Series System on Veeam Setting Up the Dell DR Series System on Veeam Dell Engineering April 2016 A Dell Technical White Paper Revisions Date January 2014 May 2014 July 2014 April 2015 June 2015 November 2015 April 2016 Description

More information

Setup for Failover Clustering and Microsoft Cluster Service

Setup for Failover Clustering and Microsoft Cluster Service Setup for Failover Clustering and Microsoft Cluster Service Update 1 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 This document supports the version of each product listed and supports all subsequent

More information

DELL POWERVAULT NX3500 INTEGRATION WITHIN A MICROSOFT WINDOWS ENVIRONMENT

DELL POWERVAULT NX3500 INTEGRATION WITHIN A MICROSOFT WINDOWS ENVIRONMENT DELL POWERVAULT NX3500 INTEGRATION WITHIN A MICROSOFT WINDOWS ENVIRONMENT A Dell Technology White Paper Version 1.0 THIS TECHNOLOGY WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL

More information

DELL TM PowerVault TM DL Backup-to-Disk Appliance

DELL TM PowerVault TM DL Backup-to-Disk Appliance DELL TM PowerVault TM DL Backup-to-Disk Appliance Powered by CommVault TM Simpana TM Configuring the Dell EqualLogic PS Series Array as a Backup Target A Dell Technical White Paper by Dell Engineering

More information

A Dell Technical White Paper Dell Virtualization Solutions Engineering

A Dell Technical White Paper Dell Virtualization Solutions Engineering Dell vstart 0v and vstart 0v Solution Overview A Dell Technical White Paper Dell Virtualization Solutions Engineering vstart 0v and vstart 0v Solution Overview THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

DSI Optimized Backup & Deduplication for VTL Installation & User Guide

DSI Optimized Backup & Deduplication for VTL Installation & User Guide DSI Optimized Backup & Deduplication for VTL Installation & User Guide Restore Virtualized Appliance Version 4 Dynamic Solutions International, LLC 373 Inverness Parkway Suite 110 Englewood, CO 80112 Phone:

More information

Active System Manager Version 8.0 User s Guide

Active System Manager Version 8.0 User s Guide Active System Manager Version 8.0 User s Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION indicates either

More information

Veritas Storage Foundation for Windows by Symantec

Veritas Storage Foundation for Windows by Symantec Veritas Storage Foundation for Windows by Symantec Advanced online storage management Veritas Storage Foundation 5.0 for Windows brings advanced online storage management to Microsoft Windows Server environments.

More information

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA Configuring Hosts to Access NFS File Systems 302-002-567 REV 01 Copyright 2016 EMC Corporation. All rights reserved. Published in the

More information

Using Dell Repository Manager with Dell OpenManage Essentials

Using Dell Repository Manager with Dell OpenManage Essentials Using Dell Repository Manager with Dell OpenManage Essentials Dell, Inc. Dell Repository Manager Team AVS Sashi Kiran December 2013 A Dell Technical White Paper Revisions Date December 2013 Description

More information

Virtual Appliance User s Guide

Virtual Appliance User s Guide Cast Iron Integration Appliance Virtual Appliance User s Guide Version 4.5 July 2009 Cast Iron Virtual Appliance User s Guide Version 4.5 July 2009 Copyright 2009 Cast Iron Systems. All rights reserved.

More information

StarWind Virtual SAN. HyperConverged 2-Node Scenario with Hyper-V Cluster on Windows Server 2012R2. One Stop Virtualization Shop MARCH 2018

StarWind Virtual SAN. HyperConverged 2-Node Scenario with Hyper-V Cluster on Windows Server 2012R2. One Stop Virtualization Shop MARCH 2018 One Stop Virtualization Shop StarWind Virtual SAN HyperConverged 2-Node Scenario with Hyper-V Cluster on Windows Server 2012R2 MARCH 2018 TECHNICAL PAPER Trademarks StarWind, StarWind Software and the

More information

Dell EMC SAN Storage with Video Management Systems

Dell EMC SAN Storage with Video Management Systems Dell EMC SAN Storage with Video Management Systems Surveillance October 2018 H14824.3 Configuration Best Practices Guide Abstract The purpose of this guide is to provide configuration instructions for

More information

Dell PowerVault MD3600f/MD3620f Remote Replication Functional Guide

Dell PowerVault MD3600f/MD3620f Remote Replication Functional Guide Dell PowerVault MD3600f/MD3620f Remote Replication Functional Guide Page i THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT

More information

3.1. Storage. Direct Attached Storage (DAS)

3.1. Storage. Direct Attached Storage (DAS) 3.1. Storage Data storage and access is a primary function of a network and selection of the right storage strategy is critical. The following table describes the options for server and network storage.

More information

Dell Storage Integration Tools for VMware

Dell Storage Integration Tools for VMware Dell Storage Integration Tools for VMware Version 4.1 Administrator s Guide Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION:

More information

VMware vsphere with ESX 4 and vcenter

VMware vsphere with ESX 4 and vcenter VMware vsphere with ESX 4 and vcenter This class is a 5-day intense introduction to virtualization using VMware s immensely popular vsphere suite including VMware ESX 4 and vcenter. Assuming no prior virtualization

More information

Cisco Nexus Switch Configuration Guide for Dell SC Series SANs. Dell Storage Engineering August 2015

Cisco Nexus Switch Configuration Guide for Dell SC Series SANs. Dell Storage Engineering August 2015 Cisco Nexus 6001 Switch Configuration Guide for Dell SC Series SANs Dell Storage Engineering August 2015 Revisions Date August 2015 Description Initial release THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

VMware Infrastructure Update 1 for Dell PowerEdge Systems. Deployment Guide. support.dell.com

VMware Infrastructure Update 1 for Dell PowerEdge Systems. Deployment Guide.   support.dell.com VMware Infrastructure 3.0.2 Update 1 for Dell Systems Deployment Guide www.dell.com support.dell.com Notes and Notices NOTE: A NOTE indicates important information that helps you make better use of your

More information

Configuring Server Boot

Configuring Server Boot This chapter includes the following sections: Boot Policy, page 1 UEFI Boot Mode, page 2 UEFI Secure Boot, page 3 CIMC Secure Boot, page 3 Creating a Boot Policy, page 5 SAN Boot, page 6 iscsi Boot, page

More information

A Dell Technical White Paper PowerVault MD32X0, MD32X0i, and MD36X0i Series of Arrays

A Dell Technical White Paper PowerVault MD32X0, MD32X0i, and MD36X0i Series of Arrays Microsoft Hyper-V Planning Guide for Dell PowerVault MD Series Storage Arrays A Dell Technical White Paper PowerVault MD32X0, MD32X0i, and MD36X0i Series of Arrays THIS WHITE PAPER IS FOR INFORMATIONAL

More information

Dell TM PowerVault TM Configuration Guide for VMware ESX/ESXi 3.5

Dell TM PowerVault TM Configuration Guide for VMware ESX/ESXi 3.5 Dell TM PowerVault TM Configuration Guide for VMware ESX/ESXi 3.5 September 2008 Dell Virtualization Solutions Engineering Dell PowerVault Storage Engineering www.dell.com/vmware www.dell.com/powervault

More information

StarWind Virtual SAN Configuring HA Shared Storage for Scale-Out File Servers in Windows Server 2012R2

StarWind Virtual SAN Configuring HA Shared Storage for Scale-Out File Servers in Windows Server 2012R2 One Stop Virtualization Shop StarWind Virtual SAN Configuring HA Shared Storage for Scale-Out File Servers in Windows Server 2012R2 DECEMBER 2017 TECHNICAL PAPER Trademarks StarWind, StarWind Software

More information

Reinstalling the Operating System on the Dell PowerVault 745N

Reinstalling the Operating System on the Dell PowerVault 745N Reinstalling the Operating System on the Dell PowerVault 745N This document details the following steps to reinstall the operating system on a PowerVault 745N system: 1. Install the Reinstallation Console

More information

Dell FluidFS 6.0 FS8600 Appliance CLI Reference Guide

Dell FluidFS 6.0 FS8600 Appliance CLI Reference Guide Dell FluidFS 6.0 FS8600 Appliance CLI Reference Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION indicates

More information

HP integrated Citrix XenServer 5.0 Release Notes

HP integrated Citrix XenServer 5.0 Release Notes HP integrated Citrix XenServer 5.0 Release Notes Part Number 488554-003 March 2009 (Third Edition) Copyright 2009 Hewlett-Packard Development Company, L.P. The information contained herein is subject to

More information

VMware vsphere with ESX 4.1 and vcenter 4.1

VMware vsphere with ESX 4.1 and vcenter 4.1 QWERTYUIOP{ Overview VMware vsphere with ESX 4.1 and vcenter 4.1 This powerful 5-day class is an intense introduction to virtualization using VMware s vsphere 4.1 including VMware ESX 4.1 and vcenter.

More information

Storage Consolidation with the Dell PowerVault MD3000i iscsi Storage

Storage Consolidation with the Dell PowerVault MD3000i iscsi Storage Storage Consolidation with the Dell PowerVault MD3000i iscsi Storage By Dave Jaffe Dell Enterprise Technology Center and Kendra Matthews Dell Storage Marketing Group Dell Enterprise Technology Center delltechcenter.com

More information

Veritas Storage Foundation for Windows by Symantec

Veritas Storage Foundation for Windows by Symantec Veritas Storage Foundation for Windows by Symantec Advanced online storage management Veritas Storage Foundation 5.1 for Windows brings advanced online storage management to Microsoft Windows Server environments,

More information

Setup for Failover Clustering and Microsoft Cluster Service. 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7

Setup for Failover Clustering and Microsoft Cluster Service. 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7 Setup for Failover Clustering and Microsoft Cluster Service 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware website

More information

Citrix XenServer 7.1 Administrator's Guide. Published October Edition

Citrix XenServer 7.1 Administrator's Guide. Published October Edition Citrix XenServer 7.1 Administrator's Guide Published October 2017 1.0 Edition Citrix XenServer 7.1 Administrator's Guide Copyright 2017 Citrix Systems. Inc. All Rights Reserved. Version: 7.1.1 Citrix,

More information

Parallels Virtuozzo Containers 4.6 for Windows

Parallels Virtuozzo Containers 4.6 for Windows Parallels Parallels Virtuozzo Containers 4.6 for Windows Deploying Microsoft Clusters Copyright 1999-2010 Parallels Holdings, Ltd. and its affiliates. All rights reserved. Parallels Holdings, Ltd. c/o

More information

HP StoreVirtual Storage Multi-Site Configuration Guide

HP StoreVirtual Storage Multi-Site Configuration Guide HP StoreVirtual Storage Multi-Site Configuration Guide Abstract This guide contains detailed instructions for designing and implementing the Multi-Site SAN features of the LeftHand OS. The Multi-Site SAN

More information

Configuration Guide -Server Connection-

Configuration Guide -Server Connection- FUJITSU Storage ETERNUS DX, ETERNUS AF Configuration Guide -Server Connection- (Fibre Channel) for Citrix XenServer This page is intentionally left blank. Preface This manual briefly explains the operations

More information

Using Dell EqualLogic and Multipath I/O with Citrix XenServer 6.2

Using Dell EqualLogic and Multipath I/O with Citrix XenServer 6.2 Using Dell EqualLogic and Multipath I/O with Citrix XenServer 6.2 Dell Engineering Donald Williams November 2013 A Dell Deployment and Configuration Guide Revisions Date November 2013 Description Initial

More information

Dell EMC Unity Family

Dell EMC Unity Family Dell EMC Unity Family Version 4.4 Configuring and managing LUNs H16814 02 Copyright 2018 Dell Inc. or its subsidiaries. All rights reserved. Published June 2018 Dell believes the information in this publication

More information

Installation Guide. Tandberg Data DPS1000 Series Model: DPS1100 and DPS1200, Release: 1.3

Installation Guide. Tandberg Data DPS1000 Series Model: DPS1100 and DPS1200, Release: 1.3 Installation Guide Tandberg Data DPS1000 Series Model: DPS1100 and DPS1200, Release: 1.3 Contents Preface.......................................................................v About this guide..............................................................

More information

By the end of the class, attendees will have learned the skills, and best practices of virtualization. Attendees

By the end of the class, attendees will have learned the skills, and best practices of virtualization. Attendees Course Name Format Course Books 5-day instructor led training 735 pg Study Guide fully annotated with slide notes 244 pg Lab Guide with detailed steps for completing all labs vsphere Version Covers uses

More information