Data center requirements

Prerequisites
Data center workflow
Determine data center requirements
Gather data for initial data center planning
Determine the data center deployment model
Determine details about applications
Identify the various networking resources
Determine virtual machine distribution and connectivity
Investigate storage planning

Prerequisites

Before you plan the data center for your Cisco HCS installation, make sure that you:

- Review and have access to the Solution Reference Network Design for Cisco Hosted Collaboration Solution.
- Complete the actions outlined in previous sections of this guide, including:
- Initial system requirements and planned growth

Data center workflow

Determine data center requirements

The following procedure lists the high-level steps required to determine your data center requirements.

Procedure

Step 1 Gather data for initial data center planning.
Step 2 Determine the data center deployment model.
Step 3 Determine details about applications.
Step 4 Identify the various networking resources.
Step 5 Determine virtual machine distribution and connectivity.
Step 6 Investigate storage planning.

Gather data for initial data center planning

Procedure

Step 1 Collect and estimate the type of tenants using detailed information about your system. You should have already done this as part of Complete tenant matrix. Gather information about customers, customer average sizes, future growth, security requirements, local versus central breakout, regulatory requirements, and access models.
Step 2 Determine whether any other services are offered from the same data center as Cisco HCS.
Step 3 Determine the locations and the number of data centers that you plan to deploy for Cisco HCS. Locations and number of data centers are determined based on target market proximity requirements, redundancy requirements, size, and access models. All of these factors can affect your costs. For UC applications, for example, you should determine whether all of your customers will have the same redundancy or whether redundancy is determined on a per-customer basis.
Step 4 Determine whether you have a single or multiple data center setup.
Step 5 Choose your site deployment type. Based on decisions made in Determine your HCS deployment model, decide the type of site for your data center:

- Hosted
- Extender
- Remote managed
- Hosted Private Cloud/Large Enterprise

Determine the data center deployment model

Use the information collected about the locations, the number of data centers, and the capital expenditure to determine your deployment model, such as Large PoD, Small PoD, or Micro Node. You may have already done this in Determine your HCS deployment model. For further information, see the Capacity Planning Guide for Cisco Hosted Collaboration Solution.

Procedure

Step 1 Examine key attributes to determine the data center deployment model:

- Number of tenants
- Average tenant size
- Total dedicated UC instances
- Total UC instances and clusters

Step 2 Use the data and capacity requirements to determine the data center deployment model across one or more data centers.

Determine details about applications

Procedure

Step 1 Determine whether the management applications will be NATted into a NAT customer domain.
Step 2 Review the Compatibility Matrix for Cisco Hosted Collaboration Solution 9.2(1) to determine the required application versions for the current release. For the most recent compatibility information, see the compatibility matrix on the Cisco HCS documentation page: http://www.cisco.com/en/us/partner/products/ps11363/tsd_products_support_series_home.html

Identify the various networking resources

Build your network infrastructure with the following networking resources in mind:

- Virtual machine templates: Details about these templates and other sizing considerations can be found in the Unified Communications Virtualization Sizing Guidelines.
- Service profiles on Cisco Unified Computing System (UCS) Manager: Decide how many service profiles you want for your system. From a hardware perspective, two service profiles is the best choice when you weigh upgrade reduction against the hardware required. If hardware is not an issue, four service profiles is the best choice.
- Addressing scheme for the UC applications: For more details, refer to IP addressing for HCS applications.
- Management domain addressing.
- NATted address pools in the management domain for UC applications: For more details, refer to NAT planning considerations.
- VLANs: Use the recommended numbering scheme. For more details, refer to Grouping VLANs.
- PSTN connectivity: For more details, refer to Identify trunking services.
- Data center interconnection.

Grouping VLANs

Cisco recommends that when you design Layer 2 for a Cisco HCS deployment, you group the VLANs based on their usage. The current Cisco HCS data center design assumes that each end customer consumes only two VLANs; however, it is possible to configure four VLANs for each end customer.

Use the following VLAN numbering scheme if four VLANs are configured for each end customer (see the sketch at the end of this section):

- 0100 to 0999: UC Apps (100 to 999 are the customer IDs)
- 1100 to 1999: outside VLANs (100 to 999 are the customer IDs)
- 2100 to 2999: hcs-mgmt (100 to 999 are the customer IDs)
- 3100 to 3999: Services (100 to 999 are the customer IDs)
- Use the unused VLANs x000 to x099 (where x is 1, 2, or 3) and VLANs 4000 to 4095 for other purposes.

Use the following numbering scheme if only two VLANs are configured for each end customer:

- 0100 to 1999: UC Apps (100 to 999 are the customer IDs for Group 1)
- 2100 to 3999: outside VLANs (100 to 999 are the customer IDs for Group 1)

Use the following numbering scheme for additional end customers:

- 2100 to 2999: UC Apps (100 to 999 are the customer IDs for Group 2)
- 3100 to 3999: outside (100 to 999 are the customer IDs for Group 2)
- Use the unused VLANs for other purposes.

While this grouping of VLANs is recommended to help you scale the number of customers that can be hosted on a Cisco HCS platform, you may reach the upper limit of customers because of limitations in other areas of the Cisco HCS solution.
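As an illustration of the four-VLAN scheme above, the following minimal sketch derives the four VLAN IDs for a given customer ID. It is not part of any Cisco HCS tooling; the function name and the direct mapping of customer ID to the last three digits of each VLAN ID are illustrative assumptions.

```python
# Illustrative sketch only: derives per-customer VLAN IDs under the
# four-VLAN numbering scheme described above (customer IDs 100-999).
def hcs_customer_vlans(customer_id: int) -> dict:
    if not 100 <= customer_id <= 999:
        raise ValueError("customer ID must be between 100 and 999")
    return {
        "uc_apps":  customer_id,          # 0100-0999
        "outside":  1000 + customer_id,   # 1100-1999
        "hcs_mgmt": 2000 + customer_id,   # 2100-2999
        "services": 3000 + customer_id,   # 3100-3999
    }

print(hcs_customer_vlans(123))
# {'uc_apps': 123, 'outside': 1123, 'hcs_mgmt': 2123, 'services': 3123}
```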

Determine virtual machine distribution and connectivity

Procedure

Step 1 Determine the virtual machine distribution. For more information on recommendations, refer to the Solution Reference Network Design for Cisco Hosted Collaboration Solution.

- Oversubscribed clusters
- Non-oversubscribed clusters

Step 2 Analyze the available bandwidth. Based on the deployment model, estimate:

- The bandwidth needed between data centers.
- The aggregate bandwidth needed between the data center and CPEs.
- The aggregate bandwidth needed between the data center and the PSTN network.

Step 3 Determine the MPLS VPN peering arrangement between the Cisco HCS data center and the MPLS provider.
Step 4 Determine whether to use Over-The-Top VPN as an alternative to site-to-site connectivity.

IP addressing for HCS applications

One VLAN must be dedicated to each Cisco HCS Enterprise customer. Overlapping addresses for the UC infrastructure applications of each customer are supported. The option to select the address pool from which addresses are assigned for the customer's UC infrastructure applications (Cisco HCS Instance) is also supported. This option is necessary to avoid conflicts with customer-premises addressing schemes.

Note: When deploying Cisco HCS in the hosted environment, we recommend that you do not have NAT between any end device (phone) and Cisco Unified Communications Manager (the UC application) on the line side, because some mid-call features may not work.

Investigate storage planning

Consider the following for storage planning.

Procedure

Step 1 Determine capacity needs. Refer to the Capacity Planning Guide for Cisco Hosted Collaboration Solution for more information about capacity.
Step 2 Determine storage connectivity options, such as Ethernet or FCoE/FC.
Step 3 Determine approved storage vendors for UCS.
Step 4 Determine storage tiering. Tiering allows you to distribute capacity across multiple tiers. Larger deployments should consider their capacity distribution carefully.

- Two-tier storage: Refer to Option 1 - Two-tier storage.
- Fiber channel: Refer to Option 2 - Traditional fiber channel drive storage.

Step 5 Consider storage RAID array recommendations. Refer to Design considerations and constraints.
Step 6 Determine LUN sizing. Refer to LUN sizing and disk usage optimization.
Step 7 For Micro Node deployments, consider local DAS storage. Micro Node deployments use C-Series servers and the local disks on those servers for storage. No planning specific to Micro Node is needed, because the recommended tested reference configuration prescribes the C-Series server configuration, including the disk size.

Storage recommendation

The VMware Virtual Machine File System (VMFS) is a high-performance cluster file system that provides storage virtualization optimized for virtual machines. Each virtual machine is encapsulated in a small set of files known as virtual machine disks (VMDKs). Each VMDK is a block-level file that is formatted based on the operating system of its VM. VMFS is the default storage management interface for these files on physical SCSI disks and partitions. Because a VMDK is a block-level file and is not directly associated with VMFS, it can live outside a VMFS file system on top of any file system, such as ReiserFS, XFS, EXT2/3/4, HFS, or NTFS. However, VMFS is the best-performing file system for hosting the VMDK files. For additional information on VMFS and LUNs, see VMware Virtual Machine File System: Technical Overview and Best Practices at http://www.vmware.com/pdf/vmfs-best-practices-wp.pdf.

The following recommendations exist for storage design:

- When possible, use tiered storage pools with one RAID 5 group using five 200 GB Solid State Drives (SSD) and two RAID 5 groups each with five 2 TB Near-Line Serial Attached SCSI (NL-SAS) drives.
- When you cannot use tiered storage, use basic RAID 5 groups with 450 GB 15k RPM Fiber Channel drives.
- For either configuration, use LUNs of 750 GB to 1.4 TB in size.

Storage detailed design

Design considerations and constraints

The following layers exist for the storage component:

- RAID groups
- Tiered Storage Pools (TSP) (optional)
- LUNs
- VMFS datastores

The RAID groups define the physical failure domain: if enough physical hard drives fail, a RAID group failure occurs. The type of RAID used determines how many drives in the RAID group can fail before the RAID group itself fails. We recommend that you use RAID 5, which tolerates one disk failure. We also recommend that you have hot-standby drives in the SAN so that if a disk fails, a hot-standby drive replaces the failed drive in the array and the array can start rebuilding. The number of hot-standby drives in the SAN depends on the total number of drives in the array; determine this number by reviewing the specific SAN vendor recommendations.

Use Tiered Storage Pools (TSP) to optimize the storage for the Cisco UC applications, as explained in a later section. TSPs, LUNs, and VMFS datastores are all logical entities sitting on top of the RAID groups; they fail if the RAID groups fail. Each of these logical entities can also fail because of configuration errors, for example, if a LUN is deleted or if ESXi is installed on a LUN that already contains a VMFS datastore with virtual machines.

While planning and configuring storage for your particular implementation, it is extremely important that you distribute the logical storage of virtual machines across multiple failure domains. For example, place virtual machine A and virtual machine A' in different LUNs composed of different physical drives or storage enclosures. This is similar to ensuring that A and A' are in different compute failure domains so that if a blade or chassis fails, A and A' are not both impacted. A small sketch of this placement check follows.
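To make the failure-domain guidance above concrete, here is a minimal, hypothetical sketch (not part of any Cisco or VMware tooling) that checks whether a VM and its redundant counterpart were placed on storage backed by the same failure domain; the placement map and VM names are assumptions you would replace with your own inventory.

```python
# Hypothetical helper: verify that redundant VM pairs (A and A') do not
# share a storage failure domain (for example, the same RAID group or LUN).
def shared_failure_domains(placement: dict, pairs: list) -> list:
    """placement maps VM name -> failure-domain label; pairs lists (A, A')."""
    return [(a, b) for a, b in pairs if placement[a] == placement[b]]

placement = {
    "CUCM-SUB1": "raid-group-1",
    "CUCM-SUB1-prime": "raid-group-2",
    "CUCM-TFTP": "raid-group-3",
    "CUCM-TFTP-prime": "raid-group-3",   # violates the recommendation
}
pairs = [("CUCM-SUB1", "CUCM-SUB1-prime"), ("CUCM-TFTP", "CUCM-TFTP-prime")]

for a, b in shared_failure_domains(placement, pairs):
    print(f"WARNING: {a} and {b} share a failure domain: {placement[a]}")
```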

LUN sizing and disk usage optimization

The best way to set up a LUN for a given VMFS volume is to size for throughput first and capacity second. Aggregate the total I/O throughput for all applications and VMs that might run on a given shared pool of storage, and then make sure that you have provisioned enough back-end disk spindles (and disk array cache) and the appropriate storage service to meet the requirements.

When deciding on the size of the LUNs, there is a trade-off between the efficient use of space and a lower total number of LUNs, versus the efficient running of VMs and a smaller logical failure domain. Provide a 1-to-1 mapping between LUNs and VMFS datastores. The 1-to-1 mapping simplifies determining which datastores are on which LUNs.

Advantages of a larger LUN size (disadvantages of a smaller LUN size):

- Fewer total LUNs to manage; there is a maximum limit of 256 LUNs per ESXi host.
- More efficient use of space; as a percentage, the leftover space is smaller.

Disadvantages of a larger LUN size (advantages of a smaller LUN size):

- Mistakes or misconfiguration of the LUN lead to a larger failure domain.
- You must lock down the LUN for certain operations. The more VMs per LUN, the more locking; this can impact performance. Examples of VMFS operations that require locking metadata:

- Creating a VMFS datastore.
- Expanding a VMFS datastore onto additional extents.
- Powering on a virtual machine.
- Acquiring a lock on a file.
- Creating or deleting a file.
- Creating a template.
- Deploying a virtual machine from a template.
- Creating a new virtual machine.
- Migrating a virtual machine with vMotion.
- Growing a file, for example, a snapshot file or a thin-provisioned virtual disk.
- Any software that updates regularly, particularly system utilities such as antivirus programs and operating system updates.

Perform the following to resolve potential sources of SCSI reservation contention:

- Serialize the operations on the shared LUNs. If possible, limit the number of operations on different hosts that require a SCSI reservation at the same time.
- Increase the number of LUNs and try to limit the number of VMs accessing the same LUN.

We recommend that you have fewer than 20 VMs per LUN. Size LUNs based upon the applications that reside on the LUN and their vDisk requirements, as illustrated in the following example and in the sketch that follows the table.

Table 1: Choosing LUN size based on which VMs are deployed (number of VMs per 1400 GB LUN depending on the VM disk size)

| Disk (GB) | RAM (GB) | Buffer (GB) | Total (GB) | Number of VMs on LUN | Reserve space left on LUN after fitting all VMs (GB) |
|-----------|----------|-------------|------------|----------------------|------------------------------------------------------|
| 55        | 3        | 2           | 60         | 23                   | 20                                                   |
| 80        | 4        | 2           | 86         | 16                   | 24                                                   |
| 160       | 6        | 2           | 168        | 8                    | 56                                                   |
| 200       | 6        | 2           | 208        | 6                    | 152                                                  |
| 300       | 6        | 2           | 308        | 4                    | 168                                                  |
| 600       | 8        | 2           | 610        | 2                    | 180                                                  |
| 1000      | 8        | 2           | 1010       | 1                    | 390                                                  |
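The arithmetic behind Table 1 can be reproduced with a short calculation. The following sketch is illustrative only (the function and the VM profiles are not Cisco tooling); it assumes, as the table does, a 1400 GB LUN and a per-VM footprint of disk plus RAM (typically consumed by the VM swap file) plus a 2 GB buffer.

```python
# Illustrative sketch: reproduce the "VMs per LUN" math from Table 1,
# assuming a 1400 GB LUN and per-VM footprint = disk + RAM + buffer.
LUN_GB = 1400

def vms_per_lun(disk_gb: int, ram_gb: int, buffer_gb: int = 2) -> tuple:
    per_vm = disk_gb + ram_gb + buffer_gb          # total GB consumed per VM
    count = LUN_GB // per_vm                       # whole VMs that fit
    reserve = LUN_GB - count * per_vm              # space left on the LUN
    return per_vm, count, reserve

for disk, ram in [(55, 3), (80, 4), (160, 6), (200, 6), (300, 6), (600, 8), (1000, 8)]:
    total, count, reserve = vms_per_lun(disk, ram)
    print(f"{disk:>5} GB disk, {ram} GB RAM -> {total:>4} GB/VM, "
          f"{count:>2} VMs per LUN, {reserve:>3} GB reserve")
```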

Options for storage configuration

The following sections describe two possible options for recommended SAN storage configurations; however, other shared storage options do exist. Cisco HCS has the following requirements for shared storage:

- The storage must be supported by Cisco UCS; refer to http://www.cisco.com/en/us/products/ps10477/prod_technical_reference_list.html.
- The storage must be supported by VMware; refer to http://www.vmware.com/resources/compatibility/search.php.
- The storage must meet application capacity (GB) and performance (Input/Output Operations Per Second, IOPS) requirements; refer to http://docwiki.cisco.com/wiki/uc_virtualization_storage_system_design_requirements.

Option 1 - Two-tier storage

Tiered storage is a system of assigning applications to different types of storage media based on application requirements. Factors considered in the allocation of storage type include the level of protection needed, performance requirements, speed of recovery, and many other considerations. Because assigning application data to specific media may be complex, some vendors provide software for automatically managing the process. One example is combining two separate tiers: a high-performing disk storage and a less expensive, slower disk of the same capacity and function.

We recommend the two-tiered storage deployment option because it uses 70 percent fewer disks overall compared to a non-tiered solution and can save 15 to 30 percent in overall SAN storage costs. In addition, this option can greatly reduce the need to actively distribute the maintenance workloads, because of the caching and SSD drives.

Cisco testing has shown that the UC applications have a 95 percent skew: 95 percent of the disk I/O occurs on only 5 percent of the capacity of the virtual disks. "Hot" blocks are blocks of the disk that see frequent read/write requests, and "cold" blocks are those that do not. In a tiered storage solution, the hot blocks of each virtual machine's virtual hard drives are migrated to the SSD drives in the storage pool, while the cold blocks reside on the NL-SAS drives. The SSD drives provide high IOPS capacity where needed, and the NL-SAS drives provide inexpensive storage where high IOPS capacity is not required (a rough sizing sanity check appears at the end of this section).

Two-tier storage architecture

A storage pool is made of three RAID 5 (4+1) groups: one with five 200 GB SSD drives, and two more each with five 2 TB NL-SAS drives. These three RAID groups are then organized into a tiered storage pool, as shown in the following example.

Note: The architecture described here applies to the EMC VNX/CLARiiON line. This is not how pools are set up on an EMC VMAX.

Figure 1: Two-tier storage architecture

Another key aspect of this storage solution is EMC FAST Cache. FAST Cache allows flash drives to be added to the array and used to extend the cache capacity for both reads and writes, which translates into better system-wide performance. One example of the use of FAST Cache, which absorbs any high IOPS load presented to the cold portion of the storage groups, is during software upgrades within a VM.

Two-tier resiliency

The storage pool can tolerate at least one disk failure and possibly up to three disk failures, as long as the failed disks all belong to different RAID 5 groups of the same storage pool. Hot-standby disks are used to automatically replace failed disks.

We recommend that you distribute virtual machines A and A' across different tiered storage pools so that the failure of one storage pool does not cause both instances A and A' to fail; for example, place SUB on storage pool 1 and SUB' on storage pool 2.

Minimum number of storage pools required

Even in the case of the smallest deployments, we recommend that you have at least two tiered storage pools.
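As a rough sanity check of how the pool composition above lines up with the 95/5 skew, the following sketch estimates the SSD share of a pool's capacity. The RAID 5 (4+1) usable-capacity approximation (four data drives per group) and the use of raw, unformatted drive sizes are assumptions for illustration; actual usable capacity after formatting is lower (the tables later in this section quote roughly 14 TB per pool).

```python
# Illustrative estimate only: fraction of a two-tier pool's capacity on SSD,
# assuming each RAID 5 (4+1) group contributes four data drives of usable
# space and ignoring formatting overhead.
def raid5_usable_gb(drive_gb: float, drives: int = 5) -> float:
    return drive_gb * (drives - 1)      # one drive's worth lost to parity

ssd_tier_gb = raid5_usable_gb(200)                  # 1 x RAID 5 group of 200 GB SSDs
nlsas_tier_gb = 2 * raid5_usable_gb(2000)           # 2 x RAID 5 groups of 2 TB NL-SAS
pool_gb = ssd_tier_gb + nlsas_tier_gb

print(f"SSD tier: {ssd_tier_gb} GB, NL-SAS tier: {nlsas_tier_gb} GB")
print(f"SSD share of pool: {ssd_tier_gb / pool_gb:.1%}")   # ~4.8%, close to the 5% hot-block share
```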

LUN distribution in tiered storage pools

Table 2: LUN distribution in tiered storage pools

|                 | Storage Pool 0 (~14 TB) | Storage Pool 1 (~14 TB) | Storage Pool N (~14 TB) |
|-----------------|-------------------------|-------------------------|-------------------------|
| ~1.4 TB LUN #1  | LUN TSP0-0              | LUN TSP1-0              | LUN TSPN-0              |
| ~1.4 TB LUN #2  | LUN TSP0-1              | LUN TSP1-1              | LUN TSPN-1              |
| ~1.4 TB LUN #3  | LUN TSP0-2              | LUN TSP1-2              | LUN TSPN-2              |
| ~1.4 TB LUN #4  | LUN TSP0-3              | LUN TSP1-3              | LUN TSPN-3              |
| ~1.4 TB LUN #5  | LUN TSP0-4              | LUN TSP1-4              | LUN TSPN-4              |
| ~1.4 TB LUN #6  | LUN TSP0-5              | LUN TSP1-5              | LUN TSPN-5              |
| ~1.4 TB LUN #7  | LUN TSP0-6              | LUN TSP1-6              | LUN TSPN-6              |
| ~1.4 TB LUN #8  | LUN TSP0-7              | LUN TSP1-7              | LUN TSPN-7              |
| ~1.4 TB LUN #9  | LUN TSP0-8              | LUN TSP1-8              | LUN TSPN-8              |
| ~1.4 TB LUN #10 | LUN TSP0-9              | LUN TSP1-9              | LUN TSPN-9              |

Choose LUN names that reflect a mapping to the storage pool where they belong.

UC cluster distribution in LUNs and tiered storage pools

Deploy VMs round-robin across all the tiered storage pools and, within each storage pool, across the LUNs (a simplified sketch of this dealing appears after Table 5).

Note: Round robin should only be used on storage systems that are VAAI/Failover 4 compliant (see http://www.vmware.com/resources/compatibility/). If a client has a system that is not VAAI or Failover 4 compliant, round robin will actually increase IOPS and degrade the performance of the older systems. For example, a customer with an 8.6(2) environment running a CX-4 series CLARiiON should not use round robin; they should use VMWARE_FIXED instead, because the CX-4 series is not VAAI-compliant with VMware 5.x.

In the following table, Cisco Unified Communications Manager cluster 1 is shown using the * symbol, and Cisco Unified Communications Manager cluster 2 is shown using the # symbol.

Table 3: Small SP UC cluster distribution in LUNs and tiered storage pools

| TSP 0   | TSP 1   |
|---------|---------|
| TFTP' * | TFTP *  |
| P *     | S1 *    |
| S1' *   | S2 *    |
| S2' *   | S3 *    |
| S3' *   | S4 *    |
| S4' *   | MoH *   |
| MoH' *  | TFTP' # |
| TFTP #  | P #     |
| S1 #    | S1' #   |
| S2 #    | S2' #   |
| S3 #    | S3' #   |
| S4 #    | S4' #   |
| MoH #   | MoH' #  |

| Storage Pool 0 (~14 TB) LUN | VMs            |
|-----------------------------|----------------|
| ~1.4 TB LUN #1 (TSP0-1)     | TFTP' *, S3 #  |
| ~1.4 TB LUN #2 (TSP0-2)     | P *, S4 #      |
| ~1.4 TB LUN #3 (TSP0-3)     | S1' *, MoH #   |
| ~1.4 TB LUN #4 (TSP0-4)     | S2' *          |
| ~1.4 TB LUN #5 (TSP0-5)     | S3' *          |
| ~1.4 TB LUN #6 (TSP0-6)     | S4' *          |
| ~1.4 TB LUN #7 (TSP0-7)     | MoH' *         |
| ~1.4 TB LUN #8 (TSP0-8)     | TFTP #         |
| ~1.4 TB LUN #9 (TSP0-9)     | S1 #           |
| ~1.4 TB LUN #10 (TSP0-10)   | S2 #           |

| Storage Pool 1 (~14 TB) LUN | VMs            |
|-----------------------------|----------------|
| ~1.4 TB LUN #1 (TSP1-1)     | TFTP *, S3' #  |
| ~1.4 TB LUN #2 (TSP1-2)     | S1 *, S4' #    |
| ~1.4 TB LUN #3 (TSP1-3)     | S2 *, MoH' #   |
| ~1.4 TB LUN #4 (TSP1-4)     | S3 *           |
| ~1.4 TB LUN #5 (TSP1-5)     | S4 *           |
| ~1.4 TB LUN #6 (TSP1-6)     | MoH *          |
| ~1.4 TB LUN #7 (TSP1-7)     | TFTP' #        |
| ~1.4 TB LUN #8 (TSP1-8)     | P #            |
| ~1.4 TB LUN #9 (TSP1-9)     | S1' #          |
| ~1.4 TB LUN #10 (TSP1-10)   | S2' #          |

In the following table, Cisco Unified Communications Manager cluster 1 is shown using the * symbol, and Cisco Unified Communications Manager cluster 2 is shown using the # symbol.

Table 4: Distribution of VMs across the TSPs and across the LUNs within a TSP

| TSP 0   | TSP 1   | TSP 2   | TSP 3   |
|---------|---------|---------|---------|
| TFTP *  | TFTP' * | P *     | S1 *    |
| S1' *   | S2 *    | S2' *   | S3 *    |
| S3' *   | S4 *    | S4' *   | MoH *   |
| MoH' *  | TFTP #  | TFTP' # | P #     |
| S1 #    | S1' #   | S2 #    | S2' #   |
| S3 #    | S3' #   | S4 #    | S4' #   |
| MoH #   | MoH' #  |         |         |

In the following table, Cisco Unified Communications Manager cluster 1 is shown using the * symbol, and Cisco Unified Communications Manager cluster 2 is shown using the # symbol.

Table 5: Generic UC cluster distribution in LUNs and tiered storage pools

| Storage Pool 0 (~14 TB) LUN | VMs     |
|-----------------------------|---------|
| ~1.4 TB LUN #1 (TSP0-1)     | TFTP *  |
| ~1.4 TB LUN #2 (TSP0-2)     | S1' *   |
| ~1.4 TB LUN #3 (TSP0-3)     | S3' *   |
| ~1.4 TB LUN #4 (TSP0-4)     | MoH' *  |
| ~1.4 TB LUN #5 (TSP0-5)     | S1 #    |
| ~1.4 TB LUN #6 (TSP0-6)     | S3 #    |
| ~1.4 TB LUN #7 (TSP0-7)     | MoH #   |
| ~1.4 TB LUN #8 (TSP0-8)     |         |
| ~1.4 TB LUN #9 (TSP0-9)     |         |
| ~1.4 TB LUN #10 (TSP0-10)   |         |

| Storage Pool 1 (~14 TB) LUN | VMs     |
|-----------------------------|---------|
| ~1.4 TB LUN #1 (TSP1-1)     | TFTP' * |
| ~1.4 TB LUN #2 (TSP1-2)     | S2 *    |
| ~1.4 TB LUN #3 (TSP1-3)     | S4 *    |
| ~1.4 TB LUN #4 (TSP1-4)     | TFTP #  |
| ~1.4 TB LUN #5 (TSP1-5)     | S1' #   |
| ~1.4 TB LUN #6 (TSP1-6)     | S3' #   |
| ~1.4 TB LUN #7 (TSP1-7)     | MoH' #  |
| ~1.4 TB LUN #8 (TSP1-8)     |         |
| ~1.4 TB LUN #9 (TSP1-9)     |         |
| ~1.4 TB LUN #10 (TSP1-10)   |         |

| Storage Pool 2 (~14 TB) LUN | VMs     |
|-----------------------------|---------|
| ~1.4 TB LUN #1 (TSP2-1)     | P *     |
| ~1.4 TB LUN #2 (TSP2-2)     | S2' *   |
| ~1.4 TB LUN #3 (TSP2-3)     | S4' *   |
| ~1.4 TB LUN #4 (TSP2-4)     | TFTP' # |
| ~1.4 TB LUN #5 (TSP2-5)     | S2 #    |
| ~1.4 TB LUN #6 (TSP2-6)     | S4 #    |
| ~1.4 TB LUN #7 (TSP2-7)     |         |
| ~1.4 TB LUN #8 (TSP2-8)     |         |
| ~1.4 TB LUN #9 (TSP2-9)     |         |
| ~1.4 TB LUN #10 (TSP2-10)   |         |

| Storage Pool 3 (~14 TB) LUN | VMs     |
|-----------------------------|---------|
| ~1.4 TB LUN #1 (TSP3-1)     | S1 *    |
| ~1.4 TB LUN #2 (TSP3-2)     | S3 *    |
| ~1.4 TB LUN #3 (TSP3-3)     | MoH *   |
| ~1.4 TB LUN #4 (TSP3-4)     | P #     |
| ~1.4 TB LUN #5 (TSP3-5)     | S2' #   |
| ~1.4 TB LUN #6 (TSP3-6)     | S4' #   |
| ~1.4 TB LUN #7 (TSP3-7)     |         |
| ~1.4 TB LUN #8 (TSP3-8)     |         |
| ~1.4 TB LUN #9 (TSP3-9)     |         |
| ~1.4 TB LUN #10 (TSP3-10)   |         |
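The layouts in Tables 3 through 5 follow the round-robin guidance: deal VMs across the tiered storage pools and then across the LUNs inside each pool, while keeping each VM and its redundant counterpart (the primed instance) in different pools. The following sketch is a simplified, hypothetical illustration of that dealing logic, not the procedure used to produce the published tables; the VM names, the two-cluster input, and the pool/LUN counts are assumptions.

```python
# Hypothetical sketch of round-robin VM placement across tiered storage
# pools (TSPs) and, within each pool, across LUNs, while keeping a VM and
# its redundant peer (same name plus a trailing ') in different pools.
from collections import defaultdict
from itertools import cycle

def place_round_robin(vms, num_pools=4, luns_per_pool=10):
    placement = {}                       # vm -> (pool index, LUN index)
    pool_cycle = cycle(range(num_pools))
    next_lun = defaultdict(int)          # pool index -> next free LUN slot
    for vm in vms:
        pool = next(pool_cycle)
        peer = vm[:-1] if vm.endswith("'") else vm + "'"
        # Never co-locate a VM with its redundant peer in the same pool.
        while peer in placement and placement[peer][0] == pool:
            pool = next(pool_cycle)
        lun = next_lun[pool] % luns_per_pool
        next_lun[pool] += 1
        placement[vm] = (pool, lun)
    return placement

roles = ["TFTP", "TFTP'", "P", "S1", "S1'", "S2", "S2'", "S3", "S3'", "S4", "S4'", "MoH", "MoH'"]
vms = [f"C1-{r}" for r in roles] + [f"C2-{r}" for r in roles]   # two UC clusters
for vm, (pool, lun) in sorted(place_round_robin(vms).items(), key=lambda kv: kv[1]):
    print(f"TSP {pool}  LUN #{lun + 1}  {vm}")
```

With these assumptions the dealing happens to reproduce the pool and LUN assignments shown in Tables 4 and 5 (with C1 corresponding to * and C2 to #), but treat it only as an illustration of the round-robin idea.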

Two-tier scheduled jobs

Although the tiered storage pools defined in this section can handle the IOPS loads generated by simultaneous nightly maintenance activities and by simultaneous software upgrades, Cisco recommends that you distribute the IOPS loads.

Option 2 - Traditional fiber channel drive storage

The traditional storage architecture for UC applications uses RAID 5 in a 4+1 configuration. However, in this model, each RAID group is standalone (not part of a storage pool) and the groups use 450 GB 15k RPM fiber channel disk drives.

Traditional fiber channel drive storage architecture

The RAID 5 groups are generally partitioned into a single 1.4 TB LUN or into two 720 GB LUNs.

Figure 2: Fiber channel-based storage architecture

Fiber channel resiliency

Each RAID group, including the LUNs on that RAID group, can handle a single drive failure. Make sure that hot-standby drives are available in the SAN.

LUNs

Choose LUN names that reflect a mapping to the RAID group where they belong.

UC cluster distribution in LUNs

Deploy VMs round-robin across all the LUNs. Deploy VMs so that A and A' do not reside on the same RAID group.

Fiber channel scheduled jobs

When you use traditional RAID groups and LUNs, distribute the IOPS load so that no more than one or two VMs on the same LUN, or on the same RAID group, perform maintenance activities (such as DRS backups or CAR uploads) at the same time. Attempting too many simultaneous maintenance activities can exhaust the IOPS capacity of the RAID group. The exact number of VMs that can perform maintenance activities simultaneously depends on various factors, such as the amount of cache available in the SAN controllers. We recommend that you stagger these types of activities by compute and storage, especially if you are using the non-tiered storage option, as sketched below. Use Platform Manager in Cisco HCM-F to distribute the schedule of software upgrades within a storage device.
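The following sketch illustrates one simple way to stagger maintenance windows so that no more than a configurable number of VMs on the same LUN or RAID group run maintenance at the same time. It is a hypothetical planning aid, not Cisco HCM-F or Platform Manager functionality; the VM-to-RAID-group map, the limit of two concurrent jobs per failure domain, and the window model are assumptions.

```python
# Hypothetical planning aid: assign maintenance windows (slots) so that at
# most `limit` VMs per LUN/RAID group run maintenance in the same slot.
from collections import defaultdict

def stagger_maintenance(vm_to_domain: dict, limit: int = 2) -> dict:
    slots = defaultdict(lambda: defaultdict(int))   # slot -> domain -> count
    schedule = {}
    for vm, domain in vm_to_domain.items():
        slot = 0
        while slots[slot][domain] >= limit:          # domain already full in this slot
            slot += 1
        slots[slot][domain] += 1
        schedule[vm] = slot
    return schedule

vm_to_domain = {
    "CUCM-PUB": "raid-group-1", "CUCM-SUB1": "raid-group-1", "CUCM-SUB2": "raid-group-1",
    "CUC-1": "raid-group-2", "CUC-2": "raid-group-2",
}
for vm, slot in stagger_maintenance(vm_to_domain).items():
    print(f"{vm}: maintenance window {slot}")
```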