EMC VSPEX PRIVATE CLOUD


Proven Infrastructure

EMC VSPEX PRIVATE CLOUD
Microsoft Windows Server 2012 with Hyper-V for up to 500 Virtual Machines
Enabled by EMC VNX and EMC Next-Generation Backup

Abstract

This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with Microsoft Hyper-V and EMC VNX for up to 500 virtual machines.

April 2013

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published April 2013.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website.

Part Number H

3 Contents Chapter 1 Executive Summary 13 Introduction Target audience Document purpose Business needs Chapter 2 Solution Overview 17 Introduction Virtualization Compute Network Storage Chapter 3 Solution Technology Overview 21 Overview Key components Virtualization Overview Microsoft Hyper-V Virtual Fibre Channel to Hyper-V guest operating systems Microsoft System Center Virtual Machine Manager High availability with Hyper-V Failover Clustering Hyper-V Replica Hyper-V Snapshot Cluster aware Updating EMC Storage Integrator Compute Network Overview

4 Contents Storage Overview EMC VNX series VNX Snapshots VNX SnapSure VNX Virtual Provisioning Windows Offloaded Data Transfer PowerPath VNX FAST Cache VNX FAST VP VNX file shares ROBO SMB 3.0 features Overview SMB versions and negotiations VNX and VNXe storage support SMB 3.0 VHD/VHDX storage support SMB 3.0 Continuous Availability SMB Multichannel SMB 3.0 Copy Offload SMB 3.0 Branch Cache SMB 3.0 Remote VSS SMB 3.0 encryption SMB 3.0 PowerShell cmdlets SMB 3.0 Directory Leasing Summary Backup and recovery Overview EMC RecoverPoint EMC VNX Replicator EMC Avamar Other technologies Overview EMC XtremSW Cache Chapter 4 Solution Architecture Overview 61 Overview Solution architecture Overview Logical architecture Key components

5 Contents Hardware resources Software resources Server configuration guidelines Overview Hyper-V memory virtualization Memory configuration guidelines Network configuration guidelines Overview VLAN Enable jumbo frames (iscsi or SMB only) Link aggregation (SMB only) Storage configuration guidelines Overview Hyper-V storage virtualization for VSPEX VSPEX storage building blocks VSPEX private cloud validated maximums High-availability and failover Overview Virtualization layer Compute layer Network layer Storage layer Validation test profile Profile characteristics Backup and recovery configuration guidelines Overview Backup characteristics Backup layout Sizing guidelines Reference workload Overview Defining the reference workload Applying the reference workload Overview Example 1: Custom-built application Example 2: Point of sale system Example 3: Web server Example 4: Decision-support database Summary of examples

6 Contents Implementing the solution Overview Resource types CPU resources Memory resources Network resources Storage resources Implementation summary Quick assessment Overview CPU requirements Memory requirements Storage performance requirements I/O operations per second I/O size I/O latency Storage capacity requirements Determining equivalent Reference virtual machines Fine-tuning hardware resources Chapter 5 VSPEX Configuration Guidelines 115 Overview Pre-deployment tasks Overview Deployment prerequisites Customer configuration data Prepare switches, connect network, and configure switches Overview Prepare network switches Configure infrastructure network Configure VLANs Configure jumbo frames (iscsi or SMB only) Complete network cabling Prepare and configure storage array VNX configuration for block protocols VNX configuration for file protocols FAST VP configuration FAST Cache configuration Install and configure Hyper-V hosts Overview

7 Contents Install Windows hosts Install Hyper-V and configure failover clustering Configure Windows host networking Install PowerPath on Windows servers Plan virtual machine memory allocations Install and configure SQL Server database Overview Create a virtual machine for Microsoft SQL Server Install Microsoft Windows on the virtual machine Install SQL Server Configure a SQL Server for SCVMM System Center Virtual Machine Manager server deployment Overview Create a SCVMM host virtual machine Install the SCVMM guest OS Install the SCVMM server Install the SCVMM Management Console Install the SCVMM agent locally on a host Add a Hyper-V cluster into SCVMM Add file share storage to SCVMM (file variant only) Create a virtual machine in SCVMM Create a template virtual machine Deploy virtual machines from the template virtual machine Summary Chapter 6 Validating the Solution 145 Overview Post-install checklist Deploy and test a single virtual server Verify the redundancy of the solution components Block environments File environments Chapter 7 System Monitoring 149 Overview Key areas to monitor Performance baseline Servers Networking Storage

8 Contents VNX resources monitoring guidelines Monitoring block storage resources Monitoring file storage resources Summary Chapter 8 Validation with Microsoft Fast Track v3 167 Overview Business case for validation Process requirements Step 1: Core prerequisites Step 2: Select the VSPEX Proven Infrastructure platform Step 3: Define additional Microsoft Hyper-V Fast Track Program components Step 4: Build a detailed bill of materials Step 5: Test the environment Step 6: Document and publish the solution Additional resources Appendix A Bills of Materials 173 Bill of materials Appendix B Customer Configuration Data Sheet 181 Customer configuration data sheet Appendix C Server Resources Component Worksheet 185 Server resources component worksheet Appendix D References 187 References EMC documentation Other documentation Appendix E About VSPEX 191 About VSPEX

9 Figures Figure 1. Private cloud components Figure 2. Compute layer flexibility Figure 3. Example of highly available network design for block Figure 4. Example of highly available network design for file Figure 5. Storage pool rebalance progress Figure 6. Thin LUN space utilization Figure 7. Examining storage pool space utilization Figure 8. Defining storage pool utilization thresholds Figure 9. Defining automated notifications - for block Figure 10. SMB 3.0 baseline performance comparison point Figure 11. SMB 3.0 Continuous Availability Figure 12. Continuous Availability application performance Figure 13. SMB Multichannel fault tolerance Figure 14. Multichannel network throughput Figure 15. Copy Offload Figure 16. Enable the Encrypt Data parameter Figure 17. Enabling encryption: Client CPU utilization Figure 18. Enabling encryption: Data Mover CPU utilization Figure 19. PowerShell execution of Show Shares Figure 20. PowerShell execution of Get-SmbServerConfiguration Figure 21. SMB 3.0 Directory Leasing Figure 22. Logical architecture for block variant Figure 23. Logical architecture for file variant Figure 24. Hypervisor memory consumption Figure 25. Required networks for block variant Figure 26. Required networks for file variant Figure 27. Hyper-V virtual disk types Figure 28. Building block for 10 virtual servers Figure 29. Building block for 50 virtual servers Figure 30. Building block for 100 virtual servers Figure 31. Storage layout for 125 virtual machines using VNX Figure 32. Storage layout for 250 virtual machines using VNX Figure 33. Storage layout for 500 virtual machines using VNX Figure 34. Maximum scale level of different arrays Figure 35. High availability at the virtualization layer Figure 36. Redundant power supplies Figure 37. Network layer high availability (VNX) block variant Figure 38. Network layer high availability (VNX) file variant Figure 39. VNX series high availability

10 Figures Figure 40. Resource pool flexibility Figure 41. Required resource from the Reference virtual machine pool Figure 42. Aggregate resource requirements stage Figure 43. Pool configuration stage Figure 44. Aggregate resource requirements - stage Figure 45. Pool configuration stage Figure 46. Aggregate resource requirements for stage Figure 47. Pool configuration stage Figure 48. Customizing server resources Figure 49. Sample Ethernet network architecture - block variant Figure 50. Sample Ethernet network architecture - file variant Figure 51. Network Settings for File dialog box Figure 52. Create Interface dialog box Figure 53. Create CIFS Server dialog box Figure 54. Create File System dialog box Figure 55. File System Properties dialog box Figure 56. Create File Share dialog box Figure 57. Storage Pool Properties dialog box Figure 58. Manage Auto-Tiering dialog box Figure 59. Storage System Properties dialog box Figure 60. Create FAST Cache dialog box Figure 61. Advanced tab in the Create Storage Pool dialog Figure 62. Advanced tab in the Storage Pool Properties dialog Figure 63. Storage Pool Alerts area Figure 64. Storage Pools panel Figure 65. LUNProperties dialog box Figure 66. Monitoring and Alerts panel Figure 67. IOPS on the LUNs Figure 68. IOPS on the disks Figure 69. Latency on the LUNs Figure 70. SP utilization Figure 71. Data Mover statistics Figure 72. Front-end Data Mover network statistics Figure 73. Storage Pools for File panel Figure 74. File Systems panel Figure 75. File System Properties window Figure 76. File System I/O Statistics window Figure 77. CIFS Statistics window

11 Tables Table 1. VNX customer benefits Table 2. Thresholds and settings under VNX OE Block Release Table 3. SMB dialect used between client and server Table 4. Storage migration improvement with Copy Offload Table 5. Microsoft PowerShell cmdlets Table 6. EMC-provided PowerShell cmdlets Table 7. Default status of SMB 3.0 features Table 8. Solution hardware Table 9. Solution software Table 10. Hardware resources for compute Table 11. Hardware resources for network Table 12. Hardware resources for storage Table 13. Number of disks required for different number of virtual machines Table 14. Profile characteristics Table 15. Profile characteristics Table 16. Virtual machine characteristics Table 17. Blank worksheet row Table 18. Reference Virtual Machine resources Table 19. Example worksheet row Table 20. Example applications stage Table 21. Example applications - stage Table 22. Example applications - stage Table 23. Server resource component totals Table 24. Deployment process overview Table 25. Tasks for pre-deployment Table 26. Deployment prerequisites checklist Table 27. Tasks for switch and network configuration Table 28. Tasks for VNX configuration for block protocols Table 29. Storage allocation table for block Table 30. Tasks for storage configuration for file protocols Table 31. Storage allocation table for file Table 32. Tasks for server installation Table 33. Tasks for SQL Server database setup Table 34. Tasks for SCVMM configuration Table 35. Tasks for testing the installation Table 36. Rules of thumb for drive performance Table 37. Best practice for performance monitoring Table 38. Hyper-V Fast Track component classification

12 Tables Table 39. List of components used in the VSPEX solution for 125 virtual machines Table 40. List of components used in the VSPEX solution for 250 virtual machines Table 41. List of components used in the VSPEX solution for 500 virtual machines Table 42. Common server information Table 43. Hyper-V server information Table 44. Array information Table 45. Network infrastructure information Table 46. VLAN information Table 47. Service accounts Table 48. Blank worksheet for total server resources

Chapter 1 Executive Summary

This chapter presents the following topics:

Introduction
Target audience
Document purpose
Business needs

Introduction

VSPEX validated and modular architectures are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision about the hypervisor, compute, and networking layers. VSPEX helps to reduce virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.

This document is a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meet or exceed the stated minimums.

Target audience

The readers of this document should have the necessary training and background to install and configure Microsoft Hyper-V, EMC VNX series storage systems, and the associated infrastructure as required by this implementation. External references are provided where applicable, and the readers should be familiar with these documents. Readers should also be familiar with the infrastructure and database security policies of the customer installation.

Users focusing on selling and sizing a Microsoft Hyper-V private cloud infrastructure must pay particular attention to the first four chapters of this document. After purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This document is an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system.

The VSPEX private cloud architecture provides the customer with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the Microsoft Hyper-V virtualization layer backed by highly available VNX family storage. The compute and network components, which are defined by the VSPEX partners, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

The 125, 250, and 500 virtual machine environments are based on a defined reference workload. Since not every virtual machine has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective when deployed. For smaller environments, solutions for up to 100 virtual machines based on the EMC VNXe series are described in EMC VSPEX Private Cloud: Microsoft Windows Server 2012 with Hyper-V for up to 100 Virtual Machines.

A private cloud architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, validation tests and monitoring instructions ensure that your system is running correctly. Following the instructions in this document ensures an efficient and painless journey to the cloud.

Business needs

VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision about the hypervisor, server, and networking layers. VSPEX solutions accelerate your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.

Business applications are moving into consolidated compute, network, and storage environments. The EMC VSPEX private cloud using Microsoft Hyper-V reduces the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design and implementation options. Administration is unified, while process separation can be adequately controlled and monitored.

The business needs for the VSPEX private cloud for Microsoft Hyper-V architectures are:

Provide an end-to-end virtualization solution that uses the capabilities of the unified infrastructure components.
Provide a VSPEX private cloud solution for Microsoft Hyper-V to efficiently virtualize up to 500 virtual machines for varied customer use cases.
Provide a reliable, flexible, and scalable reference design.


Chapter 2 Solution Overview

This chapter presents the following topics:

Introduction
Virtualization
Compute
Network
Storage

Introduction

The EMC VSPEX private cloud for Microsoft Hyper-V provides a complete system architecture capable of supporting up to 500 virtual machines with a redundant server and network topology and highly available storage. The core components that make up this particular solution are virtualization, compute, storage, and networking.

Virtualization

Microsoft Hyper-V is a leading virtualization platform in the industry. For years, Hyper-V has provided flexibility and cost savings to end users by consolidating large, inefficient server farms into nimble, reliable cloud infrastructures. Features such as Live Migration, which enables a virtual machine to move between different servers with no disruption to the guest operating system, and Dynamic Optimization, which performs Live Migration automatically to balance loads, make Hyper-V a solid business choice.

With the release of Windows Server 2012, a Microsoft virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).

Compute

VSPEX provides the flexibility to design and implement the customer's choice of server components. The infrastructure must conform to the following attributes:

Sufficient cores and memory to support the required number and types of virtual machines.
Sufficient network connections to enable redundant connectivity to the system switches.
Excess capacity to withstand a server failure and failover in the environment.

Network

VSPEX provides the flexibility to design and implement the customer's choice of network components. The infrastructure must conform to the following attributes:

Redundant network links for the hosts, switches, and storage.
Traffic isolation based on industry-accepted best practices.
Support for link aggregation.

Storage

The EMC VNX storage family is the leading shared storage platform in the industry. VNX provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation.

VNX storage includes the following components, sized for the stated reference architecture workload:

Host adapter ports (for block): Provide host connectivity through fabric to the array.
Storage processors: The compute components of the storage array, which are used for all aspects of data moving into, out of, and between arrays.
Disk drives: Disk spindles and solid state drives that contain the host or application data, and their enclosures.
Data Movers (for file): Front-end appliances that provide file services to hosts (required only when CIFS file services are provided).

The 125, 250, and 500 virtual machine Microsoft Hyper-V private cloud solutions described in this document are based on the VNX5300, VNX5500, and VNX5700 storage arrays, respectively. The VNX5300 can support a maximum of 125 drives, the VNX5500 can host up to 250 drives, and the VNX5700 can host up to 500 drives.

The EMC VNX series supports a wide range of business-class features ideal for the private cloud environment, including:

Fully Automated Storage Tiering for Virtual Pools (FAST VP)
FAST Cache
File-level data deduplication/compression
Thin Provisioning
Replication
Snapshots/checkpoints
File-Level Retention
Quota management


Chapter 3 Solution Technology Overview

This chapter presents the following topics:

Overview
Key components
Virtualization
Compute
Network
Storage
SMB 3.0 features
Backup and recovery
Other technologies

Overview

This solution uses the EMC VNX series and Microsoft Hyper-V to provide storage and server hardware consolidation in a private cloud. The new virtualized infrastructure is centrally managed to provide efficient deployment and management of a scalable number of virtual machines and associated shared storage. Figure 1 depicts the solution components.

Figure 1. Private cloud components

The following sections describe the components in more detail.

Key components

This section describes the key components of this solution.

Virtualization: The virtualization layer decouples the physical implementation of resources from the applications that use them. In other words, the application's view of the available resources is no longer directly tied to the hardware. This enables many key features in the private cloud concept.

Compute: The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the customer to implement the solution by using any server hardware that meets these requirements.

Network: The network layer connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables the customer to implement the solution by using any network hardware that meets these requirements.

Storage: The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the private cloud can be implemented. The EMC VNX storage family used in this solution provides high-performance data storage while maintaining high availability.

Backup and recovery: The optional backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable.

Solution architecture provides details on all the components that make up the reference architecture.

Virtualization

Overview

The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the physical capability of the system to change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

Microsoft Hyper-V

Microsoft Hyper-V is a Windows Server role that was introduced in Windows Server 2008. Hyper-V virtualizes computer hardware resources, such as CPU, memory, storage, and networking. This transformation creates fully functional virtual machines that run their own operating systems and applications like physical computers.

Hyper-V and Failover Clustering provide high availability in a virtualized infrastructure with Cluster Shared Volumes (CSVs). Live Migration and Live Storage Migration enable seamless migration of virtual machines between Hyper-V servers, and of stored files between storage systems, with minimal performance impact.

Virtual Fibre Channel to Hyper-V guest operating systems

Windows Server 2012 provides Fibre Channel (FC) ports within a Hyper-V guest operating system. The virtual FC port uses the standard N_Port ID Virtualization (NPIV) process to address the virtual machine WWNs within the Hyper-V host's physical host bus adapter (HBA). This provides virtual machines with direct access to existing storage arrays over FC, enables clustering of guest operating systems over FC, and offers an important new storage option for the hosted servers in the virtual infrastructure. Virtual FC in Hyper-V guest operating systems also supports related features, such as virtual SANs, Live Migration, and Multipath I/O (MPIO).

The prerequisites for virtual FC include:

One or more installations of Windows Server 2012 with the Hyper-V role.
One or more FC HBAs installed on the server, each with an updated HBA driver that supports virtual FC.
An NPIV-enabled SAN.

Virtual machines using the virtual FC adapter must use Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 as the guest operating system.

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized management platform for the virtualized data center. SCVMM allows administrators to configure and manage the virtualized host, networking, and storage resources, and to create and deploy virtual machines and services to private clouds. SCVMM simplifies provisioning, management, and monitoring in the Hyper-V environment.
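As an illustration of these building blocks, the following PowerShell sketch shows how the Hyper-V role, a virtual machine, and a virtual FC adapter might be set up on a Windows Server 2012 host. This is a minimal, hedged example: the VM name, memory and disk sizes, virtual switch name, CSV path, and virtual SAN name (App01, VM-Data, Production) are illustrative assumptions, not values defined by this solution.

    # Install the Hyper-V role and management tools (a restart is required).
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # Create a virtual machine with a new VHDX on a Cluster Shared Volume.
    New-VM -Name "App01" -MemoryStartupBytes 4GB -SwitchName "VM-Data" `
        -NewVHDPath "C:\ClusterStorage\Volume1\App01\App01.vhdx" -NewVHDSizeBytes 100GB

    # Attach a virtual Fibre Channel adapter to the guest. Assumes a virtual SAN named
    # "Production" has already been defined on the host and the HBA/fabric are NPIV-enabled.
    Add-VMFibreChannelHba -VMName "App01" -SanName "Production"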

High availability with Hyper-V Failover Clustering

The Windows Server 2012 Failover Clustering feature provides high availability for Hyper-V. Availability is affected by both planned and unplanned downtime, and Failover Clustering significantly increases the availability of virtual machines during both.

Configure Windows Server 2012 Failover Clustering on the Hyper-V hosts to monitor virtual machine health and migrate virtual machines between cluster nodes. The advantages of this configuration are:

It enables migration of virtual machines to a different cluster node if the cluster node where they reside must be updated, changed, or rebooted.
It allows other members of the Windows Failover Cluster to take ownership of the virtual machines if the cluster node where they reside suffers a failure or significant degradation.
It minimizes downtime due to virtual machine failures. Windows Server Failover Clustering detects virtual machine failures and automatically takes steps to recover the failed virtual machine, which allows the virtual machine to be restarted on the same host server or migrated to a different host server.

Hyper-V Replica

Hyper-V Replica was introduced in Windows Server 2012 to provide asynchronous virtual machine replication over the network from one Hyper-V host at a primary site to another Hyper-V host at a replica site. Hyper-V Replica protects business applications in the Hyper-V environment from downtime associated with an outage at a single site.

Hyper-V Replica tracks the write operations on the primary virtual machine and replicates the changes to the replica server over the network with HTTP or HTTPS. The amount of network bandwidth required is based on the transfer schedule and data change rate.

If the primary Hyper-V host fails, you can manually fail over the production virtual machines to the Hyper-V hosts at the replica site. The virtual machines are brought back to a consistent point, and are accessible with minimal impact to the business. After recovery, the primary site can receive changes from the replica site. You can then perform a planned failback to manually revert the virtual machines to the Hyper-V host at the primary site.
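The PowerShell sketch below shows one common way these features are enabled on Windows Server 2012. The cluster name, node names, IP address, VM name, and replica-site host name are illustrative assumptions rather than values prescribed by this solution.

    # Create a two-node failover cluster and make an existing VM highly available.
    New-Cluster -Name "HVCluster01" -Node "HyperV01","HyperV02" -StaticAddress 192.168.10.50
    Add-ClusterVirtualMachineRole -VMName "App01"

    # Enable Hyper-V Replica for the VM over HTTP with Kerberos authentication, then start
    # the initial replication. The replica-site host must already be configured to accept
    # replication (for example, with Set-VMReplicationServer).
    Enable-VMReplication -VMName "App01" -ReplicaServerName "hyperv-dr.example.local" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "App01"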

Hyper-V Snapshot

A Hyper-V snapshot creates a consistent point-in-time view of a virtual machine. Snapshots can serve as a source for backups or for other use cases. Virtual machines do not have to be running to take a snapshot, and snapshots are completely transparent to the applications running on the virtual machine. The snapshot saves the point-in-time status of the virtual machine, and enables users to revert the virtual machine to a previous point in time if necessary.

Note: Snapshots require additional storage space. The amount of additional storage space depends on the frequency of data change on the virtual machine.

Cluster-Aware Updating

Cluster-Aware Updating (CAU) was introduced in Windows Server 2012. It provides a way of updating cluster nodes with little or no disruption. CAU transparently performs the following tasks during the update process:

1. Puts one cluster node into maintenance mode and takes it offline (virtual machines are live-migrated to other cluster nodes).
2. Installs the updates.
3. Performs a restart if necessary.
4. Brings the node back online (migrated virtual machines are moved back to the original node).
5. Updates the next node.

The node managing the update process is called the Orchestrator. The Orchestrator can work in two different modes:

Self-updating mode: The Orchestrator runs on the cluster node being updated.
Remote-updating mode: The Orchestrator runs on a standalone Windows operating system and remotely manages the cluster update.

CAU is integrated with Windows Server Update Services (WSUS). The CAU process can be automated with PowerShell; a brief example appears at the end of this section.

EMC Storage Integrator

EMC Storage Integrator (ESI) is an agentless, no-charge plug-in that enables application-aware storage provisioning for Microsoft Windows Server applications, Hyper-V, VMware, and Xen Server environments. Administrators can provision block and file storage for Microsoft Windows or Microsoft SharePoint sites by using wizards in ESI. ESI supports the following functions:

Provisioning, formatting, and presenting drives to Windows servers.
Provisioning new cluster disks, and automatically adding them to the cluster.
Provisioning shared CIFS storage, and mounting it to Windows servers.
Provisioning SharePoint storage, sites, and databases in a single wizard.
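For reference, the sketch below exercises two of the features above: taking and restoring a virtual machine snapshot, and triggering an on-demand CAU run in remote-updating mode. The VM, snapshot, and cluster names are assumed placeholders.

    # Take a snapshot (checkpoint) of a virtual machine, and revert to it later if needed.
    Checkpoint-VM -Name "App01" -SnapshotName "Before-patching"
    Restore-VMSnapshot -VMName "App01" -Name "Before-patching" -Confirm:$false

    # Run Cluster-Aware Updating against the cluster from a remote management host
    # (remote-updating mode); nodes are drained, patched, and rebooted one at a time.
    Invoke-CauRun -ClusterName "HVCluster01" -MaxFailedNodes 1 -RequireAllNodesOnline -Force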

Compute

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX documents the minimum requirements for the number of processor cores and the amount of RAM. The solution can be implemented with two or with twenty servers, and still be considered the same VSPEX solution.

In the example shown in Figure 2, the compute layer requirements for a specific implementation are 25 processor cores and 200 GB of RAM. One customer might want to implement this by using white-box servers containing 16 processor cores and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM.

Figure 2. Compute layer flexibility

The first customer needs four of the chosen servers, while the other customer needs two.

Note: To enable high availability at the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.

Use the following best practices in the compute layer:

Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades, and tolerance for single unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be flexible enough to meet your specific needs. Ensure that there are sufficient processor cores, and sufficient RAM per core, to meet the needs of the target environment. A worked version of the sizing arithmetic from Figure 2 follows these best practices.
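The short PowerShell sketch below reproduces that arithmetic. The 25-core/200 GB requirement and the two server configurations come from the example above; the helper function name is purely illustrative.

    # Servers needed = the larger of the CPU-bound and RAM-bound counts, plus one spare for N+1 availability.
    $requiredCores = 25
    $requiredRamGB = 200

    function Get-RequiredServerCount {
        param([int]$CoresPerServer, [int]$RamGBPerServer)
        $byCpu = [math]::Ceiling($requiredCores / $CoresPerServer)
        $byRam = [math]::Ceiling($requiredRamGB / $RamGBPerServer)
        [math]::Max($byCpu, $byRam) + 1
    }

    Get-RequiredServerCount -CoresPerServer 16 -RamGBPerServer 64    # 4 servers + 1 spare = 5
    Get-RequiredServerCount -CoresPerServer 20 -RamGBPerServer 144   # 2 servers + 1 spare = 3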

29 Solution Technology Overview Network Overview The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This is a required configuration regardless of whether the network infrastructure for the solution already exists, or you are deploying it alongside other components of the solution. Figure 3 and Figure 4 depict an example of this highly available network topology. Figure 3. Example of highly available network design for block 29

Figure 4. Example of highly available network design for file

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

For block, EMC unified storage platforms provide network high availability or redundancy through two ports per storage processor (SP). If a link is lost on an SP front-end port, the link fails over to another port, and all network traffic is distributed across the active links.

For file, EMC unified storage platforms provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port, and all network traffic is distributed across the active links.

Storage

Overview

The storage layer is also a key component of any cloud infrastructure solution. Consolidating the data generated by applications and operating systems onto shared data center storage systems increases storage efficiency, increases management flexibility, and reduces total cost of ownership. In this VSPEX solution, EMC VNX series arrays provide virtualization at the storage layer.

EMC VNX series

The EMC VNX family is optimized for virtual applications, and delivers industry-leading innovation and enterprise capabilities for file and block storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises.

Intel Xeon processors power the VNX series for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. It is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises.

Table 1 lists the customer benefits provided by the VNX series.

Table 1. VNX customer benefits

Next-generation unified storage, optimized for virtualized applications
Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies
High availability, designed to deliver five 9s availability
Automated tiering with FAST VP (Fully Automated Storage Tiering for Virtual Pools) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
Simplified management with EMC Unisphere for a single management interface for all NAS, SAN, and replication needs
Up to three times improvement in performance with the latest Intel Xeon multi-core processor technology, optimized for Flash

Different software suites and packs are also available for the VNX series, which provide multiple features for enhanced protection and performance:

Software suites

FAST Suite: Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.
Local Protection Suite: Practices safe data protection and repurposing.
Remote Protection Suite: Protects data against localized failures, outages, and disasters.

Application Protection Suite: Automates application copies and proves compliance.
Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.

Software packs

Total Efficiency Pack: Includes all five software suites.
Total Protection Pack: Includes the local, remote, and application protection suites.

VNX Snapshots

VNX Snapshots is a software feature introduced in VNX OE for Block Release 32 that creates point-in-time data copies. VNX Snapshots can be used for data backups, software development and testing, repurposing, data validation, and local rapid restores. VNX Snapshots improves on the existing SnapView snapshot functionality by integrating with storage pools.

Note: LUNs created on physical RAID groups, also called RAID LUNs, support only SnapView snapshots. This limitation exists because VNX Snapshots requires pool space as part of its technology.

VNX Snapshots supports up to 256 writeable snapshots per pool LUN. It also supports branching, also called "snap of a snap," as long as the total number of snapshots for any primary LUN is less than 256, which is a hard limit.

VNX Snapshots uses redirect-on-write (ROW) technology. ROW redirects new writes destined for the primary LUN to a new location in the storage pool. This implementation differs from the copy-on-first-write (COFW) approach used in SnapView, which holds writes to the primary LUN until the original data is copied to the reserved LUN pool to preserve a snapshot.

Block OE Release 32 also introduces consistency groups (CGs). Several pool LUNs can be combined into a CG and snapped concurrently. When a snapshot of a CG is initiated, all writes to the member LUNs are held until the snapshots have been created. Typically, CGs are used for LUNs that belong to the same application.

VNX SnapSure

VNX SnapSure is an EMC VNX Network Server software feature that enables you to create and manage checkpoints that are point-in-time, logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks; when a block within the PFS is modified, a copy containing the block's original contents is saved to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. The original blocks from the PFS in the SavVol and the unchanged PFS blocks remaining in the PFS are read by SnapSure according to a bitmap and block map data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint.

A checkpoint reflects the state of a PFS at the time the checkpoint is created. SnapSure supports two types of checkpoints:

Read-only checkpoints: Read-only file systems created from a PFS.
Writeable checkpoints: Read/write file systems created from a read-only checkpoint.

SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.

Note: Each writeable checkpoint is associated with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint.

For more detailed information, refer to Using VNX SnapSure.

VNX Virtual Provisioning

EMC VNX Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide high and predictable performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning.

Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP, advanced snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and the User Capacity Threshold setting.

Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNX systems can rebalance allocated data elements across all member drives to use the new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. Monitor the progress of a rebalance operation from the General tab of the Pool Properties window in Unisphere, as shown in Figure 5.

Figure 5. Storage pool rebalance progress

LUN expansion

Use pool LUN expansion to increase the capacity of existing LUNs. It allows for provisioning larger capacity as business needs grow. The VNX family can expand a pool LUN without disrupting user access. Pool LUN expansion can be done with a few simple clicks, and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded. For more detailed information on pool LUN expansion, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology.

LUN shrink

Use LUN shrink to reduce the capacity of existing thin LUNs. VNX can shrink a pool LUN; this capability is only available for LUNs served to Windows Server 2008 and later. The shrinking process involves two steps:

1. Shrink the file system from Windows Disk Management.
2. Shrink the pool LUN using a command window and the DISKRAID utility. The utility is available through the VDS Provider, which is part of the EMC Solutions Enabler package.

The new LUN size appears as soon as the shrink process is complete. A background task reclaims the deleted or shrunk space and returns it to the storage pool. Once the task is complete, any other LUN in that pool can use the reclaimed space.
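Step 1 of the shrink procedure (reducing the file system) can also be scripted with the Windows Server 2012 Storage cmdlets, as in the hedged sketch below. The drive letter and target size are assumptions, and step 2 (shrinking the pool LUN with DISKRAID) still follows the procedure above.

    # Check the minimum supported size for the volume, then shrink the partition to 500 GB.
    Get-PartitionSupportedSize -DriveLetter E
    Resize-Partition -DriveLetter E -Size 500GB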

For more detailed information on thin LUN expansion, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology.

User alerting through the Capacity Threshold setting

Configure proactive alerts when using file systems or storage pools based on thin pools, and monitor these resources so that storage is available for provisioning when needed and capacity shortages are avoided. Figure 6 explains why provisioning with thin pools requires monitoring.

Figure 6. Thin LUN space utilization

Monitor the following values for thin pool utilization:

Total capacity is the total physical capacity available to all LUNs in the pool.
Total allocation is the total physical capacity currently assigned to all pool LUNs.
Subscribed capacity is the total host-reported capacity supported by the pool.
Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in the pool.

Total allocation may never exceed the total capacity, but if it nears that point, add storage to the pool proactively before reaching a hard limit. The short calculation that follows illustrates how these values relate.
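The sketch below works through the relationship between these values for a hypothetical pool; the capacities are invented purely to illustrate the Percent Full and Percent Subscribed figures that Unisphere reports.

    # Hypothetical thin pool: 10 TB of physical capacity, 6 TB currently allocated,
    # 15 TB presented to hosts across all thin LUNs.
    $totalCapacityTB   = 10
    $totalAllocationTB = 6
    $subscribedTB      = 15

    $percentFull       = ($totalAllocationTB / $totalCapacityTB) * 100   # "Percent Full"
    $percentSubscribed = ($subscribedTB / $totalCapacityTB) * 100        # "Percent Subscribed"
    $oversubscribedTB  = $subscribedTB - $totalCapacityTB                # "Oversubscribed By"

    $percentFull, $percentSubscribed, $oversubscribedTB   # 60, 150, 5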

36 Solution Technology Overview Figure 7 shows the Storage Pool Properties dialog box in Unisphere, which displays parameters such as Free capacity, Percent Full, Total Allocation, Total Subscription, Percent Subscribed and Oversubscribed By capacity. Figure 7. Examining storage pool space utilization When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation monitor pool utilization, and alert when thresholds are reached. Set the Percentage Full Threshold to allow enough buffer to make remediation before an outage situation occurs. Adjust this setting by clicking the Advanced tab of the Storage Pool Properties dialog box, as seen in Figure 8. This alert is only active if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool only contains thick LUNs, the alert is not active as there is no risk of running out of space due to oversubscription. You also can specify the value for Percent Full Threshold, which equals Total Allocation/Total Capacity, when a pool is created. 36

Figure 8. Defining storage pool utilization thresholds

View alerts by using the Alert tab in Unisphere. Figure 9 shows the Unisphere Event Monitor Wizard, where you can also select the option of receiving alerts through email, a paging service, or an SNMP trap.

Figure 9. Defining automated notifications - for block

Table 2 lists the thresholds and their settings under VNX OE for Block Release 32.

Table 2. Thresholds and settings under VNX OE Block Release 32

Threshold type    Threshold range    Threshold default    Alert severity    Side effect
User settable     1%-84%             70%                  Warning           None
Built-in          N/A                85%                  Critical          Clears user settable alert

Allowing total allocation to exceed 90 percent of total capacity puts you at risk of running out of space and affecting all applications that use thin LUNs in the pool.

Windows Offloaded Data Transfer

Offloaded Data Transfer (ODX) provides the ability to offload data transfers from the server to the storage array. This feature is enabled by default in Windows Server 2012, and EMC VNX series arrays are compatible with Windows ODX on Windows Server 2012. ODX supports the following protocols:

iSCSI
Fibre Channel (FC)
Fibre Channel over Ethernet (FCoE)
Server Message Block (SMB) 3.0

The following data-transfer operations currently support ODX:

Transferring large amounts of data via Hyper-V Manager, such as creating a fixed-size VHD, merging a snapshot, or converting VHDs
Copying files in File Explorer
Using the Copy commands in Windows PowerShell
Using the Copy commands in the Windows command prompt

Since ODX offloads the file transfer to the storage array, host CPU and network utilization are significantly reduced. ODX minimizes latency and improves transfer speed by using the storage array for data transfer. This is especially beneficial for large files, such as database or video files. Because ODX is enabled by default in Windows Server 2012, data transfers for ODX-supported file operations are automatically offloaded to the storage array. The ODX process is transparent to users.
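If you need to confirm or re-enable ODX on a Windows Server 2012 host, Microsoft exposes it through the FilterSupportedFeaturesMode registry value (0 = offload enabled, 1 = disabled). The sketch below is a minimal example of checking and resetting that value; no VNX-side configuration is implied.

    # Check whether ODX is enabled on the host (0 = enabled, 1 = disabled).
    Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
        -Name "FilterSupportedFeaturesMode"

    # Re-enable ODX if it was previously turned off.
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
        -Name "FilterSupportedFeaturesMode" -Value 0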

PowerPath

EMC PowerPath is a host-based software package that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage deployments in physical and virtual environments. It offers the following benefits for the VSPEX Proven Infrastructure:

Standardized data management across physical and virtual environments.
Automated multipathing policies and load balancing to provide predictable and consistent application availability and performance across physical and virtual environments.
Improved service-level agreements, by eliminating application impact from I/O failures.

VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables Flash drives to function as an expanded cache layer for the array. FAST Cache is an array-wide, non-disruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments, and subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of highly active data to Flash drives, which dramatically improves the response time for the active data and reduces data hot spots that can occur within a LUN. The FAST Cache feature is an optional component of this solution.

VNX FAST VP

VNX FAST VP, a part of the VNX FAST Suite, can automatically tier data across multiple types of drives to take advantage of differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is part of a regularly scheduled maintenance operation.

VNX file shares

In many environments it is important to have a common location to store files accessed by many different individuals. This is implemented as CIFS or NFS file shares from a file server. The VNX family of storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency-improvement features. For more information, refer to Configuring and Managing CIFS on VNX.
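For completeness, the PowerPath CLI gives a quick view of the multipathing state described above. A hedged sketch, run from a PowerShell or command prompt session on a host where PowerPath is installed (output layout varies by PowerPath version):

    # Summarize path counts per storage array and per HBA.
    powermt display paths

    # Show the state, policy, and I/O paths for every PowerPath-managed device.
    powermt display dev=all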

ROBO

In most cases, a Remote Office/Branch Office (ROBO) environment is an edge-core topology where edge nodes are deployed at remote sites to provide local computing resources.

Note: For detailed steps on how to build a ROBO data protection solution with an EMC VNX system at the core and EMC VNXe systems at the edges, refer to Deployment Guide: Data Protection in a ROBO Environment with EMC VNX and VNXe Series Arrays.

Branch Cache is a feature that allows clients to cache data stored on SMB 3.0 shares locally at the branch office. With Branch Cache capability, remote users that access file shares can cache files locally, which helps future lookups, reduces network traffic, and improves scalability and performance. For more information on Branch Cache, refer to SMB 3.0 features.

SMB 3.0 features

Overview

SMB 3.0 adds support for Hyper-V and Microsoft SQL Server storage. Microsoft also introduced several key features that improve the performance of these applications and simplify application management tasks. This section describes the SMB 3.0 features supported on VNX storage arrays, and how these features affect the performance of applications or data stored on SMB 3.0 file shares. For more information, refer to White Paper: EMC VNX Series: Introduction to SMB 3.0 Support.

SMB versions and negotiations

The SMB protocol follows the client-server model. The protocol level is negotiated by client request and server response when establishing a new SMB connection. The SMB versions for various Windows operating systems are as follows:

CIFS: Windows NT 4.0
SMB 1.0: Windows 2000, Windows XP, Windows Server 2003, and Windows Server 2003 R2
SMB 2.0: Windows Vista (SP1 or later) and Windows Server 2008
SMB 2.1: Windows 7 and Windows Server 2008 R2
SMB 3.0: Windows 8 and Windows Server 2012

Before establishing a session, the client and server negotiate a common SMB dialect. Table 3 shows the dialect used, based on the SMB versions supported by the client and server.

Table 3. SMB dialect used between client and server

Client-server    SMB 3.0    SMB 2.1    SMB 2.0
SMB 3.0          SMB 3.0    SMB 2.1    SMB 2.0
SMB 2.1          SMB 2.1    SMB 2.1    SMB 2.0
SMB 2.0          SMB 2.0    SMB 2.0    SMB 2.0
SMB 1.0          SMB 1.0    SMB 1.0    SMB 1.0

For more information on SMB versions and negotiations, refer to the Microsoft TechNet website.

VNX and VNXe storage support

All features mentioned in this document are supported in VNX OE for File X and VNXe OE X and later releases.

Note: The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and I/O ports. It enables the CIFS (SMB) and NFS protocols on the VNX.

SMB 3.0 VHD/VHDX storage support

With VHD and VHDX storage support, Hyper-V can store virtual machine files, such as configuration files, virtual hard disks, and snapshots, on SMB 3.0 shares. This applies to both standalone and clustered servers.

Feature benefit

With SMB 3.0 support for storing Hyper-V virtual machines, Microsoft supports both block storage protocols and file storage protocols. This provides Hyper-V users with additional storage options for storing Hyper-V virtual machine files.
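The sketch below ties this together from the Windows side: it creates a virtual machine whose files live on a VNX SMB 3.0 share and then confirms that the connection negotiated the SMB 3.0 dialect. The CIFS server name (vnx-cifs01), share name (vmstore), VM name, and sizes are illustrative assumptions, not values defined by this solution.

    # Create a virtual machine whose configuration and virtual disk live on a VNX SMB 3.0 share.
    New-VM -Name "App02" -MemoryStartupBytes 4GB -Path "\\vnx-cifs01\vmstore" `
        -NewVHDPath "\\vnx-cifs01\vmstore\App02\App02.vhdx" -NewVHDSizeBytes 100GB

    # Confirm the dialect negotiated with the VNX CIFS server (expect 3.00 for an SMB 3.0 share).
    Get-SmbConnection -ServerName "vnx-cifs01" | Select-Object ServerName, ShareName, Dialect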

Enabling the feature

Support for VHD and VHDX files on a VNX storage array is enabled by default, without the need for additional configuration.

Figure 10 shows the performance of 100 Hyper-V reference virtual machines on VNX SMB 3.0 file shares. Each virtual machine was driving 25 IOPS. The acceptable latency limit is 20 ms, and the average latency observed during the test was 12 ms.

Figure 10. SMB 3.0 baseline performance comparison point

Note: This performance result serves as a baseline comparison point for all other SMB 3.0 features discussed later in this chapter.

SMB 3.0 Continuous Availability

The SMB 3.0 Continuous Availability (CA) feature ensures transparent failover of the file server (serviced by the VNX storage array) when faults occur. It enables clients connected to SMB 3.0 shares to transparently reconnect to another file server node when one node fails. All open file handles from the faulted server node are transferred to the new server node, which eliminates application errors.

43 Solution Technology Overview Figure 11. SMB 3.0 Continuous Availability Figure 11 shows the sequence of events for a Data Mover failover with CA enabled: 1. The client (Windows Server 2012) requests a persistent handle by opening a file with associated leases and locks on a CIFS share. 2. The CIFS server saves the open state and persistent handle to disk. 3. If the primary Data Mover (Data Mover 2) fails, it fails over to the standby Data Mover (Data Mover 3). 4. The Data Mover reads and restores the persistent open state from the disk before starting the CIFS service. 5. Using the persistent handle, the client re-establishes the connection to the same CIFS server, and recovers the same context associated with the open file as before the failover occurred. Feature benefit When a Data Mover fails, clients accessing SMB 3.0 shares created with Continuous Availability do not perceive any application errors. Instead, they experience a small I/O delay due to the primary Data Mover failing over to the standby Data Mover. After the failover, the application may experience a brief spike in latency but soon resumes normal operation. 43

Enabling the feature

This feature is required for Hyper-V environments. To enable it, run the following commands from the VNX Control Station:

1. Mount the file system through which the share will be exported, using the smbca option:

server_mount <server_name> -o smbca <fsname> /<fsmountpoint>

2. Export the share with the CA option:

server_export <server_name> -P cifs -n <sharename> -o type=ca /<fsmountpoint>

Performance impact

This feature does not impact storage, server, or network performance. The only time that performance changes is after a failover or failback operation, when there is a spike in IOPS and latency for a brief period before normal operation resumes.

Figure 12. Continuous Availability application performance

Figure 12 shows the performance of VDbench on the host when the primary Data Mover panics. There is an I/O delay during the failover operation. When the failover completes, the standby Data Mover is active, and VDbench returns to normal operation after a short spike in I/O and latency.

SMB Multichannel

This feature uses multiple network interfaces and connections to provide higher throughput and fault tolerance, without any additional configuration steps for the network interfaces.

Feature benefits

SMB Multichannel provides network high availability. If one of the NICs fails, applications and clients continue operating, at a lower potential throughput, without any errors.

SMB Multichannel is automatically configured: all network paths are automatically detected, and connections are added dynamically. SMB Multichannel works as follows:

Multichannel connections on a single NIC for improved throughput: SMB Multichannel does not provide any additional throughput if the single NIC does not support Receive Side Scaling (RSS). RSS allows multiple connections to spread across the CPU cores automatically, and can therefore distribute load between the CPU cores by creating multiple TCP/IP connections.

Multichannel connections on multiple NICs for improved throughput: SMB Multichannel creates multiple TCP/IP sessions, one for each available interface. If the NICs are RSS-capable, many TCP/IP connections per NIC are created.

Enabling the feature

SMB Multichannel is enabled by default on the VNX storage array; no parameter needs to be set on the system to use this feature. It is also enabled by default on Windows 8 and Windows Server 2012 clients.
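On the Windows side, the built-in SMB client cmdlets make it easy to confirm that Multichannel is active and which interfaces it is using. A minimal sketch, run on a Windows Server 2012 host with shares mounted from the VNX:

    # List the interfaces the SMB client considers usable, including RSS capability and speed.
    Get-SmbClientNetworkInterface

    # Show the Multichannel connections currently established per server and NIC.
    Get-SmbMultichannelConnection

    # Confirm Multichannel is enabled on the client (EnableMultiChannel should be True).
    Get-SmbClientConfiguration | Select-Object EnableMultiChannel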

Performance impact
SMB Multichannel provides additional network throughput by creating more TCP/IP connections (at least one per NIC). If the network is underutilized, no performance degradation is observed when one NIC fails. However, if the network is being heavily utilized, the application continues functioning at a lower throughput.
Figure 13. SMB Multichannel fault tolerance
Figure 13 shows the network-resiliency test result on an SMB 3.0 client when one out of two NICs is disabled. The application does not experience any errors or faults, and continues to perform normally even when the interface is enabled again. Application performance is not affected because the network was not the bottleneck during the test. If it had been a bottleneck, the response time would have been higher; however, the application would have continued functioning without any errors, provided that the higher response time was acceptable.
Figure 14. Multichannel network throughput
Figure 14 shows the SMB 3.0 client's network throughput on both interfaces.

Each SMB 3.0 client in the test environment has two network interfaces. When one interface is disabled, the surviving interface services the traffic. This is evident from the graph, which shows the throughput doubling on one NIC and dropping to zero on the disabled NIC. After the disabled NIC is enabled again, the load balances equally across both NICs.
SMB 3.0 Copy Offload
Copy Offload enables the array to copy large amounts of data without involving server, network, or CPU resources. The server offloads the copy operation to the physical array where the data resides.
Note Copy Offload requires that the source and the destination file system be on the same Data Mover.
Figure 15. Copy Offload
Feature benefits
This feature enables faster data transfer from source to destination because it does not use any client CPU cycles. This feature is most beneficial for the following operations:
Deployment operations: Deploy multiple virtual machines faster. The baseline VHDX can reside on an SMB 3.0 share, with new virtual machines deployed on SMB 3.0 shares with Hyper-V Manager, by pointing to the baseline VHDX.
Cloning operations: Clone virtual machines from one SMB 3.0 share to another in minutes.

Migration operations: Migrate virtual machines between file shares on the same Data Mover in 10 minutes, as opposed to almost 40 minutes without the Copy Offload feature.
Table 4 shows the time taken to move virtual machine storage with and without the Copy Offload feature.
Table 4. Storage migration improvement with Copy Offload
Number of virtual machines (100 GB each) | Time with Copy Offload enabled | Time with Copy Offload disabled
1 | 10 mins | 37 mins
2 | 13 mins | 82 mins
5 | 26 mins | More than 4 hours
A larger migration in the same test completed in minutes with Copy Offload enabled, but took more than 8 hours with it disabled.
Enabling the feature
This feature is enabled by default on the VNX storage array, Windows 8, and Windows Server 2012 clients.
Performance impact
Since the array handles the entire copy operation, the Copy Offload feature increases the utilization of the Data Mover CPU and other array resources. The performance of the feature is limited by the array read/write bandwidth.
SMB 3.0 Branch Cache
Branch Cache enables clients to cache data stored on SMB 3.0 shares locally at the branch office. The cached content is encrypted between peers, clients, and hosted cache servers. This feature was first introduced with Windows 7 and Windows Server 2008 R2. SMB 3.0 supports Branch Cache v2.
Implement Branch Cache in one of two modes:
Distributed cache mode: Distributes the cache between the client computers at the branch office.
Hosted cache mode: Maintains the cached content on a separate computer at the branch office.
For more information on Branch Cache, refer to the Microsoft documentation.
Feature benefit
With Branch Cache capability, remote users that access file shares can cache files locally at the branch office. This helps future lookups, reduces network traffic, and improves scalability and performance.

Enabling the feature
The Branch Cache feature is not enabled by default on the VNX storage array. Run the following commands on the VNX Control Station to enable Branch Cache:
server_cifs <server_name> -smbhash -service enable
Run the following command to create the share with type=hash:
server_export <server_name> -o type=hash
Performance impact
This feature reduces network traffic, as the cached data is available locally at the branch office. Client performance also improves due to faster access to data, but there is some overhead involved in encrypting and decrypting data between Branch Cache members.
SMB 3.0 Remote VSS
Remote VSS (RVSS) is a Remote Procedure Call (RPC)-based protocol that enables application-consistent shadow copies of VSS-aware server applications. RVSS stores data on SMB 3.0 file shares and supports application backup across multiple file servers and shares.
VSS-aware backup applications can perform snapshots of server applications that store data on the VNX CIFS shares. Hyper-V has the ability to store virtual machine files on CIFS shares, and RVSS can take point-in-time copies of the share contents. Some examples of shadow copy uses are:
Create backups
Recover data
Test scenarios
Data mining
Feature benefit
This feature uses the existing Microsoft VSS infrastructure to integrate with VSS-aware backup software and applications. Backup applications read directly from shadow-copy file shares instead of involving the server application computer.
Enabling the feature
This feature is enabled by default on the VNX storage array, without a need for additional configuration.
Performance impact
This feature increases the load on the VNX storage array because it takes application-consistent copies (or snapshots) of applications running on the file shares.

SMB 3.0 encryption
SMB 3.0 allows in-flight, end-to-end encryption of data, and protects it on untrusted networks. Enable this feature for an individual share, or for the entire CIFS server node. This feature only works with SMB 3.0 clients. If a share is encrypted, either deny access to, or allow unencrypted access for, non-SMB 3.0 clients.
Feature benefit
SMB Encryption does not require any additional software or hardware. It protects data on the network from attacks and eavesdropping.
Enabling the feature
This feature is not enabled by default on the VNX storage array.
Enable encryption on all shares
To configure encryption on all shares, set the Encrypt Data parameter in the VNX CIFS server registry to 0x1. To configure this parameter, complete the following steps:
1. Open the Registry Editor (regedit.exe) on a computer.
2. Select File > Connect Network Registry.
3. Enter the hostname or IP address of the CIFS server, and click Check Names.
4. When the server is recognized, click OK to close the window.
5. Edit the Encrypt Data parameter (0x1 is enabled, and 0x0 is disabled) under HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters, as shown in Figure 16.
Figure 16. Enable the Encrypt Data parameter
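The regedit steps above can also be scripted from a Windows client with reg.exe against the CIFS server's remote registry. The following is a minimal sketch under two assumptions: the CIFS server name (vnx-cifs01) is a placeholder, and the value name is written here as EncryptData, so confirm the exact name against Figure 16 before applying it.
# LanmanServer Parameters key on the VNX CIFS server's remote registry (hostname is a placeholder)
$key = "\\vnx-cifs01\HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters"

# Enable encryption on all shares (1 = enabled, 0 = disabled)
reg.exe add $key /v EncryptData /t REG_DWORD /d 1 /f

# Verify the value that was written
reg.exe query $key /v EncryptData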

By default, only SMB 3.0 clients can access encrypted VNX file shares. To allow pre-SMB 3.0 clients to access encrypted shares, the RejectUnencryptedAccess value under the VNX CIFS server registry location shown above must be set to 0x0.
Enable encryption on a specific share
To enable encryption for a particular share, run the following command on the VNX Control Station:
server_export <server_name> -P cifs -n <sharename> -o type=encrypted /<fsmountpoint>
Performance impact
With encryption enabled on the shares, Data Mover CPU and SMB 3.0 client CPU utilization increase because encryption and decryption require additional overhead.
Figure 17. Enabling encryption: Client CPU utilization
Figure 17 shows an increase in CPU utilization with encryption enabled on the SMB 3.0 shares.

Figure 18. Enabling encryption: Data Mover CPU utilization
Figure 18 shows the increase in Data Mover utilization with encryption enabled on the SMB 3.0 shares.
SMB 3.0 PowerShell cmdlets
SMB 3.0 PowerShell cmdlets are PowerShell commands that allow file share management through the Windows PowerShell CLI. The SMB 3.0 Windows PowerShell cmdlets use WMIv2 classes, so not all commands are compatible with VNX-hosted file shares. However, VNX provides a set of PowerShell commands to install and execute from a Windows 8 or Windows Server 2012 client. Download these commands from EMC Online Support. For more information on Windows PowerShell commands for SMB 3.0, refer to the Microsoft documentation.
Table 5 lists the Microsoft SMB 3.0 PowerShell cmdlets to execute from the clients.
Table 5. Microsoft PowerShell cmdlets
Command | Description
Get-SmbServerNetworkInterface | Lists the network interfaces available to the SMB server
Get-SmbServerConfiguration | Lists the SMB server configuration
Get-SmbMultichannelConnection | Lists the connections currently in use by SMB Multichannel
New-SmbMultichannelConstraint | Creates a new multichannel constraint
Get-SmbMultichannelConstraint | Lists the constraints on multichannel connections
Update-SmbMultichannelConnection | Updates the constraint on the multichannel connection
Remove-SmbMultichannelConstraint | Removes the multichannel constraint
Get-SmbMapping | Displays a list of drives mapped by an SMB client

Remove-SmbMapping | Removes an existing mapping
New-SmbMapping | Creates a new mapping
Get-SmbConnection | Lists the SMB connections on the server
Get-SmbClientNetworkInterface | Displays the client network interface
Get-SmbClientConfiguration | Displays the current SMB client configuration settings
Table 6 lists the EMC-provided SMB 3.0 PowerShell cmdlets to manage shares.
Table 6. EMC-provided PowerShell cmdlets
Command | Description
Add-LG | cmdlet to add a new local group on a server name
Add-LGMember | cmdlet to add a member in a specified local group on a server name
Add-Share | cmdlet to create a share on a server name
Add-ShareAcl | cmdlet to add an ACE in a share's ACL on a server name
Add-SharePerms | cmdlet to add an access in a share's permissions on a server name
Remove-LG | cmdlet to delete a local group on a server name
Remove-LGMember | cmdlet to delete a member of a local group on a server name
Remove-Session | cmdlet to delete a session open on a server name
Remove-Share | cmdlet to remove a share on a server name
Remove-ShareAcl | cmdlet to remove an ACE in a share's ACL on a server name
Remove-SharePerms | cmdlet to remove an access in a share's permissions on a server name
Set-ShareFlags | cmdlet to set share flags on a specified server name
Show-AccountSid | cmdlet to display the SID of a specified user
Show-ACL | cmdlet to display the share's ACL on a server name
Show-LG | cmdlet to enumerate local groups on a server name
Show-LGMembers | cmdlet to enumerate members of a local group on a server name
Show-RootDirMembers | cmdlet to list the root directory content of a server name
Show-SecurityEventLog | cmdlet to display the event logs of a server name
Show-Sessions | cmdlet to enumerate open sessions on a server name
Show-Shares | cmdlet to display all shares on a server name
Show-ShareAcl | cmdlet to display the share's ACL on a server name

Command | Description
Show-ShareFlags | cmdlet to display a share's flag values on a server name
Show-SharePerms | cmdlet to enumerate the accesses contained in a share's permissions on a server name
For example: the Show-Shares command
Figure 19 shows a list of all the SMB 3.0 shares on the VNX from the Show-Shares command.
Figure 19. PowerShell execution of Show-Shares

Get-SmbServerConfiguration command
Figure 20 shows the SMB 3.0 server configuration from the Get-SmbServerConfiguration command.
Figure 20. PowerShell execution of Get-SmbServerConfiguration
Feature benefit
PowerShell cmdlets enable clients and administrators to easily manage SMB 3.0 shares from a single location.
Enabling the feature
PowerShell commands are enabled by default on Windows Server 2012 and Windows 8 clients. Download the EMC PowerShell commands from EMC Online Support to use them.
Performance impact
The execution of these cmdlets has no impact on storage, server, or network resources.
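As a short illustration of the Microsoft cmdlets in Table 5, the following sketch maps a VNX SMB 3.0 share from a Windows Server 2012 client and inspects the connection; the server and share names (vnx-cifs01, vm-share1) are placeholders, not part of the validated environment.
# Map a VNX SMB 3.0 share to a drive letter
New-SmbMapping -LocalPath "Z:" -RemotePath "\\vnx-cifs01\vm-share1"

# List the current SMB connections and the dialect in use (3.0 is expected against the VNX)
Get-SmbConnection

# Review the client-side SMB configuration, including multichannel settings
Get-SmbClientConfiguration

# Remove the mapping when it is no longer needed
Remove-SmbMapping -LocalPath "Z:" -Force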

56 Solution Technology Overview SMB 3.0 Directory Leasing Directory Leasing enables clients to cache directory metadata locally. All future metadata requests are serviced from the same cache. Cache coherency is maintained because clients are notified when directory information changes on the server. There are three types of leases: Read-caching lease (R) allows a client to cache reads, and can be granted to multiple clients. Write-caching lease (W) allows a client to cache writes. A handle-caching lease (H) allows a client to cache open handles, and can be granted to multiple clients. Figure 21. SMB 3.0 Directory Leasing Feature benefit Directory leasing improves application response time in branch offices. This feature is useful in scenarios where a client in the branch office does not want to go over the high-latency WAN to fetch the same metadata information repeatedly. Instead, they can cache the same data and rely on the SMB server to notify them when information changes on the server. The typical usage includes: Home folders (read/write) Publication (read-only) 56

Enabling the feature
This feature is enabled by default on the Data Mover without a need for additional configuration.
Performance impact
This feature improves application response time, reduces network traffic, and increases client processor utilization.
Summary
Table 7 summarizes the default status of the features on the Data Mover.
Table 7. Default status of SMB 3.0 features
Feature | Data Mover support
Continuous Availability | Must be enabled on the Data Mover
Multichannel | Enabled by default on the Data Mover
Copy Offload | Enabled by default on the Data Mover
Branch Cache | Must be enabled on the Data Mover
Remote VSS | Enabled by default on the Data Mover
Encryption | Must be enabled on the Data Mover
PowerShell cmdlets | Enabled by default on the Data Mover. EMC SMB PowerShell cmdlets for VNX can be downloaded from powerlink.emc.com
Directory leasing | Enabled by default on the Data Mover
Backup and recovery
Overview
Backup and recovery is another important component in this VSPEX solution. It provides data protection by backing up data files or volumes on a defined schedule, and restoring data from backup for recovery after a disaster. This VSPEX solution uses RecoverPoint or VNX Replicator for replication and EMC Avamar for backup.
EMC RecoverPoint
EMC RecoverPoint is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. RecoverPoint runs on a dedicated appliance (RPA) and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology, allowing it to protect data locally (continuous data protection, CDP), remotely (continuous remote replication, CRR), or both (CLR). RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away, and the data is transferred by FC. RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site using techniques that preserve write order.

In a CLR configuration, RecoverPoint replicates to both a local and a remote site simultaneously. RecoverPoint uses lightweight splitting technology on the application server, in the fabric, or in the array to mirror application writes to the RecoverPoint cluster. RecoverPoint supports several types of write splitters:
Array-based
Intelligent fabric-based
Host-based
EMC VNX Replicator
EMC VNX Replicator is a powerful, easy-to-use asynchronous replication solution. With its WAN-aware functionality, simple management interface, and advanced DR capability, it provides a complete replication solution. Replication between a primary and a secondary file system or iSCSI LUN can be on the same VNX system, or on a remote one. EMC VNX Replicator supports application-consistent iSCSI replication. The host can initiate the replication via the VSS interface in Windows environments or Replication Manager.
For CIFS environments, the Virtual Data Mover (VDM) functionality replicates the necessary context to the remote site along with the file systems. This includes CIFS server data, audit logs, and local groups.
For asynchronous data recovery, the secondary copy can be made read/write, and production can continue at the remote site. When the primary system becomes available again, incremental changes at the secondary copy can be played back to the primary with the re-synchronization function. This operates as described above, with a role reversal between primary and secondary.
EMC Avamar
EMC Avamar data deduplication technology seamlessly integrates into virtual environments, providing rapid backup and restoration capabilities. Avamar's deduplication results in less data transmitted across the network, and greatly reduces the amount of data being backed up and stored, achieving storage, bandwidth, and operational savings.
Two of the most common recovery requests made to backup administrators are:
File-level recovery: Object-level recoveries account for the vast majority of user support requests. Common actions requiring file-level recovery are individual users deleting files, applications requiring recoveries, and batch process-related erasures.
System recovery: Although complete system recovery requests are less frequent in number than those for file-level recovery, this bare-metal restore capability is vital to the enterprise. Some common root causes for full system recovery requests are viral infestation, registry corruption, or unidentifiable unrecoverable issues.

59 Other technologies Solution Technology Overview Leveraging CBT for both backup and recovery with virtual proxy server pools minimizes management needs. Coupling that with Data Domain as the storage platform for image data, this solution enables the most efficient integration with two of the industry-leading next-generation backup appliances. Overview EMC XtremSW Cache In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to the following technologies. EMC XtremSW Cache is a server Flash caching solution that reduces latency and increases throughput to improve application performance by using intelligent caching software and PCIe Flash technology. Server-side Flash caching for maximum speed XtremSW Cache performs the following functions to improve system performance: Caches the most frequently referenced data on the server-based PCIe card to put the data closer to the application. Automatically adapts to changing workloads by determining the most frequently referenced data and promoting it to the server Flash card. This means that the hottest data (most active data) automatically resides on the PCIe card in the server for faster access. Offloads the read traffic from the storage array, which allocates greater processing power to other applications. While one application accelerates with XtremSW Cache, the array performance for other applications remains the same or slightly enhanced. Write-through caching to the array for total protection XtremSW Cache accelerates reads and protects data by using a write-through cache to the storage to deliver persistent high-availability, integrity, and disaster recovery. Application agnostic XtremSW Cache is transparent to applications; do not rewrite, retest, or recertify to deploy XtremSW Cache in the environment. Minimum impact on system resources Unlike other caching solutions on the market, XtremSW Cache does not require a significant amount of memory or CPU cycles, as all Flash and wear-leveling management is done on the PCIe card without using server resources. Unlike other PCIe solutions, there is no significant overhead from using XtremSW Cache on server resources. XtremSW Cache creates the most efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments. 59

60 Solution Technology Overview XtremSW Cache active/passive clustering support The configuration of XtremSW Cache clustering scripts ensures that stale data is never retrieved. The scripts use cluster management events to trigger a mechanism that purges the cache. The XtremSW Cache-enabled active/passive cluster ensures data integrity, and accelerates application performance. XtremSW Cache performance considerations The XtremSW Cache performance considerations are: On a write request, XtremSW Cache first writes to the array, then to the cache, and then completes the application I/O. On a read request, XtremSW Cache satisfies the request with cached data, or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be in the order of milliseconds; therefore, the array limits how fast the cache can work. As the number of writes increases, XtremSW Cache performance decreases. XtremSW Cache is most effective for workloads with a 70 percent, or more, read/write ratio, with small, random I/O (8 K is ideal). I/O greater than 128 K is not cached in XtremSW Cache 1.5. Note For more information, refer to VFCache Installation and Administration Guide v

61 Chapter 4 Solution Architecture Overview This chapter presents the following topics: Overview Solution architecture Server configuration guidelines Network configuration guidelines Storage configuration guidelines High-availability and failover Validation test profile Backup and recovery configuration guidelines Sizing guidelines Reference workload Applying the reference workload

62 Solution Architecture Overview Overview Solution architecture VSPEX Proven Infrastructure solutions are built with proven best-of-breed technologies to create a complete virtualization solution that enables you to make an informed decision when choosing and sizing the hypervisor, compute, and networking layers. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC. VSPEX accelerates your IT Transformation to cloudbased computing by enabling faster deployment, more choice, higher efficiency, and lower risk. This chapter is a comprehensive guide to the major aspects of this solution. Server capacity is presented in generic terms for required minimums of CPU, memory, and network resources; the customer is free to select the server and networking hardware that meet or exceed the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a pre-defined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. Overview The VSPEX solution for Microsoft Hyper-V private cloud with EMC VNX validates at three different points of scale, one configuration with up to 125 virtual machines, one configuration with up to 250 virtual machines, and one configuration with up to 500 virtual machines. The defined configurations form the basis of creating a custom solution. Note VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This document describes the process in Applying the reference workload. 62

63 Solution Architecture Overview Logical architecture The architecture diagrams in this section show the layout of the major components in the solutions. Two types of storage, block-based and file-based, are shown in the following diagrams. Figure 22 characterizes the infrastructure validated with block-based storage, where an 8 Gb FC/FCoE or 10 Gb-iSCSI SAN carries storage traffic, and 10 GbE carries management and application traffic. Figure 22. Logical architecture for block variant 63

64 Solution Architecture Overview Figure 23 characterizes the infrastructure validated with file-based storage, where 10 GbE carries storage traffic and all other traffic. Figure 23. Logical architecture for file variant Key components The architectures include the following key components: Microsoft Hyper-V Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 8. Hyper-V provides highly available infrastructure through features such as: Live Migration Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption. Live Storage Migration Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption. Failover Clustering High-Availability (HA) Detects and provides rapid recovery for a failed virtual machine in a cluster. Dynamic Optimization (DO) Provides load balancing of computing capacity in a cluster with support of SCVMM. Microsoft System Center Virtual Machine Manager (SCVMM) SCVMM is not required for this solution. However, if deployed, it (or its corresponding function in Microsoft System Center Essentials) simplifies provisioning, management, and monitoring of the Hyper-V environment. Microsoft SQL Server 2012 SCVMM, if used, requires a SQL Server database instance to store configuration and monitoring details. 64

DNS Server: Use DNS services for the various solution components to perform name resolution. This solution uses the Microsoft DNS service running on Windows Server 2012.
Active Directory Server: Various solution components require Active Directory services to function properly. The Microsoft AD service runs on a Windows Server 2012 server.
IP network: A standard Ethernet network carries all network traffic with redundant cabling and switching. A shared IP network carries user and management traffic.
Storage network: The storage network is an isolated network that provides hosts with access to the storage arrays. VSPEX offers different options for block-based and file-based storage.
Storage network for block
This solution provides three options for block-based storage networks.
Fibre Channel (FC) is a set of standards that define protocols for performing high-speed serial data transfer. FC provides a standard data transport frame among servers and shared storage devices.
Fibre Channel over Ethernet (FCoE) is a new storage networking protocol that supports FC natively over Ethernet, by encapsulating FC frames into Ethernet frames. This allows the encapsulated FC frames to run alongside traditional Internet Protocol (IP) traffic.
10 Gb Ethernet (iSCSI) enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.
Storage network for file
With file-based storage, a private, non-routable 10 GbE subnet carries the storage traffic.
VNX storage array
The VSPEX private cloud configuration begins with the VNX family storage arrays, including:
EMC VNX5300 array: Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 125 virtual machines.
EMC VNX5500 array: Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 250 virtual machines.
EMC VNX5700 array: Provides storage by presenting either Cluster Shared Volumes (for block) or CIFS (SMB 3.0) shares (for file) to Hyper-V hosts for up to 500 virtual machines.

66 Solution Architecture Overview VNX family storage arrays include the following components: Storage processors (SPs) support block data with UltraFlex I/O technology that supports Fibre Channel, iscsi, and FCoE protocols. The SPs provide access for all external hosts, and for the file side of the VNX array. Disk processor enclosure (DPE) is 3U in size, and houses the SPs and the first tray of disks. VNX5300 and VNX5500 use this component. Storage processor enclosure (SPE) is 2U in size and includes the SPs, two power supplies, and fan packs. VNX5700 and VNX7500 use this component, and support a maximum of 500 and 1,000 drives respectively. X-Blades (or Data Movers) access data from the back-end and provide host access using the same UltraFlex I/O technology that supports the NFS, CIFS, MPFS, and pnfs protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists. Data Mover enclosure (DME) is 2U in size and houses the Data Movers (X- Blades). All VNX for File models use the DME. Standby power supply (SPS) is 1U in size and provides enough power to each SP to ensure that any data in flight is de-stages to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes reconcile and persist. Control Station is 1U in size and provides management functions to the X-Blades. The Control Station is responsible for X-Blade failover. An optional secondary Control Station ensures redundancy on the VNX array. Disk-array enclosures (DAE) house the drives used in the array. Hardware resources Table 8 lists the hardware used in this solution. Table 8. Component Microsoft Hyper-V servers CPU Solution hardware Configuration 1 vcpu per virtual machine 4 vcpus per physical core For 125 virtual machines: 125 vcpus Minimum of 32 physical CPUs For 250 virtual machines: 250 vcpus Minimum of 63 physical CPUs For 500 virtual machines: 500 vcpus Minimum of 125 physical CPUs Memory 2 GB RAM per virtual machine 2 GB RAM reservation per Hyper-V host 66

67 Solution Architecture Overview Component Configuration For 125 virtual machines: Minimum of 250 GB RAM Add 2GB for each physical server For 250 virtual machines: Minimum of 500 GB RAM Add 2GB for each physical server For 500 virtual machines: Minimum of 1000 GB RAM Add 2GB for each physical server Network Block 2 x 10 GbE NICs per server 2 HBAs per server File 4 x 10 GbE NICs per server Note Add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V High-Availability (HA) and meet the listed minimums. Network infrastructure Minimum switching capacity Block 2 physical switches 2 x 10 GbE ports per Hyper-V server 1 x 1 GbE port per Control Station for management 2 ports per Hyper-V server, for storage network 2 ports per SP, for storage data File 2 physical switches 4 x 10 GbE ports per Hyper-V server 1 x 1 GbE port per Control Station for management 2 x 10 GbE ports per Data Mover for data EMC Next- Generation Backup Avamar 1 Gen4 utility node 1 Gen4 3.9 TB spare node For 125 virtual machines 3 Gen4 3.9 TB storage nodes For 250 virtual machines 5 Gen4 3.9 TB storage nodes For 500 virtual machines 7 Gen4 3.9 TB storage nodes 67

Data Domain:
For 125 virtual machines: 1 Data Domain DD640; 1 ES30 with 15 x 1 TB HDDs
For 250 virtual machines: 1 Data Domain DD670; 2 ES30 with 15 x 1 TB HDDs
For 500 virtual machines: 1 Data Domain DD670; 4 ES30 with 15 x 1 TB HDDs
EMC VNX series storage array, Block:
Common: 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 2 front-end ports per SP; system disks for VNX OE
For 125 virtual machines: EMC VNX5300; 60 x 600 GB 15k rpm 3.5-inch SAS drives; 4 x 200 GB Flash drives; 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
For 250 virtual machines: EMC VNX5500; 115 x 600 GB 15k rpm 3.5-inch SAS drives; 6 x 200 GB Flash drives; 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
For 500 virtual machines: EMC VNX5700; 225 x 600 GB 15k rpm 3.5-inch SAS drives; 10 x 200 GB Flash drives; 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare

EMC VNX series storage array, File:
Common: 2 Data Movers (active/standby); 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; system disks for VNX OE
For 125 virtual machines: EMC VNX5300; 60 x 600 GB 15k rpm 3.5-inch SAS drives; 4 x 200 GB Flash drives; 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
For 250 virtual machines: EMC VNX5500; 115 x 600 GB 15k rpm 3.5-inch SAS drives; 6 x 200 GB Flash drives; 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
For 500 virtual machines: EMC VNX5700; 225 x 600 GB 15k rpm 3.5-inch SAS drives; 10 x 200 GB Flash drives; 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
Shared infrastructure:
Note In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document.
If implemented without existing infrastructure, add the following: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server
Note These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.
This solution may use a 1 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

70 Solution Architecture Overview Software resources Table 9 lists the software used in this solution. Table 9. Software Microsoft Hyper-V Windows Server Solution software System Center Virtual Machine Manager Microsoft SQL Server Configuration Windows Server 2012 Datacenter Edition (Datacenter Edition is necessary to support the number of virtual machines in this solution) Version 2012 SP1 Version 2012 Enterprise Edition Note Any supported database for SCVMM is acceptable. EMC VNX VNX OE for File Release VNX OE for Block Release 32 ( ) EMC Storage Integrator (ESI) 2.1 EMC PowerPath 5.7 Next-Generation Backup Avamar 6.1 SP1 Data Domain OS 5.2 Virtual machines (used for validation not required for deployment) Base operating system Microsoft Windows Server 2012 Datacenter Edition Server configuration guidelines Overview When designing and ordering the compute/server layer of the VSPEX solution described below, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as Dynamic Memory and Smart Paging can reduce the aggregate memory requirement. If the virtual machine pool does not have a high level of peak or concurrent usage, reduce the number of vcpus. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and memory purchased. 70

71 Table 10 lists the hardware resources that are used for compute. Solution Architecture Overview Table 10. Hardware resources for compute Component Microsoft Hyper-V servers CPU Memory Configuration 1 vcpu per virtual machine 4 vcpus per physical core For 125 virtual machines: 125 vcpus Minimum of 32 physical CPUs For 250 virtual machines: 250 vcpus Minimum of 63 physical CPUs For 500 virtual machines: 500 vcpus Minimum of 125 physical CPUs 2 GB RAM per virtual machine 2 GB RAM reservation per Hyper-V host For 125 virtual machines: Minimum of 250 GB RAM Add 2GB for each physical server For 250 virtual machines: Minimum of 500 GB RAM Add 2GB for each physical server For 500 virtual machines: Minimum of 1000 GB RAM Add 2GB for each physical server Network Block 2 x 10 GbE NICs per server 2 HBA per server File 4 x 10 GbE NICs per server Note Add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V High-Availability (HA) and meet the listed minimums. Note The solution may use a 1 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled. 71
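The ratios in Table 10 can be applied directly when sizing a specific environment. The following is a minimal sketch of that arithmetic; the virtual machine and host counts used here are example inputs only, not part of the reference design.
# Sizing rules from Table 10: 1 vCPU per virtual machine, 4 vCPUs per physical core,
# 2 GB RAM per virtual machine plus a 2 GB reservation per Hyper-V host
$vmCount   = 250
$hostCount = 8

$vcpus         = $vmCount
$physicalCores = [math]::Ceiling($vcpus / 4)
$ramGB         = ($vmCount * 2) + ($hostCount * 2)

"{0} virtual machines need at least {1} physical cores and {2} GB of RAM across {3} hosts." -f $vmCount, $physicalCores, $ramGB, $hostCount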

72 Solution Architecture Overview Hyper-V memory virtualization Microsoft Hyper-V has a number of advanced features to maximize performance, and overall resource utilization. The most important features relate to memory management. This section describes some of these features, and the items to consider when using these features in the environment. In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 24. Figure 24. Hypervisor memory consumption Understanding the technologies in this section enhances this basic concept. 72

73 Dynamic Memory Solution Architecture Overview Dynamic Memory was introduced in Windows Server 2008 R2 SP1 to increase physical memory efficiency by treating memory as a shared resource, and dynamically allocating it to virtual machines. The amount of memory used by each virtual machine is adjustable at any time. Dynamic Memory reclaims unused memory from idle virtual machines, which allows more virtual machines to run at any time. In Windows Server 2012, Dynamic Memory enables administrators to dynamically increase the maximum memory available to virtual machines. Smart Paging Even with Dynamic Memory, Hyper-V allows more virtual machines than the available physical memory can support. In most cases, there is a memory gap between minimum memory and startup memory. Smart Paging is a memory management technique that uses disk resources as temporary memory replacement. It swaps out less-used memory to disk storage, and swaps in when needed. Performance degradation is a potential drawback of Smart Paging. Hyper-V continues to use the guest paging when the host memory is oversubscribed because it is more efficient than Smart Paging. Non-Uniform Memory Access Non-Uniform Memory Access (NUMA) is a multi-node computer technology that enables a CPU to access remote-node memory. This type of memory access degrades performance, so Windows Server 2012 employs a process known as processor affinity, which pins threads to a single CPU to avoid remote-node memory access. In previous versions of Windows, this feature is only available to the host. Windows Server 2012 extends this functionality to the virtual machines, which provides improved performance in SMP environments. Memory configuration guidelines This section provides guidelines to configure server memory for this solution. The guidelines take into account Hyper-V memory overhead, and the virtual machine memory settings. Hyper-V memory overhead Virtualized memory has some associated overhead, which includes the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine. Leave at least 2 GB memory for the Hyper-V parent partition in this solution. Virtual machine memory In this solution, each virtual machine gets 2 GB memory in fixed mode. 73
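The fixed-memory assignment described above maps directly to the Hyper-V cmdlets in Windows Server 2012. The following is a minimal sketch; the virtual machine name (RefVM01) is a placeholder, and the Dynamic Memory values shown are illustrative rather than validated settings.
# Assign 2 GB of static (fixed) memory, matching the per-VM assumption in this solution
Set-VM -Name "RefVM01" -StaticMemory -MemoryStartupBytes 2GB

# Alternatively, enable Dynamic Memory where the workload profile allows it
Set-VMMemory -VMName "RefVM01" -DynamicMemoryEnabled $true -MinimumBytes 1GB -StartupBytes 2GB -MaximumBytes 4GB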

74 Solution Architecture Overview Network configuration guidelines Overview This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines outlined here consider Jumbo Frames, VLANs, and LACP on EMC unified storage. For detailed network resource requirements, refer to Table 11. Table 11. Hardware resources for network Component Configuration Network infrastructure Minimum switching capacity Block 2 physical switches 2 x 10 GbE ports per Hyper-V server 1 x 1 GbE port per Control Station for management 2 ports per Hyper-V server, for storage network 2 ports per SP, for storage data File 2 physical switches 4 x 10 GbE ports per Hyper-V server 1 x 1 GbE port per Control Station for management 2 x 10 GbE ports per Data Mover for data Note The solution may use a 1 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled. VLAN Isolate network traffic so that the traffic between hosts and storage, hosts and clients, and management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons; but in many cases logical isolation with VLANs is sufficient. This solution calls for a minimum of three VLANs for the following usage: Client access Storage (for iscsi or SMB only) Management 74

Figure 25 depicts the VLANs and the network connectivity requirements for a block-based VNX array.
Figure 25. Required networks for block variant

Figure 26 depicts the VLANs and the network connectivity requirements for a file-based VNX array.
Figure 26. Required networks for file variant
Note Figure 26 demonstrates the network connectivity requirements for a VNX array using 10 GbE connections. Create a similar topology for 1 GbE network connections.
The client access network is for users of the system, or clients, to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.
Note Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. Implement these additional networks if necessary.
Enable jumbo frames (iSCSI or SMB only)
This solution recommends setting the MTU at 9,000 (jumbo frames) for efficient storage and migration traffic. Refer to the switch vendor guidelines to enable jumbo frames for storage and host ports on the switches.
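On the Hyper-V hosts, the MTU is set per network adapter. The following is a minimal sketch using the Windows Server 2012 NetAdapter cmdlets; the adapter name (Storage1) is a placeholder, and the advanced-property keyword and value (commonly *JumboPacket and 9014) vary by NIC driver, so check the supported values first.
# Show the jumbo-frame setting currently exposed by the driver
Get-NetAdapterAdvancedProperty -Name "Storage1" -DisplayName "Jumbo*"

# Enable jumbo frames on the storage-facing adapter (value depends on the driver)
Set-NetAdapterAdvancedProperty -Name "Storage1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014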

77 Solution Architecture Overview Link aggregation (SMB only) A link aggregation resembles an Ethernet channel, but uses the LACP IEEE 802.3ad standard. The IEEE 802.3ad standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost in the Ethernet port, the link fails over to another port. All network traffic is distributed across the active links. Storage configuration guidelines Overview This section provides guidelines for setting up the storage layer of the solution to provide high-availability and the expected level of performance. Hyper-V allows more than one method of using storage when hosting virtual machines. The tested solutions described below use different block protocols (FC/FCoE/iSCSI) and CIFS (for file), and the storage layout described adheres to all current best practices. A customer or architect with the necessary training and background can make modifications based on their understanding of the system usage and load if required. However, the building blocks described in this document ensure acceptable performance. VSPEX storage building blocks documents specific recommendations for customization. 77

Table 12 lists the hardware resources used for storage.
Table 12. Hardware resources for storage
EMC VNX series storage array, Block:
Common: 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; 2 front-end ports per SP; system disks for VNX OE
For 125 virtual machines: EMC VNX5300; 60 x 600 GB 15k rpm 3.5-inch SAS drives; 4 x 200 GB Flash drives; 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
For 250 virtual machines: EMC VNX5500; 115 x 600 GB 15k rpm 3.5-inch SAS drives; 6 x 200 GB Flash drives; 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
For 500 virtual machines: EMC VNX5700; 225 x 600 GB 15k rpm 3.5-inch SAS drives; 10 x 200 GB Flash drives; 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare

EMC VNX series storage array, File:
Common: 2 Data Movers (active/standby); 2 x 10 GbE interfaces per Data Mover; 1 x 1 GbE interface per Control Station for management; 1 x 1 GbE interface per SP for management; system disks for VNX OE
For 125 virtual machines: EMC VNX5300; 60 x 600 GB 15k rpm 3.5-inch SAS drives; 4 x 200 GB Flash drives; 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
For 250 virtual machines: EMC VNX5500; 115 x 600 GB 15k rpm 3.5-inch SAS drives; 6 x 200 GB Flash drives; 4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
For 500 virtual machines: EMC VNX5700; 225 x 600 GB 15k rpm 3.5-inch SAS drives; 10 x 200 GB Flash drives; 8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB Flash drive as a hot spare
Hyper-V storage virtualization for VSPEX
This section provides guidelines to set up the storage layer of the solution to provide high-availability and the expected level of performance.
Windows Server 2012 Hyper-V and Failover Clustering use the Cluster Shared Volumes V2 and New Virtual Hard Disk Format (VHDX) features to virtualize storage presented from an external shared storage system to host virtual machines. In Figure 27, the storage array presents either block-based LUNs (as CSVs) or file-based CIFS shares (as SMB shares) to the Windows hosts to host virtual machines.

80 Solution Architecture Overview Figure 27. Hyper-V virtual disk types CIFS Windows Server 2012 supports using CIFS (SMB 3.0) file shares as shared storage for Hyper-V virtual machine. CSV A Cluster Shared Volume (CSV) is a shared disk containing an NTFS volume that is made accessible by all nodes of a Windows Failover Cluster. It can be deployed over any SCSI-based local or network storage. Pass Through Windows 2012 also supports Pass Through, which allows a virtual machine to access a physical disk mapped to the host that does not have a volume configured. SMB 3.0 (file-based storage only) The SMB protocol is the file sharing protocol that is used by default in Windows. With the introduction of Windows Server 2012, it provides a vast set of new SMB features with an updated (SMB 3.0) protocol. Some of the key features available with Windows Server 2012 SMB 3.0 are: SMB Transparent Failover SMB Scale Out SMB Multichannel SMB Direct SMB Encryption VSS for SMB file shares SMB Directory Leasing SMB PowerShell 80

With these new features, SMB 3.0 offers richer capabilities that, when combined, provide organizations with a high-performance storage alternative to traditional Fibre Channel storage solutions at a lower cost.
Note SMB is also known as the Common Internet File System (CIFS). For more details about SMB 3.0, refer to Chapter 3.
ODX (block-based storage only)
Offloaded Data Transfer (ODX) is a feature of the storage stack in Microsoft Windows Server 2012 that gives users the ability to use their investment in external storage arrays to offload data transfers from the server to the storage arrays. When used with storage hardware that supports the ODX feature, file copy operations are initiated by the host but performed by the storage device. ODX eliminates the data transfer between the storage and the Hyper-V hosts by using a token-based mechanism for reading and writing data within or between storage arrays, and reduces the load on your network and hosts.
Using ODX helps to enable rapid cloning and migration of virtual machines. Since the file transfer is offloaded to the storage array when using ODX, host resource usage, such as CPU and network, is significantly reduced. By maximizing the use of the storage array, ODX minimizes latencies and improves the transfer speed of large files, such as database or video files. When file operations that are supported by ODX are performed, data transfers are automatically offloaded to the storage array and are transparent to users. ODX is enabled by default in Windows Server 2012.
New Virtual Hard Disk format
Hyper-V in Windows Server 2012 contains an update to the VHD format called VHDX, which has much larger capacity and built-in resiliency. The main new features of the VHDX format are:
Support for virtual hard disk storage with a capacity of up to 64 TB.
Additional protection against data corruption during power failures by logging updates to the VHDX metadata structures.
Optimal structure alignment of the virtual hard disk format to suit large-sector disks.
The VHDX format also has the following features:
Larger block sizes for dynamic and differential disks, which enables the disks to meet the needs of the workload.
The 4 KB logical sector virtual disk that enables increased performance when used by applications and workloads that are designed for 4 KB sectors.
The ability to store custom metadata about the files that the user might want to record, such as the operating system version or applied updates.
Space reclamation features that can result in smaller file size and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware).
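The following is a minimal sketch of the file-variant workflow this section describes: creating a VHDX on a VNX SMB 3.0 share and attaching it to a new virtual machine with the Windows Server 2012 Hyper-V cmdlets. The share path and virtual machine name (\\vnx-cifs01\vm-share1, RefVM01) are placeholders, not part of the validated environment.
# Create a dynamically expanding VHDX directly on the SMB 3.0 share
New-VHD -Path "\\vnx-cifs01\vm-share1\RefVM01\RefVM01.vhdx" -SizeBytes 100GB -Dynamic

# Create the virtual machine with its configuration and disk on the same share
New-VM -Name "RefVM01" -MemoryStartupBytes 2GB -VHDPath "\\vnx-cifs01\vm-share1\RefVM01\RefVM01.vhdx" -Path "\\vnx-cifs01\vm-share1\RefVM01"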

82 Solution Architecture Overview VSPEX storage building blocks Sizing the storage system to meet virtual server IOPS is a complicated process. When I/O reaches the storage array, several components such as the Data Mover (for filebased storage), SPs, back-end dynamic random access memory (DRAM) cache, FAST Cache (if used), and disks serve that I/O. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications. VSPEX uses a building block approach to reduce complexity. A building block is a set of disk spindles that can support a certain number of virtual servers in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the private cloud environment. Each building block storage pool, regardless of the size, contains two Flash drives with FAST VP storage tiering to enhance metadata operations and performance. Building block for 10 virtual servers The first building block can contain up to 10 virtual servers. It has two Flash drives and five SAS drives in a storage pool, as shown in Figure 28. Figure 28. Building block for 10 virtual servers This is the smallest building block qualified for the VSPEX architecture. This building block can be expanded by adding five SAS drives and allowing the pool to restripe to add support for 10 more virtual servers. For details about pool expansion and restriping, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology. Building block for 50 virtual servers The second building block can contain up to 50 virtual servers. It contains two Flash drives, and 25 SAS drives, as shown in Figure 29. Figure 29. Building block for 50 virtual servers Implement this building block by placing all of the drives into a pool initially, or start with a 10 virtual server building block, and then expand the pool by adding five SAS drives and allowing the pool to restripe. For details about pool expansion and restriping, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology. 82

Building block for 100 virtual servers
The third building block can contain up to 100 virtual servers. It contains two Flash drives and 45 SAS drives, as shown in Figure 30.
The preceding sections outline an approach to grow from 10 virtual machines in a pool to 100 virtual machines in a pool. However, after reaching 100 virtual machines in a pool, do not go to 110. Create a new pool and start the scaling sequence again.
Figure 30. Building block for 100 virtual servers
Implement this building block with all of the resources in the pool initially, or expand the pool over time as the environment grows. Table 13 lists the Flash and SAS drive requirements in a pool for different numbers of virtual servers.
Table 13. Number of disks required for different number of virtual machines
Virtual servers | Flash drives | SAS drives
10 | 2 | 5
50 | 2 | 25
100 | 2 | 45*
* Note Due to increased efficiency with larger stripes, the building block with 45 SAS drives can support up to 100 virtual servers.
To grow the environment beyond 100 virtual servers, create another storage pool using the building block method described here.
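To make the building-block arithmetic concrete, the following is a minimal sketch that computes a pool layout from the rules above (5 SAS drives per 10 virtual servers, 45 drives covering a full 100-server pool, 2 Flash drives per pool, and a new pool after every 100 virtual servers). The function name and output format are illustrative only, not part of the VSPEX tooling.
function Get-VspexPoolLayout {
    param([int]$VirtualServers)

    # One storage pool per 100 virtual servers (or fraction thereof)
    $pools = [math]::Ceiling($VirtualServers / 100)

    for ($i = 1; $i -le $pools; $i++) {
        $vmsInPool = [math]::Min(100, $VirtualServers - (($i - 1) * 100))

        # The 45-drive building block covers a full pool; otherwise use 5 SAS drives per 10 servers
        $sas = if ($vmsInPool -eq 100) { 45 } else { [math]::Ceiling($vmsInPool / 10) * 5 }

        [pscustomobject]@{ Pool = $i; VirtualServers = $vmsInPool; SASDrives = $sas; FlashDrives = 2 }
    }
}

# Example: 250 virtual servers yields two 45-drive pools and one 25-drive pool
# (115 SAS drives and 6 Flash drives in total, matching the VNX5500 layout)
Get-VspexPoolLayout -VirtualServers 250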

84 Solution Architecture Overview VSPEX private cloud validated maximums VSPEX private cloud configurations are validated on the VNX5300, VNX5500, and VNX5700 platforms. Each platform has different capabilities in terms of processors, memory, and disks. For each array, there is a recommended maximum VSPEX private cloud configuration. In addition to the VSPEX private cloud building blocks, each storage array must contain the drives used for the VNX Operating Environment, and hot spare disks for the environment. Note Allocate at least one hot spare for every 30 disks of a given type and size. VNX5300 VNX5300 is validated for up to 125 virtual servers. There are multiple ways to achieve this configuration with the building blocks. Figure 31 shows one potential configuration. Figure 31. Storage layout for 125 virtual machines using VNX5300 This configuration uses the following storage layout: Sixty 600 GB SAS disks are allocated to two block-based storage pools: one pool with 45 SAS disks for 100 virtual machines, and one pool with 15 SAS disks for 25 virtual machines. Note Note The pool does not use system drives for additional storage. If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool must be 15k RPM and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes. Two 600 GB SAS disks are configured as hot spares. Four 200 GB Flash drives are allocated to two block-based storage pools, two per pool. One 200 GB Flash drive is allocated as a hot spare. 84

85 Solution Architecture Overview Enable FAST VP to automatically tier data to use differences in performance and capacity. FAST VP : Works at the block storage pool level and automatically adjusts where data is stored based on access frequency. Promotes frequently-accessed data to higher tiers of storage in 1-GB increments, and migrates infrequently-accessed data to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is part of a regularly scheduled maintenance operation. For block storage, allocate at least two LUNs to the Windows cluster from a single storage pool to serve as Cluster Shared Volumes for the virtual servers. For file storage, allocate at least two CIFS shares to the Windows cluster from a single storage pool to serve as SMB shares for the virtual servers. Optionally configure up to 10 Flash drives in the array FAST Cache. LUNs or storage pools where virtual machines reside that have a higher than average I/O requirement can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite. Using this configuration, the VNX5300 can support 125 virtual servers as defined in Reference workload. 85

86 Solution Architecture Overview VNX5500 VNX5500 is validated for up to 250 virtual servers. There are multiple ways to achieve this configuration with the building blocks. Figure 32 shows one potential configuration. Figure 32. Storage layout for 250 virtual machines using VNX5500 This configuration uses the following storage layout: One hundred fifteen 600 GB SAS disks are allocated to three block-based storage pools: two pools with 45 SAS disks for 100 virtual machines each, and one pool with 25 SAS disks for 50 virtual machines. Note Note The pool does not use system drives for additional storage. If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool must be 15k RPM and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes. Four 600 GB SAS disks are configured as hot spares. Six 200 GB Flash drives are allocated to three block-based storage pools, two per pool. One 200 GB Flash drive is allocated as a hot spare. 86

87 Solution Architecture Overview Enable FAST VP to automatically tier data to use differences in performance and capacity. FAST VP : Works at the block storage pool level and automatically adjusts where data is stored based on access frequency. Promotes frequently-accessed data to higher tiers of storage in 1-GB increments, and migrates infrequently-accessed data to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is part of a regularly scheduled maintenance operation. For block storage, allocate at least two LUNs to the Windows cluster from a single storage pool to serve as Cluster Shared Volumes for the virtual servers. For file storage, allocate at least two CIFS shares to the Windows cluster from a single storage pool to serve as SMB shares for the virtual servers. Optionally configure up to 10 Flash drives in the array FAST Cache. LUNs or storage pools where virtual machines reside that have a higher than average I/O requirement can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite. Using this configuration, the VNX5500 can support 250 virtual servers as defined in Reference workload. 87

VNX5700
VNX5700 is validated for up to 500 virtual servers. There are multiple ways to achieve this configuration with the building blocks. Figure 33 shows one potential configuration.
Figure 33. Storage layout for 500 virtual machines using VNX5700

This configuration uses the following storage layout:
Two hundred twenty-five 600 GB SAS disks are allocated to five block-based storage pools, each with 45 SAS disks, for 100 virtual machines per pool.
Note The pool does not use system drives for additional storage.
Note If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool must be 15k RPM and the same size. Storage layout algorithms may produce sub-optimal results with drives of different sizes.
Eight 600 GB SAS disks are configured as hot spares.
Ten 200 GB Flash drives are allocated to five block-based storage pools, two per pool.
One 200 GB Flash drive is allocated as a hot spare.
Enable FAST VP to automatically tier data to take advantage of differences in performance and capacity. FAST VP:
Works at the block storage pool level and automatically adjusts where data is stored based on access frequency.
Promotes frequently-accessed data to higher tiers of storage in 1-GB increments, and migrates infrequently-accessed data to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is part of a regularly scheduled maintenance operation.
For block storage, allocate at least two LUNs to the Windows cluster from a single storage pool to serve as Cluster Shared Volumes for the virtual servers. For file storage, allocate at least two CIFS shares to the Windows cluster from a single storage pool to serve as SMB shares for the virtual servers.
Optionally configure up to 10 Flash drives in the array FAST Cache. LUNs or storage pools where virtual machines reside that have a higher than average I/O requirement can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.
Using this configuration, the VNX5700 can support 500 virtual servers as defined in Reference workload.

90 Solution Architecture Overview Conclusion The scale levels listed in Figure 34 are maximums for the arrays in the VSPEX private cloud environment. It is acceptable to configure any of the listed arrays with a smaller number of virtual servers with the building blocks described. Figure 34. Maximum scale level of different arrays High-availability and failover Overview Virtualization layer This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with little or no impact on business operations. Configure high-availability in the virtualization layer, and configure the hypervisor to automatically restart failed virtual machines. Figure 35 illustrates the hypervisor layer responding to a failure in the compute layer. Figure 35. High availability at the virtualization layer By implementing high-availability at the virtualization layer, even in a hardware failure, the infrastructure attempts to keep as many services running as possible. Compute layer While the choice of servers to implement in the compute layer is flexible, use enterprise class servers designed for the data center. This type of server has redundant power supplies, as shown in Figure 36. Connect the servers to separate power distribution units (PDUs) in accordance with your server vendor s best practices. 90

Figure 36. Redundant power supplies
To configure high-availability in the virtualization layer, configure the compute layer with enough resources so that the total number of available resources meets the needs of the environment, even with a server failure, as demonstrated in Figure 35.
Network layer
The advanced networking features of the VNX family provide protection against network connection failures at the array. Each Windows host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 37 and Figure 38. Spread these connections across multiple Ethernet switches to guard against component failure in the network.
Figure 37. Network layer high availability (VNX) block variant

92 Solution Architecture Overview Figure 38. Network layer high availability (VNX) file variant Ensure there is no single point of failure to allow the compute layer to access storage, and communicate with users even if a component fails. Storage layer The VNX family design is for five 9s availability by using redundant components throughout the array. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can be dynamically allocated to replace a failing disk, as shown in Figure 39. Figure 39. VNX series high availability 92

EMC storage arrays are highly available by default. When configured according to the directions in their installation guides, no single-unit failure results in data loss or unavailability.
Validation test profile
Profile characteristics
The VSPEX solution was validated with the environment profile described in Table 14.
Table 14. Profile characteristics
Number of virtual machines: 125/250/500
Virtual machine OS: Windows Server 2012 Datacenter Edition
Processors per virtual machine: 1
Number of virtual processors per physical CPU core: 4
RAM per virtual machine: 2 GB
Average storage available for each virtual machine: 100 GB
Average IOPS per virtual machine: 25 IOPS
Number of LUNs or CIFS shares to store virtual machine disks: 4/6/10
Disk and RAID type for LUNs or CIFS shares: RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks
Note This solution was tested and validated with Windows Server 2012 as the operating system for Hyper-V hosts and virtual machines, but Windows Server 2008 R2 is also supported. Hyper-V hosts on Windows Server 2008 R2 and Windows Server 2012 use the same sizing and configuration.
Backup and recovery configuration guidelines
Overview
This section provides guidelines to set up backup and recovery for this VSPEX solution. It describes how the backup is characterized, and the backup layout.
Backup characteristics
The solution is sized with the following application environment profile, as listed in Table 15.
Table 15. Profile characteristics
Number of users: 1,250 for 125 virtual machines; 2,500 for 250 virtual machines; 5,000 for 500 virtual machines

Number of virtual machines: 125 / 250 / 500 (20% database, 80% unstructured)
Exchange data: 1.2 TB (1 GB mailbox per user) for 125 virtual machines; 2.5 TB (1 GB mailbox per user) for 250 virtual machines; 5 TB (1 GB mailbox per user) for 500 virtual machines
SharePoint data: 0.6 TB for 125 virtual machines; 1.25 TB for 250 virtual machines; 2.5 TB for 500 virtual machines
SQL Server: 0.6 TB for 125 virtual machines; 1.25 TB for 250 virtual machines; 2.5 TB for 500 virtual machines
User data: 6.1 TB (5.0 GB per user) for 125 virtual machines; 25 TB (10.0 GB per user) for 250 virtual machines; 50 TB (10.0 GB per user) for 500 virtual machines
Daily change rate for the applications: Exchange data 10%; SharePoint data 2%; SQL Server 5%; User data 2%
Retention per data type: all database data, 14 dailies; user data, 30 dailies, 4 weeklies, 1 monthly
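To get a rough feel for the daily backup workload implied by Table 15, multiply each data set by its daily change rate. The sketch below is a planning aid only, not part of the validated sizing; the variable names are illustrative and the figures are simply those from the 500-virtual-machine column of the table, before deduplication.

# Rough daily changed-data estimate for the 500-VM backup profile (values from Table 15).
$dataTB          = @{ Exchange = 5.0;  SharePoint = 2.5;  SQLServer = 2.5;  UserData = 50.0 }
$dailyChangeRate = @{ Exchange = 0.10; SharePoint = 0.02; SQLServer = 0.05; UserData = 0.02 }

$totalChangedTB = 0
foreach ($app in $dataTB.Keys) {
    $changed = $dataTB[$app] * $dailyChangeRate[$app]
    "{0,-11} {1,6:N2} TB changed per day" -f $app, $changed
    $totalChangedTB += $changed
}
"{0,-11} {1,6:N2} TB changed per day (before deduplication)" -f 'Total', $totalChangedTB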

Backup layout
Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this case, Avamar and Data Domain are deployed and managed as a single solution. This enables users to back up the unstructured user data directly to the Avamar system for simple file-level recovery. Avamar manages the database and virtual machine images, and stores the backups on the Data Domain system with the embedded Boost client library. This backup solution unifies the backup process with industry-leading deduplication backup software and storage, and achieves the highest levels of performance and efficiency.
Sizing guidelines
Reference workload
The following sections provide definitions of the reference workload used to size and implement the VSPEX architectures. There is guidance on how to correlate those reference workloads to customer workloads, and how that may change the end delivery from the server and network perspective. Make modifications to the storage definition by adding drives for greater capacity and performance, and by adding features such as FAST Cache and FAST VP. The disk layouts provide support for the appropriate number of virtual machines at the defined performance level and typical operations such as snapshots. Decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per virtual machine, and a reduced user experience caused by higher response times.
Overview
When you move an existing server to a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, validated by EMC. In practice, each virtual machine has its own requirements that rarely fit a pre-defined idea of a virtual machine. In any discussion about virtual infrastructures, first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics.

Defining the reference workload
To simplify the discussion, this section presents a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can extrapolate which reference architecture to choose. For the VSPEX solutions, the reference workload is a single virtual machine. Table 16 lists the characteristics of this virtual machine.
Table 16. Virtual machine characteristics
Virtual machine operating system: Microsoft Windows Server 2012 Datacenter Edition
Virtual processors per virtual machine: 1
RAM per virtual machine: 2 GB
Available storage capacity per virtual machine: 100 GB
I/O operations per second (IOPS) per virtual machine: 25
I/O pattern: Random
I/O read/write ratio: 2:1
This specification for a virtual machine does not represent any specific application. Rather, it represents a single common point of reference to measure other virtual machines.
Applying the reference workload
Overview
When you consider an existing server for movement into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system. The solution creates a pool of resources that are sufficient to host a target number of Reference virtual machines with the characteristics shown in Table 16. The customer virtual machines may not exactly match the specifications. In that case, define a single specific customer virtual machine as the equivalent of some number of Reference virtual machines together, and assume these virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain.
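For convenience when sizing, the characteristics in Table 16 can be captured as a small data structure that later calculations can reference. This is a sketch only; the property names are arbitrary and the values are simply those listed in the table.

# The VSPEX reference virtual machine (values from Table 16).
$ReferenceVM = [pscustomobject]@{
    OperatingSystem = 'Windows Server 2012 Datacenter Edition'
    vCPUs           = 1
    MemoryGB        = 2
    CapacityGB      = 100
    IOPS            = 25
    IOPattern       = 'Random'
    ReadWriteRatio  = '2:1'
}
$ReferenceVM | Format-List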

97 Solution Architecture Overview Example 1: Custom-built application A small custom-built application server must move into this virtual infrastructure. The physical hardware that supports the application is not fully utilized. A careful analysis of the existing application reveals that the application can use one processor, and needs 3 GB of memory to run normally. The I/O workload ranges between 4 IOPS at idle time to a peak of 15 IOPS when busy. The entire application consumes about 30 GB on local hard drive storage. Based on these numbers, the resource pool needs the following resources: CPU of one Reference virtual machine Memory of two Reference virtual machines Storage of one Reference virtual machine I/Os of one Reference virtual machine In this example, an appropriate virtual machine uses the resources for two of the Reference virtual machines. If implemented on a VNX5300 storage system which can support up to 125 virtual machines, resources for 123 Reference virtual machines remain. Example 2: Point of sale system The database server for a customer s point of sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle. The requirements to virtualize this application are: CPUs of four Reference virtual machines Memory of eight Reference virtual machines Storage of two Reference virtual machines I/Os of eight Reference virtual machines In this case, the one appropriate virtual machine uses the resources of eight Reference virtual machines. If implemented on a VNX5300 storage system which can support up to 125 virtual machines, resources for 117 Reference virtual machines remain. Example 3: Web server The customer s web server must move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle. The requirements to virtualize this application are: CPUs of two Reference virtual machines Memory of four Reference virtual machines Storage of one Reference virtual machine I/Os of two Reference virtual machines 97

98 Solution Architecture Overview In this case, the one appropriate virtual machine uses the resources of four Reference virtual machines. If implemented on a VNX5300 storage system which can support up to 125 virtual machines, resources for 121 Reference virtual machines remain. Example 4: Decision-support database The database server for a customer s decision support system must move into this virtual infrastructure. It is currently running on a physical system with 10 CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle. The requirements to virtualize this application are: CPUs of 10 Reference virtual machines Memory of 32 Reference virtual machines Storage of 52 Reference virtual machines I/Os of 28 Reference virtual machines In this case, one virtual machine uses the resources of 52 Reference virtual machines. If implemented on a VNX5300 storage system which can support up to 125 virtual machines, resources for 73 Reference virtual machines remain. Summary of examples These four examples illustrate the flexibility of the resource pool model. In all four cases, the workloads reduce the amount of available resources in the pool. All four examples can be implemented on the same virtual infrastructure with an initial capacity for 125 Reference virtual machines, and resources for 59 Reference virtual machines remain in the resource pool as shown in Figure 40. Figure 40. Resource pool flexibility In more advanced cases, there may be tradeoffs between memory and I/O or other relationships where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex, and are beyond the scope of the document. Examine the change in resource balance and determine the new level of requirements. Add these virtual machines to the infrastructure with the method described in the examples. 98
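The arithmetic behind these examples reduces to dividing each resource requirement by the corresponding reference virtual machine value, rounding up, and keeping the largest result. The sketch below reproduces that logic and applies it to the four examples above; the function name is illustrative and the inputs are taken directly from the examples (5 TB is treated as 5,120 GB, which yields the 52 shown in Example 4).

# Equivalent Reference VMs = the largest per-resource ratio, rounded up.
# Reference VM: 1 vCPU, 2 GB RAM, 25 IOPS, 100 GB capacity.
function Get-EquivalentReferenceVMs {
    param([int]$vCPUs, [double]$MemoryGB, [double]$IOPS, [double]$CapacityGB)
    $ratios = @(
        [math]::Ceiling($vCPUs / 1),
        [math]::Ceiling($MemoryGB / 2),
        [math]::Ceiling($IOPS / 25),
        [math]::Ceiling($CapacityGB / 100)
    )
    return ($ratios | Measure-Object -Maximum).Maximum
}

# The four examples above:
Get-EquivalentReferenceVMs -vCPUs 1  -MemoryGB 3  -IOPS 15  -CapacityGB 30    # 2  (custom-built application)
Get-EquivalentReferenceVMs -vCPUs 4  -MemoryGB 16 -IOPS 200 -CapacityGB 200   # 8  (point of sale system)
Get-EquivalentReferenceVMs -vCPUs 2  -MemoryGB 8  -IOPS 50  -CapacityGB 25    # 4  (web server)
Get-EquivalentReferenceVMs -vCPUs 10 -MemoryGB 64 -IOPS 700 -CapacityGB 5120  # 52 (decision-support database)

Summing the four results gives 66 Reference virtual machines, which leaves the 59 remaining out of 125 shown in Figure 40.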

99 Solution Architecture Overview Implementing the solution Overview Resource types CPU resources The solution described in this document requires a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are general requirements that are independent of any particular implementation except that the requirements grow linearly with the target level of scale. This section describes some considerations for implementing the requirements. The solution defines the hardware requirements for the solution in terms of four basic types of resources: CPU resources Memory resources Network resources Storage resources This section describes the resource types, their use in the solution, and key implementation considerations in a customer environment. The solution defines the number of CPU cores that are required, but not a specific type or configuration. New deployments should use recent revisions of common processor technologies. It is assumed that these perform as well as, or better than, the systems used to validate the solution. In any running system, monitor the utilization of resources and adapt as needed. The Reference virtual machine and required hardware resources in the solution assume that there are four virtual CPUs for each physical processor core (4:1 ratio). Usually, this provides an appropriate level of resources for the hosted virtual machines; however, this ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required. Memory resources Each virtual server in the solution must have 2 GB of memory. In a virtual environment, it is common to provision virtual machines with more memory than is installed on the physical hypervisor server because of budget constraints. Memory over-commitment assumes that each virtual machine does not use all its allocated memory. To oversubscribe the memory usage to some degree makes business sense. The administrator has the responsibility to proactively monitor the oversubscription rate such that it does not shift the bottleneck away from the server and become a burden to the storage subsystem. This solution is validated with statically assigned memory and no over-commitment of memory resources. If a real-world environment uses memory over-commit, monitor the system memory utilization and associated page file I/O activity consistently to ensure that a memory shortfall does not cause unexpected results. 99
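The CPU and memory monitoring recommended above can start from standard Windows performance counters sampled on a Hyper-V host with Get-Counter. The counter paths below are standard Windows and Hyper-V counters; the sampling interval and sample count are illustrative choices rather than validated values, so widen the window to cover a representative period.

# Sample hypervisor CPU, host memory, and page file activity (run on a Hyper-V host).
$counters = @(
    '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
    '\Memory\Available MBytes',
    '\Paging File(_Total)\% Usage'
)

# Ten samples, five seconds apart; extend for a representative baseline.
$samples = Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 10

$samples.CounterSamples |
    Group-Object Path |
    ForEach-Object {
        $avg = ($_.Group | Measure-Object CookedValue -Average).Average
        '{0}: average {1:N1}' -f $_.Name, $avg
    }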

100 Solution Architecture Overview Network resources The solution outlines the minimum needs of the system. If the system requires additional bandwidth, add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports, and can add ports using EMC UltraFlex I/O modules. For reference purposes in the validated environment, each virtual machine generates 25 IOPS with an average size of 8 KB. This means that each virtual machine is generating at least 200 KB/s of traffic on the storage network. For an environment rated for 100 virtual machines, this comes out to a minimum of approximately 20 MB/sec. This is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for: User network traffic Virtual machine migration Administrative and management operations The requirements for each of these depend on the use of the environment. It is not practical to provide precise numbers in this context. However, the network described in the solution should be sufficient to handle average workloads for the above use cases. Regardless of the network traffic requirements, always have at least two physical network connections shared for a logical network so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload. Storage resources The storage building blocks described in this solution contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. Consider a few factors when examining storage sizing. Specifically, the array has a collection of disks assigned to a storage pool. From that storage pool, provision CIFS shares to the Windows cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5. It is acceptable to replace drives with larger capacity drives of the same type and performance characteristics, or with higher performance drives of the same type and capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements. Moreover, it is acceptable to scale up using the building blocks with larger numbers of drives up to the limit defined in the VSPEX private cloud validated maximums. Observe the following best practices: Use the latest best practices guidance from EMC regarding drive placement within the shelf. Refer to Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance. When expanding the capability of a storage pool using the building blocks described in this document, use the same type and size of drive in the pool. Create a new pool to use different drive types and sizes. This prevents uneven performance across the pool. 100

101 Solution Architecture Overview Configure at least one hot spare for every type and size of drive on the system. Configure at least one hot spare for every 30 drives of a given type. In other cases where there is a need to deviate from the proposed number and type of drives specified, or the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system and conforms to EMC published best practices. Implementation summary The requirements in the reference architecture are what EMC considers the minimum set of resources to handle the workloads required based on the stated definition of a reference virtual server. In any customer implementation, the load of a system varies over time as users interact with the system. However, if the customer virtual machines differ significantly from the reference definition, and vary in the same resource group, add more of that resource to the system. Quick assessment Overview An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment. First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of Reference virtual machines required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as listed in Table 17. Table 17. Blank worksheet row Application CPU (virtual CPUs) Memory (GB) IOPS Capacity (GB) Equivalent Reference virtual machines Example application Resource requirements N/A Equivalent Reference virtual machines Fill out the resource requirements for the application. The row requires inputs on four different resources: CPU, memory, IOPS, and capacity. 101

102 Solution Architecture Overview CPU requirements Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all CPUs presented. Use a performance-monitoring tool, such as perfmon in Microsoft Windows to examine the CPU Utilization counter for each CPU. If they are equivalent, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required. In any operation involving performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or 95th percentile value of the resource requirements for planning purposes. Memory requirements Server memory plays a key role in ensuring application functionality and performance. Therefore, each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system and monitor the free memory by using a performance-monitoring tool, such as Microsoft Windows perfmon, to determine memory efficiency. In any operation involving performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or 95th percentile value of the resource requirements for planning purposes. Storage performance requirements I/O operations per second The storage performance requirements for an application are usually the least understood aspect of performance. Three components become important when discussing the I/O performance of a system. The first is the number of requests coming in or IOPS. Equally important is the size of the request, or I/O size -- a request for 4 KB of data is easier and faster than a request for 4 MB of data. That distinction becomes important with the third factor, which is the average I/O response time, or I/O latency. The Reference virtual machine calls for 25 IOPS. To monitor this on an existing system, use a performance-monitoring tool such as Microsoft Windows perfmon. Perfmon provides several counters that can help. The most common are: Note Logical Disk\Disk Transfer/sec Logical Disk\Disk Reads/sec Logical Disk\Disk Writes/sec At the time of publication, Windows perfmon does not provide counters to expose IOPS and latency for CIFS-based VHDX storage. Monitor these areas from the VNX array as discussed in Chapter 7. The Reference virtual machine assumes a 2:1 read: write ratio. Use these counters to determine the total number of IOPS, and the approximate ratio of reads to writes for the customer application. 102
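A hedged example of gathering those counters with Get-Counter and reducing them to planning numbers follows. The counter paths are standard Windows counters; the sample count, the way the 95th percentile is taken (sorting and indexing), and the adjustment for I/O sizes larger than the 8 KB reference size (described under I/O size below) are all illustrative choices, not part of the validated procedure.

# Collect disk counters from the server being assessed, then summarize.
$counters = @(
    '\LogicalDisk(_Total)\Disk Transfers/sec',
    '\LogicalDisk(_Total)\Disk Reads/sec',
    '\LogicalDisk(_Total)\Disk Writes/sec',
    '\LogicalDisk(_Total)\Avg. Disk Bytes/Transfer'
)

# 60 samples, 5 seconds apart (about 5 minutes); extend to cover all operational use cases.
$data = Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 60

function Get-Percentile95 ($values) {
    $sorted = $values | Sort-Object
    $index  = [math]::Ceiling(0.95 * $sorted.Count) - 1
    return $sorted[$index]
}

$iops = $data.CounterSamples |
    Where-Object Path -like '*disk transfers/sec' |
    Select-Object -ExpandProperty CookedValue

'Peak IOPS:            {0:N0}' -f ($iops | Measure-Object -Maximum).Maximum
'95th percentile IOPS: {0:N0}' -f (Get-Percentile95 $iops)

# Approximate read:write ratio from the average read and write rates.
$reads  = ($data.CounterSamples | Where-Object Path -like '*disk reads/sec'  | Measure-Object CookedValue -Average).Average
$writes = ($data.CounterSamples | Where-Object Path -like '*disk writes/sec' | Measure-Object CookedValue -Average).Average
'Approximate read:write ratio: {0:N1}:1' -f ($reads / [math]::Max($writes, 0.001))

# Scale observed IOPS when the average I/O size is larger than the 8 KB reference size.
$avgBytes = ($data.CounterSamples |
    Where-Object Path -like '*avg. disk bytes/transfer' |
    Measure-Object CookedValue -Average).Average
$scale = [math]::Max(1.0, $avgBytes / 8KB)
'Reference-adjusted IOPS: {0:N0}' -f ((Get-Percentile95 $iops) * $scale)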

I/O size
The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The Reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of 2: 4 KB, 8 KB, 16 KB, 32 KB, and so on are common. The performance counter does a simple average; it is common to see 11 KB or 15 KB instead of the common I/O sizes. The Reference virtual machine assumes an 8 KB I/O size. If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the large I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application is using mostly 32 KB I/O requests, use a factor of four (32 KB/8 KB = 4). If that application is doing 100 IOPS at 32 KB, the factor indicates to plan for 400 IOPS, since the Reference virtual machine assumes 8 KB I/O sizes.
I/O latency
The average I/O response time, or I/O latency, is a measurement of how quickly the storage system processes I/O requests. The VSPEX solutions meet a target average I/O latency of 20 ms. The recommendations in this document allow the system to continue to meet that target; however, monitor the system and re-evaluate the resource pool utilization if needed. To monitor I/O latency, use the Logical Disk\Avg. Disk sec/Transfer counter in Microsoft Windows perfmon. If the I/O latency is continuously over the target, re-evaluate the virtual machines in the environment to ensure that these machines do not use more resources than intended.
Storage capacity requirements
The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine the disk space used, and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.
Determining equivalent Reference virtual machines
With all of the resources defined, determine an appropriate value for the equivalent Reference virtual machines line by using the relationships in Table 18. Round all values up to the closest whole number.
Table 18. Reference virtual machine resources
CPU: value for one Reference virtual machine = 1; equivalent Reference virtual machines = resource requirements
Memory: value for one Reference virtual machine = 2; equivalent Reference virtual machines = (resource requirements)/2

IOPS: value for one Reference virtual machine = 25; equivalent Reference virtual machines = (resource requirements)/25
Capacity: value for one Reference virtual machine = 100; equivalent Reference virtual machines = (resource requirements)/100
For example, the point of sale system used in Example 2: Point of sale system requires four CPUs, 16 GB of memory, 200 IOPS, and 200 GB of storage. This translates to four Reference virtual machines of CPU, eight Reference virtual machines of memory, eight Reference virtual machines of IOPS, and two Reference virtual machines of capacity. Table 19 demonstrates how that machine fits into the worksheet row.
Table 19. Example worksheet row
Application: Example application
Resource requirements: CPU (virtual CPUs) 4; Memory (GB) 16; IOPS 200; Capacity (GB) 200
Equivalent Reference virtual machines: CPU 4; Memory 8; IOPS 8; Capacity 2; Equivalent Reference virtual machines 8
Use the highest value in the row to fill in the Equivalent Reference Virtual Machines column. As shown in Figure 41, the example requires eight Reference virtual machines.

105 Solution Architecture Overview Figure 41. Required resource from the Reference virtual machine pool Implementation example stage 1 A customer wants to build a virtual infrastructure to support one custom-built application, one point of sale system, and one web server. He or she computes the sum of the Equivalent Reference Virtual Machines column on the right side of the worksheet as listed in Table 20 to calculate the total number of Reference virtual machines required. The table shows the result of the calculation, along with the value, rounded up to the nearest whole number, to use. Table 20. Example applications stage 1 Application Example application #1: Custom built application Example Application #2: Point of sale system Example Application #3: Web server Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Server resources CPU (virtual CPUs) Memory (GB) Storage resources IOPS Capacity (GB) N/A N/A N/A Reference virtual machines Total equivalent Reference virtual machines

106 Solution Architecture Overview This example requires 14 Reference virtual machines. According to the sizing guidelines, one storage pool with 10 SAS drives and 2 or more Flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with VNX5300, for up to 125 Reference virtual machines. Figure 42. Aggregate resource requirements stage 1 Figure 42 shows six Reference Virtual Machines are available after implementing VNX5300 with 10 SAS drives and two Flash drives. Figure 43. Pool configuration stage 1 Figure 43 shows the pool configuration in this example. Implementation example stage 2 This customer must add a decision support database to this virtual infrastructure. Using the same strategy, calculate the number of Equivalent Reference Virtual Machines required, as shown in Table 21. Table 21. Example applications - stage 2 Application Example application Resource requirements Server resources CPU (virtual CPUs) Memory (GB) Storage resources IOPS Capacity (GB) N/A Reference virtual machines 106

107 #1: Custom built application Example application #2: Point of sale system Example application #3: Web server Example application #4: Decision support database Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Solution Architecture Overview Server resources Storage resources N/A N/A N/A Total equivalent Reference virtual machines 66 This example requires 66 Reference virtual machines. According to the sizing guidelines, one storage pool with 35 SAS drives and 2 or more Flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with VNX5300, for up to 125 Reference virtual machines. Figure 44. Aggregate resource requirements - stage 2 Figure 44 shows four Reference Virtual Machines available after implementing VNX5300 with 35 SAS drives and two Flash drives. 107

Figure 45. Pool configuration stage 2
Figure 45 shows the pool configuration in this example.
Implementation example stage 3
With business growth, the customer must implement a much larger virtual environment to support one custom-built application, one point of sale system, two web servers, and three decision support databases. Using the same strategy, calculate the number of Equivalent Reference Virtual Machines, as shown in Table 22.
Table 22. Example applications - stage 3
Application Example application #1: Custom built application Example application #2: Point of sale system Example application #3: Web server #1 Example application #4: Decision support database #1 Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Server resources CPU (virtual CPUs) Memory (GB) Storage resources IOPS Capacity (GB) N/A N/A N/A N/A Reference virtual machines

109 Example application #5: Web server #2 Example application #6: Decision support database #2 Example application #7: Decision support database #3 Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Solution Architecture Overview Server resources Storage resources N/A N/A N/A Total equivalent Reference virtual machines 174 This example requires 174 Reference virtual machines. According to the sizing guidelines, one storage pool with 85 SAS drives and 4 or more Flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with VNX5500, for up to 250 Reference virtual machines. Figure 46. Aggregate resource requirements for stage 3 109

Figure 46 shows six Reference Virtual Machines are available after implementing VNX5500 with 85 SAS drives and four Flash drives.
Figure 47. Pool configuration stage 3
Figure 47 shows the pool configuration in this example.
Fine-tuning hardware resources
Usually, the process described in Determining equivalent Reference virtual machines determines the recommended hardware size for servers and storage. However, in some cases there is a desire to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this document; however, you can perform additional customization at this point.
Storage resources
In some applications, there is a need to separate application data from other workloads. The storage layouts in the VSPEX architectures put all of the virtual machines in a single resource pool. To achieve workload separation, purchase additional disk drives for the application workload and add them to a dedicated pool. With the method outlined in Determining equivalent Reference virtual machines, it is easy to build a virtual infrastructure scaling from 10 Reference virtual machines to 500 Reference virtual machines with the building blocks described in VSPEX storage building blocks, while keeping in mind the recommended limits of each storage array documented in VSPEX private cloud validated maximums.
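Those recommended limits can be expressed as a simple lookup when choosing a platform for a given total of equivalent Reference virtual machines. The sketch below is a planning aid only; the function name is arbitrary and the maximums are the validated figures from this chapter, illustrated with the totals from the three implementation stages above.

# Pick the smallest validated VSPEX array for a given Reference-VM count.
function Select-VspexArray {
    param([int]$ReferenceVMs)
    if     ($ReferenceVMs -le 125) { 'VNX5300 (validated for up to 125 Reference virtual machines)' }
    elseif ($ReferenceVMs -le 250) { 'VNX5500 (validated for up to 250 Reference virtual machines)' }
    elseif ($ReferenceVMs -le 500) { 'VNX5700 (validated for up to 500 Reference virtual machines)' }
    else                           { 'Beyond the scope of this VSPEX Proven Infrastructure' }
}

Select-VspexArray -ReferenceVMs 14    # stage 1 -> VNX5300
Select-VspexArray -ReferenceVMs 66    # stage 2 -> VNX5300
Select-VspexArray -ReferenceVMs 174   # stage 3 -> VNX5500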

111 Server resources Solution Architecture Overview For some workloads the relationship between server needs and storage needs does not match what is outlined in the Reference virtual machine. Size the server and storage layers separately in this scenario. Figure 48. Customizing server resources To do this, first total the resource requirements for the server components as shown in Table 23. In the Server Component Totals line at the bottom of the worksheet, add up the server resource requirements from the applications in the table. Note When customizing resources in this way, confirm that storage sizing is still appropriate. The Storage Component Totals line at the bottom of Table 23 describes the required amount of storage. Table 23. Application Server resource component totals Server resources CPU (virtual CPUs) Memory (GB) Storage resources IOPS Capacity (GB) Reference virtual machines Example application #1: Custom built application Example application #2: Point of sale system Example application #3: Web server #1 Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines N/A N/A N/A

112 Solution Architecture Overview Example application #4: Decision support database #1 Example application #5: Web server #2 Example application #6: Decision support database #2 Example Application #7: Decision support database #3 Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Resource requirements Equivalent Reference virtual machines Server resources Storage resources N/A N/A N/A N/A Total equivalent Reference virtual machines 174 Server customization Server component totals NA Storage customization Storage component totals NA Storage component equivalent Reference virtual machines NA Total equivalent Reference virtual machines - storage 157 Note Calculate the sum of the Resource Requirements row for each application, not the Equivalent Reference Virtual machines, to get the Server/Storage Component Totals. 112

In this example, the target architecture required 39 virtual CPUs and 227 GB of memory. With the stated assumption of four virtual CPUs per physical processor core, and no memory over-provisioning, this translates to 10 physical processor cores and 227 GB of memory. With these numbers, the solution can be effectively implemented with fewer server and storage resources.
Note Keep high-availability requirements in mind when customizing the resource pool hardware.
Appendix C provides a blank server resource component totals worksheet.
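The translation from worksheet totals to physical resources follows directly from that 4:1 virtual-CPU-to-core assumption. A minimal sketch of the arithmetic, using the stage 3 server component totals as inputs (the variable names are illustrative):

# Translate worksheet server component totals into physical resources
# (4:1 vCPU:core ratio, statically assigned memory, no over-commitment).
$totalvCPUs    = 39
$totalMemoryGB = 227
$vCpuPerCore   = 4

$physicalCores = [math]::Ceiling($totalvCPUs / $vCpuPerCore)
"Physical cores required:  $physicalCores"        # 10
"Physical memory required: $totalMemoryGB GB"     # plus hypervisor overhead and high-availability headroom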


115 Chapter 5 VSPEX Configuration Guidelines This chapter presents the following topics: Overview Pre-deployment tasks Customer configuration data Prepare switches, connect network, and configure switches Prepare and configure storage array Install and configure Hyper-V hosts Install and configure SQL Server database System Center Virtual Machine Manager server deployment Summary

Overview
The deployment process consists of the stages listed in Table 24. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure. Table 24 lists the main stages in the solution deployment process. The table also includes references to chapters that contain relevant procedures.
Table 24. Deployment process overview
1. Verify prerequisites. Reference: Pre-deployment tasks
2. Obtain the deployment tools. Reference: Deployment prerequisites
3. Gather customer configuration data. Reference: Customer configuration data
4. Rack and cable the components. Reference: Refer to the vendor documentation.
5. Configure the switches and networks, connect to the customer network. Reference: Prepare switches, connect network, and configure switches
6. Install and configure the VNX. Reference: Prepare and configure storage array
7. Configure virtual machine storage. Reference: Prepare and configure storage array
8. Install and configure the servers. Reference: Install and configure Hyper-V hosts
9. Set up SQL Server (used by SCVMM). Reference: Install and configure SQL Server database
10. Install and configure SCVMM. Reference: System Center Virtual Machine Manager server deployment

117 VSPEX Configuration Guidelines Pre-deployment tasks Overview The pre-deployment tasks include procedures not directly related to environment installation and configuration, and provide needed results at the time of installation. Examples of pre-deployment tasks are collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite. Table 25. Tasks for pre-deployment Task Description Reference Gather documents Gather the related documents listed in Appendix D. These documents provide detail on setup procedures and deployment best practices for the various components of the solution. References: EMC documentation Gather tools Gather data Gather the required and optional tools for the deployment. Use Table 26 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration data sheet for reference during the deployment process. Table 26: Deployment prerequisites checklist Appendix B Deployment prerequisites Table 26 lists the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 8 and Table 9. Table 26. Deployment prerequisites checklist Requirement Description Reference Hardware Sufficient physical server capacity to host 125 or 250 or 500 virtual servers Windows Server 2012 servers to host virtual infrastructure servers Note The existing infrastructure may already meet this requirement. Switch port capacity and capabilities as required by the virtual server infrastructure Table 8: Solution hardware EMC VNX5300 (125 virtual machines), VNX5500 (250 virtual machines) or VNX5700 (500 virtual machines): Multiprotocol storage array with the required disk layout 117

118 VSPEX Configuration Guidelines Requirement Description Reference Software SCVMM 2012 installation media Microsoft Windows Server 2012 installation media Microsoft Windows Server 2008 R2 installation media (optional for virtual machine guest OS) Microsoft SQL Server 2012 or newer installation media Note The existing infrastructure may already meet this requirement. Licenses Microsoft Windows Server 2008 R2 Standard (or higher) license keys (optional) Microsoft Windows Server 2012 Datacenter Edition license keys Note An existing Microsoft Key Management Server (KMS) may already meet this requirement. Microsoft SQL Server license key Note The existing infrastructure may already meet this requirement. SCVMM 2012 license keys Customer configuration data Assemble information such as IP addresses and hostnames during the planning process to reduce the onsite time. Appendix B provides a table to maintain a record of relevant customer information. Add, record, or modify information as needed during the deployment process. Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information. 118

Prepare switches, connect network, and configure switches
Overview
This section lists the network infrastructure requirements to support this architecture. Table 27 provides a summary of the tasks for switch and network configuration, and references for further information.
Table 27. Tasks for switch and network configuration
Configure infrastructure network: Configure storage array and Windows host infrastructure networking as specified in Prepare and configure storage array and Install and configure Hyper-V hosts.
Configure VLANs: Configure private and public VLANs as required. Reference: your vendor's switch configuration guide.
Complete network cabling: Connect the switch interconnect ports, the VNX ports, and the Windows server ports.
Prepare network switches
For validated levels of performance and high-availability, this solution requires the switching capacity listed in Table 8. Do not use new hardware if existing infrastructure meets the requirements.
Configure infrastructure network
The infrastructure network requires redundant network links for each Windows host, the storage array, the switch interconnect ports, and the switch uplink ports to provide both redundancy and additional network bandwidth. This is a required configuration regardless of whether the network infrastructure for the solution already exists, or you are deploying it alongside other components of the solution. Figure 49 and Figure 50 show sample redundant infrastructures for this solution. The diagrams illustrate the use of redundant switches and links to ensure that there are no single points of failure.
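One common way to provide the redundant host links described above on Windows Server 2012 is native NIC teaming. Whether teaming, MPIO, or SMB Multichannel is the right choice depends on the protocol variant, so treat the following purely as an illustration; the adapter names and team name are placeholders, not values from this solution.

# Illustrative only: create a switch-independent NIC team from two physical adapters.
# Adapter and team names are placeholders; check the protocol-specific guidance
# before teaming storage-facing adapters.
New-NetLbfoTeam -Name 'HostTeam01' `
                -TeamMembers 'NIC1', 'NIC2' `
                -TeamingMode SwitchIndependent `
                -LoadBalancingAlgorithm HyperVPort `
                -Confirm:$false

# Verify the team and its member adapters.
Get-NetLbfoTeam -Name 'HostTeam01' | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members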

In Figure 49, converged switches provide customers with different protocol options (FC, FCoE, or iSCSI) for the storage network. While existing FC switches are acceptable for FC or FCoE, use 10 Gb Ethernet network switches for iSCSI.
Figure 49. Sample Ethernet network architecture - block variant

Figure 50 shows a sample redundant Ethernet infrastructure for file storage. The diagram illustrates the use of redundant switches and links to ensure that no single points of failure exist in the network connectivity.
Figure 50. Sample Ethernet network architecture - file variant
Configure VLANs
Ensure that there are adequate switch ports for the storage array and Windows hosts. Use a minimum of three VLANs for the following purposes:
Virtual machine networking and traffic management (these are customer-facing networks; separate them if required)
Live Migration networking (private network)
Storage networking (iSCSI or SMB, private network)
Configure jumbo frames (iSCSI or SMB only)
Use jumbo frames for the iSCSI and SMB protocols. Set the MTU to 9,000 on the switch ports for the iSCSI or SMB storage network. Consult your switch configuration guide for instructions.
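The MTU must match end to end, so the storage-facing adapters on the Hyper-V hosts need a jumbo frame setting consistent with the switch ports. The commands below are a common way to do this on Windows Server 2012; the adapter name and IP address are placeholders, and the exact registry value (9000 versus 9014, for example) varies by NIC vendor, so verify against your adapter documentation.

# Enable jumbo frames on a storage-facing adapter (adapter name is a placeholder;
# supported *JumboPacket values differ by NIC vendor, commonly 9000 or 9014).
Set-NetAdapterAdvancedProperty -Name 'Storage-NIC1' `
                               -RegistryKeyword '*JumboPacket' `
                               -RegistryValue 9014

# Confirm the setting took effect.
Get-NetAdapterAdvancedProperty -Name 'Storage-NIC1' -RegistryKeyword '*JumboPacket'

# Verify the path end to end with a do-not-fragment ping sized for a 9,000-byte MTU.
ping.exe 192.168.10.50 -f -l 8972   # storage interface IP is a placeholder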

Complete network cabling
Ensure the following:
All servers, storage arrays, switch interconnects, and switch uplinks plug into separate switching infrastructures and have redundant connections.
There is a complete connection to the existing customer network.
Note Ensure that unforeseen interactions do not cause service interruptions when you connect the new equipment to the existing customer network.
Prepare and configure storage array
Implementation instructions and best practices may vary because of the storage network protocol selected for the solution. Each case contains the following steps:
1. Configure the VNX.
2. Provision storage to the hosts.
3. Configure FAST VP.
4. Optionally configure FAST Cache.
The sections below cover the options for each step separately, depending on whether one of the block protocols (FC, FCoE, iSCSI) or the file protocol (CIFS) is selected. For FC, FCoE, or iSCSI, refer to VNX configuration for block protocols. For CIFS, refer to VNX configuration for file protocols.
VNX configuration for block protocols
This section describes how to configure the VNX storage array for host access using block protocols such as FC, FCoE, or iSCSI. In this solution, the VNX provides data storage for Windows hosts.
Table 28. Tasks for VNX configuration for block protocols
Prepare the VNX: Physically install the VNX hardware using the procedures in the product documentation.
Set up the initial VNX configuration: Configure the IP addresses and other key parameters on the VNX.
Provision storage for Hyper-V hosts: Create the storage areas required for the solution.
References: VNX5300 Unified Installation Guide, VNX5500 Unified Installation Guide, VNX5700 Unified Installation Guide, Unisphere System Getting Started Guide, your vendor's switch configuration guide.

123 Prepare the VNX VSPEX Configuration Guidelines TheVNX5300, VNX5500, or VNX5700 Unified Installation Guide provides instructions to assemble, rack, cable, and power up the VNX. There are no specific setup steps for this solution. Set up the initial VNX configuration After the initial VNX setup, configure key information about the existing environment to allow the storage array to communicate with the other devices in the environment. Configure the following common items in accordance with your IT data center policies and existing infrastructure information: DNS NTP Storage network interfaces For data connections using FC or FCoE Connect one or more servers to the VNX storage system, either directly or through qualified FC or FCoE switches. Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions. For data connections using iscsi Connect one or more servers to the VNX storage system, either directly or through qualified IP switches. Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions. Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information: 1. Set up a storage network IP address: Logically isolate the storage network from the other networks in the solution, as described in Chapter 3. This ensures that other network traffic does not impact traffic between the hosts and storage. 2. Enable jumbo frames on the VNX iscsi ports: Use Jumbo frames for iscsi networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment: a. In Unisphere, select Settings > Network > Settings for Block. b. Select the appropriate iscsi network interface. c. Click Properties. d. Set the MTU size to 9,000. e. Click OK to apply the changes. The reference documents listed in Table 28 provide more information on how to configure the VNX platform. Storage configuration guidelines provide more information on the disk layout. 123
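For the iSCSI variant, each Windows host connects to the VNX iSCSI ports with the in-box initiator. The cmdlets below are the standard Windows Server 2012 iSCSI cmdlets, but the portal address is a placeholder and the full connection procedure, including PowerPath or MPIO configuration, is covered in the EMC Host Connectivity Guide for Windows; treat this as an orientation sketch only.

# Orientation sketch: connect a Hyper-V host to a VNX iSCSI target (portal IP is a placeholder).
Start-Service -Name MSiSCSI
Set-Service   -Name MSiSCSI -StartupType Automatic

New-IscsiTargetPortal -TargetPortalAddress '192.168.10.50'

# Discover and log in to the targets presented by the array; persist across reboots.
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Review the active sessions; the LUNs appear as disks once the host is added to a VNX storage group.
Get-IscsiConnection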

124 VSPEX Configuration Guidelines Provision storage for Hyper-V hosts This section describes provisioning block storage for Hyper-V hosts. To provision file storage, refer to VNX configuration for file protocols. Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers: 1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4. Table 29. a. Log in to Unisphere. b. Select the array for this solution. c. Select Storage > Storage Configuration > Storage Pools. d. Click the Pools tab. e. Click Create. Note The pool does not use system drives for additional storage. Storage allocation table for block Configuration Number of pools Number of 15K rpm SAS drives per pool Number of Flash drives per Pool Number of LUNs per pool LUN size (TB) 125 virtual machines 2 Pool 1 45 Pool (4 total) 2 (4 total) Pool 1 5 Pool virtual machines 3 Pool 1 45 Pool 2 45 Pool (6 total) 2 (6 total) Pool 1 5 Pool 2 5 Pool virtual machines 5 Pool 1 45 Pool 2 45 Pool 3 45 Pool 4 45 Pool (10 total) 2 (10 total) Pool 1 5 Pool 2 5 Pool 3 5 Pool 4 5 Pool 5 5 Note Each virtual machine occupies 102 GB in this solution, with 100 GB for the OS and user space, and a 2 GB swap file. Create the hot spare disks at this point. Refer to the appropriate VNX installation guide for additional information. Figure 31 depicts the target storage layout for 125 virtual machines. Figure 32 depicts the target storage layout for 250 virtual machines. Figure 33 depicts the target storage layout for 500 virtual machines. 124

2. Use the pools created in step 1 to provision thin LUNs:
a. Select Storage > LUNs.
b. Click Create.
c. Select the pool created in step 1. Always create two thin LUNs in one physical storage pool. User Capacity depends on the specific number of virtual machines. Refer to Table 29 for more information.
3. Create a storage group, and add LUNs and Hyper-V servers:
a. Select Hosts > Storage Groups.
b. Click Create and input a name for the new storage group.
c. Select the created storage group.
d. Click LUNs. In the Available LUNs panel, select all the LUNs created in the previous steps. The Selected LUNs dialog appears.
e. Configure and add the Hyper-V hosts to the storage group.
VNX configuration for file protocols
This section describes file storage provisioning for Hyper-V hosts.
Table 30. Tasks for storage configuration for file protocols
Prepare the VNX: Physically install the VNX hardware with the procedures in the product documentation.
Set up the initial VNX configuration: Configure the IP addresses and other key parameters on the VNX.
Create a network interface: Configure the IP address and network interface information for the CIFS server.
Create a CIFS server: Create the CIFS server instance to publish the storage.
Create a storage pool for file: Create the block pool structure and LUNs to contain the file system.
Create the file systems: Establish the SMB shared file system.
Create the SMB file share: Attach the file system to the CIFS server to create an SMB share for Hyper-V storage.
References: VNX5300 Unified Installation Guide, VNX5500 Unified Installation Guide, VNX5700 Unified Installation Guide, Unisphere System Getting Started Guide, your vendor's switch configuration guide.
Prepare the VNX
The VNX5300, VNX5500, or VNX5700 Unified Installation Guide provides instructions to assemble, rack, cable, and power up the VNX. There are no specific setup steps for this solution.
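Returning to the block procedure above: once the thin LUNs are in a storage group with the Hyper-V hosts, they surface as raw disks in Windows and can be brought online, formatted, and added as Cluster Shared Volumes. The sketch below uses standard Windows Server 2012 storage and failover clustering cmdlets, but the disk number, volume label, and cluster assumptions are placeholders; adjust it to the disks actually presented in your environment.

# Placeholder sketch: prepare a newly presented VNX LUN and add it as a CSV (run on a cluster node).
# Identify the raw (uninitialized) disks presented from the array.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Format-Table Number, Size, FriendlyName

$diskNumber = 4   # placeholder: use the number of the LUN identified above

Initialize-Disk -Number $diskNumber -PartitionStyle GPT
New-Partition   -DiskNumber $diskNumber -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'CSV01' -Confirm:$false

# Add the disk to the cluster and convert it to a Cluster Shared Volume.
$clusterDisk = Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name $clusterDisk.Name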

Set up the initial VNX configuration
After the initial VNX setup, configure key information about the existing environment to allow the storage array to communicate with the other devices in the environment. Ensure one or more servers connect to the VNX storage system, either directly or through qualified IP switches. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:
DNS
NTP
Storage network interfaces
Storage network IP address
CIFS services and Active Directory domain membership
Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions.
Enable jumbo frames on the VNX storage network interfaces
Use jumbo frames for storage networks to permit greater network bandwidth. Apply the MTU size specified below across all the network interfaces in the environment. Complete the following steps to enable jumbo frames:
1. In Unisphere, select Settings > Network > Settings for File.
2. Select the appropriate network interface from the Interfaces tab.
3. Click Properties.
4. Set the MTU size to 9,000.
5. Click OK to apply the changes.
The reference documents listed in Table 28 provide more information on how to configure the VNX platform. Storage configuration guidelines provide more information on the disk layout.
Create a network interface
A network interface maps to a CIFS server. CIFS servers provide access to file shares over the network. Complete the following steps to create a network interface:
1. Log in to the VNX.
2. In Unisphere, select Settings > Network > Settings for File.
3. On the Interfaces tab, click Create.

127 VSPEX Configuration Guidelines Figure 51. Network Settings for File dialog box In the Create Network Interface wizard, complete the following steps: 1. Select the Data Mover which will provide access to the file share. 2. Select the device name where the network interface will reside. Note Run the following command as nasadmin on the Control Station to ensure the selected device has a link connected: > server_sysconfig <datamovername> -pci This command lists the link status (UP or DOWN) for all devices on the specified Data Mover. 3. Type an IP address for the interface. 4. Type a Name for the Interface. 5. Type the netmask for the interface. The Broadcast Address field populates automatically after you provide the IP address and netmask. 6. Set the MTU size for the interface to 9,000. Note Ensure that all devices on the network (switch, servers, and so on) have the same MTU size. 7. If required, specify the VLAN ID. 8. Click OK. 127

Figure 52. Create Interface dialog box

Create a CIFS server
A CIFS server provides access to the CIFS (SMB) file share.
1. In Unisphere, select Storage > Shared Folders > CIFS > CIFS Servers.
Note: A CIFS server must exist before creating an SMB 3.0 file share.
2. Click Create. The Create CIFS Server window appears.
From the Create CIFS Server window, complete the following steps:
3. Select the Data Mover on which to create the CIFS server.
4. Set the server type as Active Directory Domain.
5. Type a Computer Name for the server. The computer name must be unique within Active Directory. Unisphere automatically assigns the NetBIOS name to the computer name.
6. Type the Domain Name for the CIFS server to join.
7. Select Join the Domain.
8. Specify the domain credentials:
a. Type the Domain Admin User Name.
b. Type the Domain Admin Password.
9. Select Enable Local Users to allow the creation of a limited number of local user accounts on the CIFS server:
a. Set the Local Admin Password.
b. Confirm the Local Admin Password.
10. Select the network interface created previously to allow access to the CIFS server.
11. Click OK. The created CIFS server appears under the CIFS Server tab.

129 VSPEX Configuration Guidelines Figure 53. Create CIFS Server dialog box Create storage pools for file Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers: 1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array recommended maximums as described in Chapter 4. a. Log in to Unisphere. b. Select the array for this solution. c. Select Storage > Storage Configuration > Storage Pools > Pools. d. Click Create. Note The pool does not use system drives for additional storage. 129

Table 31. Storage allocation table for file
125 virtual machines: 2 storage pools; 45 15K rpm SAS drives per storage pool; 2 Flash drives per storage pool (4 total); 20 LUNs per storage pool; 2 file systems per storage pool; 5 TB per file system.
250 virtual machines: 3 storage pools; 45 15K rpm SAS drives per storage pool; 2 Flash drives per storage pool (6 total); 20 LUNs per storage pool; 2 file systems per storage pool; 5 TB per file system.
500 virtual machines: 5 storage pools; 45 15K rpm SAS drives per storage pool; 2 Flash drives per storage pool (10 total); 20 LUNs per storage pool; 2 file systems per storage pool; 5 TB per file system.
Note: Each virtual machine occupies 102 GB in this solution, with 100 GB for the OS and user space, and a 2 GB swap file.

Create the hot spare disks at this point. Refer to the appropriate VNX installation guide for additional information.
Figure 31 depicts the target storage layout for 125 virtual machines.
Figure 32 depicts the target storage layout for 250 virtual machines.
Figure 33 depicts the target storage layout for 500 virtual machines.

2. Provision LUNs on the pool created in step 1:
a. Select Storage > LUNs.
b. Click Create.
c. Select the pool created in step 1. For User Capacity, select MAX. The number of LUNs to create depends on the number of disks in the pool. Refer to Table 31 for details on the number of LUNs needed in each pool.
3. Connect the LUNs to the Data Mover for file access:
a. Click Hosts > Storage Groups.
b. Select ~filestorage.
c. Click Connect LUNs.
d. In the Available LUNs panel, select all the LUNs created in the previous steps. The Selected LUNs panel appears.
Use the new storage pool to create file systems.
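To illustrate how the 102 GB per virtual machine figure in the note above translates into aggregate capacity at each scale point, the following PowerShell sketch computes the total space and the per-pool share. It is an arithmetic illustration only; it does not account for RAID overhead, hot spares, or free-space buffers, so use Table 31 and the referenced layout figures for the actual design.

```powershell
# Arithmetic illustration only: aggregate capacity implied by 102 GB per virtual machine.
# RAID overhead, hot spares, and growth buffers are intentionally ignored here.
$gbPerVm = 102

# Virtual machines mapped to the number of storage pools listed in Table 31.
$scalePoints = @{ 125 = 2; 250 = 3; 500 = 5 }

foreach ($vms in ($scalePoints.Keys | Sort-Object)) {
    $pools     = $scalePoints[$vms]
    $totalTb   = [math]::Round(($vms * $gbPerVm) / 1024, 1)
    $perPoolTb = [math]::Round($totalTb / $pools, 1)
    "{0} VMs: ~{1} TB total across {2} pools (~{3} TB per pool)" -f $vms, $totalTb, $pools, $perPoolTb
}
```

Because the file systems are thin provisioned, the provisioned file system capacity in Table 31 can exceed the immediate space these figures imply.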

Create file systems
To create an SMB file share, complete the following tasks:
1. Create a storage pool and a network interface.
2. Create a file system.
3. Export an SMB file share from the file system.
If no storage pools or interfaces exist, follow the steps in Create a network interface and Create storage pools for file to create a storage pool and a network interface.
Create two thin file systems from each storage pool for file. Refer to Table 31 for details on the number of file systems.
Complete the following steps to create VNX file systems for SMB file shares:
1. Log in to Unisphere.
2. Select Storage > Storage Configuration > File Systems.
3. Click Create. The File System Creation wizard appears.
4. Specify the file system details:
a. Select Storage Pool.
b. Type a File System Name.
c. Select a Storage Pool to contain the file system.
d. Select the Storage Capacity of the file system. Refer to Table 31 for the detailed storage capacity.
e. Select Thin Enabled.
f. Select the Data Mover (R/W) to own the file system.
Note: The selected Data Mover must have an interface defined on it.
g. Click OK.

132 VSPEX Configuration Guidelines Figure 54. Create File System dialog box The new file system appears on the File Systems tab. 1. Click Mounts. 2. Select the created file system and then click Properties. 3. Select Set Advanced Options. 4. Select Direct Writes Enabled. 5. Select CIFS Sync Writes Enabled. 6. Click OK. 132

Figure 55. File System Properties dialog box

Create the SMB file share
After creating the file system, create the SMB file share. To create the share, complete the following steps:
1. From the VNX dashboard, hover over the Storage tab.
2. Select Shared Folders > CIFS.
3. From the Shares page, click Create. The Create CIFS Share window opens.
4. Select the Data Mover on which to create the share (the same Data Mover that owns the CIFS server).
5. Specify a name for the share.
6. Specify the file system for the share. Leave the default path as is.
7. Select the CIFS server to provide access to the share.
8. Optionally, specify a user limit or any comments about the share.

134 VSPEX Configuration Guidelines Figure 56. Create File Share dialog box FAST VP configuration This procedure applies to both file and block storage implementations. Complete the following steps to configure FAST VP. Assign two Flash drives in each block-based storage pool: 1. In Unisphere, select the storage pool to configure for FAST VP. 2. Click Properties for a specific storage pool to open the Storage Pool Properties dialog. Figure 57 shows the tiering information for a specific FAST pool. Note The Tier Status area shows FAST relocation information specific to the selected pool. 3. Select Automatic from the Auto-Tiering list box. The Tier Details panel shows the exact data distribution. 134

135 VSPEX Configuration Guidelines Figure 57. Storage Pool Properties dialog box You can also connect to the array-wide Relocation Schedule by using the button in the top right corner to access the Manage Auto-Tiering window as shown in Figure 58. Figure 58. Manage Auto-Tiering dialog box 135

From this status dialog, users can control the Data Relocation Rate. The default rate is Medium, which minimizes the impact on host I/O.
Note: FAST is a completely automated tool that provides the ability to create a relocation schedule. Schedule the relocations during off-hours to minimize any potential performance impact.

FAST Cache configuration
Optionally, configure FAST Cache. To configure FAST Cache on the storage pools for this solution, complete the following steps:
Note: Use the Flash drives listed in Sizing guidelines for FAST VP configurations, as described in FAST VP configuration. FAST Cache is an optional component of this solution that provides improved performance, as outlined in Chapter 3.
1. Configure Flash drives as FAST Cache:
a. Click Properties from the Unisphere dashboard, or Manage Cache in the left-hand pane of the Unisphere interface, to access the Storage System Properties window as shown in Figure 59.
b. Click the FAST Cache tab to view FAST Cache information.
Figure 59. Storage System Properties dialog box
c. Click Create to open the Create FAST Cache window as shown in Figure 60.

The RAID Type field displays RAID 1 when the FAST Cache is created. This window also provides the option to select the drives for the FAST Cache. The bottom of the screen shows the Flash drives used to create the FAST Cache. Select Manual to choose the drives manually.
d. Refer to Storage configuration guidelines to determine the number of Flash drives required in this solution.
Note: If a sufficient number of Flash drives are not available, VNX displays an error message and does not create the FAST Cache.
Figure 60. Create FAST Cache dialog box
2. Enable FAST Cache in the storage pool. If a LUN is created in a storage pool, you can only configure FAST Cache for that LUN at the storage pool level; all the LUNs created in the storage pool have FAST Cache either enabled or disabled. Configure the LUNs from the Advanced tab of the Create Storage Pool window shown in Figure 61. After installation, FAST Cache is enabled by default at storage pool creation.

Figure 61. Advanced tab in the Create Storage Pool dialog
If the storage pool already exists, use the Advanced tab of the Storage Pool Properties window to configure FAST Cache, as shown in Figure 62.
Figure 62. Advanced tab in the Storage Pool Properties dialog
Note: The VNX FAST Cache feature does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours; array performance gradually improves during this time.
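For environments that prefer scripted configuration, FAST Cache can also be created and enabled per pool with the VNX block CLI invoked from a PowerShell session. The following is a minimal sketch only; it assumes naviseccli is installed on an administration host, the SP address, credentials, disk IDs, and pool ID are placeholders, and the cache and storagepool switches should be verified against the CLI reference for your VNX OE release before use.

```powershell
# Hypothetical values - replace with your SP address, credentials, disks, and pool ID.
$sp    = "10.0.0.50"
$user  = "sysadmin"
$pass  = "sysadmin"
$scope = 0

# Create FAST Cache from two Flash drives (RAID 1), then enable it on pool 1.
# Disk IDs use the Bus_Enclosure_Slot convention; 0_0_4 and 0_0_5 are examples only.
& naviseccli -h $sp -User $user -Password $pass -Scope $scope `
    cache -fast -create -disks 0_0_4 0_0_5 -mode rw -rtype r_1

& naviseccli -h $sp -User $user -Password $pass -Scope $scope `
    storagepool -modify -id 1 -fastcache on

# Confirm the FAST Cache state and the per-pool setting.
& naviseccli -h $sp -User $user -Password $pass -Scope $scope cache -fast -info
& naviseccli -h $sp -User $user -Password $pass -Scope $scope storagepool -list -fastcache
```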

Install and configure Hyper-V hosts

Overview
This section provides the requirements for the installation and configuration of the Windows hosts and infrastructure servers to support the architecture. Table 32 describes the required tasks.

Table 32. Tasks for server installation
Install Windows hosts: Install Windows Server 2012 on the physical servers for the solution.
Install Hyper-V and configure Failover Clustering: 1. Add the Hyper-V server role. 2. Add the Failover Clustering feature. 3. Create and configure the Hyper-V cluster.
Configure Windows hosts networking: Configure Windows hosts networking, including NIC teaming and the Virtual Switch network.
Install PowerPath on Windows servers: Install and configure PowerPath to manage multipathing for VNX LUNs. Reference: PowerPath and PowerPath/VE for Windows Installation and Administration Guide.
Plan virtual machine memory allocations: Ensure that Windows Hyper-V guest memory management features are configured properly for the environment.

Install Windows hosts
Follow Microsoft best practices to install Windows Server 2012 and the Hyper-V role on the physical servers for this solution.

Install Hyper-V and configure Failover Clustering
To install and configure Failover Clustering, complete the following steps:
1. Install and patch Windows Server 2012 on each Windows host.
2. Configure the Hyper-V role and the Failover Clustering feature.
3. Install the HBA drivers, or configure iSCSI initiators, on each Windows host. For details, refer to the EMC Host Connectivity Guide for Windows.
Table 32 provides the steps and references to accomplish the configuration tasks. A scripted sketch of the role installation and cluster creation follows.
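The role installation and cluster creation can also be performed with Windows PowerShell. The following is a minimal sketch, not the validated deployment procedure; the host names, cluster name, and cluster IP address are hypothetical placeholders, and the cluster validation report should always be reviewed before the cluster is created.

```powershell
# Run from a management host with the Failover Clustering PowerShell tools installed.
# Host names, cluster name, and IP address below are placeholders for illustration.
$hosts = "HyperV01", "HyperV02", "HyperV03", "HyperV04"

# Install the Hyper-V role and the Failover Clustering feature on each host.
foreach ($h in $hosts) {
    Install-WindowsFeature -ComputerName $h `
        -Name Hyper-V, Failover-Clustering `
        -IncludeManagementTools -Restart
}

# Validate the configuration, then create the Hyper-V cluster.
Test-Cluster -Node $hosts
New-Cluster -Name "VSPEX-HVCL01" -Node $hosts -StaticAddress "192.168.1.50"
```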

Configure Windows host networking
To ensure performance and availability, the following network interface cards (NICs) are required:
At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary).
At least two 10 GbE NICs for the storage network.
At least one NIC for Live Migration.
Note: Enable jumbo frames for NICs that transfer iSCSI or SMB data. Set the MTU to 9,000. Consult the NIC configuration guide for instructions. A scripted sketch of this host network configuration appears at the end of this section.

Install PowerPath on Windows servers
Install PowerPath on Windows servers to improve and enhance the performance and capabilities of the VNX storage array. For the detailed installation steps, refer to the PowerPath and PowerPath/VE for Windows Installation and Administration Guide.

Plan virtual machine memory allocations
Server capacity serves two purposes in the solution:
Supports the new virtualized server infrastructure.
Supports the required infrastructure services such as authentication/authorization, DNS, and database.
For information on minimum infrastructure service hosting requirements, refer to Table 8. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.

Memory configuration
Take care to properly size and configure the server memory for this solution. This section provides an overview of memory management in a Hyper-V environment.
Memory virtualization techniques, such as Hyper-V Dynamic Memory, enable the hypervisor to abstract physical host memory, provide resource isolation across multiple virtual machines, and avoid resource exhaustion. With advanced processors (such as Intel processors with EPT support), this abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself.
There are multiple techniques available within the hypervisor to maximize the use of system resources such as memory. Do not substantially overcommit resources, because this can lead to poor system performance. The exact implications of memory overcommitment in a real-world environment are difficult to predict; performance degradation due to resource exhaustion increases with the amount of memory overcommitted.
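The following PowerShell sketch shows one way to build the host networking described at the beginning of this section: a NIC team for virtual machine and management traffic behind a Hyper-V virtual switch, and jumbo frames on the dedicated storage NICs. The adapter, team, and switch names are placeholders, and the advanced-property display name for jumbo frames varies by NIC vendor (commonly "Jumbo Packet"), so confirm it with Get-NetAdapterAdvancedProperty on your hardware before applying the change.

```powershell
# Adapter, team, and switch names are examples only - adjust to your hardware.
# Team two adapters for virtual machine and management traffic.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Bind a Hyper-V virtual switch to the team and keep a management OS vNIC.
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $true

# Enable jumbo frames on the dedicated 10 GbE storage adapters (iSCSI or SMB).
# The DisplayName/value pair depends on the NIC driver; verify before applying.
Get-NetAdapterAdvancedProperty -Name "Storage1", "Storage2" -DisplayName "Jumbo Packet" |
    Set-NetAdapterAdvancedProperty -RegistryValue 9014
```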

Install and configure SQL Server database

Overview
Most customers use a management tool to provision and manage their server virtualization solution, even though one is not required. The management tool requires a database back end. SCVMM uses SQL Server 2012 as the database platform. This section describes how to set up and configure a SQL Server database for the solution. Table 33 lists the detailed setup tasks.

Table 33. Tasks for SQL Server database setup
Create a virtual machine for Microsoft SQL Server: Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements.
Install Microsoft Windows on the virtual machine: Install Microsoft Windows Server 2012 Datacenter Edition on the virtual machine.
Install Microsoft SQL Server: Install Microsoft SQL Server on the designated virtual machine.
Configure a SQL Server for SCVMM: Configure a remote SQL Server instance for SCVMM.

Create a virtual machine for Microsoft SQL Server
Create the virtual machine with enough computing resources on one of the Windows servers designated for infrastructure virtual machines. Use the storage designated for the shared infrastructure.
Note: The customer environment may already contain a SQL Server for this role. In that case, refer to Configure a SQL Server for SCVMM.

Install Microsoft Windows on the virtual machine
The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.

Install SQL Server
Use the SQL Server installation media to install SQL Server on the virtual machine. The Microsoft TechNet website provides information on how to install SQL Server. One of the installable components in the SQL Server installer is SQL Server Management Studio (SSMS). Install this component on the SQL Server directly, and on an administrator console. To change the default path for storing data files, perform the following steps:

1. Right-click the server object in SSMS and select Database Properties. The Properties window appears.
2. Change the default data and log directories for new databases created on the server.

Configure a SQL Server for SCVMM
To use SCVMM in this solution, configure the SQL Server for remote connection. The requirements and steps to configure it correctly are available in the article Configuring a Remote Instance of SQL Server for VMM. Refer to the list of documents in Appendix D for more information.
Note: Do not use the Microsoft SQL Server Express-based database option for this solution.
Create individual login accounts for each service that accesses a database on the SQL Server.

System Center Virtual Machine Manager server deployment

Overview
This section provides information on how to configure SCVMM. Complete the tasks in Table 34.

Table 34. Tasks for SCVMM configuration
Create the SCVMM host virtual machine: Create a virtual machine for the SCVMM server. Reference: Install and configure Hyper-V hosts.
Install the SCVMM guest OS: Install Windows Server 2012 Datacenter Edition on the SCVMM host virtual machine.
Install the SCVMM server: Install an SCVMM server.
Install the SCVMM Management Console: Install an SCVMM Management Console.
Install the SCVMM agent locally on the hosts: Install an SCVMM agent locally on the hosts that SCVMM manages.
Add a Hyper-V cluster into SCVMM: Add the Hyper-V cluster into SCVMM.
Add file share storage in SCVMM (file variant only): Add SMB file share storage to a Hyper-V cluster in SCVMM.
Create a virtual machine in SCVMM: Create a virtual machine in SCVMM.

Create a template virtual machine: Create a template virtual machine from the existing virtual machine. Create the hardware profile and Guest Operating System profile at this time.
Deploy virtual machines from the template virtual machine: Deploy the virtual machines from the template virtual machine.

Create a SCVMM host virtual machine
To deploy the SCVMM server as a virtual machine on a Hyper-V server that is installed as part of this solution, connect directly to an infrastructure Hyper-V server by using Hyper-V Manager. Create a virtual machine on the Microsoft Hyper-V server with the customer guest OS configuration, using an infrastructure server datastore presented from the storage array. The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines that SCVMM must manage.

Install the SCVMM guest OS
Install the guest OS on the SCVMM host virtual machine. Install the required Windows Server version on the virtual machine and select appropriate network, time, and authentication settings.

Install the SCVMM server
Set up the VMM database and the default library server, and then install the SCVMM server. Refer to the article Installing the VMM Server to install the SCVMM server.

Install the SCVMM Management Console
The SCVMM Management Console is a client tool used to manage the SCVMM server. Install the VMM Management Console on the same computer as the VMM server. Refer to the article Installing the VMM Administrator Console to install the SCVMM Management Console.

Install the SCVMM agent locally on a host
If the hosts must be managed on a perimeter network, install a VMM agent locally on the host before adding it to VMM. Optionally, install a VMM agent locally on a host in a domain before adding the host to VMM. Refer to the article Installing a VMM Agent Locally on a Host to install a VMM agent locally on a host.

Add a Hyper-V cluster into SCVMM
Add the deployed Microsoft Hyper-V cluster to SCVMM, which then manages the Hyper-V cluster. Refer to the article How to Add a Host Cluster to VMM to add the Hyper-V cluster.

Add file share storage to SCVMM (file variant only)
To add file share storage to SCVMM, complete the following steps (a scripted sketch of this and the other SCVMM tasks follows the chapter summary):
1. Open the VMs and Services workspace.
2. In the VMs and Services pane, right-click the Hyper-V cluster name.
3. Click Properties.
4. In the Properties window, click File Share Storage.
5. Click Add, and then add the file share storage to SCVMM.

Create a virtual machine in SCVMM
Create a virtual machine in SCVMM to use as a virtual machine template. Install the guest OS on the virtual machine, then install the software, and change the Windows and application settings. Refer to the article How to Create a Virtual Machine with a Blank Virtual Hard Disk to create a virtual machine.

Create a template virtual machine
Converting a virtual machine into a template removes the virtual machine. Back up the virtual machine, because it may be destroyed during template creation. Create a hardware profile and a Guest Operating System profile when creating a template. Use the profiles to deploy the virtual machines. Refer to the article How to Create a Template from a Virtual Machine to create the template.

Deploy virtual machines from the template virtual machine
Refer to the article How to Deploy a Virtual Machine to deploy the virtual machines. The deployment wizard allows you to save the PowerShell scripts and reuse them to deploy other virtual machines with the same configuration.

Summary
This chapter presents the required steps to deploy and configure the various aspects of the VSPEX solution, including the physical and logical components. At this point, the VSPEX solution is fully functional.
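As referenced in the file share storage procedure above, the SCVMM tasks can also be scripted with the VMM PowerShell module. The sketch below is illustrative only; it assumes a VMM 2012 SP1 command shell, the cluster and share names are hypothetical placeholders, and the cmdlet parameters should be confirmed against the VMM cmdlet reference for your release.

```powershell
# Run from the VMM command shell. The cluster and share names are placeholders.

# Locate the Hyper-V cluster that was added to VMM.
$cluster = Get-SCVMHostCluster -Name "VSPEX-HVCL01"

# Register the SMB 3.0 file share with the cluster (file variant only), so that
# the cluster nodes can place virtual machine files on the VNX share.
Register-SCStorageFileShare -FileSharePath "\\cifsserver\vspexshare01" -VMHostCluster $cluster

# Confirm that VMM now lists the share.
Get-SCStorageFileShare | Select-Object Name, SharePath
```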

Chapter 6 Validating the Solution
This chapter presents the following topics:
Overview
Post-install checklist
Deploy and test a single virtual server
Verify the redundancy of the solution components

146 Validating the Solution Overview This chapter provides a list of items to review after configuring the solution. The goal of this chapter is to verify the configuration and functionality of specific aspects of the solution, and ensure the configuration meets core availability requirements. Complete the tasks listed in Table 35. Table 35. Tasks for testing the installation Task Description Reference Post-install checklist Verify that sufficient virtual ports exist on each Hyper-V host virtual switch. ra/archive/2011/03/27/ aspx Deploy and test a single virtual server Verify redundancy of the solution components Verify that each Hyper-V host has access to the required datastores and VLANs. Verify that the Live Migration interfaces are configured correctly on all Hyper-V hosts. Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. Perform a reboot for each storage processor in turn, and ensure that the storage connectivity is maintained. Disable each of the redundant switches in turn and verify that the Hyper-V host, virtual machine, and storage array connectivity remains intact. On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. Using a VNXe System with Microsoft Windows Hyper-V Ed/NorthAmerica/2012/VIR310 N/A Vendor documentation /contents/articles/151.hyper-v-virtualnetworking-survival-guide-en-us.aspx

Post-install checklist
The following configuration items are critical to the functionality of the solution. On each Windows server, verify the following items prior to deployment into production:
The VLAN for virtual machine networking is configured correctly.
The storage networking is configured correctly.
Each server can access the required Cluster Shared Volumes/Hyper-V SMB shares.
A network interface is configured correctly for Live Migration.

Deploy and test a single virtual server
Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.

Verify the redundancy of the solution components
To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures.
On a Hyper-V host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.

Block environments
Complete the following steps to perform a reboot of each VNX storage processor in turn and verify that connectivity to the LUNs is maintained throughout each reboot:
1. Log in to the Control Station with administrator credentials.
2. Navigate to /nas/sbin.
3. Reboot SP A by using the ./navicli -h spa rebootsp command.
4. During the reboot cycle, check for the presence of datastores on the Windows hosts (a PowerShell sketch for this check follows this procedure).
5. When the cycle completes, reboot SP B by using ./navicli -h spb rebootsp.
6. Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
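The post-install checklist items and the datastore check in step 4 can be spot-checked from any cluster node with the Hyper-V and Failover Clustering PowerShell modules. This is a minimal, read-only sketch; the share name is a placeholder for this environment.

```powershell
# Read-only checks; names are placeholders for this environment.

# Virtual switch and VLAN configuration for virtual machine networking.
Get-VMSwitch
Get-VMNetworkAdapterVlan -ManagementOS

# Live Migration configuration on the host.
Get-VMHost | Select-Object VirtualMachineMigrationEnabled
Get-VMMigrationNetwork

# Cluster Shared Volumes (block variant) should stay Online during each SP reboot.
Get-ClusterSharedVolume | Select-Object Name, State

# SMB share access (file variant).
Test-Path "\\cifsserver\vspexshare01"
```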

File environments
Perform a failover of each VNX Data Mover in turn and verify that connectivity to SMB shares is maintained and that connections to CIFS file systems are re-established. For simplicity, use the following approach for each Data Mover (a PowerShell sketch for checking SMB connectivity during the failover follows this list):
Note: Optionally, reboot the Data Movers through the Unisphere interface.
1. From the Control Station prompt, run the server_cpu <movername> -reboot command, where <movername> is the name of the Data Mover.
2. To verify that network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each switching infrastructure is disabled, verify that all the components of the solution maintain connectivity to each other and to any existing client infrastructure.
3. Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
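One simple way to watch SMB connectivity from a Hyper-V host while a Data Mover fails over is to poll the share and the active SMB sessions. This sketch is illustrative only; the CIFS server and share names are placeholders, and a short dip during failover is expected while connections are re-established.

```powershell
# Poll the VNX SMB share every five seconds during the Data Mover failover.
# The CIFS server and share names are placeholders.
$share = "\\cifsserver\vspexshare01"

while ($true) {
    $reachable = Test-Path $share
    $sessions  = Get-SmbConnection -ServerName "cifsserver" -ErrorAction SilentlyContinue
    "{0}  reachable={1}  activeSmbConnections={2}" -f (Get-Date -Format "HH:mm:ss"), $reachable, @($sessions).Count
    Start-Sleep -Seconds 5
}
```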

Chapter 7 System Monitoring
This chapter presents the following topics:
Overview
Key areas to monitor
VNX resources monitoring guidelines

Overview
System monitoring of the VSPEX environment is no different from monitoring any core IT system; it is a relevant and core component of administration. The monitoring levels involved in a highly virtualized infrastructure such as a VSPEX environment are somewhat more complex than in a purely physical infrastructure, as the interactions and interrelationships between various components can be subtle and nuanced. However, those experienced in administering virtualized environments should be readily familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and workflows.
Several business needs drive proactive, consistent monitoring of the environment, including:
Stable, predictable performance
Sizing and capacity needs
Availability and accessibility
Elasticity: the dynamic addition, subtraction, and modification of workloads
Data protection
If self-service provisioning is enabled in the environment, the ability to monitor the system is more critical because clients can generate virtual machines and workloads dynamically, which can adversely affect the entire system.
This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are listed at the end of this chapter.

Key areas to monitor
Since VSPEX Proven Infrastructures comprise end-to-end solutions, system monitoring includes three discrete but highly interrelated areas:
Servers, including virtual machines and clusters
Networking
Storage
This chapter focuses primarily on monitoring key components of the storage infrastructure, the VNX array, but briefly describes the other components.

Performance baseline
When a workload is added to a VSPEX deployment, server, storage, and networking resources are consumed. As additional workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which impacts all other workloads running on the platform. Customers should fully understand their workload characteristics on all key components prior to deploying them on a VSPEX platform; this is a requirement to correctly size resource utilization against the defined Reference virtual machine.
Deploy the first workload, and then measure the end-to-end resource consumption to establish a platform performance baseline. This removes the guesswork from sizing activities and ensures that the initial assumptions were valid. As additional workloads deploy, rerun the benchmarks to determine the cumulative load and the impact on existing virtual machines and their application workloads. Adjust resource allocations accordingly to ensure that oversubscription is not negatively impacting overall system performance. Run these baselines consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected. What follows is a discussion of which components should comprise a core performance baseline.

Servers
The key resources to monitor from a server perspective include use of:
Processors
Memory
Disk (local, NAS, and SAN)
Networking
Monitor these areas from both a physical host level (the hypervisor host level) and a virtual level (from within the guest virtual machine). Depending on your operating system, there are tools available to monitor and capture this data. For example, if your VSPEX deployment uses Windows servers as the hypervisor, you can use Windows perfmon to monitor and log these metrics. Follow your vendor's guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending upon the application. Detailed information about this tool is available in the Microsoft documentation.
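As an example of capturing such a baseline with the built-in tooling, the following PowerShell sketch samples a handful of commonly used host counters with Get-Counter. The counter set and output path are illustrative, not a validated baseline definition; extend the counters and the sample interval to match the workloads being measured.

```powershell
# Sample a small set of hypervisor host counters every 15 seconds for 1 hour
# and write them to a binary performance log for baseline comparison.
# The counter list and output path are illustrative only.
$counters = @(
    "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time",
    "\Memory\Available MBytes",
    "\PhysicalDisk(_Total)\Avg. Disk sec/Read",
    "\PhysicalDisk(_Total)\Avg. Disk sec/Write",
    "\Network Interface(*)\Bytes Total/sec"
)

Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path "C:\Baselines\host-baseline.blg" -FileFormat BLG
```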

Keep in mind that each VSPEX Proven Infrastructure provides a guaranteed level of performance based upon the number of Reference virtual machines deployed and their defined workload.

Networking
Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, at the fabric (switch) level, and, if network file or block protocols such as NFS/CIFS/SMB/iSCSI/FCoE are implemented, at the storage level. From the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latencies, IOPS, and I/O size. Capture additional data from network card or HBA utilities.
From the fabric perspective, tools that monitor the switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. If networked storage protocols are used, they are discussed in the following section. For detailed monitoring documentation, refer to your hypervisor/operating system vendor.

Storage
Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. Fortunately, the tools provided with the VNX family of storage arrays provide an easy yet powerful way to gain insight into how the underlying storage components are operating. For both block and file protocols, there are several key areas to focus upon, including:
Capacity
IOPS
Latency
SP utilization
For CIFS/SMB/NFS protocols, the following additional components should be monitored:
Data Mover CPU and memory usage
File system latency
Network interface throughput (in and out)
Additional considerations (though primarily from a tuning perspective) include:
I/O size
Workload characteristics
Cache utilization

These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers additional guidance on the subject through EMC Online Support.

VNX resources monitoring guidelines
Monitor the VNX with the EMC Unisphere GUI by opening an HTTPS session to the Control Station IP. The VNX family is a unified storage platform that provides both block storage and file storage access through a single entity. Monitoring is divided into two parts:
Monitoring block storage resources
Monitoring file storage resources

Monitoring block storage resources
This section explains how to use Unisphere to monitor block storage resource usage, which includes capacity, IOPS, and latency.
Capacity
In Unisphere, two panels display capacity information. These two panels provide a quick assessment of the overall free space available within the configured LUNs and underlying storage pools. For block, sufficient free storage should remain in the configured pools to allow for anticipated growth and activities such as snapshot creation. It is essential to have a free buffer, especially for thin LUNs, because out-of-space conditions usually lead to undesirable behaviors on affected host systems. As such, configure threshold alerts to warn storage administrators when capacity use rises above 80 percent. In that case, auto-expansion may need to be adjusted or additional space allocated to the pool. If LUN utilization is high, reclaim space or allocate additional space.
To set capacity threshold alerts for a specific pool, complete the following steps:
1. Select that pool and click Properties > Advanced.
2. In the Storage Pool Alerts area, choose a number for the Percent Full Threshold of this pool, as shown in Figure 63.

Figure 63. Storage Pool Alerts area
To drill down into capacity for block, complete the following steps:
1. In Unisphere, select the VNX system to examine.
2. Select Storage > Storage Configuration > Storage Pools. This opens the Storage Pools panel.
3. Examine the columns titled Free Capacity and % Consumed, as shown in Figure 64.

Figure 64. Storage Pools panel
Monitor capacity at the storage pool and LUN levels:
1. Click Storage > LUNs.
2. Select a LUN to examine and click Properties. The LUN Properties dialog box appears and displays detailed LUN information, as shown in Figure 65.
3. Verify the LUN Capacity area of the dialog box. User Capacity is the total physical capacity available to all thin LUNs in the pool. Consumed Capacity is the total physical capacity currently assigned to all thin LUNs.

Figure 65. LUN Properties dialog box
Examine capacity alerts and all other system events by opening the Alerts panel and the SP Event Logs panel, both of which are accessed under the Monitoring and Alerts panel, as shown in Figure 66.

Figure 66. Monitoring and Alerts panel
IOPS
The effects of an I/O workload serviced by an improperly configured storage system, or one whose resources are exhausted, can be felt system-wide. Monitoring the IOPS that the storage array services includes looking at metrics from the host ports in the SPs, along with the requests serviced by the back-end disks. The VSPEX solutions are carefully sized to deliver a certain performance level for a particular workload level. Ensure that IOPS do not exceed the design parameters.
Statistical reporting for IOPS (along with other key metrics) can be examined using the Statistics for Block panel by selecting VNX > System > Monitoring and Alerts > Statistics for Block. Monitor the statistics online or offline using Unisphere Analyzer, which requires a license.
Another metric to examine is Total Bandwidth (MB/s). An 8 Gbps front-end SP port can process 800 MB per second. The average bandwidth must not exceed 80 percent of the link bandwidth under normal operating conditions.
IOPS delivered to the LUNs are often higher than those issued by the hosts. This is particularly true with thin LUNs, as there is additional metadata associated with managing the I/O streams. Unisphere Analyzer shows the IOPS on each LUN, as shown in Figure 67.

Figure 67. IOPS on the LUNs
Certain RAID levels also impart write penalties that create additional back-end IOPS. Examine the IOPS delivered to (and serviced from) the underlying physical disks, which can also be viewed in Unisphere Analyzer, as shown in Figure 68. The rules of thumb for drive performance are shown in Table 36.
Table 36. Rules of thumb for drive performance
15K rpm SAS drives: 180 IOPS
10K rpm SAS drives: 120 IOPS
NL-SAS drives: 80 IOPS

Figure 68. IOPS on the disks
Latency
Latency is the byproduct of delays in processing I/O requests. This context focuses on monitoring storage latency, specifically block-level I/O. Using procedures similar to those in the previous section, view the latency at the LUN level, as shown in Figure 69.
Figure 69. Latency on the LUNs

Latency can be introduced anywhere along the I/O stream, from the application layer, through the transport, and out to the final storage devices; determining the precise causes of excessive latency requires a methodical approach.
Excessive latency in an FC network is uncommon. Unless there is a defective component such as an HBA or cable, delays introduced in the network fabric layer are normally the result of misconfigured switching fabrics. Within an FC environment, latency is typically caused by an overburdened storage array. Focus primarily on the LUNs and the underlying disk pools' ability to service I/O requests. Requests that cannot be serviced are queued, which introduces latency.
The same paradigm applies to Ethernet-based protocols such as iSCSI and FCoE. However, additional factors come into play because these storage protocols use Ethernet as the underlying transport. Isolate the network traffic (either physically or logically) for storage, and preferably implement Quality of Service (QoS) in a shared/converged fabric.
If network problems are not introducing excessive latency, examine the storage array. In addition to overburdened disks, excessive SP utilization can also introduce latency. SP utilization levels greater than 80 percent indicate a potential problem. Background processes such as replication, deduplication, and snapshots all compete for SP resources. Monitor these processes to ensure they do not cause SP resource exhaustion. Possible mitigation techniques include staggering background jobs, setting replication limits, and adding more physical resources or rebalancing the I/O workloads. Growth may also mandate moving to more powerful hardware.
For SP metrics, examine the data under the SP tab of Unisphere Analyzer, as shown in Figure 70. Review metrics such as Utilization (%), Queue Length, and Response Time (ms). High values for any of these metrics indicate that the storage array is under duress and likely requires mitigation. Table 37 shows the thresholds that EMC recommends as a best practice for Utilization (%), Response time (ms), and Queue length.
Table 37. Best practice for performance monitoring

Figure 70. SP utilization

Monitoring file storage resources
File-based protocols such as NFS and CIFS/SMB involve additional management processes beyond those for block storage. Data Movers, the hardware components that provide an interface between NFS and CIFS/SMB users and the SPs, provide these management services for VNX Unified systems. Data Movers process file protocol requests on the client side and convert the requests to the appropriate SCSI block semantics on the array side. The additional components and protocols introduce additional monitoring requirements, such as Data Mover network link utilization, memory utilization, and processor utilization.
To examine Data Mover metrics in the Statistics for File panel, select VNX > System > Monitoring and Alerts > Statistics for File, as shown in Figure 71. Click the Data Mover link to display the summary metrics shown in Figure 71. Usage levels in excess of 80 percent indicate potential performance concerns and likely require mitigation through Data Mover reconfiguration, additional physical resources, or both.

Figure 71. Data Mover statistics
Select Network Device from the Statistics panel to observe front-end network statistics. The Network Device Statistics window appears, as shown in Figure 72. If throughput figures exceed 80 percent of the link bandwidth to the client, configure additional links to relieve the network saturation.
Figure 72. Front-end Data Mover network statistics
Capacity
Similar to block storage monitoring, Unisphere has a statistics panel for file storage. Select Storage > Storage Configuration > Storage Pools for File to check file storage space utilization at the pool level, as shown in Figure 73.

Figure 73. Storage Pools for File panel
Monitor capacity at the pool and file system levels:
1. Select Storage > File Systems. The File Systems window appears, as shown in Figure 74.
Figure 74. File Systems panel
2. Select a file system to examine and click Properties, which displays detailed file system information, as shown in Figure 75.
3. Examine the File Storage area for Used and Free capacity.

Figure 75. File System Properties window
IOPS
In addition to block storage IOPS, Unisphere also provides the ability to monitor file system IOPS. Select System > Monitoring and Alerts > Statistics for File > File System I/O, as shown in Figure 76.

Figure 76. File System I/O Statistics window
Latency
To observe file system latency, select System > Monitoring and Alerts > Statistics for File > All Performance in Unisphere, and examine the value for CIFS:Ops/sec, as shown in Figure 77.

Figure 77. CIFS Statistics window

Summary
Consistent and thorough monitoring of the VSPEX Proven Infrastructure is a best practice. Having baseline performance data helps to identify problems, while monitoring key system metrics helps to ensure that the system functions optimally and within its designed parameters. The monitoring process can extend through integration with automation and orchestration tools from key partners, such as Microsoft with its System Center suite of products.


EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere with EMC XtremIO IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere with EMC XtremIO Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes the high-level steps

More information

Veritas Storage Foundation for Windows by Symantec

Veritas Storage Foundation for Windows by Symantec Veritas Storage Foundation for Windows by Symantec Advanced online storage management Veritas Storage Foundation 5.0 for Windows brings advanced online storage management to Microsoft Windows Server environments.

More information

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect Vblock Architecture Andrew Smallridge DC Technology Solutions Architect asmallri@cisco.com Vblock Design Governance It s an architecture! Requirements: Pretested Fully Integrated Ready to Go Ready to Grow

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes how to design an EMC VSPEX End-User Computing

More information

Native vsphere Storage for Remote and Branch Offices

Native vsphere Storage for Remote and Branch Offices SOLUTION OVERVIEW VMware vsan Remote Office Deployment Native vsphere Storage for Remote and Branch Offices VMware vsan is the industry-leading software powering Hyper-Converged Infrastructure (HCI) solutions.

More information

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results Dell Fluid Data solutions Powerful self-optimized enterprise storage Dell Compellent Storage Center: Designed for business results The Dell difference: Efficiency designed to drive down your total cost

More information

Surveillance Dell EMC Storage with Synectics Digital Recording System

Surveillance Dell EMC Storage with Synectics Digital Recording System Surveillance Dell EMC Storage with Synectics Digital Recording System Configuration Guide H15108 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published June 2016 Dell

More information

Dell EMC SAN Storage with Video Management Systems

Dell EMC SAN Storage with Video Management Systems Dell EMC SAN Storage with Video Management Systems Surveillance October 2018 H14824.3 Configuration Best Practices Guide Abstract The purpose of this guide is to provide configuration instructions for

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by Microsoft SQL Native Backup Reference Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information

More information

Surveillance Dell EMC Storage with Bosch Video Recording Manager

Surveillance Dell EMC Storage with Bosch Video Recording Manager Surveillance Dell EMC Storage with Bosch Video Recording Manager Sizing and Configuration Guide H13970 REV 2.1 Copyright 2015-2017 Dell Inc. or its subsidiaries. All rights reserved. Published December

More information

Local and Remote Data Protection for Microsoft Exchange Server 2007

Local and Remote Data Protection for Microsoft Exchange Server 2007 EMC Business Continuity for Microsoft Exchange 2007 Local and Remote Data Protection for Microsoft Exchange Server 2007 Enabled by EMC RecoverPoint CLR and EMC Replication Manager Reference Architecture

More information

DELL EMC UNITY: BEST PRACTICES GUIDE

DELL EMC UNITY: BEST PRACTICES GUIDE DELL EMC UNITY: BEST PRACTICES GUIDE Best Practices for Performance and Availability Unity OE 4.5 ABSTRACT This white paper provides recommended best practice guidelines for installing and configuring

More information

Surveillance Dell EMC Storage with Cisco Video Surveillance Manager

Surveillance Dell EMC Storage with Cisco Video Surveillance Manager Surveillance Dell EMC Storage with Cisco Video Surveillance Manager Configuration Guide H14001 REV 1.1 Copyright 2015-2017 Dell Inc. or its subsidiaries. All rights reserved. Published May 2015 Dell believes

More information

EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (NFS),VMWARE vsphere 4.1, VMWARE VIEW 4.6, AND VMWARE VIEW COMPOSER 2.

EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (NFS),VMWARE vsphere 4.1, VMWARE VIEW 4.6, AND VMWARE VIEW COMPOSER 2. EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (NFS),VMWARE vsphere 4.1, VMWARE VIEW 4.6, AND VMWARE VIEW COMPOSER 2.6 Reference Architecture EMC SOLUTIONS GROUP August 2011 Copyright

More information

Vendor: EMC. Exam Code: E Exam Name: Cloud Infrastructure and Services Exam. Version: Demo

Vendor: EMC. Exam Code: E Exam Name: Cloud Infrastructure and Services Exam. Version: Demo Vendor: EMC Exam Code: E20-002 Exam Name: Cloud Infrastructure and Services Exam Version: Demo QUESTION NO: 1 In which Cloud deployment model would an organization see operational expenditures grow in

More information

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.

More information

MIGRATING TO DELL EMC UNITY WITH SAN COPY

MIGRATING TO DELL EMC UNITY WITH SAN COPY MIGRATING TO DELL EMC UNITY WITH SAN COPY ABSTRACT This white paper explains how to migrate Block data from a CLARiiON CX or VNX Series system to Dell EMC Unity. This paper outlines how to use Dell EMC

More information

EMC Celerra CNS with CLARiiON Storage

EMC Celerra CNS with CLARiiON Storage DATA SHEET EMC Celerra CNS with CLARiiON Storage Reach new heights of availability and scalability with EMC Celerra Clustered Network Server (CNS) and CLARiiON storage Consolidating and sharing information

More information

Veritas Storage Foundation for Windows by Symantec

Veritas Storage Foundation for Windows by Symantec Veritas Storage Foundation for Windows by Symantec Advanced online storage management Veritas Storage Foundation 5.1 for Windows brings advanced online storage management to Microsoft Windows Server environments,

More information

VMware vsphere Clusters in Security Zones

VMware vsphere Clusters in Security Zones SOLUTION OVERVIEW VMware vsan VMware vsphere Clusters in Security Zones A security zone, also referred to as a DMZ," is a sub-network that is designed to provide tightly controlled connectivity to an organization

More information

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager Reference Architecture Copyright 2010 EMC Corporation. All rights reserved.

More information

Real-time Protection for Microsoft Hyper-V

Real-time Protection for Microsoft Hyper-V Real-time Protection for Microsoft Hyper-V Introduction Computer virtualization has come a long way in a very short time, triggered primarily by the rapid rate of customer adoption. Moving resources to

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange in a VMware Environment Enabled by MirrorView/S Reference Architecture EMC Global

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange 2007

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange 2007 Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange Server Enabled by MirrorView/S and Replication Manager Reference Architecture EMC

More information

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN White Paper VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN Benefits of EMC VNX for Block Integration with VMware VAAI EMC SOLUTIONS GROUP Abstract This white paper highlights the

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 DESIGN GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 EMC VSPEX Abstract This describes how to design virtualized Microsoft SQL Server resources on the appropriate EMC VSPEX Proven Infrastructure

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes how to

More information

vsan Security Zone Deployment First Published On: Last Updated On:

vsan Security Zone Deployment First Published On: Last Updated On: First Published On: 06-14-2017 Last Updated On: 11-20-2017 1 1. vsan Security Zone Deployment 1.1.Solution Overview Table of Contents 2 1. vsan Security Zone Deployment 3 1.1 Solution Overview VMware vsphere

More information

EMC VNX2 Deduplication and Compression

EMC VNX2 Deduplication and Compression White Paper VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, & VNX8000 Maximizing effective capacity utilization Abstract This white paper discusses the capacity optimization technologies delivered in the

More information

Enhancing Oracle VM Business Continuity Using Dell Compellent Live Volume

Enhancing Oracle VM Business Continuity Using Dell Compellent Live Volume Enhancing Oracle VM Business Continuity Using Dell Compellent Live Volume Wendy Chen, Roger Lopez, and Josh Raw Dell Product Group February 2013 This document is for informational purposes only and may

More information

IBM System Storage DS5020 Express

IBM System Storage DS5020 Express IBM DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage Highlights Mixed host interfaces support (FC/iSCSI) enables SAN tiering Balanced performance well-suited for

More information

EMC Virtual Infrastructure for Microsoft Exchange 2007

EMC Virtual Infrastructure for Microsoft Exchange 2007 EMC Virtual Infrastructure for Microsoft Exchange 2007 Enabled by EMC Replication Manager, EMC CLARiiON AX4-5, and iscsi Reference Architecture EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103

More information

70-745: Implementing a Software-Defined Datacenter

70-745: Implementing a Software-Defined Datacenter 70-745: Implementing a Software-Defined Datacenter Target Audience: Candidates for this exam are IT professionals responsible for implementing a software-defined datacenter (SDDC) with Windows Server 2016

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING EMC VSPEX END-USER COMPUTING Citrix XenDesktop EMC VSPEX Abstract This describes how to design an EMC VSPEX end-user computing solution for Citrix XenDesktop using EMC ScaleIO and VMware vsphere to provide

More information

INTEGRATED INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNXE3300, VMWARE VSPHERE 4.1, AND VMWARE VIEW 4.5

INTEGRATED INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNXE3300, VMWARE VSPHERE 4.1, AND VMWARE VIEW 4.5 White Paper INTEGRATED INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNXE3300, VMWARE VSPHERE 4.1, AND VMWARE VIEW 4.5 EMC GLOBAL SOLUTIONS Abstract This white paper describes a simple, efficient,

More information

EMC CLARiiON CX3-80 EMC Metropolitan Recovery for SQL Server 2005 Enabled by Replication Manager and MirrorView/S

EMC CLARiiON CX3-80 EMC Metropolitan Recovery for SQL Server 2005 Enabled by Replication Manager and MirrorView/S Enterprise Solutions for Microsoft SQL Server 2005 EMC CLARiiON CX3-80 EMC Metropolitan Recovery for SQL Server 2005 Enabled by Replication Manager and MirrorView/S Reference Architecture EMC Global Solutions

More information

Cisco HyperFlex Systems and Veeam Backup and Replication

Cisco HyperFlex Systems and Veeam Backup and Replication Cisco HyperFlex Systems and Veeam Backup and Replication Best practices for version 9.5 update 3 on Microsoft Hyper-V What you will learn This document outlines best practices for deploying Veeam backup

More information

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP DESIGN GUIDE EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP Enabled by EMC VNXe and EMC Data Protection VMware vsphere 5.5 Red Hat Enterprise Linux 6.4 EMC VSPEX Abstract This describes how to design

More information

EMC XTREMCACHE ACCELERATES MICROSOFT SQL SERVER

EMC XTREMCACHE ACCELERATES MICROSOFT SQL SERVER White Paper EMC XTREMCACHE ACCELERATES MICROSOFT SQL SERVER EMC XtremSF, EMC XtremCache, EMC VNX, Microsoft SQL Server 2008 XtremCache dramatically improves SQL performance VNX protects data EMC Solutions

More information

Scale and secure workloads, cost-effectively build a private cloud, and securely connect to cloud services. Beyond virtualization

Scale and secure workloads, cost-effectively build a private cloud, and securely connect to cloud services. Beyond virtualization Beyond virtualization Scale and secure workloads, cost-effectively build a private cloud, and securely connect to cloud services The power of many servers, the simplicity of one Efficiently manage infrastructure

More information

Protecting Hyper-V Environments

Protecting Hyper-V Environments TECHNICAL WHITE PAPER: BACKUP EXEC TM 2014 PROTECTING HYPER-V ENVIRONMENTS Backup Exec TM 2014 Technical White Paper Protecting Hyper-V Environments Technical White Papers are designed to introduce Symantec

More information

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public Data Protection for Cisco HyperFlex with Veeam Availability Suite 1 2017 2017 Cisco Cisco and/or and/or its affiliates. its affiliates. All rights All rights reserved. reserved. Highlights Is Cisco compatible

More information

EMC VNXe3200 Unified Snapshots

EMC VNXe3200 Unified Snapshots White Paper Abstract This white paper reviews and explains the various operations, limitations, and best practices supported by the Unified Snapshots feature on the VNXe3200 system. July 2015 Copyright

More information

Protecting Miscrosoft Hyper-V Environments

Protecting Miscrosoft Hyper-V Environments Protecting Miscrosoft Hyper-V Environments Who should read this paper Technical White Papers are designed to introduce Veritas partners and end users to key technologies and technical concepts that are

More information

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture EMC Solutions for Microsoft Exchange 2007 EMC Celerra NS20 EMC NAS Product Validation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2008 EMC Corporation. All rights

More information

EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution. Enabled by EMC Celerra and Linux using FCP and NFS. Reference Architecture

EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution. Enabled by EMC Celerra and Linux using FCP and NFS. Reference Architecture EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution Enabled by EMC Celerra and Linux using FCP and NFS Reference Architecture Copyright 2009 EMC Corporation. All rights reserved. Published

More information

Disaster Recovery-to-the- Cloud Best Practices

Disaster Recovery-to-the- Cloud Best Practices Disaster Recovery-to-the- Cloud Best Practices HOW TO EFFECTIVELY CONFIGURE YOUR OWN SELF-MANAGED RECOVERY PLANS AND THE REPLICATION OF CRITICAL VMWARE VIRTUAL MACHINES FROM ON-PREMISES TO A CLOUD SERVICE

More information