EMC VSPEX PRIVATE CLOUD


1 Proven Infrastructure EMC VSPEX PRIVATE CLOUD VMware vsphere 5.1 for up to 500 Virtual Machines Enabled by Microsoft Windows Server 2012, EMC VNX, and EMC Next-Generation Backup EMC VSPEX Abstract This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with VMware vsphere and EMC VNX for up to 500 virtual machines. April 2013

2 Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published April 2013 EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC 2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC Online Support website. Part Number H

Contents

Chapter 1 Executive Summary 17
    Introduction
    Target audience
    Document purpose
    Business needs

Chapter 2 Solution Overview 21
    Introduction
    Virtualization
    Compute
    Network
    Storage

Chapter 3 Solution Technology Overview 25
    Overview
    Key components
    Virtualization
    Overview
    VMware vsphere
    VMware vcenter
    VMware vsphere High-Availability
    EMC Virtual Storage Integrator for VMware
    VNX VMware vstorage API for Array Integration support
    Compute
    Network
    Overview
    Storage
    Overview
    EMC VNX series
    VNX Snapshots
    VNX Snapsure
    VNX Virtual Provisioning
    VNX FAST Cache
    VNX FAST VP
    vcloud Networking and Security
    VNX file shares
    ROBO
    Backup and recovery
    Overview
    VMware vsphere data protection
    vsphere replication
    EMC RecoverPoint
    EMC Avamar
    Other technologies
    Overview
    VMware vcloud Director
    VMware vcenter Operations Management Suite (vc OPs)
    VMware vcenter Single Sign On (SSO)
    PowerPath/VE (for block)
    EMC XtremSW Cache

Chapter 4 Solution Architecture Overview 49
    Overview
    Solution architecture
    Overview
    Logical Architecture
    Key components
    Hardware resources
    Software resources
    Server configuration guidelines
    Overview
    VMware vsphere memory virtualization for VSPEX
    Memory configuration guidelines
    Network configuration guidelines
    Overview
    VLAN
    Enable jumbo frames (for iscsi and NFS)
    Link aggregation (for NFS)
    Storage configuration guidelines
    Overview
    VMware vsphere storage virtualization for VSPEX
    VSPEX storage building blocks
    VSPEX private cloud validated maximums
    High-availability and failover
    Overview
    Virtualization layer
    Compute layer
    Network layer
    Storage layer
    Validation test profile
    Profile characteristics
    Backup and recovery configuration guidelines
    Overview
    Backup characteristics
    Backup layout
    Sizing guidelines
    Reference workload
    Overview
    Defining the reference workload
    Applying the reference workload
    Overview
    Example 1: Custom-built application
    Example 2: Point of sale system
    Example 3: Web server
    Example 4: Decision-support database
    Summary of examples
    Implementing the solution
    Overview
    Resource types
    CPU resources
    Memory resources
    Network resources
    Storage resources
    Implementation summary
    Quick assessment
    Overview
    CPU requirements
    Memory requirements
    Storage performance requirements
    I/O operations per second (IOPS)
    I/O size
    I/O latency
    Storage capacity requirements
    Determining equivalent Reference virtual machines
    Fine tuning hardware resources

Chapter 5 VSPEX Configuration Guidelines 101
    Overview
    Pre-deployment tasks
    Overview
    Deployment prerequisites
    Customer configuration data
    Prepare switches, connect network, and configure switches
    Overview
    Prepare network switches
    Configure infrastructure network
    Configure VLANs
    Configure Jumbo Frames (iscsi and NFS only)
    Complete network cabling
    Prepare and configure storage array
    VNX configuration for block protocols
    VNX configuration for file protocols
    FAST VP configuration
    FAST Cache configuration
    Install and configure vsphere hosts
    Overview
    Install ESXi
    Configure ESXi networking
    Install and configure PowerPath/VE (block only)
    Connect VMware datastores
    Plan virtual machine memory allocations
    Install and configure SQL server database
    Overview
    Create a virtual machine for Microsoft SQL server
    Install Microsoft Windows on the virtual machine
    Install SQL server
    Configure database for VMware vcenter
    Configure database for VMware Update Manager
    Install and configure VMware vcenter server
    Overview
    Create the vcenter host virtual machine
    Install vcenter guest OS
    Create vcenter ODBC connections
    Install vcenter server
    Apply vsphere license keys
    Install the EMC VSI plug-in
    Summary

Chapter 6 Validating the Solution 131
    Overview
    Post-install checklist
    Deploy and test a single virtual server
    Verify the redundancy of the solution components
    Block environments
    File environments

Chapter 7 System Monitoring 135
    Overview
    Key areas to monitor
    Performance baseline
    Servers
    Networking
    Storage
    VNX resource monitoring guidelines
    Monitoring block storage resources
    Monitoring file storage resources
    Summary

Appendix A Bills of Materials 153
    Bill of materials

Appendix B Customer Configuration Data Sheet 161
    Customer configuration data sheet

Appendix C Server resource component worksheet 165
    Server resources component worksheet

Appendix D References 167
    References
    EMC documentation
    Other documentation

Appendix E About VSPEX 169
    About VSPEX

Figures

Figure 1. Private cloud components
Figure 2. Compute layer flexibility
Figure 3. Example of highly available network design for block
Figure 4. Example of highly available network design for file
Figure 5. Storage pool rebalance progress
Figure 6. Thin LUN space utilization
Figure 7. Examining storage pool space utilization
Figure 8. Defining storage pool utilization thresholds
Figure 9. Defining automated notifications (for block)
Figure 10. Logical architecture for block storage
Figure 11. Logical architecture for file storage
Figure 12. Hypervisor memory consumption
Figure 13. Required networks for block storage
Figure 14. Required networks for file storage
Figure 15. VMware virtual disk types
Figure 16. Storage layout building block for 10 virtual machines
Figure 17. Storage layout building block for 50 virtual machines
Figure 18. Storage layout building block for 100 virtual machines
Figure 19. Storage layout for 125 virtual machines using VNX5300
Figure 20. Storage layout for 250 virtual machines using VNX5500
Figure 21. Storage layout for 500 virtual machines using VNX5700
Figure 22. Maximum scale level of different arrays
Figure 23. High-availability at the virtualization layer
Figure 24. Redundant power supplies
Figure 25. Network layer High-Availability (VNX) - Block storage
Figure 26. Network layer High-Availability (VNX) - File storage
Figure 27. VNX series High-Availability
Figure 28. Resource pool flexibility
Figure 29. Required resource from the Reference virtual machine pool
Figure 30. Aggregate resource requirements - stage 1
Figure 31. Pool configuration - stage 1
Figure 32. Aggregate resource requirements - stage 2
Figure 33. Pool configuration - stage 2
Figure 34. Aggregate resource requirements for stage 3
Figure 35. Pool configuration - stage 3
Figure 36. Customizing server resources
Figure 37. Sample network architecture - Block storage
Figure 38. Sample Ethernet network architecture - File storage
Figure 39. Network Settings For File dialog box
Figure 40. Create Interface dialog box
Figure 41. Create File System dialog box
Figure 43. Storage Pool Properties dialog box
Figure 44. Manage Auto-Tiering dialog box
Figure 45. Storage System Properties dialog box
Figure 46. Create FAST Cache dialog box
Figure 47. Advanced tab in the Create Storage Pool dialog
Figure 48. Advanced tab in the Storage Pool Properties dialog
Figure 49. Virtual machine memory settings
Figure 50. Storage pool alerts
Figure 51. Storage pools panel
Figure 52. LUN property dialog box
Figure 53. Monitoring and Alerts panel
Figure 54. IOPS on the LUNs
Figure 55. IOPS on the drives
Figure 56. Latency on the LUNs
Figure 57. SP Utilization
Figure 58. Data Mover statistics
Figure 59. Front-end Data Mover network statistics
Figure 60. Storage pools for file panel
Figure 61. File systems panel
Figure 62. File system property panel
Figure 63. File system performance panel
Figure 64. File storage all performance panel


Tables

Table 1. VNX customer benefits
Table 2. Thresholds and settings under VNX OE Block Release 32
Table 3. Solution hardware
Table 4. Solution software
Table 5. Hardware resources for compute
Table 6. Hardware resources for network
Table 7. Hardware resources for storage
Table 8. Number of disks required for different number of virtual machines
Table 9. Profile characteristics
Table 10. Backup profile characteristics
Table 11. Virtual machine characteristics
Table 12. Blank worksheet row
Table 13. Reference virtual machine resources
Table 14. Example worksheet row
Table 15. Example applications - stage 1
Table 16. Example applications - stage 2
Table 17. Example applications - stage 3
Table 18. Server resource component totals
Table 19. Deployment process overview
Table 20. Tasks for pre-deployment
Table 21. Deployment prerequisites checklist
Table 22. Tasks for switch and network configuration
Table 23. Tasks for VNX configuration
Table 24. Storage allocation table for block
Table 25. Tasks for storage configuration
Table 26. Storage allocation table for file
Table 27. Tasks for server installation
Table 28. Tasks for SQL server database setup
Table 29. Tasks for vcenter configuration
Table 30. Tasks for testing the installation
Table 31. Rules of thumb for drive performance
Table 32. Best practices for performance monitoring
Table 33. List of components used in the VSPEX solution for 125 virtual machines
Table 34. List of components used in the VSPEX solution for 250 virtual machines
Table 35. List of components used in the VSPEX solution for 500 virtual machines
Table 36. Common server information
Table 37. ESXi server information
Table 38. Array information
Table 39. Network infrastructure information
Table 40. VLAN information
Table 41. Service accounts
Table 43. Blank worksheet for server resource totals


17 Chapter 1 Executive Summary

This chapter presents the following topics:

Introduction
Target audience
Document purpose
Business needs

18 Executive Summary

Introduction

VSPEX validated and modular architectures are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make informed decisions in the hypervisor, compute, and networking layers. VSPEX helps to reduce virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, greater choice, greater efficiency, and lower risk.

This document is a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meet or exceed the stated minimums.

Target audience

The readers of this document should have the necessary training and background to install and configure VMware vsphere, EMC VNX series storage systems, and associated infrastructure as required by this implementation. External references are provided where applicable, and the readers should be familiar with these documents. Readers should also be familiar with the infrastructure and database security policies of the customer installation.

Individuals focusing on selling and sizing a VMware Private Cloud infrastructure must pay particular attention to the first four chapters of this document. After purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This document is an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system.

The VSPEX Private Cloud architecture provides the customer with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the VMware vsphere virtualization layer backed by highly available VNX family storage. The compute and network components, which are defined by the VSPEX partners, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

The 125, 250, and 500 virtual machine environments discussed are based on a defined reference workload. Since not every virtual machine has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective when deployed. For smaller environments, solutions for up to 100 virtual machines based on the EMC VNXe series are described in EMC VSPEX Private Cloud: VMware vsphere 5.1 for up to 100 Virtual Machines.

18

19 Executive Summary

A private cloud architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, validation tests and monitoring instructions ensure that your system is running properly. Following the instructions in this document ensures an efficient and painless journey to the cloud.

Business needs

VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, server, and networking layers. Business applications are moving into consolidated compute, network, and storage environments. EMC VSPEX Private Cloud using VMware reduces the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design and implementation options. Administration is unified, while process separation can be adequately controlled and monitored.

The business needs for the VSPEX private cloud for VMware architectures are as follows:

Provide an end-to-end virtualization solution to use the capabilities of the unified infrastructure components.
Provide a VSPEX Private Cloud solution for VMware for efficiently virtualizing up to 500 virtual machines for varied customer use cases.
Provide a reliable, flexible, and scalable reference design.

19


21 Chapter 2 Solution Overview

This chapter presents the following topics:

Introduction
Virtualization
Compute
Network
Storage

22 Solution Overview

Introduction

The EMC VSPEX Private Cloud for VMware vsphere 5.1 provides a complete system architecture capable of supporting up to 500 virtual machines with a redundant server and network topology and highly available storage. The core components that make up this particular solution are virtualization, compute, storage, and networking.

Virtualization

VMware vsphere is the leading virtualization platform in the industry. For years, it has provided flexibility and cost savings to end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core VMware vsphere components are the VMware vsphere Hypervisor and the VMware vcenter Server for system management.

The VMware hypervisor runs on a dedicated server and allows multiple operating systems to run on the system at one time as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration. The clustered configurations are then managed as a larger resource pool through the vcenter product, which allows for dynamic allocation of CPU, memory, and storage across the cluster.

Features such as vmotion, which allows a virtual machine to move between different servers with no disruption to the operating system, and Distributed Resource Scheduler (DRS), which performs vmotion migrations automatically to balance load, make vsphere a solid business choice. With the release of vsphere 5.1, a VMware-virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).

Compute

VSPEX provides the flexibility to design and implement your choice of server components. The infrastructure must conform to the following attributes:

Sufficient cores and memory to support the required number and types of virtual machines.
Sufficient network connections to enable redundant connectivity to the system switches.
Excess capacity to withstand a server failure and failover in the environment.

22

23 Solution Overview

Network

VSPEX provides the flexibility to design and implement the customer's choice of network components. The infrastructure must conform to the following attributes:

Redundant network links for the hosts, switches, and storage.
Traffic isolation based on industry-accepted best practices.
Support for Link Aggregation.

Storage

The EMC VNX storage family is the leading shared storage platform in the industry. VNX provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation. VNX storage includes the following components that are sized for the stated reference architecture workload:

Host Bus Adapter ports (for block): Provide host connectivity via fabric to the array.
Storage processors (SPs): The compute components of the storage array, which are used for all aspects of data moving into, out of, and between arrays.
Disk drives: Disk spindles and solid state drives that contain the host or application data, and their enclosures.
Data Movers (for file): Front-end appliances that provide file services to hosts (required only when CIFS/SMB or NFS file services are provided).

The 125, 250, and 500 virtual machine VMware Private Cloud solutions described in this document are based on the VNX5300, VNX5500, and VNX5700 storage arrays respectively. VNX5300 supports a maximum of 125 drives, VNX5500 can host up to 250 drives, and VNX5700 can host up to 500 drives.

The EMC VNX series supports a wide range of business-class features ideal for the private cloud environment, including:

Fully Automated Storage Tiering for Virtual Pools (FAST VP)
FAST Cache
Data deduplication
Thin Provisioning
Replication
Snapshots/checkpoints
File-Level Retention
Quota management

23
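The scale points above pair each validated virtual machine count with a specific array model and drive ceiling. As a quick illustration only, the following Python sketch maps a target reference virtual machine count to the array validated for it in this document; the function name and structure are illustrative and are not part of any EMC tool.

```python
# Illustrative only: the scale points validated in this solution, taken from the
# text above (VNX5300 for up to 125 VMs/125 drives, VNX5500 for 250, VNX5700 for 500).
VALIDATED_SCALE_POINTS = [
    # (max reference virtual machines, array model, maximum drive count)
    (125, "VNX5300", 125),
    (250, "VNX5500", 250),
    (500, "VNX5700", 500),
]

def select_vnx_model(reference_vms: int):
    """Return the smallest validated array for the requested VM count."""
    for max_vms, model, max_drives in VALIDATED_SCALE_POINTS:
        if reference_vms <= max_vms:
            return model, max_drives
    raise ValueError("More than 500 reference virtual machines is outside the "
                     "scale points validated in this document.")

if __name__ == "__main__":
    for vms in (100, 200, 450):
        model, drives = select_vnx_model(vms)
        print(f"{vms} reference VMs -> {model} (up to {drives} drives)")
```

The actual choice between scale points should still follow the sizing guidelines and worksheets in Chapter 4, which account for workloads that differ from the reference virtual machine.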


25 Chapter 3 Solution Technology Overview

This chapter presents the following topics:

Overview
Key components
Virtualization
Compute
Network
Storage
Backup and recovery
Other technologies

26 Solution Technology Overview Overview This solution uses the EMC VNX series and VMware vsphere 5.1 to provide storage and server hardware consolidation in a private cloud. The new virtualized infrastructure is centrally managed, to provide efficient deployment and management of a scalable number of virtual machines and associated shared storage. Figure 1 depicts the solution components. Figure 1. Private cloud components The following sections describe the components in more detail. 26

27 Solution Technology Overview

Key components

This section describes the key components of this solution.

Virtualization: The virtualization layer decouples the physical implementation of resources from the applications that use them. In other words, the application view of the available resources is no longer directly tied to the hardware. This enables many key features in the private cloud concept.

Compute: The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the customer to implement the solution by using any server hardware that meets these requirements.

Network: The network layer connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables the customer to implement the solution by using any network hardware that meets these requirements.

Storage: The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the private cloud can be implemented. The EMC VNX storage family used in this solution provides high-performance data storage while maintaining high availability.

Backup and recovery: The optional backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable.

The Solution architecture section provides details on all the components that make up the reference architecture.

27

28 Solution Technology Overview

Virtualization

Overview

The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer, eliminates hardware downtime for maintenance, and allows the physical capability of the system to change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

VMware vsphere 5.1

VMware vsphere 5.1 transforms the physical resources of a computer by virtualizing the CPU, RAM, hard disk, and network controller. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications, just like physical computers. The high-availability features of VMware vsphere 5.1, such as vmotion and Storage vmotion, enable seamless migration of virtual machines and stored files from one vsphere server to another, or from one data storage area to another, with minimal or no performance impact. Coupled with vsphere DRS and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.

VMware vcenter

VMware vcenter is a centralized management platform for the VMware Virtual Infrastructure. This platform provides administrators with a single interface for all aspects of monitoring, managing, and maintaining the virtual infrastructure, and it can be accessed from multiple devices. VMware vcenter also manages some advanced features of the VMware virtual infrastructure, such as VMware vsphere High-Availability and DRS, along with vmotion and Update Manager.

VMware vsphere High-Availability

The VMware vsphere High-Availability feature enables the virtualization layer to automatically restart virtual machines in various failure conditions. If the virtual machine operating system has an error, the virtual machine can automatically restart on the same hardware. If the physical hardware has an error, the impacted virtual machines can automatically restart on other servers in the cluster.

Note: To restart virtual machines on different hardware, the servers must have available resources. The Compute section provides detailed information to enable this function.

With VMware vsphere High-Availability, you can configure policies to determine which machines automatically restart, and under what conditions to attempt these operations.

28

29 Solution Technology Overview

EMC Virtual Storage Integrator for VMware

EMC Virtual Storage Integrator (VSI) for VMware vsphere is a plug-in for the vsphere client that provides a single management interface for EMC storage within the vsphere environment. Features can be added to and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience, which enables new features to be introduced rapidly in response to customer requirements.

Validation testing uses the following features:

Storage Viewer (SV): Extends the vsphere client to help discover and identify EMC VNX storage devices allocated to VMware vsphere hosts and virtual machines. SV presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vsphere client views.

Unified Storage Management: Simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision Virtual Machine File System (VMFS) datastores, Raw Device Mapping (RDM) volumes, or network file system (NFS) datastores seamlessly within the vsphere client.

Refer to the EMC VSI for VMware vsphere product guides on EMC Online Support for more information.

VNX VMware vstorage API for Array Integration support

Hardware acceleration with VMware vstorage API for Array Integration (VAAI) is a storage enhancement in vsphere 5.1 that enables vsphere to offload specific storage operations to compatible storage hardware such as the VNX series platforms. With the assistance of storage hardware, vsphere performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.

29

30 Solution Technology Overview In the example shown in Figure 2, the compute layer requirements for a specific implementation are 25 processor cores, and 200 GB of RAM. One customer might want to implement this with white-box servers containing 16 processor cores, and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM. Figure 2. Compute layer flexibility The first customer needs four of the chosen servers, while the other customer needs two. Note To enable high-availability at the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails. 30
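The server-count arithmetic behind Figure 2 can be expressed as a short calculation. The sketch below is illustrative only, using the example values from the text (25 processor cores and 200 GB of RAM required); it is not a substitute for the sizing worksheets in Chapter 4, and the function name is not part of any VSPEX tool.

```python
import math

def servers_needed(required_cores, required_ram_gb,
                   cores_per_server, ram_per_server_gb,
                   n_plus_one=True):
    """Minimum number of identical servers that satisfies both the core and RAM minimums.

    The optional extra server reflects the note above: keep one server's worth of
    spare capacity so the cluster can absorb a single server failure.
    """
    by_cores = math.ceil(required_cores / cores_per_server)
    by_ram = math.ceil(required_ram_gb / ram_per_server_gb)
    count = max(by_cores, by_ram)
    return count + 1 if n_plus_one else count

# Example values from Figure 2: 25 cores and 200 GB of RAM required.
print(servers_needed(25, 200, 16, 64, n_plus_one=False))   # 4 white-box servers
print(servers_needed(25, 200, 20, 144, n_plus_one=False))  # 2 higher-end servers
print(servers_needed(25, 200, 16, 64))                     # 5 when the HA spare is included
```

Note that RAM, not cores, is the constraining resource for the white-box configuration in this example, which is why four servers are needed rather than two.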

31 Solution Technology Overview

Use the following best practices in the compute layer:

Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area. If you implement high-availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades, and tolerance for single unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be flexible to meet your specific needs. Ensure that there are sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

Overview

The infrastructure network requires redundant network links for each vsphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This is a required configuration regardless of whether the network infrastructure for the solution already exists, or you are deploying it alongside other components of the solution. Figure 3 and Figure 4 depict an example of this highly available network topology.

31

32 Solution Technology Overview Figure 3. Example of highly available network design for block 32

33 Solution Technology Overview

Figure 4. Example of highly available network design for file

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high-availability, and security.

For block, EMC unified storage platforms provide network high-availability or redundancy by using two ports per SP. If a link is lost on an SP front-end port, the link fails over to another port. All network traffic is distributed across the active links.

For file, EMC unified storage platforms provide network high-availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.

33

34 Solution Technology Overview

Storage

Overview

The storage layer is also a key component of any cloud infrastructure solution that serves data generated by applications and operating systems in the datacenter storage processing systems. This increases storage efficiency and management flexibility, and reduces total cost of ownership. In this VSPEX solution, EMC VNX series arrays provide virtualization at the storage layer.

EMC VNX series

The EMC VNX family is optimized for virtual applications, and delivers industry-leading innovation and enterprise capabilities for file and block storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises. Intel Xeon processors power the VNX series for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. It is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises.

Table 1 shows the customer benefits that are provided by the VNX series.

Table 1. VNX customer benefits

Feature
Next-generation unified storage, optimized for virtualized applications
Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies
High-availability, designed to deliver five 9s availability
Automated tiering with FAST VP and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
Simplified management with EMC Unisphere for a single management interface for all NAS, SAN, and replication needs
Up to three times improvement in performance with the latest Intel Xeon multi-core processor technology, optimized for Flash

Different software suites and packs are also available for the VNX series, which provide multiple features for enhanced protection and performance:

Software suites

FAST Suite: Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.
Local Protection Suite: Practices safe data protection and repurposing.
Remote Protection Suite: Protects data against localized failures, outages, and disasters.
Application Protection Suite: Automates application copies and proves compliance.

34

35 Solution Technology Overview

Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.

Software packs

Total Efficiency Pack: Includes all five software suites.
Total Protection Pack: Includes the local, remote, and application protection suites.

VNX Snapshots

VNX Snapshots is a new software feature introduced in VNX OE for Block Release 32, which creates point-in-time data copies. VNX Snapshots can be used for data backups, software development and testing, repurposing, data validation, and local rapid restores. VNX Snapshots improves on the existing SnapView Snapshot functionality by integrating with storage pools.

Note: LUNs created on physical RAID groups, also called RAID LUNs, support only SnapView Snapshots. This limitation exists because VNX Snapshots require pool space as part of the technology.

VNX Snapshots support 256 writeable snaps per pool LUN. Branching, also called a Snap of a Snap, is supported as long as the total number of snapshots for any primary LUN is less than 256, which is a hard limit.

VNX Snapshots use redirect on write (ROW) technology. ROW redirects new writes destined for the primary LUN to a new location in the storage pool. Such an implementation is different from copy on first write (COFW) used in SnapView, which holds the writes to the primary LUN until the original data is copied to the reserved LUN pool to preserve a snapshot.

This release (Block OE Release 32) introduces consistency groups (CGs). Several pool LUNs can be combined into a CG and snapped concurrently. When a snapshot of a CG is initiated, all writes to the member LUNs are held until the snapshots have been created. Typically, CGs are used for LUNs that belong to the same application.

VNX SnapSure

VNX SnapSure is an EMC VNX Network Server software feature that enables you to create and manage checkpoints, which are point-in-time, logical images of a production file system (PFS). SnapSure uses a copy on first modify principle. A PFS consists of blocks. When a block within the PFS is modified, a copy containing the block's original contents is saved to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. The original blocks from the PFS in the SavVol and the unchanged PFS blocks remaining in the PFS are read by SnapSure according to a bitmap and block map data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint.

35
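Returning to the VNX Snapshots description above, the sketch below is a simplified conceptual model of the ROW versus COFW difference: it counts back-end I/Os generated by host writes issued after a snapshot is taken, ignoring metadata updates, caching, and SavVol behavior. It is illustrative only and does not describe the actual VNX or SnapView implementation.

```python
def backend_ops_after_snapshot(write_sequence, scheme):
    """Count back-end I/Os for host writes issued after a snapshot is taken.

    write_sequence: iterable of block addresses written by the host.
    scheme: "ROW" (VNX Snapshots) or "COFW" (SnapView snapshots).
    Simplified conceptual model; metadata and caching are ignored.
    """
    ops = 0
    preserved = set()  # blocks whose original contents are already protected
    for block in write_sequence:
        if scheme == "ROW":
            # Redirect on write: new data goes to a new location in the storage
            # pool; the original block stays in place for the snapshot.
            ops += 1
        elif scheme == "COFW":
            if block not in preserved:
                # Copy on first write: read the original block and copy it to
                # the reserved LUN pool before accepting the new write.
                ops += 2
                preserved.add(block)
            ops += 1  # the host write itself, to the primary LUN
        else:
            raise ValueError("scheme must be 'ROW' or 'COFW'")
    return ops

writes = [10, 11, 10, 12, 11, 10]  # repeated writes to a few blocks
print("ROW :", backend_ops_after_snapshot(writes, "ROW"), "back-end I/Os")
print("COFW:", backend_ops_after_snapshot(writes, "COFW"), "back-end I/Os")
```

In this toy sequence, COFW pays an extra copy only on the first write to each block after the snapshot, which is the behavior the text above describes.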

36 Solution Technology Overview

A checkpoint reflects the state of a PFS at the time the checkpoint is created. SnapSure supports two types of checkpoints:

Read-only checkpoint: A read-only file system created from a PFS.
Writeable checkpoint: A read/write file system created from a read-only checkpoint.

SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.

Note: Each writeable checkpoint is associated with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint.

For more detailed information, refer to Using VNX SnapSure.

VNX Virtual Provisioning

EMC VNX Virtual Provisioning enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage as needed. Thick LUNs provide high and predictable performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning. Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP, advanced snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and the User Capacity Threshold setting.

EMC VNX Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNX systems have the ability to rebalance allocated data elements across all member drives to use new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. Monitor the progress of a rebalance operation from the General tab of the Pool Properties window in Unisphere, as shown in Figure 5.

36

37 Solution Technology Overview

Figure 5. Storage pool rebalance progress

LUN Expansion

Use pool LUN expansion to increase the capacity of existing LUNs. It allows for provisioning larger capacity as business needs grow. The VNX family has the capability to expand a pool LUN without disrupting user access. Pool LUN expansion can be done with a few simple clicks, and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded.

For more detailed information about pool LUN expansion, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology.

LUN Shrink

Use LUN shrink to reduce the capacity of existing thin LUNs. VNX has the capability of shrinking a pool LUN. This capability is only available for LUNs served by Windows Server 2008 and later. The shrinking process has two steps:

1. Shrink the file system from Windows Disk Management.
2. Shrink the pool LUN using a command window and the DISKRAID utility. The utility is available through the VDS Provider, which is part of the EMC Solutions Enabler package.

The new LUN size appears as soon as the shrink process is complete. A background task reclaims the deleted or shrunk space and returns it to the storage pool. Once the task is completed, any other LUN in that pool can use the reclaimed space.

37

38 Solution Technology Overview

For more detailed information about thin LUN expansion, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology.

User Alerting through Capacity Threshold Setting

Customers must configure proactive alerts when using a file system or storage pools based on thin pools. Monitor these resources so that storage is available to be provisioned when needed and capacity shortages can be avoided. Figure 6 explains why provisioning with thin pools requires monitoring.

Figure 6. Thin LUN space utilization

Monitor the following values for thin pool utilization:

Total capacity is the total physical capacity available to all LUNs in the pool.
Total allocation is the total physical capacity currently assigned to all pool LUNs.
Subscribed capacity is the total host-reported capacity supported by the pool.
Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in a pool.

Total allocation may never exceed the total capacity, but if it nears that point, add storage to the pools proactively before reaching a hard limit.

38

39 Solution Technology Overview

Figure 7 shows the Storage Pool Properties dialog box in Unisphere, which displays parameters such as Free Capacity, Percent Full, Total Allocation, Total Subscription, Percent Subscribed, and Oversubscribed By Capacity.

Figure 7. Examining storage pool space utilization

When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation, monitor pool utilization and alert when thresholds are reached. Set the Percentage Full Threshold to allow enough buffer to remediate before an outage occurs. Adjust this setting by clicking the Advanced tab of the Storage Pool Properties dialog, as seen in Figure 8. This alert is only active if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool only contains thick LUNs, the alert is not active, as there is no risk of running out of space due to oversubscription. You can also specify the value for Percent Full Threshold, which equals Total Allocation/Total Capacity, when a pool is created.

39
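A minimal sketch of the arithmetic behind these Unisphere values, using the definitions above (Percent Full = Total Allocation / Total Capacity) and the default thresholds from Table 2 (70 percent user-settable warning, 85 percent built-in critical). The function name and return structure are illustrative assumptions, not part of Unisphere or any EMC API.

```python
def pool_utilization(total_capacity_gb, total_allocation_gb, subscribed_capacity_gb,
                     user_threshold_pct=70, builtin_threshold_pct=85):
    """Compute the pool metrics discussed above and compare them against the
    VNX OE Block Release 32 default thresholds (see Table 2)."""
    percent_full = 100.0 * total_allocation_gb / total_capacity_gb
    percent_subscribed = 100.0 * subscribed_capacity_gb / total_capacity_gb
    oversubscribed_by_gb = max(0.0, subscribed_capacity_gb - total_capacity_gb)

    if percent_full >= builtin_threshold_pct:
        alert = "Critical (built-in 85% threshold reached)"
    elif percent_full >= user_threshold_pct:
        alert = f"Warning (user-settable {user_threshold_pct}% threshold reached)"
    else:
        alert = "None"

    return {
        "percent_full": round(percent_full, 1),
        "percent_subscribed": round(percent_subscribed, 1),
        "oversubscribed_by_gb": oversubscribed_by_gb,
        "alert": alert,
    }

# Hypothetical example: a 10 TB pool, about 7.6 TB allocated, 14 TB presented
# to hosts through thin LUNs (so the pool is oversubscribed by roughly 4 TB).
print(pool_utilization(10240, 7782, 14336))
```

The example crosses the 70 percent warning line but not the 85 percent critical line, which is exactly the window in which the text above recommends adding capacity proactively.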

40 Solution Technology Overview

Figure 8. Defining storage pool utilization thresholds

View alerts by using the Alert tab in Unisphere. Figure 9 shows the Unisphere Event Monitor Wizard, where you can also select the option of receiving alerts through email, a paging service, or an SNMP trap.

Figure 9. Defining automated notifications (for block)

40

41 Solution Technology Overview

Table 2 displays information about thresholds and their settings under VNX OE Block Release 32.

Table 2. Thresholds and settings under VNX OE Block Release 32

Threshold Type | Threshold Range | Threshold Default | Alert Severity | Side Effect
User settable  | 1%-84%          | 70%               | Warning        | None
Built-in       | N/A             | 85%               | Critical       | Clears user settable alert

Allowing total allocation to exceed 90 percent of total capacity puts you at risk of running out of space and impacting all applications that use thin LUNs in the pool.

VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables Flash drives to function as an expanded cache layer for the array. FAST Cache is an array-wide, non-disruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments, and subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of highly active data to Flash drives. This dramatically improves the response time for the active data and reduces data hot spots that can occur within a LUN. The FAST Cache feature is an optional component of this solution.

VNX FAST VP

VNX FAST VP, a part of the VNX FAST Suite, can automatically tier data across multiple types of drives to use differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is part of a regularly scheduled maintenance operation.

vcloud Networking and Security 5.1

VMware vshield Edge, App, and Data Security capabilities have been integrated and enhanced in vcloud Networking and Security 5.1, which is part of the VMware vcloud Suite. VSPEX Private Cloud solutions with VMware vcloud Networking and Security enable customers to adopt virtualized networks that eliminate the rigidity and complexity associated with physical equipment that creates artificial barriers to operating an optimized network architecture. Physical networking has not kept pace with the virtualization of the datacenter, and it limits the ability of businesses to rapidly deploy, move, scale, and protect applications and data according to business needs.

VSPEX with VMware vcloud Networking and Security solves these datacenter challenges by virtualizing networks and security to create efficient, agile, extensible logical constructs that meet the performance and scale requirements of virtualized datacenters. vcloud Networking and Security delivers software-defined networks and security with a broad range of services in a single solution, and includes a virtual firewall, virtual private network (VPN), load balancing, and VXLAN-extended networks. Management integration with VMware vcenter Server and VMware vcloud Director reduces the cost and complexity of datacenter operations and unlocks the operational efficiency and agility of private cloud computing.

41

42 Solution Technology Overview

VSPEX for Virtualized Applications can also take advantage of vcloud Networking and Security features. VSPEX allows businesses to virtualize Microsoft applications. With VMware vcloud, these applications gain protection and isolation from risk, because administrators have greater visibility into virtual traffic flows and can enforce policies and implement compliance controls on in-scope systems through logical grouping and virtual firewalls.

Administrators deploying virtual desktops with VSPEX End User Computing with VMware vsphere and View can also benefit from vcloud Networking and Security by creating logical security around individual virtual desktops or groups of virtual desktops. This ensures that the users of those machines deployed on the VSPEX Proven Infrastructure can only access the applications and data they are authorized to use, preventing broader access to the datacenter. vcloud also enables rapid diagnosis of traffic and potential trouble spots. Administrators can effectively create software-defined networks that scale and move virtual workloads within their VSPEX Proven Infrastructures without physical networking or security constraints, all of which can be streamlined via VMware vcenter and VMware vcloud Director integration.

VNX file shares

In many environments, it is important to have a common location to store files accessed by many different individuals. This is implemented as CIFS or NFS file shares from a file server. The VNX family of storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency improvement features.

ROBO

A Remote Office/Branch Office (ROBO) environment is typically an edge-core topology where edge nodes are deployed at remote sites to provide local computing resources.

Note: For detailed steps on how to build a ROBO data protection solution with an EMC VNX system at the core and EMC VNXe systems at the edges, refer to Deployment Guide: Data Protection in a ROBO Environment with EMC VNX and VNXe Series Arrays.

Backup and recovery

Overview

Backup and recovery is another important component in this VSPEX solution, which provides data protection by backing up data files or volumes on a defined schedule, and restoring data from backup for recovery after a disaster.

VMware vsphere data protection

vsphere Data Protection (VDP) is a proven solution for backing up and restoring VMware virtual machines. VDP is built on EMC's award-winning Avamar product and has many integration points with vsphere, providing simple discovery of your virtual machines and efficient policy creation. One of the challenges that traditional systems have with virtual machines is the large amount of data that these files contain. VDP's use of a variable-length deduplication algorithm ensures that a minimum amount of disk space is used and reduces ongoing backup storage growth. Data is deduplicated across all virtual machines associated with the VDP virtual appliance. VDP uses vstorage APIs for Data Protection (VADP), which sends only the daily changed blocks of data, resulting in only a fraction of the data being sent over the network. VDP enables up to eight virtual machines to be backed up concurrently and

42

43 Solution Technology Overview

because VDP resides in a dedicated virtual appliance, all the backup processes are offloaded from the production virtual machines.

VDP can alleviate the burden of restore requests on administrators by enabling end users to restore their own files using a web-based tool called vsphere Data Protection Restore Client. Users can browse their system's backups in an easy-to-use interface that has search and version control. The users can restore individual files or directories without any intervention from IT, freeing up valuable time and resources and resulting in a better end-user experience. Smaller deployments of VSPEX Proven Infrastructure can also use VDP.

vsphere Data Protection (VDP) deploys as a virtual appliance with 4 processors (vcpus) and 4 GB of RAM. Three configurations of usable backup storage capacity are available: 0.5 TB, 1 TB, and 2 TB, which consume 850 GB, 1300 GB, and 3100 GB of actual storage capacity respectively. Proper planning should be performed to ensure correct sizing, because additional storage capacity cannot be added after the appliance is deployed. Storage capacity requirements are based on the number of virtual machines being backed up, the amount of data, retention periods, and typical data change rates. VSPEX Proven Infrastructures are sized based on Reference virtual machines, and therefore any deployment of VDP should be considered as part of sizing.

vsphere replication

vsphere Replication is a feature of the vsphere platform that provides business continuity. vsphere Replication copies a virtual machine defined in your VSPEX Infrastructure to a second instance of VSPEX, or within the clustered servers in a single VSPEX. vsphere Replication continues to protect the virtual machine on an ongoing basis and replicates the changes to the copied virtual machine. This ensures that the virtual machine remains protected and is available for recovery without requiring a restore from backup. Replicate application virtual machines defined in VSPEX to ensure application-consistent data with a single click when replication is set up.

Administrators who are managing VSPEX for virtualized Microsoft applications can use vsphere Replication's automatic integration with Microsoft's Volume Shadow Copy Service (VSS) to ensure that applications such as Microsoft Exchange or Microsoft SQL Server databases are quiescent and consistent when replica data is being generated. A very quick call to the virtual machine's VSS layer flushes the database writers for an instant to ensure that the data replicated is static and fully recoverable. This automated approach simplifies the management and increases the efficiency of your VSPEX-based virtual environment.

EMC RecoverPoint

EMC RecoverPoint is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. RecoverPoint runs on a dedicated appliance (RPA) and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology, allowing it to protect data locally (continuous data protection, CDP), remotely (continuous remote replication, CRR), or both (CLR). RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away, and transfers the data by Fibre Channel (FC). RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site using techniques that preserve write-order.

43
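Returning to the VDP capacity options quoted earlier in this section (0.5 TB, 1 TB, and 2 TB of usable backup storage, consuming roughly 850 GB, 1300 GB, and 3100 GB of actual capacity), a simple planning check can be sketched as follows. Because capacity cannot be grown after deployment, the sketch picks the smallest configuration that covers an estimated amount of deduplicated backup data; the estimate itself is a placeholder for a real assessment of VM count, data volume, change rate, and retention.

```python
# (usable backup TB, actual datastore consumption in GB) for the three VDP
# appliance configurations described above; capacity cannot be added later.
VDP_CONFIGS = [(0.5, 850), (1.0, 1300), (2.0, 3100)]

def choose_vdp_config(estimated_dedup_backup_tb):
    """Pick the smallest VDP configuration whose usable capacity covers the
    estimated deduplicated backup data, or None if VDP is too small."""
    for usable_tb, actual_gb in VDP_CONFIGS:
        if estimated_dedup_backup_tb <= usable_tb:
            return usable_tb, actual_gb
    return None

estimate_tb = 0.8  # placeholder estimate, not a validated figure
config = choose_vdp_config(estimate_tb)
if config:
    usable_tb, actual_gb = config
    print(f"Deploy the {usable_tb} TB VDP appliance "
          f"(consumes about {actual_gb} GB of actual datastore capacity).")
else:
    print("Estimated backup data exceeds 2 TB; consider a larger backup "
          "solution such as Avamar.")
```

Remember to account for the appliance's own footprint (4 vcpus, 4 GB of RAM, and the actual storage consumption shown above) when applying the sizing worksheets in Chapter 4.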

44 Solution Technology Overview

In a CLR configuration, RecoverPoint replicates to both a local and a remote site simultaneously. RecoverPoint uses lightweight splitting technology on the application server, in the fabric, or in the array, to mirror application writes to the RecoverPoint cluster. RecoverPoint supports several types of write splitters:

Array-based
Intelligent fabric-based
Host-based

EMC Avamar

EMC Avamar data deduplication technology seamlessly integrates into virtual environments, providing rapid backup and restoration capabilities. Avamar's deduplication results in less data transmission across the network, and greatly reduces the amount of data being backed up and stored, to achieve storage, bandwidth, and operational savings.

Two of the most common recovery requests made to backup administrators are:

File-level recovery: Object-level recoveries account for the vast majority of user support requests. Common actions requiring file-level recovery are individual users deleting files, applications requiring recoveries, and batch process-related erasures.

System recovery: Although complete system recovery requests are less frequent in number than those for file-level recovery, this bare-metal restore capability is vital to the enterprise. Some common root causes for full system recovery requests are viral infestation, registry corruption, or unidentifiable unrecoverable issues.

Avamar's functionality, along with VMware implementations, adds new capabilities for backup and recovery in both of these scenarios. Key capabilities added in VMware, such as vstorage API integration and changed block tracking (CBT), enable the Avamar software to protect the virtual environment more efficiently. Leveraging CBT for both backup and recovery with virtual proxy server pools minimizes management needs. Coupled with Data Domain as the storage platform for image data, this solution enables the most efficient integration with two of the industry-leading next-generation backup appliances.

Other technologies

Overview

In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to, the following technologies.

VMware vcloud Director

VMware vcloud Director, which is part of the vcloud Suite 5.1, orchestrates the provisioning of software-defined datacenter services as complete virtual datacenters that are ready for consumption in a matter of minutes. vcloud Director is a software solution that enables customers to build secure, multi-tenant private clouds by

44

45 Solution Technology Overview

pooling infrastructure resources from VSPEX into virtual datacenters and exposing them to users through Web-based portals and programmatic interfaces as fully automated, catalog-based services. VMware vcloud Director uses pools of resources abstracted from the underlying physical resources in VSPEX to enable automated deployment of virtual resources when and where required.

VSPEX with vcloud Director enables customers to build out complete virtual datacenters delivering compute, networking, storage, security, and a complete set of services necessary to make workloads operational in minutes. Software-defined datacenter services and virtual datacenters fundamentally simplify infrastructure provisioning, and enable IT to move at the speed of business.

VMware vcloud Director integrates with existing or new VSPEX VMware vsphere Private Cloud deployments and supports existing and future applications by providing elastic standard storage and networking interfaces, such as Layer-2 connectivity and broadcasting between virtual machines. VMware vcloud Director uses open standards to preserve deployment flexibility and pave the way to the hybrid cloud.

The key features of VMware vcloud Director include:

Snapshot and revert capabilities
Integrated vsphere Profiles
Security (vcenter Single Sign-On)
Fast Provisioning
vapp catalogs
Isolated multi-tenant organizations
Self-service web portal
VMware vcloud API (OVF and customer extensibility)

All VSPEX Proven Infrastructures can use vcloud Director to orchestrate deployment of virtual datacenters based on single-VSPEX or multi-VSPEX deployments. These infrastructures enable simple and efficient deployment of virtual machines, applications, and virtual networks to implement fenced-off private infrastructures within a VSPEX instance.

VMware vcenter Operations Management Suite (vc OPs)

The VMware vcenter Operations Management Suite provides unparalleled visibility into one's VSPEX virtual environments. vc OPs collects and analyzes data, correlates abnormalities, identifies the root cause of performance problems, and provides administrators with the information to optimize and tune their VSPEX virtual infrastructures. vcenter Operations Manager provides an automated approach to optimizing your VSPEX-powered virtual environment by delivering self-learning analytic tools that are integrated to provide better performance, capacity usage, and configuration management.

vcenter Operations Management Suite delivers a comprehensive set of management capabilities, including:

Performance
Capacity

45

46 Solution Technology Overview

Change
Configuration and compliance management
Application discovery and monitoring
Cost metering

vcenter Operations Management Suite includes five components: VMware vcenter Operations Manager, VMware vcenter Configuration Manager, VMware vfabric Hyperic, VMware vcenter Infrastructure Navigator, and VMware vcenter Chargeback Manager.

VMware vcenter Operations Manager is the foundation of the suite and provides the operational dashboard interface that makes visualizing issues in your VSPEX virtual environment simple. vfabric Hyperic monitors physical hardware resources, operating systems, middleware, and applications that you may have deployed on VSPEX. vcenter Infrastructure Navigator provides visibility into the application services running over the virtual-machine infrastructure and their interrelationships for day-to-day operational management. vcenter Chargeback Manager enables accurate cost measurement, analysis, and reporting of virtual machines, providing visibility into the cost of the virtual infrastructure that you have defined on VSPEX to support business services.

VMware vcenter Single Sign On (SSO)

With the introduction of VMware vcenter Single Sign-On (SSO) in VMware vsphere 5.1, administrators now have a deeper level of available authentication services for managing their VSPEX Proven Infrastructures. Authentication by vcenter Single Sign-On makes the VMware cloud infrastructure platform more secure. This function allows the vsphere software components to communicate with each other through a secure token exchange mechanism, instead of requiring each component to authenticate a user separately with a directory service such as Active Directory.

When users log in to the vsphere Web Client with a user name and password, the vcenter Single Sign-On server receives their credentials. The credentials are then authenticated against the back-end identity source(s) and exchanged for a security token, which is returned to the client to access the solutions within the environment. Single sign-on translates into time and cost savings which, when factored across the entire organization, result in streamlined workflows.

New in vsphere 5.1, users have a single pane-of-glass view of their entire vcenter Server environment, because multiple vcenter servers and their inventories are now displayed. This does not require Linked Mode unless users share roles, permissions, and licenses among vsphere 5.x vcenter servers. Administrators can now deploy multiple solutions within an environment with true single sign-on that creates trust between solutions without requiring authentication every time a user accesses the solution.

VSPEX Private Cloud with VMware vsphere 5.1 is simple, efficient, and flexible. VMware SSO makes authentication simpler, workers can be more efficient, and administrators have the flexibility to make Single Sign-On servers local or global.

46

47 Solution Technology Overview PowerPath/VE (for block) EMC PowerPath/VE for VMware vsphere is a multipathing extensions module for vsphere that provides software that works in combination with SAN storage to intelligently manage FC, iscsi, and Fiber Channel over Ethernet (FCoE) I/O paths. PowerPath/VE is installed on the vsphere host and will scale to the maximum number of virtual machines on the host, improving I/O performance. The virtual machines do not have PowerPath/VE installed nor are they aware that PowerPath/VE is managing I/O to storage. PowerPath/VE dynamically balances I/O load requests and automatically detects, and recovers from path failures. EMC XtremSW Cache EMC XtremSW Cache, is a server Flash caching solution that reduces latency and increases throughput to improve application performance by using intelligent caching software and PCIe Flash technology. Server-side Flash caching for maximum speed XtremSW Cache performs the following functions to improve system performance: Caches the most frequently referenced data on the server-based PCIe card to put the data closer to the application. Automatically adapts to changing workloads by determining the most frequently referenced data and promoting it to the server Flash card. This means that the hottest data (most active data) automatically resides on the PCIe card in the server for faster access. Offloads the read traffic from the storage array, which allocates greater processing power to other applications. While one application accelerates with XtremSW Cache, the array performance for other applications remains the same or slightly enhanced. Write-through caching to the array for total protection XtremSW Cache accelerates reads and protects data by using a write-through cache to the storage to deliver persistent high-availability, integrity, and disaster recovery. Application agnostic XtremSW Cache is transparent to applications; do not rewrite, retest or recertify to deploy XtremSW Cache in the environment. Integration with vsphere XtremSW Cache enhances both virtualized and physical environments. Integration with the VSI plug-in to VMware vsphere vcenter simplifies the management and monitoring of XtremSW Cache. Minimal impact on system resources Unlike other caching solutions on the market, XtremSW Cache does not require a significant amount of memory or CPU cycles, as all Flash and wear-leveling management is done on the PCIe card without using server resources. Unlike other PCIe solutions, there is no significant overhead from using XtremSW Cache on server resources. 47

48 Solution Technology Overview XtremSW Cache creates the most efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments. XtremSW Cache active/passive clustering support The configuration of XtremSW Cache clustering scripts ensures that stale data is never retrieved. The scripts use cluster management events to trigger a mechanism that purges the cache. The XtremSW Cache-enabled active/passive cluster ensures data integrity and accelerates application performance. XtremSW Cache performance considerations XtremSW Cache performance considerations are: On a write request, XtremSW Cache first writes to the array, then to the cache, and then completes the application I/O. On a read request, XtremSW Cache satisfies the request with cached data, or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be in the order of milliseconds; therefore, the array limits how fast the cache can work. As the number of writes increases, XtremSW Cache performance decreases. XtremSW Cache is the most effective for workloads with a 70 percent, or more, read/write ratio, with small, random I/O (8 K is ideal). I/O greater than 128 K is not cached in XtremSW Cache 1.5. Note For more information, refer to XtremSW Cache Installation and Administration Guide v1.5 48

49 Chapter 4 Solution Architecture Overview This chapter presents the following topics: Overview Solution architecture Server configuration guidelines Network configuration guidelines Storage configuration guidelines High-availability and failover Validation test profile Backup and recovery configuration guidelines Sizing guidelines Reference workload Applying the reference workload Implementing the solution Quick assessment

50 Solution Architecture Overview Overview Solution architecture VSPEX Proven Infrastructure solutions are built with proven best-of-breed technologies to create a complete virtualization solution that enables you to make an informed decision when choosing and sizing the hypervisor, compute, and networking layers. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC. VSPEX accelerates your IT Transformation to cloudbased computing by enabling faster deployment, more choice, higher efficiency, and lower risk. This chapter is a comprehensive guide to the major aspects of this solution. Server capacity is presented in generic terms for required minimums of CPU, memory, and network resources; the customer is free to select the server and networking hardware that meet or exceed the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a pre-defined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. Overview The VSPEX solution for VMware vsphere Private Cloud with EMC VNX validates at three different points of scale, one configuration with up to 125 virtual machines, one configuration with up to 250 virtual machines, and one configuration with up to 500 virtual machines. The defined configurations form the basis of creating a custom solution. Note VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This document describes the process in Applying the reference workload. 50

Logical Architecture
The architecture diagrams in this section show the layout of the major components in the solutions. The following diagrams cover the two storage variants: block-based and file-based.
Figure 10 characterizes the infrastructure validated with block-based storage, where an 8 Gb FC/FCoE or 10 Gb iSCSI SAN carries storage traffic, and 10 GbE carries management and application traffic.
Figure 10. Logical architecture for block storage

Figure 11 characterizes the infrastructure validated with file-based storage, where 10 GbE carries storage traffic and all other traffic.
Figure 11. Logical architecture for file storage
Key components
This architecture includes the following key components:
VMware vSphere 5.1 - Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 3. vSphere 5.1 provides a highly available infrastructure through features such as:
vMotion - Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption.
Storage vMotion - Provides live migration of virtual machine disk files within and across storage arrays, with no virtual machine downtime or service disruption.
vSphere High-Availability (HA) - Detects and provides rapid recovery for a failed virtual machine in a cluster.
Distributed Resource Scheduler (DRS) - Provides load balancing of computing capacity in a cluster.
Storage Distributed Resource Scheduler (SDRS) - Provides load balancing across multiple datastores based on space usage and I/O latency.
VMware vCenter Server 5.1 - Provides a scalable and extensible platform that forms the foundation for virtualization management for the VMware vSphere 5.1 cluster. vCenter manages all vSphere hosts and their virtual machines.
SQL Server - VMware vCenter Server requires a database service to store configuration and monitoring details. This solution uses a Microsoft SQL Server 2008 R2 instance.

53 Solution Architecture Overview DNS Server Use DNS services for the various solution components to perform name resolution. This solution uses the Microsoft DNS Service running on Windows 2012 server. Active Directory Server Various solution components require Active Directory services to function properly. The Microsoft AD Service runs on a Windows Server 2012 server. Shared Infrastructure Add DNS and authentication/authorization services, such as AD Service, with existing infrastructure or set up as part of the new virtual infrastructure. IP Network A standard Ethernet network carries all network traffic with redundant cabling and switching. A shared IP network carries user and management traffic. Storage Network The Storage network is an isolated network that provides hosts with access to the storage array. VSPEX offers different options for block-based and file-based storage. Storage Network for Block: This solution provides three options for block based storage networks. Fibre Channel (FC) is a set of standards that define protocols for performing high speed serial data transfer. FC provides a standard data transport frame among servers and shared storage devices. Fibre Channel over Ethernet (FCoE) is a new storage networking protocol that supports FC natively over Ethernet, by encapsulating FC frames into Ethernet frames. This allows the encapsulated FC frames to run alongside traditional Internet Protocol (IP) traffic. 10 Gb Ethernet (iscsi) enables the transport of SCSI blocks over a TCP/IP network. iscsi works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network. Storage Network for File: With file-based storage, a private, non-routable 10 Gb subnet carries the storage traffic. VNX Storage Array The VSPEX private cloud configuration begins with the VNX family storage arrays, including: EMC VNX5300 array Provides storage to vsphere hosts for up to 125 virtual machines. EMC VNX5500 array Provides storage to vsphere hosts for up to 250 virtual machines. EMC VNX5700 array Provides storage to vsphere hosts for up to 500 virtual machines. 53

54 Solution Architecture Overview VNX family storage arrays include the following components: Storage processors (SPs) support block data with UltraFlex I/O technology that supports Fibre Channel, iscsi, and FCoE protocols The SPs provide access for all external hosts, and for the file side of the VNX array. Disk Processor Enclosure (DPE) is 3U in size, and houses the SPs and the first tray of disks. VNX5300 and VNX5500 use this component. Storage Processor Enclosure (SPE) is 2U in size and includes SPs, two power supplies, and fan packs. VNX5700 and VNX7500 use this component, and support a maximum of 500 and 1,000 drives respectively. X-Blades (or Data Movers) access data from the back-end and provide host access using the same UltraFlex I/O technology that supports the NFS, CIFS, MPFS, and pnfs protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists. The Data Mover Enclosure (DME) is 2U in size and houses the Data Movers (X- Blades). All VNX for File models use DME. Standby power supply (SPS) is 1U in size and provides enough power to each SP to ensure that any data in flight de-stages to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes reconcile and persist. Control Station is 1 U in size and provides management functions to the X- Blades. The Control Station is responsible for X-Blade failover. An optional secondary Control Station ensures redundancy on the VNX array. Disk Array Enclosures (DAE) house the drives used in the array. Hardware resources Table 3 lists the hardware used in this solution. Table 3. Component VMware vsphere servers Solution hardware CPU Configuration 1 vcpu per virtual machine 4 vcpus per physical core For 125 virtual machines: 125 vcpus Minimum of 32 physical CPUs For 250 virtual machines: 250 vcpus Minimum of 63 physical CPUs For 500 virtual machines: 500 vcpus Minimum of 125 physical CPUs Memory 2 GB RAM per virtual machine 2 GB RAM reservation per VMware vsphere host 54

55 Solution Architecture Overview Component Configuration For 125 virtual machines: Minimum of 250 GB RAM Add 2 GB for each physical Server For 250 virtual machines: Minimum of 500 GB RAM Add 2 GB for each physical Server For 500 virtual machines: Minimum of 1000 GB RAM Add 2 GB for each physical Server Network Block 2 x 10 GbE NICs per server 2 HBA per server File 4 x 10 GbE NICs per server Note Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vsphere High-Availability (HA) functionality and to meet the listed minimums. Network infrastructure Minimum switching capacity Block 2 physical switches 2 x 10 GbE ports per VMware vsphere server 1 x 1 GbE port per Control Station for management 2 ports per VMware vsphere server, for storage network 2 ports per SP, for storage data File 2 physical switches 4 x 10 GbE ports per VMware vsphere server 1 x 1 GbE port per Control Station for management 2 x 10 GbE ports per Data Mover for data EMC Next- Generation Backup Avamar 1 Gen4 utility node 1 Gen4 3.9 TB spare node For 125 virtual machines: 3 Gen4 3.9 TB Storage nodes For 250 virtual machines: 5 Gen4 3.9 TB Storage nodes For 500 virtual machines: 7 Gen4 3.9 TB Storage nodes 55

Data Domain:
For 125 virtual machines:
1 Data Domain DD640
1 ES30 with 15 x 1 TB HDDs
For 250 virtual machines:
1 Data Domain DD670
2 ES30 with 15 x 1 TB HDDs
For 500 virtual machines:
1 Data Domain DD670
4 ES30 with 15 x 1 TB HDDs
EMC VNX series storage array (block):
Common:
1 x 1 GbE interface per Control Station for management
1 x 1 GbE interface per SP for management
2 front-end ports per SP
system disks for VNX OE
For 125 virtual machines:
EMC VNX5300
60 x 600 GB 15k rpm 3.5-inch SAS drives
4 x 200 GB Flash drives
2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
For 250 virtual machines:
EMC VNX5500
115 x 600 GB 15k rpm 3.5-inch SAS drives
6 x 200 GB Flash drives
4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
For 500 virtual machines:
EMC VNX5700
225 x 600 GB 15k rpm 3.5-inch SAS drives
10 x 200 GB Flash drives
8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare

EMC VNX series storage array (file):
Common:
2 Data Movers (active/standby)
2 x 10 GbE interfaces per Data Mover
1 x 1 GbE interface per Control Station for management
1 x 1 GbE interface per SP for management
system disks for VNX OE
For 125 virtual machines:
EMC VNX5300
60 x 600 GB 15k rpm 3.5-inch SAS drives
4 x 200 GB Flash drives
2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
For 250 virtual machines:
EMC VNX5500
115 x 600 GB 15k rpm 3.5-inch SAS drives
6 x 200 GB Flash drives
4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
For 500 virtual machines:
EMC VNX5700
225 x 600 GB 15k rpm 3.5-inch SAS drives
10 x 200 GB Flash drives
8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
Shared infrastructure
In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document. If implemented without existing infrastructure, the new minimum requirements are:
2 physical servers
16 GB RAM per server
4 processor cores per server
2 x 1 GbE ports per server
Note: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

Note: The solution recommends a 10 Gb network; an equivalent 1 Gb network infrastructure may be used as long as the underlying requirements for bandwidth and redundancy are fulfilled.
Software resources
Table 4 lists the software used in this solution.
Table 4. Solution software
VMware vSphere:
vSphere server: 5.1 Enterprise Edition
vCenter Server: 5.1 Standard Edition
Operating system for vCenter Server: Windows Server 2008 R2 SP1 Standard Edition (Note: Any operating system that is supported for vCenter can be used.)
Microsoft SQL Server: Version 2008 R2 Standard Edition (Note: Any supported database for vCenter can be used.)
EMC VNX:
VNX OE for file Release
VNX OE for block
EMC VSI for VMware vSphere: Unified Storage Management
EMC VSI for VMware vSphere: Storage Viewer
EMC PowerPath/VE: 5.8
Next-generation backup:
Avamar: 6.1 SP1
Data Domain OS: 5.2
Virtual machines (used for validation, not required for deployment):
Base operating system: Microsoft Windows Server 2012 Datacenter Edition
Server configuration guidelines
Overview
When designing and ordering the compute/server layer of the VSPEX solution described below, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as memory ballooning and transparent page sharing can reduce the aggregate memory requirement.

If the virtual machine pool does not have a high level of peak or concurrent usage, you can reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and the amount of memory purchased.
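As a quick check of the compute minimums listed in Table 5 below, the following sketch applies the ratios used throughout this chapter: one vCPU per virtual machine at a 4:1 vCPU-to-physical-core ratio, plus 2 GB of RAM per virtual machine and a 2 GB reservation per vSphere host. The function name is illustrative only and not part of any EMC tool.

```python
import math

def minimum_compute(virtual_machines, vsphere_hosts):
    """Sizing arithmetic from the compute guidelines: one vCPU per virtual
    machine at a 4:1 vCPU-to-physical-core ratio, plus 2 GB RAM per virtual
    machine and a 2 GB reservation per vSphere host."""
    physical_cores = math.ceil(virtual_machines / 4)   # 4 vCPUs per physical core
    ram_gb = virtual_machines * 2 + vsphere_hosts * 2  # 2 GB per VM + 2 GB per host
    return physical_cores, ram_gb

# The three validated scale points reproduce the stated minimums of
# 32, 63, and 125 physical cores; add 2 GB of RAM per host once the
# physical server count is known.
for vms in (125, 250, 500):
    cores, ram = minimum_compute(vms, vsphere_hosts=0)
    print(vms, "VMs:", cores, "physical cores,", ram, "GB RAM before per-host reservations")
```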

60 Solution Architecture Overview Table 5 lists the hardware resources used for compute. Table 5. Hardware resources for compute Component VMware vsphere servers CPU Memory Configuration 1 vcpu per virtual machine 4 vcpus per physical core For 125 virtual machines: 125 vcpus Minimum of 32 physical CPUs For 250 virtual machines: 250 vcpus Minimum of 63 physical CPUs For 500 virtual machines: 500 vcpus Minimum of 125 physical CPUs 2 GB RAM per virtual machine 2 GB RAM reservation per VMware vsphere host For 125 virtual machines: Minimum of 250 GB RAM Add 2 GB for each physical server For 250 virtual machines: Minimum of 500 GB RAM Add 2 GB for each physical server For 500 virtual machines: Minimum of 1000 GB RAM Add 2 GB for each physical server Network Block 2 x 10 GbE NICs per server 2 HBA per server File 4 x 10 GbE NICs per server Note Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vsphere High-Availability (HA) functionality and to meet the listed minimums. Note The solution recommends using a 10 Gb network or an equivalent 1Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled. VMware vsphere memory virtualization for VSPEX VMware vsphere 5.1 has a number of advanced features that help maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section describes some of these features, and the items to consider when using these features in the environment. 60

61 Solution Architecture Overview In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 12. Figure 12. Hypervisor memory consumption Understanding the technologies in this section enhances this basic concept. Memory compression Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vsphere host. Using sophisticated techniques, such as ballooning and transparent page sharing, vsphere can handle memory overcommitment without any performance degradation. However, if more memory than is present on the server is being actively used, vsphere might resort to swapping out portions of the memory of a virtual machine. 61

Non-Uniform Memory Access (NUMA)
vSphere uses a NUMA load-balancer to assign a home node to a virtual machine. Because the home node allocates virtual machine memory, memory access is local and provides the best possible performance. Applications that do not directly support NUMA also benefit from this feature.
Transparent page sharing
Virtual machines running similar operating systems and applications typically have similar sets of memory content. Page sharing enables the hypervisor to reclaim redundant copies of memory pages and keep only one copy, which reduces total host memory consumption. If most of your application virtual machines run the same operating system and application binaries, total memory usage can be reduced, increasing consolidation ratios.
Memory ballooning
By using a balloon driver loaded in the guest operating system, the hypervisor can reclaim host physical memory if memory resources are under contention, with little or no impact to the performance of the application.
Memory configuration guidelines
This section provides guidelines for allocating memory to virtual machines. The guidelines outlined here take into account vSphere memory overhead and the virtual machine memory settings.
vSphere memory overhead
Some overhead is associated with the virtualization of memory resources. The memory space overhead has two components:
The fixed system overhead for the VMkernel.
Additional overhead for each virtual machine.
Memory overhead depends on the number of virtual CPUs and the configured memory for the guest operating system.
Allocating memory to virtual machines
Many factors determine the proper sizing for virtual machine memory in VSPEX architectures. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments for optimal results.
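A minimal sketch of that overhead relationship is shown below. The overhead constants are assumed, illustrative placeholders rather than published VMware figures; consult the vSphere documentation for the exact per-VM overhead values.

```python
def vm_memory_footprint_gb(configured_gb, vcpus,
                           fixed_overhead_mb=100, overhead_mb_per_vcpu=30):
    """Configured guest memory plus a virtualization overhead that grows with
    the vCPU count. The overhead constants here are assumed values for
    illustration only, not published VMware numbers."""
    overhead_gb = (fixed_overhead_mb + vcpus * overhead_mb_per_vcpu) / 1024
    return configured_gb + overhead_gb

# Reference virtual machine in this solution: 1 vCPU, 2 GB RAM
print(round(vm_memory_footprint_gb(configured_gb=2, vcpus=1), 2))  # ~2.13 GB
```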

63 Solution Architecture Overview Network configuration guidelines Overview This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines outlined consider jumbo frames, VLANs, and LACP on EMC unified storage. For detailed network resource requirements, refer to Table 6. Table 6. Hardware resources for network Component Configuration Network infrastructure Minimum switching capacity Block 2 physical switches 2 x 10 GbE ports per VMware vsphere server 1 x 1 GbE port per Control Station for management 2 ports per VMware vsphere server, for storage network 2 ports per SP, for storage data File 2 physical switches 4 x 10 GbE ports per VMware vsphere server 1 x 1 GbE port per Control Station for management 2 x 10 GbE ports per Data Mover for data Note The solution may use 1 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled. VLAN Isolate network traffic so that the traffic between hosts and storage, hosts and clients, and management traffic all move over isolated networks. In some cases physical isolation may be required for regulatory or policy compliance reasons; but in many cases logical isolation with VLANs is sufficient. This solution calls for a minimum of three VLANs for the following usage: Client access Storage(for iscsi and NFS) Management 63

64 Solution Architecture Overview Figure 13 depicts the VLANs and the network connectivity requirements for a blockbased VNX array. Figure 13. Required networks for block storage 64

65 Solution Architecture Overview Figure 14 depicts the VLANs for file and the network connectivity requirements for a file-based VNX array. Figure 14. Required networks for file storage Note Figure 14 demonstrates the network connectivity requirements for a VNX array using 10 GbE connections. Create a similar topology for 1 GbE network connections. The client access network is for users of the system, or clients, to communicate with the infrastructure. The Storage Network provides communication between the compute layer and the storage layer. Administrators use the Management Network as a dedicated way to access the management connections on the storage array, network switches, and hosts. Note Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. Implement these additional networks if necessary. Enable jumbo frames (for iscsi and NFS) Link aggregation (for NFS) This solution recommends setting MTU at 9,000 (jumbo frames) for efficient storage and migration traffic. Refer to the switch vendor guidelines to enable jumbo frames on switch ports for storage and host ports on the switches. A link aggregation resembles an Ethernet channel, but uses LACP IEEE 802.3ad standard. The IEEE 802.3ad standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on VNX, combining multiple Ethernet ports into a 65

66 Solution Architecture Overview single virtual device. If a link is lost in the Ethernet port, the link fails over to another port. All network traffic is distributed across the active links. Storage configuration guidelines Overview This section provides guidelines for setting up the storage layer of the solution to provide high-availability and the expected level of performance. VMware vsphere allows more than one method of storage when hosting virtual machines. The tested solutions described below use different block protocols (FC/FCoE/iSCSI) and NFS (for file), and the storage layout described adheres to all current best practices. A customer or architect with the necessary training and background can make modifications based on their understanding of the system usage and load if required. However, the building blocks described in this document ensure acceptable performance. VSPEX storage building blocks document specific recommendations for customization. 66

Table 7 lists the hardware resources used for storage.
Table 7. Hardware resources for storage
EMC VNX series storage array (block):
Common:
1 x 1 GbE interface per Control Station for management
1 x 1 GbE interface per SP for management
2 front-end ports per SP
system disks for VNX OE
For 125 virtual machines:
EMC VNX5300
60 x 600 GB 15k rpm 3.5-inch SAS drives
4 x 200 GB Flash drives
2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
For 250 virtual machines:
EMC VNX5500
115 x 600 GB 15k rpm 3.5-inch SAS drives
6 x 200 GB Flash drives
4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
For 500 virtual machines:
EMC VNX5700
225 x 600 GB 15k rpm 3.5-inch SAS drives
10 x 200 GB Flash drives
8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare

EMC VNX series storage array (file):
Common:
2 Data Movers (active/standby)
2 x 10 GbE interfaces per Data Mover
1 x 1 GbE interface per Control Station for management
1 x 1 GbE interface per SP for management
system disks for VNX OE
For 125 virtual machines:
EMC VNX5300
60 x 600 GB 15k rpm 3.5-inch SAS drives
4 x 200 GB Flash drives
2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
For 250 virtual machines:
EMC VNX5500
115 x 600 GB 15k rpm 3.5-inch SAS drives
6 x 200 GB Flash drives
4 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
For 500 virtual machines:
EMC VNX5700
225 x 600 GB 15k rpm 3.5-inch SAS drives
10 x 200 GB Flash drives
8 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
1 x 200 GB Flash drive as a hot spare
VMware vSphere storage virtualization for VSPEX
VMware ESXi provides host-level storage virtualization: it virtualizes the physical storage and presents the virtualized storage to the virtual machines. A virtual machine stores its operating system, and all the other files related to its activities, in a virtual disk. The virtual disk itself is one or more files. VMware uses a virtual SCSI controller to present virtual disks to the guest operating system running inside the virtual machine. A datastore is where virtual disks reside. Depending on the protocol used, it can be either a VMware VMFS datastore or an NFS datastore. An additional option, RDM, allows the virtual infrastructure to connect a physical device directly to a virtual machine.

Figure 15. VMware virtual disk types
VMFS
VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or network storage.
Raw Device Mapping (RDM)
VMware also provides RDM, which allows a virtual machine to directly access a volume on the physical storage. Only use RDM with FC or iSCSI.
NFS
VMware supports using NFS file systems from an external NAS storage system or device as a virtual machine datastore.
VSPEX storage building blocks
Sizing the storage system to meet virtual server IOPS is a complicated process. When I/O reaches the storage array, several components, such as the Data Mover (for file-based storage), SPs, back-end dynamic random access memory (DRAM) cache, FAST Cache (if used), and disks, serve that I/O. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.
VSPEX uses a building block approach to reduce complexity. A building block is a set of disk spindles that can support a certain number of virtual servers in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the private cloud environment. Each building block storage pool, regardless of its size, contains two Flash drives with FAST VP storage tiering to enhance metadata operations and performance.
Building block for 10 virtual servers
The first building block contains up to 10 virtual servers, with two Flash drives and five SAS drives in a storage pool, as shown in Figure 16.

70 Solution Architecture Overview Figure 16. Storage layout building block for 10 virtual machines This is the smallest building block qualified for the VSPEX architecture. This building block can be expanded by adding five SAS drives and allowing the pool to restripe to add support for 10 more virtual servers. For details about pool expansion and restriping, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology. Building block for 50 virtual servers The second building block can contain up to 50 virtual servers. It contains two Flash drives, and 25 SAS drives, as shown in Figure 17. Figure 17. Storage layout building block for 50 virtual machines Implement this building block by placing all of the drives into a pool initially, or start with a 10 virtual server building block, and then expand the pool by adding five SAS drives and allowing the pool to restripe. For details about pool expansion and restriping, refer to White Paper: EMC VNX Virtual Provisioning Applied Technology. Building block for 100 virtual servers The third building block can contain up to 100 virtual servers. It contains 2 Flash drives, and 45 SAS drives, as shown in Figure 18. The preceding sections outline an approach to grow from 10 virtual machines in a pool to 100 virtual machines in a pool. However, after reaching 100 virtual machines in a pool, do not go to 110. Create a new pool and start the scaling sequence again. Figure 18. Storage layout building block for 100 virtual machines 70

Implement this building block with all of the resources in the pool initially, or expand the pool over time as the environment grows. Table 8 lists the Flash and SAS drive requirements in a pool for different numbers of virtual servers.
Table 8. Number of disks required for different numbers of virtual machines
Virtual servers / Flash drives / SAS drives
10 / 2 / 5
50 / 2 / 25
100 / 2 / 45*
* Note: Due to increased efficiency with larger stripes, the building block with 45 SAS drives can support up to 100 virtual servers.
To grow the environment beyond 100 virtual servers, create another storage pool using the building block method described here.
VSPEX private cloud validated maximums
VSPEX private cloud configurations are validated on the VNX5300, VNX5500, and VNX5700 platforms. Each platform has different capabilities in terms of processors, memory, and disks. For each array, there is a recommended maximum VSPEX private cloud configuration. In addition to the VSPEX private cloud building blocks, each storage array must contain the drives used for the VNX Operating Environment, and hot spare disks for the environment.
Note: Allocate at least one hot spare for every 30 disks of a given type and size.
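The per-array layouts that follow can be approximated from these rules. The sketch below, with an illustrative helper name, composes pools from the building blocks described above and applies the hot-spare rule of thumb; it is a planning aid under those stated assumptions, not an EMC sizing tool.

```python
import math

def building_block_drives(virtual_servers):
    """Approximate the pool layout implied by the VSPEX building blocks:
    pools hold at most 100 reference virtual machines, each pool gets two
    Flash drives for FAST VP, and SAS drives are added five at a time per
    10 virtual servers (45 SAS drives cover a full 100-VM pool).
    The hot-spare math follows the one-per-30-drives rule of thumb."""
    full_pools, remainder = divmod(virtual_servers, 100)
    sas = full_pools * 45
    flash = full_pools * 2
    if remainder:
        sas += min(5 * math.ceil(remainder / 10), 45)
        flash += 2
    hot_spare_sas = max(1, math.ceil(sas / 30))
    hot_spare_flash = max(1, math.ceil(flash / 30))
    return {"sas": sas, "flash": flash,
            "sas_hot_spares": hot_spare_sas, "flash_hot_spares": hot_spare_flash}

print(building_block_drives(125))   # roughly the VNX5300 layout: 60 SAS, 4 Flash
print(building_block_drives(500))   # roughly the VNX5700 layout: 225 SAS, 10 Flash
```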

VNX5300
The VNX5300 is validated for up to 125 virtual servers. There are multiple ways to achieve this configuration with the building blocks; Figure 19 shows one potential configuration.
Figure 19. Storage layout for 125 virtual machines using VNX5300
This configuration uses the following storage layout:
Sixty 600 GB SAS drives are allocated to two block-based storage pools: one pool with 45 SAS drives for 100 virtual machines, and one pool with 15 SAS drives for 25 virtual machines.
Note: The pools do not use system drives for additional storage. If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool must be 15k rpm and the same size. Storage layout algorithms may produce suboptimal results with drives of different sizes.
Four 200 GB Flash drives are configured for FAST VP, two for each pool.
Three 600 GB SAS drives are configured as hot spares.
One 200 GB Flash drive is configured as a hot spare.
Enable FAST VP to automatically tier data to use differences in performance and capacity. FAST VP:
Works at the block storage pool level and automatically adjusts where data is stored based on access frequency.
Promotes frequently accessed data to higher tiers of storage in 1 GB increments and migrates infrequently accessed data to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is part of a regularly scheduled maintenance operation.
For block, allocate at least two LUNs to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.

For file, allocate at least two NFS shares to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.
Optionally, configure up to 10 Flash drives in the array FAST Cache. LUNs or storage pools where virtual machines with a higher-than-average I/O requirement reside can benefit from the FAST Cache feature. These drives are an optional part of the solution, and additional licenses may be required to use the FAST Suite.
Using this configuration, the VNX5300 can support 125 virtual servers as defined in Reference workload.
VNX5500
The VNX5500 is validated for up to 250 virtual servers. There are multiple ways to achieve this configuration with the building blocks; Figure 20 shows one potential configuration.
Figure 20. Storage layout for 250 virtual machines using VNX5500
There are several other ways to achieve this scale using the building blocks above; this is simply one example. This configuration uses the following storage layout:
One hundred and fifteen 600 GB SAS drives are allocated to three block-based storage pools: two pools each with 45 SAS drives for 100 virtual machines, and one pool with 25 SAS drives for 50 virtual machines.

Note: The pools do not use system drives for additional storage. If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool need to be 15k rpm and the same size. If drives of different sizes are used, storage layout algorithms may give suboptimal results.
Six 200 GB Flash drives are configured for FAST VP, two for each pool.
Four 600 GB SAS drives are configured as hot spares.
One 200 GB Flash drive is configured as a hot spare.
Enable FAST VP to automatically tier data to use differences in performance and capacity. FAST VP:
Works at the block storage pool level and automatically adjusts where data is stored based on access frequency.
Promotes frequently accessed data to higher tiers of storage in 1 GB increments, and migrates infrequently accessed data to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is part of a regularly scheduled maintenance operation.
For block, allocate at least two LUNs to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.
For file, allocate at least two NFS shares to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.
Optionally, configure up to 20 Flash drives in the array FAST Cache. These drives are not a required part of the solution, and additional licenses may be required to use the FAST Suite.
Using this configuration, the VNX5500 can support 250 virtual servers as defined in Reference workload.

VNX5700
The VNX5700 can scale to 500 virtual servers. There are multiple ways to achieve this configuration with the building blocks; Figure 21 shows one way to achieve that level of scale.
Figure 21. Storage layout for 500 virtual machines using VNX5700
This configuration uses the following storage layout:
Two hundred and twenty-five 600 GB SAS drives are allocated to five block-based storage pools, each with 45 SAS drives.
Note: The pools do not use system drives for additional storage.

Note: If required, substitute larger drives for more capacity. To meet the load recommendations, all drives in the storage pool need to be 15k rpm and the same size. Storage layout algorithms may produce suboptimal results with drives of different sizes.
Ten 200 GB Flash drives are configured for FAST VP, two for each pool.
Eight 600 GB SAS drives are configured as hot spares.
One 200 GB Flash drive is configured as a hot spare.
Enable FAST VP to automatically tier data to use differences in performance and capacity. FAST VP:
Works at the block storage pool level and automatically adjusts where data is stored based on access frequency.
Promotes frequently accessed data to higher tiers of storage in 1 GB increments and migrates infrequently accessed data to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is part of a regularly scheduled maintenance operation.
For block, allocate at least two LUNs to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.
For file, allocate at least two NFS shares to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.
Optionally, configure up to 30 Flash drives in the array FAST Cache. These drives are not a required part of the solution, and additional licenses may be required to use the FAST Suite.
Using this configuration, the VNX5700 can support 500 virtual servers as defined in Reference workload.
Conclusion
The scale levels listed in Figure 22 are maximums for the arrays in the VSPEX private cloud environment. It is acceptable to configure any of the listed arrays with a smaller number of virtual servers using the building blocks described.
Figure 22. Maximum scale level of different arrays
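For planning purposes, these validated maximums can be treated as a simple lookup. The sketch below picks the smallest validated array for a given number of reference virtual machines; the function name is illustrative and not part of any EMC tool.

```python
# Validated maximums from this chapter: array model -> reference virtual machines
VALIDATED_MAXIMUMS = {"VNX5300": 125, "VNX5500": 250, "VNX5700": 500}

def smallest_validated_array(reference_vms):
    """Return the smallest validated VSPEX array that covers the requested
    number of reference virtual machines, or None if the request exceeds
    the 500-VM scope of this solution."""
    for model, maximum in sorted(VALIDATED_MAXIMUMS.items(), key=lambda kv: kv[1]):
        if reference_vms <= maximum:
            return model
    return None

print(smallest_validated_array(180))   # -> VNX5500
print(smallest_validated_array(600))   # -> None: beyond this solution's validated scale
```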

High-availability and failover
Overview
This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, business operations survive single-unit failures with little or no impact.
Virtualization layer
Configure high-availability in the virtualization layer, and enable the hypervisor to automatically restart failed virtual machines. Figure 23 illustrates the hypervisor layer responding to a failure in the compute layer.
Figure 23. High-availability at the virtualization layer
By implementing high-availability at the virtualization layer, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.
Compute layer
While the choice of servers to implement in the compute layer is flexible, use enterprise-class servers designed for the datacenter. This type of server has redundant power supplies, as shown in Figure 24. Connect these servers to separate power distribution units (PDUs) in accordance with your server vendor's best practices.
Figure 24. Redundant power supplies
To configure high-availability in the virtualization layer, configure the compute layer with enough resources that the total pool of available resources meets the needs of the environment, even with a server failure.

78 Solution Architecture Overview Network layer The advanced networking features of the VNX family provide protection against network connection failures at the array. Each vsphere host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 25 and Figure 26. Spread these connections across multiple Ethernet switches to guard against component failure in the network. Figure 25. Network layer High-Availability (VNX) Block storage Figure 26. Network layer High-Availability (VNX) - File storage Ensure there is no single point of failure to allow the compute layer to access storage, and communicate with users even if a component fails. 78

Storage layer
The VNX family is designed for five-nines availability by using redundant components throughout the array. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can replace a failing disk, as shown in Figure 27.
Figure 27. VNX series High-Availability
EMC storage arrays are highly available by default. When configured according to the directions in their installation guides, no single-unit failure results in data loss or unavailability.
Validation test profile
Profile characteristics
The VSPEX solution was validated with the environment profile described in Table 9.
Table 9. Profile characteristics
Number of virtual machines: 125/250/500
Virtual machine OS: Windows Server 2012 Datacenter Edition
Processors per virtual machine: 1
Number of virtual processors per physical CPU core: 4
RAM per virtual machine: 2 GB

Table 9. Profile characteristics (continued)
Average storage available for each virtual machine: 100 GB
Average IOPS per virtual machine: 25 IOPS
Number of LUNs or NFS shares to store virtual machine disks: 1/2
Number of virtual machines per LUN or NFS share: 50
Disk and RAID type for LUNs or NFS shares: RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks
Note: This solution was tested and validated with Windows Server 2012 as the operating system for vSphere virtual machines, but it also supports Windows Server 2008. vSphere with Windows Server 2008 virtual machines uses the same configuration and sizing.
Backup and recovery configuration guidelines
Overview
This section provides guidelines to set up backup and recovery for this VSPEX solution. It includes the backup characterization and the backup layout.
Backup characteristics
The solution is sized with the following application environment profile, as listed in Table 10.
Table 10. Backup profile characteristics
Number of users: 1,250 for 125 virtual machines; 2,500 for 250 virtual machines; 5,000 for 500 virtual machines
Number of virtual machines: 125/250/500 (20% DB, 80% unstructured)
Exchange data: 1.2 TB (1 GB mailbox per user) for 125 virtual machines; 2.5 TB (1 GB mailbox per user) for 250 virtual machines; 5 TB (1 GB mailbox per user) for 500 virtual machines
SharePoint data: 0.6 TB for 125 virtual machines; 1.25 TB for 250 virtual machines; 2.5 TB for 500 virtual machines

SQL server: 0.6 TB for 125 virtual machines; 1.25 TB for 250 virtual machines; 2.5 TB for 500 virtual machines
User data: 6.1 TB (5.0 GB per user) for 125 virtual machines; 25 TB (10.0 GB per user) for 250 virtual machines; 50 TB (10.0 GB per user) for 500 virtual machines
Daily change rate for the applications: Exchange data 10%; SharePoint data 2%; SQL server 5%; User data 2%
Retention per data type: All database data, 14 dailies; User data, 30 dailies, 4 weeklies, 1 monthly
Backup layout
Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this case, Avamar and Data Domain are deployed and managed as a single solution. This enables users to back up the unstructured user data directly to the Avamar system for simple file-level recovery. Avamar manages the database and virtual machine images, and stores the backups on the Data Domain system with the embedded Boost client library. This backup solution unifies the backup process with industry-leading deduplication backup software and storage, and achieves the highest levels of performance and efficiency.
Sizing guidelines
The following sections provide definitions of the reference workload used to size and implement the VSPEX architectures. There is guidance on how to correlate those reference workloads to customer workloads, and how that may change the end delivery from the server and network perspective.
Modify the storage definition by adding drives for greater capacity and performance, and by adding features such as FAST Cache and FAST VP. The disk layouts provide support for the appropriate number of virtual machines at the defined performance level and for typical operations such as snapshots. Decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per virtual machine and a reduced user experience caused by higher response times.

Reference workload
Overview
When you move an existing server to a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system.
Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, validated by EMC. In practice, each virtual machine has its own requirements that rarely fit a pre-defined idea of a virtual machine. In any discussion about virtual infrastructures, first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics.
Defining the reference workload
To simplify the discussion, this section presents a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can extrapolate which reference architecture to choose. For the VSPEX solutions, the reference workload is a single virtual machine. Table 11 lists the characteristics of this virtual machine.
Table 11. Virtual machine characteristics
Virtual machine operating system: Microsoft Windows Server 2012 Datacenter Edition
Virtual processors per virtual machine: 1
RAM per virtual machine: 2 GB
Available storage capacity per virtual machine: 100 GB
I/O operations per second (IOPS) per virtual machine: 25
I/O pattern: Random
I/O read/write ratio: 2:1
This specification for a virtual machine is not intended to represent any specific application. Rather, it represents a single common point of reference against which to measure other virtual machines.
Applying the reference workload
Overview
When you consider an existing server for movement into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system.
The solution creates a pool of resources that are sufficient to host a target number of Reference virtual machines with the characteristics shown in Table 11. The customer

virtual machines may not exactly match the specifications. In that case, define a single specific customer virtual machine as the equivalent of some number of Reference virtual machines, and assume these virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain.
Example 1: Custom-built application
A small custom-built application server must move into this virtual infrastructure. The physical hardware that supports the application is not fully used. A careful analysis of the existing application reveals that the application can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB on local hard drive storage.
Based on these numbers, the resource pool needs the following resources:
CPU of one Reference virtual machine
Memory of two Reference virtual machines
Storage of one Reference virtual machine
I/Os of one Reference virtual machine
In this example, an appropriate virtual machine uses the resources of two Reference virtual machines. If implemented on a VNX5300 storage system, which can support up to 125 virtual machines, resources for 123 Reference virtual machines remain.
Example 2: Point of sale system
The database server for a customer's point of sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle.
The requirements to virtualize this application are:
CPUs of four Reference virtual machines
Memory of eight Reference virtual machines
Storage of two Reference virtual machines
I/Os of eight Reference virtual machines
In this case, one appropriate virtual machine uses the resources of eight Reference virtual machines. If implemented on a VNX5300 storage system, which can support up to 125 virtual machines, resources for 117 Reference virtual machines remain.
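The arithmetic behind these examples can be captured in a few lines. The sketch below, with illustrative names, divides each application requirement by the corresponding Reference virtual machine value from Table 11, rounds up, and takes the largest result as the number of Reference virtual machines the application consumes.

```python
import math

# Reference virtual machine characteristics from Table 11
REFERENCE_VM = {"vcpus": 1, "ram_gb": 2, "storage_gb": 100, "iops": 25}

def equivalent_reference_vms(vcpus, ram_gb, storage_gb, iops):
    """Express a customer application as Reference virtual machines:
    round each resource ratio up, then take the largest value.
    Illustrative helper, not an EMC-provided tool."""
    ratios = {
        "cpu": math.ceil(vcpus / REFERENCE_VM["vcpus"]),
        "memory": math.ceil(ram_gb / REFERENCE_VM["ram_gb"]),
        "storage": math.ceil(storage_gb / REFERENCE_VM["storage_gb"]),
        "io": math.ceil(iops / REFERENCE_VM["iops"]),
    }
    return max(ratios.values()), ratios

# Example 1: custom-built application -> 2 Reference virtual machines
print(equivalent_reference_vms(vcpus=1, ram_gb=3, storage_gb=30, iops=15))
# Example 2: point of sale database -> 8 Reference virtual machines
print(equivalent_reference_vms(vcpus=4, ram_gb=16, storage_gb=200, iops=200))
```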

84 Solution Architecture Overview Example 3: Web server The customer s web server must move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle. The requirements to virtualize this application are: CPUs of two Reference virtual machines Memory of four Reference virtual machines Storage of one Reference virtual machine I/Os of two Reference virtual machines In this case, the one appropriate virtual machine uses the resources of four Reference virtual machines. If implemented on a VNX5300 storage system which can support up to 125 virtual machines, resources for 121 Reference virtual machines remain. Example 4: Decision-support database The database server for a customer s decision support system must move into this virtual infrastructure. It is currently running on a physical system with 10 CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle. The requirements to virtualize this application are: CPUs of 10 Reference virtual machines Memory of 32 Reference virtual machines Storage of 52 Reference virtual machines I/Os of 28 Reference virtual machines In this case, one virtual machine uses the resources of 52 Reference virtual machines. If implemented on a VNX5300 storage system which can support up to 125 virtual machines, resources for 73 Reference virtual machines remain. Summary of examples These four examples illustrate the flexibility of the resource pool model. In all four cases, the workloads reduce the amount of available resources in the pool. All four examples can be implemented on the same virtual infrastructure with an initial capacity for 125 Reference virtual machines, and resources for 59 Reference virtual machines remain in the resource pool as shown in Figure 28. Figure 28. Resource pool flexibility In more advanced cases, there may be tradeoffs between memory and I/O or other relationships where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex, and are beyond the scope of the document. Examine the change in resource 84

balance and determine the new level of requirements. Add these virtual machines to the infrastructure with the method described in the examples.
Implementing the solution
Overview
The solution described in this document requires a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are general requirements that are independent of any particular implementation, except that the requirements grow linearly with the target level of scale. This section describes some considerations for implementing the requirements.
Resource types
The solution defines the hardware requirements for the solution in terms of four basic types of resources:
CPU resources
Memory resources
Network resources
Storage resources
This section describes the resource types, their use in the solution, and key implementation considerations in a customer environment.
CPU resources
The solution defines the number of CPU cores required, but not a specific type or configuration. New deployments should use recent revisions of common processor technologies. It is assumed that these perform as well as, or better than, the systems used to validate the solution.
In any running system, monitor the utilization of resources and adapt as needed. The Reference virtual machine and required hardware resources in the solution assume that there are four virtual CPUs for each physical processor core (a 4:1 ratio). Usually, this provides an appropriate level of resources for the hosted virtual machines; however, this ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required.
Memory resources
Each virtual server in the solution must have 2 GB of memory. In a virtual environment, it is common to provision virtual machines with more memory than is installed on the physical hypervisor server because of budget constraints. The memory over-commitment technique takes advantage of the fact that each virtual machine does not use all of its allocated memory. Oversubscribing memory usage to some degree makes business sense. The administrator has the responsibility to proactively monitor the oversubscription rate so that it does not shift the bottleneck away from the server and become a burden to the storage subsystem.
If VMware ESXi runs out of memory for the guest operating systems, paging takes place, resulting in extra I/O activity going to the vswap files. If the storage subsystem is sized correctly, occasional spikes due to vswap activity may not cause performance issues, as transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely

impacted by a continuing overload of vswap activity, add more disks for increased performance. The administrator must decide whether it is more cost effective to add more physical memory to the server or to increase the amount of storage. With memory modules being a commodity, it is likely less expensive to choose the former option.
This solution is validated with statically assigned memory and no over-commitment of memory resources. If a real-world environment uses memory over-commit, monitor the system memory utilization and associated page file I/O activity consistently to ensure that a memory shortfall does not cause unexpected results.
Network resources
The solution outlines the minimum needs of the system. If additional bandwidth is needed, add capability to both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports, and can add ports using EMC UltraFlex I/O modules.
For reference purposes in the validated environment, each virtual machine generates 25 IOPS with an average I/O size of 8 KB. This means that each virtual machine generates at least 200 KB/s of traffic on the storage network. For an environment rated for 100 virtual machines, this comes out to a minimum of approximately 20 MB/s. This is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for:
User network traffic
Virtual machine migration
Administrative and management operations
The requirements for each of these depend on the use of the environment. It is not practical to provide precise numbers in this context. However, the network described in the reference architecture for each solution must be sufficient to handle average workloads for the above use cases.
Regardless of the network traffic requirements, always have at least two physical network connections shared by a logical network so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.
Storage resources
The storage building blocks described in this solution contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. Consider a few factors when examining storage sizing. Specifically, the array has a collection of disks assigned to a storage pool. From that storage pool, provision datastores to the VMware vSphere cluster. Each layer has a specific configuration defined for the solution and documented in the deployment section of this guide in Chapter 5.
It is acceptable to:
Replace drives with larger-capacity drives of the same type and performance characteristics, or with higher-performance drives of the same type and capacity.

Change the placement of drives in the drive shelves to comply with updated or new drive shelf arrangements.
Scale up using the building blocks with a larger number of drives, up to the limit defined in the VSPEX private cloud validated maximums section.

Observe the following best practices:
Use the latest best practices guidance from EMC regarding drive placement within the shelf. Refer to Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance.
When expanding the capability of a storage pool using the building blocks described in this document, use the same type and size of drive in the pool. Create a new pool to use different drive types and sizes. This prevents uneven performance across the pool.
Configure at least one hot spare for every type and size of drive on the system.
Configure at least one hot spare for every 30 drives of a given type.

In other cases where there is a need to deviate from the proposed number and type of drives specified, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system and conforms to EMC published best practices.

Implementation summary
The requirements in the reference architecture are what EMC considers the minimum set of resources to handle the workloads required, based on the stated definition of a reference virtual server. In any customer implementation, the load on a system varies over time as users interact with the system. If the customer virtual machines differ significantly from the reference definition and vary in the same resource, add more of that resource to the system.
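The CPU and memory guidance above reduces to simple arithmetic: divide the total virtual CPU count by the 4:1 consolidation ratio and allocate 2 GB per reference virtual machine with no over-commitment. The following is a minimal sketch of that calculation, assuming the 4:1 ratio and the 2 GB figure hold for your deployment; adjust the constants if your environment deviates.

```python
import math

# Assumptions taken from the resource descriptions above: a 4:1
# virtual-CPU-to-physical-core ratio and 2 GB of memory per reference
# virtual machine, with no memory over-commitment.
VCPUS_PER_CORE = 4
MEMORY_GB_PER_REF_VM = 2

def physical_cores_needed(total_vcpus: int, vcpus_per_core: int = VCPUS_PER_CORE) -> int:
    """Round up so partial cores are never under-provisioned."""
    return math.ceil(total_vcpus / vcpus_per_core)

def memory_gb_needed(ref_vm_count: int) -> int:
    """Statically assigned memory, as validated in this solution."""
    return ref_vm_count * MEMORY_GB_PER_REF_VM

# Example: 100 reference virtual machines, each with one virtual CPU.
print(physical_cores_needed(100))   # 25 physical cores
print(memory_gb_needed(100))        # 200 GB of memory
```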

Quick assessment

Overview
An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of Reference virtual machines required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as listed in Table 12.

Table 12. Blank worksheet row
Application | Row | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent Reference virtual machines
Example application | Resource requirements | | | | | NA
Example application | Equivalent Reference virtual machines | | | | |

Fill out the resource requirements for the application. The row requires inputs on four different resources: CPU, memory, IOPS, and capacity.

CPU requirements
Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores, regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all of the CPUs presented. Use a performance-monitoring tool, such as esxtop, on vSphere hosts to examine the CPU Utilization counter for each CPU. If utilization is equivalent across CPUs, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required.

In any operation involving performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Memory requirements
Server memory plays a key role in ensuring application functionality and performance, and each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system and monitor the free memory by using a performance-monitoring tool, such as esxtop, to determine memory efficiency.
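As noted above, planning values should come from the maximum or the 95th percentile of monitored samples rather than a single observation. The sketch below shows one way to compute a nearest-rank 95th percentile from utilization samples that have already been exported from a monitoring tool such as esxtop; the sample list is illustrative only.

```python
def percentile_95(samples):
    """Return the 95th-percentile value of a list of utilization samples.

    Uses a simple nearest-rank method; collect samples across a period
    that covers all operational use cases before applying it.
    """
    if not samples:
        raise ValueError("no samples collected")
    ordered = sorted(samples)
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[rank]

# Example: CPU utilization (%) samples exported from a monitoring tool.
cpu_samples = [12, 18, 22, 35, 41, 38, 55, 60, 48, 71, 90, 44]
print(percentile_95(cpu_samples))  # use this (or the maximum) for planning
```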

In any operation involving performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Storage performance requirements

I/O operations per second (IOPS)
The storage performance requirements for an application are usually the least understood aspect of performance. Three components become important when discussing the I/O performance of a system. The first is the number of requests coming in, or IOPS. Equally important is the size of the request, or I/O size: a request for 4 KB of data is easier and faster to process than a request for 4 MB of data. That distinction becomes important with the third factor, which is the average I/O response time, or I/O latency.

The Reference virtual machine calls for 25 IOPS. To monitor this on an existing system, use a performance-monitoring tool such as esxtop, which provides several counters that can help. The most common are:

For block:
Physical Disk \ Commands/sec
Physical Disk \ Reads/sec
Physical Disk \ Writes/sec
Physical Disk \ Average Guest MilliSec/Command

For file:
Physical Disk NFS Volume \ Commands/sec
Physical Disk NFS Volume \ Reads/sec
Physical Disk NFS Volume \ Writes/sec
Physical Disk NFS Volume \ Average Guest MilliSec/Command

The Reference virtual machine assumes a 2:1 read:write ratio. Use these counters to determine the total number of IOPS and the approximate ratio of reads to writes for the customer application.

I/O size
The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The Reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of two: 4 KB, 8 KB, 16 KB, 32 KB, and so on are common. Because the performance counter reports a simple average, it is common to see values such as 11 KB or 15 KB instead of the common I/O sizes.

If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the larger I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application is using mostly 32 KB I/O requests, use a factor of four (32 KB / 8 KB = 4). If that application is doing 100 IOPS at 32 KB, the factor indicates to plan for 400 IOPS, since the Reference virtual machine assumes 8 KB I/O sizes.
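The scaling rule described above can be expressed directly in code. The sketch below assumes the 8 KB reference I/O size and 25 IOPS per Reference virtual machine stated in this section, and reproduces the 32 KB example from the text.

```python
import math

REFERENCE_IO_SIZE_KB = 8   # the Reference virtual machine assumes 8 KB I/Os
REFERENCE_VM_IOPS = 25

def equivalent_iops(observed_iops: float, observed_io_size_kb: float) -> float:
    """Scale observed IOPS to 8 KB-equivalent IOPS.

    I/O sizes at or below 8 KB use the observed IOPS directly; larger
    I/Os are scaled by (size / 8 KB), as described above.
    """
    if observed_io_size_kb <= REFERENCE_IO_SIZE_KB:
        return observed_iops
    factor = observed_io_size_kb / REFERENCE_IO_SIZE_KB
    return observed_iops * factor

# Example from the text: 100 IOPS at 32 KB -> plan for 400 equivalent IOPS.
planned = equivalent_iops(100, 32)
print(planned)                                   # 400.0
print(math.ceil(planned / REFERENCE_VM_IOPS))    # 16 Reference VMs of IOPS
```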

I/O latency
The average I/O response time, or I/O latency, is a measurement of how quickly the storage system processes I/O requests. The VSPEX solutions meet a target average I/O latency of 20 ms. The recommendations in this document allow the system to continue to meet that target; however, monitor the system and re-evaluate the resource pool utilization if needed. To monitor I/O latency, use the Physical Disk \ Average Guest MilliSec/Command counter (block storage) or the Physical Disk NFS Volume \ Average Guest MilliSec/Command counter (file storage) in esxtop. If the I/O latency is continuously over the target, re-evaluate the virtual machines in the environment to ensure that these machines do not use more resources than intended.

Storage capacity requirements
The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine the disk space used, and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.

Determining equivalent Reference virtual machines
With all of the resources defined, determine an appropriate value for the Equivalent Reference virtual machines line by using the relationships in Table 13. Round all values up to the closest whole number.

Table 13. Reference virtual machine resources
Resource | Value for Reference virtual machine | Relationship between requirements and equivalent Reference virtual machines
CPU | 1 | Equivalent Reference virtual machines = resource requirements
Memory | 2 | Equivalent Reference virtual machines = (resource requirements)/2
IOPS | 25 | Equivalent Reference virtual machines = (resource requirements)/25
Capacity | 100 | Equivalent Reference virtual machines = (resource requirements)/100

For example, the point of sale system database used in Example 2: Point of sale system requires four CPUs, 16 GB of memory, 200 IOPS, and 200 GB of storage. This translates to four Reference virtual machines of CPU, eight Reference virtual machines of memory, eight Reference virtual machines of IOPS, and two Reference virtual machines of capacity. Table 14 demonstrates how that machine fits into the worksheet row.
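A small helper can apply the Table 13 relationships and the round-up rule automatically. The sketch below is a direct transcription of those relationships; the worked values match the point of sale example above.

```python
import math

def equivalent_reference_vms(vcpus: int, memory_gb: float,
                             iops: float, capacity_gb: float) -> dict:
    """Apply the Table 13 relationships and round each value up."""
    per_resource = {
        "cpu": math.ceil(vcpus / 1),
        "memory": math.ceil(memory_gb / 2),
        "iops": math.ceil(iops / 25),
        "capacity": math.ceil(capacity_gb / 100),
    }
    # The worksheet uses the highest of the four values for the application.
    per_resource["equivalent"] = max(per_resource.values())
    return per_resource

# Example 2 (point of sale system): 4 vCPUs, 16 GB, 200 IOPS, 200 GB.
print(equivalent_reference_vms(4, 16, 200, 200))
# {'cpu': 4, 'memory': 8, 'iops': 8, 'capacity': 2, 'equivalent': 8}
```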

Table 14. Example worksheet row
Application | Row | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent Reference virtual machines
Example application | Resource requirements | 4 | 16 | 200 | 200 | N/A
Example application | Equivalent Reference virtual machines | 4 | 8 | 8 | 2 | 8

Use the highest value in the row to fill in the Equivalent Reference virtual machines column. As shown in Figure 29, the example requires eight Reference virtual machines.

Figure 29. Required resource from the Reference virtual machine pool

Implementation Example - Stage 1
A customer wants to build a virtual infrastructure to support one custom-built application, one point of sale system, and one web server. The customer computes the sum of the Equivalent Reference virtual machines column on the right side of the worksheet, as listed in Table 15, to calculate the total number of Reference virtual machines required. The table shows the result of the calculation, along with the value rounded up to the nearest whole number.

Table 15. Example applications - stage 1
(Each application has a Resource requirements row and an Equivalent Reference virtual machines row; server resources are CPU and memory, storage resources are IOPS and capacity.)
Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Example application #1: Custom built application | | | | |
Example application #2: Point of sale system | | | | |
Example application #3: Web server | | | | |
Total equivalent Reference virtual machines | | | | | 14

This example requires 14 Reference virtual machines. According to the sizing guidelines, one storage pool with 10 SAS drives and 2 or more Flash drives provides sufficient resources for the current needs and room for growth. You can implement this with a VNX5300, which supports up to 125 Reference virtual machines.
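Once the worksheet total is known, choosing an array is a matter of comparing it against the validated maximums of 125, 250, and 500 Reference virtual machines for the VNX5300, VNX5500, and VNX5700. A minimal sketch of that comparison, using the stage 1 and stage 3 totals from this example:

```python
# Validated maximums stated in this solution: VNX5300 up to 125,
# VNX5500 up to 250, and VNX5700 up to 500 Reference virtual machines.
ARRAY_MAXIMUMS = [("VNX5300", 125), ("VNX5500", 250), ("VNX5700", 500)]

def smallest_array_for(total_reference_vms: int) -> str:
    """Return the smallest array model that accommodates the workload."""
    for model, maximum in ARRAY_MAXIMUMS:
        if total_reference_vms <= maximum:
            return model
    raise ValueError("workload exceeds the 500 Reference VM ceiling of this solution")

print(smallest_array_for(14))    # VNX5300 (stage 1 of the example)
print(smallest_array_for(174))   # VNX5500 (stage 3 of the example)
```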

Figure 30. Aggregate resource requirements - stage 1

Figure 30 shows that six Reference virtual machines are available after implementing a VNX5300 with 10 SAS drives and two Flash drives.

Figure 31. Pool configuration - stage 1

Figure 31 shows the pool configuration in this example.

Implementation Example - Stage 2
Next, this customer must add a decision support database to this virtual infrastructure. Using the same strategy, calculate the number of Reference virtual machines required, as shown in Table 16.

Table 16. Example applications - stage 2
(Each application has a Resource requirements row and an Equivalent Reference virtual machines row; server resources are CPU and memory, storage resources are IOPS and capacity.)
Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Example application #1: Custom built application | | | | |

Example application #2: Point of sale system | | | | |
Example application #3: Web server | | | | |
Example application #4: Decision support database | | | | |
Total equivalent Reference virtual machines | | | | | 66

This example requires 66 Reference virtual machines. According to the sizing guidelines, one storage pool with 35 SAS drives and 2 or more Flash drives provides sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5300, which supports up to 125 Reference virtual machines.

Figure 32. Aggregate resource requirements - stage 2

Figure 32 shows that 4 Reference virtual machines are available after implementing a VNX5300 with 35 SAS drives and 2 Flash drives.

Figure 33. Pool configuration - stage 2

Figure 33 shows the pool configuration in this example.

Implementation Example - Stage 3
With business growth, the customer must implement a much larger virtual environment to support one custom built application, one point of sale system, two web servers, and three decision support databases. Using the same strategy, calculate the number of Equivalent Reference virtual machines, as shown in Table 17.

Table 17. Example applications - stage 3
(Each application has a Resource requirements row and an Equivalent Reference virtual machines row; server resources are CPU and memory, storage resources are IOPS and capacity.)
Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Example application #1: Custom built application | | | | |
Example application #2: Point of sale system | | | | |
Example application #3: Web server #1 | | | | |
Example application #4: Decision support database #1 | | | | |

Example application #5: Web server #2 | | | | |
Example application #6: Decision support database #2 | | | | |
Example application #7: Decision support database #3 | | | | |
Total equivalent Reference virtual machines | | | | | 174

This example requires 174 Reference virtual machines. According to the sizing guidelines, two pools with 85 SAS drives and 4 or more Flash drives provide sufficient resources for the current needs and room for growth. You can implement this storage layout with a VNX5500, which supports up to 250 Reference virtual machines.

Figure 34. Aggregate resource requirements - stage 3

Figure 34 shows that 6 Reference virtual machines are available after implementing a VNX5500 with 85 SAS drives and 4 Flash drives.

Figure 35. Pool configuration - stage 3

Figure 35 shows the pool configuration in this example.

Fine tuning hardware resources
Usually, the process described above determines the recommended hardware size for servers and storage. However, in some cases there is a desire to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this document; however, additional customization can be done at this point.

Storage resources
In some applications, there is a need to separate application data from other workloads. The storage layouts in the VSPEX architectures put all of the virtual machines in a single resource pool. To achieve workload separation, purchase additional disk drives for the application workload and add them to a dedicated pool.

With the method outlined in Determining equivalent Reference virtual machines, it is easy to build a virtual infrastructure scaling from 10 Reference virtual machines to 500 Reference virtual machines with the building blocks described in VSPEX storage building blocks, while keeping in mind the recommended limits of each storage array documented in the VSPEX private cloud validated maximums section.

Server resources
For some workloads, the relationship between server needs and storage needs does not match what is outlined in the Reference virtual machine. It is appropriate to size the server and storage layers separately in this scenario.

Figure 36. Customizing server resources

To do this, first total the resource requirements for the server components as shown in Table 18. In the Server Component Totals line at the bottom of the worksheet, add up the server resource requirements from the applications in the table.

Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Storage Component Totals line at the bottom of Table 18 describes the required amount of storage.

Table 18. Server resource component totals
(Each application has a Resource requirements row and an Equivalent Reference virtual machines row; server resources are CPU and memory, storage resources are IOPS and capacity.)
Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Example application #1: Custom built application | | | | |
Example application #2: Point of sale system | | | | |
Example application #3: Web server #1 | | | | |
Example application #4: Decision support database #1 | | | | |
Example application #5: Web server #2 | | | | |
Example application #6: Decision support database #2 | | | | |

Example application #7: Decision support database #3 | | | | |
Total equivalent Reference virtual machines | | | | | 174
Server customization: Server component totals | | | | | NA
Storage customization: Storage component totals | | | | | NA
Storage customization: Storage component equivalent Reference virtual machines | | | | | NA
Total equivalent Reference virtual machines - storage | | | | | 157

Note: Calculate the sum of the Resource Requirements row for each application, not the Equivalent Reference virtual machines, to get the Server and Storage Component Totals.

In this example, the target architecture requires 39 virtual CPUs and 227 GB of memory. With the stated assumptions of four virtual CPUs per physical processor core, and no memory over-provisioning, this translates to 10 physical processor cores and 227 GB of memory. With these numbers, the solution can be effectively implemented with fewer server and storage resources.

Note: Keep high-availability requirements in mind when customizing the resource pool hardware.

Appendix C provides a blank Server Resource Component Worksheet.
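The server customization arithmetic above can be checked with a few lines of code. The sketch below uses the 39 virtual CPUs and 227 GB of memory from the example worksheet, the 4:1 virtual-CPU-to-core ratio, and no memory over-provisioning.

```python
import math

# Worksheet totals from the customization example above.
total_vcpus = 39
total_memory_gb = 227

cores = math.ceil(total_vcpus / 4)   # four virtual CPUs per physical core
memory_gb = total_memory_gb          # no memory over-provisioning

print(cores, memory_gb)              # 10 physical cores, 227 GB of memory
```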


Chapter 5 VSPEX Configuration Guidelines

This chapter presents the following topics:
Overview
Pre-deployment tasks
Customer configuration data
Prepare switches, connect network, and configure switches
Prepare and configure storage array
Install and configure vSphere hosts
Install and configure SQL Server database
Install and configure VMware vCenter server
Summary

Overview
The deployment process consists of the stages listed in Table 19. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure. Table 19 lists the main stages in the solution deployment process, along with references to the sections that contain the relevant procedures.

Table 19. Deployment process overview
Stage | Description | Reference
1 | Verify prerequisites | Pre-deployment tasks
2 | Obtain the deployment tools | Deployment prerequisites
3 | Gather customer configuration data | Customer configuration data
4 | Rack and cable the components | Refer to the vendor documentation
5 | Configure the switches and networks, connect to the customer network | Prepare switches, connect network, and configure switches
6 | Install and configure the VNX | Prepare and configure storage array
7 | Configure virtual machine datastores | Prepare and configure storage array
8 | Install and configure the servers | Install and configure vSphere hosts
9 | Set up SQL Server (used by VMware vCenter) | Install and configure SQL Server database
10 | Install and configure vCenter and virtual machine networking | Configure database for VMware vCenter

Pre-deployment tasks

Overview
The pre-deployment tasks include procedures that are not directly related to environment installation and configuration, but whose results are needed at the time of installation. Examples of pre-deployment tasks are collection of hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite.

Table 20. Tasks for pre-deployment
Task | Description | Reference
Gather documents | Gather the related documents listed in Appendix D. These documents provide detail on setup procedures and deployment best practices for the various components of the solution. | EMC documentation
Gather tools | Gather the required and optional tools for the deployment. Use Table 21 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. | Table 21 Deployment prerequisites checklist
Gather data | Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration data sheet for reference during the deployment process. | Appendix B

Deployment prerequisites
Table 21 lists the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 3 and Table 4.

Table 21. Deployment prerequisites checklist
Requirement | Description | Reference
Hardware | Physical servers to host virtual servers: sufficient physical server capacity to host 125, 250, or 500 virtual servers. | Table 3: Solution hardware
Hardware | VMware vSphere 5.1 servers to host virtual infrastructure servers. Note: The existing infrastructure may already meet this requirement. | Table 3: Solution hardware
Hardware | Switch port capacity and capabilities as required by the virtual server infrastructure. | Table 3: Solution hardware
Hardware | EMC VNX5300 (125 virtual machines), EMC VNX5500 (250 virtual machines), or EMC VNX5700 (500 virtual machines): multiprotocol storage array with the required disk layout. | Table 3: Solution hardware

Software | VMware ESXi 5.1 installation media. |
Software | VMware vCenter Server 5.1 installation media. |
Software | EMC VSI for VMware vSphere: Unified Storage Management. | EMC Online Support
Software | EMC VSI for VMware vSphere: Storage Viewer. | EMC Online Support
Software | Microsoft Windows Server 2008 R2 installation media (suggested OS for VMware vCenter). |
Software | Microsoft SQL Server 2008 R2 or newer installation media. Note: This requirement may be covered in the existing infrastructure. |
Software | EMC vStorage API for Array Integration Plug-in. | EMC Online Support
Software | Microsoft Windows Server 2012 Datacenter installation media (suggested OS for virtual machine guest OS) or Windows Server 2008 R2 installation media. |
Licenses | VMware vCenter 5.1 license key. |
Licenses | VMware ESXi 5.1 license keys. |
Licenses | Microsoft Windows Server 2008 R2 Standard (or higher) license keys. |
Licenses | Microsoft Windows Server 2012 Datacenter license keys. Note: An existing Microsoft Key Management Server (KMS) may cover this requirement. |
Licenses | Microsoft SQL Server license key. Note: The existing infrastructure may already meet this requirement. |

Customer configuration data
Assemble information such as IP addresses and hostnames as part of the planning process to reduce time onsite. Appendix B provides a table to maintain a record of relevant customer information. Add, record, and modify information as needed as the deployment progresses. Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information.

Prepare switches, connect network, and configure switches

Overview
This section lists the network infrastructure requirements needed to support this architecture. Table 22 provides a summary of the tasks for switch and network configuration, and references for further information.

Table 22. Tasks for switch and network configuration
Task | Description | Reference
Configure infrastructure network | Configure storage array and ESXi host infrastructure networking as specified in Prepare and configure storage array and Install and configure vSphere hosts. | Prepare and configure storage array; Install and configure vSphere hosts
Configure VLANs | Configure private and public VLANs as required. | Your vendor's switch configuration guide
Complete network cabling | Connect the switch interconnect ports. Connect the VNX ports. Connect the ESXi server ports. |

Prepare network switches
For validated levels of performance and high availability, this solution requires the switching capacity listed in Table 3. New hardware is not required if the existing infrastructure meets the requirements.

Configure infrastructure network
The infrastructure network requires redundant network links for each ESXi host, the storage array, the switch interconnect ports, and the switch uplink ports, to provide both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 37 and Figure 38 show a sample redundant infrastructure for this solution. The diagrams illustrate the use of redundant switches and links to ensure that there are no single points of failure.

In Figure 37, converged switches provide customers with different protocol options (FC, FCoE, or iSCSI) for the storage network. While existing FC switches are acceptable for the FC protocol option, use 10 Gb Ethernet network switches for iSCSI.

Figure 37. Sample network architecture - block storage

Figure 38 shows a sample redundant Ethernet infrastructure for file storage. The diagram illustrates the use of redundant switches and links to ensure that no single points of failure exist in the network connectivity.

Figure 38. Sample Ethernet network architecture - file storage

Configure VLANs
Ensure there are adequate switch ports for the storage array and ESXi hosts. Use a minimum of two VLANs for:
Virtual machine networking and ESXi management (these are customer-facing networks; separate them if required)
Storage networking (iSCSI and NFS only) and vMotion

Configure jumbo frames (iSCSI and NFS only)
Use jumbo frames for the iSCSI and NFS protocols. Set the MTU to 9,000 on the switch ports for the iSCSI or NFS storage network. Consult your switch configuration guide for instructions.
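After setting the MTU on the switch ports, it is worth confirming that 9,000-byte frames actually pass end to end before presenting storage. The sketch below is one way to spot-check this from a Linux administration host, assuming the standard Linux ping utility with the -M do (prohibit fragmentation) and -s (payload size) options; an 8,972-byte payload plus 28 bytes of IP and ICMP headers equals a 9,000-byte frame. The target address is hypothetical. On an ESXi host, the equivalent check is typically performed with vmkping using its do-not-fragment option.

```python
import subprocess

# A minimal sketch for spot-checking that jumbo frames pass end to end,
# assuming a Linux administration host where "ping -M do -s <size>" is
# available. 8972 bytes of ICMP payload plus 28 bytes of IP/ICMP headers
# matches the 9,000-byte MTU configured on the storage network.
def jumbo_frames_ok(target_ip: str, payload_bytes: int = 8972) -> bool:
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(payload_bytes), "-c", "3", target_ip],
        capture_output=True, text=True,
    )
    return result.returncode == 0

# Example (hypothetical storage interface address):
# print(jumbo_frames_ok("192.168.10.50"))
```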

Complete network cabling
Ensure the following:
All servers, storage arrays, switch interconnects, and switch uplinks plug into separate switching infrastructures and have redundant connections.
There is a complete connection to the existing customer network.

Note: Ensure that unforeseen interactions do not cause service issues when you connect the new equipment to the customer network.

Prepare and configure storage array

Implementation instructions and best practices may vary depending on the storage network protocol selected for the solution. In each case there are four steps, the last of which is optional:
1. Configure the VNX.
2. Provision storage to the hosts.
3. Configure FAST VP.
4. Optionally, configure FAST Cache.

The sections below cover the options for each step separately, depending on whether one of the block protocols (FC, FCoE, iSCSI) or the file protocol (NFS) is selected. For FC, FCoE, or iSCSI, refer to the instructions marked for block protocols. For NFS, refer to the instructions marked for file protocols.

VNX configuration for block protocols
This section describes how to configure the VNX storage array for host access with block protocols such as FC, FCoE, and iSCSI. In this solution, the VNX provides data storage for VMware hosts.

Table 23. Tasks for VNX configuration
Task | Description | Reference
Prepare the VNX | Physically install the VNX hardware with the procedures in the product documentation. | VNX5300 Unified Installation Guide; VNX5500 Unified Installation Guide; VNX5700 Unified Installation Guide
Set up the initial VNX configuration | Configure the IP addresses and other key parameters on the VNX. | Unisphere System Getting Started Guide; your vendor's switch configuration guide
Provision storage for VMware hosts | Create the storage areas required for the solution. |

Prepare the VNX
The VNX5300, VNX5500, or VNX5700 Unified Installation Guide provides instructions to assemble, rack, cable, and power up the VNX. There are no specific setup steps for this solution.

Set up the initial VNX configuration
After completing the initial VNX setup, configure key information about the existing environment so that the storage array can communicate. Configure the following common items in accordance with your IT datacenter policies and existing infrastructure information:
DNS
NTP
Storage network interfaces

For data connection using the FC or FCoE protocols: ensure that one or more servers are connected to the VNX storage system, either directly or through qualified FC or FCoE switches. Refer to the EMC Host Connectivity Guide for VMware ESX Server for more detailed instructions.

For data connection using the iSCSI protocol: connect one or more servers to the VNX storage system, either directly or through qualified IP switches. Refer to the EMC Host Connectivity Guide for VMware ESX Server for more detailed instructions.

Additionally, configure the following items in accordance with your IT datacenter policies and existing infrastructure information:
1. Set up a storage network IP address. Logically isolate the other networks in the solution, as described in Chapter 3. This ensures that other network traffic does not impact traffic between hosts and storage.
2. Enable jumbo frames on the VNX iSCSI ports. Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the MTU size specified below across all network interfaces in the environment:
a. In Unisphere, select Settings > Network > Settings for Block.
b. Select the appropriate iSCSI network interface.
c. Click Properties.
d. Set the MTU size to 9,000.
e. Click OK to apply the changes.

The reference documents listed in Table 23 provide more information on how to configure the VNX platform. The Storage configuration guidelines section provides more information on the disk layout.

Provision storage for VMware hosts
This section describes provisioning block storage for VMware hosts. To provision file storage, refer to VNX configuration for file protocols.

Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers:
1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4.
a. Log in to Unisphere.
b. Select the array for this solution.
c. Select Storage > Storage Configuration > Storage Pools.
d. Click the Pools tab.
e. Click Create.

Note: The pool does not use system drives for additional storage.

Table 24. Storage allocation table for block
Configuration | Number of pools | Number of 15K SAS drives per pool | Number of Flash drives per pool | Number of LUNs per pool | LUN size (TB)
125 virtual machines | 2 | Pool 1: 45, Pool 2: 14 | 2 (4 total) | 2 (4 total) | Pool 1: 5
250 virtual machines | 3 | Pool 1: 45, Pool 2: 45 | 2 (6 total) | 2 (6 total) | Pool 1: 5, Pool 2: 5
500 virtual machines | 5 | Pool 1: 45, Pool 2: 45, Pool 3: 45, Pool 4: 45 | 2 (10 total) | 2 (10 total) | Pool 1: 5, Pool 2: 5, Pool 3: 5, Pool 4: 5, Pool 5: 5

Note: Each virtual machine occupies 102 GB in this solution, with 100 GB for the OS and user space, and a 2 GB swap file.

Create the hot spare disks at this point. Refer to the appropriate installation guide for additional information.

Figure 19 depicts the target storage layout for 125 virtual machines. Figure 20 depicts the target storage layout for 250 virtual machines. Figure 21 depicts the target storage layout for 500 virtual machines.

2. Use the pool created in step 1 to provision thin LUNs:
a. Click Storage > LUNs.

b. Click Create.
c. Select the pool created in step 1. Always create two thin LUNs in one physical storage pool. User Capacity depends on the specific number of virtual machines; refer to Table 24 for more information.
3. Create a storage group and add LUNs and VMware servers:
a. Click Hosts > Storage Groups.
b. Click Create and enter a name for the storage group.
c. Select the created storage group.
d. Click LUNs. In the Available LUNs panel, select all the LUNs created in the previous steps. The Selected LUNs dialog appears.
e. Configure and add the VMware hosts to the storage group.

VNX configuration for file protocols
This section describes file storage provisioning for VMware hosts.

Table 25. Tasks for storage configuration
Task | Description | Reference
Prepare the VNX | Physically install the VNX hardware with the procedures in the product documentation. | VNX5300 Unified Installation Guide; VNX5500 Unified Installation Guide; VNX5700 Unified Installation Guide
Set up the initial VNX configuration | Configure the IP address information and other key parameters on the VNX. | Unisphere System Getting Started Guide; your vendor's switch configuration guide
Create a network interface | Configure the IP address and network interface information for the NFS server. |
Create a storage pool for file | Create the pool structure and LUNs to contain the file system. |
Create file systems | Establish the file system that will be shared with the NFS protocol and export it to the VMware hosts. |

Prepare the VNX
The VNX5300, VNX5500, or VNX5700 Unified Installation Guide provides instructions on how to assemble, rack, cable, and power up the VNX. There are no specific setup steps for this solution.

Set up the initial VNX configuration
After the initial VNX setup, configure key information about the existing environment to allow the storage array to communicate with other devices in the environment. Ensure that one or more servers connect to the VNX storage system, either directly or through qualified IP switches. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information:
DNS

NTP
Storage network interfaces
Storage network IP address
CIFS services and Active Directory domain membership

Refer to the EMC Host Connectivity Guide for Windows for more detailed instructions.

Enable jumbo frames on the VNX storage network interfaces
Use jumbo frames for storage networks to permit greater network bandwidth. Apply the MTU size specified below across all network interfaces in the environment:
1. In Unisphere, click Settings > Network > Settings for File.
2. Select the appropriate network interface from the Interfaces tab.
3. Click Properties.
4. Set the MTU size to 9,000.
5. Click OK to apply the changes.

The reference documents listed in Table 23 provide more information on how to configure the VNX platform. The Storage configuration guidelines section provides more information on the disk layout.

Create a network interface
A network interface maps to an NFS export. File shares provide access through this interface. Complete the following steps to create a network interface:
1. Log in to the VNX.
2. From the Dashboard of the VNX, click Settings > Network > Settings For File.
3. On the Interfaces tab, click Create.

Figure 39. Network Settings For File dialog box

In the Create Network Interface wizard, complete the following:

1. Select the Data Mover that will provide the file share.
2. Select the device name where the network interface will reside.

Note: Run the following command as nasadmin from the Control Station to ensure that the selected device has a link connected:
> server_sysconfig <datamovername> -pci
This command lists the link status (UP or DOWN) for all devices on the specified Data Mover.

3. Type an IP address for the interface.
4. Type a name for the interface.
5. Type the netmask for the interface. The Broadcast Address field populates automatically after you provide the IP address and netmask.
6. Set the MTU size for the interface to 9,000.

Note: Ensure that all devices on the network (switches, servers) use the same MTU size.

7. If required, specify the VLAN ID.
8. Click OK.

Figure 40. Create Interface dialog box

Create storage pool for file
Complete the following steps in Unisphere to configure LUNs on the VNX array to store virtual servers:
1. Create the number of storage pools required for the environment based on the sizing information in Chapter 4. This example uses the array recommended maximums as described in Chapter 4.

a. Log on to Unisphere.
b. Select the array for this solution.
c. Click Storage > Storage Configuration > Storage Pools > Pools.
d. Click Create.

Note: The pool does not use system drives for additional storage.

Table 26. Storage allocation table for file
Configuration | Number of storage pools | Number of 15K SAS drives per storage pool | Number of Flash drives per storage pool | Number of LUNs per storage pool | Number of file systems per storage pool | File system size (TB)
125 virtual machines | 2 | Pool 1: 45 | 4 total | 20 | 2 | Pool 1: 5
250 virtual machines | 3 | Pool 1: 45 | 6 total | 20 | 2 | Pool 1: 5, Pool 2: 5
500 virtual machines | 5 | Pool 1: 45, Pool 2: 45, Pool 3: 45, Pool 4: 45 | 10 total | 20 | 2 | Pool 1: 5, Pool 2: 5, Pool 3: 5, Pool 4: 5, Pool 5: 5

Note: Each virtual machine occupies 102 GB in this solution, with 100 GB for the OS and user space, and a 2 GB swap file.

Create the hot spare disks at this point. Refer to the EMC VNX5300 Unified Installation Guide for additional information.

Figure 19 depicts the target storage layout for 125 virtual machines. Figure 20 depicts the target storage layout for 250 virtual machines. Figure 21 depicts the target storage layout for 500 virtual machines.

2. Use the pool created in step 1 to provision LUNs:
a. Select Storage > LUNs.
b. Click Create.
c. Select the pool created in step 1. For User Capacity, select MAX. The number of LUNs to create depends on the number of disks in the pool; refer to Table 26 for details on the number of LUNs needed in each pool.
3. Connect the provisioned LUNs to the Data Mover for file access:

a. Click Hosts > Storage Groups.
b. Select ~filestorage.
c. Click Connect LUNs.
d. In the Available LUNs panel, select all the LUNs created in the previous steps. The Selected LUNs panel appears.

Use a new storage pool for file to create multiple file systems.

Create file systems
A file system is exported as an NFS file share. Create a file system before creating the NFS file share. The VNX requires a storage pool and a network interface to create a file system. If no storage pools or interfaces exist, follow the steps in Create a network interface and Create storage pool for file to create them.

Create two thin file systems from each storage pool for file. Refer to Table 26 for details on the number of file systems.

Complete the following steps to create file systems on the VNX for NFS file shares:
1. Log in to Unisphere.
2. Select Storage > Storage Configuration > File Systems.
3. Click Create. The File System Creation Wizard appears.
4. Specify the file system details:
a. Select Storage Pool.
b. Type a file system name.
c. Select the storage pool that will contain the file system.
d. Select the storage capacity of the file system. Refer to Table 26 for details on storage capacity.
e. Select Thin Enabled.
f. Select the Data Mover (R/W) to own the file system.

Note: The selected Data Mover must have an interface defined on it.

g. Click OK.

Figure 41. Create File System dialog box

The newly created file system appears on the File Systems tab.
1. Click Mounts.
2. Select the created file system and click Properties.
3. Select Set Advanced Options.
4. Select Direct Writes Enabled, as shown in Figure 42.

Figure 42. Direct Writes Enabled checkbox

5. Click OK.
6. Export the file systems using NFS, and give root access to the ESXi servers:
a. Click Storage > Shared Folders > NFS.
b. Click Create.
7. In the dialog, add the IP addresses of all ESXi servers to Read/Write Hosts and Root Hosts.

FAST VP configuration
This procedure applies to both file and block storage implementations. Complete the following steps to configure FAST VP. Assign two Flash drives to each block-based storage pool:
1. In Unisphere, navigate to the block storage pool created in the previous step and select the storage pool to configure FAST VP.
2. Click Properties for a specific storage pool to open the Storage Pool Properties dialog. Figure 43 shows the tiering information for a specific FAST pool.

Note: The Tier Status area shows FAST relocation information specific to the selected pool.

3. Select the scheduled relocation at the pool level from the Auto-Tiering list. This can be set to either Automatic or Manual. Automatic is recommended.

In the Tier Details area, you can see the exact distribution of your data.

Figure 43. Storage Pool Properties dialog box

You can also connect to the array-wide relocation schedule using the button in the top right corner to access the Manage Auto-Tiering dialog box, as shown in Figure 44.

Figure 44. Manage Auto-Tiering dialog box

From this status dialog, users can control the Data Relocation Rate. The default rate is set to Medium so as not to significantly affect host I/O.

Note: FAST VP is a completely automated tool that provides the ability to create a relocation schedule. Schedule the relocations during off-hours to minimize any potential performance impact.

FAST Cache configuration
Optionally, configure FAST Cache. To configure FAST Cache on the storage pools for this solution, complete the following steps.

Note: The Flash drives listed in the sizing section of Chapter 4 are intended for use with FAST VP and are configured in the section above. FAST Cache is an optional component of this solution that can provide improved performance, as outlined in Chapter 3.

1. Configure Flash drives as FAST Cache:
a. Click Properties from the dashboard, or Manage Cache in the left-hand pane of the Unisphere interface, to access the Storage System Properties dialog shown in Figure 45.
b. Click the FAST Cache tab to view FAST Cache information.

Figure 45. Storage System Properties dialog box

c. Click Create to open the Create FAST Cache dialog box, as shown in Figure 46.

The RAID Type field displays as RAID 1 after the FAST Cache is created. You can choose the number of Flash drives from this screen. The bottom of the screen shows the Flash drives used to create FAST Cache. Select Manual to choose the drives manually.

d. Refer to Storage configuration guidelines to determine the number of Flash drives needed in this solution.

Note: If a sufficient number of Flash drives is not available, FLARE displays an error message and does not create FAST Cache.

Figure 46. Create FAST Cache dialog box

2. Enable FAST Cache on the storage pool. If a LUN is created in a storage pool, you can only configure FAST Cache for that LUN at the storage pool level; all the LUNs created in the storage pool have FAST Cache either enabled or disabled. Configure FAST Cache from the Advanced tab in the Create Storage Pool dialog shown in Figure 47. After FAST Cache is installed on the VNX series, it is enabled by default at storage pool creation.

Figure 47. Advanced tab in the Create Storage Pool dialog

If the storage pool has already been created, use the Advanced tab in the Storage Pool Properties dialog to configure FAST Cache, as shown in Figure 48.

Figure 48. Advanced tab in the Storage Pool Properties dialog

Note: The VNX FAST Cache feature does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours, during which the performance of the array steadily improves.

Install and configure vSphere hosts

Overview
This section provides the requirements for the installation and configuration of the ESXi hosts and infrastructure servers required to support the architecture. Table 27 describes the tasks that must be completed.

Table 27. Tasks for server installation
Task | Description | Reference
Install ESXi | Install the ESXi 5.1 hypervisor on the physical servers that are deployed for the solution. | vSphere Installation and Setup Guide
Configure ESXi networking | Configure ESXi networking, including NIC trunking, VMkernel ports, virtual machine port groups, and jumbo frames. | vSphere Networking
Install and configure PowerPath/VE (block storage only) | Install and configure PowerPath/VE to manage multipathing for VNX LUNs. | PowerPath/VE for VMware vSphere Installation and Administration Guide
Connect VMware datastores | Connect the VMware datastores to the ESXi hosts deployed for the solution. | vSphere Storage Guide
Plan virtual machine memory allocations | Ensure that VMware memory management technologies are configured properly for the environment. | vSphere Installation and Setup Guide

Install ESXi
Upon initial power up of the servers being used for ESXi, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted MMU virtualization settings in each server's BIOS. If the servers have a RAID controller, configure mirroring on the local disks. Boot the ESXi 5.1 installation media and install the hypervisor on each of the servers. ESXi requires hostnames, IP addresses, and a root password for installation; Appendix B provides the appropriate values. In addition, install the HBA drivers or configure iSCSI initiators on each ESXi host. For details, refer to the EMC Host Connectivity Guide for VMware ESX Server.

Configure ESXi networking
During the installation of VMware ESXi, a standard virtual switch (vSwitch) is created. By default, ESXi chooses only one physical NIC as a virtual switch uplink. To maintain redundancy and bandwidth requirements, add an additional NIC, either by using the ESXi console or by connecting to the ESXi host from the vSphere Client. Each VMware ESXi server should have multiple interface cards for each virtual network to ensure redundancy and to provide for the use of network load balancing and network adapter failover.

VMware ESXi networking configuration, including load balancing and failover options, is described in vSphere Networking. Choose the appropriate load balancing option based on what is supported by the network infrastructure.

Create VMkernel ports as required, based on the infrastructure configuration:
VMkernel port for the storage network (iSCSI and NFS protocols)
VMkernel port for VMware vMotion
Virtual server port groups (used by the virtual servers to communicate on the network)

vSphere Networking describes the procedure for configuring these settings. Refer to Appendix D for more information.

Jumbo frames (iSCSI and NFS only)
Enable jumbo frames on the NICs used for iSCSI and NFS data. Set the MTU to 9,000. Consult your NIC vendor's configuration guide for instructions.

Install and configure PowerPath/VE (block only)
To improve and enhance the performance and capabilities of the VNX storage array, install PowerPath/VE on the VMware vSphere hosts. For detailed installation steps, refer to the PowerPath/VE for VMware vSphere Installation and Administration Guide.

Connect VMware datastores
Connect the datastores configured in the Install and configure vSphere hosts section to the appropriate ESXi servers. These include the datastores configured for:
Virtual server storage
Infrastructure virtual machine storage (if required)
SQL Server storage (if required)

The vSphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi host. Refer to Appendix E for more information.

Plan virtual machine memory allocations
Server capacity in the solution is required for two purposes:
To support the new virtualized server infrastructure
To support the required infrastructure services such as authentication/authorization, DNS, and databases

For information on minimum infrastructure requirements, refer to Table 3. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.

Memory configuration
Take care when configuring server memory to properly size and configure the solution. This section provides an overview of memory allocation for the virtual servers and factors in vSphere overhead and the virtual machine configuration.
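As a rough planning aid only, the sketch below estimates the physical memory needed on a host for a given number of reference virtual machines. The 2 GB figure comes from the reference virtual machine definition; the per-virtual-machine hypervisor overhead and the host reserve are illustrative assumptions, not validated values, so check the vSphere documentation for the exact overhead of your virtual machine configuration.

```python
import math

# The 2 GB figure is from the reference virtual machine definition above.
# The overhead and reserve values below are illustrative assumptions only.
REF_VM_MEMORY_GB = 2.0
ASSUMED_OVERHEAD_GB_PER_VM = 0.1    # assumption: ~100 MB per small VM
ASSUMED_HOST_RESERVE_GB = 4.0       # assumption: memory kept for ESXi itself

def host_memory_gb(vms_per_host: int) -> int:
    """Estimate host RAM for a given consolidation of reference VMs."""
    needed = (vms_per_host * (REF_VM_MEMORY_GB + ASSUMED_OVERHEAD_GB_PER_VM)
              + ASSUMED_HOST_RESERVE_GB)
    return math.ceil(needed)

print(host_memory_gb(50))   # e.g. 109 GB for 50 reference VMs on one host
```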

ESX/ESXi memory management
Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources, such as memory, to provide resource isolation across multiple virtual machines and to avoid resource exhaustion. In cases where advanced processors are deployed, such as Intel processors with EPT support, this abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself.

vSphere employs the following memory management techniques:
Memory over-commitment - allocation of more memory resources to the virtual machines than are physically available on the host.
Transparent page sharing - identical memory pages that are shared across virtual machines are merged, and duplicate pages are returned to the host free memory pool for reuse.
Memory compression - ESXi stores pages that would otherwise be swapped out to disk through host swapping in a compressed cache located in main memory.
Memory ballooning - relieves host resource exhaustion by requesting that free pages be allocated from the virtual machine to the host for reuse.
Hypervisor swapping - causes the host to force arbitrary virtual machine pages out to disk.

Additional information is available in the VMware documentation.

Virtual machine memory concepts
Figure 49 shows the memory settings parameters in the virtual machine.

Figure 49. Virtual machine memory settings

Configured memory - physical memory allocated to the virtual machine at the time of creation.
Reserved memory - memory that is guaranteed to the virtual machine.
Touched memory - memory that is active or in use by the virtual machine.
Swappable - memory that can be de-allocated from the virtual machine if the host is under memory pressure from other virtual machines, through ballooning, compression, or swapping.

The recommended best practices are:
Do not disable the default memory reclamation techniques. These lightweight processes enable flexibility with minimal impact to workloads.
Intelligently size memory allocation for virtual machines. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources. Over-committing can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases, when hypervisor swapping occurs, virtual machine performance may be adversely affected. Having performance baselines for your virtual machine workloads assists in this process.

Additional information on tools such as esxtop is available in the VMware documentation.

Install and configure SQL Server database

Overview
Table 28 describes how to set up and configure a SQL Server database for the solution. At the end of this chapter, you will have Microsoft SQL Server installed on a virtual machine, with the databases required by VMware vCenter configured for use.

Table 28. Tasks for SQL Server database setup
Task | Description | Reference
Create a virtual machine for Microsoft SQL Server | Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements. |
Install Microsoft Windows on the virtual machine | Install Microsoft Windows Server 2008 R2 on the virtual machine created to host SQL Server. |
Install Microsoft SQL Server | Install Microsoft SQL Server on the virtual machine designated for that purpose. |
Configure database for VMware vCenter | Create the database required for the vCenter Server on the appropriate datastore. | Preparing vCenter Server Databases
Configure database for VMware Update Manager | Create the database required for Update Manager on the appropriate datastore. | Preparing the Update Manager Database

Create a virtual machine for Microsoft SQL Server
Create the virtual machine with enough computing resources on one of the ESXi servers designated for infrastructure virtual machines. Use the datastore designated for the shared infrastructure.

Note: The customer environment may already contain a SQL Server instance for this role. In that case, refer to the Configure database for VMware vCenter section.

Install Microsoft Windows on the virtual machine
The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.

Install SQL Server
Install SQL Server on the virtual machine with the SQL Server installation media. One of the installable components in the SQL Server installer is SQL Server Management Studio (SSMS). Install this component on the SQL Server directly and on an administrator console.

In many implementations, you may want to store data files in locations other than the default path. To change the default path for storing data files, perform the following steps:
1. Right-click the server object in SSMS and select Database Properties. The Properties window appears.
2. Change the default data and log directories for new databases created on the server.

Note: For high availability, install SQL Server on a Microsoft Failover Cluster or on a virtual machine protected by VMware VMHA clustering. Do not combine these technologies.

Configure database for VMware vCenter
To use VMware vCenter in this solution, create a database for the service. The requirements and steps to configure the vCenter Server database correctly are covered in Preparing vCenter Server Databases. Refer to the list of documents in Appendix E for more information.

Note: Do not use the Microsoft SQL Server Express based database option for this solution. Create individual login accounts for each service accessing a database on the SQL Server.

Configure database for VMware Update Manager
To use VMware Update Manager in this solution, create a database for the service to use. The requirements and steps to configure the Update Manager database are covered in Preparing the Update Manager Database. Create individual login accounts for each service accessing a database on the SQL Server. Consult your database administrator for your organization's policy.

Install and configure VMware vCenter server

Overview
This section provides information on how to configure VMware vCenter. Complete the tasks in Table 29.

Table 29. Tasks for vCenter configuration
Task | Description | Reference
Create the vCenter host virtual machine | Create a virtual machine to be used for the VMware vCenter Server. | vSphere Virtual Machine Administration
Install vCenter guest operating system | Install Windows Server 2008 R2 Standard Edition on the vCenter host virtual machine. | vSphere Virtual Machine Administration
Update the virtual machine | Install VMware Tools, enable hardware acceleration, and allow remote console access. | vSphere Installation and Setup
Create vCenter ODBC connections | Create the 64-bit vCenter and 32-bit vCenter Update Manager ODBC connections. | Installing and Administering VMware vSphere Update Manager
Install vCenter Server | Install the vCenter Server software. | vSphere Installation and Setup
Install vCenter Update Manager | Install the vCenter Update Manager software. | Installing and Administering VMware vSphere Update Manager
Create a virtual datacenter | Create a virtual datacenter. | vCenter Server and Host Management
Apply vSphere license keys | Type the vSphere license keys in the vCenter licensing menu. | vSphere Installation and Setup
Add ESXi hosts | Connect vCenter to the ESXi hosts. | vCenter Server and Host Management
Configure vSphere clustering | Create a vSphere cluster and move the ESXi hosts into it. | vSphere Resource Management
Perform array ESXi host discovery | Perform ESXi host discovery from the Unisphere console. | Using EMC VNX Storage with VMware vSphere TechBook
Install the vCenter Update Manager plug-in | Install the vCenter Update Manager plug-in on the administration console. | Installing and Administering VMware vSphere Update Manager
Install the EMC VNX UEM CLI | Install the EMC VNX UEM command line interface on the administration console. | EMC VSI for VMware vSphere: Unified Storage Management Product Guide

129 Task Description Reference Install the EMC VSI plug-in Install the EMC Virtual Storage Integrator plug-in on the administration console. VSPEX Configuration Guidelines EMC VSI for VMware vsphere: Unified Storage Management Product Guide Create the vcenter host virtual machine To deploy the VMware vcenter server as a virtual machine on an ESXi server installed as part of this solution, connect directly to an infrastructure ESXi server using the vsphere Client. Create a virtual machine on the ESXi server with the customer guest OS configuration, using the infrastructure server datastore presented from the storage array. The memory and processor requirements for the vcenter Server depend on the number of ESXi hosts and virtual machines managed. The requirements are in the vsphere Installation and Setup Guide. Install vcenter guest OS Create vcenter ODBC connections Install vcenter server Apply vsphere license keys Install the EMC VSI plug-in Install the guest OS on the vcenter host virtual machine. VMware recommends using Windows Server 2008 R2 Standard Edition. Before installing vcenter Server and vcenter Update Manager, create the ODBC connections required for database communication. These ODBC connections use SQL Server authentication for database authentication. Appendix B provides a place to record SQL login information. Install vcenter by using the VMware VIMSetup installation media. Use the customerprovided username, organization, and vcenter license key when installing vcenter. To perform license maintenance, log in to the vcenter Server and select the Administration > Licensing menu from the vsphere Client. Use the vcenter License console to enter the license keys for the ESXi hosts. After this, they can be applied to the ESXi hosts as they are imported into vcenter. Integrate the VNX storage system with VMware vcenter by using EMC Virtual Storage Integrator (VSI) for VMware vsphere: Unified Storage Management. This provides administrators the ability to manage VNX storage tasks from the vcenter. After installing the plug-in on the vsphere console, administrators can use vcenter to: Create NFS datastores on VNX and mount them on ESXi servers. Create LUNs on VNX and map them to ESXi servers. Extend NFS datastores/luns. Create Fast or Full Clones of virtual machines for NFS file storage. 129
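The VSI plug-in performs these provisioning tasks from within vCenter. Purely for reference, and not as part of the VSI workflow described above, an NFS export that already exists on the VNX can also be mounted manually from the ESXi shell; the Data Mover IP address, export path, and datastore name below are hypothetical placeholders:

    # Mount an existing VNX NFS export as a datastore on one ESXi 5.x host
    esxcli storage nfs add --host=<data_mover_ip> --share=/vspex_nfs_export --volume-name=VSPEX_NFS_DS01
    # Confirm the datastore is mounted and accessible
    esxcli storage nfs list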

Summary

This chapter presents the required steps to deploy and configure the various aspects of the VSPEX solution, which include both the physical and logical components. At this point, the VSPEX solution is fully functional.

Chapter 6 Validating the Solution

This chapter presents the following topics:

- Overview
- Post-install checklist
- Deploy and test a single virtual server
- Verify the redundancy of the solution components

Overview

This chapter provides a list of items to review after configuring the solution. The goal of this chapter is to verify the configuration and functionality of specific aspects of the solution, and ensure that the configuration meets core availability requirements. Complete the tasks listed in Table 30.

Table 30. Tasks for testing the installation

Post-install checklist:
- Verify that sufficient virtual ports exist on each vSphere host virtual switch. (Reference: vSphere Networking)
- Verify that each vSphere host has access to the required datastores and VLANs. (Reference: vSphere Storage Guide; vSphere Networking)
- Verify that the vMotion interfaces are configured correctly on all vSphere hosts. (Reference: vSphere Networking)

Deploy and test a single virtual server:
- Deploy a single virtual machine using the vSphere interface. (Reference: vCenter Server and Host Management; vSphere Virtual Machine Management)

Verify redundancy of the solution components:
- Perform a reboot of each storage processor in turn, and ensure that LUN connectivity is maintained. (Reference: steps shown below)
- Disable each of the redundant switches in turn and verify that the vSphere host, virtual machine, and storage array connectivity remains intact. (Reference: vendor's documentation)
- On a vSphere host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host. (Reference: vCenter Server and Host Management)

Post-install checklist

The following configuration items are critical to the functionality of the solution. On each vSphere server, verify the following items prior to deployment into production:

- The vSwitch that hosts the client VLANs is configured with sufficient ports to accommodate the maximum number of virtual machines it may host.
- All required virtual machine port groups are configured, and each server has access to the required VMware datastores.
- An interface is configured correctly for vMotion using the material in the vSphere Networking guide.

Deploy and test a single virtual server

Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.

Verify the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures.

Block environments

Complete the following steps to reboot each VNX storage processor in turn and verify that connectivity to VMware datastores is maintained throughout each reboot:

1. Log in to the Control Station with administrator credentials.
2. Navigate to /nas/sbin.
3. Reboot SP A by using the ./navicli -h spa rebootsp command.
4. During the reboot cycle, check for the presence of datastores on the ESXi hosts.
5. When the cycle completes, reboot SP B by using ./navicli -h spb rebootsp.
6. Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
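The following sketch simply strings steps 3 through 5 together as commands, with two esxcli checks (run on each ESXi host) that are one way, among several, to confirm that datastores and LUN paths remain available during each reboot:

    # On the VNX Control Station, from /nas/sbin: reboot SP A
    ./navicli -h spa rebootsp
    # While SP A reboots, on each ESXi host confirm that the VMFS datastores remain mounted
    esxcli storage filesystem list
    # and that redundant paths to the LUNs remain available
    esxcli storage core path list
    # When SP A is back online, repeat the checks while rebooting SP B
    ./navicli -h spb rebootsp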

File environments

Perform a failover of each VNX Data Mover in turn and verify that connectivity to NFS datastores is maintained. For simplicity, use the following approach for each Data Mover.

Note: Optionally, reboot the Data Movers through the Unisphere interface.

1. From the Control Station prompt, run the server_cpu <movername> -reboot command, where <movername> is the name of the Data Mover.
2. To verify that network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each switching infrastructure is disabled, verify that all components of the solution maintain connectivity to each other and to any existing client infrastructure.
3. Enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
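A minimal command-level sketch of step 1, with an optional NFS check run from an ESXi host, follows; server_2 stands in for <movername> and is a placeholder:

    # On the VNX Control Station: list the Data Movers and their roles
    nas_server -list
    # Reboot the primary Data Mover (server_2 is a placeholder for <movername>)
    server_cpu server_2 -reboot
    # While the standby takes over, on each ESXi host confirm that NFS datastores stay mounted
    esxcli storage nfs list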

Chapter 7 System Monitoring

This chapter presents the following topics:

- Overview
- Key areas to monitor
- VNX resource monitoring guidelines
- Summary

Overview

System monitoring of the VSPEX environment is no different from monitoring any core IT system; it is a relevant and core component of administration. The monitoring levels involved in a highly virtualized infrastructure such as a VSPEX environment are somewhat more complex than in a purely physical infrastructure, because the interactions and interrelationships between the various components can be subtle and nuanced. However, administrators experienced with virtualized environments should be readily familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and workflows.

Several business needs drive proactive, consistent monitoring of the environment, including:

- Stable, predictable performance
- Sizing and capacity needs
- Availability and accessibility
- Elasticity: the dynamic addition, subtraction, and modification of workloads
- Data protection

If self-service provisioning is enabled in the environment, the ability to monitor the system is even more critical because clients can generate virtual machines and workloads dynamically, which can adversely affect the entire system.

This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are listed at the end of this chapter.

Key areas to monitor

Because VSPEX Proven Infrastructures are end-to-end solutions, system monitoring covers three discrete but highly interrelated areas:

- Servers, both virtual machines and clusters
- Networking
- Storage

This chapter focuses primarily on monitoring the key components of the storage infrastructure, the VNX array, but briefly describes the other components.

Performance baseline

When a workload is added to a VSPEX deployment, server, storage, and networking resources are consumed. As additional workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which impacts all other workloads running on the platform. Customers should fully understand their workload characteristics on all key components prior to deploying them on a VSPEX platform; this is a requirement to correctly size resource utilization against the defined Reference virtual machine.

Deploy the first workload, and then measure the end-to-end resource consumption and platform performance. This removes the guesswork from sizing activities and ensures that the initial assumptions were valid. As additional workloads are deployed, rerun the benchmarks to determine the cumulative load and the impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that any oversubscription is not negatively impacting overall system performance. Run these baselines consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected. What follows is a discussion of which components should comprise a core performance baseline.

Servers

The key resources to monitor from a server perspective include use of:

- Processors
- Memory
- Disk (local, NAS, and SAN)
- Networking

Monitor these areas from both the physical host level (the hypervisor host level) and the virtual level (from within the guest virtual machine). Depending on your operating system, there are tools available to monitor and capture this data. For example, if your VSPEX deployment uses ESXi servers as the hypervisor, you can use esxtop to monitor and log these metrics; Windows Server 2012 guests can use the perfmon utility. Follow your vendor's guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending upon the application.
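As a purely illustrative example of capturing such a baseline from an ESXi host (the 15-second interval and one-hour duration are arbitrary choices, not EMC recommendations):

    # Capture esxtop in batch mode: one sample every 15 seconds, 240 samples (one hour)
    esxtop -b -d 15 -n 240 > /tmp/esxtop_baseline.csv

Within Windows Server 2012 guests, perfmon data collector sets or the typeperf command-line utility can log equivalent processor, memory, disk, and network counters on the same cadence.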

Detailed information about these tools is available in the documentation for each tool. Keep in mind that each VSPEX Proven Infrastructure provides a guaranteed level of performance based upon the number of Reference virtual machines deployed and their defined workload.

Networking

Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, at the fabric (switch) level, and, if network file or block protocols such as NFS/CIFS/SMB are implemented, at the storage level. At the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latencies, and I/O sizes. Capture additional data from network card or HBA utilities.

From the fabric perspective, tools that monitor the switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Network storage protocols are discussed in the following section.

Storage

Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. The tools provided with the VNX family of storage arrays offer an easy yet powerful way to gain insight into how the underlying storage components are operating. For both block and file protocols, there are several key areas to focus on, including:

- Capacity
- IOPS
- Latency
- SP utilization

For CIFS/SMB/NFS protocols, also monitor the following components:

- Data Mover CPU and memory usage
- File system latency
- Network interface throughput (in and out)

Additional considerations (though primarily from a tuning perspective) include:

- I/O size
- Workload characteristics
- Cache utilization

These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers additional guidance on the subject through EMC Online Support.

VNX resource monitoring guidelines

Monitor the VNX with the Unisphere GUI, which is accessible by opening an HTTPS session to the Control Station IP address. The VNX family is a unified storage platform that provides both block storage and file storage access through a single entity. Monitoring is divided into two parts:

- Monitoring block storage resources
- Monitoring file storage resources

Monitoring block storage resources

This section explains how to use Unisphere to monitor block storage resource usage, including capacity, IOPS, and latency.

Capacity

In Unisphere, two panels display capacity information. These panels provide a quick assessment of the overall free space available within the configured LUNs and underlying storage pools. For block, sufficient free storage should remain in the configured pools to allow for anticipated growth and activities such as snapshot creation. Configure threshold alerts to warn storage administrators when capacity use rises above 80 percent; in that case, auto-expansion may need to be adjusted or additional space allocated to the pool. If LUN utilization is high, reclaim space or allocate additional space.

To set capacity threshold alerts for a specific pool, complete the following steps:

1. Select the pool and click Properties > Advanced tab.
2. In the Storage Pool Alerts area, choose a number for the Percent Full Threshold of this pool, as shown in Figure 50.
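For administrators who prefer a command-line spot check, the following sketch assumes the Unisphere CLI (naviseccli) is installed on a management host and that <sp_a_ip> is a placeholder for an SP management address; it is an optional complement to the Unisphere steps above, not a replacement:

    # List the configured storage pools along with their capacity details
    naviseccli -h <sp_a_ip> storagepool -list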

Figure 50. Storage pool alerts

To drill down into capacity for block, complete the following steps:

1. In Unisphere, select the VNX system to examine.
2. Select Storage > Storage Configurations > Storage Pools. This opens the Storage Pools panel.
3. Examine the columns titled Free Capacity and % Consumed, as shown in Figure 51.

Figure 51. Storage pools panel

Monitor capacity at the Storage Pool level and at the LUN level:

1. Click Storage and select LUNs. This opens the LUN panel.
2. Select a LUN to examine and click Properties, which displays detailed LUN information, as shown in Figure 52.
3. Verify the LUN Capacity area of the dialog box. User Capacity is the total physical capacity available to all thin LUNs in the pool. Consumed Capacity is the total physical capacity currently assigned to all thin LUNs.

Figure 52. LUN property dialog box

Examine capacity alerts, along with all other system events, by opening the Alerts panel and the SP Event Logs panel, both of which are accessed under the Monitoring and Alerts panel, as shown in Figure 53.

Figure 53. Monitoring and Alerts panel

IOPS

The effects of an I/O workload serviced by an improperly configured storage system, or one whose resources are exhausted, can be felt system wide. Monitoring the IOPS that the storage array services includes looking at metrics from the host ports in the SPs, along with the requests serviced by the back-end disks. The VSPEX solutions are carefully sized to deliver a certain performance level for a particular workload level; ensure that IOPS do not exceed design parameters.

Statistical reporting for IOPS (along with other key metrics) can be examined by opening the Statistics for Block panel: select VNX > System > Monitoring and Alerts > Statistics for Block. Monitor the statistics online or offline using Unisphere Analyzer, which requires a license.

Another metric to examine is Total Bandwidth (MB/s). An 8 Gb/s front-end SP port can process 800 MB per second, and the average bandwidth must not exceed 80 percent of the link bandwidth under normal operating conditions.

IOPS delivered to the LUNs are often higher than those delivered by the hosts. This is particularly true with thin LUNs, as there is additional metadata associated with managing the I/O streams. Unisphere Analyzer shows the IOPS on each LUN, as shown in Figure 54.
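As a quick illustrative calculation of that guideline: 0.8 x 800 MB/s = 640 MB/s, so sustained throughput approaching roughly 640 MB/s on a single 8 Gb/s front-end port is a signal to rebalance the load across additional ports or hosts.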

Figure 54. IOPS on the LUNs

Certain RAID levels also impart write penalties that create additional back-end IOPS. Examine the IOPS delivered to (and serviced from) the underlying physical disks, which can also be viewed in Unisphere Analyzer, as shown in Figure 55. The rules of thumb for drive performance are shown in Table 31.

Table 31. Rules of thumb for drive performance

- 15k rpm SAS: 180 IOPS
- 10k rpm SAS: 120 IOPS
- NL-SAS: 80 IOPS
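The workload mix in the following example is hypothetical, and the write penalty of 4 is the commonly cited figure for small random writes on RAID 5; it is included only to show how the arithmetic works. A 1,000 IOPS workload with a 2:1 read/write ratio translates to roughly 667 read IOPS and 333 write IOPS at the front end, but 667 + (4 x 333), or about 2,000 IOPS, at the back end; against the 180 IOPS rule of thumb above, that is on the order of 2,000 / 180, or roughly 12 15k rpm SAS drives, before FAST Cache or FAST VP effects are considered.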

Figure 55. IOPS on the drives

Latency

Latency is the byproduct of delays in processing I/O requests. This section focuses on monitoring storage latency, specifically block-level I/O. Using procedures similar to those in the previous section, view the latency at the LUN level, as shown in Figure 56.

Figure 56. Latency on the LUNs

Latency can be introduced anywhere along the I/O stream, from the application layer, through the transport, and out to the final storage devices; determining the precise causes of excessive latency requires a methodical approach.

Excessive latency in an FC network is uncommon. Unless there is a defective component such as an HBA or cable, delays introduced at the network fabric layer are normally the result of misconfigured switching fabrics; latency within an FC environment is more typically caused by an overburdened storage array. Focus primarily on the LUNs and the underlying disk pools' ability to service I/O requests. Requests that cannot be serviced are queued, which introduces latency.

The same paradigm applies to Ethernet-based protocols such as iSCSI and FCoE. However, additional factors come into play because these storage protocols use Ethernet as the underlying transport. Isolate the network traffic (either physically or logically) used for storage, and preferably implement some form of Quality of Service (QoS) in a shared/converged fabric. If network problems are not introducing excessive latency, examine the storage array.

In addition to overburdened disks, excessive SP utilization can also introduce latency. SP utilization levels greater than 80 percent indicate a potential problem. Background processes such as replication, deduplication, and snapshots all compete for SP resources; monitor these processes to ensure they do not cause SP resource exhaustion. Possible mitigation techniques include staggering background jobs, setting replication limits, and adding more physical resources or rebalancing the I/O workloads. Growth may also mandate moving to more powerful hardware.

For SP metrics, examine the data under the SP tab of Unisphere Analyzer, as shown in Figure 57. Review metrics such as Utilization (%), Queue Length, and Response Time (ms). High values for any of these metrics indicate that the storage array is under duress and likely requires mitigation. Table 32 shows the best practices recommended by EMC.

Table 32. Best practices for performance monitoring: thresholds for Utilization (%), Response Time (ms), and Queue Length

Figure 57. SP utilization

Monitoring file storage resources

File-based protocols such as NFS and CIFS/SMB involve additional management processes beyond those for block storage. Data Movers, the hardware components that provide an interface between NFS and CIFS/SMB users and the SPs, provide these management services on VNX Unified systems. Data Movers process file protocol requests on the client side and convert the requests to the appropriate SCSI block semantics on the array side. These additional components and protocols introduce additional monitoring requirements, such as Data Mover network link utilization, memory utilization, and Data Mover processor utilization.

To examine Data Mover metrics in the Statistics for File panel, select VNX > System > Monitoring and Alerts > Statistics for File. Clicking the Data Mover link displays the summary metrics shown in Figure 58. Usage levels in excess of 80 percent indicate potential performance concerns and likely require mitigation through Data Mover reconfiguration, additional physical resources, or both.
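As one optional command-line alternative for a quick spot check (server_2 is a placeholder for the Data Mover name, and the exact output fields vary by VNX OE release):

    # On the Control Station: display Data Mover CPU and memory usage counters
    server_sysstat server_2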

Figure 58. Data Mover statistics

Select Network Device from the Statistics panel to observe front-end network statistics. The Network Device Statistics window appears, as shown in Figure 59. If throughput figures exceed 80 percent of the link bandwidth to the clients, configure additional links to relieve the network saturation.

Figure 59. Front-end Data Mover network statistics

Capacity

Similar to block storage monitoring, Unisphere has a statistics panel for file storage. Select Storage > Storage Configurations > Storage Pools for File to check file storage space utilization at the pool level, as shown in Figure 60.

Figure 60. Storage pools for file panel

Monitor capacity at the pool and file system level:

1. Click Storage > File Systems. The File Systems window appears, as shown in Figure 61.

Figure 61. File systems panel

2. Select a file system to examine and click Properties, which displays detailed file system information, as shown in Figure 62.
3. Examine the File Storage area for Used and Free Capacity.
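The same information can also be spot-checked from the Control Station command line; the following sketch assumes server_2 is the Data Mover name (a placeholder):

    # Report capacity and free space for the file systems mounted on the Data Mover
    server_df server_2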

Figure 62. File system property panel

IOPS

In addition to block storage IOPS, Unisphere also provides the ability to monitor file system IOPS. Select System > Monitoring and Alerts > Statistics for File > File System I/O, as shown in Figure 63.

Figure 63. File system performance panel

Latency

To observe file system latency, select System > Monitoring and Alerts > Statistics for File > NFS in Unisphere, and examine the value for NFS: Average call time, as shown in Figure 64.

Figure 64. File storage all performance panel
