EMC VSPEX PRIVATE CLOUD


EMC VSPEX PRIVATE CLOUD: Microsoft Windows Server 2012 R2 with Hyper-V, Enabled by EMC XtremIO and EMC Data Protection

EMC VSPEX

Abstract: This guide describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with Microsoft Windows Server 2012 R2 with Hyper-V and EMC XtremIO all-flash array technology.

July 2015

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA. Published July 2015.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines, Enabled by EMC XtremIO and EMC Data Protection

Part Number: H

Contents

Chapter 1: Executive Summary (page 10)
- Introduction
- Target audience
- Document purpose
- Business benefits

Chapter 2: Solution Overview (page 13)
- Introduction
- Virtualization: Private cloud foundation
- Compute
- Network
- Storage: Challenges, Scalability, Operational agility, Deduplication, Thin provisioning, Data protection, Microsoft ODX support, EMC ViPR integration, API support, Benefits of using XtremIO

Chapter 3: Solution Technology Overview (page 19)
- Overview
- VSPEX Proven Infrastructures
- Key components
- Virtualization layer: Overview, Microsoft Hyper-V, Virtual Fibre Channel ports, Microsoft System Center Virtual Machine Manager, High availability with Hyper-V Failover Clustering, Hyper-V Replica, Cluster-Aware Updating, EMC Storage Integrator for Windows Suite
- Compute layer
- Network layer
- Storage layer: EMC XtremIO
- EMC Data Protection: Overview, EMC Avamar deduplication, EMC Data Domain deduplication storage systems, EMC RecoverPoint
- Other technologies: Overview, EMC PowerPath, EMC ViPR Controller, Public-key infrastructure

Chapter 4: Solution Architecture Overview (page 33)
- Overview
- Solution architecture: Overview, Logical architecture, Key components, Hardware resources, Software resources
- Server configuration guidelines: Overview, Intel Ivy Bridge updates, Hyper-V memory virtualization, Memory configuration guidelines
- Network configuration guidelines: Overview, VLANs, Enable jumbo frames (for iSCSI)
- Storage configuration guidelines: Overview, XtremIO X-Brick scalability, Hyper-V storage virtualization, VSPEX storage building blocks
- High availability and failover: Overview, Virtualization layer, Compute layer, Network layer, Storage layer (XtremIO), Data Protection
- Backup and recovery configuration guidelines

Chapter 5: Environment Sizing (page 51)
- Overview
- Reference workload: Overview, Defining the reference workload, Scaling out
- Reference workload application: Overview, Example 1: Custom-built application, Example 2: Point-of-sale system, Example 3: Web server, Example 4: Decision-support database, Summary of examples
- Quick assessment: Overview, CPU requirements, Memory requirements, Storage performance requirements (IOPS, I/O size, I/O latency, Unique data), Storage capacity requirements, Determining equivalent reference virtual machines, Fine-tuning hardware resources, EMC VSPEX Sizing Tool

Chapter 6: VSPEX Solution Implementation (page 64)
- Overview
- Pre-deployment tasks: Deployment resources checklist, Customer configuration data
- Network implementation: Preparing the network switches, Configuring the infrastructure network, Configuring VLANs, Configuring jumbo frames (iSCSI only), Completing network cabling
- Microsoft Hyper-V hosts installation and configuration: Overview, Installing the Windows hosts, Installing Hyper-V and configuring failover clustering, Configuring Windows host networking, Installing and configuring multipath software, Planning virtual machine memory allocations
- Microsoft SQL Server database installation and configuration: Overview, Creating a virtual machine for SQL Server, Installing Microsoft Windows on the virtual machine, Installing SQL Server, Configuring SQL Server for SCVMM
- System Center Virtual Machine Manager server deployment: Overview, Creating a SCVMM host virtual machine, Installing the SCVMM guest OS, Installing the SCVMM server, Installing the SCVMM Admin Console, Installing the SCVMM agent locally on a host, Adding the Hyper-V cluster to SCVMM
- Storage array preparation and configuration: Overview, Configuring the XtremIO array, Preparing the XtremIO array, Setting up the initial XtremIO configuration, Creating the CSV disk
- Creating a virtual machine in SCVMM: Performing partition alignment, Creating a template virtual machine, Deploying virtual machines from the template

Chapter 7: Solution Verification (page 82)
- Overview
- Post-installation checklist
- Deploying and testing a single virtual machine
- Verifying solution component redundancy

Chapter 8: System Monitoring (page 85)
- Overview
- Key areas to monitor: Performance baseline, Servers, Networking, Storage
- XtremIO resource monitoring guidelines: Monitoring the storage, Monitoring the performance, Monitoring the hardware elements, Using advanced monitoring

Appendix A: Reference Documentation (page 95)
- EMC documentation
- Other documentation

Appendix B: Customer Configuration Worksheet (page 98)
- Customer configuration worksheet

Appendix C: Server Resource Component Worksheet (page 101)
- Server resources component worksheet

Figures
- Figure 1. I/O randomization brought by server virtualization
- Figure 2. VSPEX Proven Infrastructures
- Figure 3. Compute layer flexibility examples
- Figure 4. Example of highly available network design
- Figure 5. Logical architecture for the solution
- Figure 6. Hypervisor memory consumption
- Figure 7. Required networks for XtremIO storage
- Figure 8. Single X-Brick XtremIO storage
- Figure 9. Cluster configuration as single and multiple X-Brick clusters
- Figure 10. Hyper-V virtual disk types
- Figure 11. XtremIO Starter X-Brick building block for 300 virtual machines
- Figure 12. XtremIO single X-Brick building block for 700 virtual machines
- Figure 13. High availability at the virtualization layer
- Figure 14. Redundant power supplies
- Figure 15. Network layer high availability
- Figure 16. XtremIO high availability
- Figure 17. Resource pool flexibility
- Figure 18. Required resources from the RVM pool
- Figure 19. Aggregate resource requirements - Stage
- Figure 20. Customizing server resources
- Figure 21. Sample Ethernet network architecture
- Figure 22. XtremIO initiator group
- Figure 23. Adding volume
- Figure 24. Volume summary
- Figure 25. Volumes and initiator group
- Figure 26. Mapping volumes
- Figure 27. Monitoring the efficiency
- Figure 28. Volume capacity
- Figure 29. Physical capacity
- Figure 30. Monitoring the performance (IOPS)
- Figure 31. Data and management cable connectivity
- Figure 32. X-Brick properties
- Figure 33. Monitoring the SSDs

Tables
- Table 1. Solution hardware
- Table 2. Solution software
- Table 3. Hardware resources for the compute layer
- Table 4. XtremIO scalable scenarios with virtual machines
- Table 5. VSPEX Private Cloud RVM workload
- Table 6. Blank worksheet row
- Table 7. Reference virtual machine resources
- Table 8. Sample worksheet row
- Table 9. Example applications - Stage
- Table 10. Example applications - Stage
- Table 11. Server resource component totals
- Table 12. Deployment process overview
- Table 13. Pre-deployment tasks
- Table 14. Deployment resources checklist
- Table 15. Tasks for switch and network configuration
- Table 16. Tasks for server installation
- Table 17. Tasks for SQL Server database setup
- Table 18. Tasks for SCVMM configuration
- Table 19. Tasks for XtremIO configuration
- Table 20. Storage allocation for block data
- Table 21. Testing the installation
- Table 22. Advanced monitor parameters
- Table 23. Common server information
- Table 24. ESXi server information
- Table 25. X-Brick information
- Table 26. Network infrastructure information
- Table 27. VLAN information
- Table 28. Service accounts
- Table 30. Blank worksheet for server resource totals

Chapter 1: Executive Summary

This chapter presents the following topics: Introduction, Target audience, Document purpose, and Business benefits.

Introduction

Server virtualization has been a driving force in data center efficiency gains for the past decade. However, mixing multiple virtual machine workloads randomizes the I/O presented to the storage array, which stalls the virtualization of I/O-intensive workloads.

EMC VSPEX Proven Infrastructures are optimized for virtualizing business-critical applications. VSPEX provides modular solutions built with technologies that enable faster deployment, greater simplicity, greater choice, higher efficiency, and lower risk. The VSPEX Private Cloud architecture provides your customers with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the Microsoft Windows Server 2012 R2 with Hyper-V virtualization layer, backed by the highly available EMC XtremIO all-flash array family. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

XtremIO effectively addresses the effects of virtualization on I/O-intensive workloads with impressive random I/O performance and consistently ultra-low latency. XtremIO also brings new levels of speed and provisioning agility to virtualized environments, with advanced data services that include space-efficient snapshots, inline data deduplication, and thin provisioning.

Target audience

You must have the necessary training and background to install and configure Microsoft Hyper-V, the EMC XtremIO storage systems, and the associated infrastructure as required by this implementation. External references are provided where applicable, and you should be familiar with these documents. You should also be familiar with the infrastructure and database security policies of the customer installation.
If you are a partner selling and sizing a private cloud for Microsoft Hyper-V infrastructure, pay particular attention to the first four chapters of this guide. After purchase, the implementers of the solution should focus on the configuration guidelines in Chapter 6, the solution verification in Chapter 7, and the appropriate references and appendices.

Document purpose

This guide includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific customer engagements, and instructions for effectively deploying and monitoring the system. The EMC VSPEX Private Cloud for Microsoft Hyper-V solution for up to 700 virtual machines described in this guide is based on the XtremIO storage array and a defined reference workload. The guide describes the minimum server capacity required for

CPU, memory, and network interfaces when sizing this solution. You can select server and networking hardware that meets or exceeds these minimum requirements.

A private cloud architecture is a complex system offering. This guide makes the solution setup easier by providing you with lists of prerequisite software and hardware materials, step-by-step sizing guidance and worksheets, and verified deployment steps. After all components have been installed and configured, verification tests and monitoring instructions ensure that the systems of your private cloud are operating properly. Follow the instructions in this guide to ensure an efficient and painless journey to the cloud.

Business benefits

VSPEX solutions are built with proven technologies to create complete virtualization solutions that enable you to make an informed decision about the hypervisor, server, network, and storage environment. The VSPEX Private Cloud for Microsoft Hyper-V reduces the complexity of configuring every component of a traditional deployment model. The solution simplifies integration management while maintaining the application design and implementation options. It also provides unified administration while enabling adequate control and monitoring of process separation.
The business benefits of the VSPEX Private Cloud for Microsoft Hyper-V architecture include:
- An end-to-end virtualization solution that effectively uses the capabilities of the all-flash array infrastructure components
- Efficient virtualization of 700 reference virtual machines (RVMs) for varied customer use cases
- A reliable, flexible, and scalable reference design
- Secure multitenancy services for both intra- and inter-company departments and organizations
- Server consolidation from isolated resources to a shared, flexible resource model that further simplifies management
- A single environment to run mixed workloads and tiered applications
- An extensible platform that provides complete self-service portal functionality to users
- Optional implementation of the Federation Enterprise Hybrid Cloud offering on this platform to provide full-service cloud functionality
- Optional integration with configuration management tools such as Docker Orchestration or DevOps to simplify management and maintenance of the cloud platform

Chapter 2: Solution Overview

This chapter presents the following topics: Introduction, Virtualization, Compute, Network, and Storage.

Introduction

The VSPEX Private Cloud for Microsoft Hyper-V solution provides a complete cloud-enabled system architecture capable of supporting up to 700 RVMs with a redundant server and network topology and highly available storage. The core components that make up this solution are virtualization, compute, network, and storage.

Virtualization

Microsoft Hyper-V is a key virtualization platform. It provides flexibility and cost savings by enabling you to consolidate large, inefficient, siloed server farms into nimble, reliable cloud infrastructures. Features such as Live Migration, which enables a virtual machine to move between different servers with no disruption to the guest operating system, and Dynamic Optimization, which performs live migrations automatically to balance loads, make Hyper-V a solid business choice. With the release of Windows Server 2012 R2, a Microsoft virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).

Private cloud foundation

Cloud computing is the next logical progression from virtualization and is becoming mainstream in the modern data center. Cloud computing provides a hardware and software platform that is flexible in how users perceive and operate within the environment. This VSPEX reference architecture provides the methods to guarantee a private cloud environment with a known level of performance and availability.

In a private cloud environment, organizations manage their virtual machine environment internally. Virtual machines can be moved seamlessly throughout the private cloud platform. The platform can be extended to offer multitenancy by adding software components. Full self-service provisioning, complete with chargeback, cost control, and workflow automation, can also be layered on.
The platform can be further extended to offer hybrid cloud services, which enable virtual machines to run locally in the private cloud or remotely in a service provider's public cloud environment. Virtual machines can be moved between the two physical platforms without interruption of services. The VSPEX reference architecture serves as the core pillar for all of these services.

Compute

VSPEX provides the flexibility to design and implement a customer's choice of server components. The infrastructure must have sufficient:
- CPU cores and memory to support the required number and types of virtual machines
- Network connections to enable redundant connectivity to the network switches

- Capacity to enable the environment to withstand a server failure and fail over within the environment

Network

VSPEX provides the flexibility to design and implement a customer's choice of network components. The infrastructure must provide:
- Redundant network links for the hosts, switches, and storage
- Traffic isolation based on industry-accepted best practices
- Support for link aggregation
- Network switches with a minimum non-blocking backplane capacity that is sufficient for the target number of virtual machines and their associated workloads. EMC recommends enterprise-class network switches with advanced features such as quality of service.

Storage

Challenges

Virtualization: In highly virtualized environments, where a large number of virtual machines run on a cluster of servers that share a common storage pool, the I/O requests from all the disparate virtual machines are randomized before they reach the storage, as shown in Figure 1. Traditional storage architectures cannot handle these highly random I/O requests and introduce unacceptable application and virtual machine latency. This is known as the I/O blender effect.

Figure 1. I/O randomization brought by server virtualization

Storage efficiency challenges: The challenge for all-flash arrays is that their high I/O performance alone can often be insufficient for virtualized environments. Additional technologies that drive high

storage efficiencies are also required. Storage efficiency is important because storage infrastructure acquisition and operations costs are among the top challenges of cloud-based virtual machine environments. To achieve storage efficiency, customers must maximize available storage capacity and processing resources, which often turn out to be competing resources. Storage efficiency is key to enabling the promise of elastic scalability, pay-as-you-grow efficiency, and a predictable cost structure, while increasing productivity and innovation. Technologies such as data compression and deduplication are key enablers of efficiency from a capacity standpoint, while simple, insightful management tools reduce management complexity. Resiliency and availability features, especially if enabled by default, further increase efficiency.

While storage efficiency is important, a private cloud environment typically consolidates many disparate virtual machines with vastly different performance profiles and criticality. Customers need a storage platform that can fulfill the performance demands, enhance storage efficiency by reducing the data footprint, and enable agile provisioning and management of service delivery.

Scalability

An agile, virtualized infrastructure must also scale in the multiple dimensions of performance, capacity, and operations. It must be able to scale efficiently, without sacrificing performance and resiliency or requiring additional IT resources to manage the environment.

Operational agility

Agility is a major reason why organizations choose to virtualize their infrastructures. However, IT responsiveness often slows exponentially as virtual environments grow. Resources typically cannot be deployed or serviced quickly enough to meet rapidly changing business requirements.
Bottlenecks occur because organizations do not have the right tools to quickly determine the capacity and health of their physical and virtual resources. While enterprise users want responsive deployment of business applications to meet changing business requirements, the enterprise is often unable to rapidly deploy or update virtual machines and storage on a large scale. Standard virtual machine provisioning or cloning methods, as commonly implemented with flash arrays, can be expensive because full copies of virtual machines can require 50 GB or more of storage for each copy. In a large-scale cloud data center, where shared storage may be cloning up to hundreds of virtual machines each hour while concurrently delivering I/O to active virtual machines, cloning can become a major bottleneck for data center performance and operational efficiency.

Most storage arrays are designed to be statically installed and run, yet virtualized application environments are naturally dynamic and variable. Change and growth of virtualized workloads cause organizations to actively redistribute workloads across storage array resources for load balancing, to avoid running out of space or losing performance. This ongoing load balancing is usually a manual, iterative task that is often costly and time-consuming. As a result, storage arrays that support large-scale virtualization environments require optimal and inherent data placement to ensure

maximum utilization of both capacity and performance without any planning demands.

Deduplication

Storage arrays can accumulate duplicate data over time, which increases management and other costs. In particular, large-scale virtual machine environments create large amounts of duplicate data when virtual machines are deployed by cloning existing virtual machines, or when the same OS and applications are installed repeatedly. Deduplication eliminates duplicate data by replacing it with pointers to unique instances of the data. This deduplication process can be implemented after I/O has been de-staged to disk, or it can be done in real time, which actively reduces the amount of redundant data written to the array.

Thin provisioning

Thin provisioning is a popular technique that improves storage utilization. Storage capacity is consumed only when data is written, instead of when storage volumes are provisioned. Thin provisioning removes the need to overprovision storage up front to meet anticipated future capacity demands, and enables you to allocate storage on demand from an available storage pool.

Data protection

While storage arrays have traditionally supported several RAID data protection levels, these arrays required storage administrators to choose between data protection and performance for specific workloads. The challenge for large-scale virtual environments is that the shared storage system stores data for hundreds or thousands of virtual machines with different workloads. Optimal data protection for virtualized environments requires that arrays support data protection schemes that combine the best attributes of existing RAID levels while avoiding the drawbacks. Because flash endurance is a special consideration in an all-flash array, such a scheme should maximize the service life of the array's solid-state drives (SSDs) while complementing the high I/O performance of flash media.
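The deduplication and thin provisioning concepts described here can be illustrated with a short sketch. This is a toy model only, not XtremIO's actual implementation or block geometry: physical capacity is consumed only when a block is written, and a block whose content fingerprint already exists is stored once and shared by reference.

```python
import hashlib

BLOCK_SIZE = 8192  # illustrative block size, not a real array's geometry

class ThinDedupStore:
    """Toy thin-provisioned block store with inline deduplication."""

    def __init__(self, provisioned_blocks):
        self.provisioned_blocks = provisioned_blocks  # logical size promised
        self.blocks = {}      # content fingerprint -> unique block data
        self.volume_map = {}  # (volume, lba) -> content fingerprint

    def write(self, volume, lba, data):
        # Inline dedup: fingerprint the block before it is stored; an
        # already-seen block consumes no additional physical capacity.
        fingerprint = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(fingerprint, data)
        self.volume_map[(volume, lba)] = fingerprint

    def read(self, volume, lba):
        return self.blocks[self.volume_map[(volume, lba)]]

    @property
    def physical_blocks_used(self):
        # Thin provisioning: capacity is consumed on write, not at creation.
        return len(self.blocks)

store = ThinDedupStore(provisioned_blocks=1_000_000)
golden_image = b"\x01" * BLOCK_SIZE

# Ten "cloned" VMs write the same OS block: one physical copy is kept.
for vm in range(10):
    store.write(f"vm{vm}", lba=0, data=golden_image)

print(store.physical_blocks_used)  # → 1
```

In this model, ten logical writes of the same cloned OS block consume one physical block, which is the essence of why cloned virtual machine farms deduplicate so well.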
Microsoft ODX support

XtremIO 4.0, in beta at the time of publication of this guide, supports Microsoft Offloaded Data Transfers (ODX) technology, which offloads intra-array data movement requests to the array itself. This frees up compute and network resources and reduces response times to data transfer requests, which can drastically reduce virtual machine provisioning and snapshot creation times. For additional information on ODX, refer to the Microsoft Windows Dev Center Library topic Offloaded data transfers.

EMC ViPR integration

EMC ViPR integrates with Microsoft System Center Virtual Machine Manager (SCVMM) and Orchestrator APIs to simplify storage management and reduce the need for multiple management tools to address common management tasks. Using ViPR, storage provisioning and management can be done within SCVMM, and common tasks can be done within Orchestrator.

API support

RESTful API support exposes advanced functionality of the XtremIO 4.0 storage resources for customized workflows and for self-service portal development and integration without heavy coding efforts. This API support gives orchestration architects and developers access to a wide range of features without having to develop cumbersome wrappers or one-off drivers.

Benefits of using XtremIO

To meet the multiple demands of a large-scale virtualized data center, you need a storage solution that provides superb performance and capacity scale-out to accommodate infrastructure growth, together with:
- Built-in data reduction features
- Thin provisioning for capacity efficiency and cost mitigation
- Flash-optimized data protection techniques
- Near-instantaneous virtual machine provisioning and cloning
- Automated load balancing
- Integration with key monitoring and orchestration tools
- Consistent, predictable, highly random I/O performance

The XtremIO all-flash array is built to unlock the full performance potential of flash storage and to deliver array-based inline data services that make it an optimal storage solution for large-scale, agile, and dynamic virtual environments.
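As a sketch of what such REST-based automation looks like, the Python snippet below builds (without sending) a JSON request that a workflow might issue to create a volume. The host name, endpoint path, field names, and basic-auth scheme here are illustrative assumptions, not documented XtremIO API details; consult the XtremIO RESTful API guide for the actual interface.

```python
import base64
import json
import urllib.request

# Hypothetical values for illustration; real hosts, paths, and field
# names come from the XtremIO RESTful API documentation.
XMS_HOST = "xms.example.local"
VOLUMES_ENDPOINT = f"https://{XMS_HOST}/api/json/types/volumes"

def build_create_volume_request(name, size_kb, username, password):
    """Build (but do not send) an authenticated volume-creation request."""
    body = json.dumps({"vol-name": name, "vol-size": f"{size_kb}K"}).encode()
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        VOLUMES_ENDPOINT,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",  # basic auth assumed here
        },
    )

req = build_create_volume_request("vspex_csv_01", 1_048_576, "admin", "secret")
print(req.get_method(), req.full_url)
```

A self-service portal or an Orchestrator runbook would send a request of this shape with `urllib.request.urlopen` (or any HTTP library) and act on the JSON response, which is how the "without heavy coding efforts" claim plays out in practice.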

Chapter 3: Solution Technology Overview

This chapter presents the following topics: Overview, VSPEX Proven Infrastructures, Key components, Virtualization layer, Compute layer, Network layer, Storage layer, EMC Data Protection, and Other technologies.

Overview

This solution uses the XtremIO all-flash array and Microsoft Hyper-V to provide storage and server virtualization in a private cloud. The solution has been designed and proven by EMC to deliver the virtualization, server, network, and storage resources that customers need to deploy up to 700 RVMs and the associated shared storage. This guide explains how to scale the solution infrastructure for larger environments or as the environment grows. The following sections describe the components in more detail.

VSPEX Proven Infrastructures

EMC has joined forces with the providers of IT infrastructure to create a complete virtualization solution that accelerates the deployment of the private cloud. VSPEX enables customers to accelerate their IT transformation with faster deployment, greater simplicity and choice, higher efficiency, and lower risk. VSPEX validation by EMC ensures predictable performance and enables customers to select technology that uses their existing or newly acquired IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a virtual infrastructure for customers who want the simplicity that is characteristic of truly converged infrastructures, with more choice in individual stack components.

VSPEX Proven Infrastructures, as shown in Figure 2, are modular, virtualized infrastructures validated by EMC and delivered by EMC VSPEX partners. These infrastructures include virtualization, server, network, and storage layers. Partners can choose the virtualization, server, and network technologies that best fit a customer's environment, while XtremIO storage systems and technologies provide the storage layer.

Figure 2. VSPEX Proven Infrastructures

Key components

This section describes the following key components of this solution:
- Virtualization layer: Decouples the physical implementation of resources from the applications that use them, so that the application's view of the available resources is no longer directly tied to the hardware. This enables many key features of the private cloud concept. This solution uses Microsoft Hyper-V for the virtualization layer.
- Compute layer: Provides memory and processing resources for the virtualization layer software and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources and enables you to implement the solution by using any server hardware that meets these requirements.
- Network layer: Connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables you to implement the solution by using any network hardware that meets these requirements.
- Storage layer: Critical for the implementation of server virtualization. With multiple hosts accessing shared data, many use cases can be implemented. The XtremIO all-flash array used in this solution provides high performance, enables rapid service and virtual machine provisioning, and supports a number of capacity efficiency and data services capabilities.
- Data protection: Solution components that provide protection when the data in the primary system is deleted, damaged, or unusable. For more information, see EMC Data Protection.
- Security layer: Optional solution component that provides customers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system. This solution uses RSA SecurID to provide secure user authentication.
For more details about the reference architecture components, see Solution architecture.

Virtualization layer

Overview

The virtualization layer decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and enables the system to change physically without affecting the hosted applications. In a server virtualization or private cloud use case, the virtualization layer enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

Microsoft Hyper-V

Microsoft Hyper-V is a Windows Server role that was introduced in Windows Server 2008. Hyper-V virtualizes computer hardware resources, such as CPU, memory, storage, and networking. This transformation creates fully functional virtual machines that run their own operating systems and applications, like physical computers.

Hyper-V works with Failover Clustering and Cluster Shared Volumes (CSVs) to provide high availability in a virtualized infrastructure. Live migration and live storage migration enable seamless movement of virtual machines or virtual machine files between Hyper-V servers or storage systems, transparently and with minimal performance impact.

Virtual Fibre Channel ports

Windows Server 2012 R2 provides virtual Fibre Channel (FC) ports within a Hyper-V guest operating system. The virtual FC port uses the standard N_Port ID Virtualization (NPIV) process to address the virtual machine WWNs within the Hyper-V host's physical host bus adapter (HBA). This gives virtual machines direct access to external storage arrays over FC, enables clustering of guest operating systems over FC, and offers an important new storage option for the hosted servers in the virtual infrastructure. Virtual FC in Hyper-V guest operating systems also supports related features, such as virtual SANs, live migration, and multipath I/O (MPIO).

Prerequisites for virtual FC include:
- One or more installations of Windows Server 2012 R2 with the Hyper-V role
- One or more FC HBAs installed on the server, each with an appropriate HBA driver that supports virtual FC
- An NPIV-enabled SAN
- Virtual machines using the virtual FC adapter must run Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2 as the guest operating system
Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized management platform for the virtualized data center. SCVMM enables administrators to configure and manage the virtualized host, networking, and storage resources, and to create and deploy virtual machines and services to private clouds. SCVMM simplifies provisioning, management, and monitoring in the Hyper-V environment.

High availability with Hyper-V Failover Clustering

The Windows Server 2012 R2 Failover Clustering feature provides high availability for Microsoft Hyper-V. High availability is affected by both planned and unplanned downtime, and Failover Clustering significantly increases the availability of virtual machines during both. Configure Windows Server 2012 R2 Failover Clustering on the Hyper-V hosts to monitor virtual machine health and to migrate virtual machines between cluster nodes. The advantages of this configuration are:
- Enables migration of virtual machines to a different cluster node if the cluster node where they reside must be updated, changed, or rebooted.

- Enables other members of the Windows Failover Cluster to take ownership of the virtual machines if the cluster node where they reside suffers a failure or significant degradation.
- Minimizes downtime due to virtual machine failures. Windows Server Failover Clustering detects virtual machine failures and automatically takes steps to recover the failed virtual machine, allowing the virtual machine to be restarted on the same host server or migrated to a different host server.

Hyper-V Replica

Hyper-V Replica, introduced in Windows Server 2012, provides asynchronous virtual machine replication over the network from a Hyper-V host at a primary site to another Hyper-V host at a replica site. Hyper-V Replica protects business applications in the Hyper-V environment from the downtime associated with an outage at a single site.

Hyper-V Replica tracks the write operations on the primary virtual machine and replicates the changes to the replica server over the network using HTTP or HTTPS. The amount of network bandwidth required depends on the transfer schedule and the data change rate.

If the primary Hyper-V host fails, you can manually fail over the production virtual machines to the Hyper-V hosts at the replica site. Manual failover brings the virtual machines back to a consistent point from which they can be accessed with minimal impact on the business. After recovery, the primary site can receive changes from the replica site, and you can perform a planned failback to manually revert the virtual machines to the Hyper-V host at the primary site.

Cluster-Aware Updating

Cluster-Aware Updating, introduced in Windows Server 2012, provides a way to update cluster nodes with little or no disruption. Cluster-Aware Updating transparently performs the following tasks during the update process:
1. Puts one cluster node into maintenance mode and takes it offline (virtual machines are live-migrated to other cluster nodes)
2. Installs the updates
3. Performs a restart, if necessary
4. Brings the node back online (migrated virtual machines are moved back to the original node)
5. Updates the next node in the cluster

The node managing the update process is called the Update Coordinator. The Update Coordinator works in one of two modes:
- Self-updating: Runs on the cluster node being updated
- Remote-updating: Runs on a standalone Windows operating system and remotely manages the cluster update

Cluster-Aware Updating is integrated with Windows Server Update Services, and PowerShell enables automation of the Cluster-Aware Updating process.
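The one-node-at-a-time sequence above can be sketched as a toy orchestration loop. The Node class and helper functions below are hypothetical stand-ins for illustration only; a real deployment uses the Cluster-Aware Updating feature itself rather than custom code.

```python
# Illustrative model of the Cluster-Aware Updating flow: drain one node,
# update it, bring it back, then move on. Not the actual Microsoft CAU API.

class Node:
    def __init__(self, name):
        self.name = name
        self.vms = []
        self.in_maintenance = False
        self.updated = False

def live_migrate(vms, source, targets):
    """Spread the source node's virtual machines across the remaining nodes."""
    for i, vm in enumerate(vms):
        targets[i % len(targets)].vms.append(vm)
    source.vms.clear()

def cluster_aware_update(nodes):
    """Update one node at a time so the cluster stays online throughout."""
    for node in nodes:
        others = [n for n in nodes if n is not node]
        moved = list(node.vms)
        # 1. Drain the node: live-migrate its VMs, then enter maintenance mode.
        live_migrate(moved, node, others)
        node.in_maintenance = True
        # 2-3. Install updates and restart if necessary (simulated here).
        node.updated = True
        # 4. Bring the node back online and move its VMs back.
        node.in_maintenance = False
        for vm in moved:
            for other in others:
                if vm in other.vms:
                    other.vms.remove(vm)
            node.vms.append(vm)
        # 5. The loop then proceeds to the next node.

nodes = [Node("hv1"), Node("hv2"), Node("hv3")]
nodes[0].vms = ["vm-a", "vm-b"]
nodes[1].vms = ["vm-c"]
cluster_aware_update(nodes)
print(all(n.updated for n in nodes))        # True
print(sorted(nodes[0].vms), nodes[1].vms)   # ['vm-a', 'vm-b'] ['vm-c']
```

All nodes end up patched while every virtual machine returns to its original node, which is the property the real feature provides.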

EMC Storage Integrator for Windows Suite

EMC Storage Integrator (ESI) for Windows Suite is a software package with the essential components for storage administrators to provision business applications in less time, monitor storage health with an in-depth storage topology view, and automate storage management with rich scripting libraries.

Administrators can provision block and file storage for Microsoft Windows or Microsoft SharePoint sites by using wizards in ESI. ESI supports the following functions:
- Provisioning, formatting, and presenting drives to Windows servers
- Provisioning new cluster disks and automatically adding them to the cluster
- Provisioning SharePoint storage, sites, and databases in a single wizard

Compute layer

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX solutions state minimum requirements for the number of processor cores and the amount of RAM. The solution can be implemented with two servers or with twenty, and still be considered the same VSPEX solution.

In the example shown in Figure 3, the compute layer requirements for a specific implementation are 25 processor cores and 200 GB of RAM. One customer might implement this with white-box servers containing 16 processor cores and 64 GB of RAM, while another customer might select a higher-end server with 20 processor cores and 144 GB of RAM.
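The server-count arithmetic behind this example can be sketched with a small helper. This is a hypothetical illustration of the sizing rule (cover both the core and the RAM requirement), not part of any VSPEX tooling:

```python
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_per_server_gb):
    """Servers required to satisfy both the core and the RAM requirement."""
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_per_server_gb)
    return max(by_cores, by_ram)

# The 25-core / 200 GB requirement from the example:
print(servers_needed(25, 200, 16, 64))    # 4 white-box servers
print(servers_needed(25, 200, 20, 144))   # 2 higher-end servers
```

Per the high-availability note that follows, each customer would add one more server on top of these counts.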

Figure 3. Compute layer flexibility examples

The first customer needs four of the selected servers, while the other customer needs two.

Note: To enable high availability for the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.

Use the following best practices in the compute layer:
- Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
- If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
- Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This enables the implementation of minimal-downtime upgrades, and tolerance for single-unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be sufficiently flexible to meet your specific needs. Ensure that there are sufficient processor cores and enough RAM per core for the target environment.

Network layer

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and it is required regardless of whether the network infrastructure for the solution already exists or you are deploying it alongside other components of the solution. Figure 4 shows an example of this highly available network topology.

Figure 4. Example of highly available network design

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

XtremIO is a block-only storage platform, and it provides network high availability, or redundancy, by using two ports per storage controller. If a link is lost on a storage controller I/O port, the link fails over to another port, and all network traffic is distributed across the active links.

Storage layer

The storage layer is a key component of any cloud infrastructure solution that serves data generated by applications and operating systems in a data center storage processing system. This VSPEX solution uses XtremIO storage arrays to provide virtualization at the storage layer. The XtremIO platform provides the required storage performance, increases storage efficiency and management flexibility, enhances operational agility, and reduces total cost of ownership.

EMC XtremIO

The EMC XtremIO all-flash array is a clean-sheet design with a revolutionary architecture. It brings together all the necessary and sufficient requirements to enable the agile data center: linear scale-out; inline, all-the-time data services; and rich data center services for the workloads.

The basic hardware building block for these scale-out arrays is the EMC XtremIO X-Brick. Each X-Brick has two active-active controller nodes and a disk array enclosure packaged together with no single point of failure. The EMC XtremIO Starter X-Brick with 13 SSDs can be non-disruptively expanded to a full X-Brick with 25 SSDs without any downtime, and the scale-out cluster can support up to six X-Bricks. The XtremIO platform is designed to optimize the use of flash storage media.
Key attributes of this platform are:
- High levels of I/O performance, particularly for the random I/O workloads that are typical in virtualized environments
- Consistently low (sub-millisecond) latency
- Inline data services that include thin provisioning, deduplication, data compression, and copy data management
- Scale-out architecture that scales capacity and I/O performance in tandem while ensuring consistently low sub-millisecond latency
- A full suite of enterprise array capabilities, such as N-way active controllers, high availability, strong data protection, and thin provisioning
- Integration with EMC solutions for data center services, including business continuity, backup and data protection, and converged infrastructure deployments

Because the XtremIO array has a scale-out design, you can add performance and capacity in a building-block approach, with all building blocks forming a single clustered system. XtremIO storage includes the following components:
- Host adapter ports: Provide host connectivity through the fabric into the array.
- Storage controllers: The compute component of the storage array. Storage controllers handle all aspects of data moving into, out of, and between arrays.
- Disk drives: SSDs that contain the host/application data, and their enclosures.
- InfiniBand switches: Switched, high-throughput, low-latency, scalable network links used in multi-X-Brick configurations, providing quality of service and failover capability. They are used for intra-cluster communication and high-speed data movement.

EMC XtremIO Operating System

The XtremIO storage cluster is managed by the EMC XtremIO Operating System (XIOS). XIOS ensures that the system remains balanced and always delivers the highest levels of performance without any administrator intervention. XIOS:
- Ensures that data is evenly distributed across all SSD and controller resources, providing the highest possible performance and endurance that stands up to demanding workloads for the entire life of the array.
- Eliminates the need to perform the complex configuration and performance-optimization steps required for traditional arrays. There is no need to set RAID levels, determine drive group sizes, set stripe widths, set caching policies, build aggregates, and so on.
- Automatically and optimally configures every volume at all times. I/O performance on existing volumes and data sets automatically increases with larger cluster sizes. Every volume is capable of receiving the full performance potential of the entire XtremIO system.
Standards-based enterprise storage system

The XtremIO system interfaces with Hyper-V hosts using standard FC and iSCSI block interfaces. The system supports complete high-availability features, including support for native Microsoft Multipath I/O, protection against failed SSDs, non-disruptive software and firmware upgrades, no single point of failure, and hot-swappable components.

Real-time, inline data reduction

The XtremIO storage system deduplicates and compresses incoming data in real time, enabling a massive number of virtual machines and a large amount of application data to reside in a small and economical amount of flash capacity. Because of the inline functionality, there is no post-processing of data, which helps to extend the endurance of the SSDs. Furthermore, data reduction on the XtremIO array does not adversely affect I/O operations per second (IOPS) or latency performance; instead, it enhances the performance of the virtualized environment.
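The inline data reduction described above can be modeled, very loosely, as content fingerprinting plus compression: each incoming block is hashed, duplicates are stored only once, and unique blocks are compressed before being written. The class below is a toy sketch for illustration only, using standard Python hashing and compression; it is not XtremIO's actual implementation.

```python
import hashlib
import zlib

class InlineDedupStore:
    """Toy model of inline deduplication plus compression."""
    def __init__(self):
        self.blocks = {}    # fingerprint -> compressed unique block
        self.refs = {}      # fingerprint -> reference count

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()
        if fp in self.blocks:
            self.refs[fp] += 1            # duplicate: nothing new is stored
        else:
            self.blocks[fp] = zlib.compress(block)  # unique: compress inline
            self.refs[fp] = 1
        return fp

    def read(self, fp: str) -> bytes:
        return zlib.decompress(self.blocks[fp])

store = InlineDedupStore()
a = store.write(b"A" * 4096)   # first copy: compressed and stored
b = store.write(b"A" * 4096)   # duplicate: only a reference is added
c = store.write(b"B" * 4096)
print(a == b, len(store.blocks))      # True 2
print(store.read(c) == b"B" * 4096)   # True
```

Because both steps happen at write time, there is no post-processing pass, which mirrors the "inline" property the section describes.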

Scale-out architecture

Using a Starter X-Brick, Microsoft Hyper-V deployments can start small and grow to nearly any required scale by upgrading the Starter X-Brick to a full X-Brick, and then configuring a larger XtremIO cluster if required. The system expands capacity and performance linearly as building blocks are added, making virtualized environments simple to size and manage as demands grow over time.

Massive performance

The XtremIO array is designed to handle very high, sustained levels of small, random, mixed read-and-write I/O, which is typical in virtual environments. It does so with consistent, predictable sub-millisecond latency.

Fast provisioning

XtremIO arrays deliver writable snapshot technology that is space-efficient for both data and metadata. XtremIO snapshots are free from limitations of performance, features, topology, or capacity reservations. With their unique in-memory metadata architecture, XtremIO arrays can rapidly clone virtual machine environments of any size.

Ease of use

The XtremIO storage system requires only a few basic setup steps that can be completed in minutes, with absolutely no tuning or ongoing administration needed to achieve and maintain high performance levels. The XtremIO system can be deployment-ready in less than an hour after delivery.

Security with Data at Rest Encryption (D@RE)

XtremIO securely encrypts all data stored on the all-flash array, delivering protection for regulated use cases in sensitive industries such as healthcare, finance, and government.

Data center economics

XtremIO provides breakthrough total cost of ownership in the virtualized workload environment through its exceptional performance, capacity savings from unique data reduction capabilities, linear and predictable scaling with a scale-out architecture, and ease of use.

EMC Data Protection
Overview

EMC Data Protection provides data protection by backing up data files or volumes on a defined schedule, and then restoring data from backup for recovery after a disaster. EMC Data Protection is an intelligent approach to backup: it consists of optimally integrated storage protection and software designed to meet backup and recovery objectives now and in the future. With EMC storage protection, deep data-source integration, and feature-rich data management services, you can deploy an open, modular storage protection architecture that enables you to scale resources while lowering cost and minimizing complexity.

EMC Avamar deduplication

EMC Avamar provides fast, efficient backup and recovery through a complete software and hardware solution. Equipped with integrated variable-length deduplication technology, Avamar facilitates fast, daily full backups for virtual environments, remote offices, enterprise applications, NAS servers, and desktops/laptops.

EMC Data Domain deduplication storage systems

EMC Data Domain deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication for backup and archive workloads.

EMC RecoverPoint

EMC RecoverPoint is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. EMC RecoverPoint runs on a dedicated appliance and combines continuous data protection technology with bandwidth-efficient, no-data-loss replication. This technology enables dedicated appliances to protect data locally (continuous data protection, or CDP), remotely (continuous remote replication, or CRR), or both (continuous local and remote replication, or CLR), offering the following advantages:
- EMC RecoverPoint CDP replicates data within the same site, or to a local bunker site some distance away, and transfers the data over FC.
- EMC RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site using techniques that preserve write order.
- EMC RecoverPoint CLR replicates to both a local and a remote site simultaneously.

EMC RecoverPoint uses lightweight splitting technology to mirror application writes to the EMC RecoverPoint cluster, and supports the following write-splitter types:
- Array-based
- Intelligent fabric-based
- Host-based

Other technologies

Overview

In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case.
These include, but are not limited to, the following technologies.

EMC PowerPath

EMC PowerPath is a host-based software package that provides automated data-path management and load-balancing capabilities for heterogeneous server, network, and storage environments deployed in physical and virtual configurations. It offers the following benefits for the VSPEX Proven Infrastructure:
- Standardized data management across physical and virtual environments

- Automated multipathing policies and load balancing to provide predictable and consistent application availability and performance across physical and virtual environments
- Improved service-level agreements by eliminating application impact from I/O failures

Note: In this solution, we used PowerPath 6.0 for the management of I/O traffic.

EMC ViPR Controller

EMC ViPR Controller is storage automation software that centralizes, automates, and transforms storage into a simple and extensible platform. It abstracts and pools resources into a single storage platform to deliver automated, policy-driven storage services on demand via a self-service catalog. With vendor-neutral centralized storage management, your team can reduce costs, provide choice, and deliver a path to the cloud.

Public-key infrastructure

The ability to secure data and ensure the identity of devices and users is critical in today's enterprise IT environment. This is particularly true in regulated sectors such as healthcare, finance, and government. VSPEX solutions can offer hardened computing platforms in many ways, most commonly by implementing a public-key infrastructure (PKI).

VSPEX solutions can be engineered with a PKI designed to meet the security criteria of your organization. The solution can be implemented via a modular process where layers of security are added as needed. The general process involves first implementing a PKI by replacing generic self-signed certificates with trusted certificates from a third-party certificate authority. Services that support PKI are then enabled using the trusted certificates to ensure a high degree of authentication and encryption, where supported.

Depending on the scope of PKI services needed, you may need to implement a PKI dedicated to those needs. Many third-party tools offer these services, including end-to-end solutions from RSA that can be deployed within a VSPEX environment.

Chapter 4 Solution Architecture Overview

This chapter presents the following topics:
- Overview
- Solution architecture
- Server configuration guidelines
- Network configuration guidelines
- Storage configuration guidelines
- High-availability and failover
- Backup and recovery configuration guidelines

Overview

This chapter is a comprehensive guide to the architecture and configuration of this solution. Server capacity is presented in generic terms for the required minimum CPU, memory, and network resources. Your server and networking hardware must meet the minimum requirements outlined in this chapter. EMC has validated the storage architecture to ensure that it delivers a high-performance, highly available architecture.

Each Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a predefined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Solution architecture

Overview

This section details the VSPEX Private Cloud solution for Microsoft Hyper-V with XtremIO, configured for up to 700 reference virtual machines (RVMs).

Note: VSPEX uses a reference workload to describe and define a virtual machine. Therefore, one physical or virtual machine in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This process is described in Reference workload application.

Logical architecture

Figure 5 shows a validated XtremIO infrastructure, where an 8 Gb/s FC or 10 Gb/s iSCSI SAN carries storage traffic, and 10 GbE carries management and application traffic.

Figure 5. Logical architecture for the solution

Key components

This solution architecture includes the following key components:
- Microsoft Hyper-V: Provides a common virtualization layer to host the server environment. Hyper-V provides a highly available infrastructure through features such as:
  - Live Migration: Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption
  - Live Storage Migration: Provides live migration of virtual machine disk files within and across storage arrays, with no virtual machine downtime or service disruption
  - Failover Clustering High Availability: Detects and provides rapid recovery for a failed virtual machine in a cluster
  - Dynamic Optimization: Provides load balancing of computing capacity in a cluster, with the support of SCVMM
- Microsoft System Center Virtual Machine Manager: SCVMM is not technically required for this VSPEX solution, because the Hyper-V Management Tools in Windows Server 2012 R2 can be used to manage the Hyper-V environment. However, considering the large number of virtual machines the solution is capable of hosting, EMC recommends using SCVMM.

- Microsoft SQL Server: Stores configuration and monitoring details for SCVMM, which requires a database service. This solution uses a Microsoft SQL Server 2012 database.
- DNS server: Provides name resolution for the various solution components. This solution uses the Microsoft DNS Service running on Windows Server 2012 R2.
- Active Directory server: Provides functionality to various solution components that require the Active Directory service. The Active Directory service runs on a Windows Server 2012 R2 system.
- Shared infrastructure: Adds DNS and authentication/authorization services with existing infrastructure, or sets them up as part of the new virtual infrastructure.
- IP network: Carries user and management traffic. A standard Ethernet network carries all network traffic with redundant cabling and switching.

Storage network

The storage network is isolated to provide hosts with access to the array, with the following options:
- Fibre Channel: Performs high-speed serial data transfer with a set of standard protocols. Fibre Channel (FC) provides a standard data transport frame among servers and shared storage devices.
- 10 Gb Ethernet (iSCSI): Enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.

XtremIO all-flash array

The XtremIO all-flash array includes the following components:
- X-Brick: A physical chassis that contains two active/active storage controllers, as the fundamental scaling unit of the array, and a disk array enclosure (DAE) of eMLC SSDs. When the XtremIO cluster scales, the array clusters together multiple X-Bricks with an InfiniBand back-end switch.
- Storage controller: A physical computer (1U in size) in the cluster that acts as a storage controller, serving block data over the FC and iSCSI protocols. Storage controllers can access all SSDs in the same X-Brick.
- Processor D: One of the two CPU sockets in each storage controller. Processor D is responsible for disk access.
- Processor RC: The other CPU socket, responsible for the router (hash writes and lookup) and controller (metadata) functions.
- Battery backup unit: Provides enough power to each storage controller to ensure that any data in flight is destaged to disk in the event of a power failure. The first X-Brick has two battery backup units for redundancy. As clusters require additional X-Bricks, only a single battery backup unit (1U in size) is necessary for each additional X-Brick.
- DAE: Houses the flash drives that the array uses, and is 2U in size.

- InfiniBand switches: Connect multiple X-Bricks together; each switch is 1U in size. Two separate switches are needed to ensure that the fabric tying the controllers together is highly available.

Hardware resources

Table 1 lists the hardware used in this solution.

Table 1. Solution hardware

Hyper-V servers:
- CPU:
  - 1 vCPU per virtual machine
  - 4 vCPUs per physical core
    Note: For Intel Ivy Bridge or later processors, use six vCPUs per physical core.
  - For 700 virtual machines: 700 vCPUs; minimum of 175 physical CPU cores (117 cores for Intel Ivy Bridge or later processors)
- Memory:
  - 2 GB RAM per virtual machine
  - 2 GB RAM reservation per Hyper-V host
  - For 700 virtual machines: minimum of 1,400 GB RAM; add 2 GB for each physical server
- Network:
  - 2 x 10 GbE network interface cards (NICs) per server
  - 2 HBAs per server, or 2 x 10 GbE NICs per server for data traffic

Note: You must add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V high-availability functionality and to meet the listed minimums.

Network infrastructure (minimum switching capacity):
- 2 physical Ethernet switches
- 2 physical SAN switches, if implementing FC
- 2 x 10 GbE ports per Hyper-V server for management, user/application traffic, and Live Migration
- 2 ports per Hyper-V server for the storage network (FC or iSCSI)
- 2 ports per storage controller for storage data (FC or iSCSI)

EMC XtremIO all-flash array:
- One X-Brick with 25 x 400 GB SSD drives

Shared infrastructure:
- In most cases, the customer environment already has infrastructure services configured, such as Active Directory, DNS, and so on. The setup of these services is beyond the scope of this guide. If implemented without the existing infrastructure, the minimum new requirements are:
  - 2 physical servers
  - 16 GB RAM per server
  - 4 processor cores per server
  - 2 x 1 GbE ports per server

Note: You can migrate these services into this solution post-deployment. However, the services must exist before the solution is deployed.

Note: EMC recommends that you use a 10 GbE network, or an equivalent 1 GbE network infrastructure, as long as the underlying requirements for bandwidth and redundancy are fulfilled.

Software resources

Table 2 lists the software used in this solution.

Table 2. Solution software

- Microsoft Windows Server with Hyper-V: Version 2012 R2 Datacenter Edition
  Note: Datacenter Edition is necessary to support the number of virtual machines in this solution.
- Microsoft Windows Server: Version 2012 R2 Datacenter Edition
  Note: Datacenter Edition is necessary to support the number of operating system environments (servers and virtual machines) used in this solution.
- Microsoft System Center Virtual Machine Manager: Version 2012 R2
- Microsoft SQL Server: Version 2012 Standard Edition
  Note: Any version of Microsoft SQL Server supported by SCVMM is acceptable.
- EMC PowerPath: Use latest version
- EMC XtremIO: ( for Hyper-V datastores)
- EMC XtremIO Operating System: Release 3.0
- EMC Data Protection:

- EMC Avamar: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.
- EMC Data Domain Operating System: Refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Virtual machines (used for validation, but not required for deployment):
- Base operating system: Microsoft Windows Server 2012 R2 Datacenter Edition

Server configuration guidelines

Overview

When designing and ordering the compute layer of this VSPEX solution, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as Dynamic Memory can reduce the aggregate memory requirement. If the virtual machine pool does not have a high level of peak or concurrent usage, you can reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and the amount of memory purchased.

Intel Ivy Bridge updates

Testing on the Intel Ivy Bridge processor series has shown significant increases in virtual machine density from the server resource perspective. If your server deployment comprises Ivy Bridge processors, EMC recommends increasing the vCPU-to-physical CPU (pCPU) ratio from 4:1 to 6:1. This reduces the number of server cores required to host the RVMs.

Current VSPEX sizing guidelines require a maximum vCPU core to pCPU core ratio of 4:1, with a maximum 6:1 ratio for Ivy Bridge or later processors. This ratio is based on an average sampling of the CPU technologies available at the time of testing. As CPU technologies advance, original equipment manufacturer (OEM) server vendors that are VSPEX partners may suggest different (normally higher) ratios. Follow the updated guidance supplied by the OEM server vendor.
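The core counts quoted in this guide (175 cores at 4:1, 117 cores at 6:1, for 700 single-vCPU RVMs) follow directly from these ratios. As a quick, purely illustrative sanity check:

```python
import math

def physical_cores(vms, vcpus_per_vm=1, vcpus_per_core=4):
    """Minimum physical cores for a given VM count and vCPU:pCPU ratio."""
    return math.ceil(vms * vcpus_per_vm / vcpus_per_core)

print(physical_cores(700))                    # 175 cores at the 4:1 ratio
print(physical_cores(700, vcpus_per_core=6))  # 117 cores at the 6:1 Ivy Bridge ratio
```

If an OEM server vendor publishes a different ratio, substituting it for `vcpus_per_core` gives the adjusted minimum.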

Table 3 lists the hardware resources used for the compute layer.

Table 3. Hardware resources for the compute layer

Microsoft Hyper-V servers:
CPU:
- 1 vCPU per virtual machine
- 4 vCPUs per physical core
  Note: For Intel Ivy Bridge or later processors, use 6 vCPUs per physical core.
- For 700 virtual machines: 700 vCPUs; minimum of 175 physical CPU cores (117 cores for Intel Ivy Bridge or later processors)
Memory:
- 2 GB RAM per virtual machine
- 2 GB RAM reservation per Hyper-V host
- For 700 virtual machines: minimum of 1,400 GB RAM, plus 2 GB for each physical server
Network:
- 2 x 10 GbE NICs per server
Block:
- 2 HBAs per server, or 2 x 10 GbE NICs per server for iSCSI connection

Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V high availability functionality and to meet the listed minimums.

Note: EMC recommends using a 10 GbE network or an equivalent 1 GbE network infrastructure, as long as the underlying requirements for bandwidth and redundancy are fulfilled.

Hyper-V memory virtualization
Microsoft Hyper-V has several advanced features that help maximize performance and overall resource use. The most important features relate to memory management. This section describes some of these features and what to consider when using them in a VSPEX environment. Figure 6 shows how a single hypervisor consumes memory from a pool of resources. Hyper-V memory management features such as Dynamic Memory and Smart Paging can reduce total memory usage and increase consolidation ratios in the hypervisor.

Figure 6. Hypervisor memory consumption

Understanding the technologies described in this section makes this basic concept easier to grasp.

Dynamic Memory
Dynamic Memory was introduced in Windows Server 2008 R2 SP1 to increase physical memory efficiency by treating memory as a shared resource and dynamically allocating it to virtual machines. The amount of memory used by each virtual machine is adjustable at any time. Dynamic Memory reclaims unused memory from idle virtual machines, which allows more virtual machines to run at any given time. In Windows Server 2012 R2, Dynamic Memory enables administrators to dynamically increase the maximum memory available to running virtual machines.

Smart Paging
Even with Dynamic Memory, Hyper-V allows more virtual machines than the available physical memory can support. In most cases, there is a memory gap between minimum memory and startup memory. Smart Paging is a memory management technique that uses disk resources as a temporary memory replacement.

It swaps out less-used memory to disk storage and swaps it back in when needed. Performance degradation is a potential drawback of Smart Paging. Hyper-V continues to use guest paging when host memory is oversubscribed because it is more efficient than Smart Paging.

Non-Uniform Memory Access
Non-Uniform Memory Access (NUMA) is a multinode computer technology that enables a CPU to access remote-node memory. This type of memory access degrades performance, so Windows Server 2012 R2 employs a technique known as processor affinity, which pins threads to a single CPU to avoid remote-node memory access. In previous versions of Windows, this feature was available only to the host. Windows Server 2012 R2 extends this functionality to the virtual machines, which provides improved performance in symmetrical multiprocessor (SMP) environments.

Memory configuration guidelines
The memory configuration guidelines take into account the Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V memory overhead
Virtualized memory has some associated overhead, including the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine. In this solution, leave at least 2 GB of memory for the Hyper-V parent partition.

Virtual machine memory
In this solution, configure each virtual machine with 2 GB of memory in fixed mode.

Network configuration guidelines

Overview
This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines consider VLANs and FC/iSCSI connections on XtremIO storage. For detailed network resource requirements, refer to Table 1.

VLANs
Isolate network traffic so that the traffic between hosts and storage, the traffic between hosts and clients, and management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons; in many cases, however, logical isolation with VLANs is sufficient. As a best practice, EMC recommends a minimum of three or four VLANs for:
- Customer data
- Storage (for iSCSI, if implemented)
- Live Migration or storage migration
- Management

Figure 7 shows the VLANs and the network connectivity requirements for the XtremIO array.

Figure 7. Required networks for XtremIO storage

The customer data network is for system users (or clients) to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. Implement these additional networks if necessary.

Enable jumbo frames (for iSCSI)
EMC recommends setting the maximum transmission unit (MTU) to 9,000 (jumbo frames) for efficient storage and migration traffic. Refer to the switch vendor guidelines to enable jumbo frames on the storage and host ports of the switches.

Storage configuration guidelines

Overview
This section provides guidelines for setting up the storage layer to provide high availability and the expected level of performance. Microsoft Hyper-V allows more than one method of presenting storage to virtual machines. The tested solution uses block protocols (FC/iSCSI), and the storage layout described in this section adheres to all current best practices. If required, you can modify this solution based on your system usage and load requirements.

XtremIO X-Brick scalability
XtremIO storage clusters support a fully distributed, scale-out design that enables linear increases in both capacity and performance to provide infrastructure agility. XtremIO uses a building-block approach in which the array is scaled by adding X-Bricks. With clusters of two or more X-Bricks, XtremIO uses a redundant 40 Gb/s quad data rate (QDR) InfiniBand network for back-end connectivity among the storage controllers, ensuring a highly available, ultra-low-latency network. Host access is provided by two N-way active controllers for linear scaling of performance and capacity, which simplifies support of growing virtual environments. As a result, as capacity in the array grows, performance also grows as more storage controllers are added. As shown in Figure 8, the single X-Brick is the basic building block of an XtremIO array.

Figure 8. Single X-Brick XtremIO storage

Each X-Brick comprises:
- One 2U DAE, containing:
  - 25 eMLC SSDs (10 TB X-Brick) or 13 eMLC SSDs (5 TB Starter X-Brick)
  - Two redundant power supply units
  - Two redundant SAS interconnect modules

- One battery backup unit
- Two 1U storage controllers (redundant storage processors). Each storage controller includes:
  - Two redundant power supply units
  - Two 8 Gb/s FC ports
  - Two 10 GbE iSCSI ports
  - Two 40 Gb/s InfiniBand ports
  - One 1 Gb/s management/IPMI port

Note: For details on X-Brick racking and cabinet requirements, refer to the EMC XtremIO Storage Array Site Preparation Guide.

Figure 9 shows the different cluster configurations as you scale up. You can start with a single X-Brick and then, as you scale, add a second X-Brick, then a third, and so on. Performance scales linearly as additional X-Bricks are added.

Figure 9. Cluster configuration as single and multiple X-Brick clusters

Note: A Starter X-Brick is physically similar to a single X-Brick cluster, except for the number of SSDs in the DAE (13 SSDs in a Starter X-Brick instead of 25 SSDs in a standard single X-Brick).

Hyper-V storage virtualization
Windows Server 2012 R2 Hyper-V and Failover Clustering use the CSV and VHDX features to virtualize storage presented from an external shared storage system to host virtual machines. In Figure 10, the storage array presents block-based LUNs (as CSVs) to the Windows hosts that run the virtual machines.

Figure 10. Hyper-V virtual disk types

CSV
A CSV is a shared disk containing an NTFS volume that is made accessible to all nodes of a Windows failover cluster. It can be deployed over any SCSI-based local or network storage.

Pass-through disks
Windows Server 2012 R2 also supports pass-through disks, which enable a virtual machine to access a physical disk that is mapped to a host but does not have a volume configured on it.

VHDX
Hyper-V in Windows Server 2012 R2 contains an update to the VHD format, VHDX, which has much greater capacity and built-in resiliency. The main features of the VHDX format are:
- Support for virtual hard disk storage capacity of up to 64 TB
- Additional protection against data corruption during power failures, achieved by logging updates to the VHDX metadata structures
- Optimal structure alignment of the virtual hard disk format to suit large-sector disks

The VHDX format also provides:
- A larger block size for dynamic and differencing disks, which enables the disks to better meet the needs of the workload
- A 4 KB logical-sector virtual disk that enables increased performance when used by applications and workloads designed for 4 KB sectors

- The ability to store custom file metadata that the user might want to record, such as the operating system version or applied updates
- Space reclamation features that can result in smaller file sizes and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached or SCSI disks and TRIM-compatible hardware)

VSPEX storage building blocks
Sizing the storage system to meet virtual machine IOPS requirements is a complicated process. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications. VSPEX uses a building-block approach to reduce this complexity. A building block is a set of disks that can support a certain number of virtual machines in the VSPEX architecture. Each building block combines several disks to create an XtremIO data protection group that supports the needs of the private cloud environment.

Building block for Starter X-Brick
The Starter X-Brick building block can support up to 300 virtual machines with 13 SSDs in the XtremIO data protection group, as shown in Figure 11.

Figure 11. XtremIO Starter X-Brick building block for 300 virtual machines

In the Starter X-Brick configuration, the raw capacity is 5 TB. Detailed information about the test profile can be found in Chapter 5. You can expand the raw capacity of this building block to 10 TB by adding 12 SSDs, which enables the configuration to support up to 700 virtual machines.

Building block for a single X-Brick
X-Bricks with 25 SSDs, as shown in Figure 12, are available with 10 TB and 20 TB raw capacity.

Figure 12. XtremIO single X-Brick building block for 700 virtual machines

A single X-Brick with 10 TB raw capacity can support up to 700 virtual machines, while an X-Brick with 20 TB raw capacity can support up to 1,400 virtual machines.
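The building-block capacities above, together with the cluster scales listed in Table 4, lend themselves to a simple selection rule: pick the smallest configuration whose validated virtual machine count covers the target. A sketch (illustrative Python; the names are ours, and the limits assume the validated 15 percent unique-data profile):

```python
# Validated virtual machine counts per configuration (15% unique data assumed)
XBRICK_SCALE = [
    ("Starter X-Brick (5 TB)", 300),
    ("One X-Brick (10 TB)", 700),
    ("One X-Brick (20 TB)", 1400),
    ("Two X-Brick cluster (40 TB)", 2800),
    ("Four X-Brick cluster (80 TB)", 5600),
    ("Six X-Brick cluster (120 TB)", 8400),
]

def smallest_config(target_vms):
    """Return the smallest validated configuration that covers target_vms."""
    for name, max_vms in XBRICK_SCALE:
        if target_vms <= max_vms:
            return name
    raise ValueError("workload exceeds the largest validated cluster")

print(smallest_config(700))    # One X-Brick (10 TB)
print(smallest_config(1000))   # One X-Brick (20 TB)
```

If your unique-data percentage is higher than 15 percent, these per-configuration limits shrink, as discussed in the Quick assessment section.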

Table 4 lists the number of virtual machines supported by different types and numbers of X-Bricks.

Table 4. XtremIO scaling scenarios with virtual machines

Starter X-Brick (5 TB): 300 virtual machines
One X-Brick (10 TB): 700 virtual machines
One X-Brick (20 TB): 1,400 virtual machines
Two X-Brick cluster (40 TB): 2,800 virtual machines
Four X-Brick cluster (80 TB), v4.0: 5,600 virtual machines
Six X-Brick cluster (120 TB), v4.0: 8,400 virtual machines

Note: The number of supported virtual machines is based on a tested configuration using a value of 15 percent for unique data, that is, data that cannot be deduplicated.

The XtremIO platform uses real-time deduplication to maximize the efficiency of its all-flash architecture. As a result, the logical capacity presented to users is greater than the physical capacity available in the system. When managing the system, monitor the current physical usage independently of the logical allocation so that out-of-space conditions can be avoided. As a best practice, EMC recommends keeping the physical allocation of the unit below 90 percent.

High availability and failover

Overview
This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When you implement the solution by following the instructions in this guide, business operations survive with little or no impact from single-unit failures.

Virtualization layer
Configure high availability in the virtualization layer, and enable the hypervisor to automatically restart failed virtual machines. Figure 13 shows the hypervisor layer responding to a failure in the compute layer.

Figure 13. High availability at the virtualization layer

By implementing high availability at the virtualization layer, even in a hardware failure, the infrastructure attempts to keep as many services running as possible.
Compute layer
While the choice of servers to implement in the compute layer is flexible, we recommend enterprise-class servers designed for the data center. This type of server has increased component redundancy, for example, redundant power supplies, as shown in Figure 14. Connect these servers to separate power distribution units (PDUs) following your server vendor's best practices.

Figure 14. Redundant power supplies

To configure high availability in the virtualization layer, configure the compute layer with enough resources to meet the needs of the environment even with a server failure, as demonstrated in Figure 13.

Network layer
The XtremIO advanced networking features provide protection against network connection failures at the array. Each Hyper-V host has multiple connections to the user and storage Ethernet networks to guard against link failures, as shown in Figure 15. Spread these connections across multiple Ethernet switches to guard against component failure in the network.

Figure 15. Network layer high availability

Storage layer
The XtremIO family is designed for five-nines (99.999 percent) availability by using redundant components throughout the array, as shown in Figure 16. All of the array components are capable of continued operation in case of hardware failure. XtremIO Data Protection (XDP) delivers the superior protection of RAID 6 while exceeding the performance of RAID 1 and the capacity utilization of RAID 5, protecting against data loss due to drive failures.

Figure 16. XtremIO high availability

EMC storage arrays are designed to be highly available by default. Follow the installation guides to ensure that no single-unit failure results in data loss or unavailability.

XtremIO Data Protection
Other flash arrays on the market use standard disk-based RAID algorithms, which perform poorly on flash, waste a large amount of expensive flash capacity, and shorten the lifespan of the flash. XtremIO developed a data protection scheme, XtremIO Data Protection (XDP), that takes advantage of both the random-access nature of flash and the unique XtremIO dual-stage metadata engine. The result is flash-native data protection that delivers much lower capacity overhead, superior data protection, and much better flash endurance and performance than any traditional RAID algorithm. XDP delivers superior RAID 6 protection while exceeding RAID 1 performance and RAID 5 capacity utilization. More importantly, XDP is optimized for long-term enterprise operating conditions, in which overwriting existing data becomes the dominant workload on the array. Unlike other flash arrays, XDP enables XtremIO to maintain its performance until the array is completely full, giving you the most economical use of flash.

Backup and recovery configuration guidelines
For details about backup and recovery configuration for this VSPEX Private Cloud solution, refer to the EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Chapter 5 Environment Sizing

This chapter presents the following topics:
- Overview
- Reference workload
- Scaling out
- Reference workload application
- Quick assessment

Overview
The following sections define the reference workload used to size and implement the VSPEX architectures, explain how to correlate that reference workload to customer workloads, and describe how that correlation may change the end delivery from the server and network perspective. Modify the storage definition by adding drives for greater capacity and performance, and by adding X-Bricks to improve cluster performance. The cluster layouts provide support for the appropriate number of virtual machines at the defined performance level.

Reference workload

Overview
When you move an existing server to a virtual infrastructure, you can gain efficiency by right-sizing the virtual hardware resources assigned to that system and by improving resource utilization of the underlying hardware. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. Each virtual machine has its own unique requirements. In any discussion about virtual infrastructures, you need to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics.

Defining the reference workload
To simplify this discussion, this section presents a representative customer reference workload. By comparing the actual customer usage to this reference workload, you can determine how to size the solution. VSPEX Private Cloud solutions define a reference virtual machine (RVM) workload, which represents a common point of comparison. Because XtremIO has an inline deduplication feature, it is critical to determine the unique data percentage, as this parameter impacts XtremIO physical capacity usage. In the validated solution, we set the unique data ratio to 15 percent. The parameters are described in Table 5.

Table 5. VSPEX Private Cloud RVM workload

Virtual machine OS: Windows Server 2012 R2
vCPUs: 1
vCPUs per physical core (maximum): 4 (1)
Memory per virtual machine: 2 GB
IOPS per virtual machine: 25

(1) Based on testing with Intel Sandy Bridge processors. Newer processors can support six vCPUs per core or greater. Follow the recommendations of your VSPEX server vendor.

I/O size: 8 KB
I/O pattern: Fully random; skew = 0.5
I/O read percentage: 67%
I/O write percentage: 33%
Virtual machine storage capacity: 100 GB
Unique data: 15%

This specification for a virtual machine represents a single common point of reference by which to measure other virtual machines.

Scaling out
XtremIO is designed to scale from a Starter X-Brick or a single X-Brick to a cluster of multiple X-Bricks (up to six X-Bricks in the current code release). Unlike most traditional storage systems, as the number of X-Bricks grows, so do capacity, throughput, and IOPS; performance scales linearly with the growth of the deployment. Whenever additional storage and compute resources (such as servers and drives) are needed, you can add them modularly. Storage and compute resources grow together so that the balance between them is maintained.

Reference workload application

Overview
The solution creates storage resources that are sufficient to host a target number of RVMs with the characteristics shown in Table 5. Customer virtual machines may not exactly match these specifications. In that case, define a specific customer virtual machine as the equivalent of some number of RVMs, and assume these virtual machines are in use in the pool. Continue to provision virtual machines from the pool until no resources remain.

Example 1: Custom-built application
A small custom-built application server must move into a virtual infrastructure. The physical hardware that supports the application is not fully used. A careful analysis of the existing application reveals that the application can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from four IOPS at idle time to a peak of 15 IOPS when busy. The entire application consumes about 30 GB on direct-attached storage (DAS).
Based on these numbers, the application needs the following resources:
- CPU of one RVM
- Memory of two RVMs
- Storage of one RVM
- I/Os of one RVM
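The translation from measured requirements to RVM equivalents is per-resource division with round-up. A sketch (illustrative Python; the function name is ours, using the RVM values from Table 5 and raw capacity, before any unique-data adjustment):

```python
import math

# RVM definition from Table 5: 1 vCPU, 2 GB memory, 25 IOPS, 100 GB storage
def rvm_equivalents(vcpus, memory_gb, peak_iops, capacity_gb):
    return {
        "cpu": vcpus,
        "memory": math.ceil(memory_gb / 2),
        "iops": math.ceil(peak_iops / 25),
        "storage": math.ceil(capacity_gb / 100),
    }

# Example 1: 1 CPU, 3 GB memory, 15 IOPS peak, 30 GB storage
eq = rvm_equivalents(1, 3, 15, 30)
print(eq)                 # {'cpu': 1, 'memory': 2, 'iops': 1, 'storage': 1}
print(max(eq.values()))   # 2 -> the application counts as two RVMs
```

The largest per-resource equivalent (here, memory) determines how many RVMs the application consumes from the pool.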

In this example, the corresponding virtual machine uses the resources of two RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 698 RVMs remain.

Example 2: Point-of-sale system
The database server for a customer's point-of-sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle. The requirements to virtualize this application are:
- CPUs of four RVMs
- Memory of eight RVMs
- Storage of two RVMs
- I/Os of eight RVMs

In this case, the corresponding virtual machine uses the resources of eight RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 692 RVMs remain.

Example 3: Web server
The customer's web server must move into a virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle. The requirements to virtualize this application are:
- CPUs of two RVMs
- Memory of four RVMs
- Storage of one RVM
- I/Os of two RVMs

In this case, the corresponding virtual machine uses the resources of four RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 696 RVMs remain.

Example 4: Decision-support database
The database server for a customer's decision-support system must move into a virtual infrastructure. It is currently running on a physical system with ten CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle. The requirements to virtualize this application are:
- CPUs of 10 RVMs
- Memory of 32 RVMs
- Storage of 52 RVMs
- I/Os of 28 RVMs

In this case, the corresponding virtual machine uses the resources of 52 RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 648 RVMs remain.

Summary of examples
These examples demonstrate the flexibility of the resource pool model. In all four examples, the workloads reduce the amount of available resources in the pool. Suppose that, with business growth, the customer must implement a much larger virtual environment to support one custom-built application, one point-of-sale system, two web servers, and ten decision-support databases. Using the same strategy, calculate the number of equivalent RVMs to get a total of 538 RVMs. All of these RVMs can be implemented on the same virtual infrastructure with an initial capacity of 700 RVMs, supported by a single 10 TB X-Brick. Resources for 162 RVMs remain in the resource pool, as shown in Figure 17.

Figure 17. Resource pool flexibility

In this case, you must examine the change in resource balance and determine the new level of requirements, then add these virtual machines to the infrastructure with the method described in the examples. In more advanced cases, tradeoffs might be necessary between memory and I/O or other relationships, in which increasing the amount of one resource decreases the need for another. In such cases, the interactions between resource allocations become highly complex and are beyond the scope of this guide.

Quick assessment

Overview
Performing a quick assessment of the customer's environment helps you determine the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment. First, summarize the applications planned for migration into the VSPEX private cloud.
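Tallying the growth scenario above is exactly the bookkeeping this worksheet automates. A sketch (illustrative Python, using the per-application RVM counts from the four examples):

```python
# (application, equivalent RVMs each, instance count) from Examples 1-4
workloads = [
    ("custom-built application", 2, 1),
    ("point-of-sale system", 8, 1),
    ("web server", 4, 2),
    ("decision-support database", 52, 10),
]

total = sum(rvms * count for _, rvms, count in workloads)
print(total)        # 538 equivalent RVMs
print(700 - total)  # 162 RVMs remain on a single 10 TB X-Brick
```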
For each application, determine the number of vCPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of RVMs required from the resource pool. The Reference workload section provides examples of this process. Complete a worksheet row, as shown in Table 6, for each application. Each row requires inputs for the following resources: CPU, memory, IOPS, and capacity.

Table 6. Blank worksheet row

Columns: Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent RVMs
For each example application, fill in one row of resource requirements (the Equivalent RVMs cell is NA for this row) and one row of equivalent reference virtual machines.

CPU requirements
Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores, regardless of physical CPU utilization. In reality, consider whether the target application can effectively use all of the CPUs presented. Use a performance-monitoring tool, such as Perfmon in Microsoft Windows, to examine the CPU utilization counter for each CPU. If utilization is equivalent across CPUs, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required. In any operation that involves performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Memory requirements
Server memory plays a key role in ensuring application functionality and performance. Each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system, and monitor the free memory by using a performance-monitoring tool, such as Perfmon, to determine memory efficiency.

Storage performance requirements

IOPS
Several components become important when discussing the I/O performance of a system:
- The number of requests coming in, or IOPS
- The size of each request, or I/O size. For example, a request for 4 KB of data is easier and faster to process than a request for 4 MB of data.
- The average I/O response time, or I/O latency

The RVM calls for 25 IOPS.
To monitor IOPS on an existing system, use a performance-monitoring tool such as Perfmon. Perfmon provides several counters that can help. The most common are:
- Logical Disk\Disk Transfers/sec

- Logical Disk\Disk Reads/sec
- Logical Disk\Disk Writes/sec

The RVM assumes a 2:1 read/write ratio. Use these counters to determine the total number of IOPS and the approximate ratio of reads to writes for the customer application.

I/O size
The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The RVM assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of 2: 4 KB, 8 KB, 16 KB, 32 KB, and so on. Because the performance counter reports a simple average, it is common to see values such as 11 KB or 15 KB instead of even I/O sizes. If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the larger I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application uses mostly 32 KB I/O requests, use a factor of four (32 KB/8 KB = 4). If that application generates 100 IOPS at 32 KB, plan for 400 IOPS, since the RVM assumes 8 KB I/O sizes.

I/O latency
You can use the average I/O response time, or I/O latency, to measure how quickly the storage system processes I/O requests. VSPEX solutions must meet a target average I/O latency of 20 ms; the XtremIO array easily achieved this with an average sub-millisecond response time. The recommendations in this guide enable the system to continue to meet that 20 ms target; at the same time, monitor the system and reevaluate the resource pool utilization if needed. To monitor I/O latency, use the Logical Disk\Avg. Disk sec/Transfer counter in Microsoft Windows Perfmon. If the I/O latency is continuously over the target, reevaluate the virtual machines in the environment to ensure that these machines do not use more resources than intended.
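The I/O-size scaling rule above can be captured in a couple of lines (illustrative Python; the function name is ours):

```python
def rvm_baseline_iops(observed_iops, avg_io_kb):
    """Scale observed IOPS to the 8 KB RVM baseline."""
    if avg_io_kb <= 8:
        return observed_iops          # small I/O: use the observed number
    factor = avg_io_kb / 8            # e.g. 32 KB -> factor of 4
    return int(observed_iops * factor)

print(rvm_baseline_iops(100, 32))  # 400 -> plan for 400 baseline IOPS
print(rvm_baseline_iops(100, 4))   # 100
```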
Unique data
XtremIO automatically and globally deduplicates data as it enters the system. Deduplication is performed in real time, not as a post-processing operation, which makes XtremIO an ideal capacity-saving storage array. The consumed capacity is based on the deduplication ratio observed during testing. Virtualization platforms typically have a high number of duplicate datasets; for example, the use of common OS builds and versions for virtual machines results in a relatively low percentage of truly unique data. The scaling numbers for this solution are based on a data uniqueness value of 15 percent. This translates into a deduplication ratio of approximately 7:1, which was validated by monitoring the XtremIO deduplication and compression metrics during testing. If your datasets have a higher percentage of unique data, the amount of capacity consumed on the XtremIO array will increase, and the number of available storage resources for RVMs will decrease accordingly. This may lower the number of RVMs the configuration can support unless additional capacity is added.
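The relationship between unique-data percentage, deduplication ratio, and physical consumption can be sketched as follows (illustrative Python; real array savings also include compression and vary by dataset):

```python
def physical_capacity_gb(logical_gb, unique_fraction):
    """Physical capacity consumed after deduplication of the duplicate share."""
    return logical_gb * unique_fraction

def dedup_ratio(unique_fraction):
    return 1 / unique_fraction

# 300 RVMs x 100 GB logical at 15% unique data (Starter X-Brick scale)
print(physical_capacity_gb(300 * 100, 0.15))  # 4500.0 GB consumed
print(round(dedup_ratio(0.15), 1))            # 6.7 -> roughly 7:1
```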

XtremIO offers tools to assess how effectively the data in your present environment deduplicates. Use the tool to determine a likely deduplication ratio, and compare it to the ratio used for this testing to assess the impact on available capacity and the number of RVMs the configuration can support. For information about the XtremIO Data Reduction Estimator tool, read the Everything Oracle at EMC blog post EMC XtremIO Data Reduction Estimator.

Storage capacity requirements
Determine the disk space used, and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.

Determining equivalent reference virtual machines
With all of the resources defined, determine an appropriate value for the equivalent RVMs line by using the relationships in Table 7. Round all values up to the closest whole number.

Table 7. Reference virtual machine resources

Resource | Value for RVMs | Relationship between requirements and equivalent RVMs
CPU | 1 vCPU | Equivalent reference virtual machines = resource requirements
Memory | 2 GB | Equivalent reference virtual machines = (resource requirements)/2
IOPS | 25 IOPS | Equivalent reference virtual machines = (resource requirements)/25
Capacity | 100 GB | Equivalent reference virtual machines = (resource requirements)/(100 * 0.15)

For example, the point-of-sale system database used in Example 2: Point-of-sale system requires four CPUs, 16 GB of memory, 200 IOPS, and 30 GB of physical capacity (200 GB of logical data at 15 percent unique data: 200 * 0.15 = 30 GB). This translates to four RVMs of CPU, eight RVMs of memory, eight RVMs of IOPS, and two RVMs of capacity.
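The Table 7 relationships can be expressed as a short calculation. The per-RVM values of 1 vCPU, 2 GB, 25 IOPS, and 100 GB (15 GB physical at 15 percent uniqueness) are from the table; the function name is illustrative only.

```python
import math

def equivalent_rvms(vcpus, memory_gb, iops, physical_capacity_gb):
    """Return the per-resource RVM equivalents and the worksheet row value.

    Each requirement is divided by the reference virtual machine's
    resources and rounded up; the worksheet takes the highest value
    in the row.
    """
    equivalents = {
        "cpu": math.ceil(vcpus / 1),
        "memory": math.ceil(memory_gb / 2),
        "iops": math.ceil(iops / 25),
        # An RVM's 100 GB of logical capacity consumes 100 * 0.15 = 15 GB
        # physically at the 15 percent data-uniqueness assumption.
        "capacity": math.ceil(physical_capacity_gb / (100 * 0.15)),
    }
    return equivalents, max(equivalents.values())

# The point-of-sale example: 4 vCPUs, 16 GB, 200 IOPS, 30 GB physical
per_resource, row_value = equivalent_rvms(4, 16, 200, 30)
print(per_resource)  # {'cpu': 4, 'memory': 8, 'iops': 8, 'capacity': 2}
print(row_value)     # 8
```

Repeat the calculation for each application, then sum the row values to obtain the total RVM requirement for the pool.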
Table 8 shows how that fits into the worksheet row.

Table 8. Sample worksheet row

Application: Sample application
CPU (vCPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent RVMs
Resource requirements: 4 | 16 | 200 | 30 | N/A
Equivalent reference virtual machines: 4 | 8 | 8 | 2 | 8

Use the highest value in the row to fill in the Equivalent RVMs column. As shown in Figure 18, the sample requires eight RVMs.

Figure 18. Required resources from the RVM pool

Implementation example: Stage 1
A customer wants to build a virtual infrastructure to support one custom-built application, one point-of-sale system, and one web server. The customer computes the sum of the Equivalent RVMs column, as shown in Table 9, to calculate the total number of RVMs required. The table shows the result of the calculation, rounded up to the nearest whole number.

For each application in Tables 9 through 11, the worksheet records the resource requirements and the equivalent reference virtual machines for CPU (vCPUs), memory (GB), IOPS, and capacity (GB), in the same format as Table 8.

Table 9. Example applications - Stage 1
Example application 1: Custom-built application
Example application 2: Point-of-sale system
Example application 3: Web server
Total equivalent reference virtual machines: 14

This example requires 14 RVMs. According to the sizing guidelines, a Starter X-Brick with 13 SSDs provides sufficient resources for current needs and room for growth, because it supports up to 300 RVMs.

Implementation example: Stage 2
The customer must add a decision-support database to the virtual infrastructure. Using the same strategy, you can calculate the number of RVMs required, as shown in Table 10.

Table 10. Example applications - Stage 2
Example application 1: Custom-built application
Example application 2: Point-of-sale system
Example application 3: Web server
Example application 4: Decision-support database
Total equivalent reference virtual machines: 78

This example requires 78 RVMs. According to the sizing guidelines, a Starter X-Brick with 13 SSDs provides sufficient resources for current needs and room for growth. You can implement this storage layout with a Starter X-Brick, which supports up to 300 virtual machines. Figure 19 shows that 222 RVMs are available after implementing one Starter X-Brick.

Figure 19. Aggregate resource requirements - Stage 2

Fine-tuning hardware resources
This process usually determines the recommended hardware size for servers and storage. In some cases, however, you may want to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this guide, but you can perform additional customization at this point.
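The headroom calculation shown in Figure 19 above is simple arithmetic. The 300-RVM Starter X-Brick limit and the 78-RVM Stage 2 requirement are from this guide; the function name is illustrative only.

```python
def remaining_rvms(array_capacity_rvms, required_rvms):
    """RVMs still available after placing the required workload."""
    if required_rvms > array_capacity_rvms:
        raise ValueError("workload exceeds the array's RVM capacity")
    return array_capacity_rvms - required_rvms

# Stage 2: 78 RVMs placed on a Starter X-Brick rated for 300 RVMs
print(remaining_rvms(300, 78))  # 222
```

Track this headroom over time; when it approaches zero, revisit the sizing process before adding workloads.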

Server resources
For some workloads, the relationship between server needs and storage needs does not match what is outlined in the RVM. In this scenario, size the server and storage layers separately, as shown in Figure 20.

Figure 20. Customizing server resources

To do this, first total the resource requirements for the server components, as shown in Table 11. In the Server resource component totals row, add up the server resource requirements from the applications in the table.

Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Server and storage resource component totals row in Table 11 describes the required amount of storage.

Table 11. Server resource component totals
Example application 1: Custom-built application
Example application 2: Point-of-sale system
Example application 3: Web server
Example application 4: Decision-support database
Total equivalent reference virtual machines: 46
Server and storage resource component totals: 17 vCPUs, 155 GB of memory

Note: Calculate the sum of the Resource requirements rows for each application, not the Equivalent reference virtual machines rows, to get the Server and storage resource component totals.

In this example, the target architecture requires 17 vCPUs and 155 GB of memory. If four vCPUs are allocated per physical processor core, and memory over-provisioning is not necessary, the architecture requires five physical processor cores and 155 GB of memory. With these numbers, the solution can be implemented effectively with fewer server resources.

Note: Keep high-availability requirements in mind when customizing the hardware resources.

EMC VSPEX Sizing Tool
To simplify the sizing of this solution, EMC has produced the EMC VSPEX Sizing Tool. This tool uses the same sizing process described in this chapter, and also incorporates sizing for other VSPEX solutions. The VSPEX Sizing Tool enables you to input the resource requirements from the customer's answers in the qualification worksheet. After you complete the inputs, the tool generates a series of recommendations that allow you to validate your sizing assumptions and provide platform configuration information that meets those requirements. You can access this tool at: EMC VSPEX Sizing Tool.
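The processor-core arithmetic in the server-resources example above can be sketched as follows. The 17-vCPU total and the 4:1 vCPU-to-core ratio come from the example; the function name is illustrative only.

```python
import math

def physical_cores_needed(total_vcpus, vcpus_per_core=4):
    """Physical processor cores required at a given vCPU-to-core ratio."""
    return math.ceil(total_vcpus / vcpus_per_core)

# 17 vCPUs at 4 vCPUs per core -> 5 physical cores
print(physical_cores_needed(17))  # 5
```

A more conservative vCPU-to-core ratio (for example, 2:1 for CPU-intensive workloads) raises the core count accordingly.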

Chapter 6 VSPEX Solution Implementation

This chapter presents the following topics:
Overview
Pre-deployment tasks
Network implementation
Microsoft Hyper-V hosts installation and configuration
Microsoft SQL Server database installation and configuration
System Center Virtual Machine Manager server deployment
Storage array preparation and configuration

Overview
The deployment process consists of the stages listed in Table 12. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure.

Table 12. Deployment process overview

Stage | Description | Reference
1 | Verify prerequisites | Pre-deployment tasks
2 | Obtain the deployment tools | Deployment resources
3 | Gather customer configuration data | Customer configuration data
4 | Rack and cable the components | Refer to the vendor documentation
5 | Configure the switches and networks; connect to the customer network | Network implementation
6 | Install and configure the XtremIO array | Storage array preparation and configuration
7 | Configure virtual machine storage | Storage array preparation and configuration
8 | Install and configure the servers | Microsoft Hyper-V hosts installation and configuration
9 | Set up Microsoft SQL Server (used by SCVMM) | Microsoft SQL Server database installation and configuration; Configuring SQL Server for SCVMM
10 | Install and configure SCVMM; configure server and virtual machine networking | System Center Virtual Machine Manager server deployment

Pre-deployment tasks
The pre-deployment tasks, as shown in Table 13, include procedures that are not directly related to the environment installation and configuration but whose results are needed at the time of installation. Pre-deployment tasks include gathering hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required on site.

Table 13. Pre-deployment tasks

Task | Description
Gathering documents | Gather the related documents listed in Appendix A. These documents provide setup procedures and deployment best practices for the various components of the solution.
Gathering tools | Gather the required and optional tools for the deployment. Use Table 14 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process.
Gathering data | Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration worksheet for reference during the deployment process.

Deployment resources checklist
Table 14 lists the hardware, software, and licenses required to configure the solution. For more information, refer to Table 1 and Table 2 on pages 37 and 38.

Table 14. Deployment resources checklist

Hardware:
- Physical servers to host virtual machines: sufficient physical server capacity as determined by sizing for the deployment (see Chapter 5)
- Microsoft Hyper-V servers to host virtual infrastructure servers
  Note: The existing infrastructure may already meet this requirement.
- Switch port capacity and capabilities as required by the virtual machine infrastructure
- EMC XtremIO X-Bricks in the type and quantity determined by sizing for the deployment (see Chapter 5)

Software:
- Microsoft Windows Server 2012 R2 (or later) Datacenter Edition installation media
- Microsoft System Center Virtual Machine Manager 2012 R2 installation media
- Microsoft SQL Server 2012 or newer installation media
  Note: This requirement may be covered in the existing infrastructure.

Licenses:
- Microsoft System Center Virtual Machine Manager 2012 R2 license keys
- Microsoft Windows Server 2012 R2 Datacenter Edition license keys
  Note: An existing Microsoft Key Management Server (KMS) may cover this requirement.
- Microsoft SQL Server Standard Edition license key
  Note: The existing infrastructure may already meet this requirement.

Customer configuration data
Gather information such as IP addresses and hostnames as part of the planning process to reduce time onsite. The Customer configuration worksheet provides a set of tables to maintain a record of relevant customer information. Add, record, and modify information as needed during the deployment process.

Network implementation
This section describes the network infrastructure requirements needed to support this architecture. Table 15 summarizes the network configuration tasks and provides references for further information.

Table 15. Tasks for switch and network configuration

Task | Description | Reference
Configuring the infrastructure network | Configure storage array and Hyper-V host infrastructure networking. | Preparing the network switches
Configuring VLANs | Configure private and public VLANs as required. | Vendor switch configuration guide
Completing network cabling | 1. Connect the switch interconnect ports. 2. Connect the XtremIO front-end ports. 3. Connect the Microsoft Hyper-V server ports. | Vendor switch configuration guide

Configuring the infrastructure network
For validated performance levels and high availability, this solution requires the switching capacity listed in Table 1 on page 37. You do not need new hardware if the existing infrastructure meets the requirements. To provide both redundancy and additional network bandwidth, the infrastructure network requires redundant network links for:
- Each Hyper-V host
- The storage array
- The switch interconnect ports
- The switch uplink ports
This configuration is required whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.
Figure 21 shows a sample redundant infrastructure for this solution, with redundant switches and links to ensure that there are no single points of failure.

Converged switches provide customers with different protocol options (FC or iSCSI) for block storage networks. While existing 8 Gb FC switches are acceptable for the FC protocol option, use 10 Gb Ethernet network switches for iSCSI.

Figure 21. Sample Ethernet network architecture

Configuring VLANs
Ensure that there are adequate network switch ports for the storage array and Windows hosts. EMC recommends that you configure the Windows hosts with three VLANs:
- Customer data network: Virtual machine networking (these are customer-facing networks, which can be separated if needed)
- Storage network: XtremIO data networking (private network)
- Management network: Live Migration or Storage Migration networking (private network)
These networks can also reside on separate VLANs for additional traffic isolation.

Configuring jumbo frames (iSCSI only)
Use jumbo frames for the iSCSI protocol. Set the maximum transmission unit (MTU) to 9,000 on the switch ports for the iSCSI storage network. To enable jumbo frames on switch ports for storage and host ports, refer to the switch vendor guidelines.

Completing network cabling
Ensure that all solution servers, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is a complete connection to the existing customer network.

Note: The new equipment is connected to the existing customer network. Ensure that unexpected interactions do not cause service issues on the customer network.

Microsoft Hyper-V hosts installation and configuration
Overview
This section provides the requirements for installing and configuring the Windows hosts and infrastructure servers required to support the architecture. Table 16 describes the tasks that must be completed.

Table 16. Tasks for server installation

Task | Description | Reference
Installing the Windows hosts | Install Windows Server 2012 R2 on the physical servers for the solution. | Installing Windows Server 2012 R2
Installing Hyper-V and configuring Failover Clustering | 1. Add the Hyper-V server role. 2. Add the Failover Clustering feature. 3. Create and configure the Hyper-V cluster. | Installing Windows Server 2012 R2
Configuring Microsoft Hyper-V networking | Configure Windows hosts networking, including NIC teaming and the virtual switch network. | Installing Windows Server 2012 R2
Installing PowerPath on Windows servers | Install and configure PowerPath to manage multipathing for XtremIO LUNs. | PowerPath and PowerPath/VE for Windows Installation and Administration Guide
Planning virtual machine memory allocations | Ensure that Microsoft Hyper-V guest memory-management features are configured properly for the environment. | Installing Windows Server 2012 R2

Installing the Windows hosts
Follow Microsoft best practices to install Windows Server 2012 R2 on the physical servers for the solution. Windows requires hostnames, IP addresses, and an administrator password for installation. The Customer configuration worksheet provides appropriate values.

Installing Hyper-V and configuring Failover Clustering
To install Hyper-V and configure Failover Clustering:
1. Install and patch Windows Server 2012 R2 on each Windows host.
2. Configure the Hyper-V role and the Failover Clustering feature.

Configuring Windows host networking
To ensure performance and availability, the following NICs are required:
- At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary)
- At least two 10 GbE NICs for the storage network (iSCSI)
- At least two 8 Gb FC HBAs for the storage network (FC)
- At least one NIC for Live Migration
Note: Enable jumbo frames for NICs that transfer iSCSI data. Set the MTU to 9,000. For instructions, refer to the NIC configuration guide.

Installing and configuring multipathing software
To improve the performance and capabilities of the XtremIO storage array, you can use the Windows native multipathing feature or install PowerPath for Windows on the Microsoft Hyper-V hosts. For detailed information and configuration steps, refer to the PowerPath and PowerPath/VE for Windows Installation and Administration Guide.
Note: This solution uses PowerPath as the multipathing solution to manage XtremIO LUNs.

Planning virtual machine memory allocations
Server capacity
Server capacity in the solution is required for two purposes:
- To support the new virtualized server infrastructure
- To support required infrastructure services such as authentication and authorization, DNS, and databases
For information on the minimum infrastructure requirements, refer to Table 3 on page 40. There is no need for new hardware if the existing infrastructure meets the requirements.

Memory configuration
Take care to properly size and configure the server memory for this solution.
Memory virtualization techniques, such as Dynamic Memory, enable the hypervisor to abstract physical host resources to provide resource isolation across multiple virtual machines and to avoid resource exhaustion. With advanced processors, such as Intel processors with Extended Page Table support, abstraction takes place within the CPU. Otherwise, abstraction takes place within the hypervisor itself. Microsoft Hyper-V includes multiple techniques for maximizing the use of system resources such as memory. Do not substantially overcommit resources, as this can lead to poor system performance. The exact implications of memory overcommitment in a real-world environment are difficult to predict, but performance degradation due to resource exhaustion increases with the amount of memory overcommitted.

Microsoft SQL Server database installation and configuration
Overview
Although it is not required, most customers use a management tool to provision and manage their server virtualization solution. The management tool requires a database back end. SCVMM uses SQL Server 2012 as the database platform.
Note: Do not use Microsoft SQL Server Express Edition for this solution.
Table 17 lists the tasks for installing and configuring a SQL Server database for the solution. The subsequent sections describe these tasks.

Table 17. Tasks for SQL Server database setup

Task | Description | Reference
Creating a virtual machine for SQL Server | Create a virtual machine to host SQL Server. Verify that the virtual machine meets the hardware and software requirements. | msdn.microsoft.com
Installing Microsoft Windows on the virtual machine | Install Microsoft Windows Server 2012 R2 on the virtual machine created to host SQL Server. | technet.microsoft.com
Installing Microsoft SQL Server | Install Microsoft SQL Server on the designated virtual machine. | technet.microsoft.com
Configuring SQL Server for SCVMM | Configure a remote SQL Server instance for SCVMM. | technet.microsoft.com

Creating a virtual machine for SQL Server
On one of the Windows servers designated for infrastructure virtual machines, create a virtual machine with sufficient computing resources for SQL Server. Use the datastore designated for the shared infrastructure.
Note: EMC recommends two vCPUs and 6 GB of memory for the SQL Server virtual machine.
If the customer environment already contains a SQL Server instance, refer to Configuring SQL Server for SCVMM.

Installing Microsoft Windows on the virtual machine
The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.

Installing SQL Server
Install SQL Server on the virtual machine from the SQL Server installation media. Microsoft SQL Server Management Studio is one of the components in the SQL Server installer. Install this component directly on the SQL Server instance and on an administrator console. In many implementations, you may want to store data files in locations other than the default path. To change the default path for storing data files:
1. In SQL Server Management Studio, right-click the server object and select Properties.
2. In the Properties window, change the default data and log directories for new databases created on the server.
Note: For high availability, install SQL Server on a Microsoft failover cluster.

Configuring SQL Server for SCVMM
To use SCVMM in this solution, configure the SQL Server instance for remote connections. Create individual login accounts for each service that accesses a database on the SQL Server instance. For detailed requirements and instructions, refer to the Microsoft TechNet Library topic Configuring a Remote Instance of SQL Server for VMM. For further information, refer to the list of documents in Reference Documentation.

System Center Virtual Machine Manager server deployment
Overview
This section provides information about configuring SCVMM for the solution. Table 18 outlines the tasks to be completed.

Table 18. Tasks for SCVMM configuration

Task | Description | Reference
Creating the SCVMM host virtual machine | Create a virtual machine for the SCVMM server. | Create a virtual machine
Installing the SCVMM guest OS | Install Windows Server 2012 R2 Datacenter Edition on the SCVMM host virtual machine. | Install the guest operating system
Installing the SCVMM server | Install an SCVMM server. | How to Install a VMM Management Server; Installing the VMM Server
Installing the SCVMM Admin Console | Install an SCVMM Admin Console. | How to Install the VMM Console; Installing the VMM Administrator Console
Installing the SCVMM agent locally on the hosts | Install an SCVMM agent locally on the hosts that SCVMM manages. | Installing a VMM Agent Locally on a Host
Adding the Hyper-V cluster to SCVMM | Add the Hyper-V cluster to SCVMM. | How to Add a Host Cluster to VMM
Creating a virtual machine in SCVMM | Create a virtual machine in SCVMM. | Creating and Deploying Virtual Machines in VMM; How to Create a Virtual Machine with a Blank Virtual Hard Disk
Performing partition alignment | Use diskpart.exe to perform partition alignment, assign drive letters, and assign the file allocation unit size of the virtual machine's disk drive. | Disk Partition Alignment Best Practices for SQL Server
Creating a template virtual machine | Create a template virtual machine from the existing virtual machine. Create the hardware profile and guest OS profile at this time. | How to Create a Virtual Machine Template; How to Create a Template from a Virtual Machine
Deploying virtual machines from the template virtual machine | Deploy the virtual machines from the template virtual machine. | How to Create and Deploy a Virtual Machine from a Template; How to Deploy a Virtual Machine

Creating an SCVMM host virtual machine
To deploy the SCVMM server as a virtual machine on a Hyper-V server installed as part of this solution, connect directly to an infrastructure Hyper-V server by using Hyper-V Manager. Create a virtual machine on the Hyper-V server with the customer guest OS configuration, using infrastructure server storage presented from the storage array. The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines that SCVMM must manage.

Installing the SCVMM guest OS
Install the guest OS on the SCVMM host virtual machine. Install the required Windows Server version on the virtual machine and select appropriate network, time, and authentication settings.

Installing the SCVMM server
Set up the SCVMM database and the default library server, and then install the SCVMM server. To install the SCVMM server, refer to the Microsoft TechNet Library topic Installing the VMM Server.

Installing the SCVMM Admin Console
The SCVMM Admin Console is a client tool used to manage the SCVMM server. Install the SCVMM Admin Console on the same computer as the VMM server. To install the SCVMM Admin Console, refer to the Microsoft TechNet Library topic Installing the VMM Administrator Console.

Installing the SCVMM agent locally on a host
If the hosts must be managed on a perimeter network, install an SCVMM agent locally on each host before adding the host to SCVMM. Optionally, install an SCVMM agent locally on a host in a domain before adding the host to SCVMM. In all other cases, agents are installed automatically. To install a VMM agent locally on a host, refer to the Microsoft TechNet Library topic Installing a VMM Agent Locally on a Host.

Adding the Hyper-V cluster to SCVMM
SCVMM manages the Hyper-V cluster. Add the deployed Hyper-V cluster to SCVMM. To add the Hyper-V cluster, refer to the Microsoft TechNet Library topic How to Add a Host Cluster to VMM.
Storage array preparation and configuration
Overview
This section provides information about creating volumes on XtremIO and mapping the XtremIO volumes to the SCVMM environment. Implementation instructions and best practices may vary depending on the storage network protocol selected for the solution. Follow these high-level steps in each case:
1. Configure the XtremIO array, including registering the host initiator group.
2. Provision storage and configure LUN masking for the Hyper-V hosts.

The following sections explain the options for each step, depending on whether the FC or iSCSI protocol is selected.

Configuring the XtremIO array
This section describes how to configure the XtremIO storage array for host access using a block-only protocol such as FC or iSCSI. In this solution, XtremIO provides data storage for Hyper-V hosts. Table 19 describes the XtremIO configuration tasks.

Table 19. Tasks for XtremIO configuration

Task | Description | Reference
Preparing the XtremIO array | Physically install the XtremIO hardware following the procedures in the product documentation. | XtremIO Storage Array Operation Guide; XtremIO Storage Array Site Preparation Guide version 3.0
Setting up the initial XtremIO configuration | Configure the IP addresses and other key parameters on the XtremIO array. | XtremIO Storage Array User Guide version 3.0
Provisioning storage for Microsoft Hyper-V hosts | Create the storage areas required for the solution. | Vendor switch configuration guide

Preparing the XtremIO array
The XtremIO Storage Array Operation Guide provides instructions to assemble, rack, cable, and power up the XtremIO array. There are no solution-specific setup steps.

Setting up the initial XtremIO configuration
After completing the initial XtremIO array setup, configure key information about the existing environment so that the storage array can communicate with other devices in the environment. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:
- DNS
- NTP
- Storage network interfaces

For data connections using the FC protocol:
Ensure that one or more servers are connected to the XtremIO storage system through qualified FC switches. For detailed instructions, refer to the EMC Host Connectivity Guide for Windows.

For data connections using the iSCSI protocol:
1. Connect one or more servers to the XtremIO storage system through qualified IP switches. For detailed instructions, refer to the EMC Host Connectivity Guide for Windows.
2. Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information:
a. Set up a storage network IP address.

Logically isolate the other networks in the solution, as described in Chapter 3, to ensure that other network traffic does not impact traffic between hosts and storage.
b. Enable jumbo frames on the XtremIO front-end iSCSI ports. Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the same MTU size across all network interfaces in the environment. To enable the jumbo frames option:
i. From the menu bar, click the Administration icon to display the Administration workspace.
ii. Click the Cluster tab and select iSCSI Ports Configuration from the left pane. The iSCSI Ports Configuration screen appears.
iii. In the Port Properties Configuration section, select the Enable Jumbo Frames option.
iv. Set the MTU value by using the up and down arrows.
v. Click Apply.
The reference documents listed in Appendix A provide more information on how to configure the XtremIO platform. The Storage configuration guidelines section provides more information on the disk layout.

Managing the initiator group
The XtremIO storage array uses "initiators" to refer to ports that can access a volume. The XtremIO storage array manages initiators by assigning them to an initiator group. You can do this either by editing an initiator group in the GUI, as shown in Figure 22, and adding the initiator's properties, or by using the relevant CLI command.

Figure 22. XtremIO initiator group

The initiators within an initiator group share access to one or more of the cluster's volumes. You can define which initiator groups have access to which volumes using LUN mapping. For detailed instructions, refer to the EMC XtremIO User Guide.

Managing the volumes

This section describes provisioning XtremIO volumes for Microsoft Hyper-V hosts. You can define various quantities of disk space as volumes in an active cluster. Volumes are defined by:
- Volume size: The quantity of disk space reserved for the volume.
- LB size: The logical block size in bytes.
- Alignment-offset: A value for preventing unaligned-access performance problems.

Note: In the GUI, selecting a predefined volume type defines the alignment-offset and LB size values. In the CLI, you can define the alignment-offset and LB size values separately.

This section explains how to manage volumes using the XtremIO storage array GUI. Complete the steps in the XtremIO GUI to configure LUNs to store virtual machines. When XtremIO initializes during the installation process, the data protection domain is created automatically. Provision the LUNs based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4.

1. Log in to the XtremIO GUI.
2. From the menu bar, click Configuration.
3. From the Volumes pane, click Add, as shown in Figure 23.

Figure 23. Adding a volume

4. In the Add New Volumes window, as shown in Figure 24, define the following:
   a. Name: The name of the volume.
   b. Size: The amount of disk space allocated for this volume.

   c. Volume Type: Select one of the following types, which define the LB size and alignment-offset:
      i. Normal (512 LBs)
      ii. 4 KB LBs
      iii. Legacy Windows (offset: 63)
   d. Small I/O Alerts: Enable if you want an alert to be sent when small I/Os (less than 4 KB) are detected.
   e. Unaligned I/O Alerts: Enable if you want an alert to be sent when unaligned I/Os are detected.
   f. VAAI TP Alerts: Enable if you want an alert to be sent when the storage capacity reaches the set limit.

Figure 24. Volume summary

5. For the new volumes:
   a. If you do not want to add the new volumes to a folder, click Finish. The new volumes are created and appear in the root under Volumes in the Configuration window.
   b. If you want to add the new volumes to a folder:
      i. Click Next.
      ii. Select the existing folder (or click New Folder to create a new one).

      iii. Click Finish. The new volumes are created and appear in the selected folder under Volumes in the Configuration window.

Table 20 lists a single X-Brick storage allocation layout for 700 virtual machines in the solution.

Table 20. Storage allocation for block data

Configuration: 700 virtual servers
Values reported for the single X-Brick: availability physical capacity (TB), number of SSDs (400 GB), number of LUNs, and volume capacity (TB).

Note: In this solution, each virtual machine occupies 102 GB, with 100 GB for the OS and user space and a 2 GB swap file.

Mapping volumes to an initiator group

This section describes how to map XtremIO volumes to an initiator group. To enable the initiators within an initiator group to access a volume's disk space, map the volume to the initiator group. A LUN number is automatically assigned when this is done; the number appears under Selected Volumes in the Configuration window.

To map a volume to an initiator group:
1. From the menu bar, click Configuration.
2. Under Volumes, select the volumes you want to map. To select multiple volumes, hold Shift and select the volumes. The volumes appear under Volumes in the Configuration window, as shown in Figure 25.

Figure 25. Volumes and initiator group

3. Under Initiator Groups, select the initiator group to which you want to map the volumes. The initiator group appears under Initiator Groups in the Configuration window.
4. After you have selected the volumes and initiator groups you want to map, under LUN Mapping Configuration, click Map All.
5. Click Apply, as shown in Figure 26. The selected volumes are mapped to the initiator group.
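Stepping back to the sizing note under Table 20: the logical capacity behind the 700-virtual-machine layout can be sanity-checked with simple arithmetic. This is a sketch using only the figures stated in the note (700 virtual machines at 102 GB each); the physical-capacity column of Table 20 is much smaller than this logical total because of thin provisioning and inline deduplication and compression.

```python
# Back-of-envelope check of the logical capacity behind Table 20,
# using only the figures stated in the note above.
vms = 700
gb_per_vm = 102                  # 100 GB OS and user space + 2 GB swap file

logical_gb = vms * gb_per_vm     # total logically provisioned capacity
logical_tb = logical_gb / 1024   # convert to binary TB

print(logical_gb)                # 71400
print(round(logical_tb, 1))      # 69.7
```

Roughly 70 TB of logical capacity is provisioned; the array's data-reduction services determine how much physical flash this actually consumes.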

Figure 26. Mapping volumes

The XtremIO volumes have now been created and mapped to an initiator group, and the disks are visible to the Windows hosts.

Creating the CSV disk

To create the CSV disks for the failover cluster:
1. On each Microsoft Hyper-V host, open Disk Management, click Action, and then click Rescan Disks. After the rescan, all the XtremIO volumes appear under Disk Management on each Hyper-V host.
2. On one of the Hyper-V hosts, initialize each XtremIO volume and format it with the NTFS file system.
3. In Failover Cluster Manager, expand the name of the cluster, and then expand Storage. Right-click Disks, and then click Add Disk. Select the disks and click OK.
4. To add the disks to the CSV, select all the cluster disks, right-click, and then click Add to Cluster Shared Volumes.

Note: EMC recommends that you format the Windows C drive and CSV volumes with the allocation unit size set to 8,192 bytes (8 KB). To format the boot volume with an 8,192-byte allocation unit size, refer to EMC best practices. To create the CSV disks, refer to the Microsoft TechNet Library topic Use Cluster Shared Volumes in a Failover Cluster.

Creating a virtual machine in SCVMM

Create a virtual machine in SCVMM to use as a virtual machine template. Install the guest operating system and required software, and then adjust the Windows and application settings. To create a virtual machine, refer to the Microsoft TechNet Library topic How to Create and Deploy a Virtual Machine from a Blank Virtual Hard Disk.

Performing partition alignment

Perform disk partition alignment only for virtual machines running Windows Server 2003 R2 or earlier. EMC recommends implementing disk partition alignment with an offset of 1,024 KB and formatting the disk drive with a file allocation unit (cluster) size of 8 KB.
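The alignment recommendation above reduces to simple arithmetic: the partition must start on a 1,024 KB boundary and the file system must use an 8 KB cluster. A minimal sketch of that check follows; the sample offsets are illustrative values, not read from a live system.

```python
# Sketch: check a partition against the alignment guidance above
# (1,024 KB starting offset, 8 KB file allocation unit).
KB = 1024

def is_aligned(start_offset_bytes, cluster_bytes,
               offset_kb=1024, unit_kb=8):
    """True if the partition start sits on the recommended offset
    boundary and the cluster size matches the recommended unit."""
    return (start_offset_bytes % (offset_kb * KB) == 0
            and cluster_bytes == unit_kb * KB)

# A partition created with a 1,024 KB offset starts at 1,048,576 bytes.
print(is_aligned(1_048_576, 8_192))   # True
# The legacy 63-sector offset (63 * 512 = 32,256 bytes) is misaligned,
# which is why Windows Server 2003 R2 and earlier guests need this fix.
print(is_aligned(32_256, 4_096))      # False
```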

To perform partition alignment, assign drive letters, and set the file allocation unit size using diskpart.exe, refer to the Microsoft TechNet topic Disk Partition Alignment Best Practices for SQL Server.

Creating a template virtual machine

Create a template virtual machine from the existing virtual machine in SCVMM. Create a hardware profile and a guest OS profile when creating the template, and use the profiles to deploy the virtual machines. Converting a virtual machine into a template destroys the source virtual machine, so back up the virtual machine before converting it. To create a template from a virtual machine, refer to the Microsoft TechNet topic How to Create a Template from a Virtual Machine.

Deploying virtual machines from the template

The virtual machine deployment wizard in the SCVMM Admin Console enables you to save the PowerShell scripts that perform the deployment and reuse them to deploy other virtual machines with the same configuration. To deploy a virtual machine from a template, refer to the Microsoft TechNet topic How to Deploy a Virtual Machine.

Chapter 7: Solution Verification

This chapter presents the following topics:
- Overview
- Post-installation checklist
- Deploying and testing a single virtual machine
- Verifying solution component redundancy

Overview

This chapter provides a list of items to review and tasks to perform after configuring the solution. To verify the configuration and functionality of specific aspects of the solution, and to ensure that the configuration meets the customer's core availability requirements, complete the tasks listed in Table 21.

Table 21. Testing the installation

Task: Post-installation checklist
Description: Verify that sufficient virtual ports exist on each Hyper-V host virtual switch; that the VLAN for virtual machine networking is configured correctly on each Hyper-V host; that each Hyper-V host has access to the required Cluster Shared Volumes; and that the live migration interfaces are configured correctly on all Hyper-V hosts.
Reference: Hyper-V: How many network cards do I need?; Network Recommendations for a Hyper-V Cluster in Windows Server 2012 R2; Hyper-V: Using Hyper-V and Failover Clustering; Virtual Machine Live Migration Overview

Task: Deploying and testing a single virtual machine
Description: Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface.
Reference: Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager

Task: Verifying solution component redundancy
Description: Reboot each storage processor in turn and ensure that storage connectivity is maintained. Disable each of the redundant switches in turn and verify that Hyper-V host, virtual machine, and storage array connectivity remains intact. On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host.
Reference: Vendor documentation; Creating a Hyper-V Host Cluster in VMM Overview

Post-installation checklist

Before moving to production, verify the following critical items on each Windows Server host:
- The VLAN for virtual machine networking is configured correctly.
- The storage networking is configured correctly.
- Each server can access the required CSVs.
- A network interface is configured correctly for live migration.

Deploying and testing a single virtual machine

Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.

Verifying solution component redundancy

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures. Complete the following steps to restart each XtremIO storage controller in turn and verify that connectivity to the Microsoft Hyper-V CSV file system is maintained throughout each restart:

1. Log in to the XtremIO XMS CLI console with administrator credentials.
2. Power off storage controller 1 using the following commands:
   deactivate-storage-controller sc-id=1
   power-off sc-id=1
3. Reactivate storage controller 1 using the following commands:
   power-on sc-id=1
   activate-storage-controller sc-id=1
4. When the cycle completes, repeat the same commands with sc-id=2 to verify the other storage controller.
5. On the host side, enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
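The restart cycle in steps 2 through 4 lends itself to scripting. The sketch below wraps the four XMS CLI commands quoted in the procedure; running them over SSH, and the XMS host name used here, are assumptions about your environment rather than anything the guide specifies, so treat the transport as illustrative.

```python
# Sketch: drive the storage-controller restart cycle from the steps above.
# The four XMS CLI commands come from the procedure; the SSH transport and
# host name below are assumptions for illustration only.
import subprocess

XMS_HOST = "xms.example.local"   # hypothetical XMS address

def xms_cli(command):
    """Build one XMS CLI invocation over SSH (assumed transport)."""
    return ["ssh", f"admin@{XMS_HOST}", command]

def restart_cycle(sc_id):
    """The documented commands to power-cycle one storage controller."""
    return [
        f"deactivate-storage-controller sc-id={sc_id}",
        f"power-off sc-id={sc_id}",
        f"power-on sc-id={sc_id}",
        f"activate-storage-controller sc-id={sc_id}",
    ]

# Cycle both controllers in turn, as the procedure directs; verify CSV
# connectivity from the Hyper-V hosts between each command.
for sc in (1, 2):
    for cmd in restart_cycle(sc):
        print(" ".join(xms_cli(cmd)))
        # subprocess.run(xms_cli(cmd), check=True)  # uncomment to execute
```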

Chapter 8: System Monitoring

This chapter presents the following topics:
- Overview
- Key areas to monitor
- XtremIO resource monitoring guidelines

Overview

Monitoring a VSPEX environment is no different from monitoring any core IT system; it is a relevant and essential component of administration. Monitoring a highly virtualized infrastructure, such as a VSPEX environment, is more complex than monitoring a purely physical infrastructure, because the interactions and interrelationships between the various components can be subtle and nuanced. If you are experienced in administering virtualized environments, you should be familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and workflows.

Several business needs require proactive, consistent monitoring of the environment:
- Stable, predictable performance
- Sizing and capacity needs
- Availability and accessibility
- Elasticity: the dynamic addition, subtraction, and modification of workloads
- Data protection

If self-service provisioning is enabled in the environment, the ability to monitor the system is even more critical, because clients can generate virtual machines and workloads dynamically, which can adversely affect the entire system. This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are included at the end of this chapter.

Key areas to monitor

VSPEX Proven Infrastructures provide end-to-end solutions and require system monitoring of three discrete but highly interrelated areas:
- Servers, including both virtual machines and clusters
- Networking
- Storage

This chapter focuses primarily on monitoring the key components of the storage infrastructure, the XtremIO array, but also briefly describes the other components.

Performance baseline

When a workload is added to a VSPEX deployment, server and networking resources are consumed.
As more workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which affects all other workloads running on the platform. Customers should fully understand their workload characteristics on all key components before deploying them on a VSPEX platform; this is a requirement to correctly size resource utilization against the defined reference virtual machine (RVM) workload.

Deploy the first workload, and then measure the end-to-end resource consumption along with platform performance. This removes the guesswork from sizing activities and ensures that the initial assumptions were valid. As more workloads are deployed, reevaluate resource consumption and performance levels to determine the cumulative load and the impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that any oversubscription does not negatively impact overall system performance. Run these assessments consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected. The following components are the critical areas that affect overall system performance:
- Servers
- Networking
- Storage

Servers

The key server resources to monitor include:
- Processors
- Memory
- Disk (local and SAN)
- Networking

Monitor these areas both at the physical host (hypervisor) level and at the virtual level (from within the guest virtual machine). For a VSPEX deployment with Microsoft Hyper-V, you can use Windows Perfmon to monitor and log these metrics. Follow your vendor's guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending on the application. For detailed information about Perfmon, refer to the Microsoft TechNet Library topic Using Performance Monitor. Each VSPEX Proven Infrastructure provides a guaranteed level of performance based on the number of RVMs deployed and their defined workload.

Networking

Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, the fabric (switch) level, and the storage level. At the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests.
Key items to track include aggregate throughput or bandwidth, latencies, I/O sizes, and IOPS. Capture additional data from network card or HBA utilities. From the fabric perspective, the tools that monitor the switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Networking storage protocols are discussed in the following section.

Storage

Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. The tools provided with the XtremIO series of storage arrays offer an easy yet powerful way to gain insight into how the underlying storage components are operating. For both block and file protocols, the key areas to focus on include:
- Capacity
- Hardware elements: X-Brick, storage controllers, SSDs
- Cluster elements: clusters, volumes, initiator groups

Additional considerations, primarily from a tuning perspective, include I/O size and workload characteristics. These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers additional guidance on the subject in the EMC XtremIO Storage Array User Guide.

XtremIO resource monitoring guidelines

To monitor XtremIO, use the XMS GUI console, which you can access by opening an HTTPS session to the XMS IP address. The XtremIO series is an all-flash array storage platform that provides block storage access through a single entity.

Monitoring the storage

This section explains how to use the XtremIO GUI to monitor block storage resource usage for the listed elements. Performance counters can be displayed from the Dashboard.

Efficiency

You can monitor the cluster efficiency status under Storage > Overall Efficiency in the Dashboard, as shown in Figure 27.

Figure 27. Monitoring the efficiency

The Overall Efficiency section displays the following data:
- Overall Efficiency: The disk space saved by the XtremIO storage array, calculated as: Total provisioned capacity / Unique data on SSD
- Data Reduction Ratio: The inline data deduplication and compression ratio, calculated as: Data written to the array / Physical capacity used
- Deduplication Ratio: The real-time inline data deduplication ratio, calculated as: Data written to the array / Unique data on SSD
- Compression Ratio: The real-time inline compression ratio, calculated as: Unique data on SSD / Physical capacity used
- Thin Provisioning Savings: Used disk space compared to allocated disk space.

Volume capacity

You can monitor the volume capacity status under Storage > Volume Capacity in the Dashboard, as shown in Figure 28.
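The efficiency ratios above are related: the data reduction ratio is the product of the deduplication and compression ratios, since (data written / unique data) times (unique data / physical used) equals (data written / physical used). A short sketch with illustrative (made-up) capacity figures shows the relationship:

```python
# Sketch: how the Dashboard efficiency figures relate, using the formulas
# above with illustrative (made-up) capacity numbers in TB.
provisioned = 70.0        # total provisioned capacity
written     = 35.0        # data written to the array
unique      = 14.0        # unique data on SSD after deduplication
physical    = 7.0         # physical capacity used after compression

overall_efficiency = provisioned / unique    # 5.0
data_reduction     = written / physical      # 5.0
deduplication      = written / unique        # 2.5
compression        = unique / physical       # 2.0

# Data reduction equals deduplication times compression:
# (written/unique) * (unique/physical) == written/physical
assert abs(data_reduction - deduplication * compression) < 1e-9
print(overall_efficiency, data_reduction, deduplication, compression)
```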

Figure 28. Volume capacity

The Volume Capacity section displays the following data:
- Total disk space defined by the volumes
- Physical space used
- Logical space used

Physical capacity

You can monitor the physical capacity status under Storage > Physical Capacity in the Dashboard, as shown in Figure 29.

Figure 29. Physical capacity

The Physical Capacity section displays the following data:
- Total physical capacity
- Used physical capacity

Monitoring the performance

To monitor the cluster performance from the GUI:
1. From the menu bar, click the Dashboard icon to display the Dashboard.
2. Under Performance, select the desired parameters:
   a. Select the measurement unit of the display by clicking one of the following:
      i. Bandwidth: MB/s
      ii. IOPS
      iii. Latency: microseconds (μs); applies only to the activity history graph.
   b. Select the item to be monitored from the Item Selector:
      i. Block Size
      ii. Initiator Groups
      iii. Volumes

   c. Set the Activity History timeframe by selecting one of the following periods from the Time Period Selector:
      i. Last Hour
      ii. Last 6 Hours
      iii. Last 24 Hours
      iv. Last 3 Days
      v. Last Week

Figure 30 shows the Performance GUI.

Figure 30. Monitoring the performance (IOPS)

Note: You can also monitor the performance through the CLI. For more information, refer to the XtremIO Storage Array User Guide.

Monitoring the hardware elements

Monitoring X-Bricks

You can quickly view the X-Brick name and any associated alerts by hovering the mouse pointer over the X-Brick in the Hardware pane of the Dashboard workspace. To view the X-Brick's details in the Hardware workspace, hover the mouse pointer over different parts of the component to view that component's parameters and associated alerts:
1. Click Show Front to view the X-Brick's front end.
2. Click Show Back to view the X-Brick's back end.

3. Click Show Cable Connectivity to view the X-Brick's cable connections. Figure 31 shows the data and management cable connectivity.

Figure 31. Data and management cable connectivity

4. Click X-Brick Properties to display the dialog box, as shown in Figure 32.

Figure 32. X-Brick properties

Monitoring storage controllers

To view the storage controller information from the GUI:
1. From the menu bar, click the Hardware icon to display the Hardware workspace.
2. Select the X-Brick containing the storage controllers to be monitored.
3. Click X-Brick Properties to open the X-Brick Properties dialog box.
4. View the details of the selected X-Brick's two storage controllers.

Monitoring SSDs

To view the SSD information from the GUI:
1. From the menu bar, click the Hardware icon to display the Hardware workspace.
2. Select the X-Brick containing the SSDs to be monitored.
3. Click X-Brick Properties to open the X-Brick Properties dialog box.
4. View the details of the selected X-Brick's SSDs, as shown in Figure 33.

Figure 33. Monitoring the SSDs

Using advanced monitoring

In addition to the available monitoring services provided by the XtremIO storage array, you can define monitors tailored to your cluster's needs. Table 22 lists the parameters that can be monitored (depending on the selected monitor type).

Table 22. Advanced monitor parameters

Parameter: Read-IOPS
Description: Read IOPS by block size: 512 B, 1 KB, 2 KB, 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, GT1 MB

Parameter: Write-IOPS
Description: Write IOPS by block size: 512 B, 1 KB, 2 KB, 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, GT1 MB

Parameter: IOPS
Description: Total read and write IOPS, by block size: 512 B, 1 KB, 2 KB, 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, GT1 MB
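The block-size buckets in Table 22 can be modeled as a simple lookup. The sketch below maps an I/O size in bytes to the bucket labels the table lists; the binning rule (smallest bucket that holds the I/O) is an assumption about how the array categorizes I/Os, made for illustration.

```python
# Sketch: map an I/O size in bytes to the block-size buckets that the
# advanced monitors in Table 22 report on (512 B up to 1 MB, then GT1 MB).
# The "smallest bucket that fits" rule is an assumption for illustration.
BUCKETS = [512, 1024, 2048, 4096, 8192, 16384, 32768,
           65536, 131072, 262144, 524288, 1048576]

def iops_bucket(io_bytes):
    """Return the label of the smallest bucket that holds the I/O."""
    for size in BUCKETS:
        if io_bytes <= size:
            if size < 1024:
                return f"{size} B"
            if size < 1048576:
                return f"{size // 1024} KB"
            return "1 MB"
    return "GT1 MB"

print(iops_bucket(4096))      # 4 KB
print(iops_bucket(2097152))   # GT1 MB
```

A counter keyed on these labels is a quick way to reproduce an IOPS-by-block-size breakdown from a host-side I/O trace.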


More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes, at a high level, the steps required to deploy a Microsoft Exchange Server

More information

DATA PROTECTION IN A ROBO ENVIRONMENT

DATA PROTECTION IN A ROBO ENVIRONMENT Reference Architecture DATA PROTECTION IN A ROBO ENVIRONMENT EMC VNX Series EMC VNXe Series EMC Solutions Group April 2012 Copyright 2012 EMC Corporation. All Rights Reserved. EMC believes the information

More information

EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere with EMC XtremIO

EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere with EMC XtremIO IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere with EMC XtremIO Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes the high-level steps

More information

EMC VSPEX SERVER VIRTUALIZATION SOLUTION

EMC VSPEX SERVER VIRTUALIZATION SOLUTION Reference Architecture EMC VSPEX SERVER VIRTUALIZATION SOLUTION VMware vsphere 5 for 100 Virtual Machines Enabled by VMware vsphere 5, EMC VNXe3300, and EMC Next-Generation Backup EMC VSPEX April 2012

More information

EMC VSPEX PRIVATE CLOUD

EMC VSPEX PRIVATE CLOUD Proven Infrastructure EMC VSPEX PRIVATE CLOUD VMware vsphere 5.1 for up to 500 Virtual Machines Enabled by Microsoft Windows Server 2012, EMC VNX, and EMC Next- EMC VSPEX Abstract This document describes

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7 and Microsoft Hyper-V Enabled by EMC Next-Generation VNX and EMC Powered Backup EMC VSPEX Abstract This describes how to design an EMC VSPEX

More information

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Dell EMC Engineering January 2017 A Dell EMC Technical White Paper

More information

Offloaded Data Transfers (ODX) Virtual Fibre Channel for Hyper-V. Application storage support through SMB 3.0. Storage Spaces

Offloaded Data Transfers (ODX) Virtual Fibre Channel for Hyper-V. Application storage support through SMB 3.0. Storage Spaces 2 ALWAYS ON, ENTERPRISE-CLASS FEATURES ON LESS EXPENSIVE HARDWARE ALWAYS UP SERVICES IMPROVED PERFORMANCE AND MORE CHOICE THROUGH INDUSTRY INNOVATION Storage Spaces Application storage support through

More information

EMC Backup and Recovery for Microsoft Exchange 2007 SP1. Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3.

EMC Backup and Recovery for Microsoft Exchange 2007 SP1. Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3. EMC Backup and Recovery for Microsoft Exchange 2007 SP1 Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3.5 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA.

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA. This solution guide describes the data protection functionality of the Federation Enterprise Hybrid Cloud for Microsoft applications solution, including automated backup as a service, continuous availability,

More information

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE White Paper EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE EMC XtremSF, EMC XtremCache, EMC Symmetrix VMAX and Symmetrix VMAX 10K, XtremSF and XtremCache dramatically improve Oracle performance Symmetrix

More information

Dell EMC All-Flash solutions are powered by Intel Xeon processors. Learn more at DellEMC.com/All-Flash

Dell EMC All-Flash solutions are powered by Intel Xeon processors. Learn more at DellEMC.com/All-Flash N O I T A M R O F S N A R T T I L H E S FU FLA A IN Dell EMC All-Flash solutions are powered by Intel Xeon processors. MODERNIZE WITHOUT COMPROMISE I n today s lightning-fast digital world, your IT Transformation

More information

EBOOK. NetApp ONTAP Cloud FOR MICROSOFT AZURE ENTERPRISE DATA MANAGEMENT IN THE CLOUD

EBOOK. NetApp ONTAP Cloud FOR MICROSOFT AZURE ENTERPRISE DATA MANAGEMENT IN THE CLOUD EBOOK NetApp ONTAP Cloud FOR MICROSOFT AZURE ENTERPRISE DATA MANAGEMENT IN THE CLOUD NetApp ONTAP Cloud for Microsoft Azure The ONTAP Cloud Advantage 3 Enterprise-Class Data Management 5 How ONTAP Cloud

More information

EMC VSPEX PRIVATE CLOUD

EMC VSPEX PRIVATE CLOUD VSPEX Proven Infrastructure EMC VSPEX PRIVATE CLOUD VMware vsphere 5.1 for up to 250 Virtual Machines Enabled by Microsoft Windows Server 2012, EMC VNX, and EMC Next- EMC VSPEX Abstract This document describes

More information

Enhancing Oracle VM Business Continuity Using Dell Compellent Live Volume

Enhancing Oracle VM Business Continuity Using Dell Compellent Live Volume Enhancing Oracle VM Business Continuity Using Dell Compellent Live Volume Wendy Chen, Roger Lopez, and Josh Raw Dell Product Group February 2013 This document is for informational purposes only and may

More information

Nimble Storage Adaptive Flash

Nimble Storage Adaptive Flash Nimble Storage Adaptive Flash Read more Nimble solutions Contact Us 800-544-8877 solutions@microage.com MicroAge.com TECHNOLOGY OVERVIEW Nimble Storage Adaptive Flash Nimble Storage s Adaptive Flash platform

More information

Nutanix White Paper. Hyper-Converged Infrastructure for Enterprise Applications. Version 1.0 March Enterprise Applications on Nutanix

Nutanix White Paper. Hyper-Converged Infrastructure for Enterprise Applications. Version 1.0 March Enterprise Applications on Nutanix Nutanix White Paper Hyper-Converged Infrastructure for Enterprise Applications Version 1.0 March 2015 1 The Journey to Hyper-Converged Infrastructure The combination of hyper-convergence and web-scale

More information

EMC STORAGE FOR MILESTONE XPROTECT CORPORATE

EMC STORAGE FOR MILESTONE XPROTECT CORPORATE Reference Architecture EMC STORAGE FOR MILESTONE XPROTECT CORPORATE Milestone multitier video surveillance storage architectures Design guidelines for Live Database and Archive Database video storage EMC

More information

EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.6 and VMware vsphere with EMC XtremIO

EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.6 and VMware vsphere with EMC XtremIO IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.6 and VMware vsphere with EMC XtremIO Enabled by EMC Isilon, EMC VNX, and EMC Data Protection EMC VSPEX Abstract This describes the

More information

Using EonStor DS Series iscsi-host storage systems with VMware vsphere 5.x

Using EonStor DS Series iscsi-host storage systems with VMware vsphere 5.x Using EonStor DS Series iscsi-host storage systems with VMware vsphere 5.x Application notes Abstract These application notes explain configuration details for using Infortrend EonStor DS Series iscsi-host

More information

XTREMIO: TRANSFORMING APPLICATIONS, ENABLING THE AGILE DATA CENTER

XTREMIO: TRANSFORMING APPLICATIONS, ENABLING THE AGILE DATA CENTER 1 XTREMIO: TRANSFORMING APPLICATIONS, ENABLING THE AGILE DATA CENTER MAX FISHMAN XTREMIO PRODUCT MANAGEMENT 2 THE ALL FLASH ARRAY REVOLUTION ALL FLASH ARRAY 3 XTREMIO ENABLES THE AGILE DATA CENTER 10%

More information

EMC XtremIO All-Flash Applications. Sonny Aulakh VP, Sales Engineering November 2014

EMC XtremIO All-Flash Applications. Sonny Aulakh VP, Sales Engineering November 2014 EMC XtremIO All-Flash Applications Sonny Aulakh VP, Sales Engineering XtremIO @sonnyaulakh November 2014 1 XtremIO #1 All-Flash Array in the Market Gartner Magic Quadrant Leader >$300,000,000

More information

EMC VSPEX with Brocade Networking Solutions for END-USER COMPUTING

EMC VSPEX with Brocade Networking Solutions for END-USER COMPUTING VSPEX Proven Infrastructure EMC VSPEX with Brocade Networking Solutions for END-USER COMPUTING VMware View 5.1 and VMware vsphere 5.1 for up to 2000 Virtual Desktops Enabled by Brocade Network Fabrics,

More information

Merging Enterprise Applications with Docker* Container Technology

Merging Enterprise Applications with Docker* Container Technology Solution Brief NetApp Docker Volume Plugin* Intel Xeon Processors Intel Ethernet Converged Network Adapters Merging Enterprise Applications with Docker* Container Technology Enabling Scale-out Solutions

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING EMC VSPEX END-USER COMPUTING VMware Horizon View 5.2 and VMware vsphere 5.1 for up to 250 Virtual Desktops Enabled by EMC VNXe and EMC Next-Generation Backup EMC VSPEX Abstract This guide describes the

More information

ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS. By George Crump

ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS. By George Crump ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS By George Crump Economical, Storage Purpose-Built for the Emerging Data Centers Most small, growing businesses start as a collection of laptops

More information

VMware vsphere Clusters in Security Zones

VMware vsphere Clusters in Security Zones SOLUTION OVERVIEW VMware vsan VMware vsphere Clusters in Security Zones A security zone, also referred to as a DMZ," is a sub-network that is designed to provide tightly controlled connectivity to an organization

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING EMC VSPEX END-USER COMPUTING Citrix XenDesktop EMC VSPEX Abstract This describes how to design an EMC VSPEX end-user computing solution for Citrix XenDesktop using EMC ScaleIO and VMware vsphere to provide

More information

EMC XTREMCACHE ACCELERATES ORACLE

EMC XTREMCACHE ACCELERATES ORACLE White Paper EMC XTREMCACHE ACCELERATES ORACLE EMC XtremSF, EMC XtremCache, EMC VNX, EMC FAST Suite, Oracle Database 11g XtremCache extends flash to the server FAST Suite automates storage placement in

More information

Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments. Solution Overview Cisco Public

Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments. Solution Overview Cisco Public Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments 1 2017 2017 Cisco Cisco and/or and/or its

More information

VMAX3 AND VMAX ALL FLASH WITH CLOUDARRAY

VMAX3 AND VMAX ALL FLASH WITH CLOUDARRAY VMAX3 AND VMAX ALL FLASH WITH CLOUDARRAY HYPERMAX OS Integration with CloudArray ABSTRACT With organizations around the world facing compliance regulations, an increase in data, and a decrease in IT spending,

More information

DATA CENTRE SOLUTIONS

DATA CENTRE SOLUTIONS DATA CENTRE SOLUTIONS NOW OPTIMIZATION IS WITHIN REACH. CONVERGED INFRASTRUCTURE VIRTUALIZATION STORAGE NETWORKING BACKUP & RECOVERY POWER & COOLING 2 INCREASE AGILITY, STARTING IN YOUR DATA CENTRE. Chances

More information

Virtualization of the MS Exchange Server Environment

Virtualization of the MS Exchange Server Environment MS Exchange Server Acceleration Maximizing Users in a Virtualized Environment with Flash-Powered Consolidation Allon Cohen, PhD OCZ Technology Group Introduction Microsoft (MS) Exchange Server is one of

More information

vsan Security Zone Deployment First Published On: Last Updated On:

vsan Security Zone Deployment First Published On: Last Updated On: First Published On: 06-14-2017 Last Updated On: 11-20-2017 1 1. vsan Security Zone Deployment 1.1.Solution Overview Table of Contents 2 1. vsan Security Zone Deployment 3 1.1 Solution Overview VMware vsphere

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING VSPEX Proven Infrastructure EMC VSPEX END-USER COMPUTING Citrix XenDesktop 5.6 with VMware vsphere 5.1 for up to 250 Virtual Desktops Enabled by EMC VNXe and EMC Next-Generation Backup EMC VSPEX Abstract

More information

The Data-Protection Playbook for All-flash Storage KEY CONSIDERATIONS FOR FLASH-OPTIMIZED DATA PROTECTION

The Data-Protection Playbook for All-flash Storage KEY CONSIDERATIONS FOR FLASH-OPTIMIZED DATA PROTECTION The Data-Protection Playbook for All-flash Storage KEY CONSIDERATIONS FOR FLASH-OPTIMIZED DATA PROTECTION The future of storage is flash The all-flash datacenter is a viable alternative You ve heard it

More information

Storage Solutions for VMware: InfiniBox. White Paper

Storage Solutions for VMware: InfiniBox. White Paper Storage Solutions for VMware: InfiniBox White Paper Abstract The integration between infrastructure and applications can drive greater flexibility and speed in helping businesses to be competitive and

More information

DELL EMC UNITY: BEST PRACTICES GUIDE

DELL EMC UNITY: BEST PRACTICES GUIDE DELL EMC UNITY: BEST PRACTICES GUIDE Best Practices for Performance and Availability Unity OE 4.5 ABSTRACT This white paper provides recommended best practice guidelines for installing and configuring

More information

Addressing Data Management and IT Infrastructure Challenges in a SharePoint Environment. By Michael Noel

Addressing Data Management and IT Infrastructure Challenges in a SharePoint Environment. By Michael Noel Addressing Data Management and IT Infrastructure Challenges in a SharePoint Environment By Michael Noel Contents Data Management with SharePoint and Its Challenges...2 Addressing Infrastructure Sprawl

More information

OceanStor 5300F&5500F& 5600F&5800F V5 All-Flash Storage Systems

OceanStor 5300F&5500F& 5600F&5800F V5 All-Flash Storage Systems OceanStor 5300F&5500F& 5600F&5800F V5 Huawei mid-range all-flash storage systems (OceanStor F V5 mid-range storage for short) deliver the high performance, low latency, and high scalability that are required

More information

Implementing SharePoint Server 2010 on Dell vstart Solution

Implementing SharePoint Server 2010 on Dell vstart Solution Implementing SharePoint Server 2010 on Dell vstart Solution A Reference Architecture for a 3500 concurrent users SharePoint Server 2010 farm on vstart 100 Hyper-V Solution. Dell Global Solutions Engineering

More information

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN White Paper VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN Benefits of EMC VNX for Block Integration with VMware VAAI EMC SOLUTIONS GROUP Abstract This white paper highlights the

More information

SOLUTION BRIEF Fulfill the promise of the cloud

SOLUTION BRIEF Fulfill the promise of the cloud SOLUTION BRIEF Fulfill the promise of the cloud NetApp Solutions for Amazon Web Services Fulfill the promise of the cloud NetApp Cloud Volumes Service for AWS: Move and manage more workloads faster Many

More information

FLASHARRAY//M Smart Storage for Cloud IT

FLASHARRAY//M Smart Storage for Cloud IT FLASHARRAY//M Smart Storage for Cloud IT //M AT A GLANCE PURPOSE-BUILT to power your business: Transactional and analytic databases Virtualization and private cloud Business critical applications Virtual

More information

Securing VSPEX VMware View 5.1 End- User Computing Solutions with RSA

Securing VSPEX VMware View 5.1 End- User Computing Solutions with RSA Design Guide Securing VSPEX VMware View 5.1 End- User Computing Solutions with RSA VMware vsphere 5.1 for up to 2000 Virtual Desktops EMC VSPEX Abstract This guide describes required components and a configuration

More information

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE Applied Technology Abstract This white paper is an overview of the tested features and performance enhancing technologies of EMC PowerPath

More information

Cisco Unified Computing System Delivering on Cisco's Unified Computing Vision

Cisco Unified Computing System Delivering on Cisco's Unified Computing Vision Cisco Unified Computing System Delivering on Cisco's Unified Computing Vision At-A-Glance Unified Computing Realized Today, IT organizations assemble their data center environments from individual components.

More information

Microsoft E xchange 2010 on VMware

Microsoft E xchange 2010 on VMware : Microsoft E xchange 2010 on VMware Availability and R ecovery Options This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more

More information

Hedvig as backup target for Veeam

Hedvig as backup target for Veeam Hedvig as backup target for Veeam Solution Whitepaper Version 1.0 April 2018 Table of contents Executive overview... 3 Introduction... 3 Solution components... 4 Hedvig... 4 Hedvig Virtual Disk (vdisk)...

More information

Life In The Flash Director - EMC Flash Strategy (Cross BU)

Life In The Flash Director - EMC Flash Strategy (Cross BU) 1 Life In The Flash Lane @SamMarraccini, Director - EMC Flash Strategy (Cross BU) CONSTANT 2 Performance = Moore s Law, Or Does It? MOORE S LAW: 100X PER DECADE FLASH Closes The CPU To Storage Gap FLASH

More information

Thinking Different: Simple, Efficient, Affordable, Unified Storage

Thinking Different: Simple, Efficient, Affordable, Unified Storage Thinking Different: Simple, Efficient, Affordable, Unified Storage EMC VNX Family Easy yet Powerful 1 IT Challenges: Tougher than Ever Four central themes facing every decision maker today Overcome flat

More information

Windows Server 2012 Hands- On Camp. Learn What s Hot and New in Windows Server 2012!

Windows Server 2012 Hands- On Camp. Learn What s Hot and New in Windows Server 2012! Windows Server 2012 Hands- On Camp Learn What s Hot and New in Windows Server 2012! Your Facilitator Damir Bersinic Datacenter Solutions Specialist Microsoft Canada Inc. damirb@microsoft.com Twitter: @DamirB

More information

SECURE, FLEXIBLE ON-PREMISE STORAGE WITH EMC SYNCPLICITY AND EMC ISILON

SECURE, FLEXIBLE ON-PREMISE STORAGE WITH EMC SYNCPLICITY AND EMC ISILON White Paper SECURE, FLEXIBLE ON-PREMISE STORAGE WITH EMC SYNCPLICITY AND EMC ISILON Abstract This white paper explains the benefits to the extended enterprise of the on-premise, online file sharing storage

More information

Intel Cloud Builder Guide: Cloud Design and Deployment on Intel Platforms

Intel Cloud Builder Guide: Cloud Design and Deployment on Intel Platforms EXECUTIVE SUMMARY Intel Cloud Builder Guide Intel Xeon Processor-based Servers Novell* Cloud Manager Intel Cloud Builder Guide: Cloud Design and Deployment on Intel Platforms Novell* Cloud Manager Intel

More information

Virtual Desktop Infrastructure (VDI) Bassam Jbara

Virtual Desktop Infrastructure (VDI) Bassam Jbara Virtual Desktop Infrastructure (VDI) Bassam Jbara 1 VDI Historical Overview Desktop virtualization is a software technology that separates the desktop environment and associated application software from

More information

Cisco HyperFlex Systems and Veeam Backup and Replication

Cisco HyperFlex Systems and Veeam Backup and Replication Cisco HyperFlex Systems and Veeam Backup and Replication Best practices for version 9.5 update 3 on Microsoft Hyper-V What you will learn This document outlines best practices for deploying Veeam backup

More information

DELL EMC VXRACK FLEX FOR HIGH PERFORMANCE DATABASES AND APPLICATIONS, MULTI-HYPERVISOR AND TWO-LAYER ENVIRONMENTS

DELL EMC VXRACK FLEX FOR HIGH PERFORMANCE DATABASES AND APPLICATIONS, MULTI-HYPERVISOR AND TWO-LAYER ENVIRONMENTS PRODUCT OVERVIEW DELL EMC VXRACK FLEX FOR HIGH PERFORMANCE DATABASES AND APPLICATIONS, MULTI-HYPERVISOR AND TWO-LAYER ENVIRONMENTS Dell EMC VxRack FLEX is a Dell EMC engineered and manufactured rack-scale

More information

Virtuozzo Containers

Virtuozzo Containers Parallels Virtuozzo Containers White Paper An Introduction to Operating System Virtualization and Parallels Containers www.parallels.com Table of Contents Introduction... 3 Hardware Virtualization... 3

More information

Virtual Security Server

Virtual Security Server Data Sheet VSS Virtual Security Server Security clients anytime, anywhere, any device CENTRALIZED CLIENT MANAGEMENT UP TO 50% LESS BANDWIDTH UP TO 80 VIDEO STREAMS MOBILE ACCESS INTEGRATED SECURITY SYSTEMS

More information

Real-time Protection for Microsoft Hyper-V

Real-time Protection for Microsoft Hyper-V Real-time Protection for Microsoft Hyper-V Introduction Computer virtualization has come a long way in a very short time, triggered primarily by the rapid rate of customer adoption. Moving resources to

More information

Enterprise power with everyday simplicity

Enterprise power with everyday simplicity Enterprise power with everyday simplicity QUALIT Y AWARDS STO R A G E M A G A Z I N E EqualLogic Storage The Dell difference Ease of use Integrated tools for centralized monitoring and management Scale-out

More information

White Paper Features and Benefits of Fujitsu All-Flash Arrays for Virtualization and Consolidation ETERNUS AF S2 series

White Paper Features and Benefits of Fujitsu All-Flash Arrays for Virtualization and Consolidation ETERNUS AF S2 series White Paper Features and Benefits of Fujitsu All-Flash Arrays for Virtualization and Consolidation Fujitsu All-Flash Arrays are extremely effective tools when virtualization is used for server consolidation.

More information

EMC Business Continuity for Microsoft SharePoint Server (MOSS 2007)

EMC Business Continuity for Microsoft SharePoint Server (MOSS 2007) EMC Business Continuity for Microsoft SharePoint Server (MOSS 2007) Enabled by EMC Symmetrix DMX-4 4500 and EMC Symmetrix Remote Data Facility (SRDF) Reference Architecture EMC Global Solutions 42 South

More information

Understanding Virtual System Data Protection

Understanding Virtual System Data Protection Understanding Virtual System Data Protection Server virtualization is the most important new technology introduced in the data center in the past decade. It has changed the way we think about computing

More information

IBM System Storage DS5020 Express

IBM System Storage DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage IBM System Storage DS5020 Express Highlights Next-generation 8 Gbps FC Trusted storage that protects interfaces enable infrastructure

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by Microsoft SQL Native Backup Reference Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information

More information