EMC VSPEX PRIVATE CLOUD


1 VSPEX Proven Infrastructure
EMC VSPEX PRIVATE CLOUD
VMware vsphere 5.1 for up to 250 Virtual Machines
Enabled by Microsoft Windows Server 2012, EMC VNX, and EMC Next-Generation Backup
EMC VSPEX
Abstract
This document describes the EMC VSPEX Proven Infrastructure solution for Private Cloud deployments with VMware vsphere and EMC VNX for up to 250 virtual machines using NFS storage.
January 2013

2 Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published January 2013. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC 2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC online support website. Part Number H

3 Contents Chapter 1 Executive Summary 13 Introduction Target audience Document purpose Business needs Chapter 2 Solution Overview 17 Introduction Virtualization Compute Network Storage Chapter 3 Solution Technology Overview 21 Overview Summary of key components Virtualization Overview VMware vsphere VMware vcenter VMware vsphere High Availability EMC Virtual Storage Integrator for VMware VNX VMware vstorage API for Array Integration support Compute Overview Network Overview Storage Overview

4 Contents EMC VNX series VNX FAST Cache (optional) VNX FAST VP (optional) Backup and recovery Overview EMC Avamar Other technologies Overview EMC VFCache (optional) Chapter 4 Solution Architecture Overview 35 Solution overview Solution architecture Overview Architecture for up to 125 virtual machines Architecture for up to 250 virtual machines Key components Hardware resources Software resources Server configuration guidelines Overview VMware vsphere memory virtualization for VSPEX Memory configuration guidelines Network configuration guidelines Overview Enable jumbo frames Link aggregation Storage configuration guidelines Overview VMware vsphere storage virtualization for VSPEX Storage layout for 125 virtual machines Storage layout for 250 virtual machines High availability and failover Overview Virtualization layer Compute layer Network layer Storage layer Backup and recovery configuration guidelines Overview

5 Contents Backup characteristics Backup layout Sizing guidelines Reference workload Overview Defining the reference workload Applying the reference workload Overview Example 1: Custom-built application Example 2: Point of scale system Example 3: Web server Example 4: Decision-support database Summary of examples Implementing the reference architectures Overview Resource types CPU resources Memory resources Network resources Storage resources Implementation summary Quick assessment Overview CPU requirements Memory requirements Storage performance requirements I/O operations per second (IOPS) I/O size I/O latency Storage capacity requirements Determining Equivalent Reference Virtual Machines Fine tuning hardware resources Chapter 5 VSPEX Configuration Guidelines 73 Configuration overview Deployment process Pre-deployment tasks Overview Deployment prerequisites Customer configuration data

6 Contents Prepare switches, connect network, and configure switches Overview Prepare network switches Configure infrastructure network Configure VLANs Complete network cabling Prepare and configure storage array VNX configuration Install and configure vsphere infrastructure Overview Install ESXi Configure ESXi networking Jumbo frames Connect VMware datastores Plan virtual machine memory allocations Install and configure SQL server database Overview Create a virtual machine for Microsoft SQL server Install Microsoft Windows on the virtual machine Install SQL server Configure database for VMware vcenter Configure database for VMware Update Manager Install and configure VMware vcenter server Overview Create the vcenter host virtual machine Install vcenter guest OS Create vcenter ODBC connections Install vcenter server Apply vsphere license keys Deploy the VNX VAAI for NFS plug-in Install the EMC VSI plug-in Summary Chapter 6 Validating the Solution 99 Overview Post-install checklist Deploy and test a single virtual server Verify the redundancy of the solution components

7 Contents Appendix A Bills of Materials 103 Bill of materials Appendix B Customer Configuration Data Sheet 107 Customer configuration data sheet Appendix C References 111 References EMC documentation Other documentation Appendix D About VSPEX 113 About VSPEX


9 Figures
Figure 1. Private Cloud components
Figure 2. Compute layer flexibility
Figure 3. Example of highly-available network design
Figure 4. Logical architecture for 125 virtual machines
Figure 5. Logical architecture for 250 virtual machines
Figure 6. Hypervisor memory consumption
Figure 7. Required networks
Figure 8. VMware virtual disk types
Figure 9. Storage layout for 125 virtual machines
Figure 10. Storage layout for 250 virtual machines
Figure 11. High Availability at the virtualization layer
Figure 12. Redundant power supplies
Figure 13. Network layer High Availability (VNX)
Figure 14. VNX series High Availability
Figure 15. Resource pool flexibility
Figure 16. Required resource from the reference virtual machine pool
Figure 17. Aggregate resource requirements from the referenced virtual machine pool
Figure 18. Customizing server resources
Figure 19. Sample Ethernet network architecture
Figure 20. Direct Writes Enabled checkbox
Figure 21. Storage System Properties dialog box
Figure 22. Create FAST Cache dialog box
Figure 23. Advanced tab in the Create Storage Pool dialog
Figure 24. Advanced tab in the Storage Pool Properties dialog
Figure 25. Storage Pool Properties dialog box
Figure 26. Manage Auto-Tiering dialog box
Figure 27. LUN Properties dialog box
Figure 28. Virtual machine memory settings


11 Tables
Table 1. VNX customer benefits
Table 2. Solution hardware
Table 3. Solution software
Table 4. Hardware resources for compute
Table 5. Hardware resources for network
Table 6. Hardware resources for storage
Table 7. Profile characteristics
Table 8. Virtual machine characteristics
Table 9. Blank worksheet row
Table 10. Reference Virtual Machine resources
Table 11. Example worksheet row
Table 12. Example applications
Table 13. Server resource component totals
Table 14. Deployment process overview
Table 15. Tasks for pre-deployment
Table 16. Deployment prerequisites checklist
Table 17. Tasks for switch and network configuration
Table 18. Tasks for storage configuration
Table 19. Tasks for server installation
Table 20. Tasks for SQL server database setup
Table 21. Tasks for vcenter configuration
Table 22. Tasks for testing the installation
Table 23. List of components used in the VSPEX solution for 125 virtual machines
Table 24. List of components used in the VSPEX solution for 250 virtual machines
Table 25. Common server information
Table 26. ESXi server information
Table 27. Array information
Table 28. Network infrastructure information
Table 29. VLAN information
Table 30. Service accounts


13 Chapter 1 Executive Summary This chapter presents the following topics: Introduction Target audience Document purpose Business needs

14 Executive Summary

Introduction
VSPEX validated and modular architectures are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision about the hypervisor, compute, and networking layers. VSPEX helps to reduce virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.

Target audience
The readers of this document are expected to have the necessary training and background to install and configure VMware vsphere, EMC VNX series storage systems, and associated infrastructure as required by this implementation. External references are provided where applicable, and the readers should be familiar with these documents. Readers are also expected to be familiar with the infrastructure and database security policies of the customer installation. Users focused on selling and sizing a VMware Private Cloud infrastructure should pay particular attention to the first four chapters of this document. After purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose
This document is intended to be a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meets or exceeds the stated minimums. This document is an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy the system.
The VSPEX Private Cloud architecture provides the customer with a modern system capable of hosting a large number of virtual machines at a consistent performance level. This solution runs on a VMware vsphere virtualization layer backed by highly available VNX family storage. The compute and network components, which are defined by the VSPEX Partners, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment. The 125 and 250 virtual machine environments discussed here are based on a defined reference workload. Since not every virtual machine has the same requirements, this document contains methods and guidance to adjust your system to be cost effective when deployed. For smaller environments, solutions for up to 100 virtual machines based on the EMC VNXe series are described in EMC VSPEX Private Cloud: VMware vsphere 5.1 for up to 100 Virtual Machines.

15 Executive Summary
A Private Cloud architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, there are validation tests to ensure that your system is running properly. Following the instructions in this document ensures an efficient and painless journey to the cloud.

Business needs
VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision about the hypervisor, server, and networking layers. VSPEX solutions accelerate your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.
Business applications are moving into consolidated compute, network, and storage environments. EMC VSPEX Private Cloud using VMware reduces the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design and implementation options. Administration is unified, while process separation can be adequately controlled and monitored.
The following are the business needs for the VSPEX Private Cloud for VMware architectures:
Providing an end-to-end virtualization solution to utilize the capabilities of the unified infrastructure components.
Providing a VSPEX Private Cloud solution for VMware for efficiently virtualizing up to 250 virtual machines for varied customer use cases.
Providing a reliable, flexible, and scalable reference design.


17 Chapter 2 Solution Overview This chapter presents the following topics: Introduction Virtualization Compute Network Storage

18 Solution Overview

Introduction
The EMC VSPEX Private Cloud for VMware vsphere 5.1 provides a complete system architecture capable of supporting up to 250 virtual machines with a redundant server and network topology and highly available storage. The core components that make up this particular solution are virtualization, storage, server compute, and networking.

Virtualization
VMware vsphere is the leading virtualization platform in the industry. For years, it has provided flexibility and cost savings to end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core VMware vsphere components are the VMware vsphere Hypervisor and the VMware vcenter Server for system management. The VMware hypervisor runs on a dedicated server and allows multiple operating systems to run on the system at one time as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration. The clustered configurations are then managed as a larger resource pool through the vcenter product, which allows dynamic allocation of CPU, memory, and storage across the cluster. Features like vmotion, which allows a virtual machine to move between different servers with no disruption to the operating system, and Distributed Resource Scheduler (DRS), which performs vmotions automatically to balance load, make vsphere a solid business choice. With the release of vsphere 5.1, a VMware-virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual RAM.

Compute
VSPEX provides the flexibility to design and implement your choice of server components. The infrastructure must conform to the following attributes:
Sufficient CPU cores and RAM to support the required number and types of virtual machines
Sufficient network connections to enable redundant connectivity to the system switches
Excess capacity to withstand a server failure and failover in the environment

19 Solution Overview

Network
VSPEX provides the flexibility to design and implement the customer's choice of network components. The infrastructure must conform to the following attributes:
Redundant network links for the hosts, switches, and storage
Support for link aggregation
Traffic isolation based on industry-accepted best practices

Storage
The EMC VNX storage family is the leading shared storage platform in the industry. VNX provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation. VNX storage includes the following components, sized for the stated reference architecture workload:
Host adapter ports: Provide host connectivity via fabric to the array.
Data Movers: Front-end appliances that provide file services to hosts (required only when CIFS/SMB or NFS file services are provided).
Storage processors: The compute components of the storage array, used for all aspects of data moving into, out of, and between arrays.
Disk drives: Disk spindles that contain the host or application data, and their enclosures.
The 125 and 250 virtual machine VMware Private Cloud solutions described in this document are based on the VNX5300 and VNX5500 storage arrays, respectively. The VNX5300 can support a maximum of 125 drives, and the VNX5500 can host up to 250 drives.
The EMC VNX series supports a wide range of business-class features ideal for the private cloud environment, including:
Fully Automated Storage Tiering for Virtual Pools (FAST VP)
FAST Cache
Data deduplication
Thin Provisioning
Replication
Snapshots/Checkpoints
File-Level Retention
Quota Management


21 Chapter 3 Solution Technology Overview This chapter presents the following topics: Overview Summary of key components Virtualization Compute Network Storage Backup and recovery Other technologies

22 Solution Technology Overview

Overview
This solution uses the EMC VNX series and VMware vsphere 5.1 to provide storage and server hardware consolidation in a Private Cloud. The new virtualized infrastructure is centrally managed to provide efficient deployment and management of a scalable number of virtual machines and associated shared storage. Figure 1 depicts the solution components.
Figure 1. Private Cloud components
The components are described in more detail in the following sections.

23 Solution Technology Overview Summary of key components This section describes the key components of this solution. Virtualization The virtualization layer enables the physical implementation of resources to be decoupled from the applications that use them. In other words, the application view of the available resources is no longer directly tied to the hardware. This enables many key features in the Private Cloud concept. Compute The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the customer to implement the solution by using any server hardware that meets these requirements. Network The network layer connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables the customer to implement the solution by using any network hardware that meets these requirements. Storage The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the Private Cloud can be implemented. The EMC VNX storage family used in this solution provides high-performance data storage while maintaining high availability. Backup and recovery The optional backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable. Solution architecture provides details on all the components that make up the reference architecture. 23

24 Solution Technology Overview Virtualization Overview VMware vsphere 5.1 The virtualization layer is a key component of any Server Virtualization or Private Cloud solution. It enables the application resource requirements to be decoupled from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and the physical capability of the system to change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware. VMware vsphere 5.1 transforms the physical resources of a computer by virtualizing the CPU, RAM, hard disk, and network controller. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications just like physical computers. The high-availability features of VMware vsphere 5.1 such as vmotion and Storage vmotion enable seamless migration of virtual machines and stored files from one vsphere server to another, or from one data storage area to another, with minimal or no performance impact. Coupled with vsphere DRS and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources. VMware vcenter VMware vcenter TM is a centralized management platform for the VMware Virtual Infrastructure. It provides administrators with a single interface for all aspects of monitoring, managing, and maintaining the virtual infrastructure, which can be accessed from multiple devices. VMware vcenter also manages some advanced features of the VMware virtual infrastructure such as VMware vsphere High Availability and Distributed Resource Scheduling (DRS), along with vmotion and Update Manager. VMware vsphere High Availability The VMware vsphere High Availability feature enables the virtualization layer to automatically restart virtual machines in various failure conditions. Note If the virtual machine operating system has an error, the virtual machine can be automatically restarted on the same hardware. If the physical hardware has an error, the impacted virtual machines can be automatically restarted on other servers in the cluster. In order to restart virtual machines on different hardware, the servers need to have available resources. Compute provides detailed information to enable this function. With VMware vsphere High Availability, you can configure policies to determine which machines are automatically restarted, and under what conditions these operations should be attempted. 24

25 Solution Technology Overview EMC Virtual Storage Integrator for VMware EMC Virtual Storage Integrator (VSI) for VMware vsphere is a plug-in for the vsphere client that provides a single management interface for EMC storage within the vsphere environment. Features can be added and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience, which enables new features to be introduced rapidly in response to customer requirements. The following features are used during validation testing: Storage Viewer (SV) Extends the vsphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware vsphere hosts and virtual machines. SV presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vsphere client views. Unified Storage Management Simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision new Network File System (NFS) and Virtual Machine File System (VMFS) datastores, and RDM volumes seamlessly within vsphere client. Refer to the EMC VSI for VMware vsphere product guides on EMC Online Support for more information. VNX VMware vstorage API for Array Integration support Hardware acceleration with VMware vstorage API for Array Integration (VAAI) is a storage enhancement in vsphere 5.1 that enables vsphere to offload specific storage operations to compatible storage hardware such as the VNX series platforms. With the assistance of storage hardware, vsphere performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth. Compute Overview The choice of a server platform for an EMC VSPEX infrastructure is not only based on the technical requirements of the environment, but on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a given number of servers with a specific set of requirements, VSPEX documents minimum requirements for the number of processor cores, and the amount of RAM. This can be implemented with two or twenty servers, and still be considered the same VSPEX solution. 25

26 Solution Technology Overview In the example shown in Figure 2, the compute layer requirements for a given implementation are 25 processor cores, and 200 GB of RAM. One customer might want to implement this by using white-box servers containing 16 processor cores, and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM. Figure 2. Compute layer flexibility The first customer needs four of the servers they choose, while the other customer needs two. Note To enable high availability at the compute layer, each customer needs one additional server to make sure that the system has enough capability to maintain business operations when a server fails. 26
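This arithmetic is driven by whichever resource dimension runs out first and is easy to capture in a short sketch. The Python below is illustrative only; the function name and the HA spare parameter are assumptions, not part of any VSPEX tooling.

```python
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_per_server_gb, ha_spare=1):
    """Servers needed to cover a VSPEX compute requirement, plus an HA spare."""
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_per_server_gb)
    return max(by_cores, by_ram) + ha_spare

# Figure 2 example: 25 processor cores and 200 GB of RAM are required.
print(servers_needed(25, 200, 16, 64))    # white-box servers: 4 + 1 HA spare = 5
print(servers_needed(25, 200, 20, 144))   # higher-end servers: 2 + 1 HA spare = 3
```

Running it with the Figure 2 numbers reproduces the four-server and two-server results above, plus the additional server recommended for high availability.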

27 Solution Technology Overview
The following best practices should be used in the compute layer:
Use a number of identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
If you are implementing hypervisor-layer high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This enables the implementation of minimal-downtime upgrades and tolerance for single unit failures.
Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be flexible to meet your specific needs. Make sure that sufficient processor cores and RAM per core are provided to meet the needs of the target environment.

28 Solution Technology Overview Network Overview The infrastructure network requires redundant network links for each vsphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists, or is being deployed alongside other components of the solution. An example of this highly available network topology is depicted in Figure 3. Figure 3. Example of highly available network design This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security. EMC unified storage platforms provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on 28

29 Solution Technology Overview
the Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.

Storage
Overview
The storage layer is also a key component of any cloud infrastructure solution, serving the data generated by applications and operating systems in datacenter storage processing systems. A well-designed storage layer increases storage efficiency and management flexibility, and reduces total cost of ownership. In this VSPEX solution, EMC VNX series arrays are used to provide virtualization at the storage layer.

EMC VNX series
The EMC VNX family is optimized for virtual applications; it delivers industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises. The VNX series is powered by Intel Xeon processors for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. It is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises. Table 1 shows the customer benefits provided by the VNX series.
Table 1. VNX customer benefits
Next-generation unified storage, optimized for virtualized applications
Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies
High availability, designed to deliver five 9s availability
Automated tiering with FAST VP (Fully Automated Storage Tiering for Virtual Pools) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously
Simplified management with EMC Unisphere for a single management interface for all NAS, SAN, and replication needs
Up to three times improvement in performance with the latest Intel Xeon multi-core processor technology, optimized for Flash
Different software suites and packs are also available for the VNX series, which provide multiple features for enhanced protection and performance:
Software Suites
FAST Suite: Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.
Local Protection Suite: Practices safe data protection and repurposing.

30 Solution Technology Overview
Remote Protection Suite: Protects data against localized failures, outages, and disasters.
Application Protection Suite: Automates application copies and proves compliance.
Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.
Software Packs
Total Efficiency Pack: Includes all five software suites.
Total Protection Pack: Includes the local, remote, and application protection suites.

VNX FAST Cache (optional)
VNX FAST Cache, a part of the VNX FAST Suite, enables flash drives to be used as an expanded cache layer for the array. FAST Cache is an array-wide, non-disruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments, and subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives, which dramatically improves the response times for the active data and reduces data hot spots that can occur within a LUN.

VNX FAST VP (optional)
VNX FAST VP, a part of the VNX FAST Suite, can automatically tier data across multiple types of drives to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation.

31 Solution Technology Overview

Backup and recovery
Overview
Backup and recovery is another important component in this VSPEX solution. It provides data protection by backing up data files or volumes on a defined schedule, and restoring data from backup for recovery after a disaster. This VSPEX solution uses EMC Avamar for up to 250 virtual machines.

EMC Avamar
EMC Avamar data deduplication technology seamlessly integrates into virtual environments, providing rapid backup and restoration capabilities. Avamar's deduplication results in less data transmitted across the network, and greatly reduces the amount of data being backed up and stored, to achieve storage, bandwidth, and operational savings.
The following are two of the most common recovery requests made to backup administrators:
File-level recovery: Object-level recoveries account for the vast majority of user support requests. Common actions requiring file-level recovery are individual users deleting files, applications requiring recoveries, and batch process-related erasures.
System recovery: Although complete system recovery requests are less frequent than those for file-level recovery, this bare-metal restore capability is vital to the enterprise. Some common root causes for full system recovery requests are viral infestation, registry corruption, or unidentifiable unrecoverable issues.
Avamar's functionality in conjunction with VMware implementations adds new capabilities for backup and recovery in both of these scenarios. Key VMware capabilities such as vstorage API integration and changed block tracking (CBT) enable the Avamar software to protect the virtual environment more efficiently. Leveraging CBT for both backup and recovery with virtual proxy server pools minimizes management needs. Coupling that with Data Domain as the storage platform for image data, this solution enables the most efficient integration with two of the industry-leading next-generation backup appliances.

32 Solution Technology Overview Other technologies Overview EMC VFCache (optional) In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to the technologies listed below. EMC VFCache is a server Flash caching solution that reduces latency and increases throughput to improve application performance by using intelligent caching software and PCIe Flash technology. Server-side Flash caching for maximum speed VFCache performs the following functions to improve system performance: Caches the most frequently referenced data on the server-based PCIe card to put the data closer to the application. Automatically adapts to changing workloads by determining which data is most frequently referenced and promoting it to the server Flash card. This means that the hottest data (most active data) automatically resides on the PCIe card in the server for faster access. Offloads the read traffic from the storage array, which allocates greater processing power to other applications. While one application is accelerated with VFCache, the array performance for other applications is maintained or slightly enhanced. Write-through caching to the array for total protection VFCache accelerates reads and protects data by using a write-through cache to the storage to deliver persistent high availability, integrity, and disaster recovery. Application agnostic VFCache is transparent to applications, so no rewriting, retesting, or recertification is required to deploy VFCache in the environment. Integration with vsphere VFCache enhances both virtualized and physical environments. Integration with the VSI plug-in to VMware vsphere vcenter simplifies the management and monitoring of VFCache. Minimum impact on system resources Unlike other caching solutions on the market, VFCache does not require a significant amount of memory or CPU cycles, as all Flash and wear-leveling management is done on the PCIe card without using server resources. Unlike other PCIe solutions, there is no significant overhead from using VFCache on server resources. VFCache creates the most efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments. 32

33 Solution Technology Overview

VFCache active/passive clustering support
The configuration of VFCache clustering scripts ensures that stale data is never retrieved. The scripts use cluster management events to trigger a mechanism that purges the cache. The VFCache-enabled active/passive cluster ensures data integrity and accelerates application performance.

VFCache performance considerations
The following are the VFCache performance considerations:
On a write request, VFCache first writes to the array, then to the cache, and then completes the application I/O.
On a read request, VFCache satisfies the request with cached data, or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be on the order of milliseconds, so the array limits how fast the cache can work.
As the number of writes increases, VFCache performance decreases. VFCache is most effective for workloads with a read ratio of 70 percent or more and small, random I/O (8 KB is ideal). I/O larger than 128 KB is not cached in VFCache 1.5.
Note: For more information, refer to the VFCache Installation and Administration Guide.
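To see why read-heavy workloads benefit most from a write-through server cache, a simplified latency model helps. This is a rough sketch with assumed placeholder latencies, not measured VFCache figures.

```python
def avg_io_latency_ms(read_ratio, hit_ratio, flash_ms=0.1, array_ms=5.0):
    """Rough average I/O latency for a write-through server-side cache.

    Read hits are served from flash; read misses and all writes see array
    latency (writes are written through). Latencies are placeholders.
    """
    read_lat = hit_ratio * flash_ms + (1 - hit_ratio) * array_ms
    write_lat = array_ms
    return read_ratio * read_lat + (1 - read_ratio) * write_lat

# 70 percent reads with a warm cache vs. a write-heavy workload
print(round(avg_io_latency_ms(read_ratio=0.7, hit_ratio=0.9), 2))  # ~1.91 ms
print(round(avg_io_latency_ms(read_ratio=0.3, hit_ratio=0.9), 2))  # ~3.68 ms
```

The higher the share of writes, the more the array latency dominates, which matches the guidance above.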


35 Chapter 4 Solution Architecture Overview This chapter presents the following topics: Solution overview Solution architecture Server configuration guidelines Network configuration guidelines Storage configuration guidelines High availability and failover Backup and recovery configuration guidelines Sizing guidelines Reference workload Applying the reference workload

36 Solution Architecture Overview Solution overview Solution architecture VSPEX Proven Infrastructure solutions are built with proven best-of-breed technologies to create a complete virtualization solution that enables you to make an informed decision when choosing and sizing the hypervisor, compute and networking layers. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC. VSPEX accelerates your IT Transformation to cloud-based computing by enabling faster deployment, more choice, higher efficiency, and lower risk. This section is intended to be a comprehensive guide to the major aspects of this solution. Server capacity is specified in generic terms for required minimums of CPU, memory, and network resources; the customer is free to select the server and networking hardware that meet or exceed the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines that have been validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a pre-defined idea of what a virtual machine should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. Overview The VSPEX solution for VMware vsphere Private Cloud with EMC VNX is validated at two different points of scale, one configuration with up to 125 virtual machines, and one configuration with up to 250 virtual machines. The defined configurations form the basis of creating a custom solution. Note VSPEX uses the concept of a Reference Workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This document describes the process in Applying the reference workload. 36
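The reference workload concept can be applied mechanically: measure each candidate server in multiples of a reference virtual machine and let the most constraining dimension set the equivalence. The sketch below only illustrates the method; the reference values shown are assumed placeholders and should be replaced with the characteristics defined later in this document.

```python
import math

# Illustrative reference virtual machine -- replace with the values from the
# "Defining the reference workload" section of this document.
REFERENCE_VM = {"vcpus": 1, "ram_gb": 2, "iops": 25, "capacity_gb": 100}

def equivalent_reference_vms(vcpus, ram_gb, iops, capacity_gb):
    """Express one application server as a count of reference virtual machines."""
    ratios = (
        vcpus / REFERENCE_VM["vcpus"],
        ram_gb / REFERENCE_VM["ram_gb"],
        iops / REFERENCE_VM["iops"],
        capacity_gb / REFERENCE_VM["capacity_gb"],
    )
    # The most constraining resource dimension sets the equivalence.
    return math.ceil(max(ratios))

# Example: 4 vCPUs, 16 GB RAM, 200 IOPS, 200 GB of disk -> 8 reference VMs
# (RAM and IOPS are both 8x the reference and constrain the result).
print(equivalent_reference_vms(4, 16, 200, 200))
```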

37 Solution Architecture Overview Architecture for up to 125 virtual machines The architecture in Figure 4 characterizes the infrastructure validated for support of up to 125 virtual machines. Figure 4. Logical architecture for 125 virtual machines Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks if sufficient bandwidth and redundancy are provided to meet the listed requirements. 37

38 Solution Architecture Overview Architecture for up to 250 virtual machines The architecture in Figure 5 characterizes the infrastructure validated for support of up to 250 virtual machines. Figure 5. Logical architecture for 250 virtual machines Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks if sufficient bandwidth and redundancy are provided to meet the listed requirements. Key components VMware vsphere 5.1 Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 2 on page 40. vsphere 5.1 provides highly available infrastructure through such features as: vmotion Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption. Storage vmotion Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption. vsphere High Availability (HA) Detects and provides rapid recovery for a failed virtual machine in a cluster. Distributed Resource Scheduler (DRS) Provides load balancing of computing capacity in a cluster. Storage Distributed Resource Scheduler (SDRS) Provides load balancing across multiple datastores, based on space usage and I/O latency. VMware vcenter Server 5 Provides a scalable and extensible platform that forms the foundation for virtualization management for the VMware vsphere 5.1 cluster. All vsphere hosts and their virtual machines are managed from vcenter. 38

39 Solution Architecture Overview
VSI for VMware vsphere: EMC VSI for VMware vsphere is a plug-in to the vsphere client that provides storage management for EMC arrays directly from the client. VSI is highly customizable and helps provide a unified management interface.
SQL Server: VMware vcenter Server requires a database service to store configuration and monitoring details. A Microsoft SQL Server 2008 R2 instance is used for this purpose.
DNS Server: DNS services are required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows Server 2012 server is used for this purpose.
Active Directory Server: Active Directory services are required for the various solution components to function properly. The Microsoft AD Directory Service running on a Windows Server 2012 server is used for this purpose.
Shared Infrastructure: DNS and authentication/authorization services like Microsoft Active Directory can be provided via existing infrastructure or set up as part of the new virtual infrastructure.
IP/Storage Network: All network traffic is carried over a standard Ethernet network with redundant cabling and switching. User and management traffic is carried over a shared network, while storage traffic is carried over a private, non-routable subnet.
EMC VNX5300 array: Provides storage by presenting NFS datastores to vsphere hosts for up to 125 virtual machines.
EMC VNX5500 array: Provides storage by presenting NFS datastores to vsphere hosts for up to 250 virtual machines.
VNX family storage arrays include the following components:
Storage processors (SPs) support block data with UltraFlex I/O technology that supports the Fibre Channel, iscsi, and FCoE protocols. The SPs provide access for all external hosts, and for the file side of the VNX array.
The disk-processor enclosure (DPE) is 2U in size and houses each storage processor as well as the first tray of disks. This form factor is used in the VNX5300 and VNX5500.
X-Blades (or Data Movers) access data from the back end and provide host access using the same UltraFlex I/O technology that supports the NFS, CIFS, MPFS, and pNFS protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists.
The Data Mover enclosure (DME) is 2U in size and houses the Data Movers (X-Blades). The DME is similar in form to the DPE, and is used on all VNX models that support file.
Standby power supplies are 1U in size and provide enough power to each storage processor to ensure that any data in flight is de-staged to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes are reconciled and persisted.
Control Stations are 1U in size and provide management functions to the file-side components referred to as X-Blades. The Control Station is responsible for X-Blade failover. The Control Station may optionally be configured with a matching secondary Control Station to ensure redundancy on the VNX array.
Disk-array enclosures (DAE) house the drives used in the array.

40 Solution Architecture Overview

Hardware resources
Table 2 lists the hardware used in this solution.
Table 2. Solution hardware
VMware vsphere servers
CPU: One vcpu per virtual machine; four vcpus per physical core
Memory: 2 GB RAM per virtual machine; 250 GB RAM across all servers for the 125-virtual-machine configuration; 500 GB RAM across all servers for the 250-virtual-machine configuration; 2 GB RAM reservation per vsphere host
Network: Six 1 GbE NICs per server
Notes: Configured as a single vsphere cluster.
Note: To implement VMware vsphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have one additional server.
Network infrastructure
Minimum switching capacity: Two physical switches; six 1 GbE ports per vsphere server; one 1 GbE port per control station for management; four 1 GbE ports per data mover for data
Notes: Redundant LAN configuration

41 Solution Architecture Overview
Storage
Common: Two Data Movers (active/standby); four 1 GbE interfaces per data mover; one 1 GbE interface per control station for management
For 125 virtual machines: EMC VNX5300; seventy-five 300 GB 15k rpm 3.5-inch SAS drives; three 300 GB 15k rpm 3.5-inch SAS drives as hot spares
For 250 virtual machines: EMC VNX5500; one hundred fifty 300 GB 15k rpm 3.5-inch SAS drives; six 300 GB 15k rpm 3.5-inch SAS drives as hot spares
Notes: VNX shared storage
Shared Infrastructure
In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document. If this is being implemented without existing infrastructure, a minimum number of additional servers is required: two physical servers; 16 GB RAM per server; four processor cores per server; two 1 GbE ports per server
Notes: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.
EMC next-generation backup
Avamar: One Gen4 utility node; one Gen4 3.9 TB spare node; three Gen4 3.9 TB storage nodes for 125 virtual machines or five Gen4 3.9 TB storage nodes for 250 virtual machines
Data Domain: One Data Domain DD640 for 125 virtual machines or one Data Domain DD670 for 250 virtual machines; one ES30 15x1 TB HDD shelf for 125 virtual machines or two ES30 15x1 TB HDD shelves for 250 virtual machines

42 Solution Architecture Overview
Note: The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

Software resources
Table 3 lists the software used in this solution.
Table 3. Solution software
VMware vsphere:
vsphere server: 5.1 Enterprise Edition
vcenter Server: 5.1 Standard Edition
Operating system for vcenter Server: Windows Server 2008 R2 SP1 Standard Edition
Microsoft SQL Server: Version 2008 R2 Standard Edition
EMC VNX:
VNX OE for file: Release
VNX OE for block: Release 32 ( )
EMC VSI for VMware vsphere: Unified Storage Management
EMC VSI for VMware vsphere: Storage Viewer
Next-generation backup:
Avamar: 6.1 SP1
Data Domain OS: 5.2
Virtual machines (used for validation, not required for deployment):
Base operating system: Microsoft Windows Server 2012 Datacenter Edition

Server configuration guidelines
Overview
When designing and ordering the compute/server layer of the VSPEX solution described below, several factors may alter the final purchase. From a virtualization perspective, if a system's workload is well understood, features like Memory Ballooning and Transparent Page Sharing can reduce the aggregate memory requirement. If the virtual machine pool does not have a high level of peak or concurrent usage, the number of vcpus may be reduced. Conversely, if the applications being deployed are highly computational in nature, the number of CPUs and memory purchased may need to be increased.
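As a quick planning aid, the per-virtual-machine ratios used in the hardware tables (one vcpu per virtual machine, four vcpus per physical core, 2 GB of RAM per virtual machine, plus a 2 GB reservation per vsphere host) can be rolled up into minimum totals. The sketch below is illustrative; the host count used for the reservations is an assumption, not a stated requirement.

```python
import math

def compute_minimums(vm_count, vcpus_per_vm=1, vcpus_per_core=4,
                     ram_per_vm_gb=2, hosts=4, host_reservation_gb=2):
    """Minimum physical cores and RAM implied by the per-VM ratios above.

    The host count (used only for the 2 GB per-host reservation) is an
    illustrative assumption, not a requirement of the solution.
    """
    cores = math.ceil(vm_count * vcpus_per_vm / vcpus_per_core)
    ram_gb = vm_count * ram_per_vm_gb + hosts * host_reservation_gb
    return cores, ram_gb

print(compute_minimums(125))           # (32, 258): ~250 GB of VM RAM plus reservations
print(compute_minimums(250, hosts=8))  # (63, 516)
```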

43 Solution Architecture Overview
Table 4 lists the hardware resources that are used for compute.
Table 4. Hardware resources for compute
VMware vsphere servers
CPU: One vcpu per virtual machine; four vcpus per physical core
Memory: 2 GB RAM per virtual machine; 250 GB RAM across all servers for 125 virtual machines; 500 GB RAM across all servers for 250 virtual machines; 2 GB RAM reservation per vsphere host
Network: Six 1 GbE NICs per server
Notes: Configured as a single vsphere cluster.
Note: To implement VMware vsphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have one additional server.
Note: The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

VMware vsphere memory virtualization for VSPEX
VMware vsphere 5.1 has a number of advanced features that help to maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section describes some of these features and the items you need to consider when using them in the environment.

44 Solution Architecture Overview In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 6. Figure 6. Hypervisor memory consumption This basic concept is enhanced by understanding the technologies presented in this section. Memory compression Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vsphere host. Using sophisticated techniques, such as ballooning and transparent page sharing, vsphere is able to handle memory over-commitment without any performance degradation. However, if more memory 44

45 Solution Architecture Overview than is present on the server is being actively used, vsphere might resort to swapping out portions of the memory of a virtual machine. Non-Uniform Memory Access (NUMA) vsphere uses a NUMA load-balancer to assign a home node to a virtual machine. Because memory for the virtual machine is allocated from the home node, memory access is local and provides the best performance possible. Applications that do not directly support NUMA also benefit from this feature. Transparent page sharing Virtual machines running similar operating systems and applications typically have similar sets of memory content. Page sharing enables the hypervisor to reclaim any redundant copies of memory pages and keep only one copy, which frees up the total host memory consumption. If most of your application virtual machines run the same operating system and application binaries, total memory usage can be reduced to increase consolidation ratios. Memory ballooning By using a balloon driver loaded in the guest operating system, the hypervisor can reclaim host physical memory if memory resources are under contention. This is done with little to no impact to the performance of the application. Memory configuration guidelines This section provides guidelines for allocating memory to virtual machines. The guidelines outlined here take into account vsphere memory overhead and the virtual machine memory settings. vsphere memory overhead There is some associated overhead for the virtualization of memory resources. The memory space overhead has two components: The fixed system overhead for the VMkernel. Additional overhead for each virtual machine. Memory overhead depends on the number of virtual CPUs and configured memory for the guest operating system. Allocating memory to virtual machines The proper sizing for virtual machine memory in VSPEX architectures is based on many factors. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments for optimal results. 45
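To make the memory guidance concrete, the sketch below estimates the raw physical memory demand of one host before any ballooning or page sharing. The per-virtual-machine and hypervisor overhead constants are illustrative assumptions; actual vsphere overhead varies with the number of vcpus and the configured memory.

```python
def host_memory_demand_gb(vms_per_host, vm_ram_gb=2, per_vm_overhead_gb=0.1,
                          hypervisor_overhead_gb=2):
    """Estimated physical memory needed on one host before over-commitment.

    The overhead constants are illustrative assumptions; actual VMkernel and
    per-VM overhead depend on vCPU count and configured memory.
    """
    return vms_per_host * (vm_ram_gb + per_vm_overhead_gb) + hypervisor_overhead_gb

# 32 virtual machines at 2 GB each on one host, with the assumed overheads
print(host_memory_demand_gb(32))  # 69.2 GB before ballooning or page sharing
```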

46 Solution Architecture Overview Network configuration guidelines Overview This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines outlined here take into account Jumbo Frames, VLANs, and Link Aggregation Control Protocol (LACP) on EMC unified storage. For detailed network resource requirements, refer to Table 5. Table 5. Hardware resources for network Hardware Configuration Notes Network infrastructure Minimum switching capacity: Two physical switches Six 1 GbE ports per vsphere server One 1 GbE port per control station for management Four 1 GbE ports per data mover for data Redundant LAN configuration Note The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled. It is a best practice to isolate network traffic so that the traffic between hosts and storage, hosts and clients, and management traffic all move over isolated networks. In some cases physical isolation may be required for regulatory or policy compliance reasons; but in many cases logical isolation using VLANs is sufficient. This solution calls for a minimum of three VLANs for the following usage: Client access Storage Management 46

47 Solution Architecture Overview Figure 7 depicts the VLANs. Figure 7. Required networks Note Figure 7 demonstrates the network connectivity requirements for a VNX array using 10 GbE connections. A similar topology should be created when using 1 GbE network connections. The client access network is for users of the system, or clients, to communicate with the infrastructure. The Storage Network is used for communication between the compute layer and the storage layer. The Management Network is used for administrators to have a dedicated way to access the management connections on the storage array, network switches, and hosts. Note Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. These additional networks may be implemented if necessary, but they are not required. Enable jumbo frames This solution requires MTU set at 9000 (jumbo frames) for efficient storage and migration traffic. 47
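A quick way to confirm that jumbo frames are configured end to end is a do-not-fragment ping sized to fill the 9000-byte MTU; on an ESXi host the equivalent is vmkping with the -d and -s options. The Python wrapper below assumes Linux ping syntax, and the target address is a placeholder.

```python
import subprocess

def check_jumbo_frames(target_ip, mtu=9000, icmp_overhead=28):
    """Send do-not-fragment pings sized to fill the MTU (Linux ping syntax).

    A 9000-byte MTU leaves 8972 bytes of ICMP payload after the 20-byte IP
    and 8-byte ICMP headers; if any hop is still at MTU 1500 the ping fails.
    """
    payload = mtu - icmp_overhead
    cmd = ["ping", "-M", "do", "-s", str(payload), "-c", "3", target_ip]
    return subprocess.run(cmd).returncode == 0

# Placeholder storage-network address -- replace with a Data Mover interface IP.
print(check_jumbo_frames("192.168.10.50"))
```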

48 Solution Architecture Overview Link aggregation A link aggregation resembles an Ethernet channel, but uses the Link Aggregation Control Protocol (LACP) IEEE 802.3ad standard. The IEEE 802.3ad standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, Link Aggregation Control Protocol (LACP) is configured on VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost in the Ethernet port, the link fails over to another port. All network traffic is distributed across the active links. Storage configuration guidelines Overview This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance. vsphere allows more than one method of utilizing storage when hosting virtual machines. The solutions described below are tested utilizing NFS, and the storage layout described adheres to all current best practices. A customer or architect with related background can make modifications based on their understanding of the system usage and load if required. Table 6 lists the hardware resources that are used for storage. Table 6. Hardware resources for storage Hardware Configuration Notes Storage Common Two Data Movers (active / standby) Four 1 GbE interfaces per data mover One 1GbE interface per control station for management For 125 Virtual Machines EMC VNX5300 Seventy-five 300 GB 15k rpm 3.5-inch SAS drives Three 300 GB 15k rpm 3.5-inch SAS drives as hot spares For 250 Virtual Machines EMC VNX5500 One hundred fifty 300 GB 15k rpm 3.5-inch SAS drives Six 300 GB 15k rpm 3.5-inch SAS drives as hot spares VNX shared storage Note The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled. 48

49 Solution Architecture Overview VMware vsphere storage virtualization for VSPEX VMware ESXi provides host-level storage virtualization. It virtualizes the physical storage and presents the virtualized storage to the virtual machines. A virtual machine stores its operating system, and all other files that are related to the virtual machine activities in a virtual disk. The virtual disk itself is one or more files. VMware uses a virtual SCSI controller to present virtual disks to guest operating system running inside the virtual machines. A datastore is where virtual disks reside. Depending on the type used, it can be either a VMware Virtual Machine File system (VMFS) datastore, or an NFS datastore. An additional option, Raw Device Mapping, allows the virtual infrastructure to connect a physical device directly to a virtual machine. Figure 8. VMware virtual disk types VMFS VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or network storage. Raw Device Mapping VMware also provides a mechanism named Raw Device Mapping (RDM). RDM allows a virtual machine to directly access a volume on the physical storage, and can only be used with Fibre Channel or iscsi. NFS VMware supports using NFS file systems from an external NAS storage system or device as a virtual machine datastore. 49

50 Solution Architecture Overview Storage layout for 125 virtual machines Figure 9 shows the physical disk layout for 125 virtual machines. Figure 9. Storage layout for 125 virtual machines The reference architecture uses the following configuration: Seventy 300 GB SAS disks are allocated to a block-based storage pool. Note System drives are specifically excluded from the pool, and not used for additional storage. If more capacity is required, larger drives may be substituted. To meet the load recommendations, the drives all need to be 15k rpm and the same size. If differing sizes are utilized, storage layout algorithms may give suboptimal results. Three 300 GB SAS disks are configured as hot spares. 50

51 Solution Architecture Overview Optionally, you can configure up to 10 flash drives in the array FAST Cache. LUNs or storage pools where virtual machines reside that have a higher than average I/O requirement can benefit by enabling the FAST Cache feature. These drives are not considered a required part of the solution, and additional licensing may be required in order to use the FAST Suite. If the FAST Suite has been purchased and multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1-GB increments while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation. At least one hot spare disk is allocated for every 30 disks of a given type. At least two NFS shares are allocated to the vsphere cluster from a single storage pool to serve as datastores for the virtual servers. 51

52 Solution Architecture Overview Storage layout for 250 virtual machines Figure 10 shows the physical disk layout for 250 virtual machines. Figure 10. Storage layout for 250 virtual machines 52

53 The reference architecture uses the following configuration: Solution Architecture Overview One hundred forty-five 300 GB SAS disks are allocated to a block-based storage pool. Note System drives are specifically excluded from the pool, and not used for additional storage. If more capacity is required, larger drives may be substituted. To meet the load recommendations, the drives all need to be 15k rpm and the same size. If differing sizes are utilized, storage layout algorithms may give suboptimal results. Six 300 GB SAS disks are configured as hot spares. Optionally, you can configure up to 20 flash drives in the array FAST Cache. These drives are not considered a required part of the solution, and additional licensing may be required in order to use the FAST Suite. If the FAST Suite has been purchased and multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost-efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation. At least one hot spare disk is allocated for every 30 disks of a given type. At least two NFS shares are allocated to the vsphere cluster from each storage pool to serve as datastores for the virtual servers. 53

54 Solution Architecture Overview High availability and failover Overview Virtualization layer This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with little to no impact on business operations. Configure high availability in the virtualization layer, and enable the hypervisor to automatically restart virtual machines that fail. Figure 11 illustrates the hypervisor layer responding to a failure in the compute layer: Figure 11. High Availability at the virtualization layer Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible. Compute layer While the choice of servers to implement in the compute layer is flexible, use enterprise class servers designed for the datacenter. This type of server has redundant power supplies, as shown in Figure 12. These should be connected to separate power distribution units (PDUs) in accordance with your server vendor s best practices. Figure 12. Redundant power supplies 54

55 Solution Architecture Overview Configure high availability in the virtualization layer. This means that the compute layer must be configured with enough resources so that the total number of available resources meets the needs of the environment, even with a server failure, as demonstrated in Figure 11. Network layer The advanced networking features of the VNX family provide protection against network connection failures at the array. Each vsphere host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 13. These connections should be spread across multiple Ethernet switches to guard against component failure in the network. Figure 13. Network layer High Availability (VNX) By ensuring that there are no single points of failure in the network layer, you can ensure that the compute layer is able to access storage, and communicate with users even if a component fails. 55

56 Solution Architecture Overview Storage layer The VNX family is designed for five 9s availability by using redundant components throughout the array. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can be dynamically allocated to replace a failing disk, as shown in Figure 14. Figure 14. VNX series High Availability EMC storage arrays are designed to be highly available by default. When configured according to the directions in their installation guides, no single unit failures result in data loss or unavailability. 56

57 Solution Architecture Overview Backup and recovery configuration guidelines Overview This section provides guidelines to set up backup and recovery for this VSPEX solution. It includes how the backup is characterized, and the backup layout. Backup characteristics The solution is sized with the following application environment profile, as listed in Table 7.
Table 7. Profile characteristics
Number of users: 1250 for 125 virtual machines; 2500 for 250 virtual machines
Number of virtual machines: 125 (20% DB, 80% Unstructured) for the 125 virtual machine solution; 250 (20% DB, 80% Unstructured) for the 250 virtual machine solution
Exchange data: 1.2 TB (1 GB mail box per user) for 125 virtual machines; 2.5 TB (1 GB mail box per user) for 250 virtual machines
SharePoint data: 0.6 TB for 125 virtual machines; 1.25 TB for 250 virtual machines
SQL server: 0.6 TB for 125 virtual machines; 1.25 TB for 250 virtual machines
User data: 6.1 TB (5.0 GB per user) for 125 virtual machines; 25 TB (10.0 GB per user) for 250 virtual machines
Daily Change Rate for the applications: Exchange data 10%; SharePoint data 2%; SQL server 5%; User data 2%
Retention per data types: All DB data, 14 Dailies; User data, 30 Dailies, 4 Weeklies, 1 Monthly 57
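For planning purposes, the per-application capacities and daily change rates in Table 7 can be combined to estimate how much data changes each day. The short Python sketch below is illustrative arithmetic only; the figures come straight from Table 7 for the 250 virtual machine profile, and the total it prints is not an Avamar or Data Domain sizing result.

# Illustrative arithmetic only: combine the Table 7 capacities (TB) and daily
# change rates for the 250 virtual machine profile to estimate changed data.
profile_250 = {
    "Exchange data": (2.5, 0.10),
    "SharePoint data": (1.25, 0.02),
    "SQL server": (1.25, 0.05),
    "User data": (25.0, 0.02),
}

daily_change_tb = sum(size_tb * rate for size_tb, rate in profile_250.values())
print(round(daily_change_tb, 2))  # roughly 0.84 TB of changed data per day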

58 Solution Architecture Overview Backup layout Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this case, the solution is deployed with both Avamar and Data Domain managed as a single solution. This enables users to back up the unstructured user data directly to the Avamar system for simple file level recovery. The database and virtual machine images are managed by the Avamar software, but it is directed to the Data Domain system with the embedded Boost client library. This backup solution unifies the backup process with industry-leading deduplication backup software and storage, and achieves the highest levels of performance and efficiency. Sizing guidelines Reference workload The following sections provide definitions of the reference workload used to size and implement the VSPEX architectures. Guidance is provided on how to correlate those reference workloads to actual customer workloads, and how that may change the end delivery from the server and network perspective. Modifications to the storage definition can be made by adding drives for greater capacity and performance, as well as the addition of features like FAST Cache and FAST VP. The disk layouts have been created to provide support for the appropriate number of virtual machines at the defined performance level and typical operations like snapshots. Decreasing the number of recommended drives or stepping down an array type can result in lower IOPS per virtual machine, and a reduced user experience caused by higher response times. Overview When considering an existing server to move into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, which has been validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a pre-defined idea of what a virtual machine should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. 58

59 Solution Architecture Overview Defining the reference workload To simplify the discussion, we have defined a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can extrapolate which reference architecture to choose. For the VSPEX solutions, the reference workload is defined as a single virtual machine. Table 8 lists the characteristics of this virtual machine.
Table 8. Virtual machine characteristics
Virtual machine operating system: Microsoft Windows Server 2012 Datacenter Edition
Virtual processors per virtual machine: 1
RAM per virtual machine: 2 GB
Available storage capacity per virtual machine: 100 GB
I/O operations per second (IOPS) per virtual machine: 25
I/O pattern: Random
I/O read/write ratio: 2:1
This specification for a virtual machine is not intended to represent any specific application. Rather, it represents a single common point of reference against which other virtual machines can be measured. Applying the reference workload Overview When considering an existing server that will move into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system. The reference architectures create a pool of resources that are sufficient to host a target number of reference virtual machines with the characteristics shown in Table 1 on page 29. The customer virtual machines may not exactly match the specifications above. In that case, define a single specific customer virtual machine as the equivalent of some number of reference virtual machines together, and assume these virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain. 59

60 Solution Architecture Overview Example 1: Custom-built application A small custom-built application server needs to move into this virtual infrastructure. The physical hardware that supports the application is not fully utilized. A careful analysis of the existing application reveals that the application can use one processor, and needs 3 GB of memory to run normally. The I/O workload ranges from 4 IOPS at idle time to a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage. Based on these numbers, the following resources are needed from the resource pool:
CPU resources for one virtual machine
Memory resources for two virtual machines
Storage capacity for one virtual machine
I/Os for one virtual machine
In this example, an appropriate virtual machine uses the resources for two of the reference virtual machines. If the original pool had the resources to provide 125 reference virtual machines, the resources for 123 reference virtual machines remain. Example 2: Point of sale system The database server for a customer's point of sale system needs to move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle. The following are the requirements to virtualize this application:
CPUs of four reference virtual machines
Memory of eight reference virtual machines
Storage of two reference virtual machines
I/Os of eight reference virtual machines
In this case, the one appropriate virtual machine uses the resources of eight reference virtual machines. Implementing this one machine on a pool for 125 reference virtual machines would consume the resources of eight reference virtual machines, and leave resources for 117 reference virtual machines. Example 3: Web server The customer's web server needs to move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle. The following are the requirements to virtualize this application:
CPUs of two reference virtual machines
Memory of four reference virtual machines
Storage of one reference virtual machine
I/Os of two reference virtual machines 60

61 Solution Architecture Overview In this case, the one appropriate virtual machine would use the resources of four reference virtual machines. If this is implemented on a resource pool for 125 reference virtual machines, resources for 121 reference virtual machines remain. Example 4: Decision-support database The database server for a customer's decision-support system needs to move into this virtual infrastructure. It is currently running on a physical system with ten CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle. The following are the requirements to virtualize this application:
CPUs of 10 reference virtual machines
Memory of 32 reference virtual machines
Storage of 52 reference virtual machines
I/Os of 28 reference virtual machines
In this case, the one virtual machine uses the resources of 52 reference virtual machines. If this is implemented on a resource pool for 125 reference virtual machines, resources for 73 reference virtual machines remain. Summary of examples The four examples illustrate the flexibility of the resource pool model. In all four cases, the workloads simply reduce the amount of available resources in the pool. All four examples can be implemented on the same virtual infrastructure with an initial capacity for 125 reference virtual machines, and resources for 59 reference virtual machines would remain in the resource pool as shown in Figure 15. Figure 15. Resource pool flexibility In more advanced cases, there may be tradeoffs between memory and I/O or other relationships where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex, and are outside the scope of the document. Once the change in resource balance has been examined and the new level of requirements is known, these virtual machines can be added to the infrastructure using the method described in the examples. 61

62 Solution Architecture Overview Implementing the reference architectures Overview Resource types The reference architectures require a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are presented as general requirements that are independent of any particular implementation. This section describes some considerations for implementing the requirements. The reference architectures define the hardware requirements for the solution in terms of four basic types of resources: CPU resources Memory resources Network resources Storage resources This section describes the resource types, how they are used in the reference architecture, and key considerations for implementing them in a customer environment. CPU resources The architectures define the number of CPU cores that are required, but not a specific type or configuration. New deployments use recent revisions of common processor technologies. It is assumed that these perform as well as, or better than, the systems used to validate the solution. In any running system, it is important to monitor the utilization of resources and adapt as needed. The reference virtual machine and required hardware resources in the reference architectures assume that there will be no more than four virtual CPUs for each physical processor core (4:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual machines; however, this ratio may not be appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to determine if more resources are required. Memory resources Each virtual server in the reference architecture is defined to have 2 GB of memory. In a virtual environment, it is common to provision virtual machines with more memory than the hypervisor physically has because of budget constraints. The memory overcommitment technique takes advantage of the fact that each virtual machine does not fully utilize the amount of memory allocated to it. To oversubscribe the memory usage to some degree makes business sense. The administrator has the responsibility to proactively monitor the oversubscription rate such that it does not shift the bottleneck away from the server and become a burden to the storage subsystem. If VMware ESXi runs out of memory for the guest operating systems, paging begins to take place, resulting in extra I/O activity going to the vswap files. If the storage subsystem is sized correctly, occasional spikes due to vswap activity may not cause performance issues as transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of vswap activity, more disks need to be added 62

63 Solution Architecture Overview due to the demand for increased performance. Now, it is up to the administrator to decide whether it is more cost effective to add more physical memory to the server, or to increase the amount of storage. With memory modules being a commodity, it is likely less expensive to choose the former option. This solution is validated with statically assigned memory and no over-commitment of memory resources. If memory over-commit is used in a real-world environment, you should regularly monitor the system memory utilization, and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results. Network resources The reference architecture outlines the minimum needs of the system. If additional bandwidth is needed, it is important to add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports, and have the option to add ports using EMC UltraFLEX I/O modules. For reference purposes in the validated environment, EMC assumes that each virtual machine generates 25 I/Os per second with an average size of 8 KB. This means that each virtual machine is generating at least 200 KB/s of traffic on the storage network. For an environment rated for 100 virtual machines, this comes out to a minimum of approximately 20 MB/sec. This is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for: User network traffic Virtual machine migration Administrative and management operations The requirements for each of these vary, depending on how the environment is being used. It is not practical to provide concrete numbers in this context. However, the network described in the reference architecture for each solution should be sufficient to handle average workloads for the above use cases. Regardless of the network traffic requirements, always have at least two physical network connections that are shared for a logical network so that a single link failure does not affect the availability of the system. The network should be designed so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload. Storage resources The reference architectures contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. There are a few factors to consider when examining storage sizing. Specifically, the array has a collection of disks that are assigned to a storage pool. From that storage pool, you can provision datastores to the VMware vsphere Cluster. Each layer has a specific configuration that is defined for the solution and documented in the deployment guide. It is generally acceptable to replace drive types with a type that has more capacity with the same performance characteristics or with ones that have higher performance 63

64 Solution Architecture Overview characteristics and the same capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements. In other cases where there is a need to deviate from the proposed number and type of drives specified, or the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system. Implementation summary The requirements that are stated in the reference architecture are what EMC considers the minimum set of resources to handle the workloads required based on the stated definition of a reference virtual server. In any customer implementation, the load of a system will vary over time as users interact with the system. However, if the customer virtual machines differ significantly from the reference definition, and vary in the same resource group, then you may need to add more of that resource to the system. 64
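To make the consolidation and bandwidth figures above concrete, the short Python sketch below restates two pieces of arithmetic from the CPU resources and Network resources discussions: the 4:1 virtual-CPU-to-physical-core assumption and the 25 IOPS x 8 KB per-virtual-machine storage traffic estimate. The function names and the 250 virtual machine extrapolation are illustrative additions, not figures taken from the reference architecture tables.

import math

VCPUS_PER_CORE = 4   # 4:1 vCPU-to-physical-core ratio assumed above
IOPS_PER_VM = 25     # reference virtual machine I/O rate
IO_SIZE_KB = 8       # reference virtual machine average I/O size

def physical_cores_needed(reference_vms, vcpus_per_vm=1):
    # Minimum physical cores for a pool of reference virtual machines.
    return math.ceil(reference_vms * vcpus_per_vm / VCPUS_PER_CORE)

def storage_traffic_mb_per_sec(reference_vms):
    # Approximate steady-state storage network traffic for the pool.
    return reference_vms * IOPS_PER_VM * IO_SIZE_KB / 1000

print(physical_cores_needed(125))        # 32 cores at a 4:1 ratio
print(storage_traffic_mb_per_sec(100))   # ~20 MB/s, the example used above
print(storage_traffic_mb_per_sec(250))   # ~50 MB/s for the 250-VM pool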

65 Solution Architecture Overview Quick assessment Overview An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations, and help assess the customer environment. First, summarize the applications that are planned for migration into the VSPEX Private Cloud. For each application, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual machines required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as listed in Table 9.
Table 9. Blank worksheet row
Columns: Application; CPU (Virtual CPUs); Memory (GB); IOPS; Capacity (GB); Equivalent Reference Virtual Machines
Rows per application: Resource Requirements; Equivalent Reference Virtual Machines
Fill out the resource requirements for the application. The row requires inputs on four different resources: CPU, Memory, IOPS, and Capacity. CPU requirements Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all of the CPUs that are presented. Use a performance-monitoring tool, such as ESXTop, on vsphere hosts to examine the CPU Utilization counter for each CPU. If they are equivalent, then implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs that are required. In any operation involving performance monitoring, it is a best practice to collect data samples for a period of time that includes all of the operational use cases of the system. Use either the maximum or 95th percentile value of the resource requirements for planning purposes. Memory requirements Server memory plays a key role in ensuring application functionality and performance. Therefore, each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system and monitor the free memory by 65

66 Solution Architecture Overview using a performance-monitoring tool, like VMware ESXTop, to determine if it is being used efficiently. In any operation involving performance monitoring, it is a best practice to collect data samples for a period of time that includes all of the operational use cases of the system. Then use either the maximum or 95th percentile value of the resource requirements for planning purposes. Storage performance requirements I/O operations per second (IOPS) The storage performance requirements for an application are usually the least understood aspect of performance. Three components become important when discussing the I/O performance of a system. The first is the number of requests coming in or IOPS. Equally important is the size of the request, or I/O size -- a request for 4 KB of data is significantly easier and faster than a request for 4 MB of data. That distinction becomes important with the third factor, which is the average I/O response time, or I/O latency. The reference virtual machine calls for 25 I/O operations per second. To monitor this on an existing system use a performance-monitoring tool like VMware ESXTop. ESXTop provides several counters that can help here. The most common are: Physical Disk NFS Volume \Commands/sec Physical Disk NFS Volume \Reads/sec Physical Disk NFS Volume \Writes/sec Physical Disk NFS Volume \ Average Guest MilliSec/Command The reference virtual machine assumes a 2:1 read: write ratio. Use these counters to determine the total number of IOPS, and the approximate ratio of reads to writes for the customer application. I/O size The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of 2 4 KB, 8 KB, 16 KB, 32 KB, and so on are common. The performance counter does a simple average, so it is common to see 11 KB or 15 KB instead of the common I/O sizes. The reference virtual machine assumes an 8 KB I/O size. If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, EMC recommends applying a scaling factor to account for the large I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application is using mostly 32 KB I/O requests, use a factor of four (32 KB / 8 KB = 4). If that application is doing 100 IOPS at 32 KB, the factor indicates to plan for 400 IOPS since the reference virtual machine assumed 8 KB I/O sizes. I/O latency The average I/O response time, or I/O latency, is a measurement of how quickly I/O requests are processed by the storage system. The VSPEX solutions were designed to meet a target average I/O latency of 20 ms. The recommendations in this document should allow the system to continue to meet that target, however it is worthwhile to monitor the system and re-evaluate the resource pool utilization if needed. To 66

67 Solution Architecture Overview monitor I/O latency, use the Physical Disk NFS Volume \ Average Guest MilliSec/Command counters in ESXTop. If the I/O latency is continuously over the target, re-evaluate the virtual machines in the environment to ensure that they are not using more resources than intended. Storage capacity requirements The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine how much space on disk the system is using, and add an appropriate factor to accommodate growth. For example, to virtualize a server that is currently using 40 GB of a 200 GB internal drive with anticipated growth of approximately 20% over the next year, 48 GB are required. EMC also recommends reserving space for regular maintenance patches and swapping files. In addition, some file systems, like Microsoft NTFS, degrade in performance if they become too full. Determining Equivalent Reference Virtual Machines With all of the resources defined, determine an appropriate value for the Equivalent Reference Virtual Machines line by using the relationships in Table 10. Round all values up to the closest whole number.
Table 10. Reference Virtual Machine resources
CPU (value for the Reference Virtual Machine: 1): Equivalent Reference Virtual Machines = Resource Requirements
Memory (value for the Reference Virtual Machine: 2): Equivalent Reference Virtual Machines = (Resource Requirements)/2
IOPS (value for the Reference Virtual Machine: 25): Equivalent Reference Virtual Machines = (Resource Requirements)/25
Capacity (value for the Reference Virtual Machine: 100): Equivalent Reference Virtual Machines = (Resource Requirements)/100
For example, the point of sale system used in Example 2: Point of sale system earlier in the paper requires four CPUs, 16 GB of memory, 200 IOPS and 200 GB of storage. This translates to four reference virtual machines of CPU, eight reference virtual machines of memory, eight reference virtual machines of IOPS, and two reference virtual machines of capacity. Table 11 on page 68 demonstrates how that machine fits into the worksheet row. 67
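The Table 10 relationships, together with the 8 KB I/O-size adjustment described under I/O size, can be expressed as a short calculation. The Python sketch below is illustrative only; the function and variable names are invented here, but the arithmetic follows Table 10 and reproduces the point of sale example (eight equivalent reference virtual machines) and the 32 KB I/O scaling example (100 observed IOPS planned as 400).

import math

# Reference virtual machine values from Table 8 / Table 10.
RVM_CPU, RVM_MEM_GB, RVM_IOPS, RVM_CAP_GB = 1, 2, 25, 100
REFERENCE_IO_SIZE_KB = 8

def scaled_iops(observed_iops, avg_io_size_kb):
    # Scale observed IOPS when the average I/O size exceeds the 8 KB reference.
    if avg_io_size_kb <= REFERENCE_IO_SIZE_KB:
        return observed_iops
    return observed_iops * (avg_io_size_kb / REFERENCE_IO_SIZE_KB)

def equivalent_rvms(vcpus, memory_gb, iops, capacity_gb):
    # Per Table 10: divide each requirement by the reference value, round up,
    # and take the highest result as the worksheet's Equivalent RVM figure.
    per_resource = (
        math.ceil(vcpus / RVM_CPU),
        math.ceil(memory_gb / RVM_MEM_GB),
        math.ceil(iops / RVM_IOPS),
        math.ceil(capacity_gb / RVM_CAP_GB),
    )
    return per_resource, max(per_resource)

print(scaled_iops(100, 32))              # 400.0, the 32 KB example above
print(equivalent_rvms(4, 16, 200, 200))  # ((4, 8, 8, 2), 8) for Example 2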

68 Solution Architecture Overview Table 11. Example worksheet row
Example Application (the point of sale system from Example 2):
Resource Requirements: 4 virtual CPUs, 16 GB memory, 200 IOPS, 200 GB capacity
Equivalent Reference Virtual Machines: 4 (CPU), 8 (memory), 8 (IOPS), 2 (capacity)
Use the highest value in the row to fill in the column for Equivalent Reference Virtual Machines. As shown below, eight Reference Virtual Machines are required. Figure 16. Required resource from the reference virtual machine pool Once the worksheet has been filled out for each application that the customer wants to migrate into the virtual infrastructure, compute the sum of the Equivalent Reference Virtual Machines column on the right side of the worksheet as listed in Table 12 on page 69 to calculate the total number of reference virtual machines that are required in the pool. In the example, the result of the calculation from Table 10 on page 67 is shown for clarity, along with the value, rounded up to the nearest whole number, to use. 68

69 Solution Architecture Overview Table 12. Example applications
Example Application #1: Custom Built Application. Resource Requirements: 1 virtual CPU, 3 GB memory, 15 IOPS, 30 GB capacity. Equivalent Reference Virtual Machines: 1 (CPU), 2 (memory), 1 (IOPS), 1 (capacity); 2 reference virtual machines.
Example Application #2: Point of Sale System. Resource Requirements: 4 virtual CPUs, 16 GB memory, 200 IOPS, 200 GB capacity. Equivalent Reference Virtual Machines: 4 (CPU), 8 (memory), 8 (IOPS), 2 (capacity); 8 reference virtual machines.
Example Application #3: Web Server. Resource Requirements: 2 virtual CPUs, 8 GB memory, 50 IOPS, 25 GB capacity. Equivalent Reference Virtual Machines: 2 (CPU), 4 (memory), 2 (IOPS), 1 (capacity); 4 reference virtual machines.
Example Application #4: Decision Support Database. Resource Requirements: 10 virtual CPUs, 64 GB memory, 700 IOPS, 5 TB capacity. Equivalent Reference Virtual Machines: 10 (CPU), 32 (memory), 28 (IOPS), 52 (capacity); 52 reference virtual machines.
Total Equivalent Reference Virtual Machines: 66
The VSPEX Virtual Infrastructure solutions define discrete resource pool sizes. For this solution set, the pool can support 125 or 250 reference virtual machines. Figure 17 shows 59 Reference Virtual Machines available after applying all four examples in the 125 virtual machine solution. 69

70 Solution Architecture Overview Figure 17. Aggregate resource requirements from the referenced virtual machine pool In the case of Table 12 on page 69, the customer requires 66 virtual machines of capability from the pool. Therefore, the 125 virtual machine resource pool provides sufficient resources for the current needs as well as room for growth. Fine tuning hardware resources In most cases, the recommended hardware for servers and storage is sized appropriately based on the process described. However, in some cases there is a desire to further customize the hardware resources that are available to the system. A complete description of system architecture is beyond the scope of this document; however, additional customization can be done at this point. Storage resources In some applications, there is a need to separate application data from other workloads. The storage layouts in the VSPEX architectures put all of the virtual machines in a single resource pool. In order to achieve workload separation, purchase additional disk drives for the application workload and add them to a dedicated pool. It is not appropriate to reduce the size of the main resource pool in order to support application isolation, or to reduce the capability of the pool. The storage layouts presented in the 125 and 250 virtual machine solutions are designed to balance many different factors in terms of high availability, performance, and data protection. Changing the components of the pool can have significant and difficult-to-predict impacts on other areas of the system. Server resources For the server resources in the VSPEX virtual infrastructure, it is possible to customize the hardware resources more effectively. Figure 18. Customizing server resources 70

71 Solution Architecture Overview To do this, first total the resource requirements for the server components as shown in Table 13. Note the addition of a Server Component Totals line at the bottom of the worksheet. In this line, add up the server resource requirements from the applications in the table.
Table 13. Server resource component totals
Example Application #1: Custom Built Application. Resource Requirements: 1 virtual CPU, 3 GB memory, 15 IOPS, 30 GB capacity. Equivalent Reference Virtual Machines: 2.
Example Application #2: Point of Sale System. Resource Requirements: 4 virtual CPUs, 16 GB memory, 200 IOPS, 200 GB capacity. Equivalent Reference Virtual Machines: 8.
Example Application #3: Web Server. Resource Requirements: 2 virtual CPUs, 8 GB memory, 50 IOPS, 25 GB capacity. Equivalent Reference Virtual Machines: 4.
Example Application #4: Decision Support Database. Resource Requirements: 10 virtual CPUs, 64 GB memory, 700 IOPS, 5 TB capacity. Equivalent Reference Virtual Machines: 52.
Total Equivalent Reference Virtual Machines: 66
Server Resource Component Totals: 17 virtual CPUs, 91 GB memory
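The worksheet totals shown in Table 12 and Table 13 can be reproduced with a short calculation. In this illustrative Python sketch the per-application requirements come from the four examples earlier in this chapter; the 250 virtual machine remainder is simple arithmetic rather than a figure quoted from the document.

import math

RVM_VALUES = (1, 2, 25, 100)  # reference VM: vCPUs, GB memory, IOPS, GB capacity

def equivalent_rvms(requirements):
    # Highest of the per-resource ratios, each rounded up (Table 10).
    return max(math.ceil(req / ref) for req, ref in zip(requirements, RVM_VALUES))

worksheet = {
    "Custom Built Application": (1, 3, 15, 30),
    "Point of Sale System": (4, 16, 200, 200),
    "Web Server": (2, 8, 50, 25),
    "Decision Support Database": (10, 64, 700, 5120),  # 5 TB expressed in GB
}

total = sum(equivalent_rvms(reqs) for reqs in worksheet.values())
vcpu_total = sum(reqs[0] for reqs in worksheet.values())
memory_total = sum(reqs[1] for reqs in worksheet.values())

print(total)                      # 66 equivalent reference virtual machines
print(vcpu_total, memory_total)   # 17 virtual CPUs, 91 GB of memory
print(125 - total, 250 - total)   # 59 or 184 reference VMs remain in the pool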

72 Solution Architecture Overview 72

73 Chapter 5 VSPEX Configuration Guidelines This chapter presents the following topics: Configuration overview Pre-deployment tasks Customer configuration data Prepare switches, connect network, and configure switches Prepare and configure storage array Install and configure vsphere infrastructure Install and configure SQL server database Install and configure VMware vcenter server Summary

74 VSPEX Configuration Guidelines Configuration overview Deployment process The deployment process is divided into the stages shown in Table 14. Upon completion of the deployment, the VSPEX infrastructure is ready for integration with the existing customer network and server infrastructure. Table 14 lists the main stages in the solution deployment process. The table also includes references to chapters where relevant procedures are provided. Table 14. Deployment process overview Stage Description Reference 1 Verify prerequisites Pre-deployment tasks 2 Obtain the deployment tools Deployment prerequisites Gather customer configuration data Rack and cable the components Configure the switches and networks, connect to the customer network Customer configuration data Refer to the vendor documentation. Prepare switches, connect network, and configure switches 6 Install and configure the VNX Prepare and configure storage array Configure virtual machine datastores Install and configure the servers Set up SQL Server (used by VMware vcenter ) Install and configure vcenter and virtual machine networking Prepare and configure storage array Install and configure vsphere infrastructure Install and configure SQL server database Configure database for VMware vcenter 74

75 VSPEX Configuration Guidelines Pre-deployment tasks Overview Pre-deployment tasks include procedures that are not directly related to environment installation and configuration, but whose results are needed at the time of installation. Examples of pre-deployment tasks are collection of hostnames, IP addresses, VLAN IDs, license keys, installation media, and so on. These tasks should be performed before the customer visit to decrease the time required onsite. Table 15. Tasks for pre-deployment Task Description Reference Gather documents Gather the related documents listed in the Appendix C. These are used throughout the text of this document to provide detail on setup procedures and deployment best practices for the various components of the solution. References: EMC documentation Gather tools Gather data Gather the required and optional tools for the deployment. Use Table 16 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration data sheet for reference during the deployment process. Table 16: Deployment prerequisites checklist Appendix B Deployment prerequisites Table 16 itemizes the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 2 on page 40 and Table 3 on page 42. Table 16. Deployment prerequisites checklist Requirement Description Reference Hardware Physical servers to host virtual servers: Sufficient physical server capacity to host 125 or 250 virtual servers VMware vsphere 5 servers to host virtual infrastructure servers Note This requirement may be covered in the existing infrastructure Table 2: Solution hardware Networking: Switch port capacity and capabilities as required by the virtual server infrastructure. 75

76 VSPEX Configuration Guidelines Requirement Description Reference EMC VNX5300 (125 virtual machines) or EMC VNX5500 (250 virtual machines): Multiprotocol storage array with the required disk layout. Software VMware ESXi 5.1 installation media VMware vcenter Server 5.1 installation media EMC VSI for VMware vsphere: Unified Storage Management EMC VSI for VMware vsphere: Storage Viewer EMC Online Support Microsoft Windows Server 2008 R2 installation media (suggested OS for VMware vcenter) Microsoft SQL Server 2008 or newer installation media Note This requirement may be covered in the existing infrastructure. EMC vstorage API for Array Integration Plug-in EMC Online Support Microsoft Windows Server 2012 DataCenter installation media (suggested OS for virtual machine guest OS) Licenses VMware vcenter 5.1 license key VMware ESXi 5.1 license keys Microsoft Windows Server 2008 R2 Standard (or higher) license keys Microsoft Windows Server 2012 DataCenter license keys Note This requirement may be covered by an existing Microsoft Key Management Server (KMS) Microsoft SQL Server license key Note This requirement may be covered in the existing infrastructure 76

77 VSPEX Configuration Guidelines Customer configuration data To reduce the onsite time, information such as IP addresses and hostnames should be assembled as part of the planning process. Appendix B provides a table to maintain a record of relevant information. This form can be expanded or contracted as required, and information may be added, modified, and recorded as deployment progresses. Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information. Prepare switches, connect network, and configure switches Overview This section provides the requirements for network infrastructure needed to support this architecture. Table 17 provides a summary of the tasks for switch and network configuration, and references for further information. Table 17. Tasks for switch and network configuration Task Description Reference Configure infrastructure network Configure storage array and ESXi host infrastructure networking as specified in Prepare and configure storage array and Install and configure vsphere infrastructure. Prepare and configure storage array and Install and configure vsphere infrastructure. Configure VLANs Complete network cabling Configure private and public VLANs as required. Connect the switch interconnect ports. Connect the VNX ports. Connect the ESXi server ports. Your vendor s switch configuration guide 77

78 VSPEX Configuration Guidelines Prepare network switches Configure infrastructure network For validated levels of performance and high availability, this solution requires the switching capacity that is provided in the Solution Hardware table of the Table 2 on page 40. If existing infrastructure meets the requirements, no new hardware is needed. The infrastructure network requires redundant network links for each ESXi host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. Figure 19 shows a sample redundant Ethernet infrastructure for this solution. The diagram illustrates the use of redundant switches and links to ensure that there are no single points of failure. Figure 19. Sample Ethernet network architecture 78

79 VSPEX Configuration Guidelines Configure VLANs Ensure adequate switch ports for the storage array and ESXi hosts that are configured with a minimum of three VLANs for: Virtual machine networking, ESXi management (customer- facing networks, which may be separated if desired) NFS networking (private network) vmotion (private network) Complete network cabling Ensure that all servers, storage arrays, switch interconnects, and switch uplinks have redundant connections, and are plugged into separate switching infrastructures. Ensure that there is complete connection to the existing customer network. Note At this point, the new equipment is being connected to the existing customer network. Ensure that unforeseen interactions do not cause service issues on the customer network. Prepare and configure storage array VNX configuration Overview This section describes how to configure the VNX storage array. In the solution, VNX series provides Network File System (NFS) or Virtual Machine File System (VMFS) data storage for VMware hosts. Table 18. Tasks for storage configuration Task Description Reference Set up initial VNX configuration Provision storage for NFS datastores Prepare VNX Configure the IP address information and other key parameters on the VNX. Create NFS file systems that will be presented to the ESXi servers as NFS datastores that host the virtual servers. VNX5300 Unified Installation Guide VNX File and Unified Worksheet Unisphere System Getting Started Guide Your vendor s switch configuration guide VNX5300 Unified Installation Guide provides instructions on assembly, racking, cabling, and powering the VNX. For 250 virtual machines, refer to the VNX5500 Unified Installation Guide instead. There are no specific setup steps for this solution. 79

80 VSPEX Configuration Guidelines Set up initial VNX configuration After completing the initial VNX setup, you need to configure key information about the existing environment so that the storage array can communicate. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information:
DNS
NTP
Storage network interfaces
Storage network IP address
CIFS services and Active Directory Domain membership
The reference documents listed in Table 18 on page 79 provide more information on how to configure the VNX platform. Storage configuration guidelines provides more information on the disk layout. Provision storage for NFS datastores Complete the following steps in EMC Unisphere to configure NFS file systems on the VNX array to store virtual servers: 1. Create a block-based RAID 5 storage pool that consists of 70 (for 125 virtual machines) or 145 (for 250 virtual machines) 300 GB SAS drives. a. Log on to EMC Unisphere. b. Select the array that is to be used in this solution. c. Click Storage > Storage Configuration > Storage Pools. d. Select the Pools tab. e. Click Create. Note System drives are specifically excluded from the pool, and not used for additional storage. Create your hot spare disks at this point. Refer to the EMC VNX5300 Unified Installation Guide for additional information. Figure 9 on page 50 depicts the target storage layout for the system for 125 virtual machines. Figure 10 on page 52 depicts the target storage layout for the system for 250 virtual machines. 2. Use the pool created in step 1, and provision LUNs and present them to the Data Mover using the system-defined NAS storage group. a. Click Storage > LUNs. b. Click Create. 80

81 VSPEX Configuration Guidelines c. In the prompted dialog, select the pool created in step 1. For User Capacity, select MAX. The number of LUNs to create is 50 (for 125 virtual machines) or 100 (for 250 virtual machines). 268 GB LUNs are provisioned after this operation. d. Click Hosts > Storage Groups. e. Select ~filestorage. f. Click Connect LUNs. g. In the Available LUNs panel, select the 50 or 100 LUNs created in the previous steps. The Selected LUNs panel appears immediately. After this step, a new Storage Pool for File is ready, from which multiple file systems can be created. 3. Create multiple file systems from the NAS pool to present to the ESXi servers as NFS datastores. The validated solution used five (for 125 virtual machines) or 10 (for 250 virtual machines) 2.5 TB file systems from the pool. In a customer implementation, it may be proper to create logical separation between virtual machine groups by assigning some to one file system, and others to a separate one. In other cases where there is a need to deviate from the proposed number and type of drives specified, or the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system. a. Click Storage > Storage Configuration > File Systems. b. Click Create. c. In the prompted dialog, select Create from Storage Pool, and set the Storage Capacity to 1250 GB (for 125 virtual machines) or 2500 GB (for 250 virtual machines). d. Keep the default settings. Note To enable an NFS performance fix for VNX File that significantly reduces NFS write latency, the file systems must be mounted on the Data Mover by using the Direct Writes mode as shown in Figure 20 on page 82. Select Set Advanced Options to enable the Direct Writes Enabled option. 81
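As a quick sanity check on this layout, the NFS file system capacity can be compared with the 100 GB per reference virtual machine defined earlier. The Python lines below are illustrative arithmetic only, using the file system counts and the 2.5 TB size stated above.

# Illustrative check: provisioned file system capacity versus the capacity
# needed for the reference virtual machines (100 GB each).
FILE_SYSTEM_TB = 2.5
GB_PER_REFERENCE_VM = 100

for vm_count, file_systems in ((125, 5), (250, 10)):
    provisioned_gb = file_systems * FILE_SYSTEM_TB * 1024
    required_gb = vm_count * GB_PER_REFERENCE_VM
    print(vm_count, provisioned_gb, required_gb)
    # 125 VMs: 12800 GB provisioned vs 12500 GB required
    # 250 VMs: 25600 GB provisioned vs 25000 GB required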

82 VSPEX Configuration Guidelines Figure 20. Direct Writes Enabled checkbox 4. Export the file systems using NFS, and give root access to ESXi servers. a. Click Storage Shared Folders NFS. b. Click Create. c. In the dialog, add the IP addresses of all ESXi servers in Read/Write Hosts and Root Hosts. FAST Cache configuration (optional) To configure FAST Cache on the storage pools for this solution, complete the following steps: 5. Configure Flash drives as FAST Cache a. To create FAST Cache, click Properties (in the dashboard of the Unisphere window) or Manage Cache (in the left-hand pane of the Unisphere window) to open the Storage System Properties dialog (shown in Figure 21). b. Select the FAST Cache tab to view FAST Cache information. 82

83 VSPEX Configuration Guidelines Figure 21. Storage System Properties dialog box c. Click Create to open the Create FAST Cache dialog box as shown in Figure 22. The RAID Type field is displayed as RAID 1 when the FAST Cache has been created. The number of Flash drives can also be chosen from this screen. The bottom portion of the screen shows the Flash drives that are used for creating FAST Cache. You can choose the drives manually by selecting the Manual option. d. Refer to Storage configuration guidelines to determine the number of Flash drives that are needed in this solution. Note If a sufficient number of Flash drives are not available, FLARE displays an error message and FAST Cache cannot be created. 83

84 VSPEX Configuration Guidelines Figure 22. Create FAST Cache dialog box 6. Enable FAST Cache on the storage pool If a LUN is created in a storage pool, you can only configure FAST Cache for that LUN at the storage pool level. In other words, all the LUNs created in the storage pool will have FAST Cache enabled or disabled. You can configure them from the Advanced tab in the Create Storage Pool dialog shown in Figure 23. After FAST Cache is installed on the VNX series, it is enabled by default when a storage pool is created. 84

85 VSPEX Configuration Guidelines Figure 23. Advanced tab in the Create Storage Pool dialog If the storage pool has already been created, use the Advanced tab in the Storage Pool Properties dialog to configure FAST Cache as shown in Figure 24. Figure 24. Advanced tab in the Storage Pool Properties dialog Note The FAST Cache feature on the VNX series array does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours during which the performance of the array steadily improves. FAST VP configuration (optional) To configure FAST VP for this solution complete the following steps. 7. Configure FAST at the pool level To view and manage FAST at the pool level, click Properties for a specific storage pool to open the Storage Pool Properties dialog. Figure 25 shows the tiering information for a specific FAST pool. 85

86 VSPEX Configuration Guidelines Figure 25. Storage Pool Properties dialog box The Tier Status area shows FAST relocation information specific to the selected pool. Select the scheduled relocation at the pool level from the Auto-Tiering list. This can be set to either Automatic or Manual. In the Tier Details area, you can see the exact distribution of your data. You can also connect to the array-wide Relocation Schedule using the button on the top right corner, which presents the Manage Auto-Tiering dialog box as shown in Figure 26.

87 VSPEX Configuration Guidelines Figure 26. Manage Auto-Tiering dialog box From this status dialog, users can control the Data Relocation Rate. The default rate is set to Medium so as not to significantly affect host I/O. Note FAST (Fully Automated Storage Tiering) is a completely automated tool. To this end, relocations can be scheduled to occur automatically. Schedule the relocations during off-hours to minimize any potential performance impact the relocations may cause. 8. Configure FAST at the LUN level (Optional) Some FAST properties are managed at the LUN level. a. Click Properties for a specific LUN. b. In this dialog, select the Tiering tab to view tiering information for this single LUN, as shown in Figure 27.

88 VSPEX Configuration Guidelines Figure 27. LUN Properties dialog box c. The Tier Details section displays the current distribution of slices within the LUN. Select the tiering policy at the LUN level from the Tiering Policy list. 88

89 VSPEX Configuration Guidelines Install and configure vsphere infrastructure Overview This section provides the requirements for the installation and configuration of the ESXi hosts and infrastructure servers required to support the architecture. Table 19 describes the tasks that must be completed. Table 19. Tasks for server installation Task Description Reference Install ESXi Install the ESXi 5.1 hypervisor on the physical servers being deployed for the solution. vsphere Installation and Setup Guide Configure ESXi Networking Connect VMware Datastores Configure ESXi networking including NIC trunking, VMkernel ports, and virtual machine port groups and Jumbo Frames. Connect the VMware datastores to the ESXi hosts deployed for the solution. vsphere Networking vsphere Storage Guide Install ESXi Upon initial power up of the servers being used for ESXi, confirm or enable the hardware-assisted CPU virtualization and the hardware-assisted MMU virtualization setting in each of the server BIOS. If the servers are equipped with a RAID controller, it is recommended to configure mirroring on the local disks. Boot the ESXi 5.1 install media and install the hypervisor on each of the servers. ESXi hostnames, IP addresses, and a root password is required for installation. Appendix B provides appropriate values. Configure ESXi networking During the installation of VMware ESXi, a standard virtual switch (vswitch) is created. By default, ESXi chooses only one physical NIC as a virtual switch uplink. To maintain redundancy and bandwidth requirements, an additional NIC must be added either by using the ESXi console or by connecting to the ESXi host from the vsphere Client. Each VMware ESXi server should have multiple interface cards for each virtual network to ensure redundancy and provide for the use of network load balancing, link aggregation, and network adapter failover. VMware ESXi networking configuration including load balancing, link aggregation, and failover options are described in vsphere Networking. Choose the appropriate load balancing option based on what is supported by the network infrastructure. Create VMkernel ports as required, based on the infrastructure configuration: VMkernel port for NFS traffic VMkernel port for VMware vmotion 89

90 VSPEX Configuration Guidelines Virtual server port groups (used by the virtual servers to communicate on the network) vsphere Networking describes the procedure for configuring these settings. Refer to Appendix C for more information. Jumbo frames A jumbo frame is an Ethernet frame with a payload greater than 1,500 bytes and up to 9,000 bytes. This is also known as the Maximum Transmission Unit (MTU). The generally accepted maximum size for a jumbo frame is 9,000 bytes. Processing overhead is proportional to the number of frames. Therefore, enabling jumbo frames reduces processing overhead by reducing the number of frames to be sent. This increases the network throughput. Jumbo frames must be enabled end-to-end. This includes the network switches, ESXi servers, and VNX Data Movers. Jumbo frames can be enabled on the ESXi server on two different levels. If all the ports on the virtual switch need to be enabled for jumbo frames, this can be achieved by selecting the properties of the virtual switch and editing the MTU settings from vcenter. If specific VMkernel ports are to be jumbo frames enabled, edit the VMkernel port under the network properties from vcenter. To enable jumbo frames on the VNX, complete the following steps: 1. In Unisphere, click Settings > Network > Settings for File. 2. Select the appropriate network interface from the Interfaces tab. 3. Click Properties. 4. Set the MTU size to 9000. 5. Click OK to apply the changes. Jumbo frames may also need to be enabled on each network switch. Consult your switch configuration guide for instructions. Connect VMware datastores Connect the datastores configured in Install and configure vsphere infrastructure to the appropriate ESXi servers. These include the datastores configured for:
Virtual server storage
Infrastructure virtual machine storage (if required)
SQL Server storage (if required)
vsphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi host. Refer to Appendix C for more information. Plan virtual machine memory allocations Server capacity is required for two purposes in the solution: to support the new virtualized server infrastructure, and to support the required infrastructure services such as authentication/authorization, DNS, and databases. For information on minimum infrastructure requirements, refer to Table 2. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required. 90

Memory configuration

Take care when configuring server memory in order to properly size and configure the solution. This section provides general guidance on memory allocation for the virtual machines, and factors in vSphere overhead and the virtual machine configuration.

ESX/ESXi memory management

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources, such as memory, to provide resource isolation across multiple virtual machines while avoiding resource exhaustion. Where advanced processors are deployed (for example, Intel processors with EPT support), this abstraction takes place within the CPU. Otherwise, it occurs within the hypervisor itself through a feature known as shadow page tables.

vSphere employs the following memory management techniques:
 Memory overcommitment - Allocation of more memory to virtual machines than is physically available on the host.
 Transparent page sharing - Identical memory pages shared across virtual machines are merged, and duplicate pages are returned to the host free memory pool for reuse.
 Memory compression - ESXi stores pages that would otherwise be swapped out to disk through host swapping in a compressed cache located in main memory.
 Memory ballooning - Relieves host memory exhaustion by requesting that free pages be released by the virtual machine back to the host for reuse.
 Hypervisor swapping - Causes the host to force arbitrary virtual machine pages out to disk.

Additional information is available in the VMware documentation.
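The effect of these reclamation techniques can be observed per virtual machine through the vSphere quick statistics. As an illustration only, the following is a minimal pyVmomi sketch that reports ballooned, compressed, swapped, and active guest memory; the vCenter address and credentials are hypothetical placeholders.

```python
# Minimal sketch (assumptions: pyVmomi installed; vCenter address and credentials are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
vms = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True).view

# Report how much guest memory is currently ballooned, compressed, or swapped.
for vm in vms:
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        continue
    qs = vm.summary.quickStats
    print("%-30s ballooned=%d MB  compressed=%d KB  swapped=%d MB  active=%d MB"
          % (vm.name, qs.balloonedMemory or 0, qs.compressedMemory or 0,
             qs.swappedMemory or 0, qs.guestMemoryUsage or 0))

Disconnect(si)
```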

Virtual machine memory concepts

Figure 28 shows the memory settings parameters in the virtual machine.

Figure 28. Virtual machine memory settings

 Configured memory - Physical memory allocated to the virtual machine at the time of creation.
 Reserved memory - Memory that is guaranteed to the virtual machine.
 Touched memory - Memory that is active or in use by the virtual machine.
 Swappable memory - Memory that can be de-allocated from the virtual machine, via ballooning, compression, or swapping, if the host is under memory pressure from other virtual machines.

The following are the recommended best practices:
 Do not disable the default memory reclamation techniques. These lightweight processes provide flexibility with minimal impact to workloads.
 Intelligently size memory allocations for virtual machines. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources. Over-committing can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases, when hypervisor swapping is encountered, virtual machine performance might be adversely affected. Having performance baselines for your virtual machine workloads assists in this process.

Additional information on esxtop is available in the VMware documentation.
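Where a workload requires a guaranteed amount of memory (the Reserved memory setting above), the reservation can be applied to the virtual machine configuration. As an illustration only, the following is a minimal pyVmomi sketch; the virtual machine name and memory sizes are hypothetical placeholders and should be replaced with values derived from your own sizing exercise.

```python
# Minimal sketch (assumptions: pyVmomi installed; VM name and memory sizes are placeholders).
# Note: changing configured memory normally requires the virtual machine to be powered off.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
vm = next(v for v in si.content.viewManager.CreateContainerView(
              si.content.rootFolder, [vim.VirtualMachine], True).view
          if v.name == "app-server-01")          # hypothetical VM name

# Set configured memory to 8 GB and guarantee (reserve) 4 GB of it.
spec = vim.vm.ConfigSpec(
    memoryMB=8192,
    memoryAllocation=vim.ResourceAllocationInfo(reservation=4096))
task = vm.ReconfigVM_Task(spec=spec)             # completes asynchronously in vCenter

Disconnect(si)
```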
