
IMPLEMENTATION GUIDE

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

Abstract

This guide describes the steps required to deploy a Microsoft Exchange Server 2013 solution on an EMC VSPEX Proven Infrastructure with Microsoft Hyper-V.

June 2014

Copyright 2014 EMC Corporation. All rights reserved. Published in the USA. Published June 2014.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Part Number H12850

Contents

Chapter 1: Introduction
    Purpose of this guide
    Business value
    Scope
    Audience
    Terminology

Chapter 2: Before You Start
    Overview
    Predeployment tasks
    Deployment workflow
    Deployment prerequisites
    Planning and sizing the Exchange Server 2013 environment
        Overview
        Storage pools
        Example: Small Exchange organization
    Essential reading
        VSPEX Design Guide
        VSPEX Solution Overviews
        VSPEX Proven Infrastructure Guides
        EMC Powered Backup for VSPEX guide

Chapter 3: Solution Overview
    Overview
    EMC VSPEX Proven Infrastructure
    Solution architecture
    Summary of key components

Chapter 4: Solution Implementation
    Overview
    Physical setup
    Network implementation
    Storage implementation
        Overview
        Example architectures
        Setting up the initial VNX or VNXe configuration
        Provisioning storage for Hyper-V datastores
        Provisioning storage for Exchange datastores and logs
        Configuring FAST Cache on VNX
        Configuring FAST VP on VNX
    Microsoft Windows Server 2012 R2 with Hyper-V infrastructure implementation
        Overview
        Installing the Windows hosts
        Installing and configuring Failover Clustering
        Configuring Windows host networking
        Configuring multipathing
        Configuring the initiator to connect to VNX or VNXe via iSCSI
        Publishing VNXe datastores or VNX LUNs to Hyper-V
        Connecting Hyper-V datastores
        Using EMC Storage Integrator to manage CSV disks for Exchange
    Exchange Server virtualization implementation
        Overview
        Creating the Exchange virtual machines
        Installing the Exchange guest OS
        Installing integration services
        Assigning IP addresses
        Attaching pass-through disks to Exchange virtual machines
        Using ESI to manage pass-through disks for Exchange
    Application implementation
        Overview
        Verifying predeployment with Jetstress
        Preparing Active Directory
        Installing the Exchange Server 2013 Mailbox server roles
        Installing the Exchange Server 2013 Client Access server roles
        Deploying the database availability group
    EMC Powered Backup implementation

Chapter 5: Solution Verification
    Baseline infrastructure verification
        Overview
        Verifying Hyper-V functionality
        Verifying solution component redundancy
        Verifying the Exchange DAG configuration
        Monitoring the solution's health
    Exchange Server performance verification
        Overview
        Using Jetstress to verify performance
    EMC Powered Backup verification

Chapter 6: Reference Documentation
    EMC documentation
    Other documentation
    Links
        Microsoft TechNet

Appendix A: Configuration Worksheet
    Configuration worksheet for Exchange Server 2013

Figures

Figure 1. VSPEX Proven Infrastructure
Figure 2. Solution architecture
Figure 3. Exchange Server 2013 storage elements on a Hyper-V and VNX platform
Figure 4. Exchange Server 2013 storage elements on a Hyper-V and VNXe platform
Figure 5. Example of storage layout for EMC VNX
Figure 6. Storage layout example for VNXe
Figure 7. Using ESI to manage the storage system
Figure 8. Storage pool properties with FAST Cache enabled
Figure 9. Expand Storage Pool dialog box
Figure 10. CSV disk in Failover Cluster Manager
Figure 11. CSV disk in EMC Storage Integrator
Figure 12. Rescanning disks
Figure 13. Adding disks
Figure 14. Configuring pass-through disks in Failover Cluster Manager
Figure 15. Pass-through disks in EMC Storage Integrator
Figure 16. Selecting the Mailbox role
Figure 17. Selecting the Client Access role
Figure 18. Verifying the DAG configuration
Figure 19. Verifying that the DAG detects the failure

Tables

Table 1. Terminology
Table 2. Predeployment tasks
Table 3. Solution deployment process workflow
Table 4. Deployment prerequisites checklist
Table 5. Exchange-related storage pools
Table 6. Customer evaluation example using the qualification worksheet
Table 7. Example of required resources for a small Exchange organization
Table 8. Example of storage recommendations for a small Exchange organization
Table 9. Example of key performance metrics from the Jetstress tool
Table 10. Solution components
Table 11. Tasks for physical setup
Table 12. Tasks for switch and network configuration
Table 13. Tasks for VNX or VNXe storage array configuration
Table 14. Example additional storage pools for Exchange data on VNX
Table 15. Example iSCSI LUN layout for Exchange data on VNX
Table 16. Example additional storage pools for Exchange data on VNXe
Table 17. Example iSCSI LUN layout for Exchange data on VNXe
Table 18. Tasks for server installation
Table 19. Exchange host virtual machine installation and configuration
Table 20. Example of Exchange reference virtual machines
Table 21. Tasks to implement Exchange Server 2013
Table 22. Tasks for verifying the VSPEX installation
Table 23. Tools to monitor the solution
Table 24. Example of verification questions for the user profile
Table 25. Key metrics for Jetstress verification
Table 26. Jetstress verification example results
Table 27. Common server information
Table 28. Exchange information
Table 29. Hyper-V server information
Table 30. Array information
Table 31. Network infrastructure information
Table 32. VLAN information

Chapter 1: Introduction

This chapter presents the following topics:

Purpose of this guide
Business value
Scope
Audience
Terminology

Purpose of this guide

EMC VSPEX Proven Infrastructures are optimized for virtualizing business-critical applications. VSPEX enables partners to plan and design the assets required to support Microsoft Exchange 2013 in a virtualized environment on a VSPEX Private Cloud.

The EMC VSPEX for virtualized Exchange 2013 architecture provides a validated system capable of hosting a virtualized Exchange 2013 solution at a consistent performance level. The solution has been tested, sized, and designed to be layered on an existing VSPEX Private Cloud using either a VMware vSphere or Microsoft Windows Server 2012 R2 with Hyper-V virtualization layer, and uses the highly available EMC VNX family of storage systems.

All VSPEX solutions are sized and tested with EMC Powered Backup products. EMC Avamar and EMC Data Domain enable complete infrastructure, application, and Exchange backup and recovery. The compute and network components, while vendor-definable, are designed to be redundant and are sufficiently powerful to handle the processing and data needs of the virtual machine environment.

This guide describes how to implement, with best practices, the resources necessary to deploy Microsoft Exchange Server 2013 on any VSPEX Proven Infrastructure and other mixed workloads with Microsoft Hyper-V.

Business value

Email is an indispensable lifeline for communicating within your business and connecting you with customers, prospects, partners, and suppliers. IT administrators supporting Microsoft Exchange Server are challenged with maintaining the highest possible levels of performance and application efficiency. At the same time, most companies struggle to keep pace with relentless data growth while working to overcome diminishing or stagnant budgets. Administering, auditing, protecting, and managing an Exchange environment for a modern, geographically dispersed workforce is a major challenge for most IT departments.

EMC has joined forces with the industry's leading providers of IT infrastructure to create a complete virtualization solution that accelerates the deployment of private cloud and Microsoft Exchange. VSPEX enables customers to accelerate their IT transformation with faster deployment, greater simplicity and choice, higher efficiency, and lower risk, compared to the challenges and complexity of building an IT infrastructure themselves.

VSPEX validation by EMC ensures predictable performance and enables customers to select technology that uses their existing or newly acquired IT infrastructure, while eliminating the planning, sizing, and configuration burdens typically associated with deploying a new IT infrastructure. VSPEX provides a validated solution for customers looking to simplify their system (a characteristic of truly converged infrastructures) while at the same time gaining more choice in individual stack components.

Scope

This guide describes the high-level steps and best practices required to deploy Exchange Server 2013 on a VSPEX Proven Infrastructure for Microsoft Hyper-V with a VNX or VNXe storage system. This guide assumes that a VSPEX Proven Infrastructure already exists in the customer environment.

This guide provides examples of deployments on an EMC VNX5600 array and an EMC VNXe3200 array. The same principles and guidelines apply to any other VNX or VNXe model that VSPEX supports.

EMC Powered Backup solutions for Exchange data protection are described in a separate document, the EMC Backup and Recovery Options for VSPEX for Virtualized Microsoft Exchange 2013 Design and Implementation Guide.

Audience

This guide is intended for internal EMC personnel and qualified EMC VSPEX partners. The guide assumes that VSPEX partners who intend to deploy this VSPEX for virtualized Exchange 2013 solution are:

- Qualified by Microsoft to sell and implement Exchange solutions
- Certified in Exchange 2013, ideally with one or both of the following Microsoft Certified Solutions Expert (MCSE): Messaging certifications:
    Core Solutions of Microsoft Exchange Server 2013 (Exam: 341)
    Advanced Solutions of Microsoft Exchange Server 2013 (Exam: 342)
- Qualified by EMC to sell, install, and configure the VNX family of storage systems
- Certified to sell VSPEX Proven Infrastructures
- Qualified to sell, install, and configure the network and server products required for VSPEX Proven Infrastructures

Partners who plan to deploy the solution must also have the necessary technical training and background to install and configure:

- Microsoft Windows Server 2012 R2 operating systems (OS)
- VMware vSphere or Microsoft Windows Server 2012 R2 with Hyper-V virtualization platforms
- Microsoft Exchange Server 2013
- EMC Powered Backup products, including Avamar and Data Domain

This guide provides external references where applicable. EMC recommends that partners implementing this solution are familiar with these documents. For details, refer to Essential reading and Reference Documentation.

Terminology

Table 1 lists the terminology used in this guide.

Table 1. Terminology

BDM: Background Database Maintenance
CIFS: Common Internet File System
CSV: Cluster-shared volume
DAG: Database availability group
FQDN: Fully qualified domain name
VHDX: Hyper-V virtual hard disk format; a new, enhanced format available in Microsoft Windows Server 2012

Chapter 2: Before You Start

This chapter presents the following topics:

Overview
Predeployment tasks
Deployment workflow
Deployment prerequisites
Planning and sizing the Exchange Server 2013 environment
Essential reading

Overview

This chapter provides an overview of important information you need to be aware of, documents you need to be familiar with, and tasks you need to perform before you start implementing your VSPEX for virtualized Microsoft Exchange Server 2013 solution.

The Design Guide for this solution, EMC VSPEX for Virtualized Exchange Server 2013, describes how to size and design your solution and how to select the right VSPEX Proven Infrastructure on which to layer Exchange Server 2013. The deployment examples in this Implementation Guide are based on the recommendations and examples in the Design Guide.

Before you implement Exchange on a VSPEX Proven Infrastructure, EMC recommends that you check and complete the predeployment tasks described in Table 2.

Predeployment tasks

Predeployment tasks include procedures that do not directly relate to environment installation and configuration, but whose results are needed at the time of installation. Examples of predeployment tasks include the collection of hostnames, IP addresses, VLAN IDs, license keys, and installation media. Before you visit a customer, perform these tasks to reduce the time required on site.

Table 2 lists the predeployment tasks for this solution.

Table 2. Predeployment tasks

Gathering documents: Gather the related documents listed in Essential reading. This guide often refers to these documents; they provide details on setup procedures, sizing, and deployment best practices for the various solution components. (Reference: Essential reading)
Gathering tools: Gather the required and optional tools for the deployment. Use Table 4 to confirm that all required equipment, software, and licenses are available for the deployment. (Reference: Deployment prerequisites)
Gathering data: Collect the customer-specific configuration data for networking, arrays, accounts, and so on. Enter this information into the configuration worksheet for Exchange Server 2013 for reference during deployment. (Reference: Configuration worksheet for Exchange Server 2013)

Deployment workflow

To design and implement your VSPEX for virtualized Exchange Server 2013 solution, refer to the process flow in Table 3.

Note: If your solution includes backup and recovery components, refer to the EMC Backup and Recovery Options for VSPEX for Virtualized Microsoft Exchange 2013 Design and Implementation Guide for backup and recovery sizing and implementation guidelines.

Table 3. Solution deployment process workflow

Step 1: Use the VSPEX for virtualized Exchange 2013 qualification worksheet to collect user requirements. The qualification worksheet is in Appendix A of the Design Guide.

Step 2: Use the EMC VSPEX Sizing Tool to determine the recommended VSPEX Proven Infrastructure for your Exchange Server solution, based on the user requirements collected in Step 1. Refer to the Design Guide for guidance. For more information about the Sizing Tool, refer to the EMC VSPEX Sizing Tool portal.
Note: If the Sizing Tool is not available, you can manually size the application using the guidelines in Appendix B of the Design Guide.

Step 3: Refer to the Design Guide to determine the final design for your VSPEX solution.
Note: Ensure that you consider all application requirements, not only the requirements for Exchange.

Step 4: Refer to the relevant VSPEX Proven Infrastructure document in Essential reading to select and order the correct VSPEX Proven Infrastructure.

Step 5: Follow this guide to deploy and test your VSPEX solution.
Note: If you already have a VSPEX Proven Infrastructure environment, you can skip the implementation steps that are already complete.

Deployment prerequisites

This guide applies to VSPEX Proven Infrastructures virtualized with Hyper-V on VNX or VNXe. The principles and guidance from the examples provided apply to all Next-Generation VNX or VNXe models that VSPEX Proven Infrastructures support.

Table 4 itemizes the hardware, software, and licenses required to configure this solution. For additional information, refer to the hardware and software tables in the relevant VSPEX Proven Infrastructure document in Essential reading.

Table 4. Deployment prerequisites checklist

Hardware:
- Physical servers: sufficient physical server capacity to host the required number of virtual machines, as recommended by the VSPEX Sizing Tool and the Design Guide.
- Networking: switch port capacity and capabilities as required by the virtual server infrastructure.
- EMC VNX or VNXe: multiprotocol storage array with the required disk layout.
  Note: The storage should be sufficient to support the total reference virtual machines required and the additional storage layout for applications.

Software:
- EMC VNXe Operating Environment (OE), version 3.0.0 (GA release)
- EMC VNX OE for Block, version 05.33.000.5.034
- EMC VNX OE for File, version 8.1.1.33
- EMC Unisphere for VNX, version 1.3.0.1.0718
- EMC Unisphere for VNXe, version 3.0.0 (GA release)
- EMC Storage Integrator (ESI), version 3.1 (refer to the EMC Storage Integrator for Windows Suite Technical Notes)
- Microsoft Windows Server 2012 R2, Standard or Datacenter edition
- Microsoft Exchange Server 2013 SP1, Standard or Enterprise edition
- Jetstress 2013, version 15.0.775.8 (for verification tests only)
- EMC PowerPath/VE, version 5.7 Virtual Edition (multipathing)

Licenses:
- Microsoft Windows Server 2012 R2 (Standard or Datacenter) license keys (http://www.microsoft.com).
  Note: This requirement might be covered by an existing Software Assurance agreement, or the keys might be found on an existing customer Microsoft Key Management Server (KMS), if applicable.
- Microsoft Exchange Server 2013 (Standard or Enterprise) license key
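When automating environment validation, the software matrix in Table 4 can be captured as data and compared against what is actually installed. The sketch below is illustrative only: the component names and version strings come from Table 4, but the check itself is not part of any EMC tool, and how you collect the installed versions is left open.

```python
# Required software versions from the Table 4 checklist (illustrative).
REQUIRED_VERSIONS = {
    "EMC VNXe Operating Environment": "3.0.0",
    "EMC VNX OE for Block": "05.33.000.5.034",
    "EMC VNX OE for File": "8.1.1.33",
    "EMC Unisphere for VNX": "1.3.0.1.0718",
    "EMC Storage Integrator": "3.1",
    "EMC PowerPath/VE": "5.7",
}

def missing_or_mismatched(installed):
    """Return the components whose installed version is absent or differs."""
    return sorted(
        name for name, required in REQUIRED_VERSIONS.items()
        if installed.get(name) != required
    )

# Example: an environment missing ESI and running an older PowerPath/VE.
installed = dict(REQUIRED_VERSIONS, **{"EMC PowerPath/VE": "5.5"})
del installed["EMC Storage Integrator"]
print(missing_or_mismatched(installed))
```

Running the example flags exactly the two deviating components, so the gap can be closed before the on-site visit.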

Planning and sizing the Exchange Server 2013 environment

Overview

To plan and size your Exchange Server 2013 environment on a VSPEX Proven Infrastructure, follow the recommendations and guidelines in the Design Guide. Use the VSPEX Sizing Tool and the VSPEX for virtualized Exchange 2013 qualification worksheet, as described in that guide.

Storage pools

In this VSPEX solution, we (in this guide, "we" refers to the EMC Solutions engineering team that validated the solution) introduced general storage pools that are used to store Exchange data. Table 5 shows an example of the storage pools needed in an Exchange database availability group (DAG) deployment where each database has two copies. For detailed information, refer to the Design Guide.

Table 5. Exchange-related storage pools

VSPEX private cloud pool: The infrastructure pool where all the virtual machines' OS volumes reside. For details, refer to the relevant VSPEX Proven Infrastructure guide in Essential reading.
Exchange storage pool 1: The pool where all the Exchange database files and log files of the first database copy reside.
Exchange storage pool 2: The pool where all the Exchange database files and log files of the second database copy reside.

Example: Small Exchange organization

The Design Guide introduces this example. A customer wants to create a small Exchange Server 2013 organization on a VSPEX Proven Infrastructure. Complete a customer evaluation, using the VSPEX for virtualized Exchange 2013 qualification worksheet as shown in Table 6, to determine the requirements for the Exchange environment. For more detailed information about this example, refer to the Design Guide.

Table 6. Customer evaluation example using the qualification worksheet

Number of mailboxes: 1,500
Maximum mailbox size: 1.5 GB
Mailbox IOPS profile: 0.101 IOPS per mailbox (150 messages sent/received per mailbox per day)
DAG copies (including active copy): 2
Deleted Items Retention (DIR) window: 14 days
Backup/Truncation Failure Tolerance: 3 days
Included number of years growth: 1
Annual growth rate (number of mailboxes): 20%

Using the completed evaluation from the customer, enter the answers into the VSPEX Sizing Tool to obtain the following results:

- Required resources table, which lists the required number of virtual machines and their characteristics.
- Storage recommendations table, which lists the additional storage hardware required to run Exchange Server in addition to the VSPEX private cloud pools.
- Performance metrics table, which lists the key performance metrics that should be achieved in the Jetstress tests. EMC recommends that you run Jetstress tests to verify Exchange performance before deploying Exchange in the production environment. For more information, refer to Exchange Server performance verification.

Table 7 through Table 9 provide example results based on the customer information in Table 6. Use the VSPEX Sizing Tool and follow the recommendations in the Design Guide to determine the number of server roles required for your Exchange organization, and the resources required for each server role.

Table 7 provides an example of the required resources for each Exchange Server role. In this example, you need to set up two Exchange Mailbox servers and two Client Access servers to support the requirements specified in the qualification worksheet in Table 6 for a small Exchange organization.

Table 7. Example of required resources for a small Exchange organization

Mailbox server: 8 vCPUs, 64 GB memory, 300 GB OS volume capacity, 25 OS volume IOPS, 2 virtual machines
Client Access server: 4 vCPUs, 12 GB memory, 100 GB OS volume capacity, 25 OS volume IOPS, 2 virtual machines

Table 8 shows an example of EMC's storage recommendations for a small Exchange organization.

Table 8. Example of storage recommendations for a small Exchange organization

Exchange data pool 1: RAID 1/0 (4+4), 8 x 2 TB 7,200 rpm NL-SAS disks
Exchange data pool 2: RAID 1/0 (4+4), 8 x 2 TB 7,200 rpm NL-SAS disks
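As a rough cross-check of the sizing results in Tables 6 through 8, the growth, IOPS, and raw pool capacity arithmetic can be sketched as follows. This is a minimal illustration only: the real Sizing Tool also accounts for the Deleted Items Retention window, content indexing, BDM overhead, and free-space reserves, and applying the IOPS target to the grown (rather than the initial) mailbox count is an assumption made here.

```python
# Back-of-the-envelope sizing for the small-organization example (illustrative).
mailboxes_now = 1500        # from Table 6
annual_growth = 0.20        # 20% per year
growth_years = 1
iops_per_mailbox = 0.101    # Exchange 2013 user IOPS profile

# Mailbox count after the planned growth period.
mailboxes_final = round(mailboxes_now * (1 + annual_growth) ** growth_years)

# Transactional IOPS the storage must sustain
# (number of mailboxes * per-mailbox IOPS profile).
target_iops = mailboxes_final * iops_per_mailbox

# Usable capacity of one RAID 1/0 (4+4) pool of eight 2 TB NL-SAS disks:
# mirroring halves the raw capacity.
usable_tb_per_pool = 8 * 2 / 2

print(mailboxes_final, round(target_iops, 1), usable_tb_per_pool)  # 1800 181.8 8.0
```

Even this crude arithmetic shows why each Exchange data pool uses eight spindles: the pool must absorb the grown organization's transactional IOPS on 7,200 rpm NL-SAS disks while leaving usable capacity for the database copies and logs.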

Table 9 lists the key performance metrics that should be achieved in the Jetstress tests for a small Exchange organization.

Table 9. Example of key performance metrics from the Jetstress tool

Achieved Exchange transactional IOPS (I/O database reads/sec + I/O database writes/sec): Number of mailboxes * Exchange Server 2013 user IOPS profile
I/O database reads/sec: N/A (for analysis purposes)
I/O database writes/sec: N/A (for analysis purposes)
Total IOPS (I/O database reads/sec + I/O database writes/sec + BDM reads/sec + I/O log replication reads/sec + I/O log writes/sec): N/A (for analysis purposes)
I/O database reads average latency: less than 20 ms
I/O log writes average latency: less than 10 ms

Essential reading

EMC recommends that you read the following documents, available from the VSPEX space on the EMC Community Network or from the VSPEX Proven Infrastructure pages on EMC.com. If you do not have access to a document, contact your EMC representative.

VSPEX Design Guide

Refer to the following VSPEX Design Guide:
- EMC VSPEX for Virtualized Microsoft Exchange 2013

VSPEX Solution Overviews

Refer to the following VSPEX Solution Overview documents:
- EMC VSPEX Server Virtualization for Midmarket Businesses
- EMC VSPEX Server Virtualization for Small and Medium Businesses

VSPEX Proven Infrastructure Guides

Refer to the following VSPEX Proven Infrastructure Guides:
- EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 125 Virtual Machines
- EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 1,000 Virtual Machines

EMC Powered Backup for VSPEX guide

Refer to the following EMC Powered Backup for VSPEX guide:
- EMC Backup and Recovery Options for VSPEX for Virtualized Microsoft Exchange 2013 Design and Implementation Guide
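The pass/fail logic implied by the Table 9 counters can be sketched as a simple threshold check. This is a hypothetical helper for illustration, not part of the Jetstress tool; the thresholds and the target-IOPS formula follow Table 9.

```python
def jetstress_passes(achieved_iops, target_iops,
                     db_read_latency_ms, log_write_latency_ms):
    """Check a Jetstress result against the Table 9 targets.

    achieved_iops: I/O database reads/sec + I/O database writes/sec
    target_iops:   number of mailboxes * per-mailbox IOPS profile
    """
    return (achieved_iops >= target_iops
            and db_read_latency_ms < 20     # database read latency target
            and log_write_latency_ms < 10)  # log write latency target

# Example: 1,500 mailboxes at 0.101 IOPS each gives a 151.5 IOPS target.
print(jetstress_passes(165.0, 1500 * 0.101, 17.2, 6.4))  # True for this run
```

A run that meets the IOPS target but exceeds either latency threshold still fails, which is why the latency counters in Table 9 matter as much as raw throughput.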


Chapter 3: Solution Overview

This chapter presents the following topics:

Overview
EMC VSPEX Proven Infrastructure
Solution architecture
Summary of key components

Overview

This chapter provides an overview of the VSPEX for virtualized Microsoft Exchange 2013 with Microsoft Hyper-V solution and the key technologies used. The solution has been proven and designed to be layered on a VSPEX Private Cloud, which provides storage, compute, network, and backup resources. The solution enables customers to quickly and consistently deploy and protect a virtualized Exchange organization on a VSPEX Proven Infrastructure. VNX or VNXe storage systems and Microsoft Hyper-V virtualized Windows Server platforms provide storage and server hardware consolidation.

This guide supports all VSPEX Proven Infrastructure for virtualized Exchange solutions with Hyper-V and VNX or VNXe. The solution includes the servers, EMC storage, network components, and Exchange components that are required for small- and medium-sized business environments.

VNX and VNXe storage arrays are multiprotocol platforms that can support the Internet Small Computer Systems Interface (iSCSI), Network File System (NFS), and Common Internet File System (CIFS) protocols, depending on the customer's specific needs. EMC validated the solution using iSCSI for Exchange database and log files.

This solution requires the presence of Microsoft Active Directory (AD) and Domain Name System (DNS). The implementation of these services is beyond the scope of this guide, but is considered a prerequisite for a successful deployment.

EMC Powered Backup solutions provide essential Exchange data protection and are described in a separate document, the EMC Backup and Recovery Options for VSPEX for Virtualized Microsoft Exchange 2013 Design and Implementation Guide.

EMC VSPEX Proven Infrastructure

VSPEX Proven Infrastructure, as shown in Figure 1, is a modular virtualized infrastructure validated by EMC and delivered by EMC VSPEX partners. VSPEX includes a virtualization layer, server and network layers, and EMC storage and backup, designed by EMC to deliver reliable and predictable performance.

Figure 1. VSPEX Proven Infrastructure

VSPEX provides the flexibility to choose network, server, and virtualization technologies that fit a customer's environment to create a complete virtualization solution. VSPEX delivers faster deployment for EMC partner customers, with greater simplicity and efficiency, more choice, and lower risk to a customer's business. You can deploy application-based solutions such as Exchange on VSPEX Proven Infrastructures.

We validated the VSPEX Proven Infrastructure for virtualized Exchange solution using VNX or VNXe and a Hyper-V virtualized Windows Server platform to provide storage and server hardware consolidation. You can centrally manage the virtualized infrastructure, which enables you to efficiently deploy and manage a scalable number of virtual machines and associated shared storage.

Solution architecture

Figure 2 shows an example of the architecture that characterizes the validated infrastructure for the support of Exchange Server 2013 layered on a VSPEX Proven Infrastructure. You can use any VNX or VNXe model that has been validated as part of the VSPEX Proven Infrastructure to provide the back-end storage functionality. In this example, we deployed two Exchange Mailbox servers and two Exchange Client Access servers as virtual machines on a Windows Server 2012 R2 with Hyper-V cluster to meet a small Exchange organization's requirements, as shown in Table 6 on page 17.

Note: This solution applies to all VSPEX offerings on Hyper-V.

Figure 2. Solution architecture

Summary of key components

Table 10 summarizes the key technologies used in this solution. The Design Guide provides overviews of the individual components.

Table 10. Solution components

Application: Microsoft Exchange Server 2013
Virtualization: Microsoft Windows Server 2012 R2 with Hyper-V
Compute: VSPEX defines the minimum amount of compute layer resources required but enables the customer to implement the requirements using any server hardware that meets these requirements.
Network: VSPEX defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but enables the customer to implement the requirements using any network hardware that meets these requirements.
Storage: EMC VNX; EMC VNXe; Microsoft Multipath I/O (MPIO) and Multiple Connections per Session (MCS); EMC PowerPath/VE
Backup: EMC Powered Backup solutions


Chapter 4: Solution Implementation

This chapter presents the following topics:

Overview
Physical setup
Network implementation
Storage implementation
Microsoft Windows Server 2012 R2 with Hyper-V infrastructure implementation
Exchange Server virtualization implementation
Application implementation
EMC Powered Backup implementation

Overview

This chapter provides information about how to implement the solution. If you already have a VSPEX Proven Infrastructure environment, you can skip the sections for the implementation steps you have already completed.

Physical setup

This section includes preparation information for the solution's physical components. After you complete the tasks in Table 11, the new hardware components are racked, cabled, powered, and ready for network connection.

Table 11. Tasks for physical setup

Preparing network switches: Install switches in the rack and connect them to power. (Reference: vendor installation guide)
Preparing servers: Install the servers in the rack and connect them to power. (Reference: vendor installation guide)
Preparing VNX or VNXe: Install the VNX or VNXe storage array in the rack and connect it to power. (References: EMC VNXe3200 Installation Guide, EMC VNX5600 Unified Installation Guide)

For details of the physical setup, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Network implementation

This section describes the network infrastructure requirements that you need to support the solution architecture. Table 12 provides a summary of the tasks for switch and network configuration and references for further information.

Table 12. Tasks for switch and network configuration

Configuring infrastructure network: Configure the storage array and Windows host infrastructure networking as specified in the solution reference architecture. (Reference: the relevant VSPEX Proven Infrastructure Guide in Essential reading)
Completing network cabling: Connect the switch interconnect ports, the VNX or VNXe ports, and the Windows server ports. (Reference: the relevant VSPEX Proven Infrastructure Guide in Essential reading)

Configuring VLANs: Configure private and public VLANs as required. (Reference: vendor switch configuration guide)

For network implementation details, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Storage implementation

Overview

This section describes how to configure the VNX or VNXe storage array. This guide uses iSCSI as a block storage example for the Exchange Server 2013 database and log volumes. If you already have a VSPEX Proven Infrastructure environment on other block protocols, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading for more information about storage implementation.

Note: Microsoft has support policies on the types of storage (file or block protocols) that Exchange virtual machines can use for Exchange data. For detailed information, refer to the Microsoft TechNet topic Exchange Server 2013 Virtualization.

Table 13 provides a summary of the tasks required for storage array configuration, and references for further information.

Table 13. Tasks for VNX or VNXe storage array configuration

Setting up the initial VNX or VNXe configuration: Configure the IP address information and other key parameters, such as DNS and Network Time Protocol (NTP), on the VNX or VNXe array. (References: EMC VNXe3200 Installation Guide, EMC VNX5600 Unified Installation Guide)
Provisioning the storage for Hyper-V datastores: Create storage pools and provision storage that will be presented to the Windows servers as Hyper-V datastores that host the virtual machines. (References: EMC Host Connectivity Guide for Windows, Using a VNXe System with FC or iSCSI LUNs)
Provisioning the storage for Exchange databases and logs: Create storage pools and provision storage that will be presented to the Exchange Mailbox server virtual machines as pass-through disks hosting Exchange databases and logs. (References: EMC Host Connectivity Guide for Windows, Using a VNXe System with FC or iSCSI LUNs)

Example architectures

Figure 3 shows an example of the high-level architecture of the Exchange components and storage elements validated in the VSPEX Proven Infrastructure for virtualized Microsoft Exchange 2013 on a Hyper-V virtualization platform and a VNX storage array.

The system volumes of all virtual machines are stored in Hyper-V virtual hard disk (VHDX) format on a cluster-shared volume (CSV), and all Exchange database and log LUNs are presented to the virtual machines as pass-through disks. You can also use VHDX to store Exchange data. Whether you use VHDX or pass-through disks to store Exchange data depends on your technical requirements. For example, if you use hardware snapshots for Exchange Server protection, you should use pass-through disks to store the Exchange database and log files.

Figure 3. Exchange Server 2013 storage elements on a Hyper-V and VNX platform
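As a minimal sketch of the VHDX side of this choice, the following Hyper-V PowerShell command creates a dynamically expanding VHDX on a cluster-shared volume for a virtual machine system disk. The path and size shown are illustrative assumptions, not values from this solution:

```powershell
# Hedged sketch: create a dynamically expanding system-disk VHDX on a CSV.
# The path and 300 GB size are hypothetical examples only.
New-VHD -Path "C:\ClusterStorage\Volume1\MBX1\MBX1-System.vhdx" `
    -SizeBytes 300GB -Dynamic
```

Pass-through disks, by contrast, are not created as files; the physical LUN is taken offline on the host and attached directly to the virtual machine, as described later in this chapter.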

Figure 4 shows an example of the high-level architecture of the Exchange 2013 components and storage elements validated in the VSPEX Proven Infrastructure for virtualized Microsoft Exchange 2013 on a Hyper-V virtualization platform and a VNXe storage array. The system volumes of all virtual machines are stored as Hyper-V VHDX disks on a CSV. All Exchange database and log LUNs are presented to the virtual machines as pass-through disks.

Figure 4. Exchange Server 2013 storage elements on a Hyper-V and VNXe platform

Setting up the initial VNX or VNXe configuration

Ensure that network interfaces, IP address information, and other key parameters such as DNS and NTP are configured on the VNX or VNXe before provisioning the storage. For more information about how to configure the VNX or VNXe platform, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Provisioning storage for Hyper-V datastores

To configure the iSCSI servers on VNX or VNXe and provision storage for Hyper-V datastores, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Provisioning storage for Exchange databases and logs

In this solution, all the Exchange database and log LUNs are presented to Exchange Mailbox server virtual machines as pass-through disks. Before you provision the storage for Exchange, follow the recommendations from the VSPEX Sizing Tool and the Design Guide.

Provisioning iSCSI storage for Exchange on VNX

Table 14 shows an example of storage pools for Exchange on VNX, in addition to the VSPEX private cloud pool. For more information about the storage layout recommendations and design, refer to the Design Guide. For increased efficiency and performance, the Exchange database pools use thin LUNs and contain both high-performance and high-capacity disks, with FAST VP enabled for storage tiering.

Table 14. Example additional storage pools for Exchange data on VNX

Exchange database pool 1: RAID 1/0 (16+16), 32 x 3 TB 7,200 rpm NL-SAS disks; plus RAID 1 (1+1), 2 x 100 GB FAST VP SSDs
Exchange database pool 2: RAID 1/0 (16+16), 32 x 3 TB 7,200 rpm NL-SAS disks; plus RAID 1 (1+1), 2 x 100 GB FAST VP SSDs
Exchange log pool 1: RAID 1/0 (2+2), 4 x 3 TB 7,200 rpm NL-SAS disks
Exchange log pool 2: RAID 1/0 (2+2), 4 x 3 TB 7,200 rpm NL-SAS disks

To configure iSCSI network settings, storage pools, and iSCSI LUNs on the VNX array:

1. In Unisphere, select the VNX array that is to be used.
2. Select Settings > Network > Settings for Block.
3. Configure the IP address for the network ports used for iSCSI.
4. Select Storage > Storage Configuration > Storage Pools.
5. Select Pools and create the additional storage pools in the VNX for the Exchange database and transaction logs, according to the VSPEX Sizing Tool recommendation.

Note: If you receive a warning message that the selected disk number is not recommended for the selected RAID type, you can safely ignore the warning.

6. To create and optimize the thin LUNs in a VNX storage pool for maximum performance, refer to Microsoft Exchange Server: Best Practices and Design Guidelines for EMC Storage.

Table 15 shows an example of an iSCSI LUN layout for Exchange databases and transaction logs. Thin LUNs were used for this layout.

Table 15. Example iSCSI LUN layout for Exchange data on VNX

Exchange Mailbox server 1: Database LUNs, 1,900 GB thin x 4, Exchange database pool 1; Log LUNs, 110 GB thin x 4, Exchange log pool 1
Exchange Mailbox server 2: Database LUNs, 1,900 GB thin x 4, Exchange database pool 2; Log LUNs, 110 GB thin x 4, Exchange log pool 2
Exchange Mailbox server 3: Database LUNs, 1,900 GB thin x 4, Exchange database pool 1; Log LUNs, 110 GB thin x 4, Exchange log pool 1
Exchange Mailbox server 4: Database LUNs, 1,900 GB thin x 4, Exchange database pool 2; Log LUNs, 110 GB thin x 4, Exchange log pool 2
Exchange Mailbox server 5: Database LUNs, 1,900 GB thin x 4, Exchange database pool 1; Log LUNs, 110 GB thin x 4, Exchange log pool 1
Exchange Mailbox server 6: Database LUNs, 1,900 GB thin x 4, Exchange database pool 2; Log LUNs, 110 GB thin x 4, Exchange log pool 2
Exchange Mailbox server 7: Database LUNs, 1,900 GB thin x 4, Exchange database pool 1; Log LUNs, 110 GB thin x 4, Exchange log pool 1
Exchange Mailbox server 8: Database LUNs, 1,900 GB thin x 4, Exchange database pool 2; Log LUNs, 110 GB thin x 4, Exchange log pool 2

To create iSCSI LUNs and unmask them on the VNX array:

1. Select Host > Storage Groups.
2. To create a storage group that unmasks LUNs to the Hyper-V hosts:
   a. Click Create and type a name for the storage group.
   b. Click Yes to finish the creation.
   c. In the prompt dialog box, click Yes to select LUNs or connect hosts.
   d. Select LUNs. Under Available LUNs, select all the LUNs created previously and click Add.

   e. Select Hosts. Under Available Hosts, select the Hyper-V servers to be used and add them to Hosts to be Connected. Click OK to finish.

Figure 5 shows an example of a storage layout for VNX. Next-generation VNX does not require you to manually select specific drives as hot spares. Instead, VNX considers every unbound disk in the array to be available as a spare. VNX always selects an unbound disk that most closely matches the disk type, disk size, and location of the failing or failed disk.

Figure 5. Example of storage layout for EMC VNX

The number of disks used in the VSPEX private cloud pool can vary according to your customer's requirements. For detailed information, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Provisioning storage for Exchange on VNXe

Table 16 shows an example of storage pools for Exchange on VNXe, in addition to the VSPEX private cloud pool. For more information about the storage layout recommendations and design, refer to the Design Guide.

Table 16. Example additional storage pools for Exchange data on VNXe

Exchange data pool 1: RAID 1/0 (4+4), 8 x 2 TB 7,200 rpm NL-SAS disks
Exchange data pool 2: RAID 1/0 (4+4), 8 x 2 TB 7,200 rpm NL-SAS disks

To configure iSCSI network settings, storage pools, and iSCSI LUNs on the VNXe array:

1. In Unisphere, select the VNXe array that is to be used.
2. Select Settings > iSCSI Settings.
3. Configure the IP address for the network ports used for iSCSI.
4. Select Storage > Storage Pools.
5. Click Create to launch the Storage Pool wizard and create the additional storage pools according to the VSPEX Sizing Tool recommendation.

Note: If you receive a warning message that the selected disk number is not recommended for the selected RAID type, you can safely ignore the warning.

6. Select Storage > LUNs.
7. Click Create to launch the LUN wizard, create a LUN group in each Exchange storage pool to contain the Exchange database LUNs and transaction log LUNs, and grant access permission to the Hyper-V hosts.

Table 17 shows an example of an iSCSI LUN layout for Exchange databases and transaction logs. For increased efficiency, the Exchange storage pools use thin LUNs.

Table 17. Example iSCSI LUN layout for Exchange data on VNXe

Exchange Mailbox server 1: Database LUNs, 1,360 GB x 4, Exchange data pool 1; Log LUNs, 80 GB x 4, Exchange data pool 1
Exchange Mailbox server 2: Database LUNs, 1,360 GB x 4, Exchange data pool 2; Log LUNs, 80 GB x 4, Exchange data pool 2

Figure 6 shows an example of the target storage layout for the VNXe system used in this solution. Next-generation VNXe does not require you to manually select specific drives as hot spares. Instead, VNXe considers every unbound disk in the array to be available as a spare. VNXe always selects an unbound disk that most closely matches the disk type, disk size, and location of the failing or failed disk.

The number of disks used in the VSPEX private cloud pool can vary according to your customer's requirements. For detailed information, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Figure 6. Storage layout example for VNXe

Using EMC Storage Integrator to manage storage for Exchange

You can also use EMC Storage Integrator (ESI) to provision and manage storage for Exchange on VNX or VNXe. Figure 7 shows the storage provisioned for Exchange on VNXe. ESI simplifies the various steps involved in viewing, provisioning, and managing block and file storage for Microsoft Windows. For more information, refer to the EMC Storage Integrator for Windows Suite Product Guide.

Figure 7. Using ESI to manage the storage system

Configuring FAST Cache on VNX

The following sections describe the FAST Cache and FAST VP implementation steps on the VNX storage array. Enabling FAST Cache on a VNX series array is transparent to Exchange; no reconfiguration or downtime is necessary.

To use the FAST Cache feature, EMC recommends that you enable FAST Cache on the Exchange database storage pools. Do not enable FAST Cache on the Exchange log storage pools. For more details about FAST Cache best practices, refer to the Design Guide.

To create and configure FAST Cache:

1. Refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading for detailed steps about how to create FAST Cache in Unisphere.
2. After creating the FAST Cache in Unisphere, select Storage > Storage Pool.
3. Choose an Exchange database pool, and click Properties.
4. In Storage Pool Properties, select Advanced, then select Enabled to enable FAST Cache, as shown in Figure 8. Click OK.

Figure 8. Storage pool properties: FAST Cache enabled

Note: FAST Cache on the VNX series array does not cause an instant performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours, during which the performance of the array steadily improves.

Configuring FAST VP on VNX

If FAST VP is enabled on the VNX array, you can add flash disks (or SAS disks) to the Exchange database pool as an extreme performance tier. For Exchange deployments on VNX, pool-based thin LUNs with FAST VP provide a good balance between flexibility and performance. Using thin LUNs to store Exchange database data improves storage efficiency. After FAST VP solid-state drives (SSDs) are added, thin LUN metadata is promoted to the extreme performance tier to boost performance. FAST VP intelligently manages data relocation at the sub-LUN level. For more information about FAST VP design considerations for Exchange, refer to the Design Guide.

To add flash disks to an existing Exchange database pool:

1. In Unisphere, select Storage > Storage Pool.
2. Choose an Exchange database pool and click Properties.
3. Select Disks and click Expand to view the Expand Storage Pool dialog box.
4. Under Extreme Performance, from the list boxes, select the number of flash disks and a RAID configuration to add to the Exchange database storage pool for tiering, as shown in Figure 9. EMC recommends using RAID 5 for the extreme performance tier in the Exchange database storage pool.

5. Under Disks, review the flash drives that you will use for the extreme performance tier. You can choose the drives manually by selecting Manual. Click OK.

Figure 9. Expand Storage Pool dialog box

Microsoft Windows Server 2012 R2 with Hyper-V infrastructure implementation

Overview

This section lists the requirements for installing and configuring the Windows hosts and infrastructure servers required to support the solution architecture. Table 18 describes the tasks required to complete the implementation.

Table 18. Tasks for server installation

Installing the Windows hosts: Install Windows Server 2012 R2 on the physical servers deployed for the solution.
Installing and configuring Failover Clustering: Install and configure Failover Clustering. (Reference: Deploy a Hyper-V Cluster)
Configuring Windows hosts networking: Configure Windows hosts networking, including NIC teaming.
Configuring multipathing: Configure multipathing to optimize connectivity with the storage arrays. (References: Understanding MPIO Features and Components, EMC PowerPath and PowerPath/VE for Windows Installation and Administration Guide)
Configuring the initiator to connect to a VNX or VNXe iSCSI server: Configure the Windows Server 2012 R2 initiator to connect to a VNX or VNXe iSCSI server. (References: Using a VNXe System with FC or iSCSI LUNs, EMC Host Connectivity Guide for Windows)
Publishing the VNXe datastores or VNX LUNs to Hyper-V: Configure the VNX or VNXe to enable the Hyper-V hosts to access the created datastores. (References: VNXe3200 Installation Guide, VNX5600 Unified Installation Guide)
Connecting the Hyper-V datastores: Connect the Hyper-V datastores to the Windows hosts as CSVs. (Reference: EMC Host Connectivity Guide for Windows)

For more details, refer to the relevant VSPEX Proven Infrastructure Guide listed in Essential reading.
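The role-installation and clustering tasks above can be sketched in PowerShell. This is an illustrative outline only, not the validated procedure from this guide; the host names and cluster IP address are hypothetical:

```powershell
# Hedged sketch: enable the Hyper-V role and Failover Clustering on each host.
# Run on every host that will join the cluster; -Restart reboots if required.
Install-WindowsFeature -Name Hyper-V, Failover-Clustering `
    -IncludeManagementTools -Restart

# After all hosts are prepared, validate and form the cluster.
# Node names and the static cluster IP below are hypothetical examples.
Test-Cluster -Node "HyperV-Node1", "HyperV-Node2"
New-Cluster -Name "ExchangeHVCluster" -Node "HyperV-Node1", "HyperV-Node2" `
    -StaticAddress 192.168.1.50
```

Running Test-Cluster before New-Cluster surfaces configuration problems (networking, storage visibility) before the cluster is formed.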

Installing the Windows hosts

Ensure that all the servers in a Hyper-V failover cluster are running the 64-bit version of Windows Server 2012 R2. For detailed steps on how to configure the Windows hosts, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Installing and configuring Failover Clustering

To install and configure Failover Clustering:

1. Apply the latest service pack for Windows Server 2012 R2.
2. Configure the Hyper-V role and the Failover Clustering feature. For detailed steps, refer to the Microsoft TechNet topic Deploy a Hyper-V Cluster.

Configuring Windows host networking

To ensure performance and availability, the solution requires:

At least one NIC for virtual machine networking and management (separated by network or VLAN if desired)
At least two NICs for the iSCSI connection (configured with MCS, MPIO, or PowerPath/VE)
At least one NIC for live migration

Configuring multipathing

To configure additional paths for high availability, use MPIO or MCS with additional network adapters in the server. This creates additional connections to the storage array in Microsoft iSCSI Initiator through redundant Ethernet switch fabrics. For detailed instructions about how to install and configure MPIO or MCS, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Alternatively, you can use PowerPath/VE for optimal performance. PowerPath/VE is host-resident software that works with both VNX and VNXe storage systems to deliver intelligent I/O path management. Using PowerPath, administrators can improve the server's ability to manage heavy storage loads through continuous and intelligent I/O balancing. PowerPath/VE automatically configures multiple paths, and dynamically tunes performance as the workload changes. PowerPath/VE also adds to the high-availability capabilities of the VNX and VNXe storage systems by automatically detecting and recovering from server-to-storage path failures.

For detailed instructions about how to install and configure PowerPath/VE, refer to the EMC PowerPath and PowerPath/VE for Windows Installation and Administration Guide.
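If you use the native Windows MPIO path rather than PowerPath/VE, the feature installation and claim rules can be sketched as follows. This is a generic outline, not the validated configuration from this guide; the load-balance policy shown is an assumption:

```powershell
# Hedged sketch: install the Windows MPIO feature and let the Microsoft DSM
# automatically claim iSCSI-attached devices.
Install-WindowsFeature -Name Multipath-IO -Restart

# Claim iSCSI devices for the Microsoft DSM.
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Round-robin load balancing is shown as an example policy; confirm the
# recommended policy for your array in the EMC documentation.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```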

Configuring the initiator to connect to VNX or VNXe via iSCSI

To connect to the VNX or VNXe targets (iSCSI servers or iSCSI ports), the host uses an iSCSI initiator, which requires the Microsoft iSCSI Software Initiator and the iSCSI initiator service. These services are part of the Windows Server 2012 R2 software; however, the drivers for them are not installed until you start the service. You must start the iSCSI initiator service using the administrative tools.

For instructions on configuring an iSCSI initiator to connect to VNX or VNXe via iSCSI, refer to Using a VNXe System with FC or iSCSI LUNs and the EMC Host Connectivity Guide for Windows.

Publishing VNXe datastores or VNX LUNs to Hyper-V

At the end of the storage implementation process on VNXe, you have datastores that are ready to be published to the Hyper-V hosts. Now that the hypervisors are installed, you must return to Unisphere and add the Hyper-V servers to the list of hosts that are enabled to access the datastores. Because you are using VNXe iSCSI targets in a clustered environment, you must grant datastore access to all the Windows Server 2012 R2 hosts in the Hyper-V cluster.

On VNX, configure the Storage Group to grant LUN access to all the Windows Server 2012 R2 hosts in the Hyper-V cluster. For more information, refer to Using a VNXe System with FC or iSCSI LUNs and the EMC Host Connectivity Guide for Windows.

Connecting Hyper-V datastores

Connect the Hyper-V datastores configured in Storage implementation to the relevant Windows hosts as CSVs. The datastores are used for the virtual server infrastructure. For instructions about how to connect the Hyper-V datastores to the Windows hosts, refer to the EMC Host Connectivity Guide for Windows. After you connect and format the datastores on one of the hosts, enable CSV, and then add the clustered disks as CSV disks. Figure 10 shows the CSV disk used in this solution.

Figure 10. CSV disk in Failover Cluster Manager
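The initiator and CSV steps described above can be outlined in PowerShell. This is a hedged sketch under assumed values; the target portal address and cluster disk name are hypothetical, and the disk must already be formatted and added to cluster storage before the final command:

```powershell
# Hedged sketch: start the iSCSI initiator service and make it start
# automatically, as required before the initiator drivers are available.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register a VNX/VNXe iSCSI target portal (address is a hypothetical example)
# and connect to the discovered targets persistently across reboots.
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.100"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# After the clustered disk is formatted and added to the cluster,
# convert it to a cluster-shared volume ("Cluster Disk 1" is illustrative).
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

Repeat the portal and connection steps on every Hyper-V host in the cluster so that all nodes can reach the shared LUNs.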

Using EMC Storage Integrator to manage CSV disks for Exchange

You can also use ESI to view and manage CSV disks in an efficient manner. Figure 11 shows the same CSV disk in the ESI GUI.

Figure 11. CSV disk in EMC Storage Integrator

For more information, refer to the EMC Storage Integrator for Windows Suite Product Guide.

Exchange Server virtualization implementation

Overview

This section describes the requirements for the installation and configuration of the Exchange host virtual machines, as outlined in Table 19.

Table 19. Exchange host virtual machine installation and configuration

Creating the Exchange virtual machines: Create the virtual machines to be used for the Exchange Server 2013 organization. (Reference: Create a virtual machine)
Installing the Exchange guest OS: Install Windows Server 2012 R2 Datacenter Edition on the Exchange virtual machines. (Reference: Install the guest operating system)
Installing integration services: Install the integration services on the Exchange virtual machines. (Reference: Install or upgrade integration services)
Assigning IP addresses: Assign IP addresses for all the networks in the virtual machines. Join all the Exchange servers to the domain.
Attaching pass-through disks to Exchange virtual machines: Attach the database LUNs and log LUNs to the Exchange Mailbox server virtual machines as pass-through disks.
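The virtual machine creation task listed above can be sketched with the Hyper-V PowerShell cmdlets, using the Mailbox server resource figures from Table 20 in the next section (8 vCPUs, 64 GB memory, 300 GB system volume). The VM name and paths are hypothetical, and this is an outline rather than the validated procedure:

```powershell
# Hedged sketch: create one Exchange Mailbox virtual machine on a CSV with a
# new system-disk VHDX. Name, paths, and sizes are illustrative assumptions.
New-VM -Name "MBX1" `
    -Path "C:\ClusterStorage\Volume1" `
    -NewVHDPath "C:\ClusterStorage\Volume1\MBX1\MBX1-System.vhdx" `
    -NewVHDSizeBytes 300GB `
    -MemoryStartupBytes 64GB

# Assign the vCPU count for the Mailbox server role.
Set-VMProcessor -VMName "MBX1" -Count 8
```

In a clustered deployment, you would then make the virtual machine highly available through Failover Cluster Manager (or Add-ClusterVirtualMachineRole) so that it can fail over between Hyper-V nodes.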

Creating the Exchange virtual machines

EMC recommends that you use the VSPEX Sizing Tool and follow the recommendations in the Design Guide to determine the number of Exchange Server 2013 Mailbox server and Client Access server roles required for your Exchange organization, and the resources (processor, memory, and so on) required for each server role.

Table 20 shows an example of the resources required for each Exchange Server role. In this example, you need to set up two Exchange Mailbox servers and two Client Access servers to support the requirements of a small Exchange organization. The system volumes of all Exchange virtual machines are stored on the VSPEX Proven Infrastructure pool, and are presented as CSV disks in the Hyper-V cluster.

Table 20. Example of Exchange reference virtual machines

Mailbox server: 8 vCPUs, 64 GB memory, 300 GB OS volume, 25 OS volume IOPS, 2 virtual machines
Client Access server: 4 vCPUs, 12 GB memory, 100 GB OS volume, 25 OS volume IOPS, 2 virtual machines

Installing the Exchange guest OS

Install Windows Server 2012 R2 on the Exchange virtual machines and apply the latest service pack.

Installing integration services

EMC recommends that you install the Hyper-V integration software package on the guest OS to improve integration between the physical computer and the virtual machine.

Assigning IP addresses

Assign an IP address for each of the network adapters in all the Exchange virtual machines, according to what you have planned for the IP reservation for each server. Join every server to the existing domain. For more information, refer to the Configuration worksheet for Exchange Server 2013.

Attaching pass-through disks to Exchange virtual machines

To attach the Exchange LUNs to Mailbox server virtual machines as pass-through disks:

1. Ensure that the Hyper-V nodes recognize the newly created Exchange LUNs on VNX or VNXe by opening Computer Management and selecting Rescan Disks, as shown in Figure 12.

Figure 12. Rescanning disks

2. Initialize the disks as follows:
   a. Bring the new Exchange LUNs online.
   b. Initialize the disks.
   c. Switch the LUNs to offline.
3. Add all Exchange LUNs to the Hyper-V cluster in Microsoft Failover Cluster Manager by selecting Storage > Disks > Add Disk, as shown in Figure 13.

Figure 13. Adding disks

4. Expand the Hyper-V node, and then select the Exchange Mailbox server virtual machine that hosts the Exchange LUNs.
5. Right-click the virtual machine and select Settings.
6. Click Add Hardware and select SCSI Controller.
7. Add a hard drive by clicking Add.
8. Select Physical hard disk, select the proper Exchange LUN, and click OK. The selected Exchange LUN is added as a pass-through disk.

Note: Repeat these steps to add any additional pass-through disks planned for this Exchange Mailbox server.

9. Verify the storage disk status, as shown in Figure 14, and ensure that the pass-through disks are correctly assigned to the Exchange Mailbox server virtual machine.
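For a standalone (non-clustered) Hyper-V host, the equivalent of the pass-through attachment steps above can be sketched in PowerShell. The disk number and VM name are hypothetical, and in the clustered configuration described in this guide the disks should be added through Failover Cluster Manager as shown:

```powershell
# Hedged sketch: a physical disk must be offline on the host before it can be
# attached as a pass-through disk. Disk number 5 is a hypothetical example.
Set-Disk -Number 5 -IsOffline $true

# Attach the offline physical disk to the VM's SCSI controller
# as a pass-through disk ("MBX1" is illustrative).
Add-VMHardDiskDrive -VMName "MBX1" -ControllerType SCSI -DiskNumber 5
```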

Figure 14. Configuring pass-through disks in Failover Cluster Manager

Using ESI to manage pass-through disks for Exchange

You can also use ESI to view and manage pass-through disks efficiently. Figure 15 shows the same pass-through disks in the ESI GUI.

Figure 15. Pass-through disks in EMC Storage Integrator

For more information, refer to the EMC Storage Integrator for Windows Suite Product Guide.

Application implementation

Overview

This section provides information about how to implement Exchange 2013 in a VSPEX Proven Infrastructure. Before you implement Exchange 2013, use the Design Guide to plan your Exchange organization based on your business needs. After you complete the tasks in Table 21, the new Exchange organization is ready to be verified and tested.

Table 21. Tasks to implement Exchange Server 2013

Verifying predeployment with Jetstress: Run Jetstress to verify the disk subsystem performance. (Reference: Using Jetstress to verify performance)
Preparing Active Directory: Prepare Active Directory for the Exchange organization. (Reference: Prepare Active Directory and Domains)
Installing the Exchange Server 2013 Mailbox server role: Install the Exchange Server 2013 Mailbox server role, then install the latest Exchange service pack and update rollup. (Reference: Deploy a New Installation of Exchange 2013 Mailbox Server)
Installing the Exchange Server 2013 Client Access server role: Install the Exchange Server 2013 Client Access server role, then install the latest Exchange service pack and update rollup. (Reference: Deploy a New Installation of Exchange 2013 Client Access Server)
Deploying the database availability group (DAG): Deploy the DAG and create multiple copies of each mailbox database to provide high availability for Exchange mailbox databases. (References: Managing Database Availability Groups, Managing Mailbox Database Copies)

Verifying predeployment with Jetstress

You must run Jetstress to verify the disk subsystem performance before you implement the Exchange application. For details, refer to Using Jetstress to verify performance.

Preparing Active Directory

Before you install Exchange Server 2013, complete the following steps to prepare your Active Directory environment for the Exchange organization:

1. Extend the Active Directory schema for Exchange Server 2013 by running the following command:

   Setup /PrepareSchema /IAcceptExchangeServerLicenseTerms

2. Create the required Active Directory containers and set up permissions for the Exchange organization by running the following command. You can also specify the organization name here:

   Setup /PrepareAD /OrganizationName:<organization name> /IAcceptExchangeServerLicenseTerms

3. Prepare the other Active Directory domains by running the following command:

   Setup /PrepareDomain /IAcceptExchangeServerLicenseTerms

For more information on how to prepare Active Directory, refer to the Microsoft TechNet topic Prepare Active Directory and Domains.

Installing the Exchange Server 2013 Mailbox server role

Before installing the Exchange Server roles, confirm that you have completed the steps described in the Microsoft TechNet topic Exchange 2013 Prerequisites. To install the Mailbox server role on a virtual machine, use the Exchange Server 2013 installation media and follow these steps:

1. In the Exchange Server 2013 Setup wizard, under Server Role Selection, select Mailbox role, as shown in Figure 16. Click Next.

Figure 16. Selecting Mailbox role

2. Use the wizard to complete the installation of the Mailbox server role. When the installation is complete, apply the latest service pack and the latest update rollup.
3. Repeat these steps if there are other Exchange Mailbox server virtual machines to deploy.

Installing the Exchange Server 2013 Client Access server role

Use the Exchange Server installation media to install the Exchange Server 2013 Client Access server role on a virtual machine:

1. In the Exchange Server 2013 Setup wizard, under Server Role Selection, select Client Access role, as shown in Figure 17. Click Next.

Figure 17. Selecting Client Access role

2. Follow the wizard to complete the installation, and then apply the latest service pack and the latest update rollup.

3. Repeat these steps if you want to install the Client Access role on other Exchange Client Access server virtual machines.

Deploying the database availability group

A DAG is the base component of the high-availability framework built into Exchange Server 2013. A DAG is a group of up to 16 Mailbox servers that hosts a set of databases and provides automatic database-level recovery from failures that affect individual servers or databases. To deploy a DAG in your Exchange Server 2013 environment:

1. Run the following command to create a DAG:

New-DatabaseAvailabilityGroup -Name <DAG_Name> -WitnessServer <Witness_ServerName> -WitnessDirectory <Folder_Name> -DatabaseAvailabilityGroupIPAddresses <DAG_IP>

2. If you create a DAG on a Mailbox server running Windows Server 2012 R2, prestage the cluster name object (CNO) before adding members to the DAG. For detailed steps, refer to the Microsoft TechNet topic Pre-Stage the Cluster Name Object for a Database Availability Group.

3. Run the following command to add each Mailbox server to the DAG:

Add-DatabaseAvailabilityGroupServer -Identity <DAG_Name> -MailboxServer <Server_Name>

4. Run the following command to create a DAG network:

New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup <DAG_Name> -Name <Network_Name> -Description "Network_Description" -Subnets <SubnetId> -ReplicationEnabled:<$True | $False>

For details about how to manage an Exchange DAG, refer to the Microsoft TechNet topic Managing Database Availability Groups.
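Putting steps 1 through 4 together, a complete DAG creation might look like the following Exchange Management Shell sketch. All names, paths, and addresses (DAG01, CAS01, MBX01, MBX02, the IP address, and the subnet) are hypothetical example values; substitute your own:

```powershell
# Create the DAG with a file share witness on a Client Access server
# (example values throughout).
New-DatabaseAvailabilityGroup -Name "DAG01" -WitnessServer "CAS01" `
    -WitnessDirectory "C:\DAG01" -DatabaseAvailabilityGroupIPAddresses 10.10.10.50

# Add each Mailbox server as a DAG member.
Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "MBX01"
Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "MBX02"

# Create a replication-enabled DAG network for the replication subnet.
New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup "DAG01" `
    -Name "ReplicationNetwork" -Subnets "192.168.10.0/24" -ReplicationEnabled:$True
```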

5. Create Exchange databases by running the following command:

New-MailboxDatabase -Name <Database_Name> -EdbFilePath <Database_File_Path> -LogFolderPath <Log_File_Path> -Server <Mailbox_Server_Name>

6. Add mailbox database copies for each mailbox database by running the following command:

Add-MailboxDatabaseCopy -Identity <Database_Name> -MailboxServer <Server_Name> -ActivationPreference <Preference_Number>

For details, refer to the Microsoft TechNet topic Managing Mailbox Database Copies.

The Exchange organization is now running with the DAG deployed. To verify the functionality and monitor the system's health, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

EMC Powered Backup implementation

VSPEX solutions are sized and tested with EMC Powered Backup, including Avamar and Data Domain. If your solution includes EMC Powered Backup components, refer to EMC Backup and Recovery Options for VSPEX for Virtualized Microsoft Exchange 2013 for detailed information on implementing these options in your VSPEX solution.
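As a worked example of steps 5 and 6, the following sketch creates one database on one DAG member and adds a passive copy on a second member. The database name, file paths, and server names (DB01, MBX01, MBX02) are hypothetical example values:

```powershell
# Create the database and mount it on its first server (example values).
New-MailboxDatabase -Name "DB01" -Server "MBX01" `
    -EdbFilePath "E:\DB01\DB01.edb" -LogFolderPath "F:\DB01_Log"
Mount-Database -Identity "DB01"

# Add a passive copy on the second DAG member. ActivationPreference 2
# makes this copy the second choice for activation during a failover.
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX02" -ActivationPreference 2
```

Placing the database file and the log folder on separate volumes, as sketched here, follows the common practice of isolating database and log I/O.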


Chapter 5: Solution Verification

This chapter presents the following topics:

Baseline infrastructure verification
Exchange Server performance verification
EMC Powered Backup verification

Baseline infrastructure verification

Overview

After you configure the solution, review this section to verify the solution's configuration and functionality, and to ensure that the configuration supports the core availability requirements. Table 22 describes the tasks that you must complete when verifying the VSPEX installation.

Table 22. Tasks for verifying the VSPEX installation

Task: Verifying Hyper-V functionality
Description: Verify the basic Hyper-V functionality of the solution with a post-installation checklist.
Reference: Vendor documentation

Task: Verifying solution component redundancy
Description: Verify the redundancy of the solution components: storage, Hyper-V hosts, and network switches.
Reference: Vendor documentation

Task: Verifying the Exchange DAG configuration
Description: Verify the DAG configuration in the solution.
Reference: Monitoring Database Availability Groups

Task: Monitoring the solution's health
Description: Use tools to monitor the solution's health.
References: Server Health and Performance; EMC VNX Monitoring and Reporting 1.0 User Guide; EMC Unisphere: Unified Storage Management Solution

Verifying Hyper-V functionality

EMC recommends that you verify the Hyper-V configuration on each Hyper-V server before deploying the solution into production. For more detailed information about how to verify Hyper-V functionality, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Verifying solution component redundancy

To ensure that the various components of the solution maintain availability requirements, it is important to test specific scenarios related to maintenance or hardware failure. EMC recommends that you verify the redundancy of the solution components, including storage, Hyper-V hosts, and network switches. For details, refer to the relevant VSPEX Proven Infrastructure Guide in Essential reading.

Verifying the Exchange DAG configuration

To ensure that the Exchange DAG is working smoothly, verify the DAG configuration:

1. Use the following command to verify which Mailbox servers host the active copy of each database:

Get-MailboxDatabaseCopyStatus -Server <Server_Name>

If the status is Mounted, the database is active on this Mailbox server; if the status is Healthy, this is a passive database copy on this Mailbox server. Normally, the active databases are hosted on different Mailbox servers, as shown in Figure 18.

Figure 18. Verifying the DAG configuration

2. Shut down one Mailbox server to simulate a failure.

3. Monitor the database copy status to verify that the DAG detects the failure, as shown in Figure 19, and that the DAG automatically fails over the affected databases to another Mailbox server that hosts a passive copy of those databases.

Figure 19. Verifying that the DAG detects the failure

4. Verify that users can access their mailboxes after the databases are activated on the other Mailbox server.

For more information, refer to the Microsoft TechNet topic Monitoring Database Availability Groups.
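The status checks above can be scripted. The following Exchange Management Shell sketch (hypothetical server names MBX01 and MBX02) summarizes the copy status across both DAG members and runs Exchange's built-in replication health test:

```powershell
# Summarize database copy status on each DAG member (example names).
# Expect one Mounted (active) copy and Healthy passive copies per database.
"MBX01", "MBX02" | ForEach-Object {
    Get-MailboxDatabaseCopyStatus -Server $_ |
        Format-Table Name, Status, CopyQueueLength, ContentIndexState -AutoSize
}

# Run the built-in replication health checks on one member.
Test-ReplicationHealth -Identity "MBX01"
```

Running the same status summary before and after the simulated failure in steps 2 and 3 makes the failover visible: the copy on the failed server changes state, and a previously Healthy passive copy reports Mounted.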