ENTERPRISE HYBRID CLOUD 4.1.1


ENTERPRISE HYBRID CLOUD
September 2017

Abstract
This solution guide provides an introduction to the concepts and architectural options available within Enterprise Hybrid Cloud. It should be used as an aid to deciding on the most suitable configuration for the initial deployment of Enterprise Hybrid Cloud.

This document is not intended for audiences in China, Hong Kong, Taiwan, and Macao.

SOLUTION GUIDE

Copyright
The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA, September 2017.

Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change without notice.

Contents

Chapter 1 Executive Summary
    Enterprise Hybrid Cloud; Document purpose; Audience; Essential reading; Solution purpose; Business challenge; Technology solution; We value your feedback!
Chapter 2 Cloud Management Platform Options
    Overview; Cloud management platform components; Cloud management platform model; Component high availability
Chapter 3 Object Model and Key Foundational Concepts
    Object model overview; Key foundational concepts
Chapter 4 Multisite and Multi-vCenter Protection Services
    vCenter endpoint types; SSO domain types; Protection services; Single-site protection service; Continuous Availability (Single-site) protection service; Continuous Availability (Dual-site) protection service; Disaster recovery (EMC RecoverPoint for Virtual Machines) protection service; Disaster recovery (VMware Site Recovery Manager) protection service; Combining protection services; Multi-vCenter and multisite topologies
Chapter 5 Network Considerations
    Overview; Cross-vCenter VMware NSX; Logical network considerations; Network virtualization; Network requirements and best practices; Enterprise Hybrid Cloud validated network designs using VMware NSX
Chapter 6 Storage Considerations
    Common best practices and supported configurations; Non-replicated storage; Continuous availability storage considerations; Disaster recovery (Site Recovery Manager) storage considerations
Chapter 7 Data Protection (Backup as a Service)
    Overview; Data protection technology; Backup types overview; Key Enterprise Hybrid Cloud backup constructs; Controlling the backup infrastructure
Chapter 8 Enterprise Hybrid Cloud on Converged Infrastructure Systems
    Enterprise Hybrid Cloud on VxRail Appliances; Enterprise Hybrid Cloud on VxRack Systems; Enterprise Hybrid Cloud on VxBlock Systems
Chapter 9 Ecosystem Interactions
    Single-site ecosystems; Continuous availability ecosystem; Disaster recovery ecosystems
Chapter 10 Maximums, Rules, Best Practices, and Restrictions
    Overview; Maximums; VMware Platform Services Controller rules; Active Directory; VMware vRealize tenants and business groups; EMC ViPR tenant and projects rules; Bulk import of virtual machines; Resource sharing; Data protection considerations; RecoverPoint for Virtual Machines best practices; Software resources; Sizing guidance; Restrictions; Component options
Chapter 11 Conclusion
    Conclusion

Chapter 1 Executive Summary

This chapter presents the following topics:
Enterprise Hybrid Cloud
Document purpose
Audience
Essential reading
Solution purpose
Business challenge
Technology solution
We value your feedback!

Enterprise Hybrid Cloud
Enterprise Hybrid Cloud is a converged cloud platform that provides a completely virtualized data center, fully automated by software. It starts with a foundation that delivers IT as a service (ITaaS), with options for high availability, backup and recovery, and disaster recovery (DR) as tiers of service within the same environment. It also provides a framework and foundation for add-on modules, such as database as a service (DBaaS), platform as a service (PaaS), and cloud brokering.

Document purpose
This solution guide provides an introduction to the concepts and architectural options available within Enterprise Hybrid Cloud. It should be used as an aid to deciding on the most suitable configuration for the initial deployment of Enterprise Hybrid Cloud.

Audience
This solution guide is intended for executives, managers, architects, cloud administrators, security managers, developers, and technical administrators of IT environments who want to implement a hybrid cloud infrastructure as a service (IaaS) platform. Readers should be familiar with the VMware vRealize Suite, storage technologies, general IT functions and requirements, and how a hybrid cloud infrastructure accommodates these technologies and requirements.

Essential reading
The Enterprise Hybrid Cloud Reference Architecture Guide describes the reference architecture for Enterprise Hybrid Cloud. The guide introduces the features and functionality of the solution, the solution architecture and key components, and the validated hardware and software environments. The following guides provide further information about various aspects of Enterprise Hybrid Cloud:
Enterprise Hybrid Cloud Reference Architecture Guide
Enterprise Hybrid Cloud Administration Guide
Enterprise Hybrid Cloud Infrastructure and Operations Management Guide
Enterprise Hybrid Cloud Security Management Guide

Solution purpose
Enterprise Hybrid Cloud enables customers to build an enterprise-class, multisite, scalable infrastructure that provides:
Complete management of the infrastructure service lifecycle
On-demand access to and control of network bandwidth, servers, storage, and security
On-demand provisioning, monitoring, protection, and management of infrastructure services by line-of-business users
On-demand provisioning of application blueprints with associated infrastructure resources by line-of-business application owners
Simplified provisioning of backup, continuous availability (CA), and DR services as part of the cloud service provisioning process
The ability to add, modify, or delete services for an application or virtual machine throughout its lifecycle
Maximum asset utilization
Increased scalability, with centrally managed multisite platforms extending IT services to all data centers

Business challenge
While many organizations have successfully introduced virtualization as a core technology within their data center, the benefits of virtualization have largely been restricted to the IT infrastructure owners. End users and business units within customer organizations have not experienced many of the benefits of virtualization, such as increased agility, mobility, and control. Transforming from the traditional IT model to a cloud-operating model involves overcoming the challenges of legacy infrastructure and processes, such as:
Inefficiency and inflexibility
Slow, reactive responses to customer requests
Inadequate visibility into the cost of the requested infrastructure
Limited choice of availability and protection services

The difficulty in overcoming these challenges has given rise to public cloud providers who have built technology and business models catering to the requirements of end-user agility and control. Many organizations are under pressure to provide similar service levels within the secure and compliant confines of the on-premises data center without sacrificing visibility and control. As a result, IT departments must create cost-effective alternatives to public cloud services, alternatives that do not compromise enterprise features such as data protection (DP), DR, and guaranteed service levels.

Technology solution
Enterprise Hybrid Cloud integrates the best of Dell EMC, VCE, and VMware products and services, and empowers IT organizations to accelerate implementation and adoption of a hybrid cloud infrastructure. The solution caters both to customers who want to preserve their investment and make better use of their existing infrastructure and to those who want to build out new infrastructure dedicated to a hybrid cloud.

This solution takes advantage of the strong integration between Dell EMC technologies and the VMware vRealize and vCloud Suites. The solution, developed by Dell EMC and VMware product and services teams, includes Dell EMC scalable storage arrays, VCE converged and hyper-converged infrastructure, integrated Dell EMC and VMware monitoring, and data protection suites to provide the foundation for enabling cloud services within the customer environment.

Enterprise Hybrid Cloud offers several key benefits to customers:
Rapid implementation: Enterprise Hybrid Cloud offers foundation IaaS and can be designed and implemented in a validated, tested, and repeatable way based on VCE converged and hyper-converged infrastructure. This increases time-to-value for the customer while simultaneously reducing risk.
Deliver ITaaS with add-on modules for backup, DR, CA, virtual machine encryption, applications, application lifecycle automation for continuous delivery, ecosystem extensions, and more.
Supported cloud platform: Implementing Enterprise Hybrid Cloud through Dell EMC results in a cloud platform that Dell EMC supports, further reducing the risk associated with the ongoing operations of your hybrid cloud.
Defined upgrade path: Customers implementing Enterprise Hybrid Cloud receive upgrade guidance based on the testing and validation completed by the engineering teams. This upgrade guidance enables customers, partners, and Dell EMC services teams to perform upgrades faster and with much less risk.
Validated and tested integration: Extensive integration testing across Enterprise Hybrid Cloud makes it simpler to use and manage, and more efficient to operate.

We value your feedback!
Dell EMC and the authors of this document welcome your feedback on the solution and the solution documentation. Contact us at EMC.Solution.Feedback@emc.com with your comments.

Authors: Ken Gould, Fiona O'Neill

Chapter 2 Cloud Management Platform Options

This chapter presents the following topics:
Overview
Cloud management platform components
Cloud management platform model
Component high availability

Overview

Management terminology and hierarchy
To understand how the management platform is constructed, it is important to understand how a number of terms are used throughout this guide. Figure 1 shows the relationship between platform, pod, and cluster and their relative scopes as used in Enterprise Hybrid Cloud.

Note: The term pod does not imply a vSphere cluster. A pod may be distributed across multiple vSphere clusters, or multiple pods may exist on the same vSphere cluster.

Figure 1. Cloud management terminology and hierarchy

With respect to the scope of each term, the following distinctions exist:
Platform: The full name is Cloud Management Platform (CMP), an umbrella term representing the entire management environment.
Pod: The full name is Management Pod. Each management pod is a subset of the overall management platform and represents a grouping of Enterprise Hybrid Cloud components designed to deliver specific functions. Pods reside on compute resources but are not themselves the compute resource. While it may be possible for pods to share compute resources with non-Enterprise Hybrid Cloud components (via a Request for Product Qualification (RPQ)), no component can be considered part of a pod without being expressly integrated and tested by Dell EMC Engineering. Management Pod functions may be distributed between vSphere clusters or consolidated onto vSphere clusters, depending on the individual Enterprise Hybrid Cloud managed vCenter endpoint requirements.

Cluster: Refers to individual technology clusters. While "cluster" often refers to vSphere clusters, it can also refer to EMC VPLEX clusters, EMC RecoverPoint clusters, and so on.
Resource pools: Non-default resource pools are used only when two or more management pods are collapsed onto the same vSphere cluster. In this case, they are used to control and guarantee resources for each affected pod.

Cloud management platform components

Full management stack components
When Enterprise Hybrid Cloud is deployed, the first location requires a full management stack to be deployed. Figure 2 shows how the components of the full management stack are distributed among the management pods.

Figure 2. Enterprise Hybrid Cloud full management stack

Note: Figure 2 includes ViPR Controller and ViPR SRM. Neither component is required in Dell EMC VxRail Appliance deployments where VMware vSAN storage is used.
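The platform > pod > cluster terminology above can be sketched as a small data model. This is an illustrative sketch only; the class and field names are assumptions, not part of the product, and it simply shows that pods group components while spanning (or sharing) vSphere clusters:

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    kind: str  # e.g. "vSphere", "VPLEX", "RecoverPoint"

@dataclass
class ManagementPod:
    name: str                                      # "Core", "NEI", "Automation"
    clusters: list = field(default_factory=list)   # a pod may span clusters

@dataclass
class CloudManagementPlatform:
    pods: list = field(default_factory=list)       # CMP is the umbrella term

# Two pods collapsed onto the same vSphere cluster; per the text, non-default
# resource pools would then guarantee resources to each affected pod.
amp = Cluster("AMP", "vSphere")
cmp = CloudManagementPlatform(pods=[
    ManagementPod("Core", clusters=[amp]),
    ManagementPod("Automation", clusters=[amp]),
])

def pods_sharing_cluster(platform, cluster):
    """Pods that reside (at least partly) on the given cluster."""
    return [p.name for p in platform.pods if cluster in p.clusters]

print(pods_sharing_cluster(cmp, amp))  # ['Core', 'Automation']
```

When more than one pod name comes back for a cluster, that is exactly the "collapsed" case in which the document says resource pools come into play.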

Core Pod function
The Core Pod function provides the base set of resources to establish Enterprise Hybrid Cloud services and consists of:
VMware vCenter Server
Microsoft SQL Server
VMware NSX Manager
Log Insight Forwarders and vRealize Operations Manager collectors
EMC SMI-S Provider (on Dell EMC VxBlock Systems)
PowerPath/VE vApp (on VxBlock Systems and Dell EMC VxRack Systems with FLEX)

Note: While Figure 2 depicts vCenter, Update Manager, and Platform Services Controller (PSC) as one cell of related components, they are deployed as separate virtual machines.

When deployed on VCE converged infrastructure systems, the Core Pod is overlaid on the existing management environment and uses the existing management components.

Network Edge Infrastructure Pod function
The Network Edge Infrastructure (NEI) Pod function provides the convergence point for the physical and virtual networks and is required only where VMware NSX is deployed. The NEI Pod function consists of NSX Controllers, north-south NSX Edge Services Gateway (ESG) devices, NSX Distributed Logical Router (DLR) control virtual machines, and an NSX Load Balancer (NLB) to front distributed vRealize components, such as VMware vRealize Automation and VMware vRealize Orchestrator. VMware vSphere Distributed Resource Scheduler (DRS) rules are used to ensure that NSX Controllers are separated from each other, and that primary ESGs are separated from primary DLRs, so that a host failure does not affect network availability.

Note: VMware NSX is a mandatory component on VxRail Appliances, as it is deployed by the Enterprise Hybrid Cloud Automation Tool to provide the required load-balancing function.

Automation Pod function
The Automation Pod function consists of the remaining components required for automating and managing the cloud infrastructure, including those responsible for the user portal, automated provisioning, monitoring, and metering. It is managed by the Cloud vCenter Server instance but is dedicated to automation and management services. Therefore, the compute resources that support the Automation Pod function are not exposed to vRealize Automation business groups. To ensure independent failover capability, the Automation Pod function may not share storage resources with the workload clusters. For the same reasons, it may not share networks, and it should be on a distinctly different Layer 3 network from both the Core Pod and NEI Pod functions.

Note: While Figure 2 depicts vRealize IaaS as one cell of related components, the individual vRealize Automation roles are deployed as separate virtual machines.

Workload Pods
Workload Pods are configured and assigned to fabric groups in vRealize Automation. Available resources are used to host tenant workload virtual machines deployed by business groups in the Enterprise Hybrid Cloud environment. All business groups can share the available VMware vSphere ESXi cluster resources. In VxBlock System and VxRack FLEX configurations, EMC ViPR service requests are initiated from the vRealize Automation catalog to provision Workload Pod storage. For VxRail Appliances, all storage is pre-provisioned at the time of the VxRail Appliance deployment.

Endpoint management stack
When an additional vCenter endpoint is configured for use in Enterprise Hybrid Cloud, that endpoint requires only a subset of the management stack to operate. Figure 3 shows how the components of the endpoint management stack are distributed among the management pods.

Figure 3. Enterprise Hybrid Cloud endpoint management stack

As with the full management stack, the NEI Pod function is required only if VMware NSX is being used. vRealize Automation agents are deployed for each endpoint, following best practice, and are geographically co-located to ensure predictable performance. Additional vRealize Automation Distributed Execution Managers (DEMs) may also be required, depending on the overall run rate of tasks in the cloud environment.

Cloud management platform model

The Enterprise Hybrid Cloud management platform requires a minimum set of resources to comply with the combined best practices of the components used to manage the cloud. These minimums are shown in Table 1.

Note: Minimum host counts depend on the specification of the compute resources being sufficient to support the relevant management virtual machine requirements. The Enterprise Hybrid Cloud sizing tool may recommend a larger number of hosts based on the server specification chosen.

Table 1. Enterprise Hybrid Cloud management model minimums

Hosting platform: VxBlock System/VxRack System (all pods in the Cloud vCenter, no resource pools)
    Core Pod: AMP cluster; minimum 3 hosts (VxBlock)* or 4 hosts (VxRack)
    NEI Pod (NSX configurations only): split between AMP and Edge clusters; Controllers are on AMP, minimum 4 hosts for Edge
    Automation Pod: Automation cluster; minimum 3 hosts

Hosting platform: VxRail Appliance (all pods in the Cloud vCenter, on the VxRail cluster)
    Core, NEI, and Automation Pods: minimum 4 hosts, with one resource pool per pod (Core, NEI, Automation)

* Based on the VCE AMP-2HAP configuration.
Factors affecting minimum host counts
The following considerations are inherent in the numbers presented in Table 1:
The Core Pod function is overlaid on the VCE AMP cluster or VxRail cluster, reducing the overall management requirements.
The ESXi cluster serving as the Edge cluster in VxBlock System and VxRack System configurations (for the NEI Pod function Edges and Controllers) must have at least four nodes when using VMware NSX to avoid a short-term network outage if a host fails.
The ESXi cluster supporting the Automation Pod function must have at least three nodes to meet ViPR best practices for ViPR node separation (VxBlock System and VxRack System configurations only).
When an ESXi cluster supporting a management pod uses vSAN, as with VxRack Systems or VxRail Appliances, there must be at least four nodes in the ESXi cluster to ensure that storage can suffer the loss of a cluster node without compromising storage integrity.
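The minimum-host reasoning above reduces to simple arithmetic: take whichever of CPU or RAM demand needs more hosts, add redundancy hosts, and never fall below the documented per-pod floor. The sketch below is illustrative only; the function name and all figures are hypothetical, not output of the actual sizing tool:

```python
import math

def min_hosts(vm_cpu_ghz, vm_ram_gb, host_cpu_ghz, host_ram_gb,
              pod_floor, redundancy=1):
    """Hosts needed for a pod's management VMs plus N+1 (or N+2)
    redundancy, never dropping below the documented per-pod minimum."""
    # N = hosts required by the more constrained of CPU and RAM demand
    n = max(math.ceil(vm_cpu_ghz / host_cpu_ghz),
            math.ceil(vm_ram_gb / host_ram_gb))
    return max(n + redundancy, pod_floor)

# Hypothetical figures: 40 GHz / 512 GB of management VM demand on hosts
# offering 64 GHz / 384 GB each; Automation Pod floor of 3 hosts.
print(min_hosts(40, 512, 64, 384, pod_floor=3))                # 3 (N+1)
print(min_hosts(40, 512, 64, 384, pod_floor=3, redundancy=2))  # 4 (N+2)
```

This mirrors the sizing tool's described behavior of N+1 by default with N+2 as an option, and shows why a richer host specification can still not drop the count below the pod floor.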

Note: For ultimate resilience and ease of use during maintenance windows, creating vSphere cluster sizes based on N+2 sizing may be appropriate, based on customer preference, where N is the number of hosts required to support the Management Pod functions based on the CPU and RAM requirements of the hosted virtual machines plus host system overhead. The Enterprise Hybrid Cloud sizing tool sizes vSphere clusters based on an N+1 algorithm by default, but also allows N+2 selection.

Component high availability

General high availability
The use of vSphere ESXi clusters with VMware vSphere High Availability (HA) provides general virtual machine protection across the management platform. Further levels of availability are provided by using the high availability options of the individual components, as outlined in the next section.

Distributed vRealize Automation
Enterprise Hybrid Cloud requires the use of distributed vRealize Automation installations. In this model, multiple instances of each vRealize Automation role are deployed behind a load balancer to ensure scalability and fault tolerance.

Note: All-in-one vRealize Automation installations are not supported for production use.

VMware NSX load balancing technology is fully supported, tested, and validated by Enterprise Hybrid Cloud. Other load balancer technologies supported by VMware for use in vRealize Automation deployments are permitted, but configuration assistance for those technologies should be provided by VMware or the specific vendor.

Note: Use of a load balancer technology not officially supported by Enterprise Hybrid Cloud or VMware with vRealize Automation requires an Enterprise Hybrid Cloud RPQ.

VMware NSX is the only supported load balancer technology in a VxRail Appliance configuration and is automatically deployed and configured in that scenario.
Clustered vRealize Orchestrator
Both clustered and stand-alone vRealize Orchestrator installations are supported by Enterprise Hybrid Cloud. Table 2 details the specific component high availability options.

Table 2. vRealize Automation and vRealize Orchestrator HA options

Management model: Distributed
    Distributed vRealize Automation: Supported
    Minimal vRealize Automation (AIO): Not supported
    Clustered vRealize Orchestrator (active/active): Supported (recommended)
    Stand-alone vRealize Orchestrator: Supported

Highly available VMware Platform Services Controller
Highly available configurations for VMware Platform Services Controller are not supported in Enterprise Hybrid Cloud.

Multisite protection for the management platform
The cloud management platform for Enterprise Hybrid Cloud can be made resilient across sites using any of the supported multisite protection models deployed in the Enterprise Hybrid Cloud environment.

Note: When multisite protection is available within the Enterprise Hybrid Cloud environment, Dell EMC recommends that the management platform also be protected, because it may not be possible to recover tenant workloads to a secondary site without first recovering elements of the management platform. We also recommend that you protect the management platform with the highest form of multisite protection available in your Enterprise Hybrid Cloud environment.

Chapter 3 Object Model and Key Foundational Concepts

This chapter presents the following topics:
Object model overview
Key foundational concepts

Object model overview

Overview
Enterprise Hybrid Cloud uses an object model that provides the framework for storing and referencing metadata related to infrastructure and compute resources. The object model acts as the rules engine for provisioning storage, backup service levels, and inter-site or intra-site protection services.

Note: Upgrade of existing Enterprise Hybrid Cloud environments to the object model is fully supported and covered by the included upgrade procedure.

The Enterprise Hybrid Cloud object model is presented to vRealize Orchestrator through a combination of the Enterprise Hybrid Cloud vRealize Orchestrator plug-in and the VMware Dynamic Data Types plug-in. All model data is stored in a Microsoft SQL Server database on the Automation Pod SQL Server instance and can be referenced by all vRealize Orchestrator nodes.

Figure 4 shows the Enterprise Hybrid Cloud object model and the relationships between the individual object types.

Figure 4. Enterprise Hybrid Cloud object model

Viewing the object model in vRealize Orchestrator
Figure 5 shows an example of Enterprise Hybrid Cloud connection objects created in the Enterprise Hybrid Cloud object model, as presented and referenceable through vRealize Orchestrator.

Figure 5. vRealize Orchestrator view of onboarded Enterprise Hybrid Cloud connections in the object model

Key foundational concepts

Sites
Enterprise Hybrid Cloud now supports up to eight sites, depending on the combination of protection services deployed in the environment. In Enterprise Hybrid Cloud, a site is a geographic location and is the key factor in determining the following:
The workloads that may share the same EMC Avamar and EMC Data Domain backup infrastructure. All workloads determined to be on the same site (irrespective of their hardware island, vCenter, or cluster) can share the same backup infrastructure for efficiencies in terms of data center space and backup deduplication.
The vCenters that may be used as partner vCenters for DR relationships. The Enterprise Hybrid Cloud object model enforces the rule that vCenters in a DR relationship must contain two distinct sites.

From a metadata perspective, a site object contains just one user-defined property: the user-provided name for the site.

vCenters
Enterprise Hybrid Cloud supports up to four Enterprise Hybrid Cloud-managed VMware vCenter endpoints. Each managed vCenter endpoint can be configured as a vRealize Automation endpoint with the ability to provide any of the following services (as relevant):
Infrastructure as a service (IaaS)
Storage as a service (STaaS)
Backup as a service (BaaS)

Disaster recovery as a service (DRaaS)

Note: The VxRail Appliance does not support STaaS or DRaaS in Enterprise Hybrid Cloud.

To facilitate DRaaS, each vCenter endpoint may be configured with a vCenter partner for VMware Site Recovery Manager (VMware SRM) or EMC RecoverPoint for Virtual Machines purposes. This allows the solution to scale out in the following ways:
Increases the number of workload virtual machines that may be protected by either SRM or RecoverPoint for Virtual Machines
Increases the number of target sites that workloads can be protected to (via different vCenter DR relationships)

vCenter endpoints can be added to the Enterprise Hybrid Cloud model as a Day 2 operation through the vRealize Automation catalog item vCenter Endpoint Maintenance.

Hardware islands
Hardware islands allow multiple converged infrastructure platforms to be logically aggregated (for scale) or partitioned (for isolation) inside a single vCenter while retaining awareness of physical network, storage, and site boundaries. You can then use this awareness to ensure that the correct inter-site and intra-site protection services are applied to cloud workloads. As a result, the hardware island concept is the key determining factor in configuring vSphere clusters that offer inter-site or intra-site resilience; by definition, these services must span more than one hardware island.

The properties of a hardware island can be defined as:
A set of ESXi clusters and ViPR virtual arrays (ViPR virtual array properties are not required in a VxRail Appliance configuration).

Note: When only VS1S clusters from a VxRail Appliance are included in a hardware island, adding virtual arrays to the hardware island is not required.

Belonging to only one vCenter. Therefore, clusters assigned to a hardware island must be from the corresponding vCenter.
No larger than a single storage area network (SAN) or storage array (in line with the capability of a virtual array). Can include multiple storage arrays if they are all in the same storage area network.

Note: This does not apply to VxRail Appliances.

May include items from different storage area networks if all clusters and arrays assigned to the hardware island are connected to each of those networks. This allows for independent SAN fabrics for redundancy.

Note: A ViPR virtual array may not be a member of more than one hardware island.
Note: A vCenter can contain multiple hardware islands.
Note: When workloads reside on different hardware islands on the same site, they may share the same backup infrastructure, if required.

Clusters
A cluster object is created in the Enterprise Hybrid Cloud object model when a vSphere cluster is onboarded through the vRealize Automation catalog. When clusters are onboarded, each cluster must be given a type, which then dictates the type of storage that may be provisioned to the cluster. Table 3 shows the cluster types available in the model.

Table 3. Cluster types

LC1S: Local Copy on One Site
VS1S: vSAN Storage on One Site (not used by STaaS)
CA1S: CA VPLEX Metro Storage on One Site
CA2S: CA VPLEX Metro Storage across Two Sites
DR2S: DR EMC RecoverPoint Storage across Two Sites

Note: VS1S clusters currently apply only to VxRail Appliance converged infrastructure systems.
Note: Refer to the Enterprise Hybrid Cloud Support Matrix for up-to-date information on supported converged systems.

EMC RecoverPoint for Virtual Machines workloads use LC1S clusters, as the storage is single-site storage and workloads are selectively replicated by EMC RecoverPoint for Virtual Machines rather than by underlying logical unit number (LUN)-level storage technology.

Datastores
Datastore objects are created in the Enterprise Hybrid Cloud model when a storage provisioning operation is carried out by STaaS workflows against an onboarded Enterprise Hybrid Cloud cluster. The cluster type is used to guide the user to provision only suitable storage to that cluster.
After a datastore object is created, the datastore properties regarding its service level offering (Storage Reservation Policy) and other details that support BaaS operations are recorded. VS1S datastores are not created by STaaS but are ingested into the object model when a VS1S cluster is onboarded.
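One of the rules the object model is described as enforcing is that vCenters in a DR relationship must contain two distinct sites. The sketch below illustrates that single check; the class names, fields, and validation function are illustrative assumptions, not the actual Enterprise Hybrid Cloud schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Site:
    # Per the text, a site object carries just one user-defined
    # property: its user-provided name.
    name: str

@dataclass
class VCenter:
    name: str
    sites: set                              # names of sites this vCenter spans
    dr_partner: Optional["VCenter"] = None  # SRM / RP4VM partner, if any

def validate_dr_pair(a: VCenter, b: VCenter) -> None:
    # "vCenters in a DR relationship must contain two distinct sites."
    if len(a.sites | b.sites) < 2:
        raise ValueError("DR partner vCenters must span two distinct sites")

vc_east = VCenter("vc-east", {"Site-A"})
vc_west = VCenter("vc-west", {"Site-B"})
validate_dr_pair(vc_east, vc_west)   # passes: two distinct sites
vc_east.dr_partner, vc_west.dr_partner = vc_west, vc_east
print("DR pair established:", vc_east.name, "<->", vc_west.name)
```

Pairing two vCenters confined to the same site would raise the error, which is the behavior the model's rules engine is said to enforce at onboarding time.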

Table 4 shows the available datastore types that can be created, in keeping with the cluster types that may be onboarded.

Table 4. Datastore types

LC1S: Local Copy on One Site
VS1S: vSAN Storage on One Site (not used by STaaS)
CA1S: CA VPLEX Metro Storage on One Site
CA2S: CA VPLEX Metro Storage across Two Sites
DR2S: DR RecoverPoint Storage across Two Sites
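Tables 3 and 4 imply a one-to-one pairing: a cluster's type dictates the datastore type STaaS may provision to it, with VS1S as the exception (ingested at onboarding, never provisioned by STaaS). A minimal sketch of that guard, with hypothetical function names:

```python
# Cluster type -> datastore type that STaaS may provision to it,
# per Tables 3 and 4. Identity here, but kept as a map so the rule
# lives in data rather than code.
ALLOWED_DATASTORE = {
    "LC1S": "LC1S",  # local single-site storage (also used by RP4VM workloads)
    "CA1S": "CA1S",  # VPLEX Metro within one site
    "CA2S": "CA2S",  # VPLEX Metro across two sites
    "DR2S": "DR2S",  # RecoverPoint-replicated across two sites
}

def can_provision(cluster_type: str, datastore_type: str) -> bool:
    """Would STaaS allow this datastore type on this cluster type?"""
    if cluster_type == "VS1S":
        return False  # vSAN datastores are ingested, not provisioned by STaaS
    return ALLOWED_DATASTORE.get(cluster_type) == datastore_type

print(can_provision("CA2S", "CA2S"))  # True
print(can_provision("CA2S", "DR2S"))  # False
print(can_provision("VS1S", "VS1S"))  # False
```

This is the shape of the "guide the user to provision only suitable storage" behavior described in the Datastores section, not the actual STaaS workflow logic.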

Chapter 4 Multisite and Multi-vCenter Protection Services

This chapter presents the following topics:
vCenter endpoint types
SSO domain types
Protection services
Single-site protection service
Continuous Availability (Single-site) protection service
Continuous Availability (Dual-site) protection service
Disaster recovery (EMC RecoverPoint for Virtual Machines) protection service
Disaster recovery (VMware Site Recovery Manager) protection service
Combining protection services
Multi-vCenter and multisite topologies

vCenter endpoint types

Enterprise Hybrid Cloud-managed vCenter endpoint
An Enterprise Hybrid Cloud-managed vCenter endpoint is an endpoint that, when configured as a vRealize Automation endpoint, can use the virtual machine lifecycle operations provided as part of vRealize Automation and can also use the value-add Enterprise Hybrid Cloud as-a-service offerings.

Infrastructure-as-a-service vCenter endpoint
An IaaS vCenter endpoint is an endpoint that, when configured as a vRealize Automation endpoint, can use only the virtual machine lifecycle operations provided as part of vRealize Automation and cannot use the Enterprise Hybrid Cloud as-a-service offerings.

vCenter endpoint types comparison
Table 5 shows a side-by-side comparison of vCenter endpoint types.

Table 5. vCenter endpoint types comparison

Managed vCenter endpoint, services supported: IaaS (vRA native functionality), STaaS, Continuous Availability, DRaaS, BaaS
IaaS vCenter endpoint, services supported: IaaS (vRA native functionality)

Note: STaaS and DRaaS are not available on VxRail Appliance platforms in Enterprise Hybrid Cloud.

SSO domain types

Enterprise Hybrid Cloud-managed SSO domain
An Enterprise Hybrid Cloud-managed single sign-on (SSO) domain hosts managed vCenter endpoints (refer to Enterprise Hybrid Cloud-managed vCenter endpoint). All managed vCenter endpoints configured in the Enterprise Hybrid Cloud instance must be part of the same (managed) SSO domain. There may be only one managed SSO domain per hybrid cloud instance. A managed SSO domain may also host IaaS-only vCenter endpoints if required, up to the maximum number of vCenter endpoints configured per domain (refer to vCenter maximums).

Infrastructure-as-a-service SSO domain
An IaaS-only SSO domain hosts only IaaS vCenter endpoints. The domain may host more than one IaaS vCenter endpoint.
Enterprise Hybrid Cloud can support multiple IaaS-only SSO domains up to the maximum number of SSO Domains permitted per hybrid cloud instance (refer to SSO maximums). 26 Enterprise Hybrid Cloud 4.1.1
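The placement rules above can be summarized in a short validation sketch. This is illustrative only, not Enterprise Hybrid Cloud code; the data shapes and the per-domain limit are assumptions for the example (the actual maximums are defined under vCenter maximums and SSO maximums).

```python
MAX_VCENTERS_PER_SSO_DOMAIN = 4  # assumed limit for this illustration

def validate_sso_domains(domains):
    """domains: list of dicts like {"type": "managed"|"iaas", "endpoints": [...]}.

    Encodes the rules above: one managed SSO domain per instance; managed
    endpoints live only in the managed domain; per-domain endpoint limit.
    """
    managed = [d for d in domains if d["type"] == "managed"]
    if len(managed) > 1:
        raise ValueError("only one managed SSO domain per hybrid cloud instance")
    for domain in domains:
        if len(domain["endpoints"]) > MAX_VCENTERS_PER_SSO_DOMAIN:
            raise ValueError("too many vCenter endpoints in one SSO domain")
        # IaaS-only domains may not host managed endpoints; a managed
        # domain may host both managed and IaaS endpoints.
        if domain["type"] == "iaas" and any(
            e["type"] == "managed" for e in domain["endpoints"]
        ):
            raise ValueError("managed endpoints belong in the managed SSO domain")
    return True
```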

Protection services

Protection service availability attributes

Enterprise Hybrid Cloud provides the following intra-site and inter-site protection features, which can be combined to offer multiple tiers of service to different workloads within the same Enterprise Hybrid Cloud deployment. The attributes of the protection services are:

- Converged/hyper-converged infrastructure redundancy: Virtual machine workloads benefit from the redundancy of the internal components of a converged infrastructure platform, such as VxRail Appliances, VxRack Systems, or VxBlock Systems, including redundant compute, network, and storage components.

- Inter-converged/hyper-converged infrastructure redundancy: Virtual machine workloads are insulated against the failure of a converged/hyper-converged infrastructure platform by replicating the workload to a second converged infrastructure platform on the same site.

- Inter-site redundancy: Virtual machine workloads are insulated against the failure of an entire site location by replicating the workload to a second converged infrastructure platform on an alternate site.

- Local backup: Virtual machine workloads can be backed up locally to shared backup infrastructure, with a single copy of each backup image retained.

- Replicated backup: Virtual machine workloads can be backed up locally to shared backup infrastructure, and each backup image is replicated to, and restorable from, shared backup infrastructure on an alternate site.

Note: When both local and replicated backup options are available within the Enterprise Hybrid Cloud environment, the backup type applied to a given virtual machine workload matches the level of intra- or inter-site protection applied to the workload itself. For example, a workload protected by a DR protection service is automatically assigned a replicated backup.
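The note above amounts to a simple mapping from protection service to backup type. The sketch below is inferred from the service descriptions in this chapter, not taken from product code, and the service keys are shorthand labels for illustration.

```python
# Backup type implied by each protection service, per the service
# descriptions in this chapter (illustrative mapping only).
BACKUP_TYPE_FOR_SERVICE = {
    "single-site": "local",        # non-replicated backups
    "ca-single-site": "local",     # same-site CA keeps non-replicated backups
    "ca-dual-site": "replicated",  # backup images replicated between sites
    "dr-rp4vm": "replicated",      # DR services imply replicated backups
    "dr-srm": "replicated",
}

def backup_type(protection_service):
    """Return the backup level automatically assigned to a workload."""
    return BACKUP_TYPE_FOR_SERVICE[protection_service]
```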
Protection service offerings

Enterprise Hybrid Cloud offers several types of protection services:

- Single-site protection: Suitable when you have just a single site and do not require inter-site protection. It can be used in its own right or as the base deployment on top of which you can layer additional multisite protection services. It is designed to permit use of shared backup infrastructure with non-replicated backups.

- Continuous Availability (Single-site) protection: Suitable when you want to provide additional resilience for single-site workloads. It is designed to provide storage and compute resilience for workloads on the same site, using shared backup infrastructure but maintaining non-replicated backups.

- Continuous Availability (Dual-site) protection: Suitable when you want to provide inter-site resilience for workloads and have two sites within 10 ms latency of each other. It is designed to provide storage and compute resilience for workloads across sites, using local shared backup infrastructure and replicating backup images between sites.

- Disaster recovery (EMC RecoverPoint for Virtual Machines) protection: Suitable when you want to provide inter-site resilience for workloads, with individual workload-level failover, but have sites more than 10 ms latency apart. It is designed to provide storage and compute resilience for workloads across sites, using local shared backup infrastructure and replicating backup images between sites.

- Disaster recovery (Site Recovery Manager) protection: Suitable when you want to provide inter-site resilience for workloads, but have sites more than 10 ms latency apart and where recovering virtual machine workloads at the ESXi cluster level of granularity is acceptable. It is designed to provide storage and compute resilience for workloads across sites, using local shared backup infrastructure and replicating backup images between sites.

The detailed features of each protection service are described in subsequent sections. Table 6 presents the protection types and their attributes (CI = converged/hyper-converged infrastructure).

Table 6. Available Enterprise Hybrid Cloud protection services

  Protection service                     | CI redundancy | Inter-CI redundancy | Inter-site redundancy | Local backup | Replicated backup
  Single site                            | Yes           | -                   | -                     | Yes          | -
  Continuous Availability (Single-site)  | Yes           | Yes                 | -                     | Yes          | -
  Continuous Availability (Dual-site)    | Yes           | Yes                 | Yes                   | Yes          | Yes
  Disaster recovery (RecoverPoint for    | Yes           | -                   | Yes                   | Yes          | Yes
  Virtual Machines)                      |               |                     |                       |              |
  Disaster recovery (Site Recovery       | Yes           | -                   | Yes                   | Yes          | Yes
  Manager)                               |               |                     |                       |              |
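As the offerings above describe, the choice among services hinges on site count, inter-site latency (the 10 ms stretched-cluster threshold), and failover granularity. The helper below is a sketch of that decision logic for illustration only; it is not a product API.

```python
def suggest_protection_service(sites, inter_site_latency_ms=None,
                               per_vm_failover=False):
    """Suggest a protection service from the criteria in this chapter.

    sites: number of sites available.
    inter_site_latency_ms: round-trip latency between the two sites, if known.
    per_vm_failover: whether individual workload-level failover is required.
    """
    if sites == 1:
        # Single site: base service, optionally with same-site CA.
        return "single-site or CA (single-site)"
    if inter_site_latency_ms is not None and inter_site_latency_ms <= 10:
        # Within 10 ms, a stretched cluster with VPLEX Metro is viable.
        return "CA (dual-site)"
    # Beyond 10 ms, pick the DR flavor by failover granularity.
    if per_vm_failover:
        return "DR (RecoverPoint for Virtual Machines)"
    return "DR (Site Recovery Manager)"  # cluster-level failover acceptable
```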

Single-site protection service

Architecture

Figure 6 shows an example of the single-site protection service, where multiple vCenters and hardware islands are used on a single site, managed by Enterprise Hybrid Cloud.

Figure 6. Enterprise Hybrid Cloud single-site service

The single-site protection service has the following attributes:

- It allows a maximum of four VMware vCenter endpoints to be configured for use as Enterprise Hybrid Cloud-managed vCenter endpoints. This allows up to four sites to be configured in this manner when no other protection services are available.

- Each vCenter endpoint can have between one and four hardware islands configured (where a hardware island is an island of compute and storage, such as a converged infrastructure from VCE).

- Workloads that use this protection service are bound by the confines of the site to which they were deployed, such that:
  - Virtual machine workloads cannot be restarted on any other site.
  - Virtual machine backup images are not replicated to any other site.
  - There is no inter-converged infrastructure (intra-site) protection available to the virtual machine workloads.

- Workloads on any vCenter, hardware island, or cluster may share the same backup infrastructure.

Continuous Availability (Single-site) protection service

Architecture

Figure 7 shows an example of the Continuous Availability (Single-site) protection service, where one vCenter and two hardware islands are used on a single site, managed by Enterprise Hybrid Cloud.

Figure 7. Enterprise Hybrid Cloud Continuous Availability (Single-site) service

The Continuous Availability (Single-site) protection service has the following attributes:

- This service allows a maximum of four vCenter endpoints to be configured for use as Enterprise Hybrid Cloud-managed vCenter endpoints. This allows up to four sites to be configured in this manner when no other protection services are available.

- Each vCenter endpoint may have between one and four hardware islands configured (where a hardware island is an island of compute and storage, such as a converged infrastructure from VCE).

- Workloads using this protection service are bound by the confines of the site to which they were deployed, such that:
  - Virtual machine workloads cannot be restarted on any other site.
  - Virtual machine backup images are not replicated to any other site.

- Virtual machine workloads using this service may be recovered from one converged infrastructure platform to another on the same site, using VMware vSphere Metro Stretched Cluster backed by EMC VPLEX Metro storage. This combination allows the use of VMware vSphere vMotion for proactive movement of workloads before a known event, or the use of vSphere HA for reactive restart of those workloads if an unpredicted failure event occurs.

- Workloads may exist on both sides of the continuous availability clusters in active/active fashion.

- Workloads on any vCenter, hardware island, or cluster may use shared backup infrastructure.

Hardware island affinity for tenant virtual machines

When single-site continuous availability is deployed, there are two copies of the storage volumes on the same site, controlled by VPLEX Metro. In this case, each leg of the stretched cluster is provided by a different hardware island associated with the same vCenter and the same site. Each hardware island should be backed by a different converged infrastructure platform (such as a VxBlock System) to provide inter-converged infrastructure redundancy.

To maximize uptime of workloads in a failure scenario, virtual machine workloads deployed to a CA1S (CA VPLEX Metro Storage on One Site) cluster are automatically added to VMware DRS affinity groups. The affinity group that an individual workload is associated with is based on the storage chosen by the user and on the hardware island that hosts the winning leg of that distributed storage if a storage partition event occurs. As a result, only the failure of the winning leg of a distributed storage volume requires a vSphere HA restart of that virtual machine workload on another member of the vSphere cluster, associated with the other hardware island.

To avail of single-site continuous availability protection, the hwi_srp_policy_enabled Enterprise Hybrid Cloud global option must be set to True. This ensures that Enterprise Hybrid Cloud STaaS operations create VMware vRealize Storage Reservation Policies (SRPs) that convey to the user which hardware island is the winning site when selecting storage for use by a virtual machine workload. The value of this option can be changed with the vRealize Automation catalog item Global Options Maintenance.

Hardware island affinity for management platform machines

If the Enterprise Hybrid Cloud management stack is protected by single-site continuous availability, the management component virtual machines should be configured in affinity groups in a similar way to tenant workloads. This is done as part of the Enterprise Hybrid Cloud deployment process.
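The affinity behaviour described above can be sketched as follows. The SRP catalog structure and the naming are assumptions for illustration, not the actual vRealize Automation data model: the point is that the VM's DRS host group follows the hardware island holding the winning leg of its chosen distributed storage.

```python
def affinity_group_for_vm(vm_storage_policy, srp_catalog):
    """Return the DRS host-group name a workload should be pinned to.

    srp_catalog maps a Storage Reservation Policy name to the hardware
    island that hosts the winning leg of that VPLEX distributed volume
    (the island whose leg survives a storage partition event).
    """
    winning_island = srp_catalog[vm_storage_policy]
    return f"drs-hosts-{winning_island}"

# Hypothetical SRP catalog for a CA1S cluster with two hardware islands.
srp_catalog = {
    "SRP-Gold-HWI1": "hwi1",
    "SRP-Gold-HWI2": "hwi2",
}
```

A workload deployed with "SRP-Gold-HWI1" would therefore land in the host group backed by hardware island 1, so only the loss of that island's (winning) storage leg forces a vSphere HA restart on the other island's hosts.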

Continuous Availability (Dual-site) protection service

Architecture

Figure 8 shows an example of the Continuous Availability (Dual-site) protection service, where one vCenter and two hardware islands are used across two different sites, managed by Enterprise Hybrid Cloud.

Figure 8. Enterprise Hybrid Cloud Continuous Availability (Dual-site) service

The Continuous Availability (Dual-site) protection service has the following attributes:

- This service allows a maximum of four vCenter endpoints to be configured for use as Enterprise Hybrid Cloud-managed vCenter endpoints. This allows up to eight sites to be configured in this manner when no other protection services are available.

- Each vCenter endpoint may have between one and four hardware islands configured (where a hardware island is an island of compute and storage, such as a converged infrastructure from VCE).

- Workloads using this protection service may operate on either of the two sites participating in any given CA relationship, such that:
  - Virtual machine workloads may be restarted on either of the two sites.
  - Virtual machine backup images are replicated to the other site and are available for restore.

- Virtual machine workloads using this service may be recovered from one converged infrastructure platform to another on different sites, using vSphere Metro Stretched Cluster backed by VPLEX Metro storage. This combination allows the use of vMotion for proactive movement of workloads before a known event, or the use of vSphere HA for reactive restart of those workloads if an unpredicted failure event occurs.

- Workloads may exist on both sides of the continuous availability clusters in active/active fashion.

- Workloads on hardware islands or clusters on the same site may all use the shared backup infrastructure local to that site. When workloads move to the other site, the shared backup infrastructure on that site takes ownership of executing backups for those workloads.

Site affinity for tenant virtual machines

When dual-site continuous availability is deployed, there are two copies of the storage volumes on different sites, controlled by VPLEX Metro. In this case, each leg of the stretched cluster is provided by a different hardware island, on a different site, but associated with the same vCenter. Each hardware island should be backed by a different converged infrastructure platform (such as a VxBlock System) to provide inter-converged infrastructure redundancy.

To maximize uptime of workloads in a failure scenario, virtual machine workloads deployed to a CA2S (CA VPLEX Metro Storage across Two Sites) cluster are automatically added to VMware DRS affinity groups. These groups are used to subdivide the vSphere ESXi hosts in each workload cluster into groupings of hosts corresponding to their respective sites. The specific affinity group that an individual workload is associated with is based on the storage chosen by the user and on the hardware island, or site, that hosts the winning leg of that distributed storage if a storage partition event occurs. As a result, only the failure of the winning leg of a distributed storage volume requires a vSphere HA restart of that virtual machine workload on another member of the vSphere cluster, associated with the other hardware island.

Site affinity for management platform machines

If the Enterprise Hybrid Cloud management stack is protected by dual-site continuous availability, the management component virtual machines should be configured in affinity groups in a similar fashion to tenant workloads. This is done as part of the Enterprise Hybrid Cloud deployment process.

Disaster recovery (EMC RecoverPoint for Virtual Machines) protection service

Architecture

Figure 9 shows an example of the EMC RecoverPoint for Virtual Machines-based disaster recovery protection service, where two vCenters and two hardware islands are used across two different sites, managed by Enterprise Hybrid Cloud.

Figure 9. Enterprise Hybrid Cloud (EMC RecoverPoint for Virtual Machines) disaster recovery service

The RecoverPoint for Virtual Machines-based disaster recovery protection service has the following attributes:

- This service allows a maximum of four vCenter endpoints to be configured for use as Enterprise Hybrid Cloud-managed vCenter endpoints. This allows up to four sites to be configured in this manner when no other protection services are available.

- Each vCenter endpoint may have between one and four hardware islands configured (where a hardware island is an island of compute and storage, such as a converged infrastructure from VCE).

- Workloads using this protection service may operate on either of the two sites participating in any given DR relationship, such that:
  - Virtual machine workloads may be recovered on either of the two sites.
  - Virtual machine backup images are replicated to the other site and are available for restore.

- Virtual machine workloads using this service may be individually recovered from one converged infrastructure platform to another, on different sites, using the failover mechanisms provided by RecoverPoint for Virtual Machines.

- Tenant workload clusters are logically paired across vCenters and sites to ensure that networking and backup policies operate correctly both before and after failover.

- Workloads may exist on both sides of the cluster pairing in active/active fashion.

- Workloads on hardware islands or clusters on the same site may all use the shared backup infrastructure local to that site. When workloads move to the other site, the shared backup infrastructure on that site takes ownership of executing backups for those workloads.

Disaster recovery (VMware Site Recovery Manager) protection service

Architecture

Figure 10 shows an example of the VMware SRM-based disaster recovery service, where two vCenters and two hardware islands are used across two different sites, managed by Enterprise Hybrid Cloud.

Figure 10. Enterprise Hybrid Cloud (VMware Site Recovery Manager) disaster recovery service

The SRM-based disaster recovery protection service has the following attributes:

- This service allows a maximum of two vCenter endpoints to be configured for use as Enterprise Hybrid Cloud-managed vCenter endpoints. This allows up to two sites to be configured in this manner when no other protection services are available.

Note: Only one Enterprise Hybrid Cloud-managed vCenter DR pair is permitted to offer VMware SRM-based disaster recovery protection services.

- Each vCenter endpoint may have between one and four hardware islands configured (where a hardware island is an island of compute and storage, such as a converged infrastructure from VCE).

- Workloads using this protection service may operate on either of the two sites participating in any given DR relationship, such that:
  - Virtual machine workloads may be recovered on either of the two sites.
  - Virtual machine backup images are replicated to the other site and are available for restore.

- Tenant workload clusters are logically paired across vCenters and sites to ensure that networking and backup policies operate correctly both before and after failover. Failover is at cluster-level granularity, in that all virtual machine workloads on a specified cluster pair must fail over and fail back as a unit.

- Virtual machine workloads using this service can recover from one converged infrastructure platform to another on a different site using the failover mechanisms provided by VMware SRM.

- Workloads may exist on only one side of the cluster pairing, in active/passive fashion. Additional active/passive clusters in the opposite configuration are required to achieve active/active sites.

- Workloads on hardware islands or clusters on the same site may all use the shared backup infrastructure local to that site. When workloads move to the other site, the shared backup infrastructure on that site takes ownership of executing backups for those workloads.

Combining protection services

Protection services may be combined within a single Enterprise Hybrid Cloud environment to offer multiple tiers of service within the same hybrid cloud.
The protection levels described earlier can be combined in the following ways:

- Two services: Single-site plus RecoverPoint for Virtual Machines DR
- Two services: Single site plus VMware Site Recovery Manager-based DR
- Three services: Single site plus single/dual-site CA
- Three services: Single site plus RecoverPoint for Virtual Machines DR plus Site Recovery Manager DR
- Four services: Single site plus single/dual-site CA and RecoverPoint for Virtual Machines-based DR
- Four services: Single site plus single/dual-site CA and Site Recovery Manager-based DR
- Five services: Single site, single/dual-site CA, RecoverPoint for Virtual Machines-based DR, and Site Recovery Manager-based DR
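The combinations above share a pattern: every deployment starts from the single-site base service and layers CA and/or the DR variants on top. The sketch below encodes that rule for illustration; the service labels and set literals are assumptions, not a product API.

```python
# Extras that may be layered on the single-site base, per the list above.
ALLOWED_EXTRAS = [
    set(),                          # single-site only
    {"rp4vm-dr"},                   # two services
    {"srm-dr"},                     # two services
    {"ca"},                         # three services (single/dual-site CA)
    {"rp4vm-dr", "srm-dr"},         # three services
    {"ca", "rp4vm-dr"},             # four services
    {"ca", "srm-dr"},               # four services
    {"ca", "rp4vm-dr", "srm-dr"},   # five services
]

def combination_supported(services):
    """Check a requested service mix against the supported combinations."""
    services = set(services)
    if "single-site" not in services:
        return False  # single-site is always the base tier
    return (services - {"single-site"}) in ALLOWED_EXTRAS
```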

Two services: Single-site plus RecoverPoint for Virtual Machines DR

The RecoverPoint for Virtual Machines-based disaster recovery service inherently offers both the single-site and RecoverPoint for Virtual Machines protection services, because the compute and storage resources that provide both options are the same. When RecoverPoint for Virtual Machines-based DR is deployed in Enterprise Hybrid Cloud, any virtual machine workload can be individually toggled from the single-site protection service to the RecoverPoint for Virtual Machines protection service and back again, as required. When this is done, the backup protection level automatically changes between local and replicated backup as appropriate.

Figure 11 shows an example where RecoverPoint for Virtual Machines has been deployed within the Enterprise Hybrid Cloud environment, using two vCenter endpoints.

Figure 11. Combining single-site and RecoverPoint for Virtual Machines-based disaster recovery protection services

Figure 11 shows an example where two protection services are provided within the same Enterprise Hybrid Cloud environment, using two vCenter endpoints:

- Single-site workloads: These workloads are not associated with any RecoverPoint for Virtual Machines replication components. Backups occur to the relevant local backup infrastructure only. In Figure 11, these workloads reside on the compute resource labelled Local/RP4VM Clusters.

- RecoverPoint for Virtual Machines workloads: RecoverPoint for Virtual Machines-protected workloads are associated with the RecoverPoint for Virtual Machines replication components. Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site. In Figure 11, these workloads reside on the compute resource labelled Local/RP4VM Clusters.

Additional vCenters can be added (in keeping with the parameters in Supported topologies) to provide additional scale, or to provide the same type of protection service between a different combination of sites.

Two services: Single site plus VMware Site Recovery Manager-based DR

When the single-site and SRM-based disaster recovery protection levels are combined in an Enterprise Hybrid Cloud environment, virtual machine workloads can choose from two different tiers of protection service, as shown in Figure 12.

Figure 12. Combining single site and Site Recovery Manager-based disaster recovery protection services

Figure 12 shows an example where both protection services are provided within the same Enterprise Hybrid Cloud environment, using two vCenter endpoints:

- Single-site workloads: These workloads are automatically directed to compute resources, storage, and backup policies that reflect that tier of service, that is, non-replicated storage and non-replicated backups. They are not associated with SRM replication components. In Figure 12, these workloads reside on the compute resource labelled Local Clusters. Backups occur to the relevant local backup infrastructure only.

- Site Recovery Manager workloads: SRM-protected workloads are associated with SRM replication components. Backups occur to the relevant local backup

infrastructure and are replicated to the backup infrastructure on the other site. In Figure 12, these workloads reside on the compute resource labelled DR Clusters.

Additional vCenters can be added (in keeping with the parameters in Supported topologies) to provide additional scale, or to provide the same type of protection service between a different combination of sites.

Note: Only one Enterprise Hybrid Cloud-managed vCenter DR pair is permitted to offer Site Recovery Manager-based disaster recovery protection services.

Three services: Single site plus single/dual-site CA

When the single-site and continuous availability protection levels are combined in an Enterprise Hybrid Cloud environment, virtual machine workloads can choose from three different tiers of protection service, as shown in Figure 13.

Figure 13. Combining single-site and continuous availability protection services

Figure 13 shows an example where all three protection services are provided within the same Enterprise Hybrid Cloud environment, using a single vCenter endpoint:

- Single-site workloads: These workloads are automatically directed to compute resources, storage, and backup policies that reflect that tier of service, that is, non-replicated storage and non-replicated backups. In Figure 13, these workloads reside on the compute resource labelled Local Clusters. Backups occur to the backup infrastructure on Site A only.

- CA single site: If multiple VPLEX clusters are deployed on the same site, workloads may avail of the replicated storage on the same site, but continue to use non-replicated backups. In Figure 13, these workloads reside on the compute resource labelled Continuous Availability Clusters (Single Site). Backups occur to the backup infrastructure on Site A only.

- CA dual site: If VPLEX clusters are deployed to both sites, workloads may avail of the replicated storage to a different site, and additionally avail of replicated backups. In Figure 13, these workloads reside on the compute resource labelled Continuous Availability Clusters (Dual Site). Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site.

Additional vCenters can be added (in keeping with the parameters in Supported topologies) to provide the same type of protection service between a different combination of sites.

Three services: Single site plus RecoverPoint for Virtual Machines DR plus Site Recovery Manager DR

When the single-site, RecoverPoint for Virtual Machines-based disaster recovery, and SRM-based disaster recovery protection levels are combined in an Enterprise Hybrid Cloud environment, virtual machine workloads can choose from three different tiers of protection service, as shown in Figure 14.

Figure 14. Combining single site, RecoverPoint for Virtual Machines disaster recovery, and Site Recovery Manager-based disaster recovery protection services

In this combination, the Site Recovery Manager and RecoverPoint for Virtual Machines protection services can be provided by the same or different vCenter pairs. Figure 14 shows an example where all three protection services are provided within the same Enterprise Hybrid Cloud environment, using two vCenter endpoints:

- Single-site workloads: These workloads are not associated with any RecoverPoint for Virtual Machines replication components. Backups occur to the relevant local backup infrastructure only. In Figure 14, these workloads reside on the compute resource labelled Local/RP4VM Clusters.

- RecoverPoint for Virtual Machines workloads: RecoverPoint for Virtual Machines-protected workloads are associated with the RecoverPoint for Virtual Machines replication components. Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site. In Figure 14, these workloads reside on the compute resource labelled Local/RP4VM Clusters.

- Site Recovery Manager workloads: SRM-protected workloads are associated with SRM replication components. Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site. In Figure 14, these workloads reside on the compute resource labelled DR Clusters.

Additional vCenters can be added (in keeping with the parameters in Supported topologies) to provide additional scale, or to provide the same type of protection service between a different combination of sites.

Note: Only one Enterprise Hybrid Cloud-managed vCenter DR pair is permitted to offer Site Recovery Manager-based disaster recovery protection services.
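Where single-site and RecoverPoint for Virtual Machines workloads share clusters, as above, the per-workload toggle described earlier switches the protection service and the backup level together. A minimal sketch of that behaviour follows; the workload record shape is an assumption for illustration, not the product's data model.

```python
def toggle_rp4vm_protection(vm):
    """Flip a workload between single-site and RP4VM protection.

    The backup level follows the protection service automatically:
    DR protection implies replicated backups, single-site implies local.
    """
    if vm["protection"] == "single-site":
        vm["protection"] = "rp4vm-dr"
        vm["backup"] = "replicated"
    else:
        vm["protection"] = "single-site"
        vm["backup"] = "local"
    return vm
```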

Four services: Single site plus single/dual-site CA and RecoverPoint for Virtual Machines-based DR

When the single-site, continuous availability, and RecoverPoint for Virtual Machines protection levels are combined in an Enterprise Hybrid Cloud environment, virtual machine workloads can choose from four different tiers of protection service, as shown in Figure 15.

Figure 15. Combining single site, continuous availability, and RecoverPoint for Virtual Machines-based disaster recovery protection services

Figure 15 shows an example where all four protection services are provided within the same Enterprise Hybrid Cloud environment, using two vCenter endpoints:

- Single-site workloads: These workloads are automatically directed to compute resources, storage, and backup policies that reflect that tier of service, that is, non-replicated storage and non-replicated backups. They are not associated with any RecoverPoint for Virtual Machines replication components. In Figure 15, these workloads reside on the compute resource labelled Local/RP4VM Clusters. Backups occur to the relevant local backup infrastructure only.

- CA single site: If multiple VPLEX clusters are deployed on the same site, workloads may avail of the replicated storage on the same site, but continue to use non-replicated backups. In Figure 15, these workloads reside on the compute resource labelled Continuous Availability Clusters (Single Site). Backups occur to the backup infrastructure on Site A only.

- CA dual site: If VPLEX clusters are deployed to both sites, workloads may avail of the replicated storage to a different site, and additionally avail of replicated backups. In Figure 15, these workloads reside on the compute resource labelled Continuous Availability Clusters (Dual Site). Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site.

- RecoverPoint for Virtual Machines workloads: RecoverPoint for Virtual Machines-protected workloads are associated with the RecoverPoint for Virtual Machines replication components. Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site. In Figure 15, these workloads reside on the compute resource labelled Local/RP4VM Clusters.

Additional vCenters can be added (in keeping with the parameters in Supported topologies) to provide additional scale, or to provide the same type of protection service between a different combination of sites.

Four services: Single site plus single/dual-site CA and Site Recovery Manager-based DR

When the single-site, continuous availability, and Site Recovery Manager protection levels are combined in an Enterprise Hybrid Cloud environment, virtual machine workloads can choose from four different tiers of protection service, as shown in Figure 16.

Figure 16. Combining single site, continuous availability, and Site Recovery Manager-based disaster recovery protection services

45 Chapter 4: Multisite and Multi-vCenter Protection Services Figure 16 shows an example where all four protection services are provided within the same Enterprise Hybrid Cloud environment, using two vcenter endpoints: Single-site workloads Are automatically directed to compute resources, storage, and backup policies that reflect that tier of service, that is, non-replicated storage and non-replicated backups. In Figure 16, these workloads reside on the compute resource labelled Local Clusters. Backups occur to the relevant local backup infrastructure only. CA single site If multiple VPLEX clusters are deployed on the same site, then workloads may avail of the replicated storage on the same site, but continue to use non-replicated backups. In Figure 16, these workloads reside on the compute resource labelled Continuous Availability Clusters (Single Site). Backups occur to the backup infrastructure on Site A only. CA dual site If VPLEX clusters are deployed to both sites, then workloads may avail of the replicated storage to a different site, but additionally avail of replicated backups. In Figure 16, these workloads reside on the compute resource labelled Continuous Availability Clusters (Dual Site). Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site. Site Recovery Manager workloads SRM-protected workloads are associated with SRM replication components. Backups occur to the relevant local backup infrastructure, and are replicated to the backup infrastructure on the other site. In Figure 16, these workloads reside on the compute resource labelled DR Clusters. Additional vcenters can be added (in keeping with the parameter in Supported topologies) to provide additional scale, or to provide the same type of protection service between a different combination of sites. 
Note: Only one Enterprise Hybrid Cloud-managed vCenter DR pair is permitted to offer Site Recovery Manager-based disaster recovery protection services.

Five services: Single site, single/dual-site CA, RecoverPoint for Virtual Machines-based DR, and Site Recovery Manager-based DR

When single-site, continuous availability, RecoverPoint for Virtual Machines-based disaster recovery, and SRM-based disaster recovery protection levels are combined in an Enterprise Hybrid Cloud environment, virtual machine workloads can choose from five different tiers of protection service, as shown in Figure 17.

Figure 17. Combining single-site, CA, RecoverPoint for Virtual Machines-based disaster recovery, and Site Recovery Manager-based disaster recovery protection services

In this combination, Site Recovery Manager and RecoverPoint for Virtual Machines protection services can be provided by the same or different vCenter pairs. Figure 17 shows an example where all five protection services are provided within the same Enterprise Hybrid Cloud environment, using two vCenter endpoints:

- Single-site workloads: These workloads are not associated with any RecoverPoint for Virtual Machines replication components. Backups occur to the relevant local backup infrastructure only. In Figure 17, these workloads reside on the compute resource labelled Local/RP4VM Clusters.
- CA single site: If multiple VPLEX clusters are deployed on the same site, workloads may avail of the replicated storage on the same site, but continue to use non-replicated backups. In Figure 17, these workloads reside on the compute resource labelled Continuous Availability Clusters (Single Site). Backups occur to the backup infrastructure on Site A only.
- CA dual site: If VPLEX clusters are deployed to both sites, workloads may avail of the replicated storage to a different site, and additionally avail of replicated backups. In Figure 17, these workloads reside on the compute resource labelled Continuous Availability Clusters (Dual Site). Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site.
- RecoverPoint for Virtual Machines workloads: These workloads are associated with the RecoverPoint for Virtual Machines replication components. Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site. In Figure 17, these workloads reside on the compute resource labelled Local/RP4VM Clusters.
- Site Recovery Manager workloads: SRM-protected workloads are associated with SRM replication components. Backups occur to the relevant local backup infrastructure and are replicated to the backup infrastructure on the other site. In Figure 17, these workloads reside on the compute resource labelled DR Clusters.

Additional vCenters can be added (in keeping with the parameters in Supported topologies) to provide additional scale, or to provide the same type of protection service between a different combination of sites.
Note: Only one Enterprise Hybrid Cloud-managed vCenter DR pair is permitted to offer Site Recovery Manager-based disaster recovery protection services.

Note: RecoverPoint for Virtual Machines-based disaster recovery and Site Recovery Manager-based disaster recovery services can both be provided from the same pair of hardware islands. Figure 17 shows them in separate hardware islands simply to demonstrate that this is also possible.

Multi-vCenter and multisite topologies

This section provides some sample topologies, and guidance on the parameters you need to consider when designing a topology that meets your requirements.

Supported topologies

The following topologies represent samples of the configurations that Enterprise Hybrid Cloud can support. You can configure the supplied topologies as they are, or variants of them, as long as you abide by the scalability-limit parameters listed in Table 7.

Table 7. Scale considerations when designing an Enterprise Hybrid Cloud topology

Scale consideration: Limit
- Number of vCenter endpoints with full Enterprise Hybrid Cloud services: 4
- Number of vCenter endpoints associated with more than one Enterprise Hybrid Cloud site: 4
- Number of associated hardware islands per managed vCenter endpoint: 4
- Number of managed vCenter endpoints that can support the Continuous Availability (Single-site) and Continuous Availability (Dual-site) protection services: 4 (VxBlock System-based endpoints)
- Number of vCenter DR partners per managed vCenter endpoint: 1
- Number of vCenter DR pairs: 2
- Number of vCenter pairs that can support the RecoverPoint for Virtual Machines-based disaster recovery protection service: 2
- Number of RecoverPoint for Virtual Machines-based disaster recovery protected virtual machine workloads per vCenter pair: 1,024*
- Number of vCenter pairs that may offer SRM-based disaster recovery protection: 1
- Number of Site Recovery Manager-based disaster recovery protected virtual machine workloads: 5,000
- Number of managed vCenter endpoints that can support backup as a service
- Number of virtual machines across the full solution, assuming full vCenter count: ,000

* If you protect the Automation Pod using RecoverPoint for Virtual Machines, then the vCenter DR pair that supports the Automation Pod can only support 896 individually protected tenant virtual machine workloads as a consequence.
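The limits in Table 7 lend themselves to a simple pre-design check. The sketch below is illustrative only: the limit values are taken from the legible entries in Table 7, while the function and field names are hypothetical and not part of the product.

```python
# Illustrative sketch: validate a planned Enterprise Hybrid Cloud topology
# against the scalability limits in Table 7. Field names are hypothetical;
# only the limit values come from the table.

LIMITS = {
    "vcenter_endpoints_full_services": 4,
    "hardware_islands_per_vcenter": 4,
    "vcenter_dr_partners_per_vcenter": 1,
    "vcenter_dr_pairs": 2,
    "rp4vm_dr_vcenter_pairs": 2,
    "rp4vm_protected_vms_per_pair": 1024,
    "srm_dr_vcenter_pairs": 1,
}

def validate_topology(plan: dict) -> list:
    """Return a list of human-readable violations (empty list = within limits)."""
    violations = []
    for key, limit in LIMITS.items():
        value = plan.get(key, 0)
        if value > limit:
            violations.append(f"{key}: planned {value} exceeds limit {limit}")
    return violations

# Example: a plan with two SRM-protected vCenter pairs breaks the single-pair limit.
plan = {
    "vcenter_endpoints_full_services": 4,
    "vcenter_dr_pairs": 2,
    "srm_dr_vcenter_pairs": 2,
}
print(validate_topology(plan))  # ['srm_dr_vcenter_pairs: planned 2 exceeds limit 1']
```

A check like this is only a planning aid; the authoritative limits remain those published for the specific Enterprise Hybrid Cloud release.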

Sample topology: Four vCenters on a single site

Figure 18 shows how four vCenters on the same site might be deployed. The first vCenter requires the full management stack, while the other vCenters require just the endpoint management stack.

Figure 18. Four vCenters on one site sample topology

This type of topology is suitable when you:
- Have a single site but multiple vCenter endpoints
- Want to provide STaaS and BaaS services to all vCenters on the site

Sample topology: Four vCenters across five sites with all protection services

Figure 19 shows how four vCenters across five sites might be deployed. The first vCenter requires the full management stack to be deployed, and in this example that vCenter is protected via continuous availability between sites. Subsequent vCenters require just the endpoint management stack.

Figure 19. Four vCenters across five sites with all services sample topology

A topology such as this is suitable when you:
- Have five geographical sites
- Want to provide STaaS, BaaS, and DRaaS services to all sites/vCenters
- Want to provide CA services to Site A and Site B vCenters
- Have two sites within 10 ms latency of each other to provide Continuous Availability (Dual-site) protection
- Have other sites that exceed 10 ms latency from each other but that still need inter-site protection for virtual machine workloads. In this case, the topology offers RecoverPoint for Virtual Machines-based disaster recovery and Site Recovery Manager-based disaster recovery protection for those workloads.

Sample topology: Four vCenters across four sites with RecoverPoint for Virtual Machines and SRM-based DR protection services

Figure 20 shows how four vCenters across four sites might be deployed. The first and second vCenters both require the full management stack in this example, because the Automation Pod is to be protected between sites A and B via either Site Recovery Manager or RecoverPoint for Virtual Machines. Subsequent vCenters require just the endpoint management stack.

Figure 20. Four vCenters across four sites with DR services sample topology

A topology such as this is suitable when you:
- Have four sites and four vCenters
- Want to provide STaaS, BaaS, and DRaaS services to all sites/vCenters
- Want to provide inter-site protection for the management platform
- Want to provide both RecoverPoint for Virtual Machines and Site Recovery Manager-based disaster recovery protection services to virtual machine workloads
- Do not want to provide continuous availability protection services, or the latency between sites exceeds 10 ms and therefore precludes that option

Sample topology: Four vCenters across three sites with RecoverPoint for Virtual Machines and SRM-based DR protection services

Figure 21 shows how four vCenters across three sites might be deployed. The first and second vCenters, between Site A and Site B, both require the full management stack to be deployed, because the Automation Pod is protected between those vCenters using RecoverPoint for Virtual Machines. Subsequent vCenters require just the endpoint management stack.

Figure 21. Four vCenters across three sites with DR services sample topology

A topology such as this is suitable when you:
- Have three sites and four vCenters
- Want to provide STaaS, BaaS, and DRaaS services for all sites/vCenters
- Want to provide inter-site protection for the management platform
- Want to provide both RecoverPoint for Virtual Machines and Site Recovery Manager-based disaster recovery protection services to virtual machine workloads
- Do not want to provide continuous availability protection services, or the latency between sites exceeds 10 ms and therefore precludes that option

Sample topology: Four vCenters across four sites with SRM-based DR protection service and remote endpoints

Figure 22 shows another example of how four vCenters across four sites might be deployed. The first and second vCenters both require the full management stack to be deployed, because the Automation Pod is protected between those vCenters using Site Recovery Manager. Subsequent vCenters require just the endpoint management stack.

Figure 22. Four vCenters across four sites including SRM and two remote/branch endpoints

A topology such as this is suitable when you:
- Have four sites and four vCenters
- Want to provide STaaS and BaaS services for all sites/vCenters
- Want to provide DR services between two sites/vCenters only
- Want to provide just Site Recovery Manager-based disaster recovery protection services to virtual machine workloads between those two sites
- Want to provide single-site protection only to the remaining two sites/vCenters
- Want to provide inter-site protection for the management platform
- Do not want to provide continuous availability protection services, or the latency between sites exceeds 10 ms and therefore precludes that option

Sample topology: Four vCenters across four sites with RecoverPoint for Virtual Machines-based DR protection service and remote endpoints

Figure 23 shows another example of how four vCenters across four sites might be deployed. The first and second vCenters both require the full management stack to be deployed, because the Automation Pod is protected between those vCenters using RecoverPoint for Virtual Machines. Subsequent vCenters require just the endpoint management stack.

Figure 23. Four vCenters across four sites including RecoverPoint for Virtual Machines and two remote/branch endpoints

A topology such as this is suitable when you:
- Have four sites and four vCenters
- Want to provide STaaS and BaaS services for all sites/vCenters
- Want to provide DR services between two sites/vCenters only
- Want to provide just RecoverPoint for Virtual Machines-based disaster recovery protection services to virtual machine workloads between those two sites
- Want to provide single-site protection only to the remaining two sites/vCenters
- Want to provide inter-site protection for the management platform
- Do not want to provide continuous availability protection services, or the latency between sites exceeds 10 ms and therefore precludes that option

Sample topology: Four vCenters across four sites with RP4VM-based DR protection service

Figure 24 shows another example of how four vCenters across four sites might be deployed. The first and second vCenters both require the full management stack to be deployed, as the Automation Pod is protected between those vCenters using RecoverPoint for Virtual Machines. Subsequent vCenters require just the endpoint management stack.

Figure 24. Four vCenters across four sites with RecoverPoint for Virtual Machines protection

A topology such as this is suitable when you:
- Have four sites and four vCenters
- Want to provide STaaS, BaaS, and DRaaS services to all sites/vCenters
- Want to provide just RecoverPoint for Virtual Machines-based disaster recovery protection services to virtual machine workloads
- Want to provide inter-site protection for the management platform
- Do not want to provide continuous availability protection services, or the latency between sites exceeds 10 ms and therefore precludes that option

Chapter 5: Network Considerations

This chapter presents the following topics:
- Overview
- Cross-vCenter VMware NSX
- Logical network considerations
- Network virtualization
- Network requirements and best practices
- Enterprise Hybrid Cloud validated network designs using VMware NSX

Overview

Introduction to Enterprise Hybrid Cloud networking

Enterprise Hybrid Cloud provides a network architecture that is resilient in the event of failure, enables optimal throughput, and provides secure separation. This chapter presents a number of generic logical network topologies, the networking requirements of the different protection services available within Enterprise Hybrid Cloud, and the network designs that are pre-validated by Enterprise Hybrid Cloud.

Cross-vCenter VMware NSX

Using cross-vCenter NSX

Enterprise Hybrid Cloud permits the use of cross-vCenter VMware NSX networking objects in all except the Site Recovery Manager-based DR protection service. This feature allows multiple NSX Managers to be federated together in a primary/secondary relationship, which enables the deployment of NSX network and security components across multiple vCenters. These cross-vCenter network and security components are referred to as universal, and can only be managed on the primary NSX Manager. Network and security objects that are not universal are referred to as standard or local objects, and must be managed from their associated NSX Manager.

Replication of universal objects takes place from the primary NSX Manager to the secondary managers so that each manager has the configuration details for all universal objects. This allows a secondary NSX Manager to be promoted if the primary NSX Manager has failed.

Universal network objects

The universal distributed logical router (UDLR) and the universal logical switch (ULS) are used to span networks and create east-west routing across vCenters.

Note: The version of vRealize Automation used with Enterprise Hybrid Cloud supports universal network objects. Enterprise Hybrid Cloud supports universal network objects under all protection services except Site Recovery Manager-based disaster recovery.
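The primary/secondary federation model above can be pictured as a simple replication scheme. The following is a conceptual sketch only (not an NSX API; all class and object names are hypothetical): universal objects are created on the primary, replicated to every secondary, and a secondary can therefore be promoted with a complete universal configuration.

```python
# Conceptual model of cross-vCenter NSX federation (not an NSX API).
# Universal objects are managed on the primary manager and replicated to
# secondaries; a secondary can be promoted if the primary fails.

class NSXManager:
    def __init__(self, name, role):
        self.name = name
        self.role = role             # "primary" or "secondary"
        self.universal_objects = {}  # replicated cross-vCenter configuration
        self.local_objects = {}      # standard objects, managed locally only

def create_universal_object(primary, secondaries, name, config):
    """Universal objects can only be created on the primary manager."""
    assert primary.role == "primary"
    primary.universal_objects[name] = config
    for mgr in secondaries:          # replication keeps secondaries in sync
        mgr.universal_objects[name] = config

def promote(secondary):
    """A secondary already holds all universal config, so it can take over."""
    secondary.role = "primary"
    return secondary

primary = NSXManager("nsx-mgr-a", "primary")
secondary = NSXManager("nsx-mgr-b", "secondary")
create_universal_object(primary, [secondary], "universal-ls-web", {"vni": 5001})

# After a primary failure, the secondary has the full universal configuration:
promote(secondary)
print(secondary.role, "universal-ls-web" in secondary.universal_objects)  # primary True
```

The key property the sketch illustrates is that promotion requires no configuration transfer at failure time, because replication has already happened.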
Universal security objects

The universal section of the Distributed Firewall (DFW) and universal security groups can be used to apply firewall rules across vCenters.

Note: While the version of vRealize Automation used with Enterprise Hybrid Cloud supports universal security objects, Enterprise Hybrid Cloud disaster recovery protection services currently support only standard NSX security objects.

Protecting the primary NSX Manager and the universal controller cluster

There is a single primary NSX Manager and a single universal controller cluster in a cross-vCenter NSX environment; therefore, the placement and protection of these components must be considered carefully. The primary NSX Manager is connected to one of the cloud vCenters in the Enterprise Hybrid Cloud system. The universal controller cluster can only be deployed to clusters that are part of that cloud vCenter.

If the Enterprise Hybrid Cloud system uses VPLEX to support either Continuous Availability (Single-site) or Continuous Availability (Dual-site) protection, then the primary NSX Manager and the universal controller cluster should be VPLEX protected with the higher of the two levels available.

Logical network considerations

Distributed vSwitch requirements

There must be at least one distributed vSwitch in the cloud vCenter if NSX is in use. Multiple distributed vSwitches are supported.

Note: While the minimum is one distributed vSwitch per vCenter, Dell EMC recommends two distributed switches in the cloud vCenter: the first for cloud management networks, and the second for tenant workload networks. The sample layouts provided later in this chapter use this model and indicate which networks are on each distributed switch with the labels vDS1 or vDS2. Additional distributed switches can be created for additional tenants if required.

Supported routing protocols

Enterprise Hybrid Cloud has validated network designs using both Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP) in DR environments. Dell EMC recommends BGP over OSPF, but both have been validated.

Note: Dell EMC recommends BGP for DR solutions because it offers more granular control for path selection and exceeds the flexibility of OSPF in cross-vCenter NSX networks.

Network virtualization

Supported technologies

Enterprise Hybrid Cloud supports two different virtual networking technologies:
- VMware NSX for vSphere (recommended)
- VMware vSphere Distributed Switch (VDS), backed by appropriate technologies

The dynamic network services with vRealize Automation highlighted in this guide require NSX. VDS supports static networking configurations only, precluding the use of VXLANs.

Enterprise Hybrid Cloud attributes with and without VMware NSX

Table 8 compares the attributes, support, and responsibility for various aspects of the Enterprise Hybrid Cloud protection services when used with, and without, VMware NSX.

Table 8. Comparing Enterprise Hybrid Cloud attributes with and without VMware NSX

Single Site and Continuous Availability (Single-site)

With NSX:
- Provides the fully tested and validated load balancer component for vRealize Automation and other Automation Pod components.
- vRealize Automation multi-machine blueprints may use networking components provisioned dynamically by NSX.
- Supports the full range of NSX functionality supported by VMware vRealize Automation.

Without NSX:
- Requires a non-NSX load balancer for vRealize Automation and other Automation Pod components. Load balancers listed as supported by VMware are permitted, but the support burden falls to VMware and/or the relevant vendor.
- vRealize Automation blueprints must use pre-defined vSphere networks only (no dynamic provisioning of networking components is possible).
- Possesses fewer security features due to the absence of NSX.
- Reduces network routing efficiency due to the lack of east-west kernel-level routing options provided by NSX.

Continuous Availability (Dual-site)

With NSX:
- Provides the fully tested and validated load balancer component for vRealize Automation and other Automation Pod components.
- vRealize Automation multi-machine blueprints may use networking components provisioned dynamically by NSX.
- Supports the full range of NSX functionality supported by VMware vRealize Automation.
- Enables automatic path failover when the preferred site fails.
- Enables VXLAN over Layer 2 or Layer 3 DCI to support tenant workload networks being available in both physical locations.

Without NSX:
- Requires a non-NSX load balancer for vRealize Automation and other Automation Pod components. Load balancers listed as supported by VMware are permitted, but the support burden falls to VMware and/or the relevant vendor.
- vRealize Automation blueprints must use pre-defined vSphere networks only (no dynamic provisioning of networking components is possible).
- Possesses fewer security features due to the absence of NSX.
- Reduces network routing efficiency due to the lack of east-west kernel-level routing options provided by NSX.
- Requires Layer 2 VLANs present at both sites to back tenant virtual machine vSphere port groups.

RecoverPoint for Virtual Machines-based disaster recovery

With NSX:
- Provides the fully tested and validated load balancer component for vRealize Automation and other Automation Pod components.
- Does not support inter-site protection of dynamically provisioned VMware NSX networking artefacts.
- Supports consistent NSX security group membership by ensuring virtual machines are placed in corresponding predefined standard security groups across sites via Enterprise Hybrid Cloud workflows.
- Provides IP mobility and simultaneous network availability on both sites using cross-vCenter NSX network configuration and universal NSX network objects for both the management pods and the tenant workloads.
- Honors NSX security tags applied to a virtual machine on the protected site prior to failover.

Without NSX:
- Requires a non-NSX load balancer for vRealize Automation and other Automation Pod components. Load balancers listed as supported by VMware are permitted, but the support burden falls to VMware and/or the relevant vendor.
- vRealize Automation blueprints must use pre-defined vSphere networks only (no dynamic provisioning of networking components is possible).
- Possesses fewer security features due to the absence of NSX.
- Reduces network routing efficiency due to the lack of east-west kernel-level routing options provided by NSX.
- Requires customer-supplied technology to ensure the network is available simultaneously on both sites to enable IP mobility.

Site Recovery Manager-based disaster recovery

With NSX:
- Provides the fully tested and validated load balancer component for vRealize Automation and other Automation Pod components.
- Does not support inter-site protection of dynamically provisioned VMware NSX networking artefacts.
- Supports consistent NSX security group membership by ensuring virtual machines are placed in corresponding predefined standard security groups across sites via Enterprise Hybrid Cloud workflows.
- Allows fully automated network reconvergence for tenant resource pod networks on the recovery site via Enterprise Hybrid Cloud workflows, the redistribution capability of BGP/OSPF, and the use of NSX redistribution policies.
- Does not honor NSX security tags applied to a virtual machine on the protected site prior to failover.

Without NSX:
- Requires a non-NSX load balancer for vRealize Automation and other Automation Pod components. Load balancers listed as supported by VMware are permitted, but the support burden falls to VMware and/or the relevant vendor.
- vRealize Automation blueprints must use pre-defined vSphere networks only (no dynamic provisioning of networking components is possible).
- Possesses fewer security features due to the absence of NSX.
- Reduces network routing efficiency due to the lack of east-west kernel-level routing options provided by NSX.
- Requires customer-supplied IP mobility technology.
- Requires a manual or customer-provided reconvergence process for tenant resource pods on the recovery site.

NSX coexistence with Cisco ACI

Enterprise Hybrid Cloud with VMware NSX can coexist with Cisco Application Centric Infrastructure (ACI), but integration with Cisco ACI is not supported.

Supported with Enterprise Hybrid Cloud RPQ:
- ACI as data center fabric: In this configuration, the Top-of-Rack (ToR) switches are in NX-OS mode and connected to ACI leaf switches.
- ACI as underlay: In this configuration, the ToR switches are configured as ACI leaf switches. The configuration must match the VMware reference design; refer to Design Guide for VMware NSX running with a Cisco ACI Underlay Fabric. Support for any issues that arise with this configuration is provided by VMware's Network and Security Business Unit (NSBU).

Not supported:
- ACI integration with vSphere/vCenter. This includes the use of Cisco AVS or VMM integration (APIC-controlled VDS).
- Enterprise Hybrid Cloud integration with Cisco ACI. There are currently no plans to integrate Enterprise Hybrid Cloud with Cisco ACI.

Network requirements and best practices

IP mobility

Single site: Single-site workloads do not migrate from one site to another; therefore, there is no requirement for IP mobility technology to support this protection level.

Continuous Availability (Single-site): Single-site continuous availability workloads do not migrate from one site to another; therefore, there is no requirement for IP mobility technology to support this protection level.

Continuous Availability (Dual-site): Continuous availability workloads can operate on either site, or be divided between sites. Because virtual machine workloads may migrate at any time, the network they use should span sites so that the workloads continue to operate if a migration happens. When using VMware NSX, this can be provided by VXLAN technology.
Disaster recovery (Site Recovery Manager): All SRM-based disaster recovery workloads on a DR2S (DR RecoverPoint storage across two sites) cluster pair may operate on one site or the other at any given time. The networks for that cluster must therefore be active only on the protected side of that cluster pair. During failover, those networks must be re-converged to the other site so that the workloads can benefit from using the same IP addressing.

When using VMware NSX, this can be provided by Enterprise Hybrid Cloud customizations that dynamically find those networks and interact with multiple NSX Manager instances to achieve that re-convergence.

Disaster recovery (RecoverPoint for Virtual Machines): RecoverPoint for Virtual Machines-based DR workloads may move individually between sites and vCenter endpoints. This requires that the underlying network infrastructure can support two or more virtual machines with IPv4 addresses from the same subnet running on different sites and vCenters at the same time. When using VMware NSX, this can be achieved using cross-vCenter NSX VXLAN networking configurations.

VMware NSX best practices

Best practices common to all protection services

VMware anti-affinity rules should be used to ensure that the following conditions are true during optimum conditions:
- NSX Controllers reside on different hosts
- NSX ESGs using ECMP reside on different hosts
- NSX DLR Control virtual machines reside on different hosts
- NSX ESG and DLR Control virtual machines reside on different hosts

Note: This enforces a four-node minimum on the cluster hosting the NSX Edges and DLRs.

Continuous availability additional best practices

VMware anti-affinity rules should be used to ensure that all NSX Controllers reside on a given converged infrastructure (CI) platform/site, and move laterally within that platform/site before moving to the alternate platform/site.
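The anti-affinity conditions above amount to a set of "must not share a host" constraints. As a rough standalone illustration (outside any vSphere or DRS API; the VM and host names are hypothetical), a planned placement can be checked like this:

```python
# Illustrative placement check for the NSX anti-affinity best practices above.
# Each group names VMs that must not share an ESXi host. This is a standalone
# sketch, not a vSphere/DRS API call; all names are hypothetical.

from itertools import combinations

# Planned VM -> host placement across a four-node cluster (the minimum
# implied by the best practices above).
placement = {
    "nsx-controller-1": "esxi-01",
    "nsx-controller-2": "esxi-02",
    "nsx-controller-3": "esxi-03",
    "esg-ecmp-1": "esxi-01",
    "esg-ecmp-2": "esxi-02",
    "dlr-control-1": "esxi-03",
    "dlr-control-2": "esxi-04",
}

# Groups whose members must reside on different hosts, per the best practices.
anti_affinity_groups = [
    ["nsx-controller-1", "nsx-controller-2", "nsx-controller-3"],
    ["esg-ecmp-1", "esg-ecmp-2"],
    ["dlr-control-1", "dlr-control-2"],
    ["esg-ecmp-1", "dlr-control-1"],  # keep ESG and DLR Control VMs apart
    ["esg-ecmp-2", "dlr-control-2"],
]

def check_anti_affinity(placement, groups):
    """Return (vm_a, vm_b) pairs that violate a shared-host constraint."""
    violations = []
    for group in groups:
        for a, b in combinations(group, 2):
            if placement[a] == placement[b]:
                violations.append((a, b))
    return violations

print(check_anti_affinity(placement, anti_affinity_groups))  # [] -> all rules satisfied
```

In a real deployment these constraints would be expressed as DRS anti-affinity rules rather than checked by hand; the sketch only makes the four-node minimum and the pairwise separation explicit.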

NEI Pod sizing considerations

Table 9 indicates the effect on NEI Pod sizing of combining VMware NSX best practices with your chosen Enterprise Hybrid Cloud protection services.

Table 9. Effect on vSphere cluster sizing for hosting the NEI function

Protection services (vSphere cluster nodes to host NSX Controllers / vSphere cluster nodes to host NSX Edges/DLRs):
- Single Site: 3 / 4
- Continuous Availability Single Site (NEI function on vSphere Metro storage clusters)
- Continuous Availability Single Site (NEI function on common infrastructure in a third fault domain)
- Continuous Availability Single Site (NEI function on one of the two CI platforms)*
- Continuous Availability Dual Site (NEI function on vSphere Metro storage clusters)
- Disaster Recovery (Site Recovery Manager and RecoverPoint for Virtual Machines)**

* This places a dependency on one of the CI systems, such that the failure of that system could result in a network outage for the remaining CI systems.
** Because there is an independent cluster per site in these configurations, the requirement is per cluster (that is, per site).

Note: The numbers in Table 9 do not indicate the need for a dedicated vSphere cluster to host the NEI Pod. They indicate that the cluster must have this number of nodes to facilitate best-practice placement of NEI Pod components. The same cluster (depending on the configuration) can also host other management pod components; for example, in a VxRail Appliance topology the same four-node cluster can support the Core, NEI, Automation, and Tenant pods.

Traffic ingress and egress considerations

Table 10 describes the traffic ingress and egress considerations to weigh when deciding on the NEI Pod configuration.

Table 10. Traffic ingress and egress considerations

- Single Site: No special considerations. All traffic is bound to a single site.
- Continuous Availability (Single Site): No inter-site latency considerations. Review the NEI Pod sizing considerations, as the method of Edge cluster protection may affect routing and bandwidth between CI platforms.
- Continuous Availability (Dual Site): To enable local egress on both sides, Dell EMC recommends that you configure separate NSX DLRs that are peered with independent NSX Edge devices per site. Workloads should then be connected to the DLR/Edge combination that is local to their preferred site. After a site failure or a majority move of workloads from one site to another, Edge gateways should be migrated to the surviving site or the site with the majority of the workload.
- Disaster recovery (Site Recovery Manager): Any given network is only active on the protected side of a DR2S cluster pair. As each member of a cluster pair is in a different vCenter and uses only the NSX Edges relevant to that vCenter and site, there are no special traffic routing considerations for these workloads.
- Disaster recovery (RecoverPoint for Virtual Machines): All networks on LC1S clusters are simultaneously spread across two different sites and vCenters. The NSX Edge gateway for any given network can only reside on one site or the other. As a result, any virtual machine on that network residing on the opposite site to the Edge gateway must cross the inter-site link to exit the network. After a site failure or a majority move of workloads from one site to another, networks should be reconverged to route through Edge gateways on the surviving site or the site with the majority of the workload.

Enterprise Hybrid Cloud validated network designs using VMware NSX

Single-site network design

Figure 25 shows an example of an environment that uses the VMware NSX-based network configuration for the single-site protection service, as tested and validated by Enterprise Hybrid Cloud.

Figure 25. Verified single-site network configuration

Supported configurations

The diagram in Figure 25 shows the use of standard (local) DLRs and logical switches. If multiple vCenter instances are present on the same site, universal DLRs and logical switches are also supported, although they would not provide any additional benefits in this scenario.

Normal operation

Under normal operating conditions, the network operates as follows:

Egress from the DLR: Routes learned by the ESGs from the ToR switches via external Border Gateway Protocol (eBGP) are advertised to the DLR. All routes from the ESGs are installed in the routing table.

Ingress to the DLR: Routes learned by the ESGs from the DLR via internal Border Gateway Protocol (iBGP) are advertised to the ToR switches.

Continuous Availability (Single-site) network design

Figure 26 shows an example of an environment that uses the VMware NSX-based network configuration for the Continuous Availability (Single-site) protection service, as tested and validated by Enterprise Hybrid Cloud.

Figure 26. Verified Continuous Availability (Single-site) network configuration

Supported configurations

While Figure 26 shows the use of standard (local) DLRs and logical switches, universal DLRs and logical switches are also supported, although they do not provide any additional benefits in this scenario. This statement is true for two reasons:

- The local egress feature in VMware NSX is not utilized.
- Because a single vCenter is in use, either a standard or a universal DLR is capable of spanning hardware islands or sites.

In Figure 26, Hardware Island 1 is the primary egress/ingress path for one or more tenant clusters. It is also possible to make Hardware Island 2 the primary egress/ingress path for one or more tenant clusters. This network design allows the egress/ingress path to switch to Hardware Island 2 automatically if Hardware Island 1 fails (and vice versa, if Hardware Island 2 is primary).

Note: To influence the routing of ingress traffic, this network design uses BGP autonomous system (AS) path prepending. This requires support for BGP routing in the customer network.

Normal operation

Under normal operating conditions, the network operates as follows:

Egress from the DLR: Routes learned by the ESGs from their respective ToR switches via eBGP are advertised to the DLR. Only routes from the Hardware Island 1 ESGs are installed in the DLR routing table, due to a higher BGP weight.

Ingress to the DLR: Routes advertised by the DLR are learned by both the Hardware Island 1 and Hardware Island 2 ESGs via iBGP. The ESGs in turn advertise these routes to their respective ToR switches. The Hardware Island 1 ToR switches advertise the DLR routes to the customer's network without manipulation. The Hardware Island 2 ToR switches also advertise the DLR routes to the customer's network, but first elongate the AS path.
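The two steering mechanisms described above can be sketched as a small Python model. This is illustrative only, not Enterprise Hybrid Cloud code: the neighbor names, prefixes, AS numbers, and specific weight values are hypothetical (the design only states that Hardware Island 1 carries a higher weight and Hardware Island 2 a longer AS path).

```python
# Illustrative model of the CA (Single-site) path selection:
# - egress: the DLR prefers the neighbor with the higher BGP weight
# - ingress: the upstream network prefers the shorter AS path
from dataclasses import dataclass, field

@dataclass
class Route:
    prefix: str
    learned_from: str            # which device group advertised the route
    weight: int = 0              # locally applied BGP weight
    as_path: list = field(default_factory=list)

def best_egress(routes):
    """DLR egress choice: highest BGP weight wins."""
    return max(routes, key=lambda r: r.weight)

def best_ingress(routes):
    """Upstream ingress choice: shortest AS path wins."""
    return min(routes, key=lambda r: len(r.as_path))

# Egress: both hardware islands advertise a default route to the DLR,
# but the Hardware Island 1 ESGs carry a higher (hypothetical) weight.
egress = best_egress([
    Route("0.0.0.0/0", "HI1-ESGs", weight=140),
    Route("0.0.0.0/0", "HI2-ESGs", weight=40),
])
assert egress.learned_from == "HI1-ESGs"

# Ingress: the Hardware Island 2 ToR switches prepend their AS number,
# so the customer's network prefers the Hardware Island 1 path.
ingress = best_ingress([
    Route("10.1.0.0/24", "HI1-ToR", as_path=[65001]),
    Route("10.1.0.0/24", "HI2-ToR", as_path=[65002, 65002, 65002]),
])
assert ingress.learned_from == "HI1-ToR"
```

If the Hardware Island 1 routes are withdrawn (as in the failure scenario that follows), the Hardware Island 2 routes become the only candidates and win by default.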
Operation if Hardware Island 1 fails

If Hardware Island 1 fails, the network operates as follows:

Egress from the DLR: The DLR is no longer peered with the Hardware Island 1 ESGs. Routes learned from the Hardware Island 1 ESGs time out, and routes learned from the Hardware Island 2 ESGs become the best (and only) option and are installed in the DLR routing table.

Ingress to the DLR: DLR routes learned by the customer's network from the Hardware Island 1 ToR switches time out, and the DLR routes learned from the Hardware Island 2 ToR switches become the best (and only) option and are installed in upstream routing tables.

Operation if Hardware Island 1 is restored

When Hardware Island 1 is restored, the network operates as follows:

Egress from the DLR: The peering relationship between the DLR and the Hardware Island 1 ESGs is automatically restored. Routes from the Hardware Island 1 ESGs are again learned by the DLR.

Ingress to the DLR: The peering relationship between the ToR switches and the customer's network is automatically restored. The customer's network again learns the DLR routes from the Hardware Island 1 ToR switches.

Continuous Availability (Dual-site) network design

Figure 27 shows an example of an environment that uses the VMware NSX-based network configuration for the Continuous Availability (Dual-site) protection service, as tested and validated by Enterprise Hybrid Cloud.

Figure 27. Verified Continuous Availability (Dual-site) network configuration

Supported configurations

While Figure 27 shows the use of standard (local) DLRs and logical switches, universal DLRs and logical switches are also supported, although they would not provide any additional benefits in this scenario. This statement is true for two reasons:

- The local egress feature in VMware NSX is not utilized.
- Because a single vCenter is in use, either a standard or a universal DLR is capable of spanning hardware islands or sites.

In Figure 27, Site A is the primary egress/ingress path for the tenant cluster(s). It is also possible to make Site B the primary egress/ingress path for the tenant cluster(s). This

network design allows the egress/ingress path to switch to Site B automatically if Site A fails (and vice versa, if Site B is primary).

Note: Using a stretched Edge cluster to support the ESGs and DLR is also supported.

Normal operation

Under normal operating conditions, the network operates as follows:

Egress from the DLR: Routes learned by the ESGs from their respective ToR switches via eBGP are advertised to the DLR. Only routes from the Site A ESGs are installed in the DLR routing table, due to a higher BGP weight.

Ingress to the DLR: Routes advertised by the DLR are learned by both the Site A and Site B ESGs via iBGP. The ESGs in turn advertise these routes to their respective ToR switches. The Site A ToR switches advertise the DLR routes to the customer's network without manipulation. The Site B ToR switches also advertise the DLR routes to the customer's network, but first elongate the AS path.

Operation if Site A fails

If Site A fails, the network operates as follows:

Egress from the DLR: The DLR is no longer peered with the Site A ESGs. Routes learned from the Site A ESGs time out, and routes learned from the Site B ESGs become the best (and only) option and are installed in the DLR routing table.

Ingress to the DLR: DLR routes learned by the customer's network from the Site A ToR switches time out, and the DLR routes learned from the Site B ToR switches become the best (and only) option and are installed in upstream routing tables.

Operation if Site A is restored

When Site A is restored, the network operates as follows:

Egress from the DLR: The peering relationship between the DLR and the Site A ESGs is automatically restored. Routes from the Site A ESGs are again learned by the DLR.

Ingress to the DLR: The peering relationship between the ToR switches and the customer's network is automatically restored. The customer's network again learns DLR routes from the Site A ToR switches.

Disaster recovery (Site Recovery Manager) network design

Figure 28 shows an example of an environment that uses the VMware NSX-based network configuration for the SRM-based disaster recovery protection service, as tested and validated by Enterprise Hybrid Cloud.

Figure 28. Verified Site Recovery Manager-based disaster recovery network configuration

Supported configurations

The SRM-based DR service currently supports only standard (local) DLR usage, as the Enterprise Hybrid Cloud network convergence customizations were built to work with standard DLRs only.

In Figure 28, Site A is the primary egress/ingress path for workloads on the DR cluster pair. It is also possible to make Site B the primary egress/ingress path for a DR cluster pair (this is done by the Enterprise Hybrid Cloud network convergence customizations during failover). For active/active sites, it is possible to have separate tenant clusters active at each site, each with a primary egress/ingress path at the active site.

Note: As the SRM-based DR protection service does not influence upstream routing in the same way as the continuous availability protection services, BGP is not required on the customer network (upstream of the ToR switches). Other routing protocols such as OSPF can be used; however, for consistency and ease of deployment, the same routing protocol should be used between the DLR and the ESGs and between the ESGs and the physical network.

Normal operation

Under normal operating conditions, the network operates as follows:

Egress from the DLR: At each site, routes learned by the ESGs from the ToR switches via eBGP are advertised to the DLR via iBGP. These routes are installed in the DLR routing table.

Ingress to the DLR: At Site A, routes are advertised by the DLR, because a redistribution policy of permit is configured. These DLR routes are learned by the ESGs via iBGP and then advertised to the ToR switches via eBGP. The ToR switches then advertise these DLR routes to the customer's network. At Site B, routes are NOT advertised by the DLR, because a redistribution policy of deny is configured. No DLR routes are learned by the ESGs, the ToR switches, or the customer's network.

Operation if Site A fails

If Site A fails, the network operates as follows:

Egress from the DLR: The Site A DLR and ESGs are down. Because no failover operation has been performed, all virtual machines lose the ability to reach the external network and there is no egress traffic.

Ingress to the DLR: DLR routes learned by the customer's network from the Site A ToR switches time out. Because no DLR routes were ever learned by the customer's network from the Site B ToR switches, there are no longer any DLR routes in upstream routing tables. Because no failover operation has been performed, the virtual machines are no longer reachable from the external network and there is no endpoint for ingress traffic.
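The permit/deny redistribution mechanism described above can be sketched as a small Python model. This is illustrative only, not the actual network convergence scripts; site names, network prefixes, and the function shape are hypothetical.

```python
# Illustrative model of the SRM-based DR design: only the DLR whose
# redistribution policy is "permit" advertises tenant networks, so only
# that site attracts ingress traffic.
class SiteDLR:
    def __init__(self, name, redistribution):
        self.name = name
        self.redistribution = redistribution  # "permit" or "deny"

    def advertised_routes(self, tenant_networks):
        # A "deny" policy stops the ESGs (and everything upstream of
        # them) from ever learning the tenant routes.
        return list(tenant_networks) if self.redistribution == "permit" else []

def failover(protected, recovery, protected_site_online=False):
    """Mimic the convergence scripts: flip advertisement to the recovery site.

    The real scripts first attempt the permit->deny change on the protected
    side; that step only succeeds during a scheduled failover, when the
    protected site is still online.
    """
    if protected_site_online:
        protected.redistribution = "deny"
    recovery.redistribution = "permit"

tenant_nets = ["10.10.0.0/24", "10.10.1.0/24"]
site_a = SiteDLR("Site A", redistribution="permit")
site_b = SiteDLR("Site B", redistribution="deny")

# Normal operation: only Site A advertises the tenant networks.
assert site_a.advertised_routes(tenant_nets) == tenant_nets
assert site_b.advertised_routes(tenant_nets) == []

# Unplanned failover: Site A is down, Site B takes over advertisement.
failover(site_a, site_b)
assert site_b.advertised_routes(tenant_nets) == tenant_nets
```

Failback is the same operation in reverse, as the programmatic failback steps later in this section describe.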
Programmatic failover

As part of the SRM failover process, the network convergence scripts are run. These scripts:

- On the Site A DLR: Attempt to change the redistribution policy from permit to deny. Because Site A is down, this operation fails. The operation is attempted in case this is a scheduled failover, in which case the Site A DLR would still be online.
- On the Site B DLR: Change the redistribution policy from deny to permit.

Operation if Site A is restored

When Site A is restored, the network operates as follows:

Egress from the DLR: The peering relationship between the Site A DLR and the ESGs is automatically restored. Routes from the Site A ESGs are again learned by the DLR and installed in the DLR routing table.

Ingress to the DLR: The peering relationship between the Site A DLR and the ESGs is automatically restored, but because the redistribution policy was set to deny during the programmatic failover, the ESGs do NOT learn any routes from the DLR and do NOT advertise any DLR routes to the ToR switches. The peering relationship between the ToR switches and the

customer's network is also automatically restored, but the customer's network does NOT learn DLR routes from the Site A ToR switches, since none were learned from the Site A ESGs.

Programmatic failback

As part of the SRM failback process, the network convergence scripts are run. These scripts:

- On the Site B DLR: Change the redistribution policy from permit to deny.
- On the Site A DLR: Change the redistribution policy from deny to permit.

Disaster recovery (RecoverPoint for Virtual Machines) network design

Figure 29 shows an example of an environment that uses the VMware NSX-based network configuration for the RecoverPoint for Virtual Machines-based DR protection service, as tested and validated by Enterprise Hybrid Cloud.

Figure 29. Verified RecoverPoint for Virtual Machines-based DR network configuration

Supported configurations

The RecoverPoint for Virtual Machines-based DR service currently supports only universal DLR usage. This is required to federate the VMware NSX-based networks across two vCenter and NSX Manager instances.

In Figure 29, Site A is the primary egress/ingress path for virtual machines running at either site (see the configuration above). Switching the egress/ingress path to Site B can only be done programmatically.

Normal operation

Under normal operating conditions, the network operates as follows:

Egress from the UDLR: Routes learned by the ESGs from their respective ToR switches via eBGP are advertised to the UDLR. Only routes from the Site A ESGs are installed in the routing table, due to BGP weighting.

Ingress to the UDLR: Routes learned from the UDLR by the Site A ESGs via iBGP are advertised to the Site A ToR switches. The Site B ESGs do not learn any routes from the UDLR, due to the BGP route filters. The Site A ToR switches advertise the UDLR routes to the customer's network without manipulation. The Site B ToR switches did not learn any UDLR routes from the Site B ESGs, so they do not advertise any UDLR routes to the customer's network.

Operation if Site A fails

If Site A fails, the network operates as follows:

Egress from the UDLR: The UDLR is no longer peered with the Site A ESGs. Routes learned from the Site A ESGs time out, and routes learned from the Site B ESGs become the best (and only) option and are installed in the UDLR routing table. Because no failover operation has been performed, all virtual machines lose the ability to reach the external network and there is no egress traffic.

Ingress to the UDLR: UDLR routes learned by the customer's network from the Site A ToR switches time out. Because no UDLR routes were learned by the customer's network from the Site B ToR switches, there are no longer any UDLR routes in upstream routing tables. Because no failover operation has been performed, the virtual machines are no longer reachable from the external network and there is no endpoint for ingress traffic.

Programmatic failover

To programmatically fail over the networks in this configuration:

- On the UDLR: Raise the BGP weights for the neighbor relationships with the Site B ESGs from 40 to 140. This preserves Site B as the primary egress path, even if Site A comes back online.
- On the UDLR: Remove the BGP filters, so that the Site B ESGs learn routes from the UDLR.
- On the UDLR: Add the BGP filters, so that the Site A ESGs do NOT learn routes from the UDLR.

Operation if Site A is restored

When Site A is restored, the network operates as follows:

Egress from the UDLR: The peering relationship between the UDLR and the Site A ESGs is automatically restored. Routes from the Site A ESGs are again learned by the UDLR. These routes have a lower BGP weight, due to changes made during programmatic failover, and therefore are NOT installed in the UDLR routing table.

Ingress to the UDLR: The peering relationship between the UDLR and the Site A ESGs is automatically restored, but because BGP filters were added during the programmatic failover, the Site A ESGs do NOT learn any routes from the UDLR and do NOT advertise any UDLR routes to the ToR switches. The peering relationship between the ToR switches and the customer's network is also automatically restored, but the customer's network does NOT learn UDLR routes from the Site A ToR switches, because none were learned from the Site A ESGs.

Programmatic failback

To programmatically fail back the networks in this configuration:

- On the UDLR: Lower the BGP weights for the neighbor relationships with the Site B ESGs from 140 to 40. This allows Site A to become the primary egress path.
- On the UDLR: Remove the BGP filters, so that the Site A ESGs learn routes from the UDLR.
- On the UDLR: Add the BGP filters, so that the Site B ESGs do NOT learn routes from the UDLR.
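The failover and failback steps above can be sketched as a small Python model of the UDLR neighbor state. This is illustrative only, not the actual convergence scripts: the neighbor names are hypothetical, and the Site A weight (100) is an assumed value sitting between the Site B values of 40 and 140 quoted in this section, so that Site A wins normally and loses after failover.

```python
# Illustrative model of the RP4VMs design: egress follows the
# highest-weight neighbor; ingress follows whichever ESGs are unfiltered.
udlr = {
    "SiteA-ESGs": {"weight": 100, "filtered": False},  # weight assumed
    "SiteB-ESGs": {"weight": 40,  "filtered": True},
}

def egress_path(neighbors):
    """The neighbor with the highest BGP weight supplies the installed routes."""
    return max(neighbors, key=lambda n: neighbors[n]["weight"])

def ingress_esgs(neighbors):
    """Only unfiltered ESGs learn UDLR routes and can attract ingress traffic."""
    return [n for n, cfg in neighbors.items() if not cfg["filtered"]]

def fail_over(neighbors):
    # Raise the Site B weights from 40 to 140 (so Site B stays primary even
    # if Site A returns), then swap the route filters across sites.
    neighbors["SiteB-ESGs"]["weight"] = 140
    neighbors["SiteB-ESGs"]["filtered"] = False
    neighbors["SiteA-ESGs"]["filtered"] = True

def fail_back(neighbors):
    # Lower the Site B weights from 140 to 40 and swap the filters back.
    neighbors["SiteB-ESGs"]["weight"] = 40
    neighbors["SiteA-ESGs"]["filtered"] = False
    neighbors["SiteB-ESGs"]["filtered"] = True

assert egress_path(udlr) == "SiteA-ESGs" and ingress_esgs(udlr) == ["SiteA-ESGs"]
fail_over(udlr)
assert egress_path(udlr) == "SiteB-ESGs" and ingress_esgs(udlr) == ["SiteB-ESGs"]
fail_back(udlr)
assert egress_path(udlr) == "SiteA-ESGs" and ingress_esgs(udlr) == ["SiteA-ESGs"]
```

Note that after failover the restored Site A routes are still learned, but are not installed (lower weight) and are not advertised (filtered), which matches the restored-site behavior described above.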

Chapter 6 Storage Considerations

This chapter presents the following topics:

Common best practices and supported configurations
Non-replicated storage
Continuous availability storage considerations
Disaster recovery (Site Recovery Manager) storage considerations

Common best practices and supported configurations

Storage platform best practices

Dell EMC recommends that you follow the best practice guidelines when deploying any of the supported platform technologies. Enterprise Hybrid Cloud does not require any variation from these best practices.

ViPR virtual pools

For block-based provisioning, ViPR virtual arrays must not contain more than one protocol. For Enterprise Hybrid Cloud, this means that EMC ScaleIO storage and FC block storage must be provided through separate virtual arrays.

Note: Enterprise Hybrid Cloud supports combining multiple physical arrays into fewer virtual arrays to provide storage to virtual pools.

vSphere datastore clusters

Enterprise Hybrid Cloud does not support vSphere datastore clusters, for the following reasons:

- Linked clones do not work with datastore clusters, causing multi-machine blueprints to fail unless configured with different reservations for edge devices.
- vRealize Automation already performs capacity analysis during initial placement.
- VMware Storage DRS operations can result in inconsistent behavior when virtual machines report their location to vRealize Automation. Misalignment with vRealize reservations can make the virtual machines uneditable.
- Enterprise Hybrid Cloud STaaS operations do not place new LUNs into datastore clusters, so all datastore clusters would have to be manually maintained.

Specific to the SRM-based DR protection service:

- Day 2 Storage DRS migrations between datastores would break the SRM protection for the virtual machines moved.
- Day 2 Storage DRS migrations between datastores would result in re-replicating the entire virtual machine to the secondary site.

VMware raw device mappings

VMware raw device mappings (RDMs) are not created by, or supported by, Enterprise Hybrid Cloud STaaS operations. If created outside of STaaS services, any issues arising from RDM use are not supported by Enterprise Hybrid Cloud customer support. If changes are required in the environment to make RDMs operate correctly, an Enterprise Hybrid Cloud RPQ should first be submitted for approval. If issues arise related to these changes, the Enterprise Hybrid Cloud support teams may request that you back them out to restore normal operation.

Non-replicated storage

Applicability

Non-replicated storage is used for the single-site and RecoverPoint for Virtual Machines DR protection services. This section addresses the storage considerations for such storage.

Non-replicated storage types and supported storage platforms

Table 11 outlines the categories of non-replicated storage, how each is provisioned, the cluster type it is assigned to, and the underlying supported storage technology.

Table 11. Non-replicated storage and supported underlying storage platforms

Storage type: LC1S
Cluster assigned to: LC1S
Provisioned by: Enterprise Hybrid Cloud STaaS workflows using EMC ViPR Controller
Supported storage: EMC VMAX, EMC VNX, EMC XtremIO, EMC ScaleIO, EMC VPLEX Local, EMC Isilon (workload use only), EMC Unity Hybrid

Storage type: VS1S
Cluster assigned to: VS1S
Provisioned by: Pre-provisioned (VxRail Appliance only)
Supported storage: VSAN

Single-site workloads

With single-site protection, storage provisioned to serve these workloads has no LUN-based replicas and is therefore, by definition, bound to a single site and hardware island. LC1S and VS1S clusters are created using hosts from a single hardware island, so workloads placed on these clusters do not require any form of site or hardware island affinity.

RecoverPoint for Virtual Machines disaster recovery workloads

RecoverPoint for Virtual Machines-based DR workloads use LC1S vSphere clusters and storage without any LUN-based replicas, as replication for these workloads is carried out at the Virtual Machine Disk (VMDK) level rather than at the datastore level. While the storage these workloads use is bound to one site, the replicated version of the workload uses a second set of non-replicated storage bound to a different site.

RecoverPoint for Virtual Machines-based disaster recovery functionality can be enabled or disabled globally using the rp4vms_enabled global option. The value of this option can be changed with the vRealize Automation Global Options Maintenance catalog item.

Onboarding process for clusters with non-replicated storage

vSphere clusters that use non-replicated storage are made eligible for use within Enterprise Hybrid Cloud using the Cluster Maintenance vRealize Automation catalog item.

LC1S clusters are onboarded using the Onboard Local Cluster option, which makes them available for selection when the Provision Cloud Storage catalog item is used. This process ensures that only the correct type of storage is presented to the single-site (LC1S) vSphere clusters and that no misplacement of virtual machines intended for LUN-based inter-site protection occurs.

VS1S clusters are onboarded using the Onboard VSAN Cluster option, which discovers vSAN datastores on the cluster and ingests them into the Enterprise Hybrid Cloud object model.

When onboarding an LC1S cluster in an environment where RecoverPoint for Virtual Machines is also enabled, this catalog item also ensures that a partner cluster is selected to allow machines to fail over to the other site.

Continuous availability storage considerations

Applicability

Continuous availability provided by VPLEX Metro is used for both the Continuous Availability (Single-site) and Continuous Availability (Dual-site) protection services. This section addresses the storage considerations for the VPLEX Metro configuration.

Continuous availability storage and supported storage platforms

Table 12 outlines the categories of continuous availability storage, how each is provisioned, the cluster type it is assigned to, and the underlying supported storage technology.

Table 12.
CA (VPLEX) storage and supported underlying storage platforms

Storage type: CA1S
Assigned to cluster: CA1S
Provisioned by: Enterprise Hybrid Cloud STaaS workflows using EMC ViPR Controller
VPLEX cluster locations: Both on one site
Underlying storage: EMC VMAX, EMC VNX, EMC XtremIO

Storage type: CA2S
Assigned to cluster: CA2S
Provisioned by: Enterprise Hybrid Cloud STaaS workflows using EMC ViPR Controller
VPLEX cluster locations: One on each of two sites
Underlying storage: EMC VMAX, EMC VNX, EMC XtremIO

Continuous availability workloads

With continuous availability protection, storage provisioned to serve these workloads is replicated across two different physical backend storage platforms. These platforms may be in two different hardware islands on the same physical site (CA1S) or in two different hardware islands on two separate sites (CA2S).

As described in Site affinity for tenant virtual machines, Enterprise Hybrid Cloud workflows use the winning site in a VPLEX configuration to determine the site/hardware island to which virtual machines are mapped.

Continuous availability ViPR virtual pools

ViPR virtual pools for block storage offer two options under high availability: VPLEX local and VPLEX distributed. Continuous availability protection uses VPLEX distributed only. Figure 30 shows this configuration in the context of Continuous Availability (Dual-site), where VPLEX High-Availability Virtual Pool represents the VPLEX high-availability pool created.

Figure 30. Interactions between local and VPLEX distributed pools

To enable active/active clusters, you must create two sets of datastores: one set that will win on Hardware Island/Site A, and another set that will win on Hardware Island/Site B.

Onboarding for clusters with continuously available storage

vSphere clusters that use continuously available storage are made eligible for use within Enterprise Hybrid Cloud using the vRealize Automation Cluster Maintenance catalog item. Both CA1S and CA2S clusters are onboarded using the Onboard CA Cluster option, which makes them available for selection when you use the Provision Cloud Storage catalog item. This process ensures that only the correct type of storage is presented to the CA1S or CA2S vSphere clusters and that no misplacement of virtual machines intended for LUN-based inter-site protection occurs.

Continuous availability functionality can be enabled or disabled globally using the ca_enabled global option. The value of this option can be changed using the vRealize Automation Global Options Maintenance catalog item.

Supported VPLEX topologies

VMware classifies stretched cluster configurations with VPLEX Metro into the following categories:

- Uniform host access configuration (with VPLEX host cross-connect): vSphere ESXi hosts in a distributed vSphere cluster have a connection to the local VPLEX system and paths to the remote VPLEX system. The remote paths presented to the vSphere ESXi hosts are stretched across distance.
- Non-uniform host access configuration (without VPLEX host cross-connect): vSphere ESXi hosts in a distributed vSphere cluster have a connection only to the local VPLEX system.

Deciding on a VPLEX topology

Use the following guidelines to help you decide which topology suits your environment:

Uniform (cross-connect) is typically used where:
- Inter-site latency is less than 5 ms.
- Stretched SAN configurations are possible.

Non-uniform (without cross-connect) is typically used where:
- Inter-site latency is between 5 ms and 10 ms.
- Stretched SAN configurations are not possible.

ViPR and VPLEX consistency group interaction

VPLEX uses consistency groups to maintain common settings on multiple LUNs. To create a VPLEX consistency group using ViPR, a ViPR consistency group must be specified when creating a new volume. ViPR consistency groups are used to control multi-LUN consistent snapshots and have a number of important rules associated with them when creating VPLEX distributed devices:

- All volumes in any given ViPR consistency group must be LUNs from the same physical array. As a result, Enterprise Hybrid Cloud STaaS workflows create a new consistency group per physical array, per vSphere cluster, per site.
- All VPLEX distributed devices in a ViPR consistency group must have source and target backing LUNs from the same pair of arrays.
As a result of these two rules, Enterprise Hybrid Cloud requires that an individual ViPR virtual pool is created for every physical array that provides physical pools for use in a VPLEX distributed configuration.

Virtual Pool Collapser function

Enterprise Hybrid Cloud STaaS workflows use the name of the selected ViPR virtual pool as part of the naming for the vRealize storage reservation policy (SRP) that the new datastore is added to. The Virtual Pool Collapser (VPC) function of Enterprise Hybrid Cloud collapses the LUNs from multiple virtual pools into a single SRP. Where multiple physical arrays provide physical storage pools of the same service level to VPLEX through different ViPR virtual pools, the VPC function can be used to ensure that all LUNs provisioned across those physical pools are collapsed into the same SRP.

VPC can be enabled or disabled at a global Enterprise Hybrid Cloud level using the ehc_vpc_disabled global option. The value of this option can be changed with the vRealize Automation catalog item entitled Global Options Maintenance.

When ehc_vpc_disabled is set to False, Enterprise Hybrid Cloud STaaS workflows examine the naming convention of the selected virtual pool to determine which SRP the datastore should be added to. If the virtual pool name includes the string _VPC-, then Enterprise Hybrid Cloud knows that it should invoke the VPC logic.

Virtual Pool Collapser example

Figure 31 shows an example of VPC in use. In this scenario, the administrator has enabled the VPC function and created two ViPR virtual pools:

- GOLD_VPC-… , which has physical pools from Array 1
- GOLD_VPC-… , which has physical pools from Array 2

Figure 31. Virtual Pool Collapser example

When determining how to construct the SRP name, the VPC function uses only the part of the virtual pool name that exists before _VPC-. In this example, that yields the term GOLD, which then contributes to the common SRP name of SITEA_GOLD_CA_Enabled. This makes it possible to conform to the rules of ViPR consistency groups, while providing a single SRP for all datastores of the same type, which maintains abstraction and balanced datastore usage at the vRealize layer.

In the example shown in Figure 31, all storage is configured to win on a single site (Site A). To enable true active/active vSphere Metro storage clusters, additional pools should be configured in the opposite direction.
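The name-collapsing behavior described above can be sketched in a few lines of Python. This is illustrative only, not Enterprise Hybrid Cloud code: the pool-name suffixes (ARRAY1/ARRAY2), the site prefix, and the CA_Enabled suffix are assumptions modeled on the SITEA_GOLD_CA_Enabled example in this section.

```python
# Illustrative model of the Virtual Pool Collapser (VPC): only the part of
# the virtual pool name before "_VPC-" contributes to the SRP name, so pools
# of the same service level on different arrays collapse into one SRP.
def srp_name(virtual_pool_name, site, suffix="CA_Enabled"):
    marker = "_VPC-"
    if marker in virtual_pool_name:
        # VPC logic invoked: keep only the service-level portion of the name.
        service_level = virtual_pool_name.split(marker, 1)[0]
    else:
        # No marker: the whole pool name is used as-is.
        service_level = virtual_pool_name
    return f"{site}_{service_level}_{suffix}"

# Two pools of the same service level, backed by different physical arrays
# (as the ViPR consistency group rules require), share a single SRP:
assert srp_name("GOLD_VPC-ARRAY1", "SITEA") == "SITEA_GOLD_CA_Enabled"
assert srp_name("GOLD_VPC-ARRAY2", "SITEA") == "SITEA_GOLD_CA_Enabled"
```

This is what lets the per-array virtual pools demanded by the consistency group rules still appear to vRealize as one balanced storage reservation policy.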

Disaster recovery (Site Recovery Manager) storage considerations

Applicability

The SRM-based DR protection service for Enterprise Hybrid Cloud incorporates storage replication using RecoverPoint, storage provisioning using ViPR, and integration with SRM to support DR services for applications and virtual machines deployed in the hybrid cloud. SRM natively integrates with vCenter and NSX to support DR, planned migration, and recovery plan testing.

Disaster recovery (SRM) storage and supported storage platforms

Table 13 outlines the categories of DR RecoverPoint storage, how it is provisioned, the cluster type it is assigned to, and the underlying supported storage technology.

Table 13. DR (RecoverPoint) storage and supported underlying storage platforms

Storage type: DR2S
Assigned to cluster: DR2S (one on each hardware island/site)
Provisioned by: Enterprise Hybrid Cloud STaaS workflows using EMC ViPR Controller
Underlying storage: EMC VMAX, EMC VNX, EMC XtremIO, EMC VMAX3 (must reside behind an EMC VPLEX unit)

SRM-based disaster recovery workloads

With SRM-based disaster recovery protection, storage provisioned to serve these workloads is replicated across two different physical backend storage platforms using EMC RecoverPoint. These platforms must be in two different hardware islands on two separate sites. Affinity to a site is determined by the storage choice of the user when deploying a virtual machine workload, as the storage reservation policy presented to the user indicates the site on which the storage is normally active.

SRM-based disaster recovery ViPR virtual pools

When you specify RecoverPoint as the protection option for a virtual pool, the ViPR storage provisioning services create the source and target volumes and the source and target journal volumes, as shown in Figure 32.

Figure 32. ViPR/EMC RecoverPoint protected virtual pool

Each SRM DR-protected/recovery DR2S cluster pair has storage that replicates (under normal conditions) in a given direction, for example, from Site A to Site B. To allow an active/active site configuration, additional SRM DR cluster pairs should be configured whose storage replicates in the opposite direction.

Onboarding clusters with SRM-based disaster recovery storage

vSphere clusters that use disaster recovery storage are made eligible for use within Enterprise Hybrid Cloud using the vRealize Automation Cluster Maintenance catalog item. DR2S clusters are onboarded using the Onboard DR Cluster option, which makes them available for selection when you use the Provision Cloud Storage catalog item. This process ensures that only the correct type of storage is presented to the DR2S vSphere clusters and that no misplacement of virtual machines intended for LUN-based inter-site protection occurs.

SRM-based DR functionality can be enabled or disabled globally using the dr_enabled global option. You can change the value of this option by using the vRealize Automation Global Options Maintenance catalog item.

RecoverPoint journal considerations

Every RecoverPoint-protected LUN requires access to a journal LUN to maintain the history of disk writes to that LUN. The performance of the journal LUN is critical to the overall performance of the system attached to the RecoverPoint-protected LUN, and its performance capability should therefore be in line with the expected performance needs of that system. By default, ViPR uses the same virtual pool for both the target and the journal LUN of a RecoverPoint copy, but it does allow you to specify a separate or dedicated pool. In both cases, the virtual pool and its supporting physical pools should be sized to provide adequate performance.

Chapter 7: Data Protection (Backup as a Service)

This chapter presents the following topics:
- Overview
- Data protection technology
- Backup types overview
- Key Enterprise Hybrid Cloud backup constructs
- Controlling the backup infrastructure

Overview

This chapter discusses the considerations for implementing data protection, also known as backup as a service (BaaS), in the context of Enterprise Hybrid Cloud. This solution uses Avamar as the technology to protect your datasets. Using Avamar, this backup solution:
- Abstracts and simplifies backup and restore operations for cloud users
- Uses VMware storage APIs for data protection, which provide changed block tracking for faster backup and restore operations
- Provides full image backups for running virtual machines
- Eliminates the need to manage backup agents for each virtual machine in most cases
- Minimizes network traffic by deduplicating and compressing data

Note: Dell EMC recommends that you engage an Avamar product specialist to design, size, and implement a solution specific to your environment and business needs.

Data protection technology

Avamar instances/grids

Avamar instances (known as Avamar grids in the Enterprise Hybrid Cloud object model) are the physical targets for Enterprise Hybrid Cloud backup data. They may optionally be backed by a Data Domain infrastructure for even greater efficiencies.

Day 2 catalog items are available through vRealize Automation to onboard the grids and their details. Part of the onboarding process assigns each grid to one of the sites that exist in the model. That information is subsequently used to ensure that users are guided to choose from only the applicable grids when setting up BaaS constructs.

Scalable backup architecture through multiple Avamar grids

Enterprise Hybrid Cloud backup configurations scale by allowing multiple Avamar grids/instances per site across multiple sites. BaaS workflows automatically distribute workloads in a round-robin fashion across the Avamar instances available to the workloads in question.
Additional Avamar grids, or new relationships between existing Avamar grids, can be added to increase the scalability. By using Avamar Site Relationships and Avamar Replication Relationships, this new capacity is transparently discovered and used as new workloads are deployed.

Backup types overview

Backup types

Backup types, as shown in Table 14, are the rulesets that are automatically applied to Enterprise Hybrid Cloud workloads based on the type of cluster on which they reside and the protection options that are available within the environment.

Table 14. Backup types
  1C1VC: One backup copy. Virtual machines are only on one site (one vCenter).
  2C1VC: Two backup copies. Virtual machines move between two sites (one vCenter).
  2C2VC: Two backup copies. Virtual machines move between two sites (two vCenters).
  MC2VC: Mixed number of copies. Virtual machines can be in up to two sites (two vCenters).

One-Copy One-vCenter (1C1VC)

The 1C1VC backup model is designed to back up workloads that are bound to a single site and may never move. As a result, the backup strategy that is applied to these workloads has a single copy per backup image (that is, non-replicated) on a single Avamar grid, as shown in Table 15.

Table 15. Characteristics of one-copy-one-vCenter backup type
  Applies to workload:             Single Site
  Number of sites involved:        1
  Number of images created:        1
  Number of vCenters involved:     1
  Number of vCenter folders:       1
  Number of Avamar grids required: 1

You can deploy multiple Avamar grids in the environment to enable scale, and backup is automatically balanced across the number of grids deployed. Figure 33 shows an example of the folders and backup groups involved in a 1C1VC backup type.

Figure 33. 1C1VC folders and backup groups

Two-Copy One-vCenter (2C1VC)

The 2C1VC backup model is designed to back up dual-site CA workloads that may move between sites but remain in the same vCenter endpoint. As a result, the backup strategy that is applied to these workloads has two copies per backup image, replicated from the grid that took the backup to the relevant partner grid.

Table 16. Characteristics of two-copy-one-vCenter backup type
  Applies to workload:             Continuous Availability
  Number of sites involved:        2
  Number of images created:        2
  Number of vCenters involved:     1
  Number of vCenter folders:       2
  Number of Avamar grids required: 2

In this configuration, two vCenter folders are created (one per site) in the same vCenter, and one Avamar grid in each site has primary ownership of the set of folders local to its own site, with the ability to take over ownership of the set corresponding to the other site. When workloads move sites, the same Avamar grid continues to back up those workloads until the backup policies on the Avamar grid are toggled via Failover Avamar policies for an offline Avamar grid or Failover Avamar grids after a site failure (depending on the failure scenario), so that the partner Avamar grid takes over monitoring of the vCenter folder in which the workloads reside.

Multiple Avamar grids may be deployed in the environment to enable scale, and backup is automatically balanced across the number of grids deployed. Figure 34 shows an example of the folders, backup groups, and replication groups involved in a 2C1VC backup type.

Figure 34. 2C1VC folders, backup groups, and replication groups

Two-Copy Two-vCenter (2C2VC)

The 2C2VC backup model is designed to back up DR (SRM) workloads that may move between sites and that change vCenter endpoint when they do so. As a result, the backup strategy that is applied to these workloads has two copies per backup image, replicated from the grid that took the backup to the relevant partner grid.

Table 17. Characteristics of two-copy-two-vCenter backup type
  Applies to workload:             DR (SRM)
  Number of sites involved:        2
  Number of images created:        2
  Number of vCenters involved:     2
  Number of vCenter folders:       4
  Number of Avamar grids required: 2

In this configuration, four vCenter folders are created (two per site), with two residing in each vCenter endpoint. One Avamar grid on each site has sole ownership of the folders local to its own site or vCenter. When workloads move sites, the Avamar grid on the target site automatically commences backing up those workloads when they appear in the vCenter folders that it is monitoring.

Multiple Avamar grids may be deployed in the environment to enable scaling, and backup is automatically balanced across the number of grids deployed. Figure 35 shows an example of the folders, backup groups, and replication groups involved in a 2C2VC backup type.

Figure 35. 2C2VC folders, folder mappings, backup groups, and replication groups

Mixed-Copy Two-vCenter (MC2VC)

The MC2VC backup model is designed to back up disaster recovery (RecoverPoint for Virtual Machines) workloads. This model is slightly different from the others in that two different types of workloads may co-exist on the same tenant compute cluster, and those workloads may change type from single-site to RecoverPoint for Virtual Machines-based DR protection and back again at any time.

Table 18. Characteristics of mixed-copy-two-vCenter backup type
  Applies to workload:             Single Site, DR (RP4VMs)
  Number of sites involved:        2
  Number of images created:        1 or 2
  Number of vCenters involved:     2
  Number of vCenter folders:       6
  Number of Avamar grids required: 2

In this configuration, six vCenter folders are created (three per site), with three residing in each vCenter endpoint. One Avamar grid on each site has sole ownership of the set of folders local to its own site/vCenter. When workloads move sites, the Avamar grid on the

target site automatically commences backing up those workloads when they appear in the vCenter folders that it is monitoring.

Single-site workloads reside in folders designated for local protection only. Therefore, they are backed up by a single grid and the backup image is not replicated, similar in concept to the 1C1VC model. RecoverPoint for Virtual Machines DR workloads reside in folders designated for dual-site protection. Therefore, they are backed up by a single grid and the backup image is replicated, similar in concept to the 2C2VC model.

When a workload changes RecoverPoint for Virtual Machines DR protection status, it is moved from one vCenter folder to another, thereby self-adjusting its backup strategy to stay in line with its overall protection status while maintaining all the previous backup images that were taken of that workload.

Multiple Avamar grids may be deployed in the environment to enable scale, and backup is automatically balanced across the number of grids deployed. Figure 36 shows an example of the folders, backup groups, and replication groups involved in an MC2VC backup type.

Figure 36. MC2VC folders, folder mappings, backup groups, and replication groups
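The folder-driven self-adjustment described above can be sketched as follows. This is illustrative only — the function and its arguments are invented for clarity, though the folder naming follows the MC2VC pattern shown later in Table 20.

```python
def mc2vc_folder(service_level: str, arr_name: str, site: str,
                 dr_protected: bool, placeholder: bool = False) -> str:
    """Pick the vCenter folder an MC2VC workload belongs in.

    Moving a workload between these folders is what switches it between
    local-only (non-replicated) and dual-site (replicated) backup policies.
    """
    if dr_protected:
        # RP4VM-protected workloads: backup images replicate to the partner grid.
        suffix = "Protected_PH" if placeholder else "Protected"
    else:
        # Single-site workloads: backed up locally, image not replicated.
        suffix = "Local"
    return f"{service_level}-{arr_name}-{site}-{suffix}"
```

For example, `mc2vc_folder("Gold", "ARR00003", "NewYork", dr_protected=True)` yields `Gold-ARR00003-NewYork-Protected`; clearing the DR flag would route the workload to `Gold-ARR00003-NewYork-Local` instead.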

Mapping cluster types to backup types

To establish the types of backups that should be applied to given workload types, Enterprise Hybrid Cloud ensures that all workloads on a given cluster type map to corresponding backup types. Table 19 shows how individual cluster types map to backup types.

Table 19. Cluster type to backup type mapping
  LC1S or VS1S: 1C1VC (without RecoverPoint for Virtual Machines) or MC2VC (with RecoverPoint for Virtual Machines)
  CA1S:         1C1VC
  CA2S:         2C1VC
  DR2S:         2C2VC

Key Enterprise Hybrid Cloud backup constructs

Avamar Site Relationships

An Avamar Site Relationship (ASR) is a relationship between sites for backup purposes. When it is determined that two sites must have a backup relationship, an ASR is created and assigned one of the backup types listed in Backup types. This backup type then defines the nature of the relationships that must subsequently be configured between physical Avamar grids to achieve that type of backup. The individual relationships created between grids are referred to as Avamar Replication Relationships (ARRs), and each ARR created becomes a child of the ASR that defined it. ARRs are covered in detail in Avamar Replication Relationships.

The following rules apply to an ASR:
- Each ASR must correspond to a single backup type, as shown in Backup types.
- Each ASR may have one or two sites associated with it.
- The sites and type of an ASR become the ruleset for all ARRs that are added to it as children.
- ASRs may contain multiple ARRs to facilitate scale.
- Onboarded Enterprise Hybrid Cloud clusters may be associated with a single ASR. Therefore, all workloads on a specified cluster follow the backup policy structure of the mapped ASR.
- ASRs provide an abstraction between the vSphere cluster and the ARRs that are children of the ASR to which the cluster is mapped. This allows additional ARRs to be added and used without altering the cluster mapping.
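The cluster-to-backup-type mapping in Table 19 above is essentially a small lookup that also depends on whether RecoverPoint for Virtual Machines is in play. A hypothetical sketch (the function name is invented for illustration):

```python
def backup_type(cluster_type: str, rp4vm_enabled: bool = False) -> str:
    """Return the backup type applied to workloads on a cluster, per Table 19."""
    if cluster_type in ("LC1S", "VS1S"):
        # Local/vSAN single-site clusters use MC2VC only when RP4VM DR is enabled.
        return "MC2VC" if rp4vm_enabled else "1C1VC"
    mapping = {"CA1S": "1C1VC", "CA2S": "2C1VC", "DR2S": "2C2VC"}
    if cluster_type not in mapping:
        raise ValueError(f"unknown cluster type: {cluster_type}")
    return mapping[cluster_type]
```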
During provisioning of backup services to an individual workload, round-robin algorithms examine the ASR mapped to the workload cluster and choose the ARR the workload will use. Multiple ASRs of the same type can exist at the same time.
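The round-robin choice, combined with the Admin Full exclusion described later under Controlling the backup infrastructure, might look like the toy model below. Enterprise Hybrid Cloud implements this inside its vRealize workflows; the class and field names here are invented for illustration.

```python
from itertools import cycle


class SiteRelationship:
    """Toy ASR: holds child ARRs and deals them out round-robin."""

    def __init__(self, backup_type: str, arrs: list):
        self.backup_type = backup_type
        self.arrs = arrs                        # child ARRs (dicts), creation order
        self._cursor = cycle(range(len(arrs)))  # round-robin position

    def next_arr(self) -> dict:
        """Return the next eligible ARR, skipping any flagged Admin Full."""
        for _ in range(len(self.arrs)):
            arr = self.arrs[next(self._cursor)]
            if not arr.get("admin_full", False):
                return arr
        raise RuntimeError("all ARRs in this ASR are marked Admin Full")
```

Each new workload on a cluster mapped to the ASR receives the next ARR in turn, so backups spread across the available grids without altering the cluster mapping, and grids an administrator has marked Admin Full are quietly passed over.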

Multiple ASRs of the same type in a single Enterprise Hybrid Cloud instance

There are several use cases for using multiple ASRs of the same type in the same environment:
- There could be a 1C1VC ASR in each of several sites.
- There could be multiple 2C1VC ASRs if multiple CA relationships exist in the Enterprise Hybrid Cloud environment.
- There could be multiple 2C2VC ASRs if multiple DR relationships exist in the Enterprise Hybrid Cloud environment.
- Multiple ASRs with the same set of sites could also be used to separate clusters onto dedicated sets of Avamar infrastructure.

Note: In earlier versions of Enterprise Hybrid Cloud, Avamar grids were paired, resulting in a single backup type being available for any given Avamar grid. ASRs and ARRs break this dependency and allow multiple backup types to co-exist on the same physical grid infrastructure.

Avamar Replication Relationships

An Avamar Replication Relationship (ARR) is a relationship between up to two Avamar grids. When creating an ARR, you must assign it to a parent ASR. This ensures that the grids available for selection during its creation are only from the sites defined in the ASR itself.

Avamar grids may participate in multiple ARRs. In this way, the same grid may:
- Provide multiple protection levels between the same set of sites by being a member of multiple ARRs whose parent ASRs have the same set of sites but a different ASR type.
- Provide protection between different sets of sites by being a member of multiple ARRs whose parent ASRs have different combinations of sites.
- Do both of the above at the same time.

The following rules are associated with an ARR:
- An ARR must be associated with an ASR.
- When created, an ARR inherits its type from the parent ASR to which it is associated.
- Enterprise Hybrid Cloud workflows only present Avamar grids from the relevant sites for inclusion in the ARR, based on the choice of ASR.
- As ARRs are added to an ASR, vSphere clusters associated with that ASR automatically pick up the new ARRs and use them without needing to modify the cluster mapping.

Avamar Site Relationship to vSphere cluster association

Avamar image-level backups work by mounting snapshots of VMDKs to Avamar proxy virtual machines and then backing up the data to the Avamar instance to which the Avamar proxy is registered. In a fully deployed Enterprise Hybrid Cloud with up to 10,000 user virtual machines per vCenter and hundreds of vSphere clusters, this could lead to Avamar proxy sprawl if not properly configured and controlled.

To prevent this, Enterprise Hybrid Cloud associates vSphere clusters with a subset of the ASRs (of which there may be many). In this way, different clusters may be mapped to different ASRs to enable backup to distinct sets of Avamar infrastructure. This means that a reduced number of Avamar proxy virtual machines are required to service the cloud. Associations between a vSphere cluster and an ASR are made via the Enterprise Hybrid Cloud vRealize Automation Cluster Maintenance catalog item.

Note: When a cluster is mapped to a 2C2VC or MC2VC ASR, its partner cluster is automatically associated with the same ASR to ensure full access to all backup images taken before, during, or after failover of virtual machine workloads.

Cluster, ASR, ARR, and Avamar grid configuration example

Figure 37 shows an example of the concepts described previously, with the following characteristics:
- Two sites are in use.
- Site A provides the single-site protection service through Cluster 01.
- Site A and Site B provide the SRM-based disaster recovery protection service through the DR2S cluster pair represented as Cluster 02.
- Both cluster types are serviced by a common set of grids.
- Cluster 01 local workloads can load balance backups to Grid 01 and Grid 02 because the ASR it is mapped to has two different child ARRs.
- Cluster 02 DR workloads can load balance backups to Grid 01 and Grid 02 on Site A, or Grid 03 and Grid 04 on Site B as appropriate, because the ASR it is mapped to has two different child ARRs.

Figure 37. Cluster, ASR, ARR, and Avamar grid configuration example

Avamar backup groups

Avamar backup groups are automatically created on Avamar grids during the creation of a backup service level. They control the vCenter elements that are monitored by scheduled backup policies and are triggered by on-demand backups. These backup groups are recorded in the object model and are used to ensure that backup policies can follow their workloads as they are recovered to different sites.

Avamar replication groups

Avamar replication groups are automatically created on Avamar grids during the creation of a backup service level when multisite ASRs and ARRs exist. They control the replication settings for scheduled backups and are triggered by on-demand backups. These replication groups are recorded in the object model and are used to ensure that replication policies can follow their workloads as they are recovered to different sites.

Backup service levels

Backup service levels appear to the user as the available options when specifying the tier of backup that a virtual machine workload will use. They are created using the vRealize Automation Backup Service Level Maintenance catalog item. A backup service level includes the following attributes:
- Cadence: At what frequency a backup should run (daily, weekly, monthly, or custom)
- Backup Schedule: Time of day/window during which the backup should execute

- Replication Schedule: Time of day when backup images should be replicated to another site (where appropriate)
- Retention: How long the backup image should be retained

VMware vCenter folder structure and backup service level relationship

When you create a backup service level using the vRealize Automation Backup Service Level Maintenance catalog item, it creates an associated set of folders in the cloud vCenter endpoints that are added to the Enterprise Hybrid Cloud object model. When doing this, the Enterprise Hybrid Cloud workflows cycle through all available ASRs and find vCenter endpoints that match the site relationships in those ASRs. They then cycle through the associated ARRs so that new vCenter folders are created appropriately. The number of folders created depends on how many ARRs are present, and these folders become part of the mechanism for distributing the backup load. The format of the folder name varies slightly depending on the ASR/ARR type, as shown in Table 20.

Table 20. vCenter folder name structure by backup type
  1C1VC (1 folder): BackupServiceLevelName-ARRName-Site, for example, Gold-ARR00001-NewYork
  2C1VC (2 folders): BackupServiceLevelName-ARRName-Site for each site, for example:
    Gold-ARR00002-NewYork
    Gold-ARR00002-Boston
  2C2VC (4 folders): BackupServiceLevelName-ARRName-Site plus BackupServiceLevelName-ARRName-Site_PH for each site, for example:
    Gold-ARR00003-NewYork
    Gold-ARR00003-NewYork_PH
    Gold-ARR00001-Boston
    Gold-ARR00001-Boston_PH
  MC2VC (6 folders): BackupServiceLevelName-ARRName-Site-Protected plus BackupServiceLevelName-ARRName-Site-Protected_PH plus BackupServiceLevelName-ARRName-Site-Local for each site, for example:
    Gold-ARR00003-NewYork-Protected
    Gold-ARR00003-NewYork-Protected_PH
    Gold-ARR00003-NewYork-Local
    Gold-ARR00001-Boston-Protected
    Gold-ARR00001-Boston-Protected_PH
    Gold-ARR00001-Boston-Local

When Enterprise Hybrid Cloud custom lifecycle operations must enable a workload for backup, they:
1. Check the cluster on which the workload resides.
2. Check the ASR to which the cluster is mapped.

3. Enumerate the child ARRs assigned to that ASR and determine which of the grids local to that workload has the least current load.
4. Look up the appropriate vCenter folder for the chosen grid.
5. Move the workload to the correct vCenter folder.

Controlling the backup infrastructure

Avamar instance full

Determining that a backup target, in this case an Avamar instance, has reached capacity can be based on a number of metrics of the virtual machines it is responsible for protecting, including:
- The number of virtual machines assigned to the instance
- The total capacity of those virtual machines
- The rate of change of the data of those virtual machines
- The effective deduplication ratio that can be achieved while backing up those virtual machines
- The available network bandwidth and backup window size

Because using these metrics can be subjective, Enterprise Hybrid Cloud enables an administrator to preclude an Avamar instance (and therefore any ARRs it participates in) from being assigned further workload by setting a binary Admin Full flag via the vRealize Automation Avamar Grid Maintenance catalog item.

When a virtual machine is enabled for data protection via Enterprise Hybrid Cloud BaaS workflows, the available ARRs are assessed to determine the most suitable target. If an ARR is set with the Admin Full flag, that relationship is excluded from the selection algorithm but continues to back up its existing workloads through on-demand or scheduled backups. If workloads are retired and an Avamar grid is determined to have free capacity, the Admin Full flag can be toggled back, including it in the selection algorithm again.

Replication control

If Data Domain is used as a backup target, Avamar is responsible for replication of Avamar data from the source Data Domain system to the destination Data Domain system. As a result, all configuration and monitoring of replication is done via the Avamar server.
This includes the schedule on which Avamar data is replicated between Data Domain units. You cannot schedule replication of data on the Data Domain system separately from the replication of data on the Avamar server, and there is no way to track replication by using Data Domain administration tools.

Note: Do not configure Data Domain replication to replicate data to another Data Domain system that is configured for use with Avamar. When you use Data Domain replication, the replicated data does not refer to the associated remote Avamar server.

Chapter 8: Enterprise Hybrid Cloud on Converged Infrastructure Systems

This chapter presents the following topics:
- Enterprise Hybrid Cloud on VxRail Appliances
- Enterprise Hybrid Cloud on VxRack Systems
- Enterprise Hybrid Cloud on VxBlock Systems

Enterprise Hybrid Cloud on VxRail Appliances

Enterprise Hybrid Cloud features supported on a VxRail Appliance

Table 21 details the Enterprise Hybrid Cloud features that are supported on a VxRail Appliance.

Table 21. Characteristics of Enterprise Hybrid Cloud on VxRail Appliances
  EHC Services
    Supported:     Single-site protection, BaaS, Encryption as a service
    Not supported: Continuous availability protection, RecoverPoint for Virtual Machines-based DR protection, Site Recovery Manager-based DR protection
  Cluster Types
    Supported:     VS1S
    Not supported: LC1S, CA1S, CA2S, DR2S
  Backup Types
    Supported:     1C1VC
    Not supported: 2C1VC, 2C2VC, MC2VC

Architecture overlay: Single-site protection (VxRail Appliance)

When Enterprise Hybrid Cloud is deployed on a VxRail Appliance with single-site protection:
- The Core, NEI, and Automation management pod functions are all hosted on the VxRail Appliance cluster.
- The Data Protection Pod components are distributed as follows: Avamar Virtual Edition appliance(s) reside on the VxRail cluster, and Data Domain virtual appliance(s) reside on Dell PowerEdge servers.
- The Enterprise Hybrid Cloud management platform is auto-installed on the customer site using the Enterprise Hybrid Cloud Automation Tool.
- Tenant workloads also reside on the VxRail Appliance cluster.

Figure 38 shows how Enterprise Hybrid Cloud is overlaid on a VxRail Appliance when the single-site protection services are deployed.

Figure 38. Architecture overlay: Single-site protection with VxRail Appliance

Rules and considerations (VxRail Appliance)

The following rules and considerations are relevant to Enterprise Hybrid Cloud deployments on VxRail Appliances:
- If a VxRail Appliance is required as an Enterprise Hybrid Cloud-managed vCenter endpoint, then it must also host the Cloud Management Platform.
- If the VxRail Appliance hosts the Cloud Management Platform:
  - No other VxRail Appliance platforms may exist in the same hybrid cloud instance as an Enterprise Hybrid Cloud-managed vCenter endpoint. They can be added as infrastructure-as-a-service vCenter endpoints only.
  - No VxBlock System or VxRack System platforms may exist in the same hybrid cloud instance as an Enterprise Hybrid Cloud-managed vCenter endpoint. They can be added as infrastructure-as-a-service vCenter endpoints only.
- The management platform requires the use of VMware NSX for the Automation Pod load balancing function. No other load balancing technology is supported.

Note: VMware NSX is the only load balancer technology currently supported for automated Enterprise Hybrid Cloud deployment. Automated deployment is the only supported method of installing the Cloud Management Platform on a VxRail Appliance.

- If the VxRail Appliance does not host the Cloud Management Platform, it cannot be added as an Enterprise Hybrid Cloud-managed vCenter endpoint to that hybrid cloud instance.
- When data protection (BaaS) is used with a VxRail Appliance, Avamar Virtual Appliance(s) are loaded onto the VxRail cluster, and Data Domain Virtual Appliance(s) are loaded onto distinct Dell PowerEdge server(s).

Physical connectivity considerations

Relevant considerations are:
- While it is possible to order and deploy a VxRail Appliance system with a single ToR switch, Dell EMC recommends using dual ToR switches to avoid a single point of failure in the overall system.
- VxRail Appliance groups traffic in the following categories during deployment: Management, vSphere vMotion, vSAN, and Virtual Machine. Dell EMC recommends (but does not require) isolating this traffic on separate VLANs in the VxRail Appliance.
- If you are using multiple switches, connect them via VLAN-trunked interfaces and ensure that all VLANs used for VxRail are carried across the trunk. Any additional port groups created for Enterprise Hybrid Cloud, including NSX-specific port groups, should also be added to the trunk ports.
- VxRail Appliance nodes advertise themselves on the network using the loudmouth service, which relies on multicast, so the switch must support IPv6 multicast traffic. vSAN requires IPv4 multicast. It is not necessary to enable multicast on the entire switch; it is only required for the ports supporting VxRail Appliance node traffic.
- Each VxRail Appliance node uses two 10 GbE network ports, and each port must be connected to a switch that supports IPv4 multicast and IPv6 multicast.
- To ensure that vSphere vMotion traffic does not consume all available bandwidth on the port, VxRail Appliance limits vMotion traffic using Network I/O Control (NIOC).

Enterprise Hybrid Cloud on VxRack Systems

Enterprise Hybrid Cloud features supported on a VxRack System

Table 22 details the Enterprise Hybrid Cloud features that are supported on a VxRack FLEX.

Table 22. Characteristics of Enterprise Hybrid Cloud on VxRack Systems
  EHC Services
    Supported:     Single-site protection, RecoverPoint for Virtual Machines-based DR protection, Backup as a service, Encryption as a service
    Not supported: Continuous availability protection, Site Recovery Manager-based DR protection
  Cluster Types
    Supported:     LC1S
    Not supported: CA1S, CA2S, DR2S, VS1S
  Backup Types
    Supported:     1C1VC, MC2VC
    Not supported: 2C1VC, 2C2VC

Understanding which architectural overlay to use

Use the following information to understand which of the management platform architectural overlays is relevant to your deployment:
- Single-site protection (VxRack Systems): Use this overlay when you have a single-site deployment and do not intend to provide inter-site protection for the Cloud Management Platform.
- RecoverPoint for Virtual Machines DR protection (VxRack Systems): Use this overlay when you have a multisite deployment and want to provide inter-site protection for the Cloud Management Platform via RecoverPoint for Virtual Machines.

Architecture overlay: Single-site protection (VxRack System)

When Enterprise Hybrid Cloud is deployed with single-site protection on a VxRack System:
- Dual vCenters are the default. The VxRack Management vCenter is provided with the VxRack System and hosts native management applications. The second, production vCenter is used in the Enterprise Hybrid Cloud Cloud vCenter role.
- Core and NEI Pod functions are deployed and configured in the VCE factory, with the exception of the Automation Pod NSX load balancer.
- The Enterprise Hybrid Cloud NEI Pod function is split across the VXMA cluster and Edge clusters. The Edge cluster is deployed on Dell 630/730 servers.
- Automation Pod components consume production blades.
They are deployed in the VCE factory but are configured onsite. Figure 39 shows how Enterprise Hybrid Cloud and VCE components overlay in this configuration.

Figure 39. Architecture overlay: Single-site protection with VxRack Systems

Architecture overlay: RecoverPoint for Virtual Machines DR protection (VxRack Systems)

When Enterprise Hybrid Cloud is deployed with RecoverPoint for Virtual Machines disaster recovery protection on a VxRack System:
- Dual vCenters are the default. The VxRack Management vCenter is provided with the VxRack System and hosts native management applications. The second, production vCenter is used in the Enterprise Hybrid Cloud Cloud vCenter role.
- Core and NEI Pod functions are deployed and configured in the VCE factory, with the exception of the Automation Pod NSX load balancer.
- The Enterprise Hybrid Cloud NEI Pod function is split across the VXMA cluster and Edge clusters. The Edge cluster is deployed on Dell 630/730 servers.
- Automation Pod components consume production blades. They are deployed in the VCE factory but are configured onsite.
- RecoverPoint for Virtual Machines is required for virtual machine replication.

- Reserved compute capacity in the secondary site for Automation Pod failover is required.
- Virtual RecoverPoint Appliances (vRPAs) are deployed to tenant resource pods and scale in line with the number of protected virtual machines.

Figure 40 shows how Enterprise Hybrid Cloud and VCE components overlay in this configuration.

Figure 40. Architecture overlay: RecoverPoint for Virtual Machines DR protection with VxRack Systems

Rules and considerations (VxRack Systems)

The following rules and considerations are relevant to Enterprise Hybrid Cloud deployments on VxRack Systems:
- If a VxRack System is to partner with another converged infrastructure platform to provide the RecoverPoint for Virtual Machines-based disaster recovery protection service, then both the VxRack System production vCenter and its partner's production vCenter must co-exist in the same VMware SSO domain.

Note: To avoid on-site remediation of the partner VxRack System, ideally both VxRack Systems should be installed in-factory.

- The VxRack System management vCenters may remain in distinct VMware SSO domains if required.
- VxRack Systems cannot be partnered with VxRail Appliances for the RecoverPoint for Virtual Machines-based disaster recovery protection service, because Enterprise Hybrid Cloud does not currently support that protection service running on VxRail.

Physical connectivity considerations

VxRack Systems were designed with NSX in mind, so configuring NSX on VxRack Systems for Enterprise Hybrid Cloud is relatively straightforward. Relevant considerations are:
- VxRack Systems are built with ToR switches that are configured for Layer 3, which is ideal for NSX deployments.
- Because their ToR switches are configured for Layer 3, VxRack Systems are not normally deployed with virtual port channel connections between the ToR switches and the customer's switches.
- When deploying NSX on VxRack Systems, some servers must be dedicated to the Edge cluster. This Edge cluster forms the physical-to-virtual demarcation point between NSX and the customer's physical network. All north-south traffic flows through the ESGs in the Edge cluster.
- Dell 630/730 servers are used to provide 10 GbE connectivity to the ToR switches.

Enterprise Hybrid Cloud on VxBlock Systems

Enterprise Hybrid Cloud features supported on a VxBlock System

Table 23 provides details on the Enterprise Hybrid Cloud features that are supported on a VxBlock System.

Table 23. Characteristics of Enterprise Hybrid Cloud on VxBlock Systems
- EHC services supported: Single-site protection, Backup as a service, Encryption as a service, Continuous availability protection, RecoverPoint for Virtual Machines-based DR protection, Site Recovery Manager-based DR protection. Not supported: None.
- Cluster types supported: LC1S, CA1S, CA2S, DR2S, 1C1VC, 2C2VC, 2C1VC, MC2VC.
- Backup types supported: VS1S. Not supported: None.

Understanding which architectural overlay to use

Use the following information to understand which of the management platform architectural overlays is relevant to your deployment:
- Single-site protection (VxBlock Systems): Use this overlay when you have a single-site deployment and do not intend to provide inter-site protection for the Cloud Management Platform.
- Continuous availability protection (VxBlock Systems): Use this overlay when you have a multisite deployment and want to provide inter-site protection for the Cloud Management Platform via continuous availability.
- RecoverPoint for Virtual Machines DR protection (VxBlock Systems): Use this overlay when you have a multisite deployment and want to provide inter-site protection for the Cloud Management Platform via RecoverPoint for Virtual Machines.
- Site Recovery Manager DR protection (VxBlock Systems): Use this overlay when you have a multisite deployment and want to provide inter-site protection for the Cloud Management Platform via Site Recovery Manager.

Architecture overlay: Single-site protection (VxBlock Systems)

When Enterprise Hybrid Cloud is deployed with single-site protection on VxBlock Systems:
- The high-performance Advanced Management Platform (AMP-2HAP) option from VCE is required. AMP-2HAP ensures sufficient compute resources exist to run all native AMP management components and the Enterprise Hybrid Cloud Core Pod components.
- A single vCenter is the default, as the Enterprise Hybrid Cloud Cloud vCenter role is provided by the VCE AMP vCenter.
- Core and NEI Pod functions are deployed and configured in-factory.
- The Enterprise Hybrid Cloud NEI Pod function is split across the AMP and Edge clusters.
- The Edge cluster uses VCE UCS C-Series servers (configurable based on bandwidth requirements).
- Automation Pod components consume production blades. They are deployed in-factory but are configured on-site.

Figure 41 shows how Enterprise Hybrid Cloud and VCE components overlay in this configuration.

Figure 41. Architecture overlay: Single-site protection with VxBlock Systems

Architecture overlay: Continuous availability protection (VxBlock Systems)

When Enterprise Hybrid Cloud is deployed with continuous availability protection on VxBlock Systems:
- The high-performance AMP-2HAP option is required. AMP-2HAP ensures sufficient compute resources exist to run all native AMP management components and the Enterprise Hybrid Cloud Core Pod components.
- A single vCenter is the default, as the Enterprise Hybrid Cloud Cloud vCenter is provided by the VCE AMP vCenter.
- Core and NEI Pod functions are deployed and configured in-factory. Selected components are deployed on each site and use local storage. The remaining components are shared across sites and use VPLEX storage.
- The Enterprise Hybrid Cloud NEI Pod function is split across the AMP and Edge clusters.
- The Edge cluster uses VCE UCS C-Series servers (configurable based on bandwidth requirements).
- Automation Pod components consume production blades. They are deployed in-factory but are configured on-site.
- VPLEX Metro is required for storage replication.

Figure 42 shows how Enterprise Hybrid Cloud and VCE components overlay in this configuration.

Figure 42. Architecture overlay: Continuous Availability Site Protection with VxBlock Systems

Architecture overlay: RecoverPoint for Virtual Machines DR protection (VxBlock Systems)

When Enterprise Hybrid Cloud is deployed with RecoverPoint for Virtual Machines disaster recovery protection on a VxBlock System:
- The high-performance AMP-2HAP option from VCE is required. AMP-2HAP ensures sufficient compute resources exist to run all native AMP management components as well as the Enterprise Hybrid Cloud Core Pod components.
- A single vCenter per site is the default, as the Enterprise Hybrid Cloud Cloud vCenter role is provided by the VCE AMP vCenter.
- Core and NEI Pod functions are deployed and configured in the VCE factory.
- The Enterprise Hybrid Cloud NEI Pod function is split across the AMP and Edge clusters.
- The Edge cluster uses VCE UCS C-Series servers (configurable based on bandwidth requirements).
- Automation Pod components consume production blades. They are deployed in the VCE factory but are configured on-site.
- RecoverPoint for Virtual Machines is required for virtual machine replication.
- Reserved compute capacity on the secondary site for Automation Pod failover is required.
- Virtual RecoverPoint Appliances (vRPAs) are deployed to tenant resource pods and scale in line with the number of protected virtual machines.

Figure 43 shows how Enterprise Hybrid Cloud and VCE components overlay in this configuration.

Figure 43. Architecture overlay: RecoverPoint for Virtual Machines DR protection with VxBlock Systems

Architecture overlay: Site Recovery Manager DR protection (VxBlock Systems)

When Enterprise Hybrid Cloud is deployed with SRM-based disaster recovery protection on a VxBlock System:
- The high-performance AMP-2HAP option from VCE is required. AMP-2HAP ensures sufficient compute resources exist to run all native AMP management components as well as the Enterprise Hybrid Cloud Core Pod components.
- A single vCenter per site is the default, as the Enterprise Hybrid Cloud Cloud vCenter role is provided by the VCE AMP vCenter.
- Core and NEI Pod functions are deployed and configured in the VCE factory.
- The Enterprise Hybrid Cloud NEI Pod function is split across the AMP and Edge clusters.
- The Edge cluster uses VCE UCS C-Series servers (configurable based on bandwidth requirements).
- Automation Pod components consume production blades. They are deployed in the VCE factory but are configured on-site.
- RecoverPoint is required for storage replication.
- Reserved compute capacity on the secondary site for Automation Pod failover is required.

Figure 44 shows how Enterprise Hybrid Cloud and VCE components overlay in this configuration.

Figure 44. Architecture overlay: Site Recovery Manager DR protection with VxBlock Systems

Rules and considerations (VxBlock Systems)

The following rules and considerations are relevant to Enterprise Hybrid Cloud deployments on VxBlock Systems:
- If a VxBlock System is to partner with another converged infrastructure platform to provide the RecoverPoint for Virtual Machines or Site Recovery Manager-based disaster recovery protection service, both the VxBlock System vCenter and its partner's vCenter must coexist in the same VMware SSO domain.
  Note: To avoid on-site remediation of partner VxBlock Systems, ideally both VxBlock Systems should be installed in-factory.
- VxBlock Systems cannot be partnered with VxRail Appliances for the RecoverPoint for Virtual Machines-based disaster recovery protection service, because Enterprise Hybrid Cloud does not currently support RecoverPoint for Virtual Machines-based disaster recovery protection running on VxRail.

Physical connectivity considerations

Dell EMC VxBlock Systems were designed with NSX in mind, so configuring NSX on a VxBlock System for Enterprise Hybrid Cloud is relatively straightforward. Relevant considerations are:
- VxBlock Systems are built with ToR switches that are configured for Layer 3, which is ideal for NSX deployments.
- VxBlock Systems already have a disjointed Layer 2 (DJL2) configuration between the fabric interconnects and the ToR switches to support dynamic routing.
- Because their ToR switches are configured for Layer 3, VxBlock Systems are not normally deployed with virtual port channel connections between the ToR switches and the customer's switches.
- When deploying NSX on VxBlock Systems, some servers must be dedicated to the Edge cluster. This Edge cluster forms the physical-to-virtual demarcation point between NSX and the customer's physical network. All north-south traffic flows through the Edge cluster.
- VxBlock Systems support C-Series servers for use in the Edge cluster. C-Series servers have an excellent throughput profile. Two types of C-Series server are offered: one that supports 10 Gb/s of throughput and one that supports 20 Gb/s, so sizing the Edge cluster is still important.
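To make the Edge cluster sizing point concrete, the following is a minimal sketch of the arithmetic involved. It assumes the two per-server throughput figures mentioned in the text (10 Gb/s and 20 Gb/s) and an N+1 redundancy policy; the function name, signature, and redundancy choice are illustrative, not part of the Enterprise Hybrid Cloud tooling.

```python
import math

def edge_servers_needed(target_gbps: float, per_server_gbps: float,
                        redundancy: int = 1) -> int:
    """Number of Edge cluster servers needed to carry target_gbps of
    north-south traffic, plus `redundancy` spare servers for failure
    headroom (an N+1 policy by default)."""
    if per_server_gbps <= 0:
        raise ValueError("per_server_gbps must be positive")
    return math.ceil(target_gbps / per_server_gbps) + redundancy

# 35 Gb/s of north-south traffic on the 10 Gb/s model: ceil(35/10) + 1
print(edge_servers_needed(35, 10))  # 5
# The same traffic on the 20 Gb/s model: ceil(35/20) + 1
print(edge_servers_needed(35, 20))  # 3
```

Even with the higher-throughput model, the required server count changes with the traffic target, which is why the text notes that sizing the Edge cluster remains important.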

Chapter 9 Ecosystem Interactions

This chapter presents the following topics:
- Single-site ecosystems
- Continuous availability ecosystem
- Disaster recovery ecosystems

Single-site ecosystems

Figure 45 shows how the concepts in this guide combine under the single-site protection service.

Figure 45. Single-site protection ecosystem

Continuous availability ecosystem

Figure 46 shows how the concepts in this guide combine under the Continuous Availability (Dual-site) protection service.

Figure 46. Continuous Availability (Dual-site) protection ecosystem

Disaster recovery ecosystems

Figure 47 shows how the concepts in this guide combine under the RecoverPoint for Virtual Machines-based disaster recovery protection service.

Figure 47. RecoverPoint for Virtual Machines-based disaster recovery protection ecosystem

Figure 48 shows how the concepts in this guide combine under the SRM-based disaster recovery protection service.

Figure 48. SRM-based disaster recovery protection ecosystem

Chapter 10 Maximums, Rules, Best Practices, and Restrictions

This chapter presents the following topics:
- Overview
- Maximums
- VMware Platform Services Controller rules
- Active Directory
- VMware vRealize tenants and business groups
- EMC ViPR tenant and projects rules
- Bulk import of virtual machines
- Resource sharing
- Data protection considerations
- RecoverPoint for Virtual Machines best practices
- Software resources
- Sizing guidance
- Restrictions
- Component options

Overview

This chapter describes the maximums, rules, best practices, restrictions, and dependencies between Enterprise Hybrid Cloud components and their constructs, outlining how they influence the supported configurations within the cloud.

Maximums

vCenter maximums

Table 24 shows the maximums related to vCenter endpoints in Enterprise Hybrid Cloud.

Table 24. vCenters per Enterprise Hybrid Cloud instance

  Total number of                                                        Maximum
  vCenter endpoints with full Enterprise Hybrid Cloud services
  (managed vCenter endpoints)                                            4*
  vCenter endpoints (managed or IaaS-only) in a single SSO domain        10
  vCenter endpoints configured as vRealize Automation endpoints          10
  vCenters that may offer continuous availability protection             4
  vCenter pairs that may offer SRM-based disaster recovery protection    1
  vCenter pairs that may offer RecoverPoint for Virtual Machines-based
  disaster recovery protection                                           2

  * All managed vCenter endpoints must be part of the same SSO domain (also known as the Enterprise Hybrid Cloud-managed SSO domain).

Table 25 shows the maximums that apply per vCenter endpoint.

Table 25. Per-vCenter maximums

  Total number of                                    Maximum
  Enterprise Hybrid Cloud sites                      2
  Hosts per cluster                                  64
  Virtual machines per cluster                       8,000
  Hosts per vCenter Server                           1,000
  Powered-on virtual machines per vCenter Server     10,000
  Registered virtual machines per vCenter Server     15,000
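The per-instance vCenter limits above can be expressed as a quick sanity check when planning a topology. This is an illustrative sketch only: the limit values are transcribed from Table 24, but the dictionary keys and helper function are hypothetical, not part of any Enterprise Hybrid Cloud tooling.

```python
# Per-instance vCenter limits, transcribed from Table 24.
EHC_VCENTER_LIMITS = {
    "managed_endpoints": 4,          # full EHC services, single managed SSO domain
    "endpoints_per_sso_domain": 10,  # managed or IaaS-only endpoints
    "vra_endpoints": 10,             # configured as vRealize Automation endpoints
    "ca_vcenters": 4,                # offering continuous availability protection
    "srm_dr_pairs": 1,               # vCenter pairs offering SRM-based DR
    "rp4vm_dr_pairs": 2,             # vCenter pairs offering RP4VM-based DR
}

def violations(planned: dict) -> list:
    """Return (limit_name, planned_count, maximum) for every exceeded limit."""
    return [(name, planned[name], limit)
            for name, limit in EHC_VCENTER_LIMITS.items()
            if planned.get(name, 0) > limit]

# A plan with five managed vCenter endpoints exceeds the maximum of four:
print(violations({"managed_endpoints": 5, "srm_dr_pairs": 1}))
# [('managed_endpoints', 5, 4)]
```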

SSO maximums

Table 26 shows the maximums that apply to SSO domains used in Enterprise Hybrid Cloud.

Table 26. SSO maximums

  Total number of        Maximum
  Managed SSO domains    1
  SSO domains            10*

  * External SSO domains (if used) do not contribute to this number.

Site maximums

Table 27 shows the maximums that apply to Enterprise Hybrid Cloud sites.

Table 27. Site maximums

  Total number of                                                             Maximum
  Standalone sites per Enterprise Hybrid Cloud instance                       4
  Sites where all managed vCenter endpoints provide continuous availability
  protection                                                                  8

Note: Enterprise Hybrid Cloud supports a maximum of four sites when using single-site protection or DR protection. This may be increased to eight sites when all the vCenters in the solution have two sites associated with them. The number of sites supported is also influenced by the number of vCenters per site. For instance, if four vCenters are provisioned on the first site, then the vCenter maximums take precedence and only one site may be supported.

PSC maximums

Table 28 shows the maximums that apply for Platform Services Controllers (PSCs).

Table 28. PSC maximums

  Total number of                                                     Maximum
  PSCs per vSphere domain                                             8
  PSCs per site, behind a load balancer                               4
  Active Directory or OpenLDAP groups per user (for best performance) 1,015
  VMware Solutions* connected to a single PSC                         4
  VMware Solutions* in a vSphere domain                               10

  * VMware Solutions is a VMware term. In the case of Enterprise Hybrid Cloud, it equates to VMware vCenter Server instances.
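The note above describes how the four-managed-vCenter ceiling can cap the site count before the site maximums do. The following sketch encodes that interplay for the standalone (single-site or DR protection) case; the function name and its integer-division logic are an illustrative reading of the note, using the limits of four managed vCenters and four standalone sites, not an official formula.

```python
def max_standalone_sites(vcenters_per_site: int,
                         max_managed_vcenters: int = 4,
                         max_sites: int = 4) -> int:
    """Standalone sites supportable when each site hosts
    vcenters_per_site managed vCenter endpoints: whichever runs out
    first, the site maximum or the managed vCenter maximum."""
    if vcenters_per_site < 1:
        raise ValueError("each site needs at least one managed vCenter")
    return min(max_sites, max_managed_vcenters // vcenters_per_site)

# One managed vCenter per site allows the full four standalone sites;
# four managed vCenters on the first site leave room for only one site.
print(max_standalone_sites(1))  # 4
print(max_standalone_sites(4))  # 1
```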

Virtual machine maximums

Table 29 shows the maximums that apply for virtual machines.

Table 29. Virtual machine maximums

  Total number of                                                        Maximum
  Virtual machines managed by Enterprise Hybrid Cloud                    40,000
  Powered-on virtual machines protected by CA                            10,000
  Registered virtual machines protected by CA                            15,000
  Virtual machines protected by SRM-based DR                             5,000
  Virtual machines protected by RecoverPoint for Virtual Machines-based
  DR (individually recoverable) per vCenter                              1,024*
  Virtual machines protected by RecoverPoint for Virtual Machines-based
  DR (using multi-machine consistency groups) per vCenter                2,048
  Virtual machines protected by RecoverPoint for Virtual Machines-based
  DR (individually recoverable) per Enterprise Hybrid Cloud instance     2,048*
  Virtual machines protected by RecoverPoint for Virtual Machines-based
  DR (using multi-machine consistency groups)                            4,096

  * If the Enterprise Hybrid Cloud Automation Pod is also being protected using RecoverPoint for Virtual Machines, a RecoverPoint vRPA cluster must be dedicated to this purpose, effectively reducing the number of individually protected virtual machine workloads by 128.

Latency maximums

Table 30 shows the maximums that apply to latencies in the solution.

Table 30. Latency maximums

  Context                                                  Maximum
  VPLEX uniform (cross-connect) configuration              5 ms
  VPLEX non-uniform (non-cross-connect) configuration      10 ms
  Cross-vCenter NSX network latency                        150 ms
  RecoverPoint for Virtual Machines latency between sites  200 ms
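The asterisked footnote on the virtual machine maximums carries a capacity implication worth making explicit: protecting the Automation Pod with RecoverPoint for Virtual Machines consumes a dedicated vRPA cluster, which reduces the individually recoverable workload capacity by 128. A minimal worked example, with the constants taken from Table 29 and its footnote (the helper itself is hypothetical):

```python
def rp4vm_individual_capacity(automation_pod_protected: bool,
                              per_instance_max: int = 2048,
                              dedicated_cluster_cost: int = 128) -> int:
    """Individually recoverable VM capacity per Enterprise Hybrid Cloud
    instance, accounting for a vRPA cluster dedicated to protecting the
    Automation Pod when that option is in use."""
    if automation_pod_protected:
        return per_instance_max - dedicated_cluster_cost
    return per_instance_max

print(rp4vm_individual_capacity(False))  # 2048
print(rp4vm_individual_capacity(True))   # 1920
```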

Site Recovery Manager maximums

Protection maximums

Table 31 shows the maximums that apply for SRM-protected resources.

Table 31. SRM protection maximums

  Total number of                                                            Maximum
  Virtual machines configured for protection using array-based replication   5,000
  Virtual machines per protection group                                      500
  Protection groups                                                          250
  Recovery plans                                                             250
  Protection groups per recovery plan                                        250
  Virtual machines per recovery plan                                         2,000
  Replicated datastores (using array-based replication) and >1 RecoverPoint
  cluster                                                                    255

Recovery maximums

Table 32 shows the maximums that apply for SRM recovery plans.

Table 32. SRM recovery maximums

  Total number of                                                          Maximum
  Concurrently executing recovery plans                                    10
  Concurrently recovering virtual machines using array-based replication   2,000

Storage maximums

Table 33 indicates the storage maximums with SRM-based DR in Enterprise Hybrid Cloud, when all other maximums are taken into account.

Table 33. Implied Enterprise Hybrid Cloud storage maximums

  Total number of                                                Maximum
  DR-enabled datastores per RecoverPoint consistency group       1
  DR-enabled datastores per RecoverPoint cluster                 128
  DR-enabled datastores per Enterprise Hybrid Cloud environment  250

To ensure maximum protection for DR-enabled vSphere clusters, Enterprise Hybrid Cloud STaaS workflows create each LUN in its own RecoverPoint consistency group. This ensures that ongoing STaaS provisioning operations have no effect on either the synchronized state of existing LUNs or the history of restore points that RecoverPoint maintains for those LUNs. Because there is a limit of 128 consistency groups per RecoverPoint cluster, there is also a limit of 128 Enterprise Hybrid Cloud STaaS-provisioned LUNs per RecoverPoint cluster. To extend scalability further, additional RecoverPoint clusters are required.

Each new datastore is added to its own SRM protection group. Because there is a limit of 250 protection groups per SRM installation, the total number of datastores in a DR environment is limited to 250, irrespective of the number of RecoverPoint clusters deployed.

RecoverPoint cluster limitations

There is a limit of 64 consistency groups per RecoverPoint appliance and 128 consistency groups per RecoverPoint cluster. Therefore, the number of nodes deployed in the RecoverPoint cluster should be sized to allow appropriate headroom for surviving appliances to take over the workload of a failed appliance.
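The storage scaling argument above reduces to a simple formula: each STaaS-provisioned, DR-enabled datastore consumes one RecoverPoint consistency group (128 per cluster) and one SRM protection group (250 per installation), so the SRM limit caps the environment no matter how many RecoverPoint clusters are added. A sketch of that arithmetic (the function name is illustrative; the constants come from the text):

```python
def max_dr_datastores(rp_clusters: int,
                      cgs_per_rp_cluster: int = 128,
                      srm_protection_groups: int = 250) -> int:
    """DR-enabled datastore ceiling for a given number of RecoverPoint
    clusters: the lesser of the consistency-group budget and the SRM
    protection-group budget."""
    return min(rp_clusters * cgs_per_rp_cluster, srm_protection_groups)

print(max_dr_datastores(1))  # 128 - limited by consistency groups
print(max_dr_datastores(2))  # 250 - limited by SRM protection groups
print(max_dr_datastores(3))  # 250 - extra RecoverPoint clusters add nothing
```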

RecoverPoint for Virtual Machines maximums

Table 34 shows the maximums that apply for RecoverPoint for Virtual Machines in the solution.

Table 34. RecoverPoint for Virtual Machines maximums

  Total number of                                                        Maximum
  Enterprise Hybrid Cloud vCenter endpoints connected to a vRPA cluster  1*
  vRPA clusters connected to a vCenter Server instance                   8
  Best-practice number of vRPA clusters per ESXi cluster                 4*
  Virtual machines protected by a single Enterprise Hybrid Cloud LC1S
  cluster                                                                512**
  Virtual machines protected in a vCenter                                2,048
  Protected VMDKs per vRPA cluster                                       2,048
  Protected VMDKs per ESXi cluster                                       5,000
  ESXi hosts with a splitter that can be attached to a vRPA cluster      32
  Enterprise Hybrid Cloud LC1S clusters connected to a vRPA cluster      1*
  Consistency groups per vRPA cluster                                    128
  Virtual machines per consistency group                                 128
  Replica sites per RecoverPoint for Virtual Machines system             1*
  Replica copies per consistency group                                   1*
  Capacity of a single VMDK                                              10 TB
  Capacity of a replicated virtual machine                               40 TB

  * These figures are Enterprise Hybrid Cloud best practices based on architecture design and differ from the numbers that RecoverPoint for Virtual Machines natively supports.
  ** Based on the best practice of four vRPA clusters per ESXi cluster and 128 consistency groups per vRPA cluster.
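The double-asterisked 512-VM figure in Table 34 follows directly from the two best-practice numbers cited in its footnote: four vRPA clusters per ESXi cluster, each with 128 consistency groups. A one-line derivation (the helper is illustrative only):

```python
def lc1s_protected_vm_capacity(vrpa_clusters: int = 4,
                               cgs_per_vrpa_cluster: int = 128) -> int:
    """Best-practice individually recoverable VM capacity of a single
    Enterprise Hybrid Cloud LC1S cluster: one consistency group per VM,
    across the recommended number of vRPA clusters."""
    return vrpa_clusters * cgs_per_vrpa_cluster

print(lc1s_protected_vm_capacity())  # 4 * 128 = 512
```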

VMware Platform Services Controller rules

This solution uses VMware Platform Services Controller (PSC) in place of the vRealize Automation Identity Appliance. A VMware PSC is deployed on a dedicated virtual machine in each Core Pod (multiple Core Pods in multisite environments), and an additional PSC (Auto-PSC) is deployed in the Automation Pod. The Auto-PSC provides authentication services to all the Automation Pod management components requiring PSC integration. This configuration enables authentication services to fail over with the other automation components and enables a seamless transition between sites: there is no need to change IP addresses, Domain Name System (DNS) records, or management component settings.

Platform Services Controller domains

Enterprise Hybrid Cloud uses one or more SSO domains, depending on the management platform deployed. PSC instances are configured within those domains according to the following model:
- External SSO domain (external core function only):
  - First external PSC
  - Additional external PSCs (multisite/multi-vCenter configurations)
- Managed (also known as Cloud) single sign-on (SSO) domain (all topologies):
  - First Cloud PSC
  - Automation Pod PSC
  - Additional Cloud PSCs (multisite/multi-vCenter configurations only)
- IaaS-only SSO domains:
  - First IaaS-only PSC
  - Additional IaaS-only PSCs as required

Note: An external SSO domain is only used in Enterprise Hybrid Cloud where an external vCenter is used to host the Core Pod and Cloud vCenter. This is the case with VxRack Systems and in Enterprise Hybrid Cloud instances that were upgraded from the Distributed Management Model used in older releases.
Figure 49 shows an example of how the PSC instances and domains could be configured in an environment with the following characteristics:
- Three sites, each with their own Cloud vCenter endpoint
- Site A is the primary location for the cloud management platform
- Inter-site protection of the cloud management platform is required between Site A and Site B
- No external PSC domain is required

Note: The partner replication relationships within the SSO domain shown in Figure 49 are permitted to be in different sequences, depending on when each geographical location was deployed. The key factor is that a replication ring is formed between all the PSC instances.

Figure 49. SSO domain and vCenter PSC instance relationships

First PSC instance in each SSO domain

The first VMware PSC in each SSO domain is deployed by creating a new vCenter SSO domain and enabling it to participate in the default vCenter SSO namespace (vsphere.local). This primary PSC server supports identity sources such as Active Directory, OpenLDAP, local operating system users, and SSO-embedded users and groups. This is the default deployment mode when you install VMware PSC.

Subsequent PSC instances in each single sign-on domain

Additional VMware PSC instances are installed by joining the new PSC to an existing SSO domain, making them part of the existing domain but in a new SSO site. When you create PSC servers in this way, the deployed PSC instances become members of the same authentication namespace as the first PSC instance. This deployment mode should only be used after you have deployed the first PSC instance in each SSO domain.

In vSphere 6.0, VMware PSC SSO data (such as policies, solution users, application users, and identity sources) is automatically replicated between each PSC instance in the same authentication namespace every 30 seconds.

Active Directory

Active Directory considerations

All vCenter endpoints (managed and IaaS-only) must be integrated with the same Active Directory domain.


More information

Modernize Your Data Center With Hyper Converged Platforms

Modernize Your Data Center With Hyper Converged Platforms Modernize Your Data Center With Hyper Converged Platforms Mr. Bui Minh Cong Presales Manager techdata.com Industrial Evolution Level of complexity 3. Industrial revolution Through the use of electronics

More information

EMC HYBRID CLOUD 2.5 WITH VMWARE

EMC HYBRID CLOUD 2.5 WITH VMWARE Solution Guide EMC HYBRID CLOUD 2.5 WITH VMWARE EMC Solutions Abstract This Solution Guide provides an introduction to data protection through continuous availability. The guide enables you to begin the

More information

[VMICMV6.5]: VMware vsphere: Install, Configure, Manage [V6.5]

[VMICMV6.5]: VMware vsphere: Install, Configure, Manage [V6.5] [VMICMV6.5]: VMware vsphere: Install, Configure, Manage [V6.5] Length Delivery Method : 5 Days : Instructor-led (Classroom) Course Overview This five-day course features intensive hands-on training that

More information

TITLE. the IT Landscape

TITLE. the IT Landscape The Impact of Hyperconverged Infrastructure on the IT Landscape 1 TITLE Drivers for adoption Lower TCO Speed and Agility Scale Easily Operational Simplicity Hyper-converged Integrated storage & compute

More information

vsan Disaster Recovery November 19, 2017

vsan Disaster Recovery November 19, 2017 November 19, 2017 1 Table of Contents 1. Disaster Recovery 1.1.Overview 1.2.vSAN Stretched Clusters and Site Recovery Manager 1.3.vSAN Performance 1.4.Summary 2 1. Disaster Recovery According to the United

More information

DELL EMC VxRAIL vsan STRETCHED CLUSTERS PLANNING GUIDE

DELL EMC VxRAIL vsan STRETCHED CLUSTERS PLANNING GUIDE WHITE PAPER - DELL EMC VxRAIL vsan STRETCHED CLUSTERS PLANNING GUIDE ABSTRACT This planning guide provides best practices and requirements for using stretched clusters with VxRail appliances. April 2018

More information

Next-Generation Data Center Interconnect Powered by the Adaptive Cloud Fabric

Next-Generation Data Center Interconnect Powered by the Adaptive Cloud Fabric Solution Overview Next-Generation Interconnect Powered by the Adaptive Cloud Fabric Increases availability and simplifies the stretching and sharing of resources across distributed data centers Highlights

More information

The Impact of Hyper- converged Infrastructure on the IT Landscape

The Impact of Hyper- converged Infrastructure on the IT Landscape The Impact of Hyperconverged Infrastructure on the IT Landscape Focus on innovation, not IT integration BUILD Consumes valuables time and resources Go faster Invest in areas that differentiate BUY 3 Integration

More information

VMware vsphere 6.5: Install, Configure, Manage (5 Days)

VMware vsphere 6.5: Install, Configure, Manage (5 Days) www.peaklearningllc.com VMware vsphere 6.5: Install, Configure, Manage (5 Days) Introduction This five-day course features intensive hands-on training that focuses on installing, configuring, and managing

More information

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0 Storage Considerations for VMware vcloud Director Version 1.0 T e c h n i c a l W H I T E P A P E R Introduction VMware vcloud Director is a new solution that addresses the challenge of rapidly provisioning

More information

Native vsphere Storage for Remote and Branch Offices

Native vsphere Storage for Remote and Branch Offices SOLUTION OVERVIEW VMware vsan Remote Office Deployment Native vsphere Storage for Remote and Branch Offices VMware vsan is the industry-leading software powering Hyper-Converged Infrastructure (HCI) solutions.

More information

Leveraging cloud for real business transformation

Leveraging cloud for real business transformation Leveraging cloud for real business transformation Session 5000 - Harry Meier GLOBAL SPONSORS Silicon valley is coming. They all want to eat our lunch. Jamie Dimon, CEO JPMC No industry is immune to disruption

More information

VMWARE CLOUD FOUNDATION: INTEGRATED HYBRID CLOUD PLATFORM WHITE PAPER NOVEMBER 2017

VMWARE CLOUD FOUNDATION: INTEGRATED HYBRID CLOUD PLATFORM WHITE PAPER NOVEMBER 2017 : INTEGRATED HYBRID CLOUD PLATFORM WHITE PAPER NOVEMBER 2017 Table of Contents Executive Summary 3 A Single Architecture for Hybrid Cloud 4 Introducing VMware Cloud Foundation 4 Deploying on Premises 6

More information

VMware vcloud Air Key Concepts

VMware vcloud Air Key Concepts vcloud Air This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document,

More information

TOP REASONS TO CHOOSE DELL EMC OVER VEEAM

TOP REASONS TO CHOOSE DELL EMC OVER VEEAM HANDOUT TOP REASONS TO CHOOSE DELL EMC OVER VEEAM 10 This handout overviews the top ten reasons why customers choose Data Protection from Dell EMC over Veeam. Dell EMC has the most comprehensive data protection

More information

vsan Remote Office Deployment January 09, 2018

vsan Remote Office Deployment January 09, 2018 January 09, 2018 1 1. vsan Remote Office Deployment 1.1.Solution Overview Table of Contents 2 1. vsan Remote Office Deployment 3 1.1 Solution Overview Native vsphere Storage for Remote and Branch Offices

More information

Dell EMC vsan Ready Nodes for VDI

Dell EMC vsan Ready Nodes for VDI Dell EMC vsan Ready Nodes for VDI Integration of VMware Horizon on Dell EMC vsan Ready Nodes April 2018 H17030.1 Deployment Guide Abstract This deployment guide provides instructions for deploying VMware

More information

IBM Cloud IBM Cloud for VMware Solutions Zeb Ahmed Senior Offering Manager and BCDR Leader VMware on IBM Cloud VMworld 2017 Content: Not for publicati

IBM Cloud IBM Cloud for VMware Solutions Zeb Ahmed Senior Offering Manager and BCDR Leader VMware on IBM Cloud VMworld 2017 Content: Not for publicati LHC2432BU IBM Cloud for VMware Solutions Zeb Ahmed Senior Offering Manager and BCDR Leader VMware on IBM Cloud #VMworld IBM Cloud IBM Cloud for VMware Solutions Zeb Ahmed Senior Offering Manager and BCDR

More information

Migration. 22 AUG 2017 VMware Validated Design 4.1 VMware Validated Design for Software-Defined Data Center 4.1

Migration. 22 AUG 2017 VMware Validated Design 4.1 VMware Validated Design for Software-Defined Data Center 4.1 22 AUG 2017 VMware Validated Design 4.1 VMware Validated Design for Software-Defined Data Center 4.1 You can find the most up-to-date technical documentation on the VMware Web site at: https://docs.vmware.com/

More information

Detail the learning environment, remote access labs and course timings

Detail the learning environment, remote access labs and course timings Course Duration: 4 days Course Description This course has been designed as an Introduction to VMware for IT Professionals, but assumes that some labs have already been developed, with time always at a

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND ISCSI INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND ISCSI INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND ISCSI INFRASTRUCTURE Design Guide APRIL 2017 1 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

Overview Traditional converged infrastructure systems often require you to choose different systems for different applications performance, capacity,

Overview Traditional converged infrastructure systems often require you to choose different systems for different applications performance, capacity, Solution overview A New Generation of Converged Infrastructure that Improves Flexibility, Efficiency, and Simplicity Enterprises everywhere are increasingly adopting Converged Infrastructure (CI) as one

More information

Disclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme

Disclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme PBO2631BE A Base Design for Everyone s Data Center: The Consolidated VMware Validated Design (VVD) Gary Blake Senior SDDC Integration Architect garyjblake #VMworld #PB02631BE Disclaimer This presentation

More information

VxRack SDDC Deep Dive:

VxRack SDDC Deep Dive: VxRack SDDC Deep Dive: Inside VxRack SDDC Powered by VMware Cloud Foundation GLOBAL SPONSORS What is HCI? Systems design shift Hyper-converged HYPER-CONVERGED SERVERS SAN STORAGE THEN NOW 2 What is HCI?

More information

VMWARE CLOUD FOUNDATION: THE SIMPLEST PATH TO THE HYBRID CLOUD WHITE PAPER AUGUST 2018

VMWARE CLOUD FOUNDATION: THE SIMPLEST PATH TO THE HYBRID CLOUD WHITE PAPER AUGUST 2018 VMWARE CLOUD FOUNDATION: THE SIMPLEST PATH TO THE HYBRID CLOUD WHITE PAPER AUGUST 2018 Table of Contents Executive Summary 3 A Single Architecture for Hybrid Cloud 4 Introducing VMware Cloud Foundation

More information

vsan Mixed Workloads First Published On: Last Updated On:

vsan Mixed Workloads First Published On: Last Updated On: First Published On: 03-05-2018 Last Updated On: 03-05-2018 1 1. Mixed Workloads on HCI 1.1.Solution Overview Table of Contents 2 1. Mixed Workloads on HCI 3 1.1 Solution Overview Eliminate the Complexity

More information

EMC Integrated Infrastructure for VMware. Business Continuity

EMC Integrated Infrastructure for VMware. Business Continuity EMC Integrated Infrastructure for VMware Business Continuity Enabled by EMC Celerra and VMware vcenter Site Recovery Manager Reference Architecture Copyright 2009 EMC Corporation. All rights reserved.

More information

Deploying VMware Validated Design Using OSPF Dynamic Routing. Technical Note 9 NOV 2017 VMware Validated Design 4.1 VMware Validated Design 4.

Deploying VMware Validated Design Using OSPF Dynamic Routing. Technical Note 9 NOV 2017 VMware Validated Design 4.1 VMware Validated Design 4. Deploying VMware Validated Design Using PF Dynamic Routing Technical Note 9 NOV 2017 VMware Validated Design 4.1 VMware Validated Design 4.0 Deploying VMware Validated Design Using PF Dynamic Routing You

More information

EMC ENTERPRISE PRIVATE CLOUD 2.0

EMC ENTERPRISE PRIVATE CLOUD 2.0 Reference Architecture EMC ENTERPRISE PRIVATE CLOUD 2.0 Infrastructure as a service Automated provisioning and monitoring Service-driven IT operations EMC Solutions March 2014 Copyright 2013-2014 EMC Corporation.

More information

VMware vsphere Clusters in Security Zones

VMware vsphere Clusters in Security Zones SOLUTION OVERVIEW VMware vsan VMware vsphere Clusters in Security Zones A security zone, also referred to as a DMZ," is a sub-network that is designed to provide tightly controlled connectivity to an organization

More information

Converged Platforms and Solutions. Business Update and Portfolio Overview

Converged Platforms and Solutions. Business Update and Portfolio Overview Converged Platforms and Solutions Business Update and Portfolio Overview IT Drivers In Next 5 Years SCALE SCALE 30,000+ physical servers 500,000+ virtual servers Current tools won t work at this scale

More information

vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5

vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware website at:

More information

vsan Management Cluster First Published On: Last Updated On:

vsan Management Cluster First Published On: Last Updated On: First Published On: 07-07-2016 Last Updated On: 03-05-2018 1 1. vsan Management Cluster 1.1.Solution Overview Table Of Contents 2 1. vsan Management Cluster 3 1.1 Solution Overview HyperConverged Infrastructure

More information

Dell EMC Hyperconverged Portfolio: Solutions that Cover the Use Case Spectrum

Dell EMC Hyperconverged Portfolio: Solutions that Cover the Use Case Spectrum Enterprise Strategy Group Getting to the bigger truth. White Paper Dell EMC Hyperconverged Portfolio: Solutions that Cover the Use Case Spectrum Broad Portfolio that Meets the Wide Variety of Customer

More information

DELL EMC VSCALE FABRIC

DELL EMC VSCALE FABRIC NETWORK DATA SHEET DELL EMC VSCALE FABRIC FIELD-PROVEN BENEFITS Increased utilization and ROI Create shared resource pools (compute, storage, and data protection) that connect to a common, automated network

More information

vsan Security Zone Deployment First Published On: Last Updated On:

vsan Security Zone Deployment First Published On: Last Updated On: First Published On: 06-14-2017 Last Updated On: 11-20-2017 1 1. vsan Security Zone Deployment 1.1.Solution Overview Table of Contents 2 1. vsan Security Zone Deployment 3 1.1 Solution Overview VMware vsphere

More information

VPLEX & RECOVERPOINT CONTINUOUS DATA PROTECTION AND AVAILABILITY FOR YOUR MOST CRITICAL DATA IDAN KENTOR

VPLEX & RECOVERPOINT CONTINUOUS DATA PROTECTION AND AVAILABILITY FOR YOUR MOST CRITICAL DATA IDAN KENTOR 1 VPLEX & RECOVERPOINT CONTINUOUS DATA PROTECTION AND AVAILABILITY FOR YOUR MOST CRITICAL DATA IDAN KENTOR PRINCIPAL CORPORATE SYSTEMS ENGINEER RECOVERPOINT AND VPLEX 2 AGENDA VPLEX Overview RecoverPoint

More information

Architecture and Design. 22 AUG 2017 VMware Validated Design 4.1 VMware Validated Design for Management and Workload Consolidation 4.

Architecture and Design. 22 AUG 2017 VMware Validated Design 4.1 VMware Validated Design for Management and Workload Consolidation 4. 22 AUG 2017 VMware Validated Design 4.1 VMware Validated Design for Management and Workload Consolidation 4.1 You can find the most up-to-date technical documentation on the VMware Web site at: https://docs.vmware.com/

More information

Disclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme

Disclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme STO1297BE Stretched Clusters or VMware Site Recovery Manager? We Say Both! Jeff Hunter, VMware, @jhuntervmware GS Khalsa, VMware, @gurusimran #VMworld Disclaimer This presentation may contain product features

More information

DELL EMC VXRACK SYSTEM SDDC TM TECHNOLOGY OVERVIEW

DELL EMC VXRACK SYSTEM SDDC TM TECHNOLOGY OVERVIEW DELL EMC VXRACK SYSTEM SDDC TM TECHNOLOGY OVERVIEW ABSTRACT VxRack System SDDC powered by VMware Cloud Foundation is a turnkey, rack-scale, hyper-converged engineered system with fully integrated hardware

More information

VxRAIL for the ClearPath Software Series

VxRAIL for the ClearPath Software Series VxRAIL for the ClearPath Software Series Agenda Introduction to ClearPath MCP Software Series Virtualization for ClearPath MCP Software Series ClearPath MCP on VxRAIL Support ClearPath MCP Services 2016

More information

PLEXXI HCN FOR VMWARE VSAN

PLEXXI HCN FOR VMWARE VSAN PLEXXI HCN FOR VMWARE VSAN SOLUTION BRIEF Hyperconverged Network Fabric for VMware vsan Solutions FEATURED BENEFITS: Fully automated network configuration, based on VMware, drastically reduces operating

More information

VxRack SDDC Deep Dive: Inside VxRack SDDC Powered by VMware Cloud Foundation. Harry Meier GLOBAL SPONSORS

VxRack SDDC Deep Dive: Inside VxRack SDDC Powered by VMware Cloud Foundation. Harry Meier GLOBAL SPONSORS VxRack SDDC Deep Dive: Inside VxRack SDDC Powered by VMware Cloud Foundation Harry Meier GLOBAL SPONSORS Dell EMC VxRack SDDC Integrated compute, storage, and networking powered by VMware Cloud Foundation

More information

VMware Validated Design for NetApp HCI

VMware Validated Design for NetApp HCI Network Verified Architecture VMware Validated Design for NetApp HCI VVD 4.2 Architecture Design Sean Howard Oct 2018 NVA-1128-DESIGN Version 1.0 Abstract This document provides the high-level design criteria

More information

Redefine: Enterprise Hybrid Cloud

Redefine: Enterprise Hybrid Cloud Redefine: Enterprise Hybrid Cloud Enterprise Hybrid Cloud Wong Tran 2 Hybrid Clouds Will Be Pervasive Hybrid Private Cloud Cloud Public Cloud 3 Build Your Hybrid Cloud Strategy Economic Evaluation Trust

More information

VMware vcloud Director Configuration Maximums vcloud Director 9.1 and 9.5 October 2018

VMware vcloud Director Configuration Maximums vcloud Director 9.1 and 9.5 October 2018 VMware vcloud Director Configuration Maximums vcloud Director 9.1 and 9.5 October 2018 This document supports the version of each product listed and supports all subsequent versions until the document

More information

Eliminate the Complexity of Multiple Infrastructure Silos

Eliminate the Complexity of Multiple Infrastructure Silos SOLUTION OVERVIEW Eliminate the Complexity of Multiple Infrastructure Silos A common approach to building out compute and storage infrastructure for varying workloads has been dedicated resources based

More information

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.

More information

Certified Reference Design for VMware Cloud Providers

Certified Reference Design for VMware Cloud Providers VMware vcloud Architecture Toolkit for Service Providers Certified Reference Design for VMware Cloud Providers Version 2.5 August 2018 2018 VMware, Inc. All rights reserved. This product is protected by

More information

VxRail: Level Up with New Capabilities and Powers

VxRail: Level Up with New Capabilities and Powers VxRail: Level Up with New Capabilities and Powers Sasho Tasevski Senior Manager, Systems Engineering Dell EMC SEE GLOBAL SPONSORS Increasing complexity of application ecosystems is driving IT transformation

More information

DELL EMC TECHNICAL SOLUTION BRIEF. ARCHITECTING A DELL EMC HYPERCONVERGED SOLUTION WITH VMware vsan. Version 1.0. Author: VICTOR LAMA

DELL EMC TECHNICAL SOLUTION BRIEF. ARCHITECTING A DELL EMC HYPERCONVERGED SOLUTION WITH VMware vsan. Version 1.0. Author: VICTOR LAMA DELL EMC TECHNICAL SOLUTION BRIEF ARCHITECTING A DELL EMC HPERCONVERGED SOLUTION WITH VMware vsan Version 1.0 Author: VICTOR LAMA Dell EMC Networking SE July 2017 What is VMware vsan? VMware vsan (formerly

More information

21CTL Disaster Recovery, Workload Mobility and Infrastructure as a Service Proposal. By Adeyemi Ademola E. Cloud Engineer

21CTL Disaster Recovery, Workload Mobility and Infrastructure as a Service Proposal. By Adeyemi Ademola E. Cloud Engineer 21CTL Disaster Recovery, Workload Mobility and Infrastructure as a Service Proposal By Adeyemi Ademola E. Cloud Engineer 1 Contents Introduction... 5 1.2 Document Purpose and Scope...5 Service Definition...

More information

AVAILABILITY AND DISASTER RECOVERY. Ravi Baldev

AVAILABILITY AND DISASTER RECOVERY. Ravi Baldev AVAILABILITY AND DISASTER RECOVERY Ravi Baldev 1 Agenda Zero And Near Zero RPO/RTO Transformation Introducing RecoverPoint For Virtual Machines What s New With VPLEX Simplified Provisioning With ViPR 2

More information

Dell EMC UnityVSA Cloud Edition with VMware Cloud on AWS

Dell EMC UnityVSA Cloud Edition with VMware Cloud on AWS Dell EMC UnityVSA Cloud Edition with VMware Cloud on AWS Abstract This white paper discusses Dell EMC UnityVSA Cloud Edition and Cloud Tiering Appliance running within VMware Cloud on Amazon Web Services

More information

Architecting a vcloud NFV Platform R E F E R E N C E A R C H I T E C T U RE V E R S I O N 2. 0

Architecting a vcloud NFV Platform R E F E R E N C E A R C H I T E C T U RE V E R S I O N 2. 0 Architecting a vcloud NFV Platform R E F E R E N C E A R C H I T E C T U RE V E R S I O N 2. 0 Table of Contents 1. Network Function Virtualization Overview... 6 1.1 NFV Infrastructure Working Domain...

More information

Data Protection for Dell EMC Ready Solution for VMware NFV Platform using Avamar and Data Domain

Data Protection for Dell EMC Ready Solution for VMware NFV Platform using Avamar and Data Domain Data Protection for Dell EMC Ready Solution for VMware NFV Platform using Avamar and Data Domain Abstract This document discusses data protection using VMware vcloud NFV with Dell EMC Avamar and Dell EMC

More information

Private Cloud Public Cloud Edge. Consistent Infrastructure & Consistent Operations

Private Cloud Public Cloud Edge. Consistent Infrastructure & Consistent Operations Hybrid Cloud Native Public Cloud Private Cloud Public Cloud Edge Consistent Infrastructure & Consistent Operations VMs and Containers Management and Automation Cloud Ops DevOps Existing Apps Cost Management

More information

VMware vsphere 4. The Best Platform for Building Cloud Infrastructures

VMware vsphere 4. The Best Platform for Building Cloud Infrastructures Table of Contents Get the efficiency and low cost of cloud computing with uncompromising control over service levels and with the freedom of choice................ 3 Key Benefits........................................................

More information

Hyperconverged Infrastructure: Cost-effectively Simplifying IT to Improve Business Agility at Scale

Hyperconverged Infrastructure: Cost-effectively Simplifying IT to Improve Business Agility at Scale Enterprise Strategy Group Getting to the bigger truth. White Paper Hyperconverged Infrastructure: Cost-effectively Simplifying IT to Improve Business Agility at Scale By Mike Leone, ESG Senior Analyst;

More information

Modernizing Virtual Infrastructures Using VxRack FLEX with ScaleIO

Modernizing Virtual Infrastructures Using VxRack FLEX with ScaleIO Background As organizations continue to look for ways to modernize their infrastructures by delivering a cloud-like experience onpremises, hyperconverged offerings are exceeding expectations. In fact,

More information

IBM Cloud for VMware Solutions

IBM Cloud for VMware Solutions Introduction 2 IBM Cloud IBM Cloud for VMware Solutions Zeb Ahmed Senior Offering Manager VMware on IBM Cloud Mehran Hadipour Director Business Development - Zerto Internal Use Only Do not distribute 3

More information

DISASTER RECOVERY- AS-A-SERVICE FOR VMWARE CLOUD PROVIDER PARTNERS WHITE PAPER - OCTOBER 2017

DISASTER RECOVERY- AS-A-SERVICE FOR VMWARE CLOUD PROVIDER PARTNERS WHITE PAPER - OCTOBER 2017 DISASTER RECOVERY- AS-A-SERVICE FOR VMWARE CLOUD PROVIDER PARTNERS WHITE PAPER - OCTOBER 2017 Table of Contents Executive Summary 3 Introduction 3 vsphere Replication... 3 VMware NSX for vsphere... 4 What

More information

EMC Hybrid Cloud. Umair Riaz - vspecialist

EMC Hybrid Cloud. Umair Riaz - vspecialist EMC Hybrid Cloud Umair Riaz - vspecialist 1 The Business Drivers RESPOND FASTER TO DRIVE NEW REVENUE INCREASE AGILITY REFOCUS RESOURCES TOWARD BUSINESS VALUE INCREASE VISIBILITY & CONTROL 2 CLOUD TRANSFORMS

More information

Dell EMC Ready Architectures for VDI

Dell EMC Ready Architectures for VDI Dell EMC Ready Architectures for VDI Designs for VMware Horizon 7 on Dell EMC XC Family September 2018 H17387 Deployment Guide Abstract This deployment guide provides instructions for deploying VMware

More information

Increase Efficiency with VMware Software and Cisco UCS

Increase Efficiency with VMware Software and Cisco UCS VMware Software and Cisco UCS Solution Overview September 2015 Cisco UCS and VMware Work Better Together Highlights Consistent Management Manage your VMware environments with familiar VMware tools and

More information

EMC ViPR Controller. Create a VM and Provision and RDM with ViPR Controller and VMware vrealize Automation. Version 2.

EMC ViPR Controller. Create a VM and Provision and RDM with ViPR Controller and VMware vrealize Automation. Version 2. EMC ViPR Controller Version 2.3 Create a VM and Provision and RDM with ViPR Controller and VMware vrealize Automation 302-002-205 01 Copyright 2015- EMC Corporation. All rights reserved. Published in USA.

More information

SC Series: Virtualization and Ecosystem Integrations. Darin Schmitz Storage Applications Engineering Midrange and Entry Solutions Group

SC Series: Virtualization and Ecosystem Integrations. Darin Schmitz Storage Applications Engineering Midrange and Entry Solutions Group SC Series: Virtualization and Ecosystem Integrations Darin Schmitz Storage Applications Engineering Midrange and Entry Solutions Group What s New with the SC Series New Dell EMC SC5020 Hybrid Array Optimized

More information

Changes in VCP6.5-DCV exam blueprint vs VCP6

Changes in VCP6.5-DCV exam blueprint vs VCP6 Changes in VCP6.5-DCV exam blueprint vs VCP6 Blueprint Objective Blueprint Changes Blueprint Additions Associated v6.5 Technology Changes 1.1 Changed objective from: VMware Directory Service VMware Identity

More information

Architecture and Design. 17 JUL 2018 VMware Validated Design 4.3 VMware Validated Design for Management and Workload Consolidation 4.

Architecture and Design. 17 JUL 2018 VMware Validated Design 4.3 VMware Validated Design for Management and Workload Consolidation 4. 17 JUL 2018 VMware Validated Design 4.3 VMware Validated Design for Management and Workload Consolidation 4.3 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

"Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary

Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary Description Course Summary This powerful 5-day, 10 hour per day extended hours class is an intensive introduction to VMware vsphere including VMware ESXi 6.7 and vcenter 6.7. This course has been completely

More information

VMware vsphere: Install, Configure, Manage (vsphere ICM 6.7)

VMware vsphere: Install, Configure, Manage (vsphere ICM 6.7) VMware vsphere: Install, Configure, Manage (vsphere ICM 6.7) COURSE OVERVIEW: This five-day course features intensive hands-on training that focuses on installing, configuring, and managing VMware vsphere

More information