VCE Vblock System 320 Gen 3.2


VCE Vblock System 320 Gen 3.2 Architecture Overview
Document revision 3.7
November 2015

Revision history

November 2015: Updated document for VMware vSphere 6.0 support. Removed section "Aggregating multiple Vblock Systems".
October 2013: Added Cisco Trusted Platform Module information. Removed data protection information.
August 2013: Updated EMC VNX5500 information.
June 2013: Updated the bare metal support policy.
April 2013: Added VCE Vision Intelligent Operations. Updated VMware vSphere information. Integrated into the Gen 3.1 release.
March 2013: Updated for VCE rebranding.
January 2013: Updated for VCE rebranding. Changed wording from segregated storage architecture to segregated network architecture.
December 2012: Gen 3.0 release.

Contents

Accessing VCE documentation
System overview
   System architecture and components
   Base configurations and scaling
   Connectivity overview
   Segregated network architecture
   Unified network architecture
Compute layer
   Compute overview
   Cisco Unified Computing System
   Cisco Unified Computing System fabric interconnects
   Cisco Trusted Platform Module
   Scaling up compute resources
   VCE bare metal support policy
   Disjoint layer 2 configuration
Storage layer
   EMC VNX series storage arrays
   Replication
   Scaling up storage resources
   Storage features support
Network layer
   Network overview
   IP network components
   Storage switching components
Virtualization layer
   Virtualization overview
   VMware vSphere Hypervisor ESXi
   VMware vCenter Server
Management
   Management hardware overview
   Management software components
System infrastructure
   VCE System descriptions
   Cabinets overview
   Cabinet types
   Power options
Configuration descriptions
   VCE System with EMC VNX7500 overview
   Sample VCE System with EMC VNX7500 maximum configuration
   Sample VCE System with EMC VNX7500 maximum CNS cabinets
   Sample VCE System with EMC VNX7500 maximum CS cabinets
   Sample VCE System with EMC VNX7500 maximum S cabinet
   VCE System with EMC VNX5700 overview
   Sample VCE System with EMC VNX5700 maximum configuration
   Sample VCE System with EMC VNX5700 maximum CNS cabinets
   Sample VCE System with EMC VNX5700 CS cabinets
   VCE System with EMC VNX5500 overview
   Sample VCE System with EMC VNX5500 maximum configuration
   Sample VCE System with EMC VNX5500 maximum CNS cabinets
   Sample VCE System with EMC VNX5500 CS cabinets
   VCE System with EMC VNX5300 overview
   Sample VCE System with EMC VNX5300 maximum configuration
   Sample VCE System with EMC VNX5300 maximum CNS cabinet
   Sample VCE System with EMC VNX5300 maximum S cabinet
Additional references
   Virtualization components
   Compute components
   Network components
   Storage components

Accessing VCE documentation

Select the documentation resource that applies to your role.

Customer: support.vce.com. A valid username and password are required. Click VCE Download Center to access the technical documentation.
Cisco, EMC, VMware employee, or VCE Partner: partner.vce.com. A valid username and password are required.
VCE employee: sales.vce.com/saleslibrary or vblockproductdocs.ent.vce.com

System overview

System architecture and components

This section describes the architecture and components of the VCE System. The VCE System has a number of features, including:

- Optimized, fast delivery configurations based on the most commonly purchased components
- Standardized cabinets with multiple North American and international power solutions
- Base configurations with fewer drives, fewer blades, and more granular flexibility in the configuration
- Block (SAN) and unified storage options (SAN and NAS)
- Support for multiple features of the EMC operating environment for EMC VNX arrays
- Granular but optimized compute and storage growth by adding predefined kits and packs
- VMware vStorage API for Array Integration (VAAI) enablement
- Advanced Management Platform (AMP) models for value and high availability requirements
- Unified network architecture, which provides the option to leverage Cisco Nexus switches to support IP and SAN without the use of Cisco MDS switches

The VCE System contains the following key hardware and software components:

VCE System management:
- VCE Vision Intelligent Operations System Library
- VCE Vision Intelligent Operations Plug-in for vCenter
- VCE Vision Intelligent Operations Compliance Checker
- VCE Vision Intelligent Operations API for System Library
- VCE Vision Intelligent Operations API for Compliance Checker

Virtualization and management:
- VMware vSphere Server Enterprise Plus
- VMware vSphere ESXi
- VMware vCenter Server
- VMware vSphere Web Client
- VMware Single Sign-On (SSO) Service (version 5.1 and higher)
- EMC PowerPath/VE
- Cisco UCS Manager
- EMC Unisphere Manager
- EMC VNX Local Protection Suite
- EMC VNX Remote Protection Suite
- EMC VNX Application Protection Suite
- EMC VNX Fast Suite
- EMC VNX Security and Compliance Suite
- EMC Secure Remote Support (ESRS) on Windows
- Cisco Data Center Network Manager for SAN
- (Optional) EMC Ionix UIM

Compute:
- Cisco UCS 5108 Server Chassis
- Cisco UCS B-Series Blades
- Cisco UCS Virtual Interface Card supported on the Cisco UCS B200 M3 Blade Server
- Cisco UCSB-MLOM-PT-01 Port Expander for 1240 VIC
- Cisco UCS Virtual Interface Card supported on all Cisco UCS B-Series M2 and M3 Blade Servers except the Cisco UCS B250 Blade Server (which still uses M81KR)
- Cisco UCS 2208XP fabric extenders or Cisco UCS 2204XP fabric extenders
- Cisco UCS 2208XP Fabric Extenders with FET Optics or Cisco UCS 2204XP Fabric Extenders with FET Optics
- Cisco UCS 6248UP and Cisco UCS 6296UP Fabric Interconnects

Network:
- Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches
- (Optional) Cisco MDS 9148 Multilayer Fabric Switch
- (Optional) Cisco Nexus 1000V Series Switches

Storage:
- EMC VNX storage array running the VNX Operating Environment
- (Optional) EMC RecoverPoint
- (Optional) EMC unified storage (NAS)

Each VCE System has a different scale point based on compute and storage options, and can support block and/or unified storage protocols.

The VCE Systems Release Certification Matrix provides a list of the certified versions of components for the VCE System. For information about Vblock System management, refer to the VCE Vision Intelligent Operations Technical Overview. The VCE Integrated Data Protection Guide provides information about available data protection solutions.

Base configurations and scaling

The VCE System has a base configuration that contains a minimum set of compute and storage components, as well as fixed network resources, integrated in one or more 19 inch, 42U cabinets. Within the base configuration, the following hardware aspects can be customized:

Compute blades: Cisco UCS B-Series blade types include all supported VCE blade configurations.

Compute chassis (Cisco UCS Server Chassis):
- 16 chassis maximum for the VCE System with EMC VNX5700 and the VCE System with EMC VNX7500
- Eight chassis maximum for the VCE System with EMC VNX5500
- Two chassis maximum for the VCE System with EMC VNX5300

Storage hardware: Drive flexibility for up to three tiers of storage per pool, drive quantities in each tier, the RAID protection for each pool, and the number of disk array enclosures (DAEs).

Storage: EMC VNX storage.

Supported disk drives: 100/200 GB EFD, 300/600 GB 15K SAS, 300/600/900 GB 10K SAS, 1/2/3 TB 7.2K NL-SAS.

Supported RAID types:
- Tier 1: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1)
- Tier 2: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2), (14+2)
- Tier 3: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2), (14+2)

Management hardware options: An Advanced Management Platform (AMP) centralizes management of Vblock System components. Optionally, a high-availability version of the AMP can be substituted. Refer to Management hardware components.

EMC RecoverPoint Appliances (RPAs): Two to eight clustered appliances that facilitate storage replication and rapid disaster recovery. Available on all VCE Systems.

X-Blades: Additional X-Blades can be added on a VCE System with EMC VNX7500, EMC VNX5700, and EMC VNX5500.

Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the compute and storage arrays in the system. All components have N+N or N+1 redundancy.

These resources can be scaled up as necessary to meet increasingly stringent requirements. The maximum supported configuration differs from model to model. To scale up compute resources, add blade packs and chassis activation kits. To scale up storage resources, add RAID packs, DME packs, and DAE packs. Optionally, expansion cabinets with additional resources can be added.

The VCE System is designed to keep hardware changes to a minimum if the storage protocol is changed after installation (for example, from block storage to unified storage). Cabinet space can be reserved for all components that are needed for each storage configuration (Cisco MDS switches, X-Blades, etc.), ensuring that network and power cabling capacity for these components is in place.
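The scaling rules above lend themselves to a simple sizing calculation. The following Python sketch estimates how many blade packs and chassis activation kits an expansion would require, assuming half-width blades. The two-blade pack size, the eight-half-width-blades-per-chassis figure, and the two licensed chassis in the base configuration come from this document; the function and variable names are illustrative only.

# Sizing sketch: blade packs and chassis activation kits needed to reach a
# target half-width blade count. Pack and chassis sizes are taken from this
# document; names and the base-configuration assumption are illustrative.
import math

BLADES_PER_PACK = 2              # Cisco UCS blades are sold in packs of two
HALF_WIDTH_BLADES_PER_CHASSIS = 8
BASE_LICENSED_CHASSIS = 2        # base configuration licenses two chassis

def compute_expansion(target_blades: int) -> dict:
    """Estimate blade packs and chassis activation kits for half-width blades."""
    blade_packs = math.ceil(target_blades / BLADES_PER_PACK)
    chassis_needed = math.ceil(target_blades / HALF_WIDTH_BLADES_PER_CHASSIS)
    activation_kits = max(0, chassis_needed - BASE_LICENSED_CHASSIS)
    return {
        "blade_packs": blade_packs,
        "chassis": chassis_needed,
        "chassis_activation_kits": activation_kits,
    }

# Example: 40 half-width blades -> 20 blade packs, 5 chassis, 3 activation kits
print(compute_expansion(40))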

Connectivity overview

This topic describes the components and interconnectivity within the VCE System. These components and interconnectivity are conceptually subdivided into the following layers:

Compute: Contains the components that provide the computing power within a VCE System. The Cisco UCS blade servers, chassis, and fabric interconnects belong to this layer.

Storage: Contains the EMC VNX storage component.

Network: Contains the components that provide switching between the compute and storage layers within a VCE System, and between a VCE System and the network. The Cisco MDS switches and the Cisco Nexus switches belong to this layer.

All components incorporate redundancy into the design.

Segregated network architecture and unified network architecture

In the segregated network architecture, LAN and SAN connectivity is segregated into separate switches within the VCE System. LAN switching uses the Cisco Nexus switches. SAN switching uses the Cisco MDS 9148 Multilayer Fabric Switch.

In the unified network architecture, LAN and SAN switching is consolidated onto a single network device (Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches) within the VCE System. This removes the need for a Cisco MDS SAN switch.

The VCE Vblock System 320 Port Assignments Reference provides information about the assigned use for each port in the Vblock System.

Segregated network architecture

This topic shows the VCE System segregated network architecture for block, SAN boot, and unified storage.

Block storage configuration

The following illustration shows a block-only storage configuration for the VCE System with the X-Blades absent from the cabinets. However, space can be reserved in the cabinets for these components (including optional EMC RecoverPoint Appliances). This design makes it easier to add the components later if there is an upgrade to unified storage.

SAN boot storage configuration

In all VCE System configurations, the VMware vSphere ESXi blades boot over the Fibre Channel (FC) SAN. In block-only configurations, block storage devices (boot and data) are presented over FC through the Cisco MDS 9148 Multilayer Fabric Switch. In a unified storage configuration, the boot devices are presented over FC and data devices can be either block devices (SAN) or NFS datastores (NAS). In a file-only configuration, the boot devices are presented over FC and data devices are presented through NFS shares. Storage can also be presented directly to the virtual machines as CIFS shares.

The following illustration shows the components (highlighted in a red, dotted line) that are leveraged to support SAN booting in the VCE System:

Unified storage configuration

In a unified storage configuration, the storage processors also connect to X-Blades over FC. The X-Blades connect to the Cisco Nexus switches in the network layer over 10 GbE, as shown in the following illustration:

Unified network architecture

This topic provides an overview of the block storage, SAN boot storage, and unified storage configurations for the unified network architecture.

With unified network architecture, access to both block and file services on the EMC VNX is provided using the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch. The Cisco Nexus 9396PX Switch is not supported in unified network architecture.

13 Block storage configuration The following illustration shows a block-only storage configuration in the VCE System: In this example, there are no X-Blades providing NAS capabilities. However, space can be reserved in the cabinets for these components (including the optional EMC RecoverPoint Appliance). This design makes it easier to add the components later if there is an upgrade to unified storage. In a unified storage configuration for block and file, the storage processors also connect to X-Blades over FC. The X-Blades connect to the Cisco Nexus switches within the network layer over 10 GbE. SAN boot storage configuration In all VCE System configurations, VMware vsphere ESXi blades boot over the FC SAN. In block-only configurations, block storage devices (boot and data) are presented over FC through the Cisco Nexus unified switch. In a unified storage configuration, the boot devices are presented over FC and data devices can be either block devices (SAN) or presented as NFS data stores (NAS). In a file-only configuration, boot devices are presented over FC, and data devices over NFS shares. The remainder of the storage can be presented either as NFS or as VMFS datastores. Storage can also be presented directly to the VMs as CIFS shares. 13 System overview

The following illustration shows the components that are leveraged to support SAN booting in the VCE System:

Unified storage configuration

In a unified storage configuration, the storage processors also connect to X-Blades over FC. The X-Blades connect to the Cisco Nexus switches within the network layer over 10 GbE.

The following illustration shows a unified storage configuration for the VCE System:

16 Compute layer Compute overview This topic provides an overview of the compute components for the VCE System. Cisco UCS B-Series Blades installed in the Cisco UCS chassis provide computing power within the VCE System. Fabric extenders (FEX) within the Cisco UCS chassis connect to Cisco fabric interconnects over converged Ethernet. Up to eight 10 GbE ports on each Cisco UCS fabric extender connect northbound to the fabric interconnects, regardless of the number of blades in the chassis. These connections carry IP and storage traffic. VCE has reserved some of these ports to connect to upstream access switches within the VCE System. These connections are formed into a port channel to the Cisco Nexus switch and carry IP traffic destined for the external network 10 GbE links. In a unified storage configuration, this port channel can also carry NAS traffic to the X-Blades within the storage layer. Each fabric interconnect also has multiple ports reserved by VCE for Fibre Channel (FC) ports. These ports connect to Cisco SAN switches. These connections carry FC traffic between the compute layer and the storage layer. In a unified storage configuration, port channels carry IP traffic to the X-Blades for NAS connectivity. For SAN connectivity, SAN port channels carrying FC traffic are configured between the fabric interconnects and upstream Cisco MDS or Cisco Nexus switches. Cisco Unified Computing System This topic provides an overview of the Cisco Unified Compute System (UCS) data center platform that unites compute, network, and storage access. Optimized for virtualization, the Cisco UCS integrates a low-latency, lossless 10 Gb Ethernet unified network fabric with enterprise-class, x86-based servers (the Cisco B-Series). VCE Systems powered by Cisco UCS offer the following features: Built-in redundancy for high availability Hot-swappable components for serviceability, upgrade, or expansion Fewer physical components than in a comparable system built piece by piece Reduced cabling Improved energy efficiency over traditional blade server chassis The Vblock System Blade Pack Reference provides a list of supported Cisco UCS blades. Cisco Unified Computing System fabric interconnects The Cisco Unified Computing System (UCS) fabric interconnects provide network connectivity and management capabilities to the Cisco UCS blades and chassis. Compute layer 16

The Cisco UCS fabric interconnects provide the management and communication backbone for the blades and chassis, and provide LAN and SAN connectivity for all blades within their domain. Cisco UCS fabric interconnects are used for boot functions and offer line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions.

VCE Systems use Cisco UCS 6248UP Fabric Interconnects and Cisco UCS 6296UP Fabric Interconnects. Single domain uplinks of 2, 4, or 8 between the fabric interconnects and the chassis are provided with the Cisco UCS 6248UP Fabric Interconnects. Single domain uplinks of 4 or 8 between the fabric interconnects and the chassis are provided with the Cisco UCS 6296UP Fabric Interconnects.

Cisco Trusted Platform Module

Cisco TPM provides authentication and attestation services that provide safer computing in all environments. Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption keys that authenticate the Vblock System.

Cisco TPM is available by default within the Vblock System as a component within the Cisco UCS B-Series M3 Blade Servers and Cisco UCS B-Series M4 Blade Servers, and is shipped disabled. The Vblock System Blade Pack Reference contains additional information about Cisco TPM.

VCE supports only the Cisco TPM hardware. VCE does not support the Cisco TPM functionality. Because making effective use of the Cisco TPM involves the use of a software stack from a vendor with significant experience in trusted computing, VCE defers to the software stack vendor for configuration and operational considerations relating to the Cisco TPM.

Scaling up compute resources

This topic describes what can be added to scale up VCE System compute resources.

To scale up compute resources, you can add uplinks, blade packs, and chassis activation kits to enhance Ethernet and Fibre Channel (FC) bandwidth either when VCE Systems are built, or after they are deployed.

Ethernet and FC I/O bandwidth enhancement

For VCE Systems with EMC VNX5500, EMC VNX5700, and EMC VNX7500, the Ethernet I/O bandwidth enhancement increases the number of Ethernet uplinks from the Cisco UCS 6296UP fabric interconnects to the network layer to reduce oversubscription. To enhance Ethernet I/O bandwidth performance, increase the uplinks between the Cisco UCS 6296UP fabric interconnects and the Cisco Nexus 5548UP Switch for segregated networking, or the Cisco Nexus 5596UP Switch for unified networking.

FC I/O bandwidth enhancement increases the number of FC links between the Cisco UCS 6248UP or Cisco UCS 6296UP fabric interconnects and the SAN switch, and from the SAN switch to the EMC VNX storage array. Single domain uplinks of two or four for the Cisco UCS 6248UP Fabric Interconnects, and up to eight for the Cisco UCS 6296UP Fabric Interconnects, are provided between the fabric interconnects and the chassis. The FC I/O bandwidth enhancement feature is supported on the VCE System with EMC VNX5700 and EMC VNX7500. Implementing the FC I/O bandwidth enhancement feature results in a block-only array configuration.
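To make the effect of adding uplinks concrete, the short Python sketch below compares aggregate chassis uplink bandwidth for the supported uplink counts. The 10 GbE uplink speed and the uplink counts come from this section; the per-blade traffic figure used for the rough oversubscription ratio is purely an illustrative assumption.

# Illustrative sketch: aggregate northbound bandwidth per chassis and a rough
# oversubscription ratio for the supported uplink counts. Uplink speeds and
# counts are from this document; the assumed per-blade load is hypothetical.
UPLINK_SPEED_GBPS = 10          # converged Ethernet uplinks from each fabric extender
FABRIC_EXTENDERS_PER_CHASSIS = 2
ASSUMED_BLADE_LOAD_GBPS = 10    # hypothetical sustained load per half-width blade
BLADES_PER_CHASSIS = 8

for uplinks_per_fex in (2, 4, 8):   # single domain uplink options
    aggregate = uplinks_per_fex * FABRIC_EXTENDERS_PER_CHASSIS * UPLINK_SPEED_GBPS
    offered = BLADES_PER_CHASSIS * ASSUMED_BLADE_LOAD_GBPS
    print(f"{uplinks_per_fex} uplinks/FEX: {aggregate} Gbps uplink, "
          f"oversubscription {offered / aggregate:.1f}:1")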

Blade packs

Cisco UCS blades are sold in packs of two and include two identical Cisco UCS blades. The base configuration of each VCE System includes two blade packs. The maximum number of blade packs depends on the type of VCE System. Each blade type must have a minimum of two blade packs as a base configuration and can then be increased in single blade pack increments.

Each blade pack is added along with the following license packs:

- VMware vSphere ESXi
- Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)
- EMC PowerPath/VE

License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and EMC PowerPath are not available for bare metal blades.

The Vblock System Blade Pack Reference provides a list of supported Cisco UCS blades.

Chassis activation kits

The power supplies and fabric extenders for all chassis are populated and cabled, and all required Twinax cables and transceivers are populated. However, in a base VCE System configuration, only two of the Cisco UCS 5108 Server Chassis are licensed for fabric interconnect ports. This licensing limit reduces the entry cost for the VCE System.

As more blades are added and additional chassis are required, chassis activation kits (CAK) are automatically added to an order. The kit contains software licenses to enable additional fabric interconnect ports. Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis activation kits can be added up front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis.

VCE bare metal support policy

Because many applications cannot be virtualized for technical and commercial reasons, VCE Systems support bare metal deployments, such as non-virtualized operating systems and applications. While it is possible for VCE Systems to support these workloads (with the caveats noted below), due to the nature of bare metal deployments, VCE is able to provide only "reasonable effort" support for systems that comply with the following requirements:

- VCE Systems contain only VCE published, tested, and validated hardware and software components. The VCE Release Certification Matrix provides a list of the certified versions of components for VCE Systems.
- The operating systems used on bare metal deployments for compute and storage components must comply with the published hardware and software compatibility guides from Cisco and EMC.
- For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, etc.), those hypervisor technologies are not supported by VCE. VCE support is provided only on VMware hypervisors.

19 VCE reasonable effort support includes VCE acceptance of customer calls, a determination of whether a VCE System is operating correctly, and assistance in problem resolution to the extent possible. VCE is unable to reproduce problems or provide support on the operating systems and applications installed on bare metal deployments. In addition, VCE does not provide updates to or test those operating systems or applications. The OEM support vendor should be contacted directly for issues and patches related to those operating systems and applications. Disjoint layer 2 configuration In the disjoint layer 2 configuration, traffic is split between two or more different networks at the fabric interconnect to support two or more discrete Ethernet clouds. The Cisco UCS servers connect to two different clouds. Upstream disjoint layer 2 networks allow two or more Ethernet clouds that never connect to be accessed by VMs located in the same Cisco UCS domain. 19 Compute layer

20 The following illustration provides an example implementation of disjoint layer 2 networking into a Cisco UCS domain: Virtual port channels (VPCs) 101 and 102 are production uplinks that connect to the network layer of the VCE System. Virtual port channels 105 and 106 are external uplinks that connect to other switches. If you use Ethernet performance port channels (103 and 104 by default), port channels 101 through 104 are assigned to the same VLANs. Compute layer 20

Storage layer

EMC VNX series storage arrays

EMC VNX series platforms support block storage and unified storage. The platforms are optimized for VMware virtualized applications. They feature flash drives for extendable cache and high performance in the virtual storage pools. Automation features include self-optimized storage tiering and application-centric replication.

Regardless of the storage protocol implemented at startup (block or unified), the VCE System can include cabinet space, cabling, and power to support the hardware for all of these storage protocols. This arrangement makes it easier to move from block storage to unified storage with minimal hardware changes.

The following list shows the VCE Systems in order from largest capacity to smallest:

- EMC VNX7500
- EMC VNX5700
- EMC VNX5500
- EMC VNX5300

In the VCE Systems, all EMC VNX components are installed in VCE cabinets in a VCE-specific layout.

Replication

This section describes how VCE Systems can be upgraded to include EMC RecoverPoint.

For block storage configurations, the VCE System can be upgraded to include EMC RecoverPoint. This replication technology provides continuous data protection and continuous remote replication for on-demand protection and recovery to any point in time. EMC RecoverPoint advanced capabilities include policy-based management, application integration, and bandwidth reduction. RecoverPoint is included in the EMC Local Protection Suite and EMC Remote Protection Suite.

To implement EMC RecoverPoint within a VCE System, add two or more EMC RecoverPoint Appliances (RPAs) in a cluster to the VCE System. This cluster can accommodate approximately 80 MBps of sustained throughput through each EMC RPA. To ensure proper sizing and performance of an EMC RPA solution, VCE works with an EMC Technical Consultant to collect information about the data to be replicated, as well as data change rates, data growth rates, network speeds, and other information that is needed to ensure that all business requirements are met.

Scaling up storage resources

This topic describes what you can add to the VCE System to scale up storage resources.

To scale up storage resources, you can expand block I/O bandwidth between the compute and storage resources, add RAID packs, and add disk array enclosure (DAE) packs. I/O bandwidth and packs can be added when the VCE System is built and after it is deployed.

I/O bandwidth expansion

Fibre Channel (FC) bandwidth can be increased in the VCE System with EMC VNX7500 and EMC VNX5700. This option adds an additional four FC interfaces per fabric between the fabric interconnects and the Cisco MDS 9148 Multilayer Fabric Switch (segregated network architecture) or the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch (unified network architecture). It also adds an additional four FC ports from the EMC VNX to each SAN fabric. This option is available for environments that require high-bandwidth, block-only configurations. This configuration requires the use of four storage array ports per storage processor that are normally reserved for unified connectivity of the X-Blades.

RAID packs

Storage capacity can be increased by adding RAID packs. Each pack contains a number of drives of a given type, speed, and capacity. The number of drives in a pack depends upon the RAID level that it supports. The number and types of RAID packs to include in a VCE System are based upon the following:

- The number of storage pools that are needed.
- The storage tiers that each pool contains, and the speed and capacity of the drives in each tier. The speed and capacity of all drives within a given tier in a given pool must be the same. The following list shows each tier with its supported drive type, speeds, and capacities:

Tier 1, solid-state Enterprise Flash drives (EFD): 100 GB, 200 GB
Tier 2, serial attached SCSI (SAS): 300 GB 10K RPM, 600 GB 10K RPM, 900 GB 10K RPM, 300 GB 15K RPM, 600 GB 15K RPM

Tier 3, Nearline SAS (NL-SAS): 1 TB 7.2K RPM, 2 TB 7.2K RPM, 3 TB 7.2K RPM

- The RAID protection level for the tiers in each pool. The RAID protection level for the different pools can vary. The supported RAID protection levels are described below.

RAID 1/0: A set of mirrored drives. Offers the best overall performance of the three supported RAID protection levels. Offers robust protection and can sustain double-drive failures that are not in the same mirror set. Lowest economy of the three supported RAID levels, since usable capacity is only 50% of raw capacity.

RAID 5: Block-level striping with a single parity block, where the parity data is distributed across all of the drives in the set. Offers the best mix of performance, protection, and economy. Has a higher write performance penalty than RAID 1/0 because multiple I/Os are required to perform a single write. With single parity, can sustain a single drive failure with no data loss, but is vulnerable to data loss or unrecoverable read errors on a track during a drive rebuild. Highest economy of the three supported RAID levels; usable capacity is 80% of raw capacity or better.

RAID 6: Block-level striping with two parity blocks, distributed across all of the drives in the set. Offers increased protection and read performance comparable to RAID 5. Has a significant write performance penalty because multiple I/Os are required to perform a single write. Economy is very good; usable capacity is 75% of raw capacity or better. EMC best practice for SATA and NL-SAS drives.

There are RAID packs for each RAID protection level/tier type combination. The RAID levels dictate the number of drives that are included in the packs. RAID 5 or RAID 1/0 is for the performance and extreme performance tiers, and RAID 6 is for the capacity tier. The following list shows the number of drives per RAID pack for each RAID protection level:

RAID 1/0: 8 drives (4 data + 4 mirrors)
RAID 5: 5 drives (4 data + 1 parity) or 9 drives (8 data + 1 parity)
RAID 6: 8 drives (6 data + 2 parity) or 16 drives (14 data + 2 parity)

(* file virtual pool only; ** block virtual pool only)
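As a rough aid to capacity planning, the following Python sketch converts the pack sizes above into usable capacity for a given drive size. The data/parity splits come from the list above; the function name and the example drive size are illustrative, and real usable capacity is further reduced by hot spares, vault drives, and formatting overhead.

# Rough usable-capacity sketch per RAID pack, using the data/parity splits
# listed above. Drive size and names are illustrative; actual usable capacity
# is further reduced by spares, vault drives, and formatting overhead.
RAID_PACKS = {
    "RAID 1/0 (4+4)": (8, 4),    # (drives per pack, data drives per pack)
    "RAID 5 (4+1)":   (5, 4),
    "RAID 5 (8+1)":   (9, 8),
    "RAID 6 (6+2)":   (8, 6),
    "RAID 6 (14+2)":  (16, 14),
}

def usable_capacity_tb(pack: str, drive_tb: float, packs: int = 1) -> tuple:
    """Return (approximate usable TB, raw TB) for a number of RAID packs of one type."""
    total_drives, data_drives = RAID_PACKS[pack]
    return packs * data_drives * drive_tb, packs * total_drives * drive_tb

# Example: three RAID 6 (6+2) packs of 2 TB NL-SAS drives
usable, raw = usable_capacity_tb("RAID 6 (6+2)", drive_tb=2.0, packs=3)
print(f"usable ~{usable} TB of {raw} TB raw")   # usable ~36.0 TB of 48.0 TB raw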

24 Disk array enclosure packs If the number of RAID packs in VCE Systems is expanded, more disk array enclosures (DAEs) might be required. DAEs are added in packs. The number of DAEs in each pack is equivalent to the number of back-end buses in the EMC VNX array in the VCE System. The following table lists the number of buses in the array and the number of DAEs in the pack for each VCE System: VCE System Number of buses in the array Number of DAEs in the DAE pack EMC VNX EMC VNX EMC VNX (base includes DPE as the first DAE) EMC VNX (base includes DPE as the first DAE) There are two types of DAEs: a 15 drive 3U enclosure for 3.5 inch form factor drives, and a 25 drive 2U enclosure for 2.5 inch form factor drives. A DAE pack can contain a mix of DAE sizes, as long as the total DAEs in the pack equals the number of buses. To ensure that the loads are balanced, physical disks will be spread across the DAEs in accordance with best practice guidelines. Storage features support This topic presents additional storage features available on the VCE System. Support for array hardware or capabilities The following table provides an overview of the support provided for EMC VNX operating environment for new array hardware or capabilities: Feature NFS Virtual X-Blades VDM (Multi-LDAP Support) Data-in-place block compression Compression for file/ display compression capacity savings EMC VNX snapshots Description Provides security and segregation for service provider environmental clients. When compression is enabled, thick LUNs are converted to thin and compressed in place. RAID group LUNs are migrated into a pool during compression. There is no need for additional space to start compression. Decompression temporarily requires additional space, since it is a migration, and not an in-place decompression. Available file compression types: Fast compression (default) Deep compression (up to 30% more space efficient, but slower and with higher CPU usage) Displays capacity savings due to compression to allow a cost/benefit comparison (space savings versus performance impact). EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time. This feature is optional. VCE relies on guidance from EMC best practices for different use cases of EMC SnapView snapshots versus EMC VNX snapshots. Storage layer 24

25 Feature vstorage API for Array Integration (VAAI) enhancement Description NFS snap-on-snap of VMDK files to one level of depth. For example, fast clone of a fast clone that is a second level clone of a base image of a virtual machine. Hardware features VCE supports the following hardware features: Option to expand storage processor memory to 48 GB for EMC VNX7500 Dual 10 GE Optical/Active Twinax IP IO/SLIC for X-Blades 2.5 inch vault drives 2.5 inch DAEs and drive form factors File deduplication File deduplication is supported, but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements. Block compression Block compression is supported but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements. External NFS and CIFS access The VCE System can present CIFS and NFS shares to external clients provided that these guidelines are followed: Requires dedicated X-Blades for access by hosts outside of the VCE System. VCE System shares cannot be mounted internally by VCE System hosts and external to the VCE System at the same time. In a configuration with two X-Blades, mixed internal and external access is not supported. The following configurations are supported: External NFS and external CIFS only Internal NFS and internal CIFS only In a configuration with more than two X-Blades, external NFS and CIFS access can run on one or more X-Blades that are physically separate from the X-Blades serving VMFS data stores to the VCE System compute layer. Snapshots EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time. EMC VNX snapshot is an optional feature. VCE relies on guidance from EMC best practices for different use cases of EMC SnapView snapshots versus EMC VNX snapshots. 25 Storage layer

Replicas

For VCE System NAS configurations, EMC VNX Replicator is supported. This software can create local clones (full copies) and replicate file systems asynchronously across IP networks. EMC VNX Replicator is included in the EMC VNX Remote Protection Suite.

27 Network layer Network overview This topic provides an overview of the network components for the VCE System. The Cisco Nexus 5500 series switches in the network layer provide 10 GbE IP connectivity between the VCE System and the outside world. In unified storage architecture, the switches also connect the fabric interconnects in the compute layer to the X-Blades in the storage layer. The switches also provide connectivity to the Advanced Management Platform (AMP) through redundant connections to the Cisco Catalyst 3560X Ethernet switch or switches in the AMP. In the segregated architecture, the Cisco MDS 9000 series switches in the network layer provide Fibre Channel (FC) links between the Cisco fabric interconnects and the EMC VNX array. These FC connections provide block level devices to blades in the compute layer. In unified network architecture, there are no Cisco MDS series storage switches. FC connectivity is provided by the Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches. Ports are reserved or identified for special services such as backup, replication, or aggregation uplink connectivity. IP network components VCE Systems include two Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches, to provide 10 GbE connectivity: Between the VCE System internal components To the site network To the Advanced Management Platform (AMP) through redundant connections to the Cisco Catalyst 3200 Ethernet Switches in the AMP To support the Ethernet and SAN requirements in the traditional segregated network architecture, two Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches provide Ethernet connectivity and a pair of Cisco MDS 9148 Multilayer Fabric Switches provide Fibre Channel (FC) connectivity. The two Cisco Nexus 5500UP switches used by the VCE System support low latency line-rate 10 GB Ethernet and Fibre Channel over Ethernet (FCoE) connectivity on up to 96 ports. Unified port expansion modules are available and provide an extra 16 ports of 10 GbE or FC connectivity. The FC ports are licensed in packs of eight. The Cisco Nexus 5548UP switches have 32 integrated low latency unified ports. Each port provides linerate 10GB Ethernet or FC connectivity. The Cisco Nexus 5548UP switches have one expansion slot that can be populated with a 16 port unified port expansion module. The Cisco Nexus 5548UP Switch is the only network switch supported for VCE System data connectivity in a VCE System with EMC VNX5300. The Cisco Nexus 5548UP Switch is available as an option for all segregated network VCE Systems. It is also an option for unified network VCE Systems with EMC VNX5500. The Cisco Nexus 5596UP switches have 48 integrated, low-latency, unified ports. Each port provides linerate 10 GB Ethernet or FC connectivity. The Cisco Nexus 5596UP switches have three expansion slots that can be populated with 16 port unified port expansion modules. The Cisco Nexus 5596UP Switch is 27 Network layer

28 available as an option for both network topologies for all VCE Systems, except the VCE System with EMC VNX5300. Storage switching components This section describes how the VCE System includes redundant Cisco SAN fabric switches. In a segregated networking model, there are two Cisco MDS 9148 multilayer fabric switches. In a unified networking model, Fibre Channel (FC) based features are provided by the two Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches that are also used for LAN traffic. In VCE Systems, these switches provide: FC connectivity between the compute layer components and the storage layer components Connectivity for backup, business continuity (EMC RecoverPoint Appliance), and storage federation requirements when configured. Inter-Switch Links (ISL) to the existing SAN are not permitted. The Cisco MDS 9148 Multilayer Fabric Switch provides from 16 to 48 line-rate ports for non-blocking 8 Gbps throughput. The port groups are enabled on an as needed basis. The Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches provide a number of line-rate ports for non-blocking 8 Gbps throughput. Expansion modules can be added to the Cisco Nexus 5596UP Switch to provide 16 additional ports operating at line-rate. Related information IP network components (see page 27) Network layer 28

Virtualization layer

Virtualization components

VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are VMware vSphere ESXi and VMware vCenter Server for management. Depending on the version that you are running, VMware vSphere 5.x includes a Single Sign-On (SSO) component as a standalone Windows server or as an embedded service on the vCenter server. VMware vSphere 6.0 includes a pair of Platform Services Controller Linux appliances to provide the SSO service.

The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility with the use of VMware vMotion and Storage vMotion technology.

VMware vSphere Hypervisor ESXi

The VMware vSphere Hypervisor ESXi runs in the AMP and in a VCE System. This lightweight hypervisor requires very little space to run (less than 6 GB of storage is required to install) and has minimal management overhead. VMware vSphere ESXi does not contain a console operating system.

The VMware vSphere Hypervisor ESXi boots from SAN through an independent 20 GB FC LUN presented from the EMC VNX storage array. The FC LUN also contains the hypervisor's locker for persistent storage of logs and other diagnostic files to provide stateless computing within the VCE System. The stateless hypervisor is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters contain the CPU, memory, network, and storage resources available for allocation to virtual machines (VMs). Clusters can scale up to a maximum of 32 hosts for VMware vSphere 5.1/5.5 and 64 hosts for VMware vSphere 6.0, and can support thousands of VMs. The clusters can also support a variety of Cisco UCS blades running inside the same cluster. Some advanced CPU functionality might be unavailable if more than one blade model is running in a given cluster.

Data stores

The VCE System supports a mixture of data store types: block-level storage using VMFS or file-level storage using NFS. The maximum size per VMFS5 volume is 64 TB. Each host/cluster can support a maximum of 255 volumes. VCE optimizes the advanced settings for VMware vSphere ESXi hosts to maximize the throughput and scalability of NFS data stores. The VCE System supports a maximum of 256 NFS data stores per host.
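The cluster and data store maximums above translate naturally into a pre-deployment sanity check. The following Python sketch validates a proposed cluster design against those limits; the limits are taken from this section, while the function and its inputs are illustrative assumptions.

# Sanity-check sketch for a proposed cluster design against the maximums
# described above (hosts per cluster by vSphere version, VMFS volumes per
# host/cluster, NFS data stores per host). Names and inputs are illustrative.
MAX_HOSTS = {"5.1": 32, "5.5": 32, "6.0": 64}
MAX_VMFS_VOLUMES = 255      # per host/cluster
MAX_NFS_DATASTORES = 256    # per host

def check_cluster(vsphere_version: str, hosts: int, vmfs_volumes: int, nfs_datastores: int) -> list:
    """Return a list of limit violations for the proposed cluster design."""
    problems = []
    if hosts > MAX_HOSTS[vsphere_version]:
        problems.append(f"{hosts} hosts exceeds the {MAX_HOSTS[vsphere_version]}-host limit "
                        f"for vSphere {vsphere_version}")
    if vmfs_volumes > MAX_VMFS_VOLUMES:
        problems.append(f"{vmfs_volumes} VMFS volumes exceeds the {MAX_VMFS_VOLUMES} maximum")
    if nfs_datastores > MAX_NFS_DATASTORES:
        problems.append(f"{nfs_datastores} NFS data stores exceeds the {MAX_NFS_DATASTORES} maximum")
    return problems

# Example: a 40-host vSphere 5.5 cluster is flagged; the same design on 6.0 passes
print(check_cluster("5.5", hosts=40, vmfs_volumes=120, nfs_datastores=64))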

30 Virtual networks Virtual networking in the AMP uses the VMware vsphere Distributed Switch (VDS). Virtual networking is managed by the Cisco Nexus 1000V distributed virtual switch. The Cisco Nexus 1000V Switch ensures consistent, policy-based network capabilities to all servers in the data center by allowing policies to move with a virtual machine during live migration. This provides persistent network, security, and storage compliance. Alternatively, virtual networking in the VCE System is managed by a VMware vcenter Virtual Distributed Switch (version 5.5 or higher) with comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware Standard Switch (VSS) and a VMware vsphere Distributed Switch (VDS) and will use a minimum of four uplinks presented to the hypervisor. The implementation of Cisco Nexus 1000V Series Switch for VMware vsphere 5.1/5.5 and VMware VDS for VMware vsphere 5.5 use intelligent network Class of Service (CoS) marking and Quality of Service (QoS) policies to appropriately shape network traffic according to workload type and priority. With VMware vsphere 6.0, QoS is set to Default (Trust Host). The vnics are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS Virtual Interface Card (VIC) hardware. Thus, VMware vsphere ESXi is presented a predicable uplink interface count. All applicable VLANs, native VLANs, MTU settings and QoS policies are assigned to the virtual network interface cards (vnic) to ensure consistency in case the uplinks need to be migrated to the VMware vsphere Distributed Switch (VDS) after manufacturing. VMware vcenter Server This topic describes the VMware vcenter Server, which is a central management point for the hypervisors and VMs. VMware vcenter Server and VMware Update Manager are installed on a 64-bit Windows Server and run as a service to assist with host patch management. The AMP and the VCE System each have a unified VMware vcenter Server instance, as well as an accompanying instance of VMware Update Manager to assist with upgrades and host patch management. VMware vcenter Server provides the following functionality: Cloning of virtual machines Creating templates VMware vmotion and VMware Storage vmotion Initial configuration of DRS and VMware vsphere high availability clusters VMware vcenter Server provides monitoring and alerting capabilities for hosts and VMs. VCE System administrators can create and apply the following alarms to all managed objects in VMware vcenter Server: Data center, cluster, and host health, inventory, and performance Data store health and capacity VM usage, performance, and health Virtualization layer 30

Virtual network usage and health

Databases

The back-end database that supports VMware vCenter Server and VMware Update Manager (VUM) is a remote Microsoft SQL Server 2008 database (vSphere 5.1) or Microsoft SQL Server 2012 database (vSphere 5.5/6.0). The SQL Server service requires a dedicated service account.

Authentication

VCE Systems support the VMware Single Sign-On (SSO) Service, which can integrate multiple identity sources, including Active Directory, OpenLDAP, and local accounts, for authentication. VMware SSO is available in VMware vSphere 5.1 and higher. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and Update Manager run as separate Windows services, which can be configured to use a dedicated service account depending on the security and directory services requirements.

VCE supported features

VCE supports the following VMware vCenter Server features:

- VMware Single Sign-On (SSO) Service (version 5.1 and higher)
- VMware vSphere Web Client (used with VCE Vision Intelligent Operations)
- VMware vSphere High Availability
- VMware Distributed Resource Scheduler (DRS)
- VMware Fault Tolerance
- VMware vMotion
- VMware Storage vMotion
- Layer 3 capability available for compute resources (version 6.0 and higher)
- Raw Device Mappings
- Resource Pools
- Storage DRS (capacity only)
- VMware vSphere Storage APIs for Array Integration (VAAI) (except the TP reclaim primitive)
- Storage-driven profiles (user-defined only)
- Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)
- VMware Syslog Service
- VMware Core Dump Collector
- VMware vCenter Web Client

Management

Management hardware overview

VCE Systems include an Advanced Management Platform (AMP). The AMP provides a single management point for VCE Systems and enables the following benefits:

- Monitors and manages VCE System health, performance, and capacity
- Provides fault isolation for management
- Eliminates resource overhead on a VCE System
- Provides a clear demarcation point for remote operations

The following AMP models are available:

Mini AMP: One Cisco UCS C220 server and two Cisco Catalyst 3560X Ethernet switches.

High-availability (HA) AMP: Two Cisco UCS C220 servers, two Cisco Catalyst 3560X Ethernet switches, and one EMC VNXe3150.

In a VCE System configuration with a mini AMP, space is reserved in the compute/storage base (CSB) cabinet for the additional HA AMP components to make it easy to upgrade to the HA AMP later. In the VCE System with EMC VNX5300, the AMP is not installed in the base cabinet. The AMP must be installed within an external storage expansion (SE) cabinet, network cabinet, or customer-provided cabinet. The mini AMP occupies three rack units (RUs). The HA AMP occupies six RUs.

Management software components

Each AMP is delivered pre-configured with the following software tools:

- Microsoft Windows Server 2008 Standard R2 x64 (six licenses per Cisco UCS blade)
- VMware vSphere Server Enterprise Plus
- VMware vSphere Hypervisor ESXi, VMware Single Sign-On (SSO) Service (version 5.1 and higher), VMware vSphere Web Client, VMware vCenter Server, VMware vCenter Database using Microsoft SQL Server 2008 Standard, VMware vCenter Update Manager, and VMware vCenter client
- Cisco Nexus 1000V virtual switch
- EMC PowerPath/VE License Management Server
- EMC Secure Remote Support (ESRS)
- Array management modules, including but not limited to EMC Unisphere Client and Server, EMC Unisphere Service Manager, EMC VNX Initialization Utility, and EMC VNX Startup Tool
- Cisco Device Manager and Cisco Data Center Network Manager
- System administration utilities such as PuTTY, TFTP Server, and Java
- (Optional) EMC Ionix UIM/P
- (Optional) EMC RecoverPoint management software, which includes EMC RecoverPoint Management Application and EMC RecoverPoint Deployment Manager
- (Optional) Cisco Secure Access Control Server (ACS)

System infrastructure

VCE System descriptions

This topic provides a comparison of the compute, network, and storage architecture for the VCE System. The following comparison shows the architecture of each VCE System model:

VCE System with EMC VNX7500:
- Storage access: Block or unified
- Fabric interconnect model: Cisco UCS 6248UP or Cisco UCS 6296UP
- Cisco B-Series blade chassis: 16 maximum (requires Cisco UCS 6296UP)
- B-Series blades (maximum): Half-width = 128, Full-width = 64
- IP switches: Cisco Nexus 5548UP or Cisco Nexus 5596UP
- SAN switches (segregated architecture only): Cisco MDS 9148
- Array: EMC VNX7500
- Storage protocol: Block = FC; Unified = NFS, FC, and CIFS
- Datastore type: Block = VMFS; Unified = NFS and VMFS
- X-Blades (unified only): Minimum = 2, Maximum = 8
- Boot path: SAN
- Disk drives: Minimum = 18, Maximum = 1000

VCE System with EMC VNX5700:
- Storage access: Block or unified
- Fabric interconnect model: Cisco UCS 6248UP or Cisco UCS 6296UP
- Cisco B-Series blade chassis: 16 maximum (requires Cisco UCS 6296UP)
- B-Series blades (maximum): Half-width = 128, Full-width = 64
- IP switches: Cisco Nexus 5548UP or Cisco Nexus 5596UP
- SAN switches (segregated architecture only): Cisco MDS 9148
- Array: EMC VNX5700
- Storage protocol: Block = FC; Unified = NFS, FC, and CIFS
- Datastore type: Block = VMFS; Unified = NFS and VMFS
- X-Blades (unified only): Minimum = 2, Maximum = 4
- Boot path: SAN
- Disk drives: Minimum = 18, Maximum = 500

VCE System with EMC VNX5500:
- Storage access: Block or unified
- Fabric interconnect model: Cisco UCS 6248UP or Cisco UCS 6296UP
- Cisco B-Series blade chassis: 8 maximum
- B-Series blades (maximum): Half-width = 64, Full-width = 32
- IP switches: Cisco Nexus 5548UP or Cisco Nexus 5596UP
- SAN switches (segregated architecture only): Cisco MDS 9148
- Array: EMC VNX5500
- Storage protocol: Block = FC; Unified = NFS, FC, and CIFS
- Datastore type: Block = VMFS; Unified = NFS and VMFS
- X-Blades (unified only): Minimum = 2, Maximum = 3
- Boot path: SAN
- Disk drives: Minimum = 18, Maximum = 250

VCE System with EMC VNX5300:
- Storage access: Block or unified
- Fabric interconnect model: Cisco UCS 6248UP
- Cisco B-Series blade chassis: 2 maximum
- B-Series blades (maximum): Half-width = 16, Full-width = 8
- IP switches: Cisco Nexus 5548UP
- SAN switches (segregated architecture only): Cisco MDS 9148
- Array: EMC VNX5300
- Storage protocol: Block = FC; Unified = NFS, FC, and CIFS
- Datastore type: Block = VMFS; Unified = NFS and VMFS
- X-Blades (unified only): Minimum = 2, Maximum = 2
- Boot path: SAN
- Disk drives: Minimum = 18, Maximum = 125
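For capacity planning scripts it can be handy to hold the per-model maximums above in a small data structure. The following Python sketch is illustrative only; it captures a subset of the comparison above, and the dictionary name, keys, and function are arbitrary.

# Illustrative lookup of a subset of the per-model maximums listed above.
VBLOCK_320_MODELS = {
    "VNX7500": {"max_chassis": 16, "max_half_width_blades": 128, "max_drives": 1000, "max_x_blades": 8},
    "VNX5700": {"max_chassis": 16, "max_half_width_blades": 128, "max_drives": 500,  "max_x_blades": 4},
    "VNX5500": {"max_chassis": 8,  "max_half_width_blades": 64,  "max_drives": 250,  "max_x_blades": 3},
    "VNX5300": {"max_chassis": 2,  "max_half_width_blades": 16,  "max_drives": 125,  "max_x_blades": 2},
}

def fits(model: str, blades: int, drives: int) -> bool:
    """Check whether a requested blade and drive count fits within a model's maximums."""
    limits = VBLOCK_320_MODELS[model]
    return blades <= limits["max_half_width_blades"] and drives <= limits["max_drives"]

# Example: 96 half-width blades and 400 drives fit a VNX7500 but not a VNX5500
print(fits("VNX7500", 96, 400), fits("VNX5500", 96, 400))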

35 Cabinets overview This topic describes the VCE cabinets. In each VCE System, the compute, storage, and network layer components are distributed within two or more 42U cabinets. Distributing the components this way balances out the power draw and reduces the size of the power outlet units (POUs) that are required. The VCE System with EMC VNX5300 configuration is installed in a single cabinet. Each cabinet conforms to a standard predefined layout. Space can be reserved for specific components even if they are not present or required for the external configuration. This design makes it easier to upgrade or expand each VCE System as capacity needs increase. VCE System cabinets are designed to be installed next to one another within the data center (that is, contiguously). If a customer requires the base and expansion cabinets to be physically separated, customized cabling is needed, which incurs additional cost and can increase delivery time. The cable length is NOT the same as distance between cabinets. The cable must route through the cabinets and through the cable channels overhead or in the floor. Cabinet types This topic describes the different cabinet types for the VCE System with EMC VNX7500, EMC VNX5700, EMC VNX5500, and EMC VNX5300. Compute/network/storage cabinets Compute/network/storage (CNS) cabinets contain the compute, storage, and networking components. The VCE System with EMC VNX7500, EMC VNX5700, and EMC VNX5500 has two CNS cabinets: CNS 1 and CNS 2. The VCE System with EMC VNX5300 has one CNS cabinet: CNS 1. The CNS 1 and CNS 2 (EMC VNX7500, VNX5700, and VNX5500) cabinets each contain the following set of components: Two Cisco UCS blade chassis, each containing up to four full-width blades or eight half-width blades One Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch One Cisco MDS 9148 Multilayer Fabric Switch (segregated network) One Cisco UCS 6248UP Fabric Interconnect or Cisco UCS 6296UP Fabric Interconnect The CNS 1 cabinet contains these additional components: EMC VNX array storage processors, control stations, and X-Blades (for unified storage configurations) Up to four DAEs (optional) The CNS 2 cabinet contains these additional components: Mini Advanced Management Platform (Mini AMP) 35 System infrastructure

36 Up to four DAEs (optional) Compute/storage cabinets If a VCE System expands beyond its compute/storage (CS) cabinets, up to two CS cabinets can be added to contain the additional components. Each CS cabinet can contain these components: Up to two Cisco UCS blade chassis, each containing up to four full-width blades or eight halfwidth blades (if the capacity limitations of the given VCE System permit more blades than can fit in the base cabinet) Up to nine DAEs, if the capacity limitations of a given VCE System permit this many DAEs Additional CS cabinets are added as needed after the VCE System expands beyond the initial CS cabinet. Space is reserved in each cabinet for the blade chassis and the DAEs, even if they are not all installed. This design makes it easier to add this hardware later, if necessary. Storage cabinets If a VCE System requires more DAEs than the CNS and CS cabinets can contain, a storage (S) cabinet is added. Each cabinet can contain up to 14 DAEs, if the capacity limitations of a given VCE System permit this many. Storage cabinets are numbered from 1 to 3. Storage cabinets are added only after all of the reserved DAE space in the CNS and CS cabinets is filled. Storage cabinets contain the following components: The AMP (VCE System with EMC VNX5300 only) (Optional) EMC RecoverPoint Appliances (RPAs) Network cabinets If additional space is required for networking or storage switching components, one or more network cabinets is added. Network cabinets serve as aggregation points for: IP networking components for cross-vce System communication and customer access points The core portion of core-edge block fabrics to support shared replication and fabric services AMP IP management connectivity consolidation Network cabinets contain the following components: A centralized AMP for large VCE System environments Shared (optional) EMC RecoverPoint Appliances (RPAs) Power options This topic describes the power outlet unit (POU) options inside and outside of North America. System infrastructure 36

VCE Systems support several POU options inside and outside of North America.

North America power options

The NEMA POU is standard; other POUs add time to assembly and delivery. The following POUs are available for VCE Systems in North America:

- NEMA L15-30P: 3-phase Delta / 30A / 208V
- IEC 309 3P4W SPLASH PROOF 460P9S: 3-phase Delta / 60A / 208V
- IEC 309 2P3W SPLASH PROOF 360P6S: Single phase / 60A / 208V
- NEMA L6-30P: Single phase / 30A / 208V

Europe power options

The IEC 309 POU is standard; other POUs add time to assembly and delivery. The following POUs are available for VCE Systems in Europe:

- IEC 60309, SPLASH PROOF: 3-phase WYE / 32A / 230/400V
- IEC 60309, SPLASH PROOF: 3-phase WYE / 16A / 230/400V
- IEC 60309, SPLASH PROOF: Single phase / 32A / 230V

Japan power options

The following POUs are available for VCE Systems in Japan:

- JIS C8303 L15-30P: 3-phase Delta / 30A / 208V
- IEC SPLASH PROOF: 3-phase Delta / 60A / 208V
- IEC SPLASH PROOF: Single phase / 60A / 208V
- JIS C8303 L15-30P: Single phase / 30A / 208V

The VCE Vblock System 320 Physical Planning Guide provides more information about power requirements.

Related information: Accessing VCE documentation

38 Configuration descriptions VCE System with EMC VNX7500 overview The VCE System with EMC VNX7500 can support the highest compute and storage capacity of all the Vblock 320 Systems. It contains the following features: Compute capacity is expandable from 4 blades to 128 half-width blades or 64 full-width blades. The VCE System with EMC VNX7500 storage capacity is expandable up to 64 DAEs/1000 drives, depending upon drive type mixture and DAE model. To support Ethernet and SAN requirements in a unified network architecture, a pair of Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches provide network connectivity. In a segregated network architecture, a pair of Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches provide Ethernet connectivity and a pair of Cisco MDS 9148 Multilayer Fabric Switches provide Fibre Channel (FC) connectivity. In a unified networking architecture, the Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches provide Ethernet and FC connectivity. Sample VCE System with EMC VNX7500 maximum configuration To add compute blades or DAEs beyond the capacity of the compute/network/storage (CNS) cabinets, add up to six compute/storage (CS) cabinets. To add DAEs beyond this capacity, add up to three storage (S) cabinets. The following illustration shows the front view of a sample maximized VCE System with EMC VNX7500: Configuration descriptions 38

The following illustration shows the rear view of a sample maximized VCE System with EMC VNX7500.

Expanding the VCE System with EMC VNX7500 compute layer

The VCE System with EMC VNX7500 compute layer can expand to include up to 16 Cisco UCS 5108 Server Chassis. Blade packs can be added for up to 64 full-width or 128 half-width blades. To add blades beyond the four chassis, add CS cabinets. To activate the chassis within the CS cabinets, you must purchase chassis activation kits. Each kit contains eight fabric interconnect port licenses to connect the FEXs in the chassis to the two Cisco UCS 6248UP Fabric Interconnects.

Expanding the VCE System with EMC VNX7500 storage layer

The VCE System with EMC VNX7500 storage capacity can be expanded in any of the following ways:

- Adding VMware vSphere ESXi or bare metal boot LUN RAID groups (five drives per RAID group) to upgrade the boot capacity. Each boot LUN RAID group accommodates 32 VMware vSphere ESXi hosts.
- Upgrading EMC FAST Cache from 100 GB to 2000 GB, in mirrored pairs of EFD drives.
- Adding RAID packs to one or more tiers. The EMC VNX7500 maximum configuration is 64 DAEs/1000 drives. Add DAEs in multiples of eight, with the same number of back-end buses.
- In a unified storage configuration, as storage capacity grows, adding X-Blades and DMEs with NFS/CIFS licensing. X-Blades are supplied in DME packs (two X-Blades per pack), including NFS/CIFS licensing. A VCE System with EMC VNX7500 configuration can expand to up to eight X-Blades in a 7+1 active-standby cluster configuration. The expansion DMEs and X-Blades are installed in the storage cabinets.
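As a quick illustration of the compute-expansion arithmetic above, the sketch below works out how many Cisco UCS 5108 chassis, expansion CS cabinets, and chassis activation kits a target blade count implies. The blade-per-chassis figures (eight half-width or four full-width) and the per-kit license count follow from the description above; the assumption of one activation kit per expansion chassis is an inference for illustration, so treat this as a planning sketch rather than an ordering tool.

```python
import math

HALF_WIDTH_PER_CHASSIS = 8   # half-width blades per Cisco UCS 5108 chassis
FULL_WIDTH_PER_CHASSIS = 4   # full-width blades per chassis
BASE_CHASSIS = 4             # chassis already present before CS cabinets are added
CHASSIS_PER_CS_CABINET = 2
PORT_LICENSES_PER_KIT = 8    # fabric interconnect port licenses per activation kit

def compute_expansion(blades, full_width=False):
    """Chassis, CS cabinets, and activation kits implied by a blade count.

    Assumes one activation kit per expansion chassis (eight FEX uplinks
    per chassis); confirm kit quantities with VCE sizing guidance.
    """
    per_chassis = FULL_WIDTH_PER_CHASSIS if full_width else HALF_WIDTH_PER_CHASSIS
    chassis = math.ceil(blades / per_chassis)
    extra_chassis = max(0, chassis - BASE_CHASSIS)
    cs_cabinets = math.ceil(extra_chassis / CHASSIS_PER_CS_CABINET)
    activation_kits = extra_chassis
    return {
        "chassis": chassis,
        "cs_cabinets": cs_cabinets,
        "activation_kits": activation_kits,
        "fi_port_licenses": activation_kits * PORT_LICENSES_PER_KIT,
    }

# 128 half-width blades (the EMC VNX7500 maximum)
# -> 16 chassis, 6 CS cabinets, 12 kits, 96 port licenses.
print(compute_expansion(128))
```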

Related information

Scaling up compute resources (see page 17)

Sample VCE System with EMC VNX7500 maximum CNS cabinets

This section shows the components in a sample of a maximized VCE System with EMC VNX7500 compute/network/storage (CNS) cabinet with the Cisco UCS 6296UP Fabric Interconnect. A Cisco UCS 6248UP Fabric Interconnect is also available.

Sample VCE System with EMC VNX7500 CNS 1 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX7500 CNS 1 cabinet.

Sample VCE System with EMC VNX7500 CNS 2 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX7500 CNS 2 cabinet.

The location of the EMC RecoverPoint Appliances (RPAs) depends on the VCE System configuration. If there are more than two EMC RPAs in the cluster, the EMC RPAs are not located in the CS cabinet. The EMC RPAs are located in the last CS or S cabinet that has available space.

Sample VCE System with EMC VNX7500 maximum CS cabinets

This topic shows the components in samples of maximized VCE System with EMC VNX7500 compute/storage (CS) cabinets.

Sample VCE System with EMC VNX7500 maximized CS 3 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX7500 CS 3 cabinet.

Sample VCE System with EMC VNX7500 maximized CS 4 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX7500 CS 4 cabinet.

Sample VCE System with EMC VNX7500 maximized CS 5 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX7500 CS 5 cabinet.

Sample VCE System with EMC VNX7500 maximized CS 6 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX7500 CS 6 cabinet.

Sample VCE System with EMC VNX7500 maximized CS 7 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX7500 CS 7 cabinet.

Sample VCE System with EMC VNX7500 maximized CS 8 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX7500 CS 8 cabinet.

Sample VCE System with EMC VNX7500 maximum S cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX7500 storage (S) 9 cabinet.

VCE System with EMC VNX5700 overview

The VCE System with EMC VNX5700 is a slightly smaller version of the VCE System with EMC VNX7500, with identical compute capacity but a different storage array. It contains the following features:

- Compute capacity is expandable from four blades to 128 half-width blades or 64 full-width blades.
- Storage capacity is expandable up to 32 DAEs/500 drives, depending upon the drive type mixture and DAE model.
- To support Ethernet and SAN requirements, a pair of Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches provide network connectivity. In a segregated network architecture, the Cisco Nexus 5548UP or 5596UP Switches provide Ethernet connectivity and a pair of Cisco MDS 9148 Multilayer Fabric Switches provide FC connectivity. In a unified network architecture, the Cisco Nexus 5548UP or 5596UP Switches provide both Ethernet and FC connectivity.
- In a unified storage configuration, a VCE System with EMC VNX5700 can expand to up to four X-Blades in a 3+1 active-standby cluster configuration.

Sample VCE System with EMC VNX5700 maximum configuration

To add compute blades or DAEs beyond the capacity of the compute/network/storage (CNS) cabinets, add up to two compute/storage cabinets (CS 3 and CS 4). To add DAEs beyond this capacity, add one storage cabinet.

The following illustration shows the front view of a sample maximized VCE System with EMC VNX5700.

The following illustration shows the rear view of a sample maximized VCE System with EMC VNX5700.

Expanding the VCE System with EMC VNX5700 compute layer

The VCE System with EMC VNX5700 compute layer can be expanded with up to 16 Cisco UCS 5108 Server Chassis. Blade packs can be added for up to 64 full-width or 128 half-width blades. To add blades beyond the four chassis in the base configuration, add compute/storage expansion cabinets. To activate the chassis within the compute/storage expansion cabinets, purchase chassis activation kits. Each kit contains eight fabric interconnect port licenses to connect the FEXs in the chassis to the two Cisco UCS 6248UP Fabric Interconnects. Expanding to 16 chassis requires upgrading to the Cisco UCS 6296UP Fabric Interconnects.

Expanding the VCE System with EMC VNX5700 storage layer

The VCE System with EMC VNX5700 storage capacity can be expanded in any of the following ways:

- Adding VMware vSphere ESXi or bare metal boot LUN RAID groups (five drives per RAID group) to upgrade the boot capacity. Each boot LUN RAID group accommodates 32 VMware vSphere ESXi hosts.
- Upgrading FAST Cache from 100 GB to 1500 GB, in mirrored pairs of EFD drives.
- Adding RAID packs to one or more tiers. The EMC VNX5700 maximum configuration is 32 DAEs/500 drives.
- In a unified configuration, as storage capacity grows, adding another X-Blade enclosure pack that contains two X-Blades, including NFS/CIFS licensing. The X-Blades are configured in a 3+1 active-standby cluster.
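The storage-expansion bullets above lend themselves to the same back-of-the-envelope arithmetic: boot LUN RAID groups are sized at five drives and 32 ESXi hosts each, and FAST Cache grows in mirrored pairs of EFD drives. The sketch below turns those rules into a small calculator. The 100 GB per-EFD capacity used in the FAST Cache example is an assumption for illustration, not a figure stated in this document.

```python
import math

DRIVES_PER_BOOT_RAID_GROUP = 5     # five drives per boot LUN RAID group
HOSTS_PER_BOOT_RAID_GROUP = 32     # ESXi hosts accommodated per RAID group

def boot_raid_groups(esxi_hosts):
    """Boot LUN RAID groups and drives needed for a given host count."""
    groups = math.ceil(esxi_hosts / HOSTS_PER_BOOT_RAID_GROUP)
    return groups, groups * DRIVES_PER_BOOT_RAID_GROUP

def fast_cache_drives(target_gb, efd_gb=100):
    """EFD drives for a FAST Cache target, built from mirrored pairs.

    efd_gb=100 is an assumed per-drive capacity; actual drive sizes
    depend on the EFD options offered for the array.
    """
    pairs = math.ceil(target_gb / efd_gb)
    return pairs * 2

# 128 half-width blades, each running one ESXi host -> (4, 20).
print(boot_raid_groups(128))
# Grow FAST Cache to the 1500 GB EMC VNX5700 maximum -> 30 drives.
print(fast_cache_drives(1500))
```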

Related information

Scaling up compute resources (see page 17)

Sample VCE System with EMC VNX5700 maximum CNS cabinets

This topic shows the components in samples of maximized VCE System with EMC VNX5700 compute/network/storage 1 (CNS 1) and compute/network/storage 2 (CNS 2) cabinets with the Cisco UCS 6248UP Series Fabric Interconnect. A Cisco UCS 6296UP Fabric Interconnect is also available.

Sample VCE System with EMC VNX5700 CNS 1 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5700 CNS 1 cabinet.

Sample VCE System with EMC VNX5700 CNS 2 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5700 CNS 2 cabinet.

The location of the EMC RecoverPoint Appliances (RPAs) depends on the VCE System configuration. If there are more than two EMC RPAs in the cluster, the EMC RPAs are not located in the CS cabinet. The EMC RPAs are located in the last CS or storage (S) cabinet that has available space.

Sample VCE System with EMC VNX5700 CS cabinets

This section shows the components in samples of maximized VCE System with EMC VNX5700 compute/storage 3 (CS 3) through compute/storage 7 (CS 7) cabinets.

Sample VCE System with EMC VNX5700 maximized CS 3 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5700 CS 3 cabinet.

Sample VCE System with EMC VNX5700 maximized CS 4 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5700 CS 4 cabinet.

Sample VCE System with EMC VNX5700 maximized CS 5 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5700 CS 5 cabinet.

Sample VCE System with EMC VNX5700 maximized CS 6 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5700 CS 6 cabinet.

Sample VCE System with EMC VNX5700 maximized CS 7 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5700 CS 7 cabinet.

VCE System with EMC VNX5500 overview

The VCE System with EMC VNX5500 is a slightly smaller version of the VCE System with EMC VNX5700, with less compute capacity and a different storage array. It contains the following features:

- Compute capacity is expandable from four blades to 64 half-width blades or 32 full-width blades.
- Storage capacity is expandable up to 16 DAEs/250 drives, depending upon the drive type mixture and DAE model.
- To support Ethernet and SAN requirements, a pair of Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches provide network connectivity. In a segregated network architecture, a pair of Cisco Nexus 5548UP Switches provide Ethernet connectivity and a pair of Cisco MDS 9148 Multilayer Fabric Switches provide FC connectivity. In a unified network architecture, the Cisco Nexus 5548UP or 5596UP Switches provide both Ethernet and FC connectivity.
- In a unified storage configuration, a VCE System with EMC VNX5500 can expand to up to three X-Blades in a 2+1 active-standby cluster configuration.

Sample VCE System with EMC VNX5500 maximum configuration

To add blades and DAEs beyond the capacity of the compute/network/storage (CNS) cabinets, add compute/storage (CS) cabinets.

The following illustration shows the front view of a sample VCE System with EMC VNX5500.

The following illustration shows the rear view of a sample maximized VCE System with EMC VNX5500.

Expanding the VCE System with EMC VNX5500 compute layer

The VCE System with EMC VNX5500 compute layer can be expanded to up to eight Cisco UCS 5108 Server Chassis. Blade packs can be added for up to 32 full-width or 64 half-width blades. To add blades beyond the four chassis, add CS cabinets. To activate the chassis within the CS cabinets, the customer must purchase chassis activation kits. Each kit contains eight fabric interconnect port licenses to connect the FEXs to either the two Cisco UCS 6248UP or Cisco UCS 6296UP Fabric Interconnects.

Expanding the VCE System with EMC VNX5500 storage layer

The VCE System with EMC VNX5500 storage capacity can be expanded in any of the following ways:

- Adding VMware vSphere ESXi or bare metal boot LUN RAID groups (five drives per RAID group) to upgrade the boot capacity. Each RAID group accommodates 32 VMware vSphere ESXi hosts.
- Upgrading FAST Cache from 100 GB to 1000 GB, in mirrored pairs of EFD drives.
- Adding RAID packs to one or more tiers. The EMC VNX5500 maximum configuration is 16 DAEs/250 drives.
- In a unified configuration, as storage capacity grows, adding another X-Blade, including NFS/CIFS licensing. The X-Blades are configured in a 2+1 active-standby cluster.
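Each array scales its file-serving X-Blades to a different active-standby limit: 7+1 (eight X-Blades) for the EMC VNX7500, 3+1 (four) for the EMC VNX5700, and 2+1 (three) for the EMC VNX5500. The short sketch below collects only those figures from this document and shows how the active and standby counts fall out of a requested X-Blade total. The minimum of two X-Blades and the single-standby model are assumptions for illustration, not licensing rules.

```python
# Maximum X-Blade counts taken from this document (N+1 active-standby).
MAX_XBLADES = {
    "VNX7500": 8,   # 7+1 active-standby
    "VNX5700": 4,   # 3+1 active-standby
    "VNX5500": 3,   # 2+1 active-standby
}

def xblade_cluster(model, xblades):
    """Return (active, standby) for a requested X-Blade count.

    Assumes one standby X-Blade per cluster and a minimum of two
    X-Blades; raises if the request exceeds the model's maximum.
    """
    limit = MAX_XBLADES[model]
    if not 2 <= xblades <= limit:
        raise ValueError(f"{model} supports 2 to {limit} X-Blades")
    return xblades - 1, 1

print(xblade_cluster("VNX7500", 8))   # -> (7, 1)
print(xblade_cluster("VNX5500", 3))   # -> (2, 1)
```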

Related information

Scaling up compute resources (see page 17)

Sample VCE System with EMC VNX5500 maximum CNS cabinets

This topic shows the components in samples of maximized VCE System with EMC VNX5500 compute/network/storage (CNS) cabinets.

Sample VCE System with EMC VNX5500 CNS 1 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5500 compute/network/storage 1 (CNS 1) cabinet.

Sample VCE System with EMC VNX5500 CNS 2 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5500 compute/network/storage 2 (CNS 2) cabinet.

The location of the EMC RecoverPoint Appliances (RPAs) depends on the VCE System configuration. If there are more than two EMC RPAs in the cluster, the EMC RPAs are not located in the compute/storage cabinet. The EMC RPAs are located in the last compute/storage expansion or storage expansion cabinet that has space available.

Sample VCE System with EMC VNX5500 CS cabinets

This topic shows the components in a sample of a maximized VCE System with EMC VNX5500 compute/storage 3 (CS 3) cabinet and a compute/storage 4 (CS 4) cabinet.

Sample VCE System with EMC VNX5500 maximized CS 3 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5500 CS 3 cabinet.

Sample VCE System with EMC VNX5500 maximized CS 4 cabinet

The following illustration shows the components in a sample of a maximized VCE System with EMC VNX5500 CS 4 cabinet.

VCE System with EMC VNX5300 overview

The VCE System with EMC VNX5300 is an entry-level VCE System in a single compute/storage base cabinet. It contains the following features:

- Compute capacity is expandable from four blades to 16 half-width blades or eight full-width blades.
- Includes an EMC VNX5300 storage array. Storage capacity is expandable up to eight DAEs/120 drives (including the DPE), depending upon the drive type mixture.
- Does not include an AMP within the compute/storage base cabinet. The AMP must be installed within an external SE cabinet, aggregation cabinet, or customer-provided cabinet. The standard AMP occupies three rack units (RUs); the HA AMP occupies six RUs.
- To support Ethernet and SAN requirements, a pair of Cisco Nexus 5548UP Switches provide network connectivity. In a segregated network architecture, the Cisco Nexus 5548UP Switches provide Ethernet connectivity and a pair of Cisco MDS 9148 Multilayer Fabric Switches provide FC connectivity. In a unified network architecture, the Cisco Nexus 5548UP Switches provide both Ethernet and FC connectivity.

Sample VCE System with EMC VNX5300 maximum configuration

To add DAEs beyond the capacity of the compute/network/storage (CNS) cabinets, add a storage cabinet.

The following illustration shows the front view of a sample maximized VCE System with EMC VNX5300.

The following illustration shows the rear view of a sample maximized VCE System with EMC VNX5300.

Blade packs can be added for up to eight full-width or 16 half-width blades.

The VCE System with EMC VNX5300 storage capacity can be expanded in any of the following ways:

- Adding VMware vSphere ESXi or bare metal boot LUN RAID groups (five drives per RAID group) to upgrade the boot capacity. Each RAID group accommodates 32 VMware vSphere ESXi hosts.
- Upgrading FAST Cache from 100 GB to 500 GB, in mirrored pairs of EFD drives.
- Adding RAID packs to one or more tiers. The EMC VNX5300 maximum configuration is eight DAEs (including the DPE)/120 drives.
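The four configuration descriptions above share the same shape: each model has a blade ceiling and a DAE/drive ceiling. As a closing illustration, the sketch below collects those maxima (taken directly from this document) in one place and checks a proposed configuration against them. It is a convenience for reading the limits side by side, not a substitute for VCE sizing guidance.

```python
# Per-model maxima taken from the configuration descriptions above.
MODEL_MAXIMA = {
    "VNX7500": {"half_width_blades": 128, "daes": 64, "drives": 1000},
    "VNX5700": {"half_width_blades": 128, "daes": 32, "drives": 500},
    "VNX5500": {"half_width_blades": 64,  "daes": 16, "drives": 250},
    "VNX5300": {"half_width_blades": 16,  "daes": 8,  "drives": 120},
}

def check_configuration(model, half_width_blades, daes, drives):
    """Return the list of limits a proposed configuration exceeds."""
    limits = MODEL_MAXIMA[model]
    requested = {
        "half_width_blades": half_width_blades,
        "daes": daes,
        "drives": drives,
    }
    return [name for name, value in requested.items() if value > limits[name]]

# A 48-blade, 20-DAE, 300-drive request is too large for a VNX5500.
print(check_configuration("VNX5500", 48, 20, 300))  # -> ['daes', 'drives']
```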
