VCE Vblock and VxBlock Systems 340 Architecture Overview


Document revision 3.11
April 2016

Revision history

April: Added support for the Cisco Nexus 3172TQ Switch.
December: Updated to include the 16 Gb SLIC. Added support for the Cisco MDS 9148S Multilayer Fabric Switch. Added support for the unified (NAS) configuration for EMC VNX5800, EMC VNX7600, and EMC VNX8000. Updated support for mixed internal and external access in a configuration with more than two X-Blades. Updated power options. Updated VCE System with EMC VNX5800 elevations for the VxBlock System 340. Updated VCE System with EMC VNX5800 (ACI ready) elevations for the Cisco MDS 9148S Multilayer Fabric Switch and the VxBlock System 340.
October: Updated graphics.
August: Updated to include the VxBlock System 340. Added support for VMware vSphere 6.0 with VMware VDS on the VxBlock System and for existing Vblock Systems. Added information on the Intelligent Physical Infrastructure (IPI) appliance.
February: Added support for the Cisco B200 M4 Blade.
December: Added support for AMP-2HA.
September: Modified elevations and removed the aggregate section.
July: Added support for VMware VDS.
May: Updated for the Cisco Nexus 9396 Switch and 1500 drives for EMC VNX8000. Added support for VMware vSphere 5.5.
January: Updated elevations for AMP-2 reference.
November: Updated the network connectivity management illustration.
October: Gen 3.1 release.

Contents

Introduction
Accessing VCE documentation
System overview
  System architecture and components
  Base configurations and scaling
  Connectivity overview
  Segregated network architecture
  Unified network architecture
Compute layer
  Compute overview
  Cisco Unified Computing System
  Cisco Unified Computing System fabric interconnects
  Cisco Trusted Platform Module
  Scaling up compute resources
  VCE bare metal support policy
  Disjoint layer 2 configuration
Storage layer
  Storage overview
  EMC VNX series storage arrays
  Replication
  Scaling up storage resources
  Storage features support
Network layer
  Network overview
  IP network components
  Port utilization
  Cisco Nexus 5548UP Switch - segregated networking
  Cisco Nexus 5596UP Switch - segregated networking
  Cisco Nexus 5548UP Switch - unified networking
  Cisco Nexus 5596UP Switch - unified networking
  Cisco Nexus 9396PX Switch - segregated networking
  Storage switching components
Virtualization layer
  Virtualization overview
  VMware vSphere Hypervisor ESXi
  VMware vCenter Server
Management
  Management components overview
  Management hardware components
  Management software components
  Management network connectivity
Configuration descriptions
  VCE Systems with EMC VNX5400
  VCE Systems with EMC VNX5600
  VCE Systems with EMC VNX5800
  VCE Systems with EMC VNX7600
  VCE Systems with EMC VNX8000
Sample configurations
  Sample VCE System with EMC VNX
  Sample VCE System with EMC VNX
  Sample VCE System with EMC VNX5800 (ACI ready)
System infrastructure
  VCE Systems descriptions
  Cabinets overview
  Intelligent Physical Infrastructure appliance
  Power options
Additional references
  Virtualization components
  Compute components
  Network components
  Storage components

Introduction

This document describes the high-level design of the VCE System and the hardware and software components that VCE includes in the VCE System. In this document, the Vblock System and the VxBlock System are referred to as VCE Systems.

The VCE Glossary provides terms, definitions, and acronyms that are related to VCE.

To suggest documentation changes or provide feedback on this book, send an email that includes the name of the topic to which your feedback applies.

Related information

Accessing VCE documentation (see page 6)

Accessing VCE documentation

Select the documentation resource that applies to your role.

Customer: support.vce.com. A valid username and password are required. Click VCE Download Center to access the technical documentation.
Cisco, EMC, VMware employee, or VCE Partner: partner.vce.com. A valid username and password are required.
VCE employee: sales.vce.com/saleslibrary or vblockproductdocs.ent.vce.com

System overview

System architecture and components

VCE Systems are modular platforms with defined scale points that meet the higher performance and availability requirements of an enterprise's business-critical applications.

Refer to the VCE Systems Physical Planning Guide for information about cabinets and their components, the Intelligent Physical Infrastructure solution, and environmental, security, power, and thermal management.

VCE Systems include the following architecture features:

Optimized, fast-delivery configurations based on the most commonly purchased components
Standardized cabinets with multiple North American and international power solutions
Block (SAN) and unified storage options (SAN and NAS)
Support for multiple features of the EMC operating environment for EMC VNX arrays
Granular but optimized compute and storage growth by adding predefined kits and packs
Second generation of the Advanced Management Platform (AMP-2) for management
Unified network architecture, which provides the option to leverage Cisco Nexus switches to support IP and SAN without the use of Cisco MDS switches

VCE Systems contain the following key hardware and software components:

VCE Systems management:
VCE Vision Intelligent Operations System Library
VCE Vision Intelligent Operations Plug-in for vCenter
VCE Vision Intelligent Operations Compliance Checker
VCE Vision Intelligent Operations API for System Library
VCE Vision Intelligent Operations API for Compliance Checker

Virtualization and management:
VMware vSphere Server Enterprise Plus
VMware vSphere ESXi
VMware vCenter Server
VMware vSphere Web Client
VMware Single Sign-On (SSO) Service (version 5.1 and higher)
Cisco UCS C220 Server for AMP-2
EMC PowerPath/VE
Cisco UCS Manager
EMC Unisphere Manager
EMC VNX Local Protection Suite
EMC VNX Remote Protection Suite
EMC VNX Application Protection Suite
EMC VNX Fast Suite
EMC VNX Security and Compliance Suite
EMC Secure Remote Support (ESRS)
EMC PowerPath Electronic License Management Server (ELMS)
Cisco Data Center Network Manager for SAN

Compute:
Cisco UCS 5108 Server Chassis
Cisco UCS B-Series M3 Blade Servers with Cisco UCS VIC 1240, optional port expander, or Cisco UCS VIC 1280
Cisco UCS B-Series M4 Blade Servers with Cisco UCS VIC 1340, optional port expander, or Cisco UCS VIC 1380
Cisco UCSB-MLOM-PT-01 port expander for the 1240 VIC
Cisco UCS 2208XP fabric extenders or Cisco UCS 2204XP fabric extenders
Cisco UCS 2208XP Fabric Extenders with FET Optics or Cisco UCS 2204XP Fabric Extenders with FET Optics
Cisco UCS 6248UP Fabric Interconnects or Cisco UCS 6296UP Fabric Interconnects

Network:
Cisco Nexus 3172TQ or Cisco Nexus 3048 Switches. Refer to the appropriate RCM for a list of what is supported on your VCE System.
Cisco Nexus 5548UP Switches, Cisco Nexus 5596UP Switches, or Cisco Nexus 9396PX Switches
(Optional) Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch. Refer to the appropriate RCM for a list of what is supported on your VCE System.
(Optional) Cisco Nexus 1000V Series Switches
(Optional) VMware vSphere Distributed Switch (VDS) (VMware vSphere version 5.5 and higher)
(Optional) VMware NSX virtual networking

Storage:
EMC VNX storage array (5400, 5600, 5800, 7600, 8000) running the VNX Operating Environment
(Optional) EMC unified storage (NAS)

VCE Systems have different scale points based on compute and storage options. VCE Systems can support block and/or unified storage protocols.

The VCE Release Certification Matrix provides a list of the certified versions of components for VCE Systems. For information about VCE System management, refer to the VCE Vision Intelligent Operations Technical Overview. The VCE Integrated Data Protection Guide provides information about available data protection solutions.

Related information

Accessing VCE documentation (see page 6)
EMC VNX series storage arrays (see page 26)

Base configurations and scaling

VCE Systems have base configurations that contain a minimum set of compute and storage components, and fixed network resources that are integrated in one or more 19-inch, 42U cabinets. In the base configuration, you can customize the following hardware:

Compute blades: Cisco UCS B-Series blade types include all supported VCE blade configurations.
Compute chassis (Cisco UCS Server Chassis): Sixteen chassis maximum for VCE Systems with EMC VNX8000, EMC VNX7600, and EMC VNX5800; eight chassis maximum for VCE Systems with EMC VNX5600; two chassis maximum for VCE Systems with EMC VNX5400.
Edge servers (with optional VMware NSX): Four to six Cisco UCS B-Series Blade Servers, including the B200 M4 with VIC 1340 and VIC 1380. For more information, see the VCE VxBlock Systems for VMware NSX Architecture Overview.
Storage hardware: Drive flexibility for up to three tiers of storage per pool, drive quantities in each tier, the RAID protection for each pool, and the number of disk array enclosures (DAEs).
Storage: EMC VNX storage, block only or unified (SAN and NAS).
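The chassis maximums above can be captured in a small lookup. The following is an illustrative sketch, not a VCE-provided tool; the dictionary and function names are hypothetical:

```python
# Maximum Cisco UCS chassis counts per VCE System model, as listed above.
# Hypothetical helper for illustration only.
MAX_CHASSIS = {
    "VNX8000": 16,
    "VNX7600": 16,
    "VNX5800": 16,
    "VNX5600": 8,
    "VNX5400": 2,
}

def chassis_within_limit(model: str, chassis: int) -> bool:
    """Return True if a requested chassis count fits the model's maximum."""
    return 0 < chassis <= MAX_CHASSIS[model]
```

For example, eight chassis fit a VNX5600 system, while a third chassis on a VNX5400 system would exceed its two-chassis maximum.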

Supported disk drives:
FastCache: 100/200 GB SLC SSD
Tier 0: 100/200 GB SLC SSD; 100/200/400 GB eMLC SSD
Tier 1: 300/600 GB 15K SAS; 600/900 GB 10K SAS
Tier 2: 1/2/3/4 TB 7.2K NL-SAS

Supported RAID types:
Tier 0: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1)
Tier 1: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2)*, (14+2)**
Tier 2: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2)*, (14+2)**
*file virtual pool only
**block virtual pool only

Management hardware options: The second generation of the Advanced Management Platform (AMP-2) centralizes management of VCE System components. AMP-2 offers minimum physical, redundant physical, and highly available models. The standard option for this platform is the minimum physical model. The optional VMware NSX feature requires AMP-2HA Performance.

Data Mover enclosure (DME) packs: Available on all VCE Systems. Additional enclosure packs can be added for additional X-Blades on VCE Systems with EMC VNX8000, EMC VNX7600, and EMC VNX5800.

Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the compute and storage arrays in the system. All components have N+N or N+1 redundancy. These resources can be scaled up as necessary to meet increasingly stringent requirements. The maximum supported configuration differs from model to model.

To scale up compute resources, add blade packs and chassis activation kits. To scale up storage resources, add RAID packs, DME packs, and DAE packs. Optionally, expansion cabinets with additional resources can be added.

VCE Systems are designed to keep hardware changes to a minimum if the storage protocol is changed after installation (for example, from block storage to unified storage). Cabinet space can be reserved for all components that are needed for each storage configuration (Cisco MDS switches, X-Blades, and so on), ensuring that network and power cabling capacity for these components is in place.

Related information

EMC VNX series storage arrays (see page 26)
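As a reading aid for the RAID table above, the tier-to-RAID mapping can be expressed as a simple lookup. This is a sketch with hypothetical names; the file-pool and block-pool footnotes from the table are carried only as comments:

```python
# Supported RAID types per storage tier, from the table above.
# Note: RAID 6 (12+2) applies to file virtual pools only, and
# RAID 6 (14+2) to block virtual pools only.
SUPPORTED_RAID = {
    0: {"RAID 1/0 (4+4)", "RAID 5 (4+1)", "RAID 5 (8+1)"},
    1: {"RAID 1/0 (4+4)", "RAID 5 (4+1)", "RAID 5 (8+1)",
        "RAID 6 (6+2)", "RAID 6 (12+2)", "RAID 6 (14+2)"},
    2: {"RAID 1/0 (4+4)", "RAID 5 (4+1)", "RAID 5 (8+1)",
        "RAID 6 (6+2)", "RAID 6 (12+2)", "RAID 6 (14+2)"},
}

def raid_supported(tier: int, raid: str) -> bool:
    """True if the given RAID layout is listed for the given tier."""
    return raid in SUPPORTED_RAID.get(tier, set())
```

For instance, RAID 6 layouts are listed for tiers 1 and 2 but not for tier 0.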

Connectivity overview

The interconnectivity between the VCE Systems components depends on the network architecture. These components and their interconnectivity are conceptually subdivided into the following layers:

Compute: Contains the components that provide the computing power within a VCE System. The Cisco UCS blade servers, chassis, and fabric interconnects belong to this layer.
Storage: Contains the EMC VNX storage component.
Network: Contains the components that provide switching between the compute and storage layers within a VCE System, and between a VCE System and the external network. Cisco MDS switches and Cisco Nexus switches belong to this layer.

All components incorporate redundancy into the design.

Segregated network architecture and unified network architecture

In the segregated network architecture, LAN and SAN connectivity is segregated into separate switches within the VCE System. LAN switching uses the Cisco Nexus switches. SAN switching uses the Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch. Refer to the appropriate RCM for a list of what is supported on your VCE System.

In the unified network architecture, LAN and SAN switching is consolidated onto a single network device (Cisco Nexus 5548UP or Cisco Nexus 5596UP switches) within the VCE System. This removes the need for a Cisco MDS SAN switch.

Note: The optional VMware NSX feature uses the Cisco Nexus 9396 switches for LAN switching. For more information, see the VCE VxBlock Systems for VMware NSX Architecture Overview.

All management interfaces for infrastructure power outlet unit (POU), network, storage, and compute devices are connected to redundant Cisco Nexus 3172TQ or Cisco Nexus 3048 switches. Refer to the appropriate RCM for a list of what is supported on your VCE System. These switches provide connectivity for the Advanced Management Platform (AMP-2) and egress points into the management stacks for the VCE System components.

Related information

Accessing VCE documentation (see page 6)
Management components overview (see page 47)
Segregated network architecture (see page 13)
Unified network architecture (see page 16)

Segregated network architecture

This topic shows the VCE Systems segregated network architecture for block, SAN boot, and unified storage.

Block storage configuration

The following illustration shows a block-only storage configuration for VCE Systems with no EMC X-Blades in the cabinets. You can reserve space in the cabinets for these components (including optional EMC RecoverPoint Appliances). This design makes it easier to add the components later if there is an upgrade to unified storage.

SAN boot storage configuration

In all VCE Systems configurations, the VMware vSphere ESXi blades boot over the Fibre Channel (FC) SAN. In block-only configurations, block storage devices (boot and data) are presented over FC through the SAN. In a unified storage configuration, the boot devices are presented over FC, and data devices can be either block devices (SAN) or presented as NFS data stores (NAS). In a file-only configuration, the boot devices are presented over FC and data devices are presented through NFS shares. Storage can also be presented directly to the VMs as CIFS shares.

The following illustration shows the components (highlighted in a red, dotted line) that are leveraged to support SAN booting in the VCE Systems:

Unified storage configuration

In a unified storage configuration, the storage processors also connect to X-Blades over FC. The X-Blades connect to the Cisco Nexus switches in the network layer over 10 GbE, as shown in the following illustration:

Related information

Connectivity overview (see page 11)
Unified network architecture (see page 16)

Unified network architecture

This topic provides an overview of the block storage, SAN boot storage, and unified storage configurations for the unified network architecture.

With unified network architecture, access to both block and file services on the EMC VNX is provided using the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch. The Cisco Nexus 9396PX Switch is not supported in the unified network architecture.

Block storage configuration

The following illustration shows a block-only storage configuration in the VCE Systems:

In this example, there are no X-Blades providing NAS capabilities. However, space can be reserved in the cabinets for these components (including the optional EMC RecoverPoint Appliance). This design makes it easier to add the components later if there is an upgrade to unified storage.

In a unified storage configuration for block and file, the storage processors also connect to X-Blades over FC. The X-Blades connect to the Cisco Nexus switches within the network layer over 10 GbE.

SAN boot storage configuration

In all VCE Systems configurations, VMware vSphere ESXi blades boot over the FC SAN. In block-only configurations, block storage devices (boot and data) are presented over FC through the Cisco Nexus unified switch. In a unified storage configuration, the boot devices are presented over FC and data devices can be either block devices (SAN) or presented as NFS data stores (NAS). In a file-only configuration, boot devices are presented over FC, and data devices over NFS shares. The remainder of the storage can be presented either as NFS or as VMFS datastores. Storage can also be presented directly to the VMs as CIFS shares.

The following illustration shows the components that are leveraged to support SAN booting in the VCE Systems:

Unified storage configuration

In a unified storage configuration, the storage processors also connect to X-Blades over FC. The X-Blades connect to the Cisco Nexus switches within the network layer over 10 GbE.

The following illustration shows a unified storage configuration for the VCE Systems:

Related information

Connectivity overview (see page 11)
Management components overview (see page 47)
Segregated network architecture (see page 13)

Compute layer

Compute overview

This topic provides an overview of the compute components for the VCE System.

Cisco UCS B-Series Blade Servers installed in the Cisco UCS chassis provide the computing power within the VCE System. Fabric extenders (FEX) within the Cisco UCS chassis connect to Cisco fabric interconnects over converged Ethernet. Up to eight 10 GbE ports on each Cisco UCS fabric extender connect northbound to the fabric interconnects, regardless of the number of blades in the chassis. These connections carry IP and storage traffic.

VCE has reserved some of these ports to connect to upstream access switches within the VCE System. These connections are formed into a port channel to the Cisco Nexus switch and carry IP traffic destined for the external network over 10 GbE links. In a unified storage configuration, this port channel can also carry NAS traffic to the X-Blades within the storage layer.

Each fabric interconnect also has multiple ports reserved by VCE for Fibre Channel (FC) connectivity. These ports connect to Cisco SAN switches and carry FC traffic between the compute layer and the storage layer. In a unified storage configuration, port channels carry IP traffic to the X-Blades for NAS connectivity. For SAN connectivity, SAN port channels carrying FC traffic are configured between the fabric interconnects and upstream Cisco MDS or Cisco Nexus switches.

Cisco Unified Computing System

This topic provides an overview of the Cisco Unified Computing System (UCS), a data center platform that unites compute, network, and storage access. Optimized for virtualization, the Cisco UCS integrates a low-latency, lossless 10 Gb Ethernet unified network fabric with enterprise-class, x86-based servers (the Cisco B-Series).
VCE Systems powered by Cisco UCS offer the following features:

Built-in redundancy for high availability
Hot-swappable components for serviceability, upgrade, or expansion
Fewer physical components than in a comparable system built piece by piece
Reduced cabling
Improved energy efficiency over traditional blade server chassis

The Vblock System Blade Pack Reference provides a list of supported Cisco UCS blades.
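As a back-of-the-envelope illustration of the converged fabric described above, the aggregate northbound bandwidth from a chassis follows directly from the FEX link count. This sketch assumes the redundant pair of fabric extenders per chassis described earlier; the helper name is hypothetical:

```python
def chassis_uplink_bandwidth_gbps(links_per_fex: int,
                                  fex_per_chassis: int = 2,
                                  link_speed_gbps: int = 10) -> int:
    """Aggregate northbound bandwidth from one chassis to the fabric
    interconnects: links per FEX x FEX per chassis x 10 GbE per link."""
    return links_per_fex * fex_per_chassis * link_speed_gbps

# An 8-link configuration yields 8 * 2 * 10 = 160 Gb/s per chassis;
# a 2-link configuration yields 40 Gb/s.
```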

Related information

Accessing VCE documentation (see page 6)

Cisco Unified Computing System fabric interconnects

The Cisco Unified Computing System (UCS) fabric interconnects provide network connectivity and management capabilities to the Cisco UCS blades and chassis. They provide the management and communication backbone for the blades and chassis, as well as LAN and SAN connectivity for all blades within their domain. Cisco UCS fabric interconnects are used for boot functions and offer line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions.

VCE Systems use Cisco UCS 6248UP Fabric Interconnects or Cisco UCS 6296UP Fabric Interconnects. Single-domain uplinks of 2, 4, or 8 links between the fabric interconnects and the chassis are provided with the Cisco UCS 6248UP Fabric Interconnects. Single-domain uplinks of 4 or 8 links between the fabric interconnects and the chassis are provided with the Cisco UCS 6296UP Fabric Interconnects.

The optional VMware NSX feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the port count needed for VMware NSX external connectivity (edges). For more information, see the VCE VxBlock Systems for VMware NSX Architecture Overview.

Related information

Accessing VCE documentation (see page 6)

Cisco Trusted Platform Module

The Cisco Trusted Platform Module (TPM) provides authentication and attestation services that enable safer computing in all environments. The Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption keys that are used to authenticate the VCE System.

The Cisco TPM is available by default in the VCE System as a component of the Cisco UCS B-Series M3 Blade Servers and Cisco UCS B-Series M4 Blade Servers, and is shipped disabled. The Vblock System Blade Pack Reference contains additional information about the Cisco TPM.

VCE supports only the Cisco TPM hardware. VCE does not support the Cisco TPM functionality. Because making effective use of the Cisco TPM involves the use of a software stack from a vendor with significant experience in trusted computing, VCE defers to the software stack vendor for configuration and operational considerations relating to the Cisco TPM.

Related information

Scaling up compute resources

This topic describes what you can add to your VCE System to scale up compute resources.

To scale up compute resources, you can add uplinks, blade packs, and chassis activation kits to enhance Ethernet and Fibre Channel (FC) bandwidth, either when VCE Systems are built or after they are deployed.

The following table shows the maximum chassis (and blade) quantities that are supported for each VCE System, by fabric interconnect, IOM, and link count:

VCE Systems with | 2-link 6248UP / 2204XP IOM | 4-link 6248UP / 2204XP IOM | 4-link 6296UP / 2204XP IOM | 8-link 6248UP / 2208XP IOM | 8-link 6296UP / 2208XP IOM
EMC VNX8000 | 16 (128) | 8 (64) | 16 (128) | 4 (32) | 8 (64)
EMC VNX7600 | 16 (128) | 8 (64) | 16 (128) | 4 (32) | 8 (64)
EMC VNX5800 | 16 (128) | 8 (64) | 16 (128) | 4 (32) | 8 (64)
EMC VNX5600 | N/A | 8 (64) | 8 (64) | 4 (32) | 8 (64)
EMC VNX5400 | N/A | 2 (16) | N/A | N/A | N/A

Ethernet and FC I/O bandwidth enhancement

For VCE Systems with EMC VNX5600, EMC VNX5800, EMC VNX7600, and EMC VNX8000, the Ethernet I/O bandwidth enhancement increases the number of Ethernet uplinks from the Cisco UCS 6296UP fabric interconnects to the network layer to reduce oversubscription. To enhance Ethernet I/O bandwidth performance, increase the uplinks between the Cisco UCS 6296UP fabric interconnects and the Cisco Nexus 5548UP Switch for segregated networking, or the Cisco Nexus 5596UP Switch for unified networking.

FC I/O bandwidth enhancement increases the number of FC links between the Cisco UCS 6248UP or Cisco UCS 6296UP fabric interconnects and the SAN switch, and from the SAN switch to the EMC VNX storage array. The FC I/O bandwidth enhancement feature is supported on VCE Systems with EMC VNX5800, EMC VNX7600, and EMC VNX8000.
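The "chassis (blades)" pairs in the table follow from each Cisco UCS 5108 chassis holding up to eight half-width blades, and blades shipping in packs of two. A quick sizing sketch (hypothetical helpers, not VCE tools):

```python
import math

BLADES_PER_CHASSIS = 8  # a Cisco UCS 5108 chassis holds up to 8 half-width blades

def max_blades(chassis: int) -> int:
    """Blade capacity implied by a chassis count, as in the table's 16 (128)."""
    return chassis * BLADES_PER_CHASSIS

def packs_needed(blades: int) -> int:
    """Smallest number of two-blade packs covering a blade count, respecting
    the two-pack (four-blade) minimum per blade type described below."""
    return max(2, math.ceil(blades / 2))
```

For example, 16 chassis imply a 128-blade maximum, and a request for 7 blades rounds up to 4 two-blade packs.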
Blade packs

Cisco UCS blades are sold in packs of two; each pack contains two identical Cisco UCS blades. The base configuration of each VCE System includes two blade packs. The maximum number of blade packs depends on the type of VCE System. Each blade type must have a minimum of two blade packs as a base configuration and can then be increased in single blade pack increments.

Each blade pack is added along with the following license packs:

VMware vSphere ESXi

Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)
EMC PowerPath/VE

Note: License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and EMC PowerPath are not available for bare metal blades.

The Vblock System Blade Pack Reference provides a list of supported Cisco UCS blades.

Chassis activation kits

The power supplies and fabric extenders for all chassis are populated and cabled, and all required Twinax cables and transceivers are populated. As more blades are added and additional chassis are required, chassis activation kits (CAKs) are automatically added to an order. The kit contains software licenses to enable additional fabric interconnect ports. Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis activation kits can be added up front to allow for flexibility in the field, or to initially spread the blades across a larger number of chassis.

Related information

Accessing VCE documentation (see page 6)

VCE bare metal support policy

Because many applications cannot be virtualized for technical or commercial reasons, VCE Systems support bare metal deployments, such as non-virtualized operating systems and applications. While it is possible for VCE Systems to support these workloads (with the caveats noted below), due to the nature of bare metal deployments, VCE is able to provide only "reasonable effort" support for systems that comply with the following requirements:

VCE Systems contain only VCE published, tested, and validated hardware and software components. The VCE Release Certification Matrix provides a list of the certified versions of components for VCE Systems.
The operating systems used on bare metal deployments for compute and storage components must comply with the published hardware and software compatibility guides from Cisco and EMC.
For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, etc.), those hypervisor technologies are not supported by VCE. VCE support is provided only on VMware hypervisors.

VCE reasonable-effort support includes VCE acceptance of customer calls, a determination of whether a VCE System is operating correctly, and assistance in problem resolution to the extent possible.

VCE is unable to reproduce problems or provide support on the operating systems and applications installed on bare metal deployments. In addition, VCE does not provide updates to or test those operating systems or applications. The OEM support vendor should be contacted directly for issues and patches related to those operating systems and applications.

Related information

Accessing VCE documentation (see page 6)

Disjoint layer 2 configuration

In the disjoint layer 2 configuration, traffic is split between two or more different networks at the fabric interconnect to support two or more discrete Ethernet clouds. The Cisco UCS servers connect to two different clouds.

Upstream disjoint layer 2 networks allow two or more Ethernet clouds that never connect to be accessed by servers or VMs located in the same Cisco UCS domain.

The following illustration provides an example implementation of disjoint layer 2 networking in a Cisco UCS domain:

Virtual port channels (vPCs) 101 and 102 are production uplinks that connect to the network layer of the VCE Systems. Virtual port channels 105 and 106 are external uplinks that connect to other switches. If you use Ethernet performance port channels (103 and 104 by default), port channels 101 through 104 are assigned to the same VLANs.
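The disjoint layer 2 example above can be summarized as a small mapping from uplink role to port-channel IDs. This is illustrative only: the IDs come from the example, the dictionary and function names are assumptions, and the "same VLANs" relationship is stated in the text only for the production side:

```python
# Port channels from the disjoint layer 2 example above: production uplinks
# (including the default Ethernet performance port channels 103 and 104,
# which share the production VLANs) and external uplinks to other switches.
UPLINK_PORT_CHANNELS = {
    "production": {101, 102, 103, 104},
    "external": {105, 106},
}

def same_cloud(pc_a: int, pc_b: int) -> bool:
    """True if two port channels belong to the same Ethernet cloud
    in this example, and so carry the same set of VLANs."""
    return any(pc_a in ids and pc_b in ids
               for ids in UPLINK_PORT_CHANNELS.values())
```

For instance, port channels 101 and 104 sit in the same production cloud, while 101 and 105 belong to clouds that never connect.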

Storage layer

Storage overview

The EMC VNX series are fourth-generation storage platforms that deliver industry-leading capabilities. They offer a unique combination of flexible, scalable hardware design and advanced software capabilities that enable them to meet the diverse needs of today's organizations.

EMC VNX series platforms support block storage and unified storage. The platforms are optimized for VMware virtualized applications. They feature flash drives for extendable cache and high performance in the virtual storage pools. Automation features include self-optimized storage tiering and application-centric replication.

Regardless of the storage protocol implemented at startup (block or unified), VCE Systems can include cabinet space, cabling, and power to support the hardware for all of these storage protocols. This arrangement makes it easier to move from block storage to unified storage with minimal hardware changes.

VCE Systems are available with:

EMC VNX5400
EMC VNX5600
EMC VNX5800
EMC VNX7600
EMC VNX8000

Note: In all VCE Systems, all EMC VNX components are installed in VCE cabinets in a VCE-specific layout.

EMC VNX series storage arrays

The EMC VNX series storage arrays contain common components across all models.

The EMC VNX series storage arrays connect to dual storage processors (SPs) using 6 Gb/s four-lane serial attached SCSI (SAS). Each storage processor connects to one side of each of two, four, eight, or sixteen (depending on the VCE System) redundant pairs of four-lane 6 Gb/s SAS buses, providing continuous drive access to hosts in the event of a storage processor or bus fault. Fibre Channel (FC) expansion cards within the storage processors connect to the Cisco MDS switches in the network layer over FC.
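The back-end bus arithmetic above is easy to check: each redundant bus is four-lane 6 Gb/s SAS. A minimal sketch (the helper is hypothetical):

```python
def sas_bus_bandwidth_gbps(lanes: int = 4, lane_speed_gbps: int = 6) -> int:
    """Raw bandwidth of one four-lane 6 Gb/s SAS back-end bus."""
    return lanes * lane_speed_gbps

# One bus: 4 x 6 = 24 Gb/s of raw SAS bandwidth.
```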

The storage layer in the VCE System consists of an EMC VNX storage array. Each EMC VNX model contains some or all of the following components:

The disk processor enclosure (DPE) houses the storage processors for the EMC VNX5400, EMC VNX5600, EMC VNX5800, and EMC VNX7600. The DPE provides slots for two storage processors, two battery backup units (BBUs), and an integrated 25-slot disk array enclosure (DAE) for 2.5" drives. Each SP provides support for up to five SLICs (small I/O cards).

The EMC VNX8000 uses a storage processor enclosure (SPE) and standby power supplies (SPSs). The SPE is a 4U enclosure with slots for two storage processors, each supporting up to 11 SLICs. Each EMC VNX8000 includes two 2U SPSs that power the SPE and the vault DAE. Each SPS contains two Li-ion batteries that require special shipping considerations.

X-Blades (also known as data movers) provide file-level storage capabilities. These are housed in data mover enclosures (DMEs). Each X-Blade connects to the network switches using 10G links (either Twinax or 10G fibre).

DAEs contain individual disk drives and are available in the following configurations:

2U model that can hold 25 2.5" disks
3U model that can hold 15 3.5" disks

EMC VNX5400

The EMC VNX5400 is a DPE-based array with two back-end SAS buses, up to four slots for front-end connectivity, and support for up to 250 drives. It is available in both unified (NAS) and block configurations.

EMC VNX5600

The EMC VNX5600 is a DPE-based array with up to six back-end SAS buses, up to five slots for front-end connectivity, and support for up to 500 drives. It is available in both unified (NAS) and block configurations.

EMC VNX5800

The EMC VNX5800 is a DPE-based array with up to six back-end SAS buses, up to five slots for front-end connectivity, and support for up to 750 drives. It is available in both unified (NAS) and block configurations.
EMC VNX7600

The EMC VNX7600 is a DPE-based array with six back-end SAS buses, up to four slots for front-end connectivity, and support for up to 1000 drives. It is available in both unified (NAS) and block configurations.

EMC VNX8000

The EMC VNX8000 comes in a different form factor from the other EMC VNX models. The EMC VNX8000 is an SPE-based model with up to 16 back-end SAS buses, up to nine slots for front-end connectivity, and support for up to 1500 drives. It is available in both unified (NAS) and block configurations.

Related information

Storage features support (see page 31)

Replication

This section describes how VCE Systems can be upgraded to include EMC RecoverPoint.

For block storage configurations, the VCE System can be upgraded to include EMC RecoverPoint. This replication technology provides continuous data protection and continuous remote replication for on-demand protection and recovery to any point in time. EMC RecoverPoint advanced capabilities include policy-based management, application integration, and bandwidth reduction. EMC RecoverPoint is included in the EMC Local Protection Suite and EMC Remote Protection Suite.

To implement EMC RecoverPoint within a VCE System, add two or more EMC RecoverPoint Appliances (RPAs) in a cluster to the VCE System. This cluster can accommodate approximately 80 MBps of sustained throughput through each EMC RPA.

To ensure proper sizing and performance of an EMC RPA solution, VCE works with an EMC Technical Consultant. They collect information about the data to be replicated, as well as data change rates, data growth rates, network speeds, and other information that is needed to ensure that all business requirements are met.

Scaling up storage resources

You can scale up storage resources in the VCE System. To scale up storage resources, you can expand block I/O bandwidth between the compute and storage resources, add RAID packs, and add disk array enclosure (DAE) packs. I/O bandwidth and packs can be added when VCE Systems are built and after they are deployed.
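The EMC RecoverPoint throughput figure quoted earlier (approximately 80 MBps sustained per RPA, deployed in clusters of two or more) lends itself to a rough first-pass estimate. The function below is an illustrative sketch of my own; actual sizing is performed with an EMC Technical Consultant, as described above.

```python
import math

# Rough sizing sketch based on the ~80 MBps sustained throughput per RPA
# quoted above; not a substitute for a proper sizing engagement.
MBPS_PER_RPA = 80
MINIMUM_CLUSTER_SIZE = 2   # RPAs are deployed in clusters of two or more

def rpas_required(sustained_mbps):
    # round up to whole appliances, but never below the minimum cluster size
    return max(MINIMUM_CLUSTER_SIZE, math.ceil(sustained_mbps / MBPS_PER_RPA))

assert rpas_required(60) == 2    # the two-RPA minimum still applies
assert rpas_required(200) == 3   # ceil(200 / 80)
```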
I/O bandwidth expansion

You can increase Fibre Channel (FC) bandwidth in the VCE Systems with EMC VNX8000, VCE Systems with EMC VNX7600, and VCE Systems with EMC VNX5800. An I/O bandwidth expansion adds an additional four FC interfaces per fabric between the fabric interconnects and the Cisco MDS 9148 or 9148S Multilayer Fabric Switch with segregated network architecture, or the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch with unified network architecture. The expansion includes an additional four FC ports from the EMC VNX to each SAN fabric. Refer to the appropriate RCM for a list of what is supported on your VCE System.

This option is available for environments that require high bandwidth, block-only configurations. This configuration requires the use of four storage array ports per storage processor that are normally reserved for unified connectivity of the X-Blades.

RAID packs

Storage capacity can be increased by adding RAID packs. Each pack contains a number of drives of a given type, speed, and capacity. The number of drives in a pack depends upon the RAID level that it supports.

The number and types of RAID packs to include in VCE Systems are based upon the following:

The number of storage pools that are needed.

The storage tiers that each pool contains, and the speed and capacity of the drives in each tier. The following table lists tiers, supported drive types, and supported speeds and capacities.

Note: The speed and capacity of all drives within a given tier in a given pool must be the same.

Tier | Drive type | Supported speeds and capacities
1 | Solid-state Enterprise Flash drives (EFD) | 100 GB SLC EFD, 200 GB SLC EFD, 100 GB eMLC EFD, 200 GB eMLC EFD, 400 GB eMLC EFD
2 | Serial attached SCSI (SAS) | 300 GB 10K RPM, 600 GB 10K RPM, 900 GB 10K RPM, 300 GB 15K RPM, 600 GB 15K RPM
3 | Nearline SAS | 1 TB 7.2K RPM, 2 TB 7.2K RPM, 3 TB 7.2K RPM

The RAID protection level for the tiers in each pool. The RAID protection level for the different pools can vary. The following table describes each supported RAID protection level:

RAID protection level | Description
RAID 1/0 | A set of mirrored drives. Offers the best overall performance of the three supported RAID protection levels. Offers robust protection: can sustain double-drive failures that are not in the same mirror set. Lowest economy of the three supported RAID levels, since usable capacity is only 50% of raw capacity.
RAID 5 | Block-level striping with a single parity block, where the parity data is distributed across all of the drives in the set. Offers the best mix of performance, protection, and economy. Has a higher write performance penalty than RAID 1/0 because multiple I/Os are required to perform a single write. With single parity, can sustain a single drive failure with no data loss, but is vulnerable to data loss or unrecoverable read errors on a track during a drive rebuild. Highest economy of the three supported RAID levels: usable capacity is 80% of raw capacity or better.
RAID 6 | Block-level striping with two parity blocks, distributed across all of the drives in the set. Offers increased protection and read performance comparable to RAID 5. Has a significant write performance penalty because multiple I/Os are required to perform a single write. Economy is very good: usable capacity is 75% of raw capacity or better. EMC best practice for SATA and NL-SAS drives.

There are RAID packs for each RAID protection level/tier type combination. The RAID levels dictate the number of drives that are included in the packs. RAID 5 or RAID 1/0 is used for the performance and extreme performance tiers, and RAID 6 is used for the capacity tier.
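The usable-capacity figures quoted above (50% for RAID 1/0, 80% or better for RAID 5, 75% or better for RAID 6) can be checked with simple arithmetic: the usable fraction is the number of data drives divided by the total drives in the set. The helper below is illustrative only.

```python
# Illustrative check of the usable-capacity claims above (helper is mine).
def usable_fraction(data_drives, redundancy_drives):
    # usable capacity / raw capacity for a single RAID set
    return data_drives / (data_drives + redundancy_drives)

assert usable_fraction(4, 4) == 0.5     # RAID 1/0 (4 data + 4 mirrors): 50% of raw
assert usable_fraction(8, 1) >= 0.80    # RAID 5 (8 data + 1 parity): 80% or better
assert usable_fraction(6, 2) == 0.75    # RAID 6 (6 data + 2 parity): 75% or better
```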
The following table lists RAID protection levels and the number of drives in the pack for each level:

RAID protection level | Number of drives per RAID pack
RAID 1/0 | 8 (4 data + 4 mirrors)
RAID 5 | 5 (4 data + 1 parity) or 9 (8 data + 1 parity)
RAID 6 | 8 (6 data + 2 parity), 14 (12 data + 2 parity)*, or 16 (14 data + 2 parity)**

* file virtual pool only

** block virtual pool only

Disk array enclosure packs

If the number of RAID packs in VCE Systems is expanded, more disk array enclosures (DAEs) might be required. DAEs are added in packs. The number of DAEs in each pack is equivalent to the number of back-end buses in the EMC VNX array in the VCE System. The following table lists the number of buses in the array and the number of DAEs in the pack for each VCE System:

VCE System | Number of buses in the array | Number of DAEs in the DAE pack
EMC VNX8000 | 8 or 16 | 8 or 16
EMC VNX7600 | 6 | 6
EMC VNX5800 | 6 | 6
EMC VNX5600 | 2 or 6 | 2 or 6 (base includes DPE as the first DAE)
EMC VNX5400 | 2 | 2 (base includes DPE as the first DAE)

There are two types of DAEs:

2U 25-slot DAE for 2.5" disks
3U 15-slot DAE for 3.5" disks

A DAE pack can contain a mix of DAE sizes, if the total number of DAEs in the pack equals the number of buses. To ensure that the loads are balanced, physical disks are spread across the DAEs in accordance with best practice guidelines.

Storage features support

This topic presents additional storage features available on the VCE Systems.

Support for array hardware or capabilities

The following table provides an overview of the support provided in the EMC VNX operating environment for new array hardware or capabilities:

Feature | Description
NFS Virtual X-Blades VDM (Multi-LDAP support) | Provides security and segregation for clients in service provider environments.
Data-in-place block compression | When compression is enabled, thick LUNs are converted to thin and compressed in place. RAID group LUNs are migrated into a pool during compression. There is no need for additional space to start compression. Decompression temporarily requires additional space, since it is a migration, and not an in-place decompression.

Feature | Description
Compression for file / display compression capacity savings | Available file compression types: fast compression (default) and deep compression (up to 30% more space efficient, but slower and with higher CPU usage). Displays capacity savings due to compression to allow a cost/benefit comparison (space savings versus performance impact).
EMC VNX snapshots | EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time. Note: This feature is optional. VCE relies on guidance from EMC best practices for different use cases of EMC SnapView snapshots versus EMC VNX snapshots.

Hardware features

VCE supports the following hardware features:

Dual 10 GE Optical/Active Twinax IP IO/SLIC for X-Blades
2.5 inch vault drives
2.5 inch DAEs and drive form factors
3.5 inch DAEs and drive form factors

File deduplication

File deduplication is supported, but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements.

Block compression

Block compression is supported, but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements.

External NFS and CIFS access

The VCE Systems can present CIFS and NFS shares to external clients, provided that these guidelines are followed:

VCE Systems shares cannot be mounted internally by VCE Systems hosts and externally to the VCE Systems at the same time.

In a configuration with two X-Blades, mixed internal and external access is not supported. The following configurations are supported:

External NFS and external CIFS only
Internal NFS and internal CIFS only

In a configuration with more than two X-Blades, mixed internal and external access is supported. In a configuration with more than two X-Blades, external NFS and CIFS access can run on one or more X-Blades that are physically separate from the X-Blades serving VMFS data stores to the VCE System compute layer.

Snapshots

EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time.

Note: EMC VNX snapshots are an optional feature. VCE relies on guidance from EMC best practices for different use cases of EMC SnapView snapshots versus EMC VNX snapshots.

Replicas

For VCE Systems NAS configurations, EMC VNX Replicator is supported. This software can create local clones (full copies) and replicate file systems asynchronously across IP networks. EMC VNX Replicator is included in the EMC VNX Remote Protection Suite.
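Pulling together the DAE rules described earlier (a DAE pack may mix 2U and 3U enclosures as long as the total equals the array's back-end bus count, with 25 slots per 2U DAE and 15 per 3U DAE), the pack check and its slot count can be sketched as follows. The function names are mine and the sketch is illustrative only.

```python
# Illustrative sketch of the DAE-pack rule described earlier; not a VCE tool.
SLOTS_2U = 25   # 2U DAE holds 25 x 2.5" drives
SLOTS_3U = 15   # 3U DAE holds 15 x 3.5" drives

def valid_dae_pack(num_buses, daes_2u, daes_3u):
    # total enclosures in the pack must equal the back-end bus count
    return daes_2u + daes_3u == num_buses

def dae_pack_slots(daes_2u, daes_3u):
    # total drive slots added by one DAE pack
    return daes_2u * SLOTS_2U + daes_3u * SLOTS_3U

assert valid_dae_pack(6, 4, 2)       # six-bus array, mixed 2U/3U pack
assert not valid_dae_pack(6, 4, 1)   # one enclosure short of the bus count
assert dae_pack_slots(4, 2) == 130   # 4*25 + 2*15 drive slots
```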

Network layer

Network overview

The network components are switches that provide connectivity to the different components in the VCE System.

The Cisco Nexus series switches in the network layer provide 10 or 40 GbE IP connectivity between the VCE System and the external network. In a unified storage configuration, the switches also connect the fabric interconnects in the compute layer to the X-Blades in the storage layer.

In the segregated network architecture, the Cisco MDS 9000 series switches in the network layer provide Fibre Channel (FC) links between the Cisco fabric interconnects and the EMC VNX array. These FC connections provide block-level devices to blades in the compute layer. In the unified network architecture, there are no Cisco MDS series storage switches; FC connectivity is provided by the Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches.

Ports are reserved or identified for special services such as backup, replication, or aggregation uplink connectivity.

The VCE System contains two Cisco Nexus 3172TQ or Cisco Nexus 3048 Switches to provide management network connectivity to the different components of the VCE System. Refer to the appropriate RCM for a list of what is supported on your VCE System. These connections include the EMC VNX service processors, Cisco UCS fabric interconnects, Cisco Nexus 5500 series or Cisco Nexus 9396PX switches, and power output unit (POU) management interfaces.

IP network components

VCE Systems use the following IP network components.

VCE Systems use Cisco UCS 6200 series fabric interconnects. VCE Systems with EMC VNX5400 use the Cisco UCS 6248UP Fabric Interconnects. All other VCE Systems use the Cisco UCS 6248UP Fabric Interconnects or the Cisco UCS 6296UP Fabric Interconnects.
VCE Systems include two Cisco Nexus 5548UP switches, Cisco Nexus 5596UP switches, or Cisco Nexus 9396PX switches to provide 10 or 40 GbE connectivity:

Between the VCE Systems internal components
To the site network
To the second-generation Advanced Management Platform (AMP-2) through redundant connections between AMP-2 and the Cisco Nexus 5548UP switches, Cisco Nexus 5596UP switches, or Cisco Nexus 9396PX switches

To support the Ethernet and SAN requirements in the traditional, segregated network architecture, two Cisco Nexus 5548UP switches or Cisco Nexus 9396PX switches provide Ethernet connectivity, and a pair of Cisco MDS switches provides Fibre Channel (FC) connectivity. The Cisco Nexus 5548UP Switch is available as an option for all segregated network VCE Systems. It is also an option for unified network VCE Systems with EMC VNX5400 and EMC VNX5600.

Cisco Nexus 5500 series switches

The two Cisco Nexus 5500 series switches support low-latency, line-rate, 10 Gb Ethernet and FC over Ethernet (FCoE) connectivity for up to 96 ports. Unified port expansion modules are available and provide an extra 16 ports of 10 GbE or FC connectivity. The FC ports are licensed in packs of eight on an on-demand basis.

The Cisco Nexus 5548UP switches have 32 integrated, low-latency, unified ports. Each port provides line-rate 10 Gb Ethernet or 8 Gbps FC connectivity. The Cisco Nexus 5548UP switches have one expansion slot that can be populated with a 16-port unified port expansion module. The Cisco Nexus 5548UP Switch is the only network switch supported for data connectivity in VCE Systems with EMC VNX5400.

The Cisco Nexus 5596UP switches have 48 integrated, low-latency, unified ports. Each port provides line-rate 10 Gb Ethernet or 8 Gbps FC connectivity. The Cisco Nexus 5596UP switches have three expansion slots that can be populated with 16-port unified port expansion modules. The Cisco Nexus 5596UP Switch is available as an option for both network topologies for all VCE Systems except VCE Systems with EMC VNX5400.

Cisco Nexus 9396PX Switch

The Cisco Nexus 9396PX Switch supports both 10 Gbps SFP+ ports and 40 Gbps QSFP+ ports. The Cisco Nexus 9396PX Switch is a two-rack-unit (2RU) appliance with all ports licensed and available for use. There are no expansion modules available for the Cisco Nexus 9396PX Switch.
The Cisco Nexus 9396PX Switch provides 48 integrated, low-latency SFP+ ports. Each port provides line-rate 1/10 Gbps Ethernet. There are also 12 QSFP+ ports that provide line-rate 40 Gbps Ethernet.

Related information

Management hardware components (see page 47)
Management software components (see page 48)

Port utilization

This section describes the switch port utilization for the Cisco Nexus 5548UP Switch and Cisco Nexus 5596UP Switch in segregated networking and unified networking configurations, as well as the Cisco Nexus 9396PX Switch in a segregated networking configuration.
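As a quick consistency check on the port counts above (32 base ports plus one 16-port module on the Cisco Nexus 5548UP, 48 base ports plus up to three modules on the Cisco Nexus 5596UP), the totals work out as follows. This is illustrative arithmetic of my own, not vendor tooling.

```python
# Illustrative tally of total port counts from the base/expansion figures above.
def total_ports(base_ports, modules=0, ports_per_module=16):
    return base_ports + modules * ports_per_module

assert total_ports(32, modules=1) == 48   # Cisco Nexus 5548UP with one 16UP module
assert total_ports(48, modules=3) == 96   # Cisco Nexus 5596UP fully expanded
```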

Cisco Nexus 5548UP Switch - segregated networking

This section describes port utilization for a Cisco Nexus 5548UP Switch segregated networking configuration.

The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1G or 10G connectivity for LAN traffic. The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module) with segregated networking:

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8* | 10G | Twinax
Uplinks to customer core | 8** | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5000 Series Switches | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+

* VCE Systems with EMC VNX5400 only support four links between the Cisco UCS FIs and Cisco Nexus 5548UP switches.
** VCE Systems with EMC VNX5400 only support four links between the Cisco Nexus 5548UP Switch and the customer core network.

The remaining ports in the base Cisco Nexus 5548UP Switch (no module) provide support for the following additional connectivity option:

Feature | Available ports | Port speeds | Media
Customer IP backup | 3 | 1G or 10G | SFP+

If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, there are 28 additional ports (beyond the core connectivity requirements) available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected. The following table shows the additional connectivity for the Cisco Nexus 5548UP Switch with a 16UP module:

Feature | Available ports | Port speeds | Media
Customer IP backup | 4 | 1G or 10G | SFP+
Uplinks from Cisco UCS FI for Ethernet bandwidth (BW) enhancement | 8 | 10G | Twinax

Cisco Nexus 5596UP Switch - segregated networking

This section describes port utilization for a Cisco Nexus 5596UP Switch segregated networking configuration.

The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1G or 10G connectivity for LAN traffic. The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module) with segregated networking:

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS FI | 8 | 10G | Twinax
Uplinks to customer core | 8 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5000 Series Switches | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+

The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the following additional connectivity option:

Feature | Used ports | Port speeds | Media
Customer IP backup | 3 | 1G or 10G | SFP+

If an optional 16 unified port module is added to the Cisco Nexus 5596UP Switch, additional ports (beyond the core connectivity requirements) are available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected. The following table shows the additional connectivity for the Cisco Nexus 5596UP Switch with one 16UP module:

Note: The Cisco Nexus 5596UP Switch with two or three 16UP modules is not supported with segregated networking.

Feature | Available ports | Port speeds | Media
Customer IP backup | 4 | 1G or 10G | SFP+
Uplinks from Cisco UCS FIs for Ethernet BW enhancement | 8 | 10G | Twinax

Cisco Nexus 5548UP Switch - unified networking

This section describes port utilization for a Cisco Nexus 5548UP Switch unified networking configuration.

The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1G or 10G connectivity for LAN traffic or 2/4/8 Gbps FC traffic.

The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module) with unified networking for VCE Systems with EMC VNX5400 only:

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS FI | 4 | 10G | Twinax
Uplinks to customer core | 4 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5K | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+
FC uplinks from Cisco UCS FI | 4 | 8G | SFP+
FC links to EMC VNX array | 6 | 8G | SFP+

The following table shows the core connectivity for the Cisco Nexus 5548UP Switch with unified networking for VCE Systems with EMC VNX5600:

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS FI | 8 | 10G | Twinax
Uplinks to customer core | 8 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5K | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+
FC uplinks from Cisco UCS FI | 4 | 8G | SFP+
FC links to EMC VNX array | 6 | 8G | SFP+

The remaining ports in the base Cisco Nexus 5548UP Switch (no module) provide support for the following additional connectivity options for VCE Systems with EMC VNX5400 only:

Feature | Available ports | Port speeds | Media
X-Blade connectivity | 2 | 10G | EMC Active Twinax
X-Blade NDMP connectivity | 2 | 8G | SFP+
Customer IP backup | 3 | 1G or 10G | SFP+

The remaining ports in the base Cisco Nexus 5548UP Switch provide support for the following additional connectivity options for the other VCE Systems:

Feature | Available ports | Port speeds | Media
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) | 2 | 1G | GE_T SFP+
X-Blade connectivity | 2 | 10G | EMC Active Twinax
Customer IP backup | 2 | 1G or 10G | SFP+

If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, additional ports (beyond the core connectivity requirements) are available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected. The following table shows the additional connectivity for the Cisco Nexus 5548UP Switch with one 16UP module:

Feature | Available ports | Port speeds | Media
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) | 4 | 1G | GE_T SFP+
X-Blade connectivity | 8 | 10G | EMC Active Twinax
Customer IP backup | 4 | 1G or 10G | SFP+
Uplinks from Cisco UCS FIs for Ethernet BW enhancement | 8 | 10G | Twinax

Cisco Nexus 5596UP Switch - unified networking

This section describes port utilization for a Cisco Nexus 5596UP Switch unified networking configuration.

The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1/10G connectivity for LAN traffic or 2/4/8 Gbps Fibre Channel (FC) traffic. The following table shows the core connectivity for the Cisco Nexus 5596UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS FI | 8 | 10G | Twinax
Uplinks to customer core | 8 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 5K | 2 | 10G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+
FC uplinks from Cisco UCS FI | 4 | 8G | SFP+
FC links to EMC VNX array | 6 | 8G | SFP+

The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the following additional connectivity options:

Feature | Minimum ports required for feature | Port speeds | Media
X-Blade connectivity | 4 | 10G | EMC Active Twinax
X-Blade NDMP connectivity | 2 | 8G | SFP+
IP backup solutions | 4 | 1G or 10G | SFP+

Feature | Minimum ports required for feature | Port speeds | Media
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) | 2 | 1G | GE_T SFP+
EMC RecoverPoint SAN links (two per EMC RecoverPoint Appliance) | 4 | 8G | SFP+

Up to three additional 16 unified port modules can be added to the Cisco Nexus 5596UP Switch (depending on the selected VCE System). Each module has 16 ports to enable additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected. The following table shows the connectivity options for the Cisco Nexus 5596UP Switch for slots 2-4:

Feature | Ports available for feature | Port speeds | Media | Default module
Uplinks from Cisco UCS FI for Ethernet BW enhancement | 8 | 10G | Twinax | 1
EMC VPLEX SAN connections (4 per engine) | 8 | 8G | SFP+ | 1
X-Blade connectivity | 12 | 10G | EMC Active Twinax | 3
X-Blade NDMP connectivity | 6 | 8G | SFP+ | 3, 4
EMC RecoverPoint WAN links (1 per EMC RecoverPoint Appliance pair) | 2 | 1G | GE_T SFP+ | 4
EMC RecoverPoint SAN links (2 per EMC RecoverPoint Appliance) | 4 | 8G | SFP+ | 4
FC links from Cisco UCS fabric interconnect for FC BW enhancement | 4 | 8G | SFP+ | 4
FC links from EMC VNX array for FC BW enhancement | 4 | 8G | SFP+ | 4

Cisco Nexus 9396PX Switch - segregated networking

This section describes port utilization for a Cisco Nexus 9396PX Switch segregated networking configuration.

The base Cisco Nexus 9396PX Switch provides 48 SFP+ ports used for 1G or 10G connectivity and 12 40G QSFP+ ports for LAN traffic.

The following table shows core connectivity for the Cisco Nexus 9396PX Switch with segregated networking:

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8* | 10G | Twinax
Uplinks to customer core*** | 8 (10G)** / 2 (40G) | Up to 40G | SFP+/QSFP+
vPC peer links | 2 | 40G | Twinax
AMP-2 ESX management | 3 | 10G | SFP+

* VCE Systems with EMC VNX5400 only support four links between the Cisco UCS FIs and Cisco Nexus 9396PX switches.
** VCE Systems with EMC VNX5400 only support four links between the Cisco Nexus 9396PX Switch and the customer core network.
*** VCE Systems with the Cisco Nexus 9396PX Switch support 40G or 10G SFP+ uplinks to the customer core.

The remaining ports in the Cisco Nexus 9396PX Switch provide support for a combination of the following additional connectivity options:

Feature | Available ports | Port speeds | Media
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) | 4 | 1G | GE_T SFP+
Customer IP backup | 8 | 1G or 10G | SFP+
X-Blade connectivity | 8 | 10G | EMC Active Twinax
Uplinks from Cisco UCS FIs for Ethernet BW enhancement* | 8 | 10G | Twinax

* Not supported on VCE Systems with EMC VNX5400

Storage switching components

The storage switching components consist of redundant Cisco SAN fabric switches.

In a segregated networking model, there are two Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches. Refer to the appropriate RCM for a list of what is supported on your VCE System. In a unified networking model, Fibre Channel (FC) based features are provided by the two Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches that are also used for LAN traffic.

In VCE Systems, these switches provide:

FC connectivity between the compute layer components and the storage layer components
Connectivity for backup, business continuity (EMC RecoverPoint Appliance), and storage federation requirements when configured

Note: Inter-Switch Links (ISLs) to the existing SAN are not permitted.

The Cisco MDS 9148 Multilayer Fabric Switch provides from 16 to 48 line-rate ports (in 8-port increments) for non-blocking 8 Gbps throughput. The port groups are enabled on an as-needed basis. The Cisco MDS 9148S Multilayer Fabric Switch provides from 12 to 48 line-rate ports (in 12-port increments) for non-blocking 16 Gbps throughput. The port groups are enabled on an as-needed basis.

The Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches provide a number of line-rate ports for non-blocking 8 Gbps throughput. Expansion modules can be added to the Cisco Nexus 5596UP Switch to provide 16 additional ports operating at line rate.

The following tables define the port utilization for the SAN components when using a Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch. Refer to the appropriate RCM for a list of what is supported on your VCE System.

Feature | Used ports | Port speeds | Media
FC uplinks from Cisco UCS FI | 4 | 8G | SFP+
FC links to EMC VNX array | 6 | 8G or 16G** | SFP+

** 16 Gb Fibre Channel SLICs are available on the EMC VNX storage arrays.

Feature | Available ports
Backup | 2
FC links from Cisco UCS fabric interconnect (FI) for FC bandwidth (BW) enhancement | 4
FC links from EMC VNX storage array for FC BW enhancement | 4
FC links to EMC VNX storage array dedicated for replication | 2
EMC RecoverPoint SAN links (two per EMC RecoverPoint Appliance) | 8
SAN aggregation | 2
EMC VPLEX SAN connections (four per engine) | 8
EMC X-Blade network data management protocol (NDMP) connectivity | 2
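The on-demand port licensing described above (16 to 48 ports in 8-port increments on the MDS 9148, 12 to 48 ports in 12-port increments on the MDS 9148S) can be expressed as a small rounding calculation. The helper below is an illustrative sketch of my own, not Cisco licensing tooling.

```python
import math

# Illustrative sketch of the on-demand port licensing increments above.
SWITCH_LICENSING = {
    "MDS 9148": (16, 48, 8),     # (base ports, maximum ports, increment)
    "MDS 9148S": (12, 48, 12),
}

def licensed_ports(model, ports_needed):
    # smallest licensable port count that covers the requirement
    base, maximum, increment = SWITCH_LICENSING[model]
    if ports_needed > maximum:
        raise ValueError("port requirement exceeds switch capacity")
    extra = max(0, ports_needed - base)
    return base + math.ceil(extra / increment) * increment

assert licensed_ports("MDS 9148", 20) == 24    # 16 base + one 8-port step
assert licensed_ports("MDS 9148S", 20) == 24   # 12 base + one 12-port step
assert licensed_ports("MDS 9148", 10) == 16    # never below the base count
```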

Virtualization layer

Virtualization overview

VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are the VMware vSphere ESXi hypervisor and VMware vCenter Server for management. Depending on the version that you are running, VMware vSphere 5.x includes a Single Sign-On (SSO) component as a standalone Windows server or as an embedded service on the vCenter server. VMware vSphere 6.0 includes a pair of Platform Services Controller Linux appliances to provide the Single Sign-On (SSO) service.

The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility with the use of VMware vMotion and Storage vMotion technology.

VMware vSphere Hypervisor ESXi

This topic describes the VMware vSphere Hypervisor ESXi, which runs on the second generation of the Advanced Management Platform (AMP-2) and in a VCE System utilizing VMware vSphere Enterprise Plus.

This lightweight hypervisor requires very little space to run (less than 6 GB of storage required to install) and has minimal management overhead. VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor ESXi boots from Cisco FlexFlash (SD card) on AMP-2. For the compute blades, ESXi boots from the SAN through an independent Fibre Channel (FC) LUN presented from the EMC VNX storage array. The FC LUN also contains the hypervisor's locker for persistent storage of logs and other diagnostic files to provide stateless computing within VCE Systems. The stateless hypervisor is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters.
These clusters contain the CPU, memory, network, and storage resources available for allocation to virtual machines (VMs). Clusters can scale up to a maximum of 32 hosts for VMware vSphere 5.1/5.5 and 64 hosts for VMware vSphere 6.0. Clusters can support thousands of VMs. The clusters can also support a variety of Cisco UCS blades running inside the same cluster.

Note: Some advanced CPU functionality might be unavailable if more than one blade model is running in a given cluster.

Data stores

VCE Systems support a mixture of data store types: block-level storage using VMFS or file-level storage using NFS.

The maximum size per VMFS5 volume is 64 TB. Beginning with VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 255 volumes. VCE optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in VCE Systems to maximize the throughput and scalability of NFS data stores. VCE Systems support a maximum of 256 NFS data stores per host.

Virtual networks

Virtual networking in the Advanced Management Platform (AMP-2) uses standard virtual switches. Virtual networking in VCE Systems is managed by the Cisco Nexus 1000V Series Switch. The Cisco Nexus 1000V Series Switch ensures consistent, policy-based network capabilities for all servers in the data center by allowing policies to move with a VM during live migration, providing persistent network, security, and storage compliance. Alternatively, virtual networking in VCE Systems is managed by a VMware vSphere Distributed Switch (version 5.5 or higher) with features comparable to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware Standard Switch (VSS) and a VMware vSphere Distributed Switch (VDS) and uses a minimum of four uplinks presented to the hypervisor.

The implementation of the Cisco Nexus 1000V Series Switch for VMware vSphere 5.1/5.5 and VMware VDS for VMware vSphere 5.5 uses intelligent network Class of Service (CoS) marking and Quality of Service (QoS) policies to appropriately shape network traffic according to workload type and priority. With VMware vSphere 6.0, QoS is set to Default (Trust Host). The vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS Virtual Interface Card (VIC) hardware.
Thus, VMware vSphere ESXi has a predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the virtual network interface cards (vNICs) to ensure consistency in case the uplinks need to be migrated to the VMware vSphere Distributed Switch (VDS) after manufacturing.

Related information
Management hardware components (see page 47)
Management software components (see page 48)

VMware vCenter Server

VMware vCenter Server is a central management point for the hypervisors and virtual machines. VMware vCenter Server is installed on a 64-bit Windows Server. VMware Update Manager is installed on a 64-bit Windows Server and runs as a service to assist with host patch management.
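The even distribution of vNICs across physical adapter ports described above amounts to a simple round-robin placement. The following sketch is illustrative only (the names are assumptions, not a VMware or Cisco API):

```python
from typing import Dict, List

def distribute_vnics(vnics: List[str], uplinks: List[str]) -> Dict[str, List[str]]:
    """Round-robin each vNIC onto a physical uplink so load is balanced
    and every uplink carries traffic (redundancy plus bandwidth)."""
    placement: Dict[str, List[str]] = {u: [] for u in uplinks}
    for i, vnic in enumerate(vnics):
        placement[uplinks[i % len(uplinks)]].append(vnic)
    return placement
```

With four vNICs and two adapter ports, each port ends up carrying two vNICs, which is why the uplink interface count stays predictable regardless of VIC hardware.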

The second generation of the Advanced Management Platform with redundant physical servers (AMP-2RP) and the VCE System each have a unified VMware vCenter Server instance. Each of these instances resides in the AMP-2RP.

VMware vCenter Server provides the following functionality:
- Cloning of VMs
- Creating templates
- VMware vMotion and VMware Storage vMotion
- Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters

VMware vCenter Server also provides monitoring and alerting capabilities for hosts and VMs. VCE System administrators can create and apply the following alarms to all managed objects in VMware vCenter Server:
- Data center, cluster, and host health, inventory, and performance
- Data store health and capacity
- VM usage, performance, and health
- Virtual network usage and health

Databases

The back-end database that supports VMware vCenter Server and VMware Update Manager (VUM) is a remote Microsoft SQL Server 2008 (vSphere 5.1) or Microsoft SQL Server 2012 (vSphere 5.5/6.0). The SQL Server service requires a dedicated service account.

Authentication

VCE Systems support the VMware Single Sign-On (SSO) Service, which can integrate multiple identity sources including Active Directory, OpenLDAP, and local accounts for authentication. VMware SSO is available in VMware vSphere 5.1 and higher. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and Update Manager run as separate Windows services, which can be configured to use a dedicated service account depending on the security and directory services requirements.

VCE supported features

VCE supports the following VMware vCenter Server features:
- VMware Single Sign-On (SSO) Service (version 5.1 and higher)
- VMware vSphere Web Client (used with VCE Vision Intelligent Operations)
- VMware vSphere Distributed Switch (VDS)

- VMware vSphere High Availability
- VMware DRS
- VMware Fault Tolerance
- VMware vMotion
- VMware Storage vMotion
- Layer 3 capability available for compute resources (version 6.0 and higher)
- Raw Device Mappings
- Resource Pools
- Storage DRS (capacity only)
- Storage-driven profiles (user-defined only)
- Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)
- VMware Syslog Service
- VMware Core Dump Collector
- VMware vCenter Web Client

Management

Management components overview

This topic describes the second generation of the Advanced Management Platform (AMP-2) components. AMP-2 provides a single management point for VCE Systems and provides the ability to:
- Run the Core and VCE Optional Management Workloads
- Monitor and manage VCE System health, performance, and capacity
- Provide network and fault isolation for management
- Eliminate resource overhead on VCE Systems

The Core Management Workload is the minimum required set of management software to install, operate, and support a VCE System. This includes all hypervisor management, element managers, virtual networking components (Cisco Nexus 1000V or VMware vSphere Distributed Switch (VDS)), and VCE Vision Intelligent Operations software.

The VCE Optional Management Workload consists of non-core management workloads that are directly supported and installed by VCE and whose primary purpose is to manage components within a VCE System. This includes, but is not limited to, data protection, security, or storage management tools such as EMC Unisphere for EMC RecoverPoint or EMC VPLEX, Avamar Administrator, EMC InsightIQ for Isilon, and VMware vCNS appliances (vShield Edge/Manager).

Related information
Connectivity overview (see page 11)
Unified network architecture (see page 16)

Management hardware components

This topic describes the second generation of the Advanced Management Platform (AMP-2) hardware. AMP-2 is available with one to three physical servers. All options use their own resources to run workloads without consuming VCE System resources.

AMP-2 option | Physical server | Description
AMP-2P | One Cisco UCS C220 server | Default configuration for VCE Systems that uses a dedicated Cisco UCS C220 Server to run management workload applications.

AMP-2 option | Physical server | Description
AMP-2RP | Two Cisco UCS C220 servers | Adds a second Cisco UCS C220 Server to support application and hardware redundancy.
AMP-2HA Baseline | Two Cisco UCS C220 servers | Implements VMware vSphere HA/DRS with shared storage provided by EMC VNXe3200 storage.
AMP-2HA Performance | Three Cisco UCS C220 servers | Adds a third Cisco UCS C220 Server and additional storage for EMC FAST VP.

Management software components

This topic describes the software that is delivered pre-configured with the second generation of the Advanced Management Platform (AMP-2). AMP-2 is delivered pre-configured with the following software components, which depend on the selected VCE Release Certification Matrix:
- Microsoft Windows Server 2008 R2 SP1 Standard x64
- Microsoft Windows Server 2012 R2 Standard x64
- VMware vSphere Enterprise Plus
- VMware vSphere Hypervisor ESXi
- VMware Single Sign-On (SSO) Service
- VMware vSphere Web Client Service
- VMware vSphere Inventory Service
- VMware vCenter Server
- VMware vCenter Database using Microsoft SQL Server Standard Edition
- VMware vCenter Update Manager
- VMware vSphere Client
- VMware vSphere Syslog Service (optional)
- VMware vSphere Core Dump Service (optional)
- VMware vCenter Server Appliance (AMP-2RP): a second instance of VMware vCenter Server is required to manage the replication instance separate from the production VMware vCenter Server
- VMware vSphere Replication Appliance (AMP-2RP)
- VMware vSphere Distributed Switch (VDS) or Cisco Nexus 1000V virtual switch (VSM)

- EMC PowerPath/VE Electronic License Management Server (ELMS)
- EMC Secure Remote Support (ESRS)
- Array management modules, including but not limited to: EMC Unisphere Client, EMC Unisphere Service Manager, EMC VNX Initialization Utility, EMC VNX Startup Tool, EMC SMI-S Provider, EMC PowerPath Viewer
- Cisco Prime Data Center Network Manager and Device Manager
- (Optional) EMC RecoverPoint management software, which includes the EMC RecoverPoint Management Application and EMC RecoverPoint Deployment Manager

Management network connectivity

This topic provides network connectivity and server assignment illustrations for the second generation of the Advanced Management Platform.

AMP-2HA network connectivity

The following illustration provides an overview of the network connectivity for AMP-2HA:

AMP-2HA server assignments

The following illustration provides an overview of the VM server assignment for AMP-2HA:

VCE Systems that use the VMware vSphere Distributed Switch (VDS) do not include Cisco Nexus 1000V VSM VMs. The Performance option of AMP-2HA leverages the DRS functionality of VMware vCenter to optimize resource usage (CPU/memory) so that VM assignment to a VMware vSphere ESXi host is managed automatically.

AMP-2P server assignments

The following illustration provides an overview of the VM server assignment for AMP-2P:

AMP-2RP server assignments

The following illustration provides an overview of the VM server assignment for AMP-2RP:

VCE Systems that use VMware VDS do not include Cisco Nexus 1000V VSM VMs.

Configuration descriptions

VCE Systems with EMC VNX8000

VCE Systems with EMC VNX8000 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity options for the fabric interconnects.

Array options

VCE Systems (8000) are available as block-only or unified storage.

Unified storage VCE Systems (8000) support up to eight X-Blades and ship with two X-Blades and two control stations. Each X-Blade provides four 10 Gb front-end network connections. An additional data mover enclosure (DME) supports the connection of two additional X-Blades with the same configuration as the base data movers.

The following table shows the available array options:

Array | Bus | Supported X-Blades
Block | 8/16 | N/A
Unified | 8/16 | 2
Unified | 8/16 | 3
Unified | 8/16 | 4
Unified | 8/16 | 5
Unified | 8/16 | 6
Unified | 8/16 | 7
Unified | 8/16 | 8

Each X-Blade contains:
- One 6-core 2.8 GHz Xeon processor
- 24 GB RAM
- One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
- Two 2-port 10 Gb SFP+ compatible SLICs

Feature options

VCE Systems (8000) support both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that SAN connectivity is provided by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, or Cisco Nexus 5596UP switches, depending on topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows the feature options:

Array | Topology | FC BW enhancement | Ethernet BW enhancement
Block | Segregated | Y | Y
Unified | Segregated | Y | Y
Block | Unified network | Y | Y
Unified | Unified network | Y | Y

Unified networking is supported only on VCE Systems (8000) with Cisco Nexus 5596UP switches. Ethernet BW enhancement is supported only on VCE Systems (8000) with Cisco Nexus 5596UP switches.

Disk array enclosure configuration

VCE Systems (8000) include two 25-slot 2.5" disk array enclosures (DAEs). An additional six DAEs are required beyond the two base DAEs. Additional DAEs can be added as either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs. Additional DAEs (after the initial eight) are added in multiples of eight. If there are 16 buses, then DAEs must be added in multiples of 16. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs.

SLIC configuration

The EMC VNX8000 provides slots for 11 SLICs in each service processor (SP). Two slots in each SP are populated with back-end SAS bus modules by default. Two additional back-end SAS bus modules support up to 16 buses. If this option is chosen, all DAEs are purchased in groups of 16. VCE Systems (8000) support two FC SLICs per SP for host connectivity. Additional FC SLICs are included to support unified storage. If FC BW enhancement is configured, an additional FC SLIC is added to the array. The remaining SLIC slots are reserved for future VCE configuration options.

VCE supports only the four-port FC SLIC for host connectivity. By default, six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity. The addition of FC BW enhancement provides four additional FC ports per SP. Because the VCE System with EMC VNX8000 has multiple CPUs, the SLIC arrangements are balanced across the CPUs.
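The DAE expansion rule above (multiples of eight after the initial eight, or multiples of 16 on a 16-bus array) can be sketched as a small validity check. This is illustrative only; the function name and the assumption of a fixed base of eight DAEs are mine, not VCE's:

```python
def valid_dae_count(total_daes: int, buses: int) -> bool:
    """Check a VNX8000 DAE count against the expansion rule described above.

    Assumes the initial complement is eight DAEs; expansion DAEs are added
    in multiples of the bus count (8 on an 8-bus array, 16 on a 16-bus array).
    """
    base = 8  # initial DAE complement on VCE Systems (8000)
    if total_daes < base:
        return False
    return (total_daes - base) % buses == 0
```

So 16 DAEs is a valid 8-bus configuration, while 12 is not.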

The following table shows the SLIC configurations per SP (eight bus):

Array | FC BW enhancement | SL 0 | SL 1 | SL 2 | SL 3 | SL 4 | SL 5 | SL 6 | SL 7 | SL 8 | SL 9 | SL 10
Block | Y | FC | Res | Res | FC | Res | Bus | Res | Res | Res | FC | Bus
Unified | Y | FC | Res | Res | FC | Res | Bus | Res | Res | FC/U | FC | Bus
Block | N | FC | Res | Res | Res | Res | Bus | Res | Res | Res | FC | Bus
Unified | N | FC | Res | Res | Res | Res | Bus | Res | Res | FC/U | FC | Bus
Unified (>4 DM) | N | FC | Res | FC/U | Res | Res | Bus | Res | Res | FC/U | FC | Bus
Unified (>4 DM) | Y | FC | Res | FC/U | FC | Res | Bus | Res | Res | FC/U | FC | Bus

Res: slot reserved for future VCE configuration options.
FC: 4-port FC input/output module (IOM); provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).
FC/U: 4-port FC IOM dedicated to unified X-Blade connectivity; provides four 8 Gb FC connections.
Bus: four-port, 4x lane/port 6 Gbps SAS; provides additional back-end bus connections.

The following table shows the SLIC configurations per SP (16 bus):

Array | FC BW enhancement | SL 0 | SL 1 | SL 2 | SL 3 | SL 4 | SL 5 | SL 6 | SL 7 | SL 8 | SL 9 | SL 10
Block | Y | FC | Res | Res | FC | Bus | Bus | Bus | Res | Res | FC | Bus
Unified | Y | FC | Res | Res | FC | Bus | Bus | Bus | Res | FC/U | FC | Bus
Block | N | FC | Res | Res | Res | Bus | Bus | Bus | Res | Res | FC | Bus
Unified | N | FC | Res | Res | Res | Bus | Bus | Bus | Res | FC/U | FC | Bus
Unified (>4 DM) | N | FC | Res | FC/U | Res | Bus | Bus | Bus | Res | FC/U | FC | Bus
Unified (>4 DM) | Y | FC | Res | FC/U | FC | Bus | Bus | Bus | Res | FC/U | FC | Bus

Res: slot reserved for future VCE configuration options.
FC: 4-port FC IOM; provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).
FC/U: 4-port FC IOM dedicated to unified X-Blade connectivity; provides four 8 Gb FC connections.
Bus: four-port, 4x lane/port 6 Gbps SAS; provides additional back-end bus connections.
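Layout tables like the eight-bus one above lend themselves to a simple lookup. The following is purely a data sketch of that table (the structure and names are mine, not a VCE artifact), useful for answering questions such as "which slots carry SAS bus modules in a block array without FC BW enhancement?":

```python
from typing import List

# Transcription of the eight-bus SLIC layout table (slots SL0..SL10 per SP).
SLIC_LAYOUT_8BUS = {
    ("block", True):    ["FC", "Res", "Res", "FC", "Res", "Bus", "Res", "Res", "Res", "FC", "Bus"],
    ("unified", True):  ["FC", "Res", "Res", "FC", "Res", "Bus", "Res", "Res", "FC/U", "FC", "Bus"],
    ("block", False):   ["FC", "Res", "Res", "Res", "Res", "Bus", "Res", "Res", "Res", "FC", "Bus"],
    ("unified", False): ["FC", "Res", "Res", "Res", "Res", "Bus", "Res", "Res", "FC/U", "FC", "Bus"],
}

def slots_of(kind: str, array: str, fc_bw: bool) -> List[str]:
    """Return the slot labels holding a given module kind for one configuration."""
    layout = SLIC_LAYOUT_8BUS[(array, fc_bw)]
    return [f"SL{i}" for i, module in enumerate(layout) if module == kind]
```

For a block array without FC BW enhancement, the SAS bus modules sit in SL5 and SL10, matching the table.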

Two additional back-end SAS bus modules are available to support up to 16 buses. If this option is chosen, all DAEs are purchased in groups of 16.

Compute

VCE Systems (8000) support from two to 16 chassis and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extender IOM only), four links (Cisco UCS 2204XP fabric extender IOM only), or eight links (Cisco UCS 2208XP fabric extender IOM only) per IOM.

The following table shows the compute options that are available for the fabric interconnects:

Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Cisco UCS 6248UP | 2 (2) | 16 (128) | 8 (64) | 4 (32)
Cisco UCS 6296UP | 2 (2) | N/A | 16 (128) | 8 (64)

Connectivity

VCE Systems (8000) support the Cisco UCS 6248UP and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches, or by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, based on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows the switch combinations that are available for the fabric interconnects:

Fabric interconnect | Topology | Ethernet | SAN
Cisco UCS 6248UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6248UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6248UP | Unified | Cisco Nexus 5596UP switches | (converged on the Cisco Nexus 5596UP switches)
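The compute table above follows directly from dividing a fabric interconnect's available ports by the links each chassis consumes. The port budgets below are assumptions inferred from the published maximums (they are not a Cisco-documented sizing formula), so treat this as an illustrative model only:

```python
# Assumed per-fabric-interconnect ports available for chassis links
# (inferred from the table above, not from Cisco documentation).
FI_PORT_BUDGET = {"6248UP": 32, "6296UP": 64}
BLADES_PER_CHASSIS = 8   # half-width blades
CHASSIS_CEILING = 16     # system-level maximum

def max_chassis(fabric_interconnect: str, links_per_iom: int) -> int:
    """Chassis supported for a given per-IOM link count (each FI serves one IOM)."""
    return min(CHASSIS_CEILING, FI_PORT_BUDGET[fabric_interconnect] // links_per_iom)

def max_blades(fabric_interconnect: str, links_per_iom: int) -> int:
    return max_chassis(fabric_interconnect, links_per_iom) * BLADES_PER_CHASSIS
```

Note the model would also predict a 2-link figure for the Cisco UCS 6296UP, but the table lists that combination as N/A (not offered), so the table remains authoritative.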

Fabric interconnect | Topology | Ethernet | SAN
Cisco UCS 6296UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6296UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6296UP | Unified | Cisco Nexus 5596UP switches | (converged on the Cisco Nexus 5596UP switches)

Note: The default is a unified network with Cisco Nexus 5596UP switches.

VCE Systems with EMC VNX7600

VCE Systems with EMC VNX7600 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity options for the fabric interconnects.

Array options

VCE Systems (7600) are available as block-only or unified storage.

Unified storage VCE Systems (7600) support up to eight X-Blades and ship with two X-Blades and two control stations. Each X-Blade provides four 10 Gb front-end connections to the network. An additional data mover enclosure (DME) supports the connection of two additional X-Blades with the same configuration as the base X-Blades.

The following table shows the available array options:

Array | Bus | Supported X-Blades
Block | 6 | N/A
Unified | 6 | 2*
Unified | 6 | 3*
Unified | 6 | 4*
Unified | 6 | 5*
Unified | 6 | 6*
Unified | 6 | 7*
Unified | 6 | 8*

* VCE supports two to eight X-Blades in VCE Systems (7600).

Each X-Blade contains:
- One 4-core 2.4 GHz Xeon processor
- 12 GB RAM
- One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
- Two 2-port 10 Gb SFP+ compatible SLICs

Feature options

VCE Systems (7600) support both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that SAN connectivity is provided by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, or Cisco Nexus 5596UP switches, depending on topology. Refer to the appropriate RCM for a list of what is supported on your VCE System. Both block and unified arrays use FC BW enhancement.

The following table shows the feature options:

Array | Topology | FC BW enhancement | Ethernet BW enhancement
Block | Segregated | Y | Y
Unified | Segregated | Y | Y
Block | Unified network | Y | Y
Unified | Unified network | Y | Y

Unified networking is supported only on VCE Systems (7600) with Cisco Nexus 5596UP switches.

Disk array enclosure configuration

VCE Systems (7600) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX7600 disk processor enclosure (DPE) provides the DAE for bus 0 and the first DAE on bus 1. An additional four DAEs are required beyond the two base DAEs. Additional DAEs can be added as either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs. Additional DAEs (after the initial six) are added in multiples of six. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs.

SLIC configuration

The EMC VNX7600 provides slots for five SLICs in each service processor (SP). Slot 0 in each SP is populated with a back-end SAS bus module. VCE Systems (7600) support two FC SLICs per SP for host connectivity. A third is reserved to support unified storage. If FC BW enhancement is configured, an additional FC SLIC is added to the array.
VCE supports only the four-port FC SLIC for host connectivity. By default, six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity. The addition of FC BW enhancement provides four additional FC ports per SP.
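The host-facing FC port count per service processor follows the simple rule just described. A one-line sketch (the function name is illustrative):

```python
def fc_host_ports_per_sp(fc_bw_enhancement: bool) -> int:
    """Six FC ports per SP by default; FC BW enhancement adds a 4-port SLIC."""
    return 6 + (4 if fc_bw_enhancement else 0)
```

So a base array presents 12 host-facing FC ports across its two SPs, and 20 with FC BW enhancement.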

The following table shows the SLIC configurations per SP:

Array | FC BW enhancement | SLIC 0 | SLIC 1 | SLIC 2 | SLIC 3 | SLIC 4
Block | Y | Bus | FC | FC | FC | N/A
Unified (<5 DM)* | Y | Bus | FC | FC | FC | FC/U
Block | N | Bus | FC | FC | N/A | N/A
Unified | N | Bus | FC | FC | FC/U | FC/U

* More than four X-Blades prohibits the FC BW enhancement feature.
N/A: not available for this configuration.
FC: 4-port FC input/output module (IOM); provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).
FC/U: 4-port FC IOM dedicated to unified X-Blade connectivity; provides four 8 Gb FC connections.
Bus: four-port, 4x lane/port 6 Gbps SAS; provides additional back-end bus connections.

Compute

VCE Systems (7600) support from two to 16 chassis and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extender input/output module (IOM) only), four links (Cisco UCS 2204XP fabric extender IOM only), or eight links (Cisco UCS 2208XP fabric extender IOM only) per IOM.

The following table shows the compute options available for the fabric interconnects:

Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Cisco UCS 6248UP | 2 (2) | 16 (128) | 8 (64) | 4 (32)
Cisco UCS 6296UP | 2 (2) | N/A | 16 (128) | 8 (64)

Connectivity

VCE Systems (7600) support the Cisco UCS 6248UP and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches, or by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, based on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows the switch combinations available for the fabric interconnects:

Fabric interconnect | Topology | Ethernet | SAN
Cisco UCS 6248UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6248UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6248UP | Unified | Cisco Nexus 5596UP switches | (converged on the Cisco Nexus 5596UP switches)
Cisco UCS 6296UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6296UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6296UP | Unified | Cisco Nexus 5596UP switches | (converged on the Cisco Nexus 5596UP switches)

Note: The default is a unified network with Cisco Nexus 5596UP switches.

VCE Systems with EMC VNX5800

VCE Systems with EMC VNX5800 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity options for the fabric interconnects.

Array options

VCE Systems (5800) are available as block-only or unified storage.

Unified storage VCE Systems (5800) support up to six X-Blades and ship with two X-Blades and two control stations. Each X-Blade provides four 10 Gb front-end connections to the network. An additional data mover enclosure (DME) supports the connection of one additional X-Blade with the same configuration as the base data movers.

The following table shows the available array options:

Array | Bus | Supported X-Blades
Block | 6 | N/A
Unified | 6 | 2
Unified | 6 | 3*
Unified | 6 | 4*
Unified | 6 | 5*
Unified | 6 | 6*

* VCE supports two to six X-Blades in VCE Systems (5800).

Each X-Blade contains:
- One 4-core 2.13 GHz Xeon processor
- 12 GB RAM
- One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
- Two 2-port 10 Gb SFP+ compatible SLICs

Feature options

VCE Systems (5800) support both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that SAN connectivity is provided by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, or Cisco Nexus 5596UP switches, depending on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System. Both block and unified arrays use FC BW enhancement.

The following table shows the feature options:

Array | Topology | FC BW enhancement | Ethernet BW enhancement
Block | Segregated | Y | Y
Unified | Segregated | Y | Y
Block | Unified network | Y | Y
Unified | Unified network | Y | Y

Note: Unified networking is supported only on VCE Systems (5800) with Cisco Nexus 5596UP switches.

Disk array enclosure configuration

VCE Systems (5800) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX5800 disk processor enclosure (DPE) provides the DAE for bus 0, and the second provides the first DAE on bus 1.

An additional four DAEs are required beyond the two base DAEs. Additional DAEs can be added as either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs. Additional DAEs (after the initial six) are added in multiples of six. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs.

SLIC configuration

The EMC VNX5800 provides slots for five SLICs in each service processor (SP). Slot 0 is populated with a back-end SAS bus module. VCE Systems (5800) support two FC SLICs per SP for host connectivity. A third is reserved to support unified storage. If FC BW enhancement is configured, an additional FC SLIC is added to the array. VCE supports only the four-port FC SLIC for host connectivity. By default, six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity. The addition of FC BW enhancement provides four additional FC ports per SP.

The following table shows the SLIC configurations per SP:

Array | FC BW enhancement | SLIC 0 | SLIC 1 | SLIC 2 | SLIC 3 | SLIC 4
Block | Y | Bus | FC | FC | FC | N/A
Unified (<5 DM)* | Y | Bus | FC | FC | FC | FC/U
Block | N | Bus | FC | FC | N/A | N/A
Unified | N | Bus | FC | FC | FC/U | FC/U

* More than four X-Blades prohibits FC BW enhancement.
N/A: not available for this configuration.
FC: 4-port FC input/output module (IOM); provides four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).
FC/U: 4-port FC IOM dedicated to unified X-Blade connectivity; provides four 8 Gb FC connections.
Bus: four-port, 4x lane/port 6 Gbps SAS; provides additional back-end bus connections.

Compute

VCE Systems (5800) support from two to 16 chassis and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extender IOM only), four links (Cisco UCS 2204XP fabric extender IOM only), or eight links (Cisco UCS 2208XP fabric extender IOM only) per IOM.
The following table shows the compute options that are available for the fabric interconnects:

Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Cisco UCS 6248UP | 2 (2) | 16 (128) | 8 (64) | 4 (32)
Cisco UCS 6296UP | 2 (2) | N/A | 16 (128) | 8 (64)

Connectivity

VCE Systems (5800) support the Cisco UCS 6248UP and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches, or by Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, based on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows the switch combinations that are available for the fabric interconnects:

Fabric interconnect | Topology | Ethernet | SAN
Cisco UCS 6248UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6248UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6248UP | Unified | Cisco Nexus 5596UP switches | (converged on the Cisco Nexus 5596UP switches)
Cisco UCS 6296UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6296UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch; refer to the appropriate RCM.
Cisco UCS 6296UP | Unified | Cisco Nexus 5596UP switches | (converged on the Cisco Nexus 5596UP switches)

Note: The default is a unified network with Cisco Nexus 5596UP switches.

VCE Systems with EMC VNX5600

VCE Systems with EMC VNX5600 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity options for the fabric interconnects.

Array options

VCE Systems (5600) are available as block-only or unified storage.

Unified storage VCE Systems (5600) support one to four X-Blades and two control stations. Each X-Blade provides two 10 Gb front-end connections to the network.

The following table shows the available array options:

Array | Bus | Supported X-Blades
Block | 2 or 6 | N/A
Unified | 2 or 6 | 1
Unified | 2 or 6 | 2*
Unified | 2 or 6 | 3*
Unified | 2 or 6 | 4*

* VCE supports one to four X-Blades in VCE Systems (5600).

Each X-Blade contains:
- One 4-core 2.13 GHz Xeon processor
- 6 GB RAM
- One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
- One 2-port 10 Gb SFP+ compatible SLIC

Feature options

VCE Systems (5600) use the Cisco Nexus 5596UP switches. VCE Systems (5600) do not support FC bandwidth (BW) enhancement in block or unified arrays.

The following table shows the feature options:

Array | Topology | Ethernet BW enhancement
Block | Segregated | Y
Unified | Segregated | Y
Block | Unified network | Y
Unified | Unified network | Y

DAE configuration

VCE Systems (5600) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX5600 disk processor enclosure (DPE) provides the DAE for bus 0, and the second provides the first DAE on bus 1. Additional DAEs can be either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs. Additional DAEs are added in

multiples of two. DAEs are interlaced when racked: all 2.5" DAEs are racked first on the buses, then the 3.5" DAEs.

An additional four-port SAS bus expansion SLIC is an option with VCE Systems (5600). If more than 19 DAEs are required, the four-port bus expansion card must be added. If the card is added, DAEs are purchased in groups of six.

SLIC configuration

The EMC VNX5600 provides slots for five SLICs in each service processor (SP). VCE Systems (5600) have two FC SLICs per SP for host connectivity. A third FC SLIC can be ordered to support unified storage. The remaining SLIC slots are reserved for future VCE configuration options. VCE supports only the four-port FC SLIC for host connectivity. Six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity.

The following table shows the SLIC configurations per SP:

Array | FC BW enhancement | SLIC 0 | SLIC 1 | SLIC 2 | SLIC 3 | SLIC 4
Block | N | Bus | FC | FC | N/A | N/A
Unified | N | Bus | FC | FC | N/A | FC/U

FC: 4-port FC I/O module (IOM) providing four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).
FC/U: 4-port FC IOM dedicated to unified X-Blade connectivity, providing four 8 Gb FC connections.
Bus: four-port (4x lane/port) 6 Gb SAS IOM providing additional back-end bus connections.

Compute

VCE Systems (5600) support two to eight chassis and up to 64 half-width blades. Each chassis can be connected with four links (Cisco UCS 2204XP fabric extender IOMs only) or eight links (Cisco UCS 2208XP fabric extender IOMs only) per IOM.
The following table shows the compute options that are available for the fabric interconnects:

Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Cisco UCS 6248UP | 2 (2) | N/A | 8 (64) | 4 (32)
Cisco UCS 6296UP | 2 (2) | N/A | 16 (128) | 8 (64)
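The DAE growth rules for VCE Systems (5600) described above — additions in multiples of two, a mandatory bus expansion card beyond 19 DAEs, and purchases in groups of six once the card is present — can be expressed as a quick consistency check. This is an illustrative sketch, not VCE tooling; the function name and the group-of-six interpretation of the purchasing rule are assumptions.

```python
BASE_DAE_COUNT = 2          # two 25-slot 2.5" DAEs in the base configuration
EXPANSION_THRESHOLD = 19    # beyond this, the four-port SAS bus expansion card is required

def validate_dae_order(total_daes: int, has_expansion_card: bool) -> bool:
    """Check a proposed total DAE count against the growth rules in the text."""
    if total_daes < BASE_DAE_COUNT:
        return False
    if total_daes > EXPANSION_THRESHOLD and not has_expansion_card:
        return False  # expansion card is mandatory past 19 DAEs
    additional = total_daes - BASE_DAE_COUNT
    # DAEs are added in multiples of two; with the expansion card,
    # the text says they are purchased in groups of six (assumption:
    # additional DAEs must then total a multiple of six).
    group = 6 if has_expansion_card else 2
    return additional % group == 0
```

For example, a 20-DAE configuration fails without the expansion card but passes with it (18 additional DAEs, three groups of six).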

Connectivity

VCE Systems (5600) support the Cisco UCS 6248UP and Cisco UCS 6296UP fabric interconnects, which uplink to the Cisco Nexus 5548UP or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches, or by the Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches, depending on the topology. Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows the switch options that are available for the fabric interconnects:

Fabric interconnect | Topology | Ethernet | SAN
Cisco UCS 6248UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch (refer to the appropriate RCM)
Cisco UCS 6248UP | Unified network | Cisco Nexus 5548UP switches | Same Cisco Nexus switches (unified fabric)
Cisco UCS 6248UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch (refer to the appropriate RCM)
Cisco UCS 6248UP | Unified network | Cisco Nexus 5596UP switches | Same Cisco Nexus switches (unified fabric)
Cisco UCS 6296UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch (refer to the appropriate RCM)
Cisco UCS 6296UP | Unified network | Cisco Nexus 5548UP switches | Same Cisco Nexus switches (unified fabric)
Cisco UCS 6296UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch (refer to the appropriate RCM)
Cisco UCS 6296UP | Unified network | Cisco Nexus 5596UP switches | Same Cisco Nexus switches (unified fabric)

Note: The default is a unified network with Cisco Nexus 5596UP switches.

VCE Systems with EMC VNX5400

VCE Systems with EMC VNX5400 support various array types and features, disk array enclosure and SLIC configurations, and compute and connectivity options for the fabric interconnects.

Array options

VCE Systems (5400) are available as block-only or unified storage. Unified storage VCE Systems (5400) support one to four X-Blades and two control stations. Each X-Blade provides two 10 Gb front-end connections to the network.

The following table shows the available array options:

Array | Bus | Supported X-Blades
Block | 2 | N/A
Unified | 2 | 1*
Unified | 2 | 2*
Unified | 2 | 3*
Unified | 2 | 4*

*VCE supports one to four X-Blades in VCE Systems (5400).

Each X-Blade contains:

One four-core 2.13 GHz Xeon processor
6 GB RAM
One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
One 2-port 10 Gb SFP+ compatible SLIC

Feature options

VCE Systems (5400) use the Cisco UCS 6248UP fabric interconnects. VCE Systems (5400) do not support FC bandwidth (BW) enhancement or Ethernet BW enhancement in block or unified arrays.

Disk array enclosure configuration

VCE Systems (5400) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX5400 disk processor enclosure (DPE) provides the DAE for bus 0; the second DAE is the first DAE on bus 1. Additional DAEs can be either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs. Additional DAEs are added in multiples of two. DAEs are interlaced when racked: all 2.5" DAEs are racked first on the buses, then the 3.5" DAEs.

SLIC configuration

The EMC VNX5400 provides slots for five SLICs in each service processor (SP), although only four are enabled. VCE Systems (5400) have two FC SLICs per SP for host connectivity. A third FC SLIC can be ordered to support unified storage. The remaining SLIC slots are reserved for future VCE configuration options. VCE supports only the four-port FC SLIC for host connectivity. Six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity.

The following table shows the SLIC configurations per SP:

Array | FC BW enhancement | SLIC 0 | SLIC 1 | SLIC 2 | SLIC 3 | SLIC 4
Block | N | N/A | FC | FC | N/A | N/A
Unified | N | N/A | FC | FC | N/A | FC/U

FC: 4-port FC I/O module (IOM) providing four 16 Gb FC connections (segregated networking) or four 8 Gb FC connections (unified networking).
FC/U: 4-port FC IOM dedicated to unified X-Blade connectivity, providing four 8 Gb FC connections.

Compute

VCE Systems (5400) are configured with two chassis that support up to 16 half-width blades. Each chassis is connected with four links per fabric extender I/O module (IOM). VCE Systems (5400) support Cisco UCS 2204XP Fabric Extender IOMs only.

The following table shows the compute options that are available for the Cisco UCS 6248UP fabric interconnects:

Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Cisco UCS 6248UP | 2 (2) | N/A | 2 (16) | N/A

Connectivity

VCE Systems (5400) contain the Cisco UCS 6248UP fabric interconnects, which uplink to the Cisco Nexus 5548UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5548UP switches, or by the Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switches. Refer to the appropriate RCM for a list of what is supported on your VCE System.

The following table shows the switch options that are available for the fabric interconnects:

Fabric interconnect | Topology | Ethernet | SAN
Cisco UCS 6248UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch (refer to the appropriate RCM)
Cisco UCS 6248UP | Unified network | Cisco Nexus 5548UP switches | Same Cisco Nexus switches (unified fabric)
Cisco UCS 6248UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 or Cisco MDS 9148S Multilayer Fabric Switch (refer to the appropriate RCM)
Cisco UCS 6248UP | Unified network | Cisco Nexus 5596UP switches | Same Cisco Nexus switches (unified fabric)

Note: The default is a unified network with Cisco Nexus 5596UP switches.

Sample configurations

Sample Vblock System 340 and VxBlock System 340 with EMC VNX8000

VCE Systems with EMC VNX8000 cabinet elevations vary based on the specific configuration requirements. These elevations are provided for sample purposes only. For specifications for a specific VCE System design, consult your vArchitect.

Front view

Rear view

Cabinet 1

Cabinet 2

Sample VCE System with EMC VNX5800

VCE Systems with EMC VNX5800 cabinet elevations vary based on the specific configuration requirements. These elevations are provided for sample purposes only. For specifications for a specific VCE System design, consult your vArchitect.

Front view

Rear view

Cabinet 1

Cabinet 2

Cabinet 3

Sample Vblock System 340 and VxBlock System 340 with EMC VNX5800 (ACI ready)

VCE Systems with EMC VNX5800 elevations for a cabinet that is Cisco Application Centric Infrastructure (ACI) ready vary based on the specific configuration requirements. These elevations are provided for sample purposes only. For specifications for a specific VCE System design, consult your vArchitect.

Front view

Rear view

Cabinet 1

Cabinet 2

System infrastructure

VCE Systems descriptions

A comparison of the compute, network, and storage architecture describes the differences among the VCE Systems.

The following table shows a comparison of the compute architecture:

VCE Systems with: | EMC VNX8000 | EMC VNX7600 | EMC VNX5800 | EMC VNX5600 | EMC VNX5400
Cisco B-Series blade chassis | 16 maximum | 16 maximum | 16 maximum | 8 maximum | 2 maximum
B-Series blades (maximum) | Half-width = 128, Full-width = 64 | Half-width = 128, Full-width = 64 | Half-width = 128, Full-width = 64 | Half-width = 64, Full-width = 32 | Half-width = 16, Full-width = 8
Fabric interconnects | Cisco UCS 6248UP or 6296UP | Cisco UCS 6248UP or 6296UP | Cisco UCS 6248UP or 6296UP | Cisco UCS 6248UP or 6296UP | Cisco UCS 6248UP

The following table shows a comparison of the network architecture:

VCE Systems with: | EMC VNX8000 | EMC VNX7600 | EMC VNX5800 | EMC VNX5600 | EMC VNX5400
Network | Cisco Nexus 5548UP or 5596UP | Cisco Nexus 5548UP or 5596UP | Cisco Nexus 5548UP or 5596UP | Cisco Nexus 5548UP or 5596UP | Cisco Nexus 5548UP
SAN | Cisco MDS 9148 or Cisco MDS 9148S (segregated); refer to the appropriate RCM for a list of what is supported on your VCE System (all models)

The following table shows a comparison of the storage architecture:

VCE Systems with: | EMC VNX8000 | EMC VNX7600 | EMC VNX5800 | EMC VNX5600 | EMC VNX5400
Storage access | Block or unified (all models)
Back-end SAS buses | 8 or 16 | 6 | 6 | 2 or 6 | 2
Storage protocol (block) | FC (all models)
Storage protocol (file) | NFS and CIFS (all models)
Data store type (block) | VMFS (all models)
Data store type (file) | NFS (all models)
Boot path | SAN (all models)
Maximum drives | 1500 | 1000 | 750 | 500 | 250

VCE Systems with: | EMC VNX8000 | EMC VNX7600 | EMC VNX5800 | EMC VNX5600 | EMC VNX5400
X-Blades (min/max) | 2/8 | 2/4 | 2/3 | 2/2 | 2/2

Cabinets overview

In each VCE System, the compute, storage, and network layer components are distributed within the cabinets. Distributing the components in this manner balances the power draw and reduces the size of the power distribution units (PDUs) that are required. Each cabinet has a capacity for physical dimensions such as weight, heat dissipation, power draw, RU space, and receptacle count. This design improves flexibility when upgrading or expanding VCE Systems as capacity needs increase. For some configurations, VCE preinstalls all wiring based on the predefined layouts.

VCE cabinets are designed to be installed contiguously to one another within the data center. If the base and expansion cabinets need to be physically separated, customized cabling is needed, which incurs additional cost and delivery delays.

Note: The cable length is not the same as the distance between cabinets. The cable must route through the cabinets and through the cable channels overhead or in the floor.

Intelligent Physical Infrastructure appliance

The Intelligent Physical Infrastructure (IPI) appliance allows users to collect and monitor environmental data and to monitor and control power and security. For more information about the IPI appliance, refer to the administration guide for your VCE System and to the VCE Intelligent Physical Infrastructure (IPI) Appliance User Manual.

Power options

VCE Systems support several power distribution unit (PDU) options inside and outside of North America.
Power options for VCE System cabinets

The following table lists the PDUs that are available:

PDU | Power specifications | Number per cabinet
IEC P+PE | 3-phase Delta / 60A | 2 pairs of PDUs per cabinet
NEMA L15-30P | 3-phase Delta / 30A | 3 pairs of PDUs per cabinet
NEMA L6-30P | Single phase / 30A | 3 pairs of PDUs per cabinet
IEC P+N+PE | 3-phase WYE / 30/32A | 2 pairs of PDUs per cabinet

IEC P+E | Single phase / 32A | 3 pairs of PDUs per cabinet

Balancing cabinet maximum usable power

The VCE System maximum usable power must be balanced in the cabinets based on the number of components in each cabinet. The maximum kilowatt draw per VCE System PDU, derated to 80 percent, is listed in the following table:

Power option | Kilowatt draw per PDU
3-Phase Delta 60A@208V | 17.3
3-Phase Delta 30A@208V | 8.6
3-Phase WYE 32A@230V | 17.7
Single Phase 30A@208V | 5
Single Phase 32A@230V | 5.9

Note: The kilowatt draw per PDU is an approximate measurement.

The following PDU limitations per cabinet apply to a VCE System with one or more Cisco UCS 5108 Blade Server Chassis installed:

Power option | Number of Cisco UCS 5108 Blade Server Chassis | Maximum PDUs per cabinet
Three-Phase Delta 60A | 1-3 | One pair
Three-Phase Delta 60A | 4-6 | Two pair
Three-Phase Delta 30A | 1-3 | Two pair
Three-Phase Delta 30A | 4 | Three pair
Three-Phase WYE 30A or 32A | 1-3 | One pair
Three-Phase WYE 30A or 32A | 4-6 | Two pair
Single Phase 30A or 32A | 1 | Two pair
Single Phase 30A or 32A | 2 | Three pair

Related information

Accessing VCE documentation (see page 6)
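The derated kilowatt figures above can be reproduced with the standard single- and three-phase power formulas, assuming 80 percent derating of the breaker rating and line-to-line voltage for the three-phase feeds (400 V line-to-line for the 230 V WYE option). This is an illustrative sketch, not a VCE sizing tool.

```python
import math

def derated_kw(line_volts: float, breaker_amps: float, phases: int = 1,
               derate: float = 0.8) -> float:
    """Approximate usable kilowatts for one PDU feed, derated to 80 percent."""
    amps = breaker_amps * derate
    if phases == 3:
        # Three-phase power: sqrt(3) x line-to-line volts x amps
        watts = math.sqrt(3) * line_volts * amps
    else:
        watts = line_volts * amps
    return round(watts / 1000, 1)

# Consistent with the table: 3-Phase Delta 60A@208V -> 17.3 kW,
# Single Phase 32A@230V -> 5.9 kW, 3-Phase WYE 32A (400 V line-to-line) -> 17.7 kW.
```

The single-phase entries fall out directly (208 V x 24 A ≈ 5 kW), which is why the listed values are described as approximate.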

Additional references

Virtualization components

Product | Description | Link to documentation
VMware vCenter Server | Provides a scalable and extensible platform that forms the foundation for virtualization management. | vcenter-server/
VMware vSphere ESXi | Virtualizes all application servers and provides VMware high availability (HA) and dynamic resource scheduling (DRS). | vsphere/

Compute components

Product | Description | Link to documentation
Cisco UCS B-Series Blade Servers | Servers that adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization. | ps10280/index.html
Cisco UCS Manager | Provides centralized management capabilities for the Cisco Unified Computing System (UCS). | ps10281/index.html
Cisco UCS 2200 Series Fabric Extenders | Bring unified fabric into the blade-server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect. | support/servers-unified-computing/ucs-2200-series-fabric-extenders/tsd-products-support-series-home.html
Cisco UCS 5100 Series Blade Server Chassis | Chassis that supports up to eight blade servers and up to two fabric extenders in a six rack unit (RU) enclosure. | ps10279/index.html
Cisco UCS 6200 Series Fabric Interconnects | Cisco UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. | ps11544/index.html

Network components

Product | Description | Link to documentation
Cisco Nexus 1000V Series Switches | A software switch on a server that delivers Cisco VN-Link services to virtual machines hosted on that server. | index.html

Product | Description | Link to documentation
VMware vSphere Distributed Switch (VDS) | A VMware vCenter-managed software switch that delivers advanced network services to virtual machines hosted on that server. | features/distributed-switch.html
Cisco MDS 9148 Multilayer Fabric Switch | Provides 48 line-rate 8 Gbps ports and offers cost-effective scalability through on-demand activation of ports. | index.html
Cisco MDS 9148S Multilayer Fabric Switch | Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. | collateral/storage-networking/mds-9148s-16g-multilayer-fabric-switch/datasheetc html
Cisco Nexus 3048 Switch | Provides local switching that connects transparently to upstream Cisco Nexus switches, creating an end-to-end Cisco Nexus fabric in data centers. | switches/nexus-3048-switch/index.html
Cisco Nexus 3172TQ Switch | Provides local switching that connects transparently to upstream Cisco Nexus switches, creating an end-to-end Cisco Nexus fabric in data centers. | collateral/switches/nexus-3000-series-switches/data_sheet_c html
Cisco Nexus 5000 Series Switches | Simplifies data center transformation by enabling a standards-based, high-performance unified fabric. | switches/nexus-5000-series-switches/index.html
Cisco Nexus 9396PX Switch | Provides high scalability, performance, and exceptional energy efficiency in a compact form factor. Designed to support Cisco Application Centric Infrastructure (ACI). | switches/nexus-9396px-switch/model.html

Storage components

This topic provides a description of the storage components.

Product | Description | Link to documentation
EMC VNX8000, EMC VNX7600, EMC VNX5800, EMC VNX5600, EMC VNX5400 storage arrays | High-performing unified storage with unsurpassed simplicity and efficiency, optimized for virtual applications. | vnx-series.htm

About VCE

VCE, an EMC Federation Company, is the world market leader in converged infrastructure and converged solutions. VCE accelerates the adoption of converged infrastructure and cloud-based computing models that reduce IT costs while improving time to market. VCE delivers the industry's only fully integrated and virtualized cloud infrastructure systems, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure. VCE solutions are available through an extensive partner network, and cover horizontal applications, vertical industry offerings, and application development environments.

For more information, go to the VCE website.

Copyright VCE Company, LLC. All rights reserved. VCE, VCE Vision, VCE Vscale, Vblock, VxBlock, VxRack, and the VCE logo are registered trademarks or trademarks of VCE Company, LLC. All other trademarks used herein are the property of their respective owners.


More information

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini White Paper Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini February 2015 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 9 Contents

More information

VMware vsan 6.6. Licensing Guide. Revised May 2017

VMware vsan 6.6. Licensing Guide. Revised May 2017 VMware 6.6 Licensing Guide Revised May 2017 Contents Introduction... 3 License Editions... 4 Virtual Desktop Infrastructure... 5 Upgrades... 5 Remote Office / Branch Office... 5 Stretched Cluster... 7

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 TRANSFORMING MICROSOFT APPLICATIONS TO THE CLOUD Louaye Rachidi Technology Consultant 2 22x Partner Of Year 19+ Gold And Silver Microsoft Competencies 2,700+ Consultants Worldwide Cooperative Support

More information

EMC Backup and Recovery for Microsoft Exchange 2007

EMC Backup and Recovery for Microsoft Exchange 2007 EMC Backup and Recovery for Microsoft Exchange 2007 Enabled by EMC CLARiiON CX4-120, Replication Manager, and Hyper-V on Windows Server 2008 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information

VMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved.

VMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved. VMware Virtual SAN Technical Walkthrough Massimiliano Moschini Brand Specialist VCI - vexpert 2014 VMware Inc. All rights reserved. VMware Storage Innovations VI 3.x VMFS Snapshots Storage vmotion NAS

More information

EMC VSPEX PRIVATE CLOUD

EMC VSPEX PRIVATE CLOUD VSPEX Proven Infrastructure EMC VSPEX PRIVATE CLOUD VMware vsphere 5.1 for up to 250 Virtual Machines Enabled by Microsoft Windows Server 2012, EMC VNX, and EMC Next- EMC VSPEX Abstract This document describes

More information

EMC Business Continuity for Microsoft Applications

EMC Business Continuity for Microsoft Applications EMC Business Continuity for Microsoft Applications Enabled by EMC Celerra, EMC MirrorView/A, EMC Celerra Replicator, VMware Site Recovery Manager, and VMware vsphere 4 Copyright 2009 EMC Corporation. All

More information

Jake Howering. Director, Product Management

Jake Howering. Director, Product Management Jake Howering Director, Product Management Solution and Technical Leadership Keys The Market, Position and Message Extreme Integration and Technology 2 Market Opportunity for Converged Infrastructure The

More information

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES I. Executive Summary Superior Court of California, County of Orange (Court) is in the process of conducting a large enterprise hardware refresh. This

More information

The Impact of Hyper- converged Infrastructure on the IT Landscape

The Impact of Hyper- converged Infrastructure on the IT Landscape The Impact of Hyperconverged Infrastructure on the IT Landscape Focus on innovation, not IT integration BUILD Consumes valuables time and resources Go faster Invest in areas that differentiate BUY 3 Integration

More information

Data center requirements

Data center requirements Prerequisites, page 1 Data center workflow, page 2 Determine data center requirements, page 2 Gather data for initial data center planning, page 2 Determine the data center deployment model, page 3 Determine

More information

EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (NFS),VMWARE vsphere 4.1, VMWARE VIEW 4.6, AND VMWARE VIEW COMPOSER 2.

EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (NFS),VMWARE vsphere 4.1, VMWARE VIEW 4.6, AND VMWARE VIEW COMPOSER 2. EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (NFS),VMWARE vsphere 4.1, VMWARE VIEW 4.6, AND VMWARE VIEW COMPOSER 2.6 Reference Architecture EMC SOLUTIONS GROUP August 2011 Copyright

More information

Achieve Optimal Network Throughput on the Cisco UCS S3260 Storage Server

Achieve Optimal Network Throughput on the Cisco UCS S3260 Storage Server White Paper Achieve Optimal Network Throughput on the Cisco UCS S3260 Storage Server Executive Summary This document describes the network I/O performance characteristics of the Cisco UCS S3260 Storage

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes the steps required to deploy a Microsoft Exchange Server 2013 solution on

More information

Dell EMC. VxBlock and Vblock Systems 350 Administration Guide

Dell EMC. VxBlock and Vblock Systems 350 Administration Guide Dell EMC VxBlock and Vblock Systems 350 Administration Guide Document revision 1.10 June 2018 Revision history Date Documen t revision Description of changes June 2018 1.10 Changed the following sections

More information

TECHNICAL SPECIFICATIONS + TECHNICAL OFFER

TECHNICAL SPECIFICATIONS + TECHNICAL OFFER ANNEX II + III : TECHNICAL SPECIFICATIONS + TECHNICAL OFFER Contract title : Supply of Information & Communication Technology Hardware Publication reference: 2017/386304 10/1/1.1/1.2.2a p 1 / Columns 1-2

More information

INTRODUCING VNX SERIES February 2011

INTRODUCING VNX SERIES February 2011 INTRODUCING VNX SERIES Next Generation Unified Storage Optimized for today s virtualized IT Unisphere The #1 Storage Infrastructure for Virtualisation Matthew Livermore Technical Sales Specialist (Unified

More information

DELL EMC UNITY: HIGH AVAILABILITY

DELL EMC UNITY: HIGH AVAILABILITY DELL EMC UNITY: HIGH AVAILABILITY A Detailed Review ABSTRACT This white paper discusses the high availability features on Dell EMC Unity purposebuilt solution. October, 2017 1 WHITE PAPER The information

More information

EMC Integrated Infrastructure for VMware. Business Continuity

EMC Integrated Infrastructure for VMware. Business Continuity EMC Integrated Infrastructure for VMware Business Continuity Enabled by EMC Celerra and VMware vcenter Site Recovery Manager Reference Architecture Copyright 2009 EMC Corporation. All rights reserved.

More information

Surveillance Dell EMC Storage with Milestone XProtect Corporate

Surveillance Dell EMC Storage with Milestone XProtect Corporate Surveillance Dell EMC Storage with Milestone XProtect Corporate Sizing Guide H14502 REV 1.5 Copyright 2014-2018 Dell Inc. or its subsidiaries. All rights reserved. Published January 2018 Dell believes

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING VMware Horizon View 6.0 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Data Protection EMC VSPEX Abstract This describes

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes

More information

Vendor must indicate at what level its proposed solution will meet the College s requirements as delineated in the referenced sections of the RFP:

Vendor must indicate at what level its proposed solution will meet the College s requirements as delineated in the referenced sections of the RFP: Vendor must indicate at what level its proposed solution will the College s requirements as delineated in the referenced sections of the RFP: 2.3 Solution Vision Requirement 2.3 Solution Vision CCAC will

More information

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN White Paper VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN Benefits of EMC VNX for Block Integration with VMware VAAI EMC SOLUTIONS GROUP Abstract This white paper highlights the

More information

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini White Paper Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini June 2016 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 9 Contents

More information

Oracle Database Consolidation on FlashStack

Oracle Database Consolidation on FlashStack White Paper Oracle Database Consolidation on FlashStack with VMware 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 18 Contents Executive Summary Introduction

More information

Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5

Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5 White Paper Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 18 Introduction Executive

More information

NST6000 UNIFIED HYBRID STORAGE. Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment

NST6000 UNIFIED HYBRID STORAGE. Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment DATASHEET TM NST6000 UNIFIED HYBRID STORAGE Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment UNIFIED The Nexsan NST6000 unified hybrid storage system is ideal for organizations

More information

VCE Vblock Specialized Systems for Extreme Applications. Gen 2.1 Physical Planning Guide

VCE Vblock Specialized Systems for Extreme Applications. Gen 2.1 Physical Planning Guide VCE Vblock Specialized Systems for Extreme Applications Gen 2.1 Physical Planning Guide Version 1.3 May 2015 Revision History Date Vblock System Document revision Description of changes May 2015 Gen 2.1

More information

VMWARE VSAN LICENSING GUIDE - MARCH 2018 VMWARE VSAN 6.6. Licensing Guide

VMWARE VSAN LICENSING GUIDE - MARCH 2018 VMWARE VSAN 6.6. Licensing Guide - MARCH 2018 VMWARE VSAN 6.6 Licensing Guide Table of Contents Introduction 3 License Editions 4 Virtual Desktop Infrastructure... 5 Upgrades... 5 Remote Office / Branch Office... 5 Stretched Cluster with

More information

TITLE. the IT Landscape

TITLE. the IT Landscape The Impact of Hyperconverged Infrastructure on the IT Landscape 1 TITLE Drivers for adoption Lower TCO Speed and Agility Scale Easily Operational Simplicity Hyper-converged Integrated storage & compute

More information

EMC Innovations in High-end storages

EMC Innovations in High-end storages EMC Innovations in High-end storages Symmetrix VMAX Family with Enginuity 5876 Sasho Tasevski Sr. Technology consultant sasho.tasevski@emc.com 1 The World s Most Trusted Storage System More Than 20 Years

More information

Administering VMware Virtual SAN. Modified on October 4, 2017 VMware vsphere 6.0 VMware vsan 6.2

Administering VMware Virtual SAN. Modified on October 4, 2017 VMware vsphere 6.0 VMware vsan 6.2 Administering VMware Virtual SAN Modified on October 4, 2017 VMware vsphere 6.0 VMware vsan 6.2 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

Veeam Availability Suite on Cisco UCS S3260

Veeam Availability Suite on Cisco UCS S3260 Veeam Availability Suite on Cisco UCS S3260 April 2018 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 101 Contents Introduction Technology Overview

More information

VBLOCK TM INFRASTRUCTURE PLATFORMS: A TECHNICAL OVERVIEW

VBLOCK TM INFRASTRUCTURE PLATFORMS: A TECHNICAL OVERVIEW VBLOCK TM INFRASTRUCTURE PLATFORMS: A TECHNICAL OVERVIEW Executive Summary Cloud computing provides a flexible, shared pool of preconfigured and integrated computing resources that enables organizations

More information

IOmark- VM. IBM IBM FlashSystem V9000 Test Report: VM a Test Report Date: 5, December

IOmark- VM. IBM IBM FlashSystem V9000 Test Report: VM a Test Report Date: 5, December IOmark- VM IBM IBM FlashSystem V9000 Test Report: VM- 151205- a Test Report Date: 5, December 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VM, IOmark- VDI, VDI- IOmark, and

More information

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018 EonStor GS Family Best Practices Guide White Paper Version: 1.1 Updated: Apr., 2018 Abstract: This guide provides recommendations of best practices for installation and configuration to meet customer performance

More information

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager Reference Architecture Copyright 2010 EMC Corporation. All rights reserved.

More information

Cisco UCS C240 M3 Server

Cisco UCS C240 M3 Server Data Sheet Cisco UCS C240 M3 Rack Server Product Overview The form-factor-agnostic Cisco Unified Computing System (Cisco UCS ) combines Cisco UCS C-Series Rack Servers and B-Series Blade Servers with networking

More information

DELL EMC UNITY: REPLICATION TECHNOLOGIES

DELL EMC UNITY: REPLICATION TECHNOLOGIES DELL EMC UNITY: REPLICATION TECHNOLOGIES A Detailed Review ABSTRACT This white paper explains the replication solutions for Dell EMC Unity systems. This paper outlines the native and non-native options

More information

V.I.B.E. Virtual. Integrated. Blade. Environment. Harveenpal Singh. System-x PLM

V.I.B.E. Virtual. Integrated. Blade. Environment. Harveenpal Singh. System-x PLM V.I.B.E. Virtual. Integrated. Blade. Environment. Harveenpal Singh System-x PLM x86 servers are taking on more demanding roles, including high-end business critical applications x86 server segment is the

More information

Cisco UCS B460 M4 Blade Server

Cisco UCS B460 M4 Blade Server Data Sheet Cisco UCS B460 M4 Blade Server Product Overview The new Cisco UCS B460 M4 Blade Server uses the power of the latest Intel Xeon processor E7 v3 product family to add new levels of performance

More information

Cisco UCS C240 M3 Server

Cisco UCS C240 M3 Server Data Sheet Cisco UCS C240 M3 Rack Server Product Overview The form-factor-agnostic Cisco Unified Computing System (Cisco UCS ) combines Cisco UCS C-Series Rack Servers and B-Series Blade Servers with networking

More information

EMC VSPEX SERVER VIRTUALIZATION SOLUTION

EMC VSPEX SERVER VIRTUALIZATION SOLUTION Reference Architecture EMC VSPEX SERVER VIRTUALIZATION SOLUTION VMware vsphere 5 for 100 Virtual Machines Enabled by VMware vsphere 5, EMC VNXe3300, and EMC Next-Generation Backup EMC VSPEX April 2012

More information

Cisco UCS C24 M3 Server

Cisco UCS C24 M3 Server Data Sheet Cisco UCS C24 M3 Rack Server Product Overview The form-factor-agnostic Cisco Unified Computing System (Cisco UCS ) combines Cisco UCS C-Series Rack Servers and B-Series Blade Servers with networking

More information

Dell EMC SAN Storage with Video Management Systems

Dell EMC SAN Storage with Video Management Systems Dell EMC SAN Storage with Video Management Systems Surveillance October 2018 H14824.3 Configuration Best Practices Guide Abstract The purpose of this guide is to provide configuration instructions for

More information

IOmark-VM. VMware VSAN Intel Servers + VMware VSAN Storage SW Test Report: VM-HC a Test Report Date: 16, August

IOmark-VM. VMware VSAN Intel Servers + VMware VSAN Storage SW Test Report: VM-HC a Test Report Date: 16, August IOmark-VM VMware VSAN Intel Servers + VMware VSAN Storage SW Test Report: VM-HC-160816-a Test Report Date: 16, August 2016 Copyright 2010-2016 Evaluator Group, Inc. All rights reserved. IOmark-VM, IOmark-VDI,

More information

THE SUMMARY. CLUSTER SERIES - pg. 3. ULTRA SERIES - pg. 5. EXTREME SERIES - pg. 9

THE SUMMARY. CLUSTER SERIES - pg. 3. ULTRA SERIES - pg. 5. EXTREME SERIES - pg. 9 PRODUCT CATALOG THE SUMMARY CLUSTER SERIES - pg. 3 ULTRA SERIES - pg. 5 EXTREME SERIES - pg. 9 CLUSTER SERIES THE HIGH DENSITY STORAGE FOR ARCHIVE AND BACKUP When downtime is not an option Downtime is

More information

Video Surveillance EMC Storage with Godrej IQ Vision Ultimate

Video Surveillance EMC Storage with Godrej IQ Vision Ultimate Video Surveillance EMC Storage with Godrej IQ Vision Ultimate Sizing Guide H15052 01 Copyright 2016 EMC Corporation. All rights reserved. Published in the USA. Published May 2016 EMC believes the information

More information

NAS for Server Virtualization Dennis Chapman Senior Technical Director NetApp

NAS for Server Virtualization Dennis Chapman Senior Technical Director NetApp NAS for Server Virtualization Dennis Chapman Senior Technical Director NetApp Agenda The Landscape has Changed New Customer Requirements The Market has Begun to Move Comparing Performance Results Storage

More information

Dell EMC Unity: Architectural Overview. Ji Hong Product Technologist Midrange & Entry Solutions Group

Dell EMC Unity: Architectural Overview. Ji Hong Product Technologist Midrange & Entry Solutions Group Dell EMC Unity: Architectural Overview Ji Hong Product Technologist Midrange & Entry Solutions Group Introduction What s New with Dell EMC Unity 650F 550F 450F 350F Optimized for All-Flash Performance

More information

EMC INFRASTRUCTURE FOR VMWARE VIEW 5.0

EMC INFRASTRUCTURE FOR VMWARE VIEW 5.0 Reference Architecture EMC INFRASTRUCTURE FOR VMWARE VIEW 5.0 EMC VNX Series (NFS), VMware vsphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7 Simplify management

More information

VMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS

VMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS VMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS A detailed overview of integration points and new storage features of vsphere 5.0 with EMC VNX platforms EMC Solutions

More information

Overview. About the Cisco UCS S3260 System

Overview. About the Cisco UCS S3260 System About the Cisco UCS S3260 System, on page 1 How to Use This Guide, on page 3 Cisco UCS S3260 System Architectural, on page 4 Deployment Options, on page 5 Management Through Cisco UCS Manager, on page

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE Design Guide APRIL 0 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

Dell EMC. VxBlock and Vblock Systems 540 Administration Guide

Dell EMC. VxBlock and Vblock Systems 540 Administration Guide Dell EMC VxBlock and Vblock Systems 540 Administration Guide Document revision 1.16 December 2018 Revision history Date Document revision Description of changes December 2018 1.16 Added support for VxBlock

More information

Cisco UCS Mini Software-Defined Storage with StorMagic SvSAN for Remote Offices

Cisco UCS Mini Software-Defined Storage with StorMagic SvSAN for Remote Offices Solution Overview Cisco UCS Mini Software-Defined Storage with StorMagic SvSAN for Remote Offices BENEFITS Cisco UCS and StorMagic SvSAN deliver a solution to the edge: Single addressable storage pool

More information

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE White Paper EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE EMC XtremSF, EMC XtremCache, EMC Symmetrix VMAX and Symmetrix VMAX 10K, XtremSF and XtremCache dramatically improve Oracle performance Symmetrix

More information