Dell EMC VxBlock and Vblock Systems 540 Architecture Overview


Document revision 1.16, July 2018

Revision history

- July 2018: Updated Compute connectivity to include a link to the B480 M5
- April: Removed vCHA
- December: Added Cisco UCS B-Series M5 server information
- August: Added support for VMware vSphere 6.5 on the VxBlock System 540
- August: Added support for the 40 Gb connectivity option for the VxBlock System 540
- March: Added support for the Cisco Nexus 93180YC-EX Switch
- January: Internal release
- September: Added support for AMP-2S and AMP enhancements; added support for the Cisco MDS 9396S 16G Multilayer Fabric Switch
- August: Updated to include the Cisco MDS 9706 Multilayer Director
- April: Updated to include the Cisco Nexus 3172TQ Switch
- February: Updated to include the following: 8 X-Bricks, 20 TB; 6 and 8 X-Bricks, 40 TB
- November: Updated to include the 40 TB X-Brick
- October: Updated to include VMware vSphere 6.0 with the Cisco Nexus 1000V Switch
- August: Updated to include VxBlock Systems; added support for VMware vSphere 6.0 with VMware VDS on the VxBlock System and for existing Vblock Systems
- February: Updated Intelligent Physical Infrastructure appliance information
- December: Updates to Vblock System 540 Gen 2.0
- October: Initial version

Contents

Introduction
System overview
  System architecture and components
  Benefits
  Base configurations
  Scaling up compute resources
  Scaling up storage resources
  Network topology
Compute layer overview
  Compute overview
  Cisco UCS
  Compute connectivity
  Cisco UCS fabric interconnects
  Cisco Trusted Platform Module
  Disjoint Layer 2 configuration
  Bare metal support policy
Storage layer overview
  Storage layer hardware
  XtremIO storage arrays
  XtremIO storage array configurations and capacities
  XtremIO storage array physical specifications
Network layer overview
  LAN layer
  Cisco Nexus 3064-T Switch - management networking
  Cisco Nexus 3172TQ Switch - management networking
  Cisco Nexus 5548UP Switch
  Cisco Nexus 5596UP Switch
  Cisco Nexus 9332PQ Switch
  Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX Switch - segregated networking
  SAN layer
  Cisco MDS 9148S Multilayer Fabric Switch
  Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director
Virtualization layer overview
  Virtualization components
  VMware vSphere Hypervisor ESXi
  VMware vCenter Server (VMware vSphere 5.5 and 6.0)
  VMware vCenter Server (vSphere 6.5)
Management
  Management components overview
  Management hardware components
  Management software components
  Management software components (vSphere 6.5)
  Management network connectivity
Sample configurations
  Sample VxBlock and Vblock Systems 540 with 20 TB XtremIO
  Sample VxBlock System 540 and Vblock System 540 with XtremIO
Additional references
  Virtualization components
  Compute components
  Network components
  Storage components

Introduction

This document describes the high-level design of the Converged System and its hardware and software components. In this document, the VxBlock System and Vblock System are referred to as Converged Systems. Refer to the Glossary for descriptions of terms specific to Converged Systems.

System overview

System architecture and components

Converged Systems are modular platforms with defined scale points that meet the higher performance and availability requirements of business-critical applications.

Architecture

SAN storage is used for deployments involving large numbers of VMs and users, and provides the following features:
- Multi-controller, scale-out architecture with consolidation and efficiency for the enterprise
- Scaling of resources through common and fully redundant building blocks
- Optional local boot disks, available only for bare metal blades

Connectivity

The next generation of Cisco UCS compute and network components with the VxBlock System 40 Gb connectivity option allows greater bandwidth for Ethernet and FC traffic. Capacities and limitations for the 40 Gb connectivity option are described in the compute and network sections of this guide.

With 10 Gb connectivity, Ethernet media and links provide 10 Gb of bandwidth per link and FC media and links provide 8 Gb of bandwidth per link. With 40 Gb connectivity, Ethernet media and links provide 40 Gb of bandwidth per link and FC media and links provide 16 Gb of bandwidth per link from the fabric interconnects to the SAN switches.

Components

The following table describes the hardware and software components for Converged Systems:

Converged System management:
- Vision Intelligent Operations System Library
- Vision Intelligent Operations Plug-in for vCenter
- Vision Intelligent Operations Compliance Checker
- Vision Intelligent Operations API for System Library
- Vision Intelligent Operations API for Compliance Checker

Virtualization and management:
- VMware vSphere Server Enterprise Plus
- VMware vSphere ESXi
- VMware vCenter Server
- VMware vSphere Web Client
- VMware Single Sign-On Service
- Cisco UCS C220 or C240 Servers for AMP-2
- PowerPath/VE
- Cisco UCS Manager
- XtremIO Management Server
- Secure Remote Support
- PowerPath Management Appliance
- Cisco Data Center Network Manager for SAN

Compute:
- Cisco UCS 5108 Blade Server Chassis
- Cisco UCS B-Series M4 or M5 Blade Servers
- Cisco UCS C-Series M5 Rack Servers
- Cisco UCS 2204XP or Cisco UCS 2208XP Fabric Extenders
- Cisco UCS 6248UP or Cisco UCS 6296UP Fabric Interconnects
- Cisco UCS 2304 Fabric Extenders with the VxBlock System 40 Gb connectivity option
- Cisco UCS 6332-16UP Fabric Interconnects with the VxBlock System 40 Gb connectivity option

Network:
- Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9706 Multilayer Director
- Cisco Nexus 3172TQ Switch or Cisco Nexus 3064-T Switch
- One pair of Cisco Nexus 5548UP, Cisco Nexus 5596UP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches
- Cisco Nexus 9332PQ Switches with the VxBlock System 40 Gb connectivity option
- Optional components: Cisco Nexus 1000V Series Switches, VMware NSX Virtual Networking for VxBlock Systems, VMware vSphere Distributed Switch (VDS) for VxBlock Systems

Storage:
- XtremIO 10 TB (encryption capable)
- XtremIO 20 TB (encryption capable)
- XtremIO 40 TB (encryption capable)

Benefits

Converged Systems with XtremIO provide enhancements for Virtual Desktop Infrastructure (VDI), virtual server, and high-performance database applications.

The following scenarios benefit from Converged Systems with XtremIO:

VDI applications: VDI deployments, such as VMware Horizon View and Citrix XenDesktop, with more than 1000 desktops that require:
- The ability to use full clone or linked clone technology interchangeably and without drawbacks
- Assured project success from pilot to large-scale deployment
- A fast, simple method of performing high-volume cloning of desktops, even during production hours

Virtual server applications: Virtual server applications, such as VMware vCloud Director deployments, in large-scale environments that require:
- A simple, dynamic method of creating a large number of VMs, even during production hours
- Mixed read and write workloads that need to adapt to high degrees of growth over time

High-performance database applications: OLTP database, database test/development environments, and database analytic applications such as Oracle and Microsoft SQL Server that require:
- Consistent, low (<1 ms) I/O latency to meet the performance service level objectives of the database workload
- Multiple space-efficient test or development copies
- The ability to reduce database licensing costs (XtremIO increases database server CPU utilization, so fewer database CPU core licenses are needed)

Base configurations

The base configuration contains the minimum set of compute and storage components, and fixed network resources, for a Converged System. These components are integrated within one or more 28-inch, 42 RU cabinets.

The following table describes how hardware components can be customized:

Compute: Cisco UCS B-Series and C-Series M4 or M5 servers
- Minimum of 4 Cisco UCS blade servers; maximum of 256 Cisco UCS B-Series Blade Servers, depending on the number of X-Bricks
- Minimum of 2 Cisco UCS 5108 Blade Server Chassis
- Maximum of 16 Cisco UCS 5108 Blade Server Chassis per Cisco UCS domain; maximum of 8 chassis per domain with the VxBlock System 40 Gb connectivity option
- Each Cisco UCS 5108 Blade Server Chassis is configured with a pair of Cisco UCS 2304 Fabric Extenders with the VxBlock System 40 Gb connectivity option
- Minimum of one pair and maximum of 4 pairs of Cisco UCS 62xxUP fabric interconnects
- Optional VMware NSX edge servers: 4 to 6 Cisco UCS B-Series or C-Series servers, including the B200 M4 and M5 with VIC 1340/1380

Network:
- One pair of Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9396S 16G Multilayer Fabric Switches, or Cisco MDS 9706 Multilayer Directors
- One pair of Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches
- One pair of Cisco Nexus 3172TQ or Cisco Nexus 3064-T Switches
- One pair of Cisco Nexus 9332PQ Switches with the VxBlock System 40 Gb connectivity option

Storage: One XtremIO 40 TB, 20 TB, or 10 TB cluster per Converged System
- XtremIO 40 TB cluster: contains 1, 2, 4, 6, or 8 X-Bricks with a maximum of 32 front-end ports; the number of supported drives depends on the configuration; each X-Brick contains 25 x 1.6 TB encryption capable drives
- XtremIO 20 TB cluster: contains 1, 2, 4, 6, or 8 X-Bricks with a maximum of 32 front-end ports; the number of supported drives depends on the configuration; each X-Brick contains 25 x 800 GB encryption capable drives
- XtremIO 10 TB cluster: contains 1, 2, or 4 X-Bricks with a maximum of 16 front-end ports; the number of supported drives depends on the configuration; each X-Brick contains 25 x 400 GB encryption capable drives

Management hardware options: The second generation of the Advanced Management Platform (AMP-2) centralizes management components of the Converged System. Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the compute and storage arrays in the Converged System. All components have N+N or N+1 redundancy.

Depending upon the configuration, the following maximums apply:

Cisco UCS 62xxUP Fabric Interconnects:
- 32 Cisco B-Series Blade Servers with 4 Cisco UCS domains for Cisco UCS 6248UP Fabric Interconnects
- 64 Cisco B-Series Blade Servers with 4 Cisco UCS domains for Cisco UCS 6296UP Fabric Interconnects
- Maximum blades: half-width = 256, full-width = 256, double-height = 128

Cisco UCS 6332-16UP Fabric Interconnects with the VxBlock System 40 Gb connectivity option:
- 256 Cisco B-Series Blade Servers with 4 Cisco UCS domains
- Maximum blades: half-width = 256, full-width = 128, double-height = 64

Disk drives:
- 8 X-Bricks = 200
- 6 X-Bricks = 150
- 4 X-Bricks = 100
- 2 X-Bricks = 50
- 1 X-Brick = 25

A minimum of eight X-Bricks is required to support 256 hosts.

Related information: Storage layer hardware; XtremIO system specifications
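The blade maximums above can be sanity-checked with simple multiplication. The following Python sketch is illustrative only: it uses figures stated in this guide (four Cisco UCS domains, eight chassis per domain with the 40 Gb option or 16 at 10 Gb, and the chassis slot capacities described in the Cisco UCS section), and the function name is hypothetical.

```python
# Blade maximums as a product of domains, chassis, and blades per chassis.
# All inputs come from this guide; the 256-server system maximum still applies.
BLADES_PER_CHASSIS = {"half": 8, "full": 4, "double": 2}
SYSTEM_MAX_SERVERS = 256

def max_blades(form: str, chassis_per_domain: int, domains: int = 4) -> int:
    """Upper bound on blades of one form factor, capped at the system maximum."""
    raw = domains * chassis_per_domain * BLADES_PER_CHASSIS[form]
    return min(raw, SYSTEM_MAX_SERVERS)

# 40 Gb option (8 chassis per domain): 256 half, 128 full, 64 double-height.
for form in ("half", "full", "double"):
    print(form, max_blades(form, chassis_per_domain=8))
```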

Scaling up compute resources

Compute resources can be scaled to meet increasingly stringent requirements. The maximum supported configuration differs based on core components. Add uplinks, blade packs, and chassis activation kits to enhance Ethernet and FC bandwidth when the Converged Systems are built or deployed.

Blade packs

Cisco UCS blades are sold in packs of two identical blades. The base configuration of each Converged System includes two blade packs. The maximum number of blade packs depends on the type of Converged System. Each blade type must have a minimum of two blade packs as a base configuration and can be increased in single blade pack increments.

Each blade pack includes the following license packs:
- VMware vSphere ESXi
- Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)
- PowerPath/VE

License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and PowerPath are not available for bare metal blades.

Chassis activation kits

Power supplies and fabric extenders for all chassis are populated and cabled, and all required twinax cables and transceivers are populated. As more blades are added and additional chassis are required, chassis activation kits are automatically added to an order. Each kit contains software licenses to enable additional fabric interconnect ports. Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis activation kits can be added up front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis.
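To illustrate the chassis math that drives activation kit counts, here is a minimal sizing sketch in Python. It assumes the slot capacities stated in the Cisco UCS section of this guide (eight half-width, four full-width, or two double-height blades per Cisco UCS 5108 chassis) and ignores physical placement constraints; the function is illustrative, not part of any Dell EMC tooling.

```python
import math

# Blade slots consumed per form factor in a Cisco UCS 5108 chassis:
# the chassis holds 8 half-width, 4 full-width, or 2 double-height blades.
SLOTS_PER_CHASSIS = 8
SLOT_COST = {"half": 1, "full": 2, "double": 4}

def chassis_required(blades: dict) -> int:
    """Minimum number of chassis needed to hold the given blade counts.

    `blades` maps a form factor ("half", "full", "double") to a blade count.
    Chassis activation kits license only this minimum number of chassis.
    """
    slots = sum(SLOT_COST[form] * count for form, count in blades.items())
    return math.ceil(slots / SLOTS_PER_CHASSIS)

# Example: 12 half-width and 4 full-width blades use 20 slots -> 3 chassis.
print(chassis_required({"half": 12, "full": 4}))
```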

Scaling up storage resources

XtremIO components are placed in a dedicated rack. Add X-Bricks to the Converged System to scale up storage resources.

With 10 Gb connectivity, Cisco UCS compute maximums scale with the X-Brick count, which determines the total number of supported servers.

With the VxBlock System 40 Gb connectivity option, the compute layer can scale to 256 host servers across four pairs of Cisco UCS fabric interconnects, known as Cisco UCS domains. A Cisco UCS domain can contain up to 16 chassis with 10 Gb connectivity or up to eight chassis with the 40 Gb connectivity option. However, server and domain maximums depend on the size and SAN connectivity of the storage array. With the 40 Gb connectivity option, the compute maximums likewise scale with the X-Brick count, which determines the number of Cisco UCS domains, chassis, and total servers.

SAN maximums for 10 and 40 Gb connectivity depend on the Cisco MDS SAN switch model (9148S or 9396S 16G), which determines the supported number of Cisco UCS domains, total servers, and X-Bricks.

Network topology

In a segregated network architecture, LAN and SAN connectivity is segregated into separate switch fabrics.

10 Gb connectivity

LAN switching uses the Cisco Nexus 93180YC-EX, Cisco Nexus 9396PX, Cisco Nexus 5548UP, or Cisco Nexus 5596UP Switches. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9706 Multilayer Director.

The compute layer connects to both the Ethernet and FC components of the network layer. Cisco UCS fabric interconnects connect to the Cisco Nexus switches in the Ethernet network through port channels based on 10 GbE links, and to the Cisco MDS switches through port channels made up of multiple 8 Gb FC links.

VxBlock System with 40 Gb connectivity

LAN switching uses the Cisco Nexus 9332PQ Switch. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9706 Multilayer Director.

The compute layer connects to both the Ethernet and FC components of the network layer. Cisco UCS fabric interconnects connect to the Cisco Nexus switches in the Ethernet network through port channels based on 40 GbE links, and to the Cisco MDS switches through port channels made up of multiple 16 Gb FC links.

Segregated network architecture

The storage layer consists of an XtremIO storage array. The front-end I/O modules connect to the Cisco MDS switches within the network layer over 16 Gb FC links. Refer to the appropriate Dell EMC Release Certification Matrix for a list of what is supported on your Converged System.

The following illustration shows a segregated block storage configuration for the 10 Gb based Converged System:

The following illustration shows a segregated block storage configuration for a VxBlock System with the 40 Gb connectivity option:

SAN boot storage configuration

With vSphere 6.0, VMware vSphere ESXi hosts always boot over the FC SAN from a 10 GB boot LUN, which contains the hypervisor's locker for persistent storage of logs and other diagnostic files. With vSphere 6.5, hosts boot from a 15 GB boot LUN, which serves the same purpose. In both cases, the remainder of the storage can be presented as VMFS datastores or as raw device mappings.

Compute layer

Compute overview

Cisco UCS B-Series and C-Series servers provide computing power within the Converged System.

Converged Systems include Cisco UCS 62xxUP fabric interconnects with eight or sixteen 10 Gbps links connected to a pair of 10 Gbps capable Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches. With the VxBlock System 40 Gbps connectivity option, Cisco UCS 6332-16UP Fabric Interconnects are included, with four or six 40 Gbps links connected to a pair of 40 Gbps capable Cisco Nexus 9332PQ Switches.

Fabric extenders (FEXs) within the Cisco UCS 5108 Blade Server Chassis connect to the fabric interconnects (FIs) over converged networking. Up to eight 10 Gbps ports, or four 40 GbE ports with the 40 Gbps connectivity option, on each FEX connect northbound to the FIs, regardless of the number of blades in the chassis. These connections carry IP and FC traffic.

Reserved FI ports connect to upstream access switches within the Converged System. These connections are formed into a port channel to the Cisco Nexus switches and carry IP traffic destined for the external network links. Each FI also has multiple ports reserved for FC connectivity. These ports connect to Cisco SAN switches and carry FC traffic between the compute layer and the storage layer. SAN port channels carrying FC traffic are configured between the FIs and the upstream Cisco MDS switches.

The following table compares the hardware for the two connectivity options:

- FIs: Cisco UCS 62xxUP (10 Gbps); Cisco UCS 6332-16UP (VxBlock System with 40 Gbps)
- FEX: Cisco UCS 22xxXP (10 Gbps); Cisco UCS 2304 (VxBlock System with 40 Gbps)
- LAN switches: Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX (10 Gbps); Cisco Nexus 9332PQ (VxBlock System with 40 Gbps)

Cisco UCS

Optimized for virtualization, Cisco UCS integrates a low-latency, lossless unified network fabric with enterprise-class, x86-based servers. Converged Systems contain a number of Cisco UCS 5108 Blade Server Chassis. Each chassis can contain up to eight half-width blade servers, four full-width blade servers, or two double-height blades installed at the bottom of the chassis.

Converged Systems powered by Cisco UCS offer the following features:
- Built-in redundancy for high availability
- Hot-swappable components for serviceability, upgrade, or expansion
- Fewer physical components than in a comparable system built piece by piece
- Reduced cabling

- Improved energy efficiency over traditional chassis

Compute connectivity

Each Cisco UCS B-Series Blade Server contains at least one physical virtual interface card (VIC) that passes converged FC and IP network traffic through the chassis mid-plane to the fabric extenders.

Blade servers

Half-width blade servers can be configured with a VIC 1340 or VIC 1385 installed in the modular LOM (mLOM) mezzanine slot to connect at a potential bandwidth of 20 Gb/s or 40 Gb/s to each fabric. Optionally, a VIC 1380 or VIC 1387 can be installed in the PCIe mezzanine slot alongside the VIC 1340 or VIC 1385 to separate non-management network traffic onto a separate physical adapter. In a Cisco UCS B200 server, the VIC 1340 and VIC 1380 can connect at 20 Gb/s or 40 Gb/s to each fabric. With the VxBlock System 40 Gb connectivity option, the VIC 1340 and VIC 1385 can be installed along with a port expander card to achieve native 40 Gb/s connectivity to each fabric.

Full-width blade servers can be configured with a VIC 1340 or VIC 1385 that connects at 20 Gb/s or 40 Gb/s to each fabric. Optionally, a full-width blade can be configured with a VIC 1340 or VIC 1385 together with a VIC 1380 or VIC 1387. The VIC 1340 and VIC 1385 can connect at 40 Gb/s, and the VIC 1380 and VIC 1387 can communicate at a maximum bandwidth of 40 Gb/s to each fabric with the 40 Gb connectivity option. Another option is to configure the full-width blade server with a VIC 1340 or VIC 1385, a port expander card, and a VIC 1380 or VIC 1387. With the VxBlock System 40 Gb connectivity option and all cards installed, the server's network interfaces each communicate at a maximum bandwidth of 40 Gb/s.

Cisco UCS 5108 Blade Server Chassis

Each chassis is configured with two Cisco UCS 22xxXP fabric extenders. Each FEX connects to a single Cisco UCS 62xxUP fabric interconnect, one on the A-side fabric and one on the B-side fabric. The chassis can have two or four 10 Gb/s connections per Cisco UCS 2204XP or Cisco UCS 2208XP Fabric Extender to the Cisco UCS 62xxUP fabric interconnects. Optionally, the Cisco UCS 2208XP Fabric Extender can provide up to eight 10 Gb/s connections per module to the fabric interconnects.

With the VxBlock System 40 Gb/s connectivity option, each chassis is configured with two Cisco UCS 2304 Fabric Extenders, each connected to a single Cisco UCS 6332-16UP Fabric Interconnect, one on the A side and one on the B side of the fabric. The chassis can have two or four 40 Gb/s connections to each Cisco UCS 6332-16UP Fabric Interconnect.
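As a back-of-the-envelope illustration of the chassis uplink bandwidth these FEX options provide, here is a small Python sketch. The figures come straight from the text above (2204XP and 2208XP: two, four, or eight 10 Gb/s links per FEX; 2304: two or four 40 Gb/s links; two FEXs per chassis); the function itself is illustrative only, not a Dell EMC sizing tool.

```python
# Per-FEX uplink options taken from the text above: (link speed in Gb/s,
# allowed number of northbound links per fabric extender).
FEX_OPTIONS = {
    "2204XP": (10, (2, 4)),
    "2208XP": (10, (2, 4, 8)),
    "2304":   (40, (2, 4)),
}

def chassis_bandwidth_gbps(fex_model: str, links_per_fex: int) -> int:
    """Aggregate northbound bandwidth for one chassis (two FEXs)."""
    speed, allowed = FEX_OPTIONS[fex_model]
    if links_per_fex not in allowed:
        raise ValueError(f"{fex_model} supports {allowed} links per FEX")
    return 2 * links_per_fex * speed  # two FEXs per chassis

# Examples: a fully cabled 2208XP chassis vs. a 2304 chassis with 4 links.
print(chassis_bandwidth_gbps("2208XP", 8))  # 160 Gb/s
print(chassis_bandwidth_gbps("2304", 4))    # 320 Gb/s
```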

The following illustration shows the FEX-to-FI connections on a chassis with the VxBlock System 40 Gb/s connectivity option:

Fabric interconnect

Each Cisco UCS 62xxUP fabric interconnect has a total of eight 10 Gb/s LAN uplink connections, configured in a port channel on each fabric to a pair of Cisco Nexus switches. Optionally, the LAN bandwidth enhancement can increase the connections to a total of 16. Four, eight, or sixteen 8 Gb/s FC connections carry SAN traffic to a pair of Cisco MDS switches.

With the VxBlock System 40 Gb/s connectivity option, each FI has a minimum of four 40 Gb/s LAN connections, two to each fabric; this can be expanded to six total ports on each FI. These connections are configured in a port channel for maximum bandwidth and redundancy. A port channel of eight 16 Gb/s FC connections carries SAN traffic from each FI to the Cisco MDS SAN switches. The SAN connections can be expanded to 12 or 16 ports on each FI. For the Cisco UCS 6332-16UP Fabric Interconnects, only active cables can be used for LAN connectivity; passive cables are not supported for LAN uplinks to the Cisco Nexus switches.

Blade packs

Cisco UCS blades are sold in packs of two identical blades. The base configuration of each Converged System includes two blade packs. The maximum number of blade packs depends on the type of Converged System. Each blade type must have a minimum of two blade packs as a base configuration and can be increased in single blade pack increments.

Each blade pack is added along with the following license packs:
- VMware vSphere ESXi
- Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)
- PowerPath/VE

License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and PowerPath are not available for bare metal blades.

Chassis activation kits

The power supplies and fabric extenders for all chassis are populated and cabled, and all required twinax cables and transceivers are populated. As more blades are added and additional chassis are required, chassis activation kits are automatically added to an order. The kit contains software licenses to enable additional fabric interconnect ports. Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis activation kits can be added up front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis.

SAN boot storage configuration

VMware vSphere ESXi hosts always boot over the FC SAN from a 15 GB boot LUN, which contains the hypervisor's locker for persistent storage of logs and other diagnostic files. The remainder of the storage can be presented as VMFS datastores or as raw device mappings.

Related information:
- Cisco UCS B-Series Blade Servers B200 M5 specifications
- Cisco UCS B-Series Blade Servers B420 M4 specifications
- Cisco UCS B-Series Blade Servers B480 M5 specifications

Cisco UCS fabric interconnects

Cisco UCS fabric interconnects provide network connectivity and management capability to the Cisco UCS blades and chassis. They offer line-rate, low-latency, lossless 10 or 40 Gbps Ethernet and Fibre Channel over Ethernet (FCoE) functions.

VMware NSX

The optional VMware NSX feature is supported only with 10 Gbps connectivity. This feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the required port count for VMware NSX external connectivity (edges).

Cisco Trusted Platform Module

Cisco Trusted Platform Module (TPM) provides authentication and attestation services for safer computing in all environments. Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption keys that are used to authenticate remote and local server sessions. Cisco TPM is available by default as a component in Cisco UCS B-Series and C-Series servers, and it is shipped disabled. Only the Cisco TPM hardware is supported; Cisco TPM functionality is not supported. Because making effective use of the Cisco TPM involves the use of a software stack from a vendor with significant experience in trusted computing, defer to the software stack vendor for configuration and operational considerations relating to the Cisco TPM.

Disjoint Layer 2 configuration

Traffic is split between two or more different networks at the fabric interconnect in a Disjoint Layer 2 configuration to support two or more discrete Ethernet clouds. Cisco UCS servers connect to two different clouds. Upstream Disjoint Layer 2 networks allow VMs located in the same Cisco UCS domain to access two or more Ethernet clouds that never connect.

The following illustration provides an example implementation of Disjoint Layer 2 networking in a Cisco UCS domain:

vPCs 101 and 102 are production uplinks that connect to the network layer of the Converged System. vPCs 105 and 106 are external uplinks that connect to other switches. If Ethernet performance port channels (103 and 104, by default) are used, port channels 101 through 104 are assigned to the same VLANs. Disjoint Layer 2 network connectivity can also be configured with an individual uplink on each fabric interconnect.
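To make the uplink-to-VLAN relationship concrete, the following Python sketch models the example above as data and checks the defining property of a Disjoint Layer 2 design: the production and performance uplinks carry one VLAN set, the external uplinks carry another, and the two sets never overlap. The VLAN IDs are hypothetical; only the port channel numbers (101 through 106) come from the example.

```python
# Port-channel-to-VLAN assignments modeled on the example above.
# VLAN IDs are invented for illustration; port channel numbers match the text.
uplink_vlans = {
    101: {10, 20, 30},   # production vPC (fabric A)
    102: {10, 20, 30},   # production vPC (fabric B)
    103: {10, 20, 30},   # Ethernet performance port channel (fabric A)
    104: {10, 20, 30},   # Ethernet performance port channel (fabric B)
    105: {200, 201},     # external uplink to the second Ethernet cloud
    106: {200, 201},     # external uplink to the second Ethernet cloud
}

production = set().union(*(uplink_vlans[pc] for pc in (101, 102, 103, 104)))
external = set().union(*(uplink_vlans[pc] for pc in (105, 106)))

# In a valid Disjoint Layer 2 configuration the two clouds never share a VLAN,
# because each VLAN is pinned to exactly one uplink per fabric.
assert production.isdisjoint(external), "VLAN overlap between Ethernet clouds"
```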

Bare metal support policy

Because many applications cannot be virtualized for technical or commercial reasons, Converged Systems support bare metal deployments, such as non-virtualized operating systems and applications.

While it is possible for Converged Systems to support these workloads, due to the nature of bare metal deployments, Dell EMC can provide only reasonable effort support for systems that comply with the following requirements:
- Converged Systems contain only Dell EMC published, tested, and validated hardware and software components. The Release Certification Matrix provides a list of the certified versions of components for Converged Systems.
- The operating systems used on bare metal deployments for compute components must comply with the published hardware and software compatibility guides from Cisco and Dell EMC.
- For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by Dell EMC. Dell EMC support is provided only on VMware hypervisors.

Dell EMC reasonable effort support includes Dell EMC acceptance of customer calls, a determination of whether a Converged System is operating correctly, and assistance in problem resolution to the extent possible. Dell EMC is unable to reproduce problems or provide support for the operating systems and applications installed on bare metal deployments. In addition, Dell EMC does not provide updates to or test those operating systems or applications. Contact the OEM support vendor directly for issues and patches related to those operating systems and applications.

Storage layer

Storage layer hardware

XtremIO fully leverages the properties of random-access flash media.

XtremIO

The resulting system addresses the demands of mixed workloads with superior random I/O performance, instant response times, scalability, flexibility, and administrator agility. XtremIO delivers consistently low latency (below 1 ms) with a set of non-stop data services. The following features are included:
- Inline data reduction and compression
- Thin provisioning
- Snapshots
- 99.999 percent availability
- Enhanced host performance and unprecedented responsiveness for enterprise applications

The XtremIO Management Server is a VM that provides a browser-based GUI to create, manage, and monitor XtremIO storage arrays.

Related information: XtremIO storage array configurations and capacities; XtremIO storage arrays; XtremIO storage array physical specifications

XtremIO storage arrays

XtremIO storage arrays share common characteristics across XtremIO models. XtremIO storage arrays include the following features:
- Two 8 Gb FC ports per controller (four per X-Brick)
- 25 drives per X-Brick
- Encryption capable

All X-Bricks within the cluster must be the same type. All XtremIO cluster components must reside in the same cabinet in contiguous RUs. The only exception is an eight X-Brick array in a 42 RU cabinet, where X-Bricks seven and eight may reside in an adjacent cabinet.

The maximum number of supported hosts depends on the number of X-Bricks in the configuration. While the maximum number of initiators per XtremIO cluster is 1024, the recommended limit is 64 initiators per FC port, which preserves performance while supporting hosts with four vHBAs.

The following illustration shows the interconnection of XtremIO in a VxBlock System with the 40 Gb connectivity option:

The following illustration shows the interconnection of XtremIO in Converged Systems with 10 Gb connectivity:

Fan-in ratio

Sizing guidelines for Converged Systems follow a 32:1 best-practice performance fan-in ratio; for each X-Brick count, the guidelines specify the number of FC ports, the FC ports per host, and the maximum number of physical hosts.
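One way to derive a host maximum from the limits stated earlier (64 initiators recommended per FC port, a 1024-initiator cluster maximum, four vHBAs per host, and four front-end FC ports per X-Brick) is shown in the Python sketch below. It is an illustration of the arithmetic under those stated assumptions, not an official Dell EMC sizing formula.

```python
# Derive a host maximum from the initiator guidance stated earlier:
# 64 initiators recommended per FC port, 1024 initiators per cluster,
# four vHBAs (initiators) per host, four front-end FC ports per X-Brick.
INITIATORS_PER_PORT = 64
INITIATORS_PER_CLUSTER = 1024
VHBAS_PER_HOST = 4
FC_PORTS_PER_XBRICK = 4

def max_hosts(x_bricks: int) -> int:
    """Illustrative host ceiling for a cluster of the given X-Brick count."""
    ports = FC_PORTS_PER_XBRICK * x_bricks
    initiators = min(ports * INITIATORS_PER_PORT, INITIATORS_PER_CLUSTER)
    return initiators // VHBAS_PER_HOST

for n in (1, 2, 4, 6, 8):
    print(n, max_hosts(n))  # e.g. one X-Brick -> 64 hosts, eight -> 256
```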

Half-width blades

The maximum number of hosts supported with half-width blades depends on the number of X-Bricks. Physical host maximums aggregate across all blade types and form factors.

Full-width blades

The maximum number of hosts supported with full-width blades depends on the number of X-Bricks. Physical host maximums aggregate across all blade types and form factors. With the VxBlock System 40 Gb connectivity option, a maximum of 128 full-width blades is supported in Converged Systems, due to a limit of eight chassis per domain across four Cisco UCS domains.

Double-height blades

The maximum number of hosts supported with double-height blades depends on the number of X-Bricks. Physical host maximums aggregate across all blade types and form factors.

With the VxBlock System 40 Gb connectivity option, a maximum of 64 double-height blades is supported in Converged Systems, due to a limit of eight chassis per domain across four Cisco UCS domains.

The recommended fan-in ratio for high-IOPS workloads on XtremIO front-end ports is 32:1. Higher ratios can be achieved depending on the workload profile. Proper sizing of the XtremIO array is crucial to ensure that the front-end ports are not saturated.

XtremIO storage array configurations and capacities

XtremIO storage arrays have specific configurations and capacities. The following options are supported for XtremIO:
- 10 TB X-Brick (encryption capable)
- 20 TB X-Brick (encryption capable)
- 40 TB X-Brick (encryption capable)

If additional X-Bricks are added to a cluster after deployment, a data migration professional services engagement is required, so plan for future growth during the initial purchase.

Supported standard configurations (tier 1)

All three models are encryption capable. The 10 TB model uses 400 GB drives and supports one-, two-, and four-X-Brick clusters (six- and eight-X-Brick clusters are not available). The 20 TB model uses 800 GB drives and the 40 TB model uses 1.6 TB drives; both support one-, two-, four-, six-, and eight-X-Brick clusters.

XtremIO 10 TB X-Brick capacities

The capacity table for the 10 TB model lists raw capacity (TB), usable capacity (TiB)*, and effective capacity (TiB)** for one-, two-, and four-X-Brick clusters.

* Usable capacity is the amount of unique, non-compressible data that can be written into the array.
** Effective capacity includes the benefits of thin provisioning, inline global deduplication, inline compression, and space-efficient copies. Effective numbers represent a 6:1 capacity increase and vary based on the specific environment.
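The 6:1 figure in the footnote above makes the relationship between usable and effective capacity easy to express. The short Python sketch below applies it; the example usable capacity is hypothetical, since the per-model capacity values are configuration-specific.

```python
# Effective capacity under the documented 6:1 assumption, which reflects thin
# provisioning, inline deduplication, inline compression, and space-efficient
# copies. Actual ratios vary based on the specific environment.
EFFECTIVE_RATIO = 6.0

def effective_tib(usable_tib: float, ratio: float = EFFECTIVE_RATIO) -> float:
    """Effective capacity in TiB for a given usable capacity in TiB."""
    return usable_tib * ratio

# Hypothetical example: 100 TiB usable yields about 600 TiB effective.
print(effective_tib(100.0))
```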

XtremIO 20 TB X-Brick capacities

The capacity table for the 20 TB model lists raw capacity (TB), usable capacity (TiB)*, and effective capacity (TiB)** for one-, two-, four-, six-, and eight-X-Brick clusters.

XtremIO 40 TB X-Brick capacities

The capacity table for the 40 TB model lists raw capacity (TB), usable capacity (TiB)*, and effective capacity (TiB)** for the same cluster sizes.

XtremIO storage array physical specifications

Each X-Brick contains two storage controllers, one DAE, and one or two battery backup units (BBUs).

Physical specifications

Each X-Brick consists of the following components:
- Two X-Brick storage controllers
- One X-Brick DAE
- Two BBUs (single X-Brick system) or one BBU (multiple X-Brick system)

A pair of InfiniBand switches is required in two-, four-, six-, or eight-X-Brick clusters.

In more detail, each X-Brick consists of the following components:
- Two 1 RU storage controllers, each containing:
  - Two redundant power supply units
  - Two 8 Gb/s FC ports
  - Two 40 Gb/s InfiniBand ports
  - One 1 Gb/s management/IPMI port
  - Two 6 Gb/s SAS ports for DAE connections
  - Additional ports that are unused in Dell EMC configurations
- One 2 RU DAE containing:
  - 25 eMLC SSDs
  - Two redundant power supply units

  - Two redundant SAS interconnect modules
- One BBU

A single X-Brick cluster consists of:
- One X-Brick
- One additional BBU

A cluster of multiple X-Bricks consists of:
- Two, four, six, or eight X-Bricks
- Two InfiniBand switches

For VxBlock Systems with the 40 Gb connectivity option, a physical specifications table lists the number of X-Bricks, InfiniBand switches, and additional BBUs for single, two-, four-, six-, and eight-X-Brick clusters.

The following table provides physical specifications for each component:

- X-Brick storage controller: 1 RU; 40 lbs (18.1 kg)
- X-Brick DAE: 2 RU; 45 lbs (20.4 kg)
- BBU: 1 RU; 44 lbs (20 kg); typical power consumption N/A; 1 C14 power socket
- InfiniBand switches*: 3 RU; 41 lbs (18.6 kg); 130 W typical power consumption (65 W per switch); 4 C14 power sockets (2 per switch)

* Two 1 RU switches plus 1 RU for cabling.

A further table lists the total RU for each X-Brick cluster size (one, two, four, six, and eight X-Bricks) for the 10 TB, 20 TB, and 40 TB models.

** Because IPI cabinets are 42 RU, split the X-Bricks between two cabinets, with X-Bricks 7 and 8 in an adjacent cabinet.
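The total rack-unit footprint can be estimated from the per-device figures above. The following Python sketch does so under those stated assumptions (1 RU per controller, 2 RU per DAE, 1 RU per BBU, and 3 RU for the InfiniBand switch pair including cabling); the function is illustrative only.

```python
# Rack-unit (RU) sketch for an XtremIO cluster, derived from the per-device
# figures above: storage controller = 1 RU (two per X-Brick), DAE = 2 RU,
# BBU = 1 RU, InfiniBand switch pair = 3 RU including cabling.
def cluster_rack_units(x_bricks: int) -> int:
    if x_bricks == 1:
        bbus = 2            # a single X-Brick system carries two BBUs
        infiniband = 0      # no InfiniBand switches in a single-brick cluster
    else:
        bbus = x_bricks     # one BBU per X-Brick in multi-brick clusters
        infiniband = 3      # two 1 RU switches plus 1 RU for cabling
    per_brick = 2 * 1 + 2   # two 1 RU controllers plus one 2 RU DAE
    return x_bricks * per_brick + bbus + infiniband

for n in (1, 2, 4, 6, 8):
    print(n, cluster_rack_units(n))
# Eight X-Bricks come to 43 RU, which is why an eight-brick cluster is split
# across two 42 RU cabinets, with X-Bricks 7 and 8 in an adjacent cabinet.
```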

Network layer

LAN and SAN make up the network layer.

LAN layer

The LAN layer of the Converged System includes a pair of Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches for production traffic, plus a pair of Cisco Nexus 3172TQ or Cisco Nexus 3064-T Switches for management networking.

The following table shows LAN layer components:

- Cisco Nexus 5548UP Switch: 1 RU appliance; supports 32 fixed 10 Gbps SFP+ ports; expands to 48 10 Gbps SFP+ ports through an available expansion module
- Cisco Nexus 5596UP Switch: 2 RU appliance; supports 48 fixed 10 Gbps SFP+ ports; expands to 96 10 Gbps SFP+ ports through three available expansion slots
- Cisco Nexus 93180YC-EX Switch: 1 RU appliance; supports 48 fixed 10/25 Gbps SFP+ ports and 6 fixed 40/100 Gbps QSFP+ ports; no expansion modules available
- Cisco Nexus 9396PX Switch: 2 RU appliance; supports 48 fixed 10 Gbps SFP+ ports and 12 fixed 40 Gbps QSFP+ ports; no expansion modules available
- Cisco Nexus 9332PQ Switch: 1 RU appliance; 2.56 Tbps of bandwidth; supports 32 fixed 40 Gbps QSFP+ ports (ports 1-12 and 15-26 support QSFP+-to-10 Gbps SFP+ breakout cables, with QSA adapters on the last six ports)
- Cisco Nexus 3172TQ Switch: 1 RU appliance; supports 48 fixed 100 Mbps/1000 Mbps/10 Gbps twisted-pair ports and 6 fixed 40 Gbps QSFP+ ports for the management layer of the Converged System
- Cisco Nexus 3064-T Switch: 1 RU appliance; supports 48 fixed 10GBase-T RJ45 ports and 4 fixed 40 Gbps QSFP+ ports for the management layer of the Converged System

Cisco Nexus 3064-T Switch - management networking

The base Cisco Nexus 3064-T Switch provides 48 100 Mbps/1 GbE/10 GbE Base-T fixed ports and 4 QSFP+ ports for 40 GbE connections.

The following table shows core connectivity for the Cisco Nexus 3064-T Switch for management networking and reflects the AMP-2 HA base for two servers:

- Management uplinks from fabric interconnect (FI): 2 ports; 1 GbE; Cat6
- Uplinks to customer core: 2 ports; up to 10 GbE; Cat6
- vPC peer links: 2 QSFP+ ports; 10 GbE/40 GbE; Cat6/MMF 50µ/125 LC/LC
- Uplinks to management: 1 port; 1 GbE; Cat6
- Cisco Nexus management ports: 1 port; 1 GbE; Cat6
- Cisco MDS management ports: 2 ports; 1 GbE; Cat6
- AMP-2 CIMC ports: 1 port; 1 GbE; Cat6
- AMP-2 1 GbE ports: 2 ports; 1 GbE; Cat6
- AMP-2 10 GbE ports: 2 ports; 10 GbE; Cat6
- VNXe management ports: 1 port; 1 GbE; Cat6
- VNXe NAS ports: 4 ports; 10 GbE; Cat6
- XtremIO controllers: 2 per X-Brick; 1 GbE; Cat6
- Gateways: 100 Mb/1 GbE; Cat6

The remaining ports in the Cisco Nexus 3064-T Switch provide support for additional domains and their necessary management connections.

Related information: Management components overview

Cisco Nexus 3172TQ Switch - management networking

Each Cisco Nexus 3172TQ Switch provides 48 100 Mbps/1000 Mbps/10 Gbps twisted-pair ports and six 40 GbE QSFP+ ports.

Cisco Nexus 3172TQ Switch on AMP-2

The following table shows core connectivity for the Cisco Nexus 3172TQ Switch for management networking and reflects the base for two servers:

- Management uplinks from fabric interconnect (FI): 2 ports; 10 GbE; Cat6
- Uplinks to customer core: 2 ports; up to 10 GbE; Cat6
- vPC peer links: 2 QSFP+ ports; 40 GbE; Cat6/MMF 50µ/125 LC/LC

- Uplinks to management: 1 port; 1 GbE; Cat6
- Cisco Nexus management ports: 2 ports; 1 GbE; Cat6
- Cisco MDS management ports: 2 ports; 1 GbE; Cat6
- AMP-2 CIMC ports: 1 port; 1 GbE; Cat6
- AMP-2 1 GbE ports: 2 ports; 1 GbE; Cat6
- AMP-2 10 GbE ports: 2 ports; 10 GbE; Cat6
- VNXe management ports: 1 port; 1 GbE; Cat6
- VNXe storage ports: 4 ports; 10 GbE; Cat6
- XtremIO controllers: 2 per X-Brick; 1 GbE; Cat6
- Gateways: 100 Mb/1 GbE; Cat6

The remaining ports in the Cisco Nexus 3172TQ Switch provide support for additional domains and their necessary management connections.

Cisco Nexus 5548UP Switch

The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1 Gbps or 10 Gbps connectivity for all Converged System production traffic.

The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module):

- Uplinks from fabric interconnect (FI): 8 ports; 10 Gbps; twinax
- Uplinks to customer core: 8 ports; up to 10 Gbps; SFP+
- Uplinks to other Cisco Nexus 55xxUP Switches: 2 ports; 10 Gbps; twinax
- Uplinks to management: 3 ports; 10 Gbps; twinax
- Customer IP backup: 4 ports; 1 Gbps or 10 Gbps; SFP+

If an optional 16-port unified port module is added to the Cisco Nexus 5548UP Switch, 28 additional ports are available to provide additional network connectivity.

Cisco Nexus 5596UP Switch

The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps connectivity for LAN traffic.

The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module):

- Uplinks from Cisco UCS fabric interconnect: 8 ports; 10 Gbps; twinax

- Uplinks to customer core: 8 ports; up to 10 Gbps; SFP+
- Uplinks to other Cisco Nexus 55xxUP Switches: 2 ports; 10 Gbps; twinax
- Uplinks to management: 2 ports; 10 Gbps; twinax

The remaining ports in the base Cisco Nexus 5596UP Switch (no module) support the following additional connectivity option:

- Customer IP backup: 4 ports; 1 Gbps or 10 Gbps; SFP+

If an optional 16-port unified port module is added to the Cisco Nexus 5596UP Switch, additional ports are available to provide additional network connectivity.

Cisco Nexus 9332PQ Switch

The base Cisco Nexus 9332PQ Switch provides 32 QSFP+ ports used for 40 Gb connectivity (24 of which can provide 10 Gb connectivity) and six 40 Gb QSFP+ ports for customer LAN uplink traffic. The Cisco Nexus 9332PQ Switch supports both 40 Gbps QSFP+ and 10 Gbps speeds with breakout cables and QSA adapters on the last six ALE ports. All ports on the Cisco Nexus 9332PQ Switch are licensed and available. There are no expansion modules available for the Cisco Nexus 9332PQ Switch.

The following table shows core connectivity for the Cisco Nexus 9332PQ Switch:

- Uplinks from fabric interconnect: 2 per domain; 40 Gb; QSFP+
- Uplinks to customer core: 4 ports; 40 Gb; QSFP+
- vPC peer links: 2 ports; 40 Gb; twinax
- Uplinks to AMP-2 management servers: 2 ports; 10 Gb; twinax breakout cable

The remaining ports in the Cisco Nexus 9332PQ Switch support a combination of the following additional connectivity options:

- Customer IP backup: 8 ports; 10 Gb breakout; twinax breakout cable
- Uplinks from Cisco UCS FIs for Ethernet bandwidth enhancement: 1 per domain; 40 Gb; twinax

Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch - segregated networking

The Cisco Nexus 93180YC-EX Switch provides 48 10/25 Gbps SFP+ ports and six 40/100 Gbps QSFP+ uplink ports. The Cisco Nexus 9396PX Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps connectivity and 12 40 Gbps QSFP+ ports.

The following table shows core connectivity for the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch with segregated networking:

- Uplinks from FI: 8 ports; 10 GbE; twinax
- Uplinks to customer core: 8 ports (10 GbE) or 2 ports (40 GbE); up to 40 GbE; SFP+/QSFP+
- vPC peer links: 2 ports; 40 GbE; twinax

The remaining ports in the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch support a combination of the following additional connectivity options:

- RecoverPoint WAN links (one per appliance pair): 4 ports; 1 GbE; GE T SFP+
- Customer IP backup: 8 ports; 1 GbE or 10 GbE; SFP+
- Uplinks from Cisco UCS FIs for Ethernet bandwidth enhancement: 8 ports; 10 GbE; twinax

SAN layer

Two Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9396S 16G Multilayer Fabric Switches, or Cisco MDS 9706 Multilayer Directors make up two separate fabrics that provide 16 Gbps of FC connectivity between the compute and storage layer components. Connections from the storage components are over 16 Gbps links.

With 10 Gbps connectivity, Cisco UCS fabric interconnects provide an FC port channel of four 8 Gbps connections (32 Gbps of bandwidth) to each fabric on the Cisco MDS 9148S Multilayer Fabric Switches; this can be increased to eight connections for 64 Gbps of bandwidth. The Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director also support 16 connections for 128 Gbps of bandwidth per fabric.

With the VxBlock System 40 Gbps connectivity option, Cisco UCS fabric interconnects provide an FC port channel of eight 16 Gbps connections (128 Gbps of bandwidth) to each fabric; this can be increased to 12 connections for 192 Gbps of bandwidth. The Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director also support 16 connections for 256 Gbps of bandwidth.
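The port-channel figures above are simple multiplication; this small Python sketch reproduces them. The link counts and speeds come from this section, and the function itself is illustrative only.

```python
def port_channel_gbps(links: int, link_speed_gbps: int) -> int:
    """Aggregate FC bandwidth of a SAN port channel from one FI to one fabric."""
    return links * link_speed_gbps

# 10 Gbps connectivity option: 8 Gbps FC links from the fabric interconnects.
print(port_channel_gbps(4, 8))    # 32 Gbps base configuration
print(port_channel_gbps(8, 8))    # 64 Gbps expanded
print(port_channel_gbps(16, 8))   # 128 Gbps on the MDS 9396S/9706

# 40 Gb connectivity option: 16 Gbps FC links from the fabric interconnects.
print(port_channel_gbps(8, 16))   # 128 Gbps base configuration
print(port_channel_gbps(12, 16))  # 192 Gbps expanded
print(port_channel_gbps(16, 16))  # 256 Gbps on the MDS 9396S/9706
```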

The Cisco MDS switches provide:
- FC connectivity between compute and storage layer components
- Connectivity for backup and business continuity requirements (if configured)

Inter-Switch Links (ISLs) to an existing SAN or between switches are not permitted.

The following table shows SAN network layer components:

- Cisco MDS 9148S Multilayer Fabric Switch: 1 RU appliance; provides 12 to 48 line-rate ports for non-blocking 16 Gbps throughput; 12 ports are licensed, and additional ports can be licensed
- Cisco MDS 9396S 16G Multilayer Fabric Switch: 2 RU appliance; provides 48 to 96 line-rate ports for non-blocking 16 Gbps throughput; 48 ports are licensed, and additional ports can be licensed in 12-port increments
- Cisco MDS 9706 Multilayer Director: 9 RU appliance; provides up to 12 Tbps of front-panel FC line-rate, non-blocking, system-level switching; Dell EMC leverages the advanced 48-port line cards at a line rate of 16 Gbps for all ports; consists of two 48-port line cards per director, and up to two additional 48-port line cards can be added; Dell EMC requires that 4 fabric modules be included with all Cisco MDS 9706 Multilayer Directors for an N+1 configuration; 4 PDUs; 2 supervisors

Cisco MDS 9148S Multilayer Fabric Switch

Converged Systems incorporate the Cisco MDS 9148S Multilayer Fabric Switch to provide line-rate ports for non-blocking 16 Gbps throughput. In the base configuration, 24 ports are licensed; additional ports can be licensed as needed. The Cisco MDS 9148S Multilayer Fabric Switch is a fixed switch with no IOM expansion for additional ports. It provides connectivity for up to 48 ports for Cisco UCS fabric interconnect and storage array connectivity.

The following table provides core connectivity for the Cisco MDS 9148S Multilayer Fabric Switch:

- FI uplinks: 4 or 8 ports; 8 Gb; SFP+
- XtremIO X-Brick: 2 per X-Brick; 8 Gb; SFP+

Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director

Converged Systems incorporate the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS 9706 Multilayer Director to provide FC connectivity from storage to compute.

Cisco MDS 9706 Multilayer Directors provide line-rate ports for non-blocking 16 Gbps throughput. Port licenses are not required for the Cisco MDS 9706 Multilayer Director. The Cisco MDS 9706 Multilayer Director is a director-class SAN switch with four IOM expansion slots for 48-port 16 Gb FC line cards, and it deploys two supervisor modules for redundancy. The Cisco MDS 9706 Multilayer Director provides connectivity for up to 192 ports from Cisco UCS fabric interconnects and an XtremIO storage array that supports up to eight X-Bricks. The Cisco MDS 9706 Multilayer Director uses dynamic port mapping; there are no port reservations.

Cisco MDS 9396S 16G Multilayer Fabric Switches provide line-rate ports for non-blocking 16 Gbps throughput. The base license includes 48 ports; additional ports can be licensed in 12-port increments. The Cisco MDS 9396S 16G Multilayer Fabric Switch is a 96-port fixed switch with no IOM modules for port expansion.

The following tables provide core connectivity for the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS 9706 Multilayer Director:

Cisco MDS 9396S 16G Multilayer Fabric Switch:
- FI uplinks with 10 Gb connectivity: 4, 8, or 16 ports; 8 Gb; SFP+
- XtremIO X-Brick: 2 per X-Brick; 8 Gb; SFP+

Cisco MDS 9706 Multilayer Director:
- FI uplinks with 10 Gb connectivity: 4, 8, or 16 ports; 8 Gb; SFP+
- FI uplinks with the 40 Gb connectivity option: 8, 12, or 16 ports; 16 Gb; SFP+
- XtremIO X-Brick: 2 per X-Brick; 8 Gb; SFP+

Virtualization layer

Virtualization components

VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are VMware vSphere ESXi and VMware vCenter Server for management.

VMware vSphere 5.5 includes a Single Sign-On (SSO) component as a standalone Windows server or as an embedded service on the vCenter server; only VMware vCenter Server on Windows is supported. VMware vSphere 6.0 includes a pair of Platform Services Controller Linux appliances to provide the SSO service; either the VMware vCenter Server Appliance or VMware vCenter Server for Windows can be deployed. VMware vSphere 6.5 also includes a pair of Platform Services Controller Linux appliances to provide the SSO service; starting with vSphere 6.5, the VMware vCenter Server Appliance is the default deployment model for vCenter Server.

The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility with the use of VMware vMotion and Storage vMotion technology.

VMware vSphere Hypervisor ESXi

The VMware vSphere Hypervisor ESXi runs on the management servers and Converged Systems using VMware vSphere Server Enterprise Plus. This lightweight hypervisor requires very little space to run (less than 6 GB of storage is required to install) and has minimal management overhead. In some instances, the hypervisor is installed on a 32 GB or larger Cisco FlexFlash SD card (mirrored HV partition). Beginning with VMware vSphere 6.x, all Cisco FlexFlash (boot) capable hosts are configured with a minimum of two 32 GB or larger SD cards. The compute hypervisor supports 10 GigE physical NICs (pNICs) on the Converged System VICs.

VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor ESXi boots from the SAN through an independent FC LUN presented from the storage array to the compute blades. The FC LUN also contains the hypervisor's locker for persistent storage of logs and other diagnostic files, providing stateless computing in Converged Systems. The stateless hypervisor (PXE boot into memory) is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters contain the CPU, memory, network, and storage resources available for allocation to VMs. Clusters can scale up to a maximum of 32 hosts for VMware vSphere 5.5 and 64 hosts for VMware vSphere 6.0. Clusters can support thousands of VMs.

The clusters can also support a variety of Cisco UCS blades running inside the same cluster. Some advanced CPU functionality might be unavailable if more than one blade model is running in a given cluster.

Datastores

Converged Systems support a mixture of datastore types: block-level storage using VMFS or file-level storage using NFS. The maximum size per VMFS volume is 64 TB (50 TB 1 MB). Beginning with VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 255 volumes. Advanced settings are optimized for VMware vSphere ESXi hosts deployed in Converged Systems to maximize the throughput and scalability of NFS datastores. Converged Systems currently support a maximum of 256 NFS datastores per host.

Datastores (VMware vSphere 6.5)

Block-level storage using VMFS and file-level storage using NFS are supported datastore types. The maximum size per VMFS5/VMFS6 volume is 64 TB (50 TB 1 MB). The maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 512 volumes.

Virtual networks

Virtual networking in the AMP-2 uses the VMware Standard Switch. In the Converged System, virtual networking is managed by either the Cisco Nexus 1000V distributed virtual switch or the VMware vSphere Distributed Switch (VDS). The Cisco Nexus 1000V Series Switch ensures consistent, policy-based network capabilities for all servers in the data center by allowing policies to move with a VM during live migration, providing persistent network, security, and storage compliance. Alternatively, virtual networking in Converged Systems is managed by the VMware VDS, which offers comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware Standard Switch and a VMware VDS and uses a minimum of four uplinks presented to the hypervisor.

The implementations of the Cisco Nexus 1000V and the VMware VDS for VMware vSphere 5.5 use intelligent network CoS marking and QoS policies to appropriately shape network traffic according to workload type and priority. With VMware vSphere 6.0, QoS is set to Default (Trust Host).

The vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS VIC hardware, so VMware vSphere ESXi has a predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the vNICs to ensure consistency in case the uplinks need to be migrated to the VMware VDS after manufacturing.

Virtual networks (VMware vSphere 6.5)

Virtual networking in the AMP-2S uses standard virtual switches; the Cisco Nexus 1000V is not currently supported on the VMware vSphere 6.5 vCSA. Instead, virtual networking is managed by a VMware VDS with comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of a VMware Standard Switch and a VMware VDS and uses a minimum of four uplinks presented to the hypervisor. The vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS VIC hardware, so VMware vSphere ESXi has a

predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the vNICs to ensure consistency in case the uplinks need to be migrated to the VMware VDS after manufacturing.

VMware vCenter Server (VMware vSphere 5.5 and 6.0)

VMware vCenter Server is the central management point for the hypervisors and VMs. VMware vCenter Server is installed on a 64-bit Windows Server. VMware Update Manager (VUM) is installed on a 64-bit Windows Server and runs as a service to assist with host patch management.

AMP-2

VMware vCenter Server provides the following functionality:
- Cloning of VMs
- Template creation
- VMware vMotion and VMware Storage vMotion
- Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters

VMware vCenter Server also provides monitoring and alerting capabilities for hosts and VMs. System administrators can create and apply alarms to all managed objects in VMware vCenter Server, including:
- Data center, cluster, and host health, inventory, and performance
- Datastore health and capacity
- VM usage, performance, and health
- Virtual network usage and health

Databases

The back-end database that supports VMware vCenter Server and VUM is Microsoft SQL Server.

Authentication

The VMware Single Sign-On (SSO) Service integrates multiple identity sources, including Active Directory, OpenLDAP, and local accounts, for authentication. VMware SSO is available in VMware vSphere 5.x and later. VMware vCenter Server, Inventory Service, Web Client, SSO, Core Dump Collector, and VUM run as separate Windows services, each of which can be configured to use a dedicated service account depending on security and directory services requirements.

Supported features

Dell EMC supports the following VMware vCenter Server features:
- VMware SSO Service (version 5.x and later)
- VMware vSphere Web Client (used with Vision Intelligent Operations)

- VMware vSphere Distributed Switch (VDS)
- VMware vSphere High Availability
- VMware DRS
- VMware Fault Tolerance
- VMware vMotion (Layer 3 capability available for compute resources in version 6.0 and later)
- VMware Storage vMotion
- Raw device mappings
- Resource pools
- Storage DRS (capacity only)
- Storage-driven profiles (user-defined only)
- Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)
- VMware Syslog Service
- VMware Core Dump Collector
- VMware vCenter Web Services

Related information: Management components overview

VMware vCenter Server (VMware vSphere 6.5)

VMware vCenter Server is the central management point for the hypervisors and VMs. VMware vCenter Server 6.5 resides on the VMware vCenter Server Appliance (vCSA). By default, VMware vCenter Server is deployed using the VMware vCSA. VMware Update Manager (VUM) is fully integrated with the VMware vCSA and runs as a service to assist with host patch management.

AMP-2

AMP-2 and the Converged System have a single VMware vCSA instance. VMware vCenter Server provides the following functionality:
- Cloning of VMs
- Creating templates
- VMware vMotion and VMware Storage vMotion

VMware vCenter Server provides monitoring and alerting capabilities for hosts and VMs. Converged System administrators can create and apply the following alarms to all managed objects in VMware vCenter Server:

Data center, cluster, and host health, inventory, and performance
Data store health and capacity
VM usage, performance, and health
Virtual network usage and health

Databases

The VMware vCSA uses the embedded PostgreSQL database. VMware Update Manager and the VMware vCSA share the same PostgreSQL database server but use separate PostgreSQL database instances.

Authentication

Converged Systems support the VMware Single Sign-On (SSO) Service, which can integrate multiple identity sources, including Active Directory, OpenLDAP, and local accounts, for authentication. VMware vSphere 6.5 includes a pair of VMware Platform Services Controller (PSC) Linux appliances to provide the VMware SSO service. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and Update Manager run as separate services. Each service can be configured to use a dedicated service account depending on the security and directory services requirements.
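To make the monitoring and alerting model above concrete, this read-only pyvmomi sketch lists the alarms currently triggered across the inventory. Connection details are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    root = si.RetrieveContent().rootFolder
    # triggeredAlarmState on the root folder aggregates active alarms for
    # the objects beneath it (data centers, clusters, hosts, data stores, VMs).
    for state in root.triggeredAlarmState:
        print(state.time, state.entity.name,
              state.alarm.info.name, state.overallStatus)
finally:
    Disconnect(si)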

Supported features

Dell EMC supports the following VMware vCenter Server features:

VMware SSO Service
VMware vSphere Platform Services Controller
VMware vSphere Web Client (used with Vision Intelligent Operations)
VMware vSphere Distributed Switch (VDS)
VMware vSphere High Availability
VMware DRS
VMware Fault Tolerance
VMware vMotion: Layer 3 capability available for compute resources (version 6.0 and higher)
VMware Storage vMotion
Raw Device Mappings
Resource Pools
Storage DRS (capacity only)
Storage-driven profiles (user-defined only)
Distributed Power Management (up to 50 percent of VMware vSphere ESXi hosts/blades)
VMware Syslog Service
VMware Core Dump Collector
VMware vCenter Web Client

Management

Advanced Management Platforms (AMPs) are available in multiple configurations that use their own resources to run workloads without using the resources of the Converged System.

Management components overview

AMP platforms provide a single management point for Converged Systems.

AMP overview

AMP-2 provides a single management point for Converged Systems and manages only a single platform. For Converged Systems, AMP-2 provides the ability to:

Run the core and Dell EMC optional management workloads
Monitor and manage health, performance, and capacity
Provide network and fault isolation for management
Eliminate resource overhead

The core management workload is the minimum management software required to install, operate, and support the Converged System. This includes all hypervisor management, element managers, virtual networking components, and Vision Intelligent Operations software.

The Dell EMC optional management workload consists of non-core management workloads supported and installed by Dell EMC whose primary purpose is to manage components in the Converged System. The list includes, but is not limited to, security and storage management tools such as InsightIQ for Isilon and VMware vCNS appliances (vShield Edge/Manager).

Management hardware components

AMPs are available in multiple configurations that use their own resources to run workloads without consuming resources on the Converged System.

AMP-2 hardware components

The following list shows the operational relationship between the Cisco UCS servers and VMware vSphere versions:

Converged Systems with Cisco UCS C240 M3 servers are configured with VMware vSphere 5.5 or 6.0.
Converged Systems with Cisco UCS C2x0 M4 servers are configured with VMware vSphere 5.5 or 6.x.

AMP-2 does not support 40 Gb connectivity.

The following table describes the various AMP options:

AMP option: AMP-2HA Baseline
Number of Cisco UCS C2x0 servers: 2
Storage: FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache for VM data stores
Description: Provides HA/DRS functionality and shared storage using the VNXe3200.

AMP option: AMP-2HA Performance
Number of Cisco UCS C2x0 servers: 3
Storage: FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache for VM data stores
Description: Adds additional compute capacity with a third server and storage performance with the inclusion of FAST VP.

AMP option: AMP-2S*
Number of Cisco UCS C2x0 servers: 2-12
Storage: FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache and FAST VP for VM data stores
Description: Provides a scalable configuration using Cisco UCS C220 servers and additional storage expansion capacity.

*AMP-2S is supported on Cisco UCS C220 M4 servers with VMware vSphere 5.5 or 6.x.

Management software components (vSphere 5.5 and 6.0)

The Advanced Management Platforms are delivered with specific installed software components that depend on the selected Release Certification Matrix (RCM).

AMP-2 software components

The following components are installed:

Microsoft Windows Server 2008 R2 SP1 Standard x64
Microsoft Windows Server 2012 R2 Standard x64
VMware vSphere Enterprise Plus
VMware vSphere Hypervisor ESXi
VMware Single Sign-On (SSO) Service
VMware vSphere Platform Services Controller
VMware vSphere Web Client Service
VMware vSphere Inventory Service

VMware vCenter Server Appliance. For VMware vSphere 6.0, the preferred instance is created using the VMware vCenter Server Appliance; an alternate instance may be created using the Windows version. Only one of these options can be implemented. For VMware vSphere 5.5, only VMware vCenter Server on Windows is supported.
VMware vCenter Database using Microsoft SQL Server 2012 Standard Edition
VMware vCenter Update Manager (VUM), integrated with the VMware vCenter Server Appliance
VMware vSphere Client

For VMware vSphere 6.0, the preferred configuration (with the VMware vCenter Server Appliance) embeds the SQL Server on the same VM as VUM. The alternate configuration leverages a remote SQL Server with VMware vCenter Server on Windows. Only one of these options can be implemented.

VMware vSphere Syslog Service (optional)
VMware vSphere Core Dump Service (optional)
VMware vSphere Distributed Switch (VDS)
PowerPath/VE Management Appliance (PPMA)
Secure Remote Support (SRS)
Array management modules, including but not limited to the XtremIO Management Server (XMS)
Cisco Prime Data Center Network Manager and Device Manager (optional)
RecoverPoint management software, which includes the management application and deployment manager

Management software components (VMware vSphere 6.5)

AMP-2 is delivered with specific installed software components that depend on the selected Release Certification Matrix (RCM).

The following components are installed:

Microsoft Windows Server 2008 R2 SP1 Standard x64
Microsoft Windows Server 2012 R2 Standard x64
VMware vSphere Enterprise Plus
VMware vSphere Hypervisor ESXi
VMware Single Sign-On (SSO) Service
VMware vSphere Platform Services Controller
VMware vSphere Web Client Service
VMware vSphere Inventory Service
VMware vCenter Server Appliance. For VMware vSphere 6.5, only the VMware vCenter Server Appliance deployment model is offered.
VMware vCenter Update Manager (VUM), integrated with the VMware vCenter Server Appliance
VMware vSphere Client (HTML5), which has a subset of the features available in the VMware vSphere Web Client. The legacy C# client (also known as the thick client, desktop client, or vSphere Client) is no longer available with the VMware vSphere 6.5 release.
VMware Host Client (HTML5 based)
VMware vSphere Syslog Service (optional)
VMware vSphere Core Dump Service (optional)
VMware vSphere Distributed Switch (VDS)
PowerPath/VE Management Appliance (PPMA)
Secure Remote Support (ESRS)
Array management modules, including but not limited to the XtremIO Management Server (XMS)
Cisco Prime Data Center Network Manager and Device Manager (DCNM) (optional)
RecoverPoint management software, which includes the management application and deployment manager
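Either RCM-driven software stack can be spot-checked from the management network. This minimal pyvmomi sketch (with placeholder connection details) reports the vCenter Server product string and build, which can help when verifying a deployment against its RCM.

import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    about = si.content.about
    # For example: "VMware vCenter Server 6.5.0 build-NNNNNNN" on a vCSA.
    print(about.fullName)
    print("API version:", about.apiVersion, "| build:", about.build)
finally:
    Disconnect(si)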

Management network connectivity

The Converged System offers several types of AMP-2 network connectivity and server assignments.

AMP-2S network connectivity on Cisco UCS C220 M4 servers with VMware vSphere 6.0

The following illustration shows the network connectivity for AMP-2S with the Cisco UCS C220 M4 servers:

AMP-2S server assignments on Cisco UCS C220 M4 servers with VMware vSphere 6.0

The following illustration shows the VM server assignment for AMP-2S on Cisco UCS C220 M4 servers, using the default VMware vCenter Server configuration with the VMware vCenter Server Appliance 6.0 and VMware Update Manager with an embedded Microsoft SQL Server 2012 database:

The following illustration shows the VM server assignment for AMP-2S on Cisco UCS C220 M4 servers with the alternate VMware vCenter Server configuration, which uses VMware vCenter Server 6.0 on Windows, a database server, and VMware Update Manager:

AMP-2S on Cisco UCS C220 M4 servers (vSphere 6.5)

The following illustration provides an overview of the network connectivity for AMP-2S on the Cisco UCS C220 M4 servers:

* No default gateway

The default VMware vCenter Server configuration contains the VMware vCenter Server 6.5 Appliance with integrated VMware Update Manager. Beginning with vSphere 6.5, Microsoft SQL Server is no longer used, since vCenter Server and VUM utilize the PostgreSQL database embedded within the vCSA.

The following illustration provides an overview of the VM server assignment for AMP-2S on Cisco UCS C220 M4 servers with the default configuration:

AMP-2HA network connectivity on Cisco UCS C240 M3 servers

The following illustration shows the network connectivity for AMP-2HA on Cisco UCS C240 M3 servers:



AMP-2HA server assignments with Cisco UCS C240 M3 servers

The following illustration shows the VM server assignment for AMP-2HA with Cisco UCS C240 M3 servers:

Sample configurations

Cabinet elevations vary based on the specific configuration requirements. Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.

Sample VxBlock and Vblock Systems 540 with 20 TB XtremIO

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.

Cabinet 1

Cabinet 2

Sample VxBlock and Vblock Systems 540 with 40 TB XtremIO

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.

Cabinet 1

Cabinet 2

Cabinet 3
