Techtorial Datová Centra (Data Center Techtorial)


Techtorial Datová Centra (Data Center Techtorial) - CiscoEXPO 2010

Welcome to the DC Techtorial
11:00-12:00 UCS deep dive (30 min), Tomáš Michaeli
12:00-12:30 UCS API (30 min), David Pasek
Lunch
13:30-14:00 Unified fabric and related standards (30 min), Jarry Pilař
14:00-14:30 Server farm design (30 min), Martin Diviš
14:30-15:00 VMware View case study (30 min), DiData/Macha
15:00-15:30 EMC Ionix (30 min), EMC/David Hanacek
15:30-16:00 L2 extension, DCI, OTV (30 min), Miroslav Brzek
16:00-16:30 Nexus switches (30 min), Tomáš Novák

Unified Computing System Deep Dive (T-DC1 / L3) - Tomáš Michaeli, tomichae@cisco.com

Building Blocks
UCS Manager: embedded, manages the entire system
UCS Fabric Interconnect: 20-port or 40-port 10Gb FCoE
UCS Fabric Extender: remote line card
UCS Blade Server Chassis: flexible bay configurations
UCS Blade Server: industry-standard architecture
UCS Virtual Adapters: choice of multiple adapters

Enclosure, Fabric Switch, and Blades (front view)
6U enclosure
Redundant, hot-swap power supplies (N+1, N+N, grid redundant)
Redundant, hot-swap fans
Hot-swap SAS drives
1U or 2U fabric switch (up to two)
Half-width server blades: up to eight per enclosure
Full-width server blades: up to four per enclosure

Enclosure and Fabric Switch (rear view)
1U or 2U fabric switch with 10 GigE ports and an expansion bay
Redundant, hot-swap fan modules
Redundant, hot-swap fabric extenders
6U enclosure, power connections, expansion module

System Components (diagram)
Fabric Interconnect: 40 or 20 10GE ports plus 2 or 1 GEM slots; connects to the LAN, SAN, and management network
Compute Chassis: up to 8 half-width blades or 4 full-width blades
Fabric Extender: host-to-uplink traffic engineering, up to 80 Gb per chassis, flexible bandwidth allocation
Adapter: virtualized adapter for single-OS and hypervisor systems
Compute Blades: half-slot and full-slot

Wire for Bandwidth, Not Connectivity
Uplinks: 20 Gb/s, 40 Gb/s, or 80 Gb/s
Wire-once architecture
All links can be active all the time
Policy-driven bandwidth allocation
Virtual interface granularity

IOM connections: chassis backplane view
Diagram: each of blades 1-8 has a path A to IOM 1 and a path B to IOM 2 (HA across fabrics A and B)
Half-width servers: 1 mezzanine card (one A and one B path)
Full-width servers: 2 mezzanine cards (two A and two B paths)

IOM-to-FI connectivity options
Diagram: each IOM connects to its fabric interconnect (Fabric A or Fabric B) with 1, 2, or 4 links; a chassis can be wired to one or to both fabrics

Actual IOM-to-FI pinning scheme
1 link: all server slots (1-8) are pinned to that single uplink.
2 links: slots 1, 3, 5, 7 use uplink 1; slots 2, 4, 6, 8 use uplink 2.
4 links: slots 1 and 5 use uplink 1; slots 2 and 6 use uplink 2; slots 3 and 7 use uplink 3; slots 4 and 8 use uplink 4.
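
The static pinning above is deterministic, so it can be written down as a short function. A minimal illustrative sketch in Python (not any Cisco tool) of the slot-to-uplink mapping described on this slide:

# Illustrative sketch of the static IOM-to-FI pinning described above.
def pinned_uplink(slot, links):
    """Return the 1-based uplink used by a blade slot for 1, 2, or 4 IOM-to-FI links."""
    if links not in (1, 2, 4):
        raise ValueError("the 2104 pinning scheme is defined for 1, 2, or 4 links")
    if not 1 <= slot <= 8:
        raise ValueError("blade slots are numbered 1-8")
    return (slot - 1) % links + 1

# Examples: with two links, slot 7 uses uplink 1; with four links, slot 6 uses uplink 2.
assert pinned_uplink(7, 2) == 1
assert pinned_uplink(6, 4) == 2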

Unified Computing System Manager
Embedded device manager for the family of UCS components
Enables stateless computing via Service Profiles
Efficient scale: the same effort for 1 to 320 blades
APIs for integration with new and existing data center infrastructure

UCS Manager
Accessible through the GUI, CLI, custom portals or tools, and systems management software
Single point of management for the UCS system of components: adapters, blades, chassis, fabric extenders, fabric interconnects
Embedded device manager: discovery, inventory, configuration, monitoring, diagnostics, statistics collection
Coordinated deployment to managed endpoints
APIs for integration with new and existing data center infrastructure: SMASH-CLP, IPMI, SNMP
XML-based SDK for commercial and custom implementations
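
As an illustration of the XML-based SDK mentioned above, here is a minimal sketch of a session against the UCS Manager XML API. It assumes the /nuova endpoint and the aaaLogin and configResolveClass methods; the host name, credentials, and string-based parsing are placeholders, and real code would use the Cisco-provided SDK and a proper XML parser.

# Hedged sketch of a UCS Manager XML API session; host, credentials, and
# parsing are placeholders, not production code.
import urllib.request

UCSM_URL = "https://ucsm.example.com/nuova"   # hypothetical UCS Manager address

def xml_call(body):
    req = urllib.request.Request(UCSM_URL, data=body.encode(),
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Log in and obtain a session cookie.
login = xml_call('<aaaLogin inName="admin" inPassword="password"/>')
cookie = login.split('outCookie="')[1].split('"')[0]

# Query the inventory of blades (classId computeBlade).
blades = xml_call('<configResolveClass cookie="%s" classId="computeBlade" '
                  'inHierarchical="false"/>' % cookie)
print(blades)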

UCS Fabric Interconnect Portfolio (UCS 6100 Family)
20-Port Fabric Interconnect: 20 fixed 10GE/FCoE ports, 1 expansion module slot
40-Port Fabric Interconnect: 40 fixed 10GE/FCoE ports, 2 expansion module slots
Expansion modules:
Fibre Channel: 8 ports 1/2/4G FC, or 6 ports 2/4/8G FC
FC + Ethernet: 4 ports 10GbE/FCoE plus 4 ports 1/2/4G FC
Ethernet: 6 ports 10GE/FCoE

UCS 2100 Series Fabric Extenders (UCS 2104)
Connects the UCS blade chassis to the fabric interconnect
Four 10 Gigabit Ethernet, FCoE-capable SFP+ ports
Up to 2 fabric extenders per chassis for redundancy and up to 80 Gbps of bandwidth per chassis
Built-in chassis management functionality
Hardware-based support for Cisco VN-Link technology
Fully managed by UCS Manager through the fabric interconnect

Midplane and Fabric Extender
High-performance midplane: 2x 40G total bandwidth per half slot
10GBASE-KR running today: 8 lanes per half slot, 16 lanes per full slot
Redundant data and management paths
Supports auto-discovery of all components
Fabric extender: dynamically manages bandwidth, from 10 Gb to 80 Gb per chassis
FCoE from the blade to the fabric switch

UCS 5108 Blade Chassis
Up to 8 half-slot blades or 4 full-slot blades
4x power supplies, N+N grid redundant
8x fans included
2x UCS 2104 Fabric Extenders
All items hot-pluggable
Up to 40 chassis per UCS system

Blade Overview: UCS B200 M1 and UCS B250 M1
Common attributes: 2x Intel Nehalem-EP processors, 2x SAS hard drives (optional), blade service processor, blade and HDD hot-plug support, stateless blade design, 10Gb CNA and 10GbE adapter options
Differences (B250 M1 vs. B200 M1): 4x the memory, 2x the I/O bandwidth
UCS B200 M1: half-width blade, 12 DIMM slots, 1 dual-port adapter
UCS B250 M1: full-width blade, 48 DIMM slots, 2 dual-port adapters

B-Series Family
UCS B200 M1 General-Purpose Blade Server: high-density server with balanced compute performance and I/O flexibility; half width, 2 sockets / 8 cores, Intel Xeon 5500, 12 DIMM (96 GB), 2 SFF SAS/SATA, 1 mezzanine
UCS B250 M1 Extended Memory Blade Server: memory-intensive server for virtualized and large-data-set workloads; full width, 2 sockets / 8 cores, Intel Xeon 5500, 48 DIMM (384 GB), 2 SFF SAS/SATA, 2 mezzanine

B-Series Family
UCS B200 M2 General-Purpose Blade Server: high-density server with balanced compute performance and I/O flexibility; half width, 2 sockets / 12 cores, Intel Xeon 5600, 12 DIMM (96 GB), 2 SFF SAS, 1 mezzanine
UCS B250 M2 Extended Memory Blade Server: memory-intensive server for virtualized and large-data-set workloads; full width, 2 sockets / 12 cores, Intel Xeon 5600, 48 DIMM (384 GB), 2 SFF SAS, 2 mezzanine
UCS B440 M1 High-Performance Blade Server: compute- and memory-intensive server for enterprise-critical workloads; full width, 4 sockets / 32 cores, Intel Xeon 7500, 32 DIMM (256 GB), 4 SFF SAS/SATA, 2 mezzanine

C-Series Family
UCS C250 M1: Intel Xeon 5500, 2RU, 48 DIMM (384 GB), 8 SFF SAS/SATA drives, 5 PCIe
UCS C210 M1: Intel Xeon 5500, 2RU, 12 DIMM (96 GB), 16 SFF SAS/SATA drives, 5 PCIe
UCS C200 M1: Intel Xeon 5500, 1RU, 12 DIMM (96 GB), 4x 3.5" SAS/SATA drives, 2 PCIe

C-Series Family
UCS C200 M2: high-density server with balanced compute performance and I/O flexibility; Intel Xeon 5600, 1RU, 12 DIMM (96 GB), 4x 3.5" SAS/SATA, 2 PCIe
UCS C210 M2: general-purpose server for workloads requiring economical, high-capacity internal storage; Intel Xeon 5600, 2RU, 12 DIMM (96 GB), 16 SFF SAS/SATA, 5 PCIe
UCS C250 M2: high-performance, memory-intensive server for virtualized and large-data-set workloads; Intel Xeon 5600, 2RU, 48 DIMM (384 GB), 8 SFF SAS/SATA, 5 PCIe
UCS C460 M1: compute- and memory-intensive server for enterprise-critical workloads; Intel Xeon 7500, 4RU, 64 DIMM (512 GB), 12 SFF SAS/SATA, 10 PCIe

UCS B200 M1 block diagram: two Intel Xeon 5500 processors, IOH and ICH, BIOS, SAS controller on x4 PCIe, mezzanine card on x16 PCIe, service processor

Service Processor
Based on ServerEngines Pilot II
USB 2.0 and 1.1 interfaces
2x 10/100 Ethernet interfaces
Integrated graphics, Matrox G200e compatible
IPMI 2.0 compliant BMC
64 MB ECC DDR2 memory
Single-chip, IP-based server management
Provides pre-OS management access to the blade

Storage Processor
LSI Logic 1064e, based on the Fusion MPT architecture
PCIe to 4-port 3 Gb/s SAS controller
Support for 1.5 and 3 Gb/s SAS and SATA transfer rates
x4 Gen1 PCIe interface to the CPU/memory complex
Integrated mirroring and striping

Three-Pronged Adapter Strategy
Virtualization: virtual-machine aware, for virtualization and consolidation
Compatibility: existing driver stacks, converged network adapters (CNA)
Cost: free SAN access for any Ethernet-equipped host
Ability to mix and match adapter types within a system
Automatic discovery of component types

Oplin 10GbE Mezzanine Card
PCIe x8, Intel VT-c, I/OAT
Measured power of 13 watts
ASIC latency of 10-12 µs
VM Device Queues (VMDq) virtualization support: 16 queues, sorting based on MAC address and 802.1Q tag
I/O enhancements: up to 32 TX and 64 RX queues per port, priority grouping (802.1p)
PCIe v1.1 (2.5 Gbps) host interface
Two 1/10 GbE MACs with XAUI/10GBASE-KX4 connections to the UCS backplane; I2C to the BMC

What is Menlo?
Cisco ASIC: 7.7M gates, 9.4 Mb SRAM (including 512K of CPU SRAM), embedded MIPS 24K at 350 MHz
Interfaces: two 10G to a third-party Ethernet NIC, two 1/2/4G to a third-party FC HBA, two 10G to the Ethernet network, plus other miscellaneous interfaces
No changes to the customer's software or drivers
I/O consolidation, FCoE, Priority Flow Control

Palo-Based Mezzanine Card
Adapter designed for both single-OS and VM-based deployments
Network interface virtualisation support; VN-Link-capable hypervisor integration
PCIe standard compliant
Power of 18 watts
Cut-through architecture
High performance: 2x 10 Gb, low latency, 600K IOPS
The OS sees up to 128 vNICs (Ethernet vNICs and FC vHBAs)
Two ports to the backplane; the switch and network see all vNICs and vHBAs
Rate limiters and resource allocation per vNIC and per CoS
Scalability (RSS within a vNIC)

Virtual interfaces
Diagram: the OS on blade 1 sees southbound (OS-side) interfaces eth0, eth1, hba0, and hba1
A virtual interface tag associates frames with a VIF
The external mezzanine-card 10GE ports connect through IOM 1 and IOM 2 (Eth X/Y/Z interfaces on the IOM-to-FI links) to VIFs on Fabric A and Fabric B

Adapter Fault Tolerance
Menlo- and Palo-based mezzanine cards can automatically be rerouted to the other IOM if the active path fails, when configured to do so.
A given Ethernet vNIC uses one and only one active path at any given time (either through switch A or switch B, but never both at once).

Backplane-based adapter failover
As mentioned before, the chassis backplane provides Menlo and Palo adapters with both an active and a standby path for Ethernet vNICs.
Logical representation of a profile using the two vNICs provided by Menlo: vNIC 0 is active through IOM 1 with its standby path through IOM 2, while vNIC 1 is active through IOM 2 with its standby path through IOM 1.
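
To make the active/standby behaviour concrete, here is a small illustrative model in plain Python (not Cisco software) of per-vNIC path selection with fabric failover: each vNIC has a preferred fabric and falls back to the other fabric only when its preferred path is down and failover is enabled.

# Illustrative model of per-vNIC fabric failover: one active path at a time,
# with the standby path used only when the preferred fabric is down.
from dataclasses import dataclass

@dataclass
class Vnic:
    name: str
    preferred_fabric: str   # "A" or "B"
    failover: bool = True   # the "if configured to do so" from the slide above

def active_fabric(vnic, fabric_up):
    other = "B" if vnic.preferred_fabric == "A" else "A"
    if fabric_up[vnic.preferred_fabric]:
        return vnic.preferred_fabric
    if vnic.failover and fabric_up[other]:
        return other
    return None  # no usable path

# vNIC 0 prefers fabric A and vNIC 1 prefers fabric B, as in the Menlo example.
vnics = [Vnic("vnic0", "A"), Vnic("vnic1", "B")]
print({v.name: active_fabric(v, {"A": False, "B": True}) for v in vnics})
# With fabric A down, both vNICs end up active on fabric B.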

Connecting the UCS
Diagram: the fabric switches connect the enclosure to the LAN, to the SAN switches, and to an out-of-band 10/100/1000 management network
LAN: end-host mode (presents a host to the network, replaces today's server) or switch mode (spanning-tree switch, replaces today's access switch)
Fibre Channel: end-host mode (presents a host to the fabric, replaces today's server)
Management: separate management network

QoS Architecture
Diagram: LAN and SAN fabric switches, fabric extenders, adapters, and compute blades
No packet drops within the array
The largest buffers are in switch and host memory, so congestion is pushed to the edges
Priority Flow Control (PFC) is used to ensure that packet drops occur at the vNIC or at the switch
All traffic in a UCS system belongs to one of six system classes
Four classes are user configurable; the other two are for FCoE and standard Ethernet
QoS parameters can be configured at a per-system-class level or at a per-vNIC level
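
A small illustrative sketch in plain Python (not a UCS configuration format) of how per-class bandwidth weights could translate into a guaranteed share of a 10 Gb/s link under congestion; the class names and weights below are made up for the example and are not UCS Manager defaults.

# Illustrative only: weighted shares of a 10 Gb/s link across six system classes.
LINK_GBPS = 10.0

system_classes = {
    "platinum": 4,       # four user-configurable classes ...
    "gold": 3,
    "silver": 2,
    "bronze": 1,
    "fcoe": 4,           # ... plus the FCoE class
    "best-effort": 1,    # ... and standard Ethernet
}

total_weight = sum(system_classes.values())
for name, weight in system_classes.items():
    share = LINK_GBPS * weight / total_weight
    print("%-12s guaranteed %.2f Gb/s under congestion" % (name, share))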

Linking a vNIC and a pin group
When creating a vNIC, you can attach it to any existing pin group.

Traffic Engineering
vNICs can be pinned to specific switches when created (with configurable failover to the other switch)
Depending on requirements, vNICs can be pinned to one switch or distributed evenly
Diagram: 2 fabric extenders in the chassis, each with 1 link to a switch; 2 switches, each with 1 connection to each FEX; blades 1 and 2 each have a Palo NIC with 3 vNICs; here the vNICs in system class C are all pinned to one switch

Traffic Engineering (continued)
Same topology as the previous slide; here the vNICs in system class C are distributed across both switches instead of being pinned to one.
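
A short illustrative sketch in plain Python (not a UCS feature or API) contrasting the two placement policies shown on these two slides: pinning every vNIC of a class to one switch versus spreading them evenly across both switches.

# Illustrative comparison of the two vNIC placement policies above.
from itertools import cycle

vnic_names = ["vnic-1", "vnic-2", "vnic-3"]

def pin_to_one(vnics, switch="Switch-1"):
    return {v: switch for v in vnics}

def distribute_evenly(vnics):
    switches = cycle(["Switch-1", "Switch-2"])
    return {v: next(switches) for v in vnics}

print(pin_to_one(vnic_names))        # all three vNICs on Switch-1
print(distribute_evenly(vnic_names)) # alternating Switch-1 / Switch-2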

Congestion Spread
Diagram (same topology as before): vNIC 1 in class C generates a large amount of traffic, causing class C congestion on the port going into switch 1
A pause frame is sent out for class C, and vNICs 1 and 2 in class C back up into host memory

EHM: no locally attached devices
In end-host mode there is no MAC learning on network ports; servers are pinned directly to network ports. As such, attaching a device (NAS, FC storage, etc.) locally does not work.

Reverting to Switching Mode

Why Switch Mode?
Locally attaching devices to FI ports becomes an option
Keep in mind, though, that the FCS release supports only 10GE connections; the Aptos release will allow ports to operate in 1GE mode
User familiarity with the well-known STP mode of operation
Provides a somewhat easier transition to UCS for certain customers

Port Profiles in UCS
vNICs are dynamic and malleable
Port profiles in UCS capture several vNIC properties: vNIC type, network configuration, QoS parameters, and security policies (post-FCS)
Example profile: Name HR, Type Ethernet, VLAN 5, Rate 10 Mbit/s
Port profiles can be exported to vCenter (VC) and used as port groups in the UCS DVS
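
As an illustration of the properties listed above, a minimal data-model sketch in plain Python of a port profile such as the HR example; the field names are hypothetical and do not correspond to the UCS Manager object model.

# Hypothetical data model for the port-profile properties listed above.
from dataclasses import dataclass

@dataclass
class PortProfile:
    name: str
    vnic_type: str        # e.g. "Ethernet"
    vlan: int
    rate_mbit: int        # per-vNIC rate limit in Mbit/s
    qos_class: str = "best-effort"

# The "HR" example from the slide, expressed in this sketch.
hr = PortProfile(name="HR", vnic_type="Ethernet", vlan=5, rate_mbit=10)
print(hr)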

UCS DVS: I/O profiles defined in UCS and applied in vCenter
Diagram: two VMware ESX servers running VMs on the UCS DVS (pass-through switching)
Port profiles (e.g. Web, Apps, HR, DB, Compliance) are defined in UCS
UCS exports the port profiles to vCenter; vCenter deploys VMs with UCS policies

UCS DVS: integrated mobility with VMotion
Diagram: VMs moving between two VMware ESX servers on the UCS DVS (pass-through switching), coordinated with vCenter
VN-Link property mobility: VMotion for the network
Ensures VM security and maintains connection state

With UCS in End-Host Mode (design 1)
Diagram: fabric interconnects A and B uplinked via port channels to core switch 1 and core switch 2
VSS/vPC on cores 1 and 2 does not bring value with end-host mode

With UCS in Switch Mode (design 2)
Diagram: fabric interconnects A and B uplinked via port channels to core switches 1 and 2 running VSS/vPC
In switch mode, VSS/vPC is critical to ensure a loop-free topology

A design that will not work in End-Host Mode
Diagram: core switch 1 (HSRP primary) and core switch 2 (HSRP secondary) connected to fabric interconnects A and B
The loop-free distribution/access design does not work here

End-Host Mode caveat: do not do this!
Diagram: the fabric interconnects are uplinked both to core switches 1 and 2 and to separate NFS LAN switches A and B; one port per FI is elected to receive ingress broadcast and multicast traffic for the entire fabric
Remember: only a single interface per FI is elected to receive ingress broadcast and multicast traffic
Disjoint L2 networks will cause issues; use switch mode here

EHM and SAN in the picture
Diagram: fabric interconnect A connects to the MDS SAN on the left, fabric interconnect B to the MDS SAN on the right, with both fabrics uplinked to core switches 1 and 2
