UCS Fabric Fundamentals

Hardware State Abstraction: the state abstracted from the hardware includes the UUID, MAC addresses, NIC firmware and settings, BIOS firmware and settings, boot order, BMC firmware, drive controller and drive firmware, WWN addresses, and HBA firmware and settings. Abstracting this state from the hardware is what defines a server's LAN connectivity, storage connectivity, and the placement of its OS and applications.

Agenda: Introduction, UCS Key Features, Hardware Overview, Ethernet Connectivity, Storage Connectivity, Demo.

More UCS sessions:
Wednesday - BRKCOM-1005 UCS Architecture Overview
Thursday - BRKCOM-1004 Automating UCS for Systems Administrators; BRKCOM-2002 UCS Supported Storage Architectures; BRKCOM-2003 UCS Networking Deep Dive including VM-FEX; BRKCOM-2008 UCS Firmware Management Architecture
Friday - BRKCOM-2001 UCS Management Architecture Deep Dive; BRKCOM-3001 Troubleshooting the Cisco UCS Compute Deployment

Introduction

Building Blocks of Cisco UCS: UCS Manager (embedded, manages the entire UCS domain); UCS Fabric Interconnect (10GE unified fabric switch); UCS Fabric Extender / I/O Module (remote line card); UCS Blade Server Chassis (flexible bay configurations); UCS Blade and Rack Servers (industry-standard x86, patented extended memory); UCS I/O Adapters (a choice of multiple adapters).

Logical Architecture (diagram): a clustered pair of fabric switches (FI A and FI B) provides uplink ports to SAN A, SAN B and the Ethernet networks, plus out-of-band management; server ports connect down to the fabric extenders (IOMs) in each blade chassis (Chassis 1 through 20); virtualised adapters (CNAs) sit on the half-width and full-width compute blades.

Software Architecture: UCS Manager (UCSM, also known as SAM) is layered into an external interface layer, the Data Management Engine (DME), application / endpoint gateways (AGs), and the managed endpoints (EPs).

Service Profiles: a logical container of server state information (server name, UUID, MAC and WWN addresses, boot information, LAN and SAN configuration, firmware), associated with a physical blade at run time. Profiles are user-defined; each profile can be created individually or generated from a template. Without profiles, blades are just anonymous hardware components.
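As a point of reference, a minimal UCS Manager CLI sketch of creating a service profile and giving it a vNIC (the names web-01 and eth0 are hypothetical; syntax follows the UCS Manager CLI configuration guide and may differ slightly between releases):

UCS-A# scope org /
UCS-A /org # create service-profile web-01 instance
UCS-A /org/service-profile* # create vnic eth0 fabric a
UCS-A /org/service-profile/vnic* # exit
UCS-A /org/service-profile* # commit-buffer

The identity values (UUID, MAC, WWN) are then drawn from pools or set manually inside the profile, rather than being taken from the blade itself.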

UCS Key Features

Cisco Unified Computing System key features: Stateless Computing (hardware abstraction, service profiles, just-in-time provisioning); Unified Fabric (consolidated I/O); UCS Manager (a single management domain); Virtual Adapters (virtualised I/O).

Stateless Computing / Hardware State Abstraction: enhanced server availability. In the example, the identity (UUID 56 4dcd3f 59 5b, MAC 08:00:69:02:01:FC, WWN 5080020000075740, boot order SAN then LAN) is defined on Chassis 1 / Blade 2, and the same identity can be moved intact to Chassis 8 / Blade 5.

Service Profiles benefits: consistent and simplified server deployment, pre-provisioning, repurposing, and disaster recovery. The profile contents (UUID, MAC, WWN, boot information, LAN and SAN configuration, firmware) are associated with the hardware at run time.

Unified Fabric / Consolidated I/O: for the same 160 blades, legacy blade vendors need separate LAN, SAN A/B and management networks, with multiple network devices and chassis management devices per chassis and convergence only inside each chassis; with UCS, a single pair of fabric interconnects carries LAN, SAN and management as true FCoE, so the whole domain needs only 2 network devices and 2 chassis management devices.

UCS Manager: Single Management Domain (diagram). In a legacy blade architecture, each enclosure (Enclosures 1 through 10, servers 1-160) carries its own Ethernet switches, FC switches and chassis managers (CMC/OA), with separate tools for multi-chassis server identity management (VCEM) and server health monitoring (SIM). With UCS, the pair of fabric interconnects provides multi-chassis server identity management, server health monitoring, and blade and chassis management for all 160 servers, which appear as one logical chassis.

UCS Manager: Single Management Domain (rack): all of the benefits of blades applied to rack servers: a single point of management, firmware management, and service profiles for stateless agility.

UCS Manager: Multi-Domain Management. Every multi-UCS management solution builds on the UCS XML API. Do it yourself: SDK and emulator, GoUCS, PowerShell and .NET (most flexible, most powerful). Open source: UCS Dashboard (flexible, leverages the power of the community). Cisco products: UCS Central, Cisco CIAC, Cisco NSM (Cisco support and roadmap). ISV partners: System Center plug-in, BMC integration, CA integration and others (multi-vendor support, incumbency, broadest use cases). See BRKCOM-1004 Automating UCS for Systems Administrators.

Cisco UCS Virtual Interface Card: a single x16 PCIe adapter that creates multiple PCIe Ethernet and FC interfaces (up to 256 vNICs/vHBAs on the 1280 VIC or 1240 plus expander), with fabric failover between the A and B sides (UCS 2208 IOMs), centralised management of the virtual interfaces from UCS Manager, and virtualisation-aware networking with best performance (VM-FEX).

VM-FEX (VMware ESX, RHEL KVM and Hyper-V). Emulated mode: each VM gets a dedicated PCIe device, a 12%-15% CPU performance improvement, the fabric appears as a distributed virtual switch to the hypervisor, and vMotion is supported. PCIe pass-through / VMDirectPath mode: co-exists with standard (emulated) mode, bypasses the hypervisor layer for a 30% improvement in I/O performance, still appears as a distributed virtual switch to the hypervisor, is currently supported with ESX 5.0+, Hyper-V 2012 and RHEL KVM 6.3, and vMotion is supported.

Hardware Overview

Unified Ports. Customer benefits: simplify the fabric purchase by removing port-ratio estimates and increase design flexibility. Feature details: any port can be configured as Ethernet (1/10 Gigabit with SFP+), FCoE, or native FC (8/4/2/1 G with FC optics); this applies to all ports on the UCS 6200 Series and the 16-port expansion module (E16UP); existing Ethernet / Fibre Channel SFP and SFP+ transceivers are used; compared with the 6100 Series, Fibre Channel is standard and there is no requirement to buy an FC module. A unified port carries either native Fibre Channel or lossless Ethernet (1/10 GbE, FCoE, iSCSI, NAS).

UCS 6248 Fabric Interconnect. Rear panel: 32 fixed unified ports (1/10 GE or 1/2/4/8 G FC), one expansion module (GEM) slot, fabric interconnect cluster connectivity ports, and 10/100/1000 out-of-band management and console ports. Front panel: N+N redundant fan modules and N+N power supplies.

UCS FI 6296 rear panel: 48 fixed unified ports (1/10 GE or 1/2/4/8 G FC) and three expansion module slots.

UCS FI 6296 front panel: 10/100/1000 out-of-band management port, console, USB flash, N+N power supplies, and N+1 redundant fans (four fan modules).

UCS 5108 Blade Chassis: up to 8 half-slot blades or 4 full-slot blades, 4 power supplies (N+N grid-redundant, AC or DC), 8 fans included, and 2 UCS Fabric Extenders; all items are hot-pluggable.

UCS 5108 Blade Chassis parts: a 6U, 19-inch rack chassis. Rear: 2 fabric extenders (I/O modules), 8 fan modules and 4 power connectors. Front: 4 to 8 blades and 4 power supplies.

UCS 5108 Blade Chassis backplane: blade connectors, PSU connectors and I/O module connectors, providing redundant data (40 G) and management paths.

UCS 2204/2208 Fabric Extender (I/O Module): connects the UCS blade chassis to the Fabric Interconnect over 10 Gigabit Ethernet, FCoE-capable SFP+ ports; up to 2 fabric extenders per chassis for redundancy and up to 160 Gbps of bandwidth per chassis; built-in chassis management functionality; fully managed by UCS Manager through the Fabric Interconnect; no local switching.

Compute Servers: B-Series. Scale-out: UCS B22 M3 (web/IT infrastructure, proxy/caching). Price/performance optimised: UCS B200 M3 (IT/web infrastructure, distributed database, ERP and CRM). RAS and performance optimised: UCS B420 M3 (virtualisation, VDI, database, ERP and CRM). Enterprise class: UCS B230 M2 (database and virtualisation). Mission-critical performance optimised: UCS B440 M2 (database consolidation, virtualisation).

Compute Servers: C-Series. Scale-out: UCS C22 M3 (web/IT infrastructure, proxy/caching) and UCS C24 M3 (big data, storage server). Price/performance optimised: UCS C220 M3 (distributed database, middleware, collaboration) and UCS C240 M3 (data warehouse, middleware, collaboration, big data). RAS and performance optimised: UCS C420 M3 (virtualisation, database and high-end HPC). Mission-critical performance optimised: UCS C260 M2 and UCS C460 M2 (database consolidation, virtualisation).

Virtual Interface Card 1200 family: B-Series 1280 and 1240; C-Series 1225 and P81E. Host connectivity is PCIe Gen2 x16; the hardware is capable of 256 PCIe devices (OS restrictions apply); PCIe virtualisation is OS-independent (same as the M81KR); fabric failover is supported on B-Series; up to dual 4x10 GE port channels to a single server slot. See BRKCOM-1005 UCS Architecture Overview.

Ethernet Connectivity

Block diagram, upstream connectivity: a pair of UCS 6248 fabric interconnects (32 fixed SFP+ ports plus a 16-port expansion module each) connects to the 2208XP IO modules in the blade chassis; across the midplane, the 1280 VIC mezzanine attaches to the server blade's IOH and CPUs over PCIe Gen2 x16.

Switching Modes: End Host Mode. UCS connects to the LAN like a server, not like a switch: each server vNIC is pinned to an uplink port, and no Spanning Tree Protocol is run, which reduces control-plane load on the FI and simplifies upstream connectivity. The FI maintains a MAC table for its servers only, easing MAC table sizing in the access layer. Multiple uplinks can be active per VLAN, doubling effective bandwidth versus STP; loops are prevented by blocking uplink-to-uplink switching; the mode is completely transparent to the upstream LAN; and traffic between servers on the same VLAN is switched locally.

End Host Mode: Pinning. Dynamic pinning: server vNICs are pinned to an uplink port or port channel automatically. Static pinning: specific pin groups are created and associated with adapters (for example, a pin group named Oracle defined on FI A and applied to a server's vNIC); static pinning allows traffic management where it is required for certain applications or servers.
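To see where each vNIC has been pinned, a minimal sketch using the NX-OS shell on the fabric interconnect (command names as used in Cisco UCS troubleshooting guides; output format varies by release):

UCS-A# connect nxos a
UCS-A(nxos)# show pinning server-interfaces
UCS-A(nxos)# show pinning border-interfaces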

End Host Mode: upstream connectivity (diagram). Without vPC, each FI builds its own port channels to the upstream Nexus 7000/5000 switches; with vPC, each FI's port channel terminates as a vPC spanning both upstream switches, which are joined by a vPC peer link and keepalive.

End Host Mode: Disjoint Layer 2 Networks. Customer benefit: the ability to support multiple disjoint Layer 2 networks upstream of UCS while staying in End Host Mode (for example production VLANs 10 and 20, public VLANs 31 and 32, backup VLANs 40 and 41). Feature details: static or dynamic vNIC pinning is based on the VLAN membership of the uplink; a VLAN can exist in only one disjoint L2 network (no overlap); a vNIC is mutually exclusive to one upstream L2 network (one L2 network per vNIC); more than two disjoint L2 networks per host are supported with the virtual interface card.

Switch Mode: the Fabric Interconnect behaves like a normal Layer 2 switch; spanning tree protocol runs on the uplink ports per VLAN, and MAC learning and aging happen on both the server and uplink ports as in a typical Layer 2 switch. Switch mode was required for certain application-specific scenarios: disjoint VLANs, direct-connect appliances, and Microsoft load-balancing clusters.

Block diagram: FEX 2204. Each 2204XP IO module has 4 network interfaces (NIFs) uplinked to its fabric interconnect (IOM A to FI A, IOM B to FI B, with an equal number of uplinks on each side) and 16 host interfaces (HIFs) as downlinks across the midplane to the blade mezzanine cards.

Block diagram: FEX 2208. Each 2208XP IO module has 8 network interfaces (NIFs) uplinked to its fabric interconnect and 32 host interfaces (HIFs) as downlinks across the midplane to the blade mezzanine cards.

Block diagram: FEX (generic). The IO module's network interfaces (NIFs) face the UCS 6200 fabric interconnect, and its host interfaces (HIFs) face the blades across the chassis midplane.

UCS 2200: Discrete Mode (Pinning). Server slots are pinned to individual IOM uplinks: with 1 link, slots 1-8 share the single uplink; with 2 links, uplink 1 carries slots 1, 3, 5, 7 and uplink 2 carries slots 2, 4, 6, 8; with 4 links, uplink 1 carries slots 1 and 5, uplink 2 slots 2 and 6, uplink 3 slots 3 and 7, and uplink 4 slots 4 and 8.

UCS 2208: Discrete Mode (Pinning). With 8 links, each server slot is pinned to its own uplink (slot 1 to uplink 1 through slot 8 to uplink 8). Discrete mode is not supported with 3, 5, 6 or 7 uplinks, and adding more links was disruptive.

UCS 2200: Port Channel. Customer benefits: resilience and flexibility. Feature details: up to 8 x 10 Gb of aggregated bandwidth to each fabric in a chassis (80 Gb per fabric with the 2208, 40 Gb with the 2204); port channel is a user-configurable choice, with discrete as the default mode; adding links does not require a chassis re-acknowledge; any number of links from 1 to 8 is supported in the port channel; traffic is load-balanced across the members.
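A minimal UCS Manager CLI sketch of selecting port-channel mode for the chassis/IOM links via the chassis discovery policy (syntax follows the UCS Manager CLI configuration guide; scope and option names may vary by release):

UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy # set action 4-link
UCS-A /org/chassis-disc-policy # set link-aggregation-pref port-channel
UCS-A /org/chassis-disc-policy # commit-buffer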

UCS 2200: Port Channel. Server slots are channelled across all uplinks: with 4 links, uplinks 1-4 each carry slots 1-8; with 8 links, uplinks 1-8 each carry slots 1-8.

UCSM C-Series Integration (Nexus 2232): requires UCS 2.1 and CIMC release 1.4(6) (required for the VIC 1225). With the default CIMC setting Shared-LOM-EXT, CIMC management traffic and LAN/FCoE data traffic share the adapter path through the Nexus 2232 to the UCS 6100 or 6200; dual-wire mode (CIMC management over the GE LOM) is still supported. Multiple VIC 1225 adapters are supported in the C240, C260, C420 and C460. Direct connection of the rack server to the FI is not supported.

Block diagram: fabric failover. The 1280 VIC presents a vNIC to the server blade; if the path through one 2208XP IO module and its UCS 6248 fabric interconnect fails, the vNIC fails over across the midplane to the other fabric.

End Host Mode: Fabric Failover for Ethernet. The fabric provides NIC failover capability, defined when creating a vNIC in the service profile, a function traditionally handled by a NIC bonding driver in the OS. It provides failover for both unicast and multicast traffic, works for any OS on bare metal, and is the recommended approach for bare-metal operating systems.
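A minimal UCS Manager CLI sketch of creating a vNIC with fabric failover enabled (fabric a-b); the profile name ESX-01 and vNIC name eth1 are hypothetical, and the syntax may vary slightly by release:

UCS-A# scope org /
UCS-A /org # scope service-profile ESX-01
UCS-A /org/service-profile # create vnic eth1 fabric a-b
UCS-A /org/service-profile/vnic* # commit-buffer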

Block diagram: VIC adapter port channel. The VIC 1200 forms a 4 x 10 Gbps EtherChannel to the 2208 IO modules (2 x 10 Gbps to the 2204); no user configuration is required. vNIC flows are load-balanced across the member links, so, for example, a 10 Gb FTP flow and a 10 Gb UDP flow can use different members, but each individual flow is limited to 10 Gb. Fabric failover remains available.

Block diagram: VM-FEX. The 1280 VIC (PCIe Gen2 x16) extends interfaces directly to virtual machines on the server blade through the 2200XP IO modules and UCS 6200 fabric interconnects. See BRKCOM-2003 UCS Networking Deep Dive including VM-FEX.

Storage Connectivity

SAN End Host (NPV) Mode: N-Port Virtualisation forwarding. The Fabric Interconnect operates in N_Port proxy mode: server-facing ports function as F_proxy ports, and each server vHBA is pinned, round robin, to an FC uplink in the same VSAN. The upstream NPIV-enabled SAN switch sees the Fabric Interconnect as an FC end host with many N_Ports and many FC IDs, which eliminates the FC domain on the Fabric Interconnect, simplifies multi-vendor interoperation and management, and presents multiple FC end nodes to one F_Port of the FC switch. One VSAN is carried per F_Port in the multi-vendor case; F_Port trunking and channelling are available with MDS and Nexus 5000.

SAN End Host (NPV) Mode with MDS / Nexus 5000 F_Port channel and trunk: channelling and trunking from the MDS or Nexus 5000 to UCS lets the FC port channel behave as one logical uplink that can carry all VSANs (trunk). The UCS Fabric Interconnect remains in NPV end-host mode; a server vHBA is pinned to the FC port channel and has access to the bandwidth of any member link; load balancing is per flow, based on the FC Exchange_ID.
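To verify NPV operation and see the server logins proxied to the upstream fabric, a minimal sketch from the fabric interconnect's NX-OS shell (commands as commonly used in Cisco UCS SAN troubleshooting; output varies by release):

UCS-A# connect nxos a
UCS-A(nxos)# show npv status
UCS-A(nxos)# show npv flogi-table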

SAN FC Switch Mode: the UCS Fabric Interconnect behaves like an FC fabric switch; storage ports can be FC or FCoE, and the Ethernet and FC switching modes are independent. Zoning can be local on the Fabric Interconnect or provided upstream by an MDS or Nexus 5000; running local and upstream zoning in parallel is currently not supported. The Fabric Interconnect consumes an FC domain ID. Supported FC/FCoE direct-connect arrays are listed on the HCL (check note 5 for the updated list), giving a lower cost point for small deployments since no access-layer FC/FCoE switches are required.

Multi-Hop FCoE: end-to-end FCoE is supported with FCoE-capable upstream switches (MDS, Nexus 5000, Nexus 7000). A new port type, the unified uplink, carries FCoE and normal Ethernet traffic on the same link; the Fabric Interconnect can be in End Host Mode or Switch Mode (NPV/EHM) with UCS B-Series below it.

NAS Direct Attach (Appliance Port). The default, and recommended, mode is End Host Mode, which gives superior traffic engineering and easier integration into the network. Release 1.4 introduced appliance ports, which allow NAS filers to be connected directly to the Fabric Interconnects (in the diagram, server 1 accesses volume A and server 2 accesses volume B through appliance ports on FI A and FI B). Ethernet switching mode remains an option, but as of 1.4 it is no longer needed for direct-connect NAS; previous releases required switching mode.

Unified Appliance Support: file and block data (FCoE, iSCSI, NFS, CIFS) over a single port and cable, consolidating ports and cabling. A new port type, the unified appliance port, is today's appliance port plus FCoE; initial support is for NetApp storage and its Unified Target Adapter.

iSCSI Boot: an alternative boot option for stateless servers. It is supported on the Cisco UCS Virtual Interface Card (M81KR) and the Cisco UCS M51KR-B Broadcom BCM57711; the option ROM uses the iSCSI Boot Firmware Table (iBFT). iSCSI hardware offload is not a requirement for booting; it only has to be supported in the option ROM. See BRKCOM-2002 UCS Supported Storage Architectures.

Demo / Basic Configuration Steps

UCS Configuration Workflow: physical setup, base configuration, network configuration, storage configuration, server configuration.

Topology (diagram): FI A and FI B are clustered over the L1/L2 links. Ethernet uplinks ETH 1 and ETH 2 carry VLANs 10 and 1000; FC uplinks on ports 29-32 connect to SAN A (VSANs 101, 102) and SAN B (VSANs 201, 202). Server ports 1-4 on each FI connect to the IO modules of Chassis 1 and Chassis 2, and Server-01 carries a CNA with a connection to each fabric.

Console setup: Fabric Interconnect A

You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Enforce strong password? (y/n) [y]:
Enter the password for "admin": <password>
Confirm the password for "admin": <password>
Is this Fabric interconnect part of a cluster(select 'no' for standalone)? (yes/no) [n]: y
Enter the switch fabric (A/B) []: a
Enter the system name: <UCS Domain name>
Physical Switch Mgmt0 IPv4 address : <FI A IP>
Physical Switch Mgmt0 IPv4 netmask : <MASK>
IPv4 address of the default gateway : <Gateway>
Cluster IPv4 address : <CLUSTER IP>
Configure the DNS Server IPv4 address? (yes/no) [n]:
Configure the default domain name? (yes/no) [n]:

Following configurations will be applied:
  Switch Fabric=A
  System Name= UCS Domain name
  Enforced Strong Password= no
  Physical Switch Mgmt0 IP Address= FI A IP
  Physical Switch Mgmt0 IP Netmask= MASK
  Default Gateway= GATEWAY
  Cluster Enabled= yes
  Cluster IP Address= IP
  NOTE: Cluster IP will be configured only after both Fabric Interconnects are initialized

Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no):
Applying configuration. Please wait.
Configuration file Ok

Cisco UCS 6200 Series Fabric Interconnect
<UCS Domain name>-a login:

Console setup: Fabric Interconnect B

Enter the configuration method. (console/gui)? console
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n)? y
Enter the admin password of the peer Fabric interconnect:
Connecting to peer Fabric interconnect... done
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect Mgmt0 IP Address: <FI A IP>
Peer Fabric interconnect Mgmt0 IP Netmask: <FI A MASK>
Cluster IP address : <CLUSTER IP>
Physical Switch Mgmt0 IPv4 address : <FI B IP>
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.
Configuration file - Ok

Cisco UCS 6200 Series Fabric Interconnect
<UCS Domain name>-b login:

Configuration overview: Basic Configuration
1. Configure chassis discovery and power policy
2. Configure unified ports
3. Configure server ports for chassis discovery
4. Configure the management IP pool (KVM)
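A minimal UCS Manager CLI sketch of steps 3 and 4, configuring two server ports on fabric A and a KVM management IP block (the port numbers and addresses are hypothetical; syntax per the UCS Manager CLI configuration guide and may vary by release):

UCS-A# scope eth-server
UCS-A /eth-server # scope fabric a
UCS-A /eth-server/fabric # create interface 1 1
UCS-A /eth-server/fabric/interface* # exit
UCS-A /eth-server/fabric* # create interface 1 2
UCS-A /eth-server/fabric/interface* # commit-buffer

UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
UCS-A /org/ip-pool # create block 10.1.1.101 10.1.1.150 10.1.1.1 255.255.255.0
UCS-A /org/ip-pool/block* # commit-buffer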

Configuration overview: Network Configuration
1. Configure network uplink ports
2. Configure a port channel
3. Configure VLANs (Dev 10, Dev-Clu 1000)
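A minimal UCS Manager CLI sketch of the VLANs and an uplink port channel on fabric A (the port-channel ID and member ports are hypothetical; exact syntax may vary by release):

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan Dev 10
UCS-A /eth-uplink/vlan* # exit
UCS-A /eth-uplink* # create vlan Dev-Clu 1000
UCS-A /eth-uplink/vlan* # exit
UCS-A /eth-uplink* # scope fabric a
UCS-A /eth-uplink/fabric* # create port-channel 1
UCS-A /eth-uplink/fabric/port-channel* # create member-port 1 31
UCS-A /eth-uplink/fabric/port-channel/member-port* # exit
UCS-A /eth-uplink/fabric/port-channel* # create member-port 1 32
UCS-A /eth-uplink/fabric/port-channel/member-port* # exit
UCS-A /eth-uplink/fabric/port-channel* # enable
UCS-A /eth-uplink/fabric/port-channel* # commit-buffer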

Configuration overview: Storage Configuration
1. Enable FC trunking
2. Configure VSANs (fabric A: 101, 102; fabric B: 201, 202); the FCoE VLAN ID must not be used as a data VLAN
3. Configure storage uplink ports
4. Configure port channels
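A minimal UCS Manager CLI sketch of creating the named VSANs from the topology (the FCoE VLAN IDs 1101 and 1201 are hypothetical; each FCoE VLAN must be unique and not used as a data VLAN; syntax may vary by release):

UCS-A# scope fc-uplink
UCS-A /fc-uplink # scope fabric a
UCS-A /fc-uplink/fabric # create vsan VSAN101 101 1101
UCS-A /fc-uplink/fabric/vsan* # exit
UCS-A /fc-uplink/fabric* # exit
UCS-A /fc-uplink* # scope fabric b
UCS-A /fc-uplink/fabric* # create vsan VSAN201 201 1201
UCS-A /fc-uplink/fabric/vsan* # commit-buffer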

Configuration overview: Service Profile
1. Service profile expert wizard
2. UUID settings
3. vNIC settings
4. vHBA settings
5. Boot settings
6. Associate a blade
7. Install the OS
See BRKCOM-2001 UCS Management Architecture - Deep Dive.
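For step 6, a minimal UCS Manager CLI sketch of associating an existing service profile with a blade (the profile name Server-01 and the chassis/slot 1/1 are hypothetical; the same step is normally driven from the expert wizard in the GUI):

UCS-A# scope org /
UCS-A /org # scope service-profile Server-01
UCS-A /org/service-profile # associate server 1/1
UCS-A /org/service-profile* # commit-buffer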

More UCS sessions:
Wednesday - BRKCOM-1005 UCS Architecture Overview
Thursday - BRKCOM-1004 Automating UCS for Systems Administrators; BRKCOM-2002 UCS Supported Storage Architectures; BRKCOM-2003 UCS Networking Deep Dive including VM-FEX; BRKCOM-2008 UCS Firmware Management Architecture
Friday - BRKCOM-2001 UCS Management Architecture Deep Dive; BRKCOM-3001 Troubleshooting the Cisco UCS Compute Deployment

Q & A

Complete Your Online Session Evaluation. Complete your session evaluation directly from your mobile device by visiting www.ciscoliveaustralia.com/mobile and logging in with your username and password, at one of the Cisco Live internet stations located throughout the venue, or by opening a browser on your own computer to access the Cisco Live onsite portal. Don't forget to activate your Cisco Live 365 account for access to all session material, communities, and on-demand and live activities throughout the year; activate your account by visiting www.ciscolive365.com.