UCS Networking Deep Dive. Neehal Dass - Customer Support Engineer

Agenda: Chassis Connectivity, Server Connectivity, Fabric Forwarding, M-Series, Q & A.

Cisco Unified Computing System (UCS): single point of management, logical building blocks, stateless compute (service profiles).

UCS Components (diagram): the Fabric Interconnects provide the LAN, MGMT and SAN connectivity, with a heartbeat link between them that carries no data. The UCS 5108 chassis houses the IO Modules, which present 4 x 10G KR lanes to each half-width blade slot, and each UCS blade connects through a Cisco VIC.

UCS Mini: 6324 Fabric Interconnect. Classic UCS B-Series: UCS 5108 chassis + IO Modules + 6248 or 6296 Fabric Interconnects, supporting existing and future blades. UCS Mini: UCS 5108 chassis + 6324 Fabric Interconnect, also supporting existing and future blades.

3rd Generation Fabric Interconnect and IOM

UCS FI & IOM Models: FI 6300 Series and IOM 2304. FI 6332: 32 x 40GbE QSFP+ ports, 2.56 Tbps switching performance, 1RU fixed form factor, two power supplies and four fans. FI 6332-16UP: 24 x 40GbE QSFP+ ports and 16 unified ports (1/10GbE or 4/8/16G FC), 2.43 Tbps switching performance, 1RU fixed form factor, two power supplies and four fans. IOM 2304: 8 x 40GbE server links and 4 x 40GbE QSFP+ uplinks, 960 Gbps switching performance, modular IOM for the UCS 5108.

FI 6300 Series Hardware Overview (rear view: 4 fans, 2 power supplies, serial ports). FI 6332 (Ethernet only): L1 & L2 high-availability ports, 26 x 40G QSFP+ * or 98 x 10G SFP+ **, plus 6 x 40G QSFP+. FI 6332-16UP (Unified): L1 & L2 high-availability ports, 16 unified ports (16 x 1/10G SFP+ or 16 x 4/8/16G FC), 18 x 40G QSFP+ or 72 x 10G SFP+ *, plus 6 x 40G QSFP+. * QSA module required on ports 13-14 to provide 10G support. ** Requires QSFP to 4xSFP breakout cable.

Chassis Connectivity

UCS Fabric Topologies: Chassis Bandwidth Options. 2x 1 link: 20 Gbps per chassis, or 80 Gbps with the IOM 2304. 2x 2 links: 40 Gbps per chassis, or 160 Gbps with the IOM 2304. 2x 4 links: 80 Gbps per chassis, or 320 Gbps with the IOM 2304. 2x 8 links: 160 Gbps per chassis (2208XP only).

UCS 2200 IO Module (FEX). UCS-IOM-2204XP: 40G to the network, 160G to the hosts, 2x10G per half-width slot, 4x10G per full-width slot. UCS-IOM-2208XP: 80G to the network, 320G to the hosts, 4x10G per half-width slot, 8x10G per full-width slot.

UCS-IOM-2304 interfaces. NIF: 4 x 40G QSFP+ uplinks, connecting only to the FI 63xx. HIF: 32 interfaces, each supporting 10G; 4 ports can be combined into a single 40G HIF.

VN-TAG: pre-standard IEEE 802.1BR FEX architecture (diagram). Between the switch and the FEX the LAN frame (application payload / TCP / IP over Ethernet) carries a VNTAG. VNTAG fields: VN-TAG Ethertype, d (direction), p (pointer), destination virtual interface, l (looped), r (reserved), ver (version), source virtual interface.

UCS IOM 220x Architecture (diagram): fabric ports to the FI are the Network Interfaces (NIFs); internal backplane ports to the blades are the Host Interfaces (HIFs). The IOM also contains the Chassis Management Controller (CIMC), control IO, chassis signals, flash, DRAM and EEPROM. Feature comparison - ASIC: Woodside on both the 2204XP and 2208XP; fabric ports (NIF): 4 on the 2204XP, 8 on the 2208XP; host ports (HIF): 16 on the 2204XP, 32 on the 2208XP; latency: ~500 ns on both.

IOM Fabric & Backplane Interfaces: the fabric interfaces are the FI ports the IOM connects to, the Fex ports are the backplane ports to the blades (showing the backplane-to-FI pinning), and a 1G link runs to the CIMC switch.

UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show fex detail
FEX: 1  Description: FEX0001  state: Online
  Extender Model: UCS-IOM-2204XP, Part No: 73-14488-01
  pinning-mode: static    Max-links: 1
  Fabric interface state:
    Eth1/3 - Interface Up. State: Active
    Eth1/4 - Interface Up. State: Active
  Fex Port     State   Fabric Port
  Eth1/1/1     Down    Eth1/3
  Eth1/1/2     Down    None
  Eth1/1/3     Up      Eth1/4
  Eth1/1/4     Down    None
  Eth1/1/5     Up      Eth1/3
  Eth1/1/6     Up      Eth1/3
  Eth1/1/7     Up      Eth1/4
  Eth1/1/8     Down    None
  Eth1/1/9     Up      Eth1/3
  Eth1/1/10    Down    None
  Eth1/1/11    Up      Eth1/4
  Eth1/1/12    Up      Eth1/4
  Eth1/1/13    Up      Eth1/3
  Eth1/1/14    Down    None
  Eth1/1/15    Up      Eth1/4
  Eth1/1/16    Down    None
  Eth1/1/17    Up      Eth1/4

IOM Traffic Rate Monitoring: the statistics are shown from the perspective of the IOM.
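One way to view these counters is to attach to the IOM from the UCSM CLI and query the Woodside ASIC rate counters; treat the platform command below as an assumption, since its availability varies by IOM model and firmware:
UCSB-2-A# connect iom 1
fex-1# show platform software woodside rate
(displays per-port transmit/receive rates for the NIF fabric ports and HIF backplane ports, as seen by the IOM)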

UCS Mini recap: the 6324 Fabric Interconnect sits in the UCS 5108 chassis in place of the IO Modules, and supports existing and future blades.

UCS Mini Secondary Chassis: a secondary chassis can be added to an existing UCS Mini cluster by connecting it through the scalability ports on the UCS Mini (6324) Fabric Interconnects. The secondary chassis requires a UCS 2204 or UCS 2208 IOM, only one secondary chassis can be connected, and FEX-based rack-server connectivity is not supported.

Fabric Link Connectivity

Chassis Connectivity Policy
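As a rough sketch, the chassis discovery action and link-grouping preference can also be set from the UCSM CLI; the object names below follow the UCS Manager CLI guide but should be treated as an assumption and checked against your UCSM version:
UCS-A# scope chassis-disc-policy
UCS-A /chassis-disc-policy # set action 4-link
UCS-A /chassis-disc-policy # set link-aggregation-pref port-channel
UCS-A /chassis-disc-policy* # commit-buffer
Existing chassis must be re-acknowledged for a changed link-grouping preference to take effect.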

IO Module HIF to NIF Pinning, 2208XP (diagrams for 1, 2, 4 and 8 fabric links): each blade slot's host interfaces (HIF1-4 for slot 1 through HIF29-32 for slot 8) are statically pinned to one of the network interfaces (NIF1-NIF8) according to the number of fabric links that were acknowledged.

IOM Link Failure Scenario (diagrams): in discrete mode, when a fabric link (NIF) fails, the backplane ports (HIFs) statically pinned to it go down with it, and those blade slots lose that fabric path until the pinning is re-established (re-acknowledging the chassis re-pins across the remaining links).

Port-channel Pinning (diagram): a VIC 1200/1300 adapter with its DCE links in a port-channel has its HIFs pinned to the NIF port-channel on the 2200 IOM, while a Gen-1 adapter with a single 10G link has its HIF pinned to an individual NIF.

Increased Bandwidth Access to Blades. 4 links, discrete (today): 10 Gb available per blade, statically pinned to individual fabric links, deterministic path. 8 links, discrete: 20 Gb available per blade, statically pinned to individual fabric links, deterministic path, guaranteed 10 Gb to each blade. Up to 8 links, port-channel: up to 160 Gb available per blade, statically pinned to the port-channel, increased and shared bandwidth, higher availability.
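To check whether the chassis fabric links are running discrete or port-channelled, a sketch using the FI NX-OS shell (output formats vary by release):
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show port-channel summary
(with port-channel link grouping, the IOM fabric ports appear as members of an automatically created port-channel instead of individually pinned links; 'show fex detail' then lists that port-channel as the fabric interface)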

Server Connectivity

Cisco Virtual Interface Cards (VIC). 1st gen (Palo): M81KR, P81E; 128 PCIe devices; dual 10Gb; 16x PCIe Gen 1. 2nd gen (Sereno): 1240, 1280, 12xx; 256 PCIe devices; dual 40Gb (4 x 10Gb); 16x PCIe Gen 2. 3rd gen (Cruz): 1340, 1380; dual 8x PCIe Gen 3; VXLAN and NVGRE support; native 40Gb support; RoCE.

Fabric Extender Evolution: virtual interfaces. VN-TAG / IEEE 802.1BR allows cascading FEXs, and the Cisco VIC is itself an extension of the FEX (an adapter FEX). The VN-TAG associates the logical interface (LIF) on the adapter with a virtual interface (VIF) on the fabric.

VIC 1240/1340 + Port Expander Card: the base option supports dual 2x10Gb; the Port Expander is a passive connector device that fits in the mezzanine slot. mLOM vs mezzanine: the 1240 (Sereno) and 1340 (Cruz) are modular LOM (mLOM) cards.

VIC 1240/1340 to IOM Connectivity, mLOM only (diagram): dual 2x10 Gb port-channels from the VIC 1240/1340 across the chassis midplane to the two 2208XP IO Modules, which uplink to the UCS 6248 Fabric Interconnects; the adapter sits on x8 PCIe Gen 3 to the B200 M3/M4 CPUs (QPI-linked), with the mezzanine slot empty.

VIC 1240/1340 to IOM Connectivity, mLOM plus Port Expander (diagram): the passive Port Expander increases the bandwidth to 80 Gbps, giving dual 4x10 Gbps port-channels (Port Channel 1 and 2) from the VIC to the two 2208XP IO Modules.

What Does The OS See?
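On a Linux host, for example, each vNIC defined in the service profile appears as its own PCIe Ethernet device driven by the enic driver (vHBAs use fnic); a quick check, with interface names shown only as illustration:
# lspci | grep -i cisco
(one "VIC Ethernet NIC" PCIe function per vNIC)
# ethtool -i eth0
(reports driver: enic for a VIC Ethernet interface)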

Connectivity IOM to Adapter (diagram): with a UCS 1200/1300 VIC, the links to each 2208 IOM form an implicit port-channel and flows are hashed across the members (example flows on vnic1: a 10 Gb FTP stream and a 10 Gb UDP stream, side A and side B).

UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show port-channel summary
Group  Port-Channel   Type   Protocol   Member Ports
11     Po11(SD)       Eth    LACP       Eth1/11(D)
88     Po88(SD)       Eth    LACP       Eth1/20(D)
...
1314   Po1314(SU)     Eth    NONE       Eth1/1/5(P)
1315   Po1315(SU)     Eth    NONE       Eth1/1/6(P)
UCSB-2-A(nxos)#

VIC 1x40 & 1x80 to IOM Connectivity (diagram): a VIC 1340 (mLOM) plus a VIC 1380 (mezzanine) in a B200 M3/M4 - no mixing of 12xx and 13xx generations - gives adapter redundancy and lets vNICs be split across adapters, with four 2x10 Gb port-channels to the 2208XP IO Modules.

Full-Width Blade to IOM Connectivity: mLOM, Port Expander and VIC 1x80 (diagram). A B260 M4 with a VIC 1340 plus Port Expander (4x10) and a VIC 1380 (4x10) has a total bandwidth of 160G across four 40G port-channels to the 2208XP IO Modules.

IOM 2304 and Adapter Connection, VIC 1340 only (diagram): 20G (2x10G) per fabric to the 2304-A and 2304-B IOMs over the active KR lanes (the remaining KR lanes stay passive), with mezzanine slot 1 empty.

IOM 2304 and Adapter Connection, VIC 1340 plus Port Expander Card (diagram): native 40G per fabric to the 2304-A and 2304-B IOMs.

IOM 2304 and Adapter Connection, VIC 1340 plus VIC 1380 (diagram): two independent adapters give resiliency and vCon placement options, with four 20G connections (each 20G being 2x10G).

UCS Mini: Fabric to Server Connectivity. Same server-side connectivity as the 2204XP IOM: 40G per half-width blade.

Fabric Forwarding - Ethernet

Ethernet Fabric Forwarding Modes of Operation. Switch mode: the FI acts like a regular Ethernet switch with VLAN/MAC-based forwarding. End-host mode (EHM): no spanning-tree protocol (STP), active/active use of all links and VLANs, policy-based forwarding.

End Host Mode (diagram): no spanning tree toward the LAN; MAC addresses are learned only from the server interfaces (veths), vNICs are pinned to uplink interfaces, and traffic between servers in the same VLAN (e.g. VLAN 10) is locally L2-switched on the FI.

End Host Mode: Unicast Forwarding (diagram). Policies prevent packet looping: 1. no uplink-to-uplink forwarding; 2. the deja-vu check (a frame sourced from a server MAC that arrives back on an uplink is dropped); 3. RPF (frames for a server are accepted only on the uplink its vNIC is pinned to). There is no unknown-unicast flooding from the uplinks, so consider silent VMs and the FI MAC aging time versus the upstream router's ARP timeout.
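A quick way to see what the FI has learned, and how long it keeps it (standard NX-OS commands, shown here as a sketch):
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show mac address-table aging-time
UCSB-2-A(nxos)# show mac address-table dynamic
(in end-host mode, dynamic MAC entries should only appear on veth/server-facing interfaces; compare the aging time with the upstream router's ARP timeout when silent hosts are a concern)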

End Host Mode: Broadcast Forwarding (diagram). Broadcast traffic for a VLAN is received on one uplink port only (a broadcast listener per VLAN), which prevents duplicate packets; server-to-server broadcast traffic is locally switched; RPF and deja-vu checks also apply to broadcast traffic.

Designated Receiver - Broadcast (screenshot): among the uplinks carrying VLAN 511, one uplink is the designated receiver (DR) for VLAN 511.
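One command commonly used to find the designated receiver for a VLAN (treat its availability as an assumption; it is an internal command and varies by release):
UCSB-2-A(nxos)# show platform software enm internal info vlandb id 511
(lists the uplink interfaces that carry VLAN 511 and flags the designated receiver for the VLAN)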

End Host Mode: Disjoint L2 Domains (diagram). By default UCSM assumes all uplinks carry all VLANs; with separate upstream domains such as Prod (VLANs 10, 20, 30) and DMZ (VLANs 40, 50, 60), a server pinned to a Prod uplink cannot see broadcasts from the DMZ domain, so VLANs must be explicitly assigned to the correct uplinks (disjoint Layer 2 configuration).

Switch Mode (diagram): the Fabric Interconnect behaves like a normal L2 switch, running Rapid STP (Rapid PVST+) to prevent loops; server vNIC traffic follows the STP forwarding states, and MAC address learning happens on both uplinks and server links.

Uplink Pinning

End Host Mode - Dynamic Pinning (diagram): UCSM manages the pinning of each veth to an uplink; the pinned uplink must carry the VLAN(s) used by the vNIC, and UCSM periodically redistributes the veths across the uplinks.

End Host Mode, Individual Uplinks (diagram): failed uplinks are dynamically re-pinned in under a second; gratuitous ARPs (GARPs) sent on the new uplink aid upstream convergence, and the server vNIC stays up throughout.

End Host Mode, Port-Channel Uplinks (diagram) - RECOMMENDED: a member-link failure causes no re-pinning and no server NIC disruption, with sub-second convergence, fewer (or no) GARPs needed, more bandwidth per uplink and fewer moving parts.
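A minimal UCSM CLI sketch for building an uplink port-channel on fabric A (the channel ID and ports 1/17-18 are hypothetical; verify the syntax against your UCSM release):
UCS-A# scope eth-uplink
UCS-A /eth-uplink # scope fabric a
UCS-A /eth-uplink/fabric # create port-channel 101
UCS-A /eth-uplink/fabric/port-channel # create member-port 1 17
UCS-A /eth-uplink/fabric/port-channel/member-port* # exit
UCS-A /eth-uplink/fabric/port-channel # create member-port 1 18
UCS-A /eth-uplink/fabric/port-channel/member-port* # exit
UCS-A /eth-uplink/fabric/port-channel # enable
UCS-A /eth-uplink/fabric/port-channel* # commit-buffer
The upstream switch ports must form a matching LACP port-channel (or vPC, as shown later in the recommended topology).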

End Host Mode, Static Pinning with LAN Pin Groups (diagram): the administrator controls the veth-to-uplink pinning (for example veth 1 and veth 2 to the blue uplink, veth 3 to the purple uplink), giving deterministic traffic flow; there is no re-pinning within the same FI, and static and dynamic pinning can co-exist.

Which uplink is a server's vNIC pinned to? (screenshot: vNIC-to-uplink pinning)
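The pinning can be read directly from the FI NX-OS shell:
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show pinning server-interfaces
(each veth and the uplink interface or port-channel it is currently pinned to)
UCSB-2-A(nxos)# show pinning border-interfaces
(each uplink/border interface and the server interfaces pinned behind it)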

Fabric Forwarding - Storage

SAN End-Host (NPV) Mode: N-Port Virtualisation forwarding (diagram: 6200-A/6200-B uplink as N_Proxy to NPIV-capable SAN A and SAN B fabrics; server vHBAs map to vfc interfaces in their VSANs). vHBAs are pinned to the SAN uplinks and the FI proxies the FC services (FLOGI converted to FDISC) to the upstream NPIV switch. FI in NPV mode means: uplinks connect to an F port, no FC domain ID is consumed, multi-vendor interoperability, and zoning is performed upstream.
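NPV operation can be verified from the FI NX-OS shell with standard NX-OS NPV commands:
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show npv status
(confirms NPV is enabled and lists the external NP uplink interfaces and their VSANs)
UCSB-2-A(nxos)# show npv flogi-table
(shows each vHBA login and the NP uplink it is pinned to)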

SAN End-Host (NPV) Mode with MDS / Nexus 5000: F-port channel and trunk (diagram). Port-channel support gives increased bandwidth and redundancy, and VSAN trunking is supported on the uplinks (e.g. VSANs 1 and 2 trunked towards SAN A and SAN B).

UCSB-2-B(nxos)# show vsan
vsan 1 information
  name:vsan0001  state:active
  interoperability mode:default
  loadbalancing:src-id/dst-id/oxid
  operational state:up
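To check the FC uplink port-channels themselves, a sketch (command availability assumed to match Nexus 5000-style NX-OS on the FI):
UCSB-2-B(nxos)# show san-port-channel summary
(lists the SAN port-channels to the upstream MDS / Nexus 5000 and their member FC interfaces)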

SAN FC Switch Mode: direct-attach FC and FCoE storage to UCS (diagram). The FI acts like an FC SAN switch, supporting direct-attached storage with local or remote zoning; connecting the 6200s upstream to MDS switches via TE ports is optional.

3rd Generation FI Port Allocation (diagram): the 6332-16UP provides unified ports plus 40G Ethernet ports, while the 6332 is 40G Ethernet only.

FC Port Configurations (6332-16UP): unified ports are configured with a slider bar, left to right, in contiguous blocks, and a change requires a system reboot. Three blocks are available: block 1 = 6 FC ports (1/1-6), block 2 = 12 FC ports (1/1-12), block 3 = 16 FC ports (1/1-16). FC ports are enabled by default.

Operation Mode vs. Features. For FC/FCoE - End-host (NPV) mode: UCS functions as a node port (initiator); required for connecting FC to non-MDS FC switches. FC switching mode: upstream MDS or Nexus FC switch required; required for the UCS local zoning feature and for direct connect from the Fabric Interconnect to FC/FCoE storage targets. For Ethernet/iSCSI/NAS - End-host mode: appliance ports allow direct connect of Ethernet/iSCSI/NAS storage targets. Ethernet switch mode: no storage-based reasons to use this mode.

M-Series

UCS M-Series Architecture: shared power and cooling, 2 x 40 Gb uplinks, shared resources (virtual network and virtual storage), independent server management, 8 cartridge slots with 4 PCIe Gen 3 lanes per slot, and flexible compute and memory.

System Link Technology Overview. System Link Technology is built on proven Cisco Virtual Interface Card (VIC) technology, which uses standard PCIe to present endpoint devices to the compute resources and is a key component of the UCS converged infrastructure. In the M-Series platform this technology has been extended to provide access to PCIe resources local to the chassis, such as storage (diagram: the operating system sees eth0/eth1, or eth0-eth3, as local devices).

System Link Technology provides the same capability as a VIC to configure PCIe devices for use by the server; the difference is that System Link is an ASIC within the chassis rather than a PCIe card. The ASIC is core to the M-Series platform: it provides access to shared I/O resources (network and storage) and connects those devices to the compute resources through the system midplane (diagram: SCSI commands to a virtual drive).

System Link Technology uses the same ASIC as the 3rd-generation VIC. The M-Series takes advantage of additional features, including a Gen 3 PCIe root complex for connectivity to chassis PCIe cards (e.g. storage), 32 Gen 3 PCIe lanes connected to the cartridge CPUs, 2 x 40Gbps QSFP uplinks, and scaling to 1024 PCIe devices created on the ASIC (e.g. vNICs).

Mapping Network Resources to the M-Series Servers. System Link Technology provides the network interface connectivity for all of the servers: virtual NICs (vNICs) are created for each server and mapped to the appropriate fabric through the service profile in UCS Manager. Servers can have up to 8 vNICs; the operating system sees each vNIC as a 40Gbps Ethernet interface, but vNICs can be rate-limited and provide hardware QoS marking, and the interfaces are 802.1Q capable. Fabric failover is supported, so in the event of a failure traffic is automatically moved to the second fabric.

Networking Capabilities. The System Link ASIC supports 1024 virtual devices; the current scale limit is 8 vNICs per server. All network forwarding is provided by the fabric interconnects; there is no forwarding local to the chassis. Currently the network uplinks of the M-Series chassis support Ethernet traffic only: M-Series servers can connect to external IP storage such as NFS, CIFS, HTTPS or iSCSI, while FCoE connectivity will be supported in a future release. iSCSI boot is supported - see the UCS Interoperability Matrix for details.

Typical UCS Deployment

Recommended Topology for Upstream Connectivity (diagram): FI-A and FI-B each connect with a port-channel to a vPC/VSS pair at the access/aggregation layer.
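A minimal sketch of the matching upstream configuration on a Nexus vPC pair; the VLAN list, keepalive addresses and port numbers are hypothetical, and each FI simply sees one logical LACP port-channel:
feature lacp
feature vpc
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
interface port-channel10
  switchport mode trunk
  vpc peer-link
interface port-channel101
  description Uplink port-channel to UCS FI-A
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  spanning-tree port type edge trunk
  vpc 101
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 101 mode active
Because the FI in end-host mode does not run STP, the edge trunk setting lets the upstream ports forward immediately; repeat the member and vPC configuration on the peer switch, and use a second vPC (e.g. 102) for the FI-B uplinks.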

UCS VM Traffic Flow, all VMs in the same VLAN (diagram: ESX hosts 1 and 2, each with a vSwitch/N1K using MAC pinning onto vNIC 0 towards FI-A and vNIC 1 towards FI-B). VM1-to-VM2 traffic on the same host is switched locally by the vSwitch/N1K; traffic between VMs on different hosts that are pinned to the same fabric is L2-switched on that FI; traffic between VMs pinned to different fabrics must be switched by the upstream LAN, since end-host mode does not forward between the fabric interconnects.

Summary: Chassis Connectivity, Server Connectivity, Fabric Forwarding, M-Series.

Q & A

Complete Your Online Session Evaluation Give us your feedback and receive a Cisco 2016 T-Shirt by completing the Overall Event Survey and 5 Session Evaluations. Directly from your mobile device on the Cisco Live Mobile App By visiting the Cisco Live Mobile Site http://showcase.genie-connect.com/ciscolivemelbourne2016/ Visit any Cisco Live Internet Station located throughout the venue T-Shirts can be collected Friday 11 March at Registration Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations. www.ciscoliveapac.com

Thank you