Cisco Nexus 6000 Architecture


Sina Mirtorabi, Technical Marketing Engineer

Session Abstract
Session ID:
Title: Cisco Nexus 6000 Architecture
Abstract: This session describes the architecture of the Nexus 6000, a revolutionary switch for the data center. The Nexus 6000 provides high-density 40 Gig ports with low latency, highly efficient unicast and multicast forwarding, and new troubleshooting tools.

Agenda
- Nexus 6000 overview
- Architecture detail
- SPAN
- Buffering and QoS
- Multicast
- Summary

Nexus 6000 Overview

New! Nexus 6000 Series

Cisco Nexus 6000 Series:
- Nexus 6004: 96-port 40G, 4RU switch
- Nexus 6001: 48-port 10G plus 4-port 40G, 1RU switch

Cisco Nexus 5000 Series (existing customer base):
- Nexus 5010: 28-port 1RU switch
- Nexus 5020: 56-port 2RU switch
- Nexus 5548: 48-port 1RU switch
- Nexus 5596: 96-port 2RU switch
- Nexus 5596T: 10GBASE-T switch
- 4-port QSFP+ GEM (expansion module)

Introducing Nexus 6000

High performance:
- Line rate for L2 and L3 for all packet sizes
- Line-rate SPAN
- 1 us port-to-port latency for all frame sizes
- Cut-through switching at 10 Gig and 40 Gig
- FCoE at 40 Gig
- 25 MB buffer per 3 QSFP ports
- 31 active SPAN sessions

Feature rich:
- L2 and L3 features
- vPC
- FabricPath / TRILL
- Segment-ID
- Adapter FEX / VM-FEX
- NAT, tunneling

High scalability:
- Nexus 6004: 96 x 40 Gig or 384 x 10 Gig
- Nexus 6001: 48 x 10 Gig and 4 x 40 Gig
- Up to 256K MAC entries
- Up to 128K ARP entries
- 32K LPM routes
- 16K bridge domains

Visibility / analytics:
- Sampled NetFlow
- Buffer monitoring
- Latency monitoring
- SPAN on drop
- SPAN on high latency
- Microburst monitoring

Nexus 7000 and Nexus 6004: DC Considerations

Customer requirements and decision points: virtualization, scalability, DCI/mobility, environmentals, L4-7 services, high availability, latency, investment protection.

Decision criteria in the aggregation:

Nexus 7000 Series (modular, high-end solution, up to 36 Tbps). Recommended when you need:
- Scale and flexibility (100M/1G/10G/40G/100G/UP*)
- Highest availability (HA)
- Investment protection
- Multi-protocol / services: VDC / OTV / MPLS / VPLS / LISP

Nexus 6000 Series (fixed, mid-range solution, up to 7.68 Tbps). Recommended when you need:
- High-density, compact 10G/40G/100G*/UP*
- Low footprint and low power
- Low latency and jitter
- Extensive FEX features

* Roadmap

Nexus 6000 Deployment Scenarios
- Compact aggregation: Nexus 7000 L3 core, Nexus 6004 aggregation, Nexus 6004/6001 access with Nexus 2000 FEX and vPC.
- Large-scale fabric (Layer 2 and Layer 3): Nexus 7000 core, Nexus 6004 spine, Nexus 6004/6001 leaf in an L2/L3 fabric.
- High-performance computing / high-frequency trading (HFT): Nexus 6004 core/aggregation over an L3/L2 fabric, Nexus 3548 and Nexus 6001/6004 access.
- Multi-hop FCoE: Nexus 7000 core, Nexus 6004 aggregation carrying FCoE and FC (SAN fabrics A and B), Nexus 6001/6004 access.

Nexus 6004 Chassis Rear View
- 4RU
- 48 fixed QSFP interfaces
- 12 QSFP ports per Line Card Expansion Module (LEM)

Nexus 6004 Chassis Front View
- 4RU
- Power supplies: 3+3 grid redundancy or 3+1 redundancy; a minimum of 3 power supplies and 3 fan modules are required
- Fan modules: 3+1 redundancy
- Console, Mgmt0, USB

Nexus 6001 Chassis Rear View
- 1RU
- 48 fixed SFP+ interfaces
- 4 fixed QSFP interfaces

Nexus 6001 Chassis Front View
- Power supplies: 1+1 redundancy
- Fan modules: 2+1 redundancy
- Console, Mgmt0, USB

Nexus 6000 Airflow
- Front-to-back or back-to-front airflow
- Port-side exhaust at FCS
- Port-side intake (reversed airflow): Nexus 6004 with the 6.0(2)N1(2) release (Q2CY13), Nexus 6001 with the 6.0(2)N2(1) release (Q3CY13)
- Different power supply and fan modules are required for the two airflow directions

Nexus 6004 Physical Connections

40G QSFP interface to 40G QSFP interface:
- QSFP-SR4: 100 m over OM3 MMF, 150 m over OM4 MMF
- QSFP-CSR4: 300 m over OM3 MMF, 400 m over OM4 MMF
- Direct-attach cables: 1 m, 3 m, 5 m passive; 7 m, 10 m active

40G QSFP interface to 10GE SFP+ interfaces (breakout):
- QSFP-4x10G-7M, QSFP-4x10G-10M

Nexus 6004 Supported Transceivers

Available in 6.0(2)N1(2) (Q2CY13):
- QSFP-40G-SR4 (QSFP): 100 m with OM3, 150 m with OM4
- QSFP-40G-CSR4 (QSFP): 300 m with OM3, 400 m with OM4
- QSFP-40G-LR4 (QSFP): 10 km with SMF
- QSFP-4x10G-AC7M (QSFP to 4x SFP+): 7 m
- QSFP-4x10G-AC10M (QSFP to 4x SFP+): 10 m
- QSFP-H40G-1M / -3M / -5M (QSFP, passive): 1 m / 3 m / 5 m
- QSFP-H40G-AC7M / -AC10M (QSFP, active): 7 m / 10 m

Available in 6.0(2)N2(1) (Q3CY13):
- FET-40G (QSFP, FEX connectivity to the Nexus 2248PQ-10G): 100 m with OM3

FET-40G (Q3CY13)
- Low-cost QSFP optical transceiver for connecting a FEX to the N6004
- Supported on the N6004 and the Nexus 2248PQ-10G
- Interoperable with FET-10G (used toward Nexus 2232PP, 2232TM-E, 2232TM, 2248TP-E, 2248TP)
- Supports 100 m over OM3

Unified Port LEM
- Unified ports expansion module: 1/10GE and FCoE, plus 2/4/8G native Fibre Channel per port
- For the Nexus 6004 only
- SFP+ ports allow support for a wider variety of optical transceivers
- Target: 2HCY13

100 Gig LEM
- 100G unified ports expansion module
- 100GE support, including FCoE
- For the Nexus 6004 only
- Not yet CC'd or EC'd; form factor and features under consideration
- Target: 1HCY14

Port Speed Configuration
- By default, ports are in 40GE mode
- Port speed is changed per group of 3 QSFP ports; the enclosing group of 12 QSFP ports (Eth1/1-12, Eth2/1-12, ... Eth8/1-12) needs to be reset after a port speed change
- Interface naming: a 40G interface is Ethernet8/1; after breakout, the four 10G interfaces are Ethernet8/1/1 through Ethernet8/1/4

N6004(config)# interface breakout slot 8 port 1-12 map 10g-4x ?
  <CR>
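
As a minimal sketch of the workflow (the slot/port numbers are examples; the reset step reflects the port-group reset requirement above), the breakout is applied and one of the resulting 10G member interfaces is then configured like any other port:

N6004(config)# interface breakout slot 8 port 1-12 map 10g-4x
! reset the affected 12-port group per the note above, then:
N6004(config)# interface ethernet 8/1/1
N6004(config-if)# switchport mode trunk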

Nexus 6000 Key Forwarding Tables
- Host Table: 256K-entry hash table (actual capacity is slightly below 256K). Shared between MAC, ARP/ND, /32 host routes, and (*,G) entries. Default carving: 128K MAC region, 128K IP host region.
- LPM Table: 32K entries, holding summary routes.
- Mroute Table: 64K entries, holding (S,G) routes.

Nexus 6000 Unicast Table Scaling
- Each IPv6 ND entry consumes 2 entries in the IP host region of the host table
- Each IPv6 route consumes 4 entries in the LPM table

Example host with MAC 00:02:B3:01:02:03, IPv4 10.1.1.1, IPv6 2001::0DB8:800:200C:417A:
- Host table, MAC region: 00:02:B3:01:02:03 (1 MAC entry)
- Host table, IP host region: 10.1.1.1 ARP (1 entry); 2001::0DB8:800:200C:417A ND (2 hardware entries)
- LPM table: 10.1.1.0/24 IPv4 route (1 entry); 2001::/64 IPv6 route (4 hardware entries)

Nexus 6000 Host Table Scaling

Deployment scenario                Scalability
L2 switch                          256K MAC
L2/L3 gateway with IPv4 only*      128K hosts
L2/L3 gateway with IPv6 only*      85K hosts
L2/L3 gateway with dual stack*     50K hosts

- In L2 mode, IGMP snooping entries are stored in the IP host region, so the usable MAC region is less than 256K.
- At FCS, software statically allocates 128K entries for MAC and 128K entries for IP hosts (ARP/ND/host routes).
- The host table is a hash table; actual capacity will be slightly below the numbers above.
* Assumes one IPv4 or IPv6 address per host. Hardware scaling numbers.
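
As a sanity check (our arithmetic, not from the slide), these figures line up with the per-host entry costs on the previous slide: an IPv4-only host needs 1 MAC + 1 ARP = 2 of the 256K entries, giving 128K hosts; an IPv6-only host needs 1 MAC + 2 ND = 3 entries, giving roughly 85K hosts; a dual-stack host needs 1 MAC + 1 ARP + 2 ND = 4 entries, a 64K ceiling that hash collisions reduce to about 50K.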

Nexus 6004 Control Plane

CPU scaling with feature complexity and the corresponding control plane growth:

Nexus 5000:
- CPU: 1.66 GHz Intel LV Xeon
- DRAM: 2 GB of DDR2-400 (PC2-3200) in two DIMM slots
- On-Board Fault Log (OBFL): 64 MB of flash for failure analysis
- 1 GB flash
- NVRAM: 2 MB of SRAM for syslog and licensing information

Nexus 5500:
- CPU: 1.7 GHz Intel Jasper Forest (dual core)
- DRAM: 8 GB of DDR3 in two DIMM slots
- OBFL: 64 MB
- 2 GB flash
- NVRAM: 6 MB of SRAM to store syslog and licensing information
- Built-in single supervisor; ISSU with L2 at FCS

Nexus 6000:
- CPU: 4-core Intel Gladden, 2.0 GHz
- DRAM: 4 DIMM slots, 16 GB by default
- 8 GB flash for NX-OS software and user files
- 6 MB NVRAM
- 64 MB OBFL (On-Board Fault Logging)

Nexus 6000 versus Nexus 5500 HW Performance and Scalability

Nexus 6004 Architecture

Nexus 6004 Architecture: multiple Unified Port Controllers (UPCs) interconnected through a central switch fabric.

Nexus 6004 Unified Port Controller (UPC) building blocks: multimode MAC, packet parser/editing, LU (lookup) engine, buffering, and queuing.

Nexus 6004 Switch Fabric
- 10 Gig fabric mode: 192 x 384 crossbar with a scheduler; each fabric link runs at 14 Gig
- 40 Gig fabric mode: 48 x 96 crossbar with a scheduler; each fabric link is 4 x 14 Gig (56 Gig)

Nexus 6004 Internal Architecture, Fabric Mode 10 Gig
- Each ingress UPC serves 12 x 10GE front-panel ports and connects to the 192 x 384 switch fabric with 16 x 14 Gbps = 224 Gbps of ingress fabric bandwidth
- Each egress UPC receives 32 x 14 Gbps = 448 Gbps from the fabric

Nexus 6004 Internal Architecture, Fabric Mode 40 Gig
- Each ingress UPC serves 3 x 40GE front-panel ports and connects to the 48 x 96 switch fabric with 4 x 56 Gbps = 224 Gbps of ingress fabric bandwidth
- Each egress UPC receives 8 x 56 Gbps = 448 Gbps from the fabric

Nexus 6004 Internal Architecture, Fabric Mode 10 Gig
- Four fabric ASICs, each operating as a 192 x 384 crossbar in 10G mode
- Each of the 32 ingress UPCs (UPC 1 through UPC 32) connects to each fabric ASIC with 4 x 14 Gig links; each of the 32 egress UPCs receives 8 x 14 Gig links from each fabric ASIC

Nexus 6004 Internal Architecture, Fabric Mode 40 Gig
- Four fabric ASICs, each operating as a 48 x 96 crossbar in 40G mode
- Each of the 32 ingress UPCs connects to each fabric ASIC with one 56 Gig link; each of the 32 egress UPCs receives 2 x 56 Gig links from each fabric ASIC

Nexus 6004 Unified Port Controller (UPC) functions

Multimode MAC:
- Packet protocol functions: encoding/decoding for the physical medium dependent layer
- Flow control: PFC, 802.3x, B2B credits for Fibre Channel

Packet parser/editing:
- Parses the packet, extracts the relevant fields, forms a lookup vector and passes it to the LU engine
- Creates a SPAN copy if needed
- Inserts the 1588 clock (timestamp)

LU engine and ACL:
- Performs all forwarding decisions based on the vector received from the forwarding (FW) blocks
- The ACL module performs ACL, QoS classification, FC zoning, NAT, and PBR

Buffering:
- Packet buffering on ingress and egress
- Passes queuing information to the ingress queuing subsystem (QS)
- Transmits frames to the fabric interface

Queuing:
- Performs DWRR scheduling on the output queues
- Maintains ingress VoQs for unicast, multicast and SPAN
- Maintains per-port egress queues for unicast and multicast
- Performs egress replication

Nexus 6004 UPC block diagram (animated in the original session): the multimode MACs (MM MAC) feed the ingress forwarding (FW) blocks and the LU/ACL engine; the ingress buffer manager (BM) and ingress queuing subsystem (QS) connect to the fabric interface (FTD and FTE blocks); the egress path mirrors this, with egress FW blocks and the egress BM and QS driving the MACs.

Cut-Through and Store & Forward

Depending on the port speed combination and the switch fabric mode, the Nexus 6004 performs either cut-through or store-and-forward switching.

In 10 Gig fabric mode, cut-through switching is done when the egress is 10 Gig:

Ingress   Egress 10GE    Egress 40GE
10GE      Cut-through    Store and forward
40GE      Cut-through    Store and forward

In 40 Gig fabric mode, cut-through switching is done when the ingress is 40 Gig:

Ingress   Egress 10GE                        Egress 40GE
10GE      Cut-through or store and forward   Store and forward
40GE      Cut-through                        Cut-through

Fabric Mode: 40 Gig or 10 Gig?
- If all ports operate at 10 Gig, use 10 Gig fabric mode; if all ports operate at 40 Gig, use 40 Gig fabric mode. A fabric mode change requires a reload.
- With a mix of 10 Gig and 40 Gig ports, use the cut-through / store-and-forward matrix on the previous slide to decide which traffic needs low latency. Also consider:
  - In 10 Gig fabric mode, a 40GE interface can only carry 10 Gbps flows
  - In 10 Gig fabric mode, 10 Gig ports can see a latency improvement of about 200 ns
  - At FCS, ISSU is disabled in 10 Gig fabric mode; post FCS, ISSU will be enabled in 10 Gig fabric mode

Nexus 6000 SPAN

Nexus 6004 SPAN Enhancements
- 31 active SPAN sessions in total; 16 ERSPAN sessions; ERSPAN termination supported
- Wire-speed SPAN throughput: each UPC has 224 Gbps into the fabric and 448 Gbps out of it, leaving extra fabric link bandwidth for SPAN traffic
- SPAN traffic is best effort: it is dropped first when the fabric links congest
- Hardware supports multiple SPAN destination ports per SPAN session
- PortChannel supported as a SPAN destination port, with source-port-based hashing
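
A minimal local SPAN session sketch in NX-OS CLI (interface numbers are examples only):

monitor session 1
  source interface ethernet 1/1 both
  destination interface ethernet 1/20
  no shut

Note that NX-OS monitor sessions are shut by default, hence the final "no shut"; and per the comparison later in this section, each additional destination port consumes one of the 31 sessions.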

Data Protection with SPAN
- For Rx SPAN, data packets are replicated at the ingress UPC; for Tx SPAN, data packets are replicated at the egress UPC
- SPAN packets are dropped first when there is congestion
- The UPC gives priority to data traffic and schedules it first; the remaining fabric link bandwidth is allocated to SPAN
- A separate buffer pool is used for SPAN traffic, so a congested SPAN destination port cannot hog buffer from production traffic

Line Rate SPAN
- Extra bandwidth capacity is built in for SPAN traffic: the 12 x 10GE ports on a UPC can generate at most 120 Gbps* of production data traffic, while the UPC has 224 Gbps of ingress fabric bandwidth
- Any fabric bandwidth not taken by data traffic can be used for SPAN traffic
- Line-rate SPAN throughput can therefore be achieved using the extra fabric link bandwidth
- First product in the market to support 16 line-rate SPAN sessions

* Note: the internal header is not taken into consideration.

SPAN with Multiple Destination Ports
- Hardware supports multiple SPAN destination ports per session
- Each destination port is counted as one SPAN session
- Packets are replicated multiple times, once per SPAN destination port

Nexus 6004 vs Nexus 5500 SPAN

SPAN feature                          Nexus 6000                      Nexus 5500
Total SPAN sessions                   31                              4
Local SPAN sessions                   31                              4
ERSPAN sessions                       16                              4
Prioritize data over SPAN             Yes (through scheduling)        SPAN policing
Line-rate SPAN throughput             Yes                             Yes, for limited scenarios
ERSPAN destination session            Yes                             No
Truncated SPAN/ERSPAN                 Yes                             Yes
ACL-based SPAN/ERSPAN                 Yes                             Yes
SPAN on drop                          Yes                             No
SPAN on high latency                  Yes                             No
SPAN with multiple destination ports  Yes (each destination port      Yes (each destination port
                                      burns one SPAN session)         burns one SPAN session)
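
For the ERSPAN rows above, a minimal ERSPAN source session sketch in NX-OS CLI (addresses, IDs and the interface are illustrative; exact options vary by release):

monitor erspan origin ip-address 10.0.0.1 global
monitor session 2 type erspan-source
  erspan-id 100
  vrf default
  destination ip 10.99.99.99
  source interface ethernet 1/1 both
  no shut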

Nexus 6000 Buffering & QoS

Nexus 6004 Buffer
Each UPC has 25 MB of packet buffer: 16 MB on ingress and 9 MB on egress.

Nexus 6004 Ingress Buffer Management
The ingress buffer on each UPC is split into dedicated pools (control traffic, SPAN, and the default class) and a shared pool used by the default class.

Nexus 6004 Ingress Buffer Allocation

40 Gig port:
Traffic type     Dedicated per 40 Gig port   Shared
Control plane    67 KB                       N/A
SPAN             152 KB                      N/A
Default class    100 KB                      14.6 MB

10 Gig port:
Traffic type     Dedicated per 10 Gig port   Shared
Control plane    64 KB                       N/A
SPAN             38 KB                       N/A
Default class    100 KB                      13.2 MB
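
As a rough cross-check against the 16 MB ingress buffer per UPC (our arithmetic, not from the slide): in 10 Gig mode, 12 ports x (64 + 38 + 100) KB of dedicated buffer plus the 13.2 MB shared pool comes to about 15.6 MB; in 40 Gig mode, 3 ports x (67 + 152 + 100) KB plus the 14.6 MB shared pool comes to about 15.6 MB, close to the 16 MB total.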

Nexus 6004 FCoE Buffer Allocation

40 Gig port:
Distance   300 m     3 km      10 km
FCoE       187 KB    444 KB    1.1 MB

10 Gig port:
Distance   300 m     3 km      10 km
FCoE       165 KB    422 KB    1 MB

- By default, no buffer is allocated to the FCoE class
- The buffer allocation for FCoE is a function of port speed and distance
- There is enough buffer to support one FCoE port per UPC over 100 km; FCoE over 100 km requires 9.6 MB dedicated to a single port
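
On this platform family, per-class no-drop buffering for long-distance FCoE is tuned in the network-qos policy. A hedged sketch of that idea follows; the policy name is invented and the byte values and thresholds are placeholders, not the table values above:

policy-map type network-qos fcoe-long-distance
  class type network-qos class-fcoe
    pause no-drop buffer-size 152640 pause-threshold 103360 resume-threshold 83520
system qos
  service-policy type network-qos fcoe-long-distance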

Nexus 6004 Ingress Buffer Management
- By default, all of the shared buffer is allocated to the default class
- A CLI is provided to change the amount of shared buffer; the remaining buffer becomes dedicated and is divided between the ports:
  hardware shared-buffer-size <0-14.2 MB>
- A drop class can take at most half of the shared buffer
- The queue-limit command changes the fixed (dedicated) ingress buffer allocation for a class
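
A minimal sketch of a per-class queue limit set in a network-qos policy (the policy name and byte value are illustrative, assuming the standard Nexus 5000/6000 network-qos model):

policy-map type network-qos custom-nq
  class type network-qos class-default
    queue-limit 100000 bytes
system qos
  service-policy type network-qos custom-nq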

Nexus 6004 Egress Buffer Management
The egress buffer on each UPC is divided into unicast pools (pool 1 through pool 4) and multicast pools (pool 1 and pool 2).

Nexus 6004 Egress Buffer Management, 10 Gig ports

Fabric mode 10G:
Traffic type   Dedicated per 10 Gig port   Shared
Unicast        212 KB                      None
Multicast      None                        6.3 MB

Fabric mode 40G:
Traffic type   Dedicated per 10 Gig port   Shared
Unicast        212 KB                      None
Multicast      None                        6 MB

Nexus 6004 Egress Buffer Management, 40 Gig ports

Fabric mode 10G:
Traffic type   Dedicated per 40 Gig port   Shared
Unicast        672 KB                      None
Multicast      None                        6.1 MB

Fabric mode 40G:
Traffic type   Dedicated per 40 Gig port   Shared
Unicast        795 KB                      None
Multicast      None                        6.1 MB

Nexus 6004 Ingress Queuing
Each ingress UPC sends 224 Gig into the switch fabric and maintains:
- 4K unicast VoQs: per egress port, per class of service
- 8K multicast VoQs: per egress-UPC fanout
- 32 SPAN VoQs: per SPAN session
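
The unicast VoQ count is consistent with the port scale (our arithmetic): a fully broken-out Nexus 6004 offers 384 x 10G ports, and 384 egress ports x 8 classes of service = 3072 queues, which fits within the 4K unicast VoQs per ingress UPC.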

Nexus 6004 Egress Queuing
Each egress UPC receives 448 Gig from the switch fabric and maintains, for its 12 ports (P1 through P12):
- Unicast queues: 12 ports x 8 queues = 96
- Multicast queues: 12 ports x 8 queues = 96

Nexus 6004 QoS Flow

Ingress UPC:
- Traffic classification: trust CoS/DSCP, or match L2/L3/L4 information with ACLs
- Ingress CoS/DSCP marking and ingress policing
- MTU checking: truncate or drop packets if the MTU is violated
- Per-class buffer usage monitoring; if buffer usage crosses the threshold, tail drop for drop classes, or a pause signal asserted to the MAC for no-drop system classes
- VoQs for unicast (8 per egress port), plus multicast queues and SPAN queues

Switch fabric: carries the PAUSE ON/OFF signaling back toward ingress.

Egress UPC:
- Proxy queues and egress queues, unicast and multicast
- ECN marking and egress policing
- Egress scheduling: strict priority + DWRR

Nexus 6004 QoS Features
- 8 classes of service: 2 reserved for control traffic, 6 for data traffic
- Traffic classification by DSCP/CoS/ACL
- Strict priority queuing and DWRR
- DCBX (802.1Qaz)
- Packet marking: DSCP/CoS/ECN
- Ingress and egress policing: 4K policers per ASIC
- No-drop system classes
- Flexible buffer management
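
A minimal classification sketch mapping a DSCP match to a qos-group (class, policy and group names are illustrative, following the standard NX-OS MQC model used on these platforms):

class-map type qos match-any voice
  match dscp 46
policy-map type qos classify-in
  class voice
    set qos-group 5
system qos
  service-policy type qos input classify-in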

Nexus 6004 ACL Table Scaling
- 4K TCAM entries per ASIC. TCAM entries are shared when the same policy is applied to multiple interfaces or VLANs
- There are 24 L4Ops for ingress and 24 L4Ops for egress: 16 L4Ops are used for TCP/UDP port ranges (each protocol can use up to 12 L4Ops) and 8 are used for TCP flags
- Software can choose to automatically expand an ACL entry containing UDP/TCP port ranges into multiple ACEs in the TCAM

Nexus 6004 ACL Entry Sharing
- When the same ACL policy (a security ACL such as a PACL, VACL or RACL, or a QoS ACL) is applied to multiple interfaces or VLANs, only one copy is stored in the TCAM; the same ACL rules are shared by those interfaces and VLANs
- Each ACL policy has a label. By assigning the same label to multiple interfaces and VLANs, the same TCAM rules can be applied to all of them

For example, eth1/10, eth1/11 and eth1/12 below all carry the same label (xyz in the slide) and share one TCAM copy of ip-list-1:

ip access-list ip-list-1
  10 permit ip 100.1.1.0/24 200.1.1.0/24
  20 permit ip 100.1.2.0/24 200.1.2.0/24
  30 permit tcp 100.1.3.0/24 200.1.3.0/24 range 100 2000

interface Ethernet1/10
  ip port access-group ip-list-1 in
interface Ethernet1/11
  ip port access-group ip-list-1 in
interface Ethernet1/12
  ip port access-group ip-list-1 in

Nexus 6004 TCAM Partition
4096 TCAM entries, carved into regions. Default partition: VACL 1024, IFACL 1152, QoS 448, RBACL 1024, SPAN 64, control traffic (sup) 256. The same regions can be resized in a user-configured TCAM partition.

Nexus 6000 Multicast

Nexus 6004 Multicast Highlights

High performance:
- Line-rate L2 and L3 multicast throughput with all frame sizes
- Low latency at scale

High scalability:
- 64K mroute table
- Large buffer for burst absorption

Optimized multicast replication:
- Fabric replication and egress replication

Enhanced features:
- IP-based forwarding for IGMP snooping
- PIM-BiDir support
- Flow-based hashing for multicast over PortChannel
- Better traffic visibility

IP-Based Forwarding for IGMP Snooping
- Source IP and group address based forwarding for IGMPv3 snooping, even when the N6k is an L2 switch
- Traffic can be filtered based on source IP for IGMPv3
- No concern about overlapping multicast MAC addresses

Example: host H1 (1.1.1.10) sends an IGMPv3 report (to 224.0.0.22) for group 224.1.1.1 with include source 10.0.0.1.
- Multicast MAC based forwarding entry: Vlan 10, 0100.5E01.0101 -> eth1/1
- IP based forwarding entry: Vlan 10, (10.0.0.1, 224.1.1.1) -> eth1/1
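
Snooping state like the entries above can be inspected with the standard NX-OS show command (output format varies by release; the VLAN is an example):

show ip igmp snooping groups vlan 10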

Nexus 6004 Multicast Packet Replication
- Fabric replication: one copy is sent across the fabric to each egress UPC that has at least one receiver (for example, a UPC with receivers in Vlan 10 and Vlan 20 gets a single copy)
- Egress replication: the egress UPC replicates packets locally to each port, and sends multiple copies to the same port if needed
- Egress buffering absorbs microbursts and oversubscription
- Under congestion, multicast packets are dropped at egress on a per-port, per-queue basis

Multicast Replication and VoQ
- Optimized replication: fabric replication plus egress replication
- 8K multicast VoQs at ingress to avoid head-of-line blocking (HOLB)
- The ingress UPC tracks the fanout of egress UPCs; packets with different egress UPC fanouts are assigned to different VoQs (for example, 224.1.1.1 destined for egress UPCs 1 and 2 and 224.1.1.2 destined for egress UPCs 2 and 3 use separate multicast VoQs)

Multicast Hashing over PortChannel
- True flow-based hashing for multi-destination traffic
- Traffic is replicated to every egress UPC (the "BigSur" ASIC) where a PortChannel member resides
- Each egress UPC runs the hash calculation and chooses one egress port for a given flow; the remaining UPCs drop the packet (for example, egress UPC 1 selects port 1 while egress UPC 2 selects port 3)
- Extra bandwidth from the fabric to the egress UPCs handles the extra multi-destination packets

Summary

Nexus 6000 Summary: Key Enhancements

Performance: L2 and L3 line rate at 10 Gig and 40 Gig, 1 us latency, 1K-way ECMP, line-rate SPAN
Scalability: 256K MAC/IP host routes, 32K LPM routes, 16K bridge domains, 4K VRF, 64K RPF, 64K mroute table
Buffering: 25 MB shared buffer per 3 QSFP ports
Multicast: optimized multicast replication, flow-based hashing, IP (S,G) / (*,G) lookup even for L2 multicast, egress MIDX translation
SPAN: line-rate SPAN, 31 SPAN sessions, SPAN on drops, SPAN on high latency
Analytics: sampled NetFlow, buffer monitoring, microburst monitoring, latency monitoring, SPAN on drop, SPAN on high latency
