Cisco CRS-3 Carrier Routing System MultiChassis Overview Session ID BRKARC-3002
- Susan Evans
3 Agenda: CRS (Carrier Routing System) Overview; CRS-3 Overview; Overview of CRS Data Path; Overview of CRS Switch Fabric; CRS MultiChassis Specifics; CRS MultiChassis Switch Fabric Details; CRS MultiChassis Control Ethernet; CRS MultiChassis Configuration; CRS MultiChassis Troubleshooting (Control Ethernet & Fabric)
4 Agenda continued - appendices for reference: Appendix: typical migration steps, SC to MC; Appendix A: Case Study 1 - offline SC to MC migration; Appendix B: Case Study 2 - online SC to MC migration; Appendix C: FCC physical installation notes
5 CRS (Carrier Routing System) Overview: A fully modular and distributed routing system. CRS-1 (40G) and CRS-3 (140G) systems; 4, 8 & 16 slot standalone systems; 8 & 16 slot MultiChassis systems (LCCs + FCCs)
6 CRS-1 & CRS-3 Hardware Introduction: 16-slot Line Card Chassis with integrated rack (standalone or LCC); 8-slot Line Card Chassis, rack mountable; Fabric Card Chassis (FCC); external interfaces (PLIM); redundant Route Processors. Front and rear access required; MSC and fabric access from the rear.
7 CRS-3 Overview: New 140G fabric and line cards; increased throughput and scale. Backwards compatible with CRS-1 hardware. 4, 8 & 16 slot standalone systems; 8 & 16 slot MultiChassis systems (LCCs + FCCs). Can leverage 40G line cards & MSCs.
8 CRS-3 Product Family - foundation for the IP NGN Core: Unprecedented service flexibility. Continuous system operation: true telco-grade OS (IOS-XR); in-service hardware and software upgrades; hitless upgrade to MultiChassis. Unparalleled system longevity: MultiChassis fabric scales to 322 Tbps; investment protection via common forwarding engines and I/O modules; modular hardware and software. CRS-4/S: 1.12 Tbps; CRS-8/S: 2.24 Tbps; CRS-16/S: 4.48 Tbps; CRS-MC: 4.48 Tbps to 322 Tbps.
9 CRS-3 Line Card Architecture: high-level LC architecture; MSC architecture - PSEs (ingress and egress), IngressQ, FabricQ, EgressQ; switch fabric architecture; Control Ethernet
10 CRS-3 System Overview: Same architecture as CRS-1 but at 140G. Compatible with all CRS-1 chassis sizes, standalone or MultiChassis. [Diagram: PLIM (100G PHY/MAC/PLA) feeding an MSC/FP with PSEs, IngressQ, EgressQ, two FabricQs and an Intel CPU subsystem, connected to 8 fabric planes of S1/S2/S3 stages]
11 CRS-3 Line Card Forwarding: Ingress path - input shaping from PLIM (160G, 2x100G); PSE (forwarding lookup, input features); IngressQ (queuing for fabric, cell segmentation); 141G to fabric. Egress path - 113G per FabricQ from fabric (cell reassembly, multicast replication, 100G out of each FabricQ); PSE (160G, output features); EgressQ (output queuing); 120G (2x80G) to PLIM.
12 CRS-3 New Hardware (board - supporting platforms):
CRS-4 Fabric - Single Chassis CRS-4
CRS-8 Fabric - Single Chassis CRS-8
CRS-16 Fabric - Single Chassis CRS-16
CRS-16 S13 Fabric - MultiChassis Line Card Chassis
FCC S2 Fabric Board - MultiChassis Fabric Card Chassis
CRS-3 MSC 140G - All CRS-3 platforms
CRS-3 14x10GE PLIM - All CRS-3 platforms
CRS-3 20x10GE PLIM - All CRS-3 platforms
CRS-3 1x100GE PLIM - All CRS-3 platforms
Note: CRS-3 fabric is backward compatible with CRS-1 40G MSCs/PLIMs; thus a CRS-3 MultiChassis will support CRS-1 line card chassis. CRS-3 PLIMs are compatible only with CRS-3 MSCs and vice versa.
13 CRS-3 MSC140 vs. FP140 (feature: MSC140 - FP140 without licenses - FP140 with appropriate licenses):
Queues: 64K/slot - 8/port - 8/port
L3 Interfaces: 12K/slot - 250/slot - 250/slot
Multichassis: Yes - No - Yes
Netflow Sampling: <1:1500 - >1:1500 - <1:1500
L2/L3VPN: 2K+ VRFs/LC - No VRFs - 250 VRFs + L2VPN connections per LC
Route Scale: 4M IPv4/2M IPv6 - 1M IPv4/500K IPv6 - 4M IPv4/2M IPv6
TE Scale: >3K midpoints - 3K midpoints + heads + tails - >3K midpoints
Advanced Features: Tunneling, LI - No Tunneling or LI - Tunneling, LI
14 CRS System Attributes Comparison (MSC40/CRS-1 vs. MSC140/CRS-3):
Bandwidth: 40 Gbps vs. 140 Gbps
Max packets per second: 80 Mpps vs. 125 Mpps
FIB scalability: 2M IPv4, 1M IPv6 vs. 4M IPv4, 2M IPv6
BW modes: 20/40 Gbps vs. 40/140 Gbps
Queues/Groups/Ports: 8k/2k/768 vs. 64k/16k/128
Supported fabric cards: 40G and 140G fabric vs. 140G fabric only
PLIMs: 8x10GE, 16xOC48, 4xOC192, 1xOC768, Modular 6xSPA vs. 14x10GE, 20x10GE (oversubscribed), 1x100GE, SONET and Modular PLIMs (future)
Power: LC (MSC + PLIM) 530W vs. 600W; switch card (4/8/16 slot) 102/185/206W vs. 49/90/94W; total (4/8/16 slot) 3.5/6.6/11.5 kW vs. 4/7.5/14 kW
15 CRS-3 Full Rack MultiChassis:
Fabric Card Chassis (FCC) - Front: 24 fabric cards, 2 shelf controllers, control Ethernet connections. Back: 24 Optical Interconnect Modules (OIM), 2 OIM LED modules, array cable connections (up to 100m). Mid-plane design; redundant fans/power.
Line Card Chassis (LCC) - supports CRS-16. Front: 16 interface slots, 2 RP slots, 2 controller slots. Back: 16 LC slots, 8 fabric card slots. Optical backplane; redundant fans/power; 140G per slot.
Each line card shelf adds 4.48 Tbps to the MultiChassis system; the fabric architecture can support 72 LC chassis and 8 fabric chassis (322 Tbps). Hitless single chassis to MultiChassis upgrade.
16 Overview of CRS Data Path: high-level view; switch fabric and fabric attributes; basic fabric building blocks; CRS-1 and CRS-3 fabric bandwidth comparison; high-level data path, ingress to egress via fabric
17 Cisco CRS Switch Fabric - High Level: Ingress side - each line card (and RP) is connected to all fabric planes through the S1 stage. Egress side - each line card (and RP) is connected to all fabric planes through the S3 stage. Line cards chop packets into Cisco Cells for transport across the fabric and reassemble packets from Cisco Cells for egress packet processing; the destination line card (or RP) is prepended to the cell. Switch fabric: three-stage (S1, S2, S3) Benes topology; multicast replication in the fabric; 2 levels of priority through the fabric - HP (low latency path) and LP (best effort traffic).
18 Cisco CRS Switch Fabric Attributes: Queuing - discrete queues for control and data plane; integrated congestion & flow control. Multicast - full multicast capability in the fabric; 1M fabric mcast groups; mcast cells are dropped if the fabric is congested; mcast cells are queued separately from unicast. Hardware capacity - the fabric implements a 1296 x 1296 buffered non-blocking switch, providing capacity for 72 LCCs (72 LCC x 16 line cards = 1152, plus 72 LCC x 2 route processors = 144, for 1296 clients). Redundancy - the fabric is implemented across 8 fabric planes; loss of any one fabric plane does not reduce capacity; loss of additional planes reduces capacity by 1/7 per plane.
19 CRS-3 Basic Fabric Building Blocks: IngressQ segments packets into cells (160G in, 141G to fabric); the fabric routes cells to the egress card across 8 planes of S1/S2/S3 stages; FabricQ reassembles packets from cells (113G per FabricQ from fabric, 100G out of each).
20 CRS Fabric Bandwidth Comparison:
CRS-1 (MSC40): 2.5G x 4 = 10 Gbps ingress per plane; 10G x 8 planes = 80 Gbps total ingress. 2.5G x 8 = 20 Gbps egress per plane; 20G x 8 planes = 160 Gbps total egress.
CRS-3 (MSC140): 5G x 5 = 25 Gbps ingress per plane; 25G x 8 planes = 200 Gbps total ingress. 5G x 8 = 40 Gbps egress per plane; 40G x 8 planes = 320 Gbps total egress.
21 Fabric Bandwidth Calculations:
CRS-3: IngressQ to S1 = 200 Gbps (raw BW) x 8b/10b (encoding) x 120/136 (cell overhead) = 141 Gbps. S3 to FabricQ = 320 Gbps (raw BW) x 8b/10b (encoding) x 120/136 (cell overhead) = 225 Gbps, or 113 Gbps per FabricQ.
CRS-1: IngressQ to S1 = 80 Gbps (raw BW) x 8b/10b (encoding) x 120/136 (cell overhead) = 56 Gbps. S3 to FabricQ = 160 Gbps (raw BW) x 8b/10b (encoding) x 120/136 (cell overhead) = 112 Gbps, or 56 Gbps per FabricQ.
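The arithmetic on this slide can be reproduced with a short sketch. This is an illustrative calculation only, applying the slide's two overhead factors (8b/10b line encoding and the 120-byte payload of a 136-byte Cisco Cell) to the raw link bandwidths; the function name is ours, not Cisco's.

```python
# Sketch of the slide's fabric bandwidth arithmetic: raw link bandwidth is
# reduced by 8b/10b line encoding (x 8/10) and by cell overhead
# (120 payload bytes per 136-byte Cisco Cell).

def effective_bw(raw_gbps: float) -> float:
    """Net fabric bandwidth after 8b/10b encoding and cell tax."""
    return raw_gbps * (8 / 10) * (120 / 136)

# CRS-3: IngressQ -> S1 is 200 Gbps raw, S3 -> FabricQ is 320 Gbps raw
crs3_ingress = effective_bw(200)   # ~141 Gbps
crs3_egress = effective_bw(320)    # ~225 Gbps (113 per FabricQ)

# CRS-1: 80 Gbps raw ingress, 160 Gbps raw egress
crs1_ingress = effective_bw(80)    # ~56 Gbps
crs1_egress = effective_bw(160)    # ~112 Gbps

print(int(crs3_ingress), int(crs3_egress), int(crs1_ingress), int(crs1_egress))
```

Truncating to whole Gbps reproduces the figures quoted on the slide (141, 225, 56, 112).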
22 CRS Fabric Bandwidths and Links Between MSC40 and MSC140:
To fabric (ingress): the MSC40 IngressQ has 32 TX links for 40G = 32 links x 2.5 Gbps x 8/10 (coding) x 120/136 (cell tax) = 56 Gbps across 8 planes. Scaling IngressQ to 140G requires 40 IngressQ-to-S1 links = 40 links x 5 Gbps x 8/10 (coding) x 120/136 (cell tax) = 141 Gbps across 8 planes.
From fabric (egress): the MSC40 has 2 FabricQ ASICs, each with 32 RX links = 64 links x 2.5 Gbps x 8/10 (coding) x 120/136 (cell tax) = 112 Gbps across 8 planes, split across the 2 FabricQ ASICs on the LC. Egress forwarding capacity is only 40 Gbps, so the LC must backpressure when overloaded. On the MSC140, S3 to FabricQ = 64 links x 5 Gbps x 8/10 (coding) x 120/136 (cell tax) = 225 Gbps across 8 planes, or 113 Gbps per FabricQ.
23 IngressQ - To-Fabric Interface: Every MSC and RP transmits data to the fabric via an IngressQ ASIC, with 3072 fabric destinations at 2 priorities plus 2 multicast queues. An IngressQ ASIC has 48 TX links, of which 40 are used, for 200 Gbps raw / 141 Gbps net bandwidth. TX links physically connect to each of the fabric planes.
24 Ingress LC - IngressQ to S1: With 8 fabric planes and 6 connections to the S1 ASICs per plane, each CRS-3 LC has 48 connections; 40 of these links are used for carrying data traffic. 40 x 5Gb = 200Gb raw; 8b/10b encoding reduces this to 160Gb effective BW, and cell tax means the final result is ~140Gb.
25 S1/S2/S3 ASICs: the fabric switching ASIC, with 5 Gbps SerDes and a 2.5 Gbps backwards-compatibility mode; HP/LP unicast and HP/LP multicast queues.
26 S1 -> S2 -> S3 Switching: The ingress line card selects which FabricQ ASIC on the egress line card a packet should be sent to; the selection is encoded in the cell header when the packet is converted into Cisco Cells. S1 switches cell streams, load-balancing across the three S2 ASICs. S2 queues and routes the cell to the correct S3 based on the cell header. S3 queues and routes the cell to the correct FabricQ based on the cell header. There are 72 links from the S1 stage to the 3 S2 ASICs (24 links to each ASIC) and 144 links from the S2 stage to the 2 S3 ASICs (72 links to each ASIC).
27 Switch Fabric Multicast Replication: CRS provides efficient multicast replication via 3 operations - S2 SEAs can replicate cells to registered S3s (via FGID); S3 SEAs can replicate cells to registered FabricQs (via FGID); the egress SPP can replicate packets for each output port. 1 million Fabric Group IDs. Sequence: program the fabric for mcast and look up the FGID; S1 switches to all S2s; mcast replication at S2 based on FGID; replication at S3 based on FGID.
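The FGID-driven fan-out described above can be sketched as a small lookup model. This is purely illustrative (not Cisco code): the table shape, ASIC names, and FabricQ identifiers are invented for the example; only the two-level replication pattern (S2 to registered S3s, then S3 to registered FabricQs) follows the slide.

```python
# Illustrative model of FGID-driven multicast replication in the fabric.
# An FGID maps to the S3 ASICs and FabricQs registered for the group;
# each stage fans a cell out only to the registered members below it.

fgid_table = {
    # fgid: {s3 asic: [registered FabricQs reachable via that S3]}
    100: {"s3_lcc0": ["fq_lc3_0", "fq_lc7_1"],
          "s3_lcc1": ["fq_lc2_0"]},
}

def replicate(fgid: int) -> list[str]:
    """Replicate at S2 (to registered S3s), then at S3 (to registered FabricQs)."""
    copies = []
    for s3, fabricqs in fgid_table[fgid].items():   # replication at S2, per FGID
        for fq in fabricqs:                         # replication at S3, per FGID
            copies.append(fq)
    return copies

print(replicate(100))  # one cell copy per registered FabricQ
```

Final per-port replication would then happen on the egress packet processor, outside the fabric.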
28 Egress CRS-3 Fabric Plane: On egress, in the CRS-3 LCC chassis, there are 2 S3 ASICs per fabric plane (the same S13 ASIC acts as S1 or S3 for different fabric stages). In the 16-slot chassis, each S3 has 72 of its 144 TX links in service; each MSC receives 8 S3 TX links per S3 pair, and each RP receives 4 S3 TX links per S3 pair.
29 FabricQ - CRS-3 From-Fabric Interface: Every MSC and RP receives data from the fabric via one or more FabricQ ASICs - MSCs have two FabricQ ASICs, RPs have one. A FabricQ ASIC has 32 RX links: 160 Gbps raw, 113 Gbps net bandwidth; the MSC140/FP140 uses 2 FabricQs for 225 Gbps total. RX links physically connect to each of the fabric planes. (ASIC codenames: PLA "Beluga", EgressQ "Tor", PSE "Pogo", IngressQ "Seal", FabricQ "Crab".)
30 Egress LC - S3 to FabricQ (CRS-3): In the 16-slot chassis there are 2 S3 ASICs per plane connected to each MSC/RP, and 2 FabricQ ASICs per MSC, each with 4 RX links per S3 ASIC - 8 connections per line card per fabric plane, 64 connections in total. 64 x 5Gb = 320Gb; 8b/10b encoding reduces BW to 256Gb, and cell tax reduces this down to 225Gb, split across the 2 FabricQ ASICs on the LC.
31 Overview of CRS Switch Fabric: fabric stages defined; fabric planes; fabric resilience
32 Cisco CRS-1 Switch Fabric Overview: a high-speed path for transit packets and IPC. 8 independent fabric planes; 2.5x speedup through the fabric; support for 72 chassis & 1296 RP/MSC clients.
33 Cisco CRS-1 Switch Fabric Overview - 3 Stages with Priority: a 3-stage (S1, S2, S3) switching fabric. High and low priority cells - set on control traffic by default, set on transit packets via CLI. Vital bit for IPC.
34 CRS-3 Switch Fabric - Simplified Form: The 8 fabric planes provide 200Gb of ingress BW, which reduces to ~140Gb effective capacity due to 8b/10b encoding and cell tax overhead. On egress, each line card has 2 S3 ports per plane at 20Gb: 40Gb per fabric plane x 8 planes = 320Gb; 8b/10b encoding reduces BW to 256Gb, and cell tax reduces it down to 225Gb. The egress PSE ASIC forwarding capacity is ~145Gb, so the LC must backpressure when overloaded.
35 Decisions at Each Stage in a 16-slot CRS-3, for each of the 8 planes: S1 - send to any S2; no need to look at the header. S2 - look at the cell header; send to an S3 based on the destination chassis. S3 - look at the cell header; send to the specific LC and FabricQ.
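The three per-stage decisions above can be sketched as follows. This is an illustrative model only: the cell-header field names and ASIC identifiers are invented for the example and are not the real Cisco Cell layout; only the decision logic at each stage follows the slide.

```python
import random

# Sketch of the three forwarding decisions, one per fabric stage.

def s1_forward(cell: dict, s2_asics: list[str]) -> str:
    # S1: spray to any S2 in the plane; no header inspection needed.
    return random.choice(s2_asics)

def s2_forward(cell: dict, s3_by_chassis: dict[int, str]) -> str:
    # S2: read the cell header and pick the S3 serving the destination chassis.
    return s3_by_chassis[cell["dst_chassis"]]

def s3_forward(cell: dict) -> tuple[int, int]:
    # S3: read the cell header and deliver to the exact LC slot and FabricQ.
    return (cell["dst_slot"], cell["dst_fabricq"])

cell = {"dst_chassis": 1, "dst_slot": 4, "dst_fabricq": 0}
s3 = s2_forward(cell, {0: "s3_lcc0", 1: "s3_lcc1"})
print(s3, s3_forward(cell))
```

Note how only S2 and S3 consult the header; S1's choice is pure load balancing, which is why the first stage needs no destination knowledge.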
36 CRS-3 Fabric Planes: A full fabric consists of 8 planes. Switch Element ASICs (SEAs) perform the switching operations; each plane is an independent set of SEAs. A SEA can act as the S1, S2 or S3 stage of the fabric; the S1 and S3 stages are in the same ASIC. SEAs have 72 RX and 144 TX links; the number of RX and TX links active depends on programming/stage. Cells never move between planes. Each plane has multiple S13 (S1 and S3 stage) and S2 ASICs - e.g. 2 S13 ASICs and 3 S2s, with the S13 ASIC acting as both the S1 and S3 stage on the LCC.
37 CRS Fabric Resilience and System Recovery: CRS can operate with missing/failed fabric components; the system can disable an individual link but still use the plane. BW capacity is reduced if links or entire planes are down; full capacity for Intermix traffic can be reached with 7 planes. At least 2 planes must be active for the system to work - specifically, one odd and one even numbered plane must be up. Each cell is sent on a single plane; cells carry error-correcting ECC, but there is no multi-plane reconstruction of errored cells. Removing fabric HW without shutting down the plane first will lose cells.
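The plane-availability rules above reduce to a simple check, sketched below. This is an illustrative model, not Cisco software: it encodes only the three rules from the slide - at least one odd and one even plane must be up, 7 of 8 planes still give full capacity, and each further lost plane costs roughly 1/7 of throughput.

```python
# Sketch of the fabric plane-availability rules for planes numbered 0-7.

def fabric_state(planes_up: set[int]) -> tuple[bool, float]:
    """Return (operational, fraction_of_full_capacity)."""
    has_even = any(p % 2 == 0 for p in planes_up)
    has_odd = any(p % 2 == 1 for p in planes_up)
    operational = has_even and has_odd          # need one odd AND one even plane
    capacity = min(len(planes_up), 7) / 7 if operational else 0.0
    return operational, capacity

print(fabric_state({0, 1, 2, 3, 4, 5, 6, 7}))  # all 8 planes: full capacity
print(fabric_state({0, 1, 2, 3, 4, 5, 6}))     # 7 planes: still full capacity
print(fabric_state({0, 2, 4, 6}))              # no odd plane: not operational
```

The 8th plane is therefore pure active load-sharing redundancy, which matches the SFC-placement guidance later in the deck.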
38 CRS MultiChassis Specifics: SC vs. MC (single chassis vs. MultiChassis); fabric details; MultiChassis benefits; MC building blocks; fabric plane config, cabling, and SFC placement; sample MC systems; CRS system failure and recovery
39 CRS Single Chassis vs. MultiChassis: In MultiChassis, the FCC provides the fabric switch connectivity between the different Line Card Chassis, which house the RPs, MSCs, PLIMs and DRPs. The Control Ethernet network is used by processes on different devices to communicate for functions such as system device discovery, image transfers, heartbeat messages, alarms and configuration management. Spanning Tree Protocol runs between the switch elements on the shelf controllers and on the inter-RP links to prevent loops.
40 CRS MultiChassis and the Benes Fabric: The distinct stages of the Benes fabric allow easy migration to MultiChassis. Line cards (MSC) and the S1 and S3 stages stay on the LCC; the S2 stage moves to the FCC. Array cables (up to 100m) connect the fabric stages together. Upgrading from single chassis to MultiChassis can be done in-service: the S2 stage of each plane is moved to the FCC one plane at a time. The LCC requires new fabric cards with just the S1 and S3 stages.
41 CRS Fabric Chassis (FCC): Front & rear access; mini-midplane for control network access. Front: 24 fabric cards, 2 shelf controllers. Back: 24 OIM slots, 2 fiber LED modules. Dimensions: 23.6" W x 41" D x 84" H. Power: ~9 kW DC, 11.1 kW AC. Weight: ~1550 lbs / 704 kg. Heat dissipation: BTUs
42 What benefits to expect from a CRS MultiChassis System Scale Process Placement to distribute the process load Redundancy and Reliability Performance 42
43 MultiChassis Technology Building Blocks - key points: a fabric plane can span multiple S2 fabric boards; multiple fabric planes can be supported in a single fabric chassis.
44 MultiChassis Technology - building blocks (FCC diagram)
45 Fabric Plane Configuration:
Single-module topology (full-plane configuration): each S2 SFC serves one plane; the array cables for each fabric plane are connected to a common SFC; each S13 card is connected to a single S2 card.
Multi-module topology (three-plane configuration): three S2 SFCs are used to create a plane; the array cables for each fabric plane are connected to all three cards; each S13 card is connected to 3 different S2 cards.
46 MultiChassis Cabling: 4+4 configuration - 4 CRS-16 Line Card Chassis and 4 Fabric Card Chassis. 2+1 configuration - 2 CRS-16 Line Card Chassis and 1 Fabric Card Chassis.
47 SFC (Switch Fabric Card) Placement: The CRS MultiChassis system supports placing all the OIMs and fabric cards in a single chassis or across multiple chassis; the CRS Fabric Chassis installation guide on cisco.com provides guidelines and best practices for fabric card placement. CRS MultiChassis systems support configurations with one, two, or four fabric chassis; additional chassis add fabric redundancy, not additional capacity, protecting fabric bandwidth against the loss of a single fabric chassis. The CRS fabric only needs 7 planes to run at full capacity - the 8th plane provides active load-sharing redundancy. Fabric capacity degrades with losses of additional planes (each reduces throughput by approximately 1/7), and the fabric requires at least one odd plane and one even plane to function. MultiChassis capacity is gated by the OIM hardware: up to 3 CRS-16 LCCs in single topology mode, where 8 OIM/SFC are required; up to 9 CRS-16 LCCs in multi-module topology mode, where 24 OIM/SFC are required.
48 OIM LED Module: Provided to aid operations; two OIM LED modules per fabric chassis. LEDs give a visual indication of the status of each array cable. Each tri-color LED shows 5 states: OK (green); no signal (none); misconnected (blinking red); signal fault (yellow); connect here (slow blinking green). The OIM LED module is an installation aid to help identify misconnected fiber bundles: based on the fabric configuration, a point-to-point connection map is generated by the router, and if only 1 fiber bundle is misconnected, it specifies the correct place to connect or reconnect a fabric cable.
49 Fabric Placement with Two CRS-16 LCCs - Single Topology Example: [Diagram: 2+2 and 2+1 single-topology layouts showing SFC slots 0-23, shelf controllers 0/1 and OIM LED modules 0/1 on the front and back of full-rack FCCs, color-coded by fabric plane 0-7]
50 Fabric Placement with Three CRS-16 LCCs - MultiModule Topology Example: [Diagram: single-FCC and dual-FCC configurations showing SFC slots 0-23, shelf controllers 0/1 and OIM LED modules 0/1, color-coded by fabric plane 0-7]
51 CRS-3 LC/Fabric (MultiChassis) Connectivity: [Diagram: LC IngressQ ("Seal") links to S1 on the S13 fabric card; S1-to-S2 links to the S2 fabric card; S2-to-S3 links back to the S13 fabric card; S3 links to the FabricQs on the egress LCs]
52 Failure Detection and System Recovery: The CRS system detects different types of failures and takes corrective action. Such failures include: fabric failure (board failure, cable failure); ASIC failure for fabric, RP or line card; ASIC errors such as ECC, parity or other miscellaneous errors; any single-node non-fabric failure (e.g. RP, DRP, MSC); link errors and recovery; rack failure of an LCC; rack failure of an FCC.
53 Failure Detection and System Recovery (contd.): Fabric failure - fabric board or cable failures are detected and reported by the fabric software, and corrective action is taken based on the severity of the failure:
Plane MCAST_DOWN - all unicast traffic still passes through the plane except for destinations in the rack where the failure is located. This gives the system admins an opportunity to recover the malfunctioning board or cable and then bring the plane back into full action.
Plane DOWN - if the failure of a fabric board or cables impacts the whole plane (e.g. an S2 board), the plane is brought down completely so that traffic is redirected through other planes until the failure is corrected and the plane is functional again.
Note: Extensive FMEA testing has been done on all CRS-3 boards to detect, report and take corrective action on any possible board failure.
ASIC failures and error recovery
54 CRS MultiChassis Switch Fabric Details: MC optical interconnects and cabling; MC SFC types (S13 and S2); MC fabric topologies (single-module vs. multi-module)
55 CRS Switch Fabric Interconnect - optical interconnect fiber bundle: 12 fibers per ribbon cable, 6 ribbon cables per bundle = 72 fibers per array cable. Multiple cables run between the LCC chassis and FCC chassis (100m max); array cable connections terminate on an S13 card.
56 CRS Array Cables Details: Array cables connect the FCC to the LCC; 24 array cables are required for each LCC (3 per fabric plane x 8 fabric planes). Each array cable is composed of 6 ribbon cables with 12 fibers each, and is terminated at each end with a square-keyed connector. Length variants: 10m, 15m, 20m, 25m, 30m, 40m, 50m, 60m, 70m, 80m, 90m, 100m.
57 Array Cables Turn Radius collar attachment 57
58 CRS S13 Switch Fabric Card (SFC): For CRS-1, the S13 card contains the 2 S1 and 4 S3 ASICs from the S123 card, now with parallel optical devices (PODs); the top S1 and S3 elements service line card shelf slots 0-7, and the bottom S1 and S3 elements service slots 8-15 plus the 2 RP slots. For CRS-3, the S13 card contains 2 SEA ASICs (in S1/S3 mode); all LCs and RPs are connected to both SEA ASICs, with three connections from each LC to each ASIC. Each POD terminates a fiber ribbon consisting of 12 unidirectional links, each operating at 2.5 Gbps for CRS-1 and 5 Gbps for CRS-3. The PODs perform electrical-to-optical and optical-to-electrical conversion for transmission over multimode fiber (850 nm) at distances of up to 100 meters. 6 PODs are used for the 72 fibers from the S1 ASICs; 12 PODs are used for the 144 fibers from the S3 ASICs.
59 CRS S13 SFC Card: [Diagram: two S1 ASICs, each with 3 Tx PODs terminating 3 fiber ribbons; two S3 ASICs, each with 6 Rx PODs terminating 6 fiber ribbons; plus the S13 service processor]
60 CRS S13 - Fiber to Bulkhead Mapping: [Diagram: S1 and S3 fibers mapped to the bulkhead connectors]
61 Fiber mapping 61
62 CRS S2 Switch Fabric Card (SFC): The S2 card consists of a PSU, service processor and 3 S2 sub-boards - 6 S2 ASICs and 54 parallel optical devices (PODs) per card for CRS-1, or 3 S2 ASICs and 54 PODs per card for CRS-3. Each POD terminates 1 fiber ribbon (containing 12 fibers): 6 PODs per sub-board are used for the 72 fibers from the S1 ASICs, and 12 PODs per sub-board are used for the 144 fibers to the S3 ASICs.
63 Fabric Chassis S2 Optical Interface Module (OIM): A passive device providing a fiber cross-connect function; the OIM distributes the fibers within each bundle to the S2 ASICs. 9 array cables per single-wide OIM provide connectivity for up to 3 LCC chassis. Larger chassis deployments require the cabling layout to be switched from vertical to horizontal. Note: the OIM must be installed before S2 card insertion.
64 CRS S13 - S2 Card Interconnect for MC Single-Module (vertical) Cabling: [Diagram: S13 cards in racks 0, 1 and 2, each connecting 6 Rx-POD and 12 Tx-POD fiber ribbons to the S2 ASICs and S2 service processor on one S2 card]
65 CRS MC Optical Interface Module: A single-wide Optical Interface Module is mated to one S2 Switch Fabric Card. This configuration can support up to 3 chassis for single-module (vertical) cabling.
66 CRS MC Fabric Topology Single Module Mode LCC0 Chassis S13 Fabric Slots A0 A1 A2 LCC0 - Plane 0 rear view left-to-right FCC Chassis S2 Fabric Slots FCC - Plane 0 rear view right-to-left LCC0 LCC1 LCC2 66
67 CRS MC Fabric Topology A0 A1 A2 Multi Module Mode LCC0 Chassis fabric Slots LCC0 - Plane 0 rear view left-to-right FCC Chassis Slots FCC - Plane 0 rear view right-to-left LCC0 LCC1 LCC2 LCC3 LCC4 LCC5 LCC6 LCC7 LCC8 67
68 CRS Optical Interface Module Fiber Distribution (single-wide OIM): Individual fiber connections are wired to all S2 ASICs. 3 TX fiber ribbons (36 links in total) connect each of the S1 ASICs in the LCC to the S2 ASICs in the FCC: 36 links go to the CRS-1 S2 board, with 6 links to each of the 6 S2 ASICs on the board; 36 links go to the CRS-3 S2 board, with 12 links to each of the 3 S2 ASICs on the board.
69 CRS Optical Interface Module Fiber Distribution (single-wide OIM): Each S2 ASIC in a CRS-1 FCC has 72 TX fiber links; each S2 ASIC in a CRS-3 FCC has 144 TX fiber links. 6 links connect to each of the S3 ASICs, with 144 S2-to-S3 links per line card shelf per plane; this configuration can service 12 S3 ASICs per plane.
70 CRS OIM (horizontal cabling): For up to 9 chassis, a single fabric plane spans 3 S2 SFCs, with cabling arranged across OIMs serving LC racks 0 through 8.
71 CRS MC Fabric Chassis - Cabling: Connection map - based on the configuration, a point-to-point connection map is generated by the router. LED installation aid - flags misconnected fiber bundles; if only 1 fiber bundle is misconnected, it identifies the correct plug hole (with some amount of persistence). States on the tri-color LED: OK (green); no signal (none); misconnected one cable (blinking red); misconnected more than one cable (red); signal fault (yellow); "connect here" (slow blinking green), which corresponds to the blinking red cable.
72 CRS MC 2+1 System (vertically) cabled 8 S13 boards 8 S2 boards & OIMs LCC 0 FCC LCC 1 72
73 CRS MultiChassis Fabric Data Path (line card -> LCC midplane -> S13 card -> fiber bundle -> OIM -> FCC S2 card and back): 1. Data is segmented; cells are distributed over the 8 planes (IngressQ). 2. Load balance to the available S2s in the plane (S1). 3. Switch the cell to the correct S3; multicast is replicated here (S2). 4. Switch the cell to the FabricQ (S3). 5. Data is reassembled into packets (FabricQ).
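Steps 1 and 5 of this data path - segmentation into cells on ingress and in-order reassembly on egress - can be sketched with a toy model. The cell size and the (plane, sequence, payload) tuple are illustrative choices for the example, not the real Cisco Cell format; only the spray-across-8-planes / reassemble-by-sequence pattern follows the slide.

```python
# Toy model of cell segmentation, plane distribution, and reassembly.
CELL_PAYLOAD = 120  # bytes of packet data per cell (cf. the 120/136 cell tax)
PLANES = 8

def segment(packet: bytes) -> list[tuple[int, int, bytes]]:
    """Chop a packet into cells and spray them round-robin over the planes."""
    cells = []
    for seq, off in enumerate(range(0, len(packet), CELL_PAYLOAD)):
        plane = seq % PLANES  # each cell travels on exactly one plane
        cells.append((plane, seq, packet[off:off + CELL_PAYLOAD]))
    return cells

def reassemble(cells: list[tuple[int, int, bytes]]) -> bytes:
    """FabricQ-style reassembly: order by sequence, regardless of arrival plane."""
    return b"".join(payload for _, _, payload in sorted(cells, key=lambda c: c[1]))

pkt = bytes(range(256)) * 4           # a 1024-byte "packet"
cells = segment(pkt)
assert reassemble(cells) == pkt       # round-trips losslessly
print(len(cells), "cells")            # 1024 bytes / 120-byte cells -> 9 cells
```

Because each cell rides a single plane and carries a sequence number, reassembly works even when planes deliver at different rates, which is why a failed plane loses only the cells in flight on it.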
74 Ingress CRS 16-slot LCC Fabric Plane: Each CRS-3 LC has 6 links to the S1 ASICs per plane, for a total of 48 links across 8 planes; each TX link can carry 5G of raw traffic. Each CRS-1 LC has 4 connections to the S1 ASICs per fabric plane (8 planes = 32 links); each TX link can carry 2.5G of raw traffic. Each CRS-1 RP has 2 connections, to the lower S1 ASICs of the fabric plane only (8 planes = 16 links), also at 2.5G.
75 CRS MC Fabric Terminology: Link - a data link between two fabric stages; each link has two link ports, a transmitter and a receiver. Bundle - the cable between the LC and fabric shelves, with each end terminating at a bundle port; also known as an array cable. Link port types: ingressqtx, s1rx, s1tx, s2rx, s2tx, s3rx, s3tx and fabricqrx. Link port interconnects: ingressqtx-s1rx, s1tx-s2rx, s2tx-s3rx, s3tx-fabricqrx.
76 CRS MC Fabric Topology Select from LCC0 Plane 0 A0 Remove Dust Caps Connect one end into Bundle A0 76
77 CRS MC Fabric Topology Connect LCC0 Plane 0 Bundles A0, A1, A2 A0 A1 A2 77
78 CRS MC Fabric Topology Connect bundle other end on FCC0 Plane0 A0 A1 A2 78
79 CRS MultiChassis Control Ethernet: MC control Ethernet functions; MC control Ethernet HW; integrated shelf controller; MC control Ethernet connectivity
80 2LCC + 1FCC MC Component Connectivity via SC: [Diagram: two LCCs (LCC0, LCC1) and one FCC, each with SC-GE-22 cards; connections run to the S13 cards (rear) on each LCC - 24 cables to each LC chassis - and to the OIMs (rear) on the FCC (12 x S2 per side); RP0/RP1 in each LCC, with a stacking link between the SC-GE-22 cards; power supplies, air intake and air exhaust shown per chassis]
81 CRS-1 MultiChassis Control Ethernet: All communication from the line card RPs to the integrated switch is over the Control Ethernet; the integrated switch is not connected to the fabric. The Control Ethernet is used for many purposes: system boot, node availability (heartbeat) checks, and all communication from the LCC to the FCC. The Control Ethernet is redundant and must be connected in a fully meshed configuration to all active and standby RPs and SCs: a 2+1 system requires 9 cables (8 RP to SC-GE and 1 SC-GE to SC-GE); a 2+2 system requires 14 cables (8 RP to SC-GE and 6 SC-GE to SC-GE); a 2+4 system requires 36 cables (8 RP to SC-GE and 28 SC-GE to SC-GE). The Control Ethernet uses Spanning Tree (STP) to determine which paths to use for communication.
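The cable counts above follow a simple pattern that can be sketched as below. This is an inferred model, not an official formula: it assumes a fixed 8 RP-to-SC-GE links (as each count on the slide states) plus a full mesh among the two shelf controllers of every FCC, which reproduces the 2+1 and 2+4 figures.

```python
from math import comb

# Rough model of MC control-Ethernet cabling for a 2-LCC system with N FCCs:
# 8 fixed RP-to-SC-GE links, plus a full mesh among all FCC shelf controllers.

def control_enet_cables(num_fccs: int) -> int:
    rp_links = 8                        # RP to SC-GE links (per the slide)
    sc_links = comb(2 * num_fccs, 2)    # full mesh among 2 SCs per FCC
    return rp_links + sc_links

print(control_enet_cables(1))  # 2+1 system: 8 + 1  = 9
print(control_enet_cables(4))  # 2+4 system: 8 + 28 = 36
```

The full-mesh term grows quadratically, which is why the SC-to-SC cabling dominates as fabric chassis are added.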
82 Integrated Shelf Controller: The fabric chassis houses 2 Shelf Controllers (SC) acting as primary and secondary. They provide local management of the fabric chassis components: boot and initialization of the SFCs, the optical interface module LED (OIM-LED) card, alarms, power supplies, and fans. The integrated GE Ethernet switch interface provides the out-of-band inter-chassis control network; full mesh connectivity is required for the control Ethernet. Two SC-22GE cards are provided for redundancy. Product ID: CRS-FCC-SC-22GE.
83 2+1 MC External GE Connections: [Diagram: RP0/RP1 in LCC0 and LCC1 with GE links to SC-GE-22 cards SC0 and SC1 in FCC0, plus an SC-to-SC GE link]. Note there is still an FE link over the backplane of the FCC between the 2 SC-GE-22 cards.
84 2+2 MC External GE Connections RP0 LCC0 RP1 SC-GE-22 SC0 SC-GE-22 SC0 RP0 LCC1 RP1 SC-GE-22 SC1 SC-GE-22 SC1 FCC0 FCC1 84
85 CRS MultiChassis Configuration MC dsc MC SW distribution MC rack configuration MC fabric plane topology configuration 85
86 CRS MC - Introducing the dsc dsc = designated System Controller. The dsc is responsible for overall system control, configuration and operation; image download and synchronization to all devices in the system is controlled by the dsc. By default, the dsc is the primary RP in the first rack (LCC) that boots; if that RP fails, the secondary RP assumes the role. If the LCC housing the dsc were to fail, today the MC system will reboot and the dsc will come active on one of the other LCCs connected to the system. Eventually, dsc functionality will be able to move between racks in a graceful manner. No specific configuration is required to become dsc 86
87 IOS-XR SW version on MC The dsc determines the IOS XR version on all components of the system. Adding a new LCC with a different IOS XR version will install the version running on the dsc. When upgrading from single chassis to MC, the FCC will install the IOS XR version running on the dsc. Inserting new MSCs works the same as on a single chassis 87
88 CRS MC configuration Rack numbers Assign a rack number to the S/N of each chassis. Serial numbers can be obtained from sh diag chassis, from rommon with dumpplaneeeprom, or from the rear of the FCC and the front of the LCC. Rack 0 or 1 = dsc (also an LCC); Rack 1->127 = LCC; Rack f0->f4 = FCC. Note: rack numbers have to be unique. Example: RP/0/RP0/CPU0:CR#admin show run i dsc Building configuration... dsc serial TBA rack 1 dsc serial TBA rack 0 dsc serial TBA rack f0 88
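The rack-numbering rules above can be sketched as a small helper (a hypothetical illustration; the function name and error handling are not from any Cisco tool):

```python
def classify_rack(rack_id):
    """Classify a CRS multichassis rack identifier.

    Per the slide: numeric racks 0-127 are LCCs (the dsc lives in
    rack 0 or 1), and racks f0 upward are FCCs. Rack numbers must
    be unique across the system.
    """
    r = str(rack_id).strip().lower()
    if r.startswith("f") and r[1:].isdigit():
        return "FCC"
    if r.isdigit() and 0 <= int(r) <= 127:
        return "LCC"
    raise ValueError(f"invalid rack id: {rack_id}")

print(classify_rack("f0"))  # FCC
print(classify_rack(1))     # LCC
```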
89 CRS LCC / FCC Serial Number Location The serial number is on the front side of the line card chassis. Watch out! It is on the rear side of the fabric chassis, unlike the line card chassis. LCC FCC 89
90 CRS MC Fabric Config Plane Topology configuration Planes are not tied to specific slots in the fabric rack; admin-level config defines the slots each plane uses (and how big the plane is): controllers fabric plane 0 oim count 1 oim width 1 oim instance 0 location F0/SM0/FM. Count 1 - all cables in the plane connect to the same OIM. Count 3 - the cables from each LCC for that plane connect to different OIMs. Position of the 1st card within the plane in the fabric rack: plane 0 uses rack f0 slot sm0 90
91 CRS MC Fabric Configuration Example RP/0/RP0/CPU0:CR#admin show run Building configuration... dsc serial TBA rack 1 dsc serial TBA rack 0 dsc serial TBA rack F0 controllers fabric plane 0 oim count 1 oim width 1 oim instance 0 location F0/SM0/FM! controllers fabric plane 1 oim count 1 oim width 1 oim instance 0 location F0/SM3/FM! controllers fabric plane 2 oim count 1 oim width 1 oim instance 0 location F0/SM6/FM! [SNIP] controllers fabric plane 7 oim count 1 oim width 1 oim instance 0 location F0/SM21/FM! 91
92 Troubleshooting CRS MC MC Control Ethernet connectivity verification & statistics MC Control Ethernet UDLD and spanning tree functions MC Shelf Controller (SC) LEDs Monitoring MC fabric plane, bundles and links 92
93 CRS MC RP Connectivity To verify the control ethernet connectivity on the RPs use: RP/0/RP0/CPU0:router(admin)#show controllers switch 0 ports location 0/RP0/CPU0 Ports Active on Switch 0 FE Port 0 : Up, STP State : FORWARDING (Connects to - 0/RP0) FE Port 1 : Up, STP State : BLOCKING (Connects to - 0/RP1) FE Port 2 : Up, STP State : FORWARDING (Connects to - 0/FC0) FE Port 3 : Up, STP State : FORWARDING (Connects to - 0/FC1) FE Port 4 : Up, STP State : FORWARDING (Connects to - 0/AM0) FE Port 5 : Up, STP State : FORWARDING (Connects to - 0/AM1) FE Port 6 : Down (Connects to - ) FE Port 7 : Down (Connects to - ) FE Port 8 : Up, STP State : FORWARDING (Connects to - 0/SM0) FE Port 9 : Up, STP State : FORWARDING (Connects to - 0/SM1) FE Port 10 : Up, STP State : FORWARDING (Connects to - 0/SM2) FE Port 11 : Up, STP State : FORWARDING (Connects to - 0/SM3) FE Port 12 : Up, STP State : FORWARDING (Connects to - 0/SM4) FE Port 13 : Up, STP State : FORWARDING (Connects to - 0/SM5) FE Port 14 : Up, STP State : FORWARDING (Connects to - 0/SM6) FE Port 15 : Up, STP State : FORWARDING (Connects to - 0/SM7) GE Port 0 : Up, STP State : FORWARDING (Connects to - GE_0) GE Port 1 : Up, STP State : FORWARDING (Connects to - Switch 1) 93
94 CRS MC SC Intra-rack Connectivity To verify the control ethernet connectivity on intra-rack switches on the SC-GE-22s use: RP/0/RP0/CPU0:CRS-D(admin)#sh controllers switch 0 ports location F0/SC0/CPU0 FE Port 0 : Up, STP State : FORWARDING (Connects to - F0/SC0) FE Port 1 : Up, STP State : BLOCKING (Connects to - F0/SC1) FE Port 2 : Down (Connects to - F0/FC0) FE Port 3 : Down (Connects to - F0/FC1) FE Port 4 : Down (Connects to - F0/AM0) FE Port 5 : Up, STP State : FORWARDING (Connects to - F0/AM1) FE Port 6 : Up, STP State : FORWARDING (Connects to - F0/LM0) FE Port 7 : Up, STP State : FORWARDING (Connects to - F0/LM1) FE Port 8 : Down (Connects to - F0/SM0) FE Port 9 : Up, STP State : FORWARDING (Connects to - F0/SM1) FE Port 10 : Down (Connects to - F0/SM2) FE Port 11 : Down (Connects to - F0/SM3) FE Port 12 : Up, STP State : FORWARDING (Connects to - F0/SM4) FE Port 13 : Down (Connects to - F0/SM5) FE Port 14 : Up, STP State : FORWARDING (Connects to - F0/SM6) FE Port 15 : Down (Connects to - F0/SM7) GE Port 0 : Up, STP State : FORWARDING (Connects to - GE_0) GE Port 1 : Up, STP State : FORWARDING (Connects to - Switch 1) 94
95 CRS MC RP UDLD UDLD runs on links between RPs and SCs. Use the UDLD CLI to find out who is connected; this is a nice extra feature of UDLD RP/0/RP0/CPU0:ios(admin)#show controllers switch udld location 0/rp0/CPU0 Interface GE_Port_0 Current bidirectional state: Bidirectional Current operational state: Advertisement - Single neighbor detected Entry Device name: nodef0_sc0_cpu0 Port ID: Gig port# 13 Neighbor echo 1 device: 0_RP0_CPU0_Switch Neighbor echo 1 port: GE_Port_0 95
96 CRS MC SC Intra-rack Connectivity Use the UDLD CLI the same way as for RPs RP/0/RP0/CPU0:CRS-D(admin)#sh controllers switch udld location F0/SC0/CPU0 Interface GE_Port_0 --- Port enable administrative configuration setting: Enabled Port enable operational state: Enabled Current bidirectional state: Bidirectional Current operational state: Advertisement - Single neighbor detected Message interval: 7 Time out interval: 5 Entry Expiration time: 15 Device ID: 1 Current neighbor state: Bidirectional Device name: nodef0_sc0_cpu0 Port ID: Gig port# 22 Neighbor echo 1 device: F0_SC0_CPU0_Switch Neighbor echo 1 port: GE_Port_0 96
97 CRS MC SC Inter-rack Connectivity To verify the control ethernet connectivity on inter-rack switches on the SC-GE-22s use: RP/0/RP0/CPU0:CRS-D(admin)#sh controllers switch inter-rack ports all location F0/SC0/CPU0 GE_Port_0 : Up GE_Port_1 : Down GE_Port_2 : Up GE_Port_3 : Up [SNIP] GE_Port_17 : Down GE_Port_18 : Down GE_Port_19 : Down GE_Port_20 : Down GE_Port_21 : Up 97
98 CRS MC RP STP STP is run on links between RPs and SCs. Use the STP CLI to find out spanning tree info RP/0/RP0/CPU0:CRS-D(admin)#sh controllers switch stp location 0/RP0/CPU0 ##### MST 0 vlans mapped: Bridge address a26c priority (36864 sysid 0) Root address f0.20ff priority (32768 sysid 0) port GE_Port_0 path cost 0 Regional Root address f0.20ff priority (32768 sysid 0) internal cost rem hops 3 Operational hello time 1, forward delay 6, max age 8, txholdcount 6 Configured hello time 1, forward delay 6, max age 8, max hops 4 Interface Sts Role Cost Prio.Nbr Type ##### MST 1 vlans mapped: 1 Bridge address a26c priority (36864 sysid 1) Root address f0.20ff priority (32768 sysid 1) port GE_Port_0 cost rem hops 3 Interface Sts Role Cost Prio.Nbr Type FE_Port_1 FWD Desg P2p GE_Port_0 FWD Root P2p 98
99 CRS MC SC Intra-rack Connectivity Use the STP CLI the same way as for RPs RP/0/RP0/CPU0:CRS-D(admin)#sh controllers switch stp location F0/SC0/CPU0 ##### MST 0 vlans mapped: Bridge address e.47b4 priority (36864 sysid 0) Root address f0.20ff priority (32768 sysid 0) port GE_Port_0 path cost 0 Regional Root address f0.20ff priority (32768 sysid 0) internal cost rem hops 3 Operational hello time 1, forward delay 6, max age 8, txholdcount 6 Configured hello time 1, forward delay 6, max age 8, max hops 4 Interface Sts Role Cost Prio.Nbr Type ##### MST 1 vlans mapped: 1 Bridge address e.47b4 priority (36864 sysid 1) Root address f0.20ff priority (32768 sysid 1) port GE_Port_0 cost rem hops 3 Interface Sts Role Cost Prio.Nbr Type FE_Port_1 BLK Altn P2p GE_Port_0 FWD Root P2p 99
100 CRS MC SC Inter-rack Connectivity Use UDLD CLI to find out who is connected RP/0/RP0/CPU0:ios(admin)#show controllers switch inter-rack udld all location f0/sc0/cpu0 Interface Gig port# Port enable administrative configuration setting: Enabled Port enable operational state: Enabled Current bidirectional state: Bidirectional Current operational state: Advertisement - Single neighbor detected Message interval: 7 Time out interval: 5 Entry Expiration time: 14 Device ID: 1 Current neighbor state: Bidirectional Device name: 0_RP0_CPU0_Switch Port ID: GE_Port_0 Neighbor echo 1 device: nodef0_sc0_cpu0 Neighbor echo 1 port: Gig port# 0 Message interval: 7 Time out interval: 5 CDP Device name: BCM_SWITCH Interface Gig port#
101 CRS MC SC Inter-rack Connectivity Use STP CLI to find out STP information RP/0/RP0/CPU0:ios(admin)#show controllers switch inter-rack stp location f0/sc0/cpu0 ##### MST 0 vlans mapped: Bridge address f0.20ff priority (32768 sysid 0) Root this switch for the CIST Operational hello time 1, forward delay 6, max age 8, txholdcount 6 Configured hello time 1, forward delay 6, max age 8, max hops 4 Interface Role Sts Cost Prio.Nbr Type ##### MST 1 vlans mapped: 1 Bridge address f0.20ff priority (32768 sysid 1) Root this switch for MST1 Interface Role Sts Cost Prio.Nbr Type GE_13 Desg FWD P2p GE_14 Desg FWD P2p GE_15 Desg FWD P2p GE_17 Desg FWD P2p GE_22 Desg FWD P2p 101
102 CRS MC Show tech control-ethernet Use this CLI to collect control ethernet logs for TAC RP/0/RP0/CPU0:IOX(admin)#show tech-support control-ethernet file.. 102
103 CRS MC SC-GE-22 Counters Checking port statistics counters RP/0/RP0/CPU0:ios(admin)#show controllers switch inter-rack statistics all brief location f0/sc0/cpu0 Port Tx Frames Tx Errors Rx Frames Rx Errors GE_Port_0 : GE_Port_1 : GE_Port_2 : GE_Port_3 : GE_Port_4 : GE_Port_5 : <SNIP> GE_Port_16 : GE_Port_17 : GE_Port_18 : GE_Port_19 : GE_Port_20 : GE_Port_21 : Intra-rack : Stacking : Stacking :
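When scanning these counters, any nonzero Tx/Rx error column deserves attention. A parsing sketch (the field layout is assumed from the column headers on the slide, since the transcription dropped the numeric values; the function name is illustrative):

```python
def flag_error_ports(stats):
    """stats maps port name -> (tx_frames, tx_errors, rx_frames, rx_errors),
    as parsed from 'show controllers switch inter-rack statistics all brief'.
    Returns the ports reporting any Tx or Rx errors."""
    return [port for port, (_, tx_err, _, rx_err) in stats.items()
            if tx_err or rx_err]

# Made-up sample data for illustration
sample = {
    "GE_Port_0": (1000, 0, 980, 0),
    "GE_Port_1": (500, 3, 490, 0),
}
print(flag_error_ports(sample))  # ['GE_Port_1']
```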
104 CRS MC SC-GE-22 Counters Clear statistics using Per port: RP/0/RP0/CPU0:ios(admin)#clear controller switch inter-rack statistics ports 0 location f0/sc0/cpu0 Or all ports: RP/0/RP0/CPU0:ios(admin)#clear controller switch inter-rack statistics all location f0/sc0/cpu0 104
105 CRS MC SC-GE-22 LEDs The SC-GE-22 has LEDs on the front panel for every port. The LEDs give information about the link: Green = Link Up; Blinking Green = Activity; Amber = Port Error / Disabled (by UDLD); Off = Link Down. What color is the STP blocking state? It stays green: in blocking state the port is still receiving UDLD/STP packets, and the amber condition only indicates a fault on the port. Note: Admin shutdown of ports is not supported 105
106 CRS MC SC UDLD error messages Whenever a port in the control network is disabled because it was detected as unidirectional, a syslog message is generated, and the state of the port is displayed in the show controller switch.. udld..loc <> CLI. The fiber and SFP should be checked on the port. You can try to bring up the port and clear the error condition using clear controller switch [inter-rack] errdisable port. Note that the port will be err-disabled again if the error condition still exists 106
107 CRS MC SC Spanning tree The following is the default priority order of nodes to become the root of the network: F0/SC0, F0/SC1, F1/SC0, F1/SC1, F2/SC0, F2/SC1... Bridge IDs are compared: if priorities are the same, the MAC address determines the winner. A higher number is lower priority for STP; the lower number wins. Note that we don't use a GUMA (globally unique MAC address); the MAC comes from the location of the card, is not burnt into the board itself, and is generated at runtime during bootup, which is what makes the ordering above possible. The LCC RPs are given lower priority than all FCC SCs so they will not become the root of the network as long as a single SC is booted up 107
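The root election described above reduces to a tuple comparison: lowest (priority, MAC) wins. A simplified illustration (the dictionary keys and MAC values are made up for the example):

```python
def elect_root(bridges):
    """Pick the STP root bridge: the lowest (priority, MAC) pair wins,
    matching the slide's note that equal priorities fall back to the
    geographically generated MAC address."""
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

# Hypothetical bridge table: SCs carry a better (lower) priority than RPs
bridges = [
    {"name": "F0/SC0", "priority": 32768, "mac": 0x0019AAF020F0},
    {"name": "F0/SC1", "priority": 32768, "mac": 0x0019AAF021F0},
    {"name": "0/RP0",  "priority": 36864, "mac": 0x0019AAF00201},
]
print(elect_root(bridges)["name"])  # F0/SC0
```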
108 CRS MC Spanning tree to see the root Use the following command to find the STP status on a node. Verify that the root is the node you expect it to be RP/0/RP0/CPU0:ios(admin)#show controllers switch stp location f0/sc1/cpu0 ##### MST 0 vlans mapped: Bridge address e.468f priority (36864 sysid 0) Root address f0.20ff priority (32768 sysid 0) port GE_Port_0 path cost 0 Regional Root address f0.20ff priority (32768 sysid 0) internal cost rem hops 2 Operational hello time 1, forward delay 6, max age 8, txholdcount 6 Configured hello time 1, forward delay 6, max age 8, max hops 4 Interface Role Sts Cost Prio.Nbr Type ##### MST 1 vlans mapped: 1 Bridge address e.468f priority (36864 sysid 1) Root address f0.20ff priority (32768 sysid 1) port GE_Port_0 cost rem hops 2 Interface Role Sts Cost Prio.Nbr Type FE_Port_0 Altn BLK P2p GE_Port_0 Root FWD P2p 108
109 CRS MC Spanning tree in a normal topology Verify that F0/SC0 inter-rack is the root of the network and all spanning tree port states seem normal Verify that other SCs in the system see F0/SC0 as the root of the network and are designated on links connecting to RPs Verify that the RP has selected one of the GE ports as its root port and is alternate on the other GE port Verify that one of the RPs is blocked on the FE link connecting the RPs 109
110 Verifying MC control network with ping Ping control-eth from admin mode can be used for troubleshooting control-eth issues RP/0/RP0/CPU0:CRS-D(admin)#ping control-eth location f0/sc0/cpu0 Src node: 513 : 0/RP0/CPU0 Dest node: : F0/SC0/CPU0 Local node: 513 : 0/RP0/CPU0 Packet cnt: 1 Packet size: 128 Payload ptn type: default (0) Hold-off (ms): 1 Time-out(s): 2 Max retries: 5 DelayTimeout: 1 Destination node has MAC addr f.0201 Running CE node ping. Please wait... Src: 513, Dest: , Sent: 1, Rec'd: 1, Mismatched: 0 Min/Avg/Max RTT (usecs): 1000/1000/1000 CE node ping succeeded for node:
111 Monitoring CRS MC Fabric Operation Error counts RP/0/RP0/CPU0:CRS(admin)#show controllers fabric plane all statistics In Out CE UCE PE Plane Cells Cells Cells Cells Cells (CE = Correctable Error, UCE = Uncorrectable Error, PE = Parity Error) 111
112 Monitoring CRS MC Fabric Operation High level fabric status RP/0/RP0/CPU0:CRS(admin)#show controllers fabric plane all detail Plane Admin Oper Down Total Down Id State State Flags Bundles Bundles UP UP UP UP UP UP UP UP UP UP UP UP UP UP UP UP 0 0 Examples of Down flags: P = plane admin down, C = card admin down, p = plane oper down, c = card oper down 112
113 Check the CRS MC fabric bundles All fibers are UP on each bundle? RP/0/RP0/CPU0:ios(admin)#sh controllers fabric bundle all detail Flags: P - plane admin down, p - plane oper down C - card admin down, c - card oper down L - link port admin down, l - linkport oper down A - asic admin down, a - asic oper down B - bundle port admin Down, b - bundle port oper down I - bundle admin down, i - bundle oper down N - node admin down, n - node down o - other end of link down d - data down f - failed component downstream m - plane multicast down Bundle Oper Down Plane Total Down Bundle Bundle R/S/M/P State Flags Id Links Links Port1 Port F0/SM0/FM/0 UP F0/SM0/FM/0 0/SM3/SP/0 F0/SM0/FM/1 UP F0/SM0/FM/1 0/SM3/SP/1 F0/SM0/FM/2 UP F0/SM0/FM/2 0/SM3/SP/2 F0/SM0/FM/3 UP F0/SM0/FM/3 1/SM3/SP/0 F0/SM0/FM/4 UP F0/SM0/FM/4 1/SM3/SP/1 F0/SM0/FM/5 UP F0/SM0/FM/5 1/SM3/SP/2 F0/SM0/FM/6 DOWN bo F0/SM0/FM/6 2/SM3/SP/0 F0/SM0/FM/7 DOWN bo F0/SM0/FM/7 2/SM3/SP/1 F0/SM0/FM/8 DOWN bo F0/SM0/FM/8 2/SM3/SP/2 113
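The flag legend above maps each letter in the Down Flags column to a condition; a small decoder sketch built directly from that legend (the helper name is illustrative):

```python
# Flag legend from 'sh controllers fabric bundle all detail'
DOWN_FLAGS = {
    "P": "plane admin down",       "p": "plane oper down",
    "C": "card admin down",        "c": "card oper down",
    "L": "link port admin down",   "l": "link port oper down",
    "A": "asic admin down",        "a": "asic oper down",
    "B": "bundle port admin down", "b": "bundle port oper down",
    "I": "bundle admin down",      "i": "bundle oper down",
    "N": "node admin down",        "n": "node down",
    "o": "other end of link down", "d": "data down",
    "f": "failed component downstream",
    "m": "plane multicast down",
}

def decode_down_flags(flags):
    """Expand a Down Flags string such as 'bo' into readable causes."""
    return [DOWN_FLAGS[ch] for ch in flags if ch in DOWN_FLAGS]

print(decode_down_flags("bo"))
# ['bundle port oper down', 'other end of link down']
```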
114 Check the CRS MC fabric DOWN links F0/SM6/FM/1 UP F0/SM6/FM/1 0/SM1/SP/1 (admin)#sh controllers fabric link port s2tx all i UP.*DOWN.*SM1/SP/1 F0/SM6/SP/1/61 UP DOWN do 1/SM1/SP/1/25 F0/SM6/3 1/SM1/0 (admin)#sh controllers fabric link port s3rx all i 1/SM1/SP/1/25 1/SM1/SP/1/25 UP DOWN l F0/SM6/SP/1/61 1/SM1/0 F0/SM6/3 (admin)#sh controllers fabric link port s2tx F0/SM6/SP/1/61 detail [SNIP] Sfe Port Admin Oper Down Sfe BP Port BP Other R/S/M/A/P State State Flags Role Role End F0/SM6/SP/1/61 UP DOWN do 1/SM1/SP/1/25 Connection Details for s2tx/f0_sm6_sp,0x1,0x3d Type: Inter-chassis bundle Near-end bundle port: bport/f0/sm6/3 ribbon 3 fiber 1 Far-end bundle port : bport/1/sm1/0 ribbon 2 fiber 1 HBMT pin name : P6L2_1 Fabric group offset : 0 Fabric group : 2 114
115 Check the CRS MC Bundle Statistics RP/0/RP0/CPU0:ios(admin)#show controllers fabric bundle port all statistics Total racks: 3 Rack 0: Bundle Port In Out CE UCE PE R/S/M/P Cells Cells Cells Cells Cells /SM0/SP/ /SM0/SP/ Rack 1: Bundle Port In Out CE UCE PE R/S/M/P Cells Cells Cells Cells Cells /SM0/SP/ /SM0/SP/ Rack F0: Bundle Port In Out CE UCE PE R/S/M/P Cells Cells Cells Cells Cells F0/SM4/FM/ F0/SM4/FM/
116 More useful commands - CRS MC fabric Fabric (admin)#show controllers fabric rack all detail (admin)#show controllers fabric plane all detail (admin)#show controllers fabric connectivity all detail (admin)#show controllers fabric plane all statistics 116
117 Cisco CRS MultiChassis is managed as a system. 117
118 Recommended Reading 118
119 Evaluation Forms Please fill out evaluation forms and surveys 119
120 Complete Your Online Session Evaluation Give us your feedback and you could win fabulous prizes. Winners announced daily. Receive 20 Passport points for each session evaluation you complete. Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center. Don t forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the World of Solutions or visit 120
121 Final Thoughts Get hands-on experience with the Walk-in Labs located in World of Solutions, booth 1042 Come see demos of many key solutions and products in the main Cisco booth 2924 Visit after the event for updated PDFs, ondemand session videos, networking, and more! Follow Cisco Live! using social media: Facebook: Twitter: LinkedIn Group: 121
123 Appendix Typical Migration Steps SC to MC Types of Migration: Online vs Offline. SC to MC conversion can be done online (most customers) or offline; MC downgrade to an SC is not supported. Online conversion: the SC is live and carrying production traffic. The CRS fabric can route line-rate IMIX traffic even with 7 planes UP, and SC to MC online migration uses this to migrate each plane one by one (S123 boards to S13 boards). Though done in a maintenance window, the control plane traffic is still live. Offline conversion: for customers who do not want to change metrics config to divert the traffic. It is an easier process for the operational folks in the POP and the network admins in the central site, and the least risky 123
124 ACCESS EDGE CORE Appendix A Case Study 1: Offline SC to MC migration - Network Topology SBCR (Super Block Core Router) Aggregation routers of East connecting to West BCR (Block Core Router) Aggregation routers of each region AER (Area Edge Router) Aggregation routers of each prefecture Site A MC Installation Site B SSE Multi-Service customer edge router L2SW L2 aggregation switch 124
125 Appendix A Case Study 1 MC Migration Plan Take one week maintenance window for one site Setup 1+1 MC config for system verification No live migration (Off-line upgrade from single chassis to multi chassis) Site A (Primary) Site B (Secondary) Site A Site B SSE SSE SSE SSE 125
126 Appendix A Case Study 1 MC Migration Steps Site A (Primary) Site B (Secondary) Site A Site B FCC dsc Site A SSE SSE SSE SSE Site B Offline Site A Install new LCC and FCC Setup 1+1 MC configuration Verify 1+1 MC system Site B dsc FCC ndsc Single to Multi Migration SSE SSE SSE SSE 126
127 Appendix A Case Study 1 MC Migration Steps contd. Site A Site B Site A Site B Live FCC dsc SSE SSE 5 6 SSE SSE Install new LCC and FCC Setup 1+1 MC configuration Verify 1+1 MC system Site A Site B 7 8 Site A Site B Offline dsc FCC ndsc Single to Multi Migration SSE SSE SSE SSE 127
128 Appendix A Case Study 1 MC Migration Scenario contd. Site A Live Site B SSE SSE 9 128
129 Appendix A Case Study 1 MC Installation Procedure summary Assumption: LCC0 is the Line Card Chassis that is on and in operation. LCC1 is the new Line Card Chassis. FCC0 is the new Fabric Chassis. LCC0 will NOT be touched when building a Rack Multi-Chassis. Existing LCC MC 1+1 LCC0 FCC0 LCC1 First build up a MC 1+1 (LCC Rack num = 1) and test it offline. Step1. LCC1 Setup dsc Step2. LCC1 Turboboot Step3. FCC0 Bootup MC 2+1 LCC0 FCC0 LCC1 Step4. Connecting Array cables Step 5. Then connect the existing LCC (Rack Num = 0) into MC 1+1 offline (more details in next slide) dsc 129
130 Appendix A Case Study 1 MC Installation Procedure - detail Existing LCC LCC0 FCC0 LCC1 XR RUN Configure SN of LCC1 & FCC, install-mode Power OFF Migrate S123 to S13 & connect cables XR RUN MC 1+1 Power OFF XR RUN Power OFF Connect CE port Connect CE port Connect CE port Step5. Connecting 1+1 MC into SC offline -Configure SC with SN of LCC1 & FCC -Configure LCC1 to be in install-mode -Bring down 1+1 MC -Replace all S123 boards to S13 & connect LCC0 to FCC -Connect the CE ports of LCC0 & 1 on FCC -Turn ON entire MC & allow it to bake -Ensure the Fabric is baked & ready -Run basic tests & ensure HW is fine on LCC1 -Remove install-mode from LCC1 -MC migration complete Turn ON Verify LCC1 operation Turn ON Turn ON Remove LCC1 from install-mode Verify MC operation 130
131 Appendix B Case Study 2: Online SC to MC migration: MC Installation Procedure summary Assumption: LCC0 is the Line Card Chassis that is on and in operation. LCC1 is the new Line Card Chassis & will be put in install-mode. FCC0 is the new Fabric Chassis. LCC0 will NOT be touched when building a Rack Multi-Chassis. Existing LCC MC 1+1 LCC0 FCC0 LCC1 First build up a MC 1+1 (LCC Rack num = 1) and test it offline. Step1. LCC1 Setup dsc Step2. LCC1 Turboboot MC 2+1 LCC0 FCC0 LCC1 Step3. FCC0 Bootup Step4. Array Cable Connecting Step 5. Then connect the existing LCC (Rack Num = 0) into MC 1+1 online (more details in next slide) dsc 131
132 Appendix B Case Study 2 MC Installation Procedure - detail Existing LCC LCC0 FCC0 LCC1 XR RUN Configure SN of LCC1 & FCC, install-mode Migrate S123 to S13 plane by plane XR RUN MC 1+1 Power OFF Turn ON FCC --> Fabric ready XR RUN Power OFF Connect CE port Connect CE port Connect CE port Turn ON LCC1 Step5. Connecting 1+1 MC into SC online -Bring down 1+1 MC -Connect the CE ports of LCC0 & 1 on FCC -Configure SC with SN of LCC1 & FCC -Configure LCC1 to be in install-mode -Turn ON FCC & allow it to bake -Ensure the Fabric is baked & ready -Migrate S123 boards on SC to S13 plane by plane -Once the SC is converted into LCC0 with all S13s -Turn ON the LCC1 & allow it to bake -Run basic tests & ensure HW is fine on LCC1 -Remove install-mode from LCC1 -MC migration complete Verify LCC1 operation Remove LCC1 from install-mode Verify MC operation 132
133 Appendix C FCC Physical Install notes Optical Interface Module (OIM) - Rear OIM *Important: Always insert OIM before inserting corresponding S2 Fabric Board. When removing OIM, disengage S2 Fabric Board first. Back View 133
134 Appendix C Physical Install notes Tips for handling the S2 Fabric Board The S2 Fabric Board is the longest and heaviest board; insert it after the OIM module. Take care when carrying the board or swinging it in an arc not to hit the optical connectors. Recommended hand placement when carrying the board. 134
135 Appendix C Physical Install notes Watch out for Bend Radius in Cable Trays 135
136 Appendix C Physical Install notes Optics Troubleshooting/Cleaning S13 Manual Optics Cleaning optic adapter 136
137 Appendix C Physical Install notes Optic Troubleshooting/Cleaning OIM Manual Optics Cleaning 137
More informationCisco ASR 1000 Series Routers Embedded Services Processors
Cisco ASR 1000 Series Routers Embedded Services Processors The Cisco ASR 1000 Series embedded services processors are based on the Cisco QuantumFlow Processor (QFP) for next-generation forwarding and queuing.
More informationS7500 series Core Routing Switches. Datasheet. Shenzhen TG-NET Botone Technology Co., Ltd.
S7500 series Core Routing Switches Datasheet Shenzhen TG-NET Botone Technology Co., Ltd. Overview The S7500 series switches are high-end smart routing switches designed for nextgeneration enterprise networks.
More informationHigh Performance Ethernet for Grid & Cluster Applications. Adam Filby Systems Engineer, EMEA
High Performance Ethernet for Grid & Cluster Applications Adam Filby Systems Engineer, EMEA 1 Agenda Drivers & Applications The Technology Ethernet Everywhere Ethernet as a Cluster interconnect Ethernet
More informationConfiguring Online Diagnostics
Configuring Online s This chapter contains the following sections: Information About Online s, page 1 Guidelines and Limitations for Online s, page 4 Configuring Online s, page 4 Verifying the Online s
More informationImplementing VXLAN. Prerequisites for implementing VXLANs. Information about Implementing VXLAN
This module provides conceptual information for VXLAN in general and configuration information for layer 2 VXLAN on Cisco ASR 9000 Series Router. For configuration information of layer 3 VXLAN, see Implementing
More informationUniDirectional Link Detection (UDLD) Protocol
The UniDirectional Link Detection protocol is a Layer 2 protocol that detects and disables one-way connections before they create undesired situation such as Spanning Tree loops. Information About the
More informationData Center Access Design with Cisco Nexus 5000 Series Switches and 2000 Series Fabric Extenders and Virtual PortChannels
Design Guide Data Center Access Design with Cisco Nexus 5000 Series Switches and 2000 Series Fabric Extenders and Virtual PortChannels Updated to Cisco NX-OS Software Release 5.1(3)N1(1) Design Guide October
More informationConfiguring IPv4. Finding Feature Information. This chapter contains the following sections:
This chapter contains the following sections: Finding Feature Information, page 1 Information About IPv4, page 2 Virtualization Support for IPv4, page 6 Licensing Requirements for IPv4, page 6 Prerequisites
More informationCisco Series Internet Router Architecture: Packet Switching
Cisco 12000 Series Internet Router Architecture: Packet Switching Document ID: 47320 Contents Introduction Prerequisites Requirements Components Used Conventions Background Information Packet Switching:
More informationTroubleshooting. Diagnosing Problems. Verify Switch Module POST Results. Verify Switch Module LEDs CHAPTER
CHAPTER 3 This chapter describes these switch module troubleshooting topics: Diagnosing Problems, page 3-1 Resetting the Switch Module, page 3-4 How to Replace a Failed Stack Member, page 3-5 Diagnosing
More informationDescribing the STP. Enhancements to STP. Configuring PortFast. Describing PortFast. Configuring. Verifying
Enhancements to STP Describing the STP PortFast Per VLAN Spanning Tree+ (PVST+) Rapid Spanning Tree Protocol (RSTP) Multiple Spanning Tree Protocol (MSTP) MSTP is also known as Multi-Instance Spanning
More informationHP FlexFabric 5700 Switch Series
HP FlexFabric 5700 Switch Series FAQ Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and
More informationNew Product: Cisco Catalyst 2950 Series Fast Ethernet Desktop Switches
New Product: Cisco Catalyst 2950 Series Fast Ethernet Desktop Switches Product Overview The Cisco Catalyst 2950 Series of fixed configuration, wire-speed Fast Ethernet desktop switches delivers premium
More informationCisco ASR 1000 Series Aggregation Services Routers: QoS Architecture and Solutions
Cisco ASR 1000 Series Aggregation Services Routers: QoS Architecture and Solutions Introduction Much more bandwidth is available now than during the times of 300-bps modems, but the same business principles
More informationConfiguring VPLS. VPLS overview. Operation of VPLS. Basic VPLS concepts
Contents Configuring VPLS 1 VPLS overview 1 Operation of VPLS 1 VPLS packet encapsulation 4 H-VPLS implementation 5 Hub-spoke VPLS implementation 7 Multi-hop PW 8 VPLS configuration task list 9 Enabling
More informationLink Bundling Commands
Link Bundling Commands This module provides command line interface (CLI) commands for configuring Link Bundle interfaces on the Cisco NCS 5000 Series Router. For detailed information about Link Bundle
More informationConfiguring NetFlow. Feature History for Configuring NetFlow. Release This feature was introduced.
Configuring NetFlow A NetFlow flow is a unidirectional sequence of packets that arrive on a single interface (or subinterface), and have the same values for key fields. NetFlow is useful for the following:
More informationConfiguring SPAN. About SPAN. SPAN Sources
This chapter describes how to configure an Ethernet switched port analyzer (SPAN) to analyze traffic between ports on Cisco NX-OS devices. This chapter contains the following sections: About SPAN, page
More informationCisco CRS Carrier Routing System 4-Slot Line Card Chassis System Description
Cisco CRS Carrier Routing System 4-Slot Line Card Chassis System Description First Published: 2013-08-25 Last Modified: 2017-02-10 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose,
More informationCampus Networking Workshop. Layer 2 engineering Spanning Tree and VLANs
Campus Networking Workshop Layer 2 engineering Spanning Tree and VLANs Switching Loop When there is more than one path between two switches What are the potential problems? Switching Loop If there is more
More informationTroubleshooting. Diagnosing Problems. Verify the Switch Module POST Results CHAPTER
CHAPTER 3 This chapter describes these topics for troubleshooting problems:, page 3-1 Clearing the Switch Module IP Address and Configuration, page 3-5 Replacing a Failed Stack Member, page 3-5 Locating
More informationVXLAN EVPN Multihoming with Cisco Nexus 9000 Series Switches
White Paper VXLAN EVPN Multihoming with Cisco Nexus 9000 Series Switches 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 27 Contents Introduction...
More informationHPE FlexNetwork 5510 HI Switch Series FAQ
HPE FlexNetwork 5510 HI Switch Series FAQ Part number: 5200-0021a Document version: 6W101-20160429 The information in this document is subject to change without notice. Copyright 2016 Hewlett Packard Enterprise
More informationSwitch Stacking ArubaOS Switch
Switch Stacking ArubaOS Switch 10:00 GMT 11:00 CEST 13:00 GST September 25 th, 2018 Manoj Ramalingam, Aruba ERT Engineering resolution team Why stacking? To reduce the number of uplinks and optimize their
More informationConfiguring SPAN. Finding Feature Information. About SPAN. SPAN Sources
This chapter describes how to configure an Ethernet switched port analyzer (SPAN) to analyze traffic between ports on Cisco NX-OS devices. Finding Feature Information, on page 1 About SPAN, on page 1 Licensing
More information4 PWR XL: Catalyst 3524 PWR XL Stackable 10/100 Ethernet
4 PWR XL: Catalyst 3524 PWR XL Stackable 10/100 Ethernet Table of Contents...1 Contents...1 Introduction...1 Ordering Information...1 Key Features/Benefits...2 Flexible and Scalable Switch Clustering Architecture...3
More informationConfiguring Virtual Private LAN Services
Virtual Private LAN Services (VPLS) enables enterprises to link together their Ethernet-based LANs from multiple sites via the infrastructure provided by their service provider. This module explains VPLS
More informationVirtual Switching System
Virtual Switching System Q. What is a virtual switching system (VSS)? A. A VSS is network system virtualization technology that pools multiple Cisco Catalyst 6500 Series Switches into one virtual switch,
More informationCisco CPT Packet Transport Module 4x10GE
Data Sheet Cisco CPT Packet Transport Module 4x10GE The Cisco Carrier Packet Transport System (CPT) 200 and 600 sets the industry benchmark as a carrier-class converged access and aggregation platform
More informationH3C S7500E-XS Switch Series FAQ
H3C S7500E-XS Switch Series FAQ Copyright 2016 Hangzhou H3C Technologies Co., Ltd. All rights reserved. No part of this manual may be reproduced or transmitted in any form or by any means without prior
More informationToward a unified architecture for LAN/WAN/WLAN/SAN switches and routers
Toward a unified architecture for LAN/WAN/WLAN/SAN switches and routers Silvano Gai 1 The sellable HPSR Seamless LAN/WLAN/SAN/WAN Network as a platform System-wide network intelligence as platform for
More informationCisco SCE 2020 Service Control Engine
Data Sheet Cisco SCE 2000 Series Service Control Engine The Cisco SCE 2000 Series Service Control Engine is a network element specifically designed for carrier-grade deployments requiring high-capacity
More informationMS425 SERIES. 40G fiber aggregation switches designed for large enterprise and campus networks. Datasheet MS425 Series
Datasheet MS425 Series MS425 SERIES 40G fiber aggregation switches designed for large enterprise and campus networks AGGREGATION SWITCHING WITH MERAKI The Cisco Meraki 425 series extends cloud management
More informationDATA CENTER FABRIC COOKBOOK
Do It Yourself! DATA CENTER FABRIC COOKBOOK How to prepare something new from well known ingredients Emil Gągała WHAT DOES AN IDEAL FABRIC LOOK LIKE? 2 Copyright 2011 Juniper Networks, Inc. www.juniper.net
More informationConfiguring Traffic Mirroring
This module describes the configuration of the traffic mirroring feature. Traffic mirroring is sometimes called port mirroring, or switched port analyzer (SPAN). Feature History for Traffic Mirroring Release
More informationNexus 7000 Peer Switch Configuration (Hybrid Setup)
Nexus 7000 Peer Switch Configuration (Hybrid Setup) Document ID: 116140 Contributed by Andy Gossett and Rajesh Gatti, Cisco TAC Engineers. Aug 09, 2013 Contents Introduction Prerequisites Requirements
More informationConfiguring Traffic Mirroring
This module describes the configuration of the traffic mirroring feature. Traffic mirroring is sometimes called port mirroring, or switched port analyzer (SPAN). Feature History for Traffic Mirroring Release
More informationCISCO CATALYST 4500-X SERIES FIXED 10 GIGABIT ETHERNET AGGREGATION SWITCH DATA SHEET
CISCO CATALYST 4500-X SERIES FIXED 10 GIGABIT ETHERNET AGGREGATION SWITCH DATA SHEET ROUTER-SWITCH.COM Leading Network Hardware Supplier CONTENT Overview...2 Appearance... 2 Key Features and Benefits...2
More informationConfiguring UDLD. Understanding UDLD CHAPTER
CHAPTER 9 This chapter describes how to configure the UniDirectional Link Detection (UDLD) protocol. Release 12.2(33)SXI4 and later releases support fast UDLD, which provides faster detection times. For
More informationDeploying Network Foundation Services
CHAPTER 2 After designing each tier in the model, the next step in enterprise network design is to establish key network foundation technologies. Regardless of the applications and requirements that enterprises
More informationConfiguring Port Channels
This chapter contains the following sections: Information About Port Channels, page 1, page 10 Verifying Port Channel Configuration, page 21 Verifying the Load-Balancing Outgoing Port ID, page 22 Feature
More informationTroubleshooting. Diagnosing Problems CHAPTER
CHAPTER 4 The LEDs on the front panel provide troubleshooting information about the switch. They show failures in the power-on self-test (POST), port-connectivity problems, and overall switch performance.
More informationCCIE Service Provider Sample Lab. Part 2 of 7
CCIE Service Provider Sample Lab Part 2 of 7 SP Sample Lab Main Topology R13 S2/1.135.13/24 Backbone Carrier SP AS 1002 S2/1 PPP E0/1.69.6/24 R6 Customer Carrier SP ABC Site 5 AS 612 E1/0 ISIS.126.6/24
More informationChapter 7 Hardware Overview
Chapter 7 Hardware Overview This chapter provides a hardware overview of the HP 9308M, HP 930M, and HP 6308M-SX routing switches and the HP 6208M-SX switch. For information about specific hardware standards
More informationMulti-Chassis APS and Pseudowire Redundancy Interworking
Multi-Chassis and Pseudowire Redundancy Interworking In This Chapter This section describes multi-chassis and pseudowire redundancy interworking. Topics in this section include: Applicability on page 120
More informationCisco Questions $ Answers
Cisco 644-906 Questions $ Answers Number: 644-906 Passing Score: 800 Time Limit: 120 min File Version: 38.7 http://www.gratisexam.com/ Cisco 644-906 Questions $ Answers Exam Name: Implementing and Maintaining
More informationManaging the Router Hardware
This chapter describes the command-line interface (CLI) techniques and commands used to manage and configure the hardware components of a router running the Cisco IOS XR software. For complete descriptions
More information3. What could you use if you wanted to reduce unnecessary broadcast, multicast, and flooded unicast packets?
Nguyen The Nhat - Take Exam Exam questions Time remaining: 00: 00: 51 1. Which command will give the user TECH privileged-mode access after authentication with the server? username name privilege level
More informationIntroduction to Aruba Dik van Oeveren Aruba Consulting System Engineer
Introduction to Aruba 8400 Dik van Oeveren Aruba Consulting System Engineer 8400 Hardware Overview 2 Aruba campus edge switch portfolio 3810M 5400R Advanced Layer 3 Layer 2 2530 8, 24 or 48 ports with
More informationContents. Configuring EVI 1
Contents Configuring EVI 1 Overview 1 Layer 2 connectivity extension issues 1 Network topologies 2 Terminology 3 Working mechanism 4 Placement of Layer 3 gateways 6 ARP flood suppression 7 Selective flood
More informationASR 5500 Hardware Platform Overview
This chapter describes the hardware components that comprise the ASR 5500 chassis. The ASR 5500 is designed to provide subscriber management services for high-capacity 4G wireless networks. Figure 1: The
More informationProduct Overview. Switch Features. Catalyst 4503 Switch Features CHAPTER
CHAPTER This chapter provides an overview of the features and components of the Catalyst 4500 series switches. The Catalyst 4500 series switches are the Catalyst 4503 switch, the Catalyst 4506 switch,
More informationCisco ASR 9000 Modular Line Card and Modular Port Adapters
Cisco ASR 9000 Modular Line Card and Modular Port Adapters In this section you will identify the following aspects of the Modular Line Card: Part number and description Location Status LEDs Part number
More informationDeep Dive QFX5100 & Virtual Chassis Fabric Washid Lootfun Sr. System Engineer
Deep Dive QFX5100 & Virtual Chassis Fabric Washid Lootfun Sr. System Engineer wmlootfun@juniper.net 1 Copyright 2012 Juniper Networks, Inc. www.juniper.net QFX5100 product overview QFX5100 Series Low Latency
More informationCisco ASR 9000 Architecture Overview BRKARC Christian Calixto, IP NGN Consulting Systems Engineer
Cisco ASR 9000 Architecture Overview BRKARC-2003 Christian Calixto, IP NGN Consulting Systems Engineer ccalixto@cisco.com Agenda Hardware Overview Carrier Class, Scalable System Architecture Fabric architecture
More informationConfiguring Port Channels
CHAPTER 5 This chapter describes how to configure port channels and to apply and configure the Link Aggregation Control Protocol (LACP) for more efficient use of port channels using Cisco Data Center Network
More informationConfiguring VXLAN EVPN Multi-Site
This chapter contains the following sections: About VXLAN EVPN Multi-Site, on page 1 Licensing Requirements for VXLAN EVPN Multi-Site, on page 2 Guidelines and Limitations for VXLAN EVPN Multi-Site, on
More informationCisco CRS-X Modular Services Card
Data Sheet Cisco CRS Modular Services Cards Product Overview The Cisco Carrier Routing System (CRS) provides outstanding economical scale, IP and optical network convergence, and a proven architecture.
More informationHardware Redundancy and Node Administration Commands
Hardware Redundancy and Node Administration Commands This module describes the commands used to manage the hardware redundancy, power, and administrative status of the nodes on a router running Cisco IOS
More informationProduct Overview. Switch Models CHAPTER
CHAPTER 1 The Cisco CGS 2520 switches, also referred to as the switch, are Ethernet switches that you can connect devices such as Intelligent Electronic Devices (IEDs), distributed controllers, substation
More informationConfiguring STP and RSTP
7 CHAPTER Configuring STP and RSTP This chapter describes the IEEE 802.1D Spanning Tree Protocol (STP) and the ML-Series implementation of the IEEE 802.1W Rapid Spanning Tree Protocol (RSTP). It also explains
More informationVSS-Enabled Campus Design
3 CHAPTER VSS-enabled campus design follows the three-tier architectural model and functional design described in Chapter 1, Virtual Switching Systems Design Introduction, of this design guide. This chapter
More informationConfiguring Modular QoS on Link Bundles
A link bundle is a group of one or more ports that are aggregated together and treated as a single link. This module describes QoS on link bundles. Line Card, SIP, and SPA Support Feature ASR 9000 Ethernet
More information