Cisco IP Fabric for Media


White Paper
Cisco IP Fabric for Media Design Guide
April 2018
Rahul Parameswaran (rparames), Cisco Systems, Inc.

Contents

Introduction
Frame accurate switching
Nexus 9000 for IP Fabric for Media
Spine and leaf design with network controller
Why use a Layer 3 spine and leaf design
Non-blocking multicast in CLOS architecture
Why media fabrics can't rely on ECMP and PIM alone
Flow orchestration with Cisco DCNM media controller
Cisco DCNM media controller features
Cisco DCNM media controller installation
Fabric topology
Fabric configuration using Power-On Auto Provisioning (POAP)
Fabric configuration using the command-line interface
Topology discovery
Host discovery
Host policy
Flow policy
Flow setup
Flow visibility and bandwidth tracking
Flow statistics and analysis
Flow alias
Events and notification
Precision Time Protocol
Quality of Service (QoS)
API integration with the broadcast controller
Failure handling
Cisco DCNM media controller connectivity options: fabric control
Designing the control network
Cisco IP Fabric for Media architecture
Multi-site and PIM border
Guidelines and limitations
  Topology discovery
  Flow setup
  Host policy
  Flow policy
  Design
Live and file-based workflows on the same fabric
Single switch with controller
Single switch without a controller
Conclusion
For more information

Introduction

Today, the broadcast industry uses an SDI router and SDI cables to transport video and audio signals. An SDI cable can carry only a single unidirectional signal. As a result, a large number of cables, frequently stretched over long distances, are required, making it difficult and time-consuming to expand or change an SDI-based infrastructure.

Cisco IP Fabric for Media helps you migrate from an SDI router to an IP-based infrastructure (Figure 1). In an IP-based infrastructure, a single cable can carry multiple bidirectional traffic flows and can support different flow sizes without requiring changes to the physical infrastructure.

An IP-based infrastructure with Cisco Nexus 9000 Series switches:

- Supports various types and sizes of broadcasting equipment endpoints with port speeds up to 100 Gbps
- Supports the latest video technologies, including 4K and 8K ultra HD
- Provides a deterministic network with zero packet loss, ultra-low latency, and minimal jitter
- Supports the AES67 and SMPTE 2059-2 PTP profiles

Figure 1(a). SDI router
Figure 1(b). IP fabric

The Society of Motion Picture and Television Engineers (SMPTE) 2022-6 standard defines the way that SDI is encapsulated in an IP frame, and SMPTE 2110 defines how video, audio, and ancillary data are carried over IP. Similarly, Audio Engineering Society (AES) 67 defines the way that audio is carried over IP. All these flows are typically User Datagram Protocol (UDP) IP multicast flows. A network built to carry these flows must provide zero-drop transport with guaranteed forwarding, low latency, and minimal jitter.

There are multiple design options available today, based on the use case:

1. Spine and leaf CLOS-based architecture using Nexus 9000 Series switches with the Data Center Network Manager (DCNM) controller: Provides a flexible and scalable architecture that is suitable for studio deployments.
2. A single Nexus 9000 Series switch with the DCNM controller: Provides a design that is identical to an SDI router. Simple architecture, with the DCNM controller offering security and flow visibility. Suitable for studio or OB van (outside broadcast van) deployments.
3. A single Nexus 9000 Series switch without a controller: Provides a design that is identical to an SDI router. Simple architecture; the switch operates like any multicast router.

Frame accurate switching

When switching from one source to another, it is important to ensure that the switch occurs at a frame boundary. The ways to achieve frame accurate switching are as follows:

- Network-timed switching: The network is responsible for switching on a frame boundary. This is not possible on a standard off-the-shelf switch because it requires an FPGA to switch at the frame boundary.
- Source-timed switching: The source modifies the UDP port number when a switch is triggered. At a precise time interval, the source is instructed to change the destination UDP port number of the signal. The network switch is programmed with a rule to steer the signal to the receiver just before the source changes its destination UDP port. While this works, it does not scale when the network involves multiple devices (routers and switches), and it limits the ability to switch flows across different sites.
- Destination-timed switching: The destination subscribes to (joins) the new flow before leaving the old flow and switches at the frame boundary. Destination-timed switching does not require special capabilities in the network and scales across multiple network switches and sites.

Because the industry's goal is to use commercial off-the-shelf switches, it has generally chosen destination-timed switching, and Cisco has adopted that model.

Nexus 9000 for IP Fabric for Media

The Nexus 9000 Series delivers proven high performance and density, low latency, and exceptional power efficiency in a range of form factors. It also performs line-rate multicast replication with minimal jitter. Each switch can operate as a PTP boundary clock and supports the SMPTE 2059-2 and AES67 profiles. Table 1 lists the supported Nexus 9000 switches and their roles.

Table 1. Nexus 9000 switch port capacity and supported role

Part Number | Description | Supported Mode
N9K-C92160YC-X | 48 x 10/25G + 4 x 100G (or 6 x 40G) | Leaf with controller / standalone switch with or without controller
N9K-C93180YC-EX and FX | 48 x 10/25G + 6 x 100G | Leaf with controller / standalone switch with or without controller
N9K-C93108TC-EX and FX | 48 x 1/10GBASE-T + 6 x 100G | Leaf with controller / standalone switch with or without controller
N9K-C9236C | 36 x 100/40G (supports 4 x 10G and 4 x 25G breakout) | Leaf or spine with controller / standalone switch with or without controller
N9K-C9272Q | 72 x 40G (supports 4 x 10G breakout on ports 36-71) | Leaf or spine with controller / standalone switch with or without controller

9500 with N9K-X9636C-R line card | 36 x 100G | Spine with controller / standalone switch with or without controller
9500 with N9K-X9636Q-R line card | 36 x 40G | Spine with controller / standalone switch with or without controller

Spine and leaf design with network controller

Figure 2. Multiple-spine and multiple-leaf architecture with Cisco DCNM media controller

Why use a Layer 3 spine and leaf design

The spine and leaf CLOS architecture has proven to be flexible and scalable and is widely deployed in modern data center designs. No matter where the receiver is connected, the path always involves a single hop through the spine, essentially guaranteeing deterministic latency.

Although a Layer 2 network design may seem simple, it has a very large failure domain. A misbehaving endpoint could potentially storm the network with traffic that is propagated to all devices in the Layer 2 domain. Also, in a Layer 2 network, traffic is always flooded to the multicast router or querier, which can cause excessive traffic to be sent to the router or querier even when there are no active receivers. This results in non-optimal and nondeterministic use of bandwidth.

Layer 3 multicast networks contain the fault domain and forward traffic across the network only when there are active receivers, thereby ensuring optimal use of bandwidth. A Layer 3 design also allows granular filtering policy to be applied to a specific port instead of to all devices, as would be the case in a Layer 2 domain.

Non-blocking multicast in CLOS architecture

SDI routers are non-blocking in nature. A single Ethernet switch such as the Nexus 9000 or 9500 is also non-blocking. A CLOS architecture provides flexibility and scalability; however, a few design considerations need to be taken into account to ensure that a CLOS architecture remains non-blocking.

In an ideal scenario, the sender leaf (first-hop router) sends one copy of the flow to one of the spine switches. The spine creates N copies, one for each receiver leaf switch that has interested receivers for that flow. The receiver leaf (last-hop router) creates N copies of the flow, one per local receiver connected to the leaf.

At times, especially when the system is at its peak capacity, you could encounter a scenario in which a sender leaf has replicated a flow to a certain spine, but the receiver leaf cannot get the traffic from that spine because its link bandwidth to that spine is completely occupied by other flows. When this happens, the sender leaf must replicate the flow to another spine, and it then uses twice the bandwidth for a single flow. To ensure that the CLOS network remains non-blocking, a sender leaf must have enough uplink bandwidth to replicate all of its local senders to all spines. By following these guidelines, the CLOS network can remain non-blocking:

- The aggregate bandwidth of all senders connected to a leaf must not exceed the bandwidth of the links going from that leaf to each spine.
- The aggregate bandwidth of all receivers connected to a leaf must not exceed the aggregate bandwidth of all links going from that leaf to all spines.

For example, a two-spine design using the N9K-C93180YC-EX with 6 x 100G uplinks (300G going to each spine) can support 300G of senders and 600G of receivers connected to the leaf.

In a broadcasting facility, most of the endpoints (cameras, microphones, multiviewers, and so on) are unidirectional, and there are more receivers than senders (the ratio is roughly 4:1). Also, when a receiver no longer needs a flow, it leaves the flow, freeing up the bandwidth. Hence, the network can be designed with senders and receivers placed such that the CLOS architecture remains non-blocking.

Why media fabrics can't rely on ECMP and PIM alone

In IT data centers, Equal-Cost Multipath (ECMP) is extremely efficient because most traffic is TCP based, with millions of flows. A Cisco IP Fabric for Media network has fewer flows, but the flows use more bandwidth, and all are UDP flows. In this scenario, ECMP may not always be efficient, and hash collisions can result. Hence, an IP network needs a controller to orchestrate flows based on bandwidth availability.

Flow orchestration with Cisco DCNM media controller

The Cisco Data Center Network Manager (DCNM) media controller provides a variety of features and functions. One important function is to ensure that a flow is set up between sources and receivers with guaranteed bandwidth available for the flow. Every flow request, sent either as an IGMP join message from a receiver or as an API call from the broadcast controller, triggers the media controller to compute an available path in the network and to program the path end to end. This process helps ensure efficient use of bandwidth. It also helps prevent the hash collisions that could potentially occur with ECMP.

Cisco DCNM media controller features

The DCNM media controller is an extension to Cisco DCNM.
It includes all the features available with DCNM plus the following media controller features:

- Fabric configuration using Power-On Auto Provisioning (POAP) to help automate configuration
- Topology and host discovery to dynamically discover the topology and host connections
- Flow orchestration to help set up traffic flows with guaranteed bandwidth
- Flow policies, including bandwidth reservation and flow policing
- Host policies to secure the fabric by restricting senders and receivers
- Flow visibility and analytics, including end-to-end path visibility and bit rate reported on a per-flow basis

- Events and notifications for operation and monitoring
- Northbound API integration with the broadcast controller

For more information about DCNM, see the Cisco DCNM documentation.

Cisco DCNM media controller installation

For the steps to install the DCNM media controller, see the Cisco DCNM installation guide. The recommended approach is to set up the DCNM media controller in native high-availability mode.

After DCNM is installed, you must manually configure it to work in media-controller mode. You do this by connecting to the DCNM server through Secure Shell (SSH) and logging in with the username root:

~]# appmgr stop dcnm
~]# appmgr set-mode media-controller

With an OVA installation, ensure that DCNM is deployed using the Large configuration (four vCPUs and 12 GB of RAM) and in thick-provisioned mode.

Fabric topology

The number and type of leaf and spine switches required in your IP fabric depend on the number and type of endpoints in your broadcasting center.

To determine the number of leaf switches you need, count the number of endpoints (cameras, microphones, gateways, production switchers, and so on) in your broadcasting center. For example, assume that your requirements are as follows:

- Number of 40-Gbps ports required: 40
- Number of 10-Gbps ports required: 150
- Number of 1-Gbps ports required: 50

The uplink bandwidth from a leaf switch to a spine switch must be equal to or greater than the bandwidth provisioned to endpoints. Tables 2 and 3 list the supported switches and their capacities. Figure 3 shows the network topology.

Table 2. Supported leaf switches

Leaf Switch | Endpoint Capacity | Uplink Capacity
Cisco Nexus 9236C Switch | 25 x 40-Gbps endpoints | 10 x 100-Gbps (1000-Gbps) uplinks
Cisco Nexus 9272Q Switch | 36 x 40-Gbps endpoints | 36 x 40-Gbps (1440-Gbps) uplinks
Cisco Nexus 92160YC-X Switch | 40 x 10-Gbps endpoints | 4 x 100-Gbps (400-Gbps) uplinks
Cisco Nexus 93180YC-EX Switch | 48 x 10-Gbps endpoints | 6 x 100-Gbps (600-Gbps) uplinks
Cisco Nexus 93108TC-EX Switch | 48 x 10-Gbps endpoints | 6 x 100-Gbps (600-Gbps) uplinks

Table 3. Supported spine switches

Spine Switch | Number of Ports
Cisco Nexus 9236C Switch | 36 x 100-Gbps ports
Cisco Nexus 9272Q Switch | 72 x 40-Gbps ports
Cisco Nexus 9508 with N9K-X9732C-EX line card | 256 x 40-Gbps ports
Cisco Nexus 9508 with N9K-X9636Q-R line card | 288 x 40-Gbps ports
Cisco Nexus 9508 with N9K-X9636C-R line card | 288 x 100-Gbps ports

The 9236C can be used as a leaf switch for 40-Gbps endpoints. Each supports up to 25 x 40-Gbps endpoints and requires 10 x 100-Gbps uplinks. The 93180YC-EX can be used as a leaf switch for 10-Gbps endpoints. Each supports up to 48 x 10-Gbps endpoints and requires 6 x 100-Gbps uplinks. The 93108TC-EX can be used as a leaf switch for 1/10GBASE-T endpoints. Each supports up to 48 x 1/10GBASE-T endpoints with 6 x 100-Gbps uplinks.

For the example requirements above:

- 40 x 40-Gbps endpoints require 2 x 9236C leaf switches with 20 x 100-Gbps uplinks.
- 160 x 10-Gbps endpoints require 4 x 93180YC-EX leaf switches with 24 x 100-Gbps uplinks.
- 70 x 1-Gbps endpoints require 2 x 93108TC-EX leaf switches with 4 x 100-Gbps uplinks (not all uplinks are used).

The total number of uplinks required is 48 x 100 Gbps.

The 9500 with the N9K-X9636C-R line card or the 9236C can be used as a spine. With the 9236C, each switch supports up to 36 x 100-Gbps ports, so two spine switches with 24 x 100-Gbps ports used per spine can be deployed. With the 9508 and the N9K-X9636C-R line card, each line card supports 36 x 100-Gbps ports, so two line cards in a single spine switch can be used.

Figure 3. Network topology with 9236C as spine

Figure 4. Network topology with 9508 as spine

Fabric configuration using Power-On Auto Provisioning (POAP)

POAP automates the process of upgrading software images and installing configuration files on Cisco Nexus switches that are being deployed in the network. When a Cisco Nexus switch with the POAP feature boots and does not find a startup configuration, the switch enters POAP mode, locates the DCNM Dynamic Host Configuration Protocol (DHCP) server, and bootstraps itself with its interface IP address, gateway, and DCNM Domain Name System (DNS) server IP addresses. It also obtains the IP address of the DCNM server to download the configuration script that is run on the switch to download and install the appropriate software image and device configuration file (Figure 5).

Figure 5. POAP process

The DCNM controller ships with configuration templates: the Professional Media Network (PMN) fabric spine template and the PMN fabric leaf template. The POAP definition can be generated using these templates as the baseline. Alternatively, you can generate a startup configuration for a switch and use it in the POAP definition. (The section "Fabric configuration using the command-line interface" later in this document provides steps for creating a configuration manually.)

When using POAP, follow these steps:

1. Create a DHCP scope for temporary IP assignment.
2. Upload the switch image to the DCNM image repository.
3. Generate the switch configuration using a startup configuration or a template.

Figures 6 through 11 illustrate these steps.

Figure 6. DCNM > Configure > POAP
Figure 7. DCNM > Configure > POAP > DHCP Scope

Figure 8. DCNM > Configure > POAP > Images and Configurations
Figure 9. DCNM > Configure > POAP > POAP Definitions
Figure 10. DCNM > Configure > POAP > POAP Definitions

Figure 11. Generate configuration using a template: DCNM > Configure > POAP > POAP Definitions > POAP Wizard

Fabric configuration using the command-line interface

Fabric setup can also be performed through the switch Command-Line Interface (CLI). The main components include an Interior Gateway Protocol (IGP) such as Open Shortest Path First (OSPF) for unicast routing and PIM for multicast routing. However, PIM is set up in passive mode because the DCNM media controller provisions the PIM Outgoing Interfaces (OIFs) and does not use protocol negotiation. Using Non-Blocking Multicast (NBM) controller mode, the switch is informed of the DCNM controller IP address and credentials. TCAM carving is used for host security policies and flow statistics. The configurations are shown here.

Setting up NBM controller mode

! Enable the features
feature nbm
feature nxapi

! Enable controller mode and specify the controller IP
nbm mode controller
controller ip <dcnm_vip_address> vrf <vrf_name>
controller-credentials username admin password 0 <password>

! Verification
show nbm controller
VRF          IP           Role     Status    Online-since
management   <dcnm_ip>    ACTIVE   ONLINE    <timestamp>

Configuring TCAM carving

! RACL TCAM for host policy and ing-l3-vlan-qos for flow statistics
! Requires a switch reload after configuration
! Only required on leaf switches
hardware access-list tcam region ing-racl 512
hardware access-list tcam region ing-l3-vlan-qos 256
hardware access-list tcam region ing-nbm 1536

Configuring PIM and the IGP (OSPF)

! PIM is enabled but in passive mode; an IGP is needed for any unicast traffic on the fabric
feature pim
feature ospf

router ospf 1
  maximum-paths 36
! maximum-paths must be greater than the number of uplinks between a single leaf and the spine layer.
! The default is 8, which is sufficient in most scenarios.

interface etx/y
  ip pim sparse-mode
  ip pim passive
  ip router ospf 1 area 0

Configuring the interfaces

! Fabric port configuration (links between leaf and spine)
interface Ethernet1/50
  no switchport
  ip address x.x.x.x/x
  ip router ospf 1 area 0
  ip pim sparse-mode
  ip pim passive
  no shutdown

! Host port configuration (link between switch and end host)
interface Ethernet1/1
  ip address x.x.x.x/x
  ip router ospf 1 area 0
  ip ospf passive-interface
  ip pim sparse-mode
  ip pim passive
  ip igmp version 3
  ip igmp immediate-leave

  ip igmp suppress v3-gsq
  no shutdown

! Host port configuration when using an SVI, with the port as a trunk or access port
interface Ethernet1/2
  switchport mode access
  switchport access vlan 100
  spanning-tree port type edge

interface vlan 100
  ip address x.x.x.x/x
  ip router ospf 1 area 0
  ip ospf passive-interface
  ip pim sparse-mode
  ip pim passive

vlan configuration 100
  ip igmp snooping fast-leave

Topology discovery

The DCNM media controller automatically discovers the topology when the fabric is brought up using POAP. If the fabric is provisioned through the CLI, the switches need to be discovered manually by DCNM. Be sure that the fabric is discovered after you set the default flow and host policies and before any endpoints are connected and discovered on the fabric. Figures 12 and 13 show the steps required to discover the fabric.

Figure 12. DCNM > Inventory > Discover Switches > LAN Switches

Figure 13. DCNM > Media Controller > Topology

Host discovery

A host can be discovered in the following ways:

- When the host sends an Address Resolution Protocol (ARP) request for its default gateway (the switch)
- When a sender host sends a multicast flow
- When a receiver host sends an IGMP join message
- When the host is manually added through an API

You can find host information by choosing Media Controller > Host. The host is also shown on the topology page, providing a visual representation of where the host is connected. Only hosts that have been discovered are displayed on the topology page.

Figure 14(a). DCNM > Media Controller > Topology

Figure 14(b). DCNM > Media Controller > Hosts

Host policy

Host policies are used to prevent or allow a sender to send traffic to certain multicast groups, and a receiver to subscribe to certain multicast groups. The default policy can use a whitelist or a blacklist model. A sender host policy is implemented with an ingress access list programmed on the sender leaf. A receiver host policy is implemented by applying an IGMP filter on the receiver leaf.

The host policy page lists all policies created on DCNM through the GUI or the API. It also indicates the devices to which a given policy is applied; to see this information, click the "i" icon on the host policy page (Figure 15).

Configuring the default host policy

Be sure to make any changes to the default host policy before any active host appears on the network. The default host policy specifies whether to use a blacklist (allow all by default) or a whitelist (deny all by default) model and should be defined before hosts are added to the network. Changes can be made to a host-specific policy at any time. However, no changes can be made to the default policy while it is applied to any host. To make changes to the default policy, the ports facing the hosts must be shut down; the flows then time out, and the DCNM controller removes the policy associations on the switch.
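As an illustration of how a sender host policy is realized in the fabric, the sketch below shows the general form of ingress access list that the controller could program on a sender leaf. The ACL name and addresses are hypothetical placeholders; host policies are created only through the DCNM GUI or API and must not be configured manually on the switch.

! Illustrative sketch only - host policies are programmed by the DCNM media controller, never by hand
! Permit sender x.x.x.x to send to multicast group 239.1.1.1; deny other multicast traffic
ip access-list nbm-sender-policy-example
  permit udp host x.x.x.x host 239.1.1.1
  deny ip any 224.0.0.0/4

A receiver host policy follows the same idea, but it is enforced as an IGMP filter on the receiver leaf that permits or denies the groups the receiver may join.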

Figure 15. DCNM > Media Controller > Policies > Host Policies

Flow policy

A flow policy specifies the flow properties, including the flow bandwidth and Differentiated Services Code Point (DSCP) marking. The default policy is applied to flows that do not have a specific policy defined. The flow policy helps ensure that sufficient bandwidth is provisioned on the network. It also helps ensure that the sender does not send traffic at a rate higher than the rate provisioned by the policy; excess traffic is policed (Figure 16).

Configuring the default flow policy

Changes to the default flow policy must be made before any active flows exist in the system. The controller ships with a default flow size of 3 Gbps and a class-selector (CS) DSCP marking.
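For reference, the same flow-policy concept appears later in this document as switch CLI for the standalone (no controller) deployment. The sketch below uses a hypothetical policy name and group range, and leaves the bandwidth value as a placeholder because its unit and range depend on the NX-OS release:

! Illustrative sketch of a flow policy expressed in CLI (standalone mode only)
nbm mode flow
  policy studio-video
    bandwidth <flow-bandwidth>
    ip group-range 239.1.1.1 to 239.1.1.255

In controller mode, flow policies are defined only through the DCNM GUI or API, not on the switch.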

Figure 16. DCNM > Media Controller > Policies > Flow Policies

Flow setup

Flows can be set up in two ways: a receiver can request a flow using Internet Group Management Protocol (IGMP), or the broadcast controller can set up a flow using an API. Details of the API and API integration are discussed in the section "API integration with the broadcast controller."

Flow visibility and bandwidth tracking

One of the broadcast industry's biggest concerns in moving to IP is maintaining the capability to track the flow path. The DCNM media controller provides end-to-end visibility on a per-flow basis. The flow information can be queried from the DCNM GUI or through an API. You can also view bandwidth utilization per link through the GUI or an API. In addition, when the Bandwidth check box is selected, the color of the links changes based on their utilization. See Figures 17, 18, and 19.
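Flow state can also be spot-checked from the switch CLI. The commands below are a sketch based on the NX-OS IP Fabric for Media show commands; exact keywords and output fields vary by release.

! List active flows with their senders, receivers, and reserved bandwidth
show nbm flows
! Show measured per-flow rates, including any traffic that was policed
show nbm flows statistics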

Figure 17. DCNM > Media Controller > Topology > Multicast Group
Figure 18. DCNM > Media Controller > Topology, double-click a link

Figure 19. DCNM > Media Controller > Topology

Flow statistics and analysis

DCNM maintains a real-time per-flow rate monitor and can provide the bit rate of every flow in the system. If a flow exceeds the rate defined in the flow policy, the flow is policed, and the policed rate is also displayed. Flow information is cached for a period of 24 hours and can be exported and stored for offline analysis (Figure 20).

Figure 20. DCNM > Media Controller > Flow Status

Flow alias

Every flow on the fabric is a video, audio, or ancillary flow. To make a flow easier for an operator to recognize, a more meaningful name can be associated with a multicast group. Flow alias provides the option to name a flow, and the alias can be referenced throughout the DCNM GUI instead of the multicast group (Figure 21).

Figure 21. DCNM > Media Controller > Flow Alias
Figure 22. DCNM flow view with flow alias

Events and notification

The DCNM media controller logs events that can be subscribed to using the Advanced Message Queuing Protocol (AMQP). The events are also logged locally and can be viewed through the GUI. Every activity that occurs is logged: a new sender coming online, a link failure, a switch reload, a new host policy being pushed out, and so on (Figure 23).

Figure 23. DCNM > Media Controller > Events

Precision Time Protocol

Precision Time Protocol (PTP) is used to provide clock information to all endpoints to help ensure that they are time synchronized. If you need PTP, you must enable the boundary clock function on all switches; the fabric does not support multicast routing for PTP packets. The active and passive grandmaster clocks are connected to leaf switches. Two profiles are supported: SMPTE 2059-2 and AES67. Either profile can be enabled on any interface. The switches connected to the grandmaster must be configured with the same PTP parameters (Figure 24).

Example:

feature ptp
! The PTP source IP can be any IP address. If the switch has a loopback, use the loopback IP as the PTP source.
ptp source <ip_address>

interface Ethernet1/1
  ptp
  ptp delay-request minimum interval smpte <interval>
  ptp announce interval smpte <interval>
  ptp sync interval smpte <interval>
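After PTP is configured, the boundary-clock state can be confirmed from the switch CLI with the standard NX-OS PTP show commands (output fields vary by release):

! Verify per-port PTP state and the parent (grandmaster) the switch is synchronized to
show ptp brief
show ptp parent
! Review clock corrections to confirm the switch is locked with minimal offset
show ptp corrections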

Figure 24. Grandmaster and passive clock connectivity

Quality of Service (QoS)

QoS provides differential treatment of traffic based on its DSCP markings. The suggested markings for traffic traversing the fabric are shown in Table 4.

Table 4. Suggested QoS markings

Service Class (Flow/Control) | DSCP Value | Queue
PTP, Time and Sync | EF | Control / Priority Level 1
Intercom Audio | EF | Priority Level 1
AES67, 2110 Audio | AF41 | Priority Level 2 or Bandwidth
2022-6, 2110 Video | AF41 | Priority Level 2 or Bandwidth
2110 Ancillary | CS3 | Bandwidth
Other / Unicast | 0 (default) | Best Effort / Default

API integration with the broadcast controller

The broadcast controller integrates with the DCNM media controller through open REST APIs. The integration helps ensure that the end-operator experience remains unchanged while the solution moves from an SDI network to an IP network. The broadcast controller remains synchronized with the DCNM network controller (Figure 25).
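The general shape of such a REST exchange is sketched below. The endpoint path and JSON fields are hypothetical placeholders for illustration only; the actual resource names and payloads are defined in the published DCNM API reference.

# Hypothetical sketch only: the endpoint path and payload fields are placeholders, not the documented DCNM API
curl -k -u admin:<password> -H "Content-Type: application/json" \
  -X POST "https://<dcnm_vip_address>/<media-controller-flow-endpoint>" \
  -d '{"sender": "x.x.x.x", "receiver": "x.x.x.x", "group": "239.x.x.x"}'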

Figure 25. Integration of broadcast and DCNM media controllers

For a list of APIs, see the Cisco DCNM REST API documentation.

Failure handling

Most deployments implement redundancy by provisioning parallel networks and connecting endpoints to both networks (for example, as described by SMPTE 2022-7). Any failure on one network does not affect traffic to endpoints because a copy of each frame is always available on the redundant network (Figure 26).

Within a fabric, when a failure takes place, such as a link going down or a spine failure, recovery mechanisms built into the DCNM media controller help ensure that traffic converges. However, a few seconds may be needed to reprogram the flows on other available paths. Flows are not moved back to the original path after the original path recovers; new flows use the recovered path. The DCNM media controller provides a list of flows affected by any failure.

Figure 26. Redundant IP fabrics: completely independent of each other

The following types of failures may occur:

- Link flap between a leaf and spine (fabric link)
- Spine reload or power down
- Leaf reload or power down
- Loss of connectivity between a switch and the DCNM media controller
- DCNM failure (server reload or power down)

Link failure

When a link fails between a leaf and a spine, flows are moved to other available links. In scenarios where the available bandwidth is not sufficient to move all affected flows, the media controller moves a subset of the flows (chosen at random). For the flows that it was not able to service, an event is generated notifying the user about the affected flows. When bandwidth becomes available later, those flows are programmed after an IGMP join message is received from a receiver.

Spine failure

A spine reload causes all flows routed through that spine to fail. The media controller moves the affected flows to other spine switches. In scenarios where the available bandwidth is not sufficient to move all affected flows, the media controller moves a subset of the flows (chosen at random). For the flows that it was not able to service, an event is generated notifying the user about the affected flows. When bandwidth becomes available later, those flows are programmed after an IGMP join message is received from a receiver.

Leaf failure

A leaf failure results in the loss of traffic to endpoints connected to that leaf. After recovery, the endpoints have to resend IGMP join messages.

Connectivity loss between Cisco DCNM and a switch

The DCNM media controller is the control plane for the fabric. When connectivity is lost between a switch and the controller, no new control-plane activities are possible: new flows cannot be programmed, and flows cannot be removed. Existing flows continue to work. The switch periodically retries the connection to DCNM.

Cisco DCNM failure

DCNM high availability is implemented using an active and cold-standby pair. The databases of the active and standby instances are synchronized. When the active instance fails, the standby instance detects the failure and assumes the active role. DCNM high-availability failover takes about three minutes to complete, during which time no new control-plane activities can be performed. (A future release will provide an active/hot-standby HA implementation.)

Cisco DCNM media controller connectivity options: fabric control

The switches in the IP fabric can communicate with DCNM using Out-Of-Band (OOB) management ports, OOB front-panel ports, or in-band management. DCNM ships with two Network Interface Cards (NICs): Eth0 and Eth1. POAP works on Eth1, with the DHCP server on DCNM configured to listen for DHCP messages on Eth1 only. Switches can use POAP through the management port or a front-panel port. For POAP to work, the OOB management port or the front-panel port used for communication with DCNM must be on the same VLAN as the DCNM Eth1 port. Figures 27, 28, and 29 show the control options.

Figure 27. Using OOB management ports to communicate with the Cisco DCNM media controller

Figure 28. Using OOB front-panel ports to communicate with the Cisco DCNM media controller
Figure 29. Using in-band ports to communicate with the Cisco DCNM media controller

The following CLI snippet shows the configuration using the OOB management port:

interface mgmt0
  vrf member management
  ip address x.x.x.x/24

nbm mode controller
controller ip <ip_address> vrf management
controller-credentials username admin password 7 ljw39655

The following CLI snippet shows the configuration using an OOB front-panel port:

! Define the fabric control VRF
vrf context fabric_control

! Specify the VRF name under the NBM controller configuration
nbm mode controller
controller ip <dcnm_ip> vrf fabric_control

interface Ethernet1/10
  description to_fabric_control_network
  vrf member fabric_control
  ip address x.x.x.x/x
  no shutdown

The following CLI snippet shows the configuration when using in-band connectivity:

! Create a loopback interface on each switch
! Advertise the loopback into the IGP
! Extend the IGP to the DCNM control network
interface loopback 100
  ip address x.x.x.x/24
  ip router ospf 1 area 0

nbm mode controller
controller ip <ip_address> vrf default
controller-credentials username admin password 7 ljw39655

Designing the control network

The control network can be divided into two segments. One is the fabric control network, which includes the network between the IP fabric and the DCNM media controller. The other is the endpoint control network, which enables the broadcast controller to communicate with the endpoints and the DCNM media controller. Figure 30 shows the logical network connectivity between the broadcast controller, the endpoints, and DCNM. The control network typically carries unicast control traffic between the controllers and the endpoints.

Figure 30. Control network

Cisco IP Fabric for Media architecture

Figures 31 through 33 show various deployment topologies using Cisco IP Fabric for Media.

Figure 31 describes a topology with redundant IP networks that can carry 2022-6/7 or 2110 flows. Endpoints must be capable of supporting hitless merge.

Figure 32 shows the flexibility offered by a spine and leaf design that can be used in a distributed studio deployment model.

Figure 33 shows a possible deployment scenario that connects a remote site using the concept of a remote leaf. The remote leaf is not configured any differently from a leaf in the main facility; it is a part of the fabric.

Figure 31. Redundant IP networks
Figure 32. Distributed studios

Figure 33. Remote leaf

Multi-site and PIM border

To aid remote production and to interconnect fabrics with each other, or to connect a media fabric to an external PIM network, the multi-site/PIM border feature has been introduced. This solution uses PIM Source-Specific Multicast (SSM) to enable flow setup between remote senders and receivers. For a receiver to request a flow from a remote sender, it must send an IGMPv3 report that includes the sender and group addresses. Using the unicast routing table, DCNM and the switch create and propagate a PIM (S,G) join toward the source. When the remote fabric receives the PIM (S,G) join, it programs the flow to go all the way from the sender to the receiver. DCNM intelligently manages the inter-site link bandwidth, ensuring reliable flow programming.

This solution requires designating a leaf as a border leaf. The border leaf is the leaf used to interconnect the fabric to remote sites or to a PIM router. The current software release supports a single border leaf per site; however, the border leaf itself can have multiple Layer 3 routed links toward the external network or sites.

The following CLI snippet shows the configuration of the border leaf:

! Specify the switch role under the NBM controller configuration
nbm mode controller
switch-role border-leaf

! Enable PIM on the interface connecting the remote site or external PIM router
interface Ethernet1/52
  description to_external_network
  ip address x.x.x.x/x
  ip router ospf 1 area 0
  ip pim sparse-mode
  no shutdown
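After the border leaf is configured, standard NX-OS multicast show commands can confirm that the external PIM adjacency is established and that (S,G) state is built for flows crossing the site boundary:

! Verify the PIM neighbor on the interface facing the external network or remote site
show ip pim neighbor
! Verify that (S,G) entries exist for flows pulled from or sent toward the remote site
show ip mroute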

Figure 34. Multi-site

Guidelines and limitations

The following section presents guidelines and limitations for the solution described here.

Topology discovery

The topology can be discovered when the switches use POAP or when they are manually added to DCNM. Be sure to add only switches that participate in the Cisco IP Fabric for Media and that are managed by the DCNM media controller.

Flow setup

Flows can be set up using IGMP join messages and through an API call. The system supports IGMP Version 2 (IGMPv2) and IGMPv3. A static OIF cannot be added through the CLI; any static flow setup must be implemented through an API call.

Note the following guidelines and limitations for flows set up using an API:

- DCNM notifies the broadcast controller and the user about any unsuccessful API call; the broadcast controller or user must retry the call. The notification is sent over the AMQP bus.
- Static API flow setup is not supported when the receiver is connected through a Layer 2 port and a Switch Virtual Interface (SVI).

Host policy

- The default host policy shipped with DCNM is a blacklist policy (allow all by default). Any changes to the default policy must be made before any hosts are discovered on the system. The default policy cannot be edited after hosts are active. This restriction does not apply to host-specific policies.
- The receiver host policy applied to a host connected through a Layer 2 port and SVI applies to all the join messages sent by all the hosts on that VLAN; it cannot be applied to just a single receiver.

- When there are multiple hosts behind a Layer 3 port or SVI and the default host policy is a whitelist, in certain scenarios ARP is the only mechanism through which a host is detected and its specific host policy is programmed, even when a host policy permits the host. Host detection through IGMP or multicast traffic may not work.
- Do not modify any host policy from the CLI. You cannot apply any custom policies from the CLI; changes have to be made using the DCNM media controller.

Flow policy

- You must make any changes to the default policy before any flows are active on the fabric.
- A flow policy cannot be modified while the flow is active. You can make modifications only after the flow ages out, which typically occurs after the sender stops transmitting the flow (inactive flows time out after 180 seconds).
- As a best practice, assume a flow size that is at least 5 percent greater than the flow bit rate. This accommodates a certain amount of burst without causing the flow to be policed. For example, a 3-Gbps flow should be provisioned as 3.15 Gbps or 3.3 Gbps.

Design

- Always be sure that the uplink bandwidth from the leaf layer to the spine layer is equal to or greater than the bandwidth provisioned to the endpoints. For example, when using a Cisco Nexus 93180YC-EX Switch, 48 x 10-Gbps connectivity to the endpoints requires a minimum of 5 x 100-Gbps uplinks; you can use all 6 x 100-Gbps uplinks.
- When possible, spread endpoints across different leaf switches to evenly distribute sources and receivers on all leaf switches.
- Limit the number of spines to two. In scenarios where a fixed spine such as the 9236C is not sufficient, consider using the Nexus 9508 as a spine.
- Design redundant IP networks (2022-7). The networks should be independent, and each should have its own instance of the DCNM media controller.
- The bandwidth management algorithm does not account for unicast traffic on the network. Be sure that unicast traffic, if any, occupies minimal network bandwidth.

Live and file-based workflows on the same fabric

The benefit of a true IP fabric is its ability to carry any IP stream. Using Quality of Service (QoS), flows can be prioritized according to business requirements. In an IP Fabric for Media, live production workflows, which are multicast (2022-5, 2110), are of a higher priority than file-based workflows, which are typically unicast. When DCNM orchestrates multicast flows, it automatically places them in a high-priority class, so that as these flows egress the switch they are given the highest priority. The user has to define custom policies to ensure that unicast flows are placed in a low-priority queue. The configuration below ensures that live traffic (multicast) is prioritized over file traffic (unicast).

! Create an ACL matching unicast traffic
! (The unicast destination ranges are shown as placeholders; match your unicast address space.)
ip access-list pmn-ucast
  10 permit ip any x.x.x.x/x
  permit ip any x.x.x.x/x
  permit ip any x.x.x.x/x

! Create an ACL matching multicast traffic

ip access-list pmn-mcast
  10 permit ip any 224.0.0.0/4

policy-map type qos pmn-qos
  class type qos pmn-ucast
    set qos-group 0
  class type qos pmn-mcast
    set qos-group 7
  class class-default

! Apply the service policy on all interfaces on all switches in the fabric
interface ethernet 1/1-54
  service-policy type qos input pmn-qos

Single switch with controller

In certain deployments, a single switch provides sufficient ports and bandwidth, and the endpoints are directly connected to this switch. Any Nexus 9000 switch can be used in this type of deployment. The controller offers features such as fabric configuration, flow visibility and analytics, security, and northbound integration with the broadcast controller. (Support for the 9508 with R-Series line cards will be available in a future release.)

Setting up NBM controller mode

! Enable the features
feature nbm
feature nxapi

! Enable controller mode and specify the controller IP
nbm mode controller
controller ip <dcnm_vip_address> vrf <vrf_name>
controller-credentials username admin password 0 <password>

! Verification
show nbm controller
VRF          IP           Role     Status    Online-since
management   <dcnm_ip>    ACTIVE   ONLINE    <timestamp>

Configuring TCAM carving

! RACL TCAM for host policy and ing-l3-vlan-qos for flow statistics
! Requires a switch reload after configuration
! The configuration below is not required on the R-Series Nexus 9500
hardware access-list tcam region ing-racl 512
hardware access-list tcam region ing-l3-vlan-qos 256
hardware access-list tcam region ing-nbm 1536

Configuring the interfaces

! Host port configuration (link between switch and end host)
interface Ethernet1/1
  ip address x.x.x.x/x
  ip router ospf 1 area 0
  ip ospf passive-interface
  ip pim sparse-mode
  ip pim passive
  ip igmp version 3
  ip igmp immediate-leave
  ip igmp suppress v3-gsq
  no shutdown

Single switch without a controller

A single switch can also be deployed without a controller. In this case there is no direct interaction between the broadcast controller and the network; the broadcast controller communicates with the endpoints, which use IGMP to subscribe to any stream on the switch. The required configuration follows.

Configuring TCAM carving

! RACL TCAM for host policy and ing-l3-vlan-qos for flow statistics
! Requires a switch reload after configuration
! The configuration below is not required on the R-Series Nexus 9500
hardware access-list tcam region ing-racl 512
hardware access-list tcam region ing-l3-vlan-qos 256
hardware access-list tcam region ing-nbm 1536

Configuring NBM mode flow

! A 9500 with EX-Series line cards needs the configuration below
! A 9500 with R-Series line cards is natively non-blocking, and the CLI below is optional

nbm mode flow
  policy <policy-name>
    bandwidth <flow-bandwidth>
    ip group-range <ip-address> to <ip-address>

Configuring the interfaces

! Host port configuration (link between switch and end host)
interface Ethernet1/1
  ip address x.x.x.x/x
  ip router ospf 1 area 0
  ip ospf passive-interface
  ip pim sparse-mode
  ip pim passive
  ip igmp version 3
  ip igmp immediate-leave
  ip igmp suppress v3-gsq
  no shutdown

Conclusion

Cisco IP Fabric for Media provides the broadcast industry a path to migrate from SDI to an IP-based infrastructure with multiple modes of deployment, be it a flexible spine and leaf design or a single modular chassis or switch. The fabric guarantees zero-drop multicast transport with minimal latency and jitter. The fabric provides flow visibility and bit-rate statistics on a per-flow basis. Host policies help ensure that the fabric is secured from unauthorized endpoints. Integration with any broadcast controller through open REST APIs helps ensure that the complexity of the network is abstracted and that the user has an unchanged operator experience.

For more information

See the Cisco Nexus 9000 Series IP Fabric for Media Solution Guide:
x/ip_fabric_for_media/solution/guide_703i45/b_cisco_nexus_9000_series_ip_fabric_for_media_solution_guide_703I45.html


More information

IP Multicast Technology Overview

IP Multicast Technology Overview IP multicast is a bandwidth-conserving technology that reduces traffic by delivering a single stream of information simultaneously to potentially thousands of businesses and homes. Applications that take

More information

HP Routing Switch Series

HP Routing Switch Series HP 12500 Routing Switch Series EVI Configuration Guide Part number: 5998-3419 Software version: 12500-CMW710-R7128 Document version: 6W710-20121130 Legal and notice information Copyright 2012 Hewlett-Packard

More information

Configuring Virtual Port Channels

Configuring Virtual Port Channels This chapter contains the following sections: Information About vpcs, page 1 Guidelines and Limitations for vpcs, page 10 Configuring vpcs, page 11 Verifying the vpc Configuration, page 25 vpc Default

More information

Configuring Port Channels

Configuring Port Channels CHAPTER 5 This chapter describes how to configure port channels and to apply and configure the Link Aggregation Control Protocol (LACP) for more efficient use of port channels using Cisco Data Center Network

More information

Configuring Wireless Multicast

Configuring Wireless Multicast Finding Feature Information, on page 1 Prerequisites for, on page 1 Restrictions for, on page 1 Information About Wireless Multicast, on page 2 How to Configure Wireless Multicast, on page 6 Monitoring

More information

Configuring IGMP Snooping

Configuring IGMP Snooping This chapter describes how to configure Internet Group Management Protocol (IGMP) snooping on a Cisco NX-OS device. About IGMP Snooping, page 1 Licensing Requirements for IGMP Snooping, page 4 Prerequisites

More information

3. What could you use if you wanted to reduce unnecessary broadcast, multicast, and flooded unicast packets?

3. What could you use if you wanted to reduce unnecessary broadcast, multicast, and flooded unicast packets? Nguyen The Nhat - Take Exam Exam questions Time remaining: 00: 00: 51 1. Which command will give the user TECH privileged-mode access after authentication with the server? username name privilege level

More information

Viewing IP and MPLS Multicast Configurations

Viewing IP and MPLS Multicast Configurations CHAPTER 19 These topics provide an overview of the IP Multicast technology and describe how to view IP and multicast configurations in Prime Network Vision: IP and MPLS Multicast Configuration: Overview,

More information

Configuring Virtual Port Channels

Configuring Virtual Port Channels Configuring Virtual Port Channels This chapter describes how to configure virtual port channels (vpcs) on Cisco Nexus 5000 Series switches. It contains the following sections: Information About vpcs, page

More information

Implementing VXLAN. Prerequisites for implementing VXLANs. Information about Implementing VXLAN

Implementing VXLAN. Prerequisites for implementing VXLANs. Information about Implementing VXLAN This module provides conceptual information for VXLAN in general and configuration information for layer 2 VXLAN on Cisco ASR 9000 Series Router. For configuration information of layer 3 VXLAN, see Implementing

More information

Configuring Private VLANs

Configuring Private VLANs Finding Feature Information, on page 1 Prerequisites for Private VLANs, on page 1 Restrictions for Private VLANs, on page 1 Information About Private VLANs, on page 2 How to Configure Private VLANs, on

More information

Data Center Configuration. 1. Configuring VXLAN

Data Center Configuration. 1. Configuring VXLAN Data Center Configuration 1. 1 1.1 Overview Virtual Extensible Local Area Network (VXLAN) is a virtual Ethernet based on the physical IP (overlay) network. It is a technology that encapsulates layer 2

More information

Deploy Application Load Balancers with Source Network Address Translation in Cisco DFA

Deploy Application Load Balancers with Source Network Address Translation in Cisco DFA White Paper Deploy Application Load Balancers with Source Network Address Translation in Cisco DFA Last Updated: 1/27/2016 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco

More information

IP Multicast Optimization: Optimizing PIM Sparse Mode in a Large IP Multicast Deployment

IP Multicast Optimization: Optimizing PIM Sparse Mode in a Large IP Multicast Deployment IP Multicast Optimization: Optimizing PIM Sparse Mode in a Large IP Multicast Deployment Finding Feature Information, page 1 Prerequisites for Optimizing PIM Sparse Mode in a Large IP Multicast Deployment,

More information

Configuring Private VLANs Using NX-OS

Configuring Private VLANs Using NX-OS This chapter describes how to configure private VLANs on Cisco NX-OS devices. Private VLANs provide additional protection at the Layer 2 level. This chapter includes the following sections: Finding Feature

More information

Configuring Private VLANs

Configuring Private VLANs CHAPTER 15 This chapter describes how to configure private VLANs on the Cisco 7600 series routers. Note For complete syntax and usage information for the commands used in this chapter, refer to the Cisco

More information

IP Multicast Routing Technology Overview

IP Multicast Routing Technology Overview Finding Feature Information, on page 1 Information About IP Multicast Technology, on page 1 Finding Feature Information Your software release may not support all the features documented in this module.

More information

Nexus 9000/3000 Graceful Insertion and Removal (GIR)

Nexus 9000/3000 Graceful Insertion and Removal (GIR) White Paper Nexus 9000/3000 Graceful Insertion and Removal (GIR) White Paper September 2016 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 21

More information

MULTICAST SECURITY. Piotr Wojciechowski (CCIE #25543)

MULTICAST SECURITY. Piotr Wojciechowski (CCIE #25543) MULTICAST SECURITY Piotr Wojciechowski (CCIE #25543) ABOUT ME Senior Network Engineer MSO at VeriFone Inc. Previously Network Solutions Architect at one of top polish IT integrators CCIE #25543 (Routing

More information

Demand-Based Control Planes for Switching Fabrics

Demand-Based Control Planes for Switching Fabrics Demand-Based Control Planes for Switching Fabrics Modern switching fabrics use virtual network overlays to support mobility, segmentation, and programmability at very large scale. Overlays are a key enabler

More information

Sections Describing Standard Software Features

Sections Describing Standard Software Features 30 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic-qos (auto-qos) commands or by using standard QoS commands. With QoS, you can give preferential treatment to

More information

Configuring Rapid PVST+

Configuring Rapid PVST+ This chapter describes how to configure the Rapid per VLAN Spanning Tree (Rapid PVST+) protocol on Cisco NX-OS devices using Cisco Data Center Manager (DCNM) for LAN. For more information about the Cisco

More information

Cisco Nexus Data Broker

Cisco Nexus Data Broker Data Sheet Cisco Nexus Data Broker Product Overview You used to monitor traffic mainly to manage network operations. Today, when you monitor traffic you can find out instantly what is happening throughout

More information

Configuring APIC Accounts

Configuring APIC Accounts This chapter contains the following sections: Adding an APIC Account, page 1 Viewing APIC Reports, page 3 Assigning an APIC account to a Pod, page 15 Handling APIC Failover, page 15 Adding an APIC Account

More information

Configuring SPAN. Finding Feature Information. About SPAN. SPAN Sources

Configuring SPAN. Finding Feature Information. About SPAN. SPAN Sources This chapter describes how to configure an Ethernet switched port analyzer (SPAN) to analyze traffic between ports on Cisco NX-OS devices. Finding Feature Information, on page 1 About SPAN, on page 1 Licensing

More information

Configuring Virtual Port Channels

Configuring Virtual Port Channels This chapter contains the following sections: Information About vpcs, page 1 Guidelines and Limitations for vpcs, page 10 Verifying the vpc Configuration, page 11 vpc Default Settings, page 16 Configuring

More information

OTV Technology Introduction and Deployment Considerations

OTV Technology Introduction and Deployment Considerations CHAPTER 1 OTV Technology Introduction and Deployment Considerations This document introduces a Cisco innovative LAN extension technology called Overlay Transport Virtualization (OTV). OTV is an IP-based

More information

Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k)

Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Overview 2 General Scalability Limits 2 Fabric Topology, SPAN, Tenants, Contexts

More information

Hands-On IP Multicasting for Multimedia Distribution Networks

Hands-On IP Multicasting for Multimedia Distribution Networks Hands-On for Multimedia Distribution Networks Course Description This Hands-On course provides an in-depth look how IP multicasting works, its advantages and limitations and how it can be deployed to provide

More information

Configuring Virtual Port Channels

Configuring Virtual Port Channels This chapter contains the following sections: Information About vpcs vpc Overview Information About vpcs, on page 1 Guidelines and Limitations for vpcs, on page 11 Verifying the vpc Configuration, on page

More information

Multicast H3C Low-End Ethernet Switches Configuration Examples. Table of Contents

Multicast H3C Low-End Ethernet Switches Configuration Examples. Table of Contents Table of Contents Table of Contents Chapter 1 Protocol Overview... 1-1 1.1 Overview... 1-1 1.2 Support of Features... 1-2 1.3 Configuration Guidance... 1-3 1.3.1 Configuring IGMP Snooping... 1-3 1.3.2

More information

Cisco Nexus 1000V InterCloud

Cisco Nexus 1000V InterCloud Deployment Guide Cisco Nexus 1000V InterCloud Deployment Guide (Draft) June 2013 2013 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 49 Contents

More information

INTRODUCTION 2 DOCUMENT USE PREREQUISITES 2

INTRODUCTION 2 DOCUMENT USE PREREQUISITES 2 Table of Contents INTRODUCTION 2 DOCUMENT USE PREREQUISITES 2 LISP MOBILITY MODES OF OPERATION/CONSUMPTION SCENARIOS 3 LISP SINGLE HOP SCENARIO 3 LISP MULTI- HOP SCENARIO 3 LISP IGP ASSIT MODE 4 LISP INTEGRATION

More information

Deploying LISP Host Mobility with an Extended Subnet

Deploying LISP Host Mobility with an Extended Subnet CHAPTER 4 Deploying LISP Host Mobility with an Extended Subnet Figure 4-1 shows the Enterprise datacenter deployment topology where the 10.17.1.0/24 subnet in VLAN 1301 is extended between the West and

More information

Implementing IPv6 Multicast

Implementing IPv6 Multicast Implementing IPv6 Multicast Last Updated: November 14, 2011 Traditional IP communication allows a host to send packets to a single host (unicast transmission) or to all hosts (broadcast transmission).

More information

Configuring Port-Based Traffic Control

Configuring Port-Based Traffic Control Overview of Port-Based Traffic Control, page 1 Finding Feature Information, page 2 Information About Storm Control, page 2 How to Configure Storm Control, page 4 Information About Protected Ports, page

More information

Configuring Rate Limits

Configuring Rate Limits This chapter describes how to configure rate limits for supervisor-bound traffic on Cisco NX-OS devices. This chapter includes the following sections: About Rate Limits, page 1 Licensing Requirements for

More information

Customizing IGMP. Finding Feature Information. Last Updated: December 16, 2011

Customizing IGMP. Finding Feature Information. Last Updated: December 16, 2011 Customizing IGMP Last Updated: December 16, 2011 Internet Group Management Protocol (IGMP) is used to dynamically register individual hosts in a multicast group on a particular LAN segment. Enabling Protocol

More information

Modeling an Application with Cisco ACI Multi-Site Policy Manager

Modeling an Application with Cisco ACI Multi-Site Policy Manager Modeling an Application with Cisco ACI Multi-Site Policy Manager Introduction Cisco Application Centric Infrastructure (Cisco ACI ) Multi-Site is the policy manager component used to define intersite policies

More information

Configuring Port-Based Traffic Control

Configuring Port-Based Traffic Control CHAPTER 22 This chapter describes how to configure the port-based traffic control features on the Cisco ME 3400 Ethernet Access switch. For complete syntax and usage information for the commands used in

More information

Configuring Local SPAN and ERSPAN

Configuring Local SPAN and ERSPAN This chapter contains the following sections: Information About ERSPAN, page 1 Licensing Requirements for ERSPAN, page 5 Prerequisites for ERSPAN, page 5 Guidelines and Limitations for ERSPAN, page 5 Guidelines

More information

Layer 4 to Layer 7 Service Insertion, page 1

Layer 4 to Layer 7 Service Insertion, page 1 This chapter contains the following sections:, page 1 Layer 4 to Layer 7 Policy Model, page 2 About Service Graphs, page 2 About Policy-Based Redirect, page 5 Automated Service Insertion, page 12 About

More information

Configuring Q-in-Q VLAN Tunnels

Configuring Q-in-Q VLAN Tunnels Information About Q-in-Q Tunnels, page 1 Licensing Requirements for Interfaces, page 7 Guidelines and Limitations, page 7 Configuring Q-in-Q Tunnels and Layer 2 Protocol Tunneling, page 8 Configuring Q-in-Q

More information

Configuring Port-Based Traffic Control

Configuring Port-Based Traffic Control Overview of Port-Based Traffic Control, page 2 Finding Feature Information, page 2 Information About Storm Control, page 2 How to Configure Storm Control, page 4 Finding Feature Information, page 9 Information

More information

Configuring IGMP Snooping and MVR

Configuring IGMP Snooping and MVR CHAPTER 21 This chapter describes how to configure Internet Group Management Protocol (IGMP) snooping on the Cisco ME 3400 Ethernet Access switch, including an application of local IGMP snooping, Multicast

More information

Configuring VXLAN EVPN Multi-Site

Configuring VXLAN EVPN Multi-Site This chapter contains the following sections: About VXLAN EVPN Multi-Site, page 1 Licensing Requirements for VXLAN EVPN Multi-Site, page 2 Guidelines and Limitations for VXLAN EVPN Multi-Site, page 2 Enabling

More information

Configuring VXLAN EVPN Multi-Site

Configuring VXLAN EVPN Multi-Site This chapter contains the following sections: About VXLAN EVPN Multi-Site, page 1 Guidelines and Limitations for VXLAN EVPN Multi-Site, page 2 Enabling VXLAN EVPN Multi-Site, page 2 Configuring VNI Dual

More information

Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework

Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework White Paper Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework August 2015 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

More information

Configuring Port Channels

Configuring Port Channels CHAPTER 5 This chapter describes how to configure port channels and to apply and configure the Link Aggregation Control Protocol (LACP) for more efficient use of port channels in Cisco DCNM. For more information

More information

PIM Configuration. Page 1 of 9

PIM Configuration. Page 1 of 9 PIM Configuration Page 1 of 9 Contents Contents...2 Chapter 1 PIM Configuration...3 1.1 PIM Description...3 1.1.1 Principles of PIM-DM...3 1.1.2 Principles of PIM-SM...4 1.1.3 Principles of PIM-SSM...5

More information

Contents. Configuring EVI 1

Contents. Configuring EVI 1 Contents Configuring EVI 1 Overview 1 Layer 2 connectivity extension issues 1 Network topologies 2 Terminology 3 Working mechanism 4 Placement of Layer 3 gateways 6 ARP flood suppression 7 Selective flood

More information

IT Deployment Guide AT-OMNI-512 AT-OMNI-521 AT-OMNI-111 AT-OMNI-112 AT-OMNI-121 AT-OMNI-122. Atlona Manuals OmniStream AT-OMNI-232

IT Deployment Guide AT-OMNI-512 AT-OMNI-521 AT-OMNI-111 AT-OMNI-112 AT-OMNI-121 AT-OMNI-122. Atlona Manuals OmniStream AT-OMNI-232 IT Deployment Guide AT-OMNI-111 AT-OMNI-121 AT-OMNI-512 AT-OMNI-521 AT-OMNI-232 Atlona Manuals OmniStream Version Information Version Release Date Notes 1 08/17 Initial release 2 11/17 Added language to

More information

Configuring Basic IP Multicast

Configuring Basic IP Multicast IP multicast is a bandwidth-conserving technology that reduces traffic by delivering a single stream of information simultaneously to potentially thousands of corporate businesses and homes. Applications

More information

GRE and DM VPNs. Understanding the GRE Modes Page CHAPTER

GRE and DM VPNs. Understanding the GRE Modes Page CHAPTER CHAPTER 23 You can configure Generic Routing Encapsulation (GRE) and Dynamic Multipoint (DM) VPNs that include GRE mode configurations. You can configure IPsec GRE VPNs for hub-and-spoke, point-to-point,

More information

Overview. Information About Layer 3 Unicast Routing. Send document comments to CHAPTER

Overview. Information About Layer 3 Unicast Routing. Send document comments to CHAPTER CHAPTER 1 This chapter introduces the basic concepts for Layer 3 unicast routing protocols in Cisco NX-OS. This chapter includes the following sections: Information About Layer 3 Unicast Routing, page

More information

Configuring Port-Based Traffic Control

Configuring Port-Based Traffic Control CHAPTER 18 This chapter describes how to configure port-based traffic control features on the Catalyst 3750 Metro switch. For complete syntax and usage information for the commands used in this chapter,

More information

Modular Policy Framework. Class Maps SECTION 4. Advanced Configuration

Modular Policy Framework. Class Maps SECTION 4. Advanced Configuration [ 59 ] Section 4: We have now covered the basic configuration and delved into AAA services on the ASA. In this section, we cover some of the more advanced features of the ASA that break it away from a

More information

Contents. EVPN overview 1

Contents. EVPN overview 1 Contents EVPN overview 1 EVPN network model 1 MP-BGP extension for EVPN 2 Configuration automation 3 Assignment of traffic to VXLANs 3 Traffic from the local site to a remote site 3 Traffic from a remote

More information

SWP-0208G, 8+2SFP. 8-Port Gigabit Web Smart Switch. User s Manual

SWP-0208G, 8+2SFP. 8-Port Gigabit Web Smart Switch. User s Manual SWP-0208G 1 SWP-0208G, 8+2SFP 8-Port Gigabit Web Smart Switch User s Manual Version: 3.4 April 1, 2008 2 TABLE OF CONTENT 1.0 INTRODUCTION...4 1.1 MAIN FEATURES...4 1.2 START TO MANAGE THIS SWITCH...6

More information

Financial Services Design for High Availability

Financial Services Design for High Availability Financial Services Design for High Availability Version History Version Number Date Notes 1 March 28, 2003 This document was created. This document describes the best practice for building a multicast

More information