Fast Rerouting for IP Multicast Under Single Node Failures


Aditya Sundarrajan, Srinivasan Ramasubramanian
Department of Electrical and Computer Engineering, University of Arizona

Abstract—In this paper, we propose multicast protection trees that provide instantaneous failure recovery from single node failures. For a given node v, the multicast protection tree spans all the neighbors of v and does not include v. Thus, when node v fails, its neighbors are connected through the multicast protection tree instead of node v, and the neighbors of node v forward the traffic over this tree. The multicast protection trees are constructed a priori, without knowledge of the multicast traffic in the network. This facilitates protocol-independent single node failure recovery in multicast networks. These trees are used only while a new multicast tree is being formed after a node failure has occurred. We analyze the effectiveness of the proposed fast rerouting technique using three practical networks.

I. INTRODUCTION

Multicast routing is gaining importance with the advent of IP-based applications such as video conferencing, webcasting sessions and multi-user gaming. However, the growth of high-bandwidth applications like IPTV imposes stringent delay requirements on failure recovery. Protocols such as protocol independent multicast source-specific multicast (PIM-SSM) [1] and protocol independent multicast sparse mode (PIM-SM) [2] are being deployed within individual AS domains to enable IP multicast. These protocols form multicast trees that span the source and the set of destination nodes. Such trees are identified by a group id that corresponds to a multicast address. Multicast source discovery protocol (MSDP) [3] and multiprotocol extensions for BGP-4 (MBGP) [4] provide inter-domain multicast functionality.

All these protocols have built-in failure recovery mechanisms that ensure robust operation against failures. However, the drawback of these mechanisms is that they are reactive by nature and do not provide instantaneous failure recovery [5]. With limited buffer space at intermediate routers, the recovery often results in loss of data. To prevent this loss, recovery must take place as soon as the failure is detected. This can be enabled by proactive mechanisms that precompute backup paths, also called fast reroute paths, preventing overflow of router buffers and the subsequent loss of data.

Fast rerouting in multicast networks is complicated by the way multicast trees are constructed. To better understand this difficulty, we first look at how fast rerouting is performed in unicast networks. In unicast communication, every intermediate node on the path from the source to the destination is aware of the final destination of a packet. This allows it to reroute traffic immediately to the destination during a node failure, provided an alternate path is available [6]. In contrast, the destination address in multicast communication is the multicast group address, and neither the source nor any intermediate node is aware of the individual destinations being served. Thus, an intermediate node cannot redirect traffic to the destinations when the downstream node in the multicast tree fails. For instance, consider the multicast session (S, G) (directed tree) in Figure 1 involving source S and destinations D1 and D2. Intermediate nodes 0, 1 and 2 are only aware of the multicast state (S, G) and do not know that D1 and D2 are the destinations. When node 1 fails, node 0 cannot redirect traffic immediately to D1 and D2.
Hence, the key is to develop a mechanism that enables rerouting multicast packets to the destination(s) until a new multicast tree is constructed.

Fig. 1. Multicast tree with source node S and destination nodes D1 and D2.

One approach to recovering from failures is to construct protection trees rooted at different nodes. When a packet encounters a failure, we can reroute the packet around the failure on the protection tree. We may construct protection trees that are specifically designed for link failures and node failures separately. For example, we may construct link-independent trees and vertex-independent trees rooted at a destination that would provide link-disjoint and node-disjoint paths from every node to the destination [7]. However, in order to choose which protection tree to use, we need to classify whether the outgoing failed link is due to a simple link failure or a node failure. As node failures are relatively rare compared to link failures, the common approach is to assume a link failure and attempt a recovery. If the packet encounters subsequent link failures that share a common node, we assume that the node has failed and attempt node failure recovery. This two-step process yields a single failure recovery procedure for the network.

Recovery from single link failures for multicast traffic is fairly straightforward. If a multicast packet has to be forwarded to a neighbor and the link has failed, we may forward the packet to the neighboring node along an alternate path using encapsulation. The encapsulated header would carry as its destination one of the alias addresses of the neighboring node. The routing for the alias address takes into account the link failure encountered by the packet.
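As a minimal illustration of this alias-address tunneling, the Python sketch below wraps a multicast packet in a unicast header addressed to a neighbor's alias; the packet classes, the alias string, and the helper names are illustrative assumptions and merely stand in for a real encapsulation scheme such as GRE or IP-in-IP.

from dataclasses import dataclass

@dataclass
class MulticastPacket:
    group: str        # multicast group address, e.g. "G"
    data: bytes

@dataclass
class UnicastPacket:
    dst: str          # alias (e.g. not-via) address of the neighbor
    inner: MulticastPacket

def reroute_over_failed_link(mpkt, alias_of_neighbor):
    """On a failed link to a neighbor, tunnel the multicast packet to that
    neighbor's alias address; routing for the alias avoids the failed link."""
    return UnicastPacket(dst=alias_of_neighbor, inner=mpkt)

def receive_tunneled(upkt):
    """The neighbor terminates the tunnel and keeps forwarding the inner
    multicast packet along the original multicast tree."""
    return upkt.inner

# Hypothetical usage: a node loses the link to its neighbor and tunnels around it.
pkt = MulticastPacket(group="G", data=b"frame")
tunneled = reroute_over_failed_link(pkt, alias_of_neighbor="NVA-of-neighbor")
assert receive_tunneled(tunneled) == pkt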

Approaches such as not-via routing [8] and recovery from multiple link failures using colored trees [9] use such encapsulation techniques and protection addresses. Thus, fast rerouting for multicast traffic from single link failures is similar to recovering unicast traffic. In this paper, we develop a fast rerouting mechanism for single node failures. We develop multicast protection trees (MPTs) that connect the neighbors of a node such that any neighbor of the failed node can reroute packets over the MPT, bypassing the failed node for the duration of failure recovery.

A. Related Work

Most existing works can be broadly classified into two categories: those that modify the existing multicast protocols being used, and those that do not. We highlight some of the works in both categories.

The proposal in [10] uses tunneling to protect multicast traffic from link and node failures in networks that deploy PIM-SM or PIM-SSM. These tunnels are set up using new PIM hello messages that are sent between adjacent PIM nodes for link protection, and from the downstream node to the upstream node of the protected node for node failure recovery. However, the draft does not guarantee link or node failure recovery because it may not always be possible to discover a backup path. Another proposal [11], also relying on modifications to PIM, works by maintaining not-via paths [8] to next-next-hop (NNH) nodes in the original multicast tree. The NNH nodes are advertised in the PIM join messages, and not-via paths to these nodes are computed. Such not-via paths are created for every multicast session. This results in several not-via paths to a node, as each node could be part of multiple multicast sessions at the same time, causing excessive backup path maintenance overhead.

The proposal in [12] constructs fast reroute paths without modifying the multicast protocol. In this work, a merge point router (a multicast-only fast reroute enabled router) joins the multicast tree via two disjoint upstream paths: one through the primary upstream multicast hop (UMH), and the other through the secondary UMH to the root of the multicast tree. The merge point accepts packets from both upstream nodes and, under a failure-free scenario, forwards only those received from the primary upstream node. When the primary UMH fails, the secondary UMH becomes the primary upstream neighbor. This ensures fast recovery at the cost of redundant traffic flowing over the network, affecting network bandwidth. A more recent work [13] quickens the transition from the primary to the secondary UMH in the face of a failure. This work also overcomes the live-live nature of the previous scheme: the merge point router does not receive traffic from the secondary UMH until the primary UMH fails. However, the drawback is that failure recovery can be activated only by merge point routers and not (in most cases) by the nodes detecting the failure, leading to delays in the start of failure recovery.

It is worth noting that most existing mechanisms set up backup paths that depend on the original multicast tree, which leads to excessive maintenance overheads that we avoid in this paper. The advantages of and differences in the proposed technique are further elaborated in Section V-C.

B. Contributions

In this paper, we develop MPTs that facilitate protocol-independent single node failure recovery with minimum overhead.
We also develop a multicast failure recovery procedure that combines existing unicast link failure recovery mechanisms with the new proactive single node failure recovery mechanism using MPTs.

The rest of the paper is organized as follows. Section II describes the network model and the notations used in this paper. Section III introduces and explains the MPTs that facilitate node failure fast rerouting. The multicast failure recovery mechanism in Section IV uses these MPTs to recover from single node failures. We evaluate the performance of the proposed technique in Section V. Section VI concludes the paper.

II. NETWORK MODEL

We assume that the networks constitute a single AS domain and are at least two-vertex connected, i.e., the network is resilient to single node failures. These networks employ unicast link-state protocols like OSPF or IS-IS, giving all nodes a view of the entire network. Also, all links are bidirectional, which is an essential requirement for fast reroute mechanisms to work. The failure of any link disrupts the communication in both directions. In this paper, a node is assumed to have failed when at least two links incident on that node have failed. The notations used in this paper are listed in Table I.

TABLE I
NOTATIONS USED IN THE PAPER

  G(V, E) = G                                  Graph G with vertex set V and edge set E
  |V|                                          Total number of vertices
  |E|                                          Total number of edges
  V(·)                                         Vertices in a given set
  T(V_T, E_T) = T                              Tree T with vertex set V_T and edge set E_T
  Ne(z) = {ne_z^1, ne_z^2, ..., ne_z^(N_z)}    The N_z neighbors of node z, N_z = 1, 2, ...
  PA_v                                         Protection address of node v
  MPT_v                                        Multicast protection tree of node v

III. MULTICAST PROTECTION TREES

The multicast protection tree of a node v, denoted by MPT_v, is a tree that spans all the neighbors of node v, Ne(v), and does not contain node v. This tree facilitates communication between the neighbors of node v without involving node v. The optimal multicast protection tree of node v is therefore a Steiner tree [14] in the modified network, where node v and the links connected to it are removed. Such trees help the node detecting the failure to reroute around the failure without any knowledge of the affected multicast destinations. Note that given any multicast tree with node v as an intermediate node, MPT_v connects all the downstream neighbors of v with the upstream neighbor of v. Thus, when node v fails, the upstream node may use MPT_v to send traffic to all the downstream nodes of v.

Every node v is assigned a multicast protection address, PA_v. Node v shares PA_v with its neighbors Ne(v). If a packet is destined for PA_v, then the intended destination nodes are the neighbors of v. Note that only packets with destination address PA_v need to be routed on MPT_v. Thus, we may proactively construct the MPT for every node and populate the routing table entries for PA_v at the nodes in MPT_v.

Consider the example multicast tree shown in Figure 1, and let PIM-SSM be the multicast protocol being deployed. Figure 2(a) shows MPT_1: 0 - D1 - 2. As node 1 has only two neighbors, the tree is simply a path. Once the failure of node 1 is detected, node 0 forwards the multicast packets over MPT_1 using encapsulation, with the destination address set to PA_1. Node 2, which is the only other neighbor of node 1, is the intended destination of this encapsulated packet. Upon receiving this encapsulated packet, node 2 decapsulates it and forwards the inner multicast packet to D1 and D2. In the meantime, upon detecting the failure, PIM-SSM kick-starts its recovery procedure by sending out new join messages towards node S to form a new multicast tree. Once the new tree is established, the MPT computed before the failure may be discarded. Multicast protocols can use make-before-break mechanisms by adjusting their failure reaction timers to ensure lossless data transmission during failure recovery. This requires no enhancements to current multicast protocols.

Fig. 2. Multicast tree with source node S and destination nodes D1 and D2: (a) MPT for node 1; (b) new multicast tree.

A. Computing Multicast Protection Trees

The MPT of node v is a tree with the neighbors of v as the set of leaf nodes. As packets sent to PA_v are broadcast on MPT_v, we need to reduce the number of links in the tree in order to minimize resource consumption. Thus, the optimal MPT is the minimum Steiner tree, which is known to be an NP-hard problem [15]. We describe one algorithm we use to compute MPTs. This is a centralized algorithm that takes the entire network topology as input. Node v and the links incident on it are not considered when constructing MPT_v.

Augmented Steiner trees: The algorithm used to compute the MPTs is based on the work in [16], which relies on the shortest path heuristic. For any node v in graph G(V, E) whose MPT is being computed, the algorithm begins with a partially constructed tree T_1(V_{T_1}, E_{T_1}), where V_{T_1} = {ne_v^1}, a randomly chosen neighbor of v, and E_{T_1} is empty. In every iteration of the algorithm, i = 2, ..., N_v, we add a new path P(T_{i-1}, ne_v^i), which is the shortest path from any vertex in V_{T_{i-1}} to a non-visited neighbor of v in Ne(v) \ V_{T_{i-1}}. This results in a partially constructed tree T_i(V_{T_i}, E_{T_i}), where V_{T_i} is a subset of V. The procedure is illustrated in Algorithm 1.

Algorithm 1 Augmented Steiner trees
  i = 2
  while i <= N_v do
    T_i = T_{i-1} ∪ P(T_{i-1}, ne_v^i)
    V_{T_i} = V_{T_{i-1}} ∪ {vertices in P(T_{i-1}, ne_v^i)}
    E_{T_i} = E_{T_{i-1}} ∪ {edges in P(T_{i-1}, ne_v^i)}
    i = i + 1
  end while
  MPT_v = T_{N_v}

This algorithm provides a near-optimal solution, as each augmentation adds the shortest available path to the partially constructed tree. It has a time complexity of O(N_v |E| + N_v |V| log |V|), as each new path P(T_{i-1}, ne_v^i) in the i-th iteration is computed in O(|E| + |V| log |V|) time using Dijkstra's shortest path algorithm.
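As a concrete illustration of Algorithm 1, the following is a minimal Python sketch of the heuristic, assuming an undirected topology with unit link costs represented as an adjacency dictionary; the function names (compute_mpt, shortest_path_from_set) and the example topology inferred from Figures 1 and 2(a) are illustrative assumptions, not part of the paper's implementation.

import heapq

def shortest_path_from_set(graph, sources, target):
    """Multi-source Dijkstra (unit edge costs): node sequence of the shortest
    path from the closest node in 'sources' to 'target'."""
    dist = {s: 0 for s in sources}
    prev = {}
    heap = [(0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        if u == target:
            break
        for w in graph[u]:
            if d + 1 < dist.get(w, float("inf")):
                dist[w] = d + 1
                prev[w] = u
                heapq.heappush(heap, (d + 1, w))
    path = [target]
    while path[-1] not in sources:        # assumes target is reachable (2-connected network)
        path.append(prev[path[-1]])
    return list(reversed(path))

def compute_mpt(graph, v):
    """Augmented Steiner tree heuristic for MPT_v (Algorithm 1):
    grow a tree in G - v that spans all neighbors of v."""
    # Remove node v and its incident links from the topology.
    g = {u: {w for w in nbrs if w != v} for u, nbrs in graph.items() if u != v}
    neighbors = list(graph[v])
    tree_nodes, tree_edges = {neighbors[0]}, set()
    for ne in neighbors[1:]:
        if ne in tree_nodes:
            continue
        path = shortest_path_from_set(g, tree_nodes, ne)
        tree_nodes.update(path)
        tree_edges.update(frozenset(e) for e in zip(path, path[1:]))
    return tree_nodes, tree_edges

# Topology assumed from the example of Figures 1 and 2(a):
# S-0, 0-1, 1-2, 2-D1, 2-D2, plus the 0-D1 link used by MPT_1.
topo = {"S": {"0"}, "0": {"S", "1", "D1"}, "1": {"0", "2"},
        "2": {"1", "D1", "D2"}, "D1": {"0", "2"}, "D2": {"2"}}
print(compute_mpt(topo, "1"))

On this assumed topology the sketch yields the path 0 - D1 - 2 for MPT_1, consistent with Figure 2(a).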
IV. MULTICAST FAILURE RECOVERY

The multicast failure recovery procedure uses tunneling (GRE [17], IP-in-IP [18]) to reroute packets around the failed node. The node detecting the failed downstream node tunnels the packet over the appropriate MPT. Every destination on the tunneled path decapsulates the received encapsulated packet to obtain the original multicast data, and continues forwarding it along the original multicast tree.

As link failures are much more common than node failures in today's Internet, the recovery procedure always begins as link failure recovery and switches to node failure recovery when needed. The transition from link failure recovery to node failure recovery is enabled using link failure messages. These control messages are generated by the node detecting an outgoing failed link, and are transmitted to the protection address of the neighbor corresponding to the failed link. Link failure messages are sent only once, when the failure is detected. For instance, when node w detects a failure of the link connected to node v, it generates a link failure message destined for PA_v and sends it to node v's neighbors. If a second neighbor of v, say x, observes the failure of link x-v and receives a failure message from w, then it is an indication of multiple link failures around node v. In this work, we assume that if more than one link failure is seen around a node, then the node has failed. A neighbor of v marks node v as failed if it detects that its link to v has failed and receives a link failure message from another neighbor of v, or if it receives link failure messages from two different neighbors of v.

Existing fast rerouting mechanisms for link failures (for instance, using colored trees [9]) and MPTs enable every node in the network to autonomously switch to the failure recovery path when necessary. Every packet carries with it two bits, referred to as the failure code (FC). The FC bits help identify whether a packet has encountered a failure. A value of 0 indicates that the packet has not seen any failures. A value of 1 indicates a link failure, while a value of 2 indicates a node failure. These two bits may be taken from any reserved bits in the packet header. Nodes switching to a link/node protection tree can convey these decisions via the FC bits, allowing the downstream nodes to participate in the failure recovery procedure.
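As an illustration of the node-failure inference rule above, the sketch below (in Python, with hypothetical class and method names) simply counts independent pieces of evidence about link failures around a protected node v.

class FailureMonitor:
    """Tracks evidence of failures around a protected neighbor v.

    A node marks v as failed when it has seen two independent link failures
    incident on v: its own link to v plus one link failure message, or
    link failure messages from two different neighbors of v.
    """
    def __init__(self, v):
        self.v = v
        self.local_link_failed = False   # our own link to v is down
        self.reporters = set()           # neighbors of v that sent link failure messages

    def on_local_link_failure(self):
        self.local_link_failed = True

    def on_link_failure_message(self, sender):
        # The message was sent to PA_v, i.e. delivered over MPT_v to v's neighbors.
        self.reporters.add(sender)

    def node_failed(self):
        evidence = len(self.reporters) + (1 if self.local_link_failed else 0)
        return evidence >= 2

# Example: node x sees its link to v fail and then hears from neighbor w.
m = FailureMonitor("v")
m.on_local_link_failure()
m.on_link_failure_message("w")
print(m.node_failed())   # True -> switch from link failure to node failure recovery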

To facilitate failure recovery, every node has three types of addresses. The normal address (NA) is used when the network has no failures, while the others are used as alias addresses under certain failure conditions. We refer to the address corresponding to a failure code of 1 as the not-via address (NVA) [8], which is used when a link failure is assumed. The failure code value 2 is used to denote the multicast protection address (PA). Every node in the network selects a forwarding path based on the destination address of the packet and the FC value in the packet header. The steps to forward a packet at a node are shown in Figure 3. Steps 1 and 2 extract the FC and the destination address from the packet and compute the forwarding entry corresponding to these two fields. Steps 3 through 10 show the forwarding procedure when the incoming packet sees a link failure, and steps 11 to 24 show the forwarding procedure when a node failure has occurred.

1: Extract FC and destination address from received packet.
2: Compute the outgoing link(s) corresponding to the destination address and FC in the packet. For each outgoing link do:
3: if FC = 0 then
4:   if outgoing link has failed then
5:     Single link failure scenario
6:     Send link failure message to all neighbors of the downstream node adjacent to the failed link.
7:     Encapsulate packet into a unicast packet with the NVA of the adjacent node as the destination address and set FC to 1.
8:     Forward packet along link failure fast reroute paths.
9:   end if
10: end if
11: if FC = 1 then
12:   if outgoing link incident on destination node has failed then
13:     Single node failure scenario
14:     Send link failure message to all neighbors of the failed node.
15:     Decapsulate received packet.
16:     Store a copy, and forward the original multicast packet according to routing table entries.
17:     Encapsulate the stored packet with the PA of the failed node as the destination address and set FC to 2.
18:     Forward the encapsulated packet to the PA of the failed node.
19:   end if
20: end if
21: if FC = 2 then
22:   Node failure has occurred
23:   Forward packet to the PA of the failed node.
24: end if

Fig. 3. Steps to forward a packet at a node.

Once the information about link or node failures from the link failure messages is recorded at a node, the node takes into account the number of messages received, in addition to the received FC value, to decide which forwarding path to use. For instance, if a node is aware that the downstream node on the original multicast tree has failed, it does not have to send another link failure message. It immediately uses the corresponding MPT to forward traffic during failure recovery. The entire multicast failure recovery mechanism is explained in the following subsection using the NSFNET topology. For this example, we use the unicast link fast reroute paths constructed in [9] using colored trees.
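The following Python sketch mirrors the per-packet decision of Fig. 3; the packet fields, address maps, and returned action tuples are simplified placeholders for what a router would do on real interfaces, and it assumes that the routing lookup of steps 1 and 2 has already produced the set of outgoing links.

from dataclasses import dataclass

# Failure-code values carried in two reserved header bits.
FC_NONE, FC_LINK, FC_NODE = 0, 1, 2

@dataclass
class Packet:
    dst: str         # NA, NVA or PA, depending on the failure state
    fc: int          # failure code
    payload: object  # inner multicast packet when encapsulated

def forward(pkt, out_links, failed_links, nva, pa):
    """Return forwarding actions for one packet, following Fig. 3.

    out_links    : outgoing links (neighbor names) for pkt.dst and pkt.fc
    failed_links : neighbors whose link from this node has failed
    nva, pa      : maps from a neighbor to its not-via / protection address
    All names here are illustrative, not an actual router API.
    """
    actions = []
    for nbr in out_links:                                   # steps 1-2 done by the caller
        if pkt.fc == FC_NONE and nbr in failed_links:       # steps 3-10: assume link failure
            actions.append(("link-failure-msg", pa[nbr]))
            actions.append(("send-on-link-frr", Packet(nva[nbr], FC_LINK, pkt)))
        elif pkt.fc == FC_LINK and nbr in failed_links and pkt.dst == nva[nbr]:
            # steps 11-20: second failure toward the same node -> node failure
            inner = pkt.payload
            actions.append(("link-failure-msg", pa[nbr]))
            actions.append(("forward-on-multicast-tree", inner))
            actions.append(("send-on-mpt", Packet(pa[nbr], FC_NODE, inner)))
        elif pkt.fc == FC_NODE:                             # steps 21-24: continue on the MPT
            actions.append(("send-on-mpt", pkt))
        else:
            actions.append(("forward", nbr, pkt))           # no failure on this link
    return actions

# A packet that has already seen one link failure now meets a second failed link.
mcast = Packet("G1", FC_NONE, "data")
pkt = Packet("NVA-v", FC_LINK, mcast)
print(forward(pkt, out_links=["v"], failed_links={"v"},
              nva={"v": "NVA-v"}, pa={"v": "PA-v"}))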
A. Example

Consider the graph in Figure 4(a), which is an alternate version of the NSFNET topology used for easier analysis. Node 1 is the designated source router, and two other nodes are the designated destination routers. The directed tree in black in Figure 4(b) is the original multicast tree. Under a failure-free scenario, the NA of a node is used and the FCs in all transmitted packets are set to 0. When a node detects a failed outgoing link towards its downstream neighbor (Figure 4(c)), it treats it as a link failure scenario and sends a link failure message, over the MPT of that neighbor, to the neighbor's other adjacent nodes. At this point, those nodes know that one link incident on the downstream neighbor has failed, and they will send individual link failure messages to its PA if their own links to that node fail too. The node detecting the failure then encapsulates the multicast packet into a unicast packet, sets FC to 1, and sends it to the NVA of the downstream neighbor along the blue outgoing edge, as the red outgoing edge was the failed link.

Fig. 4. Multicast failure recovery example: (a) modified NSFNET topology; (b) example multicast tree; (c) first link failure; (d) second link failure.

Under a single link failure scenario, the downstream node would decapsulate the received not-via packet, recognize the contained multicast packet, and forward it accordingly. For a single node failure scenario, a node on the blue path detects that its outgoing link towards the protected node has also failed (Figure 4(d)) and sends a link failure message to that node's neighbors over its MPT. At this point, the neighbors know that the node has failed. The node detecting this second failure then decapsulates the received unicast packet, stores a copy of the original multicast packet, re-encapsulates it with the destination address set to the PA of the failed node and FC set to 2, and sends the packet across the MPT. Once the upstream node on the original multicast tree knows that its downstream node has failed, it sends the multicast packet directly to the PA over the MPT for the duration of node failure recovery.

V. PERFORMANCE EVALUATION

Any multicast protocol in today's Internet establishes its own multicast tree to route traffic from the source to the destinations. During a link/node failure, the fast reroute procedure is instantaneously activated by the node detecting this failure. Fast reroute mechanisms guarantee lossless transmission at the cost of maintaining extra information at participating nodes.

Fig. 5. Network topologies used for performance evaluation: (a) NSFNET; (b) NJLATA; (c) Modified NJLATA; (d) ARPANET.

TABLE II
MPT OVERHEAD DURING SINGLE NODE FAILURE RECOVERY

  Topology                                       NSFNET    NJLATA    Modified NJLATA    ARPANET
  (Number of vertices, number of edges)          (1,21)    (11,2)    (11,17)            (20,2)
  Average # of routing table entries per node    .         .1        .1                 .
  Maximum # of routing table entries per node    10        7                            10
  Overhead due to unnecessary flooding           1.        1.1       1.                 1.0

TABLE III
NETWORK UTILIZATION

  Topology                                           NSFNET    NJLATA    Modified NJLATA    ARPANET
  Average path length before node failure            2.        2.2       2.1                .1
  Average path length when MPTs are used             .         2.1       .                  .2
  Average path length after node failure recovery    .7        2.        .                  .

The following subsections look at the overhead incurred in order to determine the feasibility of this approach. The overheads are computed over commonly used topologies: the NSFNET (Figure 5(a)), NJLATA (Figure 5(b)), Modified NJLATA (NJLATA with a few links removed, Figure 5(c)) and ARPANET (Figure 5(d)). Modified NJLATA has more nodes with degree 2 in comparison to the regular NJLATA. The modified topologies are used to better understand the overheads incurred while using MPTs. All the networks considered are two-vertex connected. Although the recovery mechanism always starts off as link failure recovery, we only calculate the overheads incurred during single node fast rerouting, due to space constraints. We also calculate the average path length between any source and destination in the multicast tree prior to a failure, during node failure recovery when the MPTs are used, and after the source has switched to the new multicast tree, in all these networks. This is indicative of the additional network resources used by the MPTs during failure recovery. We finally discuss the advantages of the proposed technique over the existing mechanisms in the literature.

A. MPT overhead during single node failure recovery

The overhead of using MPTs during node failure recovery is due to: (1) maintaining the MPT infrastructure, and (2) flooding fast reroute packets to downstream neighbors that are not part of the original multicast tree.

Overhead due to the MPT infrastructure: In order to route over the MPTs, every node maintains additional routing entries. These correspond to the MPTs that the node is a part of. In Table II, we report the maximum and average number of additional routing entries that have to be maintained at each node in all four topologies. NSFNET and ARPANET are sparsely connected with low average node degree; hence each node is part of multiple MPTs. The NJLATA, however, is a densely connected network with higher node degree compared to NSFNET and ARPANET. Thus, the number of additional routing entries per node in the NJLATA network is smaller. Many real ISP networks are approximately three-vertex connected [19], and this overhead should be tolerable.

Flooding overhead due to non-participating downstream neighbor recipients: In the original multicast tree, not all the downstream neighbors of the failed node may be part of the affected portion of the ongoing multicast session (such as one of the nodes in Figure 4(b)). This leads to unnecessary overhead while routing multicast traffic to non-participating downstream nodes that belong to the MPT of the failed node. This overhead arises because the participating downstream neighbors are unknown to the upstream neighbor of the failed node (the node detecting the failure).
We evaluate the extent of this overhead on the four topologies in Figure 5. The overhead measures the ratio of the total number of links used (the total number of edges in the MPT) to the number of links required to connect the participating neighbors, over all MPTs in the network. This ratio is calculated for the entire topology by considering all combinations of destinations with a single source over all MPTs in the network.
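One way to compute this ratio, sketched below in Python under the assumption that each MPT is available as an undirected tree in adjacency-dictionary form, is to prune the MPT to the minimal subtree connecting a chosen set of participating neighbors and compare edge counts; the function names are illustrative.

def minimal_subtree_edges(tree, terminals):
    """Number of edges in the smallest subtree of 'tree' (adjacency dict of an
    undirected tree) that connects all nodes in 'terminals'."""
    # Repeatedly prune leaves that are not terminals.
    adj = {u: set(nbrs) for u, nbrs in tree.items()}
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) == 1 and u not in terminals:
                (w,) = adj[u]
                adj[w].discard(u)
                del adj[u]
                changed = True
    return sum(len(nbrs) for nbrs in adj.values()) // 2

def flooding_overhead(mpt, participating):
    """Ratio of all MPT edges to the edges needed to reach the participating
    neighbors (>= 1; a value of 1 means no unnecessary flooding)."""
    total = sum(len(nbrs) for nbrs in mpt.values()) // 2
    return total / minimal_subtree_edges(mpt, participating)

# MPT_1 from Figure 2(a), the path 0 - D1 - 2; with both end neighbors
# participating, every MPT edge is needed and the ratio is 1.0.
mpt1 = {"0": {"D1"}, "D1": {"0", "2"}, "2": {"D1"}}
print(flooding_overhead(mpt1, participating={"0", "2"}))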

The ratio computed here is a conservative estimate of the flooding overhead, as we do not disregard non-participating neighbors of the failed node that could be on the path connecting the participating neighbors. From Table II, for the NSFNET topology, where node degrees are low, the flooding overhead is low on average. This is representative of networks in which all nodes have low degrees. The NJLATA network contains a high-degree node, which leads to larger overheads when the MPT is used to connect only a few neighbors that are in the downstream section of the original multicast tree. This overhead is incurred only while the new multicast tree is being formed and should not affect the network performance. Modified NJLATA benefits by approximately 2% over NJLATA as it has slightly lower connectivity at two nodes, including node 7. The ARPANET topology has characteristics similar to the NSFNET network owing to similar neighborhood sizes at each node.

B. Network utilization

We measure the impact of MPTs on network resource utilization by comparing the average path lengths for all source-destination pairs in the multicast network prior to a node failure, during node failure recovery when the MPTs are used, and after node failure recovery when a new multicast tree has been established. From Table III, we see that the path lengths during node failure recovery are only 1% higher on average than those after failure recovery, across the four topologies. This is because, in most cases, the recovery path from the upstream to the downstream node of the failed node on the MPT corresponds to the second shortest path between them.

C. Benefits of the proposed technique

In comparison to the existing mechanisms, the use of MPTs provides the following advantages: (1) The MPTs are constructed a priori, independent of the original multicast tree, unlike most existing schemes that construct backup paths only after the original multicast tree is set up. This enables protocol-independent node failure recovery. (2) Every node has only one protection tree that can be used by any multicast session, unlike existing schemes that construct multiple session-specific backup paths for the same node. Therefore, using MPTs significantly reduces the backup path maintenance cost incurred in other approaches. (3) Every node can independently initiate node failure recovery, without requiring the deployment of special nodes to help recover multicast traffic. All these factors result in guaranteeing instant node failure recovery for multicast traffic.

VI. CONCLUSION

In this paper, we develop multicast protection trees (MPTs) that provide single node failure recovery in two-vertex connected networks. These trees are used only while the new multicast tree is being formed. The trees can be used with any multicast protocol without modifying its operation. Based on experimental evaluation, we observe that the MPTs increase the routing table size modestly on average, and the path length between any source and destination on the multicast tree by 1% on average, in the networks considered for performance evaluation.

ACKNOWLEDGMENT

This research is funded by the National Science Foundation under grant CNS-111727.

REFERENCES

[1] S. Bhattacharya, "An overview of source-specific multicast (SSM)," RFC 3569, Jul. 2003.
[2] B. Fenner, M. Handley, H. Holbrook, and I. Kouvelas, "Protocol independent multicast - sparse mode (PIM-SM)," RFC 4601, Aug. 2006.
[3] B. Fenner and D. Meyer, "Multicast source discovery protocol (MSDP)," RFC 3618, Oct. 2003.
[4] T. Bates, R. Chandra, D. Katz, and Y. Rekhter, "Multiprotocol extensions for BGP-4," RFC 4760, Jan. 2007.
[5] X. Wang, C. Yu, H. Schulzrinne, P. A. Stirpe, and W. Wu, "IP multicast fault recovery in PIM over OSPF," in Proc. ICNP, 2000.
[6] S. Kini, S. Ramasubramanian, A. Kvalbein, and A. Hansen, "Fast recovery from dual-link or single-node failures in IP networks using tunneling," IEEE/ACM Transactions on Networking, vol. 18, Dec. 2010.
[7] G. Xue, L. Chen, and K. Thulasiraman, "Quality-of-service and quality-of-protection issues in preplanned recovery schemes using redundant trees," IEEE Journal on Selected Areas in Communications, vol. 21, Oct. 2003.
[8] S. Bryant, S. Previdi, and M. Shand, "A framework for IP and MPLS fast reroute using not-via addresses," Internet-Draft draft-ietf-rtgwg-ipfrr-notvia-addresses-10, Dec. 2012.
[9] G. Jayavelu, S. Ramasubramanian, and O. Younis, "Maintaining colored trees for disjoint multipath routing under node failures," IEEE/ACM Transactions on Networking, vol. 17, no. 1, Feb. 2009.
[10] L. Wei, A. Karan, N. Shen, Y. Cai, and M. Napierala, "Tunnel based multicast fast reroute (TMFRR) extensions to PIM," Internet-Draft draft-lwei-pim-tmfrr-00, Oct. 2009.
[11] N. Wang and B. Dong, "Fast failure recovery for reliable multicast-based content delivery," in International Conference on Network and Service Management (CNSM), Oct. 2010.
[12] A. Karan, S. Filsfils, D. Farinacci, B. Decraene, N. Leymann, U. Joorde, and W. Henderickx, "Multicast only fast re-route," Internet-Draft draft-karan-mofrr-02, Mar. 2012.
[13] I. Wijnands, A. Csaszar, and J. Tantsura, "Tree notification to improve multicast fast reroute," Internet-Draft draft-wijnands-rtgwg-mcast-frr-tn-00, Oct. 2012.
[14] F. K. Hwang, D. S. Richards, and P. Winter, The Steiner Tree Problem (Annals of Discrete Mathematics). North-Holland, 1992.
[15] R. M. Karp, "Reducibility among combinatorial problems," in Complexity of Computer Computations, 1972, pp. 85-103.
[16] H. Takahashi and A. Matsuyama, "An approximate solution for the Steiner tree problem in graphs," Math. Japonica, vol. 24, pp. 573-577, 1980.
[17] S. Hanks, D. Farinacci, and P. Traina, "Generic routing encapsulation (GRE)," RFC 1701, Oct. 1994.
[18] C. Perkins, "IP encapsulation within IP," RFC 2003, Oct. 1996.
[19] N. Spring, R. Mahajan, D. Wetherall, and T. Anderson, "Measuring ISP topologies with Rocketfuel," IEEE/ACM Transactions on Networking, vol. 12, no. 1, pp. 2-16, Feb. 2004.