Fault management


Fault management
Andrea Bianco, Telecommunication Network Group
firstname.lastname@polito.it - http://www.telematica.polito.it/
Network Management and QoS Provisioning

Acknowledgements
Inspired by Davide Cuda (Politecnico di Torino, now at France Telecom), Luca Valcarenghi (Scuola Superiore Sant'Anna, Pisa) and Achille Pattavina (Politecnico di Milano).

The impact of network failures
1 cable x 200 fibers/cable x 160 λ/fiber x 10 Gb/s/λ = 320 Tb/s, i.e., 5 billion telephone lines (@ 64 kb/s), or 60,000 full CDs per second. A single cable cut can therefore lead to a dramatic amount of lost traffic, which may translate into revenue losses.

Some failure rates
Statistics for the year 2000 for an optical cable network of 30,359 km (source: Sirti). Hard failures (service interruption):
- Damage due to third parties: 19 failures (61%)
- Rodents: 6 (19%)
- Malice: 3 (10%)
- Materials degradation: 1 (3%)
- Natural events: 1 (3%)
- Installation defects: 1 (3%)
- Total: 31

IP router failures (some data):
- route processor, line card: 70,000-150,000 hours MTBF (Mean Time Between Failures)
- software: 10,000-100,000 hours MTBF
Note: 1 year is about 10,000 hours.

Terminology
Many different definitions exist.
- Fault/Failure: a catastrophic event that causes the disruption of communications (link failure, interface failure, node failure).
- Fault tolerance: avoiding service failures in the presence of faults. Network fault tolerance is a measure of the number of failures the network can sustain before a disconnection occurs.
- Resilience: network resilience is the maximum number of node failures that can be sustained while the network remains connected with a probability (1-p). The general aim of resilience is to make network failures transparent to users: if a failure affects a circuit, it is desirable to reconfigure that circuit as quickly as possible with no information loss.
- Network integrity: the ability of a network to provide the desired QoS to its services, not only in normal (i.e., failure-free) network conditions, but also when network congestion or network failure occurs.

Terminology (continued)
- Survivability: network survivability is the fraction of a quantifiable feature x that remains after an instance of the disaster type under consideration has happened. x can be the traffic volume, the number of connected subscribers, the network operator's revenue, the grade of service, or other network characteristics related to the remaining goodness of the network. Survivability is a subset of integrity: the ability of a network to recover the traffic in the event of a failure, with few or no consequences for the user. A network is referred to as survivable if it provides some ability to recover ongoing connections disrupted by the catastrophic failure of a network component, such as a line interruption or node failure. "Network survivability is the set of capabilities that allows a network to restore affected traffic in the event of a failure" [RFC 4427].
- Reliability: the probability of a network element (e.g., a node or a link) to be fully operational during a certain time frame. R(t) is the probability that the system works correctly in the period of time t under defined environmental conditions.
- Availability: the instantaneous counterpart of reliability. Network element availability is the probability of a network element to be operational at one particular point in time t, i.e., the probability that an item will be able to perform its designed functions at the stated performance level, conditions and environment when called upon to do so. It can be computed as reliability_time / (reliability_time + recovery_time). There are different ways to measure availability (per port, bandwidth, blocking probability).

Recovery cycle [RFC 4427, RFC 4428]
(Figure: recovery-cycle timeline, from Vasseur, Pickavet, Demeester, "Network Recovery", Morgan Kaufmann. Starting from the failure, the network goes through failure detection, failure localization and isolation, failure correlation, failure notification, and failure recovery; the corresponding intervals are the detection time, the localization and isolation time, the correlation time, the hold-off time, the notification time and the recovery switching time, whose sum is the total recovery time. After reversion, a Wait-to-Restore time elapses before the network is operational again on the working path.)

Fault management phases [RFC 4427, RFC 4428]
- Failure detection: the action of detecting the impairment (defect or performance degradation) as a defect condition and the consequent activation of a Signal Fail (SF) or Signal Degrade (SD) trigger to the control plane. Thus, failure detection is the only phase that cannot be achieved by the control plane alone.
- Failure localization (and isolation): failure localization provides information about the location (and thus the identity) of the transport plane entity that causes the LSP(s)/span(s) failure.
- Failure correlation: a single failure event (such as a span failure) can cause multiple failure conditions (such as individual LSP failures) to be reported. These can be grouped (i.e., correlated) to reduce the number of notified failures.
- Failure notification: used 1) to inform intermediate nodes that an LSP/span failure has occurred and has been detected, and 2) to inform the recovery deciding entities (which can correspond to any intermediate or end point of the failed LSP/span) that the corresponding LSP/span is not available.
- Recovery: recovery of the failure through a recovery scheme.
- Reversion (normalization): a revertive recovery operation is a recovery switching operation where the traffic returns to (or remains on) the working LSP/span when the switch-over requests are terminated (i.e., when the working LSP/span has recovered from the failure).
Timing [RFC 4427, RFC 4428]
- Detection time: the time between the occurrence of the fault or degradation and its detection. This is a rather theoretical quantity because, in practice, it is difficult to measure.
- Localization time: the time necessary to localize the failure.
- Correlation time: the time between the detection (localization) of the fault or degradation and the reporting of the Signal Fail (SF) or Signal Degrade (SD). This time is typically used in correlating related failures or degradations.
- Hold-off time: the time between the reporting of SF/SD and the initialization of the recovery switching operation by the deciding entities. Useful when multiple layers of recovery are being used.
- Notification time: the time between the reporting of SF/SD and the reception of the indication of this event by the entities that decide on the recovery switching operation(s).
- Recovery switching time: the time between the initialization of the recovery switching operation and the moment the normal traffic is selected from the recovery LSP/span.
- Total recovery time: the sum of the detection, localization, correlation, (hold-off,) notification, and recovery switching times.
- Wait-to-Restore time: the period of time that must elapse after a recovered fault before an LSP/span can be used again to transport normal traffic and/or to select normal traffic from.

Fault types and availability

Availability  N-Nines  Downtime (min/year)
99%           2-Nines  ~5000
99.9%         3-Nines  ~500
99.99%        4-Nines  ~50
99.999%       5-Nines  ~5
99.9999%      6-Nines  < 1

Bellcore statistics:
- Equipment MTTR: 2 hours
- Cable-cut MTTR: 12 hours
- Cable-cut rate: 1-5 cuts/year/1000 km
- Tx failure rate: 10867 FIT
- Rx failure rate: 4311 FIT
MTTR: Mean Time To Repair. FIT: Failures In Time (number of failures in 10^9 hours).
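The nines table and the availability definition above are direct arithmetic: A = reliability_time / (reliability_time + recovery_time), i.e., MTBF / (MTBF + MTTR), and yearly downtime = (1 - A) x 525,600 minutes. A minimal sketch (the MTBF/MTTR values in the last line are illustrative):

```python
# Availability arithmetic: downtime per year for N-nines availability,
# and steady-state availability from MTBF/MTTR.

MIN_PER_YEAR = 365 * 24 * 60  # 525600 minutes

def downtime_min_per_year(availability: float) -> float:
    """Expected unavailable minutes per year."""
    return (1.0 - availability) * MIN_PER_YEAR

def availability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state availability = uptime / (uptime + repair time)."""
    return mtbf_h / (mtbf_h + mttr_h)

for nines in range(2, 7):
    a = 1 - 10.0 ** (-nines)
    print(f"{nines}-nines ({a:.6%}): ~{downtime_min_per_year(a):.1f} min/year")

# e.g. a line card with 100,000 h MTBF and the 2 h Bellcore equipment MTTR:
print(f"line card availability: {availability(100_000, 2):.6f}")
```

Note how 2-nines gives ~5256 min/year, matching the "~5000 min/year" row of the table.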

Failure types
- Components: links, nodes, WDM channels, active components, software.
- Human errors: fiber cuts (fiber inside oil/gas pipelines is less likely to be cut).
- Systems: entire central offices can fail due to catastrophic events.
The most frequent fault is the fiber cut; a duct failure cuts several fibers at once. Node failures are rare; a port failure is treated as a fiber cut.

Survivability: at the physical layer?
Survivability can be provided at different network layers. For example, Ethernet switches may rebuild the spanning tree after a link failure, and IP routers can fight a link failure by excluding the failed route from their routing tables. This implies high delays (tens of seconds after the failure is detected, until the algorithms converge), during which packets can be routed incorrectly, and lots of signaling. Handling faults at the physical layer is faster and protocol agnostic. Technologies: SDH/SONET networks (point-to-point or ring based) and WDM networks (mesh based), which consider a primary (working) path and a secondary (protection) path.

Survivability
Requires exploiting redundant capacity to deal with failures. Often classified into:
- Restoration (also named dynamic resilience): redundant capacity is not reserved; affected traffic is re-routed by on-line processing; reaction time in the order of seconds.
- Protection (also named static resilience): redundant capacity is pre-allocated; automatic re-routing upon failure; reaction time in the order of milliseconds.
Normally one single failure at a time is considered: the overall network is partitioned into sub-networks, with one failure per sub-network, since MTTR << MTBF. The focus is mostly on links/paths; node survivability is guaranteed through backup provisioning (1+1 scheme, see later).

Protection schemes
(Figure: 1+1, 1:1 and 1:N schemes, showing working and backup channels/fibers. In 1+1 a splitter duplicates the signal onto both channels, though a switch can be used as well; in 1:1 and 1:N the backup channel/fiber can carry low priority traffic.)

1+1 protection
Two nodes are connected to each other with two or more links. A copy of the signal is transmitted both on the working (primary) and on the protection (backup, secondary) channel. The receiver can select which copy to accept, e.g., based on signal quality.

1:1 protection
Data are sent on the working channel only. The protection channel is reserved for future use in case of failure, and can be used for low priority data traffic. Only when a failure occurs on the working channel does the protection channel carry the traffic from the failed working channel.

m:n protection
n working links are protected using m backup links (an m:n protected group). Backup channels can be used to carry low priority traffic.

Protection techniques compared
- 1+1: the most costly, the fastest.
- 1:1: higher efficiency in the use of the protection capacity, but higher restoration time, since actions (and signalling) are needed to switch traffic from the working channel to the protection one.
- m:n: requires less resources, but might not be able to restore all the traffic supported before the fault. It exploits the non-100% utilization of working links: how many working links may fail at the same time?

Fault management in SONET/SDH
- Dedicated vs. shared: a working connection is assigned dedicated or shared protection bandwidth; 1+1 is dedicated, 1:n is shared.
- Revertive (non-revertive): after the failure is fixed, traffic is automatically, or manually, (not) switched back to the working path. Shared protection schemes are usually revertive.
- Unidirectional connection: data transmitted both on the working (primary) and the backup (secondary) path.
- Bidirectional connection: data may need to be switched from the working path to the backup path even if a fault affects only one of the primary connections.
- Automatic Protection Switching (APS): signaling protocol to detect faults.
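The m:n trade-off above can be made concrete with a toy model: on a working-channel failure the traffic grabs a free backup (preempting any low-priority traffic there), and once all m backups are busy, further failures cannot be restored. The class and method names are hypothetical, purely for illustration:

```python
class MNProtectionGroup:
    """Toy m:n protection group: n working channels share m backups."""

    def __init__(self, n_working: int, m_backup: int):
        self.working = {i: "up" for i in range(n_working)}
        # None = backup idle (free to carry low-priority traffic)
        self.backup = [None] * m_backup

    def fail(self, w: int) -> bool:
        """Working channel w fails; return True if traffic was restored."""
        self.working[w] = "down"
        for b, user in enumerate(self.backup):
            if user is None:            # free backup: preempt any low-priority
                self.backup[b] = w      # traffic and take over
                return True
        return False                    # all backups busy: traffic lost

    def repair(self, w: int) -> None:
        """Revertive operation: traffic moves back, backup is freed."""
        self.working[w] = "up"
        for b, user in enumerate(self.backup):
            if user == w:
                self.backup[b] = None

g = MNProtectionGroup(n_working=4, m_backup=2)   # a 2:4 protected group
print(g.fail(0), g.fail(1), g.fail(2))  # third failure is not restored
```

With m = n = 1 this degenerates to 1:1 protection; the "how many working links may fail at the same time?" question is exactly the choice of m.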

Fault management in SONET/SDH: 1+1 APS and 1:1 APS
(Figures: 1+1 APS and 1:1 APS arrangements between two nodes, with TX = transmitter, RX = receiver, BR = bridge, SW = switch; in 1:1 APS a dedicated APS channel coordinates the two ends.)

Fault management in SONET/SDH: self-healing rings
Much of the carrier infrastructure today uses SONET/SDH rings: multiple nodes interconnected by a single physical ring. Rings show interesting restoration properties: a 2-connected topology provides disjoint paths between any couple of nodes. Two architectures exist: the Unidirectional Line-Switched Ring (ULSR) and the Bidirectional Line-Switched Ring (BLSR).

Current SDH architecture: ring protection
Multiple rings over DWDM: a regional ring (BLSR), intra-regional rings (BLSR) and access rings (ULSR).

Unidirectional Line-Switched Ring: failure-free state
Fiber 1 carries the working path, fiber 2 the protecting path. Traffic is sent simultaneously on the working and the protecting path (1+1 protection): node A bridges the signal onto both fibers, and the receiving node performs path selection (the same happens in the B-to-A direction).

Unidirectional Line-Switched Ring: failure state
Upon a failure on the A-B working path, B switches its receiver from the working path to the protecting path; the B-to-A direction remains unaffected. Detecting the failure is non-trivial.

ULSR properties
Easy to implement: no signaling is needed and failure recovery is fast, which makes ULSRs popular in lower-speed local exchange and access networks. The delay difference between the working and the backup path affects the restoration time (max 60 ms according to standards). Not efficient: 50% of the capacity is reserved for protection purposes, with no spatial reuse of wavelengths and no sharing of the resources dedicated to protection.

Bidirectional Line-Switched Ring
4-fiber version (BLSR/4): two fibers carry working traffic, with shortest path routing between nodes, and two fibers are reserved for protection.

Bidirectional Line-Switched Ring: span switching
Upon the failure of a working path (e.g., A-B), traffic is routed through the protection fiber on the same span (4-fiber BLSR).

Bidirectional Line-Switched Ring: ring switching
Upon a node failure or a fiber bundle failure (working and protection fibers cut together), traffic between the affected nodes is routed around the ring on the protection fibers (4-fiber BLSR).

Bidirectional Line-Switched Ring: BLSR/2
2-fiber version (BLSR/2): in each fiber, half of the capacity is reserved for protection purposes (shared protection bandwidth). Span switching is not possible.

BLSR properties
1:n and m:n protection are possible in a BLSR, since protection bandwidth can be shared by spatially separated paths. BLSRs are more capacity efficient than ULSRs, but also more complex (extensive signaling and protection mechanisms).

Fault management in WDM mesh networks
Survivability techniques are classified into preplanned protection and provisioning (restoration). Preplanned protection can be path protection or link protection, dedicated or shared (1+1, 1:1); mesh and ring schemes include ring loopback, generalized ring loopback, node cover, ring cover and p-cycles. Provisioning includes link restoration and path restoration.

Primary and backup paths must be link disjoint.
- Preplanned protection: resources dedicated to protection are allocated each time a new lightpath is set up. Simple and fast, but resource allocation is rigid.
- Provisioning: the network is generically over-provisioned with respect to the real traffic it needs to support, and backup connections are allocated only if primary connections fail. Provisioning is more complex and slower than preplanned protection, but it can support multiple failures.

Path, sub-path and link protection
- Path protection: traffic is rerouted through a backup route; primary and backup routes must be link-disjoint.
- Sub-path protection: the path is divided into a sequence of segments and each segment is protected separately.
- Link protection: traffic is rerouted around the failed link.
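Link protection, as defined above, is just a detour between the failed link's endpoints, computed in the graph with that link removed. A small sketch on an assumed 6-node mesh (the adjacency list is illustrative, not a topology taken from the slides):

```python
import heapq

def shortest_path(adj, src, dst, skip=frozenset()):
    """Dijkstra, ignoring any edge (in either direction) listed in `skip`."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj[u]:
            if (u, v) in skip or (v, u) in skip:
                continue
            if v not in seen:
                heapq.heappush(pq, (cost + w, v, path + [v]))
    return float("inf"), None

# Illustrative mesh (undirected, unit costs)
adj = {1: [(2, 1), (3, 1)], 2: [(1, 1), (4, 1)], 3: [(1, 1), (4, 1), (5, 1)],
       4: [(2, 1), (3, 1), (6, 1)], 5: [(3, 1), (6, 1)], 6: [(4, 1), (5, 1)]}

# Link protection for link (3, 4): shortest path 3 -> 4 that avoids it
cost, detour = shortest_path(adj, 3, 4, skip={(3, 4)})
print(cost, detour)  # a cost-3 detour, e.g. [3, 1, 2, 4]
```

Path protection would instead rerun the search end-to-end with all links of the primary path in `skip`; sub-path protection applies the same idea per segment.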

Ring cover
Mesh networks scale well (physically, with lower spare resources) but have large restoration times; rings are easy to manage and fast to reconfigure, but capacity inefficient (~100% spare capacity). A ring cover covers the physical mesh network with logical rings: nodes are still physical nodes, while links are composed of one or multiple wavelength channels. Logical rings can behave as ULSR/BLSR or can be used for protection only.

Ring cover: stacked rings
If rings behave as ULSR/BLSR, end-to-end traffic is either intra-ring or inter-ring; inter-ring traffic is protected in each segment by a different ring (e.g., A-B traffic routed through rings R5 and R4). In the horizontal dimension, adjacent rings provide geographic coverage; in the vertical dimension, rings are stacked on top of each other to provide additional capacity, usually achieved by WDM. 100% of spare capacity is required.

Protection cycles (p-cycles)
Rings are used for protection purposes only. Find a set of directed cycles covering all links in the network: if a link goes down, there is a cycle to recover the traffic of the failed link. Given the set of all covering cycles, ILP techniques can be used to achieve different objectives (e.g., minimize reconfiguration time, minimize the extra capacity needed for protection). p-cycles can protect both links on the ring and chordal (straddling) links, i.e., links having p-cycle nodes as endpoints but not belonging to the cycle. A failed on-cycle link is protected by 1 path (the rest of the cycle); a failed straddling link is protected by 2 paths (the two arcs of the cycle between its endpoints).

p-cycles vs. SONET rings:
- Modularity: one spare span per link for p-cycles; OC-n modularity for rings.
- Restoration yield: up to 2 useful restoration paths per p-cycle; 1 restoration path unit per ring (or protection channel).
- Flexibility: p-cycles contribute to the restoration of working links on the cycle and of all straddling links; rings only protect working links in spans contained within the ring.
- Routing and provisioning of working paths: with p-cycles it proceeds without regard to structures formed in the sparing (protection) layer; with rings, path routing must be a succession of intra-ring and inter-ring traversals.
- Total network redundancy: essentially just that of a span-restorable mesh network for p-cycles; over 100% investment in spare capacity for rings (can rise up to 300% depending on topology and traffic).
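The on-cycle vs. straddling distinction can be checked mechanically: for a failed link whose endpoints both lie on a p-cycle, the cycle offers one restoration path if the link is on the cycle and two if it straddles it. A sketch under an assumed list-of-nodes cycle representation:

```python
def restoration_paths(cycle, failed):
    """Restoration paths a p-cycle offers for a failed link (u, v).

    `cycle` is a list of nodes, e.g. [1, 2, 4, 6, 5, 3] for the cycle
    1-2-4-6-5-3-1. Returns [] if the p-cycle cannot protect the link.
    """
    u, v = failed
    if u not in cycle or v not in cycle:
        return []                       # an endpoint is not on the cycle
    n = len(cycle)
    i, j = sorted((cycle.index(u), cycle.index(v)))
    arc1 = cycle[i:j + 1]               # one arc of the cycle between u and v
    arc2 = cycle[j:] + cycle[:i + 1]    # the other arc (wrapping around)
    if (j - i) % n in (1, n - 1):       # endpoints adjacent: on-cycle link
        # one restoration path: the rest of the cycle (the longer arc)
        return [arc2] if len(arc1) == 2 else [arc1]
    return [arc1, arc2]                 # straddling link: two paths

cyc = [1, 2, 4, 6, 5, 3]                    # p-cycle 1-2-4-6-5-3-1
print(len(restoration_paths(cyc, (1, 2))))  # on-cycle link -> 1 path
print(len(restoration_paths(cyc, (2, 3))))  # straddling link -> 2 paths
```

An ILP formulation would enumerate candidate cycles and use such counts to weigh how much working capacity each cycle can protect per unit of spare capacity.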

Algorithm to compute disjoint paths: the 2-step approach
1st step: the first path is computed using a shortest path algorithm. 2nd step: remove its edges from the original graph and compute another path using a shortest path algorithm. This does not work: the shortest path may use edges that both disjoint paths would otherwise need, so the second step can fail even when two link-disjoint paths exist (see the 6-node counterexample in the figure).

Algorithm to compute disjoint paths
Given a graph G, a source node s and a destination node d, a couple of disjoint paths is computed as follows:
1. Compute the shortest path tree rooted at node s. Let d(s,u) denote the shortest-path distance between s and u.
2. Transform G into an auxiliary graph G': nodes and links are kept unchanged; the cost of each link (u,v) in G' is defined by c'(u,v) = c(u,v) + d(s,u) - d(s,v); reverse the direction of the links along the shortest path from node s to node d.
3. Compute the shortest path from node s to node d in G'.
Let T (T') denote the shortest path between s and d in G (G'). After removing the links appearing in both T and T' (the ones traversed in opposite directions), the remaining links form a cycle; the two link-disjoint paths between s and d can be extracted from this cycle.

Worked example
Compute the shortest path tree and d(s,u) on the 6-node example: d(1,2) = 2, d(1,3) = 1, d(1,4) = 3, d(1,5) = 3, d(1,6) = 4. Transform G into G' with c'(u,v) = c(u,v) + d(s,u) - d(s,v):
c'(1,2) = 2 + 0 - 2 = 0
c'(1,3) = 1 + 0 - 1 = 0
c'(2,4) = 2 + 2 - 3 = 1
c'(3,4) = 2 + 1 - 3 = 0
c'(5,6) = 2 + 3 - 4 = 1
c'(4,6) = 1 + 3 - 4 = 0
c'(3,5) = 2 + 1 - 3 = 0

Algorithm to compute disjoint paths (continued)
In the auxiliary graph G', reverse the direction of the links along the shortest path, then compute the shortest path in G'. Remove the links that appear in opposite directions in the two shortest paths: the remaining links form the two link-disjoint paths between nodes 1 and 6.
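The construction above is Suurballe's reduced-cost transform. An equivalent, slightly shorter formulation (Bhandari's) skips the re-weighting: it reverses the first path's arcs with negated costs and takes the symmetric difference of the two shortest paths. A sketch on the 6-node example (edge costs as in the worked example; Bellman-Ford is used because the transformed graph has negative arcs):

```python
# Suurballe/Bhandari-style computation of two link-disjoint paths.
# Undirected example graph with the costs of the worked example above.
EDGES = {(1, 2): 2, (1, 3): 1, (2, 4): 2, (3, 4): 2,
         (3, 5): 2, (4, 6): 1, (5, 6): 2}

def bellman_ford(arcs, s, d, nodes):
    """Shortest s->d path over directed arcs {(u, v): cost}; tolerates
    the negative costs introduced by the Bhandari transform."""
    dist = {n: float("inf") for n in nodes}
    prev = {n: None for n in nodes}
    dist[s] = 0
    for _ in range(len(nodes) - 1):
        for (u, v), c in arcs.items():
            if dist[u] + c < dist[v]:
                dist[v], prev[v] = dist[u] + c, u
    path, n = [], d
    while n is not None:
        path.append(n)
        n = prev[n]
    return path[::-1]

def two_disjoint_paths(edges, s, d):
    nodes = {n for e in edges for n in e}
    arcs = dict(edges)                  # each undirected edge becomes
    arcs.update({(v, u): c for (u, v), c in edges.items()})  # two arcs
    p1 = bellman_ford(arcs, s, d, nodes)
    # Bhandari transform: reverse first-path arcs with negated cost
    for u, v in zip(p1, p1[1:]):
        c = arcs.pop((u, v))
        arcs[(v, u)] = -c
    p2 = bellman_ford(arcs, s, d, nodes)
    # symmetric difference of the two paths' undirected edges
    keep = ({frozenset(e) for e in zip(p1, p1[1:])}
            ^ {frozenset(e) for e in zip(p2, p2[1:])})
    # walk the surviving edges to rebuild the two disjoint paths
    paths = []
    for _ in range(2):
        path, cur = [s], s
        while cur != d:
            nxt = next(n for e in keep if cur in e for n in e if n != cur)
            keep.discard(frozenset((cur, nxt)))
            path.append(nxt)
            cur = nxt
        paths.append(path)
    return paths

print(sorted(two_disjoint_paths(EDGES, 1, 6)))
# -> [[1, 2, 4, 6], [1, 3, 5, 6]]
```

As in the slides, the first shortest path 1-3-4-6 shares link (3,4) with the second computed path; removing the oppositely-directed pair leaves exactly the two disjoint paths 1-2-4-6 and 1-3-5-6.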