Jaringan Komputer (Computer Networks): The Network Layer


Network Layer

The network layer is concerned with getting packets from the source all the way to the destination. This may require making many hops at intermediate routers along the way. It contrasts with the data link layer, which has the more modest goal of just moving frames from one end of a wire to the other. The network layer is the lowest layer that deals with end-to-end transmission.

The network layer must know about the topology of the communication subnet (i.e., the set of all routers) and choose appropriate paths through it. It must take care to choose routes that avoid overloading some of the communication lines and routers while leaving others idle, and it must deal with the problems that arise when the source and destination are in different networks.

Network Layer Design Issues
- Store-and-Forward Packet Switching
- Services Provided to the Transport Layer
- Implementation of Connectionless Service
- Implementation of Connection-Oriented Service
- Comparison of Virtual-Circuit and Datagram Subnets

Store-and-Forward Packet Switching

(Figure: the environment of the network layer protocols.)

A host with a packet to send transmits it to the nearest router, either on its own LAN or over a point-to-point link to the carrier. The packet is stored there until it has fully arrived, so that the checksum can be verified. It is then forwarded to the next router along the path until it reaches the destination host, where it is delivered. This mechanism is store-and-forward packet switching.
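As a rough illustration, here is a minimal Python sketch of store-and-forward behaviour at one router. The Packet fields, the checksum scheme, and the routing table are assumptions made for the example, not details from the text.

```python
# Minimal store-and-forward sketch (hypothetical Packet type and tables).
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str
    payload: bytes
    checksum: int

def checksum_ok(pkt: Packet) -> bool:
    # Placeholder integrity check; a real link would use a CRC or similar.
    return pkt.checksum == sum(pkt.payload) % 256

def store_and_forward(pkt: Packet, routing_table: dict) -> None:
    # 1. The packet is stored until it has fully arrived (here: we already
    #    hold the whole Packet object), so its checksum can be verified.
    if not checksum_ok(pkt):
        return  # corrupted packets are discarded, never forwarded
    # 2. Only then is it forwarded on the outgoing line chosen for its
    #    destination, one hop closer to the destination host.
    out_line = routing_table[pkt.dest]
    print(f"forwarding packet for {pkt.dest} on line {out_line}")
```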

Services Provided to the Transport Layer

The network layer services have been designed with the following goals:
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers present.
3. The network addresses made available to the transport layer should use a uniform numbering plan, even across LANs and WANs.

The network layer should provide either connection-oriented service or connectionless service.

The Internet community's view:
- The routers' job is moving packets around and nothing else.
- The subnet is inherently unreliable, so the hosts must handle error control and flow control themselves.
- The network service should be connectionless, with primitives SEND PACKET and RECEIVE PACKET and little else.
- There is no packet ordering, and each packet must carry the full destination address, because each packet is carried independently of its predecessors.
The Internet offers connectionless network-layer service.

The telephone companies' view:
- The subnet should provide a reliable, connection-oriented service.
- Quality of service is the dominant factor, and without connections in the subnet, quality of service is very difficult to achieve, especially for real-time traffic such as voice and video.
ATM networks offer connection-oriented network-layer service.

Implementation of Connectionless Service

(Figure: routing within a datagram subnet.)

Packets are injected into the subnet individually and routed independently of each other; no advance setup is needed. The packets are frequently called datagrams (in analogy with telegrams) and the subnet is called a datagram subnet. The algorithm that manages the tables and makes the routing decisions is called the routing algorithm.
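At the host interface, UDP is the everyday example of this connectionless model: there is no setup phase, and every datagram names its full destination. A tiny Python sketch (the address and port are placeholders):

```python
# Connectionless service as seen from a host, using UDP sockets.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Each sendto() carries the full destination address; successive datagrams
# are routed independently and may arrive out of order or not at all.
sock.sendto(b"packet 1", ("192.0.2.10", 5000))
sock.sendto(b"packet 2", ("192.0.2.10", 5000))
sock.close()
```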

Implementation of Connection-Oriented Service

(Figure: routing within a virtual-circuit subnet.)

A path from the source router to the destination router must be established before any data packets can be sent. This connection is called a VC (virtual circuit), in analogy with the physical circuits set up by the telephone system, and the subnet is called a virtual-circuit subnet. Virtual circuits avoid having to choose a new route for every packet sent: when a connection is established, a route from the source machine to the destination machine is chosen as part of the connection setup and stored in tables inside the routers.

That route is used for all traffic flowing over the connection, exactly the way the telephone system works. When the connection is released, the virtual circuit is also terminated. With connection-oriented service, each packet carries an identifier telling which virtual circuit it belongs to, and conflicts between identifiers must be avoided. Routers therefore need the ability to replace connection identifiers in outgoing packets; in some contexts this is called label switching.

Comparison of Virtual-Circuit and Datagram Subnets

Trade-off between router memory space and bandwidth: virtual circuits allow packets to contain short circuit numbers instead of full destination addresses. If the packets tend to be fairly short, a full destination address in every packet may represent a significant amount of overhead and hence wasted bandwidth. The price paid for using virtual circuits internally is the table space within the routers. Depending on the relative cost of communication circuits versus router memory, one or the other may be cheaper.

Trade-off between setup time and address parsing time: using virtual circuits requires a setup phase, which takes time and consumes resources. However, figuring out what to do with a data packet in a virtual-circuit subnet is easy: the router just uses the circuit number to index into a table to find out where the packet goes. In a datagram subnet, a more complicated lookup procedure is required to locate the entry for the destination.
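The difference between the two lookup styles can be made concrete with a small Python sketch; the table layouts, line names, and circuit numbers are illustrative assumptions, not from the text.

```python
# Datagram subnet: the table is indexed by destination, so it needs an entry
# for every reachable destination, and each packet carries the full address.
datagram_table = {"A": "line 1", "B": "line 2", "C": "line 2"}

def forward_datagram(dest_addr: str) -> str:
    return datagram_table[dest_addr]

# Virtual-circuit subnet: the table only needs an entry per open circuit; the
# router indexes it with the small circuit number carried in the packet and
# replaces the identifier for the next hop (label switching).
vc_table = {(0, 1): ("line 2", 5),   # (in_line, in_vc) -> (out_line, out_vc)
            (1, 3): ("line 0", 1)}

def forward_vc(in_line: int, in_vc: int) -> tuple:
    out_line, out_vc = vc_table[(in_line, in_vc)]
    return out_line, out_vc   # packet leaves carrying the new identifier
```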

Comparison of Virtual-Circuit and Datagram Subnets (continued)

The amount of table space required in router memory also differs: a datagram subnet needs an entry for every possible destination, whereas a virtual-circuit subnet just needs an entry for each virtual circuit.

Routing Algorithms

The main function of the network layer is routing packets from the source to the destination. In most subnets, packets will require multiple hops to make the journey. The only notable exception is broadcast networks, but even there routing is an issue if the source and destination are not on the same network. The routing algorithm is the part of the network layer software responsible for deciding which output line an incoming packet should be transmitted on.

In a datagram subnet, this decision must be made anew for every arriving data packet, since the best route may have changed since last time. In a virtual-circuit subnet, routing decisions are made only when a new virtual circuit is being set up; thereafter, data packets just follow the previously established route. This is sometimes called session routing, because a route remains in force for an entire session.

A router has two processes inside it:
- Forwarding: handles each packet as it arrives, looking up the outgoing line to use for it in the routing tables.
- The other process is responsible for filling in and updating the routing tables. That is where the routing algorithm comes into play.

Certain properties are desirable in a routing algorithm: correctness, simplicity, robustness, stability, fairness, and optimality. Note that there is a conflict between fairness and optimality.

Two classes of routing algorithms:
- Non-adaptive
- Adaptive

Non-adaptive algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology. The choice of the route is computed in advance, off-line, and downloaded to the routers when the network is booted. This is sometimes called static routing.

Adaptive algorithms change their routing decisions to reflect changes in the topology, and usually the traffic as well. They differ in:
- where they get their information (locally, from adjacent routers, or from all routers),
- when they change the routes (e.g., every T seconds, when the load changes, or when the topology changes), and
- what metric is used for optimization (e.g., distance, number of hops, or estimated transit time).

Routing Algorithms
- The Optimality Principle
- Shortest Path Routing
- Flooding
- Distance Vector Routing
- Link State Routing
- Hierarchical Routing
- Broadcast Routing
- Multicast Routing
- Routing for Mobile Hosts
- Routing in Ad Hoc Networks

The Optimality Principle

This is a general statement about optimal routes, without regard to network topology or traffic: if router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route. As a direct consequence of the optimality principle, the set of optimal routes from all sources to a given destination forms a tree rooted at the destination. Such a tree is called a sink tree; here the distance metric is the number of hops.

(Figure: (a) A subnet. (b) A sink tree for router B.)

Shortest Path Routing

Find the shortest (or fastest) path from source to destination. Possible metrics:
- Basic: hops, physical distance
- Advanced: bandwidth, average traffic, communication cost, mean queue length, measured delay
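Shortest path routing is typically computed with Dijkstra's algorithm over the chosen link metric. A compact Python sketch follows; the three-router graph and its link costs are invented for the example.

```python
# Dijkstra's shortest path algorithm over a weighted graph (adjacency dicts).
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbour, weight in graph[node].items():
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Example subnet with link costs (e.g., delay in ms)
graph = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 3}
```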

Flooding

Every incoming packet is sent out on every outgoing line except the one it arrived on. To keep the process from running forever, a hop counter is contained in the header of each packet; it is decremented at each hop, and the packet is discarded when the counter reaches zero. The counter can be initialized to the worst case, namely the full diameter of the subnet. To keep track of which packets have already been flooded, and avoid sending them out a second time, sequence numbers can be used. Selective flooding is a variation that is slightly more practical: routers do not send every incoming packet out on every line, only on those lines that are going approximately in the right direction.

Distance Vector Routing

Distance vector routing is a dynamic algorithm. Each router maintains a table (i.e., a vector) giving the best known distance to each destination and which line to use to get there. These tables are updated by exchanging information with the neighbors. Each router maintains a routing table indexed by, and containing one entry for, each router in the subnet. The entry contains two parts: the preferred outgoing line to use for that destination and an estimate of the time or distance to that destination. The metric used might be number of hops, time delay in milliseconds, total number of packets queued along the path, or something similar. (A small sketch of this table update appears at the end of this section.)

Link State Routing

Link state routing replaced distance vector routing. Each router must do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.

Hierarchical Routing

A network may grow to the point where it is no longer feasible for every router to have an entry for every other router, so the routing will have to be done hierarchically, as it is in the telephone network. Routers are divided into what we will call regions, with each router knowing all the details about how to route packets to destinations within its own region, but knowing nothing about the internal structure of other regions. It may be necessary to group the regions into clusters, the clusters into zones, the zones into groups, and so on, until we run out of names for aggregations.

Broadcast Routing

Hosts sometimes need to send messages to many or all other hosts. The simplest method is for the source to send a distinct packet to each destination, but this is wasteful of bandwidth and requires the source to have a complete list of all destinations. Alternatives:
- Flooding
- Multidestination routing
- Spanning tree
- Reverse path forwarding

Multicast Routing

Some applications require that widely separated processes work together in groups, so it is frequently necessary for one process to send a message to all the other members of the group. If the group is small, it can just send each other member a point-to-point message. If the group is large, the options include broadcasting and a spanning tree with pruning.
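Here is the small sketch promised above for the distance vector update: a Bellman-Ford style combination of each neighbor's advertised vector with the cost of the link to that neighbor. The router names, neighbors, and delays are invented for the example.

```python
# Distance vector update at one router: keep the cheapest route per destination.

def update_table(link_cost: dict, neighbour_vectors: dict) -> dict:
    """link_cost: neighbour -> cost of the direct link to that neighbour.
    neighbour_vectors: neighbour -> {destination: neighbour's estimated cost}.
    Returns {destination: (best cost, preferred outgoing neighbour)}."""
    table = {}
    for neighbour, vector in neighbour_vectors.items():
        for dest, cost in vector.items():
            total = link_cost[neighbour] + cost
            if dest not in table or total < table[dest][0]:
                table[dest] = (total, neighbour)
    return table

# Example: a router with neighbours A and I (delays in ms, illustrative)
print(update_table({"A": 8, "I": 10},
                   {"A": {"B": 12, "C": 25}, "I": {"B": 31, "C": 19}}))
# -> {'B': (20, 'A'), 'C': (29, 'I')}
```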

Routing for Mobile Hosts

(Figure: a WAN to which LANs, MANs, and wireless cells are attached; packet routing for mobile users.)

Routing in Ad Hoc Networks

Possibilities when the routers themselves are mobile:
- Military vehicles on a battlefield, with no infrastructure.
- A fleet of ships at sea, all moving all the time.
- Emergency workers at an earthquake site, where the infrastructure has been destroyed.
- A gathering of people with notebook computers in an area lacking 802.11.

Routing in ad hoc networks involves:
- Route discovery: route request and route reply.
- Route maintenance: tracking active neighbors.

Reading: Computer Networks, Fourth Edition, by Andrew S. Tanenbaum.

Congestion Control Algorithms
- General Principles of Congestion Control
- Congestion Prevention Policies
- Congestion Control in Virtual-Circuit Subnets
- Congestion Control in Datagram Subnets
- Load Shedding
- Jitter Control

Congestion

When too much traffic is offered, congestion sets in and performance degrades sharply. Contributing factors:
- long queues and limited memory (note that even infinite memory makes things worse),
- slow processors,
- low-bandwidth lines.
Upgrading part, but not all, of the system often just moves the bottleneck somewhere else. The real problem is frequently a mismatch between parts of the system; it will persist until all the components are in balance.

Congestion Control versus Flow Control

Congestion control has to do with making sure the subnet is able to carry the offered traffic. It is a global issue, involving the behavior of all the hosts, all the routers, the store-and-forward processing within the routers, and all the other factors that tend to diminish the carrying capacity of the subnet. Flow control, in contrast, relates to the point-to-point traffic between a given sender and a given receiver. Its job is to make sure that a fast sender cannot continually transmit data faster than the receiver is able to absorb it. Flow control frequently involves some direct feedback from the receiver to the sender to tell the sender how things are doing at the other end.

General Principles of Congestion Control

From a control theory point of view, solutions to this kind of problem divide into two groups:
- Open loop: attempt to solve the problem by good design, making sure it does not occur in the first place.
- Closed loop: based on the concept of a feedback loop.

A closed-loop solution for congestion has three parts:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.

A variety of metrics can be used to monitor for congestion:
- the percentage of all packets discarded for lack of buffer space,
- the average queue lengths,
- the number of packets that time out and are retransmitted,
- the average packet delay.

Taxonomy of Congestion Control Algorithms

Yang and Reddy (1995) developed a taxonomy for congestion control algorithms:
- Open loop: acting at the source, or acting at the destination.
- Closed loop with explicit feedback: packets are sent back from the point of congestion to warn the source.
- Closed loop with implicit feedback: the source deduces the existence of congestion by making local observations, such as the time needed for acknowledgements to come back.

Congestion means the load is greater than the resources can handle. Two solutions come to mind:
- Increase the resources. Sometimes this is not possible, or the capacity has already been increased to the limit.
- Decrease the load. Several ways exist to reduce the load, including denying service to some users, degrading service to some or all users, and having users schedule their demands in a more predictable way.

Congestion Prevention Policies (Open Loop)

(Figure 5-26: policies that affect congestion.)

Congestion Control in Virtual-Circuit Subnets

Admission control keeps congestion that has already started from getting worse. The idea: once congestion has been signaled, no more virtual circuits are set up until the problem has gone away, so attempts to set up new transport layer connections fail. This approach is crude, but it is simple and easy to carry out. In the telephone system, when a switch gets overloaded it also practices admission control, by not giving dial tones.

An alternative is to allow new virtual circuits but carefully route them around the problem areas.

(Figure: (a) A congested subnet. (b) A redrawn subnet that eliminates the congestion, and a virtual circuit from A to B.)

A third technique is to negotiate an agreement between the host and the subnet when a virtual circuit is set up. The agreement specifies the volume and shape of the traffic, the quality of service required, and other parameters. The subnet will typically reserve resources along the path when the circuit is set up; these resources can include table and buffer space in the routers and bandwidth on the lines. In this way, congestion is unlikely to occur on the new virtual circuits because all the necessary resources are guaranteed to be available. Reservation can be done all the time as standard operating procedure, or only when the subnet is congested; doing it all the time tends to waste resources.

Congestion Control in Datagram Subnets

These techniques can also be used in virtual-circuit subnets. The idea: each router can easily monitor the utilization of its output lines and other resources. It can associate with each line a real variable, u, whose value, between 0.0 and 1.0, reflects the recent utilization of that line. Whenever u moves above a threshold, the output line enters a "warning" state. Each newly arriving packet is checked to see if its output line is in the warning state; if it is, some action is taken.

The Warning Bit

The old DECNET architecture signaled the warning state by setting a special bit in the packet's header. When the packet arrived at its destination, the transport entity copied the bit into the next acknowledgement sent back to the source, and the source then cut back on traffic. As long as the router was in the warning state, it continued to set the warning bit. The source monitored the fraction of acknowledgements with the bit set and adjusted its transmission rate accordingly: as long as warning bits continued to flow in, the source continued to decrease its transmission rate; when they slowed to a trickle, it increased its transmission rate. Note that since every router along the path could set the warning bit, traffic increased only when no router was in trouble.

Choke Packets

The router sends a choke packet back to the source host, giving it the destination found in the packet. When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination by X percent. The host then ignores choke packets referring to that same destination for a fixed time interval. After that period has expired, the host listens for more choke packets for another interval. If one arrives, the line is still congested, so the host reduces the flow still more and begins ignoring choke packets again. If no choke packets arrive during the listening period, the host may increase the flow again.

Hop-by-hop choke packets: an alternative approach is to have the choke packet take effect at every hop it passes through.
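The utilization tracking and the source's reaction to choke packets can be sketched as follows. A typical choice for tracking recent utilization is an exponentially weighted average; the smoothing constant, threshold, and rate adjustment factors below are illustrative assumptions rather than values from the text.

```python
# (1) Router side: track recent utilization u of an output line from
# instantaneous samples f (0 = idle, 1 = busy); enter the warning state
# above a threshold.
def update_utilization(u: float, f: float, a: float = 0.9) -> float:
    return a * u + (1 - a) * f

def in_warning_state(u: float, threshold: float = 0.8) -> bool:
    return u > threshold

# (2) Source side: on receiving a choke packet, cut the rate to that
# destination by a fixed percentage, then ignore further choke packets for a
# while; increase again slowly if none arrive during the listening period.
def on_choke_packet(rate: float, reduction: float = 0.5) -> float:
    return rate * (1 - reduction)

def on_quiet_listening_period(rate: float, increase: float = 0.1) -> float:
    return rate * (1 + increase)
```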

Load Shedding

When none of the methods above makes the congestion disappear, routers can resort to load shedding. Load shedding is a fancy way of saying that when routers are being inundated by packets they cannot handle, they just throw packets away. Which packet to discard may depend on the applications running. For file transfer, an old packet is worth more than a new one, because dropping packet 6 and keeping packets 7 through 10 will cause a gap at the receiver that may force packets 6 through 10 to be retransmitted (if the receiver routinely discards out-of-order packets); in a 12-packet file, dropping packet 6 may require packets 7 through 12 to be retransmitted, whereas dropping packet 10 may require only packets 10 through 12 to be retransmitted. In contrast, for multimedia a new packet is more important than an old one. The former policy (old is better than new) is often called wine, and the latter (new is better than old) is often called milk.

Jitter Control

The variation (i.e., standard deviation) in the packet arrival times is called jitter. High jitter, for example having some packets take 20 msec and others 30 msec to arrive, gives an uneven quality to the sound or movie. The jitter can be bounded by computing the expected transit time for each hop along the path. When a packet arrives at a router, the router checks to see how much the packet is behind or ahead of its schedule. This information is stored in the packet and updated at each hop. If the packet is ahead of schedule, it is held just long enough to get it back on schedule; if it is behind schedule, the router tries to get it out the door quickly. (A small sketch of this per-hop scheme appears at the end of this section.)

Packets that are ahead of schedule thus get slowed down and packets that are behind schedule get speeded up, in both cases reducing the amount of jitter. In some applications, such as video on demand, jitter can be eliminated by buffering at the receiver and then fetching data for display from the buffer instead of from the network in real time. However, for other applications, especially those that require real-time interaction between people, such as Internet telephony and videoconferencing, the delay inherent in buffering is not acceptable.

Quality of Service
- Requirements
- Techniques for Achieving Good Quality of Service
- Integrated Services
- Differentiated Services
- Label Switching and MPLS

Requirements

The goal is to reduce congestion and improve network performance, taking into account how stringent the quality-of-service requirements of each flow are.

Flows vs. QoS in ATM

ATM networks classify flows into four broad categories with respect to their QoS demands:
1. Constant bit rate (e.g., telephony).
2. Real-time variable bit rate (e.g., compressed videoconferencing).
3. Non-real-time variable bit rate (e.g., watching a movie over the Internet).
4. Available bit rate (e.g., file transfer).
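As promised above, a minimal sketch of the per-hop jitter control scheme: compare a packet's actual progress with its expected schedule, hold it if it is early, and hurry it along if it is late. The schedule value and the forward() stub are assumptions for the example.

```python
# Per-hop jitter control sketch: early packets wait, late packets go at once.
import time

def forward(pkt) -> None:
    print("forwarding", pkt)   # stand-in for the real output routine

def handle_packet(pkt, scheduled_arrival: float, now: float) -> None:
    if now < scheduled_arrival:
        # Ahead of schedule: hold the packet just long enough to get it back
        # on schedule before sending it on.
        time.sleep(scheduled_arrival - now)
    # Behind (or exactly on) schedule: get it out the door as fast as possible.
    forward(pkt)
```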

Techniques for Achieving Good Quality of Service
- Overprovisioning
- Buffering
- Traffic Shaping
- The Leaky Bucket Algorithm
- The Token Bucket Algorithm
- Resource Reservation
- Admission Control
- Proportional Routing
- Packet Scheduling

Overprovisioning

An easy solution is to provide so much router capacity, buffer space, and bandwidth that the packets just fly through easily. The trouble with this solution is that it is expensive. As time goes on and designers get a better idea of how much is enough, this technique may become practical. To some extent, the telephone system is overprovisioned: it is rare to pick up a telephone and not get a dial tone instantly, because there is simply so much capacity available that demand can almost always be met.

Buffering

Flows can be buffered on the receiving side before being delivered; this smooths out the jitter.

(Figure: smoothing the output stream by buffering packets.)

Traffic Shaping

Traffic shaping smooths out the traffic on the server side, rather than on the client side, by regulating the average rate (and burstiness) of data transmission. Contrast this with sliding window protocols, which limit the amount of data in transit at once, not the rate at which it is sent.

The Leaky Bucket Algorithm

The leaky bucket is equivalent to a single-server queueing system with constant service time.

(Figure: (a) A leaky bucket with water. (b) A leaky bucket with packets.)

The Token Bucket Algorithm

The token bucket allows the output to speed up somewhat when large bursts arrive. For a packet to be transmitted, it must capture and destroy one token. The algorithm lets hosts save up permission to send large bursts later, up to the maximum size of the bucket. It throws away tokens (i.e., transmission capacity) when the bucket fills up, but it never discards packets.
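A minimal token bucket sketch in Python; the rate and capacity values are illustrative assumptions.

```python
# Token bucket: tokens accumulate at a fixed rate up to the bucket size; a
# packet may be sent only if enough tokens are available, so bursts up to the
# bucket size are allowed but the long-term rate is capped.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens (bytes) added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size: float) -> bool:
        now = time.monotonic()
        # Refill, but never beyond the bucket capacity (excess tokens, i.e.
        # unused transmission capacity, are thrown away; packets never are).
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size   # "capture and destroy" the tokens
            return True
        return False                     # packet must wait or be queued

bucket = TokenBucket(rate=1000.0, capacity=8000.0)   # 1 KB/s, 8 KB bursts
print(bucket.allow(1500))   # True: burst credit is available
```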

Resource Reservation

Once we have a specific route for a flow, it becomes possible to reserve resources along that route to make sure the needed capacity is available. Three different kinds of resources can potentially be reserved:
1. Bandwidth.
2. Buffer space.
3. CPU cycles.

Admission Control

A router has to decide, based on its capacity and how many commitments it has already made for other flows, whether to admit or reject a new flow. Flows must be described accurately in terms of specific parameters that can be negotiated.

(Figure: an example of a flow specification.)

Proportional Routing

Another way to provide a higher quality of service is to split the traffic for each destination over multiple paths, using locally available information. A simple method is to divide the traffic equally, or in proportion to the capacity of the outgoing links.

Packet Scheduling

In the fair queueing algorithm, routers have separate queues for each output line, one for each flow. When a line becomes idle, the router scans the queues round robin.

(Figure: (a) A router with five packets queued for line O. (b) Finishing times for the five packets.)
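The fair queueing idea can be sketched by imagining byte-by-byte round robin: compute the round in which each queued packet would finish and then transmit whole packets in order of those finishing times. The flows and packet lengths below are invented, and for simplicity all packets are assumed to be queued already.

```python
# Byte-by-byte fair queueing sketch: order packets by virtual finishing time.

def fair_queue_order(flows: dict) -> list:
    """flows: flow name -> list of packet lengths (in queue order).
    Returns (finishing round, flow, packet index) tuples in send order."""
    finish = []
    for flow, packets in flows.items():
        virtual_clock = 0
        for idx, length in enumerate(packets):
            virtual_clock += length      # round in which its last byte goes
            finish.append((virtual_clock, flow, idx))
    # Send whole packets in order of their (virtual) finishing times.
    return sorted(finish)

print(fair_queue_order({"A": [8], "B": [6], "C": [10, 4]}))
# -> [(6, 'B', 0), (8, 'A', 0), (10, 'C', 0), (14, 'C', 1)]
```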