
Chapter 3: The Data Link Layer

Slide 29: Contents

Lesson schedule (class number and portion covered per hour - an estimate - with planned and engaged dates):
1. Data link layer design issues
2. Error-detecting codes
3. Elementary data link protocols
4. (continued)
5. Sliding window protocols
6. (continued)
7. Example data link protocols
8. (continued)

Slide 30: Objectives. The chapter focuses on:
1. How service is provided by a lower layer to an upper layer - services offered to the network layer (NWL).
2. How bits are transmitted efficiently - framing: character count, byte and bit stuffing.
3. The structure of a frame.
4. How bits are transmitted reliably - error control during transmission.
5. How to handle the disparity of speeds between adjacent machines - flow control and DLL protocols.

Slide 31: Data Link Layer Design Issues

The study concentrates on:
o Design principles of the DLL.
o Algorithms for achieving reliable and efficient communication between two adjacent machines - two machines/hosts connected by a communication channel that acts conceptually like a wire (telephone line, coaxial cable, or a point-to-point wireless link).
o The important property of such a channel that makes it wire-like: the bits are delivered in exactly the same order in which they are sent. This makes transmitting bits from A to B over a wire look like a trivial problem.

Unfortunately, real communication channels:
o Make errors.
o Have finite bandwidth.
o Have non-zero propagation delay.
These limitations have important implications for the reliability and efficiency of data transfer. The protocols that are the subject of this chapter must take all three limitations into account.

First we look at the design issues:
o Services provided to the NWL, framing, error control, and flow control.
Second we look at elementary data link protocols:
o The nature of errors, their causes, and how they are detected and corrected, followed by a series of increasingly complex protocols.
Finally, we look at a few example protocols: HDLC, and PPP in the Internet.

DLL design issues - the functions/goals of the DLL are:
1. To provide a well-defined service interface to the NWL.
2. To deal with transmission errors.
3. To regulate the flow of data, so that slow receivers are not swamped by fast senders.
To accomplish these goals, the DLL:
o Takes the packets from the NWL and encapsulates them into frames (Figure 1).
o Transmits these frames.
o Each frame contains a frame header, a payload (the data, i.e. the packet), and a frame trailer (Figure 1).

Frame management is at the heart of the DLL. The principles used in error control and flow control appear again in the transport layer (TPL); in many networks they are found in upper layers as well, but the principle of operation is the same. Let us look at each function/goal in detail.

1. Services provided to the NWL
The principal service is to transfer data from the source NWL to the destination NWL.
o Data (bits of information) is given to the DLL at the source entity by the source NWL. The DLL must transmit this data to the destination entity so that it can be handed over to the NWL there, as shown in Figure 2(a).
o The actual transmission follows the path shown in Figure 2(b), but it is easier to visualize two DLL processes communicating using a DLL protocol. We use this simplified model of (a) throughout our study (Figure 2).

The DLL can be designed to offer various services, and the offerings vary from system to system. The three possibilities commonly provided are:
a. Unacknowledged connectionless service.
b. Acknowledged connectionless service.
c. Acknowledged connection-oriented service.
o (Why is there no unacknowledged connection-oriented service?)

a. Unacknowledged connectionless service
o The source machine sends independent frames to the destination machine and receives no ACKs from it.
o No logical connection is established beforehand.
o If a frame is lost due to noise on the line, no attempt is made to detect the loss or to recover from it.
This service is appropriate when the error rate is very low, so that recovery can be left to higher layers. It is also appropriate for real-time traffic, such as voice, where late data are worse than bad data. Most LANs use this service in the DLL.

b. Acknowledged connectionless service
o This is a step forward in reliability.
o Still no logical connection is established.
o But each frame sent is individually acknowledged by the destination.
o The sender therefore knows whether a frame has arrived correctly.
o If a frame has not been acknowledged within a specified interval, it can be sent again.
This service is useful over unreliable channels, such as wireless. It is worth emphasizing that acknowledgement in the DLL is an optimization rather than a requirement. Why?
o The NWL can always send a packet and wait for an ACK; if none is received, it resends the complete packet after a timeout.
o The trouble with this approach is that frames usually have a strict maximum length imposed by the hardware, whereas network-layer packets do not, so packets are broken up into several smaller frames.
o If the average packet is broken into, say, 10 frames and 20% of all frames are lost, it may take a very long time for a packet to get through.
o If individual frames are acknowledged and retransmitted, entire packets get through much faster.
On reliable channels such as fiber, DLL acknowledgement may be unnecessary overhead; on inherently unreliable wireless channels it is well worth the cost. This is why wireless LANs use this service, as indicated earlier.

c. Acknowledged connection-oriented service
This is the most sophisticated service the DLL can provide to the NWL.
o A connection is established between source and destination before any data is transmitted.
o Each frame sent over the line is numbered, and the DLL guarantees that each frame is received.
o Furthermore, it guarantees that each frame is received exactly once and that all frames arrive in the right order.
Connection-oriented service thus provides the NWL with the equivalent of a reliable bit stream. With connectionless service, in contrast, a lost ACK may cause a frame to be sent, and therefore received, several times.
There are three distinct phases in connection-oriented service:
i) Connection establishment - both sides initialize the variables and counters needed to keep track of which frames have been received and which have not.
ii) Data transfer - the frames are actually transmitted.
iii) Connection release - the variables, buffers, and other resources used to maintain the connection are freed.

Example: Consider a WAN subnet consisting of routers (IMPs) connected by point-to-point leased telephone lines (Figure 3).
o When a frame arrives at a router, the hardware checks it for errors.
o The frame is then passed to the DLL software, which may be embedded in a chip on the NIC.
o The DLL software checks whether this is the frame expected and, if so, gives the packet contained in the payload field to the routing software.
o The routing software then chooses an appropriate outgoing line and passes the packet down to the DLL software for that line, which transmits it. The frame is thus unpacked and repacked (Figure 3).

The routing software wants the job done with a reliable, sequenced connection on each of the point-to-point lines; the router does not want to be bothered with packets that got lost on the way. It is up to the DLL (the dotted rectangle in the figure) to make the unreliable communication lines look perfect, or at least fairly good. There is only one copy of the DLL software, with separate tables and data structures for each outgoing line.

2. Framing
To provide service to the NWL, the DLL must use the services provided by the physical layer. What does the physical layer do? It accepts a raw bit stream and attempts to deliver it to the destination. This bit stream is not guaranteed to be error free: the number of bits received may be less than, equal to, or greater than the number of bits transmitted, and the bits may have different values (errors). It is up to the DLL to detect and, if necessary, correct these errors.

The approach used by the DLL is to break the bit stream into discrete frames and to compute a checksum for each frame. (A frame is a group of bits with a header and a trailer.) When a frame arrives at the destination, the checksum is recomputed; if it differs from the one carried in the frame, the DLL knows that an error has occurred and takes steps to deal with it:
i) try to correct the error, if possible, or
ii) discard the bad frame and ask for retransmission.
Framing is more difficult than it appears. One possible solution is:
o Introduce time gaps between frames, much like the white spaces between words.
o But, as discussed earlier for transmission lines (the local loop), different frequency components travel at different speeds, distorting the received signal.
o Networks therefore rarely make timing guarantees, so this method cannot be used.
There are four alternative methods:
i) character count,
ii) flag bytes with byte stuffing,
iii) starting and ending flags with bit stuffing,
iv) physical layer coding violations.
Let us look at each one in detail.

1. Character count
A field in the header specifies the number of characters in the frame, as shown in Figure 4(a) (a character stream: (a) without errors, (b) with one error).
At the destination, the DLL reads the character count and so determines where this frame ends and the next one begins.
The trouble is that when a count is garbled by a transmission error, the receiver gets out of synchronization, as shown in Figure 4(b). Even if the checksum tells the destination that the frame is bad, it still has no way of telling where the next frame starts. Asking for retransmission does not help either, so this method is rarely used by itself. (A small parsing sketch follows below.)
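The sketch below shows, in a few lines of C, how a receiver could walk a byte stream using the character-count field, and how a single corrupted count desynchronizes everything that follows. The stream contents and frame lengths are made-up illustrative values in the style of the Figure 4 example, not taken from the figure itself.

#include <stdio.h>

/* Character-count framing: the first byte of each frame gives the frame
   length, including the count byte itself.  A minimal parsing sketch. */
void parse_frames(const unsigned char *stream, int n)
{
    int i = 0;
    while (i < n) {
        int count = stream[i];              /* length field of this frame */
        if (count == 0) break;              /* guard against a zero count */
        printf("frame at offset %d, length %d:", i, count);
        for (int j = 1; j < count && i + j < n; j++)
            printf(" %d", stream[i + j]);
        printf("\n");
        i += count;                         /* a garbled count desynchronizes everything after it */
    }
}

int main(void)
{
    /* four frames of lengths 5, 5, 8, 8 */
    unsigned char ok[]  = {5,1,2,3,4, 5,5,6,7,8, 8,9,8,7,6,5,4,3, 8,0,1,2,3,4,5,6};
    /* the same stream, but the second count byte corrupted from 5 to 7 */
    unsigned char bad[] = {5,1,2,3,4, 7,5,6,7,8, 8,9,8,7,6,5,4,3, 8,0,1,2,3,4,5,6};
    parse_frames(ok,  sizeof(ok));
    printf("-- with one corrupted count byte --\n");
    parse_frames(bad, sizeof(bad));         /* frame boundaries after the error are wrong */
    return 0;
}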

2. Flag bytes with byte stuffing (lab experiment)
This method gets around the resynchronization problem after an error. Each frame starts and ends with a special byte; earlier protocols used different starting and ending bytes, but nowadays the same FLAG byte is used for both, as shown in Figure 5(a) ((a) a frame delimited by flag bytes; (b) four examples of byte sequences before and after stuffing).
Synchronization is now maintained even in the presence of noise and errors: after an error, the receiver simply searches for the next FLAG. The FLAG is the special bit pattern 01111110.
A problem arises when binary data are being transmitted and the FLAG pattern occurs inside the data, where it would interfere with framing. One way to solve this is to insert a special escape byte (ESC) just before each accidental FLAG byte in the data. The receiving DLL removes these ESC bytes before giving the data to the NWL, so the ESC byte distinguishes a FLAG in the data from a real frame delimiter. This technique is called byte (or character) stuffing. If an ESC byte itself occurs in the data, it too is preceded by an ESC byte, as in Figure 5(b); as before, the extra ESC is removed before the data is given to the NWL.
Exercise: if the data is A FLAG FLAG ESC ESC FLAG B, what is the stuffed sequence? (See the sketch below.)
Byte stuffing as described here is a simplified version of the scheme used in PPP (the Point-to-Point Protocol), which most home computers use to connect to their ISPs.
A major disadvantage of this method is that it is tied to 8-bit characters, but character codes can use other sizes; Unicode, for example, uses 16-bit characters. (Search: Unicode code points for Kannada characters.) As networks developed, an alternative to byte stuffing had to be found that allows arbitrary character sizes.
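A minimal sketch of the byte-stuffing rule in C, which can also be used to check the exercise above. The FLAG value 01111110 (0x7E) is the one given in the notes; the ESC value chosen here is an assumption made only for this illustration.

#include <stdio.h>

#define FLAG 0x7E   /* 01111110, as stated in the notes */
#define ESC  0x7D   /* escape byte; the exact value is an assumption here */

/* Byte-stuff src[0..n-1] into dst; returns the stuffed length.
   dst must be able to hold up to 2*n bytes. */
int byte_stuff(const unsigned char *src, int n, unsigned char *dst)
{
    int j = 0;
    for (int i = 0; i < n; i++) {
        if (src[i] == FLAG || src[i] == ESC)
            dst[j++] = ESC;          /* insert ESC before an accidental FLAG or ESC */
        dst[j++] = src[i];
    }
    return j;
}

int main(void)
{
    /* the exercise data: A FLAG FLAG ESC ESC FLAG B */
    unsigned char data[] = { 'A', FLAG, FLAG, ESC, ESC, FLAG, 'B' };
    unsigned char out[2 * sizeof(data)];
    int m = byte_stuff(data, sizeof(data), out);
    for (int i = 0; i < m; i++)
        printf("%02X ", out[i]);
    printf("\n");   /* prints: A ESC FLAG ESC FLAG ESC ESC ESC ESC ESC FLAG B */
    return 0;
}

The receiving DLL simply does the reverse: whenever it sees an ESC, it discards it and accepts the following byte as data.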

3. Starting and ending flags with bit stuffing (lab experiment)
This technique allows data frames to contain an arbitrary number of bits and allows character codes with an arbitrary number of bits per character. How it works:
o Each frame begins and ends with the FLAG pattern 01111110.
o Whenever the sending DLL encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream.
o This is analogous to byte stuffing, with a stuffed 0 bit playing the role of the ESC byte.
o When the receiving DLL sees five consecutive 1s followed by a 0, it removes (destuffs) the 0 bit.
Bit stuffing is completely transparent to the NWL. Figure 6 shows an example: (a) the original data, (b) the data as they appear on the line, (c) the data as stored in the receiver's memory after destuffing.
With bit stuffing, the boundary between two frames can be unambiguously recognized by the flag pattern. If the receiver loses synchronization, it simply scans the input for the FLAG pattern, since it can only occur at frame boundaries. (A coding sketch of this stuffing rule appears at the end of this framing discussion.)

4. Physical layer coding violations (an experiment?)
This method is applicable when the encoding used on the physical medium contains some redundancy. For example, some LANs encode each data bit as two physical bits: a 1 is a high-low pair and a 0 is a low-high pair, so every bit has a transition in the middle, making it easy for the receiver to locate the bit boundaries. The combinations high-high and low-low are not used for data and can therefore be used as frame delimiters in some protocols.

Finally, many DLLs use a combination of the character count and one of the other methods as an extra safeguard. When a frame arrives, the count field is used to locate the end of the frame; the frame is accepted only if the delimiter is found there and the checksum is correct. Otherwise the input stream is scanned for the next delimiter, and so on.
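A minimal C sketch of the bit-stuffing rule just described (a 0 inserted after five consecutive 1s), operating on an array with one bit per element. The example bit pattern is illustrative; the expected output in the comment is simply what the routine computes for it.

#include <stdio.h>

/* Bit-stuff in[0..n-1] (one bit per element) into out; returns the stuffed
   length.  out must be able to hold up to n + n/5 bits. */
int bit_stuff(const unsigned char *in, int n, unsigned char *out)
{
    int ones = 0, j = 0;
    for (int i = 0; i < n; i++) {
        out[j++] = in[i];
        if (in[i] == 1) {
            if (++ones == 5) {   /* five 1s in a row: stuff a 0 */
                out[j++] = 0;
                ones = 0;
            }
        } else {
            ones = 0;            /* a 0 resets the run of 1s */
        }
    }
    return j;
}

int main(void)
{
    /* an illustrative pattern: 0110 followed by sixteen 1s and 0010 */
    unsigned char in[] = {0,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,0};
    unsigned char out[32];
    int m = bit_stuff(in, sizeof(in), out);
    for (int i = 0; i < m; i++) printf("%d", out[i]);
    printf("\n");   /* prints 011011111011111011111010010 */
    return 0;
}

The receiver performs the inverse operation: after seeing five consecutive 1s it discards the following 0 bit.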

3. Error control
Having solved the framing problem, the next problem is how to make sure all frames are eventually delivered to the NWL at the destination, and in the proper order. For unacknowledged connectionless service it might be fine for the sender to keep transmitting frames without regard to whether they arrive properly; for a reliable connection-oriented service it is not.
The usual way to ensure reliable delivery is to give the sender some feedback about what is happening at the other end of the line. Typically, the protocol calls for the receiver to send back special control frames bearing positive or negative acknowledgements about the incoming frames.
o If a positive ACK arrives, the sender knows the frame has reached its destination safely.
o If a negative ACK arrives, something has gone wrong and the frame must be retransmitted.
Additional complications arise from hardware trouble and noise:
o A frame may vanish completely because of a noise burst or a hardware malfunction.
o The ACK frame may be lost for the same reasons, in which case the acknowledgement scheme breaks down.
In the first case the receiver does not react at all, and the sender would wait forever; in the second case the receiver does respond, yet the sender would still wait forever.
This problem is solved by introducing timers into the DLL. When the sender transmits a frame, it also starts a timer:
o The timer is set to expire after an interval long enough for the frame to reach the destination, be processed there, and have the ACK propagate back to the sender.
Under normal conditions the frame is correctly received, the ACK gets back before the timer expires, and the timer is then cancelled. [Experiment/project: determining suitable timer values for different networks - LANs, WANs, MANs, WLANs, etc.]
If the timer goes off because the frame or its ACK was lost, the sender knows there is a potential problem and retransmits the frame. But there is a further problem:
o A frame is sent, it is received, and a positive ACK is sent back.
o The ACK is lost due to a noise burst.
o The frame is resent, so the receiver now has a duplicate.
o The duplicate may be passed on to the NWL, and trouble starts.
To prevent this, outgoing frames carry sequence numbers, so that the receiver can distinguish retransmissions from originals.
Managing the ACKs, timers, and sequence numbers so as to ensure that each frame is ultimately passed to the destination NWL exactly once is an important part of the DLL's job.

4. Flow control
Another important design issue in the DLL is the speed mismatch between sender and receiver. This is natural, because systems with different configurations are connected by communication links operating at different speeds; the sender may be slower than, as fast as, or faster than the receiver. The problem arises when the sender systematically transmits frames faster than the receiver can accept them: even if the transmission is error free, the receiver will be unable to handle the frames and will start losing them. Something has to be done to prevent this situation. There are two approaches:
A. Feedback-based flow control: the receiver sends information back to the sender giving it permission to send more data, or at least telling the sender how the receiver is doing. This is what we study here.
B. Rate-based flow control: the protocol has a built-in mechanism that limits the rate at which senders may transmit, without any feedback from the receiver. This is studied with the NWL.
The basic principle behind the various feedback-based flow control schemes is a set of well-defined rules about when the sender may transmit the next frame. A typical rule prohibits frames (one or n of them) from being sent until the receiver has granted permission, implicitly or explicitly.

Slide 32: Error detection and correction
The parts of the telephone system are the switches, the trunks, and the local loops. Switches and trunks are now almost entirely digital, but the local loop is still an analog twisted copper pair and may remain so for some time because of the high cost of replacement. While errors are rare on the digital links, they are common on local loops. Furthermore, wireless is becoming more common, and its error rates are higher than those of the trunk lines. Transmission errors are therefore here to stay, and we have to learn how to deal with them (a lesson in everybody's life as well).
Errors are generated by physical processes, and they tend to come in bursts rather than singly. This is both an advantage and a disadvantage:
o Computer data is always sent in blocks of bits. Consider a block size of 1000 bits and an error rate of 0.001 per bit.
o If errors were independent, most blocks would contain an error (on average one errored bit per 1000-bit block). This is the disadvantage of isolated errors.
o If errors come in bursts of 100, only one or two blocks in every 100 are affected, on average. This is the advantage of bursts.
o On the other hand, burst errors are much harder to correct than isolated errors.

Error-correcting codes
Network designers have developed two basic strategies for dealing with errors:
1. Include enough redundant information with each block of data so that the receiver can deduce what the transmitted data must have been - error correction. This is used on channels of low reliability, such as wireless: it is better to add enough redundancy to each block to recover the original data than to ask for a retransmission, which may itself arrive in error.
2. Include only enough redundancy to allow the receiver to detect that an error has occurred and then ask for retransmission - error detection. This is used on reliable channels such as fiber, where it is cheaper to retransmit the occasional bad frame.

What is an error? To understand how errors can be handled, it is necessary to look closely at what an error is. Normally a frame consists of m data (message) bits and r redundant (check) bits; the total length n = m + r is referred to as an n-bit codeword.
Given any two codewords, it is possible to determine how many bits they differ in, simply by Ex-ORing the two codewords and counting the number of 1s in the result (a short routine for this appears at the end of this passage). For example:

  1 0 0 0 1 0 0 1   codeword 1
  1 0 1 1 0 0 0 1   codeword 2
  0 0 1 1 1 0 0 0   Ex-OR: three 1s, so the codewords differ in 3 bits

The number of bit positions in which two codewords differ is called the Hamming distance; in the example above it is 3. Its significance is that if two codewords are a Hamming distance d apart, it takes d single-bit errors to convert one into the other.
In most applications all 2^m possible data messages are legal, but because of the way the check bits are computed, not all 2^n codewords are used. Given the algorithm for computing the check bits, it is possible to construct the complete list of legal codewords and, from this list, to find the two codewords whose distance is minimum. That minimum is the Hamming distance of the complete code.
(Table: the 2^m = 8 legal codewords for m = 3 data bits, with check bits computed by odd/even parity; only 8 of the 2^n possible codewords are used. In this particular list the minimum distance is 1.)
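The Ex-OR-and-count procedure for Hamming distance, as a small C routine applied to the two example codewords above:

#include <stdio.h>

/* Hamming distance: XOR the two codewords and count the 1 bits. */
int hamming_distance(unsigned a, unsigned b)
{
    unsigned x = a ^ b;
    int d = 0;
    while (x) {
        d += x & 1;
        x >>= 1;
    }
    return d;
}

int main(void)
{
    unsigned c1 = 0x89;  /* 1000 1001, codeword 1 from the example */
    unsigned c2 = 0xB1;  /* 1011 0001, codeword 2 from the example */
    printf("distance = %d\n", hamming_distance(c1, c2));  /* prints 3 */
    return 0;
}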

The error-detecting and error-correcting properties of a code depend on its Hamming distance:
o To detect d errors, a distance d+1 code is needed, because with such a code d single-bit errors cannot change one valid codeword into another valid codeword. When the receiver sees an invalid codeword, it can tell that a transmission error has occurred.
o To correct d errors, a distance 2d+1 code is needed, because then the original legal codeword is still closer to the corrupted word than any other legal codeword, so it can be uniquely determined.
Example: A single parity bit gives a code with distance 2: a single bit error produces a word with the wrong parity, so single errors can be detected (d + 1 = 1 + 1 = 2). For instance:

  1 0 0 0 1 0 0 1   transmitted
  1 0 0 0 0 0 0 1   received
  0 0 0 0 1 0 0 0   difference: one bit, and the parity is now wrong

Example: Consider a code with only four valid 10-bit codewords:
  0000000000   0000011111   1111100000   1111111111
This code has a distance of 5, so it can correct double errors (2d + 1 = 5). If the codeword 0000000111 arrives (two errors), the receiver concludes that the original must have been 0000011111 and corrects the two errors. But if 0000000011 arrives (a triple error from 0000011111), it is "corrected" to 0000000000, which is wrong.

Now suppose we want to design a code that corrects all single errors, with m message bits and r check bits; n = m + r is the codeword length.
o There are 2^m legal messages, hence 2^m legal codewords, out of 2^n possible bit patterns.
o Each of the 2^m legal codewords has n illegal codewords at distance 1 from it, formed by systematically inverting each of its n bits.
o So each legal message requires n + 1 bit patterns dedicated to it.
o Since the total number of bit patterns is 2^n, we must have (n + 1) * 2^m <= 2^n.
o Substituting n = m + r: (m + r + 1) * 2^m <= 2^(m+r), i.e. (m + r + 1) <= 2^r.
Given m message bits, this puts a lower limit on the number of check bits r needed to correct single errors (a short check of this bound follows below). The theoretical lower limit is achieved by Hamming's code.
(Figure: for the d d d P odd-parity example, a valid codeword, the n illegal codewords at distance 1 obtained by systematic inversion of its bits, and the resulting n + 1 dedicated bit patterns.)
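A few lines of C that find the smallest r satisfying (m + r + 1) <= 2^r, the bound just derived. The m = 1000 case anticipates the error-detection example later in this slide set.

#include <stdio.h>

/* Smallest r with (m + r + 1) <= 2^r: the single-error-correction bound. */
int min_check_bits(int m)
{
    int r = 1;
    while ((m + r + 1) > (1 << r))
        r++;
    return r;
}

int main(void)
{
    printf("m = 7    -> r = %d\n", min_check_bits(7));     /* 4, the 11-bit Hamming codeword */
    printf("m = 1000 -> r = %d\n", min_check_bits(1000));  /* 10 */
    return 0;
}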

In the Hamming code:
o The bits of the codeword are numbered consecutively, starting with bit 1 at the left end.
o The bits in positions that are powers of 2 (1, 2, 4, 8, 16, ...) are check bits.
o The rest (positions 3, 5, 6, 7, 9, 10, 11, ...) are filled with message bits.
o Each check bit forces the parity (odd or even) of some collection of bits, including itself.
o A given bit may be included in several parity computations: a data bit in position k contributes to the check bits whose positions appear in the expansion of k as a sum of powers of 2. For example, 3 = 1 + 2 and 11 = 1 + 2 + 8, so the data bit in position 3 contributes to the check bits in positions 1 and 2, and the data bit in position 11 contributes to the check bits in positions 1, 2, and 8.
o In other words, a message bit is checked by exactly those check bits occurring in its expansion: bit 11 is checked by bits 1, 2, and 8.
When a codeword arrives, the receiver initializes a counter to zero. It then examines each check bit k (k = 1, 2, 4, 8, 16, ...) to see whether it has the correct parity; if not, the receiver adds k to the counter. If the counter is zero after all check bits have been examined, the codeword is accepted as valid. If the counter is non-zero, it contains the position of the incorrect bit. For example, if check bits 1, 2, and 8 are in error, the counter contains 11, the position of the inverted bit. (A small coding sketch of this scheme appears after the burst-error discussion below.)
Figure 7 shows some 7-bit ASCII characters encoded as 11-bit codewords using the Hamming code, with check bits in positions 1, 2, 4, 8 and data bits in positions 3, 5, 6, 7, 9, 10, 11.
Hamming codes can only correct single errors. However, there is a trick that can be used to let Hamming codes correct burst errors:
o A sequence of k consecutive codewords is arranged as a matrix, one codeword per row, as shown in Figure 7.
o The data is transmitted one column at a time, starting with the leftmost column.
o At the receiver, the matrix is reconstructed one column at a time.
If a burst error of length k occurs, at most 1 bit in each of the k codewords is affected, and since the Hamming code can correct one error per codeword, the entire block can be restored. This method uses kr check bits to make blocks of km data bits immune to a single burst error of length k or less.
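A sketch of the 11-bit Hamming scheme described above (check bits in positions 1, 2, 4, 8; data bits elsewhere). Even parity is assumed here for concreteness; the notes allow either odd or even.

#include <stdio.h>

/* code[1..11] holds one codeword, one bit per element; code[0] is unused. */
void hamming_encode(const int data[7], int code[12])
{
    int k = 0;
    for (int pos = 1; pos <= 11; pos++)
        code[pos] = (pos == 1 || pos == 2 || pos == 4 || pos == 8)
                        ? 0 : data[k++];           /* place the 7 data bits */
    for (int p = 1; p <= 8; p <<= 1) {             /* compute check bits 1, 2, 4, 8 */
        int parity = 0;
        for (int pos = 1; pos <= 11; pos++)
            if (pos & p) parity ^= code[pos];      /* positions whose number contains p */
        code[p] = parity;                          /* force even parity over that set */
    }
}

/* Returns 0 if the codeword is consistent, otherwise the position of the
   single bit in error (the receiver's "counter" described above). */
int hamming_check(const int code[12])
{
    int counter = 0;
    for (int p = 1; p <= 8; p <<= 1) {
        int parity = 0;
        for (int pos = 1; pos <= 11; pos++)
            if (pos & p) parity ^= code[pos];
        if (parity) counter += p;                  /* this check bit has wrong parity */
    }
    return counter;
}

int main(void)
{
    int data[7] = {1, 0, 0, 1, 0, 0, 1};  /* 7-bit ASCII 'I', for instance */
    int code[12];
    hamming_encode(data, code);
    code[5] ^= 1;                          /* flip one bit "in transit" */
    printf("error at position %d\n", hamming_check(code));  /* prints 5 */
    return 0;
}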

Error-detecting codes
As observed earlier, error-correcting codes are widely used on wireless links, which are noisy; without error correction it would be hard to get anything through at all. On copper wire and fiber, however, where the error rate is much lower, error detection plus retransmission of the occasional bad frame is usually more efficient.
Example: Consider a channel with isolated errors, an error rate of 10^-6 per bit, and blocks of 1000 bits. Is correction or detection the better choice?
o Error correction:
  o To correct single errors we need r check bits per block, where (m + r + 1) <= 2^r.
  o With m = 1000 this becomes (1001 + r) <= 2^r, so r = 10 bits are required.
  o A megabit of data (1000 blocks) therefore needs 10 x 1000 = 10,000 check bits.
o Error detection:
  o To detect a single bit error, 1 parity bit per block is enough, i.e. 1000 parity bits per megabit.
  o In 10^6 bits, one error occurs on average (10^6 x 10^-6 = 1), so roughly one block per megabit has to be retransmitted (1001 bits including its parity bit).
  o The total overhead is therefore about 1000 + 1001 = 2001 bits per megabit, compared with 10,000 bits using the Hamming code.
If a single parity bit is added to a block and the block is badly garbled by a long burst error, the probability that the error is detected is only about 0.5, which is hardly acceptable. This can be improved considerably by treating each block as a rectangular matrix, n bits wide and k bits high, as described earlier for the Hamming burst trick:
o A parity bit is computed separately for each column and appended as a last row.
o The matrix is transmitted row by row.
o At the receiver, all the parity bits are checked; if any one of them is wrong, the receiver requests retransmission of the block.
o Additional retransmissions are requested as needed.
This method detects any single burst of length n or less, since such a burst changes at most one bit per column. If the block is garbled by a longer burst or by multiple shorter bursts, each of the n columns has the correct parity only by chance, with probability 0.5, so the probability of a bad block being accepted is 2^-n.

Polynomial codes (CRC)
Although the above scheme is sometimes adequate, another method is in widespread use in practice: the polynomial code, also known as the CRC (Cyclic Redundancy Check).
Polynomial codes are based on treating bit strings as representations of polynomials with coefficients of 0 and 1 only. A k-bit frame is regarded as the coefficient list of a polynomial with k terms, the leftmost bit being the coefficient of the highest-order term. For example, 110001 represents the 6-term polynomial with coefficients 1, 1, 0, 0, 0, 1, i.e. x^5 + x^4 + x^0.
To compute the checksum, polynomial arithmetic is used, done modulo 2: there are no carries or borrows, and addition and subtraction are both identical to exclusive OR. For example:

  1 0 0 1 1 0 1 1      0 0 1 1 0 0 1 1      1 1 1 1 0 0 0 0      0 1 0 1 0 1 0 1
+ 1 1 0 0 1 0 1 0    + 1 1 0 0 1 1 0 1    - 1 0 1 0 0 1 1 0    - 1 0 1 0 1 1 1 1
  ---------------      ---------------      ---------------      ---------------
  0 1 0 1 0 0 0 1      1 1 1 1 1 1 1 0      0 1 0 1 0 1 1 0      1 1 1 1 1 0 1 0

Long division is carried out as in binary, except that the subtractions are done modulo 2.
To use a polynomial code, the sender and receiver must agree on a generator polynomial G(x) in advance; both its highest-order and lowest-order bits must be 1. To compute the checksum for an m-bit message corresponding to the polynomial M(x), the message must be longer than G(x). The checksum R(x) is appended to the end of M(x), giving the transmitted frame T(x) (the message followed by the checksum). At the receiver, the received frame is divided by G(x); if there is a remainder, a transmission error has occurred, otherwise the frame is accepted as error free.
(Worked example, not reproduced here: 1. generation of the checksum at the transmitter; 2. recomputation of the checksum at the receiver when the frame arrives without error - the remainder is zero.)

(Worked examples 3 and 4, not reproduced here: recomputation of the checksum at the receiver when the frame arrives with a single bit error - the remainder is non-zero, so the error is detected.)

The algorithm for computing the checksum is as follows:
1. Let r be the degree of G(x). Append r zero bits to the low-order end of the frame, so that it contains m + r bits and corresponds to the polynomial x^r M(x).
2. Divide the bit string corresponding to x^r M(x) by the bit string corresponding to G(x), using modulo 2 division.
3. Subtract the remainder (r or fewer bits) from the bit string corresponding to x^r M(x), using modulo 2 subtraction. The result is the checksummed frame T(x) to be transmitted.
Example: M(x) = 1101011011 (degree 9), G(x) = x^4 + x + 1 = 10011 (degree 4); see Figure 8. (A sketch of the checksum computation in C follows below.)
A decimal analogy: if M = 116 and G = 5, the remainder R is 1 (116 / 5 = 23 remainder 1), and T = M - R = 116 - 1 = 115, which is exactly divisible by 5.
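A minimal C sketch of the checksum algorithm just described, working on '0'/'1' strings for clarity rather than efficiency (hardware uses the shift-register approach mentioned on the next slide). It reproduces the M(x) = 1101011011, G(x) = 10011 example; the remainder printed in the comment is what the routine computes for those inputs.

#include <stdio.h>
#include <string.h>

/* Modulo-2 division on strings of '0'/'1'.  gen must start and end with '1'.
   The remainder (strlen(gen) - 1 bits) is written to rem. */
void crc_remainder(const char *msg, const char *gen, char *rem)
{
    int n = strlen(msg), g = strlen(gen);
    char work[256];

    strcpy(work, msg);
    memset(work + n, '0', g - 1);      /* append r zero bits: x^r * M(x) */
    work[n + g - 1] = '\0';

    for (int i = 0; i < n; i++)         /* long division, XOR instead of subtraction */
        if (work[i] == '1')
            for (int j = 0; j < g; j++)
                work[i + j] = (work[i + j] == gen[j]) ? '0' : '1';

    strcpy(rem, work + n);              /* the last r bits are the checksum R(x) */
}

int main(void)
{
    char rem[16];
    /* M(x) = 1101011011, G(x) = x^4 + x + 1 = 10011, as in the example above */
    crc_remainder("1101011011", "10011", rem);
    printf("R(x) = %s\n", rem);         /* prints 1110, so T(x) = 11010110111110 */
    return 0;
}

At the receiver, the same division applied to the received frame (without appending zeros) yields a zero remainder if no detectable error has occurred.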

Now let us look at the power of the method. What kinds of errors will it detect?
o If T(x) arrives without error, dividing it by G(x) leaves a zero remainder.
o If a transmission error has occurred, the received frame is T(x) + E(x).
o Each 1 bit in E(x) corresponds to a bit that has been inverted (0 -> 1 or 1 -> 0); k 1 bits in E(x) means k single-bit errors have occurred.
o A single burst error is characterized by an initial 1, a final 1, and a mixture of 0s and 1s in between.
o At the receiver, [T(x) + E(x)] / G(x) is computed; since T(x) / G(x) leaves no remainder, the result depends only on E(x) / G(x).
o Errors whose E(x) happens to contain G(x) as a factor slip by undetected; all other errors are caught.
With a generator chosen to have x + 1 as a factor, all errors consisting of an odd number of inverted bits are detected. A polynomial code with r check bits detects all burst errors of length r or less; the probability of a longer burst getting through undetected is about 1 / 2^r, under the usual assumptions.
Certain generator polynomials have become international standards (every node must know the generator in advance). The one used by IEEE 802 is:
  x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
Although the checksum computation looks complicated, Peterson and Brown showed that a simple shift-register circuit can compute and verify the checksums in hardware, and in practice the error checking is implemented in hardware. Virtually all LANs, and some point-to-point links, use CRC error detection.
It has long been assumed that frames contain random bits, and all analyses of checksum algorithms use this assumption, i.e. P(0) = P(1) = 0.5. Inspection of real data shows that this is quite wrong, and as a consequence undetected errors are more common than the analyses suggest.
A project may be taken up to determine the actual bit statistics of data in different networks. Another project: an efficient hardware implementation of the CRC or Hamming algorithm in VHDL or another HDL.

Slide 33: Elementary data link protocols
To introduce protocols we use a communication model with a few explicit assumptions, and then study protocols of increasing complexity. The initial assumptions are:
1. The layers are independent processes that communicate by passing messages back and forth. For the study of the DLL the relevant layers are the physical layer, the DLL, and the NWL. In many cases the physical layer and the DLL run on a network I/O chip and the NWL on the main CPU, but other arrangements are possible.
2. Machine A wants to send a long stream of data to machine B using a reliable, connection-oriented service. A always has data ready to send and B is always ready to receive. (This assumption may be relaxed later.)
3. Machines do not crash; the protocols deal with communication errors, not with other failures.
4. For the DLL, the packet passed down by the NWL is pure data, every bit of which is to be delivered to the destination NWL.
5. The DLL accepts a packet and encapsulates it in a frame by adding a header and a trailer. Suitable library procedures such as to_physical_layer and from_physical_layer exist for sending and receiving frames.
6. The hardware computes and appends the checksum, so the DLL software need not worry about it.
Figure 9 shows declarations common to many of the protocols discussed later: five data structures are defined, and a number of functions/procedures (listed below) are used. They are collected in the file protocol.h; more details are in the book.

(Figure 9: the common declarations in protocol.h.)

1. An unrestricted simplex protocol
An initial, simple example to introduce DLL protocols. The assumptions are:
1. Data transmission is simplex (one direction only).
2. Processing time can be ignored.
3. Infinite buffer space is available.
4. The transmitting and receiving NWLs are always ready; no flow control is needed.
5. Best of all, the communication channel is error free.
Though highly unrealistic, this lets us build up the ideas step by step (Figure 10).
The protocol consists of two procedures, a sender and a receiver. The sender runs in the DLL of the source machine and the receiver runs in the DLL of the destination machine. No sequence numbers or acknowledgements are used; the only possible event is frame_arrival at the receiver. The sender pumps data onto the line as fast as it can, and the receiver accepts it as fast as it arrives (Figure 10).
Sender1 (input: packet, output: frame):
o from_network_layer(&buffer) : get a packet from the source NWL.
o s.info = buffer : construct an outbound frame s. Only the info (data) field is used; the kind, seq, and ack fields, needed for error and flow control, are not used here (Figure 9).
o to_physical_layer(&s) : transmit the frame toward the destination DLL.
o Loop back, ready to accept the next packet.

(Figure 10: the sender and receiver protocol stacks - NWL, DLL, physical layer - with a packet passed down on the sending side and a frame delivered on the receiving side.)
Receiver1 is equally simple (input: frame, output: packet):
o Wait for a frame to arrive.
o wait_for_event(&event) : the wait returns when a frame arrives from the source.
o from_physical_layer(&r) : accept the frame from the physical layer.
o to_network_layer(&r.info) : pass the info (data) portion to the destination NWL.
o Loop back and wait for the next incoming frame.

2. A simplex stop-and-wait protocol
Some of the unrealistic assumptions made earlier are now dropped; unrestricted transmission becomes stop-and-wait transmission.
1. Data transmission is simplex.
2. Processing time can be ignored. - Dropped.
3. Infinite buffer space is available. - Dropped.
4. The transmitting and receiving NWLs are always ready; no flow control. - Dropped.
5. The communication channel is error free. - Still assumed.
As the name suggests, a frame is sent from A to B (simplex), then the sender stops and waits for an acknowledgement before sending the next frame; the dummy acknowledgement frames flow from B back to A, alternating with the data frames. There is thus feedback from the receiver to the sender, and the sending and receiving processes may run at different speeds. (Code sketches of Protocols 1 and 2 appear after the description of Sender2 and Receiver2 below.)

Both sender and receiver now wait for frame arrivals. The main problem is to prevent a fast sender from flooding a slow receiver, so that the receiver can process every frame it is given (Figure 11). Each DLL may have multiple lines to attend to, and hence varying delays.
Sender2 (input: packet, output: frame; an ACK is required before the next frame may be sent):
o from_network_layer(&buffer) : get a packet from the source NWL.
o s.info = buffer : construct an outbound frame s.
o to_physical_layer(&s) : transmit the frame to the destination DLL.
o wait_for_event(&event) : block until the (ACK) frame arrives from the receiver; only then is the next packet fetched and sent.
o Loop back, ready to accept the next packet.
Receiver2 is equally simple (input: frame, output: packet; an ACK is sent back):
o Wait for a frame to arrive.
o wait_for_event(&event) : the wait returns when a frame arrives from the source.
o from_physical_layer(&r) : accept the frame from the physical layer.
o to_network_layer(&r.info) : pass the info (data) portion to the destination NWL.
o to_physical_layer(&s) : send a dummy (ACK) frame back, giving the sender permission to send the next frame.
o Loop back and wait for the next incoming frame.
(Figure 11: sender and receiver stacks; data frames flow from sender to receiver, and ACK frames flow back.)
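The sketches below are reconstructions of Protocols 1 and 2 from the descriptions above, written in the same style and against the same protocol.h declarations as the Protocol 4 listing later in these notes. They are illustrative, not the book's exact listings.

/* Protocol 1 (unrestricted simplex): error-free channel, receiver always ready.
   A reconstruction from the description above; illustrative only. */
typedef enum {frame_arrival} event_type;
#include "protocol.h"

void sender1(void)
{
  frame s;                            /* outbound frame */
  packet buffer;                      /* packet from the network layer */
  while (true) {
    from_network_layer(&buffer);      /* get a packet from the source NWL */
    s.info = buffer;                  /* only the info field is used */
    to_physical_layer(&s);            /* send it on its way */
  }
}

void receiver1(void)
{
  frame r;
  event_type event;
  while (true) {
    wait_for_event(&event);           /* the only possibility is frame_arrival */
    from_physical_layer(&r);          /* accept the inbound frame */
    to_network_layer(&r.info);        /* pass the data to the destination NWL */
  }
}

/* Protocol 2 (stop-and-wait): still error free, but the receiver may be slow.
   The dummy frame sent back carries no fields; it is only permission to proceed. */
void sender2(void)
{
  frame s;
  packet buffer;
  event_type event;
  while (true) {
    from_network_layer(&buffer);      /* get a packet from the source NWL */
    s.info = buffer;                  /* build the outbound frame */
    to_physical_layer(&s);            /* transmit it */
    wait_for_event(&event);           /* block until the ACK (dummy frame) arrives */
  }
}

void receiver2(void)
{
  frame r, s;                         /* s is the dummy ACK frame */
  event_type event;
  while (true) {
    wait_for_event(&event);           /* wait for a frame to arrive */
    from_physical_layer(&r);          /* accept it */
    to_network_layer(&r.info);        /* hand the packet to the NWL */
    to_physical_layer(&s);            /* send the dummy ACK: permission for the next frame */
  }
}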

3. A simplex protocol for a noisy channel
Real channels make errors under normal conditions, so that assumption is now dropped as well.
1. Data transmission is simplex.
2. Processing time can be ignored. - Dropped earlier.
3. Infinite buffer space is available. - Dropped earlier.
4. The transmitting and receiving NWLs are always ready; no flow control. - Dropped earlier.
5. The communication channel is error free. - Dropped.
Because the channel makes errors, frames may be damaged or lost, partly or completely. For a frame damaged in transit, the hardware detects the error via the checksum (assumption 6 above). The checksum may, very improbably, still come out correct even though the frame contains errors; such errors will not be detected by any algorithm.
Like the earlier two protocols, this one sends data in one direction only (simplex). At first sight a small variation of protocol 2 seems sufficient, but when an ACK is lost, retransmission produces duplicate frames, and this has to be dealt with. Frames must therefore be numbered so that duplicates can be recognized. What should the range of the sequence number be? One bit (0 or 1) is sufficient, since only one frame is outstanding at a time. The sender remembers the sequence number of the next frame to send, and the receiver remembers the sequence number of the frame it expects next.
When a data frame or an ACK frame is lost, the sender never receives the ACK; this has to be detected, so a timer is used. The algorithm is shown in Figure 12.
Sender3 (input: packet; output: frame; an ACK is required before the next frame; the sequence number lets the receiver identify duplicates):
o next_frame_to_send = 0 : initial sequence number.
o from_network_layer(&buffer) : get a packet from the source NWL.
o s.info = buffer : construct an outbound frame s.
o s.seq = next_frame_to_send : insert the sequence number into the frame.
o to_physical_layer(&s) : transmit the frame to the destination DLL.
o start_timer(s.seq) : start a timer in case the response takes too long.
o wait_for_event(&event) : frame (ACK) arrival, checksum error, or timeout.
o On frame arrival, from_physical_layer(&r) : accept the ACK frame from the physical layer.
o stop_timer(s.ack) : turn the timer off once the ACK for this frame is in.
o from_network_layer(&buffer) : get the next packet from the source NWL.
o inc(next_frame_to_send) : advance (invert) the sequence number of the next frame to send.

Receiver3 is equally simple (input: frame, output: packet; an ACK is sent back; the sequence number identifies duplicate frames):
o Wait for a frame to arrive.
o frame_expected = 0 : initialize the expected sequence number.
o wait_for_event(&event) : frame arrival or checksum error.
o from_physical_layer(&r) : accept the frame from the physical layer.
o if (r.seq == frame_expected) : this is the frame we were waiting for.
o to_network_layer(&r.info) : pass the info (data) portion to the destination NWL.
o inc(frame_expected) : advance the sequence number of the frame expected next.
o s.ack = 1 - frame_expected : identify the frame being acknowledged.
o to_physical_layer(&s) : send the ACK back, allowing the sender to proceed.
(Figure 12: Protocol 3. A reconstruction in code follows below.)
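A reconstruction of Protocol 3 from the Sender3/Receiver3 descriptions above, again using the protocol.h conventions of the later Protocol 4 listing. Details such as variable names and the exact placement of the ACK check are assumptions made for this sketch.

/* Protocol 3 (a simplex protocol for a noisy channel): frames and ACKs may
   be lost, so a timer and a 1-bit sequence number are used. */
#define MAX_SEQ 1
typedef enum {frame_arrival, cksum_err, timeout} event_type;
#include "protocol.h"

void sender3(void)
{
  seq_nr next_frame_to_send = 0;     /* sequence number of the next outgoing frame */
  frame s;
  packet buffer;
  event_type event;

  from_network_layer(&buffer);       /* fetch the first packet */
  while (true) {
    s.info = buffer;                 /* construct a frame for transmission */
    s.seq = next_frame_to_send;      /* insert the sequence number */
    to_physical_layer(&s);           /* send it */
    start_timer(s.seq);              /* if the ACK is too slow, time out */
    wait_for_event(&event);          /* frame_arrival, cksum_err, or timeout */
    if (event == frame_arrival) {
      from_physical_layer(&s);       /* get the ACK frame */
      if (s.ack == next_frame_to_send) {
        stop_timer(s.ack);               /* ACK for the right frame arrived */
        from_network_layer(&buffer);     /* fetch the next packet */
        inc(next_frame_to_send);         /* invert the sequence number */
      }
    }
    /* on cksum_err or timeout: fall through and retransmit the same frame */
  }
}

void receiver3(void)
{
  seq_nr frame_expected = 0;
  frame r, s;
  event_type event;
  while (true) {
    wait_for_event(&event);          /* frame_arrival or cksum_err */
    if (event == frame_arrival) {
      from_physical_layer(&r);
      if (r.seq == frame_expected) {     /* the frame we were waiting for */
        to_network_layer(&r.info);       /* pass the data up */
        inc(frame_expected);             /* expect the other sequence number next */
      }
      s.ack = 1 - frame_expected;        /* tell the sender which frame is acknowledged */
      to_physical_layer(&s);             /* duplicates are ACKed but not passed up */
    }
  }
}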

Slide 34: Sliding window protocols
In the previous protocols data transfer was simplex, but in most practical situations data must be transmitted in both directions. One way of achieving this is to use two separate channels (forward and reverse), one for each direction, but then the bandwidth of the reverse channel, which carries only ACKs, is almost entirely wasted. A better idea is to use the same circuit for data in both directions, as was already done for the acknowledgements in protocols 2 and 3. In this model the data frames from A to B are intermixed with the ACK frames from A to B; the kind field in the header of an incoming frame tells the receiver whether it is a data frame or an ACK.
Besides sharing the channel in this way, a further improvement is possible: when a data frame arrives, instead of immediately sending a separate control frame, the receiver waits until the NWL passes it the next outgoing packet, and attaches the acknowledgement to that data frame (using the ack field in the frame header).

The acknowledgement thus gets a free ride on the next outgoing data frame; this technique is known as piggybacking. It saves bandwidth, since the ack field costs only a few bits in a frame that is being sent anyway, and it reduces the number of frames sent, meaning less traffic and fewer resources used. Sometimes only a few bits in the frame header are needed for the piggybacked acknowledgement, whereas a separate ACK would cost an entire additional frame.
Piggybacking does, however, introduce a complication not present with separate ACKs: how long should the DLL wait for an outgoing packet onto which to piggyback the ACK? If it waits longer than the sender's timeout period, the frame will be retransmitted, and the DLL cannot foretell when the next packet will arrive. So an ad hoc scheme is used: wait a fixed time (some milliseconds); if a new outgoing packet arrives quickly, piggyback the ACK on it, otherwise send a separate ACK frame.
So far we have seen only simplex transmission; we now look at bidirectional protocols belonging to a class called sliding window protocols. We will study three of them:
1. A one-bit sliding window protocol.
2. Go-back-n.
3. Selective repeat.
They differ in efficiency, complexity, and buffer requirements. Instead of being limited to one outstanding frame at a time, the sender may have a group of frames outstanding, with sequence numbers ranging from 0 to 2^n - 1 (fitting in an n-bit field). The stop-and-wait case corresponds to n = 1, with sequence numbers 0 and 1.
(Figure 12a: Sliding window protocol - the sending and receiving windows, here covering sequence numbers 1 to 4, with ACKs sliding the windows forward.)

In all sliding window protocols, at any instant of time the sender maintains a set of sequence numbers corresponding to frames it is permitted to send (4 in the figure); these frames are said to fall within the sending window. Similarly, the receiver maintains a receiving window corresponding to the set of frames it is permitted to accept. The window sizes may be fixed, may differ between the two sides, and may even change dynamically during the transfer. The NWL is still promised in-order delivery, i.e. the channel must remain wire-like.
The sending window holds the list of frames sent but not yet acknowledged. Figures 12a and 13 show an example: after an ACK is received, the lower edge of the sender's window slides forward. Since frames within the sender's window may be lost or damaged in transit, the sender must keep all of them in memory (buffers) for possible retransmission; a window of size n therefore needs n buffers. When the window grows to its maximum size, the sending DLL must forcibly shut off the NWL until another buffer becomes free.
The receiving window accepts frames whose sequence numbers fall within its range and discards all others. When a frame whose sequence number equals the lower edge of the window is received, it is passed to the NWL, an ACK is generated, and the window is rotated (moved forward) by one. A receiving window of size 1 means the DLL accepts frames only in order; for larger windows this is not so. The NWL, in contrast, is always fed data in order, regardless of the DLL's window size.

Figure 3-13 shows an example with a window size of 1.
Figure 3-13. A sliding window of size 1, with a 3-bit sequence number. (a) Initially. (b) After the first frame has been sent. (c) After the first frame has been received. (d) After the first acknowledgement has been received.

A one-bit sliding window protocol
This protocol (protocol 4) uses a window size of 1. It is similar to stop-and-wait and helps one understand the concepts; Figure 14 gives the program. Each machine runs the same procedure, which acts as both sender and receiver (input: packets and inbound frames; output: frames carrying data plus a piggybacked ACK; the sequence number identifies duplicate frames). The steps are:
o next_frame_to_send = 0 : initial sequence number for the outbound stream.
o frame_expected = 0 : sequence number expected from the other side.
o from_network_layer(&buffer) : get a packet from the source NWL.
o s.info = buffer : construct an outbound frame s.
o s.seq = next_frame_to_send : insert the sequence number into the frame.
o s.ack = 1 - frame_expected : piggybacked ACK.
o to_physical_layer(&s) : transmit the frame to the destination DLL.
o start_timer(s.seq) : start the timer in case the response is slow.
o wait_for_event(&event) : frame arrival, checksum error, or timeout.
o On frame arrival, from_physical_layer(&r) : fetch the incoming frame.
o If r.seq == frame_expected (inbound stream): to_network_layer(&r.info) passes the packet to the NWL, and inc(frame_expected) inverts the sequence number expected next.
o If r.ack == next_frame_to_send (outbound stream): stop_timer(r.ack) turns the timer off, from_network_layer(&buffer) fetches the next packet from the NWL,

and inc(next_frame_to_send) inverts the sender's sequence number.
o At the bottom of the loop (executed on every iteration, whether or not anything arrived): s.info = buffer constructs the outbound frame, s.seq = next_frame_to_send inserts the sequence number, s.ack = 1 - frame_expected carries the sequence number of the last frame received, to_physical_layer(&s) transmits the frame, and start_timer(s.seq) restarts the timer.
(The receiver-side handling is exactly the Receiver3 logic described earlier, now folded into the same procedure.)

/* Protocol 4 (sliding window) is bidirectional. */

#define MAX_SEQ 1                        /* must be 1 for protocol 4 */
typedef enum {frame_arrival, cksum_err, timeout} event_type;
#include "protocol.h"

void protocol4(void)
{
  seq_nr next_frame_to_send;             /* 0 or 1 only */
  seq_nr frame_expected;                 /* 0 or 1 only */
  frame r, s;                            /* scratch variables */
  packet buffer;                         /* current packet being sent */
  event_type event;

  next_frame_to_send = 0;                /* next frame on the outbound stream */
  frame_expected = 0;                    /* frame expected next */
  from_network_layer(&buffer);           /* fetch a packet from the network layer */
  s.info = buffer;                       /* prepare to send the initial frame */
  s.seq = next_frame_to_send;            /* insert sequence number into frame */
  s.ack = 1 - frame_expected;            /* piggybacked ack */
  to_physical_layer(&s);                 /* transmit the frame */
  start_timer(s.seq);                    /* start the timer running */

  while (true) {
    wait_for_event(&event);              /* frame_arrival, cksum_err, or timeout */
    if (event == frame_arrival) {        /* a frame has arrived undamaged */
      from_physical_layer(&r);           /* go get it */

      if (r.seq == frame_expected) {     /* handle inbound frame stream */
        to_network_layer(&r.info);       /* pass packet to network layer */
        inc(frame_expected);             /* invert seq number expected next */
      }

      if (r.ack == next_frame_to_send) { /* handle outbound frame stream */
        stop_timer(r.ack);               /* turn the timer off */
        from_network_layer(&buffer);     /* fetch new pkt from network layer */
        inc(next_frame_to_send);         /* invert sender's sequence number */
      }
    }

    s.info = buffer;                     /* construct outbound frame */
    s.seq = next_frame_to_send;          /* insert sequence number into it */
    s.ack = 1 - frame_expected;          /* seq number of last received frame */
    to_physical_layer(&s);               /* transmit a frame */
    start_timer(s.seq);                  /* start the timer running */
  }
}

Figure 3-14. A 1-bit sliding window protocol.

The ack field contains the sequence number of the last frame received without error. If this number agrees with the sequence number of the frame the sender is currently trying to send, the sender fetches the next packet from its network layer; if not, it keeps resending the same frame. Whenever a frame is received, a frame is also sent back.
How resilient is the protocol? Assume that:
1. A is trying to send its frame 0 to B,
2. B is trying to send its frame 0 to A, and
3. A's timeout is a little shorter than B's (different timeouts are quite possible).
Consequently, A may time out repeatedly, sending a series of identical frames to B, all with seq = 0 and ack = 1.
(Timeline: Machine A, timeout n, versus Machine B, timeout n+1. A re-sends (seq = 0, ack = 1) again and again; B rejects the duplicates and, since ack = 1 does not acknowledge its own frame 0, never fetches a new packet from its NWL; only when a frame carrying ack = 0 finally arrives does the next packet go out.)
Transmission settles down only after several retransmissions: many duplicates are sent even though there are no errors, sometimes three or more copies of the same frame.
A peculiar situation also arises if both sides simultaneously send an initial packet. Figure 15(a) depicts normal operation:
o If B waits for A's first frame before sending one of its own, the sequence is as shown in (a), and every frame is accepted.
o Each frame arrival brings a new packet for the network layer; there are no duplicates.
The peculiar situation is illustrated in Figure 15(b):
o When A and B send simultaneously, their first frames cross.
o Half of the frames then contain duplicates, even though there are no transmission errors.
o Something similar happens after premature timeouts.
Figure 3-15. Two scenarios for protocol 4. (a) Normal case. (b) Abnormal case. The notation is (seq, ack, packet number). An * indicates where a network layer accepts a packet.

A protocol using go-back-n
Until now we have tacitly assumed that the round-trip transmission time is negligible, which is not true; bandwidth utilization (efficiency) is a function of the round-trip delay, and the longer the delay, the lower the utilization.
Example: Consider a 50-kbps satellite channel with a 500-msec round-trip propagation delay, and imagine using protocol 4 (the 1-bit sliding window) with 1000-bit frames over it.
o At t = 0 the sender starts sending the first frame; at t = 20 msec the frame has been completely sent.
o At t = 270 msec the frame has fully arrived at the receiver.
o At t = 520 msec, under the best of circumstances, the ACK arrives back at the sender.
o The sender was therefore blocked for 500 of every 520 msec, about 96 percent of the time; the channel efficiency is only about 4 percent.
o Clearly, the combination of long transit time, high bandwidth, and short frame length is disastrous for efficiency. (The arithmetic is checked in the short sketch below.)
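The arithmetic of the satellite example can be checked with a few lines of C. The window size printed at the end is a derived figure (the bandwidth-delay product divided by the frame length), not a number stated explicitly in the notes above.

#include <stdio.h>

/* Stop-and-wait (window = 1) utilization for the satellite example above,
   plus the window size that would be needed to keep the channel busy. */
int main(void)
{
    double bit_rate   = 50e3;     /* 50 kbps channel                 */
    double frame_bits = 1000.0;   /* frame length in bits            */
    double rtt        = 0.500;    /* round-trip propagation delay, s */

    double tx_time  = frame_bits / bit_rate;   /* 0.020 s to clock the frame out */
    double cycle    = tx_time + rtt;           /* 0.520 s until the ACK returns  */
    double util     = tx_time / cycle;         /* about 0.04, i.e. roughly 4%    */
    double win_bits = bit_rate * cycle;        /* bits "in flight" for full use  */

    printf("utilization        = %.1f%%\n", util * 100.0);
    printf("window for full use = %.0f frames\n", win_bits / frame_bits);  /* 26 */
    return 0;
}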

59 However, a peculiar situation arises if both sides simultaneously send an initial packet. Figure 15a depicts the normal operation. o If B waits for A s first frame before sending one of its own, the sequence is as shown in (a), and every frame is accepted. o Each frame arrival brings a new packet for the network layer; there are no duplicates. Peculiar situation is illustrated in (b). o When A and B simultaneously send, their first frames cross. o Half of the frames contain duplicates, even though there are no transmission errors. o Similar thing happens in premature timeouts. Figure 3-15. Two scenarios for protocol 4. (a) Normal case. (b) Abnormal case. The notation is (seq, ack, packet number). An * indicates where a network layer accepts a packet. A Protocol Using Go Back N Until now, the assumption made is, the round trip Tx time is negligible. And it is not true. But the b/w utilization (efficiency) is a function of round trip delay. Longer the delay, lesser the utilization. An example: Consider a 50-kbps satellite channel with 500-msec round-trip propagation delay. Let us imagine trying to use protocol 4 (1 bit sliding window) with 1000-bit frames via satellite. o At t = 0 the sender starts sending the first frame. At t = 20 msec the frame has been completely sent. o At t = 270 msec the frame fully arrived at the receiver. o At t = 520 msec the ACK arrived back at the sender, under the best of circumstances. o This means that the sender was blocked during 500/520 or 96 percent of the time & 4 % is the channel efficiency. o Clearly, the combination of a long transit time, high bandwidth, and short frame length is disastrous in terms of efficiency. 60