MONITORING AND ANALYZING COMMUNICATION LATENCY IN DISTRIBUTED REAL-TIME SYSTEMS


MONITORING AND ANALYZING COMMUNICATION LATENCY IN DISTRIBUTED REAL-TIME SYSTEMS

A thesis presented to the faculty of the Fritz J. and Dolores H. Russ College of Engineering and Technology of Ohio University, in partial fulfillment of the requirements for the degree Master of Science.

By Ming Liang, June 2003

This thesis entitled MONITORING AND ANALYZING COMMUNICATION LATENCY IN DISTRIBUTED REAL-TIME SYSTEMS by Ming Liang has been approved for the School of Electrical Engineering and Computer Science and the Russ College of Engineering and Technology by Jeffery Dill, Professor of Computer Science, and Dennis Irwin, Dean, Fritz J. and Dolores H. Russ College of Engineering and Technology.

LIANG, MING. M.S. June 2003. Computer Science
Monitoring and Analyzing Communication Latency in Distributed Real-time Systems (63 pp.)
Director of Thesis: Jeffery Dill

Abstract

This thesis presents mathematical models for computing communication latency in LANs based on Ethernet technology, together with a latency monitoring tool for distributed real-time systems. The models contain two sets of formulas, one for a dedicated and one for a contention network environment. In a dedicated network environment, latency is computed by analyzing the host delay time and the network delay time. In a contention network environment, latency is computed using statistical methods: the mean value of the communication latency is obtained by analyzing the host delay time, the network delay time, and the traffic waiting time. Experiments in two typical LANs, connected with a hub and with a switch, have been carried out to show the accuracy of the models.

Approved: Jeffery Dill, Professor of Computer Science

Table of Contents

List of Tables
List of Figures
Chapter 1 - Introduction
Chapter 2 - Related Work
Chapter 3 - The Measurement Tool
    3.1 Problem Definition
    3.2 The Measurement Tool
Chapter 4 - Analysis of the Communication Latency
    4.1 Analysis of Latency in Dedicated Network Environment
        Host Delay Time Analysis
        Network Delay Time Analysis
        Network Device Delay Time Analysis
        Conclusion
    4.2 Analysis of Latency in Contention Network Environment
        LAN Construction and Network Device Knowledge
        The CSMA/CD Protocol
        Waiting Time Analysis
        Conclusion
Chapter 5 - Experiments and Results
    5.1 Experiments in Dedicated Network Environment
        Hosts Connected by Hubs in Dedicated Network Environment
        Hosts Connected by Switch in Dedicated Network Environment
    5.2 Experiments in Contention Network Environment
        Hosts Connected by LAN Hub
        Hosts Connected by LAN Switch
Chapter 6 - Conclusions and Future Work
References

List of Tables

Table 5.1: Statistics of the measured latency and the expected latency in a dedicated network environment (hosts connected by hub)
Table 5.2: Statistics of the measured latency and the expected latency in a dedicated network environment (hosts connected by switch)
Table 5.3: Statistics of the measured average latency and the expected average latency in a contention LAN connected with a hub (packet size = 1000 bytes, traffic packet size = 50K)
Table 5.4: Statistics of the measured average latency and the expected average latency in a contention LAN connected with a hub (packet size = 100 bytes, traffic packet size = 50K)
Table 5.5: Statistics of the measured average latency and the expected average latency in a contention LAN connected with a hub (packet size = 100 bytes, traffic packet size = 30K)

List of Figures

Figure 1.1: A path-based resource management model for dynamic and distributed real-time systems
Figure 3.1: Three components of the communication latency
Figure 3.2: The method to measure communication latency
Figure 4.1: Host delay time analysis
Figure 4.2: Data format from the application layer to the network layer
Figure 4.3: Ethernet frame format
Figure 5.1: Layout of the LAN test bed in a dedicated network environment (hosts connected by hub)
Figure 5.2: Measured latency vs. expected latency in a dedicated network environment (hosts connected by hub)
Figure 5.3: Layout of the LAN test bed in a dedicated network environment (hosts connected by switch)
Figure 5.4: Measured latency vs. expected latency in a dedicated network environment (hosts connected by switch)
Figure 5.5: Layout of the LAN test bed in a contention network environment
Figure 5.6: Traffic model and the communication model
Figure 5.7: Layout of the LAN test bed in a contention network environment (hosts connected by hub)
Figure 5.8: Samples of round trip time to transmit 100 bytes of data with different network loads
Figure 5.9: Sorted samples of round trip time to transmit 100 bytes of data with different network loads
Figure 5.10: Traffic packets distributed in the network medium in the experiment
Figure 5.11: Measured average latency vs. expected average latency in a contention LAN connected with a hub (packet size = 1000 bytes, traffic packet size = 50K)
Figure 5.12: Measured average latency vs. expected average latency in a contention LAN connected with a hub (packet size = 100 bytes, traffic packet size = 50K)
Figure 5.13: Measured average latency vs. expected average latency in a contention LAN connected with a hub (packet size = 100 bytes, traffic packet size = 30K)
Figure 5.14: Layout of the LAN test bed in a contention network environment (hosts connected by switch)
Figure 5.15: Measured latency in a contention network environment with different network loads (hosts connected by switch)

Chapter 1 - Introduction

Real-time systems are computer and communication systems in which the applications or tasks have explicit timing requirements. Distributed real-time systems are real-time systems running in an environment in which autonomous machines communicate via various communication media [1]. Correct performance of a real-time system requires both the logical correctness of each executed task and its timing correctness, that is, the system must meet the timing requirements of each task. From the evolution of real-time theory, real-time systems can be classified into three major types: (1) priority-driven systems; (2) priority-driven systems with enhanced time; (3) time-driven scheduling systems, which are time-deterministic and time-motivated systems. In order to provide a suitable facility for meeting real-time requirements, real-time resource management must be introduced. The functions of a real-time resource management system vary from system to system, but most include (1) monitoring time constraints and detecting deadline violations; (2) task scheduling; (3) resource allocation. In distributed real-time systems, the resource management system deals with a very important resource: the communication subsystem. Therefore, monitoring time constraints must include monitoring the communication time, and resource allocation should include allocating network resources. One typical example of a distributed, scalable, dynamic real-time system is the resource management model developed by Welch et al. (1998). The main components of the model are Quality of Service (QoS) monitoring and violation detection, QoS diagnosis, and resource

allocation [2]. Figure 1.1 shows the model and the management process of the resource management model.

Figure 1.1: A path-based resource management model for dynamic and distributed real-time systems (the resource management middleware, comprising resource allocation, QoS diagnosis, and QoS monitoring, sits between the real-time control system, the operating system, and the distributed hardware)

In the QoS monitoring component, Welch's model monitors the end-to-end path latency by computing the data flow transmission time between two end points. However, this is not a satisfactory technique for network monitoring because it cannot provide predictable behavior of the communication subsystem, which is important to the performance of a distributed real-time system. Since it is widely accepted that most real-time systems are clusters of computers on a network, clusters grow larger and larger, with an increasing probability of failing to satisfy the required time constraints due to network resource contention.

Furthermore, as these computers are networked using some type of LAN, communication bottlenecks are certain to occur due to the limited available bandwidth. In many cases, the cluster may be heterogeneous, or there may be many applications competing for limited resources. A QoS violation happens when there are unexpected bottlenecks or latencies on a certain network path. Therefore, determining where performance problems lie and which part causes them is one of the time-consuming challenges. Furthermore, since resource management needs to reschedule jobs when a QoS violation happens, the effects of rescheduling jobs on different nodes and on the QoS of the communication paths must be studied. It is on this basis that technologies for monitoring and analyzing network status have attracted the attention of the research community. Although most resource management systems contain a latency monitor as one of their components, the evaluation of these monitors is often ignored. Most latency monitors take the time difference between the start of sending and the completion of receiving at the destination host as the communication latency. However, they seldom evaluate the data to determine how well the results reflect the current network status. For example, in a contention network, a few samples cannot tell how busy the network is, because many transmissions may fail to meet their time constraints even if the monitor gathers some successful samples. To evaluate latency monitors, the results need to be analyzed to retrieve the underlying pattern. Significant work has been done on monitoring the communication time and analyzing the communication delay in order to predict the behavior of communication subsystems. Levi et al. (1990) provided a model of communication delay in real-time systems. This model decomposes the communication delays into deterministic and

non-deterministic parts [1], and algorithms were developed to reduce the error term to the non-deterministic part by compensating for the deterministic part. However, this model only gave a brief concept of the communication delay components, and it is hard to use the model directly in real-time systems. Furthermore, it deals little with how to analyze the delay in a contention network, which is one of the common causes of QoS violations. This thesis describes a tool to measure the communication delay and presents a model to analyze the latency in dedicated and contention network environments with varying workloads, in order to evaluate the latency monitor. In a dedicated network environment, only one pair of hosts is communicating through the network. In a contention network environment, several communicating pairs are competing for network resources. The model provides methods to compute the exact communication latency in a dedicated network environment and the mean value of the latency pattern in a contention environment. Local area networks using Ethernet technology are examined, but the analysis of other networks can be carried out in the same manner. The results can be used to evaluate other latency monitors and to predict the communication latency in distributed real-time systems. This thesis is organized as follows. Chapter 2 introduces related work in latency monitoring and analysis. Chapter 3 provides a detailed definition of the communication latency to be measured and describes the tool that is used to measure the latency in this thesis. In Chapter 4, mathematical models of the latency for dedicated and contention network environments are presented. Experiment results are shown in Chapter

5 to demonstrate the accuracy and usefulness of the methodology. Conclusions are drawn and future work is described in Chapter 6.

Chapter 2 - Related Work

In the past ten years, much work has been done to improve the reliability of distributed real-time systems. For this purpose, many monitoring tools and methods for monitoring network and host information were developed. For example, a method for monitoring distributed real-time systems, developed by Raju et al. (1992), detects violations of timing assertions for distributed real-time systems running on multiple processors [5]. In such a system, the time constraints include inter-processor and intra-processor constraints. The method uses a set of graphs to represent the time constraints of different tasks, collects timestamps (the times at which events happen), and records the relevant event occurrences. By analyzing the past history as further events are recorded and checking the constraint graphs for potential violations, the monitoring system provides feedback to the rest of the system, such as the operator, the application tasks, or the scheduler. Another monitor, developed by Islam et al. (2000), reports the performance of a network running a distributed real-time system. It determines the amount of network load that each host places on the network [6]. It also measures the delay of the network by sending time-stamped packets from one host to another. Based on these measurement results, the monitor derives a network load index for real-time resource management decisions. Chen et al. (2002) developed a technique to monitor the network bandwidth. The technique obtains bandwidth information for each host in the network by

querying the SNMP database, a database that records network status data [8]. It then computes the network bandwidth utilization of each real-time communication path based on the network topology information. The network-monitoring program provides real-time network performance information to the resource management middleware and helps the middleware detect potential QoS violations due to a sub-optimal allocation of network resources. This thesis focuses on a narrow aspect of the monitoring problem: monitoring the communication delay between hosts connected by a local area network. To monitor the communication delay, most monitors use the difference between timestamps. However, to provide useful and predictable information for distributed real-time systems, analyzing the delay is necessary. Feng et al. (1994) developed two methods to estimate the delay time in order to test whether message deadlines are met in distributed real-time systems. In the first method, called the independent method, the message delay time is divided among three subsystems: the sender host subsystem, the network subsystem, and the receiver host subsystem [3]. This method obtains the estimated delay time by analyzing the scheduling process in each subsystem independently; the three values are then added together to obtain the estimated transmission time. The second method, called the integrated method, considers the interaction of the subsystems. It considers the situation in which the message is divided into several small packets and focuses on the arrival and departure of these packets between the subsystems. However, both methods are based on the assumption that the network subsystem provides a bounded message transfer delay.

Therefore, these two methods are useful for FDDI or Token Ring, but cannot be used for Ethernet because Ethernet is not a bounded network technology. Tindell et al. (1994) provided a different approach to analyzing the end-to-end communication delay in a distributed hard real-time system. In this approach, the communication delay is made up of four parts: (1) the generation delay, the time for the application to generate the message and queue it; (2) the queuing delay, the time the message spends waiting in a queue to reach the communication media; (3) the transmission delay, the same as the propagation time, the time required for a signal to travel from one point to another; (4) the delivery delay, the time taken from the receiving buffer to the destination task [4]. By analyzing two communication protocols, a simple token passing approach and a real-time priority broadcast bus, Tindell derived a scheduling approach that bounds the media access delay and the delivery delay. This approach allows a complete distributed system, including both the underlying medium and the applications using it, to be subjected to the same level of predictability as single-processor platforms. Sjodin (1997) showed methods to calculate end-to-end response times for distributed hard real-time systems over Asynchronous Transfer Mode (ATM) networks based on priority-driven CPU scheduling and output queues using FIFO priority. The methods divide the network delay into four types: (1) fixed delay components, (2) bounded variable components, (3) traffic shapers, and (4) output port queues. By analyzing each component in ATM networks, they determine the worst-case response time and how much buffer memory is needed in each ATM switch in the network [7].

This thesis provides a tool to measure the communication latency for real-time resource management and focuses on latency analysis over Ethernet technology. It describes two sets of formulas to compute the latency in dedicated and contention environments, and applies the models to two typical LAN devices, the hub and the switch. The results are used to evaluate the latency monitor in order to demonstrate its accuracy.

Chapter 3 - The Measurement Tool

Real-time systems in a distributed environment require a great deal of communication over the network. Thus, tools to measure the network status are necessary in order to provide network status information and a predictable transmission delay. Since many real-time systems require that the delay between the application layer at the sender host and the application layer at the receiver host be bounded, this thesis studies how to measure and analyze the packet transmission delay from application layer to application layer. This chapter provides a detailed definition of the communication latency and a clear understanding of its components.

3.1 Problem Definition

In this thesis, the communication latency is studied based on the layer concept. In the ISO/OSI network model, there are seven network layers: physical, data link, network, transport, session, presentation, and application. However, a simpler model is normally used, which contains only five layers: physical, data link, network, transport, and application. In this simple model, the application layer includes the session, presentation, and application layers of the ISO/OSI model. Communication latency can simply be considered the time required for a message to reach its destination. Assume that a real-time application is communicating with another host across the network in a client-server model. The client sends a request to

the server, and the server responds with one or more messages in reply. The client may then send another request to the server. In general, a transaction (e.g., placing an order or performing a query) may consist of a number of client requests and corresponding server responses. The time elapsed from when the client starts sending the first packet to when the server finishes receiving the last packet is referred to herein as the communication latency. Network, server, and application behavior all contribute to the communication latency. Figure 3.1 shows the three components of the communication latency.

Figure 3.1: Three components of the communication latency (sender delay, network delay, receiver delay)

The sender delay, network delay, and receiver delay are denoted T_sender, T_network, and T_receiver, respectively, and the communication latency L is defined as

L = T_sender + T_network + T_receiver [1]

3.2 The Measurement Tool

This section describes a simple tool that was developed to measure the communication latency. The tool measures the round trip time between two end points in order to obtain the communication latency. In most cases, the communication latency can

be considered to be half of the round trip time. The reason for measuring the round trip time instead of the one-way latency is to avoid the system time difference between the two hosts. The tool uses the client-server model, which is the most popular model for real-time system communication. The client plays the master role and the server plays the assistant role. The master sends a UDP packet (data packed under the User Datagram Protocol [9]) to the assistant, receives a packet of the same size back from the assistant, and calculates the round trip time of this cycle. The assistant receives the packet from the master and then sends it back. Because UDP lets application programs communicate with a minimal protocol mechanism, it minimizes the protocol's effect on the latency. Therefore, UDP over IP is chosen as the transport protocol in this monitoring tool. Figure 3.2 shows the latency monitor model.

Figure 3.2: The method to measure communication latency (sender and receiver connected by the LAN; the sender computes the round trip time)
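The thesis does not list the tool's source code. A minimal sketch of the same round-trip measurement idea, written with Python's standard socket module rather than the BSD C sockets used on the SunOS hosts, might look like the following; the port number, packet size, and sample count are illustrative and not taken from the thesis.

```python
import socket
import sys
import time

PORT = 5005          # illustrative port, not from the thesis
PACKET_SIZE = 1000   # user data size in bytes (S_user), illustrative

def assistant(bind_addr="0.0.0.0"):
    """Receiver role: echo every UDP packet back to its sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_addr, PORT))
    while True:
        data, addr = sock.recvfrom(65535)
        sock.sendto(data, addr)          # send the same-size packet back

def master(server_addr, size=PACKET_SIZE, samples=100):
    """Sender role: measure round trip time and estimate one-way latency."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    payload = b"x" * size
    for _ in range(samples):
        start = time.perf_counter()
        sock.sendto(payload, (server_addr, PORT))
        sock.recvfrom(65535)             # wait for the echoed packet
        rtt = time.perf_counter() - start
        # one-way latency approximated as half of the round trip time
        print(f"size={size} rtt={rtt * 1e6:.0f} us latency={rtt * 5e5:.0f} us")

if __name__ == "__main__":
    if sys.argv[1] == "assistant":
        assistant()
    else:
        master(sys.argv[2])
```

In this sketch, one host would run the assistant role and the other would run the master role, pointing it at the assistant's address.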

Chapter 4 - Analysis of the Communication Latency

In order to evaluate the tool described in Chapter 3 and show the accuracy of its results, the communication latency is computed in this chapter by analyzing its three components: the time delay in the sender, the time delay in the receiver, and the time delay across the network. Since the sender delay time and the receiver delay time are close, the host delay is used in place of the sender delay and the receiver delay in order to simplify the problem. The network contributes to the communication latency through a variety of mechanisms. The selection of protocols (e.g., CSMA/CD or ATM, EIGRP or OSPF, FIFO or CBWFQ) strongly influences the packet transmission time across the network. There are two kinds of delay time: the transmission (or serialization) delay and the propagation delay. The transmission delay is the time elapsed from when the first bit is transmitted until the last bit is placed on the link; it is determined by the link capacity. The propagation delay is the time it takes a bit to travel across the link, and depends on the physical medium and the distance. Packet corruption and loss either degrade the quality of information or introduce additional delay due to the need for retransmission. In enterprise terrestrial networks, the transmission delay is often the dominant component of the network delay; in satellite networks, the propagation delay (coupled with the access protocol) can dominate. The host delay is affected by interdependent factors such as processing delay (a catch-all term for the various actions taken from when a packet is received by a node until it is assigned to a transmission queue), queuing delay (when other packets are present),

application design (e.g., are sessions persistent or transient?), protocol selection (e.g., UDP or TCP, Tahoe or Reno), and network infrastructure. The fewer round trips an application requires to complete a given transaction, the less sensitive it will be to the network infrastructure. However, the number of round trips can itself depend on the network infrastructure because of retransmissions.

4.1 Analysis of Latency in Dedicated Network Environment

Recall that

L = T_sender + T_network + T_receiver [1]

In our environment, the host type, operating system, and host configuration are roughly the same, so the above formula takes the following form:

L = 2 * T_host + T_network [2]

Host Delay Time Analysis

In order to analyze the communication latency in the hosts, the LAN architecture based on the five-layer model is examined. The five layers are, from bottom to top, the physical layer, data link layer, network layer, transport layer, and application layer. Together these layers constitute the local area network (LAN) protocols. The data flow in a host is as follows: (1) data passes from the application layer to the socket layer; in this step, the data is physically copied from the application buffer to the socket queue; (2) data passes from the socket layer to the protocol layer; in this step, the data is copied from the socket queue to the network queue, which may be a logical copy or a physical copy, depending on the configuration and the operating system; (3) data passes from the protocol layer to the network layer. Figure 4.1 shows the way data transfers between layers.

Figure 4.1: Host delay time analysis (data flows from the application layer (application buffer) to the transport layer (socket queue) by a physical copy, to the network layer (protocol headers added) by a logical copy, physical in some configurations, and to the data link layer (divided into frames, frame headers added) by a physical copy before reaching the Ethernet; together these steps make up the host delay)

From the above analysis, the host delay time can be divided into two different types of time: fixed processing time and variable processing time, denoted by T_h_fix and T_h_variable, respectively.

T_host = T_h_fix + T_h_variable [3]

T_h_fix, the fixed processing time in the host, is the time that is not affected by the user data size. It includes the function calls, the time to add the UDP header, IP header, and Ethernet header, and the network driver processing time. The fixed processing time is only roughly fixed. It is still affected by many factors, such as how many tasks are running on the computer, the volume of data flowing through the network driver, and other conditions. However, compared to the network transfer time, these contributions are so small, normally within 10 microseconds, that they can be ignored. But if a large amount of data is passing through the network driver, the latency may increase significantly. The

increased time is mostly the time that the data frame waits in the network queue before it can be put onto the physical network. In this section, the above factors are ignored in order to simplify the analysis of the communication latency. T_h_variable, the variable processing time in the host, can logically be considered the physical copying time. Suppose copying one byte takes k μs and the user data size is S_user. Then

T_h_variable = k * S_user [4]

k is fixed by the host model, the operating system, and the system configuration, such as the memory allocation mechanism. Therefore,

T_host = T_h_fix + k * S_user [5]

T_h_fix, the fixed processing time in the host, and k must be obtained before the host delay time can be computed. Theoretically, the two numbers are fixed and can be computed if the host type, operating system, and configuration are known. However, it is too complex to obtain T_h_fix and k by analyzing the operating system, because many factors affect these two numbers. Therefore, a much easier way is applied: if two values of the host delay time with different user data sizes S_user are obtained, T_h_fix and k can be calculated easily, as shown in the next chapter. Since the measurement tool only obtains the whole communication latency, T_host can be known only after T_network is known. The rest of this chapter shows how to obtain T_network from the user data size.

Network Delay Time Analysis

Two main factors affect the network delay time: the physical network type and the data size. In Ethernet, data is put onto the physical network at a fixed rate, normally 10 megabits

per second; in Fast Ethernet, this rate is 100 megabits per second. If there are S bits of data in the network queue waiting to be put onto the network medium, it takes S/10M seconds from the time the first bit reaches the medium until the last bit reaches the medium. If the transmission rate is denoted as TR, then

T_network = S_network / TR [6]

Therefore, the transmission time in the network system is calculated from S_network, the amount of data that is put onto the network wire. However, S_network is not equal to S_user. If a user requests the transmission of S_user bytes of data, more data will be put onto the network, because each layer adds its overhead to the message. S_network can be computed from S_user. From the logical layer view, data flows from the application layer to the transport layer, then to the network layer, and finally reaches the data link layer and the physical layer. In the transport layer, there are two major transport protocols: the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Both protocols add a protocol header to the user data. If UDP is used, an 8-byte header is added to the user data; if TCP is used, a TCP header of at least 20 bytes is added. In the network layer, the Internetwork Protocol (IP) is the most popular protocol; the IP header contains 20 bytes. Figure 4.2 shows the header format when user data reaches the network layer based on the UDP/IP protocols.

Figure 4.2: Data format from the application layer to the network layer (IP header, 20 bytes | UDP header, 8 bytes | user data)

In the data link layer, the user data is packed into network transmission units, specifically frames in Ethernet. Each frame carries a 23-byte frame header and a CRC at the end of the frame. Figure 4.3 shows the format of an Ethernet frame.

Figure 4.3: Ethernet frame format (preamble, 8 bytes | destination address, 6 bytes | source address, 6 bytes | frame type, 2 bytes | data in frame, 46-1500 bytes | CRC-32, 4 bytes)

Each frame contains no more than 1500 bytes and no less than 46 bytes of data. If the user data is less than 46 bytes, the frame still transmits 46 bytes. If the user data size is larger than 1500 bytes, it is divided into several frames. Furthermore, Ethernet devices must allow a minimum idle period between the transmission of frames, known as the interframe gap (IFG) or interpacket gap (IPG). It provides a brief recovery time between frames to allow devices to prepare for reception of the next frame. The minimum interframe gap is 96 bit times (12 bytes), which is 9.6 microseconds for 10 Mb/s Ethernet, 960 nanoseconds for 100 Mb/s Ethernet, and 96 nanoseconds for 1 Gb/s Ethernet. To transfer n frames, there are n-1 interframe gaps, which can be treated as (n-1) * 12 additional bytes put onto the physical network. Now S_network can be calculated from S_user.

1) If S_user + overhead < 46, there is only one frame:

S_network = 46 + 23 = 69 bytes [7]

2) If 46 < S_user + overhead < 1500, there is also only one frame:

S_network = 23 + overhead + S_user [8]

3) If 1500 < S_user + overhead, the data is divided into n frames and there are n-1 interframe gaps between them:

n = (S_user + overhead) / 1500, rounded up to an integer [9]

S_network = S_user + overhead + (n * 23) + ((n-1) * 12) [10]

Now S_network is known, and T_network, the network delay time, can easily be computed by formula [6]: T_network = S_network / TR, where TR is the Ethernet transmission rate. A 10BaseT Ethernet supports a transmission rate of 1.25 Mbytes/s, so TR = 1.25 Mbytes/s; for a 100 Mb/s Ethernet, TR = 12.5 Mbytes/s.
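The frame-packing rules above translate directly into a small calculation. The following sketch is not from the thesis; it implements formulas [7] through [10] as written, assuming the 28-byte UDP/IP overhead described earlier.

```python
import math

UDP_IP_OVERHEAD = 28    # 8-byte UDP header + 20-byte IP header
FRAME_HEADER = 23       # per-frame header bytes, as used in formulas [7]-[10]
IFG_BYTES = 12          # interframe gap expressed in byte times
MAX_FRAME_DATA = 1500   # maximum data bytes per Ethernet frame
MIN_FRAME_DATA = 46     # minimum data bytes per Ethernet frame

def s_network(s_user, overhead=UDP_IP_OVERHEAD):
    """Bytes actually placed on the wire for s_user bytes of user data (formulas [7]-[10])."""
    data = s_user + overhead
    if data < MIN_FRAME_DATA:                 # case 1: padded to the minimum frame
        return MIN_FRAME_DATA + FRAME_HEADER  # 46 + 23 = 69
    if data <= MAX_FRAME_DATA:                # case 2: a single frame
        return FRAME_HEADER + data
    n = math.ceil(data / MAX_FRAME_DATA)      # case 3: n frames, n-1 interframe gaps
    return data + n * FRAME_HEADER + (n - 1) * IFG_BYTES

def t_network(s_user, tr_bytes_per_us=1.25):
    """Network delay in microseconds (formula [6]); TR = 1.25 bytes/us for 10 Mb/s Ethernet."""
    return s_network(s_user) / tr_bytes_per_us

# example: a 1000-byte UDP payload on 10 Mb/s Ethernet
print(s_network(1000), round(t_network(1000), 1))
```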

Network Device Delay Time Analysis

Another factor, the network device type, also affects the network delay time. In most networks, packets pass through some network devices, such as hubs, switches, and/or routers, and these devices need time to handle the packets. Some network devices need only a very small amount of time, which can be ignored. For example, a hub is a device used to provide connectivity between network devices; hubs perform the basic repeater functions of restoring signal amplitude and timing, detecting collisions, and broadcasting signals to the attached devices. Therefore, the time that data needs to pass through a hub can be ignored. However, the delay time of some other devices affects the accuracy of the expected latency and should be accounted for in the latency model:

L = T_host + T_network + T_device [11]

For example, a switch is composed of plates, braces, and switching chips. It checks the hardware address of every passing packet and forwards the packet to the destination host, and can be considered a simple computer. Therefore, to analyze the delay time of a switch, the model used to analyze the host delay time can be applied. A router can also use the host delay time model.

T_device = T_n_fix + k_n * S_network [12]

As before, T_n_fix and k_n can be calculated from two sample points with known packet sizes, in the same way as T_h_fix and k.

Conclusion

For data passing through a hub, repeater, or bridge,

L_d = (T_h_fix + k * S_user) * 2 + S_network / TR [13]

where

L_d: communication latency (dedicated network environment)
S_user: user data size
T_h_fix: fixed inter-communication time in the host system
k: system processing time per byte in the host system
S_network: network data size (user data size plus overhead), computed by formula [7], [8], or [10]
TR: maximum transmission rate

For Ethernet, TR = 1.25 Mbytes/s; for Fast Ethernet, TR = 12.5 Mbytes/s; for Gigabit Ethernet, TR = 125 Mbytes/s.

For data passing through a switch or router,

L_d = (T_h_fix + k * S_user) * 2 + S_network / TR + T_n_fix + k_n * S_network [14]

where

T_n_fix: fixed inter-communication time in the network device
k_n: system processing time per byte in the network device

4.2 Analysis of the Latency in Contention Network Environment

All of the above formulas were established for a dedicated network environment, meaning that no other network interface is using the network path, including the network wire and the network device. However, in distributed real-time systems, the latency in a contention environment is what people really care about, because contention causes a significant increase in the latency. This section analyzes how other communicating stations in the same network environment affect the communication latency. To understand how the other packets being transmitted in the LAN affect the time to transmit the desired packet, the low-level transmission mechanism of the LAN must be analyzed. The following two sections provide basic background on LAN construction and the Ethernet protocol mechanism.

LAN Construction and Network Device Knowledge

The reason that a contention network environment affects the communication latency is the shared physical medium of the LAN: at any given time, only one signal can be transmitted on one wire. Historically, LANs grew and proliferated in a

shared environment characterized by several LAN access methods. For instance, the MAC (Media Access Control) protocols for Ethernet, Token Ring, and FDDI (Fiber Distributed Data Interface) each have arbitration rules that determine how data is transmitted over a shared physical medium. This thesis focuses only on the communication time across Ethernet. Traditional Ethernet LANs run at 10 Mbps over a common bus-type design. Stations physically attach to this bus through a hub, repeater, or concentrator, creating a broadcast domain. Every host is capable of receiving all transmissions from all hosts, but only in half-duplex mode; this means stations cannot send and receive data simultaneously. Hosts on an Ethernet network follow a simple rule while transmitting messages: they listen before they speak. In an Ethernet environment, only one host is allowed to transmit at any time, due to the CSMA/CD protocol (Carrier Sense Multiple Access with Collision Detection). Stations in Ethernet are connected directly or through other network devices, such as bridges, hubs, switches, and routers. These network devices all attempt to reduce transmission time in order to increase overall performance. For example, a 2-port bridge splits a logical network into two physical segments and only lets a transmission cross if its destination lies on the other side. It forwards packets only when necessary, reducing network congestion by isolating traffic to one of the segments. A hub, repeater, or concentrator is shared by all the stations connected to the network; only one node is allowed to transmit a packet at a time. Stations connected by these media compete with one another while communicating.

Therefore, the communication latency between one pair of stations is affected by the communication between other stations. There are two modes of LAN hubs, the half-duplex hub and the full-duplex hub. When a hub works in full-duplex mode, data is passed in both directions simultaneously. When a hub works in half-duplex mode, sending and receiving cannot occur concurrently. Unlike hubs, switches examine each packet and process it accordingly, rather than simply repeating the signal to all ports. Switches map the Ethernet addresses of the nodes residing on each network segment and then allow only the necessary traffic to pass through the switch. When a packet is received by a switch, the switch examines the destination and source hardware addresses and compares them to a table of network segments and addresses. If the segments are the same, the packet is dropped ("filtered"); if the segments are different, the packet is "forwarded" to the proper segment. Additionally, switches prevent bad or misaligned packets from spreading by not forwarding them. The filtering of packets and the regeneration of forwarded packets enable switching technology to split a network into separate collision domains. The regeneration of packets allows for greater distances and more nodes in the total network design, and dramatically lowers the overall collision rate. In switched networks, each segment is an independent collision domain; in shared networks, all nodes reside in one big, shared collision domain. As a result, if hosts are connected by a switch, each communicating pair can be considered to be communicating in a dedicated network environment.

The CSMA/CD Protocol

The process of transmitting a message involves gaining control of the communication medium. In Ethernet, the protocol that governs how and when the hosts are allowed to transmit is called Carrier Sense Multiple Access with Collision Detection, or CSMA/CD for short. With CSMA/CD, the network is monitored for a "carrier", that is, the presence of a transmitting station. This process is known as "carrier sense". If an active carrier is detected, transmission is deferred; the station continues to monitor the network until the carrier ceases. If no active carrier is detected, and the period of no carrier is equal to or greater than the interframe gap, the station immediately begins transmission of the frame. The time that elapses from the moment the station starts monitoring the network medium to the moment it successfully starts transmitting is denoted T_traffic. In a dedicated environment, T_traffic = 0. If the communication latency in a contention environment is denoted L_c, and the latency in a non-contention environment is denoted L_d, then

L_c = L_d + T_traffic [15]

When two stations transmit simultaneously over the same medium, the messages are jammed, causing a collision. Under the CSMA/CD protocol, the system keeps monitoring the medium for collisions while stations are sending messages. If a collision is detected, the transmitting host stops sending the frame data and sends a 32-bit "jam sequence". If the collision is detected very early in the frame transmission, the transmitting station will complete sending the frame preamble before starting

transmission of the jam sequence. The jam sequence is transmitted to ensure that the length of the collision is sufficient to be noticed by the other transmitting stations. After sending the jam sequence, the transmitting station waits a random period of time before starting the transmission process over from the first step above. This process is called "backoff". The probability of a repeated collision is reduced by having the colliding stations wait a random period of time before retransmission. If repeated collisions occur, transmission is attempted again, but the random delay is increased with each attempt, which further reduces the probability of another collision. This process repeats until a station transmits a frame without a collision. Once a station successfully transmits a frame, it clears the collision counter that it uses to increase the backoff time after each repeated collision. Research shows that the collision rate is related to the number of hosts, the length of the network medium, and the average number of packets on the network. Research also shows that a collision rate of less than 10 percent does not decrease Ethernet efficiency. Collisions happen rarely when the used bandwidth of the Ethernet is less than 80 percent, and they contribute little to the communication latency. The major cause of increased communication delay is the waiting time: once a packet is sent from a node, the Ethernet LAN will not transfer any other information until that packet reaches its endpoint. Therefore, the analysis in this thesis ignores the latency added by collisions, since collisions happen rarely in a small LAN environment.
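For completeness, the backoff behavior described above can be sketched as a small simulation. This is not part of the thesis model, which ignores collisions; it only illustrates how the random delay grows with repeated collisions under the truncated binary exponential backoff rule used by IEEE 802.3 Ethernet. The slot time corresponds to 512 bit times at 10 Mb/s, and the collision probability in the example is an arbitrary assumption.

```python
import random

SLOT_TIME_US = 51.2     # 512 bit times at 10 Mb/s
MAX_ATTEMPTS = 16       # transmission is abandoned after 16 attempts
BACKOFF_CAP = 10        # the backoff exponent stops growing after 10 collisions

def backoff_delay_us(collisions):
    """Random wait after the given number of consecutive collisions
    (truncated binary exponential backoff)."""
    exponent = min(collisions, BACKOFF_CAP)
    slots = random.randint(0, 2 ** exponent - 1)
    return slots * SLOT_TIME_US

def simulate_transmission(p_collision=0.05):
    """Total backoff time spent before one frame is sent without a collision."""
    total_wait = 0.0
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if random.random() >= p_collision:   # no collision: frame goes through
            return total_wait
        total_wait += backoff_delay_us(attempt)
    return total_wait                        # frame dropped after too many attempts

print(f"backoff overhead: {simulate_transmission():.1f} us")
```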

Waiting Time Analysis

This section presents a simple set of formulas to characterize the latency performance expected of a heavily loaded Ethernet environment. Here, a heavily loaded network means that the used bandwidth is larger than 80 percent of the total bandwidth. In a CSMA/CD system, a host is allowed to send its message only upon detecting that the medium is idle. The system can only know whether the medium is busy or not; it is difficult to predict how long a packet will wait in the queue before it can be put onto the network. For this reason, this thesis uses a statistical analysis method: it computes the average communication latency and the worst-case (maximum) communication latency. When a host attempts to transmit a packet, it may face two possible situations: (1) the network is idle, which means it can start to transmit the packet immediately, so T_wait = 0; (2) another packet is in transmission, so the host must wait for a period of time. Assume that the traffic pattern on the network medium is already known. Let the probability of the second situation be P1; then the probability of the first situation is 1 - P1. Let T_wait be the average waiting time before a station successfully acquires the Ethernet for its transmission. The expected latency, denoted L_c, to transmit the packet is

L_c = (1 - P1) * L_d + P1 * (L_d + T_wait) = L_d + P1 * T_wait [16]

Given a traffic model, let S_traffic be the average packet size in the contention Ethernet, and let n be the average number of packets per second. The current bandwidth

usage is n * S_traffic. Because the Ethernet peak capacity is TR, the probability of encountering a busy network is (n * S_traffic) / TR. Simply,

P1 = (n * S_traffic) / TR [17]

Consider the situation in which a packet is already in transmission on the network when a host attempts to transmit. The host must wait until the in-transit packet has finished its transmission. In the worst case, when the in-transit packet has just started its transmission, the host has to wait S_in_trans / TR, where S_in_trans is the in-transit packet size. The mean value of this waiting time is ½ * S_in_trans / TR. S_in_trans varies because the packets on the network have different lengths; based on the traffic model, S_traffic, the average traffic packet size, can be used in place of S_in_trans, so the expected value of this waiting time is ½ * S_traffic / TR. In some traffic models, a host may not successfully grab control of the Ethernet after the in-transit packet finishes its transmission. Let P2 be the probability that the attempting host has to wait for two packets sent by other hosts, P3 the probability that it has to wait for three packets, and Pn the probability that it has to wait for n packets. The expected value of T_wait is then

E[T_wait] = ½ * S_traffic / TR + P2 * S_traffic / TR + P3 * S_traffic / TR + ... + Pn * S_traffic / TR [18]

Conclusion

The mean value of the communication latency in a contention network environment is

E[L_c] = L_d + [(n * S_traffic) / TR] * [½ * S_traffic / TR + (P2 + ... + Pn) * S_traffic / TR] [19]

where

L_d: communication latency in the dedicated network environment
S_traffic: the average packet size on the network
n: the average number of packets per second
TR: maximum transmission rate
Pi: the probability that a host has to wait for i packets
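Formula [19] is straightforward to evaluate once a traffic model is assumed. The sketch below is not from the thesis; it takes the dedicated-environment latency L_d as an input (from formula [13] or [14]) and uses purely illustrative traffic values.

```python
def expected_contention_latency(l_d_us, s_traffic, n_packets_per_s,
                                tr_bytes_per_us=1.25, tail_probs=()):
    """Mean communication latency in a contention environment (formula [19]).

    l_d_us          -- dedicated-environment latency L_d in microseconds
    s_traffic       -- average traffic packet size in bytes
    n_packets_per_s -- average number of traffic packets per second
    tr_bytes_per_us -- TR; 1.25 bytes/us for 10 Mb/s Ethernet
    tail_probs      -- (P2, P3, ..., Pn), probabilities of waiting for 2, 3, ... packets
    """
    tr_bytes_per_s = tr_bytes_per_us * 1e6
    p1 = (n_packets_per_s * s_traffic) / tr_bytes_per_s       # formula [17]
    one_packet_wait_us = s_traffic / tr_bytes_per_us
    # formula [18]: half a packet on average, plus any extra packets waited for
    e_wait_us = 0.5 * one_packet_wait_us + sum(tail_probs) * one_packet_wait_us
    return l_d_us + p1 * e_wait_us                             # formula [19]

# example: L_d = 900 us, 50 Kbyte traffic packets, 15 packets/s on 10 Mb/s Ethernet
print(round(expected_contention_latency(900, 50_000, 15), 1))
```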

Chapter 5 - Experiments and Results

This chapter presents experiments aimed at verifying the accuracy of the latency measurement tool against the formulas of the previous chapter. The experiments were done in a real-time research lab. The stations in the lab form an Ethernet LAN; they are connected with a 100 Mb/s switch and a 100 Mb/s hub. To minimize interference from other users and network traffic, independent LANs were constructed for some of the following experiments.

5.1 Experiments in Dedicated Network Environment

This section presents two experiments. In the first experiment, the round trip time of a communicating pair connected by a half-duplex hub is measured. By sending data of various sizes over Ethernet, the results are compared with the formulas in Chapter 4, and the percentage error is calculated to show the accuracy of the measured latency. In the second experiment, the two stations are connected by a switch, another common network device, and, as in the first experiment, the results are compared with the latency prediction.

Hosts Connected by Hubs in Dedicated Network Environment

This experiment shows the measured latencies under various workloads and compares the results with the predicted results in a dedicated network environment in which the hosts are connected by a LAN hub. To measure the communication latency in a dedicated network environment, an independent LAN was constructed. The LAN contains only two stations connected by a 10 Mb/s half-duplex hub, as shown in Figure 5.1. Because the sender of

the network monitor does not send the next packet until it receives the previous reply from the receiver, the choice of a half-duplex or full-duplex hub does not affect the experiment result. The two stations run the SunOS 5.7 operating system and use BSD sockets as the network interface. Both stations have two network interface cards, and one of them is connected to a 10 Mb/s hub in this experiment. The Ethernet maximum transmission rate is 10 Mb/s, which is equal to 1.25 Mbytes per second.

Figure 5.1: Layout of the LAN test bed in a dedicated environment (hosts connected by hub): H1 (secure-rm.ece.ohiou.edu) - hub - H2 (america.ece.ohiou.edu)

Recall from formula [13] that L_d = (T_h_fix + k * S_user) * 2 + S_network / TR. T_h_fix and k are fixed system parameters. This section shows how to obtain T_h_fix and k by extracting two points from the experiment. From the measurement tool, two sample points were obtained: (1) when S_user = 110, L = 278 μs; (2) when S_user = 59010, L = μs. Then, when S_user = 110, S_network = 138 (from formula [8]), and when S_user = 59010, S_network = 60546 (from formula [10]).

It is also known that TR = 10 Mb/s, which is 1.25 bytes per μs. This leads to the following two equations:

a) (T_h_fix + k * 110) * 2 + 138 / 1.25 = 278
b) (T_h_fix + k * 59010) * 2 + 60546 / 1.25 = the measured latency of the second sample

Solving a) and b) gives T_h_fix = 72 μs and the value of k. Since these two numbers are related to the hardware and software of the hosts, they are used as constants in all the following experiments. After computing T_h_fix and k, the communication latency for various user data sizes can be computed from L_d = (T_h_fix + k * S_user) * 2 + S_network / TR with these values. Figure 5.2 compares the measured round trip time, obtained from the latency monitor, with the expected round trip time, obtained from the model in Chapter 4. The x-axis is the packet size, increasing by 1 Kbyte each cycle; the y-axis is the round trip time. The experiment shows that the error is very small, which can be seen more clearly in Table 5.1.

Hosts Connected by Switch in Dedicated Network Environment

This experiment shows the accuracy of the latency monitor when monitoring a LAN connected by a switch. The experiment environment is shown in Figure 5.3. The two endpoints of the communication path are hosts with the SunOS 5.7 operating system and 100 Mb/s network interface cards. The switch is a 100 Mb/s switch. Therefore, in this experiment, TR = 12.5 Mbytes per second.
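The two-point calibration just described (and repeated below for the switch parameters T_n_fix and k_n) amounts to solving two linear equations. A minimal sketch follows; it is not from the thesis, and the sample values in the example call are purely illustrative rather than the thesis measurements.

```python
def calibrate(sample1, sample2, tr=1.25):
    """Solve formula [13] for (T_h_fix, k) from two (s_user, s_network, latency_us) samples,
    with TR in bytes per microsecond (1.25 for 10 Mb/s Ethernet)."""
    s_user1, s_net1, l1 = sample1
    s_user2, s_net2, l2 = sample2
    # L = (T_h_fix + k * S_user) * 2 + S_network / TR  =>  host part = (L - S_network/TR) / 2
    host1 = (l1 - s_net1 / tr) / 2.0
    host2 = (l2 - s_net2 / tr) / 2.0
    k = (host2 - host1) / (s_user2 - s_user1)
    t_h_fix = host1 - k * s_user1
    return t_h_fix, k

def predict_latency(s_user, s_network, t_h_fix, k, tr=1.25):
    """Dedicated-environment latency L_d (formula [13])."""
    return (t_h_fix + k * s_user) * 2 + s_network / tr

# illustrative sample points only, not the thesis measurements
t_h_fix, k = calibrate((100, 151, 300), (50000, 51300, 85000))
print(round(t_h_fix, 1), round(k, 4))
print(round(predict_latency(1000, 1051, t_h_fix, k), 1))
```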

Figure 5.2: Measured latency vs. expected latency in a dedicated network environment (hosts connected by hub); axes: workload (Kbyte) vs. latency (μs)

Table 5.1: Statistics of the measured latency and the expected latency in a dedicated network environment (hosts connected by hub)

User data size (bytes)          | Average difference (μs) | Average percentage error (%) | Maximum percentage error (%)
0 <= S_user <= 10,000           | -                       | -                            | 1.66
10,000 <= S_user <= 20,000      | -                       | -                            | 0.70
20,000 <= S_user <= 30,000      | -                       | -                            | 0.90
30,000 <= S_user <= 40,000      | -                       | -                            | 0.28
40,000 <= S_user <= 50,000      | -                       | -                            | 0.20
50,000 <= S_user <= 60,000      | -                       | -                            | 0.60

Figure 5.3: Layout of the LAN test bed in a dedicated network environment (hosts connected by switch): H1 - switch - H2

From formula [14],

L_d = (T_h_fix + k * S_user) * 2 + S_network / TR + T_n_fix + k_n * S_network

Since the operating system and the rest of the configuration are the same as for the hosts in experiment 1, the same T_h_fix = 72 μs and k can still be used:

L_d = (72 + k * S_user) * 2 + S_network / TR + T_n_fix + k_n * S_network

In this experiment, T_n_fix and k_n are calculated in the same way as T_h_fix and k. Two sample points are extracted: (1) when size = 1010 bytes, RTT = 704 μs, so L = 352 μs; (2) when size = 52,010 bytes, RTT = 13464 μs, so L = 6732 μs. Then,

a) L = 352 = T_n_fix + k_n * 1061
b) L = 6732 = T_n_fix + k_n * 53251

Solving a) and b) gives T_n_fix = 39 μs and the value of k_n. Therefore,

L_d = (72 + k * S_user) * 2 + S_network / TR + 39 + k_n * S_network

Figure 5.4: Measured latency vs. expected latency in a dedicated network environment, host-switch-host (axes: workload (Kbyte) vs. latency (μs))

Figure 5.4 shows the experiment results, comparing the measured round trip time with the expected round trip time when the hosts are connected by a switch. The x-axis is the packet size and the y-axis is the round trip time. Table 5.2 presents the statistics derived from Figure 5.4. From the above table, it is noticed that the error rate is larger than in experiment 1, where the hosts are connected by a LAN hub. This is caused by two main factors: a) the switch


More information

CS 455/555 Intro to Networks and Communications. Link Layer Addressing, Ethernet, and a Day in the Life of a Web Request

CS 455/555 Intro to Networks and Communications. Link Layer Addressing, Ethernet, and a Day in the Life of a Web Request CS 455/555 Intro to Networks and Communications Link Layer Addressing, ernet, and a Day in the Life of a Web Request Dr. Michele Weigle Department of Computer Science Old Dominion University mweigle@cs.odu.edu

More information

Hubs. twisted pair. hub. 5: DataLink Layer 5-1

Hubs. twisted pair. hub. 5: DataLink Layer 5-1 Hubs Hubs are essentially physical-layer repeaters: bits coming from one link go out all other links at the same rate no frame buffering no CSMA/CD at : adapters detect collisions provides net management

More information

Links Reading: Chapter 2. Goals of Todayʼs Lecture. Message, Segment, Packet, and Frame

Links Reading: Chapter 2. Goals of Todayʼs Lecture. Message, Segment, Packet, and Frame Links Reading: Chapter 2 CS 375: Computer Networks Thomas Bressoud 1 Goals of Todayʼs Lecture Link-layer services Encoding, framing, and error detection Error correction and flow control Sharing a shared

More information

Systems. Roland Kammerer. 10. November Institute of Computer Engineering Vienna University of Technology. Communication Protocols for Embedded

Systems. Roland Kammerer. 10. November Institute of Computer Engineering Vienna University of Technology. Communication Protocols for Embedded Communication Roland Institute of Computer Engineering Vienna University of Technology 10. November 2010 Overview 1. Definition of a protocol 2. Protocol properties 3. Basic Principles 4. system communication

More information

===================================================================== Exercises =====================================================================

===================================================================== Exercises ===================================================================== ===================================================================== Exercises ===================================================================== 1 Chapter 1 1) Design and describe an application-level

More information

CHAPTER 7 MAC LAYER PROTOCOLS. Dr. Bhargavi Goswami Associate Professor & Head Department of Computer Science Garden City College

CHAPTER 7 MAC LAYER PROTOCOLS. Dr. Bhargavi Goswami Associate Professor & Head Department of Computer Science Garden City College CHAPTER 7 MAC LAYER PROTOCOLS Dr. Bhargavi Goswami Associate Professor & Head Department of Computer Science Garden City College MEDIUM ACCESS CONTROL - MAC PROTOCOLS When the two stations transmit data

More information

Media Access Control (MAC) Sub-layer and Ethernet

Media Access Control (MAC) Sub-layer and Ethernet Media Access Control (MAC) Sub-layer and Ethernet Dr. Sanjay P. Ahuja, Ph.D. Fidelity National Financial Distinguished Professor of CIS School of Computing, UNF MAC Sub-layer The MAC sub-layer is a sub-layer

More information

Link Layer and Ethernet

Link Layer and Ethernet Link Layer and Ethernet 14-740: Fundamentals of Computer Networks Bill Nace Material from Computer Networking: A Top Down Approach, 6 th edition. J.F. Kurose and K.W. Ross traceroute Data Link Layer Multiple

More information

Link Layer and LANs 안상현서울시립대학교컴퓨터 통계학과.

Link Layer and LANs 안상현서울시립대학교컴퓨터 통계학과. Link Layer and LANs 안상현서울시립대학교컴퓨터 통계학과 ahn@venus.uos.ac.kr Data Link Layer Goals: understand principles behind data link layer services: error detection, correction sharing a broadcast channel: multiple

More information

The Link Layer and LANs. Chapter 6: Link layer and LANs

The Link Layer and LANs. Chapter 6: Link layer and LANs The Link Layer and LANs EECS3214 2018-03-14 4-1 Chapter 6: Link layer and LANs our goals: understand principles behind link layer services: error detection, correction sharing a broadcast channel: multiple

More information

Redes de Computadores. Medium Access Control

Redes de Computadores. Medium Access Control Redes de Computadores Medium Access Control Manuel P. Ricardo Faculdade de Engenharia da Universidade do Porto 1 » How to control the access of computers to a communication medium?» What is the ideal Medium

More information

Chapter 16 Networking

Chapter 16 Networking Chapter 16 Networking Outline 16.1 Introduction 16.2 Network Topology 16.3 Network Types 16.4 TCP/IP Protocol Stack 16.5 Application Layer 16.5.1 Hypertext Transfer Protocol (HTTP) 16.5.2 File Transfer

More information

CHAPTER 2 - NETWORK DEVICES

CHAPTER 2 - NETWORK DEVICES CHAPTER 2 - NETWORK DEVICES TRUE/FALSE 1. Repeaters can reformat, resize, or otherwise manipulate the data packet. F PTS: 1 REF: 30 2. Because active hubs have multiple inbound and outbound connections,

More information

The Link Layer II: Ethernet

The Link Layer II: Ethernet Monday Recap The Link Layer II: Ethernet q Link layer services q Principles for multiple access protocols q Categories of multiple access protocols CSC 249 March 24, 2017 1 2 Recap: Random Access Protocols

More information

Multiple Access Protocols

Multiple Access Protocols Multiple Access Protocols Computer Networks Lecture 2 http://goo.gl/pze5o8 Multiple Access to a Shared Channel The medium (or its sub-channel) may be shared by multiple stations (dynamic allocation) just

More information

Data Link Layer, Part 3 Medium Access Control. Preface

Data Link Layer, Part 3 Medium Access Control. Preface Data Link Layer, Part 3 Medium Access Control These slides are created by Dr. Yih Huang of George Mason University. Students registered in Dr. Huang's courses at GMU can make a single machine-readable

More information

Chapter 5 Link Layer and LANs

Chapter 5 Link Layer and LANs Chapter 5 Link Layer and LANs Computer Networking: A Top Down Approach 4 th edition. Jim Kurose, Keith Ross Addison-Wesley, July 2007. All material copyright 1996-2007 J.F Kurose and K.W. Ross, All Rights

More information

CIS 551 / TCOM 401 Computer and Network Security. Spring 2007 Lecture 7

CIS 551 / TCOM 401 Computer and Network Security. Spring 2007 Lecture 7 CIS 551 / TCOM 401 Computer and Network Security Spring 2007 Lecture 7 Announcements Reminder: Project 1 is due on Thursday. 2/1/07 CIS/TCOM 551 2 Network Architecture General blueprints that guide the

More information

CS 421: COMPUTER NETWORKS SPRING FINAL May 16, minutes

CS 421: COMPUTER NETWORKS SPRING FINAL May 16, minutes CS 4: COMPUTER NETWORKS SPRING 03 FINAL May 6, 03 50 minutes Name: Student No: Show all your work very clearly. Partial credits will only be given if you carefully state your answer with a reasonable justification.

More information

Lecture 4b. Local Area Networks and Bridges

Lecture 4b. Local Area Networks and Bridges Lecture 4b Local Area Networks and Bridges Ethernet Invented by Boggs and Metcalf in the 1970 s at Xerox Local area networks were needed to connect computers, share files, etc. Thick or Thin Ethernet Cable

More information

CSE/EE 461 Wireless and Contention-Free Protocols

CSE/EE 461 Wireless and Contention-Free Protocols CSE/EE 461 Wireless and Contention-Free Protocols Last Time The multi-access problem Medium Access Control (MAC) sublayer Random access protocols: Aloha CSMA variants Classic Ethernet (CSMA/CD) Application

More information

- Hubs vs. Switches vs. Routers -

- Hubs vs. Switches vs. Routers - 1 Layered Communication - Hubs vs. Switches vs. Routers - Network communication models are generally organized into layers. The OSI model specifically consists of seven layers, with each layer representing

More information

Internetworking is connecting two or more computer networks with some sort of routing device to exchange traffic back and forth, and guide traffic on

Internetworking is connecting two or more computer networks with some sort of routing device to exchange traffic back and forth, and guide traffic on CBCN4103 Internetworking is connecting two or more computer networks with some sort of routing device to exchange traffic back and forth, and guide traffic on the correct path across the complete network

More information

EE 122: Ethernet and

EE 122: Ethernet and EE 122: Ethernet and 802.11 Ion Stoica September 18, 2002 (* this talk is based in part on the on-line slides of J. Kurose & K. Rose) High Level View Goal: share a communication medium among multiple hosts

More information

CS 43: Computer Networks Switches and LANs. Kevin Webb Swarthmore College December 5, 2017

CS 43: Computer Networks Switches and LANs. Kevin Webb Swarthmore College December 5, 2017 CS 43: Computer Networks Switches and LANs Kevin Webb Swarthmore College December 5, 2017 Ethernet Metcalfe s Ethernet sketch Dominant wired LAN technology: cheap $20 for NIC first widely used LAN technology

More information

Chapter 2. Switch Concepts and Configuration. Part I

Chapter 2. Switch Concepts and Configuration. Part I Chapter 2 Switch Concepts and Configuration Part I CCNA3-1 Chapter 2-1 Note for Instructors These presentations are the result of a collaboration among the instructors at St. Clair College in Windsor,

More information

Chapter 8 LAN Topologies

Chapter 8 LAN Topologies Chapter 8 LAN Topologies Point-to-Point Networks In a Point-to-Point network, each wire connects exactly two computers Point To Point Link Machine A Machine B Figure 1: Each line connects two machines

More information

Link Layer and Ethernet

Link Layer and Ethernet Link Layer and Ethernet 14-740: Fundamentals of Computer Networks Bill Nace Material from Computer Networking: A Top Down Approach, 6 th edition. J.F. Kurose and K.W. Ross traceroute Data Link Layer Multiple

More information

Jaringan Komputer. Broadcast Network. Outline. MAC (Medium Access Control) Channel Allocation Problem. Dynamic Channel Allocation

Jaringan Komputer. Broadcast Network. Outline. MAC (Medium Access Control) Channel Allocation Problem. Dynamic Channel Allocation Broadcast Network Jaringan Komputer Medium Access Control Sublayer 2 network categories: point-to-point connections broadcast channels Key issue in broadcast network: how to determine who gets to use the

More information

Chapter 4 NETWORK HARDWARE

Chapter 4 NETWORK HARDWARE Chapter 4 NETWORK HARDWARE 1 Network Devices As Organizations grow, so do their networks Growth in number of users Geographical Growth Network Devices : Are products used to expand or connect networks.

More information

CS 455/555 Intro to Networks and Communications. Link Layer

CS 455/555 Intro to Networks and Communications. Link Layer CS 455/555 Intro to Networks and Communications Link Layer Dr. Michele Weigle Department of Computer Science Old Dominion University mweigle@cs.odu.edu http://www.cs.odu.edu/~mweigle/cs455-s13 1 Link Layer

More information

1: Review Of Semester Provide an overview of encapsulation.

1: Review Of Semester Provide an overview of encapsulation. 1: Review Of Semester 1 1.1.1.1. Provide an overview of encapsulation. Networking evolves to support current and future applications. By dividing and organizing the networking tasks into separate layers/functions,

More information

CSE/EE 461 Section 2

CSE/EE 461 Section 2 CSE/EE 461 Section 2 Latency in a store-and-forward network 4ms, 10MB/s B How long does it take to send a 2kB packet from to B? 2ms, 10MB/s C 2ms, 10MB/s B What if it has to pass through a node C? Plan

More information

CSCI-1680 Link Layer Wrap-Up Rodrigo Fonseca

CSCI-1680 Link Layer Wrap-Up Rodrigo Fonseca CSCI-1680 Link Layer Wrap-Up Rodrigo Fonseca Based partly on lecture notes by David Mazières, Phil Levis, John Jannotti Administrivia Homework I out later today, due next Thursday Today: Link Layer (cont.)

More information

CSC 4900 Computer Networks: Link Layer (2)

CSC 4900 Computer Networks: Link Layer (2) CSC 4900 Computer Networks: Link Layer (2) Professor Henry Carter Fall 2017 Link Layer 6.1 Introduction and services 6.2 Error detection and correction 6.3 Multiple access protocols 6.4 LANs addressing,

More information

Switching & ARP Week 3

Switching & ARP Week 3 Switching & ARP Week 3 Module : Computer Networks Lecturer: Lucy White lbwhite@wit.ie Office : 324 Many Slides courtesy of Tony Chen 1 Ethernet Using Switches In the last few years, switches have quickly

More information

Introduction to Networking Devices

Introduction to Networking Devices Introduction to Networking Devices Objectives Explain the uses, advantages, and disadvantages of repeaters, hubs, wireless access points, bridges, switches, and routers Define the standards associated

More information

Real-Time (Paradigms) (47)

Real-Time (Paradigms) (47) Real-Time (Paradigms) (47) Memory: Memory Access Protocols Tasks competing for exclusive memory access (critical sections, semaphores) become interdependent, a common phenomenon especially in distributed

More information

Review. Error Detection: CRC Multiple access protocols. LAN addresses and ARP Ethernet. Slotted ALOHA CSMA/CD

Review. Error Detection: CRC Multiple access protocols. LAN addresses and ARP Ethernet. Slotted ALOHA CSMA/CD Review Error Detection: CRC Multiple access protocols Slotted ALOHA CSMA/CD LAN addresses and ARP Ethernet Some slides are in courtesy of J. Kurose and K. Ross Overview Ethernet Hubs, bridges, and switches

More information

Introduction to Open System Interconnection Reference Model

Introduction to Open System Interconnection Reference Model Chapter 5 Introduction to OSI Reference Model 1 Chapter 5 Introduction to Open System Interconnection Reference Model Introduction The Open Systems Interconnection (OSI) model is a reference tool for understanding

More information

Question Score 1 / 19 2 / 19 3 / 16 4 / 29 5 / 17 Total / 100

Question Score 1 / 19 2 / 19 3 / 16 4 / 29 5 / 17 Total / 100 NAME: Login name: Computer Science 461 Midterm Exam March 10, 2010 3:00-4:20pm This test has five (5) questions. Put your name on every page, and write out and sign the Honor Code pledge before turning

More information

CSC 4900 Computer Networks: The Link Layer

CSC 4900 Computer Networks: The Link Layer CSC 4900 Computer Networks: The Link Layer Professor Henry Carter Fall 2017 Last Time We talked about intra-as routing protocols: Which routing algorithm is used in RIP? OSPF? What techniques allow OSPF

More information

Introduction to LAN Protocols

Introduction to LAN Protocols CHAPTER 2 Chapter Goals Learn about different LAN protocols. Understand the different methods used to deal with media contention. Learn about different LAN topologies. This chapter introduces the various

More information

Local Area Networks (LANs) SMU CSE 5344 /

Local Area Networks (LANs) SMU CSE 5344 / Local Area Networks (LANs) SMU CSE 5344 / 7344 1 LAN/MAN Technology Factors Topology Transmission Medium Medium Access Control Techniques SMU CSE 5344 / 7344 2 Topologies Topology: the shape of a communication

More information

Goal and Outline. Computer Networking. What Do We Need? Today s Story Lecture 3: Packet Switched Networks Peter Steenkiste

Goal and Outline. Computer Networking. What Do We Need? Today s Story Lecture 3: Packet Switched Networks Peter Steenkiste Goal and Outline 15-441 15-641 Computer Networking Lecture 3: Packet Switched Networks Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15 441 F16 Goal: gain a basic understanding of how you can build a

More information

ECE 333: Introduction to Communication Networks Fall Lecture 19: Medium Access Control VII

ECE 333: Introduction to Communication Networks Fall Lecture 19: Medium Access Control VII ECE : Introduction to Communication Networks Fall 2002 Lecture 9: Medium Access Control VII More on token ring networks LAN bridges and switches. More on token rings In the last lecture we began discussing

More information

Data Link Layer: Multi Access Protocols

Data Link Layer: Multi Access Protocols Digital Communication in the Modern World Data Link Layer: Multi Access Protocols http://www.cs.huji.ac.il/~com1 com1@cs.huji.ac.il Some of the slides have been borrowed from: Computer Networking: A Top

More information

CSCI-1680 Link Layer Wrap-Up Rodrigo Fonseca

CSCI-1680 Link Layer Wrap-Up Rodrigo Fonseca CSCI-1680 Link Layer Wrap-Up Rodrigo Fonseca Based partly on lecture notes by David Mazières, Phil Levis, John Janno< Administrivia Homework I out later today, due next Thursday, Sep 25th Today: Link Layer

More information

CMPE 150/L : Introduction to Computer Networks. Chen Qian Computer Engineering UCSC Baskin Engineering Lecture 16

CMPE 150/L : Introduction to Computer Networks. Chen Qian Computer Engineering UCSC Baskin Engineering Lecture 16 CMPE 150/L : Introduction to Computer Networks Chen Qian Computer Engineering UCSC Baskin Engineering Lecture 16 1 Final project demo Please do the demo next week to the TAs. So basically you may need

More information

ECSE-4670: Computer Communication Networks (CCN) Informal Quiz 3

ECSE-4670: Computer Communication Networks (CCN) Informal Quiz 3 ECSE-4670: Computer Communication Networks (CCN) Informal Quiz 3 : shivkuma@ecse.rpi.edu Biplab Sikdar: sikdab@rpi.edu 1 T F Slotted ALOHA has improved utilization since the window of vulnerability is

More information

LAN PROTOCOLS. Beulah A AP/CSE

LAN PROTOCOLS. Beulah A AP/CSE LAN PROTOCOLS Beulah A AP/CSE IEEE STANDARDS In 1985, the Computer Society of the IEEE started a project, called Project 802, to set standards to enable intercommunication among equipment from a variety

More information

Switching and Forwarding Reading: Chapter 3 1/30/14 1

Switching and Forwarding Reading: Chapter 3 1/30/14 1 Switching and Forwarding Reading: Chapter 3 1/30/14 1 Switching and Forwarding Next Problem: Enable communication between hosts that are not directly connected Fundamental Problem of the Internet or any

More information

cs144 Midterm Review Fall 2010

cs144 Midterm Review Fall 2010 cs144 Midterm Review Fall 2010 Administrivia Lab 3 in flight. Due: Thursday, Oct 28 Midterm is this Thursday, Oct 21 (during class) Remember Grading Policy: - Exam grade = max (final, (final + midterm)/2)

More information

Data Link Layer, Part 5. Medium Access Control

Data Link Layer, Part 5. Medium Access Control CS 455 Medium Access Control, Page 1 Data Link Layer, Part 5 Medium Access Control These slides are created by Dr. Yih Huang of George Mason University. Students registered in Dr. Huang s courses at GMU

More information

Session Exam 1. EG/ES 3567 Worked Solutions. (revised)

Session Exam 1. EG/ES 3567 Worked Solutions. (revised) Session 003-00 Exam 1 EG/ES 3567 Worked Solutions. (revised) Please note that both exams have identical solutions, however the level of detail expected in ES is less, and the questions are phrased to provide

More information

Computer and Network Security

Computer and Network Security CIS 551 / TCOM 401 Computer and Network Security Spring 2009 Lecture 6 Announcements First project: Due: 6 Feb. 2009 at 11:59 p.m. http://www.cis.upenn.edu/~cis551/project1.html Plan for Today: Networks:

More information

Adaptors Communicating. Link Layer: Introduction. Parity Checking. Error Detection. Multiple Access Links and Protocols

Adaptors Communicating. Link Layer: Introduction. Parity Checking. Error Detection. Multiple Access Links and Protocols Link Layer: Introduction daptors ommunicating Terminology: hosts and routers are nodes communication channels that connect adjacent nodes along communication path are links wired links wireless links LNs

More information

Principles behind data link layer services:

Principles behind data link layer services: Data Link Layer Goals: Principles behind data link layer services: Error detection, correction Sharing a broadcast channel: multiple access Link layer addressing Reliable data transfer, flow control: Done!

More information

Multiple Access Channels

Multiple Access Channels Multiple Access Channels Some Queuing Theory MAC: Aloha, ethernet Exponential backoff & friends LANs: Local Area Networks Goal: extend benefits of simple connection as far as possible Means: Share medium

More information

Networking for Data Acquisition Systems. Fabrice Le Goff - 14/02/ ISOTDAQ

Networking for Data Acquisition Systems. Fabrice Le Goff - 14/02/ ISOTDAQ Networking for Data Acquisition Systems Fabrice Le Goff - 14/02/2018 - ISOTDAQ Outline Generalities The OSI Model Ethernet and Local Area Networks IP and Routing TCP, UDP and Transport Efficiency Networking

More information

6th Slide Set Computer Networks

6th Slide Set Computer Networks Prof. Dr. Christian Baun 6th Slide Set Computer Networks Frankfurt University of Applied Sciences WS1718 1/36 6th Slide Set Computer Networks Prof. Dr. Christian Baun Frankfurt University of Applied Sciences

More information

Chapter 3: Industrial Ethernet

Chapter 3: Industrial Ethernet 3.1 Introduction Previous versions of this handbook have dealt extensively with Ethernet so it is not our intention to revisit all the basics. However, because Smart Grid protocols are increasingly reliant

More information

Chapter 4. The Medium Access Control Sublayer. Points and Questions to Consider. Multiple Access Protocols. The Channel Allocation Problem.

Chapter 4. The Medium Access Control Sublayer. Points and Questions to Consider. Multiple Access Protocols. The Channel Allocation Problem. Dynamic Channel Allocation in LANs and MANs Chapter 4 The Medium Access Control Sublayer 1. Station Model. 2. Single Channel Assumption. 3. Collision Assumption. 4. (a) Continuous Time. (b) Slotted Time.

More information

CMPE 150/L : Introduction to Computer Networks. Chen Qian Computer Engineering UCSC Baskin Engineering Lecture 18

CMPE 150/L : Introduction to Computer Networks. Chen Qian Computer Engineering UCSC Baskin Engineering Lecture 18 CMPE 150/L : Introduction to Computer Networks Chen Qian Computer Engineering UCSC Baskin Engineering Lecture 18 1 Final project demo Please do the demo THIS week to the TAs. Or you are allowed to use

More information

Applied Networks & Security

Applied Networks & Security Applied Networks & Security Wired Local Area Networks (LANs) http://condor.depaul.edu/~jkristof/it263/ John Kristoff jtk@depaul.edu IT 263 Spring 2006/2007 John Kristoff - DePaul University 1 Local Area

More information

CS 428/528 Computer Networks Lecture 01. Yan Wang

CS 428/528 Computer Networks Lecture 01. Yan Wang 1 CS 428/528 Computer Lecture 01 Yan Wang 2 Motivation: Why bother? Explosive growth of networks 1989, 100,000 hosts on the Internet Distributed Applications and Systems E-mail, WWW, multimedia, distributed

More information

Lecture 16: Network Layer Overview, Internet Protocol

Lecture 16: Network Layer Overview, Internet Protocol Lecture 16: Network Layer Overview, Internet Protocol COMP 332, Spring 2018 Victoria Manfredi Acknowledgements: materials adapted from Computer Networking: A Top Down Approach 7 th edition: 1996-2016,

More information

Module 16: Distributed System Structures

Module 16: Distributed System Structures Chapter 16: Distributed System Structures Module 16: Distributed System Structures Motivation Types of Network-Based Operating Systems Network Structure Network Topology Communication Structure Communication

More information

Data and Computer Communications

Data and Computer Communications Data and Computer Communications Chapter 16 High Speed LANs Eighth Edition by William Stallings Why High Speed LANs? speed and power of PCs has risen graphics-intensive applications and GUIs see LANs as

More information

Chapter 9 Ethernet Part 1

Chapter 9 Ethernet Part 1 Chapter 9 Ethernet Part 1 Introduction to Ethernet Ethernet Local Area Networks (LANs) LAN (Local Area Network) - A computer network connected through a wired or wireless medium by networking devices (s,

More information

ECE4110 Internetwork Programming. Introduction and Overview

ECE4110 Internetwork Programming. Introduction and Overview ECE4110 Internetwork Programming Introduction and Overview 1 EXAMPLE GENERAL NETWORK ALGORITHM Listen to wire Are signals detected Detect a preamble Yes Read Destination Address No data carrying or noise?

More information

PLEASE READ CAREFULLY BEFORE YOU START

PLEASE READ CAREFULLY BEFORE YOU START Page 1 of 11 MIDTERM EXAMINATION #1 OCT. 16, 2013 COMPUTER NETWORKS : 03-60-367-01 U N I V E R S I T Y O F W I N D S O R S C H O O L O F C O M P U T E R S C I E N C E Fall 2013-75 minutes This examination

More information

Chapter 5 Link Layer and LANs

Chapter 5 Link Layer and LANs Chapter 5 Link Layer and LANs A note on the use of these ppt slides: All material copyright 1996-2007 J.F Kurose and K.W. Ross, All Rights Reserved Computer Networking: A Top Down Approach 4 th edition.

More information