
Emerging Serial Storage Interfaces: Serial Storage Architecture (SSA) and Fibre Channel - Arbitrated Loop (FC-AL)

David H.C. Du, Taisheng Chang, Jenwei Hsieh, Sangyup Shim and Yuewei Wang
Distributed Multimedia Research Center and Computer Science Department, University of Minnesota

January 6, 1997

The authors would like to thank Dave Archer, Gary Delp, Larry Whitley and Walter Krapohl at IBM Rochester and Horst Truestedt, Edward Clausell, Richard Rolls, Michelle Tidwell and Howard Rankin at IBM Storage System Division for numerous discussions on the details of SSA. We also would like to thank Cort Fergusson, Mike Miller and Jim Coomes at Seagate Technology for providing us valuable information on FC-AL.

Abstract

Several serial storage interfaces have been proposed for constructing storage systems with high transfer bandwidth, large storage capacity, and fault tolerance features. Among them, Serial Storage Architecture (SSA) and Fibre Channel - Arbitrated Loop (FC-AL) are considered the next-generation storage interfaces with broad industry support. Both technologies support simple cabling, long transmission distance, high data bandwidth, large capacity, fault tolerance, and fair sharing of link bandwidth. In this paper, a tutorial and a comparison of these two technologies are presented. The tutorial examines their interface specifications, transport protocols, fairness algorithms, and fault tolerance capabilities. The comparison focuses on their protocol overhead, flow control, fairness algorithms, and fault tolerance. The paper also summarizes the recently proposed Aaron Proposal, which incorporates features from both SSA and FC-AL and aims at merging the two technologies.

I. Introduction

Due to the rapid improvement of network infrastructure and computer processing power, many new applications such as virtual reality, digital libraries, video-on-demand, and electronic commerce are emerging at a very fast pace. These applications require a storage system that can support high data bandwidth, large storage capacity, and fault tolerance. When constructing a mass storage system, a designer requires a storage interface with the following capabilities: (1) a simple cabling scheme, (2) a high data transfer rate, (3) the ability to connect a large number of storage devices, (4) fault tolerance, and (5) cost effectiveness. The data transfer rate of storage devices is also increasing. For example, the bandwidth of magnetic disks has improved by about 20% per year in the last few years due to increasing recording density and faster rotational speed. Therefore, high-performance interfaces are necessary to take advantage of this improvement.

Currently, Small Computer System Interface (SCSI) channels are widely used for connecting storage devices to a host computer. However, SCSI channels were not designed with the aforementioned capabilities in mind. SCSI is a parallel bus interface which connects storage devices in a daisy-chained fashion. Although SCSI provides satisfactory performance for many traditional applications, it has several disadvantages: low data transfer bandwidth (the typical transfer rate is 20 MBytes/sec for a fast/wide SCSI-2 channel), limited distance between devices, large connectors due to the parallel interface, and the fact that only a small number of storage devices can be connected to a single channel because of the bandwidth limitation and the overhead of the SCSI bus. Moreover, its prioritized bus arbitration scheme leads to unfair bandwidth sharing among the devices attached to an SCSI channel. Even with the emerging Ultra SCSI channels running at 40 MBytes/sec, performance is inherently limited by these deficiencies.

The limited bandwidth of the SCSI interface forces a storage system designer to use many physical channels to connect a large number of disks in order to achieve a high aggregate data transfer rate and to provide a large storage capacity. As an example to demonstrate this problem, we show the configuration of a mass storage system which was used in one of our previous studies. In this system, fast/wide SCSI-2 channels were used to connect disk arrays to a symmetric multiprocessing computer.

This mass storage system consists of 24 RAID-3 disk arrays. Because one disk array (consisting of eight data disks and one parity disk) can deliver close to 16 MB/sec of throughput (which is close to the actual performance upper bound of a fast/wide SCSI-2 channel), only one disk array can be connected to each SCSI channel. Therefore, 24 SCSI channels are necessary to form such a large disk farm. Figure 1 shows two pictures of the connections between an SGI Onyx computer and 24 RAID-3 disk arrays. The left picture shows the 24 disk arrays connected to the backplane of an SGI Onyx computer via SCSI-2 host adaptors and cables. The other ends of the cables, which connect to the 24 disk arrays, are shown in the right picture. This example demonstrates that many SCSI channels and adaptors are required to set up a mass storage system with a large storage capacity and high aggregate data transfer bandwidth. This is usually only feasible for today's high-end servers and super-computers.

Fig. 1. Using SCSI-2 channels to connect 24 RAID-3 disk arrays.

To demonstrate the severity of SCSI's unfair bus arbitration, we conducted a simple experiment on an SCSI-2 channel with five disks attached on a multiprocessor computer. Five copies of a benchmark program were invoked, with each process simultaneously and repeatedly accessing one dedicated disk. Figure 2 shows the latencies observed by each benchmark process when the size of the accessed data was varied from 64 KB to 2 MB.

Fig. 2. Average latency observed by each benchmark process (Disks 1-5) with different data transfer sizes (average latency in seconds versus request size in MBytes).

As shown in the figure, there are significant differences among the latencies observed by different processes. In this test, Disk 1 always experiences the longest latencies while Disk 5 has the shortest latency (Disk 5 has the highest priority and Disk 1 the lowest). The experimental results show the impact of SCSI's unfair bus arbitration, especially when the SCSI channel is heavily loaded.

Several serial storage interfaces with higher data transfer rates have been developed as alternatives which may solve some of the SCSI problems discussed above. These interfaces include Serial Storage Architecture (SSA) [1], [2], Fibre Channel - Arbitrated Loop (FC-AL) [4], [5], and the IEEE P1394 serial bus [9]. A serial interface uses compact connectors and serial links to simplify cabling schemes and reduce cost. To reduce the cost of connection per device, it allows a large number (ranging from tens to more than one hundred) of storage devices to be attached to a single physical interface. Some of the serial interfaces also provide a mechanism for fault tolerance against a single link failure. They also support hot-swappable devices and longer-distance data transmission. Among the serial storage interface technologies, SSA and FC-AL are the two major technologies which are widely considered the next-generation storage interfaces. Both of them have broad industry support.

SSA provides 20 MB/sec of link bandwidth; the aggregate data bandwidth is 80 MB/sec with two-in and two-out connections on a node.

The full-duplex SSA interface supports fault tolerance against host, link, and adaptor failures. SSA devices (disks, tape drives, or hosts) can be configured as a string, a loop, or a switched topology. SSA also provides spatial reuse, which allows multiple pairs of devices in a loop to communicate with each other simultaneously. Multiple hosts can be connected to a loop and access data concurrently for higher achievable throughput. SSA provides a fairness algorithm to enforce fair sharing of link bandwidth.

FC-AL operates with 100 MB/sec link bandwidth. It provides high data bandwidth, fault tolerance, and an optional fairness algorithm. It supports any combination of hosts and storage devices up to a total of 127 in a single loop. As an enhancement to the Fibre Channel standard [7], devices in FC-AL can be connected by copper cables for low-cost storage attachment or by optical cables for long-distance transmission up to 10 km with single-mode fiber optics. The FC-AL standard includes a fairness algorithm which allows the storage devices connected on the same loop to share loop access fairly. For performance considerations, some devices such as hosts may choose not to run the fairness algorithm; this allows a host to grab loop access and send out commands earlier. A configuration with dual loops and multiple hosts offers fault tolerance against host, link, and adaptor failures. FC-AL supports a bypass circuit which may be used to keep a loop operating even when a device on the loop fails or is removed.

In this paper a tutorial and a comparison of these two technologies are presented. The tutorial examines their interface specifications, transport protocols, fairness algorithms, and fault tolerance capabilities. Tutorials of SSA and FC-AL are presented in Section II and Section III, respectively. The comparison in Section IV focuses on their protocol overhead, flow control, fairness algorithms, and fault tolerance. The paper also summarizes the recently proposed Aaron Proposal, which incorporates features from both SSA and FC-AL and aims at merging the two technologies. The Aaron Proposal is briefly discussed in Section V. Section VI concludes the paper.

II. Serial Storage Architecture (SSA)

The Task Group of Technical Committee X3T10.1 has been working on SSA standards, which map the SCSI-2 protocol over SSA.

The proposed SSA standard has been completed and has passed ANSI public review; it is currently in the process of being approved as an ANSI standard. Before introducing the technical details of SSA, we would like to give the reader an overview of SSA's features and provide a high-level explanation of what the SSA technology is and how it can be used in many applications. The technical details about the protocol stack and transport protocol will be given in the following subsections.

A. Overview

Unlike SCSI, which uses a shared bus to connect the devices, SSA uses a pair of point-to-point links to connect two devices together via a port on each device. A node (which can be either a host adaptor or a storage device) with two ports can connect to two adjacent nodes by two pairs of links (to simplify the description, we also refer to such a pair as a duplex or bidirectional link in this paper). Figure 3 shows one two-port node (the middle one in the figure) connected with two other two-port nodes. If all nodes are two-port nodes, they can be connected as an SSA loop. An example of an SSA loop is shown in Figure 4.

Fig. 3. One SSA two-port node (the middle one) connected with two other two-port nodes; each node consists of a node interface, a router, and two ports.

Fig. 4. An example of an SSA loop.

The link bandwidth is 20 MB/sec in the current specification (we assume 20 MB/sec link bandwidth for the rest of this paper unless otherwise noted). A host therefore has an aggregate bandwidth of 80 MB/sec (two 20 MB/sec links in and two 20 MB/sec links out) in an SSA loop. The aggregate bandwidth can be doubled with future 40 MB/sec links. Also, a host adaptor card with four ports can connect two SSA loops and achieve 160 MB/sec aggregate bandwidth with 20 MB/sec links. This provides a storage subsystem with large bandwidth. In comparison, eight 20 MB/sec SCSI channels would be needed to provide the same bandwidth from the storage, and even with 40 MB/sec Ultra SCSI, four channels would still be needed.

One of the special features of SSA is spatial reuse, which is possible because the links between two adjacent nodes operate independently. Therefore, SSA is capable of supporting multiple simultaneous transmissions. Figure 5 shows an example of spatial reuse in an SSA loop with four hosts attached to it. One important advantage of this feature is that, as shown in Figure 5, each host can still have 80 MB/sec aggregate bandwidth (the same as in a single-host configuration) if each host only accesses its neighboring disks and there is no overlap among their access patterns. In fact, the aggregate bandwidth scales up, and each host on the loop can obtain 80 MB/sec of aggregate bandwidth from an SSA loop if the aforementioned conditions are satisfied. In reality, each host may occasionally access far-away disks, so each host will obtain less than 80 MB/sec of aggregate bandwidth. This also points out the importance of data allocation and load balancing in an SSA loop.

Fig. 5. Spatial reuse in an SSA loop with four hosts (40 MB/sec carried on each bidirectional link).

Since there are two paths between each pair of nodes in an SSA loop, a single link failure can be tolerated.
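To make the spatial-reuse argument above concrete, the following sketch, which is our own illustration rather than part of the SSA specification, models each directed link of a loop as carrying 20 MB/sec, routes every stream along the shorter direction, and credits a stream with one full link of bandwidth only when its path shares no directed link with an already-admitted stream; the node numbering, the greedy admission rule, and the read-only traffic pattern are all assumptions made for the example.

    # Sketch: estimate the aggregate read throughput that spatial reuse allows in
    # an SSA loop.  Assumptions (ours, not from the SSA standard): each directed
    # link carries 20 MB/s, every stream takes the shorter way around the loop,
    # and link-disjoint streams run concurrently at full link speed.

    LINK_MBS = 20  # effective bandwidth of one directed link

    def path_links(src, dst, n):
        """Directed links used when src sends to dst the short way around an n-node loop."""
        clockwise = (dst - src) % n
        counter = (src - dst) % n
        if clockwise <= counter:                        # go clockwise
            return {(i % n, (i + 1) % n) for i in range(src, src + clockwise)}
        return {(i % n, (i - 1) % n) for i in range(src, src - counter, -1)}

    def aggregate_throughput(streams, n):
        """Greedily admit streams whose paths are link-disjoint; each admitted
        stream contributes one full link of data bandwidth."""
        used, admitted = set(), 0
        for src, dst in streams:
            links = path_links(src, dst, n)
            if used.isdisjoint(links):
                used |= links
                admitted += 1
        return admitted * LINK_MBS

    # Four hosts at nodes 0, 4, 8 and 12 of a 16-node loop, each reading from its
    # two neighboring disks: the eight paths are link-disjoint, so they all proceed
    # at once (160 MB/s of read data, i.e. 40 MB/s arriving at each host).
    streams = [(1, 0), (15, 0), (5, 4), (3, 4), (9, 8), (7, 8), (13, 12), (11, 12)]
    print(aggregate_throughput(streams, 16), "MB/s of concurrent read traffic")

With overlapping access patterns the disjointness check fails for some streams, which mirrors the observation above that hosts accessing far-away disks obtain less than the full aggregate bandwidth.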

Fig. 6. Fault tolerance in an SSA loop: when the link between Disk 3 and Disk 4 fails, the 20 MB/sec of traffic is rerouted to an alternate path back to the host (initiator).

When a link in one path fails, an alternate path is available for data transmission. This redundant path eliminates a single point of failure. Figure 6 shows a scenario in which a single link failure occurs. When the link from Disk 3 to Disk 4 fails, Disk 4 can send its data via the alternative route (through Disk 5 - Disk 6 - Disk 7) to the host. The detailed error handling procedures can be found in the SSA standard documents.

The basic unit of information transmitted on SSA is a frame. A frame may contain up to 128 bytes of data; the format of a frame is discussed later. An SSA frame flows through each SSA node in a store-and-forward fashion. However, to reduce the long delay of the traditional store-and-forward approach, SSA limits the maximum delay to ten character times (0.5 microseconds at 20 MB/sec) at each node if the intermediate node is allowed to forward the frame to the downstream node right away. Therefore, under light load on the loop, the end-to-end delay can be close to that of a cut-through routing approach. The destination node is responsible for draining a frame from the loop, so each source-destination pair uses only a portion of the loop. Each node can either forward frames from its upstream nodes or originate its own frames. SSA adopts a token-based mechanism to ensure fair sharing of the links among the nodes. One token rotates in each direction (clockwise and counter-clockwise), ensuring that each node can originate a certain number of frames during each token rotation. The token rotation time (or token cycle time) may vary from time to time depending on the traffic load on the loop in that direction.

B. SSA Protocol Layers

The proposed protocol stack is shown in Figure 7. The major function of each layer is outlined as follows.

SSA-S2P (SSA SCSI-2 Protocol): It defines a mapping of the existing SCSI-2 protocol onto the SSA serial links. The goal of this protocol layer is to allow existing SCSI-2 systems to be used over SSA and to make such a migration easier.

SSA-TL1 (SSA Transport Layer 1): This layer defines the transport layer functions for SSA, including flow control, acknowledgment, and the fairness mechanism.

SSA-PH1 (SSA Physical Layer 1): This protocol layer specifies the electrical characteristics of the SSA interface and connectors.

The Committee is defining another protocol stack which maps SCSI-3 over SSA. This paper, however, focuses on SCSI-2 over SSA, i.e., the SSA-S2P - SSA-TL1 - SSA-PH1 protocol stack.

Fig. 7. The protocol stack mapping SCSI-2 over SSA: applications on the host and on the disk (or disk array) communicate through SCSI-2, SSA-S2P, SSA-TL1, and SSA-PH1 over an SSA loop.

C. SSA SCSI-2 Protocol: SSA-S2P

SSA-S2P is a layer designed to minimize the changes required when converting existing systems and devices from SCSI-2 to SSA. This layer defines a data structure called the SSA Message Structure (SMS), which is used to communicate control information between an initiator and a target (throughout the paper, an initiator refers to a host and a target refers to a disk; we use these terms interchangeably). For example, the SCSI COMMAND SMS is used to transmit a Read or Write command from an initiator to a target, and the SCSI STATUS SMS can be used to indicate the completion of a command.

Table I and Table II show the corresponding SMSs transmitted between the initiator and the target for a read and a write operation, respectively.

TABLE I. The typical activities for a read operation in SSA.

Host                         SMSs or data frames on SSA        Disk
Command from upper layer     SCSI CMD SMS -->                  processing command, fetching data
prepared to receive data     <-- DATA READY SMS                indicating data ready
ready to receive data        DATA REPLY SMS -->                allowed to transfer data
                             <-- first DATA frame ...
                             <-- last DATA frame               sending data
command complete informed    <-- SCSI STATUS SMS               indicating command complete

For a read command (Table I), a corresponding SCSI COMMAND SMS is transmitted from the initiator to the target. When the disk is ready to transfer data to the initiator, a DATA READY SMS is sent back to the initiator. After the initiator receives the SMS and is ready to receive data, it replies with a DATA REPLY SMS to the disk. The disk then sends all the data corresponding to this command. At the end of the data transfer, the disk sends a SCSI STATUS SMS to indicate the completion of the command. A write command follows a similar sequence, as shown in Table II.

TABLE II. The typical activities for a write operation in SSA.

Host                         SMSs or data frames on SSA        Disk
Command from upper layer     SCSI CMD SMS -->                  processing command
                             <-- DATA REQUEST SMS              prepared and ready to receive data
sending data                 first DATA frame --> ...
                             last DATA frame -->
command complete informed    <-- SCSI STATUS SMS               indicating command complete
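The exchange in Table I can be paraphrased in a few lines of code. The sketch below is purely illustrative: the class and method names are our own, and a real SSA-S2P implementation lives in adaptor firmware and device microcode rather than in Python.

    # Sketch of the SMS sequence for an SSA read operation (cf. Table I).

    class Target:
        """A disk answering SCSI-2-over-SSA read commands."""
        def __init__(self, blocks):
            self.blocks = blocks                        # simulated media contents

        def handle(self, sms, payload=None):
            if sms == "SCSI_COMMAND":                   # process command, fetch data
                self.pending = payload                  # (lba, length) of the read
                return "DATA_READY"                     # tell the host data is ready
            if sms == "DATA_REPLY":                     # host allows the transfer
                lba, length = self.pending
                data = self.blocks[lba:lba + length]    # DATA frames (first..last)
                return ("DATA_FRAMES", data, "SCSI_STATUS")  # then completion status
            raise ValueError("unexpected SMS " + sms)

    class Initiator:
        """A host adaptor issuing a read through SSA-S2P SMSs."""
        def read(self, target, lba, length):
            sms = target.handle("SCSI_COMMAND", (lba, length))
            assert sms == "DATA_READY"                  # disk signalled data ready
            tag, data, status = target.handle("DATA_REPLY")  # permit the transfer
            assert tag == "DATA_FRAMES" and status == "SCSI_STATUS"
            return data                                 # command complete informed

    disk = Target(blocks=list(range(100)))
    print(Initiator().read(disk, lba=10, length=4))     # -> [10, 11, 12, 13]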

D. SSA Transport Layer: SSA-TL1

SSA uses an 8B/10B encoding scheme: each 8-bit data byte is encoded into a 10-bit character at the physical layer (note that by a 20 MB/sec SSA link we mean the effective bandwidth; the bandwidth available for data transfer is 20 MB/sec although the physical link speed is higher, and we use effective bandwidth whenever we mention link bandwidth throughout this paper). Because of this 8B/10B encoding scheme, some 10-bit patterns are not used to represent any data byte. SSA takes advantage of these special characters and uses them in its protocol layers. These special characters can be distinguished from normal data bytes and can therefore be inserted at any place in the transmission stream. We will introduce these special characters when we describe the data format and SSA's transport protocol, and show how they are used to improve performance.

SSA frames are used to communicate between any two nodes. The frame format is shown in Figure 8. It consists of a one-byte control field, a one- to four-byte address field, a one- or two-byte channel field, a data field of up to 128 bytes, and a four-byte CRC field. The control field identifies the frame type. The address field is used to route the frame; the first bit in every address byte indicates whether another address byte follows it, so the length of the address field is dynamic and depends on the configuration. In an SSA loop, only a one-byte address field is needed. The channel field is used to identify which connection or application the frame belongs to when it arrives at the destination node. Up to 128 bytes of data can be carried in each SSA frame. Any two consecutive SSA frames must be separated by at least one FLAG; FLAG is one of the special characters mentioned in the previous paragraph. The maximum efficiency (the ratio of actual data length to frame length) of SSA is 128 (maximum data length) / 136 (1 FLAG + 1 Control + 1 Address + 1 Channel + 128 Data + 4 CRC) = 94%. In other words, SSA can theoretically utilize at most about 18.8 MB/sec out of the 20 MB/sec link bandwidth for actual data transmission.
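The framing-efficiency figure quoted above is simple arithmetic; the fragment below (our own illustration) just recomputes the 128/136 ratio and the resulting usable data rate on a 20 MB/sec link.

    # Recompute the maximum SSA framing efficiency quoted in the text: a full
    # frame is 1 FLAG + 1 Control + 1 Address + 1 Channel + 128 Data + 4 CRC.
    FLAG, CONTROL, ADDRESS, CHANNEL, DATA, CRC = 1, 1, 1, 1, 128, 4

    frame_bytes = FLAG + CONTROL + ADDRESS + CHANNEL + DATA + CRC   # 136 bytes
    efficiency = DATA / frame_bytes                                 # about 0.94

    LINK_MBS = 20                                                   # effective link bandwidth
    print(f"efficiency = {efficiency:.1%}")                         # 94.1%
    print(f"usable data rate = {efficiency * LINK_MBS:.1f} MB/s")   # about 18.8 MB/s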

Fig. 8. Data frame format for SSA: FLAG (1 char.), Control (1 char.), Address (1-4 char.), Channel (1-2 char.), Data (up to 128 char.), CRC (4 char.), FLAG.

To avoid frame losses due to receiver buffer overflow, SSA uses a link-level credit-based flow control scheme to regulate transmissions between two adjacent nodes. A pair of RR (Receiver Ready) special characters is sent from a downstream node to an upstream node, indicating enough buffer space for the next frame. Note that downstream and upstream are defined from the data frame's point of view; each node can be a downstream node in one direction and an upstream node in the other. The upstream node must receive an RR character pair from the downstream node before it can start transmitting a new frame. When SSA starts operation, each node sends an RR pair to its upstream node on each link, which allows the upstream node to send an SSA frame. Every time the upstream node sends out an SSA frame, it has to wait for a new RR pair before it can send the next frame. How quickly a new RR pair can be returned to the sender is therefore an important factor in performance. If a new RR pair arrives long after the sender has sent out the previous frame, the sender has to wait a long time before it can send the next frame, which results in a long idle time on the link and a longer end-to-end delay. On the other hand, if a new RR pair arrives before the sender finishes transmitting the previous frame, the sender does not have to wait for the RR pair at all. To reduce the time needed for a new RR pair to reach the sender, the receiver sends an RR pair as soon as it receives the control field of an incoming frame, provided it has buffer space for another frame (besides the one arriving). Since RR is a special character, the RR pair can be inserted into the transmission stream at any time. With these two techniques, if the receiver has buffer space for the next frame, a new RR pair will arrive before the sender finishes the previous frame as long as the round-trip propagation delay plus the processing delay at the receiver is smaller than the frame transmission time. With a 20 MB/sec link, the transmission time of a 136-byte frame (a full frame with a 128-byte data field and one FLAG) is 6.8 microseconds, while the round-trip propagation time for a 25 m cable is about 175 nanoseconds.
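The timing argument for the RR scheme can be stated as a one-line comparison. In the sketch below, the frame time and round-trip figures are the ones given in the text, while the receiver processing delay is an assumption of ours for the example.

    # Sketch of the condition under which SSA's RR flow control never stalls the
    # sender: a new RR credit must return before the current frame finishes.

    FRAME_BYTES = 136
    LINK_MBS = 20
    FRAME_TIME_US = FRAME_BYTES / LINK_MBS          # 6.8 microseconds per full frame
    ROUND_TRIP_US = 0.175                           # about 175 ns for a 25 m cable
    RECEIVER_DELAY_US = 0.5                         # assumed processing delay (ours)

    def sender_stalls():
        """True if the RR pair returns after the current frame has been sent."""
        rr_return_time = ROUND_TRIP_US + RECEIVER_DELAY_US
        return rr_return_time > FRAME_TIME_US

    print("link kept busy:", not sender_stalls())   # True for these numbers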

To provide reliable transmission, a link-level acknowledgment scheme is used. SSA does not provide an end-to-end acknowledgment; it is up to the upper layer (for example, SCSI) to do that when needed. The transmission of each frame on an SSA link requires an acknowledgment from the downstream node, indicated by the arrival of an ACK character pair. The ACK is also one of the special characters. No sequence number information can be carried in this ACK character, so an acknowledgment acknowledges only one frame at a time. This results in at most one outstanding frame (a frame that has been sent out completely but not yet acknowledged) at any time. As with the flow control scheme, a late acknowledgment can also block a node from sending a new frame. To prevent this, a sender is allowed to start sending the next frame even without having received the acknowledgment for the previous frame; however, the tail FLAG of the second frame cannot be sent before the acknowledgment comes back. Holding the tail FLAG keeps the second frame in the transmitting state, which means the second frame is not yet complete and is not considered an outstanding frame. At the receiver side, the receiver sends an ACK pair to the sender right after it receives a whole frame with a correct checksum. As with the RR flow control, the sender will not be blocked by this acknowledgment mechanism as long as the round-trip propagation time plus the checksum processing time is less than the frame transmission time.

E. Fairness Algorithm

As mentioned in the overview, each SSA node can either forward or originate an SSA frame when it is allowed to send a frame to the downstream node. Normally, it forwards frames from the upstream node first; that is, it gives the traffic from the upstream node higher priority. The reason for giving higher priority to frames from upstream nodes is to reduce the latency for connections with longer paths. To prevent starvation, where a node never gets a chance to originate its own frames, some mechanism is needed to ensure fair sharing of the links among all the nodes. SSA adopts a token-based fairness algorithm to ensure fair sharing. One token rotates in the SSA loop in each direction (clockwise and counter-clockwise). The token propagates around the loop and is used to govern the traffic flowing in the direction opposite to the token rotation.

When a node receives a token, it switches the priority ordering between the frames from the upstream node (which we call forwarded frames) and the frames to be originated by the node itself (which we call originated frames). It stores the frames coming from the upstream node in a buffer; when this buffer is full, the frame stream is backed up and the upstream node is not allowed to send more frames to this node, due to the link-level flow control described above.

Two parameters, the A quota and the B quota, are used to regulate the number of frames that can be originated by each node. The A quota defines the minimum number of frames a node is guaranteed to originate during each token rotation period (the time between when a node last passed the token to its upstream node and when it passes the token again). When a node has originated at least A frames since it last passed the token, the node is called Satisfied. The basic idea of the fairness algorithm is that it guarantees every node becomes Satisfied during each token rotation period. If a node is not Satisfied and has more frames to originate when it receives the token, it holds the token and switches the priority such that it keeps the frames from upstream nodes in a buffer and originates its own frames until it is Satisfied. After it is Satisfied, it passes the token to the upstream node right away. To prevent an upstream node from sending frames to the downstream nodes without bound, the B quota regulates the maximum number of frames a node is allowed to originate during each token rotation period. If a node has originated B frames but the token has not arrived, it is not allowed to originate any more frames until the token arrives; when the token eventually arrives, the node passes it to the upstream node right away. This fairness algorithm is usually called the SAT(isfied) algorithm, and the token is called the SAT token. The SAT algorithm is extended to the case where more than one stream/connection is being originated by a node: the quotas are dynamically scaled up by the number of connections at each node.

Although the SAT algorithm bounds the minimum and maximum numbers of frames a node can originate during each token rotation period, it does not necessarily result in even sharing. This is because the token rotation period a node experiences varies from time to time and differs from node to node. It depends heavily on the traffic load on the loop, the load distribution among the nodes, and the values of the A quota and B quota.
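The per-node bookkeeping of the SAT algorithm can be sketched as follows. This is our own simplified rendering: the quota values and method names are invented for the example, and the real algorithm, defined in SSA-TL1, also interacts with the link-level flow control and with multiple connections per node.

    # Sketch of the per-node bookkeeping in SSA's SAT fairness algorithm.

    class SatNode:
        def __init__(self, a_quota=4, b_quota=16):
            self.a, self.b = a_quota, b_quota
            self.originated = 0           # frames originated since last token pass
            self.holding_token = False    # True => originating takes priority

        def satisfied(self):
            return self.originated >= self.a

        def may_originate(self):
            return self.originated < self.b     # B quota caps each rotation period

        def on_token(self, wants_to_send):
            """Called when the SAT token arrives from the downstream node."""
            if self.satisfied() or not wants_to_send:
                self._pass_token()              # pass it upstream right away
            else:
                self.holding_token = True       # originate before forwarding

        def on_frame_originated(self):
            self.originated += 1
            if self.holding_token and self.satisfied():
                self._pass_token()              # A quota met: release the token

        def _pass_token(self):
            self.holding_token = False
            self.originated = 0                 # a new token rotation period begins

    node = SatNode(a_quota=2, b_quota=8)
    node.on_token(wants_to_send=True)
    print(node.holding_token)                   # True: below the A quota
    node.on_frame_originated(); node.on_frame_originated()
    print(node.holding_token, node.may_originate())   # False True: token passed on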

The location of a node on the loop is also an important factor affecting the number of frames it can originate. For example, the most upstream node has no upstream traffic coming down to it, so it does not need to wait for a token arrival to switch the priority order and originate its frames. The most downstream node, on the other hand, receives many frames from the upstream nodes and will most likely spend most of its time forwarding them. Therefore, under heavy load the most upstream node is likely to originate close to B frames during each token rotation period, while the most downstream node is likely to originate only A frames. The choice of the A quota and B quota trades off link utilization against fair sharing. Choosing A = B ensures even sharing among the nodes, but at the cost of lower utilization when the traffic load on the loop is light. On the other hand, setting B > A, as in the scenario above, may cause uneven sharing. There is no single choice of A and B that is optimal for all scenarios; it depends on the load and the applications.

III. Fibre Channel and FC-AL

Fibre Channel is a high-speed serial architecture that allows either optical or electrical connections at data rates from 25 MBytes/sec up to 100 MBytes/sec. Fibre Channel defines three topologies based on the capability and the existence of switches (also called the fabric) between communicating ports (called N Ports): the point-to-point topology, the fabric topology, and the Arbitrated Loop topology. In the point-to-point topology, communication occurs between N Ports without using a switch fabric. In the fabric topology, Fibre Channel uses the destination address in the frame header to route a data frame through a switch fabric to the destination N Port. The Arbitrated Loop topology allows more than two L Ports (ports which are capable of communicating in a loop topology) to communicate without a fabric. In the Arbitrated Loop topology, only one pair of nodes can communicate at a time. Figure 9 shows these topologies. Fibre Channel provides three classes of service for different communication requirements.

Fig. 9. Topologies of Fibre Channel: point-to-point, fabric (with connection and connectionless sub-fabrics of a Fibre Channel switch), Arbitrated Loop, and fabric combined with loop.

These classes of service are distinguished by the method of connection setup and the level of delivery integrity, and they are topology independent.

Class 1 Service: an acknowledged connection service with guaranteed bandwidth, end-to-end flow control, and in-order delivery.

Class 2 Service: a frame-switched, acknowledged connectionless service that provides guaranteed delivery and buffer-to-buffer flow control.

Class 3 Service: an unacknowledged connectionless service that lets data be sent rapidly to one device or to multiple devices (with the help of a fabric).

Intermix: concurrent Class 1, 2, and 3 services that enable parallel operations. It reserves the full Fibre Channel bandwidth for dedicated Class 1 connections, but permits connectionless transmissions if bandwidth becomes available during idle Class 1 connections.

FC-AL [4], [5] is an enhancement to the Fibre Channel standards. It defines additional signals and control mechanisms to support Fibre Channel operation on a loop. In a loop topology, communicating devices share a loop interface which supports only one pair of nodes communicating at a time. A connection must be established between two L Ports before Fibre Channel frames can be transferred. FC-AL defines an arbitration scheme as the access protocol among L Ports. It is a prioritized protocol which grants access to the loop to the L Port with the highest priority.

To prevent L Ports with lower priorities from starvation, FC-AL defines a fairness algorithm which allows all L Ports to have an equal opportunity to access the loop. Since FC-AL is an enhancement to Fibre Channel, we first present a brief introduction to Fibre Channel in Section III-A and then describe FC-AL in Section III-B.

A. Fibre Channel

Fibre Channel has five functional layers as shown in Figure 10. Among them, FC-0 to FC-2 are defined in Fibre Channel - Physical and Signaling Interface (FC-PH) [7]. FC-3 (Common Services) is concerned with functions that span multiple N Ports, including Striping (using multiple N Ports in parallel for their aggregate bandwidth), Hunt groups (allowing more than one port to respond to the same alias address for higher efficiency), and Multicast (delivering information to multiple destination ports).

Fig. 10. Functional layers of Fibre Channel: FC-4 protocol mappings (FC-LE, FC-ATM, FC-FP, SCSI-FCP, FC-I3) for upper-layer protocols (IP, ATM, HIPPI, SCSI, IPI), FC-3 common services, and FC-PH, consisting of FC-2 (framing protocol and flow control), FC-1 (transmission protocol), and FC-0 (physical layer).

FC-4 (Protocol Mapping) provides a common and interoperable method for implementing upper-layer protocols over Fibre Channel. The protocols that Fibre Channel supports include SCSI, Intelligent Peripheral Interface (IPI), High-Performance Parallel Interface (HIPPI), and Internet Protocol (IP). There is one FC-4 mapping protocol for each supported upper-layer protocol. In Section III-A.2, we describe one of the FC-4 protocols, the Fibre Channel Protocol for SCSI [3].

A.1 Fibre Channel - Physical and Signaling Interface (FC-PH)

FC-PH defines three of Fibre Channel's functional layers, FC-0 to FC-2. FC-0 (Physical Layer) specifies a variety of physical media, drivers, and receivers for a wide range of transmission speeds. The physical media can be single-mode or multi-mode fiber, shielded twisted pair, or coaxial cable. Most of the commercially available products run at 1062.5 Mbps or 266 Mbps at the physical media. FC-1 (Transmission Protocol) defines the byte synchronization and the encode/decode scheme. An 8B/10B coding scheme is used for two types of transmission characters, data characters and special characters. Certain combinations of transmission characters are used to identify frame boundaries and to transmit primitive function requests. Based on the 8B/10B coding, a physical link running at 1062.5 Mbps (or 266 Mbps) can support a data rate of 100 MB/sec (or 25 MB/sec).

FC-2 (Framing Protocol and Flow Control) defines a set of building blocks to carry user data and flow control schemes to pace the transmission of frames. These building blocks include the Frame, Sequence, Exchange, and Protocol. Frames are based on the format shown in Figure 11. The Data Field can carry up to 2112 bytes of data, or up to 2048 bytes if the Optional Headers are present. A Sequence is a set of related data frames transmitted unidirectionally, with control frames, if applicable, transmitted in the reverse direction. An Exchange consists of one or more Sequences. Fibre Channel also defines data transfer protocols and other protocols to manage the operating environment. TCP/IP over Fibre Channel can serve as an example of this hierarchy of building blocks: each TCP connection can be treated as an Exchange which is composed of one or more TCP/IP packets (as Sequences), and a TCP/IP packet may be carried by a number of data frames.

Fig. 11. FC-2 general frame format (units in bytes): Idle Words, SOF (4), Frame Header (24), Data Field (0 to 2112, consisting of Optional Headers of up to 64 bytes and a Payload of up to 2048 bytes), CRC (4), EOF (4), Idle Words.

FC-2 also defines credit-based flow control schemes to pace the transmission of frames between nodes, or between a node and a switch, to prevent buffer overflow at the receiving side. The number of buffers available at the receiving side is represented as Credits.
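The Credits mechanism amounts to a simple counter at the sending side, as the next paragraph describes. The class below is a minimal sketch of this accounting, with names of our own choosing; FC-PH defines the actual login parameters and the primitives that carry credit back to the sender.

    # Sketch of FC-2 credit-based flow control accounting at the sending side:
    # one credit per receive buffer, decremented per frame sent, replenished
    # when the receiver reports a freed buffer.

    class CreditedSender:
        def __init__(self, initial_credit):
            self.credit = initial_credit        # buffers the receiver advertised

        def can_send(self):
            return self.credit > 0

        def send_frame(self):
            if not self.can_send():
                raise RuntimeError("no credit: sender must wait")
            self.credit -= 1                    # one buffer consumed at the receiver

        def credit_returned(self, n=1):
            self.credit += n                    # receiver freed n buffers

    sender = CreditedSender(initial_credit=2)
    sender.send_frame(); sender.send_frame()    # uses both credits
    print(sender.can_send())                    # False: must wait for more credit
    sender.credit_returned()
    print(sender.can_send())                    # True again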

The credit information is sent from the receiver to the sender regularly. The sender uses a counter to manage the number of Credits it has received: the counter is incremented for each credit received and decremented by one for each frame transmitted. During the transmission of frames, the sender thereby restrains itself from transmitting more frames than the receiver can accommodate.

A.2 Fibre Channel Protocol for SCSI

The Fibre Channel Protocol for SCSI (FCP) is one of the Fibre Channel mapping protocols (FC-4); it uses the service provided by FC-PH to transmit SCSI commands, data, and status information between an SCSI initiator and an SCSI target. Each SCSI I/O operation is implemented as an individual Exchange consisting of a number of Sequences or Information Units. An Information Unit is a collection of data frames to be transmitted as a single Sequence by the Fibre Channel interface. A typical SCSI I/O operation consists of: (1) a command Sequence (FCP CMND) representing the desired operation, (2) zero or more transfer ready Sequences (FCP XFER RDY) and transfer data Sequences (FCP DATA), and (3) a response Sequence (FCP RSP) for the status information.

TABLE III. An example of the FCP read operation.

Host                            Information Unit            Disk
Command request                 FCP CMND -->                prepare data transfer
                                <-- FCP XFER RDY            data delivery request
                                <-- FCP DATA ...            data in action
Indicate command completion     <-- FCP RSP                 prepare response message, response

For example, Table III shows the FCP mapping of an SCSI read operation to a series of FC-2 Sequences or Information Units. A command request (FCP CMND) is transferred by the host to a disk using a Sequence, which needs only one frame for an SCSI read command. The disk follows the instructions contained in the read command and prepares the data.
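To show how an FCP read decomposes into Information Units and FC-2 frames, the sketch below walks through one Exchange, anticipating the segment-by-segment transfer elaborated in the next paragraph; the 8 KB burst size per FCP XFER RDY is an assumption we make for the example, and the 2048-byte figure is the FC-2 payload limit when Optional Headers are present.

    # Sketch of how one FCP read maps onto Information Units and FC-2 frames.

    MAX_PAYLOAD = 2048          # bytes of data per FC-2 frame in this example
    BURST_SIZE = 8 * 1024       # bytes the disk offers per FCP_XFER_RDY (assumed)

    def fcp_read_exchange(total_bytes):
        """Yield the Information Units of a single read Exchange, in order,
        together with the number of frames each Sequence needs."""
        yield ("FCP_CMND", 1)                            # one-frame command Sequence
        sent = 0
        while sent < total_bytes:
            burst = min(BURST_SIZE, total_bytes - sent)
            yield ("FCP_XFER_RDY", 1)                    # data delivery request
            frames = -(-burst // MAX_PAYLOAD)            # ceiling division
            yield ("FCP_DATA", frames)                   # one data Sequence
            sent += burst
        yield ("FCP_RSP", 1)                             # status Sequence

    for iu, frames in fcp_read_exchange(20 * 1024):      # a 20 KB read
        print(iu, frames)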

When the desired data is ready to be transferred, the disk transmits one FCP XFER RDY (data delivery request) and one FCP DATA Sequence for each segment of data. This step is repeated until all the data described by the SCSI command has been transferred. FCP can take advantage of the multiplexing and shared-bandwidth capabilities of Fibre Channel Class 2 or Class 3 Service. Multiple FCP I/O operations may be active at the same time; the maximum number of concurrently active FCP I/O operations depends on the queuing capabilities of the disk and the number of concurrent Exchanges supported by the Fibre Channel interface (the architectural limit is 65535). Class 1 and Intermixed classes of service may also be used to transfer the Information Units of FCP I/O operations.

B. Fibre Channel - Arbitrated Loop (FC-AL)

FC-AL allows Fibre Channel to operate in a loop topology. Its signaling interface establishes a connection between two L Ports before they can exchange FC frames. FC-AL is logically located between FC-1 and FC-2 of the functional layers, as shown in Figure 12. Figure 12 shows the Fibre Channel protocol layers in the context of a storage system with FC-AL. The upper-layer protocol used is SCSI, and the Fibre Channel Protocol for SCSI (FCP) defines the FC-4 mapping layer for SCSI. In this example, FC-3 is not implemented.

Fig. 12. Fibre Channel protocol layers with FC-AL: both the host and the disk (or disk array) run SCSI-2/3 over FCP, FC-2, FC-AL, FC-1, and FC-0, connected by out-bound and in-bound fibres.

B.1 Arbitration

An FC-AL loop is a shared medium for all of the attached L Ports, so an L Port needs to arbitrate in order to access the loop. When more than one L Port wants to access the loop, a priority scheme is used to decide which L Port wins the arbitration.

The priority is based on the unique Arbitrated Loop Physical Address (AL PA) assigned to each L Port: L Ports with lower AL PAs have higher priorities. In most implementations, a host is assigned a higher priority than the disks. If a switch is also connected to the loop via an FL Port (a switch port capable of operating on an FC-AL loop), the highest priority (AL PA = 0x00) shall be assigned to the FL Port, because the FL Port needs to handle the traffic between L Ports in the loop and devices outside the loop. A loop is called a Public Loop if there is an FL Port; otherwise it is called a Private Loop.

TABLE IV. Some of the FC-AL Primitive Signals.

ARBx (Arbitrate):  transmitted by an L Port (AL PA = x) to request access to the loop.
OPNy (Open):       sets up a circuit with a destination whose AL PA equals y.
CLS (Close):       transmitted by an L Port to indicate that it is prepared to relinquish, or has relinquished, control of the loop.
IDLE (Idle):       indicates that the loop is idle.

Table IV shows some of the Primitive Signals defined in FC-AL. These Primitive Signals are used to control access to the loop; there are other Primitive Signals for loop initialization and maintenance which are not listed here. Whenever an L Port wants to set up a circuit with another L Port, it must arbitrate for control of the loop by sending out ARBx (arbitrate) with x set to its own AL PA. The ARBx Primitive Signals travel around the loop and reach all L Ports. When any arbitrating L Port (an L Port participating in arbitration) receives an ARBx, it compares its AL PA with the x value of the received ARBx. If its AL PA is smaller than the x value (which means it has higher priority), it sends out a new ARBx with x equal to its own AL PA; otherwise, it forwards the ARBx without change. Eventually the arbitrating L Port with the highest priority receives its own ARBx and wins the arbitration. The L Port which won the arbitration can then send out OPNy (open) to set up a circuit with another L Port whose AL PA is y. After a connection is established, the two L Ports can exchange data frames and control frames according to FC-PH's specifications. When either of the two communicating L Ports finishes its transmission, it sends out a CLS (close) to notify its partner.
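The priority comparison at the heart of the arbitration can be sketched with a short loop walk; this is our own simplification of the signalling, in which each arbitrating L Port substitutes its own AL PA into a passing ARBx whenever it holds a numerically lower (higher-priority) value.

    # Sketch of FC-AL arbitration by ARBx substitution.  A simplification: real
    # L Ports process primitive signals continuously rather than in fixed passes.

    def relay_arb(al_pa, arbitrating, incoming_x):
        """What one L Port forwards when ARB(x) arrives."""
        if arbitrating and al_pa < incoming_x:    # higher priority than current ARB
            return al_pa                          # replace x with its own AL PA
        return incoming_x                         # forward the ARBx unchanged

    def arbitration_winner(ports):
        """ports maps AL PA -> wants_loop.  The winner is the arbitrating port
        that eventually sees its own ARBx come back around the loop."""
        x = 0xF0                                  # start from an ARB(F0)-like value
        for _ in range(2):                        # two trips around the loop settle it
            for al_pa in sorted(ports):           # traverse ports in loop order
                x = relay_arb(al_pa, ports[al_pa], x)
        return x

    print(hex(arbitration_winner({0x01: True, 0x10: True, 0x20: False})))   # 0x1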

The other L Port responds with another CLS. After a pair of CLSs has been exchanged between them, they relinquish control of the loop, and the loop becomes available again for other communications. To reduce the overhead of arbitration, an L Port which has won the arbitration can open more than one circuit, one after another, without relinquishing control of the loop. In this case, the L Port sends out a CLS to close the current connection and then sends out another OPNy to set up a circuit with another destination. This scheme allows an L Port to open more than one circuit without re-arbitrating for the loop; for example, a host adaptor can arbitrate for the loop once and send out multiple SCSI commands to different disks.

B.2 Fairness Algorithm

Like other prioritized protocols, FC-AL's arbitration scheme could lead to situations where an L Port with a low priority cannot gain access to the loop. Thus, a fairness algorithm is defined to allow all L Ports an opportunity to arbitrate and win access to the loop. The basic idea of the fairness algorithm is that each L Port should arbitrate and gain access to the loop only once while other L Ports are also arbitrating for the loop. The fairness algorithm is enforced with one variable, ACCESS, maintained by each L Port, and two special signals, ARB(F0) and IDLE. The default value of ACCESS is TRUE, which allows an L Port to participate in arbitration. When an L Port wins the arbitration, it sets its ACCESS to FALSE and restrains itself from arbitrating again until it receives an IDLE. While the winning L Port has a circuit open, it sends out ARB(F0) to detect whether other L Ports are also arbitrating. ARB(F0) is a special ARBx whose x, equal to 0xF0, is larger than any possible AL PA, so any arbitrating L Port can change the ARB(F0) into its own ARBx. The ARBx (changed from ARB(F0) by some L Port) or the unchanged ARB(F0) circulates around the loop and finally reaches the winning L Port. If the ARB(F0) is received by the winning L Port without change, no other L Ports are arbitrating; otherwise, the winning L Port receives an ARBx with x other than 0xF0. When an L Port intends to relinquish control of the loop, it sends out either an IDLE or an ARB(F0), depending on whether any L Port was arbitrating for access to the loop. If another L Port is arbitrating (as indicated by the received ARBx), it sends out an ARB(F0), which stimulates the arbitration process of the other L Ports.

If no L Ports are arbitrating (the winning L Port received an ARB(F0) back), the winning L Port sends out an IDLE. The IDLE triggers all L Ports to set their ACCESS back to TRUE, which allows them to arbitrate for the loop if they have data to transfer. The time between when the first L Port wins arbitration and when an L Port transmits an IDLE is called an access window. FC-AL's fairness algorithm sets up an access window in which all L Ports are given an opportunity to arbitrate and win access to the loop. An L Port can choose whether or not to use the fairness algorithm. In most implementations, the host adaptor does not use the fairness algorithm, which allows it to send out commands promptly; disks, on the other hand, usually use the fairness algorithm to share the bandwidth with others.

Fig. 13. An example of the FC-AL fairness algorithm: the states (idle, arbitrating, won arbitration, finished) of several L Ports over times t0 through t9.

An example of the FC-AL fairness algorithm is illustrated in Figure 13. Several gray levels represent the different states (idle, arbitrating, won arbitration, and finished) of an L Port, and its AL PA is shown in the box. At time t1, L Port 3 wins the arbitration and detects that other L Ports are also arbitrating for the loop. It restrains itself from arbitrating again until each arbitrating L Port has had a chance to access the loop. The order in which L Ports finish their transmissions is unpredictable, since new L Ports may join the arbitration at any time; for example, L Port 1 joins the arbitration at time t2 and finishes before L Port 7. When the last L Port (L Port 7) wins the arbitration, the ARB(F0) it sends out returns to it without change. Therefore, it sends an IDLE to conclude this access window, in which each L Port had an opportunity to access the loop.
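The bookkeeping behind the access window reduces to a single flag per port plus the winner's choice between IDLE and ARB(F0); the sketch below mirrors the description above, with class and function names of our own.

    # Sketch of the ACCESS flag handling in FC-AL's fairness algorithm.

    class FairLPort:
        def __init__(self, al_pa):
            self.al_pa = al_pa
            self.access = True            # may arbitrate in the current window

        def may_arbitrate(self):
            return self.access

        def on_win(self):
            self.access = False           # a fair port will not arbitrate again

        def on_idle(self):
            self.access = True            # IDLE received: a new access window opens

    def closing_signal(saw_other_arbx):
        """What the winning port transmits when relinquishing the loop."""
        return "ARB(F0)" if saw_other_arbx else "IDLE"

    port = FairLPort(0x10)
    port.on_win()
    print(port.may_arbitrate())                   # False until the window closes
    print(closing_signal(saw_other_arbx=False))   # IDLE -> everyone resets ACCESS
    port.on_idle()
    print(port.may_arbitrate())                   # True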

B.3 Fault Tolerance

The fault tolerance capability of FC-AL is achieved by using two loops to connect all the disks. All disks are attached to both FC-AL loops; one of the loops is used as the primary interface, and the other loop is used for fault tolerance. In this case, if either loop fails, the disks can still be accessed through the other loop. FC-AL allows any combination of disks, tape drives, and hosts to be connected to the same loop. A networked file system consisting of two hosts and a group of shared disks can provide better availability when one of the hosts fails. FC-AL also defines a Bypass Circuit which can be used to keep a loop operating when an L Port is removed or fails. The Bypass Circuit provides the means to route the signal around a failed L Port. In a disk array system implemented with an FC-AL interface, the Bypass Circuits allow users to replace failed disks without shutting down the disk array.

IV. Comparison between SSA and FC-AL

In this section, comparisons of the two technologies are discussed. Basic features are summarized in Table V. The features discussed include connectivity, topologies, spatial reuse, fairness algorithms, protocol overhead, and fault tolerance.

TABLE V. The comparison of SSA, FC-AL (single loop), and Fast/Wide SCSI with a single host.

Description                  SSA                       FC-AL (single loop)       Fast/Wide SCSI
Distance with copper         25m device to device      30m device to device      25m total length
Distance with fiber optic    2.5km device to device    10km device to device     not supported
Data bandwidth               80MB/s                    100MB/s                   20MB/s
Number of attached devices   128                       127                       15
Protocol support             SCSI                      SCSI/IPI/IP/HIPPI/ATM     SCSI
Error detection              CRC detection             CRC detection             Parity
Hot-swappable devices        yes                       yes                       yes, with additional hardware
Fault tolerance              yes                       yes, in dual loops        no
Fairness algorithm           yes                       yes                       no
Spatial reuse                yes                       no                        no

The links between nodes in SSA and FC-AL are point-to-point connections. In SSA, each link operates independently, and frames are transmitted using store-and-forward routing with a maximum delay of ten character times at each node if the intermediate nodes are allowed to forward the frame right away. In FC-AL, only one node can transmit at a time, and the transmitting node is selected by an arbitration process. An SSA node is connected with two in-ports and two out-ports; to achieve the full data bandwidth, both the inflow and the outflow of a node need to be utilized. If a host generates only read commands, only half of the total bandwidth may be utilized, because most of the traffic for read commands consists of data frames from the devices to the host. Because of the independent links in SSA, spatial reuse is possible, where more than one pair of nodes communicate with each other simultaneously. Spatial reuse is not possible in FC-AL, where only one node transmits data at a time. An SSA loop with multiple hosts may therefore potentially increase the aggregate throughput due to spatial reuse.

Both SSA and FC-AL offer flexible topologies. SSA can be configured as a string, a loop, or a switched topology. FC-AL supports the loop topology and the loop-with-switch-fabric topology; the Fibre Channel standard supports point-to-point and switched architectures, so multiple FC-AL subsystems can be connected by a Fibre Channel switch. Both SSA and FC-AL allow multiple hosts in a single loop, and data sharing among multiple hosts can potentially increase data availability.

SSA and FC-AL incur different protocol overheads. The protocol overhead includes two major parts, the framing efficiency and the access overhead. The framing efficiency is defined as the ratio of the data portion (or payload) of a frame to the total frame size. The maximum framing efficiency of SSA is about 94 percent, while FC-AL's framing efficiency is about 98 percent; these are derived from the maximum data size of 128 bytes out of a frame size of 136 bytes for SSA, and the maximum data size of 2112 bytes out of a frame size of 2148 bytes for FC-AL. The access overheads of the two technologies are also different. Each loop access in FC-AL requires an arbitration, which results in higher overhead, whereas SSA does not require any access arbitration. Therefore, although SSA has a lower framing efficiency, it has less overall protocol overhead (in terms of percentage) for small transactions, while FC-AL has the higher framing efficiency but higher access overhead for small transactions.
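The two framing-efficiency figures and the corresponding peak data rates can be recomputed directly from the frame sizes quoted above; the short fragment below (ours) does exactly that.

    # Recompute the framing efficiencies used in the comparison (128/136 for a
    # full SSA frame, 2112/2148 for a full FC-AL frame) and the peak data rates
    # on 20 MB/s and 100 MB/s links respectively.

    cases = {
        "SSA":   (128, 136, 20),       # payload bytes, frame bytes, link MB/s
        "FC-AL": (2112, 2148, 100),
    }
    for name, (payload, frame, link_mbs) in cases.items():
        eff = payload / frame
        print(f"{name}: efficiency {eff:.1%}, peak data rate {eff * link_mbs:.1f} MB/s")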

However, FC-AL has less protocol overhead and may have lower latency for large requests (or commands) because of its higher bandwidth.

Different flow control schemes are used to prevent buffer overflow at the receiving side. Flow control in SSA is enforced by credits between two adjacent nodes, whereas FC-AL uses buffer-to-buffer flow control between the source and destination nodes. In SSA, multiple connections can be outstanding at the same time, and SSA frames of different connections can be multiplexed and routed through a single link. FC-AL uses connection-oriented routing: a source node sets up a connection and transmits frames to the destination. While a source node in FC-AL transmits data frames, the destination node can send control information back to the source node. With FCP and the SCSI protocol, multiple I/O operations can also be outstanding at the same time.

The fairness algorithm of SSA is implemented using a token scheme and two quotas, the A quota and the B quota. The values of the A quota and B quota are important for enforcing fair sharing of the link bandwidth. When A < B, upstream nodes can send more data than downstream ones because the upstream nodes have inherently higher priority than the downstream nodes. When the loop is heavily loaded, the downstream nodes are allowed to transfer only up to the A quota if the upstream nodes keep sending data; the upstream nodes, on the other hand, use the data bandwidth up to the B quota when it is available. The fairness algorithm in SSA is a mandatory part of the SSA protocol, whereas the fairness algorithm in FC-AL is an optional feature. The FC-AL fairness algorithm is enforced by keeping state information at each node: within a single access window, a node can win arbitration only once. The fairness algorithm in FC-AL can be enforced partially or fully; since the hosts in a loop need to send out commands promptly, they may choose not to run the fairness algorithm. The fairness algorithm in FC-AL guarantees an equal number of opportunities to access the loop; however, an equal number of opportunities does not translate into an equal amount of traffic for all nodes.

SSA in a loop provides redundant paths to a device and can tolerate a single link failure. A multiple-host configuration in SSA offers fault tolerance against host, link, and adaptor failures. FC-AL with a single loop does not provide fault tolerance against a link failure; hence FC-AL is often configured with dual loops. An FC-AL configuration with dual loops and multiple hosts offers fault tolerance against host, link, and adaptor failures.


CS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP CS 5520/ECE 5590NA: Network Architecture I Spring 2008 Lecture 13: UDP and TCP Most recent lectures discussed mechanisms to make better use of the IP address space, Internet control messages, and layering

More information

INTERNATIONAL STANDARD

INTERNATIONAL STANDARD INTERNATIONAL STANDARD ISO/IEC 9314-7 First edition 1998-08 Information technology Fibre distributed data interface (FDDI) Part 7: Physical Layer Protocol (PHY-2) Reference number ISO/IEC 9314-7:1998(E)

More information

Principles behind data link layer services

Principles behind data link layer services Data link layer Goals: Principles behind data link layer services Error detection, correction Sharing a broadcast channel: Multiple access Link layer addressing Reliable data transfer, flow control: Done!

More information

Research and Analysis of Flow Control Mechanism for Transport Protocols of the SpaceWire Onboard Networks

Research and Analysis of Flow Control Mechanism for Transport Protocols of the SpaceWire Onboard Networks Research and Analysis of Flow Control Mechanism for Transport Protocols of the SpaceWire Onboard Networks Nikolay Sinyov, Valentin Olenev, Irina Lavrovskaya, Ilya Korobkov {nikolay.sinyov, valentin.olenev,

More information

Lesson 1: Network Communications

Lesson 1: Network Communications Lesson 1: Network Communications This lesson introduces the basic building blocks of network communications and some of the structures used to construct data networks. There are many different kinds of

More information

Local Area Networks (LANs) SMU CSE 5344 /

Local Area Networks (LANs) SMU CSE 5344 / Local Area Networks (LANs) SMU CSE 5344 / 7344 1 LAN/MAN Technology Factors Topology Transmission Medium Medium Access Control Techniques SMU CSE 5344 / 7344 2 Topologies Topology: the shape of a communication

More information

2 Network Basics. types of communication service. how communication services are implemented. network performance measures. switching.

2 Network Basics. types of communication service. how communication services are implemented. network performance measures. switching. 2 Network Basics types of communication service how communication services are implemented switching multiplexing network performance measures 1 2.1 Types of service in a layered network architecture connection-oriented:

More information

Media Access Control (MAC) Sub-layer and Ethernet

Media Access Control (MAC) Sub-layer and Ethernet Media Access Control (MAC) Sub-layer and Ethernet Dr. Sanjay P. Ahuja, Ph.D. Fidelity National Financial Distinguished Professor of CIS School of Computing, UNF MAC Sub-layer The MAC sub-layer is a sub-layer

More information

Advantages and disadvantages

Advantages and disadvantages Advantages and disadvantages Advantages Disadvantages Asynchronous transmission Simple, doesn't require synchronization of both communication sides Cheap, timing is not as critical as for synchronous transmission,

More information

Distributed Systems. Pre-Exam 1 Review. Paul Krzyzanowski. Rutgers University. Fall 2015

Distributed Systems. Pre-Exam 1 Review. Paul Krzyzanowski. Rutgers University. Fall 2015 Distributed Systems Pre-Exam 1 Review Paul Krzyzanowski Rutgers University Fall 2015 October 2, 2015 CS 417 - Paul Krzyzanowski 1 Selected Questions From Past Exams October 2, 2015 CS 417 - Paul Krzyzanowski

More information

Chapter Seven. Local Area Networks: Part 1. Data Communications and Computer Networks: A Business User s Approach Seventh Edition

Chapter Seven. Local Area Networks: Part 1. Data Communications and Computer Networks: A Business User s Approach Seventh Edition Chapter Seven Local Area Networks: Part 1 Data Communications and Computer Networks: A Business User s Approach Seventh Edition After reading this chapter, you should be able to: State the definition of

More information

CS/ECE 438: Communication Networks for Computers Spring 2018 Midterm Examination Online

CS/ECE 438: Communication Networks for Computers Spring 2018 Midterm Examination Online 1 CS/ECE 438: Communication Networks for Computers Spring 2018 Midterm Examination Online Solutions 1. General Networking a. In traditional client-server communication using TCP, a new socket is created.

More information

Performance Comparison Between AAL1, AAL2 and AAL5

Performance Comparison Between AAL1, AAL2 and AAL5 The University of Kansas Technical Report Performance Comparison Between AAL1, AAL2 and AAL5 Raghushankar R. Vatte and David W. Petr ITTC-FY1998-TR-13110-03 March 1998 Project Sponsor: Sprint Corporation

More information

MODULE: NETWORKS MODULE CODE: CAN1102C. Duration: 2 Hours 15 Mins. Instructions to Candidates:

MODULE: NETWORKS MODULE CODE: CAN1102C. Duration: 2 Hours 15 Mins. Instructions to Candidates: BSc.(Hons) Computer Science with Network Security BEng (Hons) Telecommunications Cohort: BCNS/17B/FT Examinations for 2017-2018 / Semester 2 Resit Examinations for BCNS/15A/FT, BTEL/15B/FT & BTEL/16B/FT

More information

UNIT-II OVERVIEW OF PHYSICAL LAYER SWITCHING & MULTIPLEXING

UNIT-II OVERVIEW OF PHYSICAL LAYER SWITCHING & MULTIPLEXING 1 UNIT-II OVERVIEW OF PHYSICAL LAYER SWITCHING & MULTIPLEXING Syllabus: Physical layer and overview of PL Switching: Multiplexing: frequency division multiplexing, wave length division multiplexing, synchronous

More information

EEC-484/584 Computer Networks

EEC-484/584 Computer Networks EEC-484/584 Computer Networks Lecture 13 wenbing@ieee.org (Lecture nodes are based on materials supplied by Dr. Louise Moser at UCSB and Prentice-Hall) Outline 2 Review of lecture 12 Routing Congestion

More information

General comments on candidates' performance

General comments on candidates' performance BCS THE CHARTERED INSTITUTE FOR IT BCS Higher Education Qualifications BCS Level 5 Diploma in IT April 2018 Sitting EXAMINERS' REPORT Computer Networks General comments on candidates' performance For the

More information

Expected Time: 90 min PART-A Max Marks: 42

Expected Time: 90 min PART-A Max Marks: 42 Birla Institute of Technology & Science, Pilani First Semester 2010-2011 Computer Networks (BITS C481) Comprehensive Examination Thursday, December 02, 2010 (AN) Duration: 3 Hrs Weightage: 40% [80M] Instructions-:

More information

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 Network Working Group Request for Comments: 969 David D. Clark Mark L. Lambert Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 1. STATUS OF THIS MEMO This RFC suggests a proposed protocol

More information

Today. Last Time. Motivation. CAN Bus. More about CAN. What is CAN?

Today. Last Time. Motivation. CAN Bus. More about CAN. What is CAN? Embedded networks Characteristics Requirements Simple embedded LANs Bit banged SPI I2C LIN Ethernet Last Time CAN Bus Intro Low-level stuff Frame types Arbitration Filtering Higher-level protocols Today

More information

6.9. Communicating to the Outside World: Cluster Networking

6.9. Communicating to the Outside World: Cluster Networking 6.9 Communicating to the Outside World: Cluster Networking This online section describes the networking hardware and software used to connect the nodes of cluster together. As there are whole books and

More information

Quality of Service (QoS)

Quality of Service (QoS) Quality of Service (QoS) The Internet was originally designed for best-effort service without guarantee of predictable performance. Best-effort service is often sufficient for a traffic that is not sensitive

More information

IBM Almaden Research Center, at regular intervals to deliver smooth playback of video streams. A video-on-demand

IBM Almaden Research Center, at regular intervals to deliver smooth playback of video streams. A video-on-demand 1 SCHEDULING IN MULTIMEDIA SYSTEMS A. L. Narasimha Reddy IBM Almaden Research Center, 650 Harry Road, K56/802, San Jose, CA 95120, USA ABSTRACT In video-on-demand multimedia systems, the data has to be

More information

Cisco Cisco Certified Network Associate (CCNA)

Cisco Cisco Certified Network Associate (CCNA) Cisco 200-125 Cisco Certified Network Associate (CCNA) http://killexams.com/pass4sure/exam-detail/200-125 Question: 769 Refer to exhibit: Which destination addresses will be used by Host A to send data

More information

Chapter 3. The Data Link Layer. Wesam A. Hatamleh

Chapter 3. The Data Link Layer. Wesam A. Hatamleh Chapter 3 The Data Link Layer The Data Link Layer Data Link Layer Design Issues Error Detection and Correction Elementary Data Link Protocols Sliding Window Protocols Example Data Link Protocols The Data

More information

Medium Access Protocols

Medium Access Protocols Medium Access Protocols Summary of MAC protocols What do you do with a shared media? Channel Partitioning, by time, frequency or code Time Division,Code Division, Frequency Division Random partitioning

More information

CS321: Computer Networks Introduction to Computer Networks and Internet

CS321: Computer Networks Introduction to Computer Networks and Internet CS321: Computer Networks Introduction to Computer Networks and Internet Dr. Manas Khatua Assistant Professor Dept. of CSE IIT Jodhpur E-mail: manaskhatua@iitj.ac.in What is Data Communication? Data communications

More information

Homework 1. Question 1 - Layering. CSCI 1680 Computer Networks Fonseca

Homework 1. Question 1 - Layering. CSCI 1680 Computer Networks Fonseca CSCI 1680 Computer Networks Fonseca Homework 1 Due: 27 September 2012, 4pm Question 1 - Layering a. Why are networked systems layered? What are the advantages of layering? Are there any disadvantages?

More information

Network Management & Monitoring

Network Management & Monitoring Network Management & Monitoring Network Delay These materials are licensed under the Creative Commons Attribution-Noncommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/) End-to-end

More information

Overview of Networks

Overview of Networks CMPT765/408 08-1 Overview of Networks Qianping Gu 1 Overview of Networks This note is mainly based on Chapters 1-2 of High Performance of Communication Networks by J. Walrand and P. Pravin, 2nd ed, and

More information

INTRODUCTORY COMPUTER

INTRODUCTORY COMPUTER INTRODUCTORY COMPUTER NETWORKS TYPES OF NETWORKS Faramarz Hendessi Introductory Computer Networks Lecture 4 Fall 2010 Isfahan University of technology Dr. Faramarz Hendessi 2 Types of Networks Circuit

More information

Module 16: Distributed System Structures

Module 16: Distributed System Structures Chapter 16: Distributed System Structures Module 16: Distributed System Structures Motivation Types of Network-Based Operating Systems Network Structure Network Topology Communication Structure Communication

More information

The Avalanche Myrinet Simulation Package. University of Utah, Salt Lake City, UT Abstract

The Avalanche Myrinet Simulation Package. University of Utah, Salt Lake City, UT Abstract The Avalanche Myrinet Simulation Package User Manual for V. Chen-Chi Kuo, John B. Carter fchenchi, retracg@cs.utah.edu WWW: http://www.cs.utah.edu/projects/avalanche UUCS-96- Department of Computer Science

More information

Distributed Queue Dual Bus

Distributed Queue Dual Bus Distributed Queue Dual Bus IEEE 802.3 to 802.5 protocols are only suited for small LANs. They cannot be used for very large but non-wide area networks. IEEE 802.6 DQDB is designed for MANs It can cover

More information

Introduction to Protocols

Introduction to Protocols Chapter 6 Introduction to Protocols 1 Chapter 6 Introduction to Protocols What is a Network Protocol? A protocol is a set of rules that governs the communications between computers on a network. These

More information

Growth. Individual departments in a university buy LANs for their own machines and eventually want to interconnect with other campus LANs.

Growth. Individual departments in a university buy LANs for their own machines and eventually want to interconnect with other campus LANs. Internetworking Multiple networks are a fact of life: Growth. Individual departments in a university buy LANs for their own machines and eventually want to interconnect with other campus LANs. Fault isolation,

More information

Summary of MAC protocols

Summary of MAC protocols Summary of MAC protocols What do you do with a shared media? Channel Partitioning, by time, frequency or code Time Division, Code Division, Frequency Division Random partitioning (dynamic) ALOHA, S-ALOHA,

More information

EEC-484/584 Computer Networks

EEC-484/584 Computer Networks EEC-484/584 Computer Networks Lecture 2 Wenbing Zhao wenbing@ieee.org (Lecture nodes are based on materials supplied by Dr. Louise Moser at UCSB and Prentice-Hall) Misc. Interested in research? Secure

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction In a packet-switched network, packets are buffered when they cannot be processed or transmitted at the rate they arrive. There are three main reasons that a router, with generic

More information

Data Link Technology. Suguru Yamaguchi Nara Institute of Science and Technology Department of Information Science

Data Link Technology. Suguru Yamaguchi Nara Institute of Science and Technology Department of Information Science Data Link Technology Suguru Yamaguchi Nara Institute of Science and Technology Department of Information Science Agenda Functions of the data link layer Technologies concept and design error control flow

More information

Internetworking Models The OSI Reference Model

Internetworking Models The OSI Reference Model Internetworking Models When networks first came into being, computers could typically communicate only with computers from the same manufacturer. In the late 1970s, the Open Systems Interconnection (OSI)

More information

Real-Time (Paradigms) (47)

Real-Time (Paradigms) (47) Real-Time (Paradigms) (47) Memory: Memory Access Protocols Tasks competing for exclusive memory access (critical sections, semaphores) become interdependent, a common phenomenon especially in distributed

More information

Computer-System Organization (cont.)

Computer-System Organization (cont.) Computer-System Organization (cont.) Interrupt time line for a single process doing output. Interrupts are an important part of a computer architecture. Each computer design has its own interrupt mechanism,

More information

Configuring QoS CHAPTER

Configuring QoS CHAPTER CHAPTER 34 This chapter describes how to use different methods to configure quality of service (QoS) on the Catalyst 3750 Metro switch. With QoS, you can provide preferential treatment to certain types

More information

OceanStor 9000 InfiniBand Technical White Paper. Issue V1.01 Date HUAWEI TECHNOLOGIES CO., LTD.

OceanStor 9000 InfiniBand Technical White Paper. Issue V1.01 Date HUAWEI TECHNOLOGIES CO., LTD. OceanStor 9000 Issue V1.01 Date 2014-03-29 HUAWEI TECHNOLOGIES CO., LTD. Copyright Huawei Technologies Co., Ltd. 2014. All rights reserved. No part of this document may be reproduced or transmitted in

More information

Data Communication & Computer Networks INFO

Data Communication & Computer Networks INFO Data Communication & Computer Networks INFO Instructor: Dr. A. SARI Department: Management Information Systems Course Code: MIS 305 Academic Term: 2013/2014 Fall Title: Data Communication & Computer Networks

More information

Chapter Four. Making Connections. Data Communications and Computer Networks: A Business User s Approach Seventh Edition

Chapter Four. Making Connections. Data Communications and Computer Networks: A Business User s Approach Seventh Edition Chapter Four Making Connections Data Communications and Computer Networks: A Business User s Approach Seventh Edition After reading this chapter, you should be able to: List the four components of all

More information

Chapter 15 Local Area Network Overview

Chapter 15 Local Area Network Overview Chapter 15 Local Area Network Overview LAN Topologies Bus and Tree Bus: stations attach through tap to bus full duplex allows transmission and reception transmission propagates throughout medium heard

More information

Snia S Storage Networking Management/Administration.

Snia S Storage Networking Management/Administration. Snia S10-200 Storage Networking Management/Administration http://killexams.com/exam-detail/s10-200 QUESTION: 85 What are two advantages of over-subscription? (Choose two.) A. saves on ISL links B. decreases

More information

Unit 2 Packet Switching Networks - II

Unit 2 Packet Switching Networks - II Unit 2 Packet Switching Networks - II Dijkstra Algorithm: Finding shortest path Algorithm for finding shortest paths N: set of nodes for which shortest path already found Initialization: (Start with source

More information

Administrivia. CMSC 411 Computer Systems Architecture Lecture 19 Storage Systems, cont. Disks (cont.) Disks - review

Administrivia. CMSC 411 Computer Systems Architecture Lecture 19 Storage Systems, cont. Disks (cont.) Disks - review Administrivia CMSC 411 Computer Systems Architecture Lecture 19 Storage Systems, cont. Homework #4 due Thursday answers posted soon after Exam #2 on Thursday, April 24 on memory hierarchy (Unit 4) and

More information

Modes of Transfer. Interface. Data Register. Status Register. F= Flag Bit. Fig. (1) Data transfer from I/O to CPU

Modes of Transfer. Interface. Data Register. Status Register. F= Flag Bit. Fig. (1) Data transfer from I/O to CPU Modes of Transfer Data transfer to and from peripherals may be handled in one of three possible modes: A. Programmed I/O B. Interrupt-initiated I/O C. Direct memory access (DMA) A) Programmed I/O Programmed

More information

1999, Scott F. Midkiff

1999, Scott F. Midkiff Lecture Topics Direct Link Networks: Multiaccess Protocols (.7) Multiaccess control IEEE 80.5 Token Ring and FDDI CS/ECpE 556: Computer Networks Originally by Scott F. Midkiff (ECpE) Modified by Marc Abrams

More information

Direct-Attached Storage (DAS) is an architecture

Direct-Attached Storage (DAS) is an architecture Chapter 5 Direct-Attached Storage and Introduction to SCSI Direct-Attached Storage (DAS) is an architecture where storage connects directly KEY CONCEPTS to servers. Applications access data from Internal

More information

Data and Computer Communications. Protocols and Architecture

Data and Computer Communications. Protocols and Architecture Data and Computer Communications Protocols and Architecture Characteristics Direct or indirect Monolithic or structured Symmetric or asymmetric Standard or nonstandard Means of Communication Direct or

More information

Concept Questions Demonstrate your knowledge of these concepts by answering the following questions in the space provided.

Concept Questions Demonstrate your knowledge of these concepts by answering the following questions in the space provided. 83 Chapter 6 Ethernet Technologies and Ethernet Switching Ethernet and its associated IEEE 802.3 protocols are part of the world's most important networking standards. Because of the great success of the

More information

Medium Access Control. IEEE , Token Rings. CSMA/CD in WLANs? Ethernet MAC Algorithm. MACA Solution for Hidden Terminal Problem

Medium Access Control. IEEE , Token Rings. CSMA/CD in WLANs? Ethernet MAC Algorithm. MACA Solution for Hidden Terminal Problem Medium Access Control IEEE 802.11, Token Rings Wireless channel is a shared medium Need access control mechanism to avoid interference Why not CSMA/CD? 9/15/06 CS/ECE 438 - UIUC, Fall 2006 1 9/15/06 CS/ECE

More information

Module 15: Network Structures

Module 15: Network Structures Module 15: Network Structures Background Topology Network Types Communication Communication Protocol Robustness Design Strategies 15.1 A Distributed System 15.2 Motivation Resource sharing sharing and

More information

Extensions to RTP to support Mobile Networking: Brown, Singh 2 within the cell. In our proposed architecture [3], we add a third level to this hierarc

Extensions to RTP to support Mobile Networking: Brown, Singh 2 within the cell. In our proposed architecture [3], we add a third level to this hierarc Extensions to RTP to support Mobile Networking Kevin Brown Suresh Singh Department of Computer Science Department of Computer Science University of South Carolina Department of South Carolina Columbia,

More information

1. Define Peripherals. Explain I/O Bus and Interface Modules. Peripherals: Input-output device attached to the computer are also called peripherals.

1. Define Peripherals. Explain I/O Bus and Interface Modules. Peripherals: Input-output device attached to the computer are also called peripherals. 1. Define Peripherals. Explain I/O Bus and Interface Modules. Peripherals: Input-output device attached to the computer are also called peripherals. A typical communication link between the processor and

More information

Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System

Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System Distributed Scheduling for the Sombrero Single Address Space Distributed Operating System Donald S. Miller Department of Computer Science and Engineering Arizona State University Tempe, AZ, USA Alan C.

More information

Lecture 13. Storage, Network and Other Peripherals

Lecture 13. Storage, Network and Other Peripherals Lecture 13 Storage, Network and Other Peripherals 1 I/O Systems Processor interrupts Cache Processor & I/O Communication Memory - I/O Bus Main Memory I/O Controller I/O Controller I/O Controller Disk Disk

More information

NoC Test-Chip Project: Working Document

NoC Test-Chip Project: Working Document NoC Test-Chip Project: Working Document Michele Petracca, Omar Ahmad, Young Jin Yoon, Frank Zovko, Luca Carloni and Kenneth Shepard I. INTRODUCTION This document describes the low-power high-performance

More information

Storage Area Networks SAN. Shane Healy

Storage Area Networks SAN. Shane Healy Storage Area Networks SAN Shane Healy Objective/Agenda Provide a basic overview of what Storage Area Networks (SAN) are, what the constituent components are, and how these components fit together to deliver

More information

PLEASE READ CAREFULLY BEFORE YOU START

PLEASE READ CAREFULLY BEFORE YOU START Page 1 of 11 MIDTERM EXAMINATION #1 OCT. 16, 2013 COMPUTER NETWORKS : 03-60-367-01 U N I V E R S I T Y O F W I N D S O R S C H O O L O F C O M P U T E R S C I E N C E Fall 2013-75 minutes This examination

More information

Reducing SpaceWire Time-code Jitter

Reducing SpaceWire Time-code Jitter Reducing SpaceWire Time-code Jitter Barry M Cook 4Links Limited The Mansion, Bletchley Park, Milton Keynes, MK3 6ZP, UK Email: barry@4links.co.uk INTRODUCTION Standards ISO/IEC 14575[1] and IEEE 1355[2]

More information

Different network topologies

Different network topologies Network Topology Network topology is the arrangement of the various elements of a communication network. It is the topological structure of a network and may be depicted physically or logically. Physical

More information

UNIT 2 TRANSPORT LAYER

UNIT 2 TRANSPORT LAYER Network, Transport and Application UNIT 2 TRANSPORT LAYER Structure Page No. 2.0 Introduction 34 2.1 Objective 34 2.2 Addressing 35 2.3 Reliable delivery 35 2.4 Flow control 38 2.5 Connection Management

More information

Introduction to Input and Output

Introduction to Input and Output Introduction to Input and Output The I/O subsystem provides the mechanism for communication between the CPU and the outside world (I/O devices). Design factors: I/O device characteristics (input, output,

More information

1/29/2008. From Signals to Packets. Lecture 6 Datalink Framing, Switching. Datalink Functions. Datalink Lectures. Character and Bit Stuffing.

1/29/2008. From Signals to Packets. Lecture 6 Datalink Framing, Switching. Datalink Functions. Datalink Lectures. Character and Bit Stuffing. /9/008 From Signals to Packets Lecture Datalink Framing, Switching Peter Steenkiste Departments of Computer Science and Electrical and Computer Engineering Carnegie Mellon University Analog Signal Digital

More information

CS455: Introduction to Distributed Systems [Spring 2018] Dept. Of Computer Science, Colorado State University

CS455: Introduction to Distributed Systems [Spring 2018] Dept. Of Computer Science, Colorado State University CS 455: INTRODUCTION TO DISTRIBUTED SYSTEMS [NETWORKING] Shrideep Pallickara Computer Science Colorado State University Frequently asked questions from the previous class survey Why not spawn processes

More information

How to Choose the Right Bus for Your Measurement System

How to Choose the Right Bus for Your Measurement System 1 How to Choose the Right Bus for Your Measurement System Overview When you have hundreds of different data acquisition (DAQ) devices to choose from on a wide variety of buses, it can be difficult to select

More information

Interface The exit interface a packet will take when destined for a specific network.

Interface The exit interface a packet will take when destined for a specific network. The Network Layer The Network layer (also called layer 3) manages device addressing, tracks the location of devices on the network, and determines the best way to move data, which means that the Network

More information

Introduction to Open System Interconnection Reference Model

Introduction to Open System Interconnection Reference Model Chapter 5 Introduction to OSI Reference Model 1 Chapter 5 Introduction to Open System Interconnection Reference Model Introduction The Open Systems Interconnection (OSI) model is a reference tool for understanding

More information

Review for Chapter 4 R1,R2,R3,R7,R10,R11,R16,R17,R19,R22,R24, R26,R30 P1,P2,P4,P7,P10,P11,P12,P14,P15,P16,P17,P22,P24,P29,P30

Review for Chapter 4 R1,R2,R3,R7,R10,R11,R16,R17,R19,R22,R24, R26,R30 P1,P2,P4,P7,P10,P11,P12,P14,P15,P16,P17,P22,P24,P29,P30 Review for Chapter 4 R1,R2,R3,R7,R10,R11,R16,R17,R19,R22,R24, R26,R30 P1,P2,P4,P7,P10,P11,P12,P14,P15,P16,P17,P22,P24,P29,P30 R1. Let s review some of the terminology used in this textbook. Recall that

More information

RECOMMENDATION ITU-R BS.776 * Format for user data channel of the digital audio interface **

RECOMMENDATION ITU-R BS.776 * Format for user data channel of the digital audio interface ** Rec. ITU-R BS.776 1 RECOMMENDATION ITU-R BS.776 * Format for user data channel of the digital audio interface ** The ITU Radiocommunication Assembly considering (1992) a) that there is a real need for

More information

Advanced Computer Networks. Flow Control

Advanced Computer Networks. Flow Control Advanced Computer Networks 263 3501 00 Flow Control Patrick Stuedi Spring Semester 2017 1 Oriana Riva, Department of Computer Science ETH Zürich Last week TCP in Datacenters Avoid incast problem - Reduce

More information

1. Introduction 2. Methods for I/O Operations 3. Buses 4. Liquid Crystal Displays 5. Other Types of Displays 6. Graphics Adapters 7.

1. Introduction 2. Methods for I/O Operations 3. Buses 4. Liquid Crystal Displays 5. Other Types of Displays 6. Graphics Adapters 7. 1. Introduction 2. Methods for I/O Operations 3. Buses 4. Liquid Crystal Displays 5. Other Types of Displays 6. Graphics Adapters 7. Optical Discs 1 Introduction Electrical Considerations Data Transfer

More information