
4. Environment

This chapter describes the environment in which the RAMA file system was developed. The hardware consists of user computers (clients) that request reads and writes of file data from computers that host the disks (servers). The clients and servers are connected via a network switch. In traditional systems, the data path from a user to the devices (disks) is only through a backplane bus such as SCSI [3][4] or Fibre Channel [2]. In the RAMA file system, however, the network serves as the system backplane, similar to Network Attached Secure Disks [27].

I define a block as the smallest unit of file data that is read from or written to disk. A packet is a container that carries a data block and control information between users and servers over the attached network. A packet is relatively small when it delivers a request that carries no data, such as open; when file data is delivered, the size of the packet is the size of the data block plus some overhead bytes. The design calls for at most one data block to be carried in a packet, for several reasons:

1. It is simpler to implement (this is the main reason, but it does not conflict with the other reasons).

2. If one block is ready to be shipped, waiting for a second block delays the first block.

3. The probability of network errors is greater with large packets, or with bursting many small packets toward a single network port, a fact that became evident during my experiments, described in Section 4.1.1.
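Because each packet carries at most one block, the wire format stays trivial. The following C sketch shows what such a packet could look like; the header fields, their sizes, and the 8 KB block size are my illustrative assumptions, not the actual RAMA wire format:

    /* Hypothetical layout of a RAMA request/reply packet.
     * The field names and sizes are assumptions for illustration only. */
    #include <stdint.h>

    #define BLOCK_SIZE 8192   /* one file data block; 8 KB is one of the
                                 sizes evaluated later in this chapter */

    enum rama_op { OP_OPEN, OP_READ, OP_WRITE, OP_READ_REPLY, OP_WRITE_ACK };

    struct rama_header {          /* the "overhead bytes" of a data packet */
        uint32_t op;              /* request or reply type */
        uint32_t file_id;         /* which file */
        uint64_t block_num;       /* which block within the file */
        uint32_t data_len;        /* 0 for control packets such as open */
    };

    struct rama_packet {
        struct rama_header hdr;
        uint8_t data[BLOCK_SIZE]; /* at most one block, never more */
    };

A control packet such as open is sent as just the header; a data packet is the header plus exactly one block.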

The RAMA software (servers, clients, and the test and simulation programs) runs in user mode on Red Hat Linux 7.1 with the pthread library /lib/libpthread.so.0, the C library /lib/libc.so.6, and the linker /lib/ld-linux.so.2, and is written in C.

4.1. Network

File data moves between clients and servers through the attached network. Given a particular environment of specific equipment (commodity nodes and a network switch), the only network parameters left as variables are the packet size and a timer value for waiting on expected packets. The timer is discussed later in this section. Therefore, the only decision the system architect needs to make is what packet size makes the best use of the supporting network's capabilities. The packets that flow between clients and servers approximate the block size of the file data that is ultimately stored on disk. I now explore the related network protocols and examine the effect of packet size on network performance.

The system on which RAMA was developed consists of 12 nodes connected via a level 2 (802.3 MAC [5]) switch (3Com SuperStack II 10/100 3C16465A [1]), as shown in Figure 1. The 802.3 standard defines the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol, in which stations are connected by a shared network cable. Each station has separate transmit and receive attachments to the shared cable. Any station that wants to transmit a packet senses the cable (on the receive attachment) to see if any other station is transmitting. If it does not detect any other transmission, it transmits its own packet. A transmitting station continues to monitor the shared medium for possible collisions by receiving back its own transmitted bits and comparing them to what it sent, to see whether another station made a similar decision

and started transmitting at the same time, causing a collision. If the received and transmitted frames do not match, a collision is detected; the station aborts transmission of the packet immediately and sends instead a jamming signal, notifying all other stations on the cable that a collision has occurred. The jamming signal is long enough that all stations have an opportunity to be alerted. Next, the station waits for a short random period and tries to send the packet again.

Collisions depend on several factors:

1. The length of the cable connecting the attached stations. A shorter cable is better because the head of a packet starts arriving at the furthest station sooner, and hence can be detected sooner, causing other stations to wait before transmitting.

2. The packet size. The minimum is 64 bytes (for 10 Mb/sec), to prevent a station from completing the transmission of a short packet before its first bit has even reached the far end of the cable, where it may collide with another packet. A larger packet size is better.

3. The number of attached nodes. More nodes on the same cable are more likely to try to transmit simultaneously, possibly causing collisions.

A RAMA node is connected to the switch through a Network Interface Card (NIC): a cable connects a chip implementing the protocol on the NIC with a similar chip (a port) in the switch, effectively creating a network of two stations on the connecting cable, one station at the NIC and one at the switch. In the RAMA test environment, all the servers and some of the clients have two NICs; the rest of the clients have a single NIC. A detailed list of the NICs is given in Section 4.4. A node with two NICs creates two separate two-station networks with the switch, not one network of four stations.
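The CSMA/CD transmit procedure described above can be summarized in code. The sketch below is a toy simulation with binary exponential backoff; the carrier_sense(), transmit_and_check(), and wait_us() helpers are stubs standing in for the NIC hardware, and the retry limit and collision odds are assumptions for illustration:

    /* Simplified simulation of the CSMA/CD transmit procedure with binary
     * exponential backoff. The helpers are stubs, not NIC or kernel APIs. */
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_ATTEMPTS 16
    #define SLOT_US      51           /* one 512-bit slot time at 10 Mb/sec */

    static int carrier_sense(void)      { return 0; }  /* stub: cable idle */
    static int transmit_and_check(void) { return rand() % 4 == 0 ? -1 : 0; }
    static void wait_us(unsigned us)    { (void)us; }  /* stub: no-op delay */

    static int csma_cd_send(void)
    {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            while (carrier_sense())
                ;                        /* defer while another station talks */
            if (transmit_and_check() == 0)
                return 0;                /* no collision: frame went through */
            /* Collision detected: the jam signal has been sent; back off a
             * random number of slot times, doubling the range each attempt
             * (capped at 2^10 slots). */
            int k = attempt < 10 ? attempt : 10;
            printf("collision on attempt %d, backing off\n", attempt);
            wait_us((rand() % (1u << k)) * SLOT_US);
        }
        return -1;                       /* too many collisions: give up */
    }

    int main(void)
    {
        return csma_cd_send() == 0 ? 0 : 1;
    }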

To improve performance, the protocol was enhanced to create 802.3x [5], in which the collision detection circuitry no longer monitors the line and is used for data reception only. When there are only two stations on a cable, one wire pair is used for transmission and the other for reception. With a crossover cable, where the receive signal pin at one end of the cable is connected to the transmit pin at the other end, and vice versa (switches automatically cross over a straight-through cable), collisions are no longer a possibility, since each station transmits on a different pair of the cable. Some mechanism other than collision detection, such as a connection-oriented protocol, is then necessary to compensate for network errors. 802.3x is theoretically twice as fast as 802.3 because there are two parallel paths for data to flow. The two paths can only be used to transmit and receive in parallel (one path receives while the other transmits), not to transmit or to receive in parallel on both (both transmitting or both receiving). This improved protocol is called full duplex usage of each port. Each NIC in the RAMA network operates in full duplex mode (i.e., it can receive and transmit simultaneously).

The switch operates as a store and forward device: it receives a packet from a node on one of its ports (an input port), waits for the entire packet to be in the switch, validates the packet for CRC errors, and sends it to the destination node on one of the other ports (the output port) [82]. When multiple packets destined for different output ports arrive at the switch in parallel through different input ports, the store time in the switch is relatively short, because each packet is forwarded to a different output port, all in parallel. If, on the other hand, packets arrive simultaneously and are destined for the same output port (for example, when many clients write to the same server), all but one of the packets must wait and are sent with some delay. Each port queues incoming packets

from its input port in an input queue and forwards them in First In, First Out (FIFO) order. The packet at the head of a port's input FIFO may be unable to move because the output port through which it must be transmitted is busy with another packet. Further, under heavy uneven load, it is possible that not enough memory is available in the switch to queue the incoming packets, and they are dropped. The switch gives no indication of this condition, so it is important for the sending node to have a mechanism to detect packet loss.

Switches with input FIFOs (like the one used for RAMA in this environment) have an additional characteristic with great impact on overall performance, namely Head-Of-Line (HOL) blocking [73]. The problem manifests itself as follows. Suppose that port 1 has the following packets in its input FIFO: (1->3, 1->3, 1->3 (head)), and port 2 has: (2->4, 2->5, 2->3 (head)). Assume that port 3 is busy, so ports 1 and 2 are unable to send the packets at the heads of their input FIFOs to output port 3. We would want 2->4 and 2->5 to be moved ahead of 2->3 if ports 4 and/or 5 are idle. If the switch can operate only on the packet at the head of each input FIFO, then packet 2->3 at the head of port 2's input FIFO effectively blocks 2->4 and 2->5 from moving ahead of it and being forwarded to the idle output ports 4 and/or 5.

Scheduling inside the switch affects performance as well. Returning to the example in the previous paragraph: after some delay, port 3 becomes available; which packet will be output next to port 3, 1->3 or 2->3? The answer depends on the scheduling algorithm in the switch. Scheduling could be round robin or prioritized. In this particular example, we prefer that 2->3 move next; however, if the scheduling is prioritized, say by port number, then 1->3 will move ahead of 2->3, and 2->4 and 2->5 will have to wait even longer.
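A toy simulation makes the cost of head-of-line blocking concrete. The sketch below replays the example above on a switch that forwards only FIFO heads and resolves contention by port number; the one-packet-per-output-port-per-time-unit model is my simplifying assumption, not a description of the 3Com switch internals:

    /* Toy simulation of Head-Of-Line blocking: each input port may forward
     * only the packet at the head of its FIFO, each output port accepts one
     * packet per time unit, and contention is resolved by port number. */
    #include <stdio.h>

    #define NPORTS 6

    int main(void)
    {
        /* input FIFOs, head first: port 1: 1->3, 1->3, 1->3
         *                          port 2: 2->3, 2->5, 2->4  */
        int fifo1[] = {3, 3, 3}, fifo2[] = {3, 5, 4};
        int h1 = 0, h2 = 0, n1 = 3, n2 = 3, t = 0;

        while (h1 < n1 || h2 < n2) {
            int busy[NPORTS] = {0};
            t++;
            if (h1 < n1 && !busy[fifo1[h1]]) {   /* port 1 goes first: */
                busy[fifo1[h1]] = 1;             /* prioritized by port number */
                printf("t=%d: 1->%d\n", t, fifo1[h1++]);
            }
            if (h2 < n2 && !busy[fifo2[h2]]) {   /* head-only: 2->5 and 2->4 */
                busy[fifo2[h2]] = 1;             /* cannot bypass a blocked 2->3 */
                printf("t=%d: 2->%d\n", t, fifo2[h2++]);
            }
        }
        printf("drained in %d time units\n", t);
        return 0;
    }

With head-only forwarding the queues drain in six time units; a switch that could reach past a blocked head would finish in four.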

HOL blocking and switch scheduling algorithms greatly affect switch scaling. The aggregate switch performance scales as more and more ports participate in communication. However, the scaling is not linear, because some of the packets that could be transmitted are delayed behind HOL packets that cannot move and are holding all packets behind them in the input FIFO hostage. It is not exactly clear what scheduling algorithm is used by the switch in my experiments; however, that HOL blocking exists in the switch was confirmed by the manufacturer. It is also evident from the results of the switch scaling test, explained in Section 4.1.1.

Further advances in switch development came with protocol 802.3x [5], which gives participating stations some measure of flow control. If a node or a switch is unable to handle the incoming packets, it may send a special push back MAC frame requesting the sending station to hold off while congestion is cleared. The protocol states that stations must accept these frames but do not have to act upon them. Since it was easy to create a condition in which packets were lost with increasing frequency as the traffic in the switch increased, I conclude that the push back frames were ignored by some nodes, by the switch, or by both.

The frequency of lost packets also depends on the packet size. Since the amount of memory in the switch is finite, the switch can store more small packets than large ones. Hence, packet loss is more likely to occur with large packets than with small packets.

This is confirmed by my network tests, where I found that as the packet size increases, the network performance improves for a while; but as bigger and bigger packets flow through the switch, the performance deteriorates because some of the packets are lost (see Figure 4).

In general, read requests involve a short request packet (read x bytes from file y at position z) followed by a long reply packet (here are x bytes from file y at position z) from a server. Write requests are characterized by long request packets (here are x bytes for file y at position z) and short reply packets (got x bytes for file y at position z) from the server. Since queueing and lost packets are caused by excessive data on a switch port, reads are more likely to cause HOL blocking on the server switch ports, and writes on the client switch ports.

The RAMA software uses the UDP/IP protocol to send and receive packets, and since UDP/IP is best-effort delivery, every request by a node is followed by a response. If a response to a request does not arrive in a reasonable amount of time, the request is resubmitted. The time-out value is important for two reasons. First, if it is too short, there will be too many retransmissions, and unnecessary additional work will be done by the server node, repeating work that was just completed. If, however, the time-out value is too long, the requesting node sits idle, waiting longer than necessary before retransmitting a lost packet. My experiments show that a longer time-out is preferable to a shorter one: the unnecessary retries caused by a short time-out lower overall performance by wasting network bandwidth and server effort, whereas erring with a long time-out reduces the performance of only one specific node, the one whose packet got lost, while the rest of the nodes continue to use the network at full bandwidth.
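A minimal sketch of this request/retry discipline over UDP follows; the time-out value, retry limit, and message formats are assumptions for illustration, not RAMA's actual parameters:

    /* Send a request over UDP and wait for the reply, retransmitting on
     * time-out. A sketch of the request/response discipline described in
     * the text; values and formats are assumed. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <netinet/in.h>
    #include <unistd.h>

    ssize_t request_with_retry(int sock, const struct sockaddr_in *srv,
                               const void *req, size_t reqlen,
                               void *reply, size_t replylen,
                               long timeout_ms, int max_tries)
    {
        for (int try = 0; try < max_tries; try++) {
            sendto(sock, req, reqlen, 0,
                   (const struct sockaddr *)srv, sizeof(*srv));

            fd_set rd;
            FD_ZERO(&rd);
            FD_SET(sock, &rd);
            struct timeval tv = { timeout_ms / 1000,
                                  (timeout_ms % 1000) * 1000 };

            /* wait for the reply; on time-out, loop and retransmit */
            if (select(sock + 1, &rd, NULL, NULL, &tv) > 0)
                return recvfrom(sock, reply, replylen, 0, NULL, NULL);
        }
        return -1;   /* gave up: server unreachable or packets repeatedly lost */
    }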

One additional performance booster is the bonding of multiple NICs on a node [84]. When two NICs are attached to a node, bonding them creates a parallel interface that appears to the sending software as a single interface. The Maximum Transmission Unit (MTU) of the Ethernet network is 1,500 bytes; if a larger packet is transmitted, IP first divides it into multiple MTU-sized fragments and sends the fragments separately. When the fragments arrive at the destination node, they are reassembled into the original packet. With bonded NICs, two fragments can be transmitted into the switch simultaneously, potentially doubling the transmit speed.

Since all fragments of a packet are destined for the same output port of the switch, they are sent out of the switch in series, not in parallel. This can indirectly cause longer store times at the switch and HOL blocking. For example, (1->3, 1->4) in parallel with (2->3, 2->4) will empty the switch in four units of time, because 1->3 will block 2->3 and 1->4 will block 2->4; however, (1->3, 1->3) in parallel with (2->4, 2->4) will empty the switch in two units of time.

My experiments show that bonding affects scaling as well. The performance with a single server and a single client does not scale linearly with packet size; the scaling depends on the number of fragments that are sent in parallel. A two-fragment packet takes one unit of time to send on two bonded NICs, but a three-fragment packet takes two units of time, which is not linear scaling. This was confirmed by observing the packets with the tcpdump [78][83] utility.
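The fragment arithmetic behind these observations is plain ceiling division. A small check of the unit-time model in the text (my own illustration, not RAMA code):

    /* Fragments needed for a packet and transmit time over bonded NICs,
     * following the unit-time model in the text. */
    #include <stdio.h>

    #define MTU 1500

    static int fragments(int packet_bytes)
    {
        return (packet_bytes + MTU - 1) / MTU;       /* ceil(packet / MTU) */
    }

    static int bonded_units(int packet_bytes, int nics)
    {
        int f = fragments(packet_bytes);
        return (f + nics - 1) / nics;                /* ceil(fragments / NICs) */
    }

    int main(void)
    {
        int sizes[] = { 1024, 4096, 8192, 16384, 32768, 54916 };
        for (int i = 0; i < 6; i++)
            printf("%5d bytes: %2d fragments, %2d unit(s) on 2 bonded NICs\n",
                   sizes[i], fragments(sizes[i]), bonded_units(sizes[i], 2));
        return 0;
    }

For example, a packet of up to 3,000 bytes fits in two fragments and costs one time unit on two bonded NICs, while a 4,096-byte packet needs three fragments and costs two units, matching the non-linear scaling described above.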

The next section describes the tests that were performed to measure the actual performance of the RAMA testbed and to derive the desired packet size for the RAMA system.

4.1.1. Network Performance (Baseline)

The network and switch performance were tested for various packet and network sizes. The tests did not involve any disk activity or data copies. The criterion for choosing the optimal packet size is maximizing total network throughput without compromising performance through packet loss as the number of clients and servers scales up. Two types of tests were performed. Reads were simulated by the clients sending a 100-byte packet to the server(s) as a request, and the server responding with a packet of the size being tested. Writes were simulated by the clients sending a packet of the size being tested, and the server responding with a 100-byte reply. The tests included the following packet sizes (in bytes): 1,024, 4,096, 8,192, 16,384, 32,768, and 54,916. The first observation was that the switch cannot switch packets larger than 54,916 bytes; therefore, the largest packet size tested was 54,916 bytes.

A special note on the server NICs and the number and type of clients used in the tests: the servers (nodes 1, 2, 3, 4) have two NICs each but only a single process reading the ports. The clients are made up of several types of nodes, and the order of adding client processes to a test depends on the number of NICs they have. One client process is added to each node first; after all client nodes have one client process, a second client process is added to the nodes that have two NICs. Nodes 2, 3, and 4 are used as clients when they are not used as servers. In that case, they are the last nodes to be used as clients, first with one process each and then with a second process.

Figure 4 and Figure 7 show the network and switch performance for various network configurations. Performance is measured in megabytes (1,000,000 bytes) per second (MB/sec) or in packets per second (PKTps or PKT/sec).

Single Disk Server Write

The network traffic for writes demonstrates the queueing activity at the server ports, and also the loss of packets as the packets get larger. Figure 4 shows that a server can receive network data at a rate of about 12 MB/sec when at least two client processes are feeding it. This is achievable with all packet sizes that were tested. As the ratio of client to server ports increases, the probability of queueing at the ports increases as well, and the probability of a server port being idle decreases, because the port is more likely to have a packet available at all times; hence, throughput improves. However, when the client:server port ratio increases beyond the point where the switch can queue packets without loss, or when HOL blocking occurs, the switch saturates and packets are lost, causing overall network performance to deteriorate. The loss occurs at a lower throughput with large packets than with small packets.

As Figure 4 shows, packets of 54,916 bytes perform poorly and suffer massive packet loss as the ratio of client to server ports increases. This packet size is therefore not optimal for the RAMA file system.

Packets of 1,024 and 4,096 bytes did not have any loss, but did not scale well as the client:server port ratio increased. This is because more packets are needed to achieve the maximum throughput, and the per-packet system overhead therefore becomes the limiting factor

in the performance. These packet sizes are therefore acceptable but not optimal for the RAMA file system.

Packets of 32,768 bytes performed better than the 54,916-byte packets, but caused packet loss and therefore did not scale well as the client:server port ratio increased. This packet size is therefore acceptable but not optimal for the RAMA file system.

[Figure 4. Network Performance - Write Baseline (1 Server): Server With 2 NICs, 7 Clients With 2 NICs, 4 Clients With 1 NIC. The figure plots throughput (MB/sec) against the number of client NICs for packet sizes of 1KB, 4KB, 8KB, 16KB, 32KB, and 54KB.]

Packets of 8,192 and 16,384 bytes show the best overall performance at all client:server port ratios. The throughput scaled as the client:server port ratio increased, and there were no losses due to switch memory limitations. Packet sizes of 8,192 and 16,384 bytes are therefore acceptable for the RAMA file system. By increasing the socket buffer size, the performance can be improved, but the general trend stays as shown in the figure above.

Multiple Disk Servers Write

When there is more than one server in the network, the aggregate network performance increases; however, the performance of each individual server is reduced as more servers are added to the network. Figure 5 shows the performance of each individual server among all other servers as the total number of servers is increased, and Figure 6 shows the aggregate switch performance as it scales up when servers are added to the network. A single server can receive 8,192-byte packets at 20.7 MB/sec, but when two, three, or four servers are operating together in the network and competing for switch resources, the performance is reduced to 18.1, 17.5, and 16.6 MB/sec respectively.

The total network performance increases as servers are added to the network, but not linearly. This is due to several factors (here, input and output refer to input from the network to the switch and output from the switch to the network):

1. All clients and servers on the network share the memory in the network switch, so the average amount of memory available per port is reduced. This reduces the average queue length available at each port, which can cause packet loss.

2. The Head-Of-Line blocking effect in the switch occurs only when multiple input ports try to output to the same port, which is the case with multiple RAMA disk servers. Because each switch port has an input-from-the-network queue, a packet destined for an idle output port may be waiting behind a packet that cannot go out on a busy port. The more RAMA disk servers the file system has, the more often this incident may occur.

3. The order in which contentions on output ports are resolved may not be optimal. For example, if two ports have packets to send to the same output port and one of them has packets waiting behind its head packet while the other does not, the switch does not give preference to the port with the longer input queue.

[Figure 5. Individual Server Performance - Write Baseline: 8,192 bytes/packet. The figure plots per-server throughput (MB/sec) against the number of servers.]

[Figure 6. Aggregate Server Performance - Write Baseline: 8,192 bytes/packet. The figure plots aggregate throughput (MB/sec) against the number of servers.]

For these reasons, the switch is expected to scale, but not linearly, which is evident from these experiments.

Single Disk Server Read

The network traffic for reads demonstrates the lack of queueing activity at the ports, and also the lack of packet loss as the packets get larger. This is because in reading, the server port receives small request packets from multiple clients. These packets empty quickly and are not large enough to fill the switch memory. The server sends large reply packets to multiple ports, which all empty the switch memory in parallel.

Figure 7 shows that a server can send packets at a maximum rate of 24 MB/sec, but all tested packet sizes can achieve more than 20 MB/sec with at least three clients reading from

its port(s). The scalability depends on the number of fragments into which the packet is fragmented and on whether they are sent in parallel through the two server NICs. At all packet sizes, the read operation performs better than or equal to the write.

[Figure 7. Network Performance - Read Baseline (1 Server): Server With 2 NICs, 7 Clients With 2 NICs, 4 Clients With 1 NIC. The figure plots throughput (MB/sec) against the number of client NICs for packet sizes of 1KB, 4KB, 8KB, 16KB, 32KB, and 54KB.]

4.2. Disk

The disks in the RAMA system are Maxtor DiamondMax Plus 40 Ultra DMA 66 drives, model number 54098U8 (512 bytes per sector, 7,200 RPM rotation speed). A 2-gigabyte partition was carved out of the disk to use as a RAMA disk. The disk is divided into blocks that represent file data blocks.

When file data arrives at the server, it is saved in cache until some later time, when it is written to the physical disk. The delay is intentional: I want to transfer as much data as possible to the physical disk with every write command, in order to achieve better write throughput. When data is read from the disk, it is also copied to the cache, anticipating repeated requests for the same data block, which can then be satisfied from cache without another I/O from the physical disk. A data block arriving at the server is considered dirty. A dirty block contains the valid content for the block and supersedes any data on the physical disk for the same block. The cache is written to disk (cleaned) when any one of the following conditions occurs:

1. The cache is 100% full with dirty blocks and space is needed in the cache for newly arrived blocks (under duress).

2. The cache is 80-90% full with dirty blocks.

3. A specific block is dirty and has been in the cache longer than some threshold time.
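A sketch of this cleaning trigger in C, under my reading of the three conditions; the watermark, age threshold, and structure fields are illustrative assumptions, not RAMA's actual code:

    /* Decide whether the dirty-block cache should be cleaned (written to
     * disk), per the three trigger conditions above. Thresholds and field
     * names are assumptions for illustration. */
    #include <stdbool.h>
    #include <time.h>

    struct block_cache {
        int    nblocks;        /* total cache capacity in blocks        */
        int    ndirty;         /* how many blocks are currently dirty   */
        time_t oldest_dirty;   /* arrival time of the oldest dirty block */
    };

    #define HIGH_WATER  0.80   /* start cleaning at 80% dirty (condition 2) */
    #define MAX_AGE_SEC 30     /* dirty-block age threshold (condition 3)   */

    bool must_clean(const struct block_cache *c, time_t now)
    {
        if (c->ndirty == c->nblocks)                     /* condition 1: duress */
            return true;
        if (c->ndirty >= (int)(HIGH_WATER * c->nblocks)) /* condition 2 */
            return true;
        if (c->ndirty > 0 && now - c->oldest_dirty > MAX_AGE_SEC)
            return true;                                 /* condition 3: age */
        return false;
    }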

In considering the desired block size for the disk system, the following issues need to be addressed:

1. The cache needs to be cleaned at least as fast as the file data arrival rate at the server port. If not, the network bandwidth is not fully utilized.

2. When cleaning under duress, no requests are serviced until there is space in the cache. It is therefore desirable to limit the total cleaning time so that the wait for cache space is minimal; otherwise packets queue up at the server port, causing packet loss.

3. It is assumed that all writes are to random locations on the disk, and that is how the disk performance was tested. When cleaning is performed, under conditions 1, 2, or 3 above, the disk head is not expected to be at any specific position; moreover, the disk location to which a dirty block is written is random, because of the hashing function applied to the block number and file ID. If consecutive writes happen to go to consecutive disk locations, that is an added performance bonus, but it is not expected.

I now show the disk performance with various block sizes and then derive the optimal block size.

Disk Write Performance (Baseline)

I measured the write performance of the disk drives as stand-alone disks (no network overhead) with various seek distances between writes, varying the block size and the number of consecutive blocks in a write request. The consecutive blocks were an attempt to mimic the blocks of a stripe on a disk line. They were written via the writev command (vectored write), which allows data that is not at consecutive addresses in memory, but is destined for consecutive addresses on disk, to be written in a single kernel I/O call. The call takes a table of byte count and memory address pairs (vectors) describing where the data is in memory. This type of write saves kernel overhead and, possibly, a disk revolution.
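As an illustration of the vectored write just described, the sketch below gathers four scattered 8 KB buffers into one kernel call; the file name, block size, and vector count are assumptions for the example, not RAMA's actual layout:

    /* Write four blocks that are scattered in memory to consecutive disk
     * addresses with a single writev() call. Illustrative sketch only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #define BLOCK 8192
    #define NVEC  4

    int main(void)
    {
        struct iovec iov[NVEC];                /* (address, byte count) pairs */
        for (int i = 0; i < NVEC; i++) {
            iov[i].iov_base = calloc(1, BLOCK);/* non-contiguous buffers */
            iov[i].iov_len  = BLOCK;
        }

        int fd = open("rama_partition.img", O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        lseek(fd, 0, SEEK_SET);                /* seek once, then one I/O call */
        ssize_t n = writev(fd, iov, NVEC);
        printf("wrote %zd bytes in one kernel call\n", n);

        close(fd);
        return 0;
    }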

The test consisted of writing a total of 64MB of data in each test. To aid in debugging, each data block contained the file ID and the number of the block to which it belongs. No data copies were performed. The seek distances were generated by a random number generator. The results are shown in Figure 8. Modeling hardware in order to arrive at file layout decisions that lead to favorable performance is discussed in Severance [77]. The tests included the following block sizes (in bytes): 1,024, 2,048, 4,096, 8,192, 16,384, 32,768, 65,536, 131,072, and 262,144.

[Figure 8. Stand Alone Disk Writev Performance With Random Seek. The figure plots throughput (MB/sec) against the number of vectors per write for vector sizes from 1KB to 256KB.]

From the network performance test (Section 4.1.1) I learned that the servers can receive data at a maximum rate of 12 MB/sec; therefore I want the disk to operate at least at that rate. The disk write performance in Figure 8 shows that a block size of 2KB or less cannot achieve this goal even when writing 64 consecutive blocks. A block size of 4KB can achieve this goal when writing about 40 or more consecutive blocks, a block size of 8KB when writing 16 or more consecutive blocks, and a block size of 16KB with about four consecutive blocks. Larger block sizes achieve the minimum bandwidth constraint as well.

I now consider the number of blocks written in a single writev call. The more vectors the file system writes, the longer the write takes (for a given block size). If blocks are written under duress (when the cache is full and a cache buffer is needed), it is desirable to minimize this total write time so that incoming requests can be serviced sooner and the requesting client does not time out and retransmit. To illustrate this point, consider the example of a 16,384-byte block size. At full network capacity, one 16,384-byte block arrives every 1.5 ms. Looking at Figure 8, we see that writing 8 blocks of 16,384 bytes takes 10.8 ms, during which time about 7-8 new packets may have arrived at the port. A smaller vector count does not bring the disk performance up to the desired minimum of 12 MB/sec. If the vector count is greater, say 40 blocks, the write to disk takes 37 ms, during which time about 25 new packets may have queued at the server port. What we want is larger blocks, to write faster, but a low vector count, to avoid queueing at the server port. This trade-off is also the determining factor for the optimal stripe size, which is discussed later.
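The queue-buildup arithmetic in this example can be checked directly; the program below simply replays the figures quoted above:

    /* Check the block-arrival vs. writev-time arithmetic from the text:
     * at 12 MB/sec, how many new 16 KB packets arrive while a vectored
     * write of n blocks is in progress? Timing figures are from the text. */
    #include <stdio.h>

    int main(void)
    {
        double net_rate = 12e6;            /* bytes/sec arriving at the server */
        double block    = 16384.0;         /* bytes per block/packet */
        double interarrival_ms = block / net_rate * 1000.0;   /* ~1.4 ms */

        /* (vector count, measured write time in ms) pairs from Figure 8 */
        double cases[][2] = { { 8, 10.8 }, { 40, 37.0 } };

        printf("one block arrives every %.2f ms\n", interarrival_ms);
        for (int i = 0; i < 2; i++)
            printf("writev of %2.0f blocks takes %.1f ms -> ~%.0f packets queue\n",
                   cases[i][0], cases[i][1], cases[i][1] / interarrival_ms);
        return 0;
    }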

One additional consideration in choosing the optimal block size is file size. A large block size would result in wasted disk space for small files, where the entire file is smaller than one block. A small block size, however, does not penalize large files, since the stripe size is adjustable on a per-file basis, and large stripes can be created for large files for faster I/O. Under this scenario, the RAMA file system is suitable for any file size. The conclusion is therefore that the block size should be set to the smallest value that assures optimal performance of the disk and network. Based on the above discussion, the optimal block size for writing in the system under test is 8-16KB.

The explanation for the zig-zag shape of the write throughput at large block sizes is that the specific disk I am using has a 2MB track buffer controlled by the microcode in the disk itself; this is not an unusual phenomenon. When more than that amount of data is sent to the drive at a rate faster than the disk can empty the buffer, the sender (RAMA) is delayed. When 4MB of data is sent, there are two delays, and so on. This has no effect on the design of RAMA.

Disk Read Performance (Baseline)

I measured the read performance of the disk drives as stand-alone disks (no network overhead) with various seek distances between reads, varying the block size. The test consisted of reading one block from each seek position. No copies were performed. The seek distances were generated by a random number generator. The tests included the following block sizes (in bytes): 1,024, 2,048, 4,096, 8,192, 16,384, 32,768, 65,536, 131,072, and 262,144.

Read requests arrive at the server as single-block requests. The read results are shown in Figure 9. The figure shows that a larger block size produces better results; however, the results are significantly lower than the network read performance of 24 MB/sec. A block size of 8,192 bytes is read at 0.94 MB/sec, and 16,384 bytes at 1.84 MB/sec.

[Figure 9. Stand Alone Disk Read Performance With Random Seek. The figure plots throughput (MB/sec) against block sizes from 1K to 256K.]

It is valid to assume, however, that the read performance will be better when reading full stripes, because multiple file blocks are read from the same disk line, and most of the data will likely be in the disk cache, available without a disk head movement.
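These single-block numbers are consistent with a simple seek-dominated model in which random-read throughput is the block size divided by the sum of access time and transfer time. In the sketch below, the 8.5 ms access time and 30 MB/sec media rate are my assumed values for a drive of this class, not measured RAMA figures:

    /* Seek-dominated model of random single-block reads:
     * throughput = block_size / (access_time + block_size / media_rate).
     * The access time and media rate are assumed, plausible values for a
     * 7,200 RPM drive of this era, not measured figures. */
    #include <stdio.h>

    int main(void)
    {
        double access_s   = 0.0085;   /* average seek + rotational latency */
        double media_rate = 30e6;     /* sustained media transfer, bytes/sec */

        for (int kb = 1; kb <= 256; kb *= 2) {
            double bytes = kb * 1024.0;
            double tput  = bytes / (access_s + bytes / media_rate);
            printf("%4dK block: %6.2f MB/sec\n", kb, tput / 1e6);
        }
        return 0;
    }

With these assumptions the model predicts about 0.93 MB/sec at 8K and 1.81 MB/sec at 16K, close to the measured values, and it shows why doubling the block size roughly doubles random-read throughput while the block is small relative to the seek cost.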

4.3. Optimal Block Size

Based on the discussions in Section 4.1 and Section 4.2, we find that:

1. Block sizes of 1,024 and 2,048 bytes cannot write to disk at the average network throughput of 12 MB/sec with a reasonable vector count. These block sizes are therefore eliminated from further consideration. In addition, the disk drive industry is moving toward a larger native block size recorded on disk, from the currently pervasive 512 bytes to initially 4,096 bytes and, in the future, larger block sizes in powers of two [31]. This alone is enough reason to eliminate the 1,024- and 2,048-byte block sizes from consideration.

2. A block size of 4,096 bytes can write to disk at the average network throughput only with an unreasonable vector count of 50, and can write to the network at the average throughput of 12 MB/sec only if the client:server ratio is at least 3:1. This block size is therefore eliminated from further consideration.

3. A block size of 32,768 bytes can write to disk at the maximum network throughput with a 1:2 client:server port ratio, but when multiple servers are attached to the switch (which is the case with RAMA), the packet loss in the switch is excessive (not shown in the figures); even with only two servers on the network, the network performance deteriorates. This block size is therefore eliminated from further consideration.

4. A block size of 54,916 bytes can write to disk at the maximum network throughput with a 1:1 client:server ratio, but when multiple servers are attached to the switch, the packet loss in the switch is excessive and the network performance is extremely poor. This block size is therefore eliminated from further consideration.

5. Block sizes of 8,192 and 16,384 bytes show the best overall performance and will be used to demonstrate the performance of RAMA. On a different configuration, different values might be derived.

4.4. CPUs and Memory

The RAMA server nodes 1 through 4 have two Pentium III CPUs with 256K L2 cache running at 600 MHz, with one GB of RAM and two NICs. The memory is used for the programs and the block cache.

Client nodes 1 through 4 have two Pentium III CPUs with 256K L2 cache running at 1 GHz, with one GB of RAM and two NICs. The memory is used for the program and for holding transmitted blocks for parity calculation.

Client node 5 has two Pentium III CPUs with 256K L2 cache and 1,028 MB of RAM. This node has two NICs, but only one of them is available for RAMA; the second NIC is connected to a different network. The memory is used for the program and for holding transmitted blocks before parity is calculated.

Client nodes 6 through 8 have one Pentium II CPU with 512K L2 cache, 320 MB of RAM, and one NIC. The memory is used for the program and for holding transmitted blocks before parity is calculated.

Client nodes 9 through 11 are the server nodes that are not participating as servers in a given test; these nodes are never used as server and client in the same test. The memory is used for the program and for holding transmitted blocks before parity is calculated.

The only consideration to be made regarding the memory in the server is how much cache space is needed to hold blocks before they are written to disk. If the cache is too small, there might not be enough dirty blocks to fill the desired number of vectors when writing to disk, causing smaller writes to happen more often. In all tests, the cache could hold 8,000 blocks.
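As a quick sanity check of that cache size (my own arithmetic, not from the text):

    /* Memory consumed by an 8,000-block cache at the two candidate block
     * sizes, and how many full 8-vector writev calls it can supply. The
     * 8-vector figure is the example count from Section 4.2. */
    #include <stdio.h>

    int main(void)
    {
        int cache_blocks = 8000;
        int vectors_per_write = 8;
        int sizes[] = { 8192, 16384 };

        for (int i = 0; i < 2; i++) {
            double mb = (double)cache_blocks * sizes[i] / (1024 * 1024);
            printf("%5d-byte blocks: cache uses %.1f MB of the 1 GB RAM, "
                   "enough for %d vectored writes\n",
                   sizes[i], mb, cache_blocks / vectors_per_write);
        }
        return 0;
    }

At 8,192-byte blocks the cache occupies about 62.5 MB, and at 16,384-byte blocks about 125 MB, comfortably within the one GB of server RAM.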


More information

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition

Chapter 10: Mass-Storage Systems. Operating System Concepts 9 th Edition Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne 2013 Chapter 10: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space

More information

Operating Systems Memory Management. Mathieu Delalandre University of Tours, Tours city, France

Operating Systems Memory Management. Mathieu Delalandre University of Tours, Tours city, France Operating Systems Memory Management Mathieu Delalandre University of Tours, Tours city, France mathieu.delalandre@univ-tours.fr 1 Operating Systems Memory Management 1. Introduction 2. Contiguous memory

More information

ECEN Final Exam Fall Instructor: Srinivas Shakkottai

ECEN Final Exam Fall Instructor: Srinivas Shakkottai ECEN 424 - Final Exam Fall 2013 Instructor: Srinivas Shakkottai NAME: Problem maximum points your points Problem 1 10 Problem 2 10 Problem 3 20 Problem 4 20 Problem 5 20 Problem 6 20 total 100 1 2 Midterm

More information

ECE 5730 Memory Systems

ECE 5730 Memory Systems ECE 5730 Memory Systems Spring 2009 Command Scheduling Disk Caching Lecture 23: 1 Announcements Quiz 12 I ll give credit for #4 if you answered (d) Quiz 13 (last one!) on Tuesday Make-up class #2 Thursday,

More information

Lecture 11: Networks & Networking

Lecture 11: Networks & Networking Lecture 11: Networks & Networking Contents Distributed systems Network types Network standards ISO and TCP/IP network models Internet architecture IP addressing IP datagrams AE4B33OSS Lecture 11 / Page

More information

Interference avoidance in wireless multi-hop networks 1

Interference avoidance in wireless multi-hop networks 1 Interference avoidance in wireless multi-hop networks 1 Youwei Zhang EE228A Project Report, Spring 2006 1 Motivation Wireless networks share the same unlicensed parts of the radio spectrum with devices

More information

CSE 123: Computer Networks Alex C. Snoeren. HW 2 due Thursday 10/21!

CSE 123: Computer Networks Alex C. Snoeren. HW 2 due Thursday 10/21! CSE 123: Computer Networks Alex C. Snoeren HW 2 due Thursday 10/21! Finishing up media access Contention-free methods (rings) Moving beyond one wire Link technologies have limits on physical distance Also

More information

File. File System Implementation. File Metadata. File System Implementation. Direct Memory Access Cont. Hardware background: Direct Memory Access

File. File System Implementation. File Metadata. File System Implementation. Direct Memory Access Cont. Hardware background: Direct Memory Access File File System Implementation Operating Systems Hebrew University Spring 2009 Sequence of bytes, with no structure as far as the operating system is concerned. The only operations are to read and write

More information

Microsoft Exchange 2000 Front-End Server and SMTP Gateway Hardware Scalability Guide. White Paper

Microsoft Exchange 2000 Front-End Server and SMTP Gateway Hardware Scalability Guide. White Paper Microsoft Exchange 2000 Front-End Server and SMTP Gateway Hardware Scalability Guide White Paper Published: November 2000 Copyright The information contained in this document represents the current view

More information

Basic Reliable Transport Protocols

Basic Reliable Transport Protocols Basic Reliable Transport Protocols Do not be alarmed by the length of this guide. There are a lot of pictures. You ve seen in lecture that most of the networks we re dealing with are best-effort : they

More information

File. File System Implementation. Operations. Permissions and Data Layout. Storing and Accessing File Data. Opening a File

File. File System Implementation. Operations. Permissions and Data Layout. Storing and Accessing File Data. Opening a File File File System Implementation Operating Systems Hebrew University Spring 2007 Sequence of bytes, with no structure as far as the operating system is concerned. The only operations are to read and write

More information

Lecture 9: Bridging. CSE 123: Computer Networks Alex C. Snoeren

Lecture 9: Bridging. CSE 123: Computer Networks Alex C. Snoeren Lecture 9: Bridging CSE 123: Computer Networks Alex C. Snoeren Lecture 9 Overview Finishing up media access Ethernet Contention-free methods (rings) Moving beyond one wire Link technologies have limits

More information

Module 16: Distributed System Structures. Operating System Concepts 8 th Edition,

Module 16: Distributed System Structures. Operating System Concepts 8 th Edition, Module 16: Distributed System Structures, Silberschatz, Galvin and Gagne 2009 Chapter 16: Distributed System Structures Motivation Types of Network-Based Operating Systems Network Structure Network Topology

More information

Lecture 16: Network Layer Overview, Internet Protocol

Lecture 16: Network Layer Overview, Internet Protocol Lecture 16: Network Layer Overview, Internet Protocol COMP 332, Spring 2018 Victoria Manfredi Acknowledgements: materials adapted from Computer Networking: A Top Down Approach 7 th edition: 1996-2016,

More information

CS 455/555 Intro to Networks and Communications. Link Layer

CS 455/555 Intro to Networks and Communications. Link Layer CS 455/555 Intro to Networks and Communications Link Layer Dr. Michele Weigle Department of Computer Science Old Dominion University mweigle@cs.odu.edu http://www.cs.odu.edu/~mweigle/cs455-s13 1 Link Layer

More information

Managing Caching Performance and Differentiated Services

Managing Caching Performance and Differentiated Services CHAPTER 10 Managing Caching Performance and Differentiated Services This chapter explains how to configure TCP stack parameters for increased performance ant throughput and how to configure Type of Service

More information

CSCD 330 Network Programming

CSCD 330 Network Programming CSCD 330 Network Programming Network Superhighway Spring 2018 Lecture 13 Network Layer Reading: Chapter 4 Some slides provided courtesy of J.F Kurose and K.W. Ross, All Rights Reserved, copyright 1996-2007

More information

Data Link Control Protocols

Data Link Control Protocols Protocols : Introduction to Data Communications Sirindhorn International Institute of Technology Thammasat University Prepared by Steven Gordon on 23 May 2012 Y12S1L07, Steve/Courses/2012/s1/its323/lectures/datalink.tex,

More information

Strengthening Unlicensed Band Wireless Backhaul

Strengthening Unlicensed Band Wireless Backhaul be in charge Strengthening Unlicensed Band Wireless Backhaul Use TDD/TDMA Based Channel Access Mechanism WHITE PAPER Strengthening Unlicensed Band Wireless Backhaul: Use TDD/TDMA Based Channel Access Mechanism

More information

Chapter 8 & Chapter 9 Main Memory & Virtual Memory

Chapter 8 & Chapter 9 Main Memory & Virtual Memory Chapter 8 & Chapter 9 Main Memory & Virtual Memory 1. Various ways of organizing memory hardware. 2. Memory-management techniques: 1. Paging 2. Segmentation. Introduction Memory consists of a large array

More information

Network Interface Architecture and Prototyping for Chip and Cluster Multiprocessors

Network Interface Architecture and Prototyping for Chip and Cluster Multiprocessors University of Crete School of Sciences & Engineering Computer Science Department Master Thesis by Michael Papamichael Network Interface Architecture and Prototyping for Chip and Cluster Multiprocessors

More information

Lesson 2-3: The IEEE x MAC Layer

Lesson 2-3: The IEEE x MAC Layer Module 2: Establishing Wireless Connectivity Lesson 2-3: The IEEE 802.11x MAC Layer Lesson Overview This lesson describes basic IEEE 802.11x MAC operation, beginning with an explanation of contention schemes

More information

Conges'on. Last Week: Discovery and Rou'ng. Today: Conges'on Control. Distributed Resource Sharing. Conges'on Collapse. Conges'on

Conges'on. Last Week: Discovery and Rou'ng. Today: Conges'on Control. Distributed Resource Sharing. Conges'on Collapse. Conges'on Last Week: Discovery and Rou'ng Provides end-to-end connectivity, but not necessarily good performance Conges'on logical link name Michael Freedman COS 461: Computer Networks Lectures: MW 10-10:50am in

More information

CSE 120. Overview. July 27, Day 8 Input/Output. Instructor: Neil Rhodes. Hardware. Hardware. Hardware

CSE 120. Overview. July 27, Day 8 Input/Output. Instructor: Neil Rhodes. Hardware. Hardware. Hardware CSE 120 July 27, 2006 Day 8 Input/Output Instructor: Neil Rhodes How hardware works Operating Systems Layer What the kernel does API What the programmer does Overview 2 Kinds Block devices: read/write

More information

Episode 4. Flow and Congestion Control. Baochun Li Department of Electrical and Computer Engineering University of Toronto

Episode 4. Flow and Congestion Control. Baochun Li Department of Electrical and Computer Engineering University of Toronto Episode 4. Flow and Congestion Control Baochun Li Department of Electrical and Computer Engineering University of Toronto Recall the previous episode Detailed design principles in: The link layer The network

More information