
TOWARD ENABLING SERVER-CENTRIC NETWORKS

A DISSERTATION SUBMITTED TO THE UNIVERSITY OF MANCHESTER FOR THE DEGREE OF MASTER OF SCIENCE IN THE FACULTY OF SCIENCE AND ENGINEERING

2016

WRITTEN BY: BRIAN RAMPRASAD

THE SCHOOL OF COMPUTER SCIENCE

CONTENTS

Abstract
List of Figures
Declaration
Intellectual Property Statement
1 Introduction and Motivation
   1.1 Research Questions
   1.2 Aims and Objectives
   1.3 Report Structure
2 Background and Literature Review
   2.1 Architecture
      2.1.1 Switch-Centric Architecture
      2.1.2 Server-Centric Architecture
      2.1.3 Datacentre Traffic Patterns
      2.1.4 Forwarding Algorithms
   2.2 Distributed Applications
      2.2.1 Multitier
      2.2.2 MapReduce
   2.3 Evaluation Methodology
      2.3.1 Metrics
      2.3.2 Benchmarking
      2.3.3 Software and Hardware
   2.4 Summary of Research Findings and Discussion
3 Server Centric System Design
   3.1 Network Architecture
   3.2 Software Architecture
      3.2.1 The Routing Layer
         Packet Design
         Node Discovery Engine
         Packet Forwarding Engine
   3.3 System Design Summary
4 Implementation
   4.1 Requirements
   4.2 Implementation Tools
      4.2.1 Hardware
      4.2.2 Software
   4.3 Software System Implementation
      4.3.1 Setting Up the Experimental Environment
      4.3.2 First Phase of Implementation (Prototyping)
      4.3.3 Second Phase: Implementing the Full System
         Libraries
         Server Implementation
         Initialization Process
         Socket Multiplexing
         Message Encoding/Decoding
         Packet Routing
         Receiver Implementation
         Packet Controller
         Metrics Collector
         Client Application
   Implementation Summary
5 Performance Evaluation
   Evaluation Plan
   System Testing and Validation
      Data Integrity
      Message Delivery Data
   Experimental Plan
      Testing Instruments
   Experiment Results
      One to One Experiment (Ping Pong)
      One to Several (Random)
      All to All
6 Conclusion
7 Future Work
References
User Guide

Word Count: 15,558

ABSTRACT

The demand for computing power to meet the needs of modern applications has been growing exponentially, and it is a challenge for datacentre architects to provide a high-quality service while keeping costs low. Datacentres are rapidly growing in size, which makes the current switch-centric architecture overly complex and costly to manage. The motivation for this project is to help solve this problem by seeking a more efficient datacentre design that reuses existing infrastructure built from commodity hardware, which is inexpensive to scale. A recent and emerging trend in datacentre architecture is to take a server-centric approach, in which the servers provide both the computation and the network infrastructure. We have developed an application that enables data to be communicated across the network in a server-centric way. The application was deployed and executed on a physical system and then evaluated across different traffic types. We compare and contrast the performance of a switch-centric architecture and our server-centric architecture. Our aim is to show that an application can run on top of a server-centric network and to determine which traffic type performs better according to key network performance measures.

LIST OF FIGURES

Figure 1. Tree-based switched datacentre architecture [5]
Figure 2. A DCell1 structure with n = 4. It is composed of 5 DCell0s [6]
Figure 3. Datacentre Connection Types [11]
Figure 4. A Fat-Tree structure with n = 4. It has three levels of switches. [6]
Figure 5. A Clos Network
Figure 6. BCube structure with 4 nodes [14]
Figure 7. DCell with 4 nodes per switch [14]
Figure 8. Example of services running on separate servers [17]
Figure 9. Server Centric Architecture (single cell)
Figure 10. Server Centric Architecture with 3 cells
Figure 11. Server Centric Software Architecture
Figure 12. Packet Architecture
Figure 13. Local Node Discovery Process
Figure 14. Remote Node Discovery Process
Figure 15. Full Duplex Pipeline
Figure 16. Local Forwarding to Remote
Figure 17. Example Remote Bridge
Figure 18. Utilite Hardware Design
Figure 19. Virtual Machine Configuration
Figure 20. The Client/Server Communication Messages
Figure 21. Packet Controller State Machine
Figure 22. Client and Receiver Interaction
Figure 23. Physical hardware platform located at UoM
Figure 24. One-to-One Ping Pong Experiment
Figure 25. Server-Centric Ping Pong Latency
Figure 26. Switched Ping Pong Latency
Figure 27. Server-Centric Ping Pong Throughput
Figure 28. Switched Ping Pong Throughput
Figure 29. Server-Centric vs. Switched Ping Pong Aggregate Throughput
Figure 30. Server-Centric vs. Switched Ping Pong Average Latency
Figure 31. One to Several Random Experiment
Figure 32. Server-Centric Random Latency
Figure 33. Switched Random Latency
Figure 34. Server-Centric Random Throughput
Figure 35. Switched Random Throughput
Figure 36. Server-Centric vs. Switched Random Aggregate Throughput
Figure 37. Server-Centric vs. Switched Random Average Latency
Figure 38. All-to-All Experiment
Figure 39. Server-Centric All-to-All Latency
Figure 40. Switched All-to-All Latency
Figure 41. Server-Centric All-to-All Throughput
Figure 42. Switched All-to-All Throughput
Figure 43. Server-Centric vs. Switched All-to-All Aggregate Throughput
Figure 44. Server-Centric vs. Switched All-to-All Average Latency

DECLARATION

No portion of the work referred to in this dissertation has been submitted in support of an application for another degree or qualification of this or any other university or other institute of learning.

INTELLECTUAL PROPERTY STATEMENT

i. The author of this dissertation (including any appendices and/or schedules to this dissertation) owns certain copyright or related rights in it (the "Copyright") and s/he has given The University of Manchester certain rights to use such Copyright, including for administrative purposes.

ii. Copies of this dissertation, either in full or in extracts and whether in hard or electronic copy, may be made only in accordance with the Copyright, Designs and Patents Act 1988 (as amended) and regulations issued under it or, where appropriate, in accordance with licensing agreements which the University has entered into. This page must form part of any such copies made.

iii. The ownership of certain Copyright, patents, designs, trademarks and other intellectual property (the "Intellectual Property") and any reproductions of copyright works in the dissertation, for example graphs and tables ("Reproductions"), which may be described in this dissertation, may not be owned by the author and may be owned by third parties. Such Intellectual Property and Reproductions cannot and must not be made available for use without the prior written permission of the owner(s) of the relevant Intellectual Property and/or Reproductions.

iv. Further information on the conditions under which disclosure, publication and commercialisation of this dissertation, the Copyright and any Intellectual Property and/or Reproductions described in it may take place is available in the University IP Policy, in any relevant Dissertation restriction declarations deposited in the University Library, and in The University Library's regulations.

1 INTRODUCTION AND MOTIVATION

Designing a scalable and cost-efficient network infrastructure is a key concern for datacentre architects. The services hosted inside datacentres are becoming increasingly data intensive, requiring designers to find ways to increase network capacity without increasing costs. As data volumes increase, the concept of data-centric computing is another factor motivating us to find a better datacentre network architecture. Data-centric computing is a paradigm concerning the manipulation of large datasets. Manipulating these datasets over the network creates huge amounts of traffic when the operation involves several servers [1], and the network architecture must be robust enough to handle such traffic.

The historical and current network infrastructure model calls for the use of intelligent switches and routers to build up the network inside the datacentre. As the number of servers requiring interconnection in the datacentre has grown in recent years, this infrastructure has become more costly to implement and manage [2], [3]. Compared to switches, routers are very expensive to purchase and maintain. The traditional tree-based switched network infrastructure shown in Figure 1 requires routers at each level of the hierarchy, and the core routers are much costlier than the routers at the aggregation level. Several approaches, namely Fat-tree and designs based on the Clos network, have attempted to address the scalability of switch-centric designs [4]. However, these designs can accommodate only a finite number of servers, due to the limits of their addressing schemes.

Figure 1. Tree-based switched datacentre architecture [5]

In contrast to this type of architecture, the server-centric architecture does not use routers or layer-3 switches to manage the traffic flows. Removing the routers reduces both the cost and the complexity of the network, since they no longer need to be purchased or accounted for when designing the datacentre network. As servers no longer belong to a hierarchy, they can be configured in a cluster to increase application performance through data locality. Figure 2 shows an example where the servers are connected to each other either directly or via a layer-2 commodity switch to facilitate communication between the servers.

Figure 2. A DCell1 structure with n = 4. It is composed of 5 DCell0s [6]

The primary implication of this design is that the servers now fulfil the role of traffic management. This additional role increases the overhead on each server, consuming compute cycles that could otherwise have been used to provide the application service hosted on the server [7]. Several different methods for implementing a new network traffic management protocol exist; a technical overview of how our protocol implementation facilitates this communication is provided in later sections.

Modelling the performance of distributed applications running on top of the server-centric architecture is the focus of this project. We want to determine whether a server-centric datacentre architecture can provide good scalability at a lower cost than a switch-centric architecture. Our motivation for this work is to help datacentre architects achieve these goals. New generations of distributed applications that are very data intensive may achieve increased

performance using the server-centric architecture [8]. Examples of data-intensive applications are the Google File System and Apache Hadoop [9], which are different implementations of distributed file systems. These types of applications exhibit the all-to-all traffic pattern. Based on the literature review, we plan to implement our benchmark application so that we can measure this type of traffic pattern. How each performance measure was chosen and how it is evaluated is described in later chapters.

1.1 RESEARCH QUESTIONS

Question 1: Is it possible to build software that will enable the deployment of an application on a server-centric datacentre architecture?

Question 2: What are the components needed to build the application, and how can we optimize our software suite to increase the throughput and decrease the latency of the application?

1.2 AIMS AND OBJECTIVES

To answer these two research questions, the aim of this project is:

To develop a software suite (Server-Centric Application Enabler) that we can use to deploy distributed applications on top of server-centric datacentre architectures, achieving an increase in application performance while reducing implementation costs compared to a switch-centric architecture.

The objectives of the project are to:

o Configure a set of physical servers to implement our software suite and deploy an application on a server-centric architecture. The servers will be similar but will have less

computing power than the commodity hardware typically found in datacentres. Companies such as Microsoft and Google have built their applications on top of commodity hardware to achieve cost savings, rather than using specialized server hardware, which can be more expensive [5,6].

o Develop a new protocol on top of the TCP/IP protocol to support the needs of the application. TCP/IP is the underlying protocol used for communication on the Internet and within private networks [8]. We plan to use socket programming in C to create a new layer that establishes the communication channels between the servers in a server-centric way. The new layer will consist of a TCP translator that is transparent to the application calling TCP functions. We will then use this new layer to service requests coming from the application layer. We will measure the performance characteristics of an application using the server-centric design and compare the results against a switch-centric design.

o Produce a user guide for the software. The user guide will explain how to install the software on a cluster of networked machines that fit the server-centric datacentre architecture.

1.3 REPORT STRUCTURE

In section 2 we conduct a literature review of datacentre architectures and efforts to improve network performance. In section 3 we present the server-centric system design, which becomes a software suite that allows network administrators to deploy distributed applications on a server-centric datacentre architecture. In section 4 we describe the implementation methods for building the software suite and our experimental setup. In section 5 we discuss the evaluation of the system and draw conclusions from the experimental results. In section 6 we provide a summary of the report, and in section 7 we discuss possible future work to extend and enhance our design.

2 BACKGROUND AND LITERATURE REVIEW

In the sections that follow, we describe the historical approaches to datacentre network design and the current state-of-the-art server-centric datacentre designs. We examine both switch-centric and server-centric architectures because we will execute workloads on both and compare the performance. Distributed applications are the most viable candidates to make use of a server-centric architecture because of their parallelized processing nature [10]. To guide our approach and to find best practices for implementing our Server-Centric Application Enabler, our literature review focused on the various types of distributed applications found in academic research and industry. Since distributed applications likely each behave differently, we also need a way to measure and evaluate performance that is specific to our chosen distributed application. An effort was made to find a widely accepted set of performance measures for evaluating the chosen distributed application type that will execute on top of our server-centric enabling communication layer. In addition to investigating the experimental environment, we also wanted to learn about best-practice software libraries and tooling methods for implementing our design.

2.1 ARCHITECTURE

Various interconnection types can be found inside a datacentre. For example, Li and Wu investigated different datacentres and found three distinct interconnection types for servers [11]. As shown in Figure 3, the server-to-switch connection type occurs at the lowest level of switch-centric and certain server-centric datacentre architectures. The switch-to-switch connection type interconnects a higher level of the hierarchy with a lower one and is typical of switch-centric architectures. Lastly, the server-to-server connection type is commonly found in server-centric architectures.

Figure 3. Datacentre Connection Types [11]

2.1.1 Switch-Centric Architecture

Switch-centric datacentre designs have been the dominant datacentre network architecture for the past 30 years. As previously mentioned, optimizing and enhancing performance in the datacentre is a key concern for architects, and several studies have investigated designs and methods for achieving these performance goals. We discuss only the two primary designs that have emerged as baseline standards for switch-centric architectures, upon which other researchers have tried to improve.

Fat-Tree

Figure 4. A Fat-Tree structure with n = 4. It has three levels of switches. [6]

Fat-tree is an architecture first proposed in 1985 to provide a scalable solution for datacentre networks [12]. It was highly scalable at the time because new switches could easily be added and removed to extend the branches of the tree to accommodate more clients. However, as datacentres have grown, there are upper limits to the number of servers that can be connected, because switches can hold only a limited number of MAC addresses [4]; each server on the network is identified by its MAC address. Fat-tree is still more cost effective than the basic tree architecture described in the introduction of this document because it does not use expensive routers. This design has two implications for our work. First, we only have access to switches to interconnect our servers, so this is a possible design for us. Second, many related works have optimized this model and implemented routing algorithms that we can use in our switch-centric experiments.

Clos Network

Figure 5. A Clos Network

The Clos network design is widely used; it was first proposed in 1952 for circuit switching in telecom networks and has since been implemented in modern datacentres. In contrast to the Fat-tree architecture, the Clos network has only two switching levels in its hierarchy rather than three, which means less cabling and fewer switches are required for this datacentre network architecture [13]. This is also a possible design for us because it is simpler to implement than Fat-tree and achieves our goal of a cost-effective switched network architecture.

2.1.2 Server-Centric Architecture

The server-centric datacentre architecture is a relatively recent design trend, and we compare and contrast the various implementations of this architecture. Current related work investigating server-centric architectures has yielded some promising results. Several server-centric architectures have been proposed, such as DCell, BCube, FiConn, HCN & BCN, GBC3, DPillar and MCube. In general, the objective of these designs is to show that server-centric designs are more cost effective, have lower latency, and yield better throughput compared to switch-centric designs. Again, as with the switch-centric architectures, we discuss only the two primary designs that have emerged as baseline standards upon which other researchers have tried to improve.

BCube

Figure 6. BCube structure with 4 nodes [14]

Microsoft has proposed the BCube server-centric architecture, which has been shown to significantly accelerate the performance of applications [15]. The proposal involved using commodity servers and low-end commodity switches, aiming to reduce costs compared to switch-centric architectures and to improve throughput using the server-centric architecture. These metrics were evaluated across four different traffic patterns that are representative of applications currently found in the datacentre: one to one, one to several, one to all, and all to all. This design can be implemented with a small number of servers and switches, so it is a possible design for us to implement in our lab experiments.

DCell

Figure 7. DCell with 4 nodes per switch [14]

This server-centric topology is another fixed recursive design similar to BCube, but DCell interconnects the servers in a slightly different pattern. The goal of the DCell topology is to provide better scalability to a larger number of nodes than BCube [9]. This is possible because DCell scales doubly exponentially compared to BCube. DCell also uses fewer switches than BCube when scaling the network, which gives DCell a cost advantage over BCube even when scalability is not the primary concern in choosing a server-centric datacentre architecture.

2.1.3 Datacentre Traffic Patterns

After reviewing several of the server-centric datacentre architecture implementations described in the previous section, four distinct traffic patterns emerged: one to one, one to several, one to all, and all to all. The patterns represent how packets flow from one node (server) to the next. We are interested in this topic because we want to understand how the network is used in the datacentre; when we run our experiments, we will need to measure the performance metrics of the system for these four patterns.

One to One

This pattern represents the flow of packets from one server directly to another.

One to Several

This pattern represents the flow of packets from one server to many, but not all, servers on the network.

One to All

This pattern represents the flow of packets from a single server to all servers on the network. This is also known as a broadcast message.

All to All

Lastly, the all-to-all pattern represents the scenario in which every server sends a message addressed to all other servers on the network. This situation puts the most stress on the network because all servers are sending and receiving at the same time, and many distributed applications have this type of traffic pattern.

Depending on the type of server-centric datacentre architecture, some traffic patterns perform better than others. For instance, in some designs a server may become the bottleneck because it is receiving packets from all other servers on the network [7]. Wang et al. proposed that a network should have a server dedicated to managing the network traffic (a Forwarding Unit) so that worker nodes on the network can devote CPU resources to application tasks. If we experience the same problem in our experimental runs, this might be a possible solution to the issue of network congestion.

2.1.4 Forwarding Algorithms

The decision made by a network device about how to route an outgoing packet is an important part of a datacentre architecture, because we always want to consume the fewest resources by taking the shortest and most efficient path to the destination [16]. We are interested in related work on this topic because our design must implement a software-based decision-making component that manages the traffic on our experimental network. In this section, we discuss the forwarding algorithms of BCube and DCell.

Forwarding in BCube

BCube implements a custom forwarding protocol that leverages the design of the architecture by attempting to maximize the use of the available links, automatically load-balancing the traffic on the network [15].

Forwarding in DCell

The process for moving packets from source to destination in DCell is slightly different from BCube. DCell implements a broadcast mechanism that is intended to find the shortest path to the servers connected to the same local switch.

2.2 DISTRIBUTED APPLICATIONS

Distributed applications are essentially software whose processing tasks are spread across more than one server. Several definitions exist in industry and academic research, and two distinct types exist. In the following sections, we discuss and characterise these types and how they impact our work.

2.2.1 Multitier

Multitier application architectures are those in which servers have a dedicated role serving a particular need of the whole application. For example, a web service may have a webserver role, an application server role, and a database server role, with instances running on separate machines [17]. The architecture in the figure below shows the communication flows between the servers.

Figure 8. Example of services running on separate servers [17]

The implication of this type of application architecture for the datacentre network is that there will be more of the one-to-one traffic pattern. There will also be more one-to-several traffic where there are clusters of database servers or webservers. In cases with one-to-several traffic, it may be useful to implement a fixed recursive network architecture, because clusters are likely connected to the same switch, which is a key feature of the server-centric datacentre architecture.

2.2.2 MapReduce

Another type of distributed application is MapReduce. The genesis of the MapReduce programming paradigm was the publication of the Google paper by Dean and Ghemawat [10], which motivated others in academia and industry to pursue similar implementations that could be used outside of Google. The MapReduce programming model was developed as a solution for querying information from extremely large sets of data. The data is broken into partitions and distributed across many servers. MapReduce generates large volumes of network traffic of the one-to-several and all-to-all type [1], [6], [16]. In particular, the all-to-all traffic type is well suited to implementation on a server-centric architecture. The main idea is that parallelizing the processing tasks across many servers can reduce the time needed to complete the overall job. In the following section, we investigate the types of benchmarks that we can use to evaluate our system.

2.3 EVALUATION METHODOLOGY

2.3.1 Metrics

Several metrics have been proposed and used to evaluate network performance [16], [18], [19]. The performance can be measured from different perspectives on the network. Typically, the network devices are polled for information about the traffic that flows through them to evaluate latency, throughput, and packet loss and to identify bottlenecks on the network [19]. Another way to assess these metrics is to collect information at the server level, which captures both the network behaviour and its impact on the application running on the server. In the benchmarking section we describe a possible tool we can use as a model for building a benchmark tool to monitor the network, and in the software section we investigate programming language libraries for evaluating network performance from the application perspective.

2.3.2 Benchmarking

Benchmarking is an important process that allows us to compare the results of different designs using a common standard. Several tools are available for monitoring and evaluating

network performance. A key consideration when choosing a benchmark tool is that it should be compatible with the hardware types we are using and the software libraries that manage the traffic flows on the network.

Intel MPI Benchmarks 4.1

This is a tool used to evaluate network performance. It provides a large set of benchmarks that simulate a variety of traffic patterns, and the benchmarks can be customized to evaluate different amounts of traffic on the network. A tool similar to this can be used to gather general performance measurements of the network so that we can compare our design with a switch-centric datacentre architecture.

Wireshark

This is a well-known and widely used tool for monitoring and characterising network traffic. This application is useful for checking the correctness of the results reported by the other monitoring methods that we plan to use, such as third-party programming libraries.

2.3.3 Software and Hardware

To implement and evaluate our proposed system, we will need socket libraries that run on our chosen operating system and programming language.

Hardware

The hardware used in BCube and DCell is commodity servers [9], [15]. There are also smaller compute devices that can be used to build our experimental model. Utilite is a miniature computer capable of performing functions similar to the commodity servers described in the previously mentioned server-centric designs. This is an option for us given the cost constraints of the project. We will also use low-cost commodity switches, as others have done, since doing so fulfils the goal of reducing the cost of implementing datacentre networks.

Software

The underlying communication protocol of the software suite will need to be built in a programming language that provides low-level access to sockets, and the application deployer will need to be built in a language that can communicate with this layer. The C, Python and Java languages all provide socket libraries. However, higher-level languages such as Python and Java have more execution overhead because they must first be translated into bytecode and then into machine code; programs written in C do not incur this translation overhead before execution.

2.4 SUMMARY OF RESEARCH FINDINGS AND DISCUSSION

After conducting our background research and literature review, we have answered several important questions that help motivate the project. First, we have learned that only certain switch-centric models are useful to us given the hardware requirements. Regarding server-centric architectures, two designs are possible solutions for us, namely DCell and BCube. Our review of related work on traffic patterns yielded four patterns that we should use in our performance evaluation model: one-to-one, one-to-several, one-to-all and all-to-all. We can therefore compare our results with what others have done. We have determined that applications with the all-to-all traffic characteristic best exploit the server-centric datacentre design. This is a good choice for us because the contribution of our work will go towards a class of application that is becoming more widely used in solving the challenges of big data projects. To evaluate whether there is a difference in performance, we will use the same metrics as many related works, namely aggregate throughput and average latency, so that our results are comparable. For evaluating the raw performance of the network, we will build our own custom server-centric benchmark tool that supports our network architecture, so that we can evaluate different traffic patterns using synthetic workloads. As our experimental results depend on how efficiently our code executes, C is a better choice than Java or Python: given their overhead costs, C will run faster and give us deeper access to the network sockets. The hardware components we need should be cost efficient, and the Utilite miniature computer should meet our needs in deploying our server-centric design.
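For concreteness, the sketch below shows how the two headline metrics might be derived from server-side timestamps. It is a minimal illustration with hypothetical variable names, not the actual code of our benchmark tool.

    #include <stdio.h>
    #include <time.h>

    static double seconds_between(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void)
    {
        struct timespec start, end;
        long bytes_sent = 0, messages = 0;
        double total_rtt = 0.0;

        clock_gettime(CLOCK_MONOTONIC, &start);
        /* ... run the workload, accumulating bytes_sent, messages,
           and the per-message round-trip times into total_rtt ... */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = seconds_between(start, end);
        if (elapsed > 0)
            printf("throughput: %.2f Mbit/s\n",
                   bytes_sent * 8 / elapsed / 1e6);
        if (messages > 0)
            printf("average latency: %.3f ms\n",
                   total_rtt / messages * 1e3);
        return 0;
    }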

3 SERVER CENTRIC SYSTEM DESIGN

In the following sections, we propose the network architecture and the main functional components of the server-centric software architecture. First, we describe the overall server-centric hardware architecture. Second, we discuss the software architecture that allows applications to run on top of the proposed hardware architecture. Third, we discuss the architecture of the network monitoring tool and benchmark application that was created to evaluate the system.

The network infrastructure consisted of physical machines provided by the University of Manchester. As described in the literature review, these machines are compact computing devices, which may have less computing power than a full-size server typically found in a datacentre. The hardware used in this experiment fully supported our research objectives: to run an application on our server-centric network and to compare it to a switched network.

3.1 NETWORK ARCHITECTURE

The architecture that we have designed adheres to the server-centric principle of connecting servers together. As shown in Figure 9 below, we have 3 switches; each switch connects directly to 3 servers, giving a total of 9 servers. Since each server has a second port, it also connects directly to another server. In a routed network, the routers are responsible for forwarding the packets between the subnets and out to other networks; in our network, the servers handle all the tasks that a standalone router would have been responsible for.

Figure 9. Server Centric Architecture (single cell)

This design was chosen because it can provide fault tolerance on the network. If a single switch dies, all the servers connected to it are still accessible via other server-centric cells; the edge nodes that have a single link to the network represent the nodes that have connections to other network cells. In this case, fault tolerance is achieved by routing the traffic through another cell. This design also provides shorter server-to-server communication paths that do not pass through a switch, which should help improve the key performance metrics that we will measure and present in the evaluation section. The design

represents a cell, which is a unit of servers and switches in a datacentre. A datacentre may have many cells that are interconnected to provide scalability to the network. As shown in Figure 10 below, we can scale our design to many interconnected cells.

Figure 10. Server Centric Architecture with 3 cells

3.2 SOFTWARE ARCHITECTURE

In this section, we describe in detail the software components that work together to deliver a full-service transport mechanism that moves data across the network in a server-centric way. The goal of our design was independence among the components: in practice, applications that run on top of the network should not care about how the information is routed to the destination. We used a structured, layered approach to achieve this goal, similar to the way most modern network protocols are designed. In the sections that follow, we describe the architecture and its layers. We take a bottom-up approach, first describing the routing layer and then the receiving layer that sits between the client application and the routing fabric. Together these two layers constitute a custom network protocol that can be used for server-centric networks. The figure below shows all of the components that make up the server-centric application in use. As we can see, at the routing layer the servers communicate directly with each other.

Figure 11. Server Centric Software Architecture

The receivers have connections to the servers and the client application. The client application can be any application that needs network connectivity.

3.2.1 The Routing Layer

Packet Design

A fundamental part of any architecture is how a request is transported across the network on its way to its intended destination. In our design we make use of the TCP protocol, which provides reliable delivery and compatibility, as it is a widely adopted standard. Because TCP is connection oriented, information is transmitted as a stream of bytes that may contain many packets; in contrast to UDP, which is connectionless and delivers individually segmented packets, a TCP receiver has no built-in way of knowing where a packet begins and ends. To handle receiving many packets as a stream, we designed a defined message protocol for moving data across our server-centric network. The following figure shows the structure of our packet design.

Figure 12. Packet Architecture

Our packet architecture contains several key pieces of information that allow the message to be read and forwarded by each server. As shown in the figure above, the header of the packet consists of a message type, the size of the message, and the destination address; it is followed by the payload, which can be any number of bytes that need to be sent to the destination client. We chose this design because it carries the minimum information needed to move data across our network; in theory, any extra information added to the packet will increase the processing time and negatively impact the performance of our server-centric network. We make use of our packet design by having a variety of packet types, which we use to trigger different events in the servers. Our architecture allows new packet types to be created if needed; an example would be packets used to initialize inter-cell communication.
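As a concrete illustration, the header in Figure 12 could be expressed as a C structure along the following lines. This is a minimal sketch: the field names, widths, and ordering are our assumptions, since the exact wire encoding is not fixed here.

    #include <stdint.h>

    /* Illustrative layout of the packet described above: a fixed header
       (type, size, destination) followed by a variable-length payload. */
    struct sc_packet {
        uint8_t  type;      /* one of the packet types in Table 1   */
        uint32_t size;      /* number of payload bytes that follow  */
        uint32_t dest;      /* destination address of the packet    */
        uint8_t  payload[]; /* 'size' bytes destined for the client */
    };

Keeping the header this small matches the design goal stated above: every extra header byte must be parsed at every hop along the path.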

Packet Types

MSG: A MSG packet is used to send data from client to client.

LMP: A LMP packet is used by the node discovery engine to trigger the initialization process in which servers connect to the other servers on the local subnet.

RMP: A RMP packet is used by the node discovery engine to trigger the initialization process in which servers connect to remote servers, i.e. servers connected to a different subnet.

MAP: A MAP packet contains the local and remote addresses of each machine. It is then transmitted to each other node.

ACK: An ACK response is used by all services to acknowledge receipt of a given packet. This helps control the sending and receiving of packets on the network by preventing a server from sending a new packet until it can confirm that the previous one was delivered. The state machine that manages this process is described in the section on the monitoring component.

ERR: A packet is designated an ERR packet if it arrives with a type that cannot be determined.

DIE: The DIE packet is a control mechanism used to shut down the server.

EXP: An EXP packet is used by our benchmark application to trigger experiments on the network.

Table 1. Packet Types

Node Discovery Engine

When a new server is connected to the network, the other servers must know how to communicate with it. In the context of our server-centric network, each server must learn the addresses of the other servers on the network. The process of discovery, identification and assignment allows the server to become an active participant in the server-centric network. The purpose of the Node Discovery Engine is to identify newly connected servers and update all of the other servers with their address and position on the network. If we simply hardcoded the local IP addresses of the servers, or used a fixed formula to calculate them, offline nodes might incorrectly be considered in the route calculation process; our design prevents offline nodes from being added to the routing table. To achieve this, we split node discovery into a two-part process, namely local and remote discovery.
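Under the same assumptions as the header sketch above, the packet types in Table 1 might be declared as a C enumeration shared by the discovery and forwarding code; the names and numeric values here are illustrative, not taken from our implementation.

    /* Hypothetical declaration of the Table 1 packet types. */
    enum sc_packet_type {
        SC_MSG, /* client-to-client data          */
        SC_LMP, /* start local discovery          */
        SC_RMP, /* start remote discovery         */
        SC_MAP, /* local/remote address pair      */
        SC_ACK, /* delivery acknowledgement       */
        SC_ERR, /* unrecognised packet type       */
        SC_DIE, /* shut the server down           */
        SC_EXP  /* trigger a benchmark experiment */
    };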

Local Discovery Process

The purpose of the local discovery process is for each node to become aware of the other nodes on its local subnet. This is achieved by a synchronous message exchange between each of the servers on the subnet.

Figure 13. Local Node Discovery Process

Each server provides a MAP message containing its own local and remote IP addresses. This allows each node to eventually build up a routing table that is used to find the specific local server to which a given packet should be delivered.

Remote Discovery Process

The purpose of the remote discovery process is for each node to become aware of the single node it is attached to on a remote network. This is achieved by a synchronous message exchange between the servers on either end of the link, as shown in Figure 14.

Figure 14. Remote Node Discovery Process

As in the local discovery process, each node provides a MAP message containing its own local and remote IP addresses. This process is initiated by sending an RMP message. Since each machine has only one remote connection, this process completes faster than

the local discovery process. The outcome of these two processes is that there is at least one pathway for a packet to travel from any source node to any destination node.

The TCP protocol allows us to operate a socket in full-duplex mode, meaning we can send and receive at the same time. We take advantage of this in our design by establishing only one logical communication channel between each pair of participating servers. This means that the initialization process for some nodes may not create any new links, because the other nodes have already discovered them and created connections; duplicate connections are not created.

Figure 15. Full Duplex Pipeline

This design gives each server fewer communication channels to manage, and managing many active connections at once is an expensive process in terms of compute cycles.

Packet Forwarding Engine

The packet forwarding engine is the component at the heart of the Server-Centric Application Enabler. It lives on each of the servers and makes all of the routing decisions about where to send outgoing packets. The engine must compute the shortest path based on the layout and status of the server-centric network. The goal of the packet forwarding engine is to optimize performance by reducing latency and minimizing the network capacity consumed in delivering a packet to its destination; both objectives are achieved when the packet takes the shortest path from source to destination.

Routing Algorithm

In our design, the application makes no assumptions about the local network address structure. Any addressing scheme can be used, which makes our design flexible: as long as the local IP can be determined when the server first loads, we can derive the IP range of the possible nodes on the same subnet and try to connect to them. Also, if certain nodes are offline, they do not become part of the routing fabric, and the active nodes do not consider them when deciding where to send a packet. However, for the remote interconnection between subnets, a bridge must be built between the two to allow communication. Since this bridge is an isolated subnet, no nodes other than the two connected nodes know about it, so for this part of the network a defined addressing structure must be used. In our case, we apply a simple computation to the subnet part of the IP to determine which subnet path it provides. The routing algorithm is essentially a fall-through process: it finds the next hop from among the available nodes, starting with the shortest path to the packet's destination and then broadening its search to less optimal routes.

Forwarding Types

In total, five rules make up our routing algorithm. We describe the process of local forwarding first and then discuss remote forwarding.

Local Forwarding

To deliver a packet from one node to another node on the same subnet, we have a single path of machines on the same subnet with direct connections to each other. However, we must also consider direct routes to remote subnets when determining which local node to send to. Server-centric cells with more nodes than our prototype must be able to use our forwarding engine when there is more than one direct connection to a particular remote subnet. We do this because the optimal path in a network is typically the shortest one, holding other factors such as congestion or link failure constant.

The two local forwarding rules in our engine are as follows:

Rule 1: Check whether some machine on this subnet is the next hop; if so, send the packet there.

Rule 2: Check whether some machine on the local network has a direct connection to the remote server; if so, send the packet there.

Figure 16. Local Forwarding to Remote

Figure 16 above shows the local forwarding process to a remote server. We have a packet that needs to reach the 4th server on subnet 3, and we have two choices. One of the choices results in a route that takes an extra hop (the red path). We avoid this extra hop and send the packet to the server that has the direct connection (the purple path).
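In code, these two rules amount to a fall-through scan of the routing table of (local, remote) address pairs built during discovery. The sketch below is illustrative only: the structure, names, and the /24 subnet mask are our assumptions.

    #include <stdint.h>
    #include <stddef.h>

    struct ip_pair { uint32_t local; uint32_t remote; };

    /* Assume /24 subnets, as in our experimental network. */
    static int same_subnet(uint32_t a, uint32_t b) {
        return (a & 0xFFFFFF00u) == (b & 0xFFFFFF00u);
    }

    /* Returns the local address of the next hop, or 0 if neither local
       rule matches and the remote-bridge rules must be tried instead. */
    uint32_t next_hop_local(const struct ip_pair *table, size_t n,
                            uint32_t dest)
    {
        /* Rule 1: a machine on this subnet is itself the next hop. */
        for (size_t i = 0; i < n; i++)
            if (table[i].local == dest)
                return table[i].local;

        /* Rule 2: a local machine has a direct link to the remote
           server's subnet. */
        for (size_t i = 0; i < n; i++)
            if (same_subnet(table[i].remote, dest))
                return table[i].local;

        return 0;
    }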

Remote Forwarding

We have established three rules for when a packet needs to traverse the network to reach a host on a different subnet. In principle, the rules below (1, 2 and 3), which make up the remote subnet discovery formula, are flexible enough to be used inside an unlimited number of server-centric cells, because the bridging subnets are only used to interconnect the machines within the same cell. The design is flexible because nodes in other cells can use the same internal IP address scheme to create bridges between subnets in their cells; when others implement our work, they can use these internal IPs without conflicting with IPs outside the cell. However, when creating a bridge between external server-centric cells, a more systematic IP addressing scheme would be needed. Since the resources available for this project did not allow for multi-cell architectures, we did not include a formula for this case, as it would never be used. It should be relatively simple for others to extend our work by adding another rule to handle inter-cell communication based on the IP addressing scheme of their particular server-centric architecture. An explanation of the rules and an example are provided in the following section.

The rules for when a packet needs to choose a bridge to reach a remote subnet are:

Rule 1: Check whether some machine has a direct connection to the external subnet, subtracting 100 to match on the third octet.

Rule 2: Check whether some machine has a direct connection to the external subnet, subtracting 100 and then adding 1 to match on the third octet.

Rule 3: Check whether some machine has a direct connection to the external subnet, subtracting 100 and then subtracting 2 to match on the third octet.

Figure 17. Example Remote Bridge

Figure 17 helps to illustrate the routing between subnets. Consider a remote bridge IP whose third octet is 102: nothing in the address itself indicates which local subnet it connects to. We apply Rule 1 and subtract 100 from the subnet part of the IP, which gives us a value of 2. The routing algorithm presents this to the nodes in subnet 3 as an access point into subnet 2 that can be used to send packets into that subnet. To go in the opposite direction, from subnet 2 to subnet 3, we can apply Rule 2.
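The octet arithmetic behind the three rules can be captured in a few lines. The helper below is a sketch with hypothetical names, showing how a bridge subnet's third octet maps to a candidate destination subnet under each rule.

    /* Candidate destination subnet for a bridge whose third octet is
       'bridge_octet', under each rule above (illustrative helper). */
    int bridge_to_subnet(int bridge_octet, int rule)
    {
        switch (rule) {
        case 1: return bridge_octet - 100;     /* e.g. 102 -> subnet 2 */
        case 2: return bridge_octet - 100 + 1; /* e.g. 102 -> subnet 3 */
        case 3: return bridge_octet - 100 - 2; /* e.g. 102 -> subnet 0 */
        default: return -1;                    /* no such rule         */
        }
    }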

3.3 SYSTEM DESIGN SUMMARY

In this chapter, we have discussed the architecture of the major system components and how they relate to our research objectives. The hardware architecture that we used meets the standard for being classified as a true server-centric network architecture, and the software architecture provides a flexible design that can be used to enable server-centric networks. Within the software architecture, we have described two distinct layers, namely the routing layer and the receiving layer: at the routing layer we handle the encoding of the information, and at the receiving layer we have the server component that manages the flow of traffic on the network. The proposed layered approach gives our system independence among components, and it provides a pathway to the future development of a fully transparent TCP wrapper that can support any client application; a fully implemented TCP wrapper would likely further improve the key performance indicators. Our objective of measuring the performance of our server-centric design against the switch-based architecture is addressed in the evaluation section. In the next section, we provide the details of how the design was implemented into a fully functional software system.

4 IMPLEMENTATION

In this section, we describe the phased approach that was used to transform the design into a working system. The requirements for our software implementation are based on the system design presented in Section 3, and we discuss how they met our research objectives of producing a working communication layer and a tool to measure the performance of our system. The programming language libraries and hardware tools are also briefly discussed. Lastly, we discuss the implementation of the major functions in the code, and we give an overview of how the application was tested in a virtual environment and then deployed into a physical environment for final testing and evaluation. The results of the experiments are presented in section 5.

4.1 REQUIREMENTS

To successfully meet the research objectives described in section 1.2, we can derive several high-level requirements for carrying out our experiments:

o We need a functioning switch-centric and server-centric network, both virtual and physical.
o We need a programming language with suitable libraries.
o We need tools to measure the performance of the network.

4.2 IMPLEMENTATION TOOLS

In this section, we describe the programming language requirements for the project concerning its suitability for our purpose and the available libraries. We also discuss the details of the hardware components used in our experimental design.

4.2.1 Hardware

Virtual Machines

The virtual machine server used is Microsoft Hyper-V. This was chosen because it is the one we are familiar with; the choice of hypervisor should not affect the project, as the experimental evaluation is done on the physical hardware.

Physical Hardware

For the switch-centric and server-centric designs that we will be comparing, we use the same hardware types but in a different configuration for each. The switches are low-cost 6-port units. The servers we have chosen are miniature computers, namely the Utilite. The figure below shows that it has the two Gigabit ports that we need.

Figure 18. Utilite Hardware Design

4.2.2 Software

Operating System

We have chosen the Ubuntu Linux LTS distribution as the primary OS for the servers in our experimental network. We chose Linux because it is lightweight, meaning it requires fewer resources than Microsoft Windows Server. Since we are using a miniature computer with fewer resources than a full-size commodity server, it is very important to keep the OS overhead as low as possible.

Programming Language

The C language is a general-purpose programming language. As mentioned in the background research section, it is more efficient than Java or Python because it is compiled into machine code rather than bytecode. We chose the C programming language for this reason.

Development Environment

To develop our code we used the Code::Blocks IDE. It is well suited to developing C and C++ programs because it supports these languages natively, without extra plugins. Other development tools exist, such as the Eclipse IDE, but Eclipse requires a bit more configuration to manage projects in C. Realistically, both can be used, but we prefer Code::Blocks because we are more familiar with it.

Code Versioning

To manage the overall project and to protect the integrity of the code, we used a private GitLab server.

Libraries

To meet our software requirements, we needed to leverage several third-party C libraries in addition to the native ones. Specifically, we needed the TCP sockets library.

Network Analysis and Benchmarking

Wireshark

Wireshark helped us gain insight into how the data was moving across our network. This tool was useful for debugging and was used to make sure packet sizes were correct.

4.3 SOFTWARE SYSTEM IMPLEMENTATION

In this section, we describe our functional software product, beginning with how we went from the design phase to the end of the implementation phase. This included first building a proof of concept as a small prototype in a virtual environment, and then fully deploying onto the physical hardware provided by the University of Manchester School of Computer Science. The outcomes of this project are a software suite, with corresponding user documentation, that can be used to deploy distributed applications in a server-centric way, and the performance results gathered during our experiments, which help us evaluate the differences in performance between a switch-centric and a server-centric architecture.

Before implementing the software and configuring the hardware, we conducted an extensive background review of the approaches used by others in order to determine best practices. The literature review described in Section 2 allowed us to learn about the available tools and which of them were suitable for use in our implementation of a server-centric datacentre architecture. We used the prototype in the initial phase to learn how to build a server-centric system and how the major system components should interoperate to deliver the services of the server-centric communication protocol. The components we discuss are, first, the Servers, which form the lowest layer of the system; second, the Receiver, which provides translation for our custom messaging protocol; third, the example Client application; and lastly, the benchmark tool that is built into the Receiver. Our goal in this section is to show how the system works. We also describe our challenges and reflect on how this work shaped the phases in which we built a fully functional system to meet our research objectives.

4.3.1 Setting Up the Experimental Environment

Using the hardware and software resources described in Section 4.2, we created and configured our virtual environment. The objectives for this phase were to:

o Establish a suitable virtual machine environment.
o Create and instantiate 3 identical virtual machines.
o Install Ubuntu Linux LTS.
o Configure the machines so that they can discover each other via hostname.

On a local server, a virtual environment was set up using the Microsoft Hyper-V infrastructure manager, and the machines were created with minimum requirements: each server has 2 GB of RAM and 1 virtual CPU. The resources were kept minimal because the application will have the same resources when we move our software to the physical Utilite servers. This allowed us to create 3 server instances connected via a virtual switch, as shown in Figure 19.

Figure 19. Virtual Machine Configuration
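Because the test network had no DHCP or DNS server (a challenge noted below), one conventional way to achieve hostname discovery is a static hosts file replicated on every machine. The entries below are purely illustrative; they are not the actual addresses or hostnames used in our environment.

    # /etc/hosts on each virtual machine (illustrative addresses)
    127.0.0.1   localhost
    10.0.1.1    node1
    10.0.1.2    node2
    10.0.1.3    node3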

Challenges in this Phase

The challenges in this part of the project related to getting the machines to recognize each other without a parent DHCP or DNS server. The network configuration had to be set manually on each server to achieve this goal.

4.3.2 First Phase of Implementation (Prototyping)

The objectives of this phase were to:

o Create a basic client and install it on each server.
o Create a basic server to respond to clients and install it on each server.
o Verify that the client can send requests and the server can receive them.

As described in the implementation plan in Section 4.2, the first step after selecting the C socket library was to create a simple client to facilitate basic packet communication between the virtual machines. At this point, we had a partially working prototype, with a server and client application running on each machine that could open a connection to one of the other two servers on the network and send it a packet. The receiving server could then acknowledge and respond, as shown in Figure 20.

Figure 20. The Client/Server Communication Messages

Challenges in this Phase

There were many online resources showing how to configure a basic C client-server application. At this stage, we had to hard-code the addresses of the servers in the test environment. There was also the limitation that each server could only accept one connection, so, as shown in the figure above, node 3 could not connect. We needed to explore the sockets library further to find ways to handle multiple clients.
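For reference, such a prototype client amounts to little more than the standard Berkeley-sockets pattern. The sketch below is illustrative rather than our actual code; the server address, port, and message are assumptions.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Open a TCP socket and connect to a hard-coded server. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(5000);                  /* assumed port    */
        inet_pton(AF_INET, "10.0.1.2", &srv.sin_addr); /* assumed address */

        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
            perror("connect");
            return 1;
        }

        /* Send one message and wait for the server's acknowledgement. */
        const char *msg = "hello from the prototype client";
        send(fd, msg, strlen(msg), 0);

        char buf[128];
        ssize_t n = recv(fd, buf, sizeof buf - 1, 0);
        if (n > 0) { buf[n] = '\0'; printf("reply: %s\n", buf); }

        close(fd);
        return 0;
    }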

Summary of First Phase

At the end of this phase we were confident that our basic prototype could communicate between two machines. In the second phase, we describe the custom libraries that we created to support the full implementation of the design.

4.3.3 Second Phase: Implementing the Full System

As described in the design section, we have nine physical servers that need to communicate. For this, we needed a more robust implementation of our server application, capable of handling many server and client connections so that communication would be possible on all layers of the system.

Libraries

In this section, we describe the three main libraries that we created to support the software system. These libraries are used by both the server and receiver components to implement the server-centric system. The client application can be any software, and therefore it does not rely on these libraries to communicate with the rest of the system. After briefly describing the libraries, we show how they are used by the system.

SocketSC.c

Our server-centric sockets library is primarily used by the server layer to establish the connections between the nodes on the network. It provides functions to do this in a systematic way, whereby a single function call creates all the connections at once. This library also contains the functions that encode and decode the messages that need to be passed around the network.

Routing.c

As the name suggests, the routing library is primarily used by the server to route packets from source to destination. It contains several functions that build up and maintain a routing table, which the server uses to determine where a packet should be sent next based on its destination address.

Monitor.c

The monitor library is a fully independent mechanism for controlling the messages entering the network from the client application. It is independent of the system because it can be

4.3.5 Server Implementation

In this section, we discuss the server layer, which contains the main component in the system. It is responsible for all the routing and forwarding of messages from source to destination. To successfully get a packet from source to destination, a sequence of coordinated events needs to happen. We will use an example scenario to describe how this process works and the major functions that are invoked, from the start-up of the server to the delivery of the first packet.

4.3.5.1 Initialization Process

As mentioned in the design section, the server must first initialise itself by learning about the other nodes on the network through a series of MAP messages that contain the IP addresses of each interface attached to a particular node. To handle the incoming data, we implemented a routing-table data structure in our application that can hold any number of IP pairs. Upon server start-up, getNewRoutingTable() is called and the server adds itself to the list; the server must be aware of its own local and remote interfaces when deciding where to send a packet. The IP pair is a data structure that stores the contents of the MAP messages that are received. Once this data structure has been populated, the server has an understanding of its neighbouring nodes. We implemented a function called updateRoutingTable() to add new entries to the routing table.

4.3.5.2 Socket Multiplexing

In the first phase of the implementation, there were challenges in handling multiple clients on a single server. To overcome this, we used a built-in function from the C sockets library that allows us to multiplex a single listening socket across multiple clients: select(). It is a blocking function, meaning the process monitors many connections and reacts whenever something happens on one of the sockets. In our server-centric network, a reaction would be an incoming connection from another server, which occurs during the initialization process described above. Another event that can unblock this function is an incoming message that needs to be processed. Other options were available, such as poll(), but select() was easier to manage and scales well enough for up to 1024 connections. This met our needs, since each node in our network has to handle a much smaller number of connections. As mentioned in the design section regarding multi-cell server-centric networks, the number of connections between hosts is always very low because servers connect only to the nodes within a cell, and connections to outside cells are also few. Therefore, our multiplexed socket implementation should provide good scalability.
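The following sketch illustrates the select()-based event loop just described; it is an assumed shape rather than our exact code. listenFd is an already-bound listening socket, and handleMessage() stands in for the decode-and-forward path covered in the next sections.

/*
 * Sketch of a select()-based multiplexing loop (assumed shape).
 * handleMessage() must return <= 0 when the peer has disconnected.
 */
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

void serverLoop(int listenFd, int (*handleMessage)(int fd))
{
    fd_set master, readable;
    int maxFd = listenFd;

    FD_ZERO(&master);
    FD_SET(listenFd, &master);

    for (;;) {
        readable = master;
        /* Block until a new connection arrives or a peer sends data. */
        if (select(maxFd + 1, &readable, NULL, NULL, NULL) < 0)
            break;

        for (int fd = 0; fd <= maxFd; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;
            if (fd == listenFd) {
                /* Incoming connection from another server, as happens
                 * during the initialization process described above.  */
                int peer = accept(listenFd, NULL, NULL);
                if (peer >= 0) {
                    FD_SET(peer, &master);
                    if (peer > maxFd)
                        maxFd = peer;
                }
            } else if (handleMessage(fd) <= 0) {
                /* Peer closed the connection: stop watching it. */
                close(fd);
                FD_CLR(fd, &master);
            }
        }
    }
}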

4.3.5.3 Message Encoding/Decoding

In the design section we described the architecture of the packet that carries all messages within our server-centric network. To read a message and forward it to the next node, the message must be decoded so the server can read the information contained in the packet. We created a single function that builds packets in the correct format: encodeForTransmission(). Because the packet header describes the contents of the packet, we do not need to decode the whole packet at once when it arrives at another server; when a message arrives, we systematically read parts of the packet and process them as they arrive in the stream. Our packet implementation allows us to send packets of any size. The packet size is, of course, limited by the buffer size set on the server. In our implementation, we limited the buffer size to 1 million bytes, which was suitable for our research objective of running workloads on the Utilite mini servers.

4.3.5.4 Packet Routing

Moving packets from node to node is a core feature of our application. To move packets from source to destination we need to be able to determine the next hop. We implemented several functions for this, based on the 5 forwarding rules described in the system design section.
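The sketch below ties the last two subsections together: a possible header layout produced by encodeForTransmission() and a simplified next-hop lookup over the routing table. The field layout, the table types, and the reduction of the 5 forwarding rules to two illustrative cases are all assumptions.

/*
 * Sketch of a packet header and next-hop lookup (assumed layout).
 * The 5 forwarding rules from the design are reduced here to the two
 * cases shown in the comments.
 */
#include <stdint.h>

#define MAX_NODES 32

struct IpPair {                 /* one entry per MAP message received */
    uint32_t localIp;           /* intra-cell interface               */
    uint32_t remoteIp;          /* inter-cell interface               */
};

struct RoutingTable {
    struct IpPair entries[MAX_NODES];
    int count;
};

/* Header prepended by encodeForTransmission(); payloadLen lets the
 * receiver read exactly the right number of bytes from the stream.  */
struct PacketHeader {
    uint32_t srcIp;
    uint32_t dstIp;
    uint32_t payloadLen;
    uint8_t  type;              /* e.g. MAP, data, ACK                */
};

/* Simplified next-hop decision: deliver locally if one of our own
 * interfaces owns dstIp; otherwise relay toward a node that can
 * reach the destination.                                            */
uint32_t nextHop(const struct RoutingTable *rt, int selfIdx, uint32_t dstIp)
{
    const struct IpPair *self = &rt->entries[selfIdx];
    if (dstIp == self->localIp || dstIp == self->remoteIp)
        return dstIp;                       /* rule: local delivery   */
    for (int i = 0; i < rt->count; i++)     /* rule: forward to peer  */
        if (rt->entries[i].localIp == dstIp || rt->entries[i].remoteIp == dstIp)
            return rt->entries[i].localIp;
    return 0;                               /* unknown destination    */
}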

4.3.6 Receiver Implementation

In this section we describe the receiving layer, its purpose, and how it interoperates with the routing and client layers. The receiving layer can be thought of as a translator for TCP. The purpose of TCP is to provide a standard way for any application to use the network. In our server-centric design, messages must be encoded with information that allows a packet to move from server to server, where the next hop can be calculated, because there are no routers in our network. The receiving layer performs this translation by encoding and decoding the message on behalf of the client application. In an optimal scenario, it would be best to rewrite the TCP library to include these encoding functions and to trick the sending client into thinking it is directly connected to the receiving client. The client must be tricked because it has no way of opening a direct connection to a machine in another subnet unless it has a connection to that subnet on its remote interface. Rewriting the TCP library would have required a significant amount of time to implement and test; we therefore chose to design our system with a receiving layer to handle these tasks.

4.3.6.1 Packet Controller

The packet controller is a mechanism built into the receiving layer that controls whether new messages can be passed from the client to the routing layer. Its purpose is to block new packets until the system can determine that the previous packet has been received by the correct destination node. TCP ensures that a message is reliably delivered between directly connected endpoints; in our case, because the source and destination nodes may not be directly connected, we need a mechanism to make sure delivery was successful across many different connections. Each server only knows that the message was sent to another server, and the client is neither aware of the route the packet will take nor able to ping the destination. To achieve this goal, we have a state machine that controls the sending and receiving process.

Figure 21. Packet Controller State Machine

When a message arrives from the client at the receiving layer, the controller changes its state from OPEN to LOCKED. When the message is received at the destination, an ACK is sent back to the source; this triggers the state machine to open the lock so that a new message from that server can be sent. It would be possible to make this process non-blocking, but we would then have to maintain a list of all outgoing packets and check delivery upon receipt of an ACK tagged with the source address.

4.3.6.2 Metrics Collector

Within the receiving layer we have built a metrics collector that can measure the flow of client requests to different servers. The metrics collector makes use of the packet controller, allowing us to precisely measure the latency of a route across our server-centric network. When a packet is sent, a timer is started and the lock is closed. When the ACK is received back at the source node, the timer is stopped and the lock is freed. The metrics collector recomputes the average latency with each new ACK received, and also computes the throughput of the network for a given experiment cycle.
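The sketch below shows how the packet controller lock and the metrics collector could interact, following the behaviour just described; the names and layout are assumed. The timer runs exactly over the send-to-ACK window that defines our latency measure.

/*
 * Sketch of the packet controller lock plus metrics collector
 * (assumed shape, not the dissertation's exact code).
 */
#include <stdio.h>
#include <sys/time.h>

enum CtrlState { OPEN, LOCKED };

static enum CtrlState state = OPEN;
static struct timeval sentAt;
static double totalLatencyUs = 0.0;
static long   ackCount       = 0;
static long   rejectedCount  = 0;   /* sends attempted while LOCKED */

/* Called when the client hands a new packet to the receiving layer. */
int controllerTrySend(void)
{
    if (state == LOCKED) {          /* previous packet still in flight */
        rejectedCount++;
        return 0;
    }
    gettimeofday(&sentAt, NULL);    /* start the latency timer */
    state = LOCKED;
    return 1;
}

/* Called when the ACK arrives back from the destination node. */
void controllerOnAck(void)
{
    struct timeval now;
    gettimeofday(&now, NULL);
    double us = (now.tv_sec - sentAt.tv_sec) * 1e6
              + (now.tv_usec - sentAt.tv_usec);
    ackCount++;
    totalLatencyUs += us;           /* average recomputed per ACK       */
    state = OPEN;                   /* reopen the lock for the next one */
    printf("avg latency: %.2f us over %ld ACKs\n",
           totalLatencyUs / ackCount, ackCount);
}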

4.3.7 Client Application

The client application is a simple messaging tool used to trigger the experiments on the network. It sends a series of instruction packets to the receiver, which then takes the appropriate action based on the packet type received. The client first sends a RES message to the receiver to prepare for the next experiment; this resets the values stored in the monitor to zero, discarding the values from the previous experiment. Then we begin setting up the experiment. First, we send the receiver a message with the type of experiment we want to run. For example, to run the Random experiment we send the message RND to the receiver. The receiver then updates the value in the monitor so that, when the experiment runs, it will send packets to random destination nodes in the network. After the experiment type has been set, the packet size for that particular experiment run is set.

Figure 22. Client and Receiver Interaction

For example, to set the packet size to 500k bytes, we have a map that corresponds to the coded value LV7. The system has nine levels ranging from 25k bytes to 700k bytes; a full list of the levels can be found in the results section, where we show the performance at each level. After the experiment level has been configured, the experiment is ready to run. In the last step, the client triggers the experiment by sending a SRT message to the receiver. At this point the receiver starts the experiment by sending messages based on the configuration parameters described above. After the SRT packet is sent, the client disconnects and the experiment continues to execute on the receiver until it completes. The ACK shown in the figure above is the response arriving from the destination node that the experiment packet was sent to.
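As an illustration, the control sequence for a Random experiment at level LV7 could be driven as follows. The three-letter codes RES, RND, LV7, and SRT come from the text; the one-code-per-send framing and the helper names are assumptions.

/*
 * Sketch of the client-side control sequence (assumed framing).
 * LV7 corresponds to 500,000-byte messages, as described above.
 */
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int sendCode(int fd, const char *code)
{
    return send(fd, code, strlen(code), 0) < 0 ? -1 : 0;
}

int runExperiment(int receiverFd)
{
    if (sendCode(receiverFd, "RES") < 0) return -1; /* reset monitor values   */
    if (sendCode(receiverFd, "RND") < 0) return -1; /* Random traffic pattern */
    if (sendCode(receiverFd, "LV7") < 0) return -1; /* 500,000-byte messages  */
    if (sendCode(receiverFd, "SRT") < 0) return -1; /* trigger the run        */
    close(receiverFd);  /* client disconnects; experiment continues remotely  */
    return 0;
}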

4.4 IMPLEMENTATION SUMMARY

In this section, we have described how the major system components were developed into a software suite, using the tools and methods chosen to implement the system design proposed in Section 3. There were significant challenges in all phases of the development. In the prototyping stage we faced challenges in handling multiple connections, which we overcame during the full system development phase by implementing a multiplexed socket design. The most significant challenges in the development phase were related to ensuring reliable delivery of the messages and implementing the various experiment profiles that were used in the evaluation phase of this project. To ensure reliable delivery, we had to implement several controls that verify that the message is intact at each hop. This component took a substantial amount of time to debug, because any missing or extra characters in the byte stream stalled the network: the system either waited for more data or did not know how to forward the extra data. As time did not permit us to create a fully transparent TCP wrapper, we had to create our own benchmark tool. This added considerable development time to the project: we first had to implement the benchmark tool and then create the traffic profiles that would drive it. We were able to overcome these challenges and evaluate the system. The details of the benchmark tool are discussed in Section 5.

5 PERFORMANCE EVALUATION

In this section we describe the methods used to test and evaluate the Server-Centric Application Enabler. In Section 5.1 we discuss the plan for our experiments, along with the software components and hardware used to build the experimental network infrastructure, and we describe the two key measurement factors identified in the literature review as the most commonly used in network evaluation. In Section 5.2 we discuss how the system was tested and how the outputs and inputs were validated before running our experiments. In Section 5.3 we discuss the different types of traffic patterns that will be evaluated. In Section 5.4 we discuss the results of the evaluation: we compare and contrast the results obtained from running the experiments on our server-centric architecture and on the switched-centric architecture, and discuss possible factors that may influence them. Lastly, we describe how the steps in this phase enabled us to meet our research objective of evaluating the performance of our system.

5.1 EVALUATION PLAN

In this section, we define and describe the choices we have made regarding the types of metrics to collect and how they will be measured. For the metrics component, we have decided to focus on a few key, widely accepted measures identified in the literature review for evaluating network performance. The evaluation consisted of two phases. In the first phase, we collected the metrics from running the application in a server-centric datacentre architecture; this created a baseline to compare against. In the second phase, we collected the metrics from running the application on the switched-centric architecture. Based on the previous related works, we use the following measurements for evaluating the two architectures:

Aggregate Throughput (AT): For our purposes, the aggregate throughput of the network is defined as the sum of the data transfer rates achieved on the network during a particular experiment cycle.

Latency: Latency is defined as the time elapsed between the client initiating the request and receiving a response from the server to that request[20]. This is also known as the round trip time (RTT) of the packet. In the context of our server-centric architecture, each server acts as both the client and the server.

Aggregate throughput and latency measure the performance of the dataflow while the application is actively being used. Using a variety of measures will help us to understand whether there are trade-offs between key performance indicators. In the end, we should be able to show that there is a statistical difference in means for both latency and aggregate throughput when using a server-centric architecture with our protocol stack compared to a switched-centric architecture.

5.2 SYSTEM TESTING AND VALIDATION

In this section we explain the methods used to determine that the system is functioning correctly. First, we define what a correctly functioning system is and the various ways in which it might fail. We needed a way to make sure that the experiments functioned according to the definition provided for each experiment profile. For this, we implemented an experiment monitor that uses a lock to control the sending of packets until an acknowledgement (ACK) has arrived for an experimental packet. If the ACK is not sent from the specific node that the packet was destined for, or if the ACK packet is incorrectly formatted, the lock will not reopen and the experiment will fail, remaining in a running state. To complement this control mechanism, we also record the number of rejected packets in case another process tries to send more packets before the lock has been released. This ensures that reliable delivery is taking place in our network.

5.2.1 Data Integrity

When passing data around the network, we want to make sure that the data arrives complete and in the same form in which it was sent. As described in the design section, the message size is encoded in the header of the packet. If the required number of bytes cannot be read from the stream, the system considers this an incomplete packet and the experiment fails to complete.
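The integrity check implies an exact-length read on the receiving side: the header announces the payload length, and anything short of it marks the packet as incomplete. A sketch of such a helper (assumed, not our exact code) is shown below.

/*
 * Sketch of an exact-length read: succeed only if all len bytes
 * arrive, otherwise report an incomplete packet.
 */
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Returns 0 on success, -1 if the stream ends or errors early. */
int readExact(int fd, void *buf, size_t len)
{
    char *p = (char *)buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)         /* peer closed or error: incomplete packet */
            return -1;
        p   += n;
        len -= (size_t)n;
    }
    return 0;
}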

5.2.2 Message Delivery

When a packet travels from source to destination, it must take the correct path. Testing that the packet does not go through unnecessary hops is important to our measurements, because extra hops mean that more data has to be read and forwarded by the servers.

5.2.3 Data

We are interested in the traffic patterns and the bytes of data. The data used in our system is a set of randomly generated bytes produced by a function in our library. The data used to evaluate the system does not need to be actual datasets (e.g. webpage data, images, or emails): all information sent across the network would be broken down into a stream of bytes in any case.

5.3 EXPERIMENTAL PLAN

As identified in the related works, there are four traffic patterns to evaluate: One to One, One to Several, One to All, and All to All. In the time available, we were only able to complete experiments for the one-to-one, one-to-several, and all-to-all traffic patterns. As described in the implementation and design chapters, we created a custom benchmark tool for server-centric network architectures. We executed several workloads of various types and measured the aggregate throughput and latency described in Section 4.3. This allows us to see the differences between traffic patterns and to draw some conclusions on how we can optimize our network architecture and software suite.

5.3.1 Testing Instruments

Hardware
The hardware used for the experiments was described in the implementation and background sections: mini computers suitable for prototyping small networks. The figure below shows the actual servers that were used in this experiment. The results presented in this section are the outcome of running the experiments only on the physical hardware platform.

Figure 23. Physical hardware platform located at UoM

Software
The software stack used in this experiment is minimal, to make sure that most of the system resources are available to run our experiments. We used Ubuntu Linux, on top of which our server-centric application software was deployed.

Configuration Template Example

Type                  Example    File Line Number
Interface 1 IP                   1
Interface 2 IP                   2
Delay to Initialize   80         3
Experiment Cycles                4
Node to Send to                  5

As shown in the template above, we created a configuration template to allow each experiment to be customised. Interface IP 1 and 2 are the IP addresses of the local server; this is important because these IPs are used to check for local delivery when a packet arrives at the machine. The initialization delay is the number of seconds the node will wait before sending MAP messages to the other nodes to discover the network. The node-to-send-to IP address is an example of a destination node that a machine loading this template will use in a one-to-one experiment.
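A sketch of how a node could load this five-line template is shown below; the one-value-per-line layout follows the table, while the structure and field names are assumptions.

/* Sketch of loading the five-line configuration template (assumed). */
#include <stdio.h>

struct Config {
    char iface1Ip[64];   /* line 1 */
    char iface2Ip[64];   /* line 2 */
    int  initDelaySec;   /* line 3, e.g. 80 */
    int  cycles;         /* line 4 */
    char sendToIp[64];   /* line 5 */
};

int loadConfig(const char *path, struct Config *cfg)
{
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    int ok = fscanf(f, "%63s %63s %d %d %63s",
                    cfg->iface1Ip, cfg->iface2Ip,
                    &cfg->initDelaySec, &cfg->cycles,
                    cfg->sendToIp) == 5;   /* all five lines present? */
    fclose(f);
    return ok ? 0 : -1;
}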
Experimental Setup

In this section we describe all of the experiments that will be executed on both the server-centric architecture and the switched-centric architecture. The variables used across all experiments are presented below.

Dependent Variables
Since we want to measure latency and throughput, these are our dependent variables. The latency and throughput depend on the factors that we manipulate in the experiment, such as the traffic pattern and the size of the message.

Independent Variables
In our experiments, we have several traffic patterns that we want to examine. We therefore run experiments on each of them to compare the change in the dependent variables, latency and throughput. We also seek to investigate the impact on throughput when we change the size of the message. The size of the message is defined as the payload of the packet, in other words the data the client application needs to transport across the network.
For this we have several message size levels that are suitable for the type of hardware we have. The message levels chosen based on our system resources are 25,000, 50,000, 100,000, 200,000, 300,000, 400,000, 500,000, 600,000, and 700,000. All of these message sizes are in bytes.

Experiment Constants
In our experiments we want to maintain consistency between runs to make them comparable and feasible. For this reason, we have chosen to fix the number of messages in a given experiment to 25 messages per run. This means that when we execute a workload, we capture the metrics after 25 ACK messages have been received by the client running the benchmark tool. This keeps the experiment runtime low and does not affect the dependent variables we are measuring.

5.4 EXPERIMENT RESULTS

5.4.1 One to One Experiment (Ping Pong)

In this experiment, also known as Ping Pong, a single node sends a packet to another node and then waits for an ACK packet, from which we compute the latency. This is repeated for every node in the network, and we compute the average latency over an increasing range of packet sizes. This gives us the average latency of the network and the throughput for the whole network. We created a profile for running this experiment. As shown in the figure below, several node pairs participate in a peer-to-peer relationship. We have 9 live nodes available in our network, so one unit receives from two other nodes. We want a good mixture of traffic paths, because the more hops a packet must take when traveling from source to destination, the higher the latency and the more of the available network bandwidth is consumed. Later we will examine this traffic pattern using random pairs.

In this experiment profile we have the following node pairs:

Local subnet traffic: Mini11 - Mini13, Mini33 - Mini22 - Mini23
Cross subnet traffic: Mini12 - Mini31, Mini32 - Mini21

Figure 24. One-to-One Ping Pong Experiment

Ping Pong Latency Results:

Figure 25. Server-Centric Ping Pong Latency

Figure 26. Switched Ping Pong Latency

Ping Pong Latency Discussion:

The results of the Ping Pong experiment show that latency is much lower in the switch-centric ("switched") design than in the server-centric design. The time it takes for the packet to reach the destination and for the acknowledgement (ACK) to return to the source node is longer in the server-centric design because the packet must make many more stops along the way. In addition, an increase in message size has a lower impact on latency in a switched network than in a server-centric network: as message sizes grow, latency in the server-centric network worsens much faster than in the switched network at the same message size. As seen in Figure 25, at a message size of 100,000 bytes the server-centric network has an average latency of 52, microseconds, and a latency of 134, microseconds at a message size of 500,000 bytes: 2.57 times more latency at a message size 5 times larger. Comparatively, in Figure 26, at a message size of 100,000 bytes the switched network has an average latency of 7,184 microseconds, and a latency of 15, microseconds at a message size of 500,000 bytes: 2.20 times more latency at a message size 5 times larger. The server-centric network thus shows a larger slow-down in latency at large message sizes, while the switched network's latency still increases but at a lesser rate.

Ping Pong Throughput Results:

Figure 27. Server-Centric Ping Pong Throughput

Figure 28. Switched Ping Pong Throughput

Ping Pong Throughput Discussion:

The results of the Ping Pong experiment show that the throughput of the nodes in the server-centric architecture is much lower than in the switched network architecture. This is because the round trip time is much longer in the server-centric network. In addition, as message sizes become larger, the throughput gain for the server-centric network grows increasingly more slowly than for the switched network. As seen in Figure 27, at a message size of 100,000 bytes the server-centric network has an aggregate throughput of bytes per microsecond, and an aggregate throughput of bytes per microsecond at a message size of 500,000 bytes: 1.97 times more throughput at a message size 5 times larger. Comparatively, in Figure 28, at a message size of 100,000 bytes the switched network has an aggregate throughput of bytes per microsecond, and an aggregate throughput of bytes per microsecond at a message size of 500,000 bytes: 2.35 times more throughput at a message size 5 times larger. The server-centric network gains less aggregate throughput per added byte of message size, while the switched network improves much more over the same range of message sizes.

Ping Pong Aggregate Throughput:

Figure 29. Server-Centric Vs. Switched Ping Pong Aggregate Throughput

Ping Pong Aggregate Throughput Discussion:

In the Ping Pong experiment, the overall aggregate throughput in the server-centric network is much lower than in the switched network, consistent with the low per-machine throughput results from the Ping Pong experiments. The switched network is more responsive (i.e. has an increasingly higher aggregate throughput) with increasing message sizes compared to the server-centric network. As seen in Figure 29, at a message size of 200,000 bytes the switched network provides an aggregate throughput of bytes per microsecond compared to the server-centric network's bytes per microsecond: 6.91 times more aggregate throughput. At a larger message size of 700,000 bytes, the switched network provides an aggregate throughput of bytes per microsecond compared to the server-centric network's bytes per microsecond: 9.18 times more aggregate throughput. As message sizes get larger, the aggregate-throughput gap between the server-centric and switched networks widens. In addition, as message sizes become larger, the aggregate throughput of both network types shows a decreasing rate of growth, suggesting a potential ceiling in aggregate throughput at large message sizes.

Ping Pong Average Latency:

Figure 30. Server-Centric Vs. Switched Ping Pong Average Latency

Ping Pong Average Latency Discussion:

In the Ping Pong experiment, the average latency in the switched network is much lower than in the server-centric network, consistent with the per-machine results: high latency for the server-centric network and low latency for the switched network. The server-centric network shows an increasingly higher average latency with increasing message sizes, while the switched network stays relatively low. As seen in Figure 30, at a message size of 300,000 bytes the switched network provides an average latency of 10, microseconds compared to the server-centric network's 89, microseconds: 8.66 times lower average latency. At a larger message size of 600,000 bytes, the switched network provides an average latency of 27, microseconds compared to the server-centric network's 177, microseconds: 6.49 times lower average latency. As message sizes get larger, the average-latency gap between the server-centric and switched networks widens. In addition, as message sizes become larger, average latency for both network types shows a decreasing rate of growth, suggesting a potential ceiling in average latency at large message sizes.

5.4.2 One to Several (Random)

In this experiment, each node sends a packet to a random number of nodes ranging from 1 to 9. Each receiving node sends back an ACK packet to the source node. We compute the average latency over an increasing range of packet sizes. We also collect the throughput for each node and sum these amounts to get the aggregate throughput of the network. This gives us the average latency of the network and the aggregate throughput for the entire network.

Figure 31. One to Several Random Experiment

One to Several Latency Results:

Figure 32. Server-Centric Random Latency

Figure 33. Switched Random Latency

One to Several Latency Discussion:

The results of the Random experiment show that latency is much lower in the switched design than in the server-centric design. In addition, an increase in message size has a lower impact on latency in a switched network than in a server-centric network: as message sizes grow, latency in the server-centric network worsens much faster than in the switched network at the same message size. As seen in Figure 32, at a message size of 100,000 bytes the server-centric network has an average latency of 51, microseconds, and a latency of 182, microseconds at a message size of 500,000 bytes: 3.56 times more latency at a message size 5 times larger. Comparatively, in Figure 33, at a message size of 100,000 bytes the switched network has an average latency of 7, microseconds, and a latency of 16, microseconds at a message size of 500,000 bytes: 2.06 times more latency at a message size 5 times larger. The server-centric network shows a larger slow-down in latency at large message sizes, while the switched network's latency increase is leveling off.

One to Several Throughput Results:

Figure 34. Server-Centric Random Throughput

Figure 35. Switched Random Throughput

One to Several Throughput Discussion:

The results of the Random experiment show that the throughput of the nodes in the server-centric architecture is much lower than in the switched network architecture. In addition, as message sizes become larger, the throughput gain for the server-centric network grows increasingly more slowly than for the switched network. As seen in Figure 34, at a message size of 100,000 bytes the server-centric network has an aggregate throughput of bytes per microsecond, and an aggregate throughput of bytes per microsecond at a message size of 500,000 bytes: 1.37 times more throughput at a message size 5 times larger. Comparatively, in Figure 35, at a message size of 100,000 bytes the switched network has an aggregate throughput of bytes per microsecond, and an aggregate throughput of bytes per microsecond at a message size of 500,000 bytes: 2.42 times more throughput at a message size 5 times larger. The server-centric network gains less aggregate throughput per added byte of message size, while the switched network improves much more over the same range of message sizes.

One to Several Aggregate Throughput:

Figure 36. Server-Centric Vs. Switched Random Aggregate Throughput

One to Several Aggregate Throughput Discussion:

In the Random experiment, the overall aggregate throughput in the server-centric network is much lower than in the switched network, consistent with the low per-machine throughput results from the Random experiments. In addition, the difference between the aggregate throughput of the switched network and that of the server-centric network grows as message sizes increase. The switched network is much more responsive (i.e. has an increasingly higher aggregate throughput) with increasing message sizes compared to the server-centric network. As seen in Figure 36, at a message size of 200,000 bytes the switched network provides an aggregate throughput of bytes per microsecond compared to the server-centric network's bytes per microsecond: 9.67 times more aggregate throughput. At a larger message size of 700,000 bytes, the switched network provides an aggregate throughput of bytes per microsecond compared to the server-centric network's bytes per microsecond, times more aggregate throughput. As message sizes get larger, the aggregate-throughput gap between the server-centric and switched networks widens.

In addition, as message sizes become larger, the aggregate throughput of the server-centric network shows a decreasing rate of growth, suggesting a potential ceiling in aggregate throughput at large message sizes. For the switched network in the Random experiment, aggregate throughput does not appear to have an upper limit.

One to Several Average Latency:

Figure 37. Server-Centric Vs. Switched Random Average Latency

One to Several Average Latency Discussion:

In the Random experiment, the average latency in the switched network is much lower than in the server-centric network, consistent with the per-machine results: high latency for the server-centric network and low latency for the switched network. In addition, the difference between the average latency of the switched network and that of the server-centric network grows as message sizes increase. The server-centric network shows an increasingly higher average latency with increasing message sizes, while the switched network stays relatively low.

As seen in Figure 37, at a message size of 300,000 bytes the switched network provides an average latency of 10, microseconds compared to the server-centric network's 104, microseconds, times lower average latency. At a larger message size of 600,000 bytes, the switched network provides an average latency of 16, microseconds compared to the server-centric network's 198, microseconds, times lower average latency. As message sizes get larger, the average-latency gap between the server-centric and switched networks widens. In addition, as message sizes become larger, the server-centric network's average latency shows an upward trend (i.e. it slows down faster), while the switched network's average latency in the Random experiment appears to flatten out, suggesting a potential ceiling.

5.4.3 All to All

In this experiment, each node on the network sends a packet to all the other nodes on the network. When a node receives the message, it responds to the source node with an ACK message, and then another packet is sent out until the experiment cycle count is met. This traffic pattern most closely represents the MapReduce traffic patterns discussed in Section 2. We compute the average latency over an increasing range of packet sizes. We also sum the throughput from each node to get the aggregate throughput of the network. This gives us the average latency of the network and the aggregate throughput for the entire network.

Figure 38. All-to-All Experiment
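A sketch of the all-to-all driver loop implied by this description is shown below; sendPacket() and awaitAck() are assumed helpers standing in for the routing layer and the packet controller.

/*
 * Sketch of the all-to-all driver: one packet per peer, each ACK
 * releases the controller lock, repeated until the cycle count
 * from the configuration is met.
 */
extern int sendPacket(int fd);  /* assumed helper: encode + route one packet */
extern int awaitAck(int fd);    /* assumed helper: block until the ACK back  */

int allToAll(int peerFds[], int numPeers, int cycles)
{
    for (int c = 0; c < cycles; c++) {
        for (int p = 0; p < numPeers; p++) {
            if (sendPacket(peerFds[p]) < 0)  /* one packet per peer   */
                return -1;
            if (awaitAck(peerFds[p]) < 0)    /* wait for its ACK back */
                return -1;
        }
    }
    return 0;
}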


More information

Principles behind data link layer services:

Principles behind data link layer services: Data link layer Goals: Principles behind data link layer services: Error detection, correction Sharing a broadcast channel: Multiple access Link layer addressing Reliable data transfer, flow control Example

More information

Introduction to iscsi

Introduction to iscsi Introduction to iscsi As Ethernet begins to enter into the Storage world a new protocol has been getting a lot of attention. The Internet Small Computer Systems Interface or iscsi, is an end-to-end protocol

More information

NaaS Network-as-a-Service in the Cloud

NaaS Network-as-a-Service in the Cloud NaaS Network-as-a-Service in the Cloud joint work with Matteo Migliavacca, Peter Pietzuch, and Alexander L. Wolf costa@imperial.ac.uk Motivation Mismatch between app. abstractions & network How the programmers

More information

Chapter Motivation For Internetworking

Chapter Motivation For Internetworking Chapter 17-20 Internetworking Part 1 (Concept, IP Addressing, IP Routing, IP Datagrams, Address Resolution 1 Motivation For Internetworking LANs Low cost Limited distance WANs High cost Unlimited distance

More information

Distributed Data Infrastructures, Fall 2017, Chapter 2. Jussi Kangasharju

Distributed Data Infrastructures, Fall 2017, Chapter 2. Jussi Kangasharju Distributed Data Infrastructures, Fall 2017, Chapter 2 Jussi Kangasharju Chapter Outline Warehouse-scale computing overview Workloads and software infrastructure Failures and repairs Note: Term Warehouse-scale

More information

Introduction to Open System Interconnection Reference Model

Introduction to Open System Interconnection Reference Model Chapter 5 Introduction to OSI Reference Model 1 Chapter 5 Introduction to Open System Interconnection Reference Model Introduction The Open Systems Interconnection (OSI) model is a reference tool for understanding

More information

Data Model Considerations for Radar Systems

Data Model Considerations for Radar Systems WHITEPAPER Data Model Considerations for Radar Systems Executive Summary The market demands that today s radar systems be designed to keep up with a rapidly changing threat environment, adapt to new technologies,

More information

Data and Computer Communications. Chapter 2 Protocol Architecture, TCP/IP, and Internet-Based Applications

Data and Computer Communications. Chapter 2 Protocol Architecture, TCP/IP, and Internet-Based Applications Data and Computer Communications Chapter 2 Protocol Architecture, TCP/IP, and Internet-Based s 1 Need For Protocol Architecture data exchange can involve complex procedures better if task broken into subtasks

More information

The Open System Interconnect model

The Open System Interconnect model The Open System Interconnect model Telecomunicazioni Undergraduate course in Electrical Engineering University of Rome La Sapienza Rome, Italy 2007-2008 1 Layered network design Data networks are usually

More information

Networking and Internetworking 1

Networking and Internetworking 1 Networking and Internetworking 1 Today l Networks and distributed systems l Internet architecture xkcd Networking issues for distributed systems Early networks were designed to meet relatively simple requirements

More information

OpFlex: An Open Policy Protocol

OpFlex: An Open Policy Protocol White Paper OpFlex: An Open Policy Protocol Data Center Challenges As data center environments become increasingly dynamic, networks are increasingly asked to provide agility and flexibility without compromising

More information

Upon successful completion of this course, the student should be competent to complete the following tasks:

Upon successful completion of this course, the student should be competent to complete the following tasks: COURSE INFORMATION Course Prefix/Number: IST 201 Course Title: Cisco Internetworking Concepts Lecture Hours/Week: 3.0 Lab Hours/Week: 0.0 Credit Hours/Semester: 3.0 VA Statement/Distance Learning Attendance

More information

King Fahd University of Petroleum and Minerals College of Computer Sciences and Engineering Department of Computer Engineering

King Fahd University of Petroleum and Minerals College of Computer Sciences and Engineering Department of Computer Engineering Student Name: Section #: King Fahd University of Petroleum and Minerals College of Computer Sciences and Engineering Department of Computer Engineering COE 344 Computer Networks (T072) Final Exam Date

More information

Distributed System Chapter 16 Issues in ch 17, ch 18

Distributed System Chapter 16 Issues in ch 17, ch 18 Distributed System Chapter 16 Issues in ch 17, ch 18 1 Chapter 16: Distributed System Structures! Motivation! Types of Network-Based Operating Systems! Network Structure! Network Topology! Communication

More information

Chapter 2 Network Models 2.1

Chapter 2 Network Models 2.1 Chapter 2 Network Models 2.1 Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display. 2-1 LAYERED TASKS We use the concept of layers in our daily life. As an example,

More information

ETSF05/ETSF10 Internet Protocols Network Layer Protocols

ETSF05/ETSF10 Internet Protocols Network Layer Protocols ETSF05/ETSF10 Internet Protocols Network Layer Protocols 2016 Jens Andersson Agenda Internetworking IPv4/IPv6 Framentation/Reassembly ICMPv4/ICMPv6 IPv4 to IPv6 transition VPN/Ipsec NAT (Network Address

More information

B.Sc. (Hons.) Computer Science with Network Security B.Eng. (Hons) Telecommunications B.Sc. (Hons) Business Information Systems

B.Sc. (Hons.) Computer Science with Network Security B.Eng. (Hons) Telecommunications B.Sc. (Hons) Business Information Systems B.Sc. (Hons.) Computer Science with Network Security B.Eng. (Hons) Telecommunications B.Sc. (Hons) Business Information Systems Bridge BTEL/PT BCNS/14/FT BIS/14/FT BTEL/14/FT Examinations for 2014-2015

More information

- Hubs vs. Switches vs. Routers -

- Hubs vs. Switches vs. Routers - 1 Layered Communication - Hubs vs. Switches vs. Routers - Network communication models are generally organized into layers. The OSI model specifically consists of seven layers, with each layer representing

More information

Introduction. Network Architecture Requirements of Data Centers in the Cloud Computing Era

Introduction. Network Architecture Requirements of Data Centers in the Cloud Computing Era Massimiliano Sbaraglia Network Engineer Introduction In the cloud computing era, distributed architecture is used to handle operations of mass data, such as the storage, mining, querying, and searching

More information

CS 204 Lecture Notes on Elementary Network Analysis

CS 204 Lecture Notes on Elementary Network Analysis CS 204 Lecture Notes on Elementary Network Analysis Mart Molle Department of Computer Science and Engineering University of California, Riverside CA 92521 mart@cs.ucr.edu October 18, 2006 1 First-Order

More information

Hadoop File System S L I D E S M O D I F I E D F R O M P R E S E N T A T I O N B Y B. R A M A M U R T H Y 11/15/2017

Hadoop File System S L I D E S M O D I F I E D F R O M P R E S E N T A T I O N B Y B. R A M A M U R T H Y 11/15/2017 Hadoop File System 1 S L I D E S M O D I F I E D F R O M P R E S E N T A T I O N B Y B. R A M A M U R T H Y Moving Computation is Cheaper than Moving Data Motivation: Big Data! What is BigData? - Google

More information

Avaya ExpertNet Lite Assessment Tool

Avaya ExpertNet Lite Assessment Tool IP Telephony Contact Centers Mobility Services WHITE PAPER Avaya ExpertNet Lite Assessment Tool April 2005 avaya.com Table of Contents Overview... 1 Network Impact... 2 Network Paths... 2 Path Generation...

More information

Source-Route Bridging

Source-Route Bridging 25 CHAPTER Chapter Goals Describe when to use source-route bridging. Understand the difference between SRB and transparent bridging. Know the mechanism that end stations use to specify a source-route.

More information

Extending the LAN. Context. Info 341 Networking and Distributed Applications. Building up the network. How to hook things together. Media NIC 10/18/10

Extending the LAN. Context. Info 341 Networking and Distributed Applications. Building up the network. How to hook things together. Media NIC 10/18/10 Extending the LAN Info 341 Networking and Distributed Applications Context Building up the network Media NIC Application How to hook things together Transport Internetwork Network Access Physical Internet

More information

ET4254 Communications and Networking 1

ET4254 Communications and Networking 1 Topic 10:- Local Area Network Overview Aims:- LAN topologies and media LAN protocol architecture bridges, hubs, layer 2 & 3 switches 1 LAN Applications (1) personal computer LANs low cost limited data

More information

SUBJECT: DATA COMMUNICATION AND NETWORK SEMESTER: V SEMESTER COURSE: BCA SUBJECT TEACHER: Dr.K.Chitra Assistant Professor, Department of Computer

SUBJECT: DATA COMMUNICATION AND NETWORK SEMESTER: V SEMESTER COURSE: BCA SUBJECT TEACHER: Dr.K.Chitra Assistant Professor, Department of Computer SUBJECT: DATA COMMUNICATION AND NETWORK SEMESTER: V SEMESTER COURSE: BCA SUBJECT TEACHER: Dr.K.Chitra Assistant Professor, Department of Computer Science Chapter - 2 Switching and Network Architecture

More information