Implementation and Analysis of IP Multicast over ATM
Maryann P. Maher, Suresh K. Bhogavilli
USC/Information Sciences Institute, 4350 N. Fairfax Drive, Ste 620, Arlington, VA 22203
Abstract

Today it is evident that ATM technology will have its role in the future of Internetworking. Two major Internet backbone service providers, Sprint and MCI, have marketed their investments in ATM as a way to push more bandwidth into their Internet infrastructures. Regardless of whether ATM technology will thrive in the WAN or LAN, it is also evident that the Internet Protocol (IP), which has been the glue of the Internet allowing it to endure exponential growth, must be well supported over ATM. IP multicast (protocol extensions to IP that allow IP packets from one source to be distributed to multiple destinations) is an essential part of IP which has allowed multimedia applications, for example, to more efficiently use Internet resources. In this paper we describe an implementation of one model of supporting IP multicast over ATM, the Multicast over ATM model [1] developed by the IETF. We discuss empirical measurements gathered in a testbed with both a WAN and a LAN component that give insights into the behavior of this model. We detail some of the shortcomings of this model and discuss potential modifications to improve it. Ideas for other approaches to IP multicast over ATM are also described.

1 Introduction

The Multicast over ATM model described in "Support for Multicast over UNI 3.0/3.1 based ATM Networks" [1] (MC over ATM) is one model for supporting connectionless IP multicast over connection-oriented ATM networks. This model defines support for network layer multicasting over ATM using two different modes of operation. In one mode, senders of IP multicast traffic create ATM virtual channel connections (VCCs) to the other members of the particular multicast group.
In the other mode, the senders send their traffic to an intermediary server, the multicast server, which in turn relays the data to the members of the multicast group. (This research is supported by Sprint Corporation, Research & Development, Overland Park, KS, under the IPARS '95 contract.) In this paper we discuss the implementation of MC over ATM that includes both these modes of operation and the behavior of each in a testbed where the ATM hosts were connected in a LAN and WAN. A WAN topology was used to gain insight into the effects of larger propagation delays on the MC over ATM model. Throughout this paper the word "group" or words "multicast group" are used to mean network layer multicast group. In addition, the words "group address" or "multicast address" are used to mean network layer multicast group address.

2 Overview of MC over ATM

There are three main entities in the MC over ATM model: the Multicast Address Resolution Server (MARS), the MARS client (a Classical IP over ATM entity [2] running MARS client code), and the Multicast Server (MCS). These entities communicate using a query-response style protocol. The MARS keeps a database that maps network layer multicast addresses to ATM layer addresses. A host wishing to receive data from a particular network layer multicast group registers its ATM address with the MARS for that group address. A host wishing to send data to a particular group address queries the MARS for the list of ATM addresses that are registered for that group. The sending host can then set up a point-to-multipoint (p2mpt) VCC that includes as leaves all the hosts specified by the list of ATM addresses. If the list of ATM addresses contains the addresses of the actual group members, then the group is said to be operating in a VCC-mesh mode, or simply mesh mode. This corresponds to the first mode of operation described in the previous section. If, however, the MARS returns a list containing
the ATM address of a single MCS, the sender will simply establish a p2mpt VCC with the MCS as the only leaf. This will be the case if an MCS has previously registered with the MARS to support that particular multicast group. This mode of operation is termed MCS mode. The ingenious feature of the MC over ATM model is that group senders need not be aware of which mode of multicast support is taking place. From the point of view of the group sender, MCS mode appears to be the same as mesh mode but with a single group member.

3 Overview of Host Architecture

The implementation described in this paper was done in the framework of "Benny". Benny is a derivative of VINCE, a freely available ATM software package developed at the US Naval Research Laboratory [3]. Benny includes many enhancements to VINCE and new code modules such as the MC over ATM implementation. Benny has been developed at USC/ISI and is freely available [4]. IP multicasting as defined in RFC 1112 [5] was designed as an extension to the normal IP layer found in a system kernel. In order to support basic IP over ATM, there needs to be some added "glue" in the kernel between IP and the ATM layer; specifically, there must be a layer that maps IP packet flows to ATM VCCs. We call this layer the IP-VC layer in Benny. In order to support IP multicast over ATM, the IP-VC layer must also distinguish between unicast and multicast packets and create the appropriate type of VCCs, namely point-to-point (pt2pt) or p2mpt.

Figure 1: Benny IP over ATM Host Architecture (a user-space Benny process containing the MC over ATM modules — MARS, MCS, and client — the IP-VC layer, ATMARP (RFC 1577), UNI signalling, and the Fore ATM driver interface, above the kernel IP layer and the ATM adapter)

Figure 1 shows a diagram of the software and hardware components making up our IP over ATM host platform. In our testbed we use Fore Systems' SBA-200 ATM host adapters and Fore Systems' ASX-200 and ASX-200BX ATM switches.
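The dispatch decision the IP-VC layer must make can be sketched as follows. This is an illustrative model, not Benny's actual code; the function names are hypothetical, but the class D test (224.0.0.0–239.255.255.255) is the standard IP multicast range from RFC 1112.

```python
# Hypothetical sketch of the IP-VC layer's per-flow decision: multicast
# destinations (class D addresses) get a point-to-multipoint VCC, while
# unicast destinations get a point-to-point VCC.

def is_ip_multicast(dst_ip: str) -> bool:
    """True if dst_ip falls in the class D range 224.0.0.0 - 239.255.255.255."""
    first_octet = int(dst_ip.split(".")[0])
    return 224 <= first_octet <= 239

def vcc_type_for(dst_ip: str) -> str:
    """Choose the type of ATM VCC the IP-VC layer would create for this flow."""
    return "p2mpt" if is_ip_multicast(dst_ip) else "pt2pt"
```

A packet to 224.2.127.254 would thus be mapped onto a p2mpt VCC, while one to an ordinary unicast address would use a pt2pt VCC (or the ATMARP path of Classical IP).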
3.1 Benny Host Kernel Support

Benny implements a "lightweight" ATM driver that attaches an IP multicast capable interface to the system kernel. (The only operating systems supported at this time are versions of SunOS.) We describe the Benny driver as "lightweight" because although it implements an IP interface, it does not go as far as controlling the hardware of the ATM adapter. Through this interface, Benny receives IP packets routed to it and does the needed interfacing with the MARS client entity if the packets are destined for a multicast address. Furthermore, IGMP JoinLocalGroup and LeaveLocalGroup requests are processed by triggering the MARS client to issue the corresponding JOINs and LEAVEs. Clearly an ATM driver is an essential part of doing any host side IP over ATM development. The Benny driver therefore has been a key component of our prototype even though there are drawbacks in using it. The main disadvantage is that because this driver does not control the hardware, the data that comes down to the Benny driver must be sent to the hardware via the vendor supplied API. In the case of the FORE SBA platform, the FORE API is used. This means that data takes an extra 'hop' from kernel space to user space. When the Benny driver receives an IP packet, the packet is sent to the Benny process running in user space. In figure 1 the arrows show the data path in our test host. Packets generated by an application are routed by IP to the Benny interface, which in turn sends the packet up to the IP-VC layer of the Benny process. The IP-VC layer determines if there is a VCC for this packet; if so, it hands the packet to the FORE driver via the FORE API. If there is not a VCC, Benny triggers the necessary MARS client and ATM signalling behavior, then sends the packet to the FORE driver for transmission. The extra kernel-process-kernel hop undoubtedly affects performance. Therefore, when we take performance measurements in our experiments we must factor in this delay.
The remarkable advantage of using the Benny driver is that it is highly portable. It can easily be used alongside any ATM vendor's adapter and driver software by merely swapping the API calls in Benny and including the corresponding API libraries.
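One way to picture this portability is a thin adapter interface between the data path and the vendor API. The sketch below is purely illustrative (these class and method names are not Benny's; the real code is C calling the FORE API), but it shows the design idea: porting to another vendor means supplying a different adapter object with the same methods.

```python
# Illustrative sketch of vendor-API indirection (hypothetical names):
# the data path calls through a small interface, and a port to a new
# ATM adapter swaps in a different implementation of that interface.

class AtmAdapterApi:
    """Minimal vendor-API surface the driver depends on."""
    def open_vcc(self, dest_nsap): raise NotImplementedError
    def send(self, vcc, payload): raise NotImplementedError

class ForeApi(AtmAdapterApi):
    """Stand-in for the FORE SBA-200 API bindings."""
    def open_vcc(self, dest_nsap):
        return ("fore-vcc", dest_nsap)     # pretend handle
    def send(self, vcc, payload):
        return len(payload)                # pretend the adapter accepted it all

class BennyDataPath:
    def __init__(self, api: AtmAdapterApi):
        self.api = api                     # swap this object to change vendors
    def transmit(self, dest_nsap, payload):
        vcc = self.api.open_vcc(dest_nsap)
        return self.api.send(vcc, payload)
```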
Figure 2: Cluster and its control VCCs (each client holds a pt2pt VCC to the MARS; the MARS maintains the p2mpt CCVC to all clients and the SCVC to the MCS)

4 Protocol Implementation

This section describes the MC over ATM protocol in more detail. All the procedures described here are implemented in the Benny MC over ATM code. Those familiar with MC over ATM may choose to skip this section. Throughout this paper the terms "host" or "client" refer to an ATM endsystem running MARS client code. Figure 2 shows a cluster and its control VCCs. A host that wishes to be supported by the MARS creates a pt2pt VCC to the MARS and registers with it. All future messages to the MARS are transmitted on this VCC. The MARS maintains a list of registered clients and maintains a p2mpt control VCC, called the Cluster Control VC (CCVC), to those clients. When a host issues an IGMP JoinLocalGroup for a multicast group, the MARS client entity generates a JOIN for that network layer group. The MARS maintains a database mapping network layer group addresses to ATM layer addresses, called the host map. The MARS updates the host map to reflect the latest membership information for the multicast group. In response to an IGMP LeaveLocalGroup, the client entity generates a LEAVE, and the MARS removes the client's ATM address from the host map for that group. When a client wishes to send data to a multicast group for which it has no existing VCC and no translation for that group address, it generates a REQUEST for that group. The MARS responds with a MULTI containing a list of ATM addresses of the clients who have JOINed that group. The client opens a p2mpt VCC with those ATM addresses as leaf nodes. Further packets to the multicast group are forwarded on that VCC by the Benny IP-VC layer and are transparent to the MARS client entity. The MARS retransmits the join and leave messages on the CCVC, so that clients with open VCCs to the multicast group can add or drop the corresponding client on their VCC to the group.
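The host map and the JOIN / LEAVE / REQUEST exchanges described above can be modeled as a toy data structure (this is an assumed sketch, not the RFC 2022 wire protocol; note the duplicate-check scan on JOIN, which Section 6 measures):

```python
# Toy model of the MARS host map (hypothetical method names).
# JOIN scans for a duplicate before inserting; REQUEST is a lookup
# whose result becomes the MULTI leaf list for the sender's p2mpt VCC.

class Mars:
    def __init__(self):
        self.host_map = {}  # group address -> list of member ATM (NSAP) addresses

    def join(self, group, nsap):
        """JOIN: add the member unless it is already registered."""
        members = self.host_map.setdefault(group, [])
        if nsap not in members:          # duplicate-check scan before insert
            members.append(nsap)

    def leave(self, group, nsap):
        """LEAVE: drop the member; forget groups that become empty."""
        members = self.host_map.get(group, [])
        if nsap in members:
            members.remove(nsap)
        if not members:
            self.host_map.pop(group, None)

    def request(self, group):
        """REQUEST -> MULTI: the current membership for the group."""
        return list(self.host_map.get(group, []))
```

A sender would use the result of `request()` as the initial leaf set of its p2mpt VCC; the CCVC retransmissions keep established VCCs in sync afterwards.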
As a way to conserve the number of VCCs required by the mesh mode described above, a system administrator could configure a host to serve as an MCS. An MCS is a multicast server which relays data sent to it to all the members of a multicast group. The MCS registers with the MARS through a registration MSERV message. The MARS adds the MCS to a p2mpt Server Control VC (SCVC), which is used to distribute group membership updates. The hosts registered with a MARS constitute a cluster and can open direct VCCs to one another. Two such clusters could be connected together through a multicast IP router (mrouter). An mrouter forwards multicast packets between different clusters. Refer to [1] for further details on the MARS protocol entities.

5 Testing MC over ATM

Figure 3: Testbed Configuration (LAN endsystems at NASA GSFC, Maryland — Electron, Photon, Neutron (SS10s), Positron (IPC), and Proton (MCS/mrouter, ~SS5) — on ASX-200bx switches, connected over a wide-area TIOC/GSFC OC-3c link to the Sprint TIOC in Kansas, home of Voyager (SS20) and Janeway (unused); end-to-end RTT is 27 msec across the WAN link and 2 msec within the LAN)

A configuration of the experimental testbed is shown in Figure 3. A requisite for testing IP multicast over ATM is to have a multicast capable host. In our testbed, SunOS hosts were augmented with Xerox PARC's public domain "IP Multicast Extensions" package version 3.5 [6]. The common MBone [7] applications, such as SD, VAT, and NV, were run on the test hosts in order to generate multicast traffic. In our experimental cluster, which equated to an IP subnet, a group of ATM endsystems at the NASA Goddard Space Flight Center (GSFC) in Maryland were connected in a LAN. The maximum propagation delay over ATM between any two of these endsystems was two milliseconds. In addition, there were two other endsystems located in the Sprint Technical Integration Operation Center (TIOC) in Kansas connected to the rest of the cluster via a wide area OC-3c link. The maximum propagation delay between endsystems at GSFC and the TIOC was 27 milliseconds. For the endsystem at the Sprint TIOC, Voyager, connectivity to any other client was burdened with the WAN propagation delays. Endsystems in the LAN-sized group at GSFC incurred the WAN link delays whenever they added Voyager as a leaf onto a p2mpt VCC. The MARS, MCS, and mrouter were all located at the GSFC site for these experiments. It would be interesting to place the MARS at Sprint in future experiments. An initial goal of this prototyping was to gauge the general performance of the MC over ATM model. The Benny code is instrumented with code that time stamps MARS and UNI signalling [8] messages in order to gather various latency measurements. The measurements taken were the following:

- Latency in joining a group: time transpired between the sending of a JOIN and the receipt of a confirmation copy on the CCVC.
- Address translation latency: time transpired between the sending of a REQUEST and the receipt of a MULTI.
- Related to the measurement above, the latency between the initiation of an address translation request (triggered by the arrival of an IP packet at the Benny driver for transmission) and the time a multipoint VCC is set up.
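The timestamping instrumentation behind these measurements can be sketched as a small probe that records when a control message went out and computes the elapsed time when the matching completion event arrives. The names here are illustrative, not Benny's actual instrumentation:

```python
# Hypothetical sketch of latency instrumentation: pair each outgoing
# control message with its completion event (an ack, a MULTI, a CCVC
# copy, or a successful leaf add) and report the elapsed time.
import time

class LatencyProbe:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.pending = {}            # (msg_type, group) -> send timestamp

    def sent(self, msg_type, group):
        """Record the send time of a control message."""
        self.pending[(msg_type, group)] = self.clock()

    def completed(self, msg_type, group):
        """Return elapsed seconds between send and completion."""
        t0 = self.pending.pop((msg_type, group))
        return self.clock() - t0
```

Injecting a fake clock (as the test below does) makes the probe itself testable; in a live host the default monotonic clock would be used.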
6 MC over ATM Protocol Measurements

In our experiments, we view the testbed topology from the perspective of the location of the MARS. This implies that all the hosts that are in close proximity to the MARS are considered "LAN" clients. Since the MARS is located at GSFC for all the experiments, we refer to the hosts at GSFC as LAN clients. Following the same reasoning, the hosts at the TIOC are referred to as "WAN" clients. In this section, we present various latencies of interest, measured over several experiments. Note that the results only emphasize trends in the scalability of the various MC over ATM entities. Since the entities are implemented in process space, the measured latencies, to some extent, depend on the system load and are therefore coarse measurements. Nevertheless, every effort is made to minimize these effects. Table 1 lists the latency measurements in an experiment involving one WAN client and two LAN clients. The first client to join a multicast group, with no prior registered group members, is referred to as the "first" client. For each experiment, five different multicast groups were used to measure the latencies of the three targeted metrics. Each row of the tables corresponds to a group and the columns list the respective metrics.

Table 1: Test Results Involving One WAN Client and Two LAN Clients (all times in msec; per-group latencies and averages for the 1st client, Voyager (WAN), the 2nd client, Electron (LAN), and the 3rd client, Proton (LAN))

6.1 Multicast Group Join

The first column in table 1 depicts the join latencies for all three clients joining a group. All the hosts in the cluster open a pt2pt VCC to
the MARS, to be used for all communication with the MARS. When the MARS receives a multicast group join message from a host, it first checks the host map for the group to determine if the host has already joined that group. The host map for the group is updated only if it does not already include the ATM address of the host. The host's group join message is then acknowledged on the existing pt2pt VCC between the MARS and the host. The multicast group join latency is measured as the difference between the time a host sends a JOIN message to the MARS and the time the MARS acknowledges this message to the host. As the number of hosts joined in a group increases, the time taken by the MARS to check an existing group membership increases. This results in increased join latencies. Notice in table 1 that the average delay for the third client is longer than that for the second client. The first client, being a WAN client, took longer than the other two because of its remoteness. These measurements are also plotted in fig. 4, which shows a slight increase in latencies from the second to the third client. These differences are so minute that they are occasionally overridden by the extraneous effects of system load on Benny process scheduling, as is evident from the data points for the second group in this figure. These effects are also visible in the address resolution measurements presented in the next section.

Figure 4: Multicast Group Join Latencies (join latency in msec versus test number for the 1st client (WAN), 2nd client (LAN), and 3rd client (LAN))

Table 2: Test Results Involving Three LAN Clients (all times in msec; per-group latencies and averages for the first client (Electron), second client (Proton), and third client (Positron); entries marked * are excluded from the average computations)

Table 2 shows the equivalent data for a cluster where all the members are connected over a LAN-sized network. The implementation dependencies are more visible in the join data of the first client.
In this case, the server has to insert a new table entry for the new group the client is joining. This seems to take longer than the insertion of a new ATM address into an existing group translation entry, as happens when a client joins an already known multicast group.

6.2 Address Resolution

As the number of hosts that have joined a multicast group increases, the number of addresses copied into the MULTI message by the MARS increases. This results in increased latency of address resolution from host to host, as shown in table 1, where on average the second client's requests were resolved in 7.5 msec, whereas the third client's requests were resolved in 7.7 msec. The first client's resolution time of 31.6 msec includes the 27 msec RTT, since this is a WAN client. These values are plotted in fig. 5. Table 2 shows similar results for the case where all the
clients are connected over a LAN-sized network. Also, on average, address resolutions are faster than multicast group joins. This is purely an implementation issue. Joining a group requires inserting a new entry into a table. In addition, before inserting the ATM address (NSAP) of the new member, the current list of NSAPs has to be checked in order to avoid duplicate entries. In the case of the first client, even though the table is empty, the process of checking has to be done. In our implementation we have an efficient hash table lookup and message generation, and so the processing of a REQUEST is more efficient than the processing of a JOIN. This behavior appears to hold well for the number of hosts in the testbed. However, as the size of the cluster increases, the address resolution latencies are expected to exceed the group join latencies. It is worth noting that it is advantageous for senders to have address resolution and VCC setup occur as quickly as possible, since delay in this process could result in packets being dropped before transmission.

Figure 5: Address Resolution Latencies (time in msec versus test number for the 1st client (WAN), 2nd client (LAN), and 3rd client (LAN))

6.3 Point-to-Multipoint VCC Setup

In table 1, the third column shows the time period from the point where a multicast IP packet arrives at the Benny interface for transmission to the point when a p2mpt VCC for the group is established. A p2mpt VCC is considered to be established when the first leaf is successfully added. As shown in fig. 6, the multipoint VCC setup times for the first client are considerably larger than those of the other two, since this client has to do address resolution and VCC setup across a WAN. WAN setup delays include both multipoint signalling and NNI routing delays at each switch hop.

Figure 6: Multipoint VCC Setup Latencies (time in msec versus test number for the 1st client (WAN), 2nd client (LAN), and 3rd client (LAN))
Since Benny is only a host ATM stack implementation, these network delays cannot be independently quantified through Benny. The second and third clients reflect multipoint VCC setups that occurred in the LAN portion of the testbed. For this reason, these delay measurements are less than those of the first client. In this particular experiment, these two clients were actually connecting to a multicast router, also located in the LAN part of the testbed, as their first leaf. In table 2, the measurements for the third client are those for Positron, a Sparc IPC, which is considerably slow compared to a SparcStation 10. It appears that the capabilities of the host significantly affect the latencies of VCC setup. Therefore, it seems wise to choose high performance machines for the critical entities such as the MARS and the MCS.

6.4 Multicast Server Latencies

The latencies associated with the various MCS control messages are tabulated in table 3.

Table 3: Test Results Involving an MCS (all times in msec; per-trial group registration, Mserv, address resolution, and multipoint VCC setup latencies, with averages)

Any entity that wishes to act as an MCS should first register with
the MARS. This registration latency, measured at an MCS, is the elapsed time between the transmission of the registration request to the MARS and the reception of an acknowledgement from the MARS. In our implementation, when the MARS receives an MCS registration request, it adds the MCS as a leaf to the server control VCC (SCVC) before acknowledging the request. If an MCS cannot be added to the SCVC within a certain number of retries, the registration request is not acknowledged and the MCS retries its request to the MARS. The registration latencies shown in table 3 therefore include a leaf setup, the MARS' MCS table update, and a round trip propagation delay. The Mserv latency is the time taken by an MCS to "join" the multicast group that it wishes to serve. The MCS acts as a relay for packets destined to this multicast group. The latency measured is the time taken by the MARS to update its table, plus a round trip latency. The address resolution times are the times taken by an MCS to get from the MARS the ATM addresses of the endstations that joined the multicast group. An MCS sets up a p2mpt VCC for data transmission to the group in response to an SJOIN message relayed on the SCVC by the MARS. Hence the VCC setup latencies listed are measured from the time the first SJOIN message is received by the MCS to the time the VCC is created. Trial three in table 3 was performed with an MCS in the LAN portion of the cluster and the first client to join the MCS supported group on the WAN part. Furthermore, the machine used as an MCS was a Sparc IPC, which took almost an order of magnitude longer than the LAN counterparts with a more efficient SparcStation 10 acting as an MCS. This signifies the importance of having a high performance machine for the MCS. The metrics for comparing a mesh supported group against an MCS supported group are data latency and completion of full connectivity between all the participants in the group.
In the case of a mesh supported group, the latencies depend purely on the distance between the data sender and the receiver. However, in the case of an MCS supported group, they largely depend on the location of the MCS itself, relative to the location of the multicast group members. When the group members are geographically distributed, the members close to the MCS experience lower data latencies compared to distant members. Ideally, there should be multiple MCSs supporting a given multicast group, each MCS serving the senders and receivers in its physical proximity. A protocol specification to this effect is currently being developed by the IETF. An MCS supported group is also limited by the quality of service of the p2mpt VCC between the MCS and the other multicast group members. In the case of a mesh supported group, every sender has a dedicated p2mpt VCC to all the receivers in the group. The QoS of this VCC could be better tailored to the requirements of the receiving entities than that of an MCS, which is effectively shared by all senders. However, an MCS could result in significant savings in the number of VCCs needed to support a large multicast group. A mesh supported group with m senders and n receivers requires m × n VCCs. An MCS supported group, on the other hand, would only need m + n VCCs.

7 Observations and Conclusions

7.1 Behavior of MBone Conferencing Tools

In our research of IP multicast over ATM, it was our goal to gather empirical data in a testbed running common Internet multicast applications. That is the reason for choosing the popular MBone audio (VAT), video (NV), and session management (SDR) tools. These applications are freely available, well maintained by researchers, and enjoy a large user base. The use of these tools is growing rapidly, and so is the amount of traffic that they generate on the Internet. We were therefore interested in studying how these applications performed in an ATM network with MC over ATM.
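The VCC economics stated in Section 6.4 (m × n VCCs for a mesh versus m + n with an MCS) are simple arithmetic, and they also drive the router-scaling example in Section 7.3. A direct computation:

```python
# VCC counts for the two modes of MC over ATM group support.

def mesh_vccs(senders: int, receivers: int) -> int:
    """VCC-mesh mode: every sender opens a p2mpt VCC covering every receiver."""
    return senders * receivers

def mcs_vccs(senders: int, receivers: int) -> int:
    """MCS mode: one VCC per sender into the relay, plus the fan-out to receivers."""
    return senders + receivers
```

For example, 20 senders and 50 groups' worth of receivers in a mesh yields 1000 VCCs, roughly the per-interface VCC capacity of the routers discussed in Section 7.3, while an MCS would need only 70. Note that for very small groups (e.g. 2 senders, 2 receivers) the two counts coincide, so the savings only matter at scale.

```
```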
An early observation we made was that a model like MC over ATM, whose scaling properties, particularly in mesh mode, depend on the number of multicast senders, may not be best suited for the current MBone environment. The mismatch is mainly a product of the way the MBone tools work. A feature of all the current MBone tools is that they cannot be run purely in receive-only mode, even when the intent is only to receive data destined to the multicast group. These applications periodically transmit status reports, though at a low frequency, to the multicast group joined. For example, VAT sends a session message every 6 seconds containing information about each participant's identity and participation status. This information is transmitted whether or not audio data is ever transmitted by the participant. This trickle of session messages results in the creation of a VCC, either pt2pt or p2mpt, wasting valuable VCC resources. It seems that it would be beneficial to enhance such conferencing tools with options that give users the capability of adjusting the frequency with which the advertisements are sent to the
group when the host is in receive-only mode. If advertisements were transmitted on the order of minutes, it would allow VCCs to be released between advertisements, therefore not tying up network resources during the life of the session. Another modification which would conserve network resources would be to send advertisements for multiple groups to a single reserved group. One could argue that a fix to this problem would be to use MCSs for all MBone multicast sessions. While this is a possibility, the transmission latencies that can be introduced by an MCS are least desired when running real-time conferencing applications like VAT and NV.

7.2 Interactions with UNI Signalling

In the experiments involving both LAN and WAN clients, the data latencies clients experienced depended on the order in which clients joined the group. As mentioned earlier, when the p2mpt VCC setup is initiated with the WAN client as the first member, the average startup latency experienced by the other members in the group increases. For example, in the case of a group supporting audio transmission, the audio starts off with greater latencies and/or loss of data. There is a potential to decrease the initial VCC setup delay by adding a bit of "smarts" to the MARS client. When the MULTI message received by a client includes a set of destination ATM addresses, VCC setup latencies could be minimized if the client could identify any destination that is "local", for example, with the same prefix (HO-DSP part) in the NSAP address, and use it as the first leaf. While the MARS model addresses the problem of achieving multicasting within a cluster, several other issues, such as efficient VCC allocation, bandwidth conservation, and support for applications' QoS, need to be addressed for multicast deployment across cluster boundaries.

7.3 Scalability Issues

Several issues need to be addressed to work towards provisioning a scalable and manageable IP multicast service over ATM. The foremost issue is that of VCC scalability.
The number of VCCs used for a particular multicast group is determined, to some extent, by the multicast routing protocols running between the IP routers connecting different clusters. DVMRP, the most widely deployed multicast routing protocol, uses source based trees, in which a distinct multicast tree is calculated for every source network for the same group. Also, packets destined to a multicast group are periodically flooded across the downstream interfaces to guarantee the delivery of multicast datagrams to all the receivers, whose group membership could be varying dynamically. The MC over ATM specification suggests that the multicast routers connected to a cluster join a block of multicast group addresses. When a router has to flood/forward IP multicast packets to a particular group, it gets a translation for the group from the MARS. This translation includes all the routers which promiscuously joined that group, in addition to any end stations that are also members of that group. A p2mpt VCC is set up from the router to the group members to carry the multicast packets destined to that group. In a cluster with n senders and m groups, this results in a total of n × m VCCs created between the same set of routers. For example, 20 routers sending traffic to 50 groups in a block would result in 1000 VCCs between the same set of routers, which is about the maximum number of VCCs supported per ATM physical interface on the majority of current IP routers. Also, in the absence of any downstream routers or hosts, the routers will receive IP multicast traffic only to have it dropped by the IP layer, wasting valuable link bandwidth. The VCC consumption could be reduced by using a mesh of pt2pt VCCs between all the routers in the clusters, or by setting up a p2mpt VCC from every router to every other router. In the case of pt2pt VCCs, each VCC could be used as a dedicated link between two routers.
IP multicast traffic for a group would be forwarded across a VCC only if a prune message has not been received on that VCC for that group. This could be achieved completely independently of the existence of a MARS. The main advantage of this model is that the costs of utilization of the physical link can be accurately determined for any customer. However, this results in multiple copies of the same packet being transmitted on the physical interface, wasting buffer space in the router and bandwidth on the physical link. Alternatively, routers could use the p2mpt VCC set up to the all-routers group for forwarding multicast packets on the downstream interfaces for a group. This results in the delivery of multicast traffic to the IP-VC interface of every router, regardless of the prune status of that interface. As yet another possibility, VCC resources could be conserved by reusing an existing p2mpt VCC whose set of leaves is a superset of the new targeted receivers. However, this complicates the algorithms used for VCC management, since dynamic joins and leaves by the clients to different multicast groups could change
the set of end systems served by a VCC that maps to one or more multicast groups. In order to conserve VCC resources and also to prune unwanted traffic at the ATM level to the extent possible, the IP multicast group address space could be divided into a pre-defined number of blocks. Each block in turn maps to a single VCC. Multicast routers that need to forward data of a particular group within a block will have to be added as a leaf to the VCC serving that block. A particular IP multicast group could be hashed into one of the blocks to uniformly distribute VCC load across the blocks. This technique would reduce the amount of unpruned traffic delivered to the IP-VC interface of a router, but does not totally eliminate it, emulating the behavior of a multicast extended ethernet interface. Unlike DVMRP, which creates a source based tree for every sender, the Protocol Independent Multicast protocol in Sparse Mode (PIM-SM) creates just one multicast tree per multicast group, independent of the number of senders to the group. This indirectly saves a number of the VCCs that need to be created when DVMRP-like protocols are used at the IP level. PIM-SM is therefore highly recommended over DVMRP for deployment across a wide area network.

8 Alternative Models

The MC over ATM model is a server based model and as such suffers from the disadvantages associated with such models, including concerns of availability, scalability, and reliability. Attempts to add countermeasures frequently result in increased protocol complexity, as is evident from the efforts of the IETF to produce separate specifications [9][10], each addressing some of these issues. Distributed models, on the other hand, have the potential to offer a cleaner and more reliable solution at the expense of some extra complexity. One such model is the support for group addressing at the ATM layer.
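The group-to-block hashing idea from Section 7.3 can be sketched directly. The hash below (treat the dotted quad as a 32-bit integer, take it modulo the number of blocks) is an assumed scheme chosen for illustration; any hash that spreads group addresses uniformly over the blocks would do:

```python
# Hypothetical sketch: hash each class D group address into one of a
# fixed number of blocks, each block being served by a single shared
# p2mpt VCC among the cluster's multicast routers.

def group_to_block(group_ip: str, num_blocks: int) -> int:
    """Map a dotted-quad multicast address to a block index in [0, num_blocks)."""
    o = [int(x) for x in group_ip.split(".")]
    value = (o[0] << 24) | (o[1] << 16) | (o[2] << 8) | o[3]
    return value % num_blocks
```

A router forwarding traffic for group G would then join (as a leaf) only the VCC serving block `group_to_block(G, num_blocks)`, receiving unwanted traffic only for the other groups that happen to hash into the same block.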
By using the ATM group addressing support defined in [11], an ATM end system could perform the IP-to-ATM group address translation by algorithmically mapping one address to the other, independent of any server. With this capability, a sender to a multicast group could specify an ATM group address as the destination of the data VCC. It would then be the job of PNNI [12] routing to create a VCC that reaches all members of the multicast group. The PNNI routing protocol would need to be extended to perform group-address-based routing. Furthermore, the "leaf initiated join" capability of UNI 4.0 signalling could be incorporated into PNNI to allow p2mpt VCCs from senders to be dynamically modified to add new group members. The final component of this distributed model would be a mechanism for hosts to advertise themselves as members of a multicast group. Hosts could use an IGMP-style protocol to inform the ATM switch to which they are connected of the multicast groups they wish to join. PNNI would then ensure that this information is distributed across the ATM network.

Acknowledgements

This research has greatly benefited from discussions with Allison Mankin. The authors would also like to thank Bill Edwards, Frank DeNap, and Ann Demirtjis of Sprint Corporation for guiding and funding this research. We gratefully acknowledge Pat Gary, Javad Boroumand, and Paul Diggins of NASA GSFC for providing testbed resources and their valuable time.
References

[1] Armitage, G., "Support for Multicast over UNI 3.0/3.1 based ATM Networks", RFC 2022, Bellcore, November 1996.
[2] Laubach, M., "Classical IP and ARP over ATM", RFC 1577, Hewlett-Packard Laboratories, January 1994.
[3] ftp://ftp.cmf.nrl.navy.mil/pub/vince
[4] ftp://ftp.isi.edu/pub/isi-east/benny.tar.gz
[5] Deering, S., "Host Extensions for IP Multicasting", RFC 1112, Stanford University, August 1989.
[6] ftp://ftp.parc.xerox.com/pub/net research/ipmulti/ipmulti3.5-sunos41x.tar.z
[7] ftp://ftp.isi.edu/pub/mbone.faq.txt
[8] The ATM Forum, "ATM User-Network Interface Specification", Prentice-Hall.
[9] Luciani, J., Armitage, G., Halpern, J., "Server Cache Synchronization Protocol (SCSP) - NBMA", Internet-Draft, April.
[10] Talpade, R., Ammar, M., "Multicast Server Architectures for MARS-based ATM multicasting", Internet-Draft, June.
[11] The ATM Forum, "ATM User-Network Interface (UNI) Signalling Specification v4.0", June.
[12] The ATM Forum, "Private Network-Network Interface (PNNI) v1.0", June 1996.