ML-IP TDMoIP Main Link Module
Installation and Operation Manual
MP-2100 Version 11, Megaplex-2100 Module


ML-IP TDMoIP Main Link Module for MP-2100 Version 11
Installation and Operation Manual

Notice

This manual contains information that is proprietary to RAD Data Communications Ltd. ("RAD"). No part of this publication may be reproduced in any form whatsoever without prior written approval by RAD Data Communications.

Right, title and interest in all information, copyrights, patents, know-how, trade secrets and other intellectual property or other proprietary rights relating to this manual and to the ML-IP and any software components contained therein are proprietary products of RAD protected under international copyright law and shall be and remain solely with RAD.

ML-IP is a registered trademark of RAD. No right, license, or interest to such trademark is granted hereunder, and you agree that no such right, license, or interest shall be asserted by you with respect to such trademark.

You shall not copy, reverse compile or reverse assemble all or any portion of the Manual or the ML-IP. You are prohibited from, and shall not, directly or indirectly, develop, market, distribute, license, or sell any product that supports substantially similar functionality as the ML-IP, based on or derived in any way from the ML-IP. Your undertaking in this paragraph shall survive the termination of this Agreement.

This Agreement is effective upon your opening of the ML-IP package and shall continue until terminated. RAD may terminate this Agreement upon the breach by you of any term hereof. Upon such termination by RAD, you agree to return to RAD the ML-IP and all copies and portions thereof.

For further information, contact RAD at the address below or contact your local distributor.

International Headquarters: RAD Data Communications Ltd., 24 Raoul Wallenberg St., Tel Aviv, Israel. Tel: ; Fax: ; E-mail: market@rad.com

North America Headquarters: RAD Data Communications Inc., 900 Corporate Drive, Mahwah, NJ, USA. Tel: (201) ; Toll free: ; Fax: (201) ; E-mail: market@radusa.com

RAD Data Communications Ltd. Publication No. /06

Installation Instructions for Compliance with EMC Requirements

To ensure compliance with electromagnetic compatibility (EMC) requirements, it is recommended to connect only shielded data cables to the ML-IP module's ports.

Quick Start Guide

If you are familiar with the ML-IP module, use this guide to prepare it for operation.

Cable Connections

Insert the module in the assigned I/O slot. Connect the prescribed cables to the appropriate module connectors, in accordance with the site installation plan.

Configuration of Module Parameters

Start configuration by entering the command:

DEF CH SS *<Enter>

where SS is the ML-IP slot number. The configuration parameters and the allowed range of values are listed below.

General Module Parameters
- Ring Mode: ENABLE, DISABLE
- Protected IPs: ENABLE, DISABLE
- Protected IP Addresses: use dotted-quad format

External Port Parameters
- Auto-Negotiation: YES, NO
- Max. Capability Advertised: 10Mbps HD, 10Mbps FD, 100Mbps HD, 100Mbps FD
- LAN Rate: 10Mbps HD, 10Mbps FD, 100Mbps HD, 100Mbps FD
- LAN Type: Ethernet II

For NET1 and NET2 ports (EX1 and EX2) only:
- Mng VLAN Tagging: YES, NO
- Mng VLAN ID: 1 to …
- Mng VLAN Priority: 0 to 7

For USER port (EX3) only:
- Traffic Priority: LOW, HIGH

Internal TDM Port Parameters
- Connect: YES, NO
- Signaling: YES, NO
- Sig. Profile: 1 to 5
- IP Address: use dotted-quad format
- Subnet Mask: use dotted-quad format
- Routing Protocol: NONE, PROPRIETY RIP, RIP2
- OOS Signaling: FORCED BUSY, FORCED IDLE, BUSY IDLE, IDLE BUSY
- Echo Canceller: YES, NO

Bundle Parameters

To add a new bundle, use the command:

ADD BND B<Enter>

where B is the bundle number, in the range of 1 to 120. The configuration parameters and the allowed range of values are listed below.

- Connect: ENABLE, DISABLE
- ML-IP Slot: Megaplex-2100: IO-1 to IO-12; Megaplex-2104: IO-1 to IO-4
- ML-IP TDM: IN1, IN2
- Destination IP: use dotted-quad format
- Next Hop IP: use dotted-quad format
- IP TOS: 0 to 255
- Ext Eth: EXT1, EXT2, AUTO
- Dest Bundle: 1 to …
- Jitter Buffer: … to 300 msec, in 1-msec steps
- Name: string of up to 8 alphanumeric characters
- TDM Bytes in Frame: 48, 96, 144, 192, 240, 288, 336, 384
- Voice OOS: 00 to FF
- Data OOS: 00 to FF
- Far End Type: E1, T1 ESF, T1 D4
- OAM Connectivity: ENABLE, DISABLE
- VLAN Tagging: YES, NO
- VLAN ID: 1 to 4000
- VLAN Priority: 0 to 7
- Redundancy: YES, NO
- Redundancy Bundle: BND1 to BND120
- Recovery Time, sec: 0 to 99

Assignment of Bundle Timeslots

Full Timeslot Assignment

Assign timeslots by entering the command:

DEF TS SS CC<Enter>

where SS is the ML-IP slot number, and CC is the number of the ML-IP internal port on which the bundle is defined. For each timeslot, you have the option to connect the timeslot between an I/O channel and one of the defined bundles.

Split Timeslot Assignment

For a bundle serving an I/O channel using split timeslots, enter the command:

DEF SPLIT TS SS CC<Enter>

where SS is the ML-IP slot number, and CC is the number of the ML-IP internal port on which the bundle is defined.

Adding Static Entries to the ML-IP IP Routing Table

To add a new entry, use the command:

ADD ROUTE R<Enter>

where R is the index number of the entry. Select SINGLE IP for Entry Type, and then configure the static entry parameters. The configuration parameters and the allowed range of values are listed below.

- Dest IP Address: use dotted-quad format
- Dest IP Mask: use dotted-quad format
- Next Hop: use dotted-quad format
- Metric: 1 to 15

Using the Adaptive Timing Mode

To select one of the bundles defined on the ML-IP module as the nodal timing reference:
1. Enter the DEF SYS command.
2. In the Mode row, select Adaptive for the Main and/or Fallback fields.
3. Select the desired bundle number.

Using the ML-IP Module to Provide the Management Link (CL Module with Ethernet Interface only)

1. Enter the DEF SP CON2 command.
2. Select No for Direct LAN Connection.
3. Check that the internal TDM ports' Routing Protocol is configured for RIP2.


Contents

Chapter 1. Introduction
  1.1 Overview
  Versions
  Main Features
  Applications
  Physical Description
  Module Panels
  Port Operating Mode Indications
  Functional Description
  ML-IP Functional Block Diagram
  TDM Bus Interfaces
  Routing Matrix
  Echo Canceller
  Packet Processor Subsystem
  IP Traffic Handling Subsystem
  Ethernet Switch Subsystem
  Ethernet Ports
  Timing Subsystem
  Test Subsystem
  Local Management Subsystem
  Redundancy
  Diagnostics
  Technical Specifications

Chapter 2. Module Installation and Operation
  2.1 Safety
  Laser Safety Classification
  Laser Safety Statutory Warning and Operating Precautions
  Installing the ML-IP Module
  Connecting the Cables
  ML-IP Module with Electrical Interfaces
  ML-IP Modules with Optical Interfaces
  Normal Indications

Chapter 3. Configuration Instructions
  3.1 Introduction
  ML-IP Configuration Sequence
  Configuring the General ML-IP Module Parameters
  Configuring the External Ports
  Configuring the Internal TDM Ports
  Configuring Bundles
  Selection Guidelines for TDM Payload Bytes per Frame
  Jitter Buffer Sizing Guidelines
  Bundle Redundancy Configuration Guidelines
  Defining a New Bundle
  Changing the Configuration of an Existing Bundle

  Deleting an Existing Bundle
  Assigning Bundle Timeslots
  Timeslot Assignment Rules
  Timeslot Assignment Example
  Selecting the System Timing Reference
  Timing Mode Selection Guidelines
  Using the Adaptive Timing Mode
  Defining Static Routes for ML-IP
  Basic Routing Process
  Configuring Static Entries
  Connecting to CL Modules through an ML-IP Module
  Connecting through ML-IP USER Port
  Internal Routing
  Additional Tasks
  Displaying External Port Status Information
  Displaying Internal Port Status Information
  Displaying Bundle Configuration Information
  Displaying Bundle Performance Statistics
  Displaying LAN Interface Performance Statistics

Chapter 4. Configuring Typical Applications
  4.1 Typical Application
  Configuring the Modules

Chapter 5. Troubleshooting & Diagnostics
  5.1 Introduction
  Performance Monitoring
  Overview
  Bundle Performance Statistics
  LAN Interface Performance Statistics
  Test and Diagnostic Functions
  Overview
  Internal Port Tests
  Local Loopback on Timeslot
  Bundle Tests and Loopbacks on Timeslots of an Internal Port
  Tests on Bits of a Split Timeslot
  Troubleshooting Instructions
  Preliminary Troubleshooting Instructions
  Systematic Troubleshooting Instructions
  Frequently Asked Questions
  Technical Support

Chapter 1. Introduction

1.1 Overview

This manual describes the technical characteristics, applications, installation and operation of the ML-IP TDMoIP main link module for the Megaplex-2100 and Megaplex-2104 modular E1/T1 multiplexer systems.

Note: In this manual, the generic term Megaplex is used when the information is applicable to both the Megaplex-2100 and Megaplex-2104 chassis types. The complete designation is used only for information applicable to a specific version.

The main function of the ML-IP module is to provide external links for the modules installed in the Megaplex chassis through an IP network, using the TDMoIP (TDM over IP) protocol. ML-IP modules are fully compatible with the IPmux family of TDMoIP gateways offered by RAD, and therefore permit the Megaplex equipment to become part of an integrated corporate IP network. Moreover, the ML-IP module offers various options for ensuring high connection availability, using redundancy and other advanced technological approaches that enable the user to select the solution that best matches the organization's availability goals.

The ML-IP module includes all the functions needed to support the transfer of TDM traffic over IP networks. The module supports all the types of I/O modules that can be installed in Megaplex units; it also supports channel-associated signaling (CAS) for voice modules. An internal non-blocking cross-connect matrix provides full and flexible control over routing between I/O modules and IP destinations reached through the external links.

The physical interfaces to the IP network are provided through Ethernet ports. The ML-IP has three such ports, interconnected through an internal Ethernet switch: two of these ports serve as uplink (network) ports, and the third enables other user equipment to connect to the uplink without requiring additional equipment.

The ML-IP module can use the internal Megaplex clock signals provided by other main link modules, and can also provide the chassis clock reference and timing signals for the other modules installed in the same chassis, using the adaptive timing mode (timing recovered from the IP traffic). The chassis timing reference is used in common by all the Megaplex TDM buses.

Versions

ML-IP modules are currently available in two versions:

- A version with electrical 10/100BaseT Ethernet interfaces for all three ports
- Versions with 100Base-FX optical interfaces for the two network ports, and an electrical 10/100BaseT Ethernet interface for the user port. The optical interfaces operate at 850 or 1310 nm and can be ordered with FC/PC or ST connectors, for use with multimode or single-mode fibers.

Main Features

The function of the ML-IP module is to interface between the TDM buses of the Megaplex chassis and external IP networks that provide communication links to other TDMoIP equipment. For this purpose, ML-IP converts the TDM bit streams carried over the internal Megaplex backplane into IP frames, which are then transmitted directly to a LAN or Ethernet-based backbone. Separate IP connections can be set up for any user-specified I/O channels, in accordance with the desired destination. The per-connection payload carrying capacity is user-selectable in accordance with the number of backplane timeslots to be transported, from a minimum of 2 timeslot bits (16 kbps payload) up to 31 timeslots (nearly 2 Mbps). This approach provides a cost-effective, versatile and modular TDMoIP solution for supporting the widely deployed TDM equipment over new IP infrastructures. The IANA-assigned UDP socket number for TDMoIP enables proper flow handling through the network switches and routers.

Internal Traffic Routing Functions

The ML-IP module includes an internal non-blocking cross-connect matrix that supports the same cross-connect features as the Megaplex ML-2E1 and ML-2T1 TDM main link modules. The cross-connect matrix connects to the four internal TDM buses of the Megaplex chassis, and routes them to the module internal processing circuits through two internal TDM ports. The internal matrix allows routing voice and data channels from any I/O module installed in the chassis to any link of the module. In addition, the matrix enables voice and data traffic to be routed between the two internal ports, as well as between internal ports and ports of regular TDM links on other modules. The full cross-connect capability of the matrix confers great flexibility in assigning timeslots to more efficiently utilize link bandwidths, as well as supporting drop & insert, bypass, broadcast, and multi-link applications.

For transmission over the IP network, timeslot bundling is supported. A bundle can include any number of timeslots, up to 31; when using split timeslot allocation, a bundle can carry only two or four bits of a timeslot. Up to 24 bundles (without CAS signaling), or 12 bundles (with CAS), are supported by each ML-IP module. To support more bundles (up to 120), a Megaplex chassis can be equipped with multiple ML-IP modules. Each bundle can be independently routed to any desired IP address, and can be tagged differently.

Echo Canceller

The ML-IP module can be ordered with an optional near-end echo canceller. The echo canceller serves only voice timeslots connected through the second internal TDM port, and its operation is enabled/disabled by the user. The echo canceller supports both A-law and μ-law encoding.

External Port Characteristics

The ML-IP main link module provides three Ethernet interfaces:

- Two Ethernet uplink (network) ports, identified as NET 1 and NET 2, each capable of carrying the full traffic load of the chassis. One of the uplink ports can be used as the main link to the IP network, which carries the packetized TDM data stream from the Megaplex I/O modules to the IP network. The second uplink port can serve as a connection for other ML-IP equipped Megaplex units. This enables transmitting the payload of multiple Megaplex units, interconnected in a daisy-chain topology, over a single Ethernet link to the IP network (see typical applications in Section 1.2). The network ports can be ordered with 10/100BaseT interfaces terminated in separate RJ-45 connectors, or with 100Base-FX interfaces terminated in separate ST or FC/PC connectors.
- An additional Ethernet port, identified as USER, serves for directly connecting a local LAN or a PC. The USER port payload is transmitted toward the IP network via one of the uplink ports. The USER port has a 10/100BaseT interface terminated in an RJ-45 connector.

All three Ethernet interfaces operate either full or half duplex, at 10 Mbps (Ethernet) or 100 Mbps (Fast Ethernet) speeds, and comply with the relevant IEEE Ethernet LAN standards. You can manually select the operating mode of each interface, or enable auto-negotiation for automatic selection of the operating mode.

Packet Traffic Handling Characteristics

The ML-IP module includes all the functions needed for direct connection to IP networks via Ethernet LANs, and therefore does not require any external support.

At the Ethernet level, the ML-IP module supports VLAN tagging and priority labeling according to IEEE 802.1D-2004 and 802.1Q, to provide reliable, high quality of service (QoS).

At the IP level, the ML-IP module supports user-configurable ToS (Type of Service) for the outgoing IP frames. This allows networks which support Diffserv (or ToS) to give higher priority to the ML-IP traffic, which is delay-sensitive.

To compensate for the delay variation through the IP network, the ML-IP module uses a packet delay variation (jitter) buffer to store incoming IP packets. The buffer compensates for up to 300 msec of delay variation in the IP network.

ML-IP operates as an IP host. It supports the ICMP ping function, generates ARP requests in case of an unknown destination MAC address, and answers ARP requests. The user can control traffic routing by configuring static IP routing entries.

Redundancy Options

The ML-IP module supports redundancy at the bundle level, i.e., two bundles (defined on the same ML-IP module or on different ML-IP modules) can be configured to serve as a redundant pair. To use this capability for traffic protection, the same timeslot (or group of timeslots) is simultaneously connected to two different bundles. The receive function uses only the data received through one of these bundles; if the active bundle stream fails, the Megaplex starts using the other bundle stream.

In addition, redundancy can be provided at additional levels:

- Redundancy at the external port level (and the cable connected to the external port): this can be achieved by routing each bundle of a redundant pair to a different external port.
- Full redundancy at the transmission path level. For this type of redundancy, the bundles are routed through different ML-IP modules, and network routing is designed to ensure that the packets travel through the network via different paths.
- Ring redundancy: implemented by means of the RAD-proprietary RFER (Resilient Fast Ethernet Ring) protocol, it provides protection for the transmission path. In this redundancy mode, a network topology similar to a ring is used; therefore, data can propagate over two alternative paths (clockwise or counterclockwise). To comply with the Ethernet protocol characteristics, a pair of adjacent nodes on the ring keeps the ring open by disconnecting an arbitrary ring segment, thereby preventing frames from making a full round trip. If a segment breaks (fails), the RFER protocol automatically moves the blocking nodes to the ends of the failed segment and reconnects the previously disconnected segment. Therefore, full connectivity is restored for any single point of failure. For TDMoIP traffic and other user-specified traffic, this change takes effect within 50 msec (assuming direct fiber connection); for other Ethernet traffic, it takes longer (approximately 20 seconds).

Timing

The ML-IP module supports three timing modes:

- Internal mode. An on-board local oscillator is the source for the transmit and receive clocks, which are then also used by the Ethernet links, as well as by all the I/O modules installed in the chassis. In this case, ML-IP is the sole clock source for all the units in the network.
- External mode. The source for the ML-IP transmit and receive clocks is an external clock signal: this can be the clock signal applied to a data channel of one of the I/O modules, the recovered clock signal of a TDM main link, or a station clock signal. The external reference is also used by all the other I/O modules in the chassis.
- Adaptive mode. The ML-IP module generates a clock signal locked to the rate of packets received from the IP network by one of the bundles defined on the module. This is called the adaptive timing mode. The adaptive clock is then used by the Ethernet links, as well as by all the I/O modules installed in the chassis.

Management Support

ML-IP allows transmitting the CL management channel through its uplink ports, for remote management through the IP network. The communication channel between ML-IP and the CL modules can carry all the management information, which is then encapsulated and transmitted as an independent stream of Ethernet frames, together with the Megaplex payload.

1.2 Applications

Figure 1-1 shows a typical application, in which Megaplex units equipped with ML-IP modules are used to build a corporate private network. In this application, the headquarters provides centralized PBX and data services to the branch offices. At each location, the connection to the IP network is made through a single ML-IP uplink port; the office LAN is connected to the USER port of the module and is sent to the network through the internal Ethernet switch of the module. In a similar way, the IPmux-1E TDMoIP gateway serving branch B, which is close to branch A, connects to the other uplink port of the ML-IP module at branch A.

When long distances must be covered by the network connections, ML-IP modules with optical 100Base-FX network interfaces can be used.

Figure 1-1. Corporate Network Based on Megaplex with ML-IP Modules

Figure 1-2 shows a large corporate network based on IP transmission that is implemented with TDMoIP RAD equipment. In this application, an IPmux-16 TDMoIP gateway is used to provide access to multiple branch offices via the IP network. The data and PBX equipment deployed at the headquarters site provide centralized voice and data services over IP to the various branch sites. The Megaplex units with ML-IP modules operating at the branch level provide access via the IPmux-16 links to the headquarters site. Equipment configurations at the branch offices are similar to those shown in Figure 1-1. To increase availability for critical services, bundle redundancy can be used between ML-IP modules over the IP network.

Figure 1-2. Large Corporate Network Based on TDMoIP RAD Equipment

Figure 1-3 shows the approach used to maximize the utilization of a 100BaseT link by daisy-chaining multiple Megaplex units.

Figure 1-3. Multiple Megaplex-2100 Units Daisy-Chained to Maximize Utilization of a Single Connection to the IP Network

This configuration minimizes the number of independent connections needed to access the IP network, and reduces the total cost: instead of installing separate links from each site to several IP network access points, inter-site links are used to reach a Megaplex unit already connected to the IP network. This configuration is especially cost-effective when the connection to the IP network is made through optical fibers.

Figure 1-4 shows an application that utilizes the RAD-proprietary RFER (Resilient Fast Ethernet Ring) protocol, which provides a self-healing capability for 100 Mbps Fast Ethernet networks using ML-IP modules. This capability is available for both fiber and copper media.

Figure 1-4. TDMoIP Application Using RFER Capabilities

The RFER protocol uses the two uplink ports of ML-IP modules to implement a network topology similar to that of a ring, which emulates the dual ring topologies used in SDH networks. In case of link failure on any segment of the ring, RFER reroutes the TDMoIP traffic within 50 milliseconds, fast enough to maintain the required grade of service for voice traffic. For other Ethernet traffic, recovery takes longer, approximately 20 seconds.

RFER enables users to create highly reliable networks, using dark fiber or dry copper in a ring topology. Survivability is further enhanced by scalable support for multiple rings, which eliminates the risk of a single point of failure. This is especially important in applications using distributed topologies, such as voice communication networks for commuter railroads.

1.3 Physical Description

ML-IP is a 4U-high module that occupies one I/O module slot in the Megaplex chassis. All module functions are configured by software.

Module Panels

Module with Electrical Network Interfaces

The panel of the ML-IP version with electrical interfaces is shown in Figure 1-5. The module panel includes three groups of status indicators (one group for each Ethernet port), and a common test indicator. Each Ethernet port is terminated in an RJ-45 connector. Table 1-1 explains the functions of the components located on the module panel.

Table 1-1. ML-IP Module with Electrical Network Interfaces, Front Panel

Item: Function
- TEST Indicator: Yellow indicator, lights when a test or loopback is activated on the ML-IP module
- NET1 LINK Indicator: Green indicator, lights when the NET1 interface is connected to an active Ethernet hub or switch port
- NET1 FDX Indicator: Green indicator, lights when the NET1 interface operates in the full-duplex mode
- NET1 100M Indicator: Green indicator, lights when the NET1 interface operates at the 100 Mbps rate (Fast Ethernet mode)
- NET1 Connector: RJ-45 connector for connection to the NET1 interface
- NET2 LINK Indicator: Same function as the NET1 LINK indicator, but for the NET2 interface
- NET2 FDX Indicator: Same function as the NET1 FDX indicator, but for the NET2 interface
- NET2 100M Indicator: Same function as the NET1 100M indicator, but for the NET2 interface
- NET2 Connector: RJ-45 connector for connection to the NET2 interface
- USER LINK Indicator: Same function as the NET1 LINK indicator, but for the USER interface
- USER FDX Indicator: Same function as the NET1 FDX indicator, but for the USER interface
- USER 100M Indicator: Same function as the NET1 100M indicator, but for the USER interface
- USER Connector: RJ-45 connector for connection to the USER interface

Figure 1-5. Panel of ML-IP Module with Electrical Interfaces

ML-IP Modules with Optical Network Interfaces

Figure 1-6 shows typical panels of ML-IP module versions with optical interfaces. The panels shown in Figure 1-6 are terminated in various types of optical connectors: ST and FC/PC, depending on ordering. The main difference between the panels of ML-IP modules with electrical interfaces and those with optical interfaces is that each network port of the latter type has two optical connectors, designated as follows:

- TX serves as the transmit (output) connector
- RX serves as the receive (input) connector.

In addition to the optical connectors, each module panel includes a USER port and the indicators explained in Table 1-1.

Figure 1-6. Typical ML-IP Panels with Optical Network Interfaces (Module with ST Connectors and Module with FC/PC Connectors)

Port Operating Mode Indications

The indications provided by the FDX and 100M indicators of each port can be used to identify the port operating mode. The interpretation of the port mode indications is given in Table 1-2.

Table 1-2. Interpretation of Mode Indications

FDX   100M   Operating Mode
OFF   OFF    Half-duplex operation at 10 Mbps
ON    OFF    Full-duplex operation at 10 Mbps
OFF   ON     Half-duplex operation at 100 Mbps
ON    ON     Full-duplex operation at 100 Mbps

1.4 Functional Description

ML-IP Functional Block Diagram

Figure 1-7 shows the functional block diagram of the ML-IP module.

Figure 1-7. ML-IP Functional Block Diagram

The ML-IP module includes the following subsystems:

- TDM bus interfaces
- Routing (cross-connect) matrix
- Echo canceller (option)
- Packet processor subsystem
- IP traffic handling subsystem
- Ethernet switch subsystem
- Ethernet ports
- Timing subsystem

- Test subsystem
- Local management subsystem.

TDM Bus Interfaces

The ML-IP module has four independent TDM bus interfaces, one for each Megaplex TDM bus. Each TDM bus interface is used to connect timeslots from the corresponding bus to the internal routing matrix of the ML-IP module, in accordance with the commands received from the CL module.

Routing Matrix

The ML-IP module includes a routing matrix that controls the routing of payload signals within the module.

Overview of Routing Method

To understand the payload routing method used within the ML-IP module, it is necessary to consider three entities:

- Chassis TDM buses: the flow of payload on these buses is organized in timeslots (31 timeslots per bus). The CL module automatically assigns timeslots on the TDM buses to each connected I/O channel or internal port of the modules installed in the chassis.
- Internal TDM port. Timeslots directed for transmission through the ML-IP module are formally connected to an entity designated an internal TDM port. The internal TDM ports are used to control the utilization of the module traffic capacity. Each port has 31 timeslots for payload, numbered 1 to 31; each timeslot corresponds to a physical port of the routing matrix. When channel-associated signaling must be transmitted for some of the channels carried through an internal port, timeslot 16 of that port is reserved and only 30 of the port timeslots are available for payload.
- Timeslot bundle. The bundle is similar to a virtual internal port: it simply indicates which payload is to be routed to a certain destination, and enables you to specify the associated processing parameters. For example, a voice channel will be connected to a bundle having one timeslot, and CAS support will be enabled for that bundle, whereas a 128 kbps data stream from a high-speed data channel will occupy two timeslots in a bundle without CAS support. The bundle can have a minimum bandwidth of 2 bits (16 kbps), which is the minimum bandwidth that can be assigned to a low-speed data channel; the maximum bandwidth is 31 timeslots (1984 kbps) without CAS support, and 30 timeslots (1920 kbps) with CAS support.

After associating I/O channels with a bundle, the other configuration activities needed to transport their payload (and any associated signaling information) are defined on the bundle.
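The bandwidth figures quoted above follow directly from the standard 8 kHz TDM frame rate: each timeslot carries 8 bits every 125 μs (64 kbps), so every bit position contributes 8 kbps. The short Python sketch below is an illustration added for clarity, not part of the ML-IP software, and simply reproduces the quoted values:

    # Illustrative calculation (not ML-IP software): payload bandwidth of a bundle.
    # A TDM timeslot carries 8 bits every 125 us (8 kHz frame rate), i.e. 64 kbps,
    # so every bit position in a timeslot contributes 8 kbps.
    BITS_PER_TIMESLOT = 8
    FRAME_RATE_HZ = 8000          # standard E1/T1 frame rate

    def bundle_bandwidth_kbps(timeslots=0, extra_bits=0):
        """Payload bandwidth of a bundle built from whole timeslots plus split-timeslot bits."""
        bits = timeslots * BITS_PER_TIMESLOT + extra_bits
        return bits * FRAME_RATE_HZ // 1000

    print(bundle_bandwidth_kbps(extra_bits=2))   # 16   -> minimum bundle (2 bits)
    print(bundle_bandwidth_kbps(timeslots=31))   # 1984 -> maximum without CAS
    print(bundle_bandwidth_kbps(timeslots=30))   # 1920 -> maximum with CAS (TS16 reserved)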

Routing Matrix Functions

Matrix routing is user-programmable, under the control of the CL module, and enables connecting any timeslot between any two ports. The matrix routing resolution is even higher, and provides control over routing down to pairs of bits (split timeslot routing). As a result, the matrix can be used to perform the following functions:

- Connect timeslots from I/O channels installed in the Megaplex to form a timeslot bundle. This is performed by connecting the desired timeslots from the TDM buses to the internal TDM port on which the desired timeslot bundle is defined.
- Bypass timeslots between the two internal TDM ports of the ML-IP module. Each of these internal ports has 31 payload timeslots. When robbed-bit multiframe or channel-associated signaling must be transmitted for some of the channels carried through an internal TDM port, timeslot 16 of that port is reserved and only 30 of the port timeslots are available for payload.
- Bypass timeslots from main links or internal ports of other modules to the internal TDM port interfaces located on the ML-IP module.

When an internal TDM port is configured to support channel-associated signaling, the matrix also routes the CAS information associated with each channel in parallel with the channel data.

Note: The ML-IP module supports the unidirectional broadcast routing mode for all the I/O modules installed in the chassis, and the bidirectional broadcast routing mode for a number of Megaplex modules (the complete list can be found in Chapter 1 and Chapter 7 of the Megaplex Installation and Operation Manual).

Echo Canceller

The ML-IP module can be ordered with an echo canceller. The echo canceller is always installed on internal port 2 and, when enabled, operates only on timeslots carrying voice signals in bundles that are transmitted through internal port 2. This means that the data passing between port 2 of the routing matrix and the packet processor subsystem passes through the echo canceller, and is processed as described below.

The function of the echo canceller is to attenuate echo signals generated at the local (near-end) analog interface of each voice channel. Echo signals start degrading the perceived voice quality when the total end-to-end delay reaches 30 msec; at delays exceeding this value, using echo cancellers significantly improves the perceived quality. Echo signals are usually generated when the interface operates in the 2-wire mode; however, they may also appear when 4-wire interfaces are used, for example, because of acoustic feedback at the subscriber's handset. Therefore, echo cancellers are highly recommended for voice channels with 2-wire interfaces, and can also be used on 4-wire channels. To be effective, echo cancellers must be used at both ends of an audio channel.

The echo canceller operates by performing the following actions, independently for each voice channel (timeslot) carried by internal port 2:

1. Analyzing the 64 kbps data stream that represents the analog signal received from the near-end subscriber (this is the transmit data stream arriving to the ML-IP module from the local voice channel).
2. Comparing this signal with the 64 kbps data stream that represents the analog signal received from the far-end subscriber (this is the receive data stream arriving to the ML-IP module from the far-end voice channel).
3. Detecting the echo signal contained in the local voice channel transmit stream (the echo is a delayed, distorted and attenuated version of the signal received from the far-end subscriber).
4. Modifying the transmit data stream by subtracting the echo signal before sending the data to the far-end subscriber or exchange.

This process enables the ML-IP module to transmit to the far-end subscriber only the useful signal, that is, the actual signal transmitted by the near-end subscriber. The maximum delay for which the echo can be eliminated is 4 msec.

Because the data stream transmitted to the far end is modified, when an echo canceller is used the channel is no longer transparent. Therefore, the echo canceller operates in accordance with the following rules:

- The echo canceller can be enabled/disabled by the user.
- When enabled, the echo canceller can process all the 31 timeslots carried by internal port 2.
- The echo canceller assumes that the data streams carried by the timeslots are standard 64 kbps PCM signals encoded either in accordance with the A-law or μ-law, as defined in ITU-T Rec. G.711. The encoding law is determined by the framing mode of the remote equipment configured by the user: A-law for equipment using E1 framing, and μ-law for equipment using T1 framing. All the bundles are processed in accordance with the same encoding law; therefore, when the echo canceller is enabled, it is not allowed to configure different framing modes.
- The echo canceller cannot be used on voice channels using ADPCM encoding, or low bit rate compression.
- The echo canceller processes only timeslots defined as voice timeslots: it is automatically disabled by the CL module on other types of timeslots (data, signaling, management, etc.).
- To avoid interference with voice-band modems, when a fax or modem tone is detected in a voice channel, the echo canceller is temporarily disabled for the duration of the corresponding call.

Packet Processor Subsystem

The packet processor subsystem performs the functions needed to build packets for transmission over the network, and to recover the payload carried by packets received from the network.

The packet processor has two TDM ports that connect to the internal TDM buses through the routing matrix (either directly or through the optional echo canceller, as explained in the Echo Canceller section above), and one packet port that connects to the network through the Ethernet switch.

Note: The packet processor also has a port for CL management traffic directed to the IP network, whose utilization depends on the user's configuration.

The network port of the packet processor is assigned a unique IP address. This address is therefore shared by the two associated internal TDM ports and by the associated bundles (the CL management traffic uses a different IP address, which can be independently selected by the user).

To prepare the payload for transmission over the network, the packet processor performs the following functions:

1. Separates the data received through each internal TDM port into separate bundles, as specified by the user.
2. Fragments the continuous data stream of each bundle into slices having the size specified by the user. The slice size is always an integer multiple of 48 bytes (N × 48 bytes, where N is in the range of 1 to 8). Section 3.5 presents considerations related to the selection of the appropriate slice size.
3. Adds the overhead necessary to transmit each slice over the packet network and builds packets for transmission to the desired destination (see the Encapsulation section below).

Packets received from the network pass through the reverse process:

1. The headers of the packets retrieved from the received frames are stripped, and the bits are stored in packet buffers (see the Packet Buffers section below); the buffers are then read in the order and at the rate needed to restore the original data stream of each bundle.
2. The bundle data streams are sent through the internal TDM ports to the routing matrix. The routing matrix then inserts the received data into the prescribed timeslots of the appropriate TDM bus, to make it available to the local I/O channels.

When an internal TDM port is configured to support channel-associated signaling, the packet processor will also handle the transmission of the signaling information associated with the payload carried by each bundle.

Encapsulation

Each bundle defined by the user is handled as an independent entity by the packet processor, and its payload is inserted in a unique sequence of TDMoIP frames. The process used to build TDMoIP frames for transmission to the network via the Ethernet ports includes the following steps:

1. Building a UDP packet. This is performed by inserting the TDM bytes of the bundle into the payload field of a UDP packet, and adding the overhead data needed to build a UDP packet.

The UDP overhead data specifies as the destination UDP port the IANA-assigned port number for TDMoIP, thereby identifying the type of payload and its handling method. The UDP source port inserted in the overhead indicates the number of the destination bundle. The UDP packet also includes a checksum field, but this field is not used (the checksum of the transmitted packets is set to 0, and the checksum of the received UDP packets is not checked).

2. Building an IP packet. This is performed by inserting the UDP packet into the payload field of an IP packet, and adding the IP overhead data. The main elements of the IP packet overhead are:

- Source IP address. Each bundle uses the IP address of the network port of the packet processor.
- Destination IP address of the bundle.
- Fragmentation control fields. The TDMoIP service does not allow fragmentation of IP packets (the don't-fragment bit is set to ON).
- Type of Service (ToS) field, an option defined in RFC 791. When supported by an IP network, the type-of-service parameter is interpreted as a set of qualitative parameters for the precedence, delay, throughput and delivery reliability to be provided to the IP traffic generated by this bundle. These qualitative parameters may be used by each network that transfers the bundle IP traffic to select specific values for the actual service parameters of the network, to achieve the desired quality of service.
- Other parameters required by the IP protocol, including a checksum for header integrity control.

3. Building an Ethernet frame. This is performed by inserting the IP packet into the payload field of an Ethernet frame, and adding the MAC and LLC overhead needed to build an Ethernet frame for transmission to the destination MAC address (the MAC address needed to reach the desired IP destination is determined using the ARP protocol). The main elements of the Ethernet frame overhead are:

- Source MAC address. Each bundle uses the MAC address of an external Ethernet port of the module.
- Destination MAC address.
- VLAN tagging fields, options defined in IEEE 802.1Q. When VLAN tagging is enabled for the bundle, a VLAN ID and a priority value are assigned to the bundle (see the VLAN Tagging and Priority Assignment section below with respect to priority). The VLAN tags can be used to control the flow of the Ethernet frames by bridges supporting the IEEE 802.1Q and 802.1D traffic-class standards.

For a detailed description of the encapsulation process, refer to the corresponding section of Appendix E of the Megaplex-2100 Installation and Operation Manual.
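To see what this encapsulation means in terms of bandwidth and delay, the following Python sketch (an illustration added for the reader, not part of the ML-IP software) estimates the Ethernet frame size and packetization delay for a bundle. It uses the standard Ethernet, IPv4 and UDP header sizes and ignores any TDMoIP-specific control fields, so the figures are approximate:

    # Illustrative estimate (not the ML-IP implementation): frame size and
    # packetization delay for one bundle, given its "TDM Bytes in Frame" setting.
    ETH_HEADER = 14        # destination MAC + source MAC + EtherType
    ETH_FCS = 4            # frame check sequence
    VLAN_TAG = 4           # present only when VLAN tagging is enabled
    IP_HEADER = 20         # IPv4 header without options
    UDP_HEADER = 8

    def frame_size_bytes(tdm_bytes, vlan_tagged=False):
        """Approximate Ethernet frame size carrying one TDMoIP packet
        (TDMoIP-specific control fields are not counted here)."""
        size = ETH_HEADER + ETH_FCS + IP_HEADER + UDP_HEADER + tdm_bytes
        if vlan_tagged:
            size += VLAN_TAG
        return size

    def packetization_delay_ms(tdm_bytes, timeslots):
        """Time needed to collect tdm_bytes from a bundle of the given size;
        each timeslot supplies 8 bytes per millisecond (64 kbps)."""
        return tdm_bytes / (timeslots * 8.0)

    # Example: a single-timeslot voice bundle with the minimum slice of 48 bytes
    print(frame_size_bytes(48, vlan_tagged=True))   # ~98 bytes on the wire
    print(packetization_delay_ms(48, timeslots=1))  # 6.0 ms to fill each packet

The example shows the trade-off behind the TDM Bytes in Frame parameter: a one-timeslot bundle spends 6 ms filling a 48-byte slice, while the fixed per-packet overhead roughly doubles the bandwidth actually consumed on the Ethernet link. The product-specific selection guidelines are given in Chapter 3.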

Packet Buffers

The packet buffers are used in the receive path to provide temporary storage for the received bits. Each bundle has its own buffer. These buffers can be used to compensate for variations in the network transmission delay. Because of their function, these buffers are called jitter buffers.

To compensate for varying transmission delay, each buffer has two clock signals:

- Write clock, used to load packets into the buffer. Since each packet is loaded immediately after being successfully received from the network, packets are written into the buffer at irregular intervals.
- Read clock, used to transfer packets to the packet processor at a fixed rate.

The jitter buffer operates as follows:

- At the beginning of a session, the buffer is loaded with received frames until it is half full. No bits are read from the buffer at this time. Therefore, a delay is introduced in the data path.
- After the buffer reaches the half-full mark, the read-out process is started. The packets are read out at an essentially constant rate. As explained in the Timing Subsystem section below, during normal operation the read-out rate is equal to the average rate at which frames are received from the network. Therefore, the buffer occupancy remains near the half-full mark.
- The buffer stores the frames in accordance with their arrival order.

Processing of CAS Information

The user can enable CAS support for each internal TDM port of the ML-IP module. The port CAS support must be enabled whenever one or more of the channels carried by bundles routed to this internal TDM port require end-to-end transmission of channel-associated signaling. The processing of CAS information is controlled by specifying the desired profile (a profile is defined by means of the DEF PROFILE command).

Note: ML-IP supports the standard CAS protocol (LEGACY) only. The R2 CAS signaling type is not supported.

You can specify the following signaling format conversions:

- Conversion of signaling information received through a main link, in order to match the internal Megaplex signaling interpretation conventions.
- Conversion of the internal signaling information to the format used by the equipment connected to Megaplex links.

Each link can use different receive and transmit translation rules. These translation rules are defined by means of signaling profiles. A profile enables you to select the translation of each individual signaling bit. The available selections are A, B, C, D (value copied from the corresponding incoming bit), ~A, ~B, ~C, ~D (inverted value of the corresponding incoming bit), 0 (always 0), and 1 (always 1).
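As an illustration of how such a profile acts on the four CAS bits, the Python sketch below applies one translation rule per bit using the selections listed above. It is added only for clarity; the data structures and the example profile are hypothetical, not the module's actual implementation:

    # Illustrative sketch (hypothetical, not ML-IP firmware): applying a signaling
    # profile to an incoming ABCD nibble. Each output bit is produced by one of the
    # selections listed above: a copied bit, an inverted bit, or a constant 0/1.
    def translate_abcd(incoming, profile):
        """incoming: dict like {'A': 1, 'B': 0, 'C': 1, 'D': 1}
        profile:  dict mapping each output bit to 'A'..'D', '~A'..'~D', '0' or '1'."""
        out = {}
        for bit, rule in profile.items():
            if rule in ('0', '1'):
                out[bit] = int(rule)                # constant value
            elif rule.startswith('~'):
                out[bit] = 1 - incoming[rule[1]]    # inverted incoming bit
            else:
                out[bit] = incoming[rule]           # copied incoming bit
        return out

    # Example profile: pass A through, set B to the inverse of A, force C=1 and D=0
    example_profile = {'A': 'A', 'B': '~A', 'C': '1', 'D': '0'}
    print(translate_abcd({'A': 1, 'B': 0, 'C': 1, 'D': 1}, example_profile))
    # -> {'A': 1, 'B': 0, 'C': 1, 'D': 0}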

Note: In addition to the translation of individual bits, the receive path conversion section can also be used to define the signaling bit patterns that indicate the busy and idle states.

The ML-IP module independently processes the CAS signaling information of each bundle, and inserts it in the corresponding stream of packets. Therefore, for each bundle, the module automatically selects the appropriate packet structure in accordance with the following factors:

- The number of channels carried by the bundle
- CAS support on the internal TDM port
- The need for CAS support on each channel carried by the bundle (the bundle structure will change to that needed to support CAS transport even if only one of the channels is configured to use CAS)
- The type of equipment used at the bundle destination: another ML-IP module, or equipment from the IPmux family of TDMoIP gateways offered by RAD. This is because the default format of the packet structure transmitted by ML-IP modules is different from the format used by some of the IPmux versions. To achieve compatibility when operating in a link with an IPmux unit, you can specify the interface type (E1 or T1) of the remote equipment. When the destination IPmux has a T1 interface, you must also specify the IPmux framing mode: ESF or SF (D4). The framing mode also determines the processing mode of the optional echo canceller (see the Echo Canceller section above).

IP Traffic Handling Subsystem

Basic IP Traffic Handling Subsystem Functions

The ML-IP module includes an internal IP traffic handling subsystem. This subsystem performs the functions needed to handle the IP traffic generated by the module. In this respect, this subsystem operates as an IP host, whose address is the IP address of the packet processor network port configured by the user.

The internal IP traffic handling subsystem also performs two additional functions:

- Provides support for the ARP protocol: the internal IP subsystem can generate ARP requests to find MAC addresses, and will respond to ARP requests related to its ports.
- Answers ICMP pings to the module IP address, thereby enabling other IP hosts to check the IP connectivity to the ML-IP module.

The ML-IP internal IP traffic handling subsystem can also serve as a proxy server for the management IP router of the CL module.

The basic IP addressing information needed for routing the traffic of each bundle is the destination IP address. However, when the bundle destination address is not within the local IP subnet of the bundle source address (which is actually the IP address of the packet processor network port), an additional item is needed: the next-hop IP address. The next-hop address is used to direct the bundle traffic to an IP entity (usually a port of an IP router) that can find a route to the desired destination.
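This decision is the usual IP host forwarding rule. The short Python sketch below (illustrative only, with hypothetical addresses) shows how a bundle's source address, subnet mask and destination address determine whether the traffic is addressed directly to the destination or to the configured next hop:

    # Illustrative sketch of the IP host forwarding rule described above
    # (hypothetical addresses, not taken from any module configuration).
    import ipaddress

    def forwarding_target(source_ip, subnet_mask, destination_ip, next_hop_ip):
        """Return the address the Ethernet frame should be sent to (resolved via ARP)."""
        local_net = ipaddress.ip_network(f"{source_ip}/{subnet_mask}", strict=False)
        if ipaddress.ip_address(destination_ip) in local_net:
            return destination_ip      # destination is on the local subnet
        return next_hop_ip             # otherwise send to the next-hop router

    # Example: bundle source 192.168.1.10/24, destination in a remote subnet
    print(forwarding_target("192.168.1.10", "255.255.255.0",
                            "10.0.0.5", "192.168.1.1"))   # -> 192.168.1.1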

Handling Management Traffic for the CL Module

CL modules with an external Ethernet interface installed in a Megaplex chassis with an ML-IP module can be configured to connect to a remote management station through the external ports of the ML-IP module, provided that station can be reached through the same IP network used for payload transmission. This saves the costs associated with providing an independent connection between the CL module and the management station.

The CL management traffic is handled as follows:

- In the transmit-to-network direction, the CL module traffic directed to the IP network is transferred to the ML-IP module as IP packets, through an internal communication channel. The ML-IP module selects the next hop to which the CL management traffic is to be sent, using the information stored in the IP static routes table. The IP packets received from the CL module are then encapsulated in Ethernet frames, and sent to the network. If necessary, the ML-IP module will use ARP to find the destination MAC address for the Ethernet frames carrying the management traffic, and will also answer ARP requests directed to the CL management subsystem. Therefore, the ML-IP module serves as a proxy ARP for the CL management subsystem.
- In the receive-from-network direction, the ML-IP module transfers to the CL module all the packets of types not processed by the ML-IP module (for example, Telnet, SNMP, etc.). The IP router of the CL module then analyzes these packets, processes those with management contents and discards the other packets.

The user can define static (permanent) routes for the IP router of the CL module. Each of these routes includes the next-hop address and a metric, which defines the maximum number of hops an IP management packet can pass before being discarded.

By default, the IP router of the CL module exchanges routing information only in accordance with the RAD-proprietary management traffic routing protocol. This is sufficient for inband management among Megaplex units and other RAD equipment using the same protocol (see the Megaplex-2100 Installation and Operation Manual). To enable standard IP routers to handle the management traffic sent through the ML-IP module, you can configure the IP router of the CL module to broadcast its RIP2 routing tables. The IP router, however, will not learn any RIP2 routing information, but will use only its static routes.
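A static route table of this kind can be pictured as a small list of destination, mask, next-hop and metric entries that is consulted for each outgoing management packet. The Python sketch below is a simplified illustration only; the entry format mirrors the ADD ROUTE parameters listed in the Quick Start Guide, the example addresses are hypothetical, and the lookup strategy shown (most specific match first) is an assumption rather than the documented CL or ML-IP behavior:

    # Simplified illustration of a static route lookup (not the actual CL/ML-IP code).
    # Each entry mirrors the ADD ROUTE parameters: Dest IP Address, Dest IP Mask,
    # Next Hop and Metric (maximum number of hops before the packet is discarded).
    import ipaddress

    static_routes = [
        # (destination network, next hop, metric) - hypothetical example entries
        (ipaddress.ip_network("10.10.0.0/255.255.0.0"), "192.168.1.1", 5),
        (ipaddress.ip_network("0.0.0.0/0.0.0.0"), "192.168.1.254", 15),  # default route
    ]

    def next_hop_for(destination_ip):
        """Pick the most specific static route whose network contains the destination."""
        dest = ipaddress.ip_address(destination_ip)
        matches = [r for r in static_routes if dest in r[0]]
        if not matches:
            return None
        best = max(matches, key=lambda r: r[0].prefixlen)   # longest-prefix match
        return best[1], best[2]

    print(next_hop_for("10.10.3.7"))    # -> ('192.168.1.1', 5)
    print(next_hop_for("172.16.0.9"))   # -> ('192.168.1.254', 15)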

Avoiding Flooding by TDMoIP Traffic

Flooding may occur when the destination (the remote TDMoIP device, which may be another ML-IP module, an IPmux unit or another vendor's equipment) is either disconnected from the network, powered down, or the IP route to the destination is no longer available. In this case, the ML-IP module continues transmitting the TDMoIP traffic (which uses the UDP protocol and therefore cannot detect loss of connectivity). This has the following results:

- After a certain aging time, the destination is removed from the tables of Layer 2 switches in the network. In response, the switches flood the network with the TDMoIP traffic generated by the remaining operational TDMoIP device.
- Layer 3 switches send unreachable-destination messages whenever TDMoIP frames cause overloading of switches or routers.

The ML-IP module supports two mechanisms for avoiding flooding:

- Basic mechanism, which can be used even when the destination equipment does not support the OAM connectivity protocol. The basic mechanism uses periodic pinging of the destination address of each bundle to detect loss of communication: in case of loss of communication, the transmission of TDMoIP traffic to that destination is stopped, and renewed only after the pings are answered again. This basic mechanism requires the remote TDMoIP device to be able to answer pings: in the case of ML-IP modules operating over a Layer 3 network, this requires defining static routes using the ADD ROUTE command (see Chapter 3).
- Mechanism based on the RAD-proprietary OAM connectivity protocol, supported by ML-IP modules and IPmux units. This protocol enables detecting loss of communication with the remote TDMoIP device by periodically transmitting special TDMoIP messages, and taking steps that prevent the resulting flooding of this traffic. The protocol also enables checking that the remote TDMoIP device uses a compatible configuration. The protocol periodically transmits dedicated TDMoIP messages, which enable exchange of OAM information (for example, checking for compatible configuration parameters) as well as providing a periodic connectivity check without using pinging. In case loss of communication is detected, the traffic load transmitted to the destination is significantly decreased (one packet every few seconds per bundle). When valid protocol messages are received again, normal TDMoIP traffic flow is automatically resumed. OAM connectivity protocol messages use the UDP port number assigned to TDMoIP traffic, and have the same VLAN ID and ToS as the connection they are protecting, but are transmitted over a dedicated bundle (bundle 8191), which must not be used for other purposes.

You can select the desired flooding avoidance mode: basic mechanism or OAM connectivity protocol.
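In both mechanisms the behavior reduces to a small keepalive state machine per bundle. The Python sketch below (an illustration written for this description, not RAD code; the probe interval and loss threshold are hypothetical values) captures the basic ping-based variant: stop sending TDMoIP traffic while the destination stops answering, and resume automatically when it answers again.

    # Illustrative keepalive state machine for the basic flooding-avoidance
    # mechanism (not RAD code; the probe interval and loss threshold are
    # hypothetical values chosen only for the example).
    class BundleKeepalive:
        PROBE_INTERVAL_S = 10      # how often the destination is pinged
        LOSS_THRESHOLD = 3         # unanswered probes before traffic is stopped

        def __init__(self):
            self.missed = 0
            self.forwarding = True   # TDMoIP traffic currently transmitted

        def on_probe_result(self, answered):
            """Call once per probe with True if the ping was answered."""
            if answered:
                self.missed = 0
                if not self.forwarding:
                    self.forwarding = True    # destination is back: resume traffic
            else:
                self.missed += 1
                if self.missed >= self.LOSS_THRESHOLD:
                    self.forwarding = False   # destination lost: stop TDMoIP traffic

    ka = BundleKeepalive()
    for answered in (True, False, False, False, True):
        ka.on_probe_result(answered)
        print(ka.forwarding)   # True, True, True, False, True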

Ethernet Switch Subsystem

Operation of the Ethernet Switch

The flow of Ethernet traffic within the ML-IP module is controlled by means of a multi-port Ethernet switch. Each switch port can handle 10 Mbps or 100 Mbps traffic, independently of the other ports. The Ethernet switch provides the following main traffic routes:

- Between the packet processor and the two external Ethernet ports serving as uplink ports (NET1 and NET2). This permits the packet processor to send and receive frames through any of the uplink ports, in accordance with the user's configuration and the routing decisions made by the management subsystem. When VLAN tagging is used, you can specify, for each bundle, whether the frames will be routed to a specific external port, or let the external port be automatically selected in accordance with the paths learned by the switch.
- Between any pair of external Ethernet ports. This permits Ethernet traffic applied to any external port to appear at the other external ports. Since each port can operate at either 10 Mbps or 100 Mbps, the frame formats are automatically translated when ports operate in different modes.

VLAN Tagging and Priority Assignment

VLAN tagging enables assigning priorities to the various types of traffic.

Note: VLAN tagging cannot be used on the NET 1 and NET 2 ports when ring redundancy is enabled (see the Ring Redundancy section below).

The Ethernet switch supports two transmit queues for each port: one for high-priority traffic and another for low-priority traffic. The queues are used as follows:

1. The priority assigned to tagged traffic originating from the internal ports of the ML-IP module, and to tagged traffic from the NET 1 and NET 2 ports, depends on the priority field contained in the VLAN tag assigned to each frame: VLAN priorities in the range of 0 to 3 are defined as low priority; priorities in the range of 4 to 7 are defined as high priority.
2. The priority assigned to untagged traffic originating from the NET 1 and NET 2 ports is always defined as low priority.
3. The priority assigned to untagged traffic originating from the internal ports depends on the type of traffic: untagged management frames are defined as low priority; untagged bundle frames are defined as high priority.
4. The priority assigned to traffic from the USER port can be controlled by means of a dedicated parameter: high priority is handled the same as traffic whose VLAN tagging indicates a priority of 4 to 7; low priority is handled the same as priorities of 0 to 3.

Moreover, the default VLAN of the frames received through the USER port is identical to the management VLAN (this means that an untagged frame received through the USER port will be transmitted through the NET 1 port tagged with the management VLAN).
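These rules amount to a simple queue-selection function applied to every frame entering the switch. The Python sketch below restates the rules above as code; it is an illustration with hypothetical field names, not the switch firmware:

    # Illustrative queue selection per the rules above (hypothetical field names,
    # not the actual switch firmware).
    def transmit_queue(source, vlan_priority=None, traffic=None, user_port_priority="LOW"):
        """source: 'internal', 'net' (NET 1 / NET 2) or 'user'.
        vlan_priority: 0..7 for tagged frames, None for untagged frames.
        traffic: 'management' or 'bundle' for untagged internal-port frames."""
        if source == "user":
            return "high" if user_port_priority == "HIGH" else "low"
        if vlan_priority is not None:               # tagged frame (internal or NET port)
            return "high" if vlan_priority >= 4 else "low"
        if source == "net":                         # untagged frame from NET 1 / NET 2
            return "low"
        # untagged frame from an internal port
        return "high" if traffic == "bundle" else "low"

    print(transmit_queue("internal", traffic="bundle"))       # high
    print(transmit_queue("net", vlan_priority=2))             # low
    print(transmit_queue("user", user_port_priority="HIGH"))  # high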

34 Chapter 1 Introduction ML-IP Installation and Operation Manual received through the USER port will be transmitted through the NET 1 port tagged with the management VLAN. Ethernet Ports Timing Subsystem The ML-IP module has three independent Ethernet external ports, each having its own Ethernet transceiver. The port transceiver provides the interface between one of the Ethernet switch ports and the corresponding external port, and handles all the tasks needed to access the physical transmission media (for example, coding for transmission over the physical media, carrier sense and collision detection, etc.). The NET 1 and NET 2 ports can be ordered with copper (10/100BaseT) or optical (100Base-FX) transceivers. The USER port has always a copper (10/100BaseT) transceiver. The optical transceiver characteristics, and the connector types that can be ordered, are described in Table 1-3. Copper (10/100BaseT) transceivers are capable of operation at 10 Mbps or 100 Mbps, in either the half-duplex or full-duplex mode, whereas optical transceivers operate only at 100 Mbps. The operating mode of 10/100BaseT (copper) ports can be determined in two ways: By specifically configuring the mode (half-duplex or full-duplex) and the rate (10 or 100 Mbps) at which the port operates By enabling the auto-negotiation. In this case, the user must also specify the highest traffic handling capability to be advertised during the auto-negotiation process. The auto-negotiation process uses a standard protocol that permits intelligent 10/100BaseT Ethernet ports to automatically select the mode providing the highest possible traffic handling capability supported by the two ports at the end of a link. The operating mode selected as a result of auto-negotiation will not exceed the advertised capability. Therefore, when auto-negotiation is enabled, the ML-IP port automatically selects the appropriate operating mode as soon as it is connected to a LAN or to another Ethernet port. The port also negotiates the use of flow control. The ML-IP timing subsystem generates the various internal clock and timing signals required by the module. These signals are derived from the Megaplex nodal timing. In addition to providing internal signals, the timing subsystem of the ML-IP module can also serve as a source of timing signals for the Megaplex chassis, which is one of the functions that must be provided by any main link module in order to enable hierarchical dissemination of timing within a network. For this purpose, the ML-IP module includes a clock generator capable of generating main and/or fallback clock signals for the Megaplex chassis. The 1-24 Functional Description

35 ML-IP Installation and Operation Manual Chapter 1 Introduction Test Subsystem reference signal for the clock generator is provided by a special adaptive clock recovery mechanism. The adaptive clock recovery mechanism recovers the clock signal associated with the payload carried by a user-selected bundle. Actually, two bundles can be specified: One bundle for deriving the main clock signal A second bundle for deriving the fallback clock signal. The bundle clock signal is recovered using a mechanism that estimates the average rate of the payload data received in the frames arriving from the IP network. Assuming that the IP network does not lose data, the average rate at which payload arrives will be equal to the rate at which payload is transmitted by the source. As explained in the Packet Buffers section above, a buffer is used to store packets for an interval equal to the maximum expected delay variation. Therefore, this buffer can be used by the clock recovery mechanism. The method used to recover the payload clock of a bundle is based on monitoring the fill level of the bundle jitter buffer: the clock recovery mechanism monitors the buffer fill level, and generates a read-out clock signal with adjustable frequency. The frequency of this clock signal is adjusted so as to read frames out of the buffer at a rate that keeps the jitter buffer as near as possible to the half-full mark. This condition can be maintained only when the rate at which frames are loaded into the buffer is equal to the rate at which frames are removed. Therefore, the adaptive clock recovery mechanism actually recovers the original payload transmit clock. To further clean up the signal and obtain a stable signal at the internal chassis reference frequency, the recovered bundle payload clock signal is processed by the ML-IP clock generator. The output signal of the clock generator is a stable signal at the internal chassis reference frequency. The ML-IP module supports a wide range of diagnostic capabilities, similar to those of the other main link modules. These capabilities, described in the Diagnostics section below, are implemented by a dedicated internal test subsystem that together with the routing matrix can perform various loopbacks and route the signals transmitted and received by the test subsystem. The test subsystem is used to check the transmission performance and proper operation of the Megaplex system paths carrying the user s payload, without requiring any external test equipment. Accordingly, the test subsystem includes two main functions: BER test subsystem for evaluating data signal paths Tone injection subsystem for testing audio (voice) signal paths. Functional Description 1-25
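Before moving on, the adaptive clock recovery principle described above (steering the read-out clock so that the jitter buffer stays near its half-full mark) can be illustrated with a toy control loop. This is a conceptual sketch only; the gain constant, the nominal rate and the buffer arguments are assumptions for the example and do not describe the actual ML-IP implementation.

NOMINAL_RATE_HZ = 8000.0      # nominal TDM frame rate (8000 frames per second)
GAIN = 0.01                   # proportional gain of the toy control loop (assumed)

def adjust_readout_clock(buffer_fill, buffer_size, current_rate_hz):
    """Steer the read-out clock toward keeping the jitter buffer half full.

    buffer_fill  - current number of frames stored in the jitter buffer
    buffer_size  - total capacity of the jitter buffer, in frames
    """
    error = buffer_fill - buffer_size / 2.0      # positive: buffer is filling up
    # Read out slightly faster when the buffer is above the half-full mark and
    # slightly slower when below it, so that the long-term read rate converges
    # on the rate at which the far end transmits the payload.
    return current_rate_hz + GAIN * error * (NOMINAL_RATE_HZ / buffer_size)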

36 Chapter 1 Introduction ML-IP Installation and Operation Manual Note BER Test Subsystem The BER test subsystem comprises a test sequence generator and a test sequence evaluator. During the test, the payload data is replaced by a pseudo-random sequence generated by the test sequence generator. Many types of test sequences can be generated, enabling the user to select the one best suited for each specific test. The available selections are: QRSS test sequence per ITU-T Rec. O.151 Pseudo-random sequences per ITU-T Rec. O.151: , , Pseudo-random sequences per ITU-T Rec. O.153: bit long pseudo-random sequence per ITU-T Rec. O.152/3 511-bit long pseudo-random sequence per ITU-T Rec. O bit long pseudo-random sequence Repetitive patterns of one mark ( 1 ) followed by seven spaces ( 0 ) (1M-7S); one space followed by seven marks (1S-7M); alternating marks and spaces (ALT); continuous mark (MARK), or continuous space (SPACE). The transmitted data is returned to the test sequence evaluator by a loopback activated somewhere along the signal path. The evaluator synchronizes to the incoming sequence, and then compares the received data, bit by bit, to the original data sequence and detects any difference (bit error). When two Megaplex units are operated in a link, it is also possible to perform the test by activating the BER test subsystems at both ends of the link at the same time and configure both subsystems to use the same test sequence. In this case, it is not necessary to activate a loopback, because the BER test subsystem can process the sequence transmitted by the far end subsystem in the same way as its own sequence. The test results are displayed as a number in the range of 0 (no errors detected during the current measurement interval) through If the upper limit is reached, the counter stops accumulating errors and retains this maximum value until it is manually reset. Error counts are accumulated starting from the activation of the BER test, or from the last clearing (resetting) of the error counters. During normal operation, no errors should be detected. To provide meaningful results even under marginal transmission conditions, error counting is automatically interrupted while the test evaluator is not synchronized to the incoming test sequence, and also during periods in which the tested signal path is not available (for example, during loss of bundle signal and/or loss of frame synchronization). The number of seconds during which error counting is interrupted is reported along with the accumulated test running time. To check that the tested path is live, the user can inject errors at a desired (calibrated) rate in the test sequence. The available error injection rates are 10-1, 10-2, 10-3, 10-4, 10-5, 10-6 and 10-7 ; single errors can also be injected. These errors will be counted as regular errors by the test sequence evaluator, thereby increasing the user s confidence in the measured performance Functional Description
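As an illustration of the BER measurement principle (a known pseudo-random sequence is transmitted, looped back, and compared bit by bit), the sketch below generates a 511-bit pseudo-random sequence from a 9-stage shift register and counts mismatches against the received bits. It is a didactic model only, not the ML-IP test generator; the feedback taps shown (stages 9 and 5) are the ones commonly used for a 511-bit sequence.

def prbs9_bits(count, seed=0x1FF):
    """Yield `count` bits of a 511-bit pseudo-random sequence (9-stage register)."""
    state = seed
    for _ in range(count):
        new_bit = ((state >> 8) ^ (state >> 4)) & 1   # feedback from stages 9 and 5
        yield state & 1
        state = (state >> 1) | (new_bit << 8)

def count_bit_errors(transmitted, received):
    """Compare two equal-length bit sequences and return the number of mismatches."""
    return sum(1 for t, r in zip(transmitted, received) if t != r)

# Example: inject a single error into a looped-back copy of the sequence
tx = list(prbs9_bits(1000))
rx = tx.copy()
rx[500] ^= 1                     # simulate one bit error on the loop
print(count_bit_errors(tx, rx))  # -> 1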

37 ML-IP Installation and Operation Manual Chapter 1 Introduction The BER test subsystem input and output are routed by means of the ML-IP routing matrix. Therefore, the user can request testing in any individual timeslot or any bundle of timeslots. For timeslots with split assignment, it is also possible to select the bits on which the test is performed (with the same resolution as the split timeslot assignment, that is, pairs of consecutive bits). Moreover, the direction in which the test sequence is sent (local or remote) can also be selected. For convenience, the user can simultaneously activate a desired type of loopback together with the activation of the BER test. Another convenient function is automatic configuration of the same test on the redundancy partner, without requiring any manual intervention. At any time, only one BER test can be performed on each ML-IP module. Test Tone Injection Subsystem The test tone is a data sequence repeating at a rate of 1 khz. This data sequence is identical to the data sequence that would have been generated if a 1-kHz signal having a nominal level of 1 mw (0 dbm0) were applied to the input of the transmit path of an ITU-T Rec. G.711 voice channel codec. The receive path of a voice channel codec receiving the test tone sequence converts it to the corresponding analog signal. The resulting 1-kHz tone can be heard in the earpiece of a telephone set connected to the tested channel alternately, its level can be measured by a standard audio analyzer). The output signal of the test tone injection subsystem is also routed by means of the ML-IP routing matrix. Therefore, the user can select the timeslot in which the test tone is injected (only one timeslot at a time), and the direction in which the test tone is sent (local or remote). Local Management Subsystem The local management subsystem performs two main functions: Controls the operation of the various circuits located on the ML-IP module in accordance with the commands received from the CL module through the Megaplex management channel. Controls the routing of traffic through the external ports interfaces. Redundancy The ML-IP module supports redundancy at the bundle level, i.e., two bundles defined on the same module, or on different ML-IP modules, can operate as a redundant pair. To use this capability for traffic protection, the same timeslot (or group of timeslots) are grouped and connected to two different bundles simultaneously. If the active bundle stream fails, Megaplex will start using the other bundle stream. Moreover, it is also possible to provide redundancy between ML-IP modules and other TDM main link modules: this enables using ML-IP modules to backup the traffic carried by circuit-switched E1 and T1 links by means of the packet-switched IP network. Functional Description 1-27

38 Chapter 1 Introduction ML-IP Installation and Operation Manual
The ML-IP modules also support the RAD proprietary ring redundancy mode. The available redundancy options are as follows:
Path redundancy: the two bundles are on the same module, and the packets take different paths on the IP network. In the receive direction, the ML-IP converts only one of the packet streams into a TDM stream.
Link+Path redundancy: the two bundles are on the same module, but on different ports. The packets take different paths on the IP network. In the receive direction, the ML-IP converts only one of the packet streams into a TDM stream.
Module+Path redundancy: the two bundles reside on different modules in the same chassis, and may or may not take different paths on the IP network. In the receive direction, each module converts the packet stream into a TDM stream, and the CL is responsible for forwarding only one of those streams to the Megaplex backplane.
Chassis/Site+Module+Path redundancy: the two bundles reside on different modules installed in different chassis, possibly located at different sites. In the receive direction, if choosing one of the recovered TDM streams is required, it is performed by another module, depending on the specific application.
It is possible (and sometimes necessary) to combine two application types at the two ends of the IP link. Below are some possible combinations:
Module+Path redundancy at the central site vs. Path redundancy at the remote site
Chassis redundancy at the central site vs. Module+Path redundancy at the remote site (see Figure 1-11).
The available redundancy options (applications) are described below.
IP Connection (Path) Redundancy
Although IP networks are generally robust, congestion and faults can cause the IP network performance to vary widely, to the point that it no longer enables transmission of TDMoIP traffic. Figure 1-8 shows the equipment configuration used to overcome IP network transmission problems with minimal investment.
Figure 1-8. Use of IP Connection (Path) Redundancy
In the configuration shown in Figure 1-8, two bundles are defined as a redundancy pair on the same ML-IP module. The two bundles are routed through the same Ethernet uplink port to the network, and therefore share the same path up to the edge switch providing access to the IP network. Functional Description

39 ML-IP Installation and Operation Manual Chapter 1 Introduction
Since both bundles are routed to the same destination, they both have the same IP address, but are tagged with different VLAN ID numbers or have different next-hop addresses. Therefore, they follow different routes within the IP network up to the exit switch to which the other Megaplex is connected (in Figure 1-8, the two paths are represented by different line types). Obviously, for this redundancy approach to be effective, it is necessary to ensure that VLAN support is available within the whole IP network, or that the network includes multiple routers.
Under normal conditions, the ML-IP module receives both bundles, but connects only one of them to the receive path. In case a fault is detected, the ML-IP module automatically selects the other bundle. The decision to switch takes place within 50 msec, thereby enabling rapid restoration of service, even without disconnecting voice calls. Note however that the actual switching time cannot be shorter than the jitter buffer size; therefore, to permit fast (50 msec) redundancy switching, the jitter buffer size must be less than 50 msec.
The flip occurs if the system meets the following criteria:
The active bundle has failed
No alarms have been registered on the standby bundle
The recovery period since the last flip has elapsed.
These criteria are valid for the Path, Link+Path, and Module+Path redundancy types.
Physical Link and IP Connection (Link+Path) Redundancy
The basic protection conferred by the configuration shown in Figure 1-8 can be extended to include the whole transmission path between the two Megaplex units. This is achieved by connecting the two uplink ports of the same ML-IP module, via different links, to the IP network access point, as shown in Figure 1-9. As in the previous configuration, both bundles have the same IP address, but are tagged with different VLAN ID numbers or have different next-hop addresses. The operation and protection characteristics of this redundancy configuration are similar to those described above. Note that to prevent packet storming, this option requires that the network be based on switches supporting VLAN tagging that can block untagged frames.
Figure 1-9. Physical Link and IP Connection (Link+Path) Redundancy
Functional Description 1-29
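The three switching criteria listed above can be expressed as a simple decision function. The sketch below is illustrative only; the argument names and the recovery-period value are assumptions for the example, not actual ML-IP data structures.

import time

RECOVERY_PERIOD_SEC = 10.0   # assumed minimum interval between consecutive flips

def should_flip(active_failed, standby_alarms, last_flip_time):
    """Return True when a redundancy flip to the standby bundle is allowed:
    the active bundle has failed, no alarms are registered on the standby
    bundle, and the recovery period since the last flip has elapsed."""
    recovered = (time.time() - last_flip_time) >= RECOVERY_PERIOD_SEC
    return active_failed and not standby_alarms and recovered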

40 Chapter 1 Introduction ML-IP Installation and Operation Manual
Note To prevent the creation of loops, the external Ethernet switch to which the ML-IP Ethernet ports are connected must either support the spanning tree algorithm or use VLANs. If the switch does not support this algorithm, only one of the Ethernet ports can be connected to the external switch. This is sufficient for all the applications in which the additional ports are used for connecting user's equipment through other ML-IP modules (daisy-chaining configuration) and for connecting other user's equipment to the network through one of the ML-IP network Ethernet ports.
Module, Link and IP Connection (Module+Path) Redundancy
The configuration shown in Figure 1-9 can be extended to include protection for both the transmission paths and the ML-IP hardware. For this purpose, two ML-IP modules must be installed in the Megaplex units, in accordance with the configuration shown in Figure 1-10. When two independent IP networks can be accessed, better protection is achieved by connecting, at each end, each ML-IP module to a different network.
In the configuration shown in Figure 1-10, two different uplink ports, located on different ML-IP modules, are used. One of the bundles configured as a redundancy pair is defined on one ML-IP module, and the other bundle of the pair is defined on the other module. The two modules are assigned different IP addresses. The traffic of each bundle is independently routed and processed within each ML-IP module; however, at any time only one module applies the received payload to the TDM buses. If a problem that requires redundancy switching is detected, a report is sent to the CL module installed in the same Megaplex chassis. The CL module then instructs the failed module to disconnect its traffic from the TDM buses and the other module to connect to the buses. This process takes place within a few seconds.
Figure 1-10. Module, Link and IP Connection (Module+Path) Redundancy
Chassis (Site), Module, Link and IP Connection Redundancy
In these applications, the ML-IP modules must be installed in different chassis, possibly located at different sites. This type of redundancy is usually combined with another redundancy option. For example, in the configuration shown in Figure 1-11, a central site with chassis redundancy operates opposite remote sites with Module+Path redundancy. The chassis redundancy is configured by setting Redundancy = Yes and Redundancy Bundle = None. Functional Description

41 ML-IP Installation and Operation Manual Chapter 1 Introduction
In the configuration shown in Figure 1-11, two different ML-IP uplink ports, located on different chassis/sites, are used. One of the bundles configured as a redundancy pair is defined on one ML-IP module located at Site A, while the other bundle of the pair is defined on another ML-IP module located at Site B. In the case of 1+1 redundancy, no special behavior is required from the modules at sites A and B. In the case of 1:1 redundancy (see below), the active bundle is determined at Site C according to the redundancy application used at that site; the Site C module(s) transmit OAM messages to sites A and B, instructing the standby site not to transmit data.
Figure 1-11. Chassis (Site), Module, Link and IP Connection Redundancy
Redundancy Type
In each of the above applications, two types of redundancy are available:
1+1 Redundancy. When this redundancy type is enabled, both bundles transmit data packets all the time, offering potentially faster recovery at the expense of doubling the bandwidth. This provides a functionality similar to the parallel transmit redundancy used for TDM fractional E1 and T1 links.
1:1 Redundancy. When this redundancy type is enabled, only one of the bundles transmits and receives data packets. The other bundle transmits OAM packets to verify connectivity.
Ring Redundancy
Ring redundancy, implemented by means of the RAD-proprietary RFER (resilient Fast Ethernet ring) protocol, provides protection for the Ethernet transmission path, and is especially suited for MAN and dark fiber applications. When ring redundancy is enabled, the network topology is similar to that of a ring, and therefore data can propagate either clockwise or counterclockwise. Because of the Ethernet protocol characteristics, the ring cannot actually be closed: a pair of adjacent nodes on the ring keeps the ring open by disconnecting an arbitrary ring segment, thereby preventing frames from making a full round trip. Figure 1-12 shows a basic ring topology; the arrow shows the path followed by frames exchanged between ring nodes 1 and 4 during normal operation, assuming that the blocked segment is between nodes 1 and 4.
Functional Description 1-31

42 Chapter 1 Introduction ML-IP Installation and Operation Manual
Figure 1-12. Basic Ring Redundancy Topology - Data Flow during Normal Operation
If a segment, for example the segment between nodes 2 and 3, breaks (fails), the RFER protocol automatically moves the blocking nodes to the ends of the failed segment and reconnects the previously disconnected segment. The new path of the frames is shown in Figure 1-13. Therefore, full connectivity is restored for any single point of failure. For TDMoIP traffic, the RFER protocol ensures that this change takes effect within 50 msec; for other Ethernet traffic, it takes longer (approximately 20 seconds) because of the time needed by the switch to unlearn existing MAC addresses and learn new ones.
Figure 1-13. Basic Ring Redundancy Topology - Data Flow after Recovery from Segment Failure
1-32 Functional Description
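The ring behavior described above (one segment is kept blocked during normal operation, and the block moves to a failed segment so that the previously blocked segment can be reopened) can be illustrated with a toy model. The sketch below is not the RFER protocol; it only demonstrates that, for any single segment failure, every node remains reachable through the remaining segments.

def reachable(n_nodes, blocked, failed=None):
    """Return the set of nodes reachable from node 0 on a ring of n_nodes,
    where `blocked` and `failed` are segment indices (segment i joins node i
    and node (i + 1) % n_nodes) that cannot carry traffic."""
    unusable = {blocked, failed}
    seen, stack = {0}, [0]
    while stack:
        node = stack.pop()
        for seg, nxt in ((node, (node + 1) % n_nodes),
                         ((node - 1) % n_nodes, (node - 1) % n_nodes)):
            if seg not in unusable and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Normal operation: the segment between nodes 4 and 1 (index 3) is blocked.
print(reachable(4, blocked=3))             # {0, 1, 2, 3}
# The segment between nodes 2 and 3 fails: the block moves onto the failed
# segment and the previously blocked segment is reopened, so connectivity holds.
print(reachable(4, blocked=1, failed=1))   # {0, 1, 2, 3}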

43 ML-IP Installation and Operation Manual Chapter 1 Introduction The method used to achieve fast recovery is based on the use of VLAN tagging. This approach enables adjacent nodes on the ring to exchange protocol messages that check the connectivity, and broadcast ring open messages to all the nodes in case a fault is detected on a segment. Note however that this means VLAN tagging cannot be used for other traffic. The fast redundancy protection available to the TDMoIP traffic within the ring can be extended to other equipment: such equipment is connected to the USER port of the ML-IP modules, and therefore its traffic is not processed by the ML-IP module: it only passes to the network through the ML-IP NET ports. The extended protection can be provided to up to 32 IP addresses, which are specifically defined by the user. The protected addresses are destination addresses for traffic connected to the ML-IP module through the USER port: this may be traffic from another ML-IP module (when the chained topology is used see Figure 1-3), or from any other type of equipment using the ML-IP module to connect to remote sites. Redundancy between ML-IP and TDM Main Link Modules Redundancy between ML-IP modules and TDM main link modules can be provided by configuring different databases: one for transferring the traffic through TDM main link modules, and the other for using the ML-IP modules to transmit the same traffic through IP networks. In this case, appropriate flipping conditions can be specified to switch between the two databases. Therefore, when a fault condition that requires switching is detected, the CL module loads the alternative database and reconfigures the Megaplex unit for using the alternative transmission path. Other Megaplex units, after detecting the loss of traffic from the first Megaplex unit that flips to the backup database, also perform flipping, and therefore after a short interval the network traffic flow is automatically restored. This enables using ML-IP modules to backup the traffic carried by circuit-switched PDH E1 and T1 links by means of the packet-switched IP network. Diagnostics Indicators The ML-IP module panel includes indicators that display the state and operating mode of each port interface, and the execution of tests and loopbacks in the module (see Section 1.3). Performance Monitoring The ML-IP module enables the collection of performance data, which enables the network operator to monitor the transmission performance and thus the quality of service provided to users, as well as identify transmission problems. Functional Description 1-33

44 Chapter 1 Introduction ML-IP Installation and Operation Manual Two types of performance statistics are available: Bundle performance statistics, which can be used to monitor the quality of transmission through the IP network LAN interface performance statistics, which can be used to monitor the performance at the Ethernet level. Performance parameters for all the active entities are continuously collected during equipment operation. IP Connectivity Testing The ML-IP module supports testing of connectivity at the IP level by means of the ICMP ping function. For this purpose, the module answers ping transmissions to its IP address. Test and Loopback Functions The ML-IP module includes various diagnostic functions that can be controlled by the operator using the Megaplex management system. The diagnostic capabilities provided by the ML-IP module cover the following levels: Internal port level: Local BER test toward local side Remote BER test toward remote side. Timeslot bundle level: local loopbacks. Individual timeslot level: Local loopback Local BER test toward local side Local test tone injection toward local side Remote loopback Remote BER test toward remote side Remote test tone injection toward remote side. Bit level (applicable only for split timeslots): Local BER test toward local side Remote BER test toward remote side. 1.5 Technical Specifications Module Function TDMoIP main link module with internal IP traffic handling subsystem and Ethernet network interfaces Supports all the types of Megaplex I/O modules Number of External Ports Two uplink Ethernet ports (NET 1 and NET 2) 1 user Ethernet port (USER) Full port interconnectivity via internal Ethernet Layer 2 switch 1-34 Technical Specifications

45 ML-IP Installation and Operation Manual Chapter 1 Introduction Payload Capacity Total Uplink Payload from Internal Megaplex Modules LAN Interface Characteristics Uplink Payload from Other Ports Interface Type Standards Compliance User configurable, from minimum of 16 kbps (2 bits) to maximum of Mbps (62 timeslots) Each Ethernet port can carry payload from external sources connected to the other ports IEEE IEEE 802.3, 802.1D, 802.1Q Interface Operating Modes 10 or 100 Mbps, full-duplex or half-duplex Physical Interface Range NET 1, NET 2 USER UTP 100Base-FX LAN Interface Connector UTP 100Base-FX Selection by operator or by auto-negotiation 10BaseT/100BaseTX (UTP) or 100Base-FX, in accordance with order 10BaseT/100BaseTX (UTP) Up to 100m/330 ft using UTP Cat. 5 cable See Table pin RJ-45 per port See Table 1-3 Table 1-3. ML-IP Link Fiber-Optic Interface Characteristics Interface Type and Wavelength Transmitter Type Typical Power Coupled into Fiber Receiver Sensitivity Maximum Receiver Input Power Typical Maximum Range (km/miles) 850 nm, 62.5/125 μm multi-mode fiber, ST, FC connectors 1310 nm, 9/125 μm single-mode fiber, ST or FC/PC connectors VCSEL -9 to -3 dbm -32 dbm -3 dbm 2/1.2 Laser -15 to -8 dbm -34 dbm -3 dbm 25/12.4 Internal Ports Number of Internal Ports per ML-IP Module Total Number of Timeslots per Internal Ports Signaling Support Access to TDM Buses MAC and IP Addresses Two independently-configurable internal TDM ports Without CAS: 31 With CAS: 30 Signaling profile (legacy CAS) is independently selectable per internal port Access to all of the chassis TDM buses via fully-configurable routing matrix Same MAC and IP addresses to both internal ports Technical Specifications 1-35

46 Chapter 1 Introduction ML-IP Installation and Operation Manual Routing Matrix Capabilities Number of Ports Function Four ports to chassis TDM buses Two ports to internal TDM ports Non-blocking, full cross-connect between all the ports, including routing of CAS information Payload Handling Bandwidth allocated by user-configurable timeslot bundles Timeslot Bundle Characteristics IP Routing Support Each timeslot bundle independently routed to user-specified IP address ToS labeling independently configurable for each bundle Independently configurable VLAN tagging per bundle (VLAN number and priority tagging) in accordance with IEEE 802.1Q Support for automatic selection of uplink port in accordance with VLAN number Maximum Number of Timeslot Bundles per Megaplex Chassis Maximum Number of Timeslot Bundles per ML-IP Module Number of Timeslots per Bundle Bundle Redundancy IP Traffic Handling Capabilities IP Network Delay Variation Tolerance 120 Without CAS signaling support: 24 With CAS signaling support: 12 Without CAS signaling support: 1 to 31 With CAS signaling support: 1 to 30 Split timeslot support (2, 4 or 6 bits) 1+1 (parallel transmit, independent receive) or 1:1 (one active bundle, one standby bundle) Redundancy partner on same module, or on different module 50 msec redundancy switching between bundles on same module for 1+1 redundancy (only when jitter buffer size is less than 50 msec) IP routing services per bundle ARP requests and responses Support for static entries in IP routing table Limited routing services for CL management traffic (proxy server) Jitter buffer per bundle, user-selectable size in the range of 3 to 300 msec in 1-msec steps IP Type-of-Service Support ToS in accordance with RFC 791, user-selectable per bundle Timing Modes Bundle Timing Locked to Megaplex nodal timing 1-36 Technical Specifications

47 ML-IP Installation and Operation Manual Chapter 1 Introduction Echo Canceller (optional) System Timing Modes Available with ML-IP Module Installed Internal clock mode Adaptive clock mode, locked to the average clock rate of any ML-IP bundle Receive clock mode (provided by other I/O or main link module) External (station) clock mode (provided by other main link module) Voice Channels Supported Up to 30 (all timeslots must be from one internal port) Echo Path Length Echo Return Loss Enhancement (ERLE) 4 msec for each channel >30 db Management Support Diagnostics ML-IP internal IP traffic handling subsystem provides full support for transport of CL module traffic through IP network Connection to IP network via ML-IP module can replace local connection via Ethernet port of CL module LAN Performance Statistics Receive Direction Transmit Direction According to RFC 2665: Total number of correct frames received Total number of unicast frames received Total number of broadcast frames received Total number of FCS errors detected Total number of discarded frames Total number of correct octets received Total number of multicast frames Total number of receive errors Total number of correct frames transmitted Total number of unicast frames transmitted Total number of broadcast frames transmitted Total number of frames transmitted after a single collision Total number of correct octets transmitted Total number of multicast frames transmitted Total number of frames suffering from late collisions Technical Specifications 1-37

48 Chapter 1 Introduction ML-IP Installation and Operation Manual Bundle Performance Statistics IP Connectivity Testing Internal Port Tests Bundle Tests Timeslot Tests and Loopbacks Split Timeslot Tests Transmitted frames Correct frames received Frames receive sequence errors Jitter buffer overflows and underflows Support for ping in accordance with ICMP Local BER test toward local side Remote BER test toward remote side Local loopback Local loopback Local BER test toward local side Local test tone injection toward local side Remote loopback Remote BER test toward remote side Remote test tone injection toward remote side Local BER test toward local side Remote BER test toward remote side Indicators Per Module TEST (yellow) - test being run on module (performed on any bundle or internal port) Configuration Per Port LINK (green): Lights when port is connected to an active Ethernet port or LAN. FDX (green): Lights when the port operates in the full-duplex mode 100M (green): Lights when the port operates at 100 Mbps Programmable via Megaplex management system 1-38 Technical Specifications
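A monitoring script that polls the bundle performance counters listed above could, for example, derive a simple error figure from them. The sketch below is purely illustrative; the counter names follow the list above, but the calculation and the way the counters are obtained are assumptions, not part of the Megaplex management system.

def bundle_error_ratio(correct_received, sequence_errors):
    """Fraction of received frames flagged with sequence errors, based on the
    'Correct frames received' and 'Frames receive sequence errors' counters."""
    total = correct_received + sequence_errors
    return sequence_errors / total if total else 0.0

# Example: 1,000,000 frames received correctly, 25 sequence errors
print("%.6f" % bundle_error_ratio(correct_received=1_000_000, sequence_errors=25))
# -> 0.000025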

49 Chapter 2 Module Installation and Operation This Chapter provides installation and operation instructions for ML-IP modules. The information presented in this Chapter supplements the general Megaplex installation and operation instructions contained in the Megaplex-2100 Installation and Operation Manual. 2.1 Safety Warning Before performing any internal settings, adjustment, maintenance, or repairs, first disconnect all the cables from the module, and then remove the module from the Megaplex chassis. No internal settings, adjustment, maintenance, and repairs may be performed by either the operator or the user; such activities may be performed only by a skilled technician who is aware of the hazards involved. Always observe standard safety precautions during installation, operation, and maintenance of this product. Note The ML-IP modules contain components sensitive to electrostatic discharge (ESD). To prevent ESD damage, always hold the module by its sides, and do not touch the module components or connectors. Laser Safety Classification ML-IP modules equipped with laser devices comply with laser product performance standards set by government agencies for Class 1 laser products. The modules do not emit hazardous light, and the beam is totally enclosed during all operating modes of customer operation and maintenance. ML-IP modules are shipped with protective covers installed on all the optical connectors. Do not remove these covers until you are ready to connect optical cables to the ML-IP connectors. Keep the covers for reuse, to reinstall the cover over the optical connector as soon as the optical cable is disconnected. Safety 2-1

50 Chapter 2 Module Installation and Operation ML-IP Installation and Operation Manual Laser Safety Statutory Warning and Operating Precautions All the personnel involved in equipment installation, operation and maintenance must be aware that the laser radiation is invisible. Therefore, although protective device generally prevent direct exposure to the beam, the personnel must strictly observe the applicable safety precautions and in particular must avoid staring into optical connectors, neither directly nor using optical instruments. In addition to the general precautions described in this section, be sure to observe the following warnings when operating a product equipped with a laser device. Failure to observe these warnings could result in fire, bodily injury, and damage to the equipment. Warning To reduce the risk of exposure to hazardous radiation: Do not try to open the module enclosure. There are no user-serviceable components inside. Do not operate controls, make adjustments, or perform procedures to the laser device other than those specified herein. Allow only authorized RAD service technicians to repair the unit. 2.2 Installing the ML-IP Module Note The ML-IP modules do not include any internal user settings, and all their functions are programmable. Therefore, no preparations are required before installation. For ML-IP modules with optical interfaces, make sure that protective covers are installed on all the optical connectors. Do not remove these covers until you are ready to connect cables to the module, and keep the covers ready for reuse. To install the module: 1. Refer to the system installation plan, and insert the module in the assigned I/O slot of the Megaplex chassis. 2. The module starts operating as soon as it is plugged into an operating chassis, and performs power-up self-test, during which all the indicators are turned on for test purposes, and then start indicating the module status. At this stage, you may ignore any alarm indications. Upon power-up, the CL module checks the application software stored in the ML-IP modules installed in the chassis, and its validity. If the software version stored in the CL module is more recent than that stored in the ML-IP module, or the ML-IP software version is corrupted, the CL module will automatically download the application software to the ML-IP module. This may take a short time. 2-2 Installing the ML-IP Module

51 ML-IP Installation and Operation Manual Chapter 2 Module Installation and Operation 2.3 Connecting the Cables ML-IP Module with Electrical Interfaces Connector Data The ML-IP module with electrical interfaces has three RJ-45 connectors, identified as NET1, NET2 and USER. Each connector is wired in accordance with Table 2-1. Table 2-1. ML-IP Connectors, Pin Functions Pin Function 1 Transmit output + 2 Transmit output 3 Receive input + 4 Not connected 5 Not connected 6 Receive input 7 Not connected 8 Not connected Connection Instructions for Electrical Interface Cables Refer to the site installation plan and identify the cable intended for connection to each module connector. Make sure you use the proper cable type for each type of connection (with or without crossing of the receive and transmit pairs), in accordance with the wiring requirements of your system. The ML-IP ports have station interfaces, therefore the following cabling rules apply: Use straight cables (cables wired point-to-point) to connect an ML-IP Ethernet interface to an Ethernet hub or switch port, or to a router port. Use crossed cables (with crossing of the receive and transmit pairs) to connect an ML-IP Ethernet interface to that of another ML-IP module or IPmux, or to other equipment having station ports, for example, PC hosts. Therefore, when interconnecting several ML-IP modules in a daisy-chain configuration, use crossed cables to interconnect between the Ethernet interfaces of the modules, and use straight cable to connect the last module to the switch or router. After checking that you have the appropriate types of cable for each ML-IP connection, connect the cable to the corresponding module connector. ML-IP Modules with Optical Interfaces General Handling Instructions for Optical Cables When connecting optical cables, make sure to prevent cable twisting and avoid sharp bends (unless otherwise specified by the optical cable manufacturer, the Connecting the Cables 2-3

52 Chapter 2 Module Installation and Operation ML-IP Installation and Operation Manual minimum fiber bending radius is 35 mm). Always leave some slack, to prevent stress. Optical fibers intended for connection to ML-IP modules installed in a rack-mounted Megaplex should pass through fiber spoolers, located at the top or bottom of the rack, in accordance with the site routing arrangements (overhead or under-the-floor routing). The spoolers must contain enough fiber for routing within the rack up to the ML-IP optical connectors, and for fiber replacement in case of damage (splicing repairs). Caution Make sure all the optical connectors are closed at all times by the appropriate protective caps, or by the mating cable connector. Do not remove the protective cap until an optical fiber is connected to the corresponding connector, and immediately install a protective cap after a cable is disconnected. Before installing optical cables, it is recommended to clean thoroughly their connectors using an approved cleaning kit. Connection Instructions for Optical Cables For each optical interface, identify the prescribed cables intended for connection to this module, in accordance with the site installation plan: Connect the transmit fiber (connected to the receive input of the remote equipment) to the corresponding TX connector. Connect the receive fiber (connected to the transmit output of the remote equipment) to the RX connector of the same interface. 2.4 Normal Indications Note After the power-up self-test, the TEST indicator of the ML-IP module must be off, and the other indicators display the state of each Ethernet interface: The FDX indicator lights when the corresponding ML-IP interface operates in the full-duplex mode The 100M indicator lights when the corresponding ML-IP interface operates at the 100 Mbps rate. When auto-negotiation is used, the operating mode (half-duplex or full-duplex) and the LAN rate depends on the capabilities of the other nodes attached to the same LAN, and therefore may change in accordance with the LAN equipment LAN. The LINK indicator must light when the corresponding ML-IP interface is connected to an active Ethernet port or LAN (that is, to a LAN on which at least one station is active). When a test or loopback is activated on the ML-IP module, the TEST indicator will light for the duration of the test. You can see details on the test activity being performed on the ML-IP module by means of a Megaplex management station. 2-4 Normal Indications
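The per-port indications listed above can be summarized in a small lookup, shown below purely as an illustration (it simply restates the meaning of the LINK, FDX and 100M indicators; it is not a management interface).

def describe_port(link_on, fdx_on, m100_on):
    """Translate the per-port LED states into a short status string."""
    if not link_on:
        return "not connected to an active Ethernet port or LAN"
    duplex = "full-duplex" if fdx_on else "half-duplex"
    rate = "100 Mbps" if m100_on else "10 Mbps"
    return "link up, %s at %s" % (duplex, rate)

print(describe_port(link_on=True, fdx_on=True, m100_on=False))
# -> link up, full-duplex at 10 Mbps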

53 Chapter 3 Configuration Instructions 3.1 Introduction This Chapter provides specific configuration information for ML-IP modules, and guidelines for the selection of critical parameters. The configuration activities are performed by means of the management system used to control the Megaplex unit. The instructions appearing in this Chapter assume that you are familiar with the management system being used: Supervision terminal or Telnet (covered by the Megaplex-2100 Installation and Operation Manual). Network management system, e.g., the RADview network management system (refer to the RADview User's Manual for instructions). This Chapter covers only the configuration activities specific to ML-IP modules: for general instructions and additional configuration procedures, refer to Chapter 5 and Appendix F of the Megaplex-2100 Installation and Operation Manual. 3.2 ML-IP Configuration Sequence The configuration sequence for a new ML-IP module includes the following main steps: 1. Include an ML-IP module not yet installed in the Megaplex into the database. This allows preprogramming the module parameters, so that when the module is installed in the enclosure it will immediately start operating in the desired mode. This is performed by means of the DEF SYS command. 2. Configure the general ML-IP module parameters use the DEF CH command. 3. Configure the ML-IP external (LAN) port parameters use the DEF CH command. 4. Configure the ML-IP internal TDM port parameters use the DEF CH command. Note Before configuring the ML-IP internal TDM port parameters, it may be necessary to define the required signaling profiles, using the DEF PROFILE command. 5. Define the timeslot bundles carried by the ML-IP module use the ADD BND command. ML-IP Configuration Sequence 3-1

54 Chapter 3 Configuration Instructions ML-IP Installation and Operation Manual
6. Assign internal TDM port timeslots to bundles use the DEF TS and DEF SPLIT TS commands.
7. When necessary: modify the system timing reference use the DEF SYS command.
8. When necessary: define new static routes for the ML-IP static IP routing table use the ADD ROUTE command.
9. When necessary: connect the Ethernet management port of the CL module to the IP network through the ML-IP module use the DEF SP CON2 command.
3.3 Configuring the General ML-IP Module Parameters
The general parameters of the ML-IP module are used to control the protection options supported by the module: support for the ring topology, and the associated option of fast redundancy protection for user-selected destinations. To configure the general parameters of the desired ML-IP module, type:
DEF CH SS<Enter>
where SS is the module slot number. Table 3-1 lists the general module parameters.
Table 3-1. General Module Parameters
Ring Mode
Function: Enables the use of the RAD-proprietary RFER protocol, which supports fast redundancy switching. This option can be used only in a ring network topology.
Values: DISABLE - Ring redundancy is disabled. Always use this option when the network includes non-RAD equipment.
ENABLE - Ring redundancy is enabled.
Default: DISABLE
Protected IPs
Function: Controls the extension of the fast redundancy protection to equipment connected to the USER port of the ML-IP module as well. This parameter is relevant only when Ring Mode is ENABLE; when Ring Mode is DISABLE, NA is displayed and cannot be changed.
Values: NA - Displayed when ring redundancy is disabled. This selection cannot be changed.
DISABLE - Fast redundancy protection for traffic received through the USER port is disabled.
ENABLE - Fast redundancy protection for traffic received through the USER port is enabled. Define the desired destination addresses to be protected in the Protected IP Addresses list.
Default: DISABLE
3-2 Configuring the General ML-IP Module Parameters

55 ML-IP Installation and Operation Manual Chapter 3 Configuration Instructions Table 3-1. General Module Parameters (Cont.) Parameter Function Values Protected IP Addresses Used to specify up to 32 additional IP destination addresses (used by equipment connected to the USER port of the ML-IP module) for which fast redundancy protection is made available. You need not enter here IP addresses that are already used as destinations for TDMoIP traffic generated by the ML-IP module: such addresses are protected even for traffic received through the USER port of the module. This option is relevant only when the Protected IP s is ENABLE Type in each of the 32 fields the desired IP address, using the dotted-quad format (four groups of digits in the range of 0 through 255, separated by periods). Default: Configuring the External Ports The ML-IP module has three external ports. For management purposes, the ports are identified as follows: NET1 identified as 1 or EX1 NET2 identified as 2 or EX2 USER identified as 3 or EX3. Each port can be independently configured, in accordance with the characteristics of the LAN to which it is connected. To configure the parameters of the desired external port, type: DEF CH SS CC<Enter> where SS is the module slot number, and CC is the desired external port number. Table 3-2 lists the external port parameters. Table 3-2. External Port Parameters Parameter Function Values Auto- Negotiation Controls the use of auto-negotiation for the corresponding external port. Auto-negotiation is used to select automatically the mode providing the highest possible traffic handling capability YES Auto-negotiation is enabled. NO Auto-negotiation is disabled. Default: YES Configuring the External Ports 3-3

56 Chapter 3 Configuration Instructions ML-IP Installation and Operation Manual Table 3-2. External Port Parameters (Cont.) Parameter Function Values Max. Capability Advertised LAN Rate Specifies the highest traffic handling capability to be advertised during the auto-negotiation process. The operating mode selected as a result of auto-negotiation cannot exceed the advertised capability. This parameter is relevant only when auto-negotiation is enabled, therefore its value is N/A when NO is selected for Auto-Negotiation Selects a specific port operating mode and data rate. These can be selected only when auto-negotiation is disabled, therefore the parameter value is N/A when YES is selected for Auto- Negotiation The available selections are listed in ascending order of capabilities: 10Mbps HD 10Mbps FD Half-duplex operation at 10 Mbps. Full-duplex operation at 10 Mbps. 100Mbps HD Half-duplex operation at 100 Mbps. 100Mbps FD Default: 100Mbps FD 10Mbps HD 10Mbps FD Full-duplex operation at 100 Mbps. Half-duplex operation at 10 Mbps. Full-duplex operation at 10 Mbps. 100Mbps HD Half-duplex operation at 100 Mbps. 100Mbps FD Default: 100Mbps FD Full-duplex operation at 100 Mbps. LAN Type Display only ETHERNET II 10/100BaseT interface complying with Ethernet 2.0 and IEEE Mng VLAN Tagging For NET 1 and NET2 ports (EX1 & EX2) only Controls the use of VLAN tagging for the Megaplex management traffic carried through this port. This parameter controls the tagging of management traffic, which may be necessary when the management traffic of the CL module is routed through the ML-IP module. In addition, this parameter also controls tagging of untagged frames received through the USER port. YES NO Default: NO Management VLAN tagging is enabled. In this case, the management VLAN specified in the Mng Vlan ID field is also used as the default VLAN for untagged frames received through the USER port: such frames will be sent through the NET 1 port tagged with the management VLAN. Management VLAN tagging is disabled. Use this selection when ring redundancy is enabled. 3-4 Configuring the External Ports

57 ML-IP Installation and Operation Manual Chapter 3 Configuration Instructions Table 3-2. External Port Parameters (Cont.) Parameter Function Values Traffic Priority For USER port (EX3) only Mng VLAN ID For NET 1 and NET2 ports (EX1 & EX2) only Mng VLAN Priority For NET 1 and NET2 ports (EX1 & EX2) only Selects the processing priority for untagged frames received through the USER port. For the NET 1 and NET 2 ports, this parameter is always N/A (not applicable) When management VLAN tagging is enabled, specifies the VLAN ID number used by the management traffic sent through this port. When management VLAN tagging is disabled, this parameter is always N/A (not applicable) When management VLAN tagging is enabled, specifies the priority assigned to the management VLAN traffic sent through this port. When management VLAN tagging is disabled, this parameter is always N/A (not applicable) LOW Low priority, suitable for general LAN traffic (equivalent to priorities of 0 to 3). HIGH High priority, suitable for TDMoIP traffic (equivalent to priorities of 4 to 7). Default: LOW The allowed range is 1 to Default: 1 The allowed range is 7 (highest priority) to 0 (lowest priority). Default: Configuring the Internal TDM Ports The ML-IP module has two internal TDM ports, identified as IN1 and IN2. Each internal TDM port can be independently configured in accordance with the system requirements, except that both ports must be assigned the same IP address. To define the parameters of the desired internal TDM port, type: DEF CH SS CC where SS is the module slot number, and CC is the desired internal TDM port number (IN1 or IN2). Table 3-3 lists the internal TDM port parameters. Configuring the Internal TDM Ports 3-5

58 Chapter 3 Configuration Instructions ML-IP Installation and Operation Manual Table 3-3. Internal TDM Port Parameters Parameter Function Values Connect Signaling Sig. Profile IP Address Determines whether this port is connected to the internal TDM buses. Note that bundles can be defined on this internal port only after it is connected Determines the signaling mode for this internal TDM port Selects a signaling profile for this port. The selected profile specifies the interpretation of the signaling information transmitted and received within the bundles defined on this port. The number of the profile appears in brackets, after the profile name. The signaling profiles and their names are defined by means of the DEF PROFILE command Specifies the IP address of the internal TDM port: This address is the source IP address of the TDMoIP packets sent by this port. The internal port processes only TDMoIP packets carrying this address in their destination address field. Both internal TDM ports of the ML-IP module are assigned the same IP address YES NO Default: NO YES NO Default: YES The internal TDM port is connected and you can configure timeslot bundles on this port. The internal TDM port is disconnected. You can still program the desired parameters, so the port will be ready to operate when needed. Support for CAS signaling enabled. This is needed for supporting voice channels. Support for CAS signaling disabled. The available range is 1 through 5. Default: 1 Type in the desired IP address in the dotted-quad format (four groups of digits in the range of 0 through 255, separated by periods). Default: Configuring the Internal TDM Ports

59 ML-IP Installation and Operation Manual Chapter 3 Configuration Instructions Table 3-3. Internal TDM Port Parameters (Cont.) Parameter Function Values Subnet Mask Routing Protocol OOS Sig Echo Canceller Specifies the subnet mask associated with the IP address of the internal TDM port. This mask is used together with the IP address to determine the IP hosts that can be directly reached. Both internal TDM ports of the ML-IP module are assigned the same subnet mask Controls the transmission of RIP2 management traffic routing tables through this port. The transmission of these tables enables using the RIP2 routing protocol for management traffic carried through this port Determines the state of the signaling information transmitted by this port toward the Megaplex TDM buses during out-of-service periods. An out-of-service state is declared only when all the bundles carried by this port have lost synchronization. This parameter is relevant only when CAS signaling support is enabled (Signaling is YES) Controls the use of the optional echo canceller (ordering option for ML-IP modules). The echo canceller is installed only on port IN2. If the module does not include an echo canceller the Echo Canceller is always N/A (N/A is the only option for port IN1). Type in the appropriate subnet mask in the dotted-quad format. Make sure to define a subnet mask whose binary representation includes consecutive 1 up to the start of the unmasked bit positions. Default: NONE Routing is not supported. PROPRIETARY Support for RAD proprietary routing. RIP RIP2 Default: NONE Support for both RAD proprietary routing and RIP2 routing. FORCED BUSY The signaling information is forced to the busy state during out-of-service periods. FORCED IDLE BUSY IDLE IDLE BUSY Default: FORCED IDLE YES NO Default: NO The signaling information is forced to the idle state during out-of-service periods. The signaling information is forced to the busy state for 2.5 seconds, then switches to the idle state until the out-of-service condition disappears. The signaling information is forced to the idle state for 2.5 seconds, then switches to the busy state until the out-of-service condition disappears. The echo canceller is enabled. This selection is available only for port IN2. The echo canceller is disabled. Configuring the Internal TDM Ports 3-7
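The requirement quoted in Table 3-3, that the subnet mask must consist of consecutive 1 bits followed only by 0 bits, can be checked mechanically. The following sketch illustrates that rule only; it is not part of the Megaplex management software.

def contiguous_mask(mask):
    """True when a dotted-quad mask is a run of 1 bits followed only by 0 bits,
    for example 255.255.255.0 (valid) versus 255.0.255.0 (invalid)."""
    octets = [int(o) for o in mask.split(".")]
    if len(octets) != 4 or not all(0 <= o <= 255 for o in octets):
        return False
    value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    bits = format(value, "032b")
    return "01" not in bits      # no 0 bit may be followed by a 1 bit

print(contiguous_mask("255.255.255.0"))   # True
print(contiguous_mask("255.0.255.0"))     # False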

60 Chapter 3 Configuration Instructions ML-IP Installation and Operation Manual 3.6 Configuring Bundles The ML-IP module supports the definition of timeslot bundles on its internal TDM ports. The total number of bundles that can be defined on an ML-IP module depends on the internal port configuration: Without support for CAS signaling: up to 24 bundles per ML-IP module With support for CAS signaling: up to 12 bundles per ML-IP module. Bundles are identified by a number in the range of 1 to 120. The bundle number must be unique within a Megaplex chassis, irrespective of the number of ML-IP modules installed in the chassis. Each bundle can be independently configured in accordance with the system requirements. After configuring the bundle parameters, it is necessary to define its utilization, using the DEF TS or DEF SPLIT TS command. Selection Guidelines for TDM Payload Bytes per Frame Background Information The TDMoIP technology enables transmitting the continuous data stream generated by TDM equipment as a stream of discrete packets, having a structure suitable for transmission over packet-switched networks. This process is called packetizing. A simplified diagram of this process, which identifies the main steps of the process, is shown in Figure 3-1. The packetizing process comprises the following main steps: 1. Splitting the continuous TDM data stream into discrete slices of appropriate size. The slice size is always an integer number of bytes. For example, in Figure 3-1, the number of TDM bytes per slice, K, is Adding the overhead necessary to transmit each slice over the packet network and enable reaching the desired destination. Basically, this process includes the following steps: 1. Inserting the TDM bytes into the payload field of a UDP packet, and adding the overhead data needed to build a UDP packet. 2. Inserting the UDP packet into the payload field of an IP packet, and adding the overhead data needed to build an IP packet for transmission to the desired IP destination. 3. Inserting the IP packet into the payload field of an Ethernet frame, and adding the MAC overhead needed to build an Ethernet frame for transmission to the destination MAC address (the MAC address needed to reach the desired IP destination is determined using the ARP protocol). For example, in Figure 3-1, the resultant overhead comprises a total of 54 bytes. Note The actual overhead depends on several factors, one of them being the encoding method used to transmit CAS information. Figure 3-1 also ignores the minimum interpacket gap, which further increases the overhead. For additional information on this process, refer to the corresponding section of Appendix E of the Megaplex-2100 Installation and Operation Manual. 3-8 Configuring Bundles
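As a rough numeric illustration of the packetizing steps above, the sketch below adds up typical per-layer header sizes for one TDMoIP frame. The individual header sizes shown (Ethernet with FCS, IPv4, UDP) are standard protocol values; the exact ML-IP overhead also depends on items such as the TDMoIP control word, VLAN tagging and the CAS encoding method, so treat the result as an approximation rather than the module's exact figure.

ETHERNET_HEADER_FCS = 14 + 4       # MAC header plus frame check sequence, bytes
IP_HEADER = 20                     # IPv4 header without options, bytes
UDP_HEADER = 8                     # UDP header, bytes

def frame_size(tdm_payload_bytes, vlan_tagged=False):
    """Approximate size of one Ethernet frame carrying tdm_payload_bytes of TDM data."""
    overhead = ETHERNET_HEADER_FCS + IP_HEADER + UDP_HEADER
    if vlan_tagged:
        overhead += 4              # 802.1Q tag
    return overhead + tdm_payload_bytes

print(frame_size(48))              # 48 TDM bytes plus 46 bytes of headers -> 94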

[Figure 3-1. Building an Ethernet Frame with TDMoIP Payload]

The receiving end performs the steps necessary to reconstruct the original TDM data stream from the received frames, in the best possible way, where best means meeting a set of criteria that describe a compromise among several conflicting requirements. A simplified diagram of this process, which identifies its main steps, is shown in Figure 3-2.

[Figure 3-2. Retrieving the Payload from an Ethernet Frame with TDMoIP Payload]

The number of TDM bytes inserted in each Ethernet frame sent to the network, which is actually the size of the UDP packet payload field, affects two important performance parameters:
Bandwidth utilization
Packetizing delay and the associated delay variance.

Bandwidth Utilization Considerations

The bandwidth utilization efficiency depends on the overhead that must be transmitted to the LAN in order to support the transmission of a certain amount of payload. To enable the transmission of as many TDM channels as possible through a LAN operating at a given rate (10 Mbps or 100 Mbps), it is necessary to estimate the bandwidth that must be reserved for overhead.

A simplified approach that enables obtaining estimates of LAN bandwidth requirements for planning the link capacities at the system level is presented below. This approach provides the information needed to understand the parameters that influence the amount of overhead, and to obtain rough estimates (usually accurate within ±10%) of the required bandwidth. You can also use this approach to estimate the processing capacity needed at Layer 2 switches and routers that handle the TDMoIP traffic. To obtain more accurate results, it is necessary to take into consideration additional factors, for example, the detailed structure of the bundle packets, which depends on parameters such as signaling support. If very accurate estimates of bandwidth are required, contact the RAD Technical Support Department for additional information.

To find the frame overhead: As explained in Appendix E of the Megaplex-2100 Installation and Operation Manual, each Ethernet frame carrying TDMoIP traffic carries a fixed amount of overhead. Therefore, the maximum bandwidth that must be reserved for overhead transmission depends only on the maximum rate of Ethernet frames that can be transmitted to the LAN. The amount of overhead is as follows:
Without VLAN tagging, each Ethernet frame includes 46 or 66 bytes (368 or 528 bits) of overhead. The lower value ignores the preamble and the interpacket gap (the simplified diagrams shown in Figure 3-1 and Figure 3-2 use a nominal value of 54 bytes (432 bits)).
When VLAN tagging is used, the overhead increases by 4 bytes (32 bits).

To find the maximum rate of frames that can be transmitted to the LAN: The payload carried by each Ethernet frame is determined by the number of TDM bytes selected by the user: as listed in Table 3-5, the number of TDM bytes inserted in each frame is N x 48 bytes, where N can be selected by the user in the range of 1 to 8 (this results in payloads of 48 to 384 TDM bytes per frame). Considering that any given TDM byte is received only once in every TDM frame, the maximum rate at which TDM bytes can be received for filling packets is 8000 bytes per timeslot per second. Since an Ethernet frame is sent only after its payload field has been filled, the maximum filling rate occurs for bundles carrying 31 timeslots and a payload of 48 bytes per frame: in this case, filling the 48 bytes takes about 1.6 internal TDM frames, and thus the rate of payload filling is approximately 5000 frames per second.

Note: Considering the overhead, the actual packet rate for a 31-timeslot bundle without CAS is 5290 packets per second; with CAS, the maximum bundle size is 30 timeslots and the resulting rate is 5280 packets per second.

The filling rate for the other available TDM payload sizes is N times smaller: for example, for the largest N, 8, the filling rate is approximately 625 frames per second.

To find the actual rate of frames that can be transmitted to the LAN: The approach presented above for calculating the maximum filling rate per port can also be used for calculating the actual filling rate for other configurations. For example, at any given N value, the filling rate also decreases when the number of timeslots per bundle is less than the maximum. However, in many practical cases the timeslots available on an internal TDM port will be divided among several bundles, where each bundle has its own filling rate. This tends to increase again the average filling rate per internal port. When calculating the filling rate per port for any particular multiple-bundle case, remember that the maximum number of bundles per ML-IP module is 24 without support for CAS signaling, and 12 with support for CAS signaling.

To find the transport capacity requirements: The maximum bandwidth that must be allocated to the transmission of TDMoIP traffic depends on the payload rate, the number of TDM bytes per frame, CAS support, and the use of VLAN tagging. The maximum values, for the minimum and maximum numbers of TDM bytes per frame (48 and 384 bytes), are given in Table 3-4.

Table 3-4. Maximum Transport Capacity Requirements

TDM Bytes per Frame: 48
Maximum Required IP Capacity: 3126 kbps per TDM port, 6252 kbps per module
Maximum Required LAN Transport Capacity, without VLAN tagging: 4815 kbps per TDM port, 9630 kbps per module
Maximum Required LAN Transport Capacity, with VLAN tagging: 4995 kbps per TDM port, 9990 kbps per module

TDM Bytes per Frame: 384
Maximum Required IP Capacity: 2175 kbps per TDM port, 4350 kbps per module
Maximum Required LAN Transport Capacity, without VLAN tagging: 2380 kbps per TDM port, 4760 kbps per module
Maximum Required LAN Transport Capacity, with VLAN tagging: 2402 kbps per TDM port, 4804 kbps per module

Table 3-4 provides two types of information:
The maximum LAN transport capacity required to support a single internal TDM port. When considering the LAN capacity needed to support both internal ports of an ML-IP module, the values are doubled.
The maximum IP transport and processing capacity needed to support a single internal TDM port, and the values for both internal ports. These values are useful for planning the maximum load on IP routers handling TDMoIP traffic.

The values appearing in Table 3-4 should be used for conservative design, which means that full flexibility in changing the bundle configuration is available without having to reconsider the LAN transport capacity.
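As a quick cross-check of the figures above, the following Python sketch (illustrative only; it uses the 46/66-byte overhead values quoted earlier and ignores the additional per-packet fields mentioned in the Note above, so its results are rough and fall slightly below the conservative values of Table 3-4) estimates the frame rate and the LAN capacity required for a single bundle.

def tdmoip_lan_capacity(timeslots, tdm_bytes_per_frame, vlan_tagging=False, include_preamble_gap=True):
    """Rough LAN bandwidth estimate (bits per second) for one TDMoIP bundle."""
    # Each timeslot delivers 8000 bytes/s; a frame is sent once its payload field is full.
    fill_rate = timeslots * 8000 / tdm_bytes_per_frame      # frames per second
    overhead = 66 if include_preamble_gap else 46            # bytes, per the values quoted above
    if vlan_tagging:
        overhead += 4
    frame_bits = (tdm_bytes_per_frame + overhead) * 8
    return fill_rate * frame_bits

# Example: a 31-timeslot bundle with 48 TDM bytes per frame, no VLAN tagging.
print(round(tdmoip_lan_capacity(31, 48) / 1000), "kbps")     # approx. 4712 kbps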

Other factors that must be considered when evaluating the ML-IP LAN transport capacity requirements are:
Bandwidth needed for the CL module management traffic routed through the ML-IP module
Bandwidth needed for transferring the user port traffic (the traffic received through the ML-IP USER port), when this port is used.

You may also have to take into consideration the bandwidth needed by other equipment connected to the same LAN, when the LAN to which the ML-IP ports are connected is shared with additional users.

Packetizing Delay Considerations

The discussion presented above with respect to bandwidth utilization efficiency seems to point to the advantages of using a large payload size per frame. However, additional aspects must be considered when selecting the TDM Bytes per Frame parameter:

Filling time: the filling time, which is the time needed to load the payload into an Ethernet frame, increases in direct proportion to the TDM Bytes per Frame parameter. This is particularly significant for bundles with few timeslots; for example, a voice channel could be carried by a single-timeslot bundle. Considering the nominal filling rate (approximately one byte every 0.125 msec for a single-timeslot bundle), the time needed to fill a single-timeslot bundle is as follows:
At 48 TDM bytes per frame: 5.5 msec with CAS support, and 5.9 msec without CAS support
At 384 TDM bytes per frame: 44 msec with CAS support, and 47 msec without CAS support.
Therefore, before considering any other delays encountered along the end-to-end transmission path, the round-trip (or echo) delay for the voice channel example presented above is 92 msec at 384 TDM bytes per frame (including the additional intrinsic delay of the module, described below). Such long delays may also cause time-outs in certain data transmission protocols.

Intrinsic jitter: the transmission of packets to the network is performed at nominally equal intervals of 1 msec. This means that every 1 msec the packet processor of the ML-IP module sends to the network (through the appropriate Ethernet interface) all the frames ready for transmission. As a result, the actual payload transmission intervals vary in an apparently random way whose peak value depends on the bundle size, an effect called delay variance (or jitter). For example, a bundle with 6 timeslots fills the 48-byte payload field of an Ethernet frame every 1 msec. If the sending instants are not perfectly synchronized with the filling instants, the sending time will sometimes occur just in time and sometimes will be delayed by up to 1 msec relative to the ideal, creating a peak delay variance of 1 msec at the transmitting side. The intrinsic jitter in other cases is lower; therefore the delay variance generated by the ML-IP module cannot exceed 2 msec.
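As an illustration of the trade-off described above, the following Python sketch (illustrative only; it assumes 8000 bytes per timeslot per second and ignores CAS handling, so the results are slightly above the 5.9/47 msec figures quoted above) computes the filling time of a bundle for a given payload size.

def filling_time_ms(timeslots: int, tdm_bytes_per_frame: int) -> float:
    """Time (in msec) needed to fill one frame payload; each timeslot supplies 8 bytes per msec."""
    bytes_per_ms = timeslots * 8.0
    return tdm_bytes_per_frame / bytes_per_ms

# A single-timeslot voice bundle: about 6 ms at 48 bytes per frame, about 48 ms at 384 bytes per frame.
print(filling_time_ms(1, 48), filling_time_ms(1, 384))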

Jitter Buffer Sizing Guidelines

The method used to mitigate the effects of delay variation is the use of a jitter buffer.

Background

Ideally, since frames are transmitted at regular intervals, they should reach the destination after some fixed delay. If the transmission delay through the network were indeed constant, the frames would be received at regular intervals and in their original transmission order. In practice, the transmission delay varies because of several factors:
Intrinsic jitter at the transmit side, described above
Variations in the transmission time through the network, caused by the frame handling method: frames pass through many switches and routers, and in each of them the frame (or the packet encapsulated in the frame) is first stored in a queue with frames or packets from other sources, and is then forwarded to the next link when its turn arrives.
Intrinsic jitter at the receive side, due to the variation in the time needed to extract the payload from the received packets.

Jitter Buffer Functions

Any network designed for reliable data transmission must have a negligibly low rate of data loss. Therefore, it is reasonable to assume that essentially all the transmitted frames reach their destination. Under these circumstances, the rate at which frames are received from the network is equal to the rate at which frames are transmitted by their source (provided that the measurement is made over a sufficiently long time). As a result, it is possible to compensate for transmission delay variations by using a large enough temporary storage. This storage, called a jitter buffer, serves as a first-in, first-out buffer that operates as follows:
At the beginning of a session, the buffer is loaded with received frames until it is half full. After reaching the half-full mark, the read-out process is started.
The frames are read out at an essentially constant rate. To prevent the buffer from either overflowing or becoming empty (underflow), the read-out rate must be equal to the average rate at which frames are received from the network. See Timing Mode Selection Guidelines in Section 3.8 for details on selecting an appropriate timing mode.
The buffer stores the frames in accordance with their arrival order.

In addition to the storage function described above, the jitter buffer is also used as part of the adaptive clock recovery mechanism. This mechanism generates a clock signal having the frequency necessary to read out frames at the rate that keeps the jitter buffer as near as possible to the half-full mark. Therefore, the adaptive clock recovery mechanism actually recovers the original payload transmit clock. The bundle used as the basis for recovering the adaptive clock can be selected by the user.
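The following Python sketch illustrates the principle described above. It is a simplified model, not the ML-IP implementation: a FIFO that starts reading only after the half-full mark is reached, with the read-out rate nudged up or down so that the fill level stays near half full, as an adaptive clock recovery loop would do (the correction gain is chosen arbitrarily for illustration).

from collections import deque

class JitterBufferModel:
    """Simplified model of a jitter buffer with adaptive read-out rate."""
    def __init__(self, capacity_frames: int, nominal_rate: float):
        self.buffer = deque()
        self.capacity = capacity_frames
        self.rate = nominal_rate          # frames per second read out
        self.started = False

    def push(self, frame) -> bool:
        if len(self.buffer) >= self.capacity:
            return False                  # overflow: frame discarded
        self.buffer.append(frame)
        if not self.started and len(self.buffer) >= self.capacity // 2:
            self.started = True           # read-out starts at the half-full mark
        return True

    def pop(self):
        if not self.started or not self.buffer:
            return None                   # underflow (or still pre-filling)
        # Adaptive clock recovery: steer the read-out rate toward the half-full point.
        error = len(self.buffer) - self.capacity // 2
        self.rate *= 1.0 + 0.001 * error
        return self.buffer.popleft()

buf = JitterBufferModel(capacity_frames=16, nominal_rate=5000.0)
for i in range(8):
    buf.push(i)
print(buf.pop())   # 0: read-out begins once the half-full mark is reached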

Selecting an Optimal Jitter Buffer Size

For reliable operation, the jitter buffer must be large enough to ensure that it is not emptied when the transmission delay increases temporarily (an effect called underflow, or underrun), nor filled to the point that it can no longer accept new frames when the transmission delay decreases temporarily (an effect called overflow).

The ML-IP module supports individually configured jitter buffers for each timeslot bundle. The minimum size of the jitter buffer depends on the intrinsic jitter: for the ML-IP module, the minimum value is 3 msec. The maximum size is 300 msec.

The theoretically correct value for the size of the jitter buffer of any given bundle is slightly more than the maximum variation in the transmission delay through the network, as observed on the particular link between the bundle source and the destination. For practical reasons, it is sufficient to select a value that is not exceeded for any desired percentage of time: for example, a value of 99.93% means that the jitter buffer will overflow or underflow for an accumulated total of only one minute per day.

Jitter buffers are located at both ends of a link; therefore the delay added by the buffers is twice the selected value. The resultant increase in the round-trip delay of a connection may cause problems ranging from inconvenience because of long echo delays on audio circuits (similar to those encountered on satellite links) to time-outs of data transmission protocols. For example, many protocols use master-slave polling communication. With master-slave polling, the master station sends a polling request to each slave element, and an element answers only after receiving a request addressed to it. The interval between pollings must take into consideration the transmission delay through the network, but must also take into consideration the need to cover (poll) all of the elements within a prescribed interval. Therefore, in networks with many elements, it is not feasible to select long polling intervals. If the interval between pollings is too short relative to the delays encountered in getting a response from some elements, the master station may assume that the polled element is not operational, and will skip to the next element. This effectively disconnects the master station from the elements that cannot answer in time.

Therefore, the size of each jitter buffer must be minimized, to reduce the round-trip delay of each connection as far as possible, while still maintaining the link availability at a level consistent with the application requirements.

Bundle Redundancy Configuration Guidelines

Parameter Compatibility Requirements

When using the bundle redundancy feature, it is necessary to select the same values for the following parameters of the two bundles defined as a redundancy pair (see descriptions in Table 3-5): Connect, TDM Bytes in Frame, Voice OOS, Data OOS, Far End Type, Redundancy, Recovery Time. In addition, the Redundancy Bundle fields of the two bundles must point one to the other (symmetrical assignment).
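The compatibility rule above can be expressed as a simple check. The Python sketch below is illustrative only (it is not part of the Megaplex software; the dictionary keys simply mirror the parameter names of Table 3-5) and verifies that two bundle configurations qualify as a redundancy pair.

def is_valid_redundancy_pair(bundle_a: dict, bundle_b: dict) -> bool:
    """Check the parameter compatibility rules for a redundancy pair of bundles."""
    must_match = ("Connect", "TDM Bytes in Frame", "Voice OOS", "Data OOS",
                  "Far End Type", "Redundancy", "Recovery Time")
    if any(bundle_a.get(p) != bundle_b.get(p) for p in must_match):
        return False
    # The Redundancy Bundle fields must point one to the other (symmetrical assignment).
    return (bundle_a.get("Redundancy Bundle") == bundle_b.get("Number") and
            bundle_b.get("Redundancy Bundle") == bundle_a.get("Number"))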

When the bundles of a redundancy pair are defined on different ML-IP modules or on different internal ports of the same module, make sure to select identical values for the OOS Sig parameter of the two ports.

Fast Redundancy Switching

The mechanism used to initiate and execute bundle redundancy switching is capable of rapid response (less than 50 msec). Therefore, when the two bundles are defined on the same module, the time needed to perform redundancy switching, following the detection of a fail condition on the currently active bundle, is at most 50 msec. However, this is not possible if the jitter buffer size is close to 50 msec (or larger). When the bundles are located on different ML-IP modules, the redundancy switching time increases because of the need to alert the Megaplex central control subsystem located on the CL module. In this case, the redundancy switching time is a few seconds.

Defining a New Bundle

To define a new bundle, type:
ADD BND B
where B is the desired bundle number. Table 3-5 lists the bundle parameters.

Table 3-5. Bundle Parameters

Connect
Function: Controls the activation of the bundle.
Values:
ENABLE - The bundle is activated.
DISABLE - The bundle is not active. You can still program the desired parameters, so that the bundle will be ready for activation when needed.
Default: DISABLE

ML-IP Slot
Function: Selects the local ML-IP module on which the bundle will be defined. The module is specified by indicating the number of the I/O slot in which the desired module is installed.
Values: The allowed range depends on the number of I/O slots in the Megaplex chassis: Megaplex-2100: IO-1 to IO-12; Megaplex-2104: IO-1 to IO-4.
Default: IO-1

ML-IP TDM
Function: Selects the internal TDM port of the local ML-IP module on which the bundle will be defined. Make sure that the selected internal port has already been defined as connected.
Values: The available selections are IN1 and IN2.
Default: IN1

Table 3-5. Bundle Parameters (Cont.)

Destination IP
Function: Specifies the IP address of the destination bundle (another TDMoIP device). When the destination bundle is located on an ML-IP module, this field specifies the IP address of the internal TDM port on which the bundle is defined.
Values: Type in the desired IP address, using the dotted-quad format (four groups of digits in the range of 0 through 255, separated by periods).
Default: 0.0.0.0

Next Hop IP
Function: Specifies an IP address to which the bundle packets will be sent, to enable reaching the destination IP address. This is usually the address of an IP router port. You need to specify a next-hop IP address only when the destination address is not within the IP subnet of the local internal TDM port on which the bundle is defined.
Values: Type in the next-hop IP address using the dotted-quad format. To use the default gateway, leave this field at the default value, 0.0.0.0.
Default: 0.0.0.0

IP TOS
Function: Specifies the IP type-of-service parameter for this bundle. The specified value is inserted in the IP TOS field of the bundle IP packets. When supported by an IP network, the type-of-service parameter is interpreted, in accordance with RFC 791 or RFC 2474, as a set of qualitative parameters for the precedence, delay, throughput and delivery reliability to be provided to the IP traffic generated by this bundle. These qualitative parameters may be used by each network that transfers the bundle IP traffic to select specific values for the actual service parameters of the network, to achieve the desired quality of service.
Values: Type in the prescribed number, in the range of 0 to 255.
Default: 0
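The rule for when a next-hop address is needed follows directly from the IP address and subnet mask of the internal TDM port. A minimal Python sketch of that check (illustrative only; the addresses used are hypothetical examples):

import ipaddress

def next_hop_required(port_ip: str, port_mask: str, destination_ip: str) -> bool:
    """A next-hop router is needed only when the destination lies outside the port's subnet."""
    subnet = ipaddress.ip_network(f"{port_ip}/{port_mask}", strict=False)
    return ipaddress.ip_address(destination_ip) not in subnet

# Hypothetical example: port at 10.0.1.10 with mask 255.255.255.0, destination 10.0.2.20.
print(next_hop_required("10.0.1.10", "255.255.255.0", "10.0.2.20"))   # True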

Table 3-5. Bundle Parameters (Cont.)

Ext Eth
Function: Specifies the external (LAN) port of the module through which the bundle IP traffic is transmitted to the IP network.
Values:
EXT1 - The bundle traffic is transferred through the NET1 port. This selection can be used only when the bundle VLAN Tagging parameter is YES and Ring Mode is DISABLE.
EXT2 - The bundle traffic is transferred through the NET2 port. This selection can be used only when the bundle VLAN Tagging parameter is YES and Ring Mode is DISABLE.
AUTO - The external port (NET1 or NET2) is automatically selected by the internal Ethernet switch.
Default: AUTO

Dest Bundle
Function: Specifies the number of the destination bundle at the remote device.
Values: The range of values that can be used in any given application is device-specific. For example, for an IPmux-1 the only allowed value is 1, whereas for another ML-IP module the range is 1 to 120.
Default: 1

Jitter Buffer
Function: Specifies the value of the jitter buffer to be used on this bundle. See the selection considerations in the Jitter Buffer Sizing Guidelines section above.
Values: The allowed range is 3 to 300 msec, in 1-msec steps.
Default: 3 msec

Name
Function: Optional field; can be used to assign a logical name to the bundle.
Values: String of up to 8 alphanumeric characters.
Default: Empty string

TDM Bytes in Frame
Function: Specifies the number of TDM bytes of bundle payload to be inserted in each packet. See the selection considerations in the Selection Guidelines for TDM Payload Bytes per Frame section above.
Values: The available selections are 48, 96, 144, 192, 240, 288, 336 and 384 bytes.
Default: 48

Voice OOS
Function: Specifies the code transmitted in the bundle timeslots defined as voice timeslots, during out-of-service periods.
Values: The available selections are 00 to FF (hexadecimal).
Default: 00

Data OOS
Function: Specifies the code transmitted in the bundle timeslots defined as data timeslots, during out-of-service periods.
Values: The available selections are 00 to FF (hexadecimal).
Default: 00

Table 3-5. Bundle Parameters (Cont.)

Far End Type
Function: Specifies the type of interface used by the remote IPmux unit. When using the optional echo canceller, it also selects the type of voice channel encoding assumed by the echo canceller (A-law or µ-law, in accordance with ITU-T Rec. G.711). This parameter must always be specified when the bundle is terminated at an IPmux unit. However, even when the bundle is terminated at another ML-IP module, make sure to select the same value at both ends. The selected value must also match the encoding law used on PCM voice channels.
Values:
E1 - E1 interface. The echo canceller processes PCM signals assuming that they are encoded in accordance with the A-law.
T1 ESF - T1 interface with ESF framing. The echo canceller processes PCM signals assuming that they are encoded in accordance with the µ-law.
T1 D4 - T1 interface with SF (D4) framing. The echo canceller processes PCM signals assuming that they are encoded in accordance with the µ-law.
SERIAL - Serial interface. The echo canceller is disabled.
Default: E1

OAM Connectivity
Function: Controls the use of the RAD proprietary OAM connectivity protocol for this bundle. This protocol enables detecting loss of communication with the destination of TDMoIP traffic and taking steps that prevent the resulting flooding of this traffic. The protocol also enables checking that the destination uses a compatible configuration.
Values:
ENABLE - The use of the OAM connectivity protocol is enabled. This selection is recommended when operating in a network comprising only RAD TDMoIP equipment (that is, ML-IP modules and IPmux TDMoIP gateways).
DISABLE - The use of the OAM connectivity protocol is disabled.
Default: DISABLE

VLAN Tagging
Function: Controls the use of VLAN tagging for this bundle. When several bundles are routed through the same external port, tagging should be either enabled or disabled on all these bundles.
Values:
YES - VLAN tagging is enabled.
NO - VLAN tagging is disabled.
Default: NO

VLAN ID
Function: When VLAN tagging is enabled, specifies the VLAN ID number inserted in the frames of this bundle. When VLAN tagging is disabled for this bundle, this parameter is always N/A (not applicable).
Values: The allowed range is 1 to 4095. When Ring Mode is ENABLE, the value is 0.
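For reference, the 4 bytes of overhead added by VLAN tagging (mentioned in the bandwidth discussion above) carry the priority and VLAN ID fields. The Python sketch below is illustrative only and simply follows the standard IEEE 802.1Q tag layout; it builds the 4-byte tag from the two bundle parameters.

def build_vlan_tag(vlan_id: int, priority: int) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag: TPID 0x8100, then PCP(3 bits) | DEI(1 bit) | VID(12 bits)."""
    if not (1 <= vlan_id <= 4095 and 0 <= priority <= 7):
        raise ValueError("VLAN ID must be 1..4095 and priority 0..7")
    tci = (priority << 13) | vlan_id      # DEI bit left at 0
    return bytes([0x81, 0x00, (tci >> 8) & 0xFF, tci & 0xFF])

# Example: VLAN ID 100 with priority 7 (highest).
print(build_vlan_tag(100, 7).hex())       # '8100e064'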

Table 3-5. Bundle Parameters (Cont.)

VLAN Priority
Function: When VLAN tagging is enabled, specifies the priority assigned to the bundle traffic. When VLAN tagging is disabled for this bundle, this parameter is always N/A (not applicable).
Values: The allowed range is 7 (highest priority) to 0 (lowest priority).
Default: 1

Redundancy
Function: Controls the use of bundle redundancy.
Values:
NO - Redundancy is disabled for the bundle being configured.
YES - The redundancy function is enabled. In this case, the two bundles configured as a redundancy pair transmit the same data in parallel. Therefore, the remote unit can select the bundle from which to retrieve the payload, and does not have to synchronize its selection with that of the local Megaplex unit. Do not use this selection when ring redundancy is enabled.
Default: NO

Redundancy Bundle
Function: Selects the number of the other bundle of a redundancy pair (the selection must always be symmetrical).
Values:
NONE - Set for chassis redundancy applications.
BND1 to BND120 - The number of the other bundle of a redundancy pair.
Default: BND1

Redundancy Type
Function: Specifies the redundancy type. This field is displayed only when the redundancy option is enabled (Redundancy is YES).
Values:
1+1 - The data is duplicated and transmitted on both bundles.
1:1 - Only one bundle transmits the data.
Default: 1+1

Redundancy Function
Function: Ensures interoperability with the remote IPmux device; must be set to match the remote IPmux setting. This field is displayed only when the redundancy option is enabled (Redundancy is YES).
Values:
PRIMARY - The remote IPmux device is set to PRIMARY.
SECONDARY - The remote IPmux device is set to SECONDARY.
Default: PRIMARY
Note: When using 1+1 redundancy, both bundles involved in the redundancy pair may take the primary function.

Recovery Time
Function: Selects the minimum time interval between two consecutive redundancy flips. This field is displayed only when the redundancy option is enabled (Redundancy is YES).
Values: The allowed range is 01 to 99 seconds.
Default: 01

Notes:
With the chassis redundancy application in 1:1 mode, the remote site (which has an application other than chassis redundancy) gets to choose the active bundle using OAM messages, and the recovery time is not applicable at the local site (since the local chassis does not have bundles to switch between).
In 1:1 mode, set the remote unit to a different value, since flipping of one side results in flipping of the other.

Changing the Configuration of an Existing Bundle

To change the configuration of an existing bundle, type:
DEF BND B
where B is the desired bundle number. Table 3-5 explains the bundle parameters. The only difference is that the data form shows the currently configured parameter values, instead of the default values.

Deleting an Existing Bundle

To delete an existing bundle, type:
DEL BND B
You cannot delete a bundle to which timeslots have been connected. Therefore, before deleting an active bundle, use the DEF TS or DEF SPLIT TS command on the ML-IP internal port supporting the bundle to change the timeslot assignment.

3.7 Assigning Bundle Timeslots

The bundles serve as virtual internal ports, to which one or more I/O channels are routed for transmission to the network through the ML-IP module. The only difference is that bundles are defined on the internal TDM ports of the module, and therefore timeslots are routed to a bundle through the mediation of the internal port.

Timeslot Assignment Rules

1. A bundle can include timeslots from only one internal TDM port.
2. The maximum number of timeslots per bundle is 31. The maximum number decreases to 30 when CAS support is enabled on the corresponding internal port (Signaling is YES; see Table 3-3), because in this case timeslot 16 is reserved and cannot be assigned.
3. Timeslot assignment is performed by means of the DEF TS command (see the detailed description of this command in Appendix F of the Megaplex-2100 Installation and Operation Manual). When the bundle includes channels using less than one timeslot, use the DEF SPLIT TS command.

4. For convenience, when two bundles configured on the same ML-IP module are defined as a redundancy pair, you need to assign timeslots only to one of these two bundles: the assignment is automatically copied to the other bundle of the redundancy pair.

Timeslot Assignment Example

The following example shows the response to a DEF TS command issued to internal port IN1 of the ML-IP module installed in I/O slot 2.

[DEF TS data form for IO-02, port IN1 (screen capture not fully reproduced here): the upper section lists the required timeslot connections, and the lower section shows timeslots TS 01 through TS 31; in this example the first timeslots carry voice channels 10:01, 10:02 and 10:03 routed to bundle BND:003, followed by the data channels bypassed from the ML-IP module in slot 1.]

The Megaplex chassis covered by this example includes the following modules:
Two ML-IP modules installed in I/O slots 1 and 2
One VC-16 module installed in I/O slot 10.

The upper section of the data form shows the following timeslot routing requirements:
Two channels (1 and 2) are bypassed between the two ML-IP modules
Three voice channels (1, 2, and 3) of the VC-16 are to be routed to the ML-IP module in I/O slot 1.

The lower section of the data form shows the timeslot routing made in response to these requirements. For each of the 31 timeslots available on the ML-IP internal port, the data form includes the following 4 subfields (listed from top to bottom):
Timeslot number, in the range of TS01 through TS31
The I/O module and channel to be connected to the corresponding bundle timeslot, in the format SS:CC, where SS is the module slot number, and CC is the desired external port number. By scrolling with the F, B, J and I keys, you can display any of the desired channels appearing in the upper section of the data form that are waiting for assignment.
The destination bundle, in the format BND:B, where B is the bundle number. You can scroll only among the bundles currently defined on this port.
The timeslot type, in accordance with the standard types explained for the DEF TS command in Appendix F of the Megaplex-2100 Installation and Operation Manual.

Note that timeslot 16 cannot be assigned when the internal TDM port is configured to support CAS signaling (Signaling is YES; see Table 3-3).

3.8 Selecting the System Timing Reference

Timing Mode Selection Guidelines

This section deals only with the selection of nodal timing sources in networks in which the Megaplex units are interconnected by both TDM and packet-switched (IP) links, or by packet-switched (IP) links only. If no packet-switched (IP) links are used, the only consideration with respect to timing is to use the timing source having the highest quality available as the network timing reference. Given the characteristics of the Megaplex TDM main link modules, reliable, hierarchical timing distribution is possible throughout the whole network.

When packet-switched (IP) links are also used, additional factors must be taken into consideration. In particular, when IP links are used, some of the Megaplex units in the network may have to switch to the adaptive timing mode. The quality of the timing signals generated in the adaptive timing mode is less predictable than the timing reference quality in the other timing modes, because it depends on the transmission performance of the packet-switched network. Therefore, whenever other alternatives are available, avoid using the adaptive timing mode.

The considerations involved in the selection of an optimal timing reference and its distribution in networks that include packet-switched (IP) links are presented below.

Selecting the Timing Source in Networks with Mixed Link Types

In networks that include both TDM and packet-switched (IP) links, it is often possible to avoid using adaptive timing. The preferred approach is to use the same timing distribution approach as in TDM-only networks:

Select one Megaplex unit as the network timing reference (the master unit with respect to timing). The nodal timing of this unit may be one of the following, depending on the particular application:
Timing locked to a station clock signal, supplied to one of the TDM main link modules. The station clock signal is usually derived from high-quality timing sources, and therefore it is the preferred timing source.
Timing locked to the receive clock of an ISDN channel, a station clock provided by a local exchange or PBX, or a data channel connected to a data transmission network having its own high-quality timing reference (for example, an atomic clock).
Internal timing: this is practical for standalone networks.

Configure the other Megaplex units in the network to lock their nodal timing to the timing of the master Megaplex unit. For this purpose, at each of the other Megaplex units, select the clock recovered from the TDM link leading toward the master Megaplex unit as the nodal timing reference.

Figure 3-3 shows the resultant timing flow in a network with mixed links (both TDM and packet-switched (IP)) that is configured in accordance with these recommendations. When this approach is used, the timing mode of the ML-IP modules installed in these Megaplex units is as follows:
The timing of the transmit-to-network path is determined by the nodal timing of the Megaplex unit in which the module is installed. As a result, all the ML-IP modules in the network shown in Figure 3-3 transmit data at a rate derived from the common timing source used by the master Megaplex unit.
The timing source used to read the contents of the jitter buffers (located in the receive-from-network path) is also derived from the nodal timing. Therefore, the read-out rate is equal to the transmit rate, a condition that supports normal operation.

[Figure 3-3. Timing Flow in a Typical Network with Mixed Link Types]

Selecting the Timing Source in Networks with IP Links Only

In applications that use only ML-IP modules to provide connectivity to Megaplex units through the IP network, it is not always cost-effective or technically feasible to provide a legacy timing reference for each unit. The same situation occurs when the only way to connect to a Megaplex unit, which is part of a network using mixed link types (see for example Figure 3-3), is through an IP network. Moreover, in applications of the type described above, each Megaplex often serves as the network termination unit for all the equipment connected to its channels, and therefore there is no alternative source that can be used as a reliable timing reference.

For such applications, the only solution is to use the adaptive timing mode at all these Megaplex units. As explained in the Jitter Buffer Functions section above, the clock recovered from a user-selected bundle can be used as the nodal timing reference. This mode is called adaptive timing.

To use adaptive timing for timing distribution, configure the Megaplex units in the network as follows:
Select one Megaplex unit as the network timing reference (the master unit with respect to timing). The nodal timing mode of this unit may be any of the modes listed above for the master unit in a network with mixed links.


More information

Special expressions, phrases, abbreviations and terms of Computer Networks

Special expressions, phrases, abbreviations and terms of Computer Networks access access point adapter Adderssing Realm ADSL (Asymmetrical Digital Subscriber Line) algorithm amplify amplitude analog antenna application architecture ARP (Address Resolution Protocol) AS (Autonomous

More information

ETX-201A Carrier Ethernet Demarcation Device

ETX-201A Carrier Ethernet Demarcation Device Data Sheet ETX-201A Carrier Demarcation Device Smart demarcation point between the service provider and customer networks Carrier demarcation device delivering business services over fiber infrastructure

More information

Configuring RTP Header Compression

Configuring RTP Header Compression Configuring RTP Header Compression First Published: January 30, 2006 Last Updated: July 23, 2010 Header compression is a mechanism that compresses the IP header in a packet before the packet is transmitted.

More information

IPmux-2L, IPmux-4L. IPmux-2L, IPmux-4L are a TDM pseudowire. also serves as an Ethernet-based access device.

IPmux-2L, IPmux-4L. IPmux-2L, IPmux-4L are a TDM pseudowire. also serves as an Ethernet-based access device. Where to buy > See the product page > Data Sheet IPmux-2L, IPmux-4L Legacy over PSN solution for transmitting E1 streams over packet switched networks Comprehensive support for pseudowire/circuit emulation

More information

RAD KILOMUX-2100/2104

RAD KILOMUX-2100/2104 Page 1 of 10 RAD KILOMUX-2100/2104 FEATURES Integrating access multiplexer for data, voice, fax and LAN Maximizes utilization of main link bandwidth Main link data rates from 9.6 to 768 kbps Connects to

More information

TDMoEA Interface Card for Loop-AM3440 Series

TDMoEA Interface Card for Loop-AM3440 Series Interface Card for Series Features Hot pluggable interface card for /B/C series Four ports for WAN or LAN port assignment Bandwidth up to 4 x 2 M and support N x 64K bps Two combo Gigabit (GbE) with 2

More information

LA-110 Advanced Integrated Access Device

LA-110 Advanced Integrated Access Device Data Sheet LA-110 Provide voice, data and LAN services over ATM or packet-switched networks, with DSL interfaces Offers multiple services over ATM or PSN Supports standard-compliant pseudowire (PW) with

More information

Atrie WireSpan 620. User's Manual

Atrie WireSpan 620. User's Manual Atrie WireSpan 620 User's Manual WireSpan 620 Fractional E1/Ethernet Access Unit Installation and Operation Manual (Version 1.00) CONTENTS CHAPTER 1 Introduction.. 1-1 CHAPTER 2 Installation and Setup..

More information

FCD-155 STM-1/OC-3 Terminal Multiplexer

FCD-155 STM-1/OC-3 Terminal Multiplexer Data Sheet FCD-155 Transports LAN and TDM traffic over SDH/SONET networks Groomed LAN and legacy (TDM) traffic over SDH/SONET networks VLAN and point-to-multipoint switching Ethernet traffic mapped to

More information

Optimux-34 Fiber Optic Multiplexer

Optimux-34 Fiber Optic Multiplexer Data Sheet Optimux-34 Multiple E1, Ethernet, or High-speed Data over E3 or Fiber, up to 110 km (68 miles) Up to 16 E1 links, high-speed data, and Ethernet traffic multiplexed into one E3 copper or fiber

More information

Megaplex-4100 Next Generation Multiservice Access Node

Megaplex-4100 Next Generation Multiservice Access Node Data Sheet Megaplex-4100 Gigabit Ethernet and/or STM-1/OC-3 uplinks Ethernet and TDM Central/Aggregation Solution Ethernet over copper, fiber or DSL aggregator STM-1/OC-3 ADM (add/drop multiplexer) 4/1/0

More information

Interface The exit interface a packet will take when destined for a specific network.

Interface The exit interface a packet will take when destined for a specific network. The Network Layer The Network layer (also called layer 3) manages device addressing, tracks the location of devices on the network, and determines the best way to move data, which means that the Network

More information

ABSTRACT. that it avoids the tolls charged by ordinary telephone service

ABSTRACT. that it avoids the tolls charged by ordinary telephone service ABSTRACT VoIP (voice over IP - that is, voice delivered using the Internet Protocol) is a term used in IP telephony for a set of facilities for managing the delivery of voice information using the Internet

More information

Chapter 15 Local Area Network Overview

Chapter 15 Local Area Network Overview Chapter 15 Local Area Network Overview LAN Topologies Bus and Tree Bus: stations attach through tap to bus full duplex allows transmission and reception transmission propagates throughout medium heard

More information

Introduction to Internetworking

Introduction to Internetworking Introduction to Internetworking Introductory terms Communications Network Facility that provides data transfer services An internet Collection of communications networks interconnected by bridges and/or

More information

Question 7: What are Asynchronous links?

Question 7: What are Asynchronous links? Question 1:.What is three types of LAN traffic? Unicasts - intended for one host. Broadcasts - intended for everyone. Multicasts - intended for an only a subset or group within an entire network. Question2:

More information

FSOS. Ethernet Configuration Guide

FSOS. Ethernet Configuration Guide FSOS Ethernet Configuration Guide Contents 1 Configuring Interface... 1 1.1 Overview...1 1.2 Configuring Interface State...1 1.2.1 Configurations...1 1.2.2 Validation...1 1.3 Configuring Interface Speed...

More information

Data and Computer Communications. Chapter 2 Protocol Architecture, TCP/IP, and Internet-Based Applications

Data and Computer Communications. Chapter 2 Protocol Architecture, TCP/IP, and Internet-Based Applications Data and Computer Communications Chapter 2 Protocol Architecture, TCP/IP, and Internet-Based s 1 Need For Protocol Architecture data exchange can involve complex procedures better if task broken into subtasks

More information

Configuring RTP Header Compression

Configuring RTP Header Compression Header compression is a mechanism that compresses the IP header in a packet before the packet is transmitted. Header compression reduces network overhead and speeds up the transmission of either Real-Time

More information

TDM over IP. International Department

TDM over IP. International Department TDM over IP International Department 2010-7-15 RAISECOM TDM over IP TDMoIP Specification 4 E1/T1 TDMoIP product 1 E1/T1 TDMoIP product How to config Nview NNM V5 GUI TDMoIP product family Central RC1201-2GESTM1

More information

V C ALIANT OMMUNICATIONS. 4 x Ethernet over T1 (IP over TDM) Data Sheet & Product Brochure U.K. INDIA U.S.A. Valiant Communications (UK) Ltd

V C ALIANT OMMUNICATIONS. 4 x Ethernet over T1 (IP over TDM) Data Sheet & Product Brochure U.K. INDIA U.S.A. Valiant Communications (UK) Ltd V C ALIANT OMMUNICATIONS 4 x Ethernet over T1 (IP over TDM) Data Sheet & Product Brochure U.K. Valiant Communications (UK) Ltd 1, Acton Hill Mews, 310-328 Uxbridge Road, London W3 9QN, UK E-mail: gb@valiantcom.com

More information

ET4254 Communications and Networking 1

ET4254 Communications and Networking 1 Topic 10:- Local Area Network Overview Aims:- LAN topologies and media LAN protocol architecture bridges, hubs, layer 2 & 3 switches 1 LAN Applications (1) personal computer LANs low cost limited data

More information

RICi-16 Ethernet over Bonded PDH Network Termination Unit

RICi-16 Ethernet over Bonded PDH Network Termination Unit Data Sheet RICi-16 Connects Fast Ethernet LANs transparently over TDM infrastructure Transports Ethernet traffic over 16 bonded E1 or T1 ports or two clear channel T3 ports using Ethernet over NG-PDH protocols

More information

CE Ethernet Operation

CE Ethernet Operation 25 CHAPTER Note The terms "Unidirectional Path Switched Ring" and "UPSR" may appear in Cisco literature. These terms do not refer to using Cisco ONS 15xxx products in a unidirectional path switched ring

More information

Lecture 3. The Network Layer (cont d) Network Layer 1-1

Lecture 3. The Network Layer (cont d) Network Layer 1-1 Lecture 3 The Network Layer (cont d) Network Layer 1-1 Agenda The Network Layer (cont d) What is inside a router? Internet Protocol (IP) IPv4 fragmentation and addressing IP Address Classes and Subnets

More information

2. LAN Topologies Gilbert Ndjatou Page 1

2. LAN Topologies Gilbert Ndjatou Page 1 2. LAN Topologies Two basic categories of network topologies exist, physical topologies and logical topologies. The physical topology of a network is the cabling layout used to link devices. This refers

More information

Circuit Switching and Packet Switching

Circuit Switching and Packet Switching Chapter 10: Circuit Switching and Packet Switching CS420/520 Axel Krings Page 1 Switching Networks Long distance transmission is typically done over a network of switched nodes Nodes not concerned with

More information

Unit 5: Internet Protocols skong@itt-tech.edutech.edu Internet Protocols She occupied herself with studying a map on the opposite wall because she knew she would have to change trains at some point. Tottenham

More information

SEN366 (SEN374) (Introduction to) Computer Networks

SEN366 (SEN374) (Introduction to) Computer Networks SEN366 (SEN374) (Introduction to) Computer Networks Prof. Dr. Hasan Hüseyin BALIK (12 th Week) The Internet Protocol 12.Outline Principles of Internetworking Internet Protocol Operation Internet Protocol

More information