White Paper: Campus Network Design Case Study

Questions or feedback? Contact: Stuart Hamilton, shamilto@cisco.com

Introduction

This design guide illustrates scalable campus network design techniques for building large switched networks. It is presented in the form of a case study to provide a specific example of a set of customer requirements, followed by two detailed network designs:

- Switched Ethernet to the desk with a Fast Ethernet-based backbone
- Switched Ethernet to the desk with an Asynchronous Transfer Mode (ATM)-based backbone

The methodology represents a conservative and practical approach to network design and is by no means the only way of achieving the customer goals. In fact, the design philosophy used here is to keep things as simple and straightforward as possible and then to use the rich tools available in Cisco IOS software to tune and tailor the network as traffic patterns and applications emerge. All the designs shown here are possible with products and software available as of July 1997. Migrations to future technologies are also shown and are clearly labeled as future.

Current Network Environment

The customer's current network has evolved from a shared-hub, collapsed-backbone, router network with pockets of switching deployed in network hotspots. Some specifics of the current state of the network include:

- 6000-person campus, mostly PCs; 95 percent shared Ethernet, 5 percent Token Ring
- IP is the primary protocol, with IPX, limited AppleTalk, and some NetBIOS
- Some centralized enterprise servers, many workgroup servers
- Currently using network 10.0.0.0 IP addressing (RFC 1918)
- Collapsed-backbone router network with hubs and AGS+/Cisco 7000 series routers
- Heavy utilization on server segments and client subnets

Current traffic flow patterns have not been characterized, but most of the traffic is assumed to be client/server-based, with an increasing amount of cross-subnet traffic due primarily to intranet Web applications.

Strategic Directions of the Customer

Clearly, greater bandwidth for each user port as well as higher-speed server connections are required. In order to satisfy the business requirements during the next few years, numerous decisions have been made that comprise the core set of network requirements:

- Switched Ethernet to the desktop
- Microsoft NT servers using IP as the primary protocol
- A mixture of enterprise servers (e-mail, Notes, Web), high-power bandwidth-intensive workgroup servers, and multimedia servers for IP videocasting
- Plug-and-play operation for adds, moves, and changes
- No single points of failure from a hardware perspective that affect more than 100 users (same as today with shared hubs)
- Ability to optimize the design based on measured traffic flow characteristics
- Ability to increase the bandwidth in any part of the network using existing equipment
- Connection to legacy networks during the transition to the switched network
- Structured network growth with minimal infrastructure impact

Design Methodology

The design process uses functional building blocks such that the network can not only be constructed in a structured, hierarchical manner, but can also be managed and troubleshot in the same simple way. For simplicity, this design is not bound by constraints such as the physical location of wiring closets. One building with adequate wiring closets and equipment rooms to house all the proposed equipment is assumed. In practice, however, physical constraints are present, and such constraints affect the selection of specific equipment configurations. Other assumptions include:

- With a population of 6000 people, a total of 8000 ports will be provisioned, accounting for desktops with more than one connection, conference rooms, training rooms, and so on.
- Each wiring closet has an average of 100 unshielded twisted-pair (UTP) Category 5 wire drops going into it.
- Fiber runs are available from each wiring closet to intermediate equipment closets.

Ethernet Design

Design starts at the wiring closet and works down to the core of the network. With 8000 users and 100 users per wiring closet, 80 desktop Ethernet switches are needed. (See Figure 1.) These switches are either Catalyst 5000 or 5500 switches, depending on the exact port density required. Sometimes, customers start by provisioning for autosensing 10/100 ports, knowing that the migration path from the desktop is switched 10 Mbps and then switched 100 Mbps, as required. A Catalyst 5500 is also desirable for high-availability users because of its redundant supervisor feature.

Figure 1: Desktop Connectivity (80 switched access-layer Catalyst 5000 switches)

IP Addressing and Subnetting (VLAN Structure)

The main goal is to make IP addressing as simple as possible to minimize the cost of adds, moves, and changes. In the past, each workstation had to be statically configured with an IP address, address mask, and default gateway. Whenever a person moved, a live body had to visit that end-user machine and reconfigure all three of those entries. Now people are traveling to remote offices, and they have Ethernet at home behind Integrated Services Digital Network (ISDN) access routers; therefore, administering a different address every time a person changes location is not practical.

Dynamic Host Configuration Protocol (DHCP) allows users to take a PC out of the box, plug it into a DHCP-enabled network, and have all the necessary addresses, masks, default gateways, and more allocated automatically without any end-user intervention. Users can now move, travel to remote offices, and use the home office router by simply plugging in and running. This scheme is mature enough that many large corporations (including Cisco Systems) are using it in the day-to-day operation of their networks. (This feature is reminiscent of the plug-and-play networking features first available in the Apple Macintosh many years ago.)

Given that DHCP is the solution to the adds, moves, and changes problem, an addressing and subnetting structure that is simple to deploy and troubleshoot can be chosen. Customers no longer need to let their IP addressing scheme act as a constraint in their network design. The big question is what size to make the subnets. Over the years, average subnet size has started very large, moved to smaller subnets to make the network more scalable when routers first emerged in the campus, enlarged again as the switching wave swept across the industry, and decreased yet again as new applications such as IP multicast began to permeate corporations. Clearly there is no one answer as to the correct subnet size. The only given is that the tools must be available to easily adjust subnet size one way or the other, depending upon the application and scalability requirements that drive the network.

For simplicity, network 10.0.0.0 addressing and a subnet mask of 255.255.255.0 are used. (See RFC 1918.) This setup allows for a maximum of 254 hosts per subnet or virtual LAN (VLAN). Later, using DHCP, this number can be adjusted in either direction without a visit to end-user stations. With 100 people per wiring closet, a subnet size of 100 means that, on average, 154 addresses per subnet are not used; but since they are free, this size constitutes a justifiable business decision.

The allocation of subnets is shown in Figure 2. Half the ports from one box are assigned to VLAN 2 (for example), and the other half to VLAN 3. This step is performed on each pair of boxes, for a total of 80 VLANs. Since each VLAN corresponds to an IP subnet, there are 80 subnets as well. As shown later in this discussion, this approach allows scaling the uplink bandwidth from the access layer to the next layer, the distribution layer. Also, containing the scope of VLAN membership greatly simplifies the overall maintenance of the network over time. Remember again that the specific subnet that someone is in does not matter; network connectivity is what is important. How to optimize both Layer 2 and Layer 3 throughput for access to network services is discussed later.
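Tying the DHCP and subnetting decisions together, the following is a minimal sketch of a router (or RSM) subinterface for one access-layer VLAN that relays client DHCP broadcasts to a central server. The Cisco IOS commands are standard, but the interface numbering, addresses, and the DHCP server address 10.0.100.10 are hypothetical values chosen for illustration, not specifics from the case study.

   ! Hypothetical ISL subinterface for VLAN 2 (subnet 10.0.2.0/24)
   interface FastEthernet1/0.2
    encapsulation isl 2
    ip address 10.0.2.2 255.255.255.0
    ! Relay DHCP/BOOTP broadcasts from VLAN 2 clients to the central
    ! DHCP server (server address assumed for illustration)
    ip helper-address 10.0.100.10

With a relay like this on every subnet, a PC moved to any VLAN obtains its address, mask, and default gateway from the central server automatically, which is what keeps adds, moves, and changes hands-off.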
Figure 2: VLAN Allocation for Wiring Closet Switches (each access-layer Catalyst 5000 carries two VLANs, paired 2/3, 4/5, 6/7, and so on up to 80/81)

Distribution Layer

Each access-layer Catalyst 5000 series switch has two Fast Ethernet ISL uplinks. To maximize network redundancy, each of those links terminates on a different Catalyst 5000 series switch at the distribution layer, as shown in Figure 3. Each distribution-layer Catalyst 5000 series switch has 12 Fast Ethernet uplinks, for a total of 14 switches. The network is constructed out of seven functional building blocks, each comprising 12 VLANs, 12 access-layer switches, and two distribution-layer switches. How all the VLANs communicate within a building block, and how building blocks communicate with each other, is discussed later. Also noteworthy is that, in most networks, physical constraints such as building sizes typically define the sizes of the building blocks, so a building-block approach maps very nicely onto most real-world scenarios.

Figure 4 shows the distribution layer in more detail; it shows the different VLAN forwarding paths, based on spanning-tree parameter settings. Judiciously choosing the root switch for each VLAN enables traffic forwarding on both of the uplinks coming from each access-layer switch. Redundancy is also configurable such that, on the failure of an uplink, the other uplink takes over the traffic of both VLANs. The diagram shows which VLANs are in forwarding state and which VLANs are in blocking state for the access-layer switches. You accomplish this load balancing by assigning the root bridges as shown in the diagram; a configuration sketch follows the figure descriptions.

Figure 3: Distribution Layer (the 80 access-layer Catalyst 5000 switches uplink over Fast Ethernet ISL to 14 distribution-layer Catalyst 5000 switches; one building block is expanded in Figure 4)

Figure 4: Distribution Detail, Single Building Block (one distribution switch is the spanning-tree root for even VLANs 2, 4, 6, 8, 10, 12 and the other for odd VLANs 3, 5, 7, 9, 11, 13; the ISL uplinks carry VLANs 2-13, with each access switch forwarding one VLAN on each uplink and blocking the other, so the spanning-tree roots back each other up)
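As a compact sketch of the root-bridge assignment that produces this even/odd load balancing, the CatOS configuration on the two distribution switches of one building block might look roughly like the following. The command itself is walked through in the text that follows; the priority values simply reuse the ones discussed there.

   # Left distribution switch: root for the even VLANs of this building
   # block, backup root for the odd VLANs
   set spantree priority 500 2
   set spantree priority 500 4
   set spantree priority 1000 3
   set spantree priority 1000 5
   # ...and so on for VLANs 6 through 13

   # Right distribution switch: the mirror image
   set spantree priority 500 3
   set spantree priority 500 5
   set spantree priority 1000 2
   set spantree priority 1000 4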

To set the bottom-left switch as the root bridge for VLAN 2, you would enter the command set spantree priority 500 2, where 500 is the priority and 2 corresponds to VLAN 2. On the other distribution-layer Catalyst 5000, you would configure a priority of 1000 (greater than 500 and less than the default of 32,768) such that it would become the root bridge in the event of a primary root bridge failure. The VLAN trunking scheme known as Inter-Switch Link (ISL) is used to carry two VLANs on a trunk between the access- and distribution-layer switches.

To see how this design methodology works in the event of an uplink failure, examine the new forwarding path constructed by spanning tree when the leftmost Fast Ethernet uplink is removed or cut. (See Figure 5.) Notice that the second uplink from the access-layer switch becomes the forwarding path for both VLANs 2 and 3.

Figure 5: Link Failure Redundancy Analysis, Spanning Tree (when the leftmost Fast Ethernet ISL uplink fails, the surviving uplink moves to forwarding for both VLANs 2 and 3)

Bandwidth Scaling

In the initial design of a network, it is important to know what tools are available to tune network behavior after real user traffic starts flowing. One of the areas to consider for tuning is the amount of bandwidth between the access and distribution layers. One simple approach is shown in Figure 6. Because of the flexible VLAN/subnet scheme chosen earlier, new VLANs (VLAN 100 in this case) can be created to allow forwarding of more traffic between the two layers of the network. In fact, examination of real traffic flows might dictate segregating a few power users off a switch and assigning them their own uplink, as shown in Figure 6. In practice, this task is very simple because it involves some trivial programming on the switches and using DHCP to assign appropriate addresses to the new clients of VLAN 100. No visitation of the end-user workstations is required. Another simple technique using Fast EtherChannel technology is illustrated in Figure 7 and sketched below.

Figure 6: Scaling the Bandwidth (a new VLAN 100 with its own uplink is added to one access switch; the left distribution switch becomes spanning-tree root for VLANs 2, 4, 6, 8, 10, 12, and 100)

Figure 7: Scaling Bandwidth with Fast EtherChannel (future) (two Fast Ethernet links between an access switch and a distribution switch are bundled into a single Fast EtherChannel trunk carrying VLANs 2 and 3)
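The document labels Fast EtherChannel on these uplinks as a future capability, so the following is only a forward-looking sketch of how such a bundle is typically configured in CatOS once the feature is available; the module/port numbers are hypothetical, and ISL remains the trunking encapsulation assumed in this design.

   # Bundle two Fast Ethernet uplink ports into one Fast EtherChannel
   # (the ports must have matching VLAN/trunk settings before bundling)
   set port channel 1/1-2 on
   # Carry the building block's VLANs across the bundle as an ISL trunk
   set trunk 1/1 on

Traffic is then distributed across the physical links per source/destination conversation, as described next.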

Aggregating either two or four Fast Ethernet links between any two switches enables scaling the bandwidth within a single VLAN. In order to preserve packet sequencing, the Fast EtherChannel protocol locks an individual source/destination address conversation to a specific physical Fast Ethernet link. In the event of the failure of that link, the conversations on that link are automatically redistributed across the remaining active links.

Workgroup Server Strategies

In an enterprise network, applications determine the demands placed upon a network server and hence its optimal placement in the network hierarchy. In this document, workgroup servers are defined as mission-specific, high-throughput servers meant to serve a subset of the users in the enterprise. An example in Cisco's own network is the engineering servers used to create new versions of Cisco IOS software. These servers are clearly targeted at a subset of users who write and maintain software releases. Typically, high throughput to these servers and highly predictable traffic flow patterns would be expected. For this reason, a direct Layer 2 path to the server can be maintained to reap the benefits of the high aggregate throughputs offered by Layer 2 switching. Figure 8 shows these servers attached via Fast Ethernet to the distribution-layer switches. These servers can participate in numerous VLANs if network interface cards (NICs) that have ISL VLAN intelligence are used. This setup ensures that anyone in this building block (of subnets 2 through 13) can have a Layer 2 path to these servers. In practice, you would not likely have a single workgroup server serving this many users; the throughput requirements of a few dozen users might very well be the correct server sizing.

Figure 8: Workgroup Server Placement (servers attached over Fast Ethernet ISL to the distribution-layer switches carrying VLANs 2-13)

Another decision to be made is the number of VLANs that you should configure on a server NIC. The maximum number allowed by the various vendors varies widely (specifically in the case of multiple emulated LANs [ELANs] in LAN Emulation [LANE]), and users often take a conservative approach in choosing this number. In practice, putting multiple VLANs on a single NIC is functionally equivalent to having multiple NICs in a server. Based on those experiences, a conservative approach would be to begin with 6 to 10 VLANs on a single interface.

Core-Layer Strategies

In review, Figure 9 shows the configuration so far.

Figure 9: Access and Distribution Layer Design (80 access-layer Catalyst 5000 switches connected over Fast Ethernet ISL to 14 distribution-layer Catalyst 5000 switches)

A total of 14 distribution-layer switches comprise seven functional building blocks, and communication within a VLAN inside a building block is possible. The core layer provides three connectivity functions:

- Inter-VLAN communication within a building block
- Communication between building blocks (inter-VLAN by definition)
- Creation of a communication path to the enterprise servers and to the WAN and Internet

Two approaches to the creation of the core layer are examined:

- Using standalone routers
- Using integrated routing in the existing distribution-layer switches

Both approaches yield the desired results, and the choice depends upon physical constraints and currently installed equipment. The detailed connectivity of the core layer is shown for the left half of the network. (See Figure 11.)

Figure 11: Distribution-to-Core Inter-VLAN Redundancy (Fast Ethernet ISL trunks; P = HSRP primary, B = HSRP backup; one core router is primary for the even VLANs 2-12 of the first building block and backup for the odd VLANs 3-13, its partner is primary for the odd VLANs and backup for the even ones, and the same pattern repeats for the next building block, VLANs 14-25)

Core-Layer Design with Standalone Routers

The most straightforward way to deploy the connectivity using standalone routers is illustrated in Figure 10. A total of four Cisco 7500 series routers, each with six or eight Fast Ethernet port adapters, comprise the core layer of the network. A configuration sketch for one of these routers follows the figure descriptions below.

Figure 10: Core Design with Standalone Routers (the distribution-layer Catalyst 5000 switches connect over Fast Ethernet ISL to four Cisco 7500 core routers; the workgroup servers attach to the distribution layer; the left half of the core is expanded in Figure 11)
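As a rough sketch only: one of the core Cisco 7500 routers might carry the first building block's VLANs on ISL subinterfaces like the ones below, acting as HSRP primary for the even VLANs and backup for the odd VLANs. (The text notes that HSRP over ISL was still planned for a future Cisco IOS release at the time of writing.) Interface numbers, addresses, HSRP group numbers, and the choice of Enhanced IGRP as the routing protocol are illustrative assumptions, not specifics from the case study.

   ! Even VLAN of building block 1: this router is the HSRP primary
   interface FastEthernet1/0/0.2
    encapsulation isl 2
    ip address 10.0.2.2 255.255.255.0
    standby 2 ip 10.0.2.1
    standby 2 priority 110
    standby 2 preempt
   !
   ! Odd VLAN of building block 1: this router is the HSRP backup
   interface FastEthernet1/0/0.3
    encapsulation isl 3
    ip address 10.0.3.2 255.255.255.0
    standby 3 ip 10.0.3.1
    standby 3 priority 90
    standby 3 preempt
   !
   ! Fast-converging interior routing across the core (EIGRP assumed here;
   ! OSPF is the other option discussed later in the text)
   router eigrp 100
    network 10.0.0.0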

Two paths from the distribution layer to the core layer are desirable for redundancy purposes. The ability of both of those trunks to forward traffic in normal operating conditions is also a must. In order to achieve these two goals, the VLANs in each building block are split into odds and evens. For the first building block, remember that the left distribution switch is the forwarding path for the even-numbered VLANs (as defined by the root bridge placement earlier), so it makes sense to make even VLANs 2 through 12 the normal forwarding path into the router. The routers run Hot Standby Router Protocol (HSRP), and the interfaces that will be the primary and backup HSRP routers are carefully chosen. (Support for HSRP over ISL is required and is planned for a future Cisco IOS release.) In essence, the inter-VLAN routing for the first building block (VLANs 2 through 13) is done by the left-hand router, while the routing for the next building block (VLANs 14 through 25) is done by the right-hand router. Using HSRP, the routers back each other up on a building-block basis.

Overall, IP performance can be scaled by using the distributed switching feature (planned for a future Cisco IOS release) in the Cisco 7500 class routers. With this feature, all IP inter-VLAN routing that stays on a single Versatile Interface Processor (VIP2) slot can be switched locally on the card. Overall in this design, 14 VIP2 cards are capable of doing distributed switching.

Scaling Bandwidth to the Core

Again, there are two methods of increasing the bandwidth between the distribution and core layers, based upon user and application needs. (See Figure 12.) A simple approach is to break the VLANs into smaller groups and add more uplinks. This setup is illustrated by taking the ISL trunk that was labeled with VLANs 2 through 12 even and breaking it up into two separate trunks of 2 through 6 even and 8 through 12 even. A secondary approach, available in future Cisco IOS software releases, utilizes Fast EtherChannel bundle support on both the switches and the routers.

Figure 12: Scaling Bandwidth to the Router (the single distribution-to-core ISL trunk carrying VLANs 2-12 even is split into two trunks, 2-6 even and 8-12 even)

Core-Layer Interconnection

Up to this point, the network has been partitioned into two pieces at the core layer, and the whole network needs to be able to communicate. Interconnecting the core layer, as shown in Figure 13, accomplishes this requirement.

Figure 13: Core Design Detail (the four Cisco 7500 core routers, one pair serving VLANs 2-49 and the other VLANs 50-81, are interconnected through a pair of Catalyst 5002 switches over plain Fast Ethernet, no ISL, using two common subnets, VLAN 100 and VLAN 101; the enterprise servers also attach to these switches, and Fast EtherChannel bundles are a future option on these links)

Two subnets are provisioned between the core routers and the Catalyst switches. There is no need to provision these subnets as ISL trunks; hence they are simply regular Fast Ethernet interfaces. Specifically, VLAN 100 is chosen to be the common subnet on the leftmost Catalyst switch, and VLAN 101 is chosen to be the common subnet on the rightmost switch. From a traffic perspective, traffic between the two halves of the network has two active redundant paths to traverse. This behavior is determined by the routing protocol used in the routers; it is a manifestation of having two equal-cost paths between the two sides of the network. Thus, from a redundancy perspective, a fast-converging routing protocol is required to reroute traffic in failure conditions. Typically in an IP network, either Open Shortest Path First (OSPF) or Enhanced Interior Gateway Routing Protocol (Enhanced IGRP) is used as the fast-converging routing protocol.

Also shown in Figure 13 is the placement of the enterprise servers, attached via Fast Ethernet with single VLAN membership. Servers, like any IP end station, need a default gateway (a router's IP address) to follow a path off the subnet. In this case, there are two possible paths, so techniques such as Proxy Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP) redirects, or servers receiving routing information are used to find the correct route into the network. More is said later about the functions of enterprise servers; first, an alternate core design is examined.

Core Design with Integrated Routing in Switches

An alternative technology available to perform the function of the core layer is the Layer 3 switching engine for the Catalyst 5000 series of switches, called the Route Switch Module (RSM). Using an RSM distributes the Layer 3 NetFlow switching horsepower from the 14 VIP2 cards on the Cisco 7500 series routers to each of the 14 distribution-layer Catalyst 5000 switches. (See Figure 14.) The new icon shown in Figure 14 represents a Catalyst 5000 series switch with an integrated RSM. The aggregate Layer 3 switching performance in this network design is over 2 million packets per second. Figure 15 shows how the Route Switch Module works in detail.

Figure 14: Core Design Using the Route Switch Module (the 80 access-layer Catalyst 5000 switches connect over Fast Ethernet ISL to 14 distribution-layer Catalyst 5000 switches, each with an integrated RSM)

Figure 15: RSM Core Redundancy Detail (each RSM has backplane interfaces in VLANs 2-13 of its building block; one RSM is HSRP primary for the even VLANs and the other for the odd VLANs)

Following the pattern of the standalone-router case, the core of the network is interconnected with Catalyst Layer 2 switches using the same subnet approach. (See Figure 16.) The RSM effectively has interfaces that touch the backplane in each of its configured VLANs. The leftmost RSM, for example, has interfaces in each of VLANs 2 through 13 and acts as the primary HSRP router for the even VLANs. The right-hand RSM also has interfaces in VLANs 2 through 13 and is the HSRP primary router for the odd VLANs. Each RSM backs the other up in failure conditions. A configuration sketch for one RSM follows Figure 16.

Enterprise Server Strategy

Enterprise servers are servers that are typically used by large numbers of users in a campus, where the community of interest from a traffic perspective is the whole enterprise. In Cisco's network, some examples of enterprise servers include e-mail, Web servers, meeting-scheduling software, and IP multicast servers, as well as service functions such as DHCP and Domain Name System (DNS). It is anticipated that over time, as servers become more powerful, support more users, and Layer 3 forwarding is implemented in hardware, most servers will migrate from workgroup status to enterprise status.

Figure 16: RSM Core Interconnection (the distribution-layer Catalyst 5000 switches with RSMs connect over Fast Ethernet or Fast EtherChannel to a pair of core Catalyst 5000 switches carrying the common subnets VLAN 100 and VLAN 101)
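Returning to the RSM-based core, the following is a minimal sketch of what the per-VLAN interfaces on the leftmost RSM might look like. The syntax is standard Cisco IOS as used on the RSM; the addresses, HSRP groups, and priorities are illustrative assumptions.

   ! RSM interface for even VLAN 2: this RSM is the HSRP primary
   interface Vlan2
    ip address 10.0.2.2 255.255.255.0
    standby 2 ip 10.0.2.1
    standby 2 priority 110
    standby 2 preempt
   !
   ! RSM interface for odd VLAN 3: the partner RSM is primary for this VLAN
   interface Vlan3
    ip address 10.0.3.3 255.255.255.0
    standby 3 ip 10.0.3.1
    standby 3 priority 90
    standby 3 preempt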

Support for IP Multicast

As IP multicast applications such as Microsoft NetShow and Precept IP/TV proliferate throughout the enterprise, strong support for intelligent multicast forwarding in the network is critical. With the combination of Internet Group Management Protocol (IGMP) support on end stations and Cisco Group Management Protocol (CGMP) on the routers and switches, this can be accomplished as shown in Figure 17. (The standalone-router case is shown for clarity, but the approach is equally applicable to the RSM core design.) Note that CGMP would be enabled on both the distribution- and access-layer switches for optimal multicast forwarding. Here, multicast traffic on the Catalyst switches is flooded only out ports on a path toward end stations that are members of that multicast group.

Figure 17: IP Multicast Support (a client station running IP/TV sends IGMP group joins; the Catalyst 5000 switch, with CGMP enabled, learns group membership from CGMP messages sent by the Cisco 7500 router running PIM, which forwards traffic from the IP multicast server)
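A minimal sketch of the pieces involved, assuming PIM dense mode on the router and CGMP on the switches (interface numbers and addresses are hypothetical; sparse mode with a rendezvous point is an equally valid choice not shown here):

   ! On the Cisco 7500 (or RSM): enable multicast routing, then PIM and
   ! CGMP on each VLAN interface that has multicast receivers or sources
   ip multicast-routing
   !
   interface FastEthernet1/0/0.2
    encapsulation isl 2
    ip address 10.0.2.2 255.255.255.0
    ip pim dense-mode
    ip cgmp

On the Catalyst switches, a single CatOS command lets CGMP constrain multicast flooding to group members:

   set cgmp enable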

Legacy and Outside Network Connectivity

As outlined at the beginning of the document, there is a legacy network with other network media (Token Ring) and other protocols (IPX, NetBIOS, AppleTalk), and it needs to be connected to the new network. For illustrative purposes, this is shown on the standalone-router model. (See Figure 18.) The following connectivity exists at the core layer:

- Internet access via a router (or more than one for redundancy) and an address translation/firewall device such as the PIX Firewall
- WAN connections to the core layer via routers, including other WAN sites, analog dialup access, and ISDN remote access
- Legacy network access for routed protocols (IPX, AppleTalk) using routers at the core layer (Note that it is common to physically implement this configuration by creating another building block that consists of external network connectivity, instead of connecting directly to the core as shown in the diagram.)

To connect the Token Ring NetBIOS traffic to the new switched Ethernet side of the network, translational bridging must be performed at the distribution layer into the VLAN that contains the NetBIOS Ethernet clients. This can best be accomplished with a standalone Cisco 4000 series router or via Fiber Distributed Data Interface (FDDI) using Catalyst 1800 Token Ring switches.

Figure 18: Connectivity to the Outside (the core routers and Catalyst 5002 switches connect to the legacy Token Ring network, bridged through a Cisco 4000 router or a Catalyst 1800 with FDDI and routed through core routers, to the Internet through a firewall, and to the WAN; the workgroup and enterprise servers remain attached as before)

Fast Ethernet Core Design Summary

This switched network therefore has two possible implementations, which differ in the core-layer switching technology used. In Figure 19, standalone Cisco 7500 series routers provide the core-layer scalability. Using the Route Switch Module, the final design looks like the configuration shown in Figure 20.

Figure 19: Fast Ethernet Final Design Using Standalone Routers (access-layer Catalyst 5000 switches, Fast Ethernet ISL uplinks to the distribution layer, Cisco 7500 core routers interconnected through Catalyst 5002 switches, and connections to the legacy Token Ring network, the Internet, the WAN, and the workgroup and enterprise servers)

Figure 20: Fast Ethernet Final Design with RSM (the same topology with the core-layer routing integrated into the distribution-layer Catalyst 5000 switches as RSMs)

Redundancy Analysis

Now some redundancy scenarios are examined, and the mechanisms in use to achieve the goal of network uptime are illustrated. As a reminder, no more than 100 users should be impacted when any link or device fails. (The only failure that would impact 100 users is the loss of an access switch, and this failure is the same as losing a shared hub in today's network.) For this example, a conversation between stations on VLAN 2 and VLAN 5 is considered. The paths shown are correct, based on the information earlier in this document. Again, for clarity, only the case of the standalone-router design is shown. In normal steady-state conditions, the forwarding path between VLANs 2 and 5 is shown in Figure 21.

Figure 21: Redundancy Analysis, Standalone Router Case (the normal forwarding path for a VLAN 2 to VLAN 5 conversation)

Figure 22 shows what happens when the primary core router fails. Here, the HSRP mechanism determines that the HSRP primary router has gone away, and the backup router automatically takes over the forwarding of frames.

Figure 22: Fail Primary Core Router (the VLAN 2 to VLAN 5 traffic shifts to the HSRP backup router)

Figure 23: Fail Distribution Switch (one possible rerouted VLAN 2 to VLAN 5 path after a distribution-layer switch failure, discussed below)

In the next example, consider the starting point of Figure 21 and fail one of the distribution-layer switches. Now, convergence occurs between the access- and distribution-layer switches using spanning tree, and HSRP is used on the routers to converge the core. Figure 23 shows one of the possible rerouted paths between the two routers. Adjustment of interface port costs can easily tune the desired backup path. Also, if this failure analysis were done for the RSM case, the reroute mechanism would be much more straightforward.

Overall Design Comments

The designs shown herein have been kept simple for good reasons. In general, it is much easier in the long term to build a scalable and manageable network if you start with basic principles and use the complex software tools to tune the network and to help relieve you from the inevitable corner cases that users create. Scaling this network is straightforward with a hierarchical building-block approach as well. In most cases, each building block represents a group of users in a building, so with this approach, adding more buildings to the network without upsetting its overall balance is very simple. The choice of standalone routers or the RSM in the core layer is a function of the existing environment.

ATM Core Design

The Ethernet-based core design and the ATM core design are similar, so the layers of the network that are common with the Fast Ethernet design are not covered in as much detail. The ATM part of the network, however, is described in detail.

Access Layer and VLAN Assignment

The access layer and VLAN allocation are exactly the same as in the Ethernet-based design. A total of 80 Catalyst 5000 series switches with 100 users each, and 80 VLANs with 100 users per VLAN, are configured. (See Figure 24.)

Figure 24: Access Layer and VLAN Allocation (identical to Figure 2: 80 Catalyst 5000 access switches carrying VLAN pairs 2/3 through 80/81)

Access Layer Connection to ATM Distribution

Each wiring closet switch has an ATM OC-3 (155-Mbps) uplink to the distribution-layer LightStream 1010 ATM switches, as shown in Figure 25. The OC-3 ATM module on the Catalyst 5000 has a dual physical sublayer (PHY) connection to allow redundant attachment to two different ATM switches. The active link is shown with a solid line, and the backup is shown dotted. Here, each of the eight LightStream 1010 switches uses 20 OC-3 interfaces (out of a total of 32). Later, the switches will be connected to each other with the remaining unused interfaces.

Examining this configuration in more detail shows the placement of the LAN Emulation Clients (LECs). (See Figure 26.) On a given Catalyst 5000 ATM interface, you need to create a LEC for each VLAN that exists on the Ethernet interfaces. Therefore, the first and second access-layer Catalyst switches have LECs for VLANs 2 and 3 only. The active links are both connected to the same ATM switch by design. Upon failure of the primary link, the backup link automatically picks up the LEC association of the primary link without any special configuration. Another point to note here is that, even with the ATM switches interconnected with multiple links (shown later), this network has no bridge loops. Often people will disable spanning tree on the Catalyst switches in this scenario and rely on the convergence properties of ATM to route around failures. Of course, if spanning tree is disabled, you must ensure that no other Layer 2 loops are possible via any other links outside of this picture.

Figure 25: ATM Distribution Connectivity (access-layer Catalyst switches uplinked over OC-3 ATM to LightStream 1010/Catalyst 5500 distribution switches; expanded view in Figure 26)

Figure 26: Distribution Detail with LEC Placement (each access switch's ATM module runs the LECs for its two VLANs, for example LEC-2 and LEC-3, with the OC-3 dual PHY providing a backup connection to a second ATM switch)
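As a rough sketch of the LEC side on the access switch's LANE module, assuming the ELANs are simply named after their VLAN numbers (elan2, elan3) and that PHY A is the preferred uplink; the signaling and ILMI PVC values are the conventional ones, and everything else here is illustrative rather than taken from the case study:

   ! Catalyst 5000 LANE module on the first access-layer switch
   interface ATM0
    atm preferred phy A
    atm pvc 1 0 5 qsaal
    atm pvc 2 0 16 ilmi
   !
   interface ATM0.1 multipoint
    lane client ethernet 2 elan2
   !
   interface ATM0.2 multipoint
    lane client ethernet 3 elan3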

LANE requires the use of service entities (the LAN Emulation Configuration Server [LECS], the LAN Emulation Server [LES], and the Broadcast-and-Unknown Server [BUS]), and each of these entities needs to be provisioned in the design. The LECS is provisioned later; Figure 27 shows the placement of the LES/BUS pairs. The best place to provision the LES/BUS pairs is on the Catalyst 5000 LANE module. Each LANE card is the LES/BUS for one ELAN and is a backup for its partner Catalyst switch, as shown in Figure 28. Cisco's Simple Server Redundancy Protocol (SSRP) is used to ensure that, in the event of a LANE service failure, a backup is ready to take over.

Figure 27: LANE Services Design (the LES/BUS for each ELAN runs on one access switch's LANE module, with the backup LES/BUS for that ELAN on its partner switch; BKP = backup)

Figure 28: Redundancy Analysis for Uplink Failure (when the primary OC-3 uplink fails, the dual-PHY backup link becomes the new primary LEC connection and SSRP activates the backup LES/BUS)

Bandwidth Scaling

The bandwidth between the access and core layers can be scaled by two methods. First, as shown in Figure 29, an additional ATM uplink can be added and the LECs split so that each uplink carries only a single LEC, making both VLANs 2 and 3 forwarding links. A future solution will be to use a single OC-12 (622-Mbps) ATM interface as the uplink from the access layer; this module is scheduled to be available in late 1997.

Figure 29: Scaling the Uplink Bandwidth (LEC-2 and LEC-3 are placed on separate OC-3 uplinks, each with its own dual-PHY backup)

Redundancy Analysis

If an ATM uplink fails, two mechanisms bring the network back up. (See Figure 28.) First, the dual-PHY backup automatically becomes the LEC in place of the failed link. Since LANE services are also provisioned on that interface, SSRP moves the LANE services to the second Catalyst switch from the left. (Note that the ATM switches are, in fact, interconnected, but those connections are not shown.) If an entire switch were to fail, the dual-PHY interfaces would become the active LECs, but it would still be necessary to provision redundant LANE services on other Catalyst switches. Although not shown in Figure 28, more LES/BUS backups could be programmed into the other pair of Catalyst switches.
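Continuing the hedged LANE-module sketch from the previous section, the LES/BUS for an ELAN can be co-located with that ELAN's LEC on the same subinterface, with the partner switch carrying the backup. SSRP selects between them because both LES addresses are listed, in priority order, in the LECS database; the ELAN names remain illustrative.

   ! On the LANE module that is primary LES/BUS for elan2
   interface ATM0.1 multipoint
    lane server-bus ethernet elan2
    lane client ethernet 2 elan2
   !
   ! The partner switch's LANE module carries the same two commands for
   ! elan2; SSRP activates whichever listed LES is reachable first.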

Workgroup Server Strategy

The strategy for workgroup servers is essentially the same as in the Fast Ethernet case, except that now the implementation offers choices. (See Figure 30.) Most ATM NICs offer the capability of configuring multiple ELAN memberships on a single card. The reason for this configuration is to maintain a Layer 2 path to the server for those users requiring high throughput rates. Again, six to ten ELANs on a single server NIC typically represents a conservative design choice. Often network managers do not want the complexity involved in using ATM NICs, and another option for server connectivity is to use Fast Ethernet. In this case, the LightStream 1010 in the diagram would probably best be implemented inside the Catalyst 5500 chassis to offer the convenience of mixed-media connectivity in one platform. Now, servers can be attached via Fast Ethernet (using ISL if required) and networked to the ATM fabric using OC-3 or OC-12 (future) LANE modules.

Figure 30: Workgroup Server Placement (workgroup servers attach to the LightStream 1010/Catalyst 5500 distribution switches either directly over ATM or over Fast Ethernet)

Core ATM

With eight LightStream 1010 switches at the distribution layer, there is a need to interconnect all these devices in a failure-tolerant manner. Redundant OC-12 core links are used, as shown in Figure 31.

Figure 31: Core ATM (the 80 Catalyst 5000 access switches connect over OC-3 ATM to the LightStream 1010/Catalyst 5500 distribution switches, which connect over OC-12 ATM to the core ATM switches)

Each distribution ATM switch has two OC-12 links to each of two core ATM switches. Each core ATM switch has two OC-12 links to two other core switches, creating a redundant core that converges using the Private Network-Network Interface (PNNI) routing protocol in the event of link or switch failure. Again, these core switches can exist as standalone LightStream 1010 switches or, alternatively, as part of a Catalyst 5500 chassis.

Scaling the ATM Core Bandwidth

As shown in Figure 32, bandwidth can be scaled easily between ATM switches simply by adding additional OC-3 or OC-12 links, as required. PNNI finds the best paths for call setup and proportionately load balances across links.

Figure 32: Scaling the Core (additional OC-12 or OC-3 links are added between ATM switches)

Inter-ELAN Routing

So far, this design has not provisioned communication between VLANs. This type of communication requires a router that has ELAN membership across a number of the ELANs. Figure 33 shows four Cisco 7500 series routers provisioned to provide the throughput and redundancy needed for inter-ELAN routing.

Figure 33: Inter-ELAN Routing (the Catalyst 5000 access switches, LightStream 1010/Catalyst 5500 distribution switches, workgroup servers, the OC-12 ATM core, and four Cisco 7500 routers attached to the core ATM switches over OC-3 ATM)

Two important considerations must be made when sizing the inter-ELAN routing:

- Total bandwidth required for inter-ELAN traffic
- Total throughput, in packets per second (pps), required for inter-ELAN traffic

For the present design, four OC-3 (155-Mbps) ATM trunks are configured, with the following packet-per-second budgets (measured by counting packets in and out of the same ATM interface):

- Cisco 7500 series with ATM Interface Processor (AIP): 55 kpps
- Cisco 7500 series with ATM Versatile Interface Processor (VIP): 80 to 85 kpps

Figure 35 illustrates how the VLANs are allocated per router and how the load is shared; but first, the allocation of ELANs across the distribution-layer ATM switches is considered. The left half of the network (Figure 34) shows the associated ELANs touching the corresponding distribution-layer ATM switches; the right half of the network (not shown) follows the same pattern. This level of detail provides for an efficient choice of which router, attached to which core ATM switch, should act as the primary HSRP default gateway for each of the ELANs.

Figure 34: Core ATM ELAN Detail (the eight distribution-layer LightStream 1010 switches, each with 20 OC-3 access links and two OC-12 links into the four core LightStream 1010 switches; the ELANs attached to the left-half distribution switches are 2, 3, 6, 7, 10, 11, 14, 15, 18, 19; 4, 5, 8, 9, 12, 13, 16, 17, 20, 21; 22, 23, 26, 27, 30, 31, 34, 35, 38, 39; and 24, 25, 28, 29, 32, 33, 36, 37, 40, 41)

Note that all four of the routers have ELAN membership in all 80 ELANs. Each router is chosen via HSRP to be the primary default gateway on 20 of the ELANs. These ELANs are chosen based on having the most direct path from the ELAN membership highlighted in Figure 34. With four routers in 80 ELANs, there would be hundreds of router adjacencies to be managed by each router. To optimize this situation, one would typically configure passive adjacencies on all but about six of the ELANs. Additionally, the LECS component of LANE services is placed here. Each router can act as an LECS and, using SSRP, only one LECS at a time is the master for all the ELANs.

Figure 35: Inter-ELAN Routing Detail (each of the four Cisco 7500 routers, attached over OC-3 ATM, is the HSRP primary for 20 ELANs, covering ranges 2-21, 22-41, 42-61, and 62-81 respectively; the routers also host the LANE LECS)

Legacy and Outside Network Connectivity

Rounding out the design produces the same type of outside network connectivity as in the Fast Ethernet case. (See Figure 36.) As can be seen from the diagram, the same routers that provide inter-ELAN routing are chosen to provide WAN connectivity. This setup is convenient in this case, but considerations such as bandwidth, throughput, CPU utilization, and port density must be weighed before using these routers to perform both tasks. Enterprise servers can be attached with ATM interfaces or by using LAN interfaces behind routers or LAN switches.
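Before turning to the SVC budget calculations below, here is a minimal, illustrative sketch of how one of these four routers might join one ELAN and act as its HSRP primary gateway. The ELAN name, addresses, group numbers, and interface numbering are assumptions for illustration; the LECS database that defines the ELANs (and provides the SSRP ordering for the LECS and LES) would be configured separately. Note that on a router, the LANE client is bound by ELAN name rather than by VLAN number.

   ! Cisco 7500 ATM interface toward the core ATM switches
   interface ATM1/0
    atm pvc 1 0 5 qsaal
    atm pvc 2 0 16 ilmi
   !
   ! One subinterface per ELAN; this router is HSRP primary for elan2
   interface ATM1/0.2 multipoint
    ip address 10.0.2.4 255.255.255.0
    lane client ethernet elan2
    standby 2 ip 10.0.2.1
    standby 2 priority 110
    standby 2 preempt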

LANE Switched Virtual Circuit Budget Calculations

When building an ATM network of any kind, it is always a good idea to calculate the worst-case quantity of switched virtual circuits (SVCs) to ensure that you are operating within the bounds of the ATM switching equipment. The following LECs are on each ELAN:

- Two Catalyst 5000 series LECs
- Four Cisco 7500 LECs
- Four workgroup server LECs (assumption)

The maximum number of data-direct virtual channel connections (VCCs) per ELAN is (10*9)/2, or 45. The overhead SVCs are 20 point-to-point SVCs and 2 point-to-multipoint SVCs. Since the total number of ELANs is 80, the total SVC counts are as follows:

- Total point-to-point SVCs = 80 * (45 + 20) = 5200
- Total point-to-multipoint SVCs = 80 * 2 = 160
- Total SVCs per router = 80 + 80 + (80 * 9) = 880

The maximum number of SVCs supported in a single LightStream 1010 switch is 32,000, so only 5200 SVCs spread across 12 switches is well within the limits. The router SVC numbers are also calculated because routers are points of concentration for numerous ELANs. An AIP can handle up to 2048 SVCs, so again, even the worst case is well within the SVC budget constraints.

Figure 36: ATM LANE Backbone Final Design (Catalyst 5000 access switches uplinked over OC-3 ATM to LightStream 1010/Catalyst 5500 distribution switches, an OC-12 ATM core, four Cisco 7500 routers for inter-ELAN routing, and connections to the legacy Token Ring network, bridged and routed, to the Internet through a firewall, to the WAN, and to the workgroup and enterprise servers)

A few design points should be considered in order to keep the total SVC count under control:

- Contain the scope of VLAN membership. If you keep creating LECs on all switches for a single VLAN, the data-direct SVC count quickly gets out of control because it scales as n squared (where n is the number of LECs in a single VLAN). It may be shortsighted to do this anyway, given the migration path to Multiprotocol over ATM (MPOA), which is discussed later.
- Check to make sure that VLAN Trunk Protocol (VTP) is running in transparent mode on all Catalyst switches. This ensures that LECs are not created accidentally, driving up the SVC totals.
- If necessary, to lower the total number of SVCs, decrease the VLAN size and create more ELANs so as to lower the number of LECs in a single VLAN. (With the network designed as shown here, this step would not be required.)

Redundancy Analysis

Again, traffic rerouting in the presence of failure conditions is considered. The normal forwarding path of a conversation between a client in VLAN 2 and a client in VLAN 5 is shown in Figure 37.

Figure 37: Redundancy Analysis (the normal VC data path for a VLAN 2 to VLAN 5 conversation over the OC-3 ATM uplinks, through the primary HSRP router for VLAN 2; the first HSRP backup for VLAN 2 is also shown)

Traffic from VLAN 2 follows an SVC to its default gateway router and is, in turn, sent back on another SVC to the appropriate Catalyst switch containing the client in VLAN 5. Now, if one of the core ATM switches that the call was passing through fails, a new SVC is established as shown in Figure 38. Because the primary HSRP router for VLAN 2 is now disconnected from the network, HSRP on the next router takes over. The new SVC also has to be set up through another core ATM switch that still provides a path to the router. The ATM convergence is achieved using PNNI. Starting again from the original diagram, if one of the distribution-layer ATM switches fails, the call is again rerouted using another path. (See Figure 39.)

Figure 38: Fail Core ATM Switch or Router (the failed core switch disconnects the original primary router, so a new VC data path is set up through another core ATM switch to the new HSRP primary for VLAN 2)

The dual PHY on the switch carrying VLAN 5, as well as PNNI on the switches, is used to find the best call setup route. In this instance, it would also be necessary to have a backup LES/BUS configured on neighboring Catalyst switches so that SSRP can establish a new LES/BUS for ELAN 5.

Figure 39: Fail Distribution ATM Switch (a new VC data path from VLAN 2 to VLAN 5 is set up over the dual-PHY backup link, still through the primary router for VLAN 2; the first HSRP backup for VLAN 2 is unaffected)

Migrating the ATM Network with MPOA

The migration path for the ATM campus backbones of today is clearly MPOA, a protocol that leverages the LANE standard by using LANE for communication within an emulated LAN. Catalyst 5000 series ATM interfaces become MPOA Clients (MPCs), and Cisco 7500 series router ATM interfaces take on the role of MPOA Servers (MPSs). Using MPOA, the same VLAN 2 to VLAN 5 conversation data path is shown in Figure 40 for comparison purposes.

Figure 40: Example MPOA Cut-Through Path (with the Catalyst ATM interfaces acting as MPCs and the router acting as the MPOA Server [MPS], the VLAN 2 to VLAN 5 conversation follows a cut-through data path directly between the switches, bypassing the router hop)

Overall Network Design Summary

Two alternative network designs based on a realistic set of network design criteria were proposed. One was based on Ethernet to the desktop with a Fast Ethernet core, and the other had ATM as the core technology. Either design can be used to achieve the overall network design and operational goals. The advantages often considered for each of the technologies include:

Fast Ethernet Backbone Advantages
- Simple technology
- High density and low cost
- Simple server attachment with Fast Ethernet
- Easy migration to Fast EtherChannel and Gigabit Ethernet

ATM Backbone Advantages
- Fast-converging core
- Backbone capable of carrying voice/video from legacy systems
- 622-Mbps core speeds available today (since 1996)
- Standards-based LANE and MPOA

No matter what the choice of technology, nothing beats the scalability, flexibility, and manageability of a hierarchical network design that uses simple building blocks and combines the best advantages of both Layer 2 and Layer 3 forwarding.

Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706 USA
World Wide Web URL: http://www.cisco.com
Tel: 408 526-4000 / 800 553-NETS (6387)
Fax: 408 526-4100

European Headquarters
Cisco Systems Europe s.a.r.l.
Parc Evolic-Batiment L1/L2
16, Avenue du Quebec
BP 706-Villebon
91961 Courtaboeuf Cedex, France
Tel: 33 1 6918 61 00
Fax: 33 1 6928 83 26

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706 USA
Tel: 408 526-7660
Fax: 408 526-4646

Asia Headquarters
Nihon Cisco Systems K.K.
Fuji Building, 3-2-3 Marunouchi
Chiyoda-ku, Tokyo 100, Japan
Tel: 81 3 5219 6000
Fax: 81 3 5219 6010

Cisco Systems has more than 190 offices in the following countries. Addresses, phone numbers, and fax numbers are listed on the Cisco Connection Online Web site at http://www.cisco.com.

Argentina, Australia, Austria, Belgium, Brazil, Canada, Chile, China (PRC), Colombia, Costa Rica, Czech Republic, Denmark, Finland, France, Germany, Hong Kong, Hungary, India, Indonesia, Ireland, Israel, Italy, Japan, Korea, Malaysia, Mexico, The Netherlands, New Zealand, Norway, Philippines, Poland, Portugal, Russia, Singapore, South Africa, Spain, Sweden, Switzerland, Taiwan (ROC), Thailand, United Arab Emirates, United Kingdom, Venezuela

Copyright 1997 Cisco Systems, Inc. All rights reserved. Printed in USA. Cisco IOS, NetFlow, and PIX are trademarks; and Catalyst, Cisco, the Cisco logo, Cisco Systems, EtherChannel, and LightStream are registered trademarks of Cisco Systems, Inc. All other trademarks, service marks, registered trademarks, or registered service marks mentioned in this document are the property of their respective owners. 1296R 6/97 LW