Building Core Networks and Routers in the 2002 Economy
June 2002
David Ward, Cisco Systems, Inc. (dward@cisco.com)
Internet Backbone Growth: Key Inflections & Trends

[Chart: provider PoP capacities on a log scale, roughly 1 Gbps in mid-1997 growing toward ~10 Tbps projected by end of 2004, annotated with milestones:]
- Focus on IP backbones keeping pace with the optical core
- Emergence of IP over SONET; DWDM in the core; IP becomes the dominant traffic; n x OC48 backbones
- Core-to-edge transition; IP backbones at 10G; focus shifts to the services edge
- Technology milestone: convergence at 10G of fibers, optics, and routers
- Integration of IP and optics; n x 10G followed by 40G; PoP consolidation (core + edge)
Internet Architecture Evolution

- Early 1990s, IP over TDM (IP / PPP-HDLC / SONET-SDH / fiber): DS3 overlay; voice dominates.
- 1994-95, IP over ATM (IP / ATM-FR / SONET-SDH / fiber): ATM at 155 Mbps; Internet commercialization; voice still dominates; web-based applications appear.
- 1996-97, IP over SONET (IP / POS / SONET-SDH / fiber): POS at 622 Mbps; IP at OC12/STM4; rapid growth in IP traffic recognized; Internet Data Centers.
- 1998-99, IP over DWDM (IP / DWDM / fiber): DWDM at 2.5 Gbps; routers connect to the fiber directly; IP dominates; backbone buildouts.
- 2000-02, IP at 10G (IP / OXC / DWDM / fiber): POS over DWDM channels; sweet spot at 10G for fiber, optics, and IP; everything over IP; new services on the edge.
Requirements

- VPN (MPLS + IP encapsulation)
- Small footprint, low power/HVAC
- 10 -> 40 Gbps, improved performance
- Easy to manage
- Flexibility, sense of urgency
- Features, low prices
- High Availability, QoS
- Early deliveries
- Anything else?
The Net: Core Section

[Diagram: the core section of the network interconnecting multiple WANs]
A Generic Backbone

[Diagram: six PoPs, A through F, interconnected as a backbone]
One Way of Interconnecting Boxes: Lots of Mesh, Need TE

[Diagram: a small, heavily meshed backbone with OC48/STM16 (2.4 Gbps) core links and OC3/STM1 (155 Mbps) access links; the point of policy sits in the mesh]
Another Way of Interconnecting Boxes: Goal = Reduce Meshiness

[Diagram: PoPs A and B joined by an access ring with working (W) and protect (P) paths, Working-1 and Working-2; the point of policy performs rate limiting and granular scheduling]
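The rate limiting applied at the point of policy is classically a token-bucket policer. A minimal sketch in Python (the class and its parameters are illustrative, not any vendor's implementation; timestamps are passed explicitly to keep the example deterministic):

```python
class TokenBucket:
    """Toy token-bucket rate limiter: a packet is admitted only if the
    bucket holds enough byte tokens, which refill at the policed rate."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # refill rate in bytes/second
        self.burst = burst_bytes         # bucket depth (max burst)
        self.tokens = float(burst_bytes)
        self.last = 0.0                  # time of last refill

    def allow(self, pkt_bytes, now):
        """Admit or drop a packet arriving at time `now` (seconds)."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False


# Police to 8 kb/s (1000 bytes/s) with a 1000-byte burst allowance.
tb = TokenBucket(rate_bps=8000, burst_bytes=1000)
print(tb.allow(600, now=0.0))   # burst credit available -> True
print(tb.allow(600, now=0.0))   # only 400 tokens left   -> False
print(tb.allow(600, now=0.5))   # 0.5 s refills 500      -> True
```

Granular scheduling (per-class queues, WFQ) would sit behind a policer like this, but the admit/drop decision above is the core of the rate limit.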
PoPs Interconnected

[Diagram: four PoPs, each serving data centers and WAN peering, tied together with OC192 POS links, OC48 POS links, and an OC12 SRP ring (DPT)]
It's All About Building a Better Network

[Diagram: three stacked overlay meshes (R1-R3, G1-G3, B1-B3 above node rows X1-X8, Y1-Y8, Z1-Z8) illustrating:]
- Multiple tables
- Multiple topologies
- Multiple overlay networks
- Multiple address families
- Multiple logical routers
Conflicts

- Dumb packet pushers vs. all the features at all locations
- One box, chipset, and codebase that covers all network niches
- We tell each other that Internet routers are simple: all a router does is make a forwarding decision, update a header, then forward the packet out the correct interface. We know the Internet isn't this simple either.
- Some newer features: multicast; IPv6; DiffServ, IntServ, priorities, WFQ, etc.; latency requirements; packet sequencing
- Others: drop policies, VPNs, ACLs, DoS traceback, measurement, statistics, ...
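The "simple" forwarding decision above is a longest-prefix-match lookup against the FIB. A toy sketch (the prefixes and interface names are made up, and real routers use tries or TCAMs rather than a linear scan):

```python
import ipaddress

# Hypothetical FIB: (prefix, outgoing interface) pairs.
FIB = [
    (ipaddress.ip_network("10.0.0.0/8"),  "ge-0/1"),
    (ipaddress.ip_network("10.1.0.0/16"), "ge-0/2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "ge-0/0"),  # default route
]

def lookup(dst):
    """Return the interface of the longest prefix covering dst."""
    addr = ipaddress.ip_address(dst)
    best = max((net.prefixlen, ifc) for net, ifc in FIB if addr in net)
    return best[1]   # longest (most specific) prefix wins

print(lookup("10.1.2.3"))   # most specific /16 wins -> ge-0/2
print(lookup("10.9.9.9"))   # covered only by the /8 -> ge-0/1
print(lookup("8.8.8.8"))    # falls to the default   -> ge-0/0
```

The point of the slide is that this lookup is now the easy part; the feature list wrapped around it (ACLs, QoS, VPNs, traceback) is what makes the box hard.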
Software

Software on a router used to do three things:
- Forward packets: IPv4 unicast
- In the correct direction: route me
- Make it easy for the operator to engineer traffic: CLI

Now:
- Forward packets and perform tens of treatments, statistics, etc.: v4 unicast && v4 multicast && v6 unicast && v6 multicast && MPLS && tunnels && optical control && traffic engineering
- Route, but allow for selectable algorithms depending on topology, adjacency, and N-level policy search
- Converge instantaneously
- Provide an end-to-end manageability solution with guru hooks, point-and-click, and infinite storage for statistics and packet sniffing
It All Makes Perfect Sense

- New regulations; security
- New services, or different services, to win customers
- Converging infrastructure: can't afford an overlay deployment strategy
- Acquisitions and a converging Internet
- New PoP architecture to reduce capex
- New manageability architecture to reduce opex
- Better billing and statistics systems
- Must have a packet-based system with the same High Availability as current circuit-delivered services
- Reduce maintenance windows; increase uptime
Some of the Hard Stuff

- Keeping up with Moore's Law for all the parts: the bottleneck is memory speed, and memory speed is not keeping up with Moore's Law.
- It's all about the software: the software is the router. Features, High Availability, lines of code, testing.
- The control plane is an asynchronous, distributed system. The closest analogy is a supercomputer: not peer-to-peer, not grid, not clusters, but specialized for network processing and routing convergence.
- Lack of simplicity in network design, driven by complex service offerings, leads to complex devices.
It's All About the Hardware

[Diagram: router hardware datapath centered on the scheduler]
Packet Buffers

- 10 Gbits of buffer memory behind a buffer manager; write rate R and read rate R, each one 40B packet every 8 ns.
- Use SRAM? Fast enough random access time, but too low density to store 10 Gbits of data.
- Use DRAM? High density means we can store the data, but it can't meet the random access time.
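The numbers on this slide follow from the classic buffer-sizing rule of thumb (buffer = RTT x line rate) and simple arithmetic on packet arrival intervals. A quick check, assuming a 40 Gb/s line and a 0.25 s round-trip time (the RTT figure is an assumption consistent with the slide, not stated on it):

```python
def buffer_bits(line_rate_bps, rtt_s):
    """Rule-of-thumb buffer size: one round-trip time of traffic."""
    return line_rate_bps * rtt_s

def pkt_interval_ns(line_rate_bps, pkt_bytes):
    """Back-to-back arrival interval for packets of a given size."""
    return pkt_bytes * 8 / line_rate_bps * 1e9

# A 40 Gb/s line with an assumed 0.25 s RTT needs ~10 Gbits of buffer,
# and minimum-size 40 B packets arrive once every 8 ns.
print(buffer_bits(40e9, 0.25) / 1e9)   # Gbits of buffer
print(pkt_interval_ns(40e9, 40))       # ns per packet
```

So the memory must be both large (10 Gbits, DRAM territory) and randomly accessible every 8 ns (SRAM territory), which is exactly the conflict the slide poses.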
Memory Bandwidth

[Chart, 1980-2001, access time on a log scale, with growth-rate annotations: line capacity 2x every 7 months; router capacity 2.2x every 18 months; Moore's Law 2x every 18 months; DRAM only 1.1x every 18 months]
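Compounding the growth rates annotated on the chart shows why memory is the bottleneck. A quick sketch (the 60-month horizon is an arbitrary choice for illustration):

```python
def growth(factor, period_months, horizon_months=60):
    """Compound a per-period growth factor over a horizon."""
    return factor ** (horizon_months / period_months)

line   = growth(2.0, 7)    # line capacity: 2x every 7 months
router = growth(2.2, 18)   # router capacity: 2.2x every 18 months
dram   = growth(1.1, 18)   # DRAM access speed: 1.1x every 18 months

# Over five years the line grows by roughly 380x and the router by
# roughly 14x, while DRAM access speed improves by less than 1.4x:
# a gap of two-plus orders of magnitude for memory designs to absorb.
print(round(line), round(router, 1), round(dram, 2))
```

This is the quantitative version of the earlier bullet that memory speed is not keeping up with Moore's Law.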
Packet Processing

[Chart: CPU instructions available per minimum-length packet, 1996-2001, falling roughly two orders of magnitude on a log scale]
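The downward trend on this chart is arithmetic: packet arrival rates grow with the line rate while CPU clocks grow more slowly, so the instruction budget per packet shrinks. A sketch, with illustrative CPU figures (the 1 GHz single-issue CPU is an assumption, not a number from the slide):

```python
def instr_budget(cpu_hz, ipc, line_rate_bps, pkt_bytes=40):
    """Instructions a CPU can issue during one minimum-size packet
    arrival interval at a given line rate."""
    pkt_time_s = pkt_bytes * 8 / line_rate_bps
    return cpu_hz * ipc * pkt_time_s

# Assuming a 1 GHz CPU issuing 1 instruction per cycle:
print(instr_budget(1e9, 1, 2.5e9))   # OC-48: 128 instructions/packet
print(instr_budget(1e9, 1, 10e9))    # OC-192: only 32
```

Each 4x step in line rate cuts the per-packet budget 4x, which is why feature-rich forwarding moved from general-purpose CPUs into ASICs and network processors.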
Summary

- CPU speeds and memory access issues: channeling Seymour
- Power/heat, cost, size, weight
- Control-plane complexities; the feature list is always getting longer and rapidly changing
- The economy is hosed; growth is slower
- A big, dumb core box is not going to build networks with the required services
- Most core routers become the new edge, or have customers
- In the reduced-mesh case, core routers need all the features
- It never was easy to build a router, and it never was easy to build a network; if it was...
** Cisco Confidential **