ESnet Update
Winter 2008 Joint Techs Workshop
Joe Burrescia, ESnet General Manager
Energy Sciences Network, Lawrence Berkeley National Laboratory
January 21, 2008
Networking for the Future of Science
ESnet 3 with Sites and Peers (Early 2007)
[Map: ESnet IP core (packet over SONET optical ring and hubs) and Science Data Network (SDN) core, serving 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6). International peers include Japan (SINet), Australia (AARNet), Canada (CA*net4), Taiwan (TANet2, ASCC), SingAREN, Korea (KREONET2), Russia (BINP), GLORIAD/StarTap (Russia, China), CERN (USLHCnet, DOE+CERN funded), GÉANT (France, Germany, Italy, UK, etc., NSF/IRNC funded), and AMPATH (S. America), plus commercial peering points (Equinix, PAIX-PA, MAE-E) and high-speed peering with Internet2/Abilene. Link legend: international (high speed), 10 Gb/s SDN core, 10 Gb/s IP core, 2.5 Gb/s IP core, MAN rings (10 Gb/s), lab-supplied links, OC12 ATM (622 Mb/s), OC12/GigEthernet, OC3 (155 Mb/s), 45 Mb/s and less.]
ESnet 3 Backbone as of January 1, 2007
[Map: hubs at Seattle, Sunnyvale, Chicago, New York City, Washington DC, San Diego, Albuquerque, Atlanta, and El Paso. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), MAN rings (10 Gb/s), lab-supplied links.]
ESnet 4 Backbone as of April 15, 2007
[Map: hubs now at Seattle, Boston, Sunnyvale, Cleveland, New York City, Chicago, Washington DC, San Diego, Albuquerque, Atlanta, and El Paso. Legend adds 10 Gb/s IP core (Level3) and 10 Gb/s SDN core (Level3) to the NLR and Qwest circuits.]
ESnet 4 Backbone as of May 15, 2007
[Map: same hub set as April, with the Sunnyvale (SNV) node added. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (10 Gb/s), lab-supplied links.]
ESnet 4 Backbone as of June 20, 2007
[Map: adds Denver, Kansas City, and Houston to the hub set. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (10 Gb/s), lab-supplied links.]
ESnet 4 Backbone August 1, 2007 (Last JT meeting at FNAL)
[Map: adds Los Angeles to the hub set. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (10 Gb/s), lab-supplied links.]
ESnet 4 Backbone September 30, 2007
[Map: adds Boise to the hub set. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (10 Gb/s), lab-supplied links.]
ESnet 4 Backbone December 2007
[Map: adds Nashville to the hub set. Legend: 10 Gb/s SDN core (NLR), 2.5 Gb/s IP tail (Qwest), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (10 Gb/s), lab-supplied links.]
ESnet 4 Backbone December 2008 (planned)
[Map: planned configuration with many backbone segments doubled (x2) among Seattle, Boston, Sunnyvale, Los Angeles, San Diego, Denver, Kansas City, Albuquerque, Nashville, Chicago, Cleveland, Atlanta, Washington DC, New York City, El Paso, and Houston. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (10 Gb/s), lab-supplied links.]
ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (12/2007)
[Map: ~45 end user sites: Office of Science sponsored (22), NNSA sponsored (13+), joint sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6). International peers include Japan (SINet), Australia (AARNet), Canada (CA*net4), Taiwan (TANet2, ASCC), SingAREN, KAREN/REANNZ, ODN Japan Telecom America, Korea (KREONET2), Russia (BINP), GLORIAD (Russia, China), CERN (USLHCnet, DOE+CERN funded), GÉANT (France, Germany, Italy, UK, etc., NSF/IRNC funded), and AMPATH (S. America), plus commercial peering points (Equinix, PAIX-PA, PacWave) and R&E peering with Internet2/Abilene, NLR-PacketNet, MREN, and StarTap. Geography is representational only. Link legend: international (1-10 Gb/s), 10 Gb/s SDN core (I2, NLR), 10 Gb/s IP core, MAN rings (10 Gb/s), lab-supplied links, OC12/GigEthernet, OC3 (155 Mb/s), 45 Mb/s and less.]
ESnet4 Core Networks: 50-60 Gbps by 2009-2010 (10 Gb/s circuits), 500-600 Gbps by 2011-2012 (100 Gb/s circuits)
[Map: IP core and Science Data Network core hubs (Sunnyvale, Boise, Denver, Albuquerque, Tulsa, Kansas City, Cleveland, Boston, New York, Washington DC, Jacksonville, LA, San Diego, and others), primary DOE labs, possible hubs, and high-speed cross-connects with Internet2/Abilene. Core network fiber path is ~14,000 miles / 24,000 km; example segments: 1,625 miles / 2,545 km and 2,700 miles / 4,300 km. International connections: CERN (30+ Gbps), Europe (GÉANT), Canada (CANARIE), Asia-Pacific, Australia, GLORIAD (Russia and China), South America (AMPATH). Legend: production IP core (10 Gbps), SDN core (20-30-40-50 Gbps), MANs (20-60 Gbps) or backbone loops for site access, international connections.]
A Tail of Two ESnet4 Hubs
[Photos: Sunnyvale, CA hub (Juniper MX960 switch, T320 router) and Chicago hub (Cisco 6509 switch, T320 router).]
ESnet's SDN backbone is implemented with layer 2 switches; the Cisco 6509s and Juniper MX960s each present their own unique challenges.
ESnet 4 Factoids as of January 21, 2008
ESnet4 installation to date:
- 32 new 10 Gb/s backbone circuits (over 3 times the number at the last JT meeting)
- 20,284 10 Gb/s backbone route miles (more than doubled since the last JT meeting)
- 10 new hubs since the last meeting, including Seattle, Sunnyvale, and Nashville
- 7 new routers, 4 new switches
- Chicago MAN now connected to the Level3 POP: 2 x 10GE to ANL, 2 x 10GE to FNAL, 3 x 10GE to StarLight
ESnet Traffic Continues to Exceed 2 Petabytes/Month
- Overall traffic tracks the very large science use of the network
- 2.7 PBytes in July 2007, up from 1 PByte in April 2006
- ESnet traffic has historically increased 10x every 47 months
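The growth rate quoted above can be sanity-checked with a quick calculation: 10x every 47 months implies roughly 5% compound growth per month, which is consistent with the jump from 1 PByte (April 2006) to 2.7 PBytes (July 2007). A minimal sketch:

```python
# Back-of-the-envelope check: traffic grows 10x every 47 months,
# so the implied compound monthly growth factor is 10**(1/47).
monthly_factor = 10 ** (1 / 47)
print(f"implied monthly growth: {(monthly_factor - 1) * 100:.1f}%")  # ~5.0%

# Projecting forward from the 2.7 PBytes/month observed in July 2007:
months = 24
projected = 2.7 * monthly_factor ** months
print(f"projected traffic in {months} months: {projected:.1f} PBytes/month")
```

The 15-month actual growth (1 PByte to 2.7 PBytes) works out to about 6.9% per month, slightly above the long-run trend, reflecting the large-science ramp-up the slide describes.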
When a Few Large Data Sources/Sinks Dominate Traffic, It Is Not Surprising that Overall Network Usage Follows the Patterns of the Very Large Users
- This trend will reverse in the next few weeks as the next round of LHC data challenges kicks off
ESnet Continues to Be Highly Reliable, Even During the Transition
[Chart: per-site availability, grouped into 5 nines (>99.995%), 4 nines (>99.95%), and 3 nines (>99.5%); dually connected sites indicated.]
Note: These availability measures cover only ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide the circuits from the site to an ESnet hub, so the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
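The "nines" tiers on the slide translate directly into an allowed outage budget per year, which is how such availability targets are usually interpreted. A small sketch of the arithmetic:

```python
# Allowed downtime per year implied by each availability tier on the slide.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct: float) -> float:
    """Minutes of outage per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in [("5 nines", 99.995), ("4 nines", 99.95), ("3 nines", 99.5)]:
    print(f"{tier} ({pct}%): {downtime_minutes(pct):.0f} min/year")
```

So a 5-nines site may accumulate only about 26 minutes of ESnet-attributable outage per year, versus roughly 4.4 hours at 4 nines and 44 hours at 3 nines.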
OSCARS Overview
On-demand Secure Circuits and Advance Reservation System: guaranteed bandwidth virtual circuit services
- Path computation: topology, reachability, constraints
- Provisioning: signaling, security, resiliency/redundancy
- Scheduling: AAA, availability
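The "path computation with constraints" piece above can be illustrated with a common prune-then-route pattern: discard links that lack the requested bandwidth, then run an ordinary shortest-path search over what remains. This is a simplified sketch on a hypothetical four-hub topology, not OSCARS's actual implementation:

```python
import heapq

def constrained_shortest_path(links, src, dst, gbps_needed):
    """Dijkstra over only those links with enough spare bandwidth --
    a simplified prune-then-route path computation."""
    # Adjacency list, keeping only links that satisfy the constraint.
    adj = {}
    for a, b, cost, avail in links:
        if avail >= gbps_needed:
            adj.setdefault(a, []).append((b, cost))
            adj.setdefault(b, []).append((a, cost))
    # Standard Dijkstra, tracking the path taken.
    dist, queue = {src: 0}, [(0, src, [src])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in adj.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt, path + [nxt]))
    return None  # no path satisfies the bandwidth constraint

# Hypothetical topology: (hub_a, hub_b, cost, available_gbps)
links = [
    ("SUNN", "DENV", 1, 8), ("DENV", "CHIC", 1, 2),
    ("SUNN", "ELPA", 2, 9), ("ELPA", "CHIC", 2, 9),
]
# The cheap route via DENV has only 2 Gb/s spare, so a 5 Gb/s
# request is forced onto the longer path via ELPA.
print(constrained_shortest_path(links, "SUNN", "CHIC", 5))
```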
OSCARS Status Update
ESnet-centric deployment:
- Prototype layer 3 (IP) guaranteed bandwidth virtual circuit service deployed in ESnet (1Q05)
- Prototype layer 2 (Ethernet VLAN) virtual circuit service deployed in ESnet (3Q07)
Inter-domain collaborative efforts:
- Terapaths (BNL): inter-domain interoperability for layer 3 virtual circuits demonstrated (3Q06); for layer 2 virtual circuits, demonstrated at SC07 (4Q07)
- LambdaStation (FNAL): inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
- HOPI/DRAGON: inter-domain exchange of control messages demonstrated (1Q07); integration of OSCARS and DRAGON has been successful (1Q07)
- DICE: first draft of the topology exchange schema formalized in collaboration with NMWG (2Q07); interoperability test demonstrated (3Q07); initial implementation of reservation and signaling messages demonstrated at SC07 (4Q07)
- UVA: integration of token-based authorization in OSCARS under testing
- Nortel: topology exchange demonstrated successfully (3Q07); inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
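At the heart of the advance-reservation service is an admission check: a new guaranteed-bandwidth circuit can only be accepted if, at every instant of its requested window, the already-reserved bandwidth plus the new request stays within link capacity. A minimal single-link sketch (real OSCARS schedules per-link along the computed path; the data shown is hypothetical):

```python
def fits(existing, capacity_gbps, start, end, gbps):
    """Admission check for an advance bandwidth reservation on one link:
    the new circuit fits only if capacity is never exceeded at any
    instant it overlaps existing reservations."""
    # Allocated bandwidth only changes at reservation boundaries,
    # so it suffices to test those time points.
    points = sorted({start, end} | {t for s, e, _ in existing for t in (s, e)})
    for t in points:
        if start <= t < end:
            in_use = sum(g for s, e, g in existing if s <= t < e)
            if in_use + gbps > capacity_gbps:
                return False
    return True

# Hypothetical reservations on a 10 Gb/s link: (start_hour, end_hour, gbps)
existing = [(0, 4, 6), (2, 6, 3)]
print(fits(existing, 10, 3, 5, 2))  # 6+3+2 > 10 during hours 3-4 -> False
print(fits(existing, 10, 4, 5, 2))  # only 3 Gb/s in use then -> True
```

Checking only the boundary time points is what makes advance reservation tractable: the allocated bandwidth is piecewise constant between reservation start/end times.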
Network Measurement Update
ESnet:
- About 1/3 of the 10GE bandwidth test platforms and 1/2 of the latency test platforms for ESnet4 have been deployed
  - 10GE test systems are being used extensively for acceptance testing and debugging
  - Structured and ad hoc external testing capabilities have not been enabled yet
  - Clocking issues at a couple of POPs are not yet resolved
- Work is progressing on revamping the ESnet statistics collection, management, and publication systems
  - ESxSNMP, TSDB, and the perfSONAR Measurement Archive (MA)
  - perfSONAR Topology Service and the OSCARS topology DB
  - NetInfo is being restructured to be perfSONAR-based
LHC and perfSONAR:
- perfSONAR-based network measurement solutions for the Tier 1/Tier 2 community are nearing completion
- A proposal from DANTE to deploy a perfSONAR-based network measurement service across the LHCOPN at all Tier 1 sites is being evaluated by the Tier 1 centers
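The statistics-collection systems mentioned above ultimately reduce to the same core operation: turning pairs of SNMP interface octet-counter samples into utilization figures. A minimal sketch of that calculation (illustrative only, not ESnet's actual ESxSNMP/TSDB pipeline), using the standard trick of taking the counter delta modulo the counter width to survive a single wrap:

```python
def link_utilization(prev_octets, curr_octets, interval_s, link_gbps,
                     counter_bits=64):
    """Utilization fraction from two ifHCInOctets-style SNMP samples.
    The modulo handles at most one counter wrap between polls."""
    wrap = 2 ** counter_bits
    delta = (curr_octets - prev_octets) % wrap  # octets transferred
    bits_per_s = delta * 8 / interval_s
    return bits_per_s / (link_gbps * 1e9)

# 30-second poll on a 10GE backbone link:
util = link_utilization(prev_octets=1_000_000_000,
                        curr_octets=8_500_000_000,
                        interval_s=30, link_gbps=10)
print(f"{util:.1%}")  # 7.5e9 octets * 8 / 30 s = 2 Gb/s -> 20.0%
```

With 64-bit counters a fully loaded 10GE link takes decades to wrap, but the modulo guard matters for legacy 32-bit counters, which wrap in under 4 seconds at 10 Gb/s.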
End