CERN network overview. Ryszard Jurga

CERN network overview. Ryszard Jurga, CERN openlab Summer Student Programme 2010

Outline: CERN networks (campus network, LHC network); CERN openlab NCC

CERN campus network

Network Backbone Topology (diagram): Gigabit, multi-Gigabit, 10 Gigabit and multi-10-Gigabit links connecting the Computer Center and Vault farms, the Meyrin and Prevessin areas, the LHC area (experiment and technical network starpoints, control, DAQ/HLT and CDR aggregation), the Internet gateways and monitoring, and several hundred minor starpoints across the CERN sites. Original document 2007/M.C.; latest update 19-Feb-2009, O.v.d.V.

The CERN Networks: Facts and Figures
- Three distinct multi-ten-gigabit backbones
- 150+ very high performance routers
- 3 700+ subnets
- 2 200+ switches (increasing)
- 50 000 active user devices (exploding)
- 80 000 sockets
- 5 000 km of UTP cable
- 400+ starpoints (from 20 to 1 000 outlets)
- 5 000 km of fibers (CERN owned)
- 150 Gbps of WAN connectivity
- 4.8 TB LCG core
- Desktops, HPC, VoIP, process control, etc.
- Extremely dynamic: 1 500+ requests for Moves-Adds-Changes per month
- ISP-like (2x more visitors than staff)
- Multi-vendor site using only standards

Switches & Routers Force10 for the CORE HP for distribution

Network Management (1): CERN is particular
- Very large infrastructure, all centrally managed
- Large number of network devices (of different manufacturers)
- Very limited staff, rationalization
- 10+ years of development of our own NMS, ~5 software developers
- Extremely high level of automation: 500 000+ lines of code, 150+ DB tables
- Only one commercial package, for network supervision: SPECTRUM (CA)

Network Management (2) (diagram): a central network database (devices, states, topology, addressing, cables, fibers, security/firewall, SLAs, user requests, tickets) is accessed via SOAP and linked to HR, Active Directory and other systems; a Configuration Manager drives the ~2 500 network devices (Cisco, HP, 3Com, Force10). If it does not fit, it will not happen.
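The point of this slide is that everything flows from the central network database to the devices; a change that is not recorded there never reaches the network. As a purely illustrative sketch (the real CERN Configuration Manager, its schema and its templates are not described here, so every name and field below is hypothetical), rendering a device configuration from a database record might look like this:

```python
# Hypothetical sketch: render a switch port configuration from a central DB record.
# The record fields and the config template are illustrative only, not CERN's tooling.
from dataclasses import dataclass

@dataclass
class PortRecord:          # one row of the (hypothetical) network database
    device: str
    port: str
    vlan: int
    description: str
    enabled: bool

def render_port_config(rec: PortRecord) -> str:
    """Turn a DB record into a config snippet.

    If a request is not represented in the database, nothing is generated
    for it: 'if it does not fit, it will not happen'."""
    lines = [
        f"interface {rec.port}",
        f"  description {rec.description}",
        f"  switchport access vlan {rec.vlan}",
        "  no shutdown" if rec.enabled else "  shutdown",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    rec = PortRecord("sw-513-a01", "GigabitEthernet1/0/12", 72, "office outlet 513-R-042", True)
    print(render_port_config(rec))
```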

DNS stats: ~2 500 DNS queries/sec across all CERN DNS servers

DHCP stats (plot), annotated with the annual closure

LHC Network

(Map: the CMS, ATLAS and ALICE experiment sites and the CERN Computer Center.)

LHC Networking

The Roles of Tier Centers (11 Tier 1s, over 100 Tier 2s). LHC computing will be more dynamic and network-oriented.
- Tier 0 (CERN): prompt calibration and alignment, reconstruction, stores the complete set of RAW data
- Tier 1: reprocessing, stores part of the processed data
- Tier 2: Monte Carlo production, physics analysis
- Tier 3: physics analysis

LHC Open Private Network (LHCOPN)
- Consists of any T0-T1 or T1-T1 link that is dedicated to the transport of WLCG traffic
- Based on 10 Gbps light path technologies
- Infrastructure provided by GÉANT, US LHCNet, National Research and Education Networks (NRENs) and commercial links
- 70 000 km of optical paths

LHCOPN map

T0-T1 Lambda Routing (map, Michael Enrico, DANTE): light paths from the T0 at Geneva to the Tier 1s (TRIUMF, ASGC via SMW-3 or 4, NDGF via Copenhagen, BNL via MAN LAN New York, RAL via London, SARA via SURFnet, GRIDKA, FNAL via Starlight, PIC, IN2P3, CNAF), crossing the Atlantic on AC-1, AC-2/Yellow and VSNL circuits and routed through European cities such as Hamburg, Frankfurt, Paris, Strasbourg/Kehl, Stuttgart, Basel, Zurich, Lyon, Madrid, Barcelona and Milan; some routes are uncertain (marked '???' in the original map).

LHCOPN traffic (plot): traffic growth through STEP 09, the preparation for the LHC startup, "LHC is back", and the collision and luminosity increase (luminosity rising from 10^27 to 10^30).

Network Competence Center projects: CINBAD (2007-) and WIND (2010-)

CINBAD codename deciphered: CERN Investigation of Network Behaviour and Anomaly Detection. Project goal: to understand the behaviour of large computer networks (10 000+ nodes) in High Performance Computing or large campus installations, in order to:
- Detect traffic anomalies in the system
- Perform trend analysis
- Automatically take counter-measures
- Provide post-mortem analysis facilities

CINBAD project principle (diagram): data sources feed collectors, which feed storage and analysis.

sFlow: the main data source
- Based on packet sampling (RFC 3176): on average, 1 out of N packets is sampled by an agent and sent to a collector (a minimal sketch of the sampling idea follows after this list)
- Packet header and payload included (max 128 bytes): switching/routing/transport protocol information and application protocol data (e.g. HTTP, DNS)
- SNMP counters included
- Low CPU/memory requirements, scalable
- For more details, see our technical report: http://openlab-mu-internal.web.cern.ch/openlab-mu-internal/documents/2_technical_documents/technical_reports/2007/rj-mm_samplingreport.pdf
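To make the 1-out-of-N idea concrete, here is a minimal, purely illustrative sketch of the sampling decision an sFlow agent performs (statistical sampling with mean rate N, as described in RFC 3176). The 128-byte truncation mirrors the slide; the sampling rate and function names are assumptions:

```python
# Illustrative sketch of sFlow-style packet sampling (not CERN's or any vendor's agent code).
import random

SAMPLING_RATE_N = 512      # on average 1 out of N packets is sampled (assumed value)
MAX_HEADER_BYTES = 128     # sFlow truncates the sampled packet to its first bytes

def maybe_sample(packet: bytes):
    """Return the truncated packet if it is selected for sampling, else None.

    Each packet is selected with probability 1/N, so the long-run average
    is one sample per N packets."""
    if random.randrange(SAMPLING_RATE_N) == 0:
        return packet[:MAX_HEADER_BYTES]   # header plus the start of the payload only
    return None

if __name__ == "__main__":
    sampled = sum(maybe_sample(b"\x00" * 1500) is not None for _ in range(100_000))
    print(f"sampled {sampled} of 100000 packets (expected ~{100_000 // SAMPLING_RATE_N})")
```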

CINBAD sFlow data collection (diagram): sFlow configuration is set via SNMP by a configurator, which uses feedback data to adjust the sampling; up to 1 Gbps of raw sFlow is multicast to two redundant collectors (Level I processing and disk storage); unpacked and aggregated data are loaded into the CINBAD DB (Level II storage and processing) and then mined (Level III processing). Current collection: traffic from ~1 000 switches, ~6 000 sampled packets per second, ~3 500 SNMP counter sets per second, ~100 GB per day.
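As a rough illustration of the collector side (not the actual CINBAD collector, whose design is only summarized on the slide), an sFlow collector is at its core a UDP listener that receives datagrams, decodes them and hands them to storage. Port 6343 is the conventional sFlow port; everything else here is an assumption:

```python
# Minimal sketch of a UDP listener standing in for an sFlow collector.
# The real CINBAD pipeline (redundant collectors, Level I-III processing) is far richer.
import socket

SFLOW_PORT = 6343          # conventional sFlow collector port

def run_collector(host: str = "0.0.0.0", port: int = SFLOW_PORT) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    print(f"listening for sFlow datagrams on {host}:{port}")
    while True:
        datagram, (src, _) = sock.recvfrom(65535)
        # A real collector would now decode the sFlow XDR structures and store the
        # packet samples and SNMP counter sets; here we only report the size.
        print(f"{src}: received {len(datagram)} bytes")

if __name__ == "__main__":
    run_collector()
```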

CINBAD-eye: host data activity, collection and connectivity, and switch activity trends

Tools for Anomaly Detection
- Statistical analysis methods (a schematic sketch follows after this list)
  - detect a change from normal network behaviour
  - selection of suitable metrics is needed
  - can detect new, unknown anomalies
  - poor anomaly type identification
- Signature-based
  - we ported SNORT to work with sampled data
  - performs well against known problems
  - tends to have a low false positive rate
  - does not work against unknown anomalies
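The statistical branch boils down to building a baseline for a chosen metric and flagging large deviations from it. The sketch below only illustrates that general idea (a rolling mean and standard deviation with a z-score threshold); the metric, window size and threshold are assumptions, not the CINBAD algorithms:

```python
# Schematic statistical anomaly detector: flag samples far from a rolling baseline.
# Window size, threshold and the chosen metric are illustrative assumptions.
from collections import deque
from statistics import mean, pstdev

class BaselineDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` deviates strongly from the learned baseline."""
        anomalous = False
        if len(self.history) >= 10:             # need some history before judging
            mu, sigma = mean(self.history), pstdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.history.append(value)
        return anomalous

if __name__ == "__main__":
    det = BaselineDetector()
    traffic = [100 + (i % 5) for i in range(60)] + [900]   # sudden spike at the end
    flags = [det.observe(v) for v in traffic]
    print("anomaly detected at samples:", [i for i, f in enumerate(flags) if f])
```

This also makes the slide's trade-off visible: such a detector notices anything unusual (including unknown anomalies) but cannot say what kind of anomaly it is, which is where the signature-based engine complements it.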

Synergy from both detection techniques (diagram): the incoming sFlow stream feeds both the sample-based SNORT evaluation engine and the statistical analysis engine; the statistical engine maintains traffic profiles and produces new baselines and new signatures, which are translated into rules for SNORT; both engines raise anomaly alerts.

Statistical and Signature-based Anomaly Detection

WIND

WIND codename deciphered: Wireless Infrastructure Network Deployment. Project goals:
- Analyze the problems of large-scale wireless deployments and understand the constraints
- Simulate the behaviour of the WLAN
- Develop new optimisation algorithms
- Verify them in the real world
- Improve and refine the algorithms
- Deliver algorithms, guidelines and solutions

Why WIND? WLAN deployments are problematic:
- Radio propagation is very difficult to predict
- Interference is an ever-present danger
- WLANs are difficult to deploy properly
- Monitoring was not an issue when the first standards were developed
- When administrators are struggling just to operate the WLAN, performance optimisation is often forgotten

Problem example: RF interference (maps by S. Ceuterickx). Two coverage maps of the maximum data rate in building 0031-S: one with the APs working on 3 independent channels, one with the APs working on the same channel.
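The example shows why co-channel APs hurt throughput and why channel planning is one of the optimisation problems WIND targets. A toy illustration of the idea (not a WIND algorithm): treat interfering AP pairs as edges of a graph and greedily assign the three non-overlapping 2.4 GHz channels (1, 6, 11) so that neighbours differ whenever possible. The AP names and interference pairs below are invented:

```python
# Toy greedy channel assignment: give interfering neighbour APs different channels.
# The interference graph is hand-written for illustration; real planning would use
# measured signal overlap and far more constraints.
CHANNELS = [1, 6, 11]                      # non-overlapping 2.4 GHz channels

def assign_channels(interferes: dict[str, set[str]]) -> dict[str, int]:
    assignment: dict[str, int] = {}
    # Handle the most constrained APs first (simple greedy colouring heuristic).
    for ap in sorted(interferes, key=lambda a: len(interferes[a]), reverse=True):
        used = {assignment[n] for n in interferes[ap] if n in assignment}
        free = [c for c in CHANNELS if c not in used]
        assignment[ap] = free[0] if free else CHANNELS[0]   # fall back if over-constrained
    return assignment

if __name__ == "__main__":
    graph = {
        "AP-1": {"AP-2", "AP-3"},
        "AP-2": {"AP-1", "AP-3"},
        "AP-3": {"AP-1", "AP-2", "AP-4"},
        "AP-4": {"AP-3"},
    }
    print(assign_channels(graph))
```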

Acknowledgments: IT/CS

Thank you!