BNC 2016
Grid Computing at the IIHE, the Interuniversity Institute for High Energies
S. Amary, F. Blekman, A. Boukil, O. Devroede, S. Gérard, A. Ouchene, R. Rougny, S. Rugovac, P. Vanlaer, R. Vandenbroucke
From very small to very big...
[Scale diagram : from elementary particles to our tools]
We study a very wide scale of objects at IIHE!
So many questions to answer...
What is dark matter? Dark energy? New particles?
Where does inflation come from?
Can we unify the 4 forces?
Why do particles have these particular masses?
The Experiments in High-Energy Physics (HEP)
Scale for the IceCube experiment
[Scale diagram : elementary particles]
IceCube : a neutrino telescope
12 countries, 48 institutes, ~300 people
The IceCube Neutrino Observatory, at the South Pole :
- 5964 sensors deep in the ice
- instrumented volume ~ 1 km³
IceCube : a neutrino telescope
Optical sensors capture the light emitted by particles travelling through the ice
Tracking a particle through the ice
Scale for the SoLid and CMS experiments
[Scale diagram : elementary particles]
SoLid : neutrino physics at a Belgian reactor
4 countries, 10 institutes, ~50 people
The nuclear reactor emits a beam of neutrinos
12800 detector modules; 100 TB of data expected per year
A particle passes through a cube and emits light; the light is captured by optical fibers and photo-sensors
The Compact Muon Solenoid
42 countries, 182 institutes, ~4300 people
More than 75 million channels; each event is ~1 MB in size
Event reduction : collisions every 25 ns (40 000 000 events/s) are filtered down to ~100 events/s (a quick sanity check follows below)
That's still 15 PB collected per year!
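A quick sanity check of those numbers, as a minimal Python sketch (my own arithmetic, not CMS figures : it assumes continuous running, and the 15 PB/year quoted above also counts reconstructed and simulated data, not just the raw trigger output):

```python
# Back-of-the-envelope check of the CMS trigger reduction and data volume.
# Rates are taken from the slide; the yearly total is a naive rescaling,
# not an official CMS figure.

BUNCH_SPACING_S = 25e-9                    # collisions every 25 ns
collision_rate = 1 / BUNCH_SPACING_S       # 40 000 000 collisions/s
recorded_rate = 100                        # ~100 events/s kept by the trigger
event_size_mb = 1.0                        # each event is ~1 MB

reduction_factor = collision_rate / recorded_rate    # ~400 000 : 1
raw_rate_mb_s = recorded_rate * event_size_mb        # ~100 MB/s to storage

seconds_per_year = 365.25 * 24 * 3600      # assumes running all year long
raw_pb_per_year = raw_rate_mb_s * seconds_per_year / 1e9   # MB -> PB

print(f"trigger reduction factor : {reduction_factor:,.0f}")
print(f"raw data rate            : {raw_rate_mb_s:.0f} MB/s")
print(f"raw data per year        : {raw_pb_per_year:.1f} PB")
# ~3 PB/year of raw events; the 15 PB on the slide presumably also counts
# reconstructed and simulated copies of the data.
```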
One of the (many) results we published : the last missing piece of the Standard Model (2013 Nobel Prize)
So many locations, so much data
IceCube : 12 countries, 48 institutes, ~300 people; 5.2 PB of data stored so far
SoLid : 4 countries, 10 institutes, ~50 people; 100 TB/year of data expected
CMS : 42 countries, 182 institutes, ~4300 people; 20 PB of data expected this year
How to work together with people in so many different locations?
How to handle this much data and give everyone access to it?
The Grid
The Grid : a definition
Usually compared to the grid infrastructure of electric / water / phone companies
CERN's definition : «a service for sharing computer power and data storage capacity over the Internet»
Ian Foster's definition : «a grouping of computing resources
- not centrally administered (heterogeneous resources, not a cluster)
- with standard, open, general-purpose protocols and interfaces (open middleware)
- delivering nontrivial qualities of service»
In Europe, the European Grid Infrastructure (EGI) federates the national grids
In Belgium, we have BEgrid, which is part of EGI
The Grid : an example
«[ ] a collaboration of more than 170 computing centres in 42 countries»
- 132,992 physical CPUs; 553,611 logical CPUs
- 300 PB of online disk storage; 230 PB of tape storage
Tier structure : T0, T1, T2, T3, based on size + function
Data replication : a completely meshed grid for data & computing
AAA principle : Any data, Anytime, Anywhere (see the toy sketch below)
At IIHE, our T2 is part of :
- CMS : 7 T1, ~30 T2, ~20 T3
- IceCube : Wisconsin as T0, 1 T1, 18 T2
- SoLid : IIHE's CC as T0+T1+T2, 1 T1, 2 T2
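To illustrate the AAA principle, here is a toy Python sketch of replica resolution. All names (the catalogue, the hosts, the `resolve` helper) are hypothetical; real grids use dedicated file catalogues and transfer protocols in the middleware stack:

```python
# Toy illustration of grid data replication (hypothetical catalogue,
# not real middleware): one logical file name maps to several physical
# replicas, and a job reads from any site that currently has one.

replica_catalogue = {
    "/cms/run2016/events_0001.root": [
        "gsiftp://t1.example.org/cms/events_0001.root",
        "gsiftp://t2.example.be/cms/events_0001.root",
    ],
}

def resolve(logical_name: str, site_is_up) -> str:
    """Return the first reachable physical replica of a logical file."""
    for physical_url in replica_catalogue[logical_name]:
        if site_is_up(physical_url):
            return physical_url
    raise RuntimeError(f"no replica of {logical_name} is reachable")

# Any data, Anytime, Anywhere: the user never needs to know which site
# the bytes actually come from.
url = resolve("/cms/run2016/events_0001.root",
              site_is_up=lambda u: "example.be" in u)  # pretend only one site is up
print("reading from:", url)
```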
GEANT Network
- Interconnects R&E networks in Europe
- Interconnections with other major networks in the world, e.g. Internet2
BELNET in GEANT :
- connects the 2 Belgian T2s to the Grid
- is an integral part of the network meshing of the Grid
Our T2 infrastructure in Brussels
Users :
- 150 users from 4 universities (ULB, VUB, UA, UG)
- SoLid users (FR, UK), since our centre serves as SoLid's T0
- part of WLCG, EGI, BEgrid, ...
Large Grid-accessible storage : 3.2 PB
Large Grid-accessible computing cluster : 4500 CPU cores
Internal network between cluster & storage : > 120 Gbit/s (observed 75 Gbit/s); see the sketch after this list
WAN connectivity provided by BELNET :
- 1x 10 Gbit/s line : standard R&E connection + special connection to the SoLid experiment
- 1x 1 Gbit/s backup line
Certificate authority : we use the DigiCert service contracted through BELNET
- machine certificates for securing connections
- grid user certificates, mandatory for a user to be trusted across the whole grid
We just can't work without them!
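For a feel of what those link speeds mean, a rough Python sketch of idealized transfer times (my own arithmetic, assuming the full nominal bandwidth is sustained, with no protocol overhead or competing traffic):

```python
# Rough transfer-time arithmetic for the links quoted above
# (idealized: real transfers see protocol overhead and shared links).

def transfer_days(size_tb: float, link_gbit_s: float) -> float:
    """Days needed to move `size_tb` terabytes over a `link_gbit_s` link."""
    size_bits = size_tb * 1e12 * 8              # TB -> bits
    seconds = size_bits / (link_gbit_s * 1e9)   # Gbit/s -> bit/s
    return seconds / 86400

# One year of SoLid data (100 TB) over the 10 Gbit/s BELNET line:
print(f"100 TB over 10 Gbit/s  : {transfer_days(100, 10):.1f} days")

# Refilling the whole 3.2 PB storage over the WAN link, versus the
# internal >120 Gbit/s cluster<->storage network:
print(f"3.2 PB over 10 Gbit/s  : {transfer_days(3200, 10):.0f} days")
print(f"3.2 PB over 120 Gbit/s : {transfer_days(3200, 120):.1f} days")
```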
What about people not from HEP? The Vlaams Supercomputer Centrum (VSC)
We are the VSC grid cluster. As a VSC-T2, we provide resources for all research communities :
- computing resources & storage space
- High Throughput Computing (HTC) resources
- access to grid resources through Virtual Organisations (VOs)
- support & training for grid usage
We also operate a VSC cloud infrastructure
What about people not from HEP? The European Grid Infrastructure (EGI)
We are a standard grid cluster :
- every researcher can get grid access through BEgrid
- we provide resources to VOs with Belgian collaboration : cms, icecube, solid, beapps, projects.nl, enmr.eu, ... (see the sketch after this list)
- users from all around the world use our cluster, and we use theirs
We are part of several European-level projects :
- Long Tail of Science (LToS) : grid computing for isolated users / small communities
  - resource provider (LToS VO)
  - testing an easier interface for grid access
  - operating resource-request tools
- FedCloud : federated cloud computing
  - resource provider (integrating the VSC cloud)
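A minimal sketch of the VO idea, with hypothetical data and names (real sites check VOMS attributes carried in a user's X.509 proxy certificate, not a hard-coded table) : access is granted per virtual organisation, not per individual user:

```python
# Toy model of Virtual Organisation (VO) based authorization.
# Hypothetical: a real grid site trusts the user's certificate chain and
# the VO membership attributes it carries, not a local table like this.

SUPPORTED_VOS = {"cms", "icecube", "solid", "beapps", "projects.nl", "enmr.eu"}

def site_accepts(user_vo: str) -> bool:
    """A site decides on the VO, never on the individual user."""
    return user_vo in SUPPORTED_VOS

for vo in ("cms", "biology.example"):   # the second VO is made up
    verdict = "accepted" if site_accepts(vo) else "rejected"
    print(f"job from VO '{vo}': {verdict}")
```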
T2B - The Future
- Continue supporting more groups of researchers and offering them computing resources
- Our computing needs are still growing :
  - the LHC is out-performing predictions : more data than expected
  - storage & CPU needs will increase more rapidly
- Data transfers for CMS & other experiments are growing :
  - some sites are already discussing an increase of their WAN bandwidth
- Upgrades of the experiments will require a huge increase of resources (HL-LHC : x60, see the sketch below)
- CERN services are starting to go IPv6 :
  - we will have to start preparing the switch, with the help of BELNET
[Plot : our WAN bandwidth since April 2015]
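For scale, a one-line extrapolation of that x60 factor (my own illustration, not an official projection; real HL-LHC estimates depend on trigger rates, event sizes and storage models, not a flat multiplier):

```python
# Naive extrapolation of the HL-LHC "x60" resource factor.

current_pb_per_year = 20    # CMS data expected this year (slide above)
hl_lhc_factor = 60          # resource increase quoted on the slide

projected = current_pb_per_year * hl_lhc_factor
print(f"projected: {projected} PB/year (~{projected / 1000:.1f} EB/year)")
```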
Thank You for your attention