Grid Computing at the IIHE


BNC 2016 - Grid Computing at the IIHE, the Interuniversity Institute for High Energies. S. Amary, F. Blekman, A. Boukil, O. Devroede, S. Gérard, A. Ouchene, R. Rougny, S. Rugovac, P. Vanlaer, R. Vandenbroucke

From very small to very big... Elementary particles

From very small to very big... Our Tools Elementary particles We study a very wide scale of objects at IIHE!

So many questions to answer... What is dark matter? Dark energy? New particles? Where does inflation come from? Can we unify the 4 forces? Why do particles have these particular masses?

The Experiments in High-Energy Physics (HEP)

Scale for the IceCube experiment Elementary particles

IceCube: a neutrino telescope. 12 countries, 48 institutes, ~300 people. The IceCube Neutrino Observatory at the South Pole: 5964 sensors deep in the ice, instrumented volume ~1 km³.

IceCube: a neutrino telescope. Optical sensor: captures the light emitted by particles travelling through the ice. Tracking a particle through the ice.

Scale for SoLid and CMS experiments Elementary particles

SoLid: neutrino physics at a Belgian reactor. The nuclear reactor emits a beam of neutrinos. 4 countries, 10 institutes, ~50 people. 12800 detector modules, 100 TB of data expected per year. A particle passes through a cube and emits light; the light is captured by optical fibers and photo-sensors.

The Compact Muon Solenoid 42 countries, 182 institutes, ~4300 people

The Compact Muon Solenoid: 42 countries, 182 institutes, ~4300 people. More than 75 million channels; each event is ~1 MB in size. Event reduction: collisions every 25 ns, i.e. 40 000 000 events/s, brought down to ~100 events/s. That's still 15 PB collected per year!
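To get a feel for these numbers, here is a minimal back-of-the-envelope sketch in Python using only the figures quoted above (~1 MB per event, 40 000 000 collisions per second, ~100 events/s kept after the trigger); the exact rates vary, so treat this as an illustration rather than an official CMS calculation.

```python
# Back-of-the-envelope check of the CMS event-reduction figures quoted above.
# All inputs are the rough numbers from the slide; treat them as illustrative.

EVENT_SIZE_MB = 1.0             # ~1 MB per event
COLLISION_RATE_HZ = 40_000_000  # collisions every 25 ns -> 40 MHz
STORED_RATE_HZ = 100            # events/s kept after the trigger (slide figure)

raw_rate_tb_per_s = COLLISION_RATE_HZ * EVENT_SIZE_MB / 1e6       # MB -> TB
stored_rate_mb_per_s = STORED_RATE_HZ * EVENT_SIZE_MB
reduction_factor = COLLISION_RATE_HZ / STORED_RATE_HZ

print(f"Raw collision data rate : {raw_rate_tb_per_s:.0f} TB/s")
print(f"Rate written to storage : {stored_rate_mb_per_s:.0f} MB/s")
print(f"Trigger reduction factor: {reduction_factor:,.0f}x")
```

Even after a reduction factor of several hundred thousand, the stored stream still adds up to petabyte-scale datasets over a year of data taking.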

One of the (many) results we published: the last missing piece of the Standard Model, recognised with the 2013 Nobel Prize.

So many locations, so much data. IceCube: 12 countries, 48 institutes, ~300 people. SoLid: 4 countries, 10 institutes, ~50 people. CMS: 42 countries, 182 institutes, ~4300 people. IceCube: 5.2 PB of data stored so far. SoLid: 100 TB/year of data expected. CMS: 20 PB of data expected this year. How to work together with people in so many different locations? How to handle this much data and give everyone access to it?

The Grid

The Grid, a definition. Usually compared to the electric / water / phone companies' grid infrastructure. CERN's definition: «a service for sharing computer power and data storage capacity over the Internet». Ian Foster's definition: «a grouping of computing resources that is not centrally administered (not a cluster), uses standard, open, general-purpose protocols and interfaces (open middleware), and delivers nontrivial qualities of service» over heterogeneous resources. In Europe, the European Grid Infrastructure (EGI) federates the national grids; in Belgium, we have BEgrid, which is part of EGI.

The Grid, an example: «[…] a collaboration of more than 170 computing centres in 42 countries», with 132,992 physical CPUs, 553,611 logical CPUs, 300 PB of online disk storage and 230 PB of tape storage. Tier structure: T0, T1, T2, T3, based on size + function. Data replication over a completely meshed grid for data & computing. AAA principle: Any data, Anytime, Anywhere. At the IIHE, our T2 is part of: CMS (7 T1, ~30 T2, ~20 T3), IceCube (Wisconsin as T0, 1 T1, 18 T2), SoLid (IIHE's CC as T0+T1+T2, 1 T1, 2 T2).
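The «Any data, Anytime, Anywhere» principle rests on keeping several replicas of each dataset at different tiers, so a job can read from whichever copy is reachable. The toy sketch below illustrates that lookup; the site and dataset names are invented, and the real grid relies on dedicated data-management middleware rather than anything this simple.

```python
# Toy illustration of the AAA / replication idea: a dataset is registered at
# several tiers, and a job simply asks for any reachable replica.
# Site and dataset names are invented for the example.

from typing import Dict, List, Optional

# A minimal "replica catalogue": dataset name -> list of sites holding a copy.
REPLICA_CATALOGUE: Dict[str, List[str]] = {
    "/cms/run2016/some_dataset": ["T1_DE_KIT", "T2_BE_IIHE", "T2_US_Wisconsin"],
    "/icecube/2016/level2":      ["T0_US_Wisconsin", "T2_BE_IIHE"],
}

def find_replica(dataset: str, reachable_sites: List[str]) -> Optional[str]:
    """Return the first catalogued site that holds the dataset and is reachable."""
    for site in REPLICA_CATALOGUE.get(dataset, []):
        if site in reachable_sites:
            return site
    return None

# A job running in Brussels reads the local copy; a job elsewhere falls back to
# any other replica -- the user never has to care where the file actually lives.
print(find_replica("/cms/run2016/some_dataset", ["T2_BE_IIHE"]))   # T2_BE_IIHE
print(find_replica("/cms/run2016/some_dataset", ["T1_DE_KIT"]))    # T1_DE_KIT
print(find_replica("/icecube/2016/level2", ["T2_CH_Somewhere"]))   # None
```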

GÉANT network: interconnects R&E networks in Europe, with interconnections to other major networks in the world, e.g. Internet2. BELNET in GÉANT: connects the 2 Belgian T2s to the Grid and is an integral part of the network meshing of the Grid.

Our T2 infrastructure in Brussels
Users: 150 users from 4 universities (ULB, VUB, UA, UG), plus SoLid users (FR, UK). Our T2 is part of WLCG, EGI, BEgrid, ...
Large Grid-accessible storage: 3.2 PB
Large Grid-accessible computing cluster: 4500 CPU cores
Internal network between cluster & storage: > 120 Gbit/s (observed 75 Gbit/s)
WAN connectivity provided by BELNET: 1x 10 Gbit/s line (standard R&E connection + a special connection to the SoLid experiment) and 1x 1 Gbit/s backup line (see the rough transfer-time sketch below)
Certificate authority: we use the DigiCert service contracted through BELNET - machine certificates for securing connections - grid user certificates, mandatory to trust a user in the whole grid

Our T2 infrastructure in Brussels: we just can't work without them!
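To put these link speeds in perspective, the sketch below estimates naive transfer times for two volumes mentioned in this talk (one year of SoLid data and our full 3.2 PB of storage), assuming the nominal bandwidth could be used continuously, which real transfers never achieve.

```python
# Rough transfer-time estimates for the link speeds quoted above.
# Assumes the full nominal bandwidth is available, which is optimistic.

def transfer_time_days(volume_tb: float, link_gbit_s: float) -> float:
    """Days needed to move `volume_tb` terabytes over a `link_gbit_s` Gbit/s link."""
    volume_bits = volume_tb * 1e12 * 8           # TB -> bits
    seconds = volume_bits / (link_gbit_s * 1e9)  # Gbit/s -> bit/s
    return seconds / 86_400

for volume_tb, label in [(100, "100 TB (one year of SoLid data)"),
                         (3200, "3.2 PB (our full storage)")]:
    for link, name in [(120, "internal network"), (10, "BELNET WAN link")]:
        print(f"{label} over the {name} ({link} Gbit/s): "
              f"{transfer_time_days(volume_tb, link):.1f} days")
```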

What about people not from HEP? Vlaams Supercomputer Centrum (VSC)
We are the VSC grid cluster. As a VSC-T2, we provide resources for all research communities:
- Computing resources and storage space
- High Throughput Computing (HTC) resources
- Access to grid resources through Virtual Organisations (VOs)
- Support & training for grid usage
We also operate a VSC cloud infrastructure.

What about people not from HEP? European Grid Infrastructure (EGI)
We are a standard grid cluster: every researcher can get grid access through BEgrid.
We provide resources to VOs with Belgian collaboration: cms, icecube, solid, beapps, projects.nl, enmr.eu, ... Users from all around the world use our cluster, and we use theirs.
We are part of several European-level projects:
- Long Tail of Science (LToS): grid computing for isolated users / small communities. We are a resource provider (LToS VO), test an easier interface for grid access and operate resource-request tools.
- FedCloud: federated cloud computing. We are a resource provider (integrating the VSC cloud).

T2B - The Future
Continue supporting more groups of researchers and offering them computing resources. Our computing needs are still growing:
- The LHC is out-performing predictions: more data than expected, so storage & CPU needs will increase more rapidly.
- Data transfers for CMS & other experiments are growing; some sites are already discussing increasing their WAN bandwidth.
- Upgrades of the experiments require a huge increase in resources (HL-LHC: x60).
- CERN services are starting to go IPv6; we will have to start preparing the switch with the help of BELNET.
[Plot: our WAN bandwidth since April 2015]

Thank You for your attention