Philippe Laurens, Michigan State University, for USATLAS. ATLAS Great Lakes Tier 2, co-located at MSU and the University of Michigan


1 Philippe Laurens, Michigan State University, for USATLAS. ATLAS Great Lakes Tier 2 (AGLT2), co-located at MSU and the University of Michigan. ESCC/Internet2 Joint Techs, July 2011

2 Contents: Introduction (LHC, ATLAS, USATLAS); Data Deluge; Hierarchical Tiers of Computing; a Robust Network is Key; Network Infrastructure Monitoring; Distributed perfSONAR nodes at the USATLAS T1 & T2s; Centralized Dashboard; Examples of diagnostics; Prospects after our pilot deployment

3 The Large Hadron Collider at CERN, the European Organization for Nuclear Research

4 Large Hadron Collider: two counter-circulating beams of protons guided by superconducting magnets cooled down to -271 °C; 27 km (~17 miles) circumference; underground depth: m. ATLAS is one of the two primary experiments at the LHC.

5 ATLAS: a particle physics experiment to explore the fundamental forces and the structure of matter in our universe. One quark or gluon from each proton collides at the center of the ATLAS detector and produces other particles, which themselves decay or collide further with material in the detector, giving jets and showers of secondary particles. A Russian-doll set of sub-detector components surrounds the collision point (altogether 7,000 tons, 25 m high). Particle paths and energies are measured using millions of data-acquisition channels; this repeats 25 ns later, up to 30 million times per second.

6 The ATLAS Collaboration (A Toroidal LHC ApparatuS): 38 countries, ~170 institutes/universities, ~3000 physicists including ~1000 students; one of the largest efforts in the physical sciences. 2011: started a 2-year run of data taking at 3.5 TeV per beam (7 TeV total). Papers and new physics results have already come from the 2010 low-luminosity data, with more results coming at this summer's conferences.

7 Data Deluge. Proton beams cross at 40 MHz, but only ~30 MHz of crossings yield collisions. Trigger system: multi-level online event selection, down to < 1 kHz of recorded events. Huge data volume: > 300 MB/s. Huge computer storage and analysis resources are needed for simulation (Monte Carlo) and event reconstruction: a worldwide Grid.
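
To make the scale concrete, here is a back-of-the-envelope calculation (mine, not from the slides) of what a sustained > 300 MB/s recording rate implies for raw-data volume; the 100% duty cycle is an assumption for illustration only.

```python
# Back-of-the-envelope illustration (not from the slides): what a sustained
# recording rate of ~300 MB/s implies for raw-data volume over time.
RATE_MB_S = 300              # "> 300 MB/s" quoted on the slide
SECONDS_PER_DAY = 86_400

per_day_tb = RATE_MB_S * SECONDS_PER_DAY / 1e6    # MB -> TB
per_year_pb = per_day_tb * 365 / 1e3              # TB -> PB, naive 100% duty cycle

print(f"~{per_day_tb:.0f} TB/day, ~{per_year_pb:.1f} PB/year at full duty cycle")
# -> ~26 TB/day, ~9.5 PB/year; real running time is lower, but the scale
#    explains why a worldwide grid of tiered storage and CPU is needed.
```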

8 ATLAS Tiered Computing Model. To address these computing needs (storage and CPU), ATLAS has chosen a tiered computing model: Tier 0 at CERN; Tier 1: ~10 national centers; Tier 2: regional centers; Tier 3: institutional/group centers; (Tier 4: desktops). Raw data is duplicated among the Tier-0 and Tier-1s for backup. Reconstructed data and simulation data are available to all Tier-1s and Tier-2s. Tier-2s primarily handle simulation (MC production) and analysis tasks. Implicit in this distributed model, and central to its success, are: high-performance, ubiquitous and robust networks; grid middleware to securely find, prioritize and manage resources; user jobs need to find the data they need.


10 USATLAS sites. US Tier 1: BNL, Brookhaven National Lab (Long Island, NY). Five US Tier 2 centers: AGLT2, ATLAS Great Lakes Tier 2 (MSU, Michigan State University; UM, University of Michigan); MWT2, Mid-West Tier 2 (IU, Indiana University Purdue University Indianapolis; UC, University of Chicago); NET2, North-East Tier 2 (BU, Boston University; HU, Harvard); SWT2, South-West Tier 2 (OU, University of Oklahoma; UTA, University of Texas at Arlington); WT2, West Tier 2 (SLAC, SLAC National Accelerator Laboratory).

11 T2 site example: AGLT2. Split between MSU and UM, with a 10 Gb/s network between MSU and UM and between each site and the rest of the world. ~20 file server nodes, > 1.9 PB of storage; ~375 compute nodes, > 4,500 job slots.

12 A Robust Network is Key. We wanted to instrument the network connections between the US Tier 1 and all US Tier 2 sites in one uniform way. Primary motives: aid in problem diagnosis and localization; keep an archive of standard, regular measurements over time. USATLAS adopted perfSONAR-PS: end points were implemented in each facility, and a mesh of connection tests was defined between all facilities.

13 perfSONAR-PS deployment at USATLAS. Deploy the same inexpensive hardware at all sites: ~$600 per KOI 1U system with a 1 Gb NIC. All sites used the same Linux-based pS Performance Toolkit LiveCD; most sites now use the new net-install (RPM) distribution. Dedicate one node for throughput and one node for latency at each site, because throughput tests are resource intensive and tend to bias latency test results. Define a common set of tests: a mesh of throughput tests to/from all T1/T2 perfSONAR nodes and a mesh of latency tests to/from all T1/T2 perfSONAR nodes (a full-mesh sketch is shown below). Now augmented with a summary via the Dashboard (more later).
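
As a minimal sketch of the mesh idea (it assumes nothing about the real toolkit configuration files, and the endpoint labels are hypothetical), the number of directed test pairs for a set of endpoints can be enumerated like this:

```python
from itertools import permutations

def directed_mesh(endpoints):
    """All ordered (source, destination) pairs: each link is tested in both directions."""
    return list(permutations(endpoints, 2))

# Hypothetical endpoint labels; the real deployment schedules one throughput
# node and one latency node per participating site.
endpoints = [f"site{i:02d}" for i in range(1, 10)]   # 9 monitored sites
mesh = directed_mesh(endpoints)
print(len(mesh))   # 9 * 8 = 72, matching the 72 throughput and 72 one-way
                   # latency tests quoted on the dashboard slide.
```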

14 The perfSONAR-PS project: a deployable measurement infrastructure. The perfSONAR-PS collaboration comprises several members: ESnet, Fermilab, Georgia Tech, Indiana University, Internet2, SLAC, and the University of Delaware. perfSONAR-PS products are written in the Perl programming language and are available for installation from source or as RPM (Red Hat compatible) packages. perfSONAR-PS is also a major component of the pS Performance Toolkit, a bootable Linux CD containing measurement tools and a GUI, ready to be configured for the desired tests.

15 perfSONAR-PS tools. Web-based GUIs for the admin to configure tests and for users to display measurements, after initial setup of local disk, IP, and NTP. Nodes may be declared part of communities (e.g. LHC or USATLAS) to help identification in a directory lookup service. Two main test types: throughput tests (bwctl), run non-concurrently; and two-way ping latency tests (PingER) plus one-way latency tests with packet-loss accounting (owamp), which can run concurrently. Tests are scheduled, and a Measurement Archive manages the results. Also available: traceroute and ping (i.e. the reverse route from the remote perfSONAR host), on-demand Network Diagnostic Tools (NDT, NPAD), and a pre-installed Cacti.
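
A hedged sketch of what running the two main test types by hand can look like, assuming the toolkit's bwctl and owping command-line clients are installed and the remote node runs the matching daemons; flags and output formats vary between toolkit versions, and the hostname is a placeholder.

```python
import subprocess

# Minimal sketch (assumptions: bwctl and owping are installed locally and the
# remote perfSONAR node runs the corresponding daemons; exact flags and output
# formats differ between toolkit versions). "ps.example.org" is a placeholder.
REMOTE = "ps.example.org"

def run(cmd):
    """Run a measurement command and return its raw text output for inspection."""
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
    return result.stdout + result.stderr

# Throughput test towards the remote node (bwctl negotiates the underlying run).
print(run(["bwctl", "-c", REMOTE, "-t", "20"]))

# One-way latency / packet-loss test in both directions (owamp).
print(run(["owping", REMOTE]))
```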

16 perfSONAR: Web GUI (screenshot)

17 perfSONAR: Throughput Tests web page (screenshot)

18 perfSONAR: Throughput graphs (screenshot)

19 perfSONAR: Latency Tests web page (screenshot)

20 perfSONAR: Latency graph (screenshot). The graph for the current time is shown here, but one can also retrieve older time slices from the archive, or zoom in on a particular time within such a graph.

21 perfSONAR: reverse traceroute (screenshot)

22 Centralized monitoring of our distributed monitoring: the BNL Dashboard. 9 separate T1/T2 sites are monitored, i.e. 18 perfSONAR nodes; a total of 108 critical services, 72 throughput tests, and 72 one-way latency tests. A centralized Dashboard is needed to keep track of the overall mesh. It was developed at BNL (Tom Wlodek) for USATLAS (and now other clouds), first within Nagios (but complex and hard to access), and has now been rewritten as a standalone, portable project accessible by all. Probes monitor proper operation of the critical services on each node, and alert e-mails are sent to site admins on failing services. Probes also retrieve the latest test results for the pre-defined mesh of measurements (throughput and latency); both measurements about a link A-B are measured by A and B. Thresholds on the results set the label (OK, CRITICAL, etc.) and color code (a sketch of such threshold logic follows below). History and time plots of service status and of the mesh of measurements are kept. The result is a compact overview of all USATLAS inter-site network connections (and of the perfSONAR nodes' health).
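
The exact thresholds and status names used by the BNL dashboard are not given in the talk; the following is only an illustrative sketch of how measurement results can be mapped to a label and color code, with made-up threshold values.

```python
# Illustrative sketch only: threshold values and labels below are hypothetical,
# not the dashboard's actual configuration.
def classify_throughput(mbps):
    """Map a throughput measurement (Mb/s) to a dashboard-style label and color."""
    if mbps >= 500:
        return "OK", "green"
    if mbps >= 100:
        return "WARNING", "yellow"
    return "CRITICAL", "red"

def classify_loss(lost, sent):
    """Map one-way packet loss from an owamp-style (lost, sent) count to a label."""
    loss = lost / sent if sent else 1.0
    if loss == 0:
        return "OK", "green"
    if loss < 0.01:
        return "WARNING", "yellow"
    return "CRITICAL", "red"

print(classify_throughput(850))   # ('OK', 'green')
print(classify_loss(3, 600))      # ('WARNING', 'yellow')
```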

23 Dashboard: first implemented in Nagios (screenshot)

24 Dashboard: now a standalone version (screenshot)

25 Dashboard: Primitive Services (screenshot)

26 Dashboard: Service History (screenshot)

27 Dashboard: Throughput Measurement plot (screenshot)

28 Dashboard: Latency Measurement plot (screenshot)

29 Dashboard: other ATLAS clouds (screenshot)

30 Diagnostics. Throughput: notice and localize problems to help debug the network, and help differentiate server problems from path problems. Latency: notice route changes and asymmetric routes; watch for excessive packet loss. Optionally, install additional perfSONAR nodes inside the local network and/or at its periphery to characterize local performance and internal packet loss, and to separate WAN performance from LAN performance (a sketch of that comparison follows below). Do a daily Dashboard check of your own site and of your peers.
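
An illustrative sketch of the WAN-versus-LAN separation idea; the tolerance and throughput numbers are hypothetical, not values from the talk.

```python
# Illustrative sketch: comparing a measurement taken against a perfSONAR node at
# the site border with one taken against a node deep inside the LAN helps decide
# whether a problem is in the WAN path or in the local network.
def locate_problem(wan_to_border_mbps, wan_to_internal_mbps, tolerance=0.8):
    """Return a rough verdict on where a throughput problem lies."""
    if wan_to_internal_mbps >= tolerance * wan_to_border_mbps:
        return "LAN looks clean; if throughput is low, suspect the WAN path"
    return "Border node is fine but the internal node is slow: suspect the LAN"

print(locate_problem(900, 870))   # LAN looks clean ...
print(locate_problem(900, 240))   # ... suspect the LAN
```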

31 Example of diagnostics: asymmetric throughput between the peer sites IU and AGLT2 was documented, then resolved.
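
A hypothetical helper (not from the talk) for flagging such asymmetries automatically; the throughput numbers below are made up.

```python
# Flag directed pairs whose forward and reverse throughput differ by more than a
# chosen factor, as in the IU/AGLT2 case described above.
def asymmetry(fwd_mbps, rev_mbps):
    """Ratio of the slower direction to the faster one (1.0 = symmetric)."""
    lo, hi = sorted((fwd_mbps, rev_mbps))
    return lo / hi if hi else 0.0

# Example numbers, purely illustrative.
pairs = {("MWT2_IU", "AGLT2"): (920.0, 180.0),
         ("BNL", "AGLT2"): (880.0, 860.0)}

for (a, b), (fwd, rev) in pairs.items():
    if asymmetry(fwd, rev) < 0.5:          # more than a factor of 2 difference
        print(f"Asymmetric throughput {a} <-> {b}: {fwd:.0f} vs {rev:.0f} Mb/s")
```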

32 Another example of diagnostics: in the most recent case, right after routing upgrade work, we quickly noticed a small 0.7 ms latency increase. Traceroute showed an unintended minor route change (packets to MSU were going through UM); the router configuration was quickly fixed.
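
A small sketch of how a stored traceroute baseline can be compared against the current route to catch such unintended changes; the hostnames are invented.

```python
# Compare today's traceroute hop list against a stored baseline to notice
# unintended route changes (hostnames below are placeholders).
def route_changed(baseline, current):
    """Return (hop index, old hop, new hop) at the first divergence, or None."""
    for i, (old, new) in enumerate(zip(baseline, current)):
        if old != new:
            return i, old, new
    if len(baseline) != len(current):
        return min(len(baseline), len(current)), None, None
    return None

baseline = ["border.msu.example", "mich-backbone-1.example", "bnl-edge.example"]
current = ["border.msu.example", "um-router.example",
           "mich-backbone-1.example", "bnl-edge.example"]

change = route_changed(baseline, current)
if change:
    print(f"Route change at hop {change[0]}: {change[1]} -> {change[2]}")
```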

33 Prospects after our pilot deployment. perfSONAR-PS has proven extremely useful for USATLAS to date! perfSONAR-PS will be recommended for US T3 sites. perfSONAR is being deployed in other ATLAS clouds: Italy has started and Canada is also in process; the BNL Dashboard is already monitoring the IT cloud (at least for now), and the Dashboard code will be packaged and distributed. perfSONAR is being deployed at LHC T1 sites: LHCOPN already plans to deploy it, and LHCONE is considering perfSONAR-PS for its monitoring. We will continue usage at the USATLAS T1 and T2s, expand to inter-cloud monitoring between T2s of different clouds, and add 10 Gb/s throughput tests. perfSONAR is open source, with a new release roughly twice a year; e.g. work is underway to use a single multi-core node for both throughput and latency. The more test points along the paths the better: integrating information from the backbone and routing points allows a divide-and-conquer approach to problem isolation.

34 Thank you. perfSONAR: Jason Zurawski. USATLAS perfSONAR Dashboard: Nagios version (needs BNL login); standalone Dashboard: Tom Wlodek. AGLT2: our compute summary page. My
