Austrian Federated WLCG Tier-2

Austrian Federated WLCG Tier-2
Peter Oettl, on behalf of Peter Oettl 1, Gregor Mair 1, Katharina Nimeth 1, Wolfgang Jais 1, Reinhard Bischof 2, Dietrich Liko 3, Gerhard Walzel 3 and Natascha Hörmann 3
1 Institute of Astro- and Particle Physics, University of Innsbruck
2 Zentraler Informatikdienst, University of Innsbruck
3 Institute of High Energy Physics, Austrian Academy of Sciences, Vienna

Content
- Introduction
- The Worldwide LHC Computing Grid
- The Austrian Federated Tier-2
- Recent Tests
- Outlook & Conclusion

Introduction
- LHC starts operation in fall 2009
- Austrian institutes participate in the CMS and ATLAS experiments
- The LHC experiments will produce about 15 PB of data per year
- The data need to be stored, processed and made available to over 5000 physicists at more than 500 institutes
- The Worldwide LHC Computing Grid (WLCG) should provide the resources

The WLCG
- Data storage and analysis infrastructure for the LHC high-energy physics community
- Data from the experiments will be distributed around the globe according to a four-tiered model
- Tier-0: located at CERN; primary backup on tape, initial processing and data distribution to the Tier-1s
- Tier-1: 11 large computer centers with round-the-clock support, mass storage and processing facilities; data distribution to the Tier-2s

The WLCG continued
- Tier-2: one or several collaborating computing facilities with sufficient data storage and adequate computing power for Monte Carlo and analysis tasks
- Tier-3: Grid access for individual scientists; can be a local department cluster or even individual PCs
- Based on several Grid infrastructures:
  - EGEE (Enabling Grids for E-sciencE) in Europe
  - OSG (Open Science Grid) in the US
  - NDGF (Nordic Data Grid Facility) in Scandinavia
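For readers who prefer the tier model at a glance, the minimal Python sketch below restates the roles from the two slides above as a plain data structure. The layout is illustrative only and does not correspond to any WLCG software or configuration format.

# Compact restatement of the tier roles from the two slides above.
# Purely descriptive data in an ad hoc layout; not a WLCG configuration format.

WLCG_TIERS = {
    "Tier-0": "CERN: primary backup on tape, initial processing, distribution to Tier-1s",
    "Tier-1": "11 large centers: round-the-clock support, mass storage and processing, distribution to Tier-2s",
    "Tier-2": "federations of computing facilities: Monte Carlo production and user analysis",
    "Tier-3": "Grid access for individual scientists: department clusters or single PCs",
}

for tier, role in WLCG_TIERS.items():
    print(f"{tier}: {role}")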

The WLCG continued
- The infrastructures support different middleware flavors, but key components (security, accounting, file transfer services) are fully interoperable
- The WLCG provides an interface to seamlessly access these infrastructures
- The LHC experiments have developed services on top to operate the infrastructure:
  - Workload Management (DIRAC, AliEn, PanDA, ...)
  - Data Management (PhEDEx, DQ2, ...)
  - User Analysis (Ganga, CRAB, DIRAC, AliEn, ...)
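The experiment frontends listed above (CRAB, Ganga, PanDA, DIRAC, ...) hide the underlying submission machinery. As a rough illustration of what that layer looks like on EGEE/gLite infrastructure, the sketch below writes a trivial JDL job description and hands it to the gLite WMS. The JDL contents, file names and the hostname-printing payload are illustrative assumptions, not taken from the Austrian Tier-2 setup; a valid VOMS proxy (from voms-proxy-init) and the gLite user interface are assumed to be available.

"""Minimal sketch of a job submission through the gLite WMS, the
EGEE-era layer underneath the experiment tools named above."""

import subprocess
from pathlib import Path

# A classic "hello grid" job description in JDL (Job Description Language).
JDL = """\
Executable    = "/bin/hostname";
StdOutput     = "stdout.log";
StdError      = "stderr.log";
OutputSandbox = {"stdout.log", "stderr.log"};
"""


def submit(jdl_text: str, jdl_path: str = "hello.jdl") -> None:
    """Write the JDL to disk and hand it to the workload management system."""
    Path(jdl_path).write_text(jdl_text)
    # -a requests automatic proxy delegation for this single submission.
    subprocess.run(["glite-wms-job-submit", "-a", jdl_path], check=True)


if __name__ == "__main__":
    submit(JDL)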

Austrian Federated Tier-2
- Innsbruck set up its first Grid site in 2003 and participated in ATLAS Data Challenge 2 and in the large-scale production for the Rome workshop in 2005
- Innsbruck has been associated with ATLAS via the German GridKa (Tier-1) cloud since 2008
- Innsbruck currently receives 10% of the data

Austrian Federated Tier-2 cont'd
- Vienna started in 2005
- Supports the CMS computing activities with emphasis on user analysis
- Will store data according to the CMS model:
  - 1/3 general data (real data and simulation)
  - 1/3 group-specific data (SUSY and BTag)
  - 1/3 analysis-specific data

Tier-2 Layout - Innsbruck

Tier-2 Layout - Innsbruck cont'd
- Computing Elements: 2 x LCG-CE with Torque/Maui batch system on SL 4.7
- 28 WNs: 2 x Quad-Core Intel Xeon L5420 CPUs (2.5 GHz), 16 GByte RAM
- 9 WNs: 2 x Dual-Core Intel Xeon 5160 CPUs, 8 GByte RAM

Tier-2 Layout - Innsbruck cont'd
- Storage Element: Disk Pool Manager (DPM)
- 1 DPM head node
- Transtec SUMO RAID, 48 x 1 TByte; extension of 48 x 2 TByte projected
- Starline EasyRAID, 16 x 1 TByte
- 3 DPM disk nodes (2 additional projected)
- 360 (600) MByte/s between WNs and disks

Tier-2 Layout - Innsbruck cont'd
- Core service: top-level BDII (Berkeley Database Information Index) for Central Europe
- Part of the bdii.ce-egee.org DNS pool
- The DNS pool currently contains 6 top-level BDIIs for load balancing
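A quick way to see the load balancing of such a DNS pool is to resolve the alias and inspect the addresses behind it; the short sketch below does just that. The alias name is taken from the slide, while the LDAP port 2170 and the LCG_GFAL_INFOSYS variable mentioned in the comments are the usual top-level BDII conventions and may be set up differently at any given site.

"""Sketch of the DNS-pool load balancing described above: the alias
resolves to several top-level BDII hosts and each lookup picks one."""

import socket

ALIAS = "bdii.ce-egee.org"


def resolve_pool(alias: str):
    """Return all IP addresses currently behind a round-robin DNS alias."""
    _name, _aliases, addresses = socket.gethostbyname_ex(alias)
    return addresses


if __name__ == "__main__":
    for ip in resolve_pool(ALIAS):
        print(ip)
    # Grid clients typically point at the pool via something like
    #   export LCG_GFAL_INFOSYS=bdii.ce-egee.org:2170
    # so each LDAP query lands on one of the (here: six) BDII instances.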

Tier-2 Layout - Vienna

Tier-2 Layout - Vienna
- Computing Element: LCG-CE with Torque/Maui batch system on SL 4.7
- 50 WNs: Sun blades, 2 x Quad-Core Intel Xeon CPUs (2.6 GHz), 16 GByte RAM
- 50 more blades will be added once the upgrade of electric power, cooling and network is finished

Tier-2 Layout - Vienna continued
- Storage Element: DPM
- 1 DPM head node
- 4 DPM disk nodes
- 4 Supermicro RAIDs of 45 TByte each; 6 more will be added when the upgrade is finished
- 2 GBit/s (10 GBit/s) between WNs and disks

Austrian Federated Tier-2 Pledges 2009

                     2009 pledged   ATLAS   CMS    Total   % of pledged
CPU [HEP-SPEC06]     4240           1850    3100   4950    117 %
Disk [TByte]          295             54     220    274     93 %

Austrian Federated Tier-2 Pledges 2010

                     2010 pledged   ATLAS planned   CMS planned   Total planned   % of pledged
CPU [HEP-SPEC06]     4800           1850            7000          8850            184 %
Disk [TByte]          330            134             500           634            192 %
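The Total and % of pledged columns in the two tables above follow directly from Total = ATLAS + CMS and % = 100 * Total / pledged (rounded). The snippet below reproduces them from the slide numbers as a quick consistency check.

# Reproduces the "Total" and "% of pledged" columns of the 2009 and 2010
# pledge tables above: Total = ATLAS + CMS, percentage rounded to integer.

PLEDGES = {
    # year: [(resource, pledged, ATLAS, CMS), ...]
    2009: [("CPU [HEP-SPEC06]", 4240, 1850, 3100),
           ("Disk [TByte]",      295,   54,  220)],
    2010: [("CPU [HEP-SPEC06]", 4800, 1850, 7000),
           ("Disk [TByte]",      330,  134,  500)],
}

for year, rows in PLEDGES.items():
    for resource, pledged, atlas, cms in rows:
        total = atlas + cms
        percent = round(100 * total / pledged)
        print(f"{year} {resource}: total = {total}, {percent}% of pledged")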

Availability and Reliability
- Tier-2 Reliability Report, July 2009
- The availability of Innsbruck dropped in July due to network layout improvements
- AT-HEPHY-VIENNA-UIBK is usually among the top 10 most reliable sites

Recent Tests
- STEP09 (Scale Testing for the Experiment Program 2009)
- HammerCloud (HC) test in July
- HC test in August
- HC test in August, retested

STEP09
- All experiments at nominal rate: production and user analysis stress test
- Production: Innsbruck performed well (95% efficiency)
- User analysis: Innsbruck performed poorly; network overload at many sites
- HC 432: 76% failure rate
- HC 430: 62% failure rate

STEP09 - Bottlenecks identified
- WNs access storage through NAT
- Bandwidth of the 2nd cluster to the SE
- Bandwidth to FZK

Vienna STEP09

HC July
- Disk servers are now connected to the internal network
- HC 525: 62% failure rate; gsi-ftp traffic still routed through NAT
- HC 531: 1% failure rate; rfio traffic routed through the internal network

HC July - continued (plots: rfio and gsi-ftp traffic)

HC August
- HC 574: rfio; 0% failure rate
- HC 575: rfcp / FileStager; 0% failure rate
- HC 579: PanDA; 97% failure rate (misconfiguration of the DPM disk servers)

HC August retested
- HC 585: PanDA, limited to around 150-200 concurrent jobs; 7.7% failure rate
- HC 600: PanDA, limited to around 70-150 concurrent jobs; 2.8% failure rate

Outlook & Conclusion
- Austria participates in the LHC experiments not only in physics but also in computing
- Austria has set up a medium-sized Tier-2 which exceeds its pledges
- Production is running well
- Problems with user analysis jobs were identified and are being addressed:
  - Network bandwidth
  - The number of concurrent analysis jobs needs to be limited to the available bandwidth (see the sketch below)
- The Austrian Federated WLCG Tier-2 will be ready for the LHC start
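The point about matching concurrent analysis jobs to the available bandwidth can be illustrated with a back-of-the-envelope calculation. In the sketch below, the 360 MByte/s figure is the WN-to-disk bandwidth quoted on the Innsbruck storage slide, while the per-job read rate is a purely hypothetical assumption, so the resulting cap is illustrative and not the limit actually configured at the sites.

# Back-of-the-envelope version of the point above: cap the number of
# concurrent analysis jobs so that their combined I/O stays within the
# bandwidth between worker nodes and storage.


def max_concurrent_jobs(site_bandwidth_mb_s: float, per_job_rate_mb_s: float) -> int:
    """Largest job count whose summed read rate fits the available bandwidth."""
    return int(site_bandwidth_mb_s // per_job_rate_mb_s)


if __name__ == "__main__":
    bandwidth = 360.0   # MByte/s between WNs and disks (from the slides)
    per_job = 2.5       # MByte/s per analysis job -- assumed, not measured
    print(max_concurrent_jobs(bandwidth, per_job))   # -> 144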

Thank you for your attention!
More information is available at: http://www.uibk.ac.at/austrian-wlcg-tier-2/
Questions?