Grid Computing Activities at KIT

Grid Computing Activities at KIT
Meeting between NCP and KIT, 21.09.2015
Manuel Giffels
Karlsruhe Institute of Technology
Institute of Experimental Nuclear Physics & Steinbuch Center for Computing
(Title image courtesy of Argonne National Laboratory)
KIT - University of the State of Baden-Württemberg and National Research Center of the Helmholtz Association
www.kit.edu

Grid Computing Today
Worldwide LHC Computing Grid (WLCG) tier structure: a Tier 0, 7(8) Tier 1 sites, ~50 Tier 2 sites, plus Tier 3 centers
More than 50 CMS centers in more than 20 countries (flags taken from Wikipedia: http://de.wikipedia.org/wiki/liste_der_nationalflaggen; Christoph Wissing)
Tier-2 candidates: Thailand, Malaysia
- Tier 0: prompt reconstruction, store RAW data and export to the Tier 1s
- Tier 1: re-reconstruction, long-term storage of RAW and MC data
- Tier 2: MC production, user analysis
- Tier 3: mainly user analysis
WLCG resources (REBUS): 590k logical CPU cores, 325,000 TB of disk, 257,000 TB of tape, 2 million jobs/day, 20 GB/s network transfers
German CMS contribution: GridKa TIER 1 ~10% (2015), Aachen/Hamburg TIER 2 ~8% (2015)
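
Purely for reference, the tier roles listed above can be collected into a small data structure; a minimal Python sketch, with the role descriptions taken from this slide:

```python
# Roles of the WLCG tiers as described on this slide (CMS view).
WLCG_TIER_ROLES = {
    "Tier 0": ["prompt reconstruction", "store RAW data and export to Tier 1s"],
    "Tier 1": ["re-reconstruction", "long-term storage of RAW and MC data"],
    "Tier 2": ["MC production", "user analysis"],
    "Tier 3": ["mainly user analysis"],
}

for tier, roles in WLCG_TIER_ROLES.items():
    print(f"{tier}: {', '.join(roles)}")
```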

GridKa TIER 1

GridKa History
- Established in 2002 at the request of the German HEP community
- First customers during the startup phase: BABAR, D0, CDF
- Currently supporting: ALICE, ATLAS, CMS, LHCb, BABAR, Belle (II), COMPASS, Auger

GridKa Farm
Batch system: Univa Grid Engine
- ~630 worker nodes, ~16,300 logical cores, ~26 TB RAM
- ~13,000 job slots for single- and multi-core jobs
Storage system: dCache
- ~10,000 TB disk storage
- ~46,000 TB tape capacity
Network connectivity:
- 100 Gbit/s to LHCOPN (WLCG Tier 0-Tier 1 private network)
- 100 Gbit/s to LHCONE (LHC Open Network Environment)
Very stable operation!
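
As an illustration of how single- and multi-core slots on a Grid Engine-style batch farm are typically requested, here is a minimal submission sketch in Python. It is a sketch only: the parallel environment name ("smp"), the runtime and memory limits, and the job script name are assumptions, not GridKa's actual configuration.

```python
#!/usr/bin/env python3
"""Minimal sketch: submit a multi-core job to a (Univa) Grid Engine batch system.

The parallel environment "smp", the resource limits and the script name are
illustrative assumptions, not the actual GridKa site configuration.
"""
import subprocess


def submit_multicore_job(script="run_analysis.sh", cores=8):
    cmd = [
        "qsub",
        "-N", "cms_analysis",      # job name
        "-pe", "smp", str(cores),  # request <cores> slots in a parallel environment (assumed name)
        "-l", "h_rt=24:00:00",     # wall-clock limit (assumed)
        "-l", "h_vmem=2G",         # memory per slot (assumed)
        "-o", "job.out", "-e", "job.err",
        script,
    ]
    # qsub prints e.g. "Your job 12345 (...) has been submitted"
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout


if __name__ == "__main__":
    print(submit_multicore_job())
```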

GridKa T1 Resources for CMS
Pledged resources:
- CPU: 1382 job slots, 26,850 HEP-SPEC06
- 2600 TB disk space
- 7400 TB tape space
- Opportunistic resource allocation is allowed; actual CPU usage is normally above the pledge
Resources for the national CMS community:
- CPU: opportunistic resource usage
- Separate storage instance shared with ATLAS: 340 TB total disk space, 170 TB allocated to CMS
- Access to tape possible
Support for the national analysis communities (Aachen, Hamburg, Karlsruhe)
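
As a back-of-envelope illustration (not an official figure), the pledge numbers above can be related to each other with a few lines of Python, using only values quoted on this slide:

```python
# Back-of-envelope arithmetic with the CMS pledge figures quoted above.
job_slots = 1382
hepspec06_total = 26850
disk_tb = 2600
tape_tb = 7400

hs06_per_slot = hepspec06_total / job_slots  # ~19.4 HEP-SPEC06 per job slot
tape_to_disk = tape_tb / disk_tb             # ~2.8x more tape than disk pledged

print(f"~{hs06_per_slot:.1f} HEP-SPEC06 per job slot")
print(f"Tape/disk pledge ratio: ~{tape_to_disk:.1f}")
```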

GridKa Contribution to CMS
- 12% of CMS data (both MC and recorded data) stored at GridKa
- Around 10% of CMS TIER 1 jobs running at GridKa

CMS GridKa TIER 1 Support Team
Expert rotation scheme:
- Shift leader: 3 senior group members, each taking shift leadership for 4 to 6 weeks
- Shifter: 4 PhD students, each taking shifts for 1 week
Support effort is credited as CMS service work
2.6 FTE per year dedicated to CMS GridKa TIER 1 support and development

CMS Support Team Operations
Regular monitoring of CMS operations at GridKa TIER 1:
- Data transfer and storage management
- Management of CMS site-specific services
Troubleshooting and issue follow-up:
- Responding to and opening tickets
- Monitoring of mailing lists
Attending and reporting to weekly coordination meetings:
- CMS site support and computing operations
- Institute computing meeting
- GridKa TIER 1 middleware meeting

Education Program
Annual International GridKa School on advanced computing technologies, organized by the GridKa TIER 1 center
- 50% plenary presentations, 50% hands-on courses
- 2014: 120 participants from 19 countries
Audience:
- Grid and cloud newbies, advanced users, administrators
- Graduate and PhD students
- Also participants from industry

Development Projects
Many more computing activities are ongoing at IEKP:
- Development of CMS data transfer and data management tools
- High-throughput data analysis cluster based on local SSD caching
- Virtualisation and cloud computing: institute desktop cloud, Freiburg HPC cloud, planned cooperation with commercial cloud service providers
- HappyFace (meta-monitoring tool) development
[Figure: caching cluster architecture - physicists' data requests are handled by a cache scheduler that maps files onto cache drives (SSD/HD) on the worker nodes, backed by the network file space]
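
To illustrate the idea behind such a cache scheduler, the sketch below maps requested files deterministically onto worker-node caches and falls back to the network file space on a miss. This is an illustrative sketch only, not the IEKP implementation; the worker names, paths and the hashing scheme are assumptions.

```python
"""Illustrative sketch of a cache scheduler for a coordinated worker-node cache,
in the spirit of the caching cluster described above. Not the IEKP implementation;
worker names, paths and the hashing scheme are assumptions."""
import hashlib

WORKERS = ["worker01", "worker02", "worker03"]  # hypothetical cache nodes


def responsible_worker(lfn: str) -> str:
    """Deterministically map a logical file name to one worker-node cache."""
    digest = int(hashlib.sha1(lfn.encode()).hexdigest(), 16)
    return WORKERS[digest % len(WORKERS)]


def locate(lfn: str, cache_contents: dict) -> str:
    """Return where a job should read the file from: local cache or network file space."""
    worker = responsible_worker(lfn)
    if lfn in cache_contents.get(worker, set()):
        return f"{worker}:/cache/{lfn}"      # cache hit: read from local SSD/HD
    return f"/network/filespace/{lfn}"       # cache miss: fall back to the shared file space


if __name__ == "__main__":
    lfn = "store/data/run2015/file001.root"
    cache = {responsible_worker(lfn): {lfn}}  # pretend this file is already cached
    print(locate(lfn, cache))                                 # -> local cache
    print(locate("store/data/run2015/file002.root", cache))   # -> network file space
```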

Questions?