ATLAS Tier-2 Computing in Germany: Computing Model, Tier-2 Plans for Germany, Relations to GridKa/Tier-1


ATLAS Tier-2 Computing in Germany
GridKa-TAB, Karlsruhe, 30.9.2005
München

Computing Model
Tier-2 Plans for Germany
Relations to GridKa/Tier-1

ATLAS Offline Computing (tier hierarchy; 1 PC (2004) = ~1 kSpecInt2k)
Event Builder: ~PB/sec in from the detector, 10 GB/sec out to the Event Filter
Event Filter (~7.5 MSI2k): 400 MB/sec to Tier-0, ~5 PB/year; some data for calibration and monitoring goes to the institutes; calibrations flow back
Tier-0: ~5 MSI2k, no simulation
Tier-1: ~10 sites (US, Dutch, French, German (GridKa) regional centers, ...), 18 MSI2k, ~12 PB, 622 Mb/s links
Tier-2: LRZ/RZG Tier-2 ~500 kSI2k, DESY Tier-2 ~500 kSI2k, Freiburg/Wuppertal Tier-2 ~500 kSI2k
Tier-3: LMU Munich, MPI Munich, Uni D..., Uni M...; physics data cache; desktops

Data Volumes and Data Types
RAW data, for primary reconstruction at Tier-0 (and at Tier-1 for reprocessing): 1.6 MB/event, 2 x 10^9 events/year, 3.2 PB/year; 1 copy at Tier-0, 1 copy distributed over ~10 Tier-1s (on tape)
ESD (event summary data: reconstruction objects plus a subset of the raw data), for physics-group analysis at Tier-1: 0.5 MB/event, 1 PB/year; 2 copies distributed over ~10 Tier-1s, on disk
AOD (analysis object data: reconstructed physics objects such as jets, leptons, etc.), for user analysis at Tier-2: 0.1 MB/event, 180 TB/year; 1 copy at each Tier-1 and 1 copy shared among ~3 Tier-2 centers
TAG data (basic event-level information), for fast skimming: 1 kB/event, 2 TB/year, at each Tier-1/Tier-2 center
Simulated data have the same structure, with ~20% of the size of real data
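To make the arithmetic behind these volumes explicit: the yearly volume per data type is simply the event size times the annual event count. A minimal Python sketch, illustrative only, using the per-event sizes and the ~2 x 10^9 events/year quoted above:

    # Rough yearly data volume per ATLAS data type,
    # from per-event size and annual event count (values from the slide above).
    EVENTS_PER_YEAR = 2e9  # ~2 x 10^9 real events per year

    event_size_mb = {
        "RAW": 1.6,    # MB/event
        "ESD": 0.5,
        "AOD": 0.1,
        "TAG": 0.001,  # 1 kB/event
    }

    for dtype, size_mb in event_size_mb.items():
        volume_tb = size_mb * EVENTS_PER_YEAR / 1e6  # MB -> TB
        print(f"{dtype}: {volume_tb:,.0f} TB/year")

    # RAW ~3,200 TB (3.2 PB), ESD ~1,000 TB (1 PB), TAG ~2 TB;
    # AOD comes out at ~200 TB, close to the 180 TB/year quoted on the slide.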

Tier-1/Tier-2 Tasks
Tier-1:
Physics-group "organized" analysis of ESD data
Calibration-group "organized" analysis of ESD (and RAW) data
ATLAS-wide reprocessing of RAW data, 1-2 times per year
Main repository for ESD and AOD (real and simulated)
No user-level analysis
Tier-2:
User-level "chaotic" analysis of AOD data
Organized simulation production for ATLAS and the physics groups
Analysis of group and user data
Repository for AOD, group data sets and some user data
... eventually complemented by Tier-3

Average Tier-2 requirements (2008)

Disk (TB):
  Raw                   1.5
  General ESD (curr.)   0.0
  AOD                  86.0
  TAG                   3.0
  ESD Sim (curr.)       6.0
  AOD Sim              20.0
  TAG Sim               1.0
  User Group           40.0
  User Data            60.0
  Total               333.0

CPU (kSI2k):
  Reconstruction       65
  Simulation          180
  Analysis            290
  Total               540

Proportional scaling is not required: a Tier-2 can focus more on simulation (mostly CPU), on AOD analysis (mostly disk), or on user analysis (both).

Evolution of Tier-2 requirements
Resource needs are basically proportional to the accumulated data, with a slight kink in 2009 due to projected high-luminosity running (trigger rate constant, larger events).

T2 cloud growth:
  Year          2007      2008      2009      2010      2011      2012
  Disk (TB)    1606.60   8747.98  15904.56  25815.10  35725.63  45654.33
  CPU (kSI2k)  3653.24  19938.74  31767.93  53014.37  71121.85  89229.33
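The "proportional to accumulated data" statement can be read directly off the table: the disk requirement grows by a roughly constant amount per year of data taking, with a step once the projected high-luminosity running is included. A small Python sketch, illustrative only, printing the year-to-year increments of the Disk (TB) row above:

    # Year-to-year growth of the total Tier-2 cloud disk requirement.
    # Values (TB) copied from the table above.
    disk_tb = {2007: 1606.60, 2008: 8747.98, 2009: 15904.56,
               2010: 25815.10, 2011: 35725.63, 2012: 45654.33}

    years = sorted(disk_tb)
    for prev, curr in zip(years, years[1:]):
        delta = disk_tb[curr] - disk_tb[prev]
        print(f"{prev} -> {curr}: +{delta:,.0f} TB")

    # Increments are ~7,100 TB/year at first and ~9,900 TB/year from 2010 on,
    # i.e. the need grows with the accumulated data rather than jumping.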

Networking and Tier-2s
Tier-2 to Tier-1 networking requirements are presumably low:
AOD, Tier-1 --> Tier-2, twice per year
Physics-group data sets, Tier-1 --> Tier-2, continuous
Simulated data, Tier-2 --> Tier-1, continuous
Without job traffic this amounts to ~17.5 MB/s for an average Tier-2; 1 Gbps should be sufficient for peak load and leave some headroom.
[Plot: "ATLAS Average T1 to T2s traffic", nominal rate in MB/s per month, 2008-2010, for ATLAS and ATLAS HI running]
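The 1 Gbps statement is straightforward to check: ~17.5 MB/s corresponds to about 140 Mbit/s, i.e. roughly 14% of a 1 Gbps link. A minimal Python sketch, illustrative only:

    # Check that the quoted average Tier-1/Tier-2 rate fits easily into 1 Gbps.
    avg_rate_mb_s = 17.5                 # average rate from the slide, MB/s
    avg_rate_mbit_s = avg_rate_mb_s * 8  # -> 140 Mbit/s
    link_mbit_s = 1000                   # nominal 1 Gbps link

    fraction = avg_rate_mbit_s / link_mbit_s
    print(f"average load: {avg_rate_mbit_s:.0f} Mbit/s "
          f"({fraction:.0%} of a 1 Gbps link)")
    # ~140 Mbit/s, about 14% of the link, leaving headroom for peak transfers.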

ATLAS Tier-2 in Germany - Plans
A German Tier-2 structure is emerging, with three 'average-size' Tier-2s planned:
DESY, standalone
Freiburg and Wuppertal as a 'remote' federation
Munich, with RZG and LRZ as a 'close' federation
There are firm pledges from the institutional sites DESY and MPI-M/RZG; funding for the university sites is still unclear.
Regular meetings have started to share experience with ATLAS-specific installation issues.
The next goal is to participate in DC3/Computing Commissioning, beginning this winter.

ATLAS Tier-2 in Germany - Status
DESY: fully functional LCG installation, active in LCG SC3 (see the DESY talk)
Wuppertal: several years of experience with EDG/LCG/D0Grid; operational testbed set up, installation on the super-cluster (AliceNext) under way
Freiburg: LCG 2.6 installed on a testbed
Munich, LMU/LRZ: existing shared cluster (250 CPUs) used for DC2/Rome production via NorduGrid; LCG testbed installed
Munich, MPG/RZG: new cluster with 140 CPUs and 24 TB of disk, shared between ATLAS and Magic; LCG testbed installed at MPI, port to the RZG cluster in progress

GridKa/Tier-1 - Tier-2 Relations
Networking:
Transfer of AOD, simulated data and group data sets
~20 MB/s on average between a single Tier-2 and the Tier-1
Storage for simulated data:
Tier-1 sites are the primary storage for events simulated at Tier-2/3
These resources are already considered in the Computing Model and in the planning
Presumably dataset-driven, not a static Tier-2/Tier-1 relation
Support for Grid software/deployment:
Qualified and responsible support is crucial
Already heavily used, cf. the MPI LCG setup and the LMU/LRZ installation

GridKa/Tier-1 - Tier-2 Relations (continued)
Support for operations: SC3, SC4, DC4, ...
Serve as a knowledge center
Represent the interests of the associated Tier-2s