Distributed Computing Grid Experiences in CMS Data Challenge


A. Fanfani, Dept. of Physics and INFN, Bologna
2nd GGF School on Grid Computing, July 2004

Outline:
o Introduction about LHC and CMS
o CMS Production on Grid
o CMS Data Challenge

Introduction
o Large Hadron Collider
o CMS (Compact Muon Solenoid) detector
o CMS Data Acquisition
o CMS Computing Activities

Large Hadron Collider
LHC proton-proton collisions:
o Beam energy: 7 TeV
o Luminosity: 10^34 cm^-2 s^-1
o Data taking: > 2007
o Bunch-crossing rate: 40 MHz
o ~20 p-p collisions per bunch crossing, i.e. ~10^9 p-p collisions per second (~GHz)

CMS detector
(Figure: the CMS detector, with the two colliding proton beams indicated.)

CMS Data Acquisition
o Bunch crossing at 40 MHz; each event is ~1 MB in size, giving an interaction rate of ~1 GHz (~1 PB/sec)
o Level 1 Trigger (special hardware): 75 kHz (75 GB/sec)
o High Level Trigger (PC farm): 100 Hz (100 MB/sec) to data recording and offline analysis
The online system is a multi-level trigger whose job is to filter out uninteresting events and reduce the data volume.
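To see where the quoted bandwidths come from, here is a small back-of-the-envelope calculation (not part of the original slides) multiplying each trigger-level rate by the ~1 MB event size:

    # Illustrative data-rate arithmetic for the CMS trigger chain.
    EVENT_SIZE_MB = 1.0                    # ~1 MB per event

    stages = {
        "interaction rate":    1.0e9,      # ~10^9 collisions/s (~GHz)
        "Level 1 Trigger":     75.0e3,     # 75 kHz, special hardware
        "High Level Trigger":  100.0,      # 100 Hz, PC farm -> recording
    }

    for name, rate_hz in stages.items():
        print(f"{name:20s} {rate_hz:10.3e} Hz -> {rate_hz * EVENT_SIZE_MB:10.3e} MB/s")
    # -> ~1e9 MB/s (~1 PB/s), 7.5e4 MB/s (75 GB/s), 1e2 MB/s (100 MB/s)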

CMS Computing
Large numbers of events will become available once the detector starts collecting data, requiring large-scale distributed computing and data access:
o must handle PetaBytes per year
o tens of thousands of CPUs
o tens of thousands of jobs
o heterogeneity of resources: hardware, software, architecture and personnel
o physical distribution of the CMS Collaboration

CMS Computing Hierarchy
(Diagram of the tiered computing model: PB/sec from the online system and ~100 MB/sec of recorded data into Tier-0; 1 PC* = PIII 1 GHz.)
o Tier-0 (CERN computer centre, ~10K PCs*): filters raw data, data reconstruction, data recording, distribution to Tier-1s
o Tier-1 (regional centres, e.g. France, Italy, Fermilab; ~2K PCs, ~2.4 Gbit/s links): permanent data storage and management, data-heavy analysis, re-processing, simulation, regional support
o Tier-2 (~500 PCs, 0.6-2 Gbit/s links): well-managed disk storage, simulation, end-user analysis
o Tier-3 (institutes and workstations, 100-1000 Mbit/s links)

CMS Production and Analysis
The main computing activity of CMS is currently the simulation, with Monte Carlo based programs, of how the experimental apparatus will behave once it is operational. There is a long-term need for large-scale simulation efforts to:
o optimise the detectors and investigate any possible modifications required to the data acquisition and processing
o better understand the physics discovery potential
o perform large-scale tests of the computing and analysis models
The preparation and building of a computing system able to treat the data being collected proceeds through sequentially planned steps of increasing complexity (Data Challenges).

CMS Monte Carlo production chain
(Diagram: example process p p -> H -> Z Z -> e+ e- mu+ mu-, flowing through the chain below.)
o Generation (CMKIN): Monte Carlo generation of the proton-proton interaction, based on PYTHIA; CPU time depends strongly on the physics process; output: ntuple files (HBOOK zebra)
o Simulation (CMSIM/OSCAR): simulation of tracking in the CMS detector, based on GEANT3/GEANT4 (toolkits for the simulation of the passage of particles through matter); very CPU intensive, non-negligible I/O requirements; output: zebra/POOL files
o Digitization, Reconstruction, Analysis (ORCA): reproduction of detector signals (Digis), simulation of the trigger response, reconstruction of the physics information for final analysis; output: POOL files
POOL (Pool Of persistent Objects for LHC) is used as the persistency layer.
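As a purely illustrative sketch (program names and file formats taken from the slide, everything else assumed), the chain can be modelled as a sequence of steps where each program consumes the previous step's output:

    from dataclasses import dataclass

    @dataclass
    class Step:
        program: str    # CMS program implementing the step
        reads: str      # input it consumes
        writes: str     # output it produces

    chain = [
        Step("CMKIN (PYTHIA generation)", "run parameters", "HBOOK/zebra ntuples"),
        Step("CMSIM/OSCAR (GEANT simulation)", "HBOOK/zebra ntuples", "zebra/POOL files"),
        Step("ORCA (digitization/reconstruction)", "zebra/POOL files", "POOL files"),
    ]

    def run_chain(dataset: str) -> None:
        """Print the data flow for one dataset; a real production job would run
        each program and register its output files in a catalogue."""
        current = "run parameters"
        for step in chain:
            assert step.reads == current   # each step consumes the previous output
            print(f"{dataset}: {step.program}: {step.reads} -> {step.writes}")
            current = step.writes

    run_chain("example_dataset")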

CMS Data Challenge 2004
Planned to reach a complexity scale equal to about 25% of that foreseen for LHC initial running.
Pre-Challenge Production (PCP) in 2003/04:
o simulation and digitization of 70 million events needed as input for the Data Challenge (digitization still running): ~750K jobs, ~3500 KSI2000 months, ~700K files, ~80 TB of data
o classic and Grid (CMS/LCG-0, LCG-1, Grid3) productions
Data Challenge (DC04):
o reconstruction of data for a sustained period at 25 Hz
o data distribution to Tier-1 and Tier-2 sites
o data analysis at remote sites
o demonstrate the feasibility of the full chain: generation, simulation and digitization (PCP) feeding the 25 Hz reconstruction at Tier-0, with the reconstructed data analysed at Tier-1 and Tier-2 sites

CMS Production
o Prototypes of CMS distributed production based on Grid middleware were used within the official CMS production system:
  - experience on LCG
  - experience on Grid3

CMS permanent production
(Chart: number of datasets produced per month during 2002-2004 — Spring02 and Summer02 productions, CMKIN, CMSIM + OSCAR, digitisation — with the pre-DC04 and DC04 start dates marked.)
The system is evolving into a permanent production effort.

CMS Production tools (OCTOPUS)
o RefDB: central SQL DB at CERN; contains the production requests with all the parameters needed to produce a dataset and the details about the production process
o MCRunJob (or CMSProd): tool/framework for job preparation and job submission; modular (plug-in approach) to allow running both in a local and in a distributed environment (hybrid model)
o BOSS: real-time job-dependent parameter tracking; the running job's standard output/error are intercepted and the filtered information is stored in the BOSS database (see the sketch below)
The CMS production tools are interfaced to the Grid using the implementations of several projects:
o LHC Computing Grid (LCG), based on EU middleware
o Grid3, Grid infrastructure in the US
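The BOSS idea of intercepting a job's output and keeping filtered values in a database can be pictured with a minimal sketch (sqlite storage, made-up filter patterns and table layout; not the real BOSS schema):

    import re, sqlite3, subprocess

    # Patterns extracting job-dependent parameters from stdout (illustrative).
    FILTERS = {
        "events_done": re.compile(r"processed\s+(\d+)\s+events"),
        "output_file": re.compile(r"wrote output file\s+(\S+)"),
    }

    def run_and_track(job_id: int, command: list[str], db: str = "boss.db") -> None:
        """Run the job, filter its stdout/stderr line by line, store matches in a DB."""
        conn = sqlite3.connect(db)
        conn.execute("CREATE TABLE IF NOT EXISTS job_params (job_id INT, key TEXT, value TEXT)")
        proc = subprocess.Popen(command, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT, text=True)
        for line in proc.stdout:
            for key, pattern in FILTERS.items():
                m = pattern.search(line)
                if m:
                    conn.execute("INSERT INTO job_params VALUES (?, ?, ?)",
                                 (job_id, key, m.group(1)))
                    conn.commit()
        proc.wait()

    # Example: run_and_track(42, ["./orca_job.sh"])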

Pre-Challenge Production setup
(Diagram of the PCP workflow:)
o a Physics Group asks for a new dataset in RefDB; the Site Manager starts an assignment
o McRunjob plus a plug-in (CMSProd) prepares the jobs; data-level queries go to RefDB, job-level queries to the BOSS DB
o local running: shell scripts submitted to the local batch manager
o LCG (CMS/LCG-0/1): JDL jobs submitted to the Grid scheduler and executed on a computer farm; file information registered in RLS
o Grid3: DAG jobs run through DAGMan (MOP), with the Chimera VDL virtual data catalogue and planner
(Arrows in the diagram distinguish pushing data or info from pulling info.)

CMS/LCG Middleware and Software
Use as much as possible the high-level Grid functionalities provided by LCG.
LCG middleware:
o Resource Broker (RB)
o Replica Manager and Replica Location Service (RLS)
o GLUE information schema and Information Index
o Computing Elements (CEs) and Storage Elements (SEs)
o User Interfaces (UIs)
o Virtual Organization Management Servers (VO) and clients
o GridICE monitoring
o Virtual Data Toolkit (VDT)
o etc.
CMS software is distributed as RPMs and installed on the CEs; the CMS production tools are installed on a User Interface.

CMS production components interfaced to LCG middleware
(Diagram: RefDB and dataset metadata on the CMS side; the UI runs McRunjob and BOSS and submits JDL to the Resource Broker, which consults the bdii information index and the RLS; jobs run on CEs with the CMS software installed, and Worker Nodes read/write data on SEs.)
o Production is managed from the User Interface with McRunjob/BOSS
o Computing resources are matched by the Resource Broker to the job requirements (installed CMS software, MaxCPUTime, etc.); a JDL sketch is given below
o Output data are stored on an SE and registered in RLS
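A rough idea of such a job description, generated from the UI, is sketched here; the JDL attribute values and the submission command are plausible examples of LCG-2 usage of the time, not the actual McRunjob output:

    # Sketch: generate a JDL file for one production job (values are made up).
    # The Requirements expression drives the Resource Broker matchmaking: it asks
    # for a CE advertising the CMS software tag and a long enough MaxCPUTime
    # (attribute names follow the GLUE schema used in LCG-2).
    def make_jdl(executable: str, cms_sw_tag: str, max_cpu_minutes: int) -> str:
        return (
            f'Executable    = "{executable}";\n'
            f'StdOutput     = "job.out";\n'
            f'StdError      = "job.err";\n'
            f'InputSandbox  = {{"{executable}"}};\n'
            f'OutputSandbox = {{"job.out", "job.err"}};\n'
            f'Requirements  = Member("{cms_sw_tag}", '
            f'other.GlueHostApplicationSoftwareRunTimeEnvironment) '
            f'&& other.GlueCEPolicyMaxCPUTime >= {max_cpu_minutes};\n'
        )

    with open("job.jdl", "w") as f:
        f.write(make_jdl("cmsim_job.sh", "VO-cms-CMSIM-133", 600))
    # From the UI the job would then be submitted to the RB, e.g. with
    # "edg-job-submit job.jdl" (command name quoted as an assumption).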

(Plots, 1 month of activity: number of jobs in the system as a function of time, and distribution of jobs over the executing Computing Elements.)

Production on Grid: CMS-LCG
Resources: about 170 CPUs and 4 TB on CMS/LCG-0 and LCG-1.
o Sites: Bari, Bologna, CNAF, Ecole Polytechnique, Imperial College, Islamabad, Legnaro, Taiwan, Padova, Iowa, plus sites of the south testbed (Italy-Spain)/Grid.it
CMS-LCG Regional Center statistics (Gen+Sim on LCG; CMS/LCG-0 from Aug 03, LCG-1 from Dec 03):
o 0.5 Mevts of heavy CMKIN: ~2000 jobs, ~8 hours each
o 2.1 Mevts of CMSIM+OSCAR: ~8500 jobs, ~10 hours each, ~2 TB of data
(Plot: assigned vs produced events as a function of date.)

LCG: results and observations
CMS official production on early deployed LCG implementations:
o 2.6 million events (10K long jobs), 2 TB of data
o overall job efficiency ranging from 70% to 90%
The failure rate varied depending on the incidence of some problems:
o RLS unavailability a few times; in those periods the job failure rate could increase up to 25-30% (single point of failure)
o instability due to site mis-configuration, network problems, local scheduler problems and hardware failures, with an overall inefficiency of about 5-10%
o a few % due to service failures
The success rate on LCG-1 was lower than on CMS/LCG-0 (efficiency ~60%): less control over the sites, less support for services and sites (also due to the Christmas period). The major difficulties were identified in keeping the distributed sites consistently configured.
Good efficiencies and stable conditions of the system compared with what was obtained in previous challenges:
o showing the maturity of the middleware and of the services, provided that continuous and rapid maintenance is guaranteed by the middleware providers and by the involved site administrators

USCMS/Grid3 Middleware & Software
Use as much as possible the low-level Grid functionalities provided by basic components. A Pacman package encoded the basic VDT-based middleware installation, providing services from:
o Globus (GSI, GRAM, GridFTP)
o Condor (Condor-G, DAGMan, ...)
o information service based on MDS
o monitoring based on MonALISA + Ganglia
o VOMS from the EDG project
o etc.
Additional services can be provided by the experiment, e.g.:
o Storage Resource Manager (SRM), dCache for storing data
The CMS production tools run on the MOP master.

CMS/Grid3 MOP Tool
MOP (MOnteCarlo distributed Production) is a system for packaging production processing jobs into DAGMan format:
o jobs are created/submitted from the MOP master site at FNAL (McRunjob, mop_submitter, DAGMan, Condor-G, GridFTP)
o mop_submitter wraps McRunjob jobs in DAG format at the MOP master site
o DAGMan runs the DAG jobs on remote sites through their Globus JobManagers via Condor-G
o a Condor-based match-making process selects resources
o results are returned using GridFTP to dCache at FNAL
(Diagram: MOP master at FNAL driving remote sites 1..N, each with a Globus gatekeeper, batch queue, GridFTP server and compute nodes.)
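To give a flavour of the DAG packaging, the following sketch writes one Condor-G submit file per production job and a .dag file listing them (file contents are simplified assumptions, not the actual MOP output):

    # Sketch: package N independent production jobs into a DAGMan DAG.
    def write_dag(n_jobs: int, gatekeeper: str) -> None:
        with open("production.dag", "w") as dag:
            for i in range(n_jobs):
                submit = f"job_{i}.sub"
                with open(submit, "w") as sub:
                    # Minimal Condor-G submit description: run through the remote
                    # Globus jobmanager (gatekeeper contact string is made up).
                    sub.write(
                        "universe        = globus\n"
                        f"globusscheduler = {gatekeeper}\n"
                        "executable      = run_cmsim.sh\n"
                        f"output          = job_{i}.out\n"
                        f"error           = job_{i}.err\n"
                        "queue\n"
                    )
                dag.write(f"JOB job{i} {submit}\n")
            # Independent jobs: no PARENT/CHILD lines, so DAGMan only throttles
            # submission and handles retries.
            for i in range(n_jobs):
                dag.write(f"RETRY job{i} 3\n")

    write_dag(4, "gatekeeper.example.edu/jobmanager-pbs")
    # The DAG would then be submitted with "condor_submit_dag production.dag".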

Production on Grid: Grid3
(Plots: number of events produced per day, and distribution of usage (in CPU-days) by site in Grid2003, over a 150-day period starting Nov 03.)

Production on Grid: Grid3
Resources:
o US CMS canonical resources (Caltech, UCSD, Florida, FNAL): 500-600 CPUs
o Grid3 shared resources (17 sites): over 2000 CPUs (shared); realistic usage from a few hundred up to 1000
USMOP Regional Center statistics (simulation on Grid3, Aug 03 - Jul 04):
o 3 Mevts of CMKIN: ~3000 jobs, ~2.5 min each
o 17 Mevts of CMSIM+OSCAR: ~17000 jobs, a few days each (20-50 h), ~12 TB of data
(Plot: assigned vs produced events as a function of date.)

Grid3: results and observations
Massive CMS official production on Grid3:
o 17 million events (17K very long jobs), 12 TB of data
o overall job efficiency ~70%
Reasons for job failures:
o CMS application bugs: ~ a few %
o no significant failure rate from the Grid middleware per se, although it can generate high loads and the infrastructure relies on a shared filesystem
o most failures due to normal system issues: hardware failures, NIS and NFS problems, disks filling up, reboots
Service-level monitoring needs to be improved: a service failure may cause all the jobs submitted to a site to fail.

CMS Data Challenge
o CMS Data Challenge overview
o LCG-2 components involved

Definition of the CMS Data Challenge 2004
Aim of DC04 (March-April 2004):
o reach a sustained 25 Hz reconstruction rate in the Tier-0 farm (25% of the target conditions for LHC startup)
o register data and metadata to a catalogue
o transfer the reconstructed data to all Tier-1 centers
o analyze the reconstructed data at the Tier-1s as they arrive
o publicize to the community the data produced at the Tier-1s
o monitor and archive performance metrics of the ensemble of activities for debugging and post-mortem analysis
Not a CPU challenge, but a full chain demonstration!

DC04 layout
(Diagram: at the Tier-0, a fake on-line process fed from RefDB and the General Distribution Buffer drives ORCA RECO jobs at 25 Hz writing to Castor; Tier-0 data distribution agents, steered by the Transfer Management DB (TMDB) and the POOL RLS catalogue (LCG-2 services), move the data to the Export Buffers; Tier-1 agents pull the data into Tier-1 MSS storage, where ORCA analysis jobs run; data are replicated to Tier-2 storage, where physicists run local ORCA jobs.)

Main Aspects
Reconstruction at Tier-0 at 25 Hz.
Data distribution:
o an ad-hoc developed Transfer Management DataBase (TMDB) was used, with a set of transfer agents communicating through the TMDB
o the agent system was created to fill the gap in the EDG/LCG middleware of a mechanism for large-scale (bulk) scheduling of transfers (a sketch of the pattern follows)
Support a (reasonable) variety of data transfer tools, each with an agent at Tier-0 copying data from the General Buffer to the appropriate Export Buffer (EB):
o SRB (Storage Resource Broker): SRB Export Buffer feeding the Lyon, RAL and GridKA Tier-1s (CASTOR, HPSS, Tivoli MSS)
o LCG Replica Manager: LCG SE Export Buffer feeding the CNAF and PIC Tier-1s (CASTOR MSS)
o SRM (Storage Resource Manager): SRM Export Buffer feeding the FNAL Tier-1 (dCache/Enstore MSS)
Use a single file catalogue (accessible from the Tier-1s):
o RLS used for data and metadata by all transfer tools
Monitor and archive resource and process information:
o MonALISA used on almost all resources
o GridICE used on all LCG resources (including WNs)
o LEMON on all IT resources
o ad-hoc monitoring of TMDB information
Job submission at the Regional Centers to perform analysis.
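The agent pattern around the TMDB can be illustrated with a small polling loop; the table layout, the file states and the copy command are assumptions made for the illustration, since the real TMDB schema is not described in the slides:

    import sqlite3, subprocess, time

    def export_buffer_agent(db_path: str, export_buffer: str) -> None:
        """Tier-0 agent: move files flagged 'at_tier0' into its Export Buffer."""
        conn = sqlite3.connect(db_path)
        while True:
            rows = conn.execute(
                "SELECT guid, pfn FROM files WHERE state = 'at_tier0' LIMIT 50").fetchall()
            for guid, pfn in rows:
                # Copy with whatever transfer tool this agent wraps
                # (globus-url-copy shown purely as an example).
                dest = f"{export_buffer}/{guid}"
                ok = subprocess.run(["globus-url-copy", pfn, dest]).returncode == 0
                if ok:
                    conn.execute(
                        "UPDATE files SET state = 'in_export_buffer', pfn = ? WHERE guid = ?",
                        (dest, guid))
                    conn.commit()
            time.sleep(30)   # poll the TMDB again; Tier-1 agents run a similar loop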

Processing Rate at Tier-0
Reconstruction jobs at Tier-0 produce data and register them into RLS.
o about 30M events processed
o the Tier-1s at CNAF, FNAL and PIC generally kept up
Event processing rate:
o got above 25 Hz on many short occasions
o but only one full day above 25 Hz with the full system
(Plots: Tier-0 events processed and event processing rate as a function of time.)

LCG-2 in DC04
Aspects of DC04 involving LCG-2 components:
o register all data and metadata to a world-readable catalogue: RLS
o transfer the reconstructed data from Tier-0 to the Tier-1 centers: data transfer between LCG-2 Storage Elements
o analyze the reconstructed data at the Tier-1s as the data arrive: real-time analysis with the Resource Broker on LCG-2 sites
o publicize to the community the data produced at the Tier-1s: straightforward using the usual Replica Manager tools
o end-user analysis at the Tier-2s (not really a DC04 milestone): first attempts
o monitor and archive resource and process information: GridICE
The full chain (except the Tier-0 reconstruction) was done in LCG-2.

Description of the CMS/LCG-2 system
o RLS at CERN with an Oracle backend
o dedicated information index (bdii) at CERN (provided by LCG); CMS adds its own resources and removes problematic sites
o dedicated Resource Broker at CERN (provided by LCG); other RBs available at CNAF and PIC, in the future to be used in cascade
o official LCG-2 Virtual Organization tools and services
o dedicated GridICE monitoring server at CNAF
o Storage Elements: Castor SEs at CNAF and PIC; classic disk SEs at CERN (Export Buffer), CNAF, PIC, Legnaro, Taiwan
o Computing Elements at CNAF, PIC, Legnaro, Ciemat, Taiwan
o User Interfaces at CNAF, PIC, LNL

RLS usage
The CMS framework uses POOL catalogues with file information keyed by GUID: the LFN, the PFNs of every replica and the metadata attributes.
RLS was used as a global POOL catalogue, with full file metadata:
o global file catalogue (LRC component of RLS: GUID -> PFNs): registration of file locations by the reconstruction jobs and by all transfer tools; queried by the Resource Broker to submit analysis jobs close to the data
o global metadata catalogue (RMC component of RLS: GUID -> metadata): the metadata schema is handled and pushed into the RLS catalogue by POOL; some attributes are highly CMS-specific; queried (by users or agents) to find logical collections of files
o CMS does not use a separate file catalogue for metadata
Total number of files registered in the RLS during DC04:
o 570K LFNs, each with 5-10 PFNs
o 9 metadata attributes per file (up to ~1 KB of metadata per file)
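The LRC/RMC split can be pictured with a minimal in-memory model; the attribute names below are invented for illustration (the DC04 schema had 9 CMS-specific attributes):

    from dataclasses import dataclass, field

    @dataclass
    class CatalogueEntry:
        guid: str                                            # POOL GUID
        lfn: str                                             # logical file name
        pfns: list[str] = field(default_factory=list)        # one PFN per replica (LRC side)
        metadata: dict[str, str] = field(default_factory=dict)  # attributes (RMC side)

    catalogue: dict[str, CatalogueEntry] = {}

    def register_replica(guid: str, lfn: str, pfn: str, **attrs: str) -> None:
        entry = catalogue.setdefault(guid, CatalogueEntry(guid, lfn))
        entry.pfns.append(pfn)          # fast path in DC04: insert/lookup by GUID
        entry.metadata.update(attrs)

    def find_collection(**attrs: str) -> list[CatalogueEntry]:
        """Metadata query (the slow path in DC04): scan all entries for matching attributes."""
        return [e for e in catalogue.values()
                if all(e.metadata.get(k) == v for k, v in attrs.items())]

    register_replica("guid-0001", "lfn:/cms/dc04/run100/file1",
                     "srm://castor.example.org/cms/file1",
                     dataset="bt03_ttbb_tth", run="100")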

RLS issues
Inserting information into RLS:
o inserting PFNs (file catalogue) was fast enough if using the appropriate tools, produced in due course: LRC C++ API programs (~0.1-0.2 sec/file), POOL CLI with GUID (seconds/file)
o inserting files together with their attributes (file and metadata catalogue) was slow; we more or less survived, but higher data rates would be troublesome
Querying information from RLS:
o looking up file information by GUID seems sufficiently fast
o bulk queries by GUID take a long time (seconds per file)
o queries on metadata are too slow (hours for a dataset collection)
(Plot: time to register the output of a Tier-0 job (16 files), ~2 sec/file, from 2 Apr 18:00 to 5 Apr 10:00.)
Sometimes the load on the RLS increased and required intervention on the server (e.g. log partition full, switch of server node, un-optimized queries): able to keep up in optimal conditions, so and so otherwise.

RLS current status
Important performance issues were found, and several workarounds or solutions were provided to speed up the access to RLS during DC04:
o replace the (Java) Replica Manager CLI with C++ API programs
o POOL improvements and workarounds
o index some metadata attributes in RLS (Oracle indices)
Requirements not supported during DC04:
o transactions
o small overhead compared to direct RDBMS catalogues: direct access to the RLS Oracle backend was much faster (2 minutes to dump the entire catalogue versus several hours); a dump from a POOL MySQL catalogue is at least a factor of 10 faster than a dump from POOL RLS
o fast queries
Some of these are being addressed:
o bulk functionalities are now available in RLS, with promising reports
o transactions are still not supported
o tests of RLS replication are currently being carried out (Oracle Streams-based replication mechanism)

Data management
Data transfer between LCG-2 Storage Elements using the Replica Manager based agents:
o data uploaded at Tier-0 into an Export Buffer (a disk-based SE) and registered in RLS
o data transfer from Tier-0 to the CASTOR SEs at Tier-1 (CNAF and PIC)
o data replication from Tier-1 to the Tier-2 disk SEs
Comments:
o no SRM-based SE was used, since a compliant Replica Manager was not available
o the Replica Manager command line (Java startup) can introduce a non-negligible overhead
o the Replica Manager behaviour under error conditions needs improvement: a clean rollback is not always guaranteed, and this requires ad-hoc checking/fixing (a sketch of such a check follows)
(Diagram: CERN Castor at Tier-0 -> LCG disk SE Export Buffer, driven by the RM data distribution agent, the Transfer Management DB and RLS -> Tier-1 CASTOR SE via the Tier-1 agent -> Tier-2 disk SE.)
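The ad-hoc checking around the Replica Manager can be sketched as a copy-and-register step followed by an explicit consistency check and a manual rollback. The command name and options below are assumptions about the EDG/LCG-2 CLI of the time; the point is the pattern, not the exact tool:

    import subprocess

    def copy_and_register(source_pfn: str, dest_se: str, lfn: str) -> bool:
        """Replicate a file to dest_se and register it; verify, clean up on failure."""
        # The Replica Manager CLI was Java based, hence the startup overhead noted
        # above; the command and flags here are illustrative assumptions.
        cmd = ["edg-replica-manager", "copyAndRegisterFile", source_pfn,
               "-d", dest_se, "-l", lfn]
        rc = subprocess.run(cmd).returncode
        registered = lfn_is_registered(lfn, dest_se)   # RLS lookup, helper not shown
        if rc == 0 and registered:
            return True
        # No clean rollback is guaranteed by the tool itself, so undo partial work:
        if registered:
            subprocess.run(["edg-replica-manager", "deleteFile", lfn, "-s", dest_se])
        return False

    def lfn_is_registered(lfn: str, se: str) -> bool:
        raise NotImplementedError   # placeholder: query the RLS for replicas of this LFN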

Data transfer from CERN to Tier-1
A total of >500K files and ~6 TB of data were transferred from the CERN Tier-0 to the Tier-1s. Performance has been good:
o total network throughput limited by the small file size
o some transfer problems caused by the performance of the underlying MSS (CASTOR)
o maximum number of files per day: ~45000; maximum size per day: ~700 GB
o exercise with big files: ~340 Mbps (>42 MB/s) sustained for ~5 hours on the global CNAF network, May 1st-2nd

Data replication to disk SEs
(Plots for a single day, April 19th: CNAF T1 Castor SE ethernet I/O for input data from the CERN Export Buffer, and its TCP connections; CNAF T1 disk SE ethernet I/O for input data from the Castor SE; Legnaro T2 disk SE ethernet I/O for input from the Castor SE.)

Real-Time (Fake) Analysis
Goals:
o demonstrate that data can be analyzed in real time at the Tier-1s: fast feedback to reconstruction (e.g. calibration, alignment, checks of the reconstruction code, etc.)
o establish automatic data replication to the Tier-2s: make data available for offline analysis
o measure the time elapsed between reconstruction at Tier-0 and analysis at Tier-1
Strategy:
o a set of software agents to allow analysis job preparation and submission synchronous with data arrival
o using the LCG-2 Resource Broker and LCG-2 CMS resources (Tier-1/2 in Italy and Spain)

Real-time Analysis Architecture
o Replication agent: 1. replicates data from the Tier-1 CASTOR SE to the Tier-2 disk SEs, making them available for analysis (on disk); 2. notifies that new files are available (drop files)
o Fake Analysis agent: 3. checks file-set (run) completeness; 4. triggers job preparation when all files of a given file set are available; 5. submits the job to the LCG Resource Broker; 6. the job runs on a CE close to the data
A sketch of the Fake Analysis agent loop follows.
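Steps 2-5 can be pictured as a small agent loop that watches the drop files, checks run completeness and submits one job per complete run; the directory layout, the file naming and the submission command are assumptions made for the illustration:

    import os, subprocess, time
    from collections import defaultdict

    DROP_DIR = "/data/dc04/drop"      # where the replication agent drops notifications
    EXPECTED_FILES_PER_RUN = 16       # completeness criterion (illustrative value)

    submitted: set[str] = set()

    def fake_analysis_agent() -> None:
        while True:
            runs: dict[str, list[str]] = defaultdict(list)
            for name in os.listdir(DROP_DIR):     # e.g. "run100_part3" (made-up naming)
                run_id = name.split("_")[0]
                runs[run_id].append(name)
            for run_id, files in runs.items():
                if run_id not in submitted and len(files) >= EXPECTED_FILES_PER_RUN:
                    jdl = prepare_job(run_id, files)   # build a JDL as in the earlier sketch
                    subprocess.run(["edg-job-submit", "--vo", "cms", jdl])
                    submitted.add(run_id)              # step 5: submission to the RB
            time.sleep(60)

    def prepare_job(run_id: str, files: list[str]) -> str:
        raise NotImplementedError   # write the JDL, with InputData pointing at the run's files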

Real-Time (Fake) Analysis
CMS software installation:
o the CMS Software Manager installs the software via a grid job provided by LCG (RPM distribution or DAR distribution); used at CNAF, PIC, Legnaro, Ciemat and Taiwan with RPMs
o alternatively, the site manager installs the RPMs via LCFGng; used at Imperial College
o still inadequate for general CMS users
Real-time analysis at Tier-1:
o the main difficulty is to identify complete input file sets (i.e. runs)
o job submission to the LCG RB, with matchmaking driven by the input data location
o each job processes a single run at the site close to the data files; file access via rfio
o output data registered in RLS
o job monitoring using BOSS
(Diagram: the UI submits JDL to the RB, which queries the bdii and the RLS for the input data location; jobs run on CEs/WNs, reading from the SEs via rfio; output data are registered in RLS and job metadata stored in BOSS.)

Job processing statistics
The time spent by an analysis job varies depending on the kind of data and the specific analysis performed (in any case not very CPU demanding: fast jobs).
An example: dataset bt03_ttbb_tth analysed with the executable tthwmu:
o total execution time: ~28 minutes
o ORCA application execution time: ~25 minutes
o job waiting time before starting: ~120 s
o time for staging input and output files: ~170 s
The difference between the total and the application execution time is the Grid overhead plus the waiting time in the queue.

Total analysis jobs and job rates
Total number of analysis jobs: ~15000, submitted in about 2 weeks.
o maximum rate of analysis jobs: ~200 jobs/hour
o maximum rate of analysed events: ~30 Hz

Time delay from data at Tier-0 to analysis at Tier-1
During the last days of DC04 running, a typical latency of 20 minutes (median of the measured distribution) was observed between the appearance of a file at Tier-0 and the start of the analysis job at the remote sites.
(Histogram: time delay between the file being analysed at Tier-1 and the file appearing on the General Distribution Buffer, for the PIC real-time fake analysis, Apr 27 11pm - May 1st 5am 2004; 3782 entries, mean 28.7 min, RMS 23.7 min, median 20 min. Chain: reconstruction at Tier-0 -> General Distribution Buffer -> Export Buffer -> Tier-1 -> analysis at Tier-1.)

Summary of Real-time Analysis
Real-time analysis at LCG Tier-1/2:
o two weeks of quasi-continuous running
o total number of analysis jobs submitted: ~15000
o median delay of 20 minutes from the data appearing at Tier-0 to their analysis at Tier-1
o overall Grid efficiency ~90-95%
Problems:
o the RLS query needed at job preparation time was done by GUID; done otherwise it is much slower
o the Resource Broker disk filling up made the RB unavailable for several hours; the problem was related to many large input/output sandboxes saturating the RB disk space; possible solutions: set quotas on the RB space for sandboxes, or configure RBs to be used in cascade
o a network problem at CERN prevented connections to the RLS and the CERN RB
o one site's CE/SE disappeared from the Information System during one night
o CMS-specific failures in updating the BOSS database due to overload of the MySQL server (~30%); the BOSS recovery procedure was used

Conclusions
HEP applications requiring Grid computing are already there. All the LHC experiments are using the current implementations of many projects for their Data Challenges.
The CMS example:
o massive CMS event simulation production (LCG, Grid3)
o the full chain of the CMS Data Challenge 2004 demonstrated in LCG-2
o scalability and performance are key issues
The LHC experiments look forward to the EGEE deployments.