Data Grid Infrastructure for YBJ-ARGO Cosmic-Ray Project

Data Grid Infrastructure for YBJ-ARGO Cosmic-Ray Project
Gang CHEN, Hongmei ZHANG - IHEP
CODATA'06, 24 October 2006, Beijing
FP6-2004-Infrastructures-6, SSA-026634
http://www.euchinagrid.cn

Extensive Air Shower

Main Physics Goals
Site: 90° 31' 50" E, 30° 06' 38" N, 4300 m a.s.l., 606 g/cm²
- γ-astronomy (sub-TeV, 0.3 I_Crab)
- Diffuse γ sources (sub-TeV)
- GRB (10 GeV)
- Knee physics
- Anti-p/p (300 GeV)
- Primary proton spectrum (10 TeV)
- Solar physics

YBJ-ARGO: an Italy-China cosmic-ray observatory in Tibet. About 200 TB of raw data per year. Data transferred to IHEP and INFN. Reconstructed data accessible to collaborators.

YBJ-ARGO Experiment: Full Coverage
- Full-coverage carpet: 78 × 75 m² (130 clusters, 95% active surface), operating and taking data since July 2006.
- Guard ring surrounding the central carpet: 111 × 99 m² (24 clusters, 20% active surface), to be completed by the end of 2006.
- 154 clusters and 18480 pads in total; total detector area: 6500 m².
- Angular resolution: 0.5° (nhit > 50).
- Measurement: time, number of hits.

Data Process and Physics Analysis (flow diagram): raw data from the detector are filtered (rebuilt), reconstructed by batch jobs, and cached on disk as reconstructed data; together with physics simulation, they feed the interactive analysis that yields the events of interest for the physics goals.

New ARGO-YBJ GRID computing model
- Resource integration for data processing
- Use of GRID tools for an automatic data transfer system
- Data storage: keep a copy of the raw and processed data in each main computing centre (IHEP and CNAF) as backup
- Use of synchronized Data Catalogues and synchronized Metadata Catalogues (see the sketch below)
- Use of one ARGO Data Base and a replicated copy
- Same policy for Monte Carlo production
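
The catalogue synchronization between the two main centres amounts to a periodic comparison of the registered file lists. The following is a minimal sketch only, not the actual ARGO tools: plain Python sets stand in for the gLite/LFC catalogues, and the file names are purely illustrative.

```python
# Minimal sketch of catalogue synchronization between IHEP and CNAF:
# replicate to CNAF every logical file name that IHEP has registered but
# CNAF does not. Real catalogues would be queried instead of these sets.

def missing_at(dst_catalogue: set[str], src_catalogue: set[str]) -> list[str]:
    """Logical file names registered at the source but absent at the destination."""
    return sorted(src_catalogue - dst_catalogue)

if __name__ == "__main__":
    ihep = {"run001.raw", "run002.raw", "run003.raw"}   # illustrative entries only
    cnaf = {"run001.raw"}
    for lfn in missing_at(cnaf, ihep):
        print(f"would replicate {lfn} from IHEP to CNAF and register it")
```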

Data Transfer schema

Testbed for the data transfer project
- The GRID middleware based on gLite is installed
- Data transfer from the DAQ to the YBJ Data Mover, using a local DB
- File transfer via FTS to IHEP and insertion into the data catalogue
- Data synchronization to CNAF via data catalogue comparison
- A mechanism to delete the data files at the origin (YBJ), once they have been transferred to both computing centres, was implemented (see the sketch below)
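
The delete-at-origin rule can be summarized as: a raw file in the YBJ buffer may be removed only once the catalogues at both main centres list it. The sketch below is an assumption-laden illustration, not the testbed code; dictionaries of sets stand in for the real catalogue queries, and the file names are invented for the example.

```python
# Sketch of the "delete at origin" rule used at YBJ: a file is eligible for
# deletion only after BOTH main centres (IHEP and CNAF) hold a registered copy.

CENTRES = ("IHEP", "CNAF")

def safe_to_delete(lfn: str, catalogues: dict[str, set[str]]) -> bool:
    """True only if every main centre has the file registered."""
    return all(lfn in catalogues[site] for site in CENTRES)

if __name__ == "__main__":
    catalogues = {                      # illustrative entries only
        "IHEP": {"run001.raw", "run002.raw"},
        "CNAF": {"run001.raw"},
    }
    for lfn in ("run001.raw", "run002.raw"):
        verdict = "delete from YBJ buffer" if safe_to_delete(lfn, catalogues) else "keep at YBJ"
        print(f"{lfn}: {verdict}")
```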

Network links (diagram): 155 Mbps and 2.5 Gbps links, with 2.5 Gbps to US/EU. Thanks to CNIC!

Computing Resources at IHEP

Resource Monitoring

Database

DAQ and Data Mover installation: new DAQ hardware is in place at Roma Tre; the new software and the Data Mover are at an advanced stage of installation and testing.

Grid Monitoring

GridICE

Grid site at IHEP: RB, UI, CE, SE, MyProxy, MON (R-GMA), BDII; WNs with Xeon CPUs. The China HEP CA serves HEP and other communities.

CA

Some events: 4000 ADC counts, about 90 particles/m². A very big shower!

Smooth window radius = 1.5

Moon shadows (zenith < 40°). Sky maps around the Moon position (north/south vs. east/west), with deficits at roughly the 3.5 and 3.0 significance levels. The left histogram is for the 1st reconstruction: 536165 events, background 537473, difference 1309 ± 733 (stat.) ± ? (sys.); expected deficit: 3168 events (≈ N_bg · π · (Moon radius)² / (6 × 6)). The right histogram is for the 2nd reconstruction: 539039 events, background 540060, difference 1021 ± 734 (stat.) ± ? (sys.); expected deficit: 3178 events (≈ N_bg · π · (Moon radius)² / (6 × 6)).
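
A rough numerical cross-check of the quoted expected deficit, under the added assumptions (not stated explicitly on the slide) of a 6° × 6° background window and a Moon angular radius of about 0.26°:

\[
N_{\mathrm{exp}} \;\approx\; N_{\mathrm{bg}}\,\frac{\pi R_{\mathrm{Moon}}^{2}}{6^{\circ}\times 6^{\circ}}
\;\approx\; 537473 \times \frac{\pi\,(0.26)^{2}}{36} \;\approx\; 3.2\times 10^{3},
\]

consistent with the quoted 3168 events for the first reconstruction.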

Next developments
- An API for the data transfer procedure and further automation of the whole procedure
- Use of the ARGO GRID infrastructure in Italy and China

New job submission and monitoring: a GUI for submitting jobs in batch mode and for dataset selection.

Thank you! For more information about the EUChinaGrid Project, or to join it, contact Gang.Chen@ihep.ac.cn