Deployment and Testing of Storage Management software for the CMS experiment

1 Deployment and Testing of Storage Management software for the CMS experiment. G. Donvito, INFN Bari (IPRD06, 1-5 October). EGEE is a project funded by the European Union under contract IST

2 Outline
- Introduction to SRM
- CMS requirements: experiment requirements (numbers and use cases), users' and administrators' experiences and requirements
- DPM
- dCache
- StoRM
- Summary of features and performance
- Conclusions

3 SRM overview
Storage Resource Manager (SRM) is a control protocol.
What it does:
- asks the storage system to make a file ready for upload/download
- provides basic metadata (size, checksum, ...)
- many components are optional
- implemented as a web service (over GSI HTTP)
What it doesn't do:
- data transfer (however, it can trigger third-party transfers)
- access control and permissions (however, some implementations have already been tried)

4 SRM functionalities
Features from version 1.1:
- get, put, copy
- getFileMetadata, getRequestStatus, getProtocols
- advisoryDelete
Critical subset of version 3 (storage classes):
- Tape-resident with system-managed disk cache: Tape1Disk0 == CUSTODIAL + NEARLINE
- Tape-resident with guaranteed copy on disk: Tape1Disk1 == CUSTODIAL + ONLINE
- Disk-resident, user-managed: Tape0Disk1 == REPLICA + ONLINE
Critical subset of version 2.2:
- File types
- Space reservation
- Permission functions
- Directory functions
- Data transfer control functions
- Relative paths
- Query of supported protocols
(The storage-class mapping is restated in the sketch below.)
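
As a purely illustrative restatement of the storage-class terminology above, the TapeNDiskM labels can be written as pairs of retention policy and access latency; this is a minimal sketch of the naming, not of any particular SRM client API.

```python
from enum import Enum

class RetentionPolicy(Enum):
    CUSTODIAL = "CUSTODIAL"   # a tape-resident (custodial) copy is kept
    REPLICA = "REPLICA"       # user-managed copy, no custodial guarantee

class AccessLatency(Enum):
    ONLINE = "ONLINE"         # a copy is guaranteed to be on disk
    NEARLINE = "NEARLINE"     # may have to be recalled from tape first

# The TapeNDiskM labels used in the slides, as (retention, latency) pairs.
STORAGE_CLASSES = {
    "Tape1Disk0": (RetentionPolicy.CUSTODIAL, AccessLatency.NEARLINE),
    "Tape1Disk1": (RetentionPolicy.CUSTODIAL, AccessLatency.ONLINE),
    "Tape0Disk1": (RetentionPolicy.REPLICA, AccessLatency.ONLINE),
}

for label, (retention, latency) in STORAGE_CLASSES.items():
    print(f"{label} == {retention.value} + {latency.value}")
```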

5 [Diagram: data type vs. storage type, showing the Tape1Disk0, Tape1Disk1 and Tape0Disk1 combinations]

6 CMS Requirements
Service requirements:
- Reliable SRM Storage Elements providing almost all needed features
- Compliance with FTS and gLite services
- SRM interoperability
- File access compliant with LHC software (2008)
Network transfers between T0 and T1 centres:
- 2008 scale is ~300 MB/s (provision roughly twice that)
Network transfers between T1 and T2 centres:
- 2008 peak rates from a Tier-1 to all of its related Tier-2s (from 50 to 100 MB/s per Tier-2)
Selection submissions at Tier-1 centres:
- ~800 MB/s to all WNs
Analysis submissions at Tier-2 centres:
- up to 1 GB/s to all WNs
(A back-of-the-envelope conversion of these rates into daily volumes is sketched below.)
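
To make the numbers above easier to compare, here is a small back-of-the-envelope conversion of sustained rates into daily volumes; the rates are those quoted on the slide, and the conversion itself is just arithmetic (1 TB taken as 10^6 MB).

```python
SECONDS_PER_DAY = 24 * 3600

def tb_per_day(rate_mb_s: float) -> float:
    """Convert a sustained transfer rate in MB/s into TB moved per day."""
    return rate_mb_s * SECONDS_PER_DAY / 1e6

# Rates quoted on the slide above.
rates_mb_s = {
    "T0 -> T1, 2008 scale": 300,
    "T1 -> one Tier-2, low end": 50,
    "T1 -> one Tier-2, high end": 100,
}

for link, rate in rates_mb_s.items():
    print(f"{link}: {rate} MB/s ~= {tb_per_day(rate):.1f} TB/day")
# 300 MB/s sustained over a full day corresponds to roughly 26 TB/day.
```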

7 CMS Requirements (continued)
Administrators' requirements:
- Powerful configuration tools
- Powerful monitoring and debugging tools
- Powerful tools for handling metadata and files
- Capability of handling a very large number of parallel requests
- User-friendly installation and configuration procedures
- Tools to recover from or prevent failures
- Painless back-up and restore procedures (for data and metadata) in case of failure

8 DPM

9 DPM overview
- Developed mainly by Jean-Philippe Baud at CERN; it is the SRM supported by LCG
- Provides file access via (authenticated) RFIO and GridFTP
- Several physical pools can be unified under the same virtual namespace (browsable via a dedicated command)
- It consists of a thin layer of software (C++) over a MySQL DB: easy to manage, not too heavy on CPU
- It is compatible with the CLASSIC SE (it is possible to transform a CLASSIC SE into a DPM pool)
- It has an implementation of SRM v2.2
- Each pool can be configured to serve just one VO or all VOs (Unix groups)
- It has a VOMS implementation in the latest release, with support for VOMS-based ACLs
- Accounting is based only on logs and on the space used per VO

10-15 Example: SRM put processing (1)-(6) [sequence diagrams showing the interaction between the SRM server and the MySQL server during an SRM put request]

16 DPM tests
- Installed at many Tier-2s in a production environment
- Simple to install and manage; simple to dedicate pools to a VO (Unix groups)
- Some problems with CMS software (RFIO library incompatibilities)
- Good performance with RFIO (LAN accesses)
- Problems with srmcp -pushmode=true (it causes load on the other end-point)
- Installed at Bari to test the new functionalities (an automatic test suite was developed to exercise many SRM v2.x functionalities)
- Some new functionalities are not yet at production level

17 DPM issues (thanks to the DPM Tier-2s)
- SRM v2.2 not yet at production level (see back-up slides for details)
- Poor advanced management functionality:
  - removing files as root
  - vacating a pool in case of hardware problems
  - load limiting on a pool (pool-overload issue)
  - configurable match-making (possibility to choose the file destination)
- Poor advanced functionality for big sites:
  - moving and replicating files between pools (both a management and a performance issue)
  - a queue for local file requests (all requests must be either served or rejected)
  - scalability to hundreds of TB still to be proven
  - more control on a pool before writing
- Quota support has only Unix-group granularity
- It is not possible to associate a DPM path with a group of pools
- Monitoring and rate-measuring functionality is not available
- Limited developer manpower

18 DPM future plans
- Support for xrootd
- Support for the advanced SRM v2 functionalities
- Support for srmcp
- Support for file replication between pools
- Support for a cheap back-up system

19 dCache

20 dCache overview
- Developed in a large collaboration between DESY and FNAL (plus some other minor contributors)
Goals:
- to build a distributed storage system that can use cheap disk servers to obtain high performance and high availability
- to provide an abstraction of the whole disk space under a unique NFS-like file system (used for metadata operations only)
- to optionally add support for a site's own MSS: only 2 or 3 scripts (put/get/remove) are needed (a hedged sketch of such a script is given below)
- to provide a system that scales to hundreds of TB of disk cache, hundreds of pool nodes, and hundreds of TB per day delivered to clients
File access:
- provides local and remote access (POSIX-like) with many protocols (dcap, FTP), both with and without authentication (GSI or Kerberos)
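
The slide only states that driving a site's own MSS requires two or three small put/get/remove scripts. The sketch below shows what such a script could look like in the simplest case; the command-line convention (action, file id, local path) and the staging directory are invented for illustration and do not reproduce the real dCache HSM script interface.

```python
#!/usr/bin/env python
"""Hypothetical MSS copy script for a dCache-like setup (illustration only)."""
import shutil
import sys
from pathlib import Path

TAPE_STAGING_AREA = Path("/mss/staging")  # hypothetical MSS-visible directory


def put(file_id: str, local_path: str) -> int:
    """Copy a disk-pool file into the MSS staging area."""
    shutil.copy2(local_path, TAPE_STAGING_AREA / file_id)
    return 0


def get(file_id: str, local_path: str) -> int:
    """Restore a file from the MSS staging area back onto a disk pool."""
    shutil.copy2(TAPE_STAGING_AREA / file_id, local_path)
    return 0


def remove(file_id: str, _local_path: str = "") -> int:
    """Delete the MSS copy of a file."""
    (TAPE_STAGING_AREA / file_id).unlink()
    return 0


if __name__ == "__main__":
    action, file_id = sys.argv[1], sys.argv[2]
    local_path = sys.argv[3] if len(sys.argv) > 3 else ""
    sys.exit({"put": put, "get": get, "remove": remove}[action](file_id, local_path))
```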

21 dCache overview (2)
- Access management: access priorities and load balancing are obtained through the use of different queues
- Allows multiple copies of a file spread over different pools to improve performance and high availability (automatic or manual pool-to-pool transfers)
- Allows dynamic match-making between pools, according to parameters chosen by the administrator (they can be based on disk space, load, network, type of access, etc.)
- It is possible to split the different types of access points (doors) across different nodes
- It is possible to move all the files off a pool to put it into a scheduled downtime, or to choose which files to move and where
- The central services can also be split across different nodes to improve scalability

22 dCache overview (3)
- Pool management: gives the possibility to create groups of pools, named storage classes (read, write, cache, or on a per-VO and per-use basis); this can be useful for quota management
- Web monitoring and a statistics module (including rate plots)
- The SRM layer can be used as stand-alone software (on a standard Unix file system)
- It is possible to choose how much space the dCache pool uses in a partition (so many services can be hosted on the same partition)
- Java GUI for administration
- The xrootd protocol is also supported (read-only)
- Accounting system, flat-file or DB based (not user friendly, but a lot of information is available), plus the space used per VO
- It is possible to use the disks of WNs (or other non-reliable space) to improve performance for local access

23 dCache overview (4)

24 dCache field test
- Used in production at many CMS Tier-1s and Tier-2s
- Used in production since May 2005 at INFN-Bari: good stability, achieved the performance needed for a Tier-2 in 2006, good behaviour with CMS software (both old and new frameworks)
- Performance measured at the CNAF test-bed: reached the performance and the scale needed for a Tier-2 in 2008
- Advanced functionalities tried (successfully) in a production environment at INFN-Bari:
  - a solution to improve scalability for a large number of concurrent accesses (splitting the central services over several machines)
  - a solution to improve the management of accesses: splitting read and write access onto different nodes, with different queues for different types of access (priorities and limits can be arranged per access type)

25 dCache test results [plots: CERN-CNAF WAN transfers and CNAF LAN accesses, with the ReplicaManager enabled]

26 dCache issues
- It is written in Java (CPU and memory issues)
- The configuration of the advanced features is not so easy
- The documentation has been improved, but is still lacking
- Support is on a best-effort basis
- The licence is free but not completely open source

27 dCache future plans
- Support SRM v2.2
- Support VOMS (gPlazma)
- Improve pool management
- Improve I/O queue management
- Improve the file-replication functionality
- Support multiple PNFS servers on different machines

28 StoRM/GPFS

29 StoRM overview
- An Italian project (developed by INFN and EGRID)
- A lightweight SRM interface on top of a POSIX file system; it can exploit all the capabilities of any distributed file system
- Provides VOMS authentication and ACLs (permanent and on the fly)
- Will provide all the needed SRM v2.2 functionality
- Simple to install and manage
- Provides file access through different protocols: GridFTP and local file access (i.e. by mounting the GPFS file system)
- It is possible to implement load balancing between the SRM and GridFTP servers

30 StoRM overview (2)
Using GPFS as the back-end it is possible to:
- vacate a pool (putting a machine off-line)
- mirror files (and directories) for high availability
- associate pools with Storage Classes
- provide an abstraction of the whole disk space under a unique mount point
- avoid single points of failure
High performance, both in reading and writing, has already been proven in large installations (of the order of hundreds of nodes and tens of terabytes); it is obtained by balancing both write and read operations across all pools. Since the storage is exposed as a mounted POSIX file system, local access is ordinary file I/O (a minimal sketch is given below).
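
Because StoRM exposes the storage through a mounted GPFS file system, local access from a worker node is plain POSIX I/O. The snippet below only illustrates that point; the mount point and file name are hypothetical.

```python
import os

# Hypothetical mount point and file name; any path under the GPFS mount works
# the same way, which is the point: jobs use plain POSIX I/O, no special client.
GPFS_MOUNT = "/gpfs/cms"
sample_file = os.path.join(GPFS_MOUNT, "store", "user", "sample.root")

# Metadata via a normal stat() call ...
info = os.stat(sample_file)
print(f"{sample_file}: {info.st_size} bytes, mode {oct(info.st_mode)}")

# ... and data via a normal open()/read(), exactly as on a local disk.
with open(sample_file, "rb") as handle:
    first_kb = handle.read(1024)
print(f"read {len(first_kb)} bytes from the GPFS-backed storage element")
```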

31 StoRM/GPFS performance [plots: transfer rate between 4 servers and 34 clients, and time to open a file; in this test the network infrastructure was limited to 3 Gbit/s]

32 StoRM issues
- Does not provide backward compatibility with SRM v1.1
- Some high-scale tests are still needed in order to prove its stability and scalability
- Does not provide highly configurable match-making between the available SRM and GridFTP servers
- Does not provide the possibility to configure a maximum number of connections per server, or to queue requests
- Monitoring and rate-measuring functionality is not available
- Accounting is available only for GridFTP access

33 Summary of functionality

Feature                       dCache   DPM          StoRM/GPFS
FileReplication               Yes      No           Yes
PoolVacation                  Yes      No           Yes
VOMS support                  Alpha    Yes          Yes
ACL-VOMS                      Alpha    Yes          Yes
SlotLimit                     Yes      Partially    No
MatchMaking                   Yes      Hard-coded   Partially
Group of pools per use type   Yes      No           Yes
VO quota                      Yes      Yes          Yes
Advanced quota                Yes      No           Yes
Services splitting            Yes      Partially    Yes
Monitoring features           Yes      No           No
MSS support                   Yes      No           No
Accounting features           Yes      No           No

34 Some good performance tests: SC4
The graph shows the interference between local job activity and WAN transfers, and a long-run example of Tier-1 to Tier-2 transfers (CNAF -> Bari).

35 Acknowledgements & links
Acknowledgements: Vincenzo Spinoso, Vincenzo Vagnoni, Daniele Bonacorsi, Piergiorgio Cerello, Alessandra Doria, Massimo Biasotto, Simone Badoer, Emidio Giorgio, Giuseppe Lo Re, Pierpaolo Ricci, Vladimir Saputenko, Jean-Philippe Baud, Patrick Fuhrmann
Links:
- Storage Group wiki: test reports and installation guides
- LCG Baseline Services Group report
- Storage Resource Management Working Group: tests on storage managers
- SC4 / pilot WLCG Service Workshop

36 Conclusions
- The requirements coming from the experiments are quite difficult to address
- The storage system is one of the most critical parts of the computing model of a farm
- There are several software solutions at different levels of maturity
- Each of these products is constantly evolving, and each of them has positive and negative aspects
- The final choice may be driven by local-site needs and experience
- Many people are working to find the limits and solve the problems

37 Back-up slides

38 DPM tests [plots: total rate (MB/s) vs. number of WNs (2, 3, 4, 8, 20), for RFIO write/read and SRM write/read, on the LNL LAN and WAN]

39 dCache overview (4)

40 dCache test results (1)
- Transfer tests done with WAN transfers (~50 MB/s) and the local I/O of 66 CMS analysis jobs (~100 MB/s), with the analysis jobs and WAN transfers running concurrently
- CNAF -> Bari transfers using PhEDEx
(The per-job arithmetic is restated below.)
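
Restating the figures above as per-job numbers (a rough average only, assuming the local I/O is evenly spread over the 66 jobs):

```python
# Numbers quoted on the slide above.
wan_rate_mb_s = 50        # WAN transfers, CNAF -> Bari via PhEDEx
local_io_mb_s = 100       # aggregate local I/O of the analysis jobs
n_jobs = 66

per_job_mb_s = local_io_mb_s / n_jobs
total_mb_s = wan_rate_mb_s + local_io_mb_s

print(f"average local I/O per job : {per_job_mb_s:.1f} MB/s")
print(f"concurrent WAN + local I/O: {total_mb_s} MB/s")
```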

41 CNAF test installation schema [diagram: an admin node running the dCache core, the PNFS server and the Postgres DB, plus four pool nodes, each running a GridFTP door, an SRM door, a gsidcap door and the pool service]

42 dCache advanced installation schema [diagram: a dedicated DB server for the Postgres DB, an admin node with the dCache core, two PNFS servers, DNS-aliased doors, and pool nodes each running GridFTP, SRM and gsidcap doors with pool services dedicated to read, write and xrootd access]

43 SC4 / pilot WLCG Service Workshop (1/2)

44 SC4 / pilot WLCG Service Workshop (2/2)

45 DPM issues (details)
1. Tests of the directory functions: Mkdir, Rmdir, Mv and Rm worked and still work correctly. Ls did not work and still does not (it does not do a listing, but a "stat" of the directory).
2. Test of "prepare to put": it worked and still works.
3. Test of "prepare to get": it worked and still works.
4. Test of pinning: removal during a globus-url-copy, after a prepare-to-get/put: neither worked, and they still do not, in the sense that during a GridFTP transfer we expect that it should not be possible to remove the file; I can still remove the file via SRM (Rm).
5. "putoversize" test: the test shows that it is possible to put a file whose size exceeds the one declared in the prepare-to-put; this problem existed and still exists.
6. "putoverspace" test: the test performs successive prepare-to-put operations until the space available on the SRM is saturated; while previously it was not possible to deallocate (AbortRequest) the allocated space without actually transferring the corresponding file, AbortRequest now works. It is therefore now possible to deallocate all the prepare-to-puts performed by the test.
7. NEW! Lifetime test: running the putoverspace test without performing the "rollback", we waited for the lifetime of the prepare-to-puts to expire, after which the space should have been made available again. We would expect that, even if the space is not deallocated immediately, it should at least be deallocated as a consequence of a new prepare-to-put request. Unfortunately the space remains allocated, and can only be deallocated by an explicit AbortRequest.
(An illustrative skeleton of one such check is sketched below.)
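
Slide 16 mentions an automatic test suite developed at Bari to exercise SRM v2.x functionality, and the list above describes its individual checks. The skeleton below is a purely illustrative sketch of how one such check (the "putoversize" test) could be structured; the `srm_client` command and its options are invented placeholders, not the actual tool that was used.

```python
import subprocess


def run(cmd):
    """Run a client command and capture its exit status and output."""
    return subprocess.run(cmd, capture_output=True, text=True)


def test_putoversize(surl, declared_size, oversized_local_file):
    """A compliant endpoint should refuse a file bigger than the declared size."""
    prep = run(["srm_client", "prepare-to-put", surl, f"--size={declared_size}"])
    if prep.returncode != 0:
        return False  # could not even allocate the destination
    # Try to push a file larger than declared: rejection (non-zero exit) means PASS.
    copy = run(["srm_client", "put-file", oversized_local_file, surl])
    run(["srm_client", "abort-request", surl])  # release the allocated space
    return copy.returncode != 0


if __name__ == "__main__":
    ok = test_putoversize("srm://se.example.org:8443/dpm/example.org/test/oversize.dat",
                          declared_size=1024,
                          oversized_local_file="/tmp/two_kilobyte_file.dat")
    print("putoversize:", "PASS" if ok else "FAIL")
```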

46 Summary of functionality

48 Transfer activities Tier-1 <-> Tier-2 for SC4
- Tier-1 to Tier-2: very bursty and driven by analysis; the goal is to reach from 10 MB/s (worst Tier-2s) to 100 MB/s (best Tier-2s) by June
- Tier-2 to Tier-1: continuous simulation transfers; the goal is to reach 10 MB/s from Tier-2s to Tier-1 centres (1 TB per day)
- The PhEDEx-FTS integration has been achieved
- Two tools (heartbeat and transfer activity) help CMS with the continuous transfers
- CMS distributed analysis uses the CMS Remote Analysis Builder (CRAB), now interfaced to CMSSW
- Trivial file catalogs also work (a sketch of such a mapping rule is given below)
- The goal is kjobs/day
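
The slide notes that trivial file catalogs also work: these resolve logical file names into site-specific physical file names through simple pattern rules. The rule below (the regular expression and the SRM endpoint it points to) is a made-up example of the idea, not a real site's configuration.

```python
import re

# One made-up lfn-to-pfn rule of the kind a trivial file catalog holds:
# a regular expression over the logical file name and a result template
# pointing at an invented SRM endpoint.
TFC_RULES = [
    (re.compile(r"^/store/(.*)$"),
     r"srm://se.example.org:8443/srm/managerv2?SFN=/dpm/example.org/cms/store/\1"),
]


def lfn_to_pfn(lfn: str) -> str:
    """Resolve a logical file name by applying the first matching rule."""
    for pattern, template in TFC_RULES:
        if pattern.match(lfn):
            return pattern.sub(template, lfn)
    raise ValueError(f"no trivial-file-catalog rule matches {lfn}")


print(lfn_to_pfn("/store/mc/2006/sample/file001.root"))
```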
