Tier-2 DESY Volker Gülzow, Peter Wegner


1 Tier-2 DESY, Volker Gülzow, Peter Wegner, DESY DV&IT

2 Outline: LCG requirements and concepts; current status and plans for DESY; conclusion; plans for Zeuthen

3 LCG requirements and concepts

5 Challenge: a large and distributed community (ATLAS, CMS, LHCb, ...)
- Offline software effort: 1000 person-years per experiment.
- Storage: data-taking rate of GBytes/sec -> 5-8 Petabytes.
- Processing: 200,000 of today's fastest PCs.
- Software life span: 20 years.
- ~5000 physicists around the world.

6 LHC Computing Model (diagram): Tier-0 centre at CERN; Tier-1 centres in e.g. the USA, UK, Italy, France and Germany; Tier-2 centres and Tier-3 physics department clusters at labs and universities; desktops at the edge.

7 The LCG Project
- Approved by the CERN Council in September 2001.
- Phase 1: development and prototyping of a distributed production prototype at CERN and elsewhere that will be operated as a platform for the data challenges, leading to a Technical Design Report which will serve as a basis for agreeing the relations between the distributed Grid nodes and their co-ordinated deployment and exploitation.
- Phase 2: installation and operation of the full world-wide initial production Grid system, requiring continued manpower efforts and substantial material resources.

8 Organizational Structure for Phase 2: LHC Committee (LHCC) for scientific review; Computing Resources Review Board (C-RRB) with the funding agencies; Collaboration Board (CB) with the experiments and regional centres; Overview Board (OB); Management Board (MB) for management of the project; Grid Deployment Board for coordination of Grid operation; Architects Forum for coordination of common applications.

9 The Hierarchical Model
- Tier-0 at CERN: record RAW data, distribute a second copy to the Tier-1s, calibrate and do first-pass reconstruction.
- Tier-1 centres (11 defined): manage permanent storage of RAW, simulated and processed data; capacity for reprocessing and bulk analysis.
- Tier-2 centres (>~100 identified): Monte Carlo event simulation and end-user analysis.
- Tier-3: facilities at universities and laboratories with access to data and processing in Tier-2s and Tier-1s; outside the scope of the project.

10 Tier-1s: TRIUMF (Canada), GridKa (Germany), CC-IN2P3 (France), CNAF (Italy), SARA/NIKHEF (NL), Nordic Data Grid Facility (NDGF), ASCC (Taipei), RAL (UK), BNL (US), FNAL (US), PIC (Spain); each centre serves a subset of ALICE, ATLAS, CMS and LHCb with priority (the per-centre markings of the original table are not reproduced here).

11 Tier-2s: ~100 identified, number still growing.

12 The Eventflow: per-experiment table (ALICE HI, ALICE pp, ATLAS, CMS, LHCb) of trigger rate [Hz], RAW [MB], ESD/rDST/RECO [MB], AOD [kB], Monte Carlo [MB/evt] and Monte Carlo as % of real data; the per-experiment numbers did not survive the flattened table. Running assumptions: pp running from 2008 on with ~10^9 events/experiment per year; ~10^6 seconds/year of heavy-ion running.
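
The yearly event count follows directly from the trigger rate and the accelerator live time. A minimal sketch of that arithmetic is below; the rate, live time and RAW event size are illustrative assumptions only, since the per-experiment values in the table above were lost in transcription.

```python
# Illustrative arithmetic only: rate, live time and RAW size are assumptions,
# not the numbers from the (lost) eventflow table above.
def events_per_year(rate_hz: float, live_seconds: float) -> float:
    """Events recorded per year = trigger rate * accelerator live time."""
    return rate_hz * live_seconds

def raw_volume_tb(n_events: float, raw_mb_per_event: float) -> float:
    """RAW data volume in TB for a given number of events (1 TB = 1e6 MB)."""
    return n_events * raw_mb_per_event / 1e6

if __name__ == "__main__":
    n = events_per_year(rate_hz=200.0, live_seconds=1e7)   # ~2e9 events
    print(f"events/year ~ {n:.2e}")
    print(f"RAW volume  ~ {raw_volume_tb(n, raw_mb_per_event=1.6):.0f} TB")
```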

13 CPU Requirements (chart): ramping to ~350 MSI2k; 58% pledged; stacked by CERN, Tier-1 and Tier-2 contributions for ALICE, ATLAS, CMS and LHCb.

14 Disk Requirements (chart): ramping to ~160 PB; 54% pledged; stacked by CERN, Tier-1 and Tier-2 contributions for ALICE, ATLAS, CMS and LHCb.

15 Tape Requirements (chart): ramping to ~160 PB; percentage pledged also shown; stacked by CERN and Tier-1 contributions for ALICE, ATLAS, CMS and LHCb.

16 Typical Grid Components (diagram): a user logs in via ssh to a User Interface (UI) and submits a job described in JDL; the Resource Broker (RB) consults the information system (BDII/GIIS/GRIS) and the file catalogues (RLS/CAT) and forwards the job to a Computing Element (CE), e.g. a PBS batch system driving the worker nodes (WN); Storage Elements (SE) with an SRM interface front the disk; user certificates live in $HOME/.globus/, and VO membership (e.g. ldap://ldap.desy.de) is mapped to local accounts via /etc/grid-security/grid-mapfile; job output is returned to the UI.
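
As a rough illustration of the job flow sketched above (the Resource Broker matches the job description against the information published by the sites and dispatches it to a Computing Element), here is a toy matchmaking routine in Python. The record fields and site names are invented for the example and do not correspond to any real GLUE/BDII schema.

```python
# Toy matchmaking in the spirit of the LCG Resource Broker: purely illustrative;
# the record fields below are invented and not a real information-system schema.
from dataclasses import dataclass

@dataclass
class ComputingElement:
    name: str
    supported_vos: tuple
    free_cpus: int
    close_se: str            # storage element "close" to this CE

def match(job_vo: str, min_free_cpus: int, ces: list) -> ComputingElement:
    """Pick the CE that supports the job's VO and has the most free CPUs."""
    candidates = [ce for ce in ces
                  if job_vo in ce.supported_vos and ce.free_cpus >= min_free_cpus]
    if not candidates:
        raise RuntimeError("no matching Computing Element found")
    return max(candidates, key=lambda ce: ce.free_cpus)

if __name__ == "__main__":
    sites = [
        ComputingElement("ce.desy.de", ("atlas", "cms", "ilc"), 120, "srm.desy.de"),
        ComputingElement("ce.example.org", ("cms",), 40, "se.example.org"),
    ]
    ce = match(job_vo="cms", min_free_cpus=10, ces=sites)
    print(f"job dispatched to {ce.name}, output staged via {ce.close_se}")
```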

17 Monitoring / Accounting (figures)

18 Cooperation with other projects
- Network services: LCG will be one of the most demanding applications of national and international research networks such as GÉANT.
- Grid software: Globus, Condor and VDT have provided key components of the middleware used; key members participate in OSG and EGEE; Enabling Grids for E-sciencE (EGEE) includes a substantial middleware activity.
- Grid operational groupings: the majority of the resources used are made available as part of the EGEE Grid (~170 sites, 15,000 processors); the US LHC programmes contribute to and depend on the Open Science Grid (OSG), with a formal relationship to LCG through the US-ATLAS and US-CMS computing projects; the Nordic Data Grid Facility (NDGF) will begin operation, with prototype work based on the NorduGrid middleware ARC.

20 Tier-1/2 Summary Table: planning for 2008, offered capacity vs. TDR requirements (offered and required values not preserved; balances only).
Tier-1 (ALICE / ATLAS / CMS / LHCb / sum): CPU balance -46% / -5% / -18% / -0% / -17%; disk balance -62% / -13% / -18% / -10% / -25%; tape balance -54% / 1% / -51% / -9% / -36%.
Tier-2 (ALICE / ATLAS / CMS / LHCb / sum): CPU balance -65% / -2% / -10% / -42% / -24%; disk balance -59% / -33% / -8% / n/a / -26%.
Tier-2 federations included (expected): ALICE 12 (13), ATLAS 20 (28), CMS 17 (19), LHCb 11 (12), total 28 (37).
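
The "balance" figures in the table compare pledged (offered) capacity with the TDR requirement. A minimal sketch of that arithmetic, with placeholder numbers since the offered/required values themselves did not survive the flattened table:

```python
# Balance = (offered - required) / required, expressed as a percentage.
# The figures below are placeholders, not the (lost) 2008 planning numbers.
def balance(offered: float, required: float) -> float:
    return (offered - required) / required * 100.0

if __name__ == "__main__":
    print(f"balance = {balance(offered=4.1, required=5.0):+.0f}%")   # -> -18%
```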

21 Status and plans for DESY

22 LHC and DESY
- DESY has decided to participate in an external experiment at the LHC.
- DESY will participate in ATLAS and CMS.
- DESY offers to run an average-size Tier-2 centre for each of the two experiments.
- DESY has to offer Tier-3 services to local groups.
- A joint Hamburg-Zeuthen activity.
- For DESY this is a long-term commitment.

23 Where are the requirements stated?

24 From G. Quast, OB meeting, May 2005 (figure)

25 From G. Quast, OB meeting, May 2005 (figure)

26 Proposed Tier-2 at DESY, under consideration:
- Proposal for a 3-year project for the ramp-up.
- Will become part of the standard computer centres in Hamburg and Zeuthen.
- Current key persons: Michael Ernst, Patrick Fuhrmann, Martin Gasthuber, Andreas Gellrich, Volker Gülzow, Andreas Haupt, Stefan Wiesand, Peter Wegner, Knut Woller et al.

27 Managed via Virtual Organisations (VOs): H1/ZEUS Grids, Lattice Data Grid, LHC Tier-2, Amanda/IceCube, ILC, others.

28 Plans for DESY Tier-2 & 3
- The Tier-2 is part of a larger grid infrastructure.
- Tier-2 for CMS as a federated Tier-2 with RWTH Aachen.
- Tier-2 for ATLAS very likely as a federated Tier-2 with Freiburg and Wuppertal.
- Efficient and shared setup for Hamburg and Zeuthen.

29 Proposed hardware resources (total): as much as possible only one resource pool per site, with distribution via a fair-share scheduler; planned CPU [kSI2k], disk [TB] and tape [TB] (?) per year (the totals did not survive the transcription; see the per-experiment breakdown on the next slide).
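
The slide above proposes one shared pool with distribution via a fair-share scheduler. The following toy illustrates the basic fair-share idea (give the next free slot to the group furthest below its target share); the group names and shares are invented for illustration, and real schedulers such as the Sun Grid Engine used at Zeuthen add decaying usage windows and many more knobs.

```python
# Toy fair-share dispatcher: give the next free slot to the group whose
# actual usage is furthest below its configured target share.
# Group names, shares and usage numbers are invented for illustration.
target_share = {"atlas": 0.4, "cms": 0.4, "local": 0.2}
used_hours   = {"atlas": 900.0, "cms": 400.0, "local": 300.0}

def next_group() -> str:
    total = sum(used_hours.values()) or 1.0
    # deficit = target share minus the fraction of resources actually consumed
    deficit = {g: target_share[g] - used_hours[g] / total for g in target_share}
    return max(deficit, key=deficit.get)

if __name__ == "__main__":
    print(f"next free slot goes to: {next_group()}")  # cms, furthest below its 40% target
```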

30 Proposed hardware resources (total, A = ATLAS, C = CMS), per ramp-up year:
CPU [kSI2k]: A 100, 400, 700, 700, 900; C 100, 400, 700, 900, 1200
Disk [TB]: A 15, 100, 340, 340, 570; C 15, 100, 200, 200, 300
Tape [TB]: A 10, 50, 200, 340, 570 (?); C 10, 50, 100, 200, 300

33 Connectivity
- DESY-HH will have a 1 Gb/s ET XWIN connection in 2006; the bandwidth will start, according to the needs, at 300 Mb/s or 600 Mb/s.
- Plan to have a 1 Gb/s VPN: HH <-> Zeuthen.
- We will have a point-to-point VPN connection to GridKa (likely 10 Gb/s in 2007).
- ATLAS needs less bandwidth for Tier-2s than CMS (cf. the computing models).

34 In Hamburg: ~250 kSpecInt2k, ~70 TB storage.

35 Software
- Grid infrastructure fully on LCG 2.6; VOMS will be available soon; currently 15 VOs are supported.
- ATLAS software: done; CMS software: done.
- CMS: successful analysis runs with CRAB (Ernst, Rosemann).

36 Memorandum of Understanding for Collaboration in the Deployment and Exploitation of the LHC Computing Grid between the EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH ("CERN"), an intergovernmental Organization having its seat at Geneva, Switzerland, as the Host Laboratory, the provider of the Tier0 Centre and the CERN Analysis Facility, and as the coordinator of the LCG project, on the one hand, and all the Institutions participating in the provision of the LHC Computing Grid with a Tier1 and/or Tier2 Computing Centre (including federations of such Institutions with computer centres that together form a Tier1 or Tier2 Centre), as the case may be, represented by their Funding Agencies for the purposes of signature of this Memorandum of Understanding, on the other hand (hereafter collectively referred to as "the Parties").

37 Tier-1 (figure)

38 Tier-2: resources will be monitored under the same conditions as the HERA resources (and much better than the minimum level).

39 Service Challenge (figure)

40 Service Challenge Goals
- An integration test for the next production system: the full experiment software stack, not a middleware test (stack = software required by transfers, data serving, processing jobs).
- Main output of SC3: data transfer and data serving infrastructure known to work for realistic use, including tests of the workload management components (the resource broker and computing elements) and of the bulk data processing mode of operation.
- Crucial step toward SC4, the ATLAS DC, CMS DC06 and the LHC: failure of any major component at any level would make it difficult to recover and still be on track with the increased scale and complexity of SC4 and the ATLAS/CMS data challenges.
- Need to leave SC3 with a functional system with room to scale.

41 Hourly averaged throughput from the CERN CIC to DESY (chart): 62 MB/s reached, on a 100 MB/s scale (Ernst, Fuhrmann, Gellrich et al.).
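
To put the 62 MB/s hourly average in perspective, a quick conversion to daily volume (pure arithmetic, no assumptions beyond the quoted rate):

```python
# 62 MB/s sustained over a full day, expressed in TB (1 TB = 1e6 MB here).
rate_mb_per_s = 62.0
seconds_per_day = 24 * 3600
tb_per_day = rate_mb_per_s * seconds_per_day / 1e6
print(f"{rate_mb_per_s} MB/s ~ {tb_per_day:.1f} TB/day")   # about 5.4 TB/day
```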

42 Service Challenge timeline: SC2, SC3, SC4, then LHC service operation (cosmics, first beams, first physics, full physics run). Sep 2006: LHC service available, i.e. the SC4 service becomes the permanent LHC service, available for experiment testing, commissioning, processing of cosmic data, etc. All centres ramp up to the capacity needed at LHC startup, twice the nominal performance, with a milestone to demonstrate this 3 months before first physics data (April).

43 Funding and personnel
- Money comes partly from DESY; additional funds via projects.
- Existing computer centre staff for operation.
- In cooperation with the experiments: ATLAS- and CMS-specific software support.
- EU/national projects.

44 Conclusion
- A 3-year project is planned for the setup, a joint activity between IT-HH and DV-Zn; after 3 years, standard operation mode.
- Tier-3 demands have to be considered.
- Close cooperation with the ATLAS/CMS groups is needed.
- The Tier-2 situation in Germany is not settled.
- Other research will profit.

45 Links

46 Farm Computing in Zeuthen: installation
- Global batch farm based on Sun Grid Engine, starting from 2001.
- Batch: 60 hosts amd64 (2.4 GHz dual Opteron, 4-8 GB memory) and 96 hosts ia32 (46 x 800 MHz dual PIII + 50 x 3.06 GHz dual Xeon); CPU performance: ... kSI2k.
- Parallel environment: 16 nodes amd64 (2.6 GHz dual Opteron) with an InfiniBand network and 16 nodes ia32 (1.7 GHz dual Xeon) with a Myrinet network.
- Disk RAID storage: 40 TB; 60% AFS data volumes, 30% dCache pools.

47 Farm Computing in Zeuthen: performance in kSI2k (kilo SpecInt2000)
- Table of kSI2k/CPU and kSI2k/node (node = CPU x ~1.7 for dual-CPU machines) for the farm: (46) PIII 800 MHz ('ice'), (50) Xeon 3.06 GHz ('globe', Sun V65x), (60) Opteron 2.4 GHz ('heliade', Sun V20z, Opteron 250); the per-CPU values did not survive the transcription.
- For comparison, an average Tier-2 centre: ... MSI2k / 30 = 633 (700) kSI2k and ... MSI2k / 30 = 1733 kSI2k.
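
The kSI2k bookkeeping on this slide is simply a sum of per-CPU SpecInt2000 ratings over the installed nodes, with dual-CPU nodes counted at roughly a factor 1.7 rather than 2, as noted above. The sketch below shows that sum; the per-CPU ratings are assumptions for illustration only, since the table values were lost, and only the node counts and the ~1.7 factor come from the slide.

```python
# Aggregate farm capacity in kSI2k. Per-CPU ratings here are illustrative
# placeholders; only the node counts and the ~1.7 dual-CPU factor are from the slide.
DUAL_CPU_FACTOR = 1.7   # a dual-CPU node delivers ~1.7x a single CPU, not 2x

farm = [
    # (node count, assumed kSI2k per CPU)
    (46, 0.3),   # PIII 800 MHz      (rating assumed)
    (50, 1.1),   # Xeon 3.06 GHz     (rating assumed)
    (60, 1.4),   # Opteron 2.4 GHz   (rating assumed)
]

total = sum(count * per_cpu * DUAL_CPU_FACTOR for count, per_cpu in farm)
print(f"approximate farm capacity: {total:.0f} kSI2k")
```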

49 Farm Computing in Zeuthen (statistics from July 2005 to December 2005): CPU hours split as Amanda/IceCube/Baikal 59%, Theory/NIC 35%, PITZ/TESLA/LC 5%; PITZ, H1 and Theory have dedicated systems in addition. (Pie charts over the accounting groups amanda, apeuser, baikal, grid, h1, herab, hermes, lc, nic, pitz, rz, sysprog, theorie, theorie_zn, other.)

50 Current Grid setup (diagram): User Interface, Resource Broker, BDII, VOBox and File Catalog; a Computing Element with an SGE master in front of the globe farm worker nodes; dCache storage with door, head node and pools.

51 Tier-2 / Grid plans, integrated Grid installation (diagram): Grid users and local users share a common Grid & local environment; one farm management layer serves both the global farm and the local farm worker nodes; global/local storage is shared.

52 Tier-2 / Grid plans, dedicated Grid installation (diagram): Grid users work in a Grid environment with its own Grid farm management (Torque) on the global farm worker nodes; local users work in a local environment with local farm management (SGE) on the local farm; global/local storage.

53 Tier-2 / Grid plans / ILDG (International Lattice DataGrid)
- Goal: build up an infrastructure for long-term storage and global sharing of simulation data.
- Participants: au, de, it, jp, fr, uk, usa.
- Concept: grid-of-grids, web-service based interfaces.
- DESY contribution: implementation and operation of the metadata catalogue for extensible XML documents; coordination of the LCG-based grid infrastructure and operation of central information services.
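
The DESY contribution mentioned above centres on a metadata catalogue for extensible XML documents. As a rough client-side illustration of querying such a catalogue, here is a sketch that filters XML metadata records by a field; the tag names and values are invented for the example and are not the actual ILDG/QCDml schema.

```python
# Sketch: filter XML metadata records by a field value.
# Tag names are invented for illustration, not the real ILDG/QCDml schema.
import xml.etree.ElementTree as ET

RECORDS = """
<catalogue>
  <ensemble><action>wilson</action><lattice>24x24x24x48</lattice></ensemble>
  <ensemble><action>clover</action><lattice>32x32x32x64</lattice></ensemble>
</catalogue>
"""

def find_ensembles(xml_text: str, action: str) -> list:
    """Return the lattice sizes of all ensembles generated with the given action."""
    root = ET.fromstring(xml_text)
    return [e.findtext("lattice") for e in root.iter("ensemble")
            if e.findtext("action") == action]

if __name__ == "__main__":
    print(find_ensembles(RECORDS, action="clover"))   # ['32x32x32x64']
```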

54 Farm and Storage Commitments to Experiments at Zeuthen
- IceCube (Maintenance and Operations Data Systems document): offline data formatting & merging, filtering, re-processing, analysis, MC production; ~400 nodes (2009); storage not yet defined (30 TB raw data / year).
- NIC (LQCD): processing of configurations, simulations; nodes and O(10) TB disk space / year expected, with more due after the full >2 TFlops apeNEXT installation.

55 Tier-2 / Grid plans in Zeuthen
- New computer room on the upper floor (2007).
- New UPS (uninterruptible power supply) systems (2006).
- New cooling, attached to the PITZ cooling system (2006).
- Replacement of the old tape robot system (2007).
- WAN: VPN to DESY-HH, 1 Gbit/s (2006).
- Tier-2/Grid: xx nodes, yy TB RAID disk, zz tape.
- Main problem: disk storage integration.
