GridKa Site Report
51st Session of the GridKa TAB, 6 July

Holger Marten
Forschungszentrum Karlsruhe GmbH
Institut für Wissenschaftliches Rechnen (IWR)
Postfach 3640, Karlsruhe, Germany

Contents
1. Input for discussion on manpower issues
2. Status of new hardware
3. Other services & follow-ups from last TAB
4. Problems since last TAB

Document ID: IWR-Rep-GridKa0073-v1.0-TABSiteReport.doc

1. Input for discussion on manpower issues

Don't interpret this table without reading the explanations below it!

Name | Status | FTE on GridKa payroll | FTE for GridKa operations | Main tasks

GIS
Alef | (s, p) | 1.0 | 1.0 | front-ends, WNs, Linux, electricity, cooling, education of BA students
Epting | (t, p) | 1.0 | 0.05 | ISSeG, EGEE, D-Grid, CA, security
Ernst | (t, p) | 1.0 | 1.0 | batch, accounting, certificates
Gabriel | (s, t) | 1.0 | 1.0 | PPS & production grid services
Garcia Marti | (s, t) | ISSeG | 0.05 | ISSeG
Halstenberg | (s, t) | 1.0 | 1.0 | FTS, LFC, dCache, SRM, tape, (experiment databases)
Heiss | (s, t) | 1.0 | 1.0 | Tier-2, experiment and SC contact & coordination
Hermann | (s, t) | 1.0 | 0.05 | EGEE ROC management DECH, EGEE SA1 tasks DECH
Hoeft | (t, p) | 1.0 | 0.75 | ISSeG, LAN, WAN, FTS, security, education of students from foreign countries
Hohn | (t, p) | 1.0 | 1.0 | Linux, server/OS installations, development & implementation of fast recovery tools, mail, education of BA students
Jaeger | (t, p) | 1.0 | 1.0 | Linux, ROCKS packaging, Ganglia, Nagios, infrastructure installation
Koerdt | (s, t) | EGEE | 0.05 | EGEE SA1 tasks DECH, deputy ROC management DECH
Marten | (s, p) | 1.0 | 1.0 | GridKa management, financing
Meier | (t, t) | 1.0 | 1.0 | disk, file server operation
Motzke (from 7/06) | (s, t) | (1.0) | (1.0 planned) | experiment databases, Oracle, LFC
Ressmann | (s, p) | 1.0 | 1.0 | dCache, SRM, tape
Schäffner | (t, p) | 1.0 | 0.5 | EGEE, D-Grid, VO management, certificates, web pages
Sharma | (t, t) | 1.0 | 1.0 | administrative support, wiki & cms documentation systems
Stannosek | (t, t) | 1.0 | 1.0 | hardware setup, repair, exchange
van Wezel | (s, p) | 1.0 | 1.0 | disk storage + almost all other technical issues
Verstege | (t, p) | 1.0 | 1.0 | Linux, ROCKS packaging, Ganglia, Nagios, infrastructure installation
NN1 (from 11/06?) | (s, t) | (1.0) | (1.0 planned) | PPS & production grid services (planned companion for Gabriel)
NN2 (from 10/06?) | (t, t) | (1.0) | (1.0 planned) | LAN, WAN, security (planned companion for Hoeft)

DASI
Antoni | (s, t) | 0.3 | 0.3 | GGUS development
Dres (from 9/06) | (t, t) | (1.0) | (tbd) | GGUS development, ticket handling
Glöer | (s, t) | 0.2 | 0.2 | tape system management
Grein (from 9/06) | (t, t) | (1.0) | (tbd) | GGUS development, ticket handling
Heathman | (t, p) | 0.3 | 0.3 | marketing, conference contributions
Wochele | (s, p) | 0.15 | 0.15 | Oracle, experiment DBs
company | – | 1.0 | 1.0 | LAN technical support
Sum |  | 19.95 + (5.0) | 17.40 + (3.0) |
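As a quick arithmetic cross-check of the "Sum" row, the two totals can be reproduced from the individual column entries, with the parenthesised (planned) positions summed separately. This is only a minimal sketch of that bookkeeping:

```python
# Cross-check of the FTE totals in the table above.  Positions funded by
# ISSeG / EGEE carry no payroll FTE; planned positions (parenthesised values)
# are summed separately.
payroll         = [1.0] * 19 + [0.3, 0.2, 0.3, 0.15]    # persons funded from the GridKa budget
payroll_planned = [1.0] * 5                             # Motzke, NN1, NN2, Dres, Grein

operations         = [1.0] * 15 + [0.05] * 4 + [0.75, 0.5, 0.3, 0.2, 0.3, 0.15]
operations_planned = [1.0] * 3                          # Motzke, NN1, NN2 (Dres, Grein still tbd)

print(f"FTE on GridKa payroll:     {sum(payroll):.2f} + ({sum(payroll_planned):.1f})")      # 19.95 + (5.0)
print(f"FTE for GridKa operations: {sum(operations):.2f} + ({sum(operations_planned):.1f})")  # 17.40 + (3.0)
```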

This is sensitive information. Please do not distribute it, and please do not use it to contact the respective persons directly in case of problems. Use the trouble-ticket systems instead, to avoid problems in cases of vacation etc.

Status (s/t, p/t) means: scientist / technician, permanent / temporary contract. No distinction is made between technicians and engineers.

FTE on GridKa payroll means: the fraction of FTE per person that is financially accounted to the GridKa project.

FTE for GridKa operations means: local operation in the widest sense, i.e. including management, planning, installation & operation of hardware and software, optimisation, ticket solutions etc. in the GridKa environment. It does not include activities that formally run under the GridKa flag but do not directly contribute to local operations tasks (e.g. EGEE ROC management for DECH). It should be emphasised, however, that there are definite synergies among these different projects. Since these are difficult to quantify, an estimate of 0.05 FTE is given in such cases.

What are the potential issues?

1. In the current phase of permanent upgrades, improvements, new requirements and new functionalities (of hardware, services, experiment requirements etc.), many of these tasks explicitly require expert knowledge. The experts who maintain the services are the same persons needed in meetings and workshops, and for reports, conference contributions, publications, funding proposals etc., and a deputy is not always guaranteed.

2. A significant fraction of the experts sit on temporary positions and do not have an adequate full deputy. There is a high risk of losing these people, and thus the know-how, if an expert decides to move to another institute or job.

3. The communication between GridKa staff and the experiments through boards and meetings (especially the TAB) is not bad, but during some phases it is not intense enough, sometimes leading to misunderstandings of requirements, priorities and objections on both sides.

Possibilities for improvement

1. Wide(r) spread of expert knowledge among GridKa staff. This is already done in the following ways:
   - Central documentation of the GridKa setup and operations procedures through an internal wiki, inventory and other databases. Already existing, and permanently extended and updated.
   - Well-defined communication channels through several internal and partially internal mailing lists and ticket-process workflows. Already existing.
   - Two permanent weekly meetings plus technical meetings at fixed daily times on demand (every admin can ask for a meeting on the following day). Already existing.
   - Identification of recurring (and not too complex) expert tasks that could be done by other people to unburden and deputise the experts. This process has just started, and we will see how far we can get with it.
   These methods are quite obvious and strengthen the collaboration between people, but it is also clear that there are natural limitations, especially during phases with very dynamic changes of (external) software and requirements. Detailed expert knowledge will always be needed; information exchange cannot replace experience. It is also obvious that this is a continuous process that needs permanent improvement and time (a learning curve).

2. Short-term contributions by, or involvement of, external experts would be very much appreciated in the following fields: SRM/dCache; SAM/monitoring; configuration of storage subsystems; xrootd; experiment-specific SFTs, connectivity (to other centres), tests and ticket solutions. The emphasis here, however, lies on short-term and on experts who have deep insight into and knowledge of these systems and tasks and of the GridKa environment and requirements (at least the latter, we guess, could be gained quickly). In the current situation we do not think we can cope with additional people who would have to be trained by us; this might be more time-consuming than helpful.

3. Better information exchange with the experiments. We did not get the impression that huge workshops with extremely tight agendas and several dozen attendees are always successful, and we do not want to suggest yet more regular meetings. However, ad hoc phone conferences with a few experts on each end of the line, focusing on one or two hot topics and without preparing polished transparencies, are extremely effective and satisfactory for both sides.

4. Medium-term contributions by experiment people. Again, this kind of collaboration is helpful for experiments as well as for sites. The application for funding for extra people and for a virtual institute is surely a step in the right direction.

2. Status of new hardware

OPN
The 10 Gbps light path to CERN was delivered by DFN in June. Performance and error-rate tests are ongoing, but there are still some routing problems (these are separate tests that do not influence the production environment; a generic throughput-probe sketch is given at the end of this section).

CPU
The problem with the new temperature offset of the CPUs was reported in the last TAB. A first BIOS patch delivered in June did not improve the situation. However, the NEC storage servers, which are identical in construction, do not show this problem, and we are trying to get the same BIOS certified for the WNs as well. Delivery of the new WNs to users is expected during July. Side note: this problem is now documented in the revision guide for AMD Opteron CPUs, Rev. 3.59, as Erratum 154, "Incorrect diode offset". Laugh or cry.

Disk
The first 20 TB of NEC storage has been handed over to BaBar. We will address the demands of the other experiments and the next BaBar storage as soon as possible. Availability is in chunks of 20 TB (access via xrootd, NFS, dCache). Not all newly delivered storage will be available soon. 17 TB of disk-only dCache storage (no tape connection) is being put online to fulfil the demands of LHCb and CMS for SC4; we plan to finish this week.

New front-ends / VO boxes
The new hardware has been received. The machines for ALICE and CMS are already configured and delivered to the experiments. Next is ATLAS within the next days (because of performance problems with the old machine during SC4), then LHCb, then Dzero.
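For illustration only, the sketch below is a minimal memory-to-memory TCP throughput probe of the kind used for such link tests. It is a generic example and not the actual OPN test setup; in practice dedicated tools such as iperf are used, and the tests run outside the production environment.

```python
# Minimal memory-to-memory TCP throughput probe (generic sketch only; the real
# OPN tests use dedicated tools such as iperf).  Run "python3 probe.py server"
# on one end and "python3 probe.py client <host>" on the other.
import socket
import sys
import time

CHUNK = 1 << 20      # 1 MiB send/receive buffer
DURATION = 10        # seconds of data to send from the client side

def server(port=5001):
    with socket.create_server(("", port)) as srv:
        conn, addr = srv.accept()
        with conn:
            total = 0
            start = time.time()
            while data := conn.recv(CHUNK):   # count bytes until the client closes
                total += len(data)
            elapsed = time.time() - start
            print(f"{total / 1e9:.2f} GB from {addr[0]} in {elapsed:.1f} s "
                  f"= {8 * total / elapsed / 1e9:.2f} Gbit/s")

def client(host, port=5001):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, port)) as conn:
        deadline = time.time() + DURATION
        while time.time() < deadline:         # stream zero-filled buffers, no disk involved
            conn.sendall(payload)

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```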

3. Other services & follow-ups from the last TAB

Squid for CMS
Defined and followed up within the LCG 3D sub-project. Squid is ready for testing by CMS.

Oracle services
Defined and followed up within the LCG 3D sub-project. There are delays at GridKa because of missing manpower. Oracle RACs for LHCb and ATLAS have been set up; Oracle Streams T0-T1 are currently being implemented.

gLite 3.0 in production
The migration was done ad hoc on June 20/21, and the experiments (especially ATLAS) complained about not being informed well in advance. See also the mail exchange on the TAB mailing list. SAM monitoring is not working properly at GridKa; a corresponding ticket has been opened with the developers (information as of June 30).

Consolidation of grid mapfiles
The generation of grid mapfiles at GridKa has been centralised and consolidated because of too many error-prone inconsistencies in the internal environment. Unfortunately, the migration was not discussed with, or announced to, the experiments in time and affected Dzero production: the VO server at FNAL requires ticket exchange between VO servers, which is not used within LCG/EGEE, and this exposed a bug in the middleware. A temporary workaround has been worked out with Dzero.

News via GGUS (from the previous TAB)
1. There were valid complaints about the diversity of news posted via GGUS. We have implemented a workflow with well-defined people who write the news.
2. The mail-forwarding problems of news to vo-softadmins have been solved. See the separate mails on the TAB list.
3. Please note that the correct portal for news posted by centres is the regional support portal (DECH in our case) and not GGUS. We will move the news announcements by GridKa to the DECH portal in the near future.

Fair share to be published on GridKa web pages (from the previous TAB)
Done. See the GridKa web pages -> PBS -> akt. Statistik.

Policy to remove old user data
Based upon policies at other sites, we have drafted the following and received general agreement from our data privacy commissioner:

Account and file deletion: Local accounts that have not been used for 12 months will be deleted, and all data directly associated with them (home directory) will be lost. The account owner and his experiment's representatives will receive a warning one month before an account is to be deleted; however, GridKa is not liable for any failure to give notification before deletion. Data written by the user to disk space outside his home directory, e.g. into experiment-specific data areas, may be deleted, or the ownership of the data may be transferred to another user, at the request of the experiment's representatives. These public data areas must not contain any private data, e.g. mailboxes or SSH key files.

The draft is open for discussion within the TAB; some details of the workflow implementation and formulation still have to be worked out.
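To make the intended workflow a little more concrete, the following is a purely hypothetical sketch of the scan step: it flags accounts whose home directory has been idle for close to 12 months and prints warning or deletion candidates. A production version would rather consult lastlog or batch-system accounting records than directory mtimes, and the notification of owners and experiment representatives is deliberately left out.

```python
# Hypothetical sketch of the account-expiry policy above: flag local accounts
# idle for 12 months, and print warning candidates one month ahead.
import os
import pwd
import time

IDLE_LIMIT = 365 * 86400   # 12 months without use
GRACE      = 30 * 86400    # warning one month before deletion
MIN_UID    = 1000          # skip system accounts

def last_use(home):
    """Crude proxy for 'last used': the mtime of the home directory itself."""
    try:
        return os.stat(home).st_mtime
    except FileNotFoundError:
        return 0.0

def scan():
    now = time.time()
    for user in pwd.getpwall():
        if user.pw_uid < MIN_UID:
            continue
        idle = now - last_use(user.pw_dir)
        if idle > IDLE_LIMIT:
            print(f"DELETE  {user.pw_name}: idle for {idle / 86400:.0f} days")
        elif idle > IDLE_LIMIT - GRACE:
            print(f"WARN    {user.pw_name}: will expire in "
                  f"{(IDLE_LIMIT - idle) / 86400:.0f} days")

if __name__ == "__main__":
    scan()
```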

4. Problems since last TAB

See the separate list of tickets. The problems concentrate around file server outages around Whitsun (Pfingsten), some inconsistencies after the migration to gLite, and an outage of a single server about a week ago.
