Austrian Federated WLCG Tier-2
1 Austrian Federated WLCG Tier-2 Peter Oettl on behalf of Peter Oettl 1, Gregor Mair 1, Katharina Nimeth 1, Wolfgang Jais 1, Reinhard Bischof 2, Dietrich Liko 3, Gerhard Walzel 3 and Natascha Hörmann 3. 1 Institute of Astro- and Particle Physics, University of Innsbruck; 2 Zentraler Informatikdienst, University of Innsbruck; 3 Institute of High Energy Physics, Austrian Academy of Sciences, Vienna
2 Content Introduction; The Worldwide LHC Computing Grid; The Austrian Federated Tier-2; Recent Tests; Outlook & Conclusion
3 Introduction LHC starts operation in fall 2009. Austrian institutes participate in the CMS and ATLAS experiments. The LHC experiments will produce about 15 PB per year. The data need to be stored, processed and made available to over 5000 physicists at more than 500 institutes. The Worldwide LHC Computing Grid (WLCG) should provide the resources.
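As a rough aside, the quoted 15 PB per year can be turned into a sustained average data rate; a minimal sketch of the arithmetic (the 15 PB figure is from the slide, the SI byte convention is an assumption):

```python
# Back-of-the-envelope check: what does 15 PB/year mean as a sustained
# average data rate? (Illustrative arithmetic, not from the slides.)

PB = 1e15                                # petabyte in bytes (SI)
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~3.156e7 s

yearly_volume = 15 * PB                  # LHC data volume quoted above
avg_rate = yearly_volume / SECONDS_PER_YEAR

print(f"average rate: {avg_rate / 1e6:.0f} MB/s")   # -> ~475 MB/s
```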
4 The WLCG Data storage and analysis infrastructure for the LHC high-energy physics community. Data from the experiments will be distributed around the globe according to a four-tiered model. Tier-0: located at CERN; primary backup on tape, initial processing and data distribution to the Tier-1s. Tier-1: 11 large computer centers with round-the-clock support, mass storage and processing facilities; data distribution to the Tier-2s.
5 The WLCG continued Tier-2: one or several collaborating computing facilities with sufficient data storage and adequate computing power for Monte Carlo and analysis tasks. Tier-3: Grid access for individual scientists; can be a local department cluster or even individual PCs. Based on several Grid infrastructures: EGEE (Enabling Grids for E-sciencE) in Europe, OSG (Open Science Grid) in the US, NDGF (Nordic Data Grid Facility) in Scandinavia.
6 The WLCG continued The infrastructures support different middleware flavors, but key components (security, accounting, file transfer services) are fully interoperable. WLCG provides an interface to seamlessly access these infrastructures. The LHC experiments developed services on top to operate the infrastructure: Workload Management (DIRAC, AliEn, Panda, ...), Data Management (PhEDEx, DQ2, ...), User Analysis (Ganga, CRAB, DIRAC, AliEn, ...). A flavor of the user-analysis layer is sketched below.
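A minimal Ganga job of that era looked roughly like the following sketch. It assumes the ganga shell, where the GPI names Job, Executable and LCG are predefined; exact attributes varied between releases, so treat this as illustrative rather than the sites' actual workflow.

```python
# Illustrative Ganga (GPI) snippet: define a trivial job and send it to the
# EGEE/LCG infrastructure. Job, Executable and LCG are provided by the
# ganga shell itself; no imports are needed there.
j = Job(name='hello-grid')
j.application = Executable(exe='/bin/echo', args=['Hello from the Grid'])
j.backend = LCG()      # route the job through the LCG workload management
j.submit()
```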
7 Austrian Federated Tier-2 Innsbruck set up its first Grid site in 2003 and participated in ATLAS Data Challenge 2 and in the large-scale production for the ATLAS physics workshop in Rome in 2005. Innsbruck has been associated with ATLAS via the German GridKa (Tier-1) cloud since 2008 and currently receives 10% of the data.
8 Austrian Federated Tier-2 cont'd Vienna started in 2005. It supports the CMS computing activities with an emphasis on user analysis and will store data according to the CMS model: 1/3 general data (real data and simulation), 1/3 group-specific data (SUSY and BTag), 1/3 analysis-specific data.
9 Tier-2 Layout - Innsbruck
10 Tier-2 Layout - Innsbruck cont'd Computing Elements: 2 x LCG-CE with Torque/Maui batch system on SL. WNs: 2 x Quad-Core Intel Xeon L5420 CPUs (2.5 GHz), 16 GByte RAM; 9 WNs: 2 x Dual-Core Intel Xeon 5160, 8 GByte RAM.
11 Tier-2 Layout - Innsbruck cont'd Storage Element, Disk Pool Manager (DPM): 1 DPM head node; Transtec SUMO RAID, 48 x 1 TByte (extension of 48 x 2 TByte projected); Starline Easy RAID, 16 x 1 TByte; 3 DPM disk nodes (2 additional projected); 360 (600) MByte/s between WNs and disks.
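A quick raw-capacity tally of the arrays listed above (before RAID and filesystem overhead, which the slides do not specify):

```python
# Raw-capacity tally for the Innsbruck SE, straight from the slide's
# array list. Numbers are TByte; RAID/filesystem overhead is ignored.

arrays_tb = {
    "Transtec SUMO RAID (48 x 1 TB)":        48 * 1,
    "Starline Easy RAID (16 x 1 TB)":        16 * 1,
    "SUMO extension, projected (48 x 2 TB)": 48 * 2,
}

current = sum(tb for name, tb in arrays_tb.items() if "projected" not in name)
total = sum(arrays_tb.values())
print(f"installed raw capacity: {current} TB")    # 64 TB
print(f"incl. projected arrays: {total} TB")      # 160 TB
```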
12 Tier-2 Layout - Innsbruck cont'd Core service: top-level BDII (Berkeley Database Information Index) for Central Europe, part of the bdii.ce-egee.org DNS pool. The DNS pool currently contains 6 top-level BDIIs for load balancing.
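The load balancing works at the DNS level: the pool name resolves to several A records, one per BDII, and clients simply use whatever address the resolver hands back. A minimal sketch of how a client sees such a pool (the hostname is the one from the slide; whether it still resolves today is an assumption):

```python
# Resolve a round-robin DNS pool and list every address behind it.
import socket

def pool_members(hostname: str) -> list[str]:
    """All IPv4 addresses published under a DNS alias."""
    _, _, addresses = socket.gethostbyname_ex(hostname)
    return addresses

if __name__ == "__main__":
    # bdii.ce-egee.org is the pool named on the slide; it may no longer exist.
    for ip in pool_members("bdii.ce-egee.org"):
        print(ip)
```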
13 Tier-2 Layout - Vienna
14 Tier-2 Layout - Vienna Computing Element: LCG-CE with Torque/Maui batch system on SL. WNs: Sun blades, 2 x Quad-Core Intel Xeon CPUs (2.6 GHz), 16 GByte RAM. 50 blades will be added once the upgrade of electrical power, cooling and network is finished.
15 Tier-2 Layout - Vienna continued Storage Element, DPM: 1 DPM head node; 4 DPM disk nodes; 4 Supermicro RAIDs of 45 TByte each (6 more will be added when the upgrade is finished); 2 GBit/s (10 GBit/s after the upgrade) between WNs and disks.
16 Austrian Federated Tier-2 Pledges (table: pledged resources vs. ATLAS, CMS and total, with % of pledged, for CPU [HEP-SPEC06] and disk [TByte])
17 Austrian Federated Tier-2 Pledges (table: pledged resources vs. planned ATLAS, planned CMS and planned total, with % of pledged, for CPU [HEP-SPEC06] and disk [TByte])
18 Availability and Reliability Tier-2 Reliability Report, July 2009. The availability of Innsbruck dropped in July due to network layout improvements. AT-HEPHY-VIENNA-UIBK is usually among the top 10 most reliable sites.
19 Recent Tests STEP09 (Scale Testing for the Experiment Program 2009); HammerCloud (HC) test in July; HC test in August; HC test in August, retested.
20 STEP09 All experiments at nominal rate: production plus a user-analysis stress test. Production: Innsbruck performed well (95% efficiency). User analysis: Innsbruck performed badly; the network was the bottleneck at many sites. HC 432: 76% failure rate; HC 430: 62% failure rate.
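For reference, a failure rate like HC 432's 76% is presumably the fraction of finished jobs that failed; a minimal sketch of that bookkeeping (the exact definition is an assumption, the slides only quote the percentages):

```python
# Assumed HammerCloud bookkeeping: failure rate = failed / finished jobs.

def failure_rate(failed: int, completed: int) -> float:
    """Fraction of finished jobs that failed."""
    return failed / (failed + completed)

# Illustrative job counts reproducing the quoted 76% of HC 432:
print(f"{failure_rate(760, 240):.0%}")   # -> 76%
```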
21 STEP09 - Bottlenecks identified: WNs access storage through NAT; the 2nd cluster's bandwidth to the SE; bandwidth to FZK.
22 Vienna STEP09
23 HC July Disk servers are now connected to the internal network. HC 525: 62% failure rate; gsi-ftp traffic still went through NAT. HC 531: rfio traffic through the internal network; 1% failure rate.
24 HC July - continued (plots: rfio and gsi-ftp traffic)
25 HC August HC 574: rfio; 0% failure rate. HC 575: rfcp / FileStager; 0% failure rate. HC 579: Panda; 97% failure rate due to a misconfiguration of the DPM disk servers.
26 HC August retested HC 585: Panda, limited to around concurrent jobs; 7.7% failure rate. HC 600: Panda, limited to around concurrent jobs; 2.8% failure rate.
27 Outlook & Conclusion Austria participates in the LHC experiments not only in physics but also in computing. Austria has set up a medium-sized Tier-2 which exceeds the pledges. Production is running well. Problems with user analysis jobs were identified and are being addressed: network bandwidth; the number of concurrent analysis jobs needs to be limited to the available bandwidth (a saturation estimate is sketched below). The Austrian Federated WLCG Tier-2 will be ready for the LHC start.
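The bandwidth point admits a simple saturation estimate; the link speeds below are Vienna's numbers from slide 15, while the per-job read rate is an illustrative assumption, not a number from the slides.

```python
# Saturation estimate: how many concurrent analysis jobs can share the
# WN-to-SE link before it becomes the bottleneck?

def max_concurrent_jobs(link_gbit_s: float, per_job_mbyte_s: float) -> int:
    """Jobs that fit on the link, each reading at per_job_mbyte_s."""
    link_mbyte_s = link_gbit_s * 1000 / 8    # Gbit/s -> MByte/s
    return int(link_mbyte_s // per_job_mbyte_s)

# Vienna's link (2 Gbit/s now, 10 Gbit/s after the upgrade); 5 MByte/s per
# job is an assumed average for an I/O-bound analysis job.
print(max_concurrent_jobs(2, 5))     # -> 50
print(max_concurrent_jobs(10, 5))    # -> 250
```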
28 Thank you for your attention! More information is available here: Questions?