Grid Computing at Ljubljana and Nova Gorica

Marko Bračko (1), Samo Stanič (2)
(1) J. Stefan Institute, Ljubljana & University of Maribor
(2) University of Nova Gorica
Outline of the talk: Introduction, Resources, Experience, Plans and expectations
Belle II General Meeting, KEK, 7th - 9th July 2009

Introduction
Grid activities at the J. Stefan Institute (JSI) were initiated by the ATLAS group of the Experimental Particle Physics Department.
Some of these colleagues are also members of the Laboratory of Astroparticle Physics (LAP) at the University of Nova Gorica (UNG), so the collaboration came naturally.
As a member of the ATLAS experiment, JSI joined two projects: LCG (the LHC Computing Grid) and EGEE (Enabling Grids for E-sciencE) (2003).
The SiGNET (Slovenian Grid NETwork) cluster was set up as a pilot Grid project in Slovenia (started in autumn 2003, available to EGEE on 1st April 2004).
The SiGNET cluster was financed by the Republic of Slovenia and the EU, and was one of the first operating clusters with a 64-bit architecture.
The SiGNET VO and the SiGNET CA (an accredited Certification Authority) were established.

Computing resources
SiGNET framework:
Servers: ~1 AMD Opterons with 16 GB memory, 1 Gb/s links
Data servers / disk space: AMD Opterons with 4 GB memory, ~3 TB disk space
Altogether: ~6 processors in the system
Computing cluster:
Computing nodes: 2-processor, 4-core AMD Opteron 23XX with 2 GB memory, 1 Gb/s link
Desktop PCs: ~24 GB disk space, 1 Gb/s link

Computing resources
SiGNET architecture:
Operating system: mainly Gentoo Linux, partly GNU/Linux SLC (Scientific Linux CERN)
File system: OpenAFS on the servers
Job management: MAUI cluster scheduler with the TORQUE resource manager
Grid middleware: NorduGrid ARC (Advanced Resource Connector) and gLite 3.1
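
Since the cluster is accessed through ARC (and gLite) rather than by logging into TORQUE directly, a typical submission goes through the NorduGrid client tools. Below is a minimal sketch using the classic NorduGrid commands (ngsub, ngstat, ngget) with a tiny xRSL job description; the executable name and the front-end placeholder <arc-frontend> are hypothetical, not the actual SiGNET host.

  job.xrsl (hypothetical minimal job description):
    &(executable="run_belle.sh")
     (jobName="belle-test")
     (stdout="stdout.txt")
     (stderr="stderr.txt")
     (cpuTime="120")

  ngsub -c <arc-frontend> job.xrsl   # submit to the chosen ARC front-end
  ngstat -a                          # list the status of all own jobs
  ngget <job-id-from-ngsub>          # retrieve the output files

The same payload can also be wrapped in a gLite JDL and routed through the WMS; which interface is used in practice depends on the VO tools and the site setup.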

Internet connection
Service is provided by ARNES (the Academic and Research Network of Slovenia), which is linked to the GÉANT2 backbone (the pan-European research and education network) and, via GÉANT2, to SINET3 (the Science Information Network of Japan) across the U.S.A.
(Maps of the ARNES, GÉANT2 and SINET3 backbones were shown.)

Internet connection
Network diagram with link capacities: IJS, UNG (1 GB/s, 2 GB/s), GÉANT2 (1+2.4 GB/s), SINET3 (2 GB/s), KEK

pikolit [4] traceroute belle.kek.jp
traceroute to belle.kek.jp (13.87.15.46), 3 hops max, 6 byte packets
 1  f9gw.ijs.si (194.249.156.9)  .737 ms  .784 ms  .87 ms
 2  194.249.61.13 (194.249.61.13)  .115 ms  .147 ms  .143 ms
 3  194.249.61.129 (194.249.61.129)  .485 ms  .633 ms  .632 ms
 4  rarnes1-x--x11.arnes.si (212.235.16.241)  31.251 ms  31.262 ms  31.26 ms
 5  arnes.rt1.vie.at.geant2.net (62.4.124.5)  7.5 ms  7.69 ms  7.197 ms
 6  so-7--.rt1.pra.cz.geant2.net (62.4.112.6)  13.48 ms  13.53 ms  13.51 ms
 7  so-6-3-.rt1.fra.de.geant2.net (62.4.112.38)  21.226 ms  21.93 ms  21.11 ms
 8  so-5--.rt1.ams.nl.geant2.net (62.4.112.58)  28.44 ms  28.582 ms  28.492 ms
 9  nyc-gate1-rm-ge-7-2--27.sinet.ad.jp (15.99.188.21)  112.227 ms  112.198 ms  112.198 ms
10  tokyo1-dc-rm-p-2-3--11.sinet.ad.jp (15.99.23.57)  33.85 ms  33.885 ms  34.5 ms
11  tsukuba-dc-rm-ae--11.sinet.ad.jp (15.99.23.9)  38.84 ms  38.851 ms  38.844 ms
12  kek-lan-1.gw.sinet.ad.jp (15.99.19.182)  39.328 ms  39.978 ms  39.456 ms
13  13.87.4.42 (13.87.4.42)  39.33 ms  39.17 ms  39.325 ms
14  belle1.kek.jp (13.87.15.46)  39.324 ms  39.173 ms  39.172 ms

Transfer rate: tested a few years ago with ssh from ijs.si, up to ~2 kB/s; up to ~1 ssh connections at once.
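
The ad-hoc ssh tests mentioned on the slide can be reproduced with standard tools; the following is a rough sketch, where the remote host name is a placeholder and iperf is assumed to be running in server mode (iperf -s) on the far end:

  scp testfile.dat user@remote.example.jp:/tmp/   # single-stream copy; ssh encryption caps the rate
  iperf -c remote.example.jp -P 10 -t 60          # aggregate throughput over 10 parallel TCP streams

Using many parallel streams is the command-line analogue of opening several ssh connections at once, and it is usually necessary to fill a path with a large round-trip time such as Ljubljana to KEK.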

GRID experience
SiGNET is a member of both NorduGrid and EGEE.

GRID experience
Very stable operation during the ATLAS Data Challenge 2 exercise.

GRID experience
Belle grid on SiGNET: joined the Belle VO (2008/8); set up the Belle environment (2009/2).

lcgce.ijs.si [6] lcg-infosites --vo belle ce
#CPU   Free   Total Jobs   Running   Waiting   ComputingElement
----------------------------------------------------------------
 856     34          524        58        16   grid-lcgce.rzg.mpg.de:2119/jobmanager-lcgsge-long
(output truncated; further computing elements supporting the Belle VO include the ce-1/2/4-fzk.gridka.de and cream-1/2-fzk.gridka.de queues at GridKa, kek2-ce1.cc.kek.jp, kek2-ce2.cc.kek.jp and dg1.cc.kek.jp at KEK, lcgce.ijs.si:2119/jobmanager-pbs-belle at IJS, ce-alice.sdfarm.kr at KISTI, and ce.cc.ncu.edu.tw at NCU)
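
For context, a Belle VO member would normally reach one of these computing elements through the gLite workload management system. A rough sketch of such a submission, assuming a valid Belle VOMS membership and a gLite 3.1 user interface (the JDL below is a hypothetical minimal example, not the actual Belle production setup):

  voms-proxy-init --voms belle                  # obtain a VOMS proxy for the belle VO

  belle-test.jdl (hypothetical minimal job description):
    Executable    = "run_belle.sh";
    StdOutput     = "stdout.txt";
    StdError      = "stderr.txt";
    InputSandbox  = {"run_belle.sh"};
    OutputSandbox = {"stdout.txt", "stderr.txt"};
    Requirements  = other.GlueCEUniqueID == "lcgce.ijs.si:2119/jobmanager-pbs-belle";

  glite-wms-job-submit -a belle-test.jdl        # submit through the WMS
  glite-wms-job-status <job-url-from-submit>    # check progress
  glite-wms-job-output <job-url-from-submit>    # retrieve the output sandbox

The Requirements line pins the job to the SiGNET CE from the listing above; dropping it lets the WMS match any CE that publishes support for the Belle VO.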

GRID experience
Belle grid on SiGNET (plot credited to G. Iwai).

GRID experience
Belle grid on SiGNET: Belle MC production started (2009/3). We are still establishing a stable setup. (Plot credited to H. Nakazawa.)

Plans and expectations
Jožef Stefan Institute:
Funding for Grid was approved by the Slovenian Research Agency (June 2009).
Part of the equipment (totalling ~5 kEUR) will be purchased and installed within 2009.
The system will be upgraded/extended: probably some storage units; computing clusters?
Coexistence of the ATLAS / Belle / P. Auger Grids.
For the Belle part it is important to solve the infrastructure/software problems and to contribute proportionally to Belle MC production.

Plans and expectations
University of Nova Gorica:
Funding for Grid was approved by the Slovenian Research Agency (June 2009).
Part of the equipment (totalling ~7 kEUR) will be purchased and installed within 2009.
The system configuration will be modular and based on Fibre Channel technology:
Main server (25 TB)
Computing cluster: initially 3 high-end servers with dual quad-core CPUs = 24 cores
Tape library for data archiving
Installation and integration into the Belle and P. Auger Grids will be done jointly with JSI.
UNG has a 1 Gb/s link to ARNES and JSI, so the network connectivity of the nodes at UNG will be good.

SiGNET@JSI and future_facilities@ung should play an active role and contribute significantly to the Belle / Belle II Grid.